
Extending LVM physical volumes

Alastair Grant | Tuesday 17 January 2012

I'm considering switching from Xen to the ESXi Type-1 hypervisor for my general do-anything server.

The problem with this is that ESXi is thin on the ground when it comes to managing the host. Where a hypervisor sitting on top of Linux can take advantage of all sorts of useful things, such as UPS monitoring and soft-RAID, ESXi is pretty dumb when it comes to acting as an operating system for the host machine.

One of my reservations with this move is how to organise my files. Virtual images would be stored outside of my primary machine/guest, but my primary guest acts as my NAS and stores a lot of data. Putting in bigger disks is also tricky, due to physical space in the box and the fact that the existing disks are already pretty huge. As a result, I need to balance my need for spare space in ESXi's datastores for new virtual machines (which, this being a general server, will range between one and however many things I'm currently tinkering with) against having enough space in my primary guest to store the gigs of files.

I figured that if I'm stingy with the space for the primary guest, I could grow the disks later when needed. Fine in theory, but putting it into practice can be a little tricky, especially when you also chuck RAID striping into the mix.

In this example I have two physical datastores in ESXi. Each store has one virtual drive for the guest OS (to allow striping/RAID from within the guest). I also created a small 64MB drive to hold a boot partition, which makes aligning things in my stripe easier.

The two main drives were each filled up with a single LVM partition, and then this was jammed with logical volumes (/, /home & swap), maxing out the capacity of the virtual drives.
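For context, a layout like this can be reproduced with the standard LVM tools. The sketch below is my reconstruction rather than the exact commands from install time: the volume group name (base) matches my machine, the sizes are illustrative, and I'm assuming the physical volumes sit directly on the whole virtual disks, which is consistent with the pvresize commands further down (if yours live on partitions such as /dev/sdb1, you'd grow the partition first and point pvresize at that instead).

pvcreate /dev/sdb /dev/sdc                # mark both virtual drives as LVM physical volumes
vgcreate base /dev/sdb /dev/sdc           # pool them into a single volume group
lvcreate -i 2 -I 64 -L 10G -n data base   # -i 2 stripes the logical volume across both PVs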

I then grew the virtual disks in ESXi. When I returned to Linux I used openSUSE's YaST Partitioner to view the LVM. I was unable to increase the size of my physical volume to make use of the extra space in the disk - although I could add other disks (so an alternative here is to add new virtual drives in ESXi and just tag them in). A bit of searching turned up this process:

echo 1 > /sys/block/sdb/device/rescan
echo 1 > /sys/block/sdc/device/rescan

Now, I ran this without looking in too much detail at whether it was required - it looks harmless though! I have two drives, so I ran it on both (sdb and sdc).
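Before touching LVM, it's worth confirming the kernel has actually picked up the new capacity. This check is my addition rather than part of the recipe I found:

cat /sys/block/sdb/size    # size in 512-byte sectors; should reflect the grown disk
cat /sys/block/sdc/size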

Then the magic is in the "pv" commands. "pvdisplay" will show you your current LVM setup, which should still report the previous size for each drive.
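If you prefer a compact, one-line-per-drive view, pvs accepts a column list (these are standard LVM2 column names, but check your version's man page):

pvs -o pv_name,vg_name,pv_size,pv_free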

I then ran the commands:

pvresize /dev/sdb
pvresize /dev/sdc

This increased the size of each physical volume to the new capacity of its hard drive.
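The extra capacity should now show up as free extents in the volume group. Assuming the group is called base, as mine is:

vgs base                      # the VFree column should show the newly available space
vgdisplay base | grep Free    # or pick out the "Free  PE / Size" line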

Again, the YaST partitioner bailed when trying to increase the logical volume sizes, so back to the console with:

lvresize -L +2G /dev/base/data

This will increase the volume by 2GB (so put in whatever increase you want). The volume here is listed as /dev/base/data, so replace that with whatever you called yours.
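If the aim is simply to soak up everything pvresize just freed, rather than a fixed amount, LVM's percentage syntax saves the arithmetic. One caveat from my striped setup: a striped volume generally wants matching free extents on both drives, so grow both virtual disks by the same amount:

lvextend -l +100%FREE /dev/base/data    # claim every free extent in the volume group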

You will still need to resize the filesystem. I am currently using btrfs, so the command is:

btrfs filesystem resize 1:max /home

The 1: is the device id, max means fill all the available space, and the path is where I have the filesystem mounted (not the btrfs path). Use btrfs filesystem show to get details.
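Following that advice, these are the checks I'd run (output formats vary between btrfs-progs versions, so treat this as a sketch):

btrfs filesystem show    # lists each filesystem and its devid entries
df -h /home              # the mounted size should now reflect the extra space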

Breaking from the voyeuristic norms of the Internet, any comments can be made in private by contacting me.