How to: nested LVM resize

Suppose you have a virtualization host, and each VM is assigned its own LVM partition for disk storage.

Expanding the storage

This one is trivial. Call lvresize on the host to add more space to the VM's LVM partition, then shut the VM down and start it back up so that it picks up the extra space (note that a simple reboot doesn’t seem to cut it on KVM hosts; it looks like the domain needs to be destroyed for the changes to take effect).
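On the host side, that boils down to something like the following (a rough sketch; the +20G increment is arbitrary, and my-cool-vm is assumed to be both the LV name and the libvirt domain name):

# grow the VM's backing logical volume on the host
lvresize --size +20G /dev/my-host-vg/my-cool-vm
# a full stop/start cycle of the domain, not just a reboot from inside the guest
virsh shutdown my-cool-vm
virsh start my-cool-vm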
After that, from inside the VM, you can perform an online FS resize with something like:

lvresize --resizefs --size 100G /dev/mapper/my-cool-vg-MyLogVol00

Shrinking the storage

A bit more tricky, this one is. First of all, there is no such thing as online shrinking of an ext3/4 FS. Therefore, all operations need to happen while the VM is powered down, which will cause a tad more downtime.
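So before touching anything, power the guest off cleanly. With libvirt that could look like this (assuming the domain name is my-cool-vm):

virsh shutdown my-cool-vm
# confirm it is actually shut off before proceeding
virsh domstate my-cool-vm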

On the host machine, use a nifty util called kpartx to create device mappings from the guest VM’s LVM partition:

kpartx -av /dev/my-host-vg/my-cool-vm

Now, the volume group inside the LVM partition should be visible on the host. Run vgscan to verify that:

[root@host ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "my-cool-vg" using metadata type lvm2
  Found volume group "my-host-vg" using metadata type lvm2

So the new volume group is there, but it’s currently marked inactive. Let’s fix that:

vgchange -ay my-cool-vg
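As a quick sanity check, the guest’s logical volumes should now be listed by a plain lvs on the host:

lvs my-cool-vg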

Now, we can perform any operations on the nested volumes. Time to shrink the logical volume and the FS:

lvresize --resizefs --size 50G /dev/mapper/my-cool-vg-MyLogVol00

Deactivate the volume group:

vgchange -an my-cool-vg

Now, we need to shrink the “guest” physical volume. Circle back to the kpartx output to see which mappings it added. In my setup, the first partition was the boot partition, and the second was the actual nested LVM.
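If the original kpartx output has scrolled away, the mappings it created can be listed again; dmsetup is the standard device-mapper tool, and the grep pattern simply matches the host LV name used in this post:

dmsetup ls | grep my-cool-vm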

pvresize --setphysicalvolumesize 51G /dev/mapper/my-host-vg-my-cool-vm2

Calculate the size carefully: the physical volume must remain large enough to hold the shrunken logical volume, and the final size of the host-side LVM partition (the last step below) also needs to accommodate any non-LVM partitions you have on the guest VM (in my case, that meant adding room for the /boot partition).
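Before committing to any numbers, it can’t hurt to double-check the current sizes on both layers; pvs and lvs are standard LVM tools, and the names below are the ones used throughout this post:

# the guest PV, as mapped by kpartx
pvs --units g /dev/mapper/my-host-vg-my-cool-vm2
# the guest LV we just shrank, plus the host LV backing the whole VM disk
lvs --units g my-cool-vg my-host-vg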

Now, we can get rid of the mappings that kpartx created:

kpartx -dv /dev/my-host-vg/my-cool-vm

Finally, resize the LVM partition on the host machine:

lvresize --size 52G /dev/my-host-vg/my-cool-vm

Start up the VM. Done.

Renaming the “guest” volume group

In some environments, the names of both volume groups may turn out to be the same (for example, if you didn’t consider this while writing your kickstart templates). In that case, vgscan will print duplicate-name warnings like these:

[root@host ~]# vgscan
  Reading all physical volumes.  This may take a while...
  WARNING: Duplicate VG name VolGroup00: Existing Zclhms-H89I-VH1b-thtf-URlf-URTT-4eqwmU (created here) takes precedence over 8TlO1m-qb5b-fieh-wHDB-qzwQ-HeFe-tXIP0N
  WARNING: Duplicate VG name VolGroup00: Existing Zclhms-H89I-VH1b-thtf-URlf-URTT-4eqwmU (created here) takes precedence over 8TlO1m-qb5b-fieh-wHDB-qzwQ-HeFe-tXIP0N
  Found volume group "VolGroup00" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2

In order to proceed, we’ll need to rename the “guest” volume group:

vgrename 8TlO1m-qb5b-fieh-wHDB-qzwQ-HeFe-tXIP0N my-new-vg
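To confirm the rename took, vgs should now list the two groups under distinct names (printing the UUIDs too makes them easy to match against the warnings above):

vgs -o vg_name,vg_uuid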

Problem solved? Not really: the VM will fail to boot after this, because its fstab and bootloader config still reference the old volume group name. Fix them from the host:

# activate the new volume group
vgchange -ay my-new-vg
# create a temporary mount point
mkdir my-vm-temp
# mount the guest's root LV and its /boot partition (the latter via the kpartx mapping)
mount /dev/my-new-vg/MyLogVol00 my-vm-temp/
mount /dev/mapper/my-host-vg-my-cool-vm1 my-vm-temp/boot/
# update fstab
sed -i "s/VolGroup00/my-new-vg/g" my-vm-temp/etc/fstab
# update bootloader
sed -i "s/VolGroup00/my-new-vg/g" my-vm-temp/boot/grub/grub.conf
# unmount
umount my-vm-temp/boot/
umount my-vm-temp/

That was quite a detour, but now you can proceed with the actual resize. Yay.