Examples of using Logical Volume Manager (LVM) on CentOS & RHEL 6.x & 7.x


With LVM used in the default CentOS / RHEL 6.x and 7.x installation process, it is now important for every Linux admin to know LVM. We provide examples that create partitions, Physical Volumes (pv), Volume Groups (vg), and Logical Volumes (lv), and that create file systems using storage provided by Logical Volumes. We also cover adding an entry to /etc/fstab, mounting a file system on a logical volume, increasing the size of a logical volume and its file system, and removing a logical volume.

The examples create the following storage structure:

[Diagram: Logical Volume Manager storage layout]

Our examples use an unpartitioned hard disk (/dev/sdb).
First, create two Physical Partitions, /dev/sdb1 and /dev/sdb2, on /dev/sdb.
Second, create an LVM Physical Volume on each Physical Partition (/dev/sdb1 and /dev/sdb2).
Third, create an LVM Volume Group, /dev/mynew_vg, using the space provided by /dev/sdb1 and /dev/sdb2.
Note: Volume Groups receive storage from one or more Partitions or Hard Drives.
Fourth, create Logical Volumes (vol01 & vol02) from the space provided by our Volume Group.
Note: Logical Volumes live within a Volume Group.
And finally, create File Systems using the space provided by our Logical Volumes (vol01 & vol02).

Let’s get started – Create two Physical Partitions

To create partitions on a device use cfdisk or parted.
# cfdisk /dev/sdb
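
cfdisk is interactive; if you prefer a scriptable approach, parted can create the two partitions non-interactively. A minimal sketch, assuming a blank disk and example partition boundaries of roughly 2 GB each (adjust the start and end points to suit your disk):

# parted /dev/sdb mklabel msdos
# parted /dev/sdb mkpart primary 1MiB 2GiB
# parted /dev/sdb mkpart primary 2GiB 4GiB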

List the partitions created

# fdisk -l
or
# parted -l

Create physical volumes

Use the pvcreate command to create a physical volume on each physical partition.

# pvcreate /dev/sdb1
# pvcreate /dev/sdb2

The pvdisplay command displays all physical volumes on the server.

# pvdisplay

Alternatively, the following command displays a single physical volume:

# pvdisplay /dev/sdb1
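
For a more compact, one-line-per-volume summary, the pvs command can be used instead of pvdisplay:

# pvs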

Create Volume Group

Create a volume group “mynew_vg”, which serves as a container for physical volumes, using /dev/sdb1:

# vgcreate mynew_vg /dev/sdb1

To include both partitions at once you can use the command:

# vgcreate mynew_vg /dev/sdb1 /dev/sdb2

Additional physical volumes are added to a volume group by using the vgextend command.

# vgextend mynew_vg /dev/sdb2

The vgdisplay command displays all volume groups on the server.

# vgdisplay
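
Similarly, vgs prints a one-line summary of each volume group:

# vgs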

Create Logical Volumes

To create a logical volume named “vol01” with a size of 400 MB, using storage provided by volume group “mynew_vg”, use the command:

# lvcreate -L 400 -n vol01 mynew_vg

The following command creates a logical volume (vol02) with a size of roughly 1 GB (1000 MB):

# lvcreate -L 1000 -n vol02 mynew_vg
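
Sizes given to lvcreate with -L default to megabytes. To request the size in other units, append a suffix; for example, vol02 could instead have been created with a 1G size as in this sketch:

# lvcreate -L 1G -n vol02 mynew_vg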

The lvdisplay command displays all logical volumes on the server.
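
# lvdisplay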

Create File system on logical volumes

Let’s create an EXT3 file system using the storage space provided by a logical volume.

Note: Our logical volume (vol01) is viewed as a device by the EXT3 file system.

# mkfs.ext3 -m 0 /dev/mynew_vg/vol01

The -m option specifies the percentage of the file system reserved for the super-user. Set this to 0 if you do not want to reserve any space; the default reserve is 5%.
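
The second logical volume can be formatted the same way; a sketch, assuming you also want an EXT3 file system on vol02:

# mkfs.ext3 -m 0 /dev/mynew_vg/vol02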

Edit /etc/fstab to automount our new file system at /home/foobar on reboot

Add an entry for your newly created logical volume to /etc/fstab:

/dev/mynew_vg/vol01 /home/foobar ext3 defaults 0 2

Mount logical volumes

Let’s create our mount point /home/foobar, mount all our file systems, and display our mounted file systems.

# mkdir /home/foobar
# mount -a 
# df -h

Increasing the size of a logical volume

One of the advantages of logical volume manager (LVM) is the ability to rapidly adjust the sizes of logical volumes. To increase the size of a logical volume by 800 MB execute the command:

# lvextend -L +800 /dev/mynew_vg/vol01

The lvextend command increases the size of a logical volume using available space within a volume group.

Once the logical volume has been extended, the file system on it can be grown to match. For an offline resize, unmount the file system and run e2fsck before resize2fs:

# cd /; umount /home/foobar; e2fsck -f /dev/mynew_vg/vol01; resize2fs /dev/mynew_vg/vol01
# mount -a; df -h
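
Alternatively, lvextend can grow the file system in the same step with the -r (--resizefs) option, which calls the appropriate resize tool for you; a sketch, assuming a resizable file system such as ext3/ext4 on vol01:

# lvextend -r -L +800 /dev/mynew_vg/vol01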

Remove logical volume

The command lvremove can be used to remove logical volumes with no mounted file systems.

# lvremove /dev/mynew_vg/vol02
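
lvremove prompts for confirmation before destroying the volume. To confirm it is gone afterwards, lvs lists the logical volumes remaining in the group:

# lvs mynew_vg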

Replacing disks

One of the benefits of LVM is the ability to replace a failing or failed drive. When SMART or your RAID utility starts reporting issues, it is time to move data off the drive (if a backup doesn’t exist). First, add a drive to the system that is at least as large as the one to be replaced.

To move data, we move Physical Extents of a Volume Group to another disk, or more precisely, to another Physical Volume. Use the LVM pvmove utility.

In our example, the failing drive is /dev/sda1. We added replacement drive /dev/sdb3. Next, add /dev/sdb3 to the Volume Group that contains /dev/sda1. Unmount any filesystems in this Volume Group before adding /dev/sdb3.
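
A sketch of those preparation steps, assuming /dev/sdb3 has not yet been initialized as a Physical Volume and that the Volume Group is named "test1" as in the output below:

# pvcreate /dev/sdb3
# vgextend test1 /dev/sdb3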

We then use pvmove:

# pvmove /dev/sda1
pvmove -- moving physical extents in active volume group "test1"
pvmove -- WARNING: moving of active logical volumes may cause data loss!
pvmove -- do you want to continue? [y/n] y
pvmove -- doing automatic backup of volume group "test1"
pvmove -- 12 extents of physical volume "/dev/sda1" successfully moved
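
By default pvmove spreads the extents across whichever other Physical Volumes in the group have free space. To move them onto a specific target PV, name it as the second argument; a sketch using our replacement drive:

# pvmove /dev/sda1 /dev/sdb3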

Now that /dev/sda1 contains no Physical Extents, we can remove it from the Volume Group with vgreduce:

# vgreduce test1 /dev/sda1
vgreduce -- doing automatic backup of volume group "test1"
vgreduce -- volume group "test1" successfully reduced by physical volume:
vgreduce -- /dev/sda1
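
If the old disk is still readable and you want to wipe its LVM label before pulling it, pvremove clears the Physical Volume metadata (optional; skip it if the drive is simply being removed):

# pvremove /dev/sda1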

When a drive can’t be read

If a drive fails without warning and can’t be read by pvmove, data loss occurs if the PV was not mirrored. The correct course of action is to replace the failed PV with an identical one or a partition of the same size.

The directories /etc/lvm/backup and /etc/lvm/archive contain backups of the LVM metadata: the structures that make the disks into Physical Volumes, record which Volume Group each PV belongs to, and list which Logical Volumes are in the Volume Group.

After replacing the faulty disk, use the vgcfgrestore command to recover the LVM data to the new PV. This restores the Volume Group, but does not restore the data that was in the Logical Volumes.
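
A minimal sketch of that recovery, assuming the Volume Group is "test1", the replacement partition is /dev/sdb3, and the UUID and archive file name are placeholders you would look up under /etc/lvm/archive:

# pvcreate --uuid "<UUID-of-old-PV>" --restorefile /etc/lvm/archive/<archive-file>.vg /dev/sdb3
# vgcfgrestore test1
# vgchange -ay test1

Once the Volume Group is active again, the file systems on its Logical Volumes must be recreated or restored from backup, as noted above.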
