LVM volumes on CentOS / RHEL 7 with System Storage Manager


Logical Volume Manager (LVM) is an extremely flexible disk management scheme that lets you create and resize logical disk volumes spanning multiple physical hard drives, with no downtime. However, its powerful features come at the price of a somewhat steep learning curve: setting up LVM involves more steps and several command-line tools compared to managing traditional disk partitions.

Here is good news for CentOS/RHEL users. CentOS/RHEL 7 ships with System Storage Manager (aka ssm), a unified command-line interface developed by Red Hat for managing all kinds of storage devices. Currently three volume-management backends are available for ssm: LVM, Btrfs, and Crypt.
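
ssm normally picks the right backend itself, but the -b flag (visible in its usage line later in this post) lets you pin one explicitly. For example, to run a listing through the LVM backend only (a minimal sketch; output varies per system):

[root@centos7server ~]# ssm -b lvm list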

In this tutorial, I will demonstrate how to manage LVM volumes with ssm. First, install the system-storage-manager package:

[root@centos7server ~]# yum install system-storage-manager
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.vodien.com
* extras: mirror.vodien.com
* updates: mirror.vodien.com
Package system-storage-manager-0.4-5.el7.noarch already installed and latest version
Nothing to do
[root@centos7server ~]#

[root@centos7server ~]# ssm list
-------------------------------------------------------
Device         Free      Used      Total  Pool    Mount point
-------------------------------------------------------
/dev/fd0                         4.00 KB
/dev/sda                        40.00 GB          PARTITIONED
/dev/sda1                      500.00 MB          /boot
/dev/sda2   0.00 KB  39.51 GB   39.51 GB  centos
/dev/sdb                        20.00 GB
/dev/sdb1   0.00 KB  20.00 GB   20.00 GB  vg01
/dev/sdc                        20.00 GB
/dev/sdc1  17.07 GB   2.93 GB   20.00 GB  vg01
/dev/sdd                        20.00 GB
-------------------------------------------------------
---------------------------------------------------
Pool    Type  Devices      Free      Used     Total
---------------------------------------------------
centos  lvm   1         0.00 KB  39.51 GB  39.51 GB
vg01    lvm   2        17.07 GB  22.93 GB  39.99 GB
---------------------------------------------------
-------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
-------------------------------------------------------------------------------------
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     35.51 GB  xfs   35.49 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
-------------------------------------------------------------------------------------

Add a Physical Disk to an LVM Pool

Let's add a new physical disk (e.g., /dev/sdd) to an existing storage pool (e.g., vg01).
The command to add a new physical storage device to an existing pool is as follows.

[root@centos7server ~]# ssm add -p vg01 /dev/sdd
Physical volume "/dev/sdd" successfully created
Volume group "vg01" successfully extended
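
A quick way to confirm the device joined the pool is to list just the pools (the numbers depend on your disks, so this is illustrative):

[root@centos7server ~]# ssm list pools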

Remove a Physical Disk from an LVM Pool

Note that, unlike ssm add, the remove subcommand does not take a -p flag; the first attempt below fails for exactly that reason, and the second passes the pool and device directly.

[root@centos7server ~]# ssm remove -p vg01 /dev/sdd
usage: ssm [-h] [--version] [-v] [-f] [-b BACKEND] [-n]
{check,resize,create,list,add,remove,snapshot,mount} ...
ssm: error: unrecognized arguments: -p
[root@centos7server ~]# ssm remove vg01 /dev/sdd
Do you really want to remove volume group "vg01" containing 1 logical volumes? [y/n]: y
Logical volume vg01/lv01 contains a filesystem in use.
SSM Info: Unable to remove 'vg01'
Removed "/dev/sdd" from volume group "vg01"
[root@centos7server ~]#

Let's increase the root filesystem. First, add the spare disk /dev/sdd to the centos pool:
[root@centos7server ~]# ssm add -p centos /dev/sdd
Volume group "centos" successfully extended
[root@centos7server ~]#

[root@centos7server ~]# ssm list
-------------------------------------------------------
Device         Free      Used      Total  Pool    Mount point
-------------------------------------------------------
/dev/fd0                         4.00 KB
/dev/sda                        40.00 GB          PARTITIONED
/dev/sda1                      500.00 MB          /boot
/dev/sda2   0.00 KB  39.51 GB   39.51 GB  centos
/dev/sdb                        20.00 GB
/dev/sdb1   0.00 KB  20.00 GB   20.00 GB  vg01
/dev/sdc                        20.00 GB
/dev/sdc1  17.07 GB   2.93 GB   20.00 GB  vg01
/dev/sdd   20.00 GB   0.00 KB   20.00 GB  centos
-------------------------------------------------------
---------------------------------------------------
Pool    Type  Devices      Free      Used     Total
---------------------------------------------------
centos  lvm   2        20.00 GB  39.51 GB  59.50 GB   <-- the centos pool now totals 59.50 GB
vg01    lvm   2        17.07 GB  22.93 GB  39.99 GB
---------------------------------------------------
-------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
-------------------------------------------------------------------------------------
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     35.51 GB  xfs   35.49 GB   34.56 GB  linear  /   <-- we want to extend this root volume
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
-------------------------------------------------------------------------------------

Now let's grow the root volume by 1 GB (1024 MB):

[root@centos7server ~]# ssm resize -s+1024M /dev/centos/root
Extending logical volume root to 36.51 GiB
Logical volume root successfully resized
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=9308160, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 9308160 to 9570304
[root@centos7server ~]#

[root@centos7server ~]# ssm list volumes
-------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
-------------------------------------------------------------------------------------
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     36.51 GB  xfs   35.49 GB   34.56 GB  linear  /   <-- the volume grew by 1 GB, but the reported FS size has not caught up yet
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
-------------------------------------------------------------------------------------

Run xfs_growfs to make sure the XFS filesystem actually fills the enlarged volume:

[root@centos7server ~]# xfs_growfs /dev/centos/root
meta-data=/dev/mapper/centos-root isize=256    agcount=5, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=9570304, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos7server ~]# ssm list volumes
-------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
-------------------------------------------------------------------------------------
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     36.51 GB  xfs   36.49 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
-------------------------------------------------------------------------------------

Now let's grow the root volume by roughly another 4 GB, in two steps; the listing below already reflects the first ~2 GB increase, and the next resize adds 2024 MB more.

[root@centos7server ~]# ssm list volumes
-------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
-------------------------------------------------------------------------------------
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     38.48 GB  xfs   36.49 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
-------------------------------------------------------------------------------------
[root@centos7server ~]# ssm resize -s+2024M /dev/centos/root
Extending logical volume root to 40.46 GiB
Logical volume root successfully resized
meta-data=/dev/mapper/centos-root isize=256    agcount=5, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=10088448, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10088448 to 10606592

[root@centos7server ~]# ssm list volumes
-------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
-------------------------------------------------------------------------------------
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     40.46 GB  xfs   38.47 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
-------------------------------------------------------------------------------------

Note that the root filesystem is grown on the fly, while it stays mounted and in use.

[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        44G  1.1G   43G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        25G   34M   25G   1% /xfs_test
[root@centos7server ~]#
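
As a cross-check, the stock LVM tools report the same numbers; for instance, lvs from the lvm2 package (illustrative, not part of the original walkthrough):

[root@centos7server ~]# lvs centos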

Running xfs_growfs again confirms there is nothing left to grow this time (the data block count is unchanged):

[root@centos7server ~]# xfs_growfs /dev/centos/root
meta-data=/dev/mapper/centos-root isize=256    agcount=5, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=10606592, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        44G  1.1G   43G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        25G   34M   25G   1% /xfs_test

Create a New LVM Pool/Volume

In this experiment, let's see how we can create a new storage pool and a new LVM volume on top of a physical disk drive. With traditional LVM tools the procedure is quite involved: preparing partitions, creating physical volumes, a volume group, and logical volumes, and finally building a filesystem. With ssm, the whole thing can be completed in one shot!
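
For comparison, the traditional procedure would look roughly like this (a hedged sketch; the device name, size, and mount point are illustrative):

[root@centos7server ~]# pvcreate /dev/sdc
[root@centos7server ~]# vgcreate testpool /dev/sdc
[root@centos7server ~]# lvcreate -L 5200M -n disk0 testpool
[root@centos7server ~]# mkfs.xfs /dev/testpool/disk0
[root@centos7server ~]# mkdir -p /test
[root@centos7server ~]# mount /dev/testpool/disk0 /test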

What the following command does is create a storage pool named testpool, create a 5200 MB LVM volume named disk0 in the pool, format the volume with the XFS filesystem, and mount it under /test. You can immediately see the power of ssm.

[root@centos7server ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x3ea192fa

Device Boot      Start         End      Blocks   Id  System

The general syntax is:
$ ssm create -s <size> -n <name> --fstype <fstype> -p <pool> <device> <mount point>

[root@centos7server ~]# mkdir /test

[root@centos7server ~]# ssm create -s 5200M -n disk0 --fstype xfs -p testpool /dev/sdc /test
WARNING: dos signature detected on /dev/sdc at offset 510. Wipe it? [y/n] y
Wiping dos signature on /dev/sdc.
Physical volume "/dev/sdc" successfully created
Volume group "testpool" successfully created
WARNING: LVM2_member signature detected on /dev/testpool/disk0 at offset 536. Wipe it? [y/n] y
Wiping LVM2_member signature on /dev/testpool/disk0.
Logical volume "disk0" created
meta-data=/dev/testpool/disk0    isize=256    agcount=4, agsize=332800 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=1331200, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@centos7server ~]# df -TH
Filesystem                 Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root    xfs        44G  1.1G   43G   3% /
devtmpfs                   devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                      tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                      tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                      tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1                  xfs       521M  147M  374M  29% /boot
/dev/mapper/testpool-disk0 xfs       5.5G   34M  5.5G   1% /test

Take a Snapshot of an LVM Volume

Using the ssm tool, you can also take a snapshot of existing disk volumes. Note that snapshots work only if the backend the volume belongs to supports snapshotting. The LVM backend supports online snapshotting, which means we do not have to take the volume offline while snapshotting it. Also, since the LVM backend of ssm supports LVM2, the snapshots are read/write enabled.
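
The general form of the subcommand lets you optionally set a snapshot size and name, roughly ssm snapshot [-s SIZE] [-d DEST | -n NAME] volume (check ssm snapshot --help on your system). For example, with an illustrative name and size:

[root@centos7server ~]# ssm snapshot -s 1G -n disk0-snap /dev/testpool/disk0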

Let's take a snapshot of the existing LVM volume /dev/testpool/disk0. First, copy some data into the volume so the snapshot has something to capture:

[root@centos7server ~]# cp -pR /var/log/* /test/
[root@centos7server ~]#

[root@centos7server ~]# ssm snapshot /dev/testpool/disk0
Logical volume "snap20150214T220836" created
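
To verify, you can list the snapshots (ssm list also accepts a snap/snapshots filter; the table layout mirrors the volumes listing):

[root@centos7server ~]# ssm list snap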

Once the snapshot is taken, you can unmount the original volume and mount the snapshot volume to access the data in it.

[root@centos7server ~]# mkdir /test1

[root@centos7server ~]# umount /dev/testpool/disk0
[root@centos7server ~]#

[root@centos7server ~]# ssm mount /dev/testpool/snap20150214T220836 /test1
[root@centos7server ~]#
[root@centos7server ~]# cd /test1
[root@centos7server test1]# ls -ltr
total 908
drwx------. 2 root root      6 Jun 10  2014 ppp
drwxr-xr-x. 2 root root     22 Jun 24  2014 tuned
-rw-------. 1 root root      0 Feb 11 18:47 tallylog
-rw-------. 1 root root      0 Feb 11 18:48 spooler
drwxr-xr-x. 2 root root   4096 Feb 11 18:49 anaconda
drwxr-x---. 2 root root     22 Feb 11 18:50 audit
-rw-------. 1 root root   1370 Feb 11 19:07 grubby
-rw-r--r--. 1 root root 110663 Feb 14 15:55 dmesg.old
-rw-------. 1 root root   5304 Feb 14 16:56 yum.log
-rw-r--r--. 1 root root   7205 Feb 14 21:20 boot.log
-rw-r--r--. 1 root root 111840 Feb 14 21:20 dmesg
-rw-------. 1 root root    824 Feb 14 21:20 maillog
-rw-------. 1 root utmp    768 Feb 14 21:23 btmp
-rw-r--r--. 1 root root 292000 Feb 14 21:23 lastlog
-rw-rw-r--. 1 root utmp  20736 Feb 14 21:39 wtmp
-rw-------. 1 root root  14997 Feb 14 21:39 secure
-rw-------. 1 root root 568125 Feb 14 22:01 messages
-rw-r--r--. 1 root root  39870 Feb 14 22:01 cron

[root@centos7server test1]# df -Th /test1
Filesystem                               Type  Size  Used Avail Use% Mounted on
/dev/mapper/testpool-snap20150214T220836 xfs   5.1G   35M  5.1G   1% /test1
[root@centos7server test1]#

Remove an LVM Volume

Removing an existing disk volume or storage pool is as easy as creating one. If you attempt to remove a mounted volume, ssm will automatically unmount it first. No hassle there.

To remove an LVM volume:
$ ssm remove <volume>

To remove a storage pool:
$ ssm remove <pool-name>
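
The remove subcommand also accepts an -a/--all flag to tear down all pools at once; use it with care, and confirm the flag with ssm remove --help on your system:

$ ssm remove -a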

[root@centos7server test1]# ssm remove /dev/testpool/disk0
Logical volume testpool/snap20150214T220836 contains a filesystem in use.
SSM Info: Unable to remove '/dev/testpool/disk0'
SSM Error (2001): Nothing was removed!
[root@centos7server test1]#

The first attempt failed because the snapshot filesystem was still in use: our shell was sitting inside /test1. After leaving the directory, the removal succeeds, taking the dependent snapshot with it:

[root@centos7server ~]# ssm remove /dev/testpool/disk0
Do you really want to remove active logical volume snap20150214T220836? [y/n]: y
Logical volume "snap20150214T220836" successfully removed
Do you really want to remove active logical volume disk0? [y/n]: y
Logical volume "disk0" successfully removed
[root@centos7server ~]#
