LVM2: device filter and LVM metadata restore

Customize the LVM device filter to get rid of the annoying “/dev/cdrom: open failed” warning

## /dev/cdrom: open failed warning
$ pvcreate /dev/sdb1
/dev/cdrom: open failed: Read-only file system
$ vgcreate vg01 /dev/sdb1
/dev/cdrom: open failed: Read-only file system
## The warning appears because LVM scans all device files by default; you can exclude devices with a device filter

## The file /etc/lvm/cache/.cache contains the device file names scanned by LVM
$ cat /etc/lvm/cache/.cache
persistent_filter_cache {
    valid_devices=[
        "/dev/ram11",
        "/dev/cdrom",
..

## Edit /etc/lvm/lvm.conf and change the default filter
filter = [ "a/.*/" ]
# to
filter = [ "r|/dev/cdrom|", "r|/dev/ram*|" ]

## You need to delete the cache file or run vgscan to regenerate it
$ rm /etc/lvm/cache/.cache    # or: vgscan
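
With the filter in place and the cache regenerated, LVM should scan devices without complaining. A quick way to confirm is simply to trigger a device scan again:

## No more "/dev/cdrom: open failed" warning during the scan
$ pvscan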

LVM metadata backup and restore
LVM automatically records every VG and LV metadata operation and saves it to /etc/lvm/backup; older versions of the backup files are archived to /etc/lvm/archive.
The backup files can be used to roll back LVM metadata changes. For example, if you have removed the VG/PV, or even re-initialized a disk with pvcreate, don't panic: as long as the file system was not re-created, you can use vgcfgrestore to restore all the data.
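
The backups are written automatically on every metadata change, but you can also snapshot the metadata by hand before risky maintenance. A minimal sketch using the standard vgcfgbackup command (vg01 here matches the demo below):

## Manually back up the metadata of a single VG (written to /etc/lvm/backup/vg01)
$ vgcfgbackup vg01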
The following demonstrates how to recover an LV after it has been completely destroyed at the PV level (the VG removed and the PVs re-initialized with pvcreate).
1. Create a test LV and write some data

$ pvcreate /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
$ vgcreate vg01 /dev/sdb1 /dev/sdb2
Volume group "vg01" successfully created
$ lvcreate -L100M -n lv01 vg01
Logical volume "lv01" created
$ mkfs.ext3 /dev/vg01/lv01
$ mount /dev/vg01/lv01 /mnt/
$ cp /etc/hosts /mnt/
$ ls /mnt/
hosts  lost+found
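
One step the listing glosses over: the filesystem must be unmounted before the LV can be removed in the next step, because LVM refuses to remove an open logical volume. So, presumably:

## Unmount first, otherwise the lvremove inside vgremove will fail on the open LV
$ umount /mnt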

2. Destroy the LV, VG, and PV

$ vgremove vg01
Do you really want to remove volume group "vg01" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume lv01? [y/n]: y
Logical volume "lv01" successfully removed
Volume group "vg01" successfully removed

## The VG is removed; now wipe out the PV level too by re-initializing the disks
$ pvcreate /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created

3. Let's recover the LV and the data

## Find out the backup file to restore from
$ vgcfgrestore -l vg01
..
file:         /etc/lvm/archive/vg01_00002.vg
VG name:      vg01
Description:  Created *before* executing 'vgremove vg01'
Backup time:  Tue May 10 15:41:31 2011

## The first attempt fails because the PV UUIDs have changed
$ vgcfgrestore -f /etc/lvm/archive/vg01_00002.vg vg01
Couldn't find device with uuid 'pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB'.
Couldn't find device with uuid 'J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W'.
Cannot restore Volume Group vg01 with 2 PVs marked as missing.
Restore failed.
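
The archived UUIDs no longer match because the second pvcreate generated fresh random ones. To see the current UUIDs next to the device names, pvs can report them (pv_uuid is a standard reporting field):

## Show the new UUIDs that the last pvcreate generated
$ pvs -o pv_name,pv_uuid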

## Find the old UUIDs in the archived metadata
$ grep -B 2 /dev/sdb /etc/lvm/archive/vg01_00002.vg
pv0 {
id = "pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB"
device = "/dev/sdb1"    # Hint only
--
pv1 {
id = "J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W"
device = "/dev/sdb2"    # Hint only
$

## Recreate the PVs with the old UUIDs
$ pvcreate -u pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB /dev/sdb1
Physical volume "/dev/sdb1" successfully created
$ pvcreate -u J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W /dev/sdb2
Physical volume "/dev/sdb2" successfully created
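
A side note if you replay this on a newer LVM2 release: recent versions refuse to set a UUID without also being pointed at the metadata backup, so pvcreate may demand a --restorefile argument. In that case this variant (using the same archive file as above) should do:

## Newer LVM2 requires --restorefile (or --norestorefile) together with --uuid
$ pvcreate -u pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB --restorefile /etc/lvm/archive/vg01_00002.vg /dev/sdb1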

## Run vgcfgrestore again
$ vgcfgrestore -f /etc/lvm/archive/vg01_00002.vg vg01
Restored volume group vg01

## The data is also recovered; note the LV must be activated before it can be mounted
$ mount /dev/vg01/lv01 /mnt/
mount: special device /dev/vg01/lv01 does not exist
$ lvchange -a y vg01/lv01
$ mount /dev/vg01/lv01 /mnt/
$ cat /mnt/hosts
127.0.0.1       localhost
..
