XFS on CentOS 7

XFS without LVM
Create an XFS file system
We have /dev/sdb as a free disk:

[root@clusterserver1 ~]# lsblk -f
NAME            FSTYPE      LABEL           UUID                                   MOUNTPOINT
fd0
sda
├─sda1          xfs                         aba69d25-b3de-4d89-ba25-e50a8dcf10eb   /boot
└─sda2          LVM2_member                 EE31dY-Ubnm-LwCA-8J9J-vK9B-XNzz-OZSt75
  ├─centos-swap swap                        2e1fb731-0f59-4d10-9f2f-e302a671de57   [SWAP]
  └─centos-root xfs                         8e1d8c59-5cd0-4716-92dd-de7c1417dc74   /
sdb
sr0             iso9660     CentOS 7 x86_64 2014-07-06-17-32-07-00
[root@clusterserver1 ~]#

Create a single partition spanning the whole drive:

[root@clusterserver1 ~]# parted -s /dev/sdb mklabel gpt
[root@clusterserver1 ~]# parted -s /dev/sdb mkpart primary xfs 0% 100%
[root@clusterserver1 ~]#
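
To verify that the new partition is properly aligned, parted has a built-in check (not part of the original session; "1" is the partition number):

parted /dev/sdb align-check optimal 1
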
[root@clusterserver1 ~]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=1310592 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5242368, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@clusterserver1 ~]# mount -o inode64,nobarrier /dev/sdb1 /mnt
[root@clusterserver1 ~]# df -TH /mnt/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs    22G   34M   22G   1% /mnt
[root@clusterserver1 ~]#
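
To confirm that the inode64 and nobarrier options are active, the mount can be inspected with findmnt (a quick check, not in the original session):

findmnt /mnt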

[root@clusterserver1 ~]# lsblk -f /dev/sdb
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sdb
└─sdb1 xfs          23356c78-b7eb-4dc8-bd29-3d9933ac848b /mnt
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# umount /mnt

[root@clusterserver1 ~]# mkdir -p /other/data
[root@clusterserver1 ~]# vi /etc/fstab
[root@clusterserver1 ~]# grep /dev/sdb1 /etc/fstab
/dev/sdb1       /other/data                     xfs     inode64,nobarrier       0 0
[root@clusterserver1 ~]#
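
Referencing the partition by UUID instead of /dev/sdb1 makes the fstab entry robust against device renaming; a sketch of the equivalent line (not from the original post), using the UUID reported by lsblk above:

blkid /dev/sdb1
UUID=23356c78-b7eb-4dc8-bd29-3d9933ac848b  /other/data  xfs  inode64,nobarrier  0 0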

[root@clusterserver1 ~]# mount /other/data
[root@clusterserver1 ~]#  df -hT /other/data
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs    20G   33M   20G   1% /other/data

[root@clusterserver1 ~]# parted -s /dev/sdb print free
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  21.5GB  21.5GB  xfs          primary
        21.5GB  21.5GB  1032kB  Free Space

[root@clusterserver1 ~]#

Unmount the file system:

[root@clusterserver1 ~]# umount /other/data
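
If the kernel has not yet noticed that the virtual disk was resized, a SCSI rescan avoids a reboot (a sketch, not from the original session; assumes the disk is still /dev/sdb):

echo 1 > /sys/class/block/sdb/device/rescan
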
A minute later, once the virtual disk has been resized (here to 6 GB), parted sees the new size:

[root@clusterserver1 ~]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print free
Error: The backup GPT table is not at the end of the disk, as it should be.  This might mean that another operating system believes the disk is smaller.  Fix, by moving the
backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? fix
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 4194304 blocks) or continue with the current
setting?
Fix/Ignore? fix
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 6442MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  4294MB  4293MB  xfs          primary
        4294MB  6442MB  2149MB  Free Space
Switch to sector units:

(parted) unit s
Display all partitions and free space:

(parted) print free
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 12582912s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start     End        Size      File system  Name     Flags
        34s       2047s      2014s     Free Space
 1      2048s     8386559s   8384512s  xfs          primary
        8386560s  12582878s  4196319s  Free Space
Remove this partition:

(parted) rm 1
Recreate the partition, starting at exactly the same sector (2048s) so the existing file system data is left untouched:

(parted) mkpart primary 2048s 100%
Switch back to kB units:

(parted) unit kB
We can now see the new 6 GB partition:

(parted) print free
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 6442451kB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start      End        Size       File system  Name     Flags
        17.4kB     1049kB     1031kB     Free Space
 1      1049kB     6441402kB  6440354kB  xfs          primary
        6441402kB  6442434kB  1032kB     Free Space
Quit “parted”:

(parted) q
Information: You may need to update /etc/fstab.
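
If the kernel still holds the old partition table in memory, it can be re-read before continuing (an extra precaution, not in the original session):

partprobe /dev/sdb
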
Run xfs_repair to check and rebuild this XFS file system:

[root@clusterserver1 ~]# xfs_repair /dev/sdb1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
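
Note that xfs_repair requires the file system to be unmounted, as it is here. For a read-only check that modifies nothing, the -n flag can be used first:

xfs_repair -n /dev/sdb1
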
Remount the file system:

[root@clusterserver1 ~]# mount /other/data
The file system has not grown yet:

[root@clusterserver1 ~]# df -hT /other/data
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs   4.0G   33M  4.0G   1% /other/data
We now need to grow the XFS file system:

[root@clusterserver1 ~]# xfs_growfs /other/data
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=262016 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=1048064, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1048064 to 1572352
It’s done:

[root@clusterserver1 ~]# df -hT /other/data
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs   6.0G   33M  6.0G   1% /other/data
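
The new geometry can also be read back with xfs_info (a quick verification, output omitted):

xfs_info /other/data
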
No data was lost:

[root@clusterserver1 ~]# cat /other/data/file
here is a file in an xfs file system

XFS with LVM
Now the same exercise, this time on LVM, managed with System Storage Manager (ssm). Install it, create the mount point, and check the available disks:

yum -y install system-storage-manager
mkdir -p /other/data
fdisk -l
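
If /dev/sdb still carries the GPT and XFS signatures from the first part, they have to be cleared before LVM can take the whole disk (destructive, a sketch under that assumption):

wipefs -a /dev/sdb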

[root@clusterserver1 ~]# ssm create -n data_lv --fstype xfs -p data_vg /dev/sdb /other/data
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 10096: /usr/bin/python
Physical volume "/dev/sdb" successfully created
Volume group "data_vg" successfully created
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 10096: /usr/bin/python
Logical volume "data_lv" created.
meta-data=/dev/data_vg/data_lv   isize=256    agcount=4, agsize=1310464 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5241856, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@clusterserver1 ~]# df -hT /other/data
[root@clusterserver1 ~]# vgdisplay -v data_vg
Using volume group(s) on command line.
--- Volume group ---
VG Name               data_vg
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                1
Act PV                1
VG Size               20.00 GiB
PE Size               4.00 MiB
Total PE              5119
Alloc PE / Size       5119 / 20.00 GiB
Free  PE / Size       0 / 0
VG UUID               wlEg2R-Bydn-UbFn-AI63-MeA4-dgNy-q30uWb

--- Logical volume ---
LV Path                /dev/data_vg/data_lv
LV Name                data_lv
VG Name                data_vg
LV UUID                BvQo50-0ehc-Ub92-QUNo-Qmn4-4roI-qHpkjX
LV Write Access        read/write
LV Creation host, time clusterserver1.rmohan.com, 2016-09-19 00:38:25 +0800
LV Status              available
# open                 1
LV Size                20.00 GiB
Current LE             5119
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     8192
Block device           253:2

--- Physical volumes ---
PV Name               /dev/sdb
PV UUID               kYBGc2-EZNF-JtdX-lKj4-WRJR-tKxP-vu8wJP
PV Status             allocatable
Total PE / Free PE    5119 / 0
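
The same information is available in compact form from the plain LVM tools:

pvs
vgs
lvs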

[root@clusterserver1 ~]# lsblk -f
NAME              FSTYPE   LABEL           UUID                                   MOUNTPOINT
fd0
sda
├─sda1            xfs                      aba69d25-b3de-4d89-ba25-e50a8dcf10eb   /boot
└─sda2            LVM2_mem                 EE31dY-Ubnm-LwCA-8J9J-vK9B-XNzz-OZSt75
  ├─centos-swap   swap                     2e1fb731-0f59-4d10-9f2f-e302a671de57   [SWAP]
  └─centos-root   xfs                      8e1d8c59-5cd0-4716-92dd-de7c1417dc74   /
sdb               LVM2_mem                 kYBGc2-EZNF-JtdX-lKj4-WRJR-tKxP-vu8wJP
└─data_vg-data_lv xfs                      49460e67-1b86-444d-9dcb-7b7fd014303e
sdc
sr0               iso9660  CentOS 7 x86_64 2014-07-06-17-32-07-00
[root@clusterserver1 ~]# ssm add -p data_vg /dev/sdc
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 2200: /usr/bin/python
Physical volume "/dev/sdc" successfully created
Volume group "data_vg" successfully extended
[root@clusterserver1 ~]#
[root@clusterserver1 ~]# ssm list pool
----------------------------------------------------
Pool     Type  Devices      Free      Used     Total
----------------------------------------------------
centos   lvm   1         0.00 KB  19.51 GB  19.51 GB
data_vg  lvm   2        20.00 GB  20.00 GB  39.99 GB
----------------------------------------------------
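
ssm can also list the logical volumes themselves (output omitted here):

ssm list volumes
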
[root@clusterserver1 ~]# ssm resize -s +2G /dev/data_vg/data_lv
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 2256: /usr/bin/python
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Size of logical volume data_vg/data_lv changed from 20.00 GiB (5119 extents) to 22.00 GiB (5631 extents).
Logical volume data_lv successfully resized.
meta-data=/dev/mapper/data_vg-data_lv isize=256    agcount=4, agsize=1310464 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5241856, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5241856 to 5766144
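
The new LV size can be double-checked with plain LVM (not part of the original session):

lvs data_vg/data_lv
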
[root@clusterserver1 ~]# ssm resize -s +2093056K /dev/data_vg/data_lv
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 2429: /usr/bin/python
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Size of logical volume data_vg/data_lv changed from 22.00 GiB (5631 extents) to 23.99 GiB (6142 extents).
Logical volume data_lv successfully resized.
meta-data=/dev/mapper/data_vg-data_lv isize=256    agcount=5, agsize=1310464 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5766144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5766144 to 6289408
[root@clusterserver1 ~]#

[root@clusterserver1 ~]#  xfs_growfs /other/data
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=1147392 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=4589568, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@clusterserver1 ~]# df -TH
Filesystem                  Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root     xfs        19G  1.7G   18G   9% /
devtmpfs                    devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                       tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs                       tmpfs     2.0G  9.0M  2.0G   1% /run
tmpfs                       tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda1                   xfs       521M  279M  243M  54% /boot
tmpfs                       tmpfs     390M     0  390M   0% /run/user/0
/dev/mapper/data_vg-data_lv xfs        26G   34M   26G   1% /other/data
[root@clusterserver1 ~]# cd /other/data/
[root@clusterserver1 data]# ls
file
[root@clusterserver1 data]# cat file
we are using LVM
[root@clusterserver1 data]#
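
As a final note: to hand all of the pool's remaining free space to the volume in one go, plain LVM works as well (a sketch, not from the original post):

lvextend -l +100%FREE /dev/data_vg/data_lv
xfs_growfs /other/data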
