Reasons for garbled (mojibake) text in MySQL

Garbled text in MySQL comes down to one of three stages:

1. The data is already garbled before it is stored in the database.

2. The data becomes garbled in transit while being stored.

3. The data becomes garbled after it is read back from the database.

Finding out where the corruption happens is simple: print the value at each stage in the back end and compare.

Once you know where the problem is, the fix is straightforward:

1. Set the JSP page encoding to UTF-8, so the request reaches the back end as UTF-8.

2. Add the encoding parameters to the JDBC connection URL: jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=UTF-8, so that the transfer to the database itself uses UTF-8.

3. Set the database encoding to UTF-8, either per session or in my.ini. Note that default-character-set must be changed in two places in my.ini (typically under the [client] and [mysqld] sections).

4. If URLs or easyui requests are garbled, try changing Tomcat’s server.xml from:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

to:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" URIEncoding="utf-8" />

The key to avoiding garbled text is to unify the encoding end to end first; any remaining problems are then easy to isolate. END.
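To recognize which of the three stages is at fault, it helps to know what the corruption itself looks like. Most MySQL mojibake is UTF-8 text decoded as Latin-1 somewhere along the chain. A minimal sketch reproducing the symptom with iconv, no database involved (assumes GNU iconv and a UTF-8 terminal):

```shell
# The UTF-8 encoding of "é" is the two bytes 0xC3 0xA9. If any layer
# (JSP, JDBC, or the server) decodes those bytes as Latin-1 instead,
# they turn into the two characters "Ã©" - the classic mojibake shape.
printf 'é' | iconv -f latin1 -t utf-8
```

If the value printed in your back end looks like this, the corruption happened before that point; if it is still readable there, look further down the chain.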

JVM terminated. Exit code=1 in IBM Installation Manager

IBM Installation Manager and IBM Packaging Utility GUI crashes while advancing through wizard screens on recent updates of RHEL 6

Problem (Abstract)
The IBM Installation Manager and IBM Packaging Utility GUI crashes on RHEL 6 updates 5 and 6 during the installation of a product and while advancing through the wizard screens.
Symptom
The GUI crashes and a JVM termination window pops up with the following text:

JVM terminated. Exit code=1
/opt/IBM/InstallationManager/eclipse/jre_7.0.0.sr6_20131213_1238/
jre/bin/java
-Xms40m
-Xmx512m

Cause
An Eclipse defect, tracked here https://bugs.eclipse.org/bugs/show_bug.cgi?id=441705, causes the crash.
According to the Eclipse bug description, the crash occurs only on Linux systems where the GTK version is “2.24” or higher and the “cairo” library’s version is lower than “1.9.4”.
To verify that this is indeed the cause for the crash, execute:

$ rpm -q gtk2
Example output:

gtk2-2.24.23-6.el6.s390x
gtk2-2.24.23-6.el6.s390

$ rpm -q cairo
Example output:

cairo-1.8.8-3.1.el6.s390x
cairo-1.8.8-3.1.el6.s390
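The two checks above can be scripted; a sketch using sort -V for the version comparison (the “1.8.8” value stands in for the live output of rpm -q --qf '%{VERSION}\n' cairo):

```shell
# Compare the installed cairo version against the 1.9.4 threshold from
# the Eclipse bug report. sort -V orders version strings numerically.
installed="1.8.8"   # stand-in for: rpm -q --qf '%{VERSION}\n' cairo | head -n1
threshold="1.9.4"
oldest=$(printf '%s\n%s\n' "$installed" "$threshold" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$threshold" ]; then
    echo "cairo $installed is older than $threshold: affected"
else
    echo "cairo $installed is $threshold or newer: not affected"
fi
```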
Resolving the problem
To work around the issue, upgrade the “cairo” library.
Download a version of the cairo library that is at least “1.9.4”.
x86 64-bit example: ftp://fr2.rpmfind.net/linux/sourceforge/f/fu/fuduntu-el/el6/current/TESTING/RPMS/cairo-1.10.2-3.el6.x86_64.rpm, then execute:

rpm -U cairo-1.10.2-3.el6.x86_64.rpm

x86 32-bit example: ftp://fr2.rpmfind.net/linux/sourceforge/f/fu/fuduntu-el/el6/current/TESTING/RPMS/cairo-1.10.2-3.el6.i686.rpm and execute:

rpm -U cairo-1.10.2-3.el6.i686.rpm

Another option is to run the Installation Manager in command-line, console, or web UI mode instead of GUI mode.

MQ Quick Reference Commands and Documents

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter1-introduction-and-basics.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter2-installation.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter3-messages.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter4-mq-objects.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter5-administration.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter6-mq-logs-and-logging.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter7-creating-mq-objects.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter8-triggering.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter9-distributed-q-and-clusters.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter10-security.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter11-mq-client.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter12-backup-and-restore.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter13-problem-determination.pdf"]

[gview file="http://rmohan.com/wp-content/uploads/2015/03/chapter14-publish-subscribe.pdf"]

Solution to fix WordPress update failure issue

A few weeks ago I was having a problem updating one of my plugins; WordPress kept telling me:

Could not create directory. /home/<>/public_html/wp-content/plugins/download-manager
Plugin install failed.

or

Unable to locate WordPress Content directory (wp-content).

I turned to Google for solutions and tried every fix I found, such as:
1. Removing the plugin and re-installing it.
2. Changing permissions of the wp-content directory and its sub-directories to 777.
3. Changing the FTP account in wp-config.php.

But none of them fixed my problem, so I had to upload the plugin manually, which worked.

A few days ago WordPress 4.1.1 was released, along with a batch of plugin updates. When I tried to update them, none of them would install.

So I could not avoid the issue any longer.

Obviously this issue is permission-related. After digging deeper into it, I finally found a solution, and this is the antidote: fix it by adding the following code to the end of your wp-config.php.

if(is_admin())
{
add_filter('filesystem_method', create_function('$a', 'return "direct";'));
define( 'FS_CHMOD_DIR', 0751 );
}
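For reference, the FS_CHMOD_DIR value 0751 gives the owner full access, the group read and execute, and everyone else execute only, which avoids falling back to wide-open 777 directories. A quick sketch with a throwaway directory (the path is just an example):

```shell
# Create a scratch directory and apply the same mode wp-config.php sets.
mkdir -p /tmp/wp_perm_demo
chmod 0751 /tmp/wp_perm_demo
stat -c '%a %A' /tmp/wp_perm_demo   # shows the octal and symbolic mode
```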

Repairing Windows 2012 R2 Startup

Method 1:
===============
1. Put the Windows Server 2012 R2 installation disc into the disc drive, and then start the computer.
2. Press any key when the message “Press any key to boot from CD or DVD …” appears.
3. Select a language, a time, a currency, and a keyboard or another input method, and then click Next.
4. Click Repair your computer.
5. Click the operating system that you want to repair, and then click Next.
6. In the System Recovery Options dialog box, click Command Prompt.
7. Type sfc /scannow, and then press ENTER.

Method 2:
===============
1. Put the Windows Server 2012 R2 installation disc in the disc drive, and then start the computer.
2. Press any key when the message “Press any key to boot from CD or DVD …” appears.
3. Select a language, time, currency, and a keyboard or another input method. Then click Next.
4. Click Repair your computer.
5. Click the operating system that you want to repair, and then click Next.
6. In the System Recovery Options dialog box, click Command Prompt.
7. Type Bootrec /RebuildBcd, and then press ENTER.

Method 3:
===============
1. Put the Windows Server 2012 R2 installation disc into the disc drive, and then start the computer.
2. Press any key when the message “Press any key to boot from CD or DVD …” appears.
3. Select a language, a time, a currency, and a keyboard or another input method, and then click Next.
4. Click Repair your computer.
5. Click the operating system that you want to repair, and then click Next.
6. In the System Recovery Options dialog box, click Command Prompt.
7. Type BOOTREC /FIXMBR, and then press ENTER.
8. Type BOOTREC /FIXBOOT, and then press ENTER.
9. Type Drive:\boot\Bootsect.exe /NT60 All, and then press ENTER.

Note: In this command, Drive is the drive where the Windows Server 2012 R2 installation media is located.

CENTOS 6.5 GFS CLUSTER

CentOS 6.5 x64 RHCS GFS

cluster1.rmohan.com
cluster2.rmohan.com

# cat /etc/hosts

192.168.0.10 cluster1.rmohan.com cluster1
192.168.0.11 cluster2.rmohan.com cluster2

[root@cluster1 ~]# iptables -F
[root@cluster1 ~]# iptables-save > /etc/sysconfig/iptables
[root@cluster1 ~]# /etc/init.d/iptables restart
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
iptables: Applying firewall rules: [ OK ]
[root@cluster1 ~]# vi /etc/selinux/config
[root@cluster1 ~]#

[root@cluster1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing – SELinux security policy is enforced.
# permissive – SELinux prints warnings instead of enforcing.
# disabled – No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
# targeted – Targeted processes are protected,
# mls – Multi Level Security protection.
SELINUXTYPE=targeted

yum install iscsi-initiator-utils
chkconfig iscsid on
service iscsid start

yum install ntp

chkconfig ntpd on

service ntpd start

[root@cluster ~]# chkconfig --level 345 ntpd on
[root@cluster2 ~]# /etc/init.d/ntpd
ntpd ntpdate
[root@cluster2 ~]# /etc/init.d/ntpd restart
Shutting down ntpd: [FAILED]
Starting ntpd: [ OK ]
[root@cluster2 ~]# clear

ntpdate -u 0.centos.pool.ntp.org
17 Feb 21:32:32 ntpdate[12196]: adjust time server 103.11.143.248 offset 0.000507 sec

[root@cluster1 ~]# date
Tue Feb 17 21:32:49 SGT 2015
[root@cluster1 ~]#

# iscsiadm -m discovery -t sendtargets -p 192.168.1.50

qdisk 100MB
data 20GB

[root@cluster2 ~]# yum install iscsi-initiator-utils
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: centos.ipserverone.com
* extras: singo.ub.ac.id
* updates: centos.ipserverone.com
Resolving Dependencies
--> Running transaction check
---> Package iscsi-initiator-utils.x86_64 0:6.2.0.873-13.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================================
Package Arch Version Repository Size
========================================================================================================================================
Installing:
iscsi-initiator-utils x86_64 6.2.0.873-13.el6 base 719 k

Transaction Summary
========================================================================================================================================
Install 1 Package(s)

Total download size: 719 k
Installed size: 2.4 M
Is this ok [y/N]: y
Downloading Packages:
iscsi-initiator-utils-6.2.0.873-13.el6.x86_64.rpm | 719 kB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : iscsi-initiator-utils-6.2.0.873-13.el6.x86_64 1/1
Verifying : iscsi-initiator-utils-6.2.0.873-13.el6.x86_64 1/1

Installed:
iscsi-initiator-utils.x86_64 0:6.2.0.873-13.el6

Complete!

[root@cluster2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.50
Starting iscsid: [ OK ]
192.168.1.50:3260,1 iqn.2006-01.com.openfiler:tsn.5ed5c1620415
[root@cluster2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.5ed5c1620415 -p 192.168.1.50 -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5ed5c1620415, portal: 192.168.1.50,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5ed5c1620415, portal: 192.168.1.50,3260] successful.
[root@cluster2 ~]#

[root@cluster2 ~]# cat /proc/partitions
major minor #blocks name

8 0 20971520 sda
8 1 512000 sda1
8 2 20458496 sda2
253 0 18358272 dm-0
253 1 2097152 dm-1
8 16 19988480 sdb
8 32 327680 sdc

[root@cluster2 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00098b76

Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM

Disk /dev/mapper/vg_cluster2-lv_root: 18.8 GB, 18798870528 bytes
255 heads, 63 sectors/track, 2285 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_cluster2-lv_swap: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 20.5 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc: 335 MB, 335544320 bytes
11 heads, 59 sectors/track, 1009 cylinders
Units = cylinders of 649 * 512 = 332288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@cluster2 ~]#

2. Install the RHCS packages

1) On cluster1 (the management node), install the RHCS packages including luci, the web management front end; luci is installed only on the management node:

yum -y install luci cman odcluster ricci gfs2-utils rgmanager lvm2-cluster

2) Install the RHCS packages on cluster2:

yum -y install cman odcluster ricci gfs2-utils rgmanager lvm2-cluster

3) Set the password for the ricci user on both cluster1 and cluster2:

passwd ricci

4) Configure the RHCS services to start at boot:

chkconfig ricci on
chkconfig rgmanager on
chkconfig cman on
service ricci start
service rgmanager start
service cman start

[root@cluster1 ~]# chkconfig ricci on
[root@cluster1 ~]# chkconfig rgmanager on
[root@cluster1 ~]# chkconfig cman on
[root@cluster1 ~]# service ricci start
Starting system message bus: [ OK ]
Starting oddjobd: [ OK ]
generating SSL certificates… done
Generating NSS database… done
Starting ricci: [ OK ]
[root@cluster1 ~]# service rgmanager start
Starting Cluster Service Manager: [ OK ]
[root@cluster1 ~]# service cman start
Starting cluster:
Checking if cluster has been disabled at boot… [ OK ]
Checking Network Manager… [ OK ]
Global setup… [ OK ]
Loading kernel modules… [ OK ]
Mounting configfs… [ OK ]
Starting cman… xmlconfig cannot find /etc/cluster/cluster.conf
[FAILED]
Stopping cluster:
Leaving fence domain… [ OK ]
Stopping gfs_controld… [ OK ]
Stopping dlm_controld… [ OK ]
Stopping fenced… [ OK ]
Stopping cman… [ OK ]
Unloading kernel modules… [ OK ]
Unmounting configfs… [ OK ]

[root@cluster2 ~]# service ricci start
Starting system message bus: [ OK ]
Starting oddjobd: [ OK ]
generating SSL certificates… done
Generating NSS database… done
Starting ricci: [ OK ]
[root@cluster2 ~]# service rgmanager start
Starting Cluster Service Manager: [ OK ]
[root@cluster2 ~]# service cman start
Starting cluster:
Checking if cluster has been disabled at boot… [ OK ]
Checking Network Manager… [ OK ]
Global setup… [ OK ]
Loading kernel modules… [ OK ]
Mounting configfs… [ OK ]
Starting cman… xmlconfig cannot find /etc/cluster/cluster.conf
[FAILED]
Stopping cluster:
Leaving fence domain… [ OK ]
Stopping gfs_controld… [ OK ]
Stopping dlm_controld… [ OK ]
Stopping fenced… [ OK ]
Stopping cman… [ OK ]
Unloading kernel modules… [ OK ]
Unmounting configfs… [ OK ]
[root@cluster2 ~]#

Note: cman fails to start on both nodes because /etc/cluster/cluster.conf does not exist yet; it is created later when the cluster is defined in luci.

Install and start the luci service on the management node cluster1

1) Start luci Service

[root@cluster1 ~]# chkconfig luci on
[root@cluster1 ~]# service luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `cluster1.rmohan.com’ address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config’ (you can change them by editing `/var/lib/luci/etc/cacert.config’, removing the generated certificate `/var/lib/luci/certs/host.pem’ and restarting luci):
(none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to ‘/var/lib/luci/certs/host.pem’
Start luci… [ OK ]
Point your web browser to https://cluster1.rmohan.com:8084 (or equivalent) to access luci
[root@cluster1 ~]#

[root@cluster1 ~]#

RHCS cluster configuration

https://192.168.1.10:8084/homebase/

GFS 001

1. Add a cluster

Log into the management interface and click Manage Clusters -> Create, then fill in the following:

Cluster Name: gfs

GFS 002

GFS 003

Cluster Name: gfs

NodeName Password RicciHostname Ricci Port
cluster1.rmohan.com test123 cluster1.rmohan.com 11111
cluster2.rmohan.com test123 cluster2.rmohan.com 11111

Select the option “Use locally installed packages”, and then submit.

Note: This step will create a cluster configuration file /etc/cluster/cluster.conf

Fence Devices

Description:
For RHCS to provide complete clustering functionality, fencing must be implemented.
Since no physical servers with fencing hardware are available here, the VMware ESXi 5.x virtual fence device is used to provide the fence function.
Thanks to this usable fence device, the RHCS functionality could be fully tested.

(1) Log into the management interface and click Cluster -> Fence Devices
(2) Click Add and select “VMware Fencing (SOAP Interface)”
(3) Name: “ESXi_fence”
(4) IP Address or Hostname: “192.168.1.31” (the ESXi host address)
(5) Login: “root”
(6) Password: “test123”

GFS 004

GFS 005

3. Bind the fence device to the nodes

Add a fence to node one

1) Click the cluster1.rmohan.com node and click Add Fence Method; fill in node01_fence.
2) Add a fence instance and select “ESXi_fence” VMware Fencing (SOAP Interface).
3) VM NAME: “kvm_cluster1”
4) VM UUID: “564d6fbf-05fb-1dd1-fb66-7ea3c85dcfdf”; check ssl.

Description: VM NAME is the virtual machine name; VM UUID is the value of “uuid.location” in the virtual machine’s .vmx file, given as a string in the format below.

# /usr/sbin/fence_vmware_soap -a 192.168.1.31 -z -l root -p test123 -n kvm_node2 -o list
kvm_cluster2,564d4c42-e7fd-db62-3878-57f77df2475e
kvm_cluster1,564d6fbf-05fb-1dd1-fb66-7ea3c85dcfdf
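Since the list output is one name,uuid pair per line, the UUID for a given VM can be pulled out with awk. A sketch using the sample data from the listing above (live use would capture the fence_vmware_soap output instead of hard-coding it):

```shell
# "-o list" output: one "vm_name,uuid" pair per line.
list='kvm_cluster2,564d4c42-e7fd-db62-3878-57f77df2475e
kvm_cluster1,564d6fbf-05fb-1dd1-fb66-7ea3c85dcfdf'
# live: list=$(/usr/sbin/fence_vmware_soap -a 192.168.1.31 -z -l root -p test123 -o list)
uuid=$(printf '%s\n' "$list" | awk -F, '$1 == "kvm_cluster1" {print $2}')
echo "$uuid"
```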

Add a fence to node two

1) Click the cluster2.rmohan.com node and click Add Fence Method; fill in node02_fence.
2) Add a fence instance and select “ESXi_fence” VMware Fencing (SOAP Interface).
3) VM NAME: “kvm_cluster2”
4) VM UUID: “564d4c42-e7fd-db62-3878-57f77df2475e”; check ssl.

# Manual test fence function example:
GFS 006

GFS 007


# /usr/sbin/fence_vmware_soap -a 192.168.1.31 -z -l root -p test123 -n kvm_node2 -o reboot
Status: ON

Options:
-o: the action to perform (list, status, reboot, etc.)

4. Add Failover Domains Configuration

Name “gfs_failover”
Prioritized
Restricted
cluster1.rmohan.com 1
cluster2.rmohan.com 1

5. Configure GFS Service

(1) GFS Service Configuration

On cluster1.rmohan.com and cluster2.rmohan.com, start the CLVM service to integrate the cluster lock manager:

lvmconf --enable-cluster
chkconfig clvmd on

service clvmd start
Activating VG(s): No volume groups found [ OK ]

[root@cluster1 ~]# service clvmd start
Activating VG(s): 2 logical volume(s) in volume group “vg_cluster1” now active
[ OK ]
[root@cluster1 ~]#

[root@cluster2 ~]# service clvmd start
Activating VG(s): 2 logical volume(s) in volume group “vg_cluster2” now active
[ OK ]
[root@cluster2 ~]#

On cluster1.rmohan.com (after creating an LVM partition /dev/sdb1 on the shared data disk, e.g. with fdisk):

pvcreate /dev/sdb1

[root@cluster1 ~]# pvcreate /dev/sdb1
Physical volume “/dev/sdb1” successfully created

[root@cluster1 ~]# vgcreate gfsvg /dev/sdb1
Clustered volume group “gfsvg” successfully created

# pvcreate /dev/sdc1
Physical volume “/dev/sdc1” successfully created

# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_node01 lvm2 a– 39.51g 0
/dev/sdc1 lvm2 a– 156.25g 156.25g

# vgcreate gfsvg /dev/sdb1
Clustered volume group “gfsvg” successfully created

[root@cluster1 ~]# lvcreate -l +100%FREE -n data gfsvg
Error locking on node cluster2.rmohan.com: Volume group for uuid not found: EjHAOyOMJtk7pJ1gcUeOrbXjgFKMl05y2a3Mdh27oxKVpVBXjLYxHeU6088U9Ptc
Failed to activate new LV.

[root@cluster1 ~]# lvcreate -l +100%FREE -n data gfsvg
clvmd not running on node cluster2.rmohan.com
Unable to drop cached metadata for VG gfsvg.
clvmd not running on node cluster2.rmohan.com

Note: these errors mean clvmd on cluster2 cannot see the new VG; reboot cluster2 (or restart clvmd there) and retry.

cluster2
# /etc/init.d/clvmd start

(3) GFS file system format

cluster1 node:

[root@cluster1 ~]# mkfs.gfs2 -p lock_dlm -t gfs:gfs2 -j 2 /dev/gfsvg/data
This will destroy any data on /dev/gfsvg/data.
It appears to contain: symbolic link to `../dm-2′

Are you sure you want to proceed? [y/n] y

Device: /dev/gfsvg/data
Blocksize: 4096
Device Size 19.06 GB (4996096 blocks)
Filesystem Size: 19.06 GB (4996093 blocks)
Journals: 2
Resource Groups: 77
Locking Protocol: “lock_dlm”
Lock Table: “gfs:gfs2”
UUID: aaecbd43-cd34-fc15-61c8-29fb0c282279

Description:
In -t gfs:gfs2, “gfs” is the cluster name and “gfs2” is the file system name, which works like a label.
-j sets the number of journals, one per host that will mount the file system; if not specified, the default is 1.
There are two nodes in this experiment, hence -j 2.

6. Mount the GFS file system

Create a mount point on cluster1 and cluster2, then mount GFS:

[root@cluster1 ~]# mkdir /vmdata
[root@cluster1 ~]# mount.gfs2 /dev/gfsvg/data /vmdata
[root@cluster1 ~]#

[root@cluster2 ~]# mkdir /vmdata
[root@cluster2 ~]# mount.gfs2 /dev/gfsvg/data /vmdata
[root@cluster2 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_cluster2-lv_root
ext4 19G 1.1G 17G 6% /
tmpfs tmpfs 2.0G 33M 2.0G 2% /dev/shm
/dev/sda1 ext4 500M 52M 422M 11% /boot
/dev/gfsvg/data gfs2 21G 272M 21G 2% /vmdata
[root@cluster2 ~]#

GFS 008

GFS 009

GFS 010

GFS 011

GFS 012

GFS 013

GFS 014

GFS 015

GFS 016

7. Configure the voting disk

Description:
The voting disk is a shared disk and does not need to be large; in this example it is created on the 300 MB /dev/sdc1.

fdisk -l

Device Boot Start End Blocks Id System
/dev/sdc1 1 1009 327391 8e Linux LVM

Disk /dev/mapper/gfsvg-data: 20.5 GB, 20464009216 bytes
255 heads, 63 sectors/track, 2487 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

mkqdisk -c /dev/sdc1 -l myqdisk

[root@cluster1 ~]# mkqdisk -c /dev/sdc1 -l myqdisk
mkqdisk v3.0.12.1

Writing new quorum disk label ‘myqdisk’ to /dev/sdc1.
WARNING: About to destroy all data on /dev/sdc1; proceed [N/y] ? y
Initializing status block for node 1…
Initializing status block for node 2…
Initializing status block for node 3…
Initializing status block for node 4…
Initializing status block for node 5…
Initializing status block for node 6…
Initializing status block for node 7…
Initializing status block for node 8…
Initializing status block for node 9…
Initializing status block for node 10…
Initializing status block for node 11…
Initializing status block for node 12…
Initializing status block for node 13…
Initializing status block for node 14…
Initializing status block for node 15…
Initializing status block for node 16…

[root@cluster1 ~]# mkqdisk -L
mkqdisk v3.0.12.1

/dev/block/8:33:
/dev/disk/by-id/scsi-14f504e46494c45525a344e426c512d623153702d326e417a-part1:
/dev/disk/by-path/ip-192.168.1.50:3260-iscsi-iqn.2006-01.com.openfiler:tsn.5ed5c1620415-lun-1-part1:
/dev/sdc1:
Magic: eb7a62c2
Label: myqdisk
Created: Tue Feb 17 22:56:30 2015
Host: cluster1.rmohan.com
Kernel Sector Size: 512
Recorded Sector Size: 512

(3) Configure the voting disk (qdisk)

# Into the management interface Manage Clusters -> gfs -> Configure -> QDisk

Device : /dev/sdc1

Path to program : ping -c3 -t2 192.168.0.253
Interval : 3
Score : 2
TKO : 10
Minimum Score : 1

# Click apply
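A sketch of how qdiskd interprets these settings: every Interval seconds it runs the heuristic program (the ping above), a success earns Score points, and the node keeps its quorum-disk vote while its total is at least Minimum Score (TKO is how many missed rounds get a node evicted). One evaluation round, simulated with true standing in for the ping:

```shell
# One evaluation round of the single heuristic configured above.
min_score=1
score=0
if true; then             # heuristic passed (stand-in for: ping -c3 -t2 192.168.0.253)
    score=$((score + 2))  # Score: 2
fi
if [ "$score" -ge "$min_score" ]; then
    echo "heuristics passing: node keeps its quorum-disk vote"
else
    echo "heuristics failing: node loses its quorum-disk vote"
fi
```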

(4) Start qdisk Service

chkconfig qdiskd on
service qdiskd start
clustat -l

[root@cluster1 ~]# clustat -l
Cluster Status for gfs @ Tue Feb 17 23:10:24 2015
Member Status: Quorate

Member Name ID Status
—— —- —- ——
cluster1.rmohan.com 1 Online, Local
cluster2.rmohan.com 2 Online
/dev/sdc1 0 Online, Quorum Disk

[root@cluster2 ~]# clustat -l
Cluster Status for gfs @ Tue Feb 17 23:10:29 2015
Member Status: Quorate

Member Name ID Status
—— —- —- ——
cluster1.rmohan.com 1 Online
cluster2.rmohan.com 2 Online, Local
/dev/sdc1 0 Online, Quorum Disk

LVM volumes on CentOS / RHEL 7 with System Storage Manager

Logical Volume Manager (LVM) is an extremely flexible disk management scheme, allowing you to create and resize logical disk volumes across multiple physical hard drives with no downtime. However, its powerful features come at the price of a somewhat steep learning curve, with more involved steps to set up LVM using multiple command-line tools, compared to managing traditional disk partitions.

Here is good news for CentOS/RHEL users. The latest CentOS/RHEL 7 now comes with System Storage Manager (aka ssm) which is a unified command line interface developed by Red Hat for managing all kinds of storage devices. Currently there are three kinds of volume management backends available for ssm: LVM, Btrfs, and Crypt.

In this tutorial, I will demonstrate how to manage LVM volumes with ssm.

[root@centos7server ~]# yum install system-storage-manager
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.vodien.com
* extras: mirror.vodien.com
* updates: mirror.vodien.com
Package system-storage-manager-0.4-5.el7.noarch already installed and latest version
Nothing to do
[root@centos7server ~]#

[root@centos7server ~]# ssm list
————————————————————-
Device         Free      Used      Total  Pool    Mount point
————————————————————-
/dev/fd0                         4.00 KB
/dev/sda                        40.00 GB          PARTITIONED
/dev/sda1                      500.00 MB          /boot
/dev/sda2   0.00 KB  39.51 GB   39.51 GB  centos
/dev/sdb                        20.00 GB
/dev/sdb1   0.00 KB  20.00 GB   20.00 GB  vg01
/dev/sdc                        20.00 GB
/dev/sdc1  17.07 GB   2.93 GB   20.00 GB  vg01
/dev/sdd                        20.00 GB
————————————————————-
—————————————————
Pool    Type  Devices      Free      Used     Total
—————————————————
centos  lvm   1         0.00 KB  39.51 GB  39.51 GB
vg01    lvm   2        17.07 GB  22.93 GB  39.99 GB
—————————————————
————————————————————————————-
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
————————————————————————————-
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     35.51 GB  xfs   35.49 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
————————————————————————————-

Add a Physical Disk to an LVM Pool

Let’s add a new physical disk (e.g., /dev/sdd) to an existing storage pool (e.g., vg01).
The command to add a new physical storage device to an existing pool is as follows.

[root@centos7server ~]# ssm add -p vg01 /dev/sdd
Physical volume “/dev/sdd” successfully created
Volume group “vg01” successfully extended

To remove the physical disk from the LVM pool:

[root@centos7server ~]# ssm remove -p vg01 /dev/sdd
usage: ssm [-h] [–version] [-v] [-f] [-b BACKEND] [-n]
{check,resize,create,list,add,remove,snapshot,mount} …
ssm: error: unrecognized arguments: -p
[root@centos7server ~]# ssm remove  vg01 /dev/sdd
Do you really want to remove volume group “vg01” containing 1 logical volumes? [y/n]: y
Logical volume vg01/lv01 contains a filesystem in use.
SSM Info: Unable to remove ‘vg01’
Removed “/dev/sdd” from volume group “vg01”
[root@centos7server ~]#

Note: ssm remove does not take the -p flag. The pool itself could not be removed because vg01/lv01 holds a mounted file system, but /dev/sdd was successfully detached from the volume group.

Let’s increase the root filesystem.
[root@centos7server ~]# ssm add -p centos /dev/sdd
Volume group “centos” successfully extended
[root@centos7server ~]#

[root@centos7server ~]# ssm list
————————————————————-
Device         Free      Used      Total  Pool    Mount point
————————————————————-
/dev/fd0                         4.00 KB
/dev/sda                        40.00 GB          PARTITIONED
/dev/sda1                      500.00 MB          /boot
/dev/sda2   0.00 KB  39.51 GB   39.51 GB  centos
/dev/sdb                        20.00 GB
/dev/sdb1   0.00 KB  20.00 GB   20.00 GB  vg01
/dev/sdc                        20.00 GB
/dev/sdc1  17.07 GB   2.93 GB   20.00 GB  vg01
/dev/sdd   20.00 GB   0.00 KB   20.00 GB  centos
————————————————————-
—————————————————
Pool    Type  Devices      Free      Used     Total
—————————————————
centos  lvm   2        20.00 GB  39.51 GB  59.50 GB  <-- the centos pool now totals 59.50 GB
vg01    lvm   2        17.07 GB  22.93 GB  39.99 GB
—————————————————
————————————————————————————-
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
————————————————————————————-
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     35.51 GB  xfs   35.49 GB   34.56 GB  linear  /                     <-- we want to extend the root volume /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
————————————————————————————-

We are going to increase the root file system by 1 GB.

[root@centos7server ~]# ssm resize -s+1024M /dev/centos/root
Extending logical volume root to 36.51 GiB
Logical volume root successfully resized
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=9308160, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 9308160 to 9570304
[root@centos7server ~]#

[root@centos7server ~]# ssm list volumes
————————————————————————————-
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
————————————————————————————-
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     36.51 GB  xfs   35.49 GB   34.56 GB  linear  /                     <-- the root volume / has been extended by 1 GB
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
————————————————————————————-

[root@centos7server ~]# xfs_growfs /dev/centos/root
meta-data=/dev/mapper/centos-root isize=256    agcount=5, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=9570304, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos7server ~]# ssm list volumes
————————————————————————————-
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
————————————————————————————-
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     36.51 GB  xfs   36.49 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
————————————————————————————-

Next we increase the disk space for the root partition again, by about 2 GB.

[root@centos7server ~]# ssm list volumes
————————————————————————————-
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
————————————————————————————-
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     38.48 GB  xfs   36.49 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
--------------------------------------------------------------------------------
[root@centos7server ~]# ssm resize -s+2024M /dev/centos/root
Extending logical volume root to 40.46 GiB
Logical volume root successfully resized
meta-data=/dev/mapper/centos-root isize=256    agcount=5, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=10088448, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10088448 to 10606592

[root@centos7server ~]# ssm list volumes
--------------------------------------------------------------------------------
Volume            Pool    Volume size  FS     FS size       Free  Type    Mount point
--------------------------------------------------------------------------------
/dev/centos/swap  centos      4.00 GB                             linear
/dev/centos/root  centos     40.46 GB  xfs   38.47 GB   34.56 GB  linear  /
/dev/vg01/lv01    vg01       22.93 GB  xfs   22.92 GB   20.47 GB  linear  /xfs_test
/dev/sda1                   500.00 MB  xfs  496.67 MB  381.54 MB  part    /boot
--------------------------------------------------------------------------------

Note: the root partition is grown on the fly, while it remains mounted.
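
The grow-on-the-fly pattern used above boils down to two steps: extend the logical volume, then grow the mounted XFS file system. The sketch below only prints the commands (a dry-run), since the volume group and logical volume names (centos/root) are the ones from this walkthrough and will differ on other systems:

```shell
#!/bin/sh
# Dry-run sketch of growing a mounted XFS root file system.
# run() only echoes each command; replace it with direct execution
# once the device names match your own system.
run() { echo "+ $*"; }

run lvextend -L +2G /dev/centos/root   # step 1: grow the logical volume
run xfs_growfs /                       # step 2: grow the mounted XFS file system

# ssm combines both steps into a single command:
run ssm resize -s+2G /dev/centos/root
```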

[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        44G  1.1G   43G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        25G   34M   25G   1% /xfs_test
[root@centos7server ~]#

[root@centos7server ~]# xfs_growfs /dev/centos/root
meta-data=/dev/mapper/centos-root isize=256    agcount=5, agsize=2327040 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=10606592, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=4545, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        44G  1.1G   43G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        25G   34M   25G   1% /xfs_test

Create a New LVM Pool/Volume

In this experiment, let's see how we can create a new storage pool and a new LVM volume on top of a physical disk drive. With traditional LVM tools the procedure is quite involved: preparing partitions, creating physical volumes, volume groups and logical volumes, and finally building a file system. With ssm, the entire procedure can be completed in one shot!

What the following command does is create a storage pool named testpool, create a 5200MB LVM volume named disk0 in the pool, format the volume with an XFS file system, and mount it under /test. You can immediately see the power of ssm.
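
For comparison, here is roughly what that single ssm create command replaces when done with traditional LVM tools. This is a dry-run sketch (the commands are only printed, not executed); the names testpool, disk0 and /test mirror the example below:

```shell
#!/bin/sh
# Dry-run sketch: the manual equivalent of
#   ssm create -s 5200M -n disk0 --fstype xfs -p testpool /dev/sdc /test
run() { echo "+ $*"; }

run pvcreate /dev/sdc                    # initialise the disk as a physical volume
run vgcreate testpool /dev/sdc           # create the volume group (the ssm "pool")
run lvcreate -n disk0 -L 5200M testpool  # carve out the logical volume
run mkfs.xfs /dev/testpool/disk0         # build the XFS file system
run mkdir -p /test
run mount /dev/testpool/disk0 /test      # and mount it
```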

[root@centos7server ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x3ea192fa

Device Boot      Start         End      Blocks   Id  System

$ ssm create -s 5200M -n disk0 --fstype xfs -p testpool /dev/sdc /test

[root@centos7server ~]# mkdir /test

[root@centos7server ~]# ssm create -s 5200M -n disk0 --fstype xfs -p testpool /dev/sdc /test
WARNING: dos signature detected on /dev/sdc at offset 510. Wipe it? [y/n] y
Wiping dos signature on /dev/sdc.
Physical volume "/dev/sdc" successfully created
Volume group "testpool" successfully created
WARNING: LVM2_member signature detected on /dev/testpool/disk0 at offset 536. Wipe it? [y/n] y
Wiping LVM2_member signature on /dev/testpool/disk0.
Logical volume "disk0" created
meta-data=/dev/testpool/disk0    isize=256    agcount=4, agsize=332800 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=1331200, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Filesystem                 Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root    xfs        44G  1.1G   43G   3% /
devtmpfs                   devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                      tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                      tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                      tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1                  xfs       521M  147M  374M  29% /boot
/dev/mapper/testpool-disk0 xfs       5.5G   34M  5.5G   1% /test

Take a Snapshot of an LVM Volume

Using the ssm tool, you can also take a snapshot of existing disk volumes. Note that snapshots work only if the back-end that the volumes belong to supports snapshotting. The LVM back-end supports online snapshotting, which means we do not have to take the volume being snapshotted offline. Also, since the LVM back-end of ssm supports LVM2, the snapshots are read/write enabled.

Let's take a snapshot of an existing LVM volume:

/dev/testpool/disk0

[root@centos7server ~]# cp /var/log/
anaconda/  boot.log   cron       dmesg.old  lastlog    messages   secure     tallylog   wtmp
audit/     btmp       dmesg      grubby     maillog    ppp/       spooler    tuned/     yum.log
[root@centos7server ~]# cp -pR /var/log/* /test/
[root@centos7server ~]#

[root@centos7server ~]# ssm snapshot /dev/testpool/disk0
Logical volume “snap20150214T220836” created

After a snapshot is taken, you can unmount the original volume and then mount the snapshot volume to access the data in the snapshot.

[root@centos7server ~]# mkdir /test1

[root@centos7server ~]# umount /dev/testpool/disk0
[root@centos7server ~]#

[root@centos7server ~]# ssm mount /dev/testpool/
disk0                snap20150214T220836
[root@centos7server ~]# ssm mount /dev/testpool/snap20150214T220836 /test1
[root@centos7server ~]#
[root@centos7server ~]# cd /test1
[root@centos7server test1]# ls -ltr
total 908
drwx------. 2 root root      6 Jun 10  2014 ppp
drwxr-xr-x. 2 root root     22 Jun 24  2014 tuned
-rw-------. 1 root root      0 Feb 11 18:47 tallylog
-rw-------. 1 root root      0 Feb 11 18:48 spooler
drwxr-xr-x. 2 root root   4096 Feb 11 18:49 anaconda
drwxr-x---. 2 root root     22 Feb 11 18:50 audit
-rw-------. 1 root root   1370 Feb 11 19:07 grubby
-rw-r--r--. 1 root root 110663 Feb 14 15:55 dmesg.old
-rw-------. 1 root root   5304 Feb 14 16:56 yum.log
-rw-r--r--. 1 root root   7205 Feb 14 21:20 boot.log
-rw-r--r--. 1 root root 111840 Feb 14 21:20 dmesg
-rw-------. 1 root root    824 Feb 14 21:20 maillog
-rw-------. 1 root utmp    768 Feb 14 21:23 btmp
-rw-r--r--. 1 root root 292000 Feb 14 21:23 lastlog
-rw-rw-r--. 1 root utmp  20736 Feb 14 21:39 wtmp
-rw-------. 1 root root  14997 Feb 14 21:39 secure
-rw-------. 1 root root 568125 Feb 14 22:01 messages
-rw-r--r--. 1 root root  39870 Feb 14 22:01 cron

[root@centos7server test1]# df -Th /test1
Filesystem                               Type  Size  Used Avail Use% Mounted on
/dev/mapper/testpool-snap20150214T220836 xfs   5.1G   35M  5.1G   1% /test1
[root@centos7server test1]#

Remove an LVM Volume

Removing an existing disk volume or storage pool is as easy as creating one. If you attempt to remove a mounted volume, ssm will automatically unmount it first. No hassle there.

To remove an LVM volume:
$ ssm remove <volume>

To remove a storage pool:
$  ssm remove <pool-name>

[root@centos7server test1]# ssm remove /dev/testpool/disk0
Logical volume testpool/snap20150214T220836 contains a filesystem in use.
SSM Info: Unable to remove '/dev/testpool/disk0'
SSM Error (2001): Nothing was removed!
[root@centos7server test1]#

The removal failed because the snapshot file system was still mounted on /test1, which was also our current working directory. After changing out of /test1 and unmounting it, the removal succeeds (ssm prompts before removing each active logical volume):
[root@centos7server ~]# ssm remove /dev/testpool/disk0
Do you really want to remove active logical volume snap20150214T220836? [y/n]: y
Logical volume "snap20150214T220836" successfully removed
Do you really want to remove active logical volume disk0? [y/n]: y
Logical volume "disk0" successfully removed
[root@centos7server ~]#

CENTOS, FEDORA, RHEL 7 XFS FILE SYSTEM

XFS File System

What is XFS?

XFS is a highly scalable, high-performance journalling file system originally designed at Silicon Graphics, Inc. in 1993. XFS was first used on Silicon Graphics' own operating system IRIX; it was later ported to the Linux kernel in 2001. Today XFS is supported by most Linux distributions and has become the default file system on RHEL 7 (Red Hat Enterprise Linux), Oracle Linux 7, CentOS 7 and many other distributions. XFS was created to support extremely large file systems, with sizes of up to 16 exabytes and file sizes of up to 8 exabytes.

XFS supports metadata journalling, allowing for quicker recovery after a system crash. An XFS file system can also be defragmented and enlarged while mounted and active. However, an XFS file system cannot be reduced in size!
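
Since XFS cannot be shrunk, the usual workaround is to back the data up, recreate a smaller file system, and restore. The sketch below prints the commands in dry-run form; the device, mount point and dump file names are illustrative, not taken from a tested procedure:

```shell
#!/bin/sh
# Dry-run sketch: "shrinking" XFS via dump / recreate / restore.
run() { echo "+ $*"; }

run xfsdump -f /tmp/lv01.dump /xfs_test     # back up the file system contents
run umount /xfs_test
run lvreduce -L 10G /dev/vg01/lv01          # shrink the underlying logical volume
run mkfs.xfs -f /dev/vg01/lv01              # recreate a smaller XFS file system
run mount /dev/vg01/lv01 /xfs_test
run xfsrestore -f /tmp/lv01.dump /xfs_test  # restore the data
```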

XFS features the following allocation schemes:

Extent based allocation
Stripe-aware allocation policies
Delayed allocation
Space pre-allocation
Delayed allocation and other performance optimizations affect XFS the same way that they do ext4. Namely, a program's writes to an XFS file system are not guaranteed to be on-disk unless the program issues an fsync() call afterwards.

The XFS file system also supports the following:

Extended attributes (xattr), which allows the system to associate several additional name/value pairs per file.
Quota journalling, which avoids the need for lengthy quota consistency checks after a crash.
Project/directory quotas, allowing quota restrictions over a directory tree.
Subsecond timestamps
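
As an illustration of the project/directory quota feature, an XFS file system mounted with the prjquota option can cap a directory tree using xfs_quota. This is a dry-run sketch; the project id (42), project name (testproj) and paths are made-up examples:

```shell
#!/bin/sh
# Dry-run sketch: capping a directory tree with XFS project quotas.
run() { echo "+ $*"; }

run mount -o prjquota /dev/vg01/lv01 /xfs_test         # project quotas need the prjquota mount option
run sh -c 'echo "42:/xfs_test/data" >> /etc/projects'  # map project id 42 to a directory tree
run sh -c 'echo "testproj:42" >> /etc/projid'          # give the project a human-readable name
run xfs_quota -x -c 'project -s testproj' /xfs_test    # initialise the project on the tree
run xfs_quota -x -c 'limit -p bhard=1g testproj' /xfs_test  # hard limit of 1GB for the tree
```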

Creating an XFS File System

To create an XFS file system, use the command mkfs.xfs /dev/device.

When using mkfs.xfs on a block device containing an existing file system, you should use the -f option to force an overwrite of that file system.

Below is an example of the mkfs.xfs command being issued on a CentOS 7 server. Once the command had run successfully, we issued the mount command, using a mount point of /xfs_test, which was created beforehand with "mkdir /xfs_test" (see the output below).

fdisk -l /dev/sdb

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf478ffab

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux

Disk /dev/mapper/centos-swap: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-root: 38.1 GB, 38126223360 bytes, 74465280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@centos7server ~]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=1310656 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=5242624, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

mkdir /xfs_test

[root@centos7server ~]# mkdir /xfs_test
[root@centos7server ~]# mount /dev/sdb1 /xfs_test
[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        39G  1.1G   38G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/sdb1               xfs        22G   34M   22G   1% /xfs_test

Using LVM (Logical Volume Manager) to add space to an XFS file system

Generally, to increase space you would use LVM (Logical Volume Manager). In the following example we will create a partition of type "8e", which denotes LVM. We will create a PV with the pvcreate command, create a Volume Group, and define a Logical Volume. Next we will generate an XFS file system on the Logical Volume. For more information on LVM, follow the link: Introduction to LVM

An overview of the basic process involved can be seen below:

Create a Partition using fdisk

In this example, we are going to create a partition using the disk partitioning tool “fdisk”. The commands used to create the partition can be seen in the output below:

[root@centos7server ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@centos7server ~]#

Create Logical Volume Manager Components: PV, VG and LV

Our next step is to create a Physical Volume consisting of the /dev/sdb1 partition.
Next we create a Volume Group called "vg01", and finally we create a Logical Volume that will use all available space within the Volume Group.
The commands issued can be seen in the output below:

[root@centos7server ~]# pvcreate /dev/sdb1
WARNING: xfs signature detected on /dev/sdb1 at offset 0. Wipe it? [y/n] y
Wiping xfs signature on /dev/sdb1.
Physical volume "/dev/sdb1" successfully created
[root@centos7server ~]# vgcreate vg01 /dev/sdb1
Volume group "vg01" successfully created
[root@centos7server ~]# lvcreate -n lv01 -l 100%VG vg01
Logical volume "lv01" created
[root@centos7server ~]#
[root@centos7server ~]# pvs
PV         VG     Fmt  Attr PSize  PFree
/dev/sda2  centos lvm2 a--  39.51g    0
/dev/sdb1  vg01   lvm2 a--  20.00g    0
[root@centos7server ~]# vgs
VG     #PV #LV #SN Attr   VSize  VFree
centos   1   2   0 wz--n- 39.51g    0
vg01     1   1   0 wz--n- 20.00g    0
[root@centos7server ~]# lvs
LV   VG     Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
root centos -wi-ao---- 35.51g
swap centos -wi-ao----  4.00g
lv01 vg01   -wi-a----- 20.00g
[root@centos7server ~]#

[root@centos7server ~]# mkfs.xfs /dev/vg01/lv01
meta-data=/dev/vg01/lv01         isize=256    agcount=4, agsize=1310464 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=5241856, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos7server ~]#

[root@centos7server ~]# mount /dev/vg01/lv01 /xfs_test
[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        39G  1.1G   38G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        22G   34M   22G   1% /xfs_test

Attaching a New Disk (/dev/sdc)

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@centos7server ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x3ea192fa.

Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@centos7server ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help):
Command (m for help):
Command (m for help): p

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x3ea192fa

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    41943039    20970496   83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create a new Physical Volume

Next we create a new PV (Physical Volume) using the device /dev/sdc1.

[root@centos7server ~]# pvcreate /dev/sdc1
Physical volume “/dev/sdc1” successfully created

Add the new Physical Volume to an existing Volume Group

Here we are adding our new space to the existing Volume Group "vg01".
This extra space will then be available to our Logical Volume. The command to add the new partition to our Volume Group is "vgextend".

Check PV and VG

If we now issue the "pvs" and "vgs" commands again, we will see the new additions:

[root@centos7server ~]# pvs
PV         VG     Fmt  Attr PSize  PFree
/dev/sda2  centos lvm2 a--  39.51g     0
/dev/sdb1  vg01   lvm2 a--  20.00g     0
/dev/sdc1  vg01   lvm2 a--  20.00g 20.00g
[root@centos7server ~]# vgs
VG     #PV #LV #SN Attr   VSize  VFree
centos   1   2   0 wz--n- 39.51g     0
vg01     2   1   0 wz--n- 39.99g 20.00g

From the above "pvs" output we can see that the new PV has been added, contributing a further 20GB of space. The "vgs" command now indicates that there are two Physical Volumes associated with the Volume Group "vg01".

Extend a Logical Volume

To extend the Logical Volume, we are going to use the "lvextend" command. In the example below we extend the Logical Volume by 500MB. The file system is resized automatically because we are using the "-r" option.

[root@centos7server ~]# vgextend vg01 /dev/sdc1
Volume group “vg01” successfully extended

[root@centos7server ~]# lvextend /dev/vg01/lv01 -L +500M -r
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Extending logical volume lv01 to 20.48 GiB
Logical volume lv01 successfully resized
meta-data=/dev/mapper/vg01-lv01  isize=256    agcount=4, agsize=1310464 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=5241856, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5241856 to 5369856

[root@centos7server ~]# lvextend /dev/vg01/lv01 -L +1000M -r
Extending logical volume lv01 to 21.95 GiB
Logical volume lv01 successfully resized
meta-data=/dev/mapper/vg01-lv01  isize=256    agcount=5, agsize=1310464 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=5497856, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5497856 to 5753856
[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        39G  1.1G   38G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.0M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        24G   34M   24G   1% /xfs_test

[root@centos7server ~]# lvextend /dev/vg01/lv01 -L +1000M -r
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
sb_fdblocks 5367272, counted 5751272
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Filesystem check failed.

The resize failed at this point because the file system had been unmounted in the meantime: with "-r", lvextend first runs a file system check, and here that check reported a free-block count mismatch and aborted. Note below that xfs_growfs also refuses to run, since it only operates on mounted XFS file systems. Once the volume is remounted, the grow succeeds:
[root@centos7server ~]# xfs_growfs /dev/vg01/lv01
xfs_growfs: /dev/vg01/lv01 is not a mounted XFS filesystem
[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        39G  1.1G   38G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.1M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
[root@centos7server ~]# mount /dev/vg01/lv01 /xfs_test/
[root@centos7server ~]#
[root@centos7server ~]# xfs_growfs /dev/vg01/lv01
meta-data=/dev/mapper/vg01-lv01  isize=256    agcount=5, agsize=1310464 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=5753856, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        39G  1.1G   38G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.1M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        24G   34M   24G   1% /xfs_test

[root@centos7server ~]# lvextend /dev/vg01/lv01 -L +1000M -r
Extending logical volume lv01 to 22.93 GiB
Logical volume lv01 successfully resized
meta-data=/dev/mapper/vg01-lv01  isize=256    agcount=5, agsize=1310464 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=5753856, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5753856 to 6009856
[root@centos7server ~]# xfs_growfs /dev/vg01/lv01
meta-data=/dev/mapper/vg01-lv01  isize=256    agcount=5, agsize=1310464 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=6009856, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos7server ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        39G  1.1G   38G   3% /
devtmpfs                devtmpfs  3.0G     0  3.0G   0% /dev
tmpfs                   tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs                   tmpfs     3.0G  9.1M  3.0G   1% /run
tmpfs                   tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  147M  374M  29% /boot
/dev/mapper/vg01-lv01   xfs        25G   34M   25G   1% /xfs_test

We can now see from the above output that an extra 2.5GB of space is now available to our XFS file system.

Important: You cannot reduce an XFS file system; if you try, you will get the following message:

[root@centos7server ~]# lvreduce /dev/vg01/lv01 -L -500M -r
fsadm: Xfs filesystem shrinking is unsupported
fsadm failed: 1
Filesystem resize failed.
[root@centos7server ~]#

Method 2 for extending an XFS File System using the utility xfs_growfs

The following method allows you to extend the file system by using the “xfs_growfs” command. The xfs_growfs command is used to increase the size of a mounted XFS file system only if there is space on the underlying devices to accommodate the change. The xfs_growfs command does not require LVM to extend the file system as per the previous example.

The mount point argument passed is the pathname of the directory where the file system is mounted. The XFS file system must be mounted before it can be grown. The contents of the file system are undisturbed, and the added space is made available for additional file storage.
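
In other words, once the underlying device has been enlarged, method 2 is essentially a mount followed by xfs_growfs. Sketched in dry-run form below (the device and mount point names follow the example that comes next; the -D variant grows to an explicit size in file system blocks rather than filling the whole device):

```shell
#!/bin/sh
# Dry-run sketch of method 2: grow a mounted XFS file system with xfs_growfs.
run() { echo "+ $*"; }

run mount /dev/sdb1 /xfs_test       # xfs_growfs only works on a mounted file system
run xfs_growfs /xfs_test            # grow to fill the (already enlarged) device
run xfs_growfs -D 262144 /xfs_test  # or grow to an explicit size in fs blocks
```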

In the following example I am using a CentOS 7 server running in VirtualBox. Initially the disk used, /dev/sdb, is set to a size of 500MB. Our next step is to create an XFS file system by issuing the mkfs.xfs command. The resulting file system can be seen after issuing the "df -hT" command (the -h option displays sizes in a human-readable format, MB, GB etc., and the -T option displays the file system type, XFS in this example).

After the file system was initially created, I then increased the size of the underlying disk by using the following VirtualBox command:

VBoxManage modifyhd "/home/mohan/VirtualBox VMs/CentOS_7/CentOS7_XFS_Test1.vdi" --resize 2048

An outline of the steps involved for this example are below:

Create xfs file system on a 500MB Disk

Create the xfs file system using the “mkfs.xfs” command:

[root@centos07a /]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=32000 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=128000, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=853, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount File System

Next we need to mount the file system using the mount command (syntax: mount /dev/device /mount_point)

[root@centos07a /]# mount /dev/sdb1 xfs_test/

Display XFS File System information

We can use the “df” command again to look at the size of our mounted xfs file system:

[root@centos07a /]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs       6.7G  1.1G  5.7G  16% /
devtmpfs                devtmpfs  492M     0  492M   0% /dev
tmpfs                   tmpfs     498M     0  498M   0% /dev/shm
tmpfs                   tmpfs     498M  6.6M  491M   2% /run
tmpfs                   tmpfs     498M     0  498M   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  100M  398M  20% /boot
/dev/sdb1               xfs       497M   26M  472M   6% /xfs_test

We can now see that our file system is mounted at mount point /xfs_test. You may wish to add your disk/filesystem into “/etc/fstab” so that it will be automatically mounted at system reboot. To do this, simply add a line similar to the one below into the file “/etc/fstab”.

Example /etc/fstab entry

/dev/sdb1               /xfs_test               xfs     defaults        0 0
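Device names such as /dev/sdb1 can change between boots, so you may prefer to mount by UUID instead. As an illustrative sketch (the UUID below is made up; yours will come from running blkid against the device), the UUID can be pulled out of a blkid output line to build the fstab entry:

```shell
# Sample blkid output line; the UUID shown here is purely illustrative.
line='/dev/sdb1: UUID="9b3c1f2e-8d4a-4f6b-b1aa-0c2d3e4f5a6b" TYPE="xfs"'
# Extract the UUID value between the double quotes.
uuid=$(echo "$line" | sed 's/.*UUID="\([^"]*\)".*/\1/')
# Build the corresponding /etc/fstab entry.
echo "UUID=$uuid /xfs_test xfs defaults 0 0"
```

Mounting by UUID keeps the fstab entry valid even if the kernel enumerates the disks in a different order.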

Shutdown Partition and Increase underlying Disk size

As we are using Oracle’s VirtualBox software, we need to shut down the partition first before increasing the underlying disk. The command “shutdown -h now” can be used to cleanly shut down your partition. VirtualBox only allows a disk to be resized when it is not in use. The resize command will be issued from the host computer; the “host” computer is the computer that is running the VirtualBox software.

VirtualBox – Increase Disk Size

The following command is issued on the host computer:

$ VBoxManage modifyhd "/home/mohan/VirtualBox VMs/CentOS_7/CentOS7_XFS_Test1.vdi" --resize 2048
0%…10%…20%…30%…40%…50%…60%…70%…80%…90%…100%

The above command has now resized the virtual disk “CentOS7_XFS_Test1.vdi” to a size of 2GB. Notice the double quotes around the path name; they are needed because there are spaces within the path name. If you are unsure of the name of the virtual disk that you originally created for your partition, you can right-click on the name of your VM in the VirtualBox manager and select the option to “Show in file manager”. Here you will see a list of all the virtual disks that have been allocated to your server.
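As a quick sanity check, the size passed to “--resize” is in megabytes, so 2048 MB should correspond to the byte count that fdisk reports later:

```shell
# --resize takes the new size in megabytes (MiB); plain shell arithmetic
# confirms that 2048 MiB matches the 2147483648 bytes fdisk later reports.
bytes=$((2048 * 1024 * 1024))
echo "$bytes"
```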

Restart Server and Delete Original Partition

Once we have increased the underlying disk space, we will need to restart our server. Next we are going to use fdisk to delete our original partition and then recreate it again with more space. The disk in this example is known as “/dev/sdb”.

Below are the steps taken using fdisk to accomplish the above task.

[root@centos07a /]# umount /xfs_test

[root@centos07a /]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x340b4c69

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     1026047      512000   83  Linux

Command (m for help): d
Selected partition 1
Partition 1 is deleted

Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4194303, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4194303, default 4194303):
Using default value 4194303
Partition 1 of type Linux and of size 2 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Remount xfs File System

Before we can grow an XFS file system, it must be mounted first.

[root@centos07a /]# mount /dev/sdb1 /xfs_test

We can issue the “df” command again to display our file system information.

[root@centos07a /]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs       6.7G  1.1G  5.7G  16% /
devtmpfs                devtmpfs  492M     0  492M   0% /dev
tmpfs                   tmpfs     498M     0  498M   0% /dev/shm
tmpfs                   tmpfs     498M  6.6M  491M   2% /run
tmpfs                   tmpfs     498M     0  498M   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  100M  398M  20% /boot
/dev/sdb1               xfs       497M   26M  472M   6% /xfs_test

From the above we can see that the file system on “xfs_test” is still indicating its original size.

Issue xfs_growfs against mount point

Next we are going to issue the “xfs_growfs” command:

[root@centos07a /]# xfs_growfs xfs_test/
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=32000 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=128000, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
real-time =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 128000 to 524032

The important information to look for when you issue this command can be found within the last line of the output. You are looking for an increase in size relating to data blocks. We can now issue the “df” command again to verify that the xfs_growfs command was successful:

[root@centos07a /]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs       6.7G  1.1G  5.7G  16% /
devtmpfs                devtmpfs  492M     0  492M   0% /dev
tmpfs                   tmpfs     498M     0  498M   0% /dev/shm
tmpfs                   tmpfs     498M  6.6M  491M   2% /run
tmpfs                   tmpfs     498M     0  498M   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  100M  398M  20% /boot
/dev/sdb1               xfs       2.0G   26M  2.0G   2% /xfs_test

This time we can see that the file system is now 2GB in size.
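The new data block count reported by xfs_growfs can also be checked by hand: the partition spans sectors 2048 to 4194303 (512-byte sectors, as shown in the fdisk output), and the data section uses 4096-byte blocks:

```shell
# Number of 512-byte sectors in the recreated partition (from the fdisk output).
sectors=$((4194303 - 2048 + 1))
# Convert sectors to 4096-byte filesystem blocks.
blocks=$((sectors * 512 / 4096))
echo "$blocks"   # matches the "data blocks changed from 128000 to 524032" line
```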

Overview of Options that can be passed to the xfs_growfs utility

SYNOPSIS
xfs_growfs  [  -dilnrx  ] [ -D size ] [ -e rtextsize ] [ -L size ] [ -m
maxpct ] [ -t mtab ] [ -R size ] mount-point
xfs_growfs -V

xfs_info [ -t mtab ] mount-point
xfs_info -V

Options that can be passed to the xfs_growfs utility

-d | -D size

The “-d or -D” option is used to specify that the data section of the filesystem should be grown. If the “-D” size option is passed, then the data section is grown to the specified size. The “-d” option specifies that the data section is grown to the largest possible size. The size is specified in file system blocks.

-e

Allows the real-time extent size to be specified. This can also be specified with the mkfs.xfs command with the specified option of “-r extsize=nnnn”.

-l | -L size

Specifies that the log section of the filesystem should be grown, shrunk, or moved. If the -L size option is given, the log section is changed to be that size, if possible. The size is expressed in file system blocks. The size of an internal log must be smaller than the size of an allocation group (this value is printed at mkfs time). If neither -i nor -x is given with -l, the log continues to be internal or external as it was before. [NOTE: These options are not implemented]

-m

Specify a new value for the maximum percentage of space in the file system that can be allocated as inodes. In mkfs.xfs(8) this is specified with -i maxpct=nn.

-n

Specifies that no change to the filesystem is to be made. The file system geometry is printed, and argument checking is performed, but no growth occurs.

-r | -R size

Specifies that the real time section of the file system should be grown. If the -R size option is given, the real time section is grown to that size, otherwise the real time section is grown to the largest size possible with the -r option. The size is expressed in filesystem blocks. The filesystem does not need to have contained a real time section before the xfs_growfs operation.

-t

Specifies an alternate mount table file.

-V

Prints the version number and exits. The mount point argument is not required with -V.
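Since the “-D” option described above expects a size in filesystem blocks rather than bytes, a target size has to be converted first. A sketch, assuming the 4096-byte block size shown in the earlier mkfs.xfs output (the xfs_growfs line is a comment only, as it needs a real device):

```shell
# Convert a target size of 1.5 GiB into 4096-byte filesystem blocks.
blocks=$((1536 * 1024 * 1024 / 4096))
echo "$blocks"
# xfs_growfs -D "$blocks" /xfs_test   # grow the data section to exactly 1.5 GiB
```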

Summary of resizing an XFS file system

Both of the methods shown allow you to increase the size of an XFS file system, but neither allows you to reduce it. The most common method of resizing a file system is the LVM (Logical Volume Manager) approach. LVM gives you the advantage of being able to add additional disks easily to an existing Volume Group and then use the lvextend command to increase the size of your file system.

Overview of other xfs Utilities

Repairing an XFS File System with xfs_repair

Basic Syntax:

xfs_repair /dev/device

The xfs_repair utility is designed to repair file systems, whether small or large, very quickly. Unlike other file system repair tools, xfs_repair does not run at system boot time; instead, XFS replays its log at mount time to ensure a consistent file system. If xfs_repair encounters a dirty log, it will not be able to repair the file system. To clear a dirty log, simply mount and then unmount the XFS file system so that the log is replayed. If the log is corrupt and cannot be replayed successfully, you may use the “-L” option to force log zeroing. The command to issue for zeroing the log is as follows:

xfs_repair -L /dev/device

For full details regarding the xfs_repair utility, please consult the relevant man pages: man xfs_repair

XFS Quota Management – xfs_quota

xfs_quota gives the administrator the ability to manage limits on disk space. XFS quotas can control or report usage at the user, group, or directory (project) level. XFS quotas are enabled at mount time. You may specify the “noenforce” option, which allows reporting of usage without enforcing any limits. For full details of XFS quotas see the relevant man page: man xfs_quota

Suspending an XFS File System with xfs_freeze

The command to suspend access or resume write activity on an XFS file system is “xfs_freeze”. Generally this utility is used to suspend write activity, allowing hardware-based device snapshots to capture the file system in a consistent state.

The xfs_freeze utility is provided by the package “xfsprogs”; note that this is only available on the x86_64 architecture.

To freeze an XFS file system the basic syntax is:

xfs_freeze -f /mount/point

The -f flag requests that the specified XFS filesystem should be set to a state of frozen, immediately stopping any modifications from being made. When this option is selected, all ongoing transactions in the file system are allowed to complete. Any new write system calls are halted.

To unfreeze an XFS file system the basic syntax is:

xfs_freeze -u /mount/point

The -u flag is used to unfreeze the file system and allow operations to continue again. Any file system modifications that were blocked by the freeze option are unblocked and allowed to complete.

If you are taking a LVM snapshot, then it is not necessary to run the “xfs_freeze” utility as the LVM utility will automatically suspend the relevant xfs file system.

You can also use the “xfs_freeze” utility to freeze or unfreeze an ext3, ext4 or btrfs file system.

xfs_copy

xfs_copy copies the contents of an XFS file system. It should only be used to copy unmounted file systems, read-only mounted file systems, or file systems that have been frozen with the xfs_freeze utility. The basic syntax of the utility is as follows:

xfs_copy [ -bd ] [ -L log ] source target1 [ target2 … ]

OPTIONS
-d     Create a duplicate (true clone) filesystem. This should be done
only if the new filesystem will be used as a replacement for the
original filesystem (such as in the case of disk replacement).

-b     The buffered option can be  used  to ensure direct IO is not
attempted to any of the target files. This is  useful  when  the
filesystem holding the target file does not support direct IO.

-L log Specifies  the  location  of  the log if the default location of
/var/tmp/xfs_copy.log.XXXXXX is not desired.

-V     Prints the version number and exits.

xfs_fsr – File System re-organizer for XFS

The “xfs_fsr” utility is used to defragment mounted XFS file systems. The reorganization algorithm operates on one file at a time, compacting or otherwise improving the layout of the file extents (contiguous blocks of file data). When invoked without any arguments, xfs_fsr will defragment all regular files in all mounted XFS file systems. xfs_fsr uses the file “/etc/mtab” as its source of mounted file systems. The xfs_fsr utility also allows a user to suspend a defragmentation process at a specified time and then resume from where it last left off. The current position of the defragmentation process is stored in the file:

/var/tmp/.fsrlast_xfs

xfs_fsr can also be invoked to work with a single file:

xfs_fsr /path/to/file

When xfs_fsr is invoked, it will require sufficient free space to be available as each file is copied to a temporary location whilst it is processed. A warning message will be displayed if sufficient space is not available.

xfs_bmap

Prints the map of disk blocks used by files in an XFS filesystem. The map lists each extent used by the file, as well as regions in the file with no corresponding blocks.

Each line of the listing takes the following form: extent: [startoffset..endoffset]: startblock..endblock

[root@centos07a xfs_test]# xfs_bmap test.file
test.file:
0: [0..7]: 96..103

[root@centos07a xfs_test]# xfs_bmap -v test.file
test.file:
EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL
0: [0..7]:          96..103           0 (96..103)            8

For further information regarding this utility, please consult the relevant man page: man xfs_bmap
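As an illustration of the extent format, the start and end block numbers from the listing above can be pulled apart to recover the TOTAL column shown by the -v output:

```shell
# The extent line from the xfs_bmap output above.
line='0: [0..7]: 96..103'
# Extract startblock and endblock from the "startblock..endblock" field.
start=$(echo "$line" | sed 's/.*]: \([0-9]*\)\.\.\([0-9]*\)/\1/')
end=$(echo "$line" | sed 's/.*]: \([0-9]*\)\.\.\([0-9]*\)/\2/')
total=$((end - start + 1))
echo "$total"   # 8 basic (512-byte) blocks, matching the TOTAL column
```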

xfs_info

To view your XFS file system information, the command “xfs_info” can be issued:

[root@centos07a /]# xfs_info /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=17, agsize=32704 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0
data     =                       bsize=4096   blocks=524032, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
real-time =none                   extsz=4096   blocks=0, rtextents=0

xfs_admin

The xfs_admin command allows an administrator to change parameters of an XFS file system. The xfs_admin utility can only modify parameters of unmounted devices/file systems; mounted devices cannot be modified.

For a full list of parameters that can be changed, please consult the relevant man page: man xfs_admin

xfs_metadump

xfs_metadump is a debugging tool that copies XFS file system metadata to a file. The xfs_metadump utility should only be used to copy unmounted, read-only, or frozen/suspended file systems; otherwise, generated dumps could be corrupted or inconsistent. For a full list of available options please consult the relevant man page: man xfs_metadump

xfs_mdrestore

xfs_mdrestore is used to restore an XFS metadump image to a filesystem image. The source argument specifies the location of the metadump image and the target argument specifies the destination for the filesystem image. The target can be either a file or a device.

xfs_mdrestore [ -g ] source target

The “-g” parameter shows the restore progress on stdout.

xfs_db

A utility that can be used to debug an XFS file system. For further information, please consult the relevant man page: man xfs_db.

xfs_estimate

The xfs_estimate utility is used to estimate the amount of space that a directory would occupy if copied to an XFS file system.

[root@centos07a xfs_test]# xfs_estimate /var/tmp
/var/tmp will take about 4.4 megabytes

[root@centos07a xfs_test]# xfs_estimate -v /var/tmp
directory                               bsize   blocks    megabytes    logsize
/var/tmp                                 4096     1138        4.4MB    4096000
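The verbose figures are internally consistent: multiplying the block count by the block size gives the byte figure that the summary rounds to 4.4 megabytes:

```shell
# 1138 blocks of 4096 bytes each, as reported by xfs_estimate -v above.
bytes=$((1138 * 4096))
echo "$bytes"   # 4661248 bytes, i.e. roughly 4.4 MB
```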

xfs_estimate parameters:

[root@centos07a xfs_test]# xfs_estimate -h
Usage: xfs_estimate [opts] directory [directory …]
-b blocksize (fundamental filesystem blocksize)
-i logsize (internal log size)
-e logsize (external log size)
-v prints more verbose messages
-V prints version and exits
-h prints this usage message

Note:    blocksize may have ‘k’ appended to indicate x1024
logsize may also have ‘m’ appended to indicate (1024 x 1024)

xfs_mkfile

xfs_mkfile is used to create an XFS file. The file is padded with zeroes by default. The size is interpreted in bytes by default, but it can be flagged as kilobytes, blocks, megabytes, or gigabytes with the k, b, m, or g suffixes respectively.

Syntax: xfs_mkfile [ -v ] [ -n ] [ -p ] size[k|b|m|g] filename

Options that can be passed:

-v     Verbose. Report the names and sizes of the created files.

-n     No bytes. Create a holey file – that is, do not write out any
data, just seek to the end of the file and write a block.

-p     Preallocate.  The file is preallocated, then overwritten with
zeroes, it is then truncated to the desired size.

-V     Prints the version number and exits.
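The suffix handling described above can be sketched as a small shell function (the function name is our own, for illustration only, and is not part of xfsprogs): k multiplies by 1024, b by 512, m by 1024^2 and g by 1024^3:

```shell
# Illustrative helper: convert an xfs_mkfile-style size argument to bytes.
to_bytes() {
  num=${1%[kbmg]}                 # strip a trailing suffix, if any
  case "$1" in
    *k) echo $((num * 1024)) ;;                 # kilobytes
    *b) echo $((num * 512)) ;;                  # 512-byte blocks
    *m) echo $((num * 1024 * 1024)) ;;          # megabytes
    *g) echo $((num * 1024 * 1024 * 1024)) ;;   # gigabytes
    *)  echo "$num" ;;                          # plain bytes
  esac
}
to_bytes 10m   # a size of "10m" means 10485760 bytes
```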

xfs_io

xfs_io is a debugging tool similar to the utility xfs_db, but is aimed at examining the regular file I/O paths rather than the raw XFS volume itself. For a full list of all the parameters/options that can be passed, please consult the relevant man page: man xfs_io

xfs_logprint

xfs_logprint prints the log of an XFS file system. The device argument is the pathname of the partition or logical volume containing the filesystem. The device can be a regular file if the “-f” option is used. The contents of the file system remain undisturbed.

There are two major modes of operation in xfs_logprint:

One mode is better for filesystem operation debugging. It is called the transactional view and is enabled through the “-t” option. The transactional view prints only the portion of the log that pertains to recovery. In other words, it prints out complete transactions between the tail and the head. This view tries to display each transaction without regard to how they are split across log records.

The second mode starts printing out information from the beginning of the log. Some error blocks might print out in the beginning because the last log record usually overlaps the oldest log record. A message is printed when the physical end of the log is reached and when the logical end of the log is reached. A log record view is displayed one record at a time. Transactions that span log records may not be decoded fully.

Syntax: xfs_logprint [ options ] device

Options that can be passed:

-b     Extract and print buffer information. Only used in transactional
view.

-c     Attempt to continue when an error is detected.

-C filename
Copy  the log from the filesystem to the file filename.  The log
itself is not printed.

-d     Dump the log from front to end, printing where each  log  record
is located on disk.

-D     Do not decode anything; just print data.

-e     Exit  when  an error is found in the log. Normally, xfs_logprint
tries to continue and unwind from bad logs.  However,  sometimes
it  just  dies  in  bad  ways.   Using this option prevents core
dumps.

-f     Specifies that the filesystem image to be processed is stored in
a  regular  file at device (see the mkfs.xfs(8) -d file option).
This might happen if an image copy of a filesystem has been made
into an ordinary file with xfs_copy(8).

-l logdev
External  log  device.  Only  for those filesystems which use an
external log.

-i     Extract and print inode information. Only used in  transactional
view.

-q     Extract  and print quota information. Only used in transactional
view.

-n     Do not try and interpret log data;  just  interpret  log  header
information.

-o     Also  print  buffer  data in hex.  Normally, buffer data is just
decoded, so better information can be printed.
-s start-block
Override any notion of where to start printing.

-t     Print out the transactional view.

-v     Print “overwrite” data.

-V     Prints the version number and exits.

xfs_rtcp

xfs_rtcp copies a file to the real-time partition on an XFS file system. If there is more than one source and target, the final argument (the target) must be a directory which already exists.

Syntax: xfs_rtcp [ -e extsize ] [ -p ] source … target

Options that can be used:

OPTIONS
-e extsize
Sets the extent size of the destination real-time file.

-p     Use if the size of the source file is not an even multiple of
the block size of the destination filesystem. When -p is specified,
xfs_rtcp will pad the destination file to a size which is an even
multiple of the filesystem block size. This is necessary since the
real-time file is created using direct I/O and the minimum I/O is
the filesystem block size.

-V     Prints the version number and exits.

xfs_ncheck

The utility xfs_ncheck is used to generate a list of inode numbers along with path names.

Syntax: xfs_ncheck [ -i ino ] … [ -f ] [ -s ] [ -l logdev ] device

Options:

-f       Specifies that the filesystem image to be processed is  stored
in a regular file at device (see the mkfs.xfs -d file option).
This might happen if an image copy of a  filesystem  has  been
made into an ordinary file.

-l logdev
Specifies  the  device  where  the  filesystem’s  external log
resides.  Only for those filesystems  which  use  an  external
log.  See  the  mkfs.xfs  -l option, and refer to xfs(5) for a
detailed description of the XFS log.

-s       Limits the report to special files and files with setuserid
mode. This option may be used to detect violations of security
policy.

-i ino   Limits the report to only those files whose inode numbers
follow. May be given multiple times to select multiple inode
numbers.

-V       Prints the version number and exits.

xfsdump

xfsdump is an incremental file system dump utility that is used in conjunction with xfsrestore. xfsdump backs up files and their attributes in an XFS file system. The files can be dumped to storage media, a regular file, or to standard output. Various dump options allow the administrator to create a full dump or an incremental dump. You may also specify a path to limit the files that are dumped.

To use the xfsdump utility, you first have to install it. This can be easily achieved by issuing the following command:

yum install xfsdump

xfsrestore

xfsrestore restores filesystems from dumps produced by xfsdump. For a full list of all available parameters and options for xfsdump and xfsrestore, please consult the relevant man pages.

How to block brute force attacks on your SSH server

You have probably seen very simple iptables rules to do this. This is a little bit better.

-A INPUT -i eth0.103 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
-A INPUT -i eth0.103 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0.103 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
-A INPUT -i eth0.103 -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0.103 -p tcp -m tcp --dport 22 -j ACCEPT

That’s it.
Now what is it? What does it do? How does it work?

The first rule tells the system:
TCP packets are going to come in, that will attempt to establish an SSH connection. Mark them as SSH. Pay attention to the source of the packet.
The second rule says:
If a packet attempting to establish an SSH connection comes, and it’s the fourth packet to come from the same source in thirty seconds, just reject it with prejudice and stop thinking about it.
The third and fourth rules mean:
If an SSH connection packet comes in, and it’s the third attempt from the same guy in thirty seconds, log it to the system log once, then immediately reject it and forget about it.
The fifth rule says:
Any SSH packet not stopped so far, just accept it.

And that’s all. You may want to adjust the number of seconds and hit count to your tastes.
Remember that the second rule has a hit count that is one higher than the ones following it: this is a precaution to stop the packet from continuing down the chain of rules, so brute forcing won’t spam your logs.
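The windowing logic behind these rules can be sketched in plain shell (a toy model only; the real accounting is done in the kernel by the recent match): a source is rejected once it makes four or more new connection attempts inside a thirty-second window:

```shell
# Toy model of the "recent" check: SYN arrival times (seconds) from one source.
hits="0 10 20 25"
window=30   # --seconds 30
limit=4     # --hitcount 4
latest=25   # time of the newest attempt
# Count attempts that fall inside the window ending at the newest attempt.
n=0
for t in $hits; do
  [ $((latest - t)) -lt "$window" ] && n=$((n + 1))
done
verdict=ACCEPT
[ "$n" -ge "$limit" ] && verdict=REJECT
echo "$verdict"   # the fourth attempt within 30 seconds gets rejected
```

If the same four attempts were spread over more than thirty seconds, the count inside the window would stay below the limit and the connection would be accepted.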