
MySQL Master-Slave Replication

Setting the Replication Master Configuration

On a replication master, you must enable binary logging and establish a unique server ID.

Binary logging must be enabled on the master because the binary log is the basis for sending data changes from the master to its slaves.

If binary logging is not enabled, replication will not be possible.

Each server within a replication group must be configured with a unique server ID. This ID is used to identify individual servers within the group.

To configure the binary log and server ID options, you will need to shut down your MySQL server and edit the my.cnf or my.ini file.  Add the following options to the configuration file within the [mysqld] section.

To enable binary logging using a log file name prefix of mysql-bin, and configure a server ID of 1, use these lines:

[mysqld]
log-bin=mysql-bin
server-id=1

After making the changes, restart the server.

If you omit server-id (or set it explicitly to its default value of 0), a master refuses connections from all slaves.

For the greatest possible durability and consistency in a replication setup using InnoDB with transactions, you should use innodb_flush_log_at_trx_commit=1 and sync_binlog=1 in the master my.cnf file.
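Put together with the options above, the master's [mysqld] section would then look roughly like this (a sketch; the log name and server ID are illustrative):

```ini
[mysqld]
log-bin=mysql-bin
server-id=1
# flush the InnoDB log to disk at each transaction commit
innodb_flush_log_at_trx_commit=1
# sync the binary log to disk after every write to it
sync_binlog=1
```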

Ensure that the skip-networking option is not enabled on your replication master. If networking has been disabled, your slave will not be able to communicate with the master and replication will fail.
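One quick way to verify this is to grep the configuration file before starting the server. A minimal sketch, using a throwaway sample file here in place of the real /etc/my.cnf:

```shell
# Build a sample my.cnf (stand-in for the real /etc/my.cnf).
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[mysqld]
log-bin=mysql-bin
server-id=1
EOF

# Warn if skip-networking (or skip_networking) is enabled.
if grep -Eq '^[[:space:]]*skip[-_]networking' "$cnf"; then
  msg="WARNING: skip-networking is set; slaves cannot connect"
else
  msg="networking OK"
fi
echo "$msg"
rm -f "$cnf"
```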

If your master is also a slave (for example, DB1 is the master of DB2, and DB2 is the master of DB3), then for DB2 to write the updates it receives from its master DB1 to its own binary log (so that DB3 can read them), you need to put log-slave-updates in my.cnf or my.ini.
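For the intermediate server (DB2 in this example), the relevant my.cnf section would look roughly like this (a sketch; the server ID and log name are illustrative):

```ini
[mysqld]
server-id=2
log-bin=mysql-bin
# write the changes replicated from DB1 into DB2's own binary log,
# so that DB3 can in turn replicate them from DB2
log-slave-updates
```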

Replication Implementation

Replication is based on the master server keeping track of all changes to its databases (updates, deletes, and so on) in its binary log. The binary log serves as a written record of all events that modify database structure or content (data) from the moment the server was started. Typically, SELECT statements are not recorded because they modify neither database structure nor content.

Each slave that connects to the master requests a copy of the binary log. That is, it pulls the data from the master, rather than the master pushing the data to the slave. The slave also executes the events from the binary log that it receives. This has the effect of repeating the original changes just as they were made on the master. Tables are created or their structure modified, and data is inserted, deleted, and updated according to the changes that were originally made on the master.

Because each slave is independent, the replaying of the changes from the master’s binary log occurs independently on each slave that is connected to the master. In addition, because each slave receives a copy of the binary log only by requesting it from the master, the slave is able to read and update the copy of the database at its own pace and can start and stop the replication process at will without affecting the ability to update to the latest database status on either the master or slave side.

  http://dev.mysql.com/doc/refman/5.0/en/replication-implementation-details.html
Masters and slaves report their status in respect of the replication process regularly so that you can monitor them.
http://dev.mysql.com/doc/refman/5.0/en/thread-information.html
The master binary log is written to a local relay log on the slave before it is processed. The slave also records information about the current position with the master’s binary log and the local relay log.
 http://dev.mysql.com/doc/refman/5.0/en/slave-logs.html
Database changes are filtered on the slave according to a set of rules that are applied according to the various configuration options and variables that control event evaluation.
http://dev.mysql.com/doc/refman/5.0/en/replication-rules.html
Steps for MySQL Master-Slave Replication
One of the biggest advantages of a master-slave setup in MySQL is being able to use the master for all of the inserts and send some, if not all, select queries to the slave. This will most probably speed up your application without having to dive into optimizing all the queries or buying more hardware.

Do backups from the slave: one of the advantages people overlook is that you can use the MySQL slave to do backups from. That way the site is not affected at all when doing backups. This becomes a big deal when your database has grown to multiple gigs and the site lags from table locks every time you do backups using mysqldump. You can even stop the slave MySQL instance and copy the var folder instead of doing mysqldump.

Let us dive into how to set up master-slave replication under MySQL. There are many configuration changes you can make to optimize your MySQL setup.

Master server ip: 192.168.1.231
Slave server ip : 192.168.1.232
Slave username : mysqlslave
Slave password : slavepwd

Your data directory is:  /var/lib/mysql

1. In Master

# yum install mysql mysql-server
# service mysqld start
# mysqladmin -uroot password 'master'
# service mysqld stop

Edit the my.cnf file under [mysqld] section of your mysql master

# vim /etc/my.cnf

[mysqld]
server-id = 1
relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
log-error = /var/lib/mysql/mysql.err
master-info-file = /var/lib/mysql/mysql-master.info
relay-log-info-file = /var/lib/mysql/mysql-relay-log.info
datadir=/var/lib/mysql
log-bin = /var/lib/mysql/mysql-bin

# service mysqld restart

2. In Slave

# yum install mysql mysql-server
# service mysqld start
# mysqladmin -uroot password 'slave'
# service mysqld stop

Add the following under the [mysqld] section of the MySQL slave by editing my.cnf

# vim /etc/my.cnf

[mysqld]
server-id = 2
relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
log-error = /var/lib/mysql/mysql.err
master-info-file = /var/lib/mysql/mysql-master.info
relay-log-info-file = /var/lib/mysql/mysql-relay-log.info
datadir=/var/lib/mysql

# service mysqld restart

3. Then in Mysql Master server create a user with replication privileges

# mysql -uroot -pmaster

mysql> grant replication slave on *.* to mysqlslave@'192.168.1.232' identified by 'slavepwd';
mysql> flush privileges;

You can see the new user on the master db by

mysql> show databases;
mysql> use mysql;
mysql> show tables;
mysql> select * from user;
4. Take a dump of data from Mysql Master to move to slave
# mysqldump -uroot -p --all-databases --single-transaction --master-data=1 > masterdump.sql
(The master dump is imported into the slave in order to make the master and slave data identical before starting the sync.) After taking the dump of the master, import it on the slave server:

# scp masterdump.sql root@192.168.1.232:

# ssh root@192.168.1.232 -----> slave machine
# mysql -uroot -p < masterdump.sql
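Because the dump was taken with --master-data=1, it embeds a CHANGE MASTER TO line carrying the master's binlog coordinates, and you can read them back out with standard text tools. A sketch, run here against a fabricated one-line sample instead of the real masterdump.sql:

```shell
# Simulated header line from a dump taken with --master-data=1.
dump=$(mktemp)
cat > "$dump" <<'EOF'
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=706;
EOF

# Extract the binlog file name and position recorded in the dump.
logfile=$(sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p" "$dump")
logpos=$(sed -n 's/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p' "$dump")
echo "coordinates: $logfile:$logpos"
rm -f "$dump"
```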

After the dump is imported, go into the mysql client by typing mysql. Let us tell the slave which master to connect to and what login/password to use (on the slave machine):
# mysql -uroot -p
mysql> change master to master_host='192.168.1.231', master_user='mysqlslave', master_password='slavepwd';
mysql> flush privileges;
5. Let us start the slave
mysql> start slave;

You can check the status of the slave by typing
mysql> show slave status;
mysql> show slave status\G

*************************** 1. row ***************************
          Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.231
Master_User: mysqlslave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 706
Relay_Log_File: mysql-relay-bin.000008
Relay_Log_Pos: 446
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 706
Relay_Log_Space: 446
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.01 sec)
The Seconds_Behind_Master row tells you how many seconds the slave is behind the master. Don't worry if it doesn't say 0; the number should go down over time until the slave catches up with the master (at which point it will show Seconds_Behind_Master: 0). If it shows NULL, it could be that the slave is not started (you can start it by typing: start slave).
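That check is easy to script. A sketch that parses the \G output with awk, fed a captured sample here instead of a live mysql call:

```shell
# Sample lines captured from `show slave status\G` (stand-in for a live query).
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Extract the lag; an empty/NULL value would mean the slave threads are not running.
lag=$(printf '%s\n' "$status" | awk -F': ' '/Seconds_Behind_Master/ {print $2}')
echo "replication lag: ${lag}s"
```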
6. To check whether replication is working, create a db on the MySQL master server
mysql> create database replication;
and check on the slave whether it is replicated or not
mysql> show databases;
The db replication will be there on the slave as well.

Mysqlslap for MySQL Load Testing

mysqlslap is a load emulation client: a diagnostic program designed to emulate client load for a MySQL server and to report the timing of each stage. It works as if multiple clients were accessing the server. mysqlslap is available as of MySQL 5.1.4.

mysqlslap runs in three stages:

1. Create the schema, table, and optionally any stored programs or data you want to use for the test. This stage uses a single client connection.

2. Run the load test. This stage can use many client connections.

3. Clean up (disconnect, drop table if specified). This stage uses a single client connection.

Please find explanations of the mysqlslap options below.

# /usr/local/mysql/bin/mysqlslap -uroot -p --create-schema=DBName --query=file.txt --concurrency=100 --iterations=1 --delimiter=";"

mysqlslap usage

To apply auto-generated queries,
# mysqlslap -uroot -p --auto-generate-sql

To use multiple simultaneous client connections,
# mysqlslap -uroot -p --auto-generate-sql --concurrency=100

To run the above command multiple times, the iterations option is used,
# mysqlslap -uroot -p --auto-generate-sql --concurrency=100 --iterations=5

To specify the number of queries each client executes,
# mysqlslap -uroot -p --auto-generate-sql --concurrency=100 --number-of-queries=1000

The default behavior of mysqlslap when using the --auto-generate-sql switch is to create a two-column table with one integer column and one character column. To adjust these settings to include more integer and/or character columns, use the --number-char-cols and --number-int-cols switches.
# mysqlslap -uroot -p --auto-generate-sql --concurrency=100 --number-of-queries=1000 --number-char-cols=4 --number-int-cols=7

To specify the storage engine, the --engine option is used,
# mysqlslap -uroot -p --auto-generate-sql --concurrency=100 --number-of-queries=1000 --number-char-cols=4 --number-int-cols=7 --engine=innodb

We can also feed custom queries to mysqlslap, either from a text file or directly on the command line, using the --query option. To use an existing database, use the --create-schema option: this doesn't create a schema but uses an existing schema.
# mysqlslap -uroot -p --create-schema=Cherry_1H_B-01 --query=cherry_1h.txt --concurrency=100 --iterations=5 --delimiter=";"
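The file passed to --query is just delimiter-separated SQL statements. A sketch of building one (the table names and queries are made up for illustration):

```shell
# Write a few custom queries, separated by the ";" delimiter
# that mysqlslap is invoked with above.
cat > file.txt <<'EOF'
SELECT id, name FROM users WHERE id = 1;
SELECT COUNT(*) FROM orders;
UPDATE counters SET hits = hits + 1 WHERE page = 'home';
EOF

# Each statement becomes one query replayed by the client connections.
queries=$(grep -c ';' file.txt)
echo "queries in file: $queries"
rm -f file.txt
```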

DRBD MySQL

DRBD is a block device designed for building high-availability clusters.

This is done by mirroring a whole block device over a (dedicated) network. DRBD takes the data, writes it to the local disk, and sends it to the other host; the other host writes it to its disk there. The other components needed are a cluster membership service, which here is heartbeat, and some kind of application that works on top of a block device. Each device (DRBD provides more than one of these devices) has a state, which can be 'primary' or 'secondary'. If the primary node fails, heartbeat switches the secondary device into the primary state and starts the application there. If the failed node comes up again, it becomes a new secondary node and has to synchronise its content with the primary. This, of course, happens without interruption of service, in the background.

The Distributed Replicated Block Device (DRBD) is a Linux Kernel module that constitutes a distributed storage system. You can use DRBD to share block devices between Linux servers and, in turn, share file systems and data.

DRBD implements a block device which can be used for storage and which is replicated from a primary server to one or more secondary servers. The distributed block device is handled by the DRBD service. Each DRBD service writes the information from the DRBD block device to a local physical block device (hard disk).

On the primary, data writes are written both to the underlying physical block device and distributed to the secondary DRBD services. On the secondary, the writes received through DRBD are written to the local physical block device. The information is shared between the primary DRBD server and the secondary DRBD server synchronously and at a block level, and this means that DRBD can be used in high-availability solutions where you need failover support.

When used with MySQL, DRBD can be used to ensure availability in the event of a failure. MySQL is configured to store information on the DRBD block device, with one server acting as the primary and a second machine available to operate as an immediate replacement in the event of a failure.

For automatic failover support, you can combine DRBD with the Linux Heartbeat project, which manages the interfaces on the two servers and automatically configures the secondary (passive) server to replace the primary (active) server in the event of a failure. You can also combine DRBD with MySQL Replication to provide both failover and scalability within your MySQL environment.

NOTE:- Because DRBD is a Linux Kernel module, it is currently not supported on platforms other than Linux.

Configuring the DRBD Environment

To set up DRBD, MySQL and Heartbeat, you follow a number of steps that affect the operating system, DRBD and your MySQL installation.

@ DRBD works through two (or more) servers, each called a node.

@ Ensure that your DRBD nodes are as identically configured as possible, so that the secondary machine can act as a direct replacement for the primary machine in the event of system failure.

@ The node that contains the primary data, has read/write access to the data, and in an HA environment is the currently active node is called the primary.

@ The server to which the data is replicated is called the secondary.

@ A collection of nodes that are sharing information is referred to as a DRBD cluster.

@ For DRBD to operate, you must have a block device on which the information can be stored on each DRBD node. The lower level block device can be a physical disk partition, a partition from a volume group or RAID device or any other block device.

@ For the distribution of data to work, DRBD is used to create a logical block device that uses the lower level block device for the actual storage of information.To store information on the distributed device, a file system is created on the DRBD logical block device.

@ When used with MySQL, once the file system has been created, you move the MySQL data directory (including InnoDB data files and binary logs) to the new file system.

@ When you set up the secondary DRBD server, you set up the physical block device and the DRBD logical block device that stores the data. The block device data is then copied from the primary to the secondary server.

Installation and configuration sequence

@ First, set up your operating system and environment. This includes setting the correct host name, updating the system, preparing the available packages and software required by DRBD, and configuring a physical block device to be used with the DRBD block device.

@ Installing DRBD requires installing or compiling the DRBD source code and then configuring the DRBD service to set up the block devices to be shared.

@ After configuring DRBD, alter the configuration and storage location of the MySQL data.

@ Optionally, configure high availability using the Linux Heartbeat service.

Setting Up Your Operating System for DRBD
To set up your Linux environment for using DRBD, follow these system configuration steps:
@ Make sure that the primary and secondary DRBD servers have the correct host name, and that the host names are unique. You can verify this by using the uname command:
# hostname drbd1   -----> set the hostname for the first node
# hostname drbd2   -----> set the hostname for the second node
@ Each DRBD node must have a unique IP address. Make sure that the IP address information is set correctly within the network configuration and that the host name and IP address has been set correctly within the /etc/hosts file.
# vim /etc/hosts
192.168.1.231 drbd1 drbd1
# vim /etc/hosts
192.168.1.237  drbd2 drbd2

@ Because the block device data is exchanged over the network, everything that is written to the local disk on the DRBD primary is also written to the network for distribution to the DRBD secondary.

@ Devote a spare disk, or a partition on an existing disk, as the physical storage location for the DRBD data that is replicated. If the disk is unpartitioned, partition it using fdisk, cfdisk or another partitioning solution. Do not create a file system on the new partition (that is, leave the newly attached device or new partition unformatted).

# fdisk /dev/sdb  -----> on the primary node, create a partition first
n / p(1)
w
# partprobe
# fdisk -l
/dev/sdb1

# fdisk /dev/hdb  ----------> create a partition on the secondary node also
n / p(1)
w
# partprobe
# fdisk -l
/dev/hdb1

Create a new partition. If you are using VMware or VirtualBox and do not have extra space for a new partition, please add an extra data block (virtual disk) to get more space. Do not put a file system on the disk; the file system is created on the DRBD device only after DRBD is attached. Use identical sizes for the partitions on each node, primary and secondary.

@ If possible, upgrade your system to the latest available Linux kernel for your distribution. Once the kernel has been installed, you must reboot to make the kernel active. To use DRBD, you must also install the relevant kernel development and header files that are required for building kernel modules.

Before you compile or install DRBD, make sure the following tools and files are present. Update and install the latest kernel and kernel header files:

@ root-shell> up2date kernel-smp-devel kernel-smp

@ root-shell> up2date glib-devel openssl-devel libgcrypt-devel glib2-devel pkgconfig ncurses-devel rpm-build rpm-devel redhat-rpm-config gcc gcc-c++ bison flex gnutls-devel lm_sensors-devel net-snmp-devel python-devel bzip2-devel libselinux-devel perl-DBI

# yum install drbd kmod-drbd
OR (if any dependency error comes up)
# yum install drbd82 kmod-drbd82

[/etc/drbd.conf] is the configuration file

To set up a DRBD primary node, configure the DRBD service, create the first DRBD block device, and then create a file system on the device so that you can store files and data.

@ Set the synchronization rate between the two nodes. This is the rate at which devices are synchronized in the background after a disk failure, device replacement or during the initial setup. Keep this in check compared to the speed of your network connection.

@ To set the synchronization rate, edit the rate setting within the syncer block:

Creating your primary node
# vim /etc/drbd.conf

global { usage-count yes; }

common {
syncer {
rate 50M;
verify-alg sha1;
}
handlers { outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater"; }
}

resource mysqlha {
protocol C;   # Specifies the level of consistency to be used when information is written
              # to the block device. Data is considered written when it has reached the
              # local disk and the remote node's physical disk.
disk {
on-io-error detach;
fencing resource-only;
#disk-barrier no;
#disk-flushes no;
}
@ Set up some basic authentication. DRBD supports a simple password hash exchange mechanism. This helps to ensure that only those hosts with the same shared secret are able to join the DRBD node group.
net {
cram-hmac-alg sha1;
shared-secret "cEnToS";
sndbuf-size 512k;
max-buffers 8000;
max-epoch-size 8000;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
data-integrity-alg sha1;
}
@ Now you must configure the host information. You must have the node information for the primary and secondary nodes in the drbd.conf file on each host. Configure the following information for each node:
@ device: The path of the logical block device that is created by DRBD.
@ disk: The block device that stores the data.
@ address: The IP address and port number of the host that holds this DRBD device.
@ meta-disk: The location where the metadata about the DRBD device is stored. If you set this to internal, DRBD uses the physical block device to store the information, by recording the metadata within the last sections of the disk.
The exact size depends on the size of the logical block device you have created, but it may involve up to 128MB.
@ The IP address of each on block must match the IP address of the corresponding host. Do not set this value to the IP address of the other node (primary or secondary) in each case.
on drbd1 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/sdb1;
address 192.168.1.231:7789;
meta-disk internal;
}

on drbd2 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/hdb1;
address 192.168.1.237:7789;
meta-disk internal;
}

And on the second machine, do the same as on the first machine.
Setting Up a DRBD Secondary Node
The configuration process for setting up a secondary node is the same as for the primary node, except that you do not have to create the file system on the secondary node device, as this information is automatically transferred from the primary node.
@ To set up a secondary node:
Copy the /etc/drbd.conf file from your primary node to your secondary node. It should already contain all the information and configuration that you need, since you had to specify the secondary node IP address and other information for the primary node configuration.
global { usage-count yes; }

common {
syncer {
rate 50M;
verify-alg sha1;
}

handlers { outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater"; }
}

resource mysqlha {
protocol C;
disk {
on-io-error detach;
fencing resource-only;
#disk-barrier no;
#disk-flushes no;
}

net {
cram-hmac-alg sha1;
shared-secret "cEnToS";
sndbuf-size 512k;
max-buffers 8000;
max-epoch-size 8000;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
data-integrity-alg sha1;
}

on drbd1 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/sdb1;
address 192.168.1.231:7789;
meta-disk internal;
}

on drbd2 {
device /dev/drbd0;
#disk /dev/sda3;
disk /dev/hdb1;
address 192.168.1.237:7789;
meta-disk internal;
}

@@ On both machines, before starting the primary and secondary nodes, create the metadata for the devices:

# drbdadm create-md mysqlha

@@ On primary/active node,

# /etc/init.d/drbd start  ## DRBD should now start and initialize, creating the DRBD devices that you have configured.
DRBD creates a standard block device – to make it usable, you must create a file system on the block device just as you would with any standard disk partition. Before you can create the file system, you must mark the new device as the primary device (that is, where the data is written and stored), and initialize the device. Because this is a destructive operation, you must specify the command line option to overwrite the raw data.
# drbdadm -- --overwrite-data-of-peer primary mysqlha

@ On secondary/passive node,

# /etc/init.d/drbd start

@ On both machines,
# /etc/init.d/drbd status

# cat /proc/drbd      ##  Monitoring a DRBD device
• cs: connection state
• st: node state (local/remote)
• ld: local data consistency
• ds: data consistency
• ns: network send
• nr: network receive
• dw: disk write
• dr: disk read
• pe: pending (waiting for ack)
• ua: unack’d (still need to send ack)
• al: access log write count

# watch -n 10 'cat /proc/drbd'
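Those fields can be pulled out of /proc/drbd with standard text tools, for example in a monitoring script. A sketch using a captured sample line (the exact layout varies between DRBD versions):

```shell
# Sample resource line from /proc/drbd (stand-in for the real file).
proc_drbd=' 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---'

# Extract the connection state (cs) field.
cs=$(printf '%s\n' "$proc_drbd" | grep -o 'cs:[A-Za-z]*' | cut -d: -f2)
echo "connection state: $cs"
```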

@ On primary/active node,

# mkfs.ext3 /dev/drbd0
# mkdir /drbd        ## not needed here, because we will mount it at another point, /usr/local/mysql/
# mount /dev/drbd0 /drbd   ## not needed as well

Your primary node is now ready to use.

@ On seconday/passive node,

# mkdir /drbd   ## not needed, it will replicate from the primary

[[[[[[[[[[ To TEST the DRBD replication alone, follow the steps above. After the primary node is mounted on a mount point, create some files in it. Create the same mount point on both systems.
# cd /mountpoint
# dd if=/dev/zero of=check bs=1024 count=1000000

After that, on the primary:
# umount /drbd
# drbdadm secondary mysqlha  ## make the primary node secondary

And on the secondary:
# drbdadm primary mysqlha  ## make the secondary node primary
# mount /dev/drbd0 /drbd/
# ls /drbd/   ## the data will be replicated into it. ]]]]]]]]]]
MySQL for DRBD
# [MySQL]  ## install MySQL if it is not already there; here it is installed from source, with the DRBD partition already set up
@ On primary/active node,
# cd  mysql-5.5.12/
# cmake . -LH
# cmake .
# make
# make install
# cd /usr/local/mysql/
# chown mysql:mysql . -R
# scripts/mysql_install_db --datadir=/usr/local/mysql/data/ --user=mysql
# scp /etc/my.cnf root@192.168.1.231:/usr/local/mysql/
                          ## config file copied from another machine
# cd /usr/local/mysql/
# vim my.cnf

datadir=/usr/local/mysql/data/
socket=/usr/local/mysql/data/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/usr/local/mysql/mysqld.pid

./bin/mysqld_safe --defaults-file=/usr/local/mysql/my.cnf &
                ##  start mysql server
OR
# nohup sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/usr/local/mysql/my.cnf &
# ./bin/mysqladmin -h localhost -uroot password 'mysql'
 
# vim /etc/profile
export PATH=$PATH:/usr/local/mysql/bin
# . /etc/profile
# mysql -uroot -pmysql
# cd /usr/local/mysql/support-files/
# cp mysql.server /etc/init.d/mysqld
# /etc/init.d/mysqld restart
# /etc/init.d/mysqld stop
### /etc/init.d/drbd stop ---- don't stop drbd

# mkdir /tmp/new
# mv /usr/local/mysql/* /tmp/new/ ## move the MySQL data to a safe location, so the DRBD partition can be mounted at /usr/local/mysql
# umount /drbd  ## the DRBD partition is still mounted at /drbd; unmount it so it can be mounted at /usr/local/mysql, where the MySQL data is stored
# mount /dev/drbd0 /usr/local/mysql/  ## mount DRBD at the location where the MySQL directories and installation files reside
# cd /usr/local/mysql/  ## re-enter the directory so "." refers to the freshly mounted partition
# cp -r /tmp/new/* .  ## after mounting the DRBD partition, copy the MySQL data back to /usr/local/mysql/ from the backup location; MySQL now lives on the DRBD partition
[[[[[[[[[[[[[ To TEST the MySQL replication in DRBD
In Primary node
# mysql -uroot -pmysql
mysql> create database DRBD;  ## create a test database; the entire MySQL instance, including this db, is replicated to the secondary.
# /etc/init.d/mysqld stop    ## stop MySQL on the primary node, so that the MySQL service and db can be brought up on the secondary server.
# umount /usr/local/mysql/   ## unmount the mount point on the primary
# ls /usr/local/mysql/
# drbdadm secondary mysqlha  ## make the primary the secondary node
# /etc/init.d/drbd status
On the secondary node:
# drbdadm primary mysqlha

# /etc/init.d/drbd status
# mount /dev/drbd0 /usr/local/mysql/
# ls /usr/local/mysql/
# /usr/local/mysql/bin/mysql -uroot -pmysql  ## we can see the database created on the primary replicated to the secondary.
# /etc/init.d/mysqld start ]]]]]]]]]]

Configuring Heartbeat for DRBD (the service attached to DRBD) failover
1. Assign hostname node01 to the primary node with IP address 172.16.4.80 on eth0
2. Assign hostname node02 to the slave node with IP address 172.16.4.81
Note: on node01, uname -n must return node01
      on node02, uname -n must return node02

We already set the host names when configuring DRBD. Here we use 192.168.1.245 as the virtual IP; communications will listen on that IP.
# yum install heartbeat heartbeat-devel  ##  on both the servers

@ if config files are not under /usr/share/doc/heartbeat:

# cd /etc/ha.d/

# touch authkeys
# touch ha.cf
# touch haresources
# vim authkeys
auth 2
2 sha1 test-ha
## auth 3
## 3 md5 "secret"
# chmod 600 /etc/ha.d/authkeys
# vim ha.cf
logfacility local0
debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 500ms
deadtime 10
warntime 5
initdead 30
mcast eth0 225.0.0.1 694 2 0
ping 192.168.1.22
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
respawn hacluster /usr/lib/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster
auto_failback off   ## (or on)
node drbd1
node drbd2
# vim haresources ## This file contains the information about the resources we want to make highly available.
drbd1 drbddisk Filesystem::/dev/drbd0::/usr/local/mysql::ext3 mysqld 192.168.1.245 (virtual IP)
# cd /etc/ha.d/resource.d
# vim drbddisk
DEFAULTFILE="/etc/drbd.conf"
@ On the PRIMARY node:

# cp /etc/rc.d/init.d/mysqld /etc/ha.d/resource.d/

@ Copy the files from primary node to secondary node

# scp -r ha.d root@192.168.1.237:/etc/ ## copy all files to node two, because the primary and secondary nodes contain the same configuration.

@@ Stop all services on both nodes

node1$ service mysqld stop
node1$ umount /usr/local/mysql/
node1$ service drbd stop
node1$ service heartbeat stop

node2$ service mysqld stop
node2$ umount /usr/local/mysql/
node2$ service drbd stop
node2$ service heartbeat stop

@@ # Automatic startup,
node1$ chkconfig drbd on
node2$ chkconfig drbd on

node1$ chkconfig mysqld off ## mysql will be handled by heartbeat; we placed its init script in /etc/ha.d/resource.d/
node2$ chkconfig mysqld off
node1$ chkconfig heartbeat on
node2$ chkconfig heartbeat on
# Start drbd on both machines,
node1$ service drbd start
node2$ service drbd start

# Start heartbeat on both machines,
node1$ service heartbeat start
node2$ service heartbeat start

No need to start MySQL; Heartbeat will start it automatically.

For testing the replication
# /usr/lib/heartbeat/hb_standby ## Run this command on either host; that host will go into standby and the DB will fail over to the other system
@ Access the DB from a remote host using the virtual IP:

mysql> grant all privileges on *.* to 'root'@'192.168.1.67' identified by 'mysql1';

mysql> flush privileges;
mysql> delete from user where Host='192.168.1.67';
# mysql -uroot -p -h 192.168.1.245

#[Test Failover Services]
node1$ hb_standby
node2$ hb_takeover

#[Sanity Checks]
node1$ service heartbeat stop
node2$ service heartbeat stop
$/usr/lib64/heartbeat/BasicSanityCheck

#[commands]
$/usr/lib64/heartbeat/heartbeat -s

MySQL Master-Slave Replication after the Slave Fails

1. On the slave, run mysql> show slave status;
It will show the last binlog file the slave read from the master,
so start from that binlog file to sync the slave with the master.

2. Set the master configuration on the slave.
Execute the following command at a MySQL prompt to sync the slave with the master:

mysql> CHANGE MASTER TO MASTER_HOST='10.100.10.80', MASTER_USER='repl', MASTER_PASSWORD='slavepassword', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=106;

This is how you tell Slave how to connect to Master in order to replicate. Note the log coordinates. These are the coordinates you got from step 1 above.

[
Now we need to tell the slave where the master is located, which binlog file to use, and which position to start from. Issue this CHANGE MASTER TO
command on the slave server(s): (don't forget to change the values to match your master server)

 mysql> CHANGE MASTER TO
    ->   MASTER_HOST='master IP address',
    ->   MASTER_USER='replication user',
    ->   MASTER_PASSWORD='replication user password',
    ->   MASTER_PORT=3306,
    ->   MASTER_LOG_FILE='mysql-bin.000015',
    ->   MASTER_LOG_POS=540,
    ->   MASTER_CONNECT_RETRY=10;

mysql> show warnings\G

Two values in the slave status show us that our CHANGE MASTER TO statement worked:

    Master_Log_File: mysql-bin.000015
    Read_Master_Log_Pos: 540
]

3. Stop MySQL

4. Start MySQL normally

Checking that everything is OK

Having started the slave MySQL node, you can log in and issue a few commands to make sure the slave is running correctly.

1. At the mysql prompt, give the following command:

mysql> show processlist;

You can see the I/O thread that gets data from the master (in the above output, the thread with Id 2) and the SQL thread that executes the statements on the slave (the thread with Id 1).

2. mysql> show slave status;

This will display the current replication status on the slave. Pay attention to the *_Errno and *_Error columns; normally, you shouldn't see anything there that indicates errors.

3. At the mysql prompt, give the following command:

mysql> show status like 'Slave%';

You should see an output like the following:
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Slave_open_temp_tables     | 0     |
| Slave_retried_transactions | 0     |
| Slave_running              | ON    |
+----------------------------+-------+

Pay attention to Slave_running having the value ON.
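If you want to automate this check, the value is easy to extract with awk. A small sketch over sample output; the name/value pairs below are copied from the table above, reduced to plain text:

```shell
# Sample output of: show status like 'Slave%' (reduced to name/value pairs)
sample='Slave_open_temp_tables 0
Slave_retried_transactions 0
Slave_running ON'

running=$(printf '%s\n' "$sample" | awk '$1 == "Slave_running" {print $2}')
if [ "$running" = "ON" ]; then
    echo "replication thread is running"
else
    echo "replication thread is NOT running"
fi
```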

Important note on binary log time to live

As we have said before, you can take the slave down and
re-synchronize it as soon as you bring it up again. But do not keep it out of service for too long, because then it will become impossible to synchronize its content with the master.

This is because the binary logs on the master do not live forever.

There is a variable named expire_logs_days that determines the number of days after which binary log files are automatically removed. Check its value: here it is 10, meaning that if you ever have your slave down for 10 days or more, it will not be able to resume replication when you bring it up, and you will have to set everything up again from the beginning.
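The arithmetic behind this rule is simple enough to sketch. Both values below are assumptions for illustration: expire_logs_days would come from `show variables like 'expire_logs_days'`, and days_down is however long the slave was offline:

```shell
expire_logs_days=10   # assumed: value of the expire_logs_days server variable
days_down=3           # assumed: how long the slave has been offline

# The slave can only resume if the binlogs it needs have not been purged yet
if [ "$days_down" -lt "$expire_logs_days" ]; then
    echo "binlogs should still be on the master - resync is possible"
else
    echo "binlogs likely purged - rebuild the slave from a fresh dump"
fi
```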

CPU architecture information under Linux

You can use the /proc/cpuinfo file or the lscpu command to get info about the CPU architecture. lscpu displays, in a human-readable format, information such as:

  • Number of CPUs
  • Threads
  • Cores
  • Sockets
  • NUMA nodes
  • CPU caches
  • CPU family, model and stepping
Alternatively, it can print the output in a parsable format, including how different caches are shared by different CPUs, which can be fed to other programs.
Open a terminal and type the following command:
$ less /proc/cpuinfo
OR
$ lscpu
Sample outputs:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                8
Thread(s) per core:    2
Core(s) per socket:    4
CPU socket(s):         1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 30
Stepping:              5
CPU MHz:               1199.000
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
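The same numbers can also be pulled straight out of /proc/cpuinfo. A minimal sketch that counts logical CPUs; it assumes a Linux system where /proc is mounted:

```shell
# Each logical CPU appears as one 'processor : N' stanza in /proc/cpuinfo
ncpu=$(grep -c '^processor' /proc/cpuinfo)
echo "Logical CPUs: $ncpu"
```

In the sample output above, this would print 8 (2 threads x 4 cores x 1 socket).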

Add disk on Rhel 6

There are two ways to configure a new disk drive into a Red Hat Enterprise Linux 6 system. One very simple method is to create one or more Linux partitions on the new drive, create Linux file systems on those partitions and then mount them at specific mount points so that they can be accessed. This approach will be covered in this chapter.

Another approach is to add the new space to an existing volume group or create a new volume group. When RHEL 6 is installed a volume group is created and named vg_hostname, where hostname is the host name of the system. Within this volume group are two logical volumes named lv_root and lv_swap that are used to store the / file system and swap partition respectively. By configuring the new disk as part of a volume group we are able to increase the disk space available to the existing logical volumes. Using this approach we are able, therefore, to increase the size of the / file system by allocating some or all of the space on the new disk to lv_root. This topic will be discussed in detail in Adding a New Disk to an RHEL 6 Volume Group and Logical Volume .

Getting Started

This tutorial assumes that the new physical hard drive has been installed on the system and is visible to the operating system. The best way to confirm this is to enter the system BIOS during the boot process and ensure that the BIOS sees the disk drive. Sometimes the BIOS will provide a menu option to scan for new drives. If the BIOS does not see the disk drive, double check the connectors and jumper settings (if any) on the drive.

Finding the New Hard Drive in RHEL 6

Assuming the drive is visible to the BIOS it should automatically be detected by the operating system. Typically, the disk drives in a system are assigned device names beginning hd or sd followed by a letter to indicate the device number. For example, the first device might be /dev/sda, the second /dev/sdb and so on.
The following is output from a system with only one physical disk drive:
# ls /dev/sd*
/dev/sda   /dev/sda1  /dev/sda2
This shows that the disk drive represented by /dev/sda is itself divided into 2 partitions, represented by /dev/sda1 and /dev/sda2.
The following output is from the same system after a second hard disk drive has been installed:
# ls /dev/sd*
/dev/sda   /dev/sda1  /dev/sda2 /dev/sdb
As shown above, the new hard drive has been assigned to the device file /dev/sdb. Currently the drive has no partitions shown (because we have yet to create any).
At this point we have a choice of creating partitions and file systems on the new drive and mounting them for access or adding the disk as a physical volume as part of a volume group. To perform the former continue with this chapter, otherwise read Adding a New Disk to an RHEL 6 Volume Group and Logical Volume for details on configuring Logical Volumes.
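The "new disk has no partition device nodes yet" observation can be turned into a quick check. This sketch simulates the device nodes in a temporary directory so it is safe to run anywhere; on a real system you would loop over /dev/sd[a-z] instead:

```shell
# Simulated /dev entries: sda is partitioned, sdb is a fresh disk
tmp=$(mktemp -d)
touch "$tmp"/sda "$tmp"/sda1 "$tmp"/sda2 "$tmp"/sdb

fresh=$(for disk in "$tmp"/sd[a-z]; do
    # a disk with no matching sdX1, sdX2, ... has not been partitioned yet
    ls "$disk"[0-9] >/dev/null 2>&1 || basename "$disk"
done)
echo "unpartitioned: $fresh"
rm -rf "$tmp"
```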

Creating Linux Partitions

The next step is to create one or more Linux partitions on the new disk drive. This is achieved using the fdisk utility which takes as a command-line argument the device to be partitioned:
# su -
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd1082b01.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help):
As instructed, switch off DOS compatible mode and change the units to sectors by entering the c and u commands:
Command (m for help): c
DOS Compatibility flag is not set
Command (m for help): u
Changing display/entry units to sectors
In order to view the current partitions on the disk enter the p command:
Command (m for help): p

Disk /dev/sdb: 34.4 GB, 34359738368 bytes
255 heads, 63 sectors/track, 4177 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd1082b01

   Device Boot      Start         End      Blocks   Id  System
As we can see from the above fdisk output the disk currently has no partitions because it is a previously unused disk. The next step is to create a new partition on the disk, a task which is performed by entering n (for new partition) and p (for primary partition):
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4):
In this example we only plan to create one partition which will be partition 1. Next we need to specify where the partition will begin and end. Since this is the first partition we need it to start at the first available sector and since we want to use the entire disk we specify the last sector as the end. Note that if you wish to create multiple partitions you can specify the size of each partition by sectors, bytes, kilobytes or megabytes.
Partition number (1-4): 1
First sector (2048-67108863, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-67108863, default 67108863):
Using default value 67108863
Now that we have specified the partition we need to write it to the disk using the w command:
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
If we now look at the devices again we will see that the new partition is visible as /dev/sdb1:
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1
The next step is to create a filesystem on our new partition.

Creating a Filesystem on an RHEL 6 Disk Partition

We now have a new disk installed, it is visible to RHEL 6 and we have configured a Linux partition on the disk. The next step is to create a Linux file system on the partition so that the operating system can use it to store files and data. The easiest way to create a file system on a partition is to use the mkfs.ext4 utility which takes as arguments the label and the partition device:
# /sbin/mkfs.ext4 -L /backup /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=/backup
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
2097152 inodes, 8388352 blocks
419417 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
256 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mounting a Filesystem

Now that we have created a new filesystem on the Linux partition of our new disk drive we need to mount it so that it is accessible. In order to do this we need to create a mount point. A mount point is simply a directory or folder into which the filesystem will be mounted. For the purposes of this example we will create a /backup directory to match our filesystem label (although it is not necessary that these values match):
# mkdir /backup
The filesystem may then be manually mounted using the mount command:
# mount /dev/sdb1 /backup
Running the mount command with no arguments shows us all currently mounted filesystems (including our new filesystem):
# mount
/dev/mapper/vg_rhel6-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/sr0 on /media/RHEL_6.0 x86_64 Disc 1 type iso9660 (ro,nosuid,nodev,uhelper=udisks,uid=500,gid=500,iocharset=utf8,mode=0400,dmode=0500)
/dev/sdb1 on /backup type ext4 (rw)

Configuring RHEL 6 to Automatically Mount a Filesystem

In order to set up the system so that the new filesystem is automatically mounted at boot time an entry needs to be added to the /etc/fstab file.
The following example shows an fstab file configured to automount our /backup partition:
/dev/mapper/vg_rhel6-lv_root /                       ext4    defaults        1 1
UUID=4a9886f5-9545-406a-a694-04a60b24df84 /boot                   ext4    defaults        1 2
/dev/mapper/vg_rhel6-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=/backup           /backup                 ext4    defaults        1 2
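Each fstab line has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick sanity check on the new entry (the line below is copied from the listing above):

```shell
# The /backup line from the fstab listing above
entry='LABEL=/backup           /backup                 ext4    defaults        1 2'

# Count the whitespace-separated fields and pick out a few of them
nfields=$(set -- $entry; echo $#)
fstype=$(set -- $entry; echo $3)
passno=$(set -- $entry; echo $6)
echo "fields=$nfields type=$fstype fsck_pass=$passno"
```

A pass number of 2 means the filesystem is checked at boot, after the root filesystem (pass 1).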

iostat, vmstat and mpstat examples

This article provides a total of 24 examples of the iostat, vmstat, and mpstat commands.

  • iostat reports CPU, disk I/O, and NFS statistics.
  • vmstat reports virtual memory statistics.
  • mpstat reports processor statistics.
This article is part of our ongoing Linux performance monitoring series.
Please note that iostat and mpstat are part of the sysstat package (the same package that provides sar), while vmstat comes from procps. Install the sysstat package as explained in our sar (sysstat) article to get iostat and mpstat working.
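Before running the examples below, it can help to verify the tools are actually on your PATH. A small sketch using the portable `command -v` check; which tools report as installed depends on your system:

```shell
# Report whether each monitoring tool is available on this system
out=$(for tool in iostat vmstat mpstat; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: installed"
    else
        echo "$tool: missing"
    fi
done)
printf '%s\n' "$out"
```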

IOSTAT EXAMPLES

1. iostat – Basic example

iostat without any arguments displays information about the CPU usage, and I/O statistics for all the partitions on the system, as shown below.
$ iostat
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.72      1096.66      1598.70 2719068704 3963827344
sda1            178.20       773.45      1329.09 1917686794 3295354888
sda2             16.51       323.19       269.61  801326686  668472456
sdb             371.31       945.97      1073.33 2345452365 2661206408
sdb1            371.31       945.95      1073.33 2345396901 2661206408
sdc             408.03       207.05       972.42  513364213 2411023092
sdc1            408.03       207.03       972.42  513308749 2411023092

2. iostat – Display only cpu statistics

The iostat option -c displays only the CPU usage statistics, as shown below.
$ iostat -c
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

3. iostat – Display only disk I/O statistics

The iostat option -d displays only the disk I/O statistics, as shown below.
$ iostat -d
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.71      1096.61      1598.63 2719068720 3963827704
sda1            178.20       773.41      1329.03 1917686810 3295355248
sda2             16.51       323.18       269.60  801326686  668472456
sdb             371.29       945.93      1073.28 2345452365 2661209192
sdb1            371.29       945.91      1073.28 2345396901 2661209192
sdc             408.01       207.04       972.38  513364213 2411024484
sdc1            408.01       207.02       972.38  513308749 2411024484

4. iostat – Display only network statistics

The iostat option -n displays the device and NFS statistics, as shown below.
$ iostat -n
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)        07/09/2011

avg-cpu:  %user   %nice    %sys %iowait   %idle
           4.33    0.01    1.16    0.31   94.19

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               2.83         0.35         5.39   29817402  457360056
sda1              3.32        50.18         4.57 4259963994  387641400
sda2              0.20         0.76         0.82   64685128   69718576
sdb               6.59        15.53        42.98 1318931178 3649084113
sdb1             11.80        15.53        42.98 1318713382 3649012985

Device:                  rBlk_nor/s   wBlk_nor/s   rBlk_dir/s   wBlk_dir/s   rBlk_svr/s   wBlk_svr/s
192.168.1.4:/home/data      90.67        0.00         0.00         0.00         5.33         0.00
192.168.1.4:/backup         8.74         0.00         0.00         0.00         8.74         0.00
192.168.1.8:/media          0.02         0.00         0.00         0.00         0.01         0.00

5. iostat – Display I/O data in MB/second

By default, iostat displays the device I/O statistics in blocks. To change the unit to MB, use -m as shown below.
$ iostat -m
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             194.70         0.54         0.78    1327670    1935463
sda1            178.19         0.38         0.65     936370    1609060
sda2             16.51         0.16         0.13     391272     326402
sdb             371.27         0.46         0.52    1145240    1299425
sdb1            371.27         0.46         0.52    1145213    1299425
sdc             407.99         0.10         0.47     250666    1177259
sdc1            407.99         0.10         0.47     250639    1177259

6. iostat – Display I/O statistics only for a device

By default, iostat displays I/O data for all the disks available in the system. To view statistics for a specific device (for example, /dev/sda), use the option -p as shown below.
$ iostat -p sda
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.69      1096.51      1598.48 2719069928 3963829584
sda2            336.38        27.17        54.00   67365064  133905080
sda1            821.89         0.69       243.53    1720833  603892838

7. iostat – Display timestamp information

By default, iostat displays only the current date. To also display the current time, use the option -t as shown below.
$ iostat -t
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

Time: 08:57:52 AM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.69      1096.49      1598.45 2719070384 3963829704
sda1            178.18       773.32      1328.88 1917688474 3295357248
sda2             16.51       323.14       269.57  801326686  668472456
sdb             371.25       945.82      1073.16 2345452741 2661228872
sdb1            371.25       945.80      1073.16 2345397277 2661228872
sdc             407.97       207.02       972.27  513364233 2411030200
sdc1            407.97       207.00       972.27  513308769 2411030200

8. iostat – Display Extended status

Use option -x to display extended disk I/O statistics, as shown below.
$ iostat -x
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda              27.86    63.53 61.77 132.91  1096.46  1598.40    13.84     0.21    1.06   2.28  44.45
sda1              0.69    33.22 48.54 129.63   773.30  1328.84    11.80     1.39    7.82   2.28  40.57
sda2             27.16    30.32 13.23  3.28   323.13   269.56    35.90     0.55   32.96   3.44   5.68
sdb              39.15   215.16 202.20 169.04   945.80  1073.13     5.44     1.05    2.78   1.64  60.91
sdb1             39.15   215.16 202.20 169.04   945.77  1073.13     5.44     1.05    2.78   1.64  60.91
sdc               8.90     3.63 356.56 51.40   207.01   972.24     2.89     1.04    2.56   1.55  63.30
sdc1              8.90     3.63 356.55 51.40   206.99   972.24     2.89     1.04    2.56   1.55  63.30
To display extended information for a specific partition (for example, /dev/sda1), do the following.
$ iostat -x sda1
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda1              0.69    33.21 48.54 129.62   773.23  1328.76    11.80     1.39    7.82   2.28  40.56

9. iostat – Execute Every x seconds (for y number of times)

To execute iostat every 2 seconds (until you press Ctrl-C), do the following.
$ iostat 2
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.67      1096.39      1598.33 2719070584 3963891256
sda1            178.16       773.26      1328.79 1917688482 3295418672
sda2             16.51       323.11       269.54  801326878  668472584
sdb             371.22       945.74      1073.08 2345454041 2661251200
sdb1            371.22       945.72      1073.08 2345398577 2661251200
sdc             407.93       207.00       972.19  513366813 2411036564
sdc1            407.93       206.98       972.19  513311349 2411036564
..
To execute every 2 seconds for a total of 3 times, do the following.
$ iostat 2 3

10. iostat – Display LVM statistic (and version)

To display the LVM statistics, use option -N as shown below.
$ iostat -N
To display the version of iostat, use -V. This actually displays the version information of sysstat, as iostat is part of the sysstat package.
$ iostat -V
sysstat version 7.0.2

VMSTAT EXAMPLES

11. vmstat – Basic example

By default, vmstat displays the memory usage (including swap) as shown below.
$ vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0 305416 260688  29160 2356920    2    2     4     1    0    0  6  1 92  2  0
vmstat output contains the following fields:
  • Procs – r: Total number of processes waiting to run
  • Procs – b: Total number of processes in uninterruptible sleep (blocked)
  • Memory – swpd: Used virtual memory
  • Memory – free: Free virtual memory
  • Memory – buff: Memory used as buffers
  • Memory – cache: Memory used as cache.
  • Swap – si: Memory swapped from disk (for every second)
  • Swap – so: Memory swapped to disk (for every second)
  • IO – bi: Blocks in, i.e. blocks received from a device (per second)
  • IO – bo: Blocks out, i.e. blocks sent to a device (per second)
  • System – in: Interrupts per second
  • System – cs: Context switches
  • CPU – us, sy, id, wa, st: CPU user time, system time, idle time, I/O wait time, and stolen time
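Since the columns are fixed, individual values are easy to pick out with awk. A sketch over the sample data line from the output above (the column positions match the field list):

```shell
# Data line from the vmstat example above; column 3 is swpd, column 4 is free
line=' 0  0 305416 260688  29160 2356920    2    2     4     1    0    0  6  1 92  2  0'

free_kb=$(echo "$line" | awk '{print $4}')
swpd_kb=$(echo "$line" | awk '{print $3}')
echo "free=${free_kb} kB, swapped=${swpd_kb} kB"
```

On a live system you would pipe `vmstat 1 2 | tail -1` into the same awk expressions.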

12. vmstat – Display active and inactive memory

By default, vmstat doesn't display this information. Use option -a to display active and inactive memory information as shown below.
$ vmstat -a
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st
 0  0 305416 253820 1052680 2688928    2    2     4     1    0    0  6  1 92  2  0

13. vmstat – Display number of forks since last boot

This displays the number of fork system calls made by the system since the last boot. The count includes fork, vfork, and clone system calls.
$ vmstat -f
     81651975 forks

14. vmstat – Execute Every x seconds (for y number of times)

To execute every 2 seconds, do the following. You have to press Ctrl-C to stop this.
$ vmstat 2
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 537144 182736 6789320    0    0     0     0    1    1  0  0 100  0  0
 0  0      0 537004 182736 6789320    0    0     0     0   50   32  0  0 100  0  0
..
To execute every 2 seconds for 10 iterations, do the following. You don't need to press Ctrl-C in this case; after executing 10 times, it stops automatically.
$ vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 537144 182736 6789320    0    0     0     0    1    1  0  0 100  0  0
 0  0      0 537004 182736 6789320    0    0     0     0   50   32  0  0 100  0  0
..

15. vmstat – Display timestamp

When you use vmstat to monitor the memory usage repeatedly, it is helpful to see a timestamp along with every line item. Use option -t to display the timestamp as shown below.
$ vmstat -t 1 100
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ ---timestamp---
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 3608728 148368 3898200    0    0     0     0    1    1  0  0 100  0  0     2011-07-09 21:16:28 PDT
 0  0      0 3608728 148368 3898200    0    0     0     0   60   15  0  0 100  0  0     2011-07-09 21:16:29 PDT
 0  0      0 3608712 148368 3898200    0    0     0     0   32   28  0  0 100  0  0     2011-07-09 21:16:30 PDT
For me, the timestamp option worked in the following version.
$ vmstat -V
procps version 3.2.8
Note: If you use an older version of vmstat, option -t might not be available. In that case, use the method we suggested earlier to display the timestamp in vmstat output.

16. vmstat – Display slab info

Use option -m, to display the slab info as shown below.
$ vmstat -m
Cache                       Num  Total   Size  Pages
fib6_nodes                    5    113     32    113
ip6_dst_cache                 4     15    256     15
ndisc_cache                   1     15    256     15
RAWv6                         7     10    768      5
UDPv6                         0      0    640      6
tw_sock_TCPv6                 0      0    128     30
...

17. vmstat – Display statistics in a table format

Instead of displaying the values in record format, you can display the output of vmstat in table format using option -s as shown below.
$ vmstat -s
      4149928  total memory
      3864824  used memory
      2606664  active memory
      1098180  inactive memory
       285104  free memory
        19264  buffer memory
      2326692  swap cache
      4192956  total swap
       274872  used swap
      3918084  free swap
   1032454000 non-nice user cpu ticks
        14568 nice user cpu ticks
     89482270 system cpu ticks
  16674327143 idle cpu ticks
    368965706 IO-wait cpu ticks
      1180468 IRQ cpu ticks
..

18. vmstat – Display disk statistics

Use option -d to display the disk statistics as shown below. This displays the reads, writes, and I/O statistics of the disk.
$ vmstat -d
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec
sda   153189971 69093708 2719150864 737822879 329617713 157559204 3965687592 4068577985      0 1102243
sdb   501426305 97099356 2345472425 731613156 419220973 533565961 2661869460 1825174087      0 1510434
sdc   884213459 22078974 513390701 452540172 127474901 8993357 2411187300 2133226954      0 1569758

19. vmstat – Increase the width of the display

The default output without increasing the width is shown below.
$ vmstat 1 3
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 3608688 148368 3898204    0    0     0     0    1    1  0  0 100  0  0
 0  0      0 3608804 148368 3898204    0    0     0     0   72   30  0  0 100  0  0
 0  0      0 3608804 148368 3898204    0    0     0     0   60   27  0  0 100  0  0
Use option -w to increase the width of the output columns as shown below. This gives better readability.
$ vmstat -w 1 3
procs -------------------memory------------------ ---swap-- -----io---- --system-- -----cpu-------
 r  b       swpd       free       buff      cache   si   so    bi    bo   in   cs  us sy  id wa st
 0  0          0    3608712     148368    3898204    0    0     0     0    1    1   0  0 100  0  0
 0  0          0    3608712     148368    3898204    0    0     0     0   93   23   0  0 100  0  0
 0  0          0    3608696     148368    3898204    0    0     0     0   35   34   0  0 100  0  0

20. vmstat – Display statistics for a partition

To display the disk I/O statistics of a specific disk partition use option -p as shown below.
$ vmstat -p sdb1
sdb1          reads   read sectors  writes    requested writes
           501423248 2345417917  419221612 2661885948

21. vmstat – Display in MB

By default, vmstat displays the memory information in KB. To display it in MB, use the option "-S m" as shown below.
$ vmstat -S m
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0    281    288     19   2386    0    0     4     1    0    0  6  1 92  2  0

MPSTAT EXAMPLES

22. mpstat – Display basic info

By default mpstat displays CPU statistics as shown below.
$ mpstat
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

10:25:32 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
10:25:32 PM  all    5.68    0.00    0.49    2.03    0.01    0.02    0.00   91.77    146.55

23. mpstat – Display all information

Option -A displays all the information that the mpstat command can display, as shown below. This is equivalent to the "mpstat -I ALL -u -P ALL" command.
$ mpstat -A
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (4 CPU)

10:26:34 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:26:34 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.99
10:26:34 PM    0    0.01    0.00    0.01    0.01    0.00    0.00    0.00    0.00   99.98
10:26:34 PM    1    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00   99.98
10:26:34 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:26:34 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

10:26:34 PM  CPU    intr/s
10:26:34 PM  all     36.51
10:26:34 PM    0      0.00
10:26:34 PM    1      0.00
10:26:34 PM    2      0.04
10:26:34 PM    3      0.00

10:26:34 PM  CPU     0/s     1/s     8/s     9/s    12/s    14/s    15/s    16/s    19/s    20/s    21/s    33/s   NMI/s   LOC/s   SPU/s   PMI/s   PND/s   RES/s   CAL/s   TLB/s   TRM/s   THR/s   MCE/s   MCP/s   ERR/s   MIS/s
10:26:34 PM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    7.47    0.00    0.00    0.00    0.00    0.02    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    4.90    0.00    0.00    0.00    0.00    0.03    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.04    0.00    0.00    0.00    0.00    0.00    3.32    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    4.17    0.00    0.00    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00    0.00    0.00

24. mpstat – Display CPU statistics of individual CPU (or) Core

Option -P ALL displays all the individual CPUs (or cores) along with their statistics, as shown below.
$ mpstat -P ALL
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (4 CPU)

10:28:04 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:28:04 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.99
10:28:04 PM    0    0.01    0.00    0.01    0.01    0.00    0.00    0.00    0.00   99.98
10:28:04 PM    1    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00   99.98
10:28:04 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:28:04 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
To display statistics for a particular CPU (or core), use option -P as shown below.
$ mpstat -P 0
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (8 CPU)

10:28:53 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:28:53 PM    0    0.01    0.00    0.01    0.01    0.00    0.00    0.00    0.00   99.98

$ mpstat -P 1
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (8 CPU)

10:28:55 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:28:55 PM    1    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00   99.98
Finally, as we mentioned earlier, mpstat is part of the sysstat package. When you run mpstat -V, it actually displays the version number of the sysstat package, as shown below.
$ mpstat -V
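The one-shot reports above print averages since boot. mpstat also accepts an interval and a count, which is the usual way to watch per-core load over time; a small sketch (guarded in case the sysstat package is not installed):

```shell
# 1-second interval, 2 samples, all cores; mpstat appends an "Average:" block
# summarizing the sampled window rather than the since-boot averages.
if command -v mpstat >/dev/null 2>&1; then
    out=$(mpstat -P ALL 1 2)
else
    out="mpstat not found - install the sysstat package"
fi
printf '%s\n' "$out"
```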

Mysql backup using samba

#!/bin/bash

        # This script dumps all MySQL databases and compresses the dump
        # locally before sending it over the network, so make sure there is enough space
        # left on your local hard drive ($parentfold variable).

        # A samba share is used in this script, fill the ## NETWORKING ## section
        # and the $pathfold variable with your info.

        # sendEmail is used in this script, if you wish to use it, fill the ## MAILING ## section
        # with your info.

dateformat=$(date +\%d-\%m-\%Y)        # Format of the date used in the script.
datebegin=$(date)            # Date when the backup begins its routine.
beginjob=$(date +%s)            # Second at which the routine starts.
remotename=mremote            # Name of the temporary local folder where to mount the Samba share.
pathfold=//PATH/TO/SHARE/MYSQL        # Network folder for the backup; **folder must exist remotely**.
parentfold=/tmp                # Where it all begins.
basefold=$parentfold/.mysql        # Local parent folder created for the script usage.
remotefold=$basefold/$remotename    # Path of local folder to mount the remote Samba share.
muser=root                # User that launches the backup routine (root).
mpass='ROOT-PASSWORD'            # Password of the user that launches the backup routine.
mserver='localhost'            # Mysql server being backed up.
bakname=mysql_$dateformat.bak.db.sql    # Actual backup name.
archname=mysql_$dateformat.tgz        # Archive name that will be created.
logpath=$basefold/            # Log file path.
logfile=mysql_backup_LOGS_$dateformat.txt    # Log file name.
logging=$parentfold/$logfile        # Complete log path.
returns=''                # Simple line return for the logs.
mysqlv=$(mysql -V)            # MySQL version
uname=$(uname -a)            # Kernel version
lsb=$(lsb_release -d | cut -f2)        # OS version

## INFO ##

echo "========================================================
MySql Backup Summary - $(date +%d-%m-%Y)
========================================================
$returns" > $logging

# Declaring mysql server version.
echo "MySQL version:
---------------
$mysqlv
$returns" >> $logging

# Declaring Operating System version.
echo "Operating system version:
--------------------------
$lsb
$returns
Kernel version:
----------------
$uname
$returns" >> $logging

echo "** The backup job started on: $datebegin.
$returns
* Initializing...
" >> $logging

## LOCAL WORKING FOLDER ##

# Creating the backup structure.
echo '* Creating the temporary directory structure for the backup...' >> $logging
if [ ! -d "$remotefold" ]; then
        mkdir -p $remotefold
        echo " - Successfully created the local working directory under $remotefold." >> $logging
else
        echo " - Directory $remotefold exists, attempting to continue." >> $logging
fi

## NETWORKING ##

# Mounting the network folder where the backup will be sent.
echo $returns >> $logging
echo '* Attempting to Mount the Network Backup Folder...' >> $logging
mount -t cifs $pathfold $remotefold -o user='USERNAME',pass='PASSWORD',domain='WORKGROUP' >> $logging

if [ $? -eq 0 ]; then
        echo " - Remote path $pathfold successfully mounted." >> $logging
else
        echo " - Error encountered while mounting remote path $pathfold." >> $logging
fi

# Creating a subfolder with the date as its name on the remote folder.
cd $remotefold

if [ ! -d "$dateformat" ]; then
        mkdir $dateformat
        echo " - Successfully created a subfolder called $dateformat in the remote folder"  >> $logging
else
        echo " - Could not create a sub-directory called $dateformat as it already exists, continuing...." >> $logging
fi

## MYSQL Backup ##

# Backup command
echo $returns >> $logging
echo '* Taking the backup of ALL databases...' >> $logging
cd $basefold
mysqldump -u $muser -h $mserver --password="$mpass" --all-databases > $bakname

if [ $? -eq 0 ]; then
        echo " - Successful dump of ALL databases in $basefold as $bakname." >> $logging
else
        echo " - Mysqldump task encountered an error, please review your server logs." >> $logging
fi

rawsize=$(du -sh $bakname | cut -f1)

## COMPRESSION ##

# Compressing the backup into a tar.gz archive
echo $returns >> $logging
echo '* Compressing the backup into a .tgz archive...' >> $logging
tar cfz $archname $bakname > /dev/null 2>&1

if [ $? -eq 0 ]; then
        echo " - Successfully compressed the backup as $archname." >> $logging
else
        echo " - Compression task encountered an error, please review your server logs." >> $logging
fi

# Calculating the size of the backup once compressed.
compsize=$(du -sh $archname | cut -f1)

## REMOTE COPY ##

# Copying the MySql backup archive from the local backup folder to the remote backup folder.
echo $returns >> $logging
echo "* Copying the Backup to the Network folder..." >> $logging
cd $basefold
cp $archname $remotefold/$dateformat

if [ $? -eq 0 ]; then
        echo " - Backup successfully transferred to $pathfold/$dateformat." >> $logging
else
        echo " - Error encountered while transferring the backup, review your server logs." >> $logging
fi
cd ~

## VERIFICATION ##

# Checking that the MySql service is still running.
chkserv=$(ps aux | grep /usr/sbin/mysqld | grep -v grep)
echo "
* Querying the MySql service to ensure it is running...
$chkserv
$returns" >> $logging

echo "Native database(s) size: $rawsize
Compressed database(s) size: $compsize
$returns" >> $logging

# Stopping the stopwatch, calculating the time difference and announcing it.
dateend=$(date)
echo "** The backup job ended on: $dateend.
$returns" >> $logging

endjob=$(date +%s)
elapsed=$(( $endjob - $beginjob ))
hours=$(( $elapsed / 3600 ))
elapsed=$(( $elapsed - $hours * 3600 ))
minutes=$(( $elapsed / 60 ))
seconds=$(( $elapsed - $minutes * 60 ))

echo "The backup routine took: $hours hours $minutes minutes $seconds seconds to complete.
$returns" >> $logging

wpath=$(echo $pathfold | sed -e 's/\//\\/g')
echo "A copy of the present log can be found along with the backup at:
Linux:          smb:$pathfold
Windows:        $wpath" >> $logging
echo "$returns
$returns
Backup routine completed.
" >> $logging

## MAILING ##

# Once completed, the log can be emailed easily by installing the sendEmail package. Comment out the
# sendemail command below if you do not want to use it, or change the information accordingly.
esender=mysql-backups@testdomain.com
erecipient=admin@testdomain.com
esubject='MySQL backup logs'
ebody='This is the Backup Log of the MySQL server. Please review the attached document for detailed information.'

sendemail -q -f $esender -t $erecipient -u $esubject -m $ebody -a $logging

# Copying the logs over the network folder.
cp $logging $remotefold/$dateformat

# Unmount the remote backup folder.
umount $remotefold

## CLEANING AND EXITING ##

# Cleaning the server of the backup files/folders we created.
rm -rf $basefold
rm $logging

exit
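Once the script works by hand, it is typically scheduled from root's crontab so it runs unattended. A minimal sketch (the path /root/scripts/mysql_samba_backup.sh is an assumed location; adjust it to wherever you saved the script, and make the script executable with chmod +x first):

```shell
# root crontab entry (edit with: crontab -e)
# Run the MySQL/Samba backup every day at 02:30; the script does its own logging.
30 2 * * * /root/scripts/mysql_samba_backup.sh >/dev/null 2>&1
```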

Bash Profile Tips

If you’ve been learning the command-line and you have the basics down (and you should, since the most effective way to use a computer is a combination of a GUI and the command-line), the next step is to customize your environment.

Beginner’s Tip: “command-line” and “shell” are often used synonymously. In unix, technically speaking, the shell is what processes the command-line, but usually they mean the same thing.

The ability to fully customize your shell is one of the most powerful things about the command-line. It’s a dry subject, and mastering it won’t get you favors from the opposite sex (although it should), but it can be very useful.

There are many ways to customize your shell, but the first one you should learn is modifying your Bash startup files (assuming your shell is Bash, which is the default in OS X, Linux, and many other unices).

When I first learned how to customize bash, I found an overwhelming amount of information and opinion, which made it difficult. This article is intended to give you the fundamental concepts so that you can create your own startup files, and understand how they work. To give you an example, I go through a subset of my own files, section by section.

Let’s install the example startup files
Beginner’s Tip: Directory and folder are synonymous. Often folder is used in Windows and OS X and directory is used in Linux, however even Linux represents a directory as a folder graphically

Below are the two example startup files: .bashrc and .bash_profile.

If you would like to use these as your startup files, follow the directions for your OS.

OS X:
If you want a backup of your existing files, use the following commands (if the files don’t already exist, you will get an error. The files will be named .bashrc_ORIGINAL and .bash_profile_ORIGINAL in your home folder):

cp ~/.bashrc ~/.bashrc_ORIGINAL ; cp ~/.bash_profile ~/.bash_profile_ORIGINAL
Copy .bash_profile and .bashrc to your home folder.
There are a variety of ways to do this, but the simplest is to use the curl command:

curl -o ~/.bash#1 "http://assets.toddwerth.com/settings/dotfiles/osx/.bash{rc,_profile}"
You do not need to log out, just create a new window or tab in iTerm, or a new window in Terminal.
Linux and other unices:
If you want a backup of your existing files, use the following commands (if the files don’t already exist, you will get an error. The files will be named .bashrc_ORIGINAL and .bash_profile_ORIGINAL in your home folder):

cp ~/.bashrc ~/.bashrc_ORIGINAL ; cp ~/.bash_profile ~/.bash_profile_ORIGINAL
Copy .bash_profile and .bashrc to your home directory.
There are a variety of ways to do this, but the simplest is to use the wget (or curl for BSD and others) commands:

wget -O ~/.bashrc "http://assets.toddwerth.com/settings/dotfiles/generic/.bashrc"
wget -O ~/.bash_profile "http://assets.toddwerth.com/settings/dotfiles/generic/.bash_profile"
or
curl -o ~/.bash#1 "http://assets.toddwerth.com/settings/dotfiles/generic/.bash{rc,_profile}"
Log out and log back in to load .bash_profile. Alternatively, you can run source ~/.bash_profile to load the files.
What the heck are bash Startup Files?
Beginner’s Tip: ~ represents your home folder, it is short-hand notation so that you don’t have to type the whole thing; it is also used when you don’t know the home folder; for example, my code above works, no matter where your home folder/directory is.

Bash, as well as other unix shells, have files that run when they start. You can modify these files to set preferences, create aliases and functions (a kind of micro-script), and other such fun.

When you start an interactive shell (log into the console, open terminal/xterm/iTerm, or create a new tab in iTerm) the following files are read and run, in this order:

/etc/profile
/etc/bashrc
~/.bash_profile
~/.bashrc (Note: only if you call it in .bash_profile or somewhere else)
When an interactive shell, that is not a login shell, is started (when you call “bash” from inside a login shell, or open a new tab in Linux) the following files are read and executed, in this order:

/etc/bashrc
~/.bashrc
Beginner’s Tip: Normally you can’t see the . files (files that start with a period) because they are hidden. Depending on your OS, you can simply turn on hidden files. Another option is to open the file in the command-line. Here are a few examples:
In shell: pico .bashrc
In shell: vi .bashrc
In OS X: open .bashrc
In GNOME: gedit .bashrc
/etc/profile and /etc/bashrc are run for all users on the system. Often on your workstation there is only one user, you, but on systems with more than one user these files can be used to set generic settings for everyone. The files in your home folder, ~/.bashrc and ~/.bash_profile, apply only to your particular user (since /etc/bashrc is run before ~/.bashrc, you can override anything in /etc/bashrc by simply setting it again in ~/.bashrc). Normally I only change the files in my home folder: since only you have rights to them, you can change them without worrying about affecting anyone else.

When your session starts, these files are run, just as if you typed the commands in yourself. Anything that normally works in the shell works in these files. Since .bash_profile only runs when you first login, you set very little there; the only important thing is your PATH. bashrc is where the meat goes, and will be where you spend all your time.

Comments, export, and aliases, the tools of the trade
The most common commands you use in your .bashrc and .bash_profile files are the following:

comments: A # starts a comment, which continues until the end of a line. The comments are not run by the system, and are only there for your information. Use them to make notes to yourself for future reference.

For example:

# A very interesting comment, sure to be read by everyone
alias foo='bar' # Another comment, dangling at the end of a line
alias: An alias is simply substituting one piece of text for another. When you run an alias, it simply replaces what you typed with what the alias is equal to.
For example, if you create an alias called foo and set it to “cd /etc”:

alias foo='cd /etc'
Beginner’s Tip: To test your startup files, use the source command (. also works). It loads and executes a file in your current process (rather than starting a new process, which is normally what happens):
source .bashrc

In the command-line every time you type foo it is replaced by cd /etc. If you type foo ; ls it will be run as cd /etc ; ls.

To see what aliases are set on your system use the alias command. If run without any parameters it will list all the aliases.

Note: You can create a temporary alias by setting it in the command-line rather than in a bash startup file. Let’s say you’re typing the same command over and over as you’re working on some task. Just set an alias (alias foo=ThingImDoingOverAndOver). It will only exist in your current session.

export: Export sets an environment variable. Various environment variables are used by the system, or other applications, such as grep, or bash itself. They simply are a way of setting parameters that any application/script can read. If you set a variable without the export command, that variable only exists for that particular process (for example if you set a variable in a script, only that script can access it). You do this like so:

FOO=1234
If you want that variable to be accessible by other child processes (which is a fancy way of saying by everything) then you simply use the export command to make it permanent (technically it isn’t permanent, but if you set it in your bash startup scripts it will be available to everything you run in bash, which basically means everything.). To make FOO, which we just set, permanent, we do the following in our .bashrc file:

export FOO=1234
It is common to make environment variables all upper case, as shown in the example.

Bash has a set of environment variables you can set, such as the prompts (see below). Other applications use them too, such as GREP_OPTIONS for grep (see below). You can also create your own, to be used by you in the command-line or in scripts. In the color section below I create common colors to be used in the startup files, as well in other scripts I write.

To see what environment variables are set on your system use the set command. If run without any parameters it will list them.

To use an environment variable, you preface it with a $. A simple way to test this (assuming you set FOO already) is to print out the value using the echo command, like so:

echo $FOO
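The difference between a plain assignment and an exported variable is easy to see by asking a child shell what it sees. A minimal sketch (FOO is just a throwaway name):

```shell
FOO=1234                                   # plain assignment: visible only to this shell
in_child=$(bash -c 'printf "%s" "$FOO"')   # a child process sees nothing yet
echo "before export: [$in_child]"          # prints: before export: []

export FOO                                 # mark FOO for export to child processes
in_child=$(bash -c 'printf "%s" "$FOO"')   # now the child inherits the value
echo "after export: [$in_child]"           # prints: after export: [1234]
```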
Creating your .bash_profile

The .bash_profile file runs when you first login. You can put everything in this file, but since it only runs once and doesn’t run when you start a non-login session, it is best to put only the bare minimum there and then run the .bashrc file. With this approach, your .bashrc gets run every time you start a new session, because it is called from .bash_profile on login and is also run by bash automatically when starting a non-login shell.

The first thing to setup is your PATH. A path will already be set by the system, so you don’t want to overwrite that. You can add to the path like this:

# Add to original path
export PATH=/foo:/foo/bar:$PATH
The $PATH at the end represents the existing path; in other words, you are taking the existing path, whatever it is set to, and adding the paths /foo and /foo/bar to it.

It is a good idea to add your bin directory to your path, if you have it. It’s a standard place to add all of your scripts. The following lines will add the bin folder in your home folder to the path. The if statement simply checks to make sure it exists first before it adds it. This is safe to use even if there is no bin folder:

if [ -d ~/bin ]; then
export PATH=~/bin:$PATH
fi
Next, we will run the .bashrc file. The source command reads and runs another file in the same session you’re in (the . command does the same thing and is synonymous with source).

source ~/.bashrc
If you like, you can show a welcome message. This only runs when you first log in. The following is an example welcome message (note the colors, which are set in .bashrc below):

echo -e "Kernel Information: " `uname -smr`
echo -e "${COLOR_BROWN}`bash --version`"
echo -ne "${COLOR_GRAY}Uptime: "; uptime
echo -ne "${COLOR_GRAY}Server time is: "; date
On my workstation, it displays the following:

Last login: Wed Aug 29 09:55:37 on ttyp1
Welcome to Darwin!
Kernel Information: Darwin 8.10.1 i386
GNU bash, version 2.05b.0(1)-release (powerpc-apple-darwin8.0)
Copyright (C) 2002 Free Software Foundation, Inc.
Uptime: 10:41 up 47 mins, 3 users, load averages: 0.01 0.02 0.00
Server time is: Sat Sep 29 18:41:12 PDT 2007
Now the fun* begins, creating your .bashrc
* The word “fun” is used for entertainment purposes only. “Fun” or any other type of amusement is neither guaranteed nor expressly implied.

The .bashrc file is where the meat of your customization goes.

Normally it doesn’t matter in which order various items go, so the way the file is laid out is an individual preference. There are many ways to set up a .bashrc file, and many things you can use it for; there is no “right” way, only opinions on which way is right.

In this section I’m going to go over some of the more common things you would use it for, as well as some things that are less common, but that I personally like. You can include as little or as much as you like in your own file.

Colors™
Colors aren’t the most important thing, truly, however because I use colors throughout the entire file, I need them to be created first.

I’ll start by turning on colors for miscellaneous things; the following lines set your terminal to the color variant of xterm, turn on colors in grep, and so on:

export TERM=xterm-color

export GREP_OPTIONS='--color=auto' GREP_COLOR='1;32'

export CLICOLOR=1
OS X and BSD – The following command turns on colors in ls:

alias ls='ls -G'
Linux, etc. – use the following to turn on colors in ls. You can specify exactly which color is used for what, which is very nice. See http://www.linux-sxs.org/housekeeping/lscolors.html for more information on setting specific colors:

alias ls='ls --color=auto'
export LS_COLORS='di=1:fi=0:ln=31:pi=5:so=5:bd=5:cd=5:or=31:mi=0:ex=35:*.rb=90'
Some standard colors, to make adding colors later in this file, or in scripts, easier:

export COLOR_NC='\e[0m' # No Color
export COLOR_WHITE='\e[1;37m'
export COLOR_BLACK='\e[0;30m'
export COLOR_BLUE='\e[0;34m'
export COLOR_LIGHT_BLUE='\e[1;34m'
export COLOR_GREEN='\e[0;32m'
export COLOR_LIGHT_GREEN='\e[1;32m'
export COLOR_CYAN='\e[0;36m'
export COLOR_LIGHT_CYAN='\e[1;36m'
export COLOR_RED='\e[0;31m'
export COLOR_LIGHT_RED='\e[1;31m'
export COLOR_PURPLE='\e[0;35m'
export COLOR_LIGHT_PURPLE='\e[1;35m'
export COLOR_BROWN='\e[0;33m'
export COLOR_YELLOW='\e[1;33m'
export COLOR_GRAY='\e[0;30m'
export COLOR_LIGHT_GRAY='\e[0;37m'
alias colorslist="set | egrep 'COLOR_\w*'" # lists all the colors
Here is an example of using a color in a script (note, the -e is required):

echo -e "The color ${COLOR_GREEN}green${COLOR_NC} is nice."
Prompts ~/bin >
Beginner’s Tip: If you use different systems, a good trick is to check or import your startup files (and any other . files you like [. or dot files are hidden files and often used for settings]) into a source code repository, such as subversion. Then no matter what system you are on, assuming your subversion server is accessible, you can simply “checkout” your files with one command. Alternatively, you can store them on your website.

You can also do this with your ~/bin folder so you have your scripts too. It takes only a second to get them, and saves you time configuring the environment.

In the shell, every line begins with the prompt, right before the area you can type into. You can put a variety of different things in your prompt, from simply characters such as > to your user name, the name of the machine, or a ridiculous number of other things.

Some people’s prompts are, frankly, silly; do you really need your prompt to say something like this (note, the answer is “no”)?
juser@myMachine 1/1/2007 12:31:02 PM – “Funny quote of the day” – /home/juser>

I make my prompt as simple as possible, normally all you see is
~ >, which is just the path of the current folder followed by a “>”.

Knowing what user I am logged in as and which machine I’m on is also useful, but I put that in the window of my terminal (see below). For this very simple prompt, I use the following:

export PS1="\[${COLOR_GREEN}\]\w > \[${COLOR_NC}\]"
Since you can setup a .bashrc for each user, I normally set my regular user’s prompt to be green, and then I change it to red for root. This gives me a visual reminder that I’m logged in as root, and to be careful.

The following runs before the prompt and sets the title of the terminal window. If you set the window title in the prompt, weird wrapping errors occur on some systems, so this method is superior:

export PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*} ${PWD}"; echo -ne "\007"'
Note: PROMPT_COMMAND, used in this way, causes problems when you are using the command-line directly (a non-graphical terminal, such as the system console). Simply disable this in those circumstances. You can also write a little if to check for that if you like.

Here is an alternative prompt, with the user and machine name:

export PS1="\[${COLOR_GRAY}\]\u@\h \[${COLOR_GREEN}\]\w > \[${COLOR_NC}\]" # Primary prompt with user, host, and path
Now I just set prompts 2 through 4. There is a lot of information on setting prompts, but to cover that would be a whole other article. I wanted to give you just the basics, if you want more in depth information follow this link.

export PS2='> ' # Secondary prompt
export PS3='#? ' # Prompt 3
export PS4='+' # Prompt 4
Navigation
In the command-line, you have to move around a lot. The following aliases are designed to help you navigate your file system with ease.

Here is a very simple and useful way to move back up the directory tree, two dots moves you up one, and three dots moves you up two levels in your path:

alias ..='cd ..'
alias ...='cd .. ; cd ..'
A lot of people create a home alias, but this isn’t necessary. cd by itself, without any parameters takes you back to your home.

Another tip is to use cd – which takes you to the previous folder.

The following is very cool; I got it from this site. I modified it slightly to make it work a little better:

if [ ! -f ~/.dirs ]; then # if it doesn't exist, create it
touch ~/.dirs
fi

alias show='cat ~/.dirs'
save (){
command sed "/!$/d" ~/.dirs > ~/.dirs1; \mv ~/.dirs1 ~/.dirs; echo "$@"=\"`pwd`\" >> ~/.dirs; source ~/.dirs ;
}
source ~/.dirs # Initialization for the above 'save' facility: source the ~/.dirs file
shopt -s cdable_vars # set the bash option so that no '$' is required when using the above facility
It basically allows you to save bookmarks to folders. If you constantly go to a particular directory, you can just set it as a bookmark like so:

save foo
This will set the current folder you are in as the bookmark foo. To get back to the folder, just type this:

cd foo
You can list your “bookmarks” with the show command:

show
foo="/Users/joebob/foo"
The bookmarks are stored in the ~/.dirs file, so you can edit them there too if you like.

Editors
If an application, such as Subversion, needs you to edit text, it will often launch an external editor and load the text into it. You can change which application is launched by setting the EDITOR variable. I’ve listed various options below depending on your system:

#export EDITOR='mate -w' # OS-X SPECIFIC - TextMate, w is to wait for TextMate window to close
#export EDITOR='gedit' # Linux/gnome
#export EDITOR='vim' # Command line
export EDITOR='pico' # Command line
Misc
Common ls commands. Many people use these. ll shows the long listing, and lla shows the long listing with hidden files:

alias ll='ls -hl'
alias la='ls -a'
alias lla='ls -lah'
Some very useful settings I like to set:

export HISTCONTROL=ignoredups # Ignores dupes in the history
shopt -s checkwinsize # After each command, checks the windows size and changes lines and columns

# bash completion settings (actually, these are readline settings)
bind "set completion-ignore-case on" # note: bind is used instead of setting these in .inputrc. This ignores case in bash completion
bind "set bell-style none" # No bell, because it's damn annoying
bind "set show-all-if-ambiguous On" # this allows you to automatically show completion without double tab-ing

# Turn on advanced bash completion if the file exists (get it here: http://www.caliban.org/bash/index.shtml#completion)
if [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
Others you may find useful:

alias g='grep -i' # case insensitive grep
alias f='find . -iname'
alias ducks='du -cks * | sort -rn | head -11' # Lists the size of all the folders and files
alias top='top -o cpu'
alias systail='tail -f /var/log/system.log'
Shows the commands you use most; it’s a cool script I got from this site. It’s useful to show you what you should create aliases for:

alias profileme="history | awk '{print \$2}' | awk 'BEGIN{FS=\"|\"}{print \$1}' | sort | uniq -c | sort -n | tail -n 20 | sort -nr"
Subversion and Diff
This section is only useful if you use Subversion and diff. However, it highlights various things you can do with aliases and functions.

The basic concept is to wrap a bunch of commands, giving them a new naming convention. In this case the command is svn, and it has many options, such as svn status. By wrapping these commands, you can set common parameters that you use all the time. Also since they are aliases, you can use bash completion (set above) to get a nice list of the commands. For the following, if I just type sv and tab, I get a list of commands.

Some of these aliases simply wrap the original commands, without modifying them. I do this for consistency, and it gives me the opportunity to change them later.

First we set some standard settings for Subversion:

export SV_USER='juser' # Change this to the username you normally use on Subversion (only if it differs from your login name)
export SVN_EDITOR="${EDITOR}"
The next alias is interesting, as I’m basically using it to provide help to myself. Of course if you know subversion well, you don’t need this, but it may be useful when you’re first starting out and learning. When you type svshowcommands, it prints out a nice colored description of all the commands. Notice how you can have one alias on multiple lines:

alias svshowcommands="echo -e '${COLOR_BROWN}Available commands:
${COLOR_GREEN}sv
${COLOR_GREEN}sv${COLOR_NC}help
${COLOR_GREEN}sv${COLOR_NC}import ${COLOR_GRAY}Example: import ~/projects/my_local_folder http://svn.foo.com/bar
${COLOR_GREEN}sv${COLOR_NC}checkout ${COLOR_GRAY}Example: svcheckout http://svn.foo.com/bar
${COLOR_GREEN}sv${COLOR_NC}status
${COLOR_GREEN}sv${COLOR_NC}status${COLOR_GREEN}on${COLOR_NC}server
${COLOR_GREEN}sv${COLOR_NC}add ${COLOR_GRAY}Example: svadd your_file
${COLOR_GREEN}sv${COLOR_NC}add${COLOR_GREEN}all${COLOR_NC} ${COLOR_GRAY}Note: adds all files not in repository [recursively]
${COLOR_GREEN}sv${COLOR_NC}delete ${COLOR_GRAY}Example: svdelete your_file
${COLOR_GREEN}sv${COLOR_NC}diff ${COLOR_GRAY}Example: svdiff your_file
${COLOR_GREEN}sv${COLOR_NC}commit ${COLOR_GRAY}Example: svcommit
${COLOR_GREEN}sv${COLOR_NC}update ${COLOR_GRAY}Example: svupdate
${COLOR_GREEN}sv${COLOR_NC}get${COLOR_GREEN}info${COLOR_NC} ${COLOR_GRAY}Example: svgetinfo your_file
${COLOR_GREEN}sv${COLOR_NC}blame ${COLOR_GRAY}Example: svblame your_file
'"
Here I wrap each command:

alias sv='svn --username ${SV_USER}'
alias svimport='sv import'
alias svcheckout='sv checkout'
alias svstatus='sv status'
alias svupdate='sv update'
alias svstatusonserver='sv status --show-updates' # Show status here and on the server
alias svcommit='sv commit'
alias svadd='svn add'
alias svaddall='svn status | grep "^\?" | awk "{print \$2}" | xargs svn add'
alias svdelete='sv delete'
alias svhelp='svn help'
alias svblame='sv blame'
alias svdiff='sv diff'
You may have noticed I added a few commands too, such as svaddall; this isn’t part of standard Subversion. This command, for example, adds all files that aren’t already in the repository.

OS X specific: The following alias allows you to use the FileMerge OS X app for your diffs. You need to create fmdiff and fmresolve, which can be found at this site.

Note: Use diff for command line diff, use fmdiff for gui diff, and svdiff for subversion diff.

alias svdiff='sv diff --diff-cmd fmdiff' # OS-X SPECIFIC
Functions
Functions deserve their own article, but basically they are used when an alias isn’t enough. The following is a simple example of a function; it is used to show both the info and the log of a file in Subversion in one command (svgetinfo):

svgetinfo (){
sv info "$@"
sv log "$@"
}
All done… finally

List last modified – shell Script

#!/bin/bash
SHORT_HOSTNAME=`hostname -s`
LOCATION="/home"
#
DESTINATION="$1"
if [ -z "$1" ]
then
  DESTINATION="$LOCATION"
fi
#
EMAIL_TO="test@rmohan.com"
EMAIL_SUBJECT="List Last Modified files from $SHORT_HOSTNAME"
EMAIL_BODY_Line1="Below is the list of last modified files:"
#
MOD_FILES=`find $DESTINATION -type f -mmin -10 -ls`
#
if [ -n "$MOD_FILES" ]
then
  printf "$EMAIL_BODY_Line1\n\n$MOD_FILES\n" | mail -s "$EMAIL_SUBJECT" $EMAIL_TO
fi
#
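The heart of the script is the find test: -mmin -10 matches files whose modification time is less than 10 minutes ago. A quick way to see it in action against a throwaway directory (the sandbox path comes from mktemp; nothing here touches /home):

```shell
DIR=$(mktemp -d)                            # throwaway sandbox directory
touch "$DIR/fresh.txt"                      # modified just now, so it matches
MOD_FILES=$(find "$DIR" -type f -mmin -10)  # same test the script uses
echo "$MOD_FILES"                           # lists $DIR/fresh.txt
```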