Multipath config status check in Linux

Using dmsetup command:

# ls -lrt /dev/mapper   //To view the mapper disk paths and lvols

#dmsetup table

#dmsetup ls

#dmsetup status

Using the multipathd command (daemon):

#echo 'show paths' | multipathd -k

#echo 'show maps' | multipathd -k

multipathd explained below:

A. DISPLAY PATH STATUS

multipathd has a mode (the -k flag) where it can be used to connect to the running multipathd process over a socket.

If there is no running multipathd, you will get the following error:

[root@k2 ~]# multipathd -k
ux_socket_connect: Connection refused

If the daemon is running, you can issue commands like the ones below:

# multipathd -k

multipathd>

multipathd> show multipaths status

name   failback  queueing  paths  dm-st
mpath0 immediate -         4      active
mpath1 immediate -         4      active

B. SHOW TOPOLOGY

multipathd> show topology

mpath0 (360050768018380367000000000000049) dm-0 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0 ]
\_ round-robin 0 [prio=100][enabled]
\_ 1:0:3:0 sdg 8:96 [active][ready]
\_ 1:0:1:0 sde 8:64 [active][ready]
\_ round-robin 0 [prio=20][enabled]
\_ 1:0:0:0 sda 8:0 [active][ready]
\_ 1:0:2:0 sdc 8:32 [active][ready]

C. SHOW PATHS

multipathd> show paths
hcil dev dev_t pri dm_st chk_st next_check
1:0:0:0 sda 8:0 10 [active][ready] XXXXXXX… 14/20
1:0:0:1 sdb 8:16 10 [active][ready] XXXXXXX… 14/20
1:0:2:0 sdc 8:32 10 [active][ready] XXXXXXX… 14/20
1:0:2:1 sdd 8:48 10 [active][ready] XXXXXXX… 14/20
… excess deleted …

D. FAIL A PATH

# multipathd -k"fail path sdc"

# multipathd -k"show paths"

E. DELETE A PATH 

multipathd> del path sdc
ok

F. SUSPEND / RESUME A MAP

multipathd> suspend map mpath0
ok

And to re-enable the map:

multipathd> resume map mpath0
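
For a quick one-shot view without entering the interactive shell, the multipath tool itself can print the full map and path topology (assuming the device-mapper-multipath package is installed):

# multipath -ll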

Redirecting the Server Console to the Serial Port Using Linux Operating System:

Although the server's ILOM has a redirection feature that allows you to do this, you can also redirect the server console to the serial port by doing the following on either Red Hat (RHEL) or SUSE (SLES):

 

1. Add the following line to the /etc/inittab file (for SLES, this line might already exist but be commented out; if so, simply remove the "#" at the beginning of the line):

s0:12345:respawn:/sbin/agetty -L 9600 ttyS0 vt102

2. Add the following line to the /etc/securetty file:

ttyS0

3. Change the /etc/grub.conf file as described below.

a. Comment out the line that begins with "splashimage ...", like this:

# splashimage=(hd0,0)/grub/splash.xpm.gz

b. Add console=ttyS0,9600 console=tty0 at the end of the line that starts with "kernel /vmlinuz ...", for example:

kernel /vmlinuz-2.6.9 ro root=LABEL=/ debug console=ttyS0,9600 console=tty0

c. Optionally, to have the GRUB boot menu display on the serial console, add the following lines before the splashimage line:

serial --unit=0 --speed=9600

terminal --timeout=10 serial console
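
Putting the pieces together, the relevant portion of the edited /etc/grub.conf might look like the sketch below (the kernel version, initrd name, and root device are illustrative; match them to your own file):

serial --unit=0 --speed=9600
terminal --timeout=10 serial console
# splashimage=(hd0,0)/grub/splash.xpm.gz

title Linux
        root (hd0,0)
        kernel /vmlinuz-2.6.9 ro root=LABEL=/ console=ttyS0,9600 console=tty0
        initrd /initrd-2.6.9.img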

How to change Time zone in Linux

# cd /etc
# rm localtime  //delete the existing localtime file

Check the available time zones in the US:

# ls /usr/share/zoneinfo/US/
Alaska          Arizona         Eastern         Hawaii          Michigan        Pacific
Aleutian        Central         East-Indiana    Indiana-Starke  Mountain        Samoa

Note: For other country timezones, browse the /usr/share/zoneinfo directory

Now we can change the time zone using the steps below.

# vi /etc/timezone
America/Los_Angeles

Then export the TZ variable:

$ export TZ=America/Los_Angeles
$ date

 

 
We can also change the time zone using one of the interactive methods below:

Ubuntu: dpkg-reconfigure tzdata
Redhat: redhat-config-date
CentOS/Fedora: system-config-date
FreeBSD/Slackware: tzselect

Alternatively, relink /etc/localtime directly (from within /etc, as above):

# ln -sf /usr/share/zoneinfo/Asia/Calcutta localtime

Now check the time:

# date
Mon Aug 17 23:10:14 IST 2013

But if you do any patching, this time zone setting may get changed back afterwards, so this method is not a permanent one.
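
On Red Hat-style systems, a more patch-resistant approach is to record the zone in /etc/sysconfig/clock and copy (rather than symlink) the zoneinfo file; a minimal sketch using the same India zone as above:

# echo 'ZONE="Asia/Calcutta"' > /etc/sysconfig/clock
# cp /usr/share/zoneinfo/Asia/Calcutta /etc/localtime
# date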

HP Servers

ML, DL & BL indicate the families in HP's lineup of industry-standard servers: ProLiant.

BL = Blades = HP BladeSystem.
Up to 16 server and/or storage blades plus networking, sharing power and cooling in a 10U enclosure.

DL = Density Line = rack-mount ProLiant server.
Here, HP puts as many features as possible in as small an enclosure as possible. 1U (rack unit, 1.75") is the smallest. It is a complete server and does not share components (like power and cooling) with other servers, unlike blades.

ML = Maximized Line. These are typically ProLiant servers in
tower enclosures, but there are several in rack enclosures. Here HP gives you more slots for option cards, more drive bays, and typically more memory slots. They are also complete servers that do not share components, unlike blades.

Depending on the exact model, they have Intel or AMD processors, single or multiple CPUs, hot-plug components, and so on.

Solaris 10

After the Solaris installation finishes, you need to modify the following things.

1. Log in as the 'root' user.

2. To create group and user accounts:
#groupadd -g 500 unixmin
#useradd -u 500 -g unixmin -d /export/home/zawhtet -m -s /usr/bin/bash -c "Zaw Htet" zawhtet
#passwd zawhtet

3. Create a no-login user for services (optional):
#groupadd -g 501 squid
#useradd -u 501 -g squid -s /usr/bin/false -c "Squid Admin" squid

4. To change the login name and home directory from user2 (new) to user1 (old):
#usermod -m -d /export/home/user1 -l user1 user2

For testing, create user2 first:
#useradd -u 503 -g unixmin -d /export/home/user2 -m -s /usr/bin/bash -c "User 2" user2
Then modify user2's login name and home directory to become user1:
#usermod -m -d /export/home/user1 -l user1 user2
Note: Even after modifying the user's home directory and login name, the original comment field (User 2) remains, and the account keeps its original group:
cat /etc/passwd
user1:x:503:500:User 2:/export/home/user1:/usr/bin/bash

5. Deleting user accounts:
#userdel user1      - removes the user1 account
#userdel -r user1   - also removes the user's home directory

6. Deleting a group:
#groupdel groupname
cat /etc/group

7. When you first log in to a terminal, you will see that you get "/bin/sh":
#echo $SHELL
/bin/sh
#bash
bash-3.00$
Note: When you edit a file, even with the root account, you may get a read-only message.
If you want to save after opening a file with the vi editor, use ':wq!'.
bash-3.00# whereis bash
bash: /usr/bin/bash /usr/man/man1/bash.1

8. If you want root or your own user account to permanently log in with the bash shell:
vi  /etc/passwd
root:x:0:0:Super-User:/:/bin/sh
change to
root:x:0:0:Super-User:/:/bin/bash

9. Create a '.bash_profile' file under '/' then copy it to /root (#cp .* /root):
vi .bash_profile
export PATH=/usr/bin:/usr/sbin:/usr/sfw/bin:/opt/sfw/bin:/usr/dt/bin:/usr/sadm/admin/bin/
export PS1='[\u@\h \W]\$ '
export HISTSIZE=5000
alias ls='ls -l'
alias netstat='netstat -an |grep LISTEN'
alias h='history'
alias lsd='ls -ACF \!* | more'
alias lsl='ls -alh | less'
alias lst='ls -alt \!* | more'
alias plm='ps -elf | more'
alias plg='ps -elf | grep "\!*" | sort -n +3 -4'
alias psm='ps -ef | more'
alias psg='ps -ef | grep "\!*" | sort -n +1 -2'

10. Refresh the profile (or log out and log back in):
source ~root/.bash_profile
.  ~root/.bash_profile
#env or set
#echo $PATH
/usr/sbin:/usr/bin:/usr/sfw/bin/
#export

11. Make the root account log in to its own home directory:
vi  /etc/passwd
root:x:0:0:Super-User:/:/bin/bash
change to
root:x:0:0:Super-User:/root:/bin/bash

12. Permit SSH login for the root user:
vi  /etc/ssh/sshd_config
PermitRootLogin  yes

13. Restart the SSH service:
#svcadm enable ssh
#svcadm refresh ssh
#svcs -a | grep ssh
#netstat -an | grep LISTEN

14. IPFilter for the Solaris firewall:
svcadm enable ipfilter
svcs -a|grep pfil
Example rulesets are in /usr/share/ipfilter/examples; just copy one of them to /etc/ipf/ipf.conf.

#ipf  -Fa  -f  /etc/ipf/ipf.conf
pass in all
pass out all

routeadm -u -e ipv4-forwarding

vi  /etc/ipf/ipf.conf
-----------------------Firewall---------------------------------------
pass in quick on lo0 all
pass out quick on lo0 all
block in log on e1000g0 all
block out log on e1000g0 all
pass out quick on e1000g0 proto tcp/udp from any to any keep state
pass out quick on e1000g0 proto icmp all keep state
pass in quick on e1000g0 proto icmp all keep state
pass in quick proto tcp from any to any port = 22 keep state
pass in quick proto tcp from any to any port = 10000 keep state
pass in quick proto udp from any to any port = 67 keep state
-----------------------------------------------------------------------
# Allow all traffic on loopback.
pass in quick on lo0 all
pass out quick on lo0 all
# Public network. Block everything not explicitly allowed.
block in log on e1000g0 all
block out log on e1000g0 all

# Allow all connection out from this computer
pass out quick on e1000g0 proto tcp/udp from any to any keep state

# Allow pings out.
pass out quick on e1000g0 proto icmp all keep state

# Allow pings in.
pass in quick on e1000g0 proto icmp all keep state

# Allow ssh connections on port 22 from the laptop (192.168.0.1).
pass in quick proto tcp from 192.168.0.1 to 192.168.0.254 port = 22 keep state
pass in quick proto tcp from any to any port = 22 keep state
pass in quick proto tcp from any to any port = 10000 keep state
------------------------------------------------------------------------------------------------
-bash-3.00# cat reloadipf.sh
#!/bin/sh
# Last Modified On: 25-FEB-2006
# Script to reload the IPF rules
ipf -Fa -f /etc/ipf/ipf.conf
-bash-3.00#
------------------------------------------------------------------------------------------------
ipf -E  : Enable ipfilter when running for the   first time.(Needed for ipf on Tru64)

ipf -f /etc/ipf/ipf.conf  : Load rules in /etc/ipf/ipf.conf file into the active firewall.

ipf -Fa -f /etc/ipf/ipf.conf : Flush all rules, then load rules in /etc/ipf/ipf.conf into the active firewall.

ipf -Fi  : Flush all input rules.

ipf -I -f /etc/ipf/ipf.conf : Load rules in /etc/ipf/ipf.conf file into inactive firewall.

ipf -V  : Show version info and active list.

ipf -s  : Swap active and inactive firewalls.

ipfstat  : Show summary

ipfstat -i : Show input list

ipfstat -o : Show output list

ipfstat -hio : Show hits against all rules

ipfstat -t -T 5 : Monitor the state table and refresh every 5 seconds. Output is similar to 'top' monitoring the process table.

ipmon -s S : Watch state table.

ipmon -sn : Write logged entries to syslog, and convert addresses back to hostnames and service names.

ipmon -s [file] : Write logged entries to some file.

ipmon -Ds : Run ipmon as a daemon, and log to default location.  (/var/adm/messages for Solaris) (/var/log/syslog for Tru64)

15.Solaris 10 Static IP Configuration
/etc/nodename
/etc/hosts
/etc/inet/hosts
/etc/hostname.e1000g0
/etc/inet/ipnodes
/etc/inet/netmasks
/etc/defaultdomain
/etc/defaultrouter
/etc/resolv.conf
svcadm  restart network/physical
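
As a sketch of what those files might contain for a hypothetical host solaris-1 at 192.168.1.10/24 (all names and addresses are illustrative):

# cat /etc/nodename
solaris-1
# cat /etc/hostname.e1000g0
solaris-1
# grep solaris-1 /etc/inet/hosts
192.168.1.10   solaris-1
# cat /etc/inet/netmasks
192.168.1.0    255.255.255.0
# cat /etc/defaultrouter
192.168.1.1
# cat /etc/resolv.conf
nameserver 192.168.1.4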

16. Solaris 10 dynamic IP configuration: make sure the following files are blank.
/etc/hostname.e1000g0
/etc/dhcp.e1000g0
/etc/defaultrouter
svcadm  restart network/physical

#/usr/sbin/netservices limited

17. Check port open status:
#netstat -n -f inet
#netstat -anf inet -P tcp
#netstat -anf inet -P udp
#netstat -nr
lsof -i TCP
lsof -i TCP | grep LISTEN

18. Package management
If you want to add more Solaris packages from the DVD after you have installed Solaris,
first insert the DVD and mount it.
Remount via the volume manager:
#/etc/init.d/volmgt stop
#/etc/init.d/volmgt start
Check:
# ls /cdrom/cdrom0
# cd /cdrom/cdrom0/Solaris_10/Product
or mount it manually:
#mount -F hsfs /dev/dsk/c0t0d0p0 /mnt

19.mount ISO file
#lofiadm -a /tmp/companion-sparc-sol10.iso /dev/lofi/1
#mount -F hsfs -o ro /dev/lofi/1 /mnt
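
When finished with the ISO, unmount it and tear down the lofi device (the same pattern appears in the Companion DVD example later in these notes):

#umount /mnt
#lofiadm -d /dev/lofi/1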

20. CD burning
#cdrw -l
Looking for CD devices...
Node                 | Connected Device               | Device type
---------------------+--------------------------------+-----------------
/dev/rdsk/c2t0d0s2   | MATSHITA DVD-RAM UJ-845S  D200 | CD Reader/Writer
#cdrw -d c2t0d0s2 -i companion-sparc-sol10.iso

21. Package installation
#ls /mnt/Solaris_10/Product
Solaris packages start with 'SUNW*'.
If you want to add one package:
#pkgadd -d . SUNWbash
Normally these packages install to '/usr/sfw'.

Or you can manually download a bz2 package from the internet and install it like this:
bunzip2 firefox-24.0.en-US.solaris-10-fcs-i386-pkg.bz2
pkgadd -d firefox-24.0.en-US.solaris-10-fcs-i386-pkg

Decompress a tar.gz file:
#gunzip -c vmware-solaris-tools.tar.gz | tar -xvf -
(or, with GNU tar: #gtar xvf vmware-solaris-tools.tar.gz)

If your package is in .bz2 format then first uncompress it using the bunzip2 command:
#bunzip2 Packagename.bz2
Install the package:
#pkgadd -d Packagename
Note: the .bz2 extension will be removed automatically by the first command.
For example, if your package name is SFWqt.bz2:
#bunzip2 SFWqt.bz2
#pkgadd -d SFWqt

Add packages from the DVD to /var/spool/pkg:
#pkgadd -d /cdrom/sol_10_910_x86/Solaris_10/Product/ -s /var/spool/pkg/ SUNWgtar
#pkgadd SUNWgtar
#pkgadd -d /path/to/cdrom/Product SUNWjaf SUNWjato SUNWjmail
#pkginfo -l | grep wget
#pkginfo -l SUNWwgetu

#gunzip lsof_1106-4.80-sol10-sparc-local.gz
#pkgadd -d lsof_1106-4.80-sol10-sparc-local   (or the *.pkg file)
If gunzip cannot be found, add these paths to your environment:
/usr/local/bin
/usr/local/lib
/usr/local/man

For installing all the packages, create an install administration file such as:
# cat /var/tmp/admin
mail=
conflict=nocheck
setuid=nocheck
action=nocheck
partial=nocheck
instance=overwrite
idepend=nocheck
rdepend=nocheck
space=check
#pkgadd -a /var/tmp/admin -d /cdrom/cdrom/Solaris_Software_Companion/Solaris_i386/

Download zipped ISO from http://www.sun.com/software/solaris/freeware/
# unzip sol-10-u8-companion-ga-iso.zip
# lofiadm -a `pwd`/sol-10-u8-companion-ga.iso
# mount -oro -Fhsfs /dev/lofi/1 /mnt
# /bin/yes | pkgadd -d /mnt/Solaris_Software_Companion/Solaris_sparc/Packages all
# pkgrm SFWvnc
# umount /mnt
# lofiadm -d `pwd`/sol-10-u8-companion-ga.iso
# rm sol-10-u8-companion-ga.iso
# rm sol-10-u8-companion-ga-iso.zip

22. To remove a package:
#pkgrm packagename

23. System info commands
#cat /etc/release
#showrev
#uname -a
#prtconf | grep -i memory
#psrinfo
#psrinfo -pv
#isainfo -bv
#isalist
#date '+DateTime: %m.%d.%y @ %H:%M:%S'
date mmddHHMMccyy   (to set the date, for example:)
date 091810022013
#ps -ef
#ps -U root
#tty / w
#pgrep sshd
#pgrep -o sshd
#pkill sshd   (or kill the PID directly)
#pwdx PID   (lists the working directory of a process)
#prstat
#svcs -o FMRI,DESC

24. KDE login after installation from the Companion DVD:
#/opt/sfw/kde/dtlogin/install-dtlogin

25. To disable the GUI login on Solaris:
First log in with ssh and kill the desktop login:
#/usr/dt/bin/dtconfig -kill
#/usr/dt/bin/dtconfig -d
#/usr/dt/bin/dtconfig -e
#/usr/dt/bin/dtconfig -reset
#/usr/dt/bin/dtconfig -inetd

26. Static routing (the -p option makes the route permanent):
#route -p add -net 192.168.2.0 192.168.1.2 255.255.255.0
                   (network)    (gateway)   (netmask)
add net 192.168.2.0: gateway 192.168.1.2
add persistent net 192.168.2.0: gateway 192.168.1.2

The route created above would still appear the same in a listing of the
routing table; however, you may notice that there is a second line
of output upon creating the route:

add persistent net 192.168.2.0: gateway 192.168.1.2

This simply means that the 'route' command updated the config file
/etc/inet/static_routes. By default, this file does not exist until
a static route is created via 'route -p ...' or you create it. Before
getting to the contents, the following are the ownership and permissions set
on the file by 'route':

#ls -l /etc/inet/static_routes
-rw-r--r--   1 root     root          45 Oct  6 13:35 /etc/inet/static_routes
And now, the contents, which are effectively the arguments to ‘route add’:

#cat /etc/inet/static_routes

# File generated by route(1M) - do not edit.
-net 192.168.2.0 192.168.1.2 255.255.255.0

Yes, I know it says do not edit, but after checking the source of
'route' via opensolaris.org, it does not appear that manual editing
is a problem. Finally, Solaris has a native, standardized means of
configuring persistent static routes.

Additionally, to remove a static route, either delete it from
/etc/inet/static_routes and remove it via 'route', or simply use the following
'route' command:

#route -p delete -net 192.168.2.0 192.168.1.2 255.255.255.0

27. BIND DNS server on Solaris
------------------------------
pkginfo -x |grep -i bind
SUNWbind                          BIND DNS Name server and tools
SUNWbindr                         BIND Name server Manifest

pkgchk -l SUNWbind (Client & Server Utilities)
pkgchk -l SUNWbindr | grep -i pathname | less

dig linuxcbt.com ns

ls -l /usr/sbin/named

ls -l /usr/sbin/in.named

ls -ltr /var/named

vi /etc/named.conf   (by default there is no /etc/named.conf within /etc; you have to create this file yourself)

options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
listen-on port 53 { 127.0.0.1; 192.168.100.103; };
allow-query { localhost; 192.168.100.0/24; };
forwarders { 192.168.100.254; 8.8.8.8; };
recursion yes;
max-cache-size 100m;
cleaning-interval 60;
};

zone "." {
type hint;
file "named.root";
};

zone "mmx.com" {
type master;
file "db.mmx.com";
allow-update { none; };
};

zone "0.0.127.in-addr.arpa" {
type master;
file "db.127.0.0";
};

zone "100.168.192.in-addr.arpa" {
type master;
file "db.192.168.100";
allow-update { none; };
};

---------------------------------------------------------------------------
@ is a variable that indicates the name of the zone as configured in /etc/named.conf.

############/var/named/db.127.0.0###############################
$TTL 28800
@ IN SOA  ns1.mmx.com.  zawhtet.mmx.com. (
2013100301 ; serial number yyyymmdd01
7200 ; Refresh Interval
3600 ; Retry Interval
86400 ; Expiry
600 )  ; Minimum TTL

NS    ns1.mmx.com.
1  IN    PTR   localhost.mmx.com.

------------------------------------------------------------------
#############/var/named/db.192.168.100############################
$TTL 28800
@ IN SOA  ns1.mmx.com.  zawhtet.mmx.com. (
2013100301 ; serial number yyyymmdd01
7200 ; Refresh Interval
3600 ; Retry Interval
86400 ; Expiry
600 )  ; Minimum TTL

NS    ns1.mmx.com.
89  IN    PTR   ns1.mmx.com.
-------------------------------------------------------------
###############/var/named/db.mmx.com##########################
$TTL 28800
@ IN SOA  ns1.mmx.com.  zawhtet.mmx.com. (
2013100301 ; serial number yyyymmdd01
7200 ; Refresh Interval
3600 ; Retry Interval
86400 ; Expiry
600 )  ; Minimum TTL

NS    ns1.mmx.com.
IN  MX  10 ns1.mmx.com.

ns1 IN  A     192.168.100.89
www    CNAME  ns1.mmx.com.
--------------------------------------------------------------
svcadm enable dns/server && dig @localhost ns1.mmx.com   (after config changes: svcadm restart dns/server)
svcs -l dns/server
dig @localhost ns1.mmx.com
dig @localhost msn.com
dig @localhost www.mmx.com
named-checkconf -z /etc/named.conf
svcs  -a \*dns\*
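
Individual zone files can also be validated with named-checkzone, which ships with BIND 9 alongside named-checkconf; a sketch using the zones above:

named-checkzone mmx.com /var/named/db.mmx.com
named-checkzone 100.168.192.in-addr.arpa /var/named/db.192.168.100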
--------------------------------------------------------------
Slave DNS Server
----------------

Copy following files to slave server:

1. /var/named/db.127.0.0 – Houses reverse, loopback zone info
2. /var/named/named.root – root hints
3. /etc/named.conf

cd /var/named

scp db.127.0.0 named.root /etc/named.conf 192.168.100.2:/root

On the slave DNS server:

cp /root/db.* /root/named.root /var/named
cp /root/named.conf /etc

vi /etc/named.conf   (on the slave DNS server)
options {
directory "/var/named";
allow-query { localhost; 192.168.100.0/24; };
};

zone "." {
type hint;
file "named.root";
};

zone "mmx.com" {
type slave;
file "db.mmx.com";
masters { 192.168.100.89; };
};

zone "0.0.127.in-addr.arpa" {
type master;
file "db.127.0.0";
};

zone "100.168.192.in-addr.arpa" {
type slave;
file "db.192.168.100";
masters { 192.168.100.89; };
};

4. After synchronizing with the master server,

db.mmx.com will be transferred to /var/named on the slave DNS server.
----------------------------------------------------------------------------------------------------
28. Install and configure a DHCP server from the console

#pkginfo | grep DHCP

If it is not installed then install it from the Solaris CD (you will need Java to run dhcpconfig):
# pkgadd  -d . SUNWdhc*

#which dhcpmgr

no dhcpmgr in /usr/bin /usr/sbin /usr/sfw/bin /opt/sfw/bin /usr/dt/bin

#/usr/sadm/admin/bin/dhcpmgr &   (configure the DHCP server from the GUI)

#dhtadm

If there is no DHCP manager, let's configure it from the command line.

#ifconfig -a   (check the network)
#netstat -rn   (check the gateway)
Then create dhcp database

#dhcpconfig -D -r SUNWfiles -p /var/dhcp/ -a 192.168.1.4,8.8.8.8 -d mmx.com -l 86400

(Note: 192.168.1.4 = DNS server / mmx.com = domain / lease time = 86400 seconds)

or

#dhcpconfig  -D -r SUNWfiles -p /var/dhcp/

Created DHCP configuration file.
Created dhcptab.
Added “Locale” macro to dhcptab.
Added server macro to dhcptab – solaris-1.
DHCP server started.

#svcs -a | grep dhcp
online         18:57:30 svc:/network/dhcp-server:default

#dhtadm -P   (check the database)
Now configure network and IP

#dhcpconfig -N 192.168.1.0 -m 255.255.255.240 -t 192.168.1.1
(Note: 192.168.1.1 = Gateway)

#pntadm -P 192.168.1.0

#dhcpconfig --help

#pntadm -r SUNWfiles -p /var/dhcp/ -A 192.168.1.7 192.168.1.0
#pntadm -r SUNWfiles -p /var/dhcp/ -A 192.168.1.8 192.168.1.0
#pntadm -r SUNWfiles -p /var/dhcp/ -A 192.168.1.9 192.168.1.0
#pntadm -r SUNWfiles -p /var/dhcp/ -A 192.168.1.10 192.168.1.0

Or

#pntadm -A 192.168.1.7 -f MANUAL -i 01001BFC92BC10 -m 192.168.1.0 -y 192.168.1.0

#pntadm -P 192.168.1.0

#pntadm -L

#dhtadm -P

#svcadm restart dhcp-server

#svcs -a | grep dhcp

find /usr/ -name in.dhcpd

#/usr/lib/inet/in.dhcpd -i e1000g0 -d -v

#pntadm -P 192.168.1.0

If a DHCP server is already configured, you can unconfigure it by using the
dhcpconfig command with the unconfigure flag. For example:

# dhcpconfig -Ux
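
To verify that leases are actually handed out, a Solaris client can bring its interface up via DHCP (a sketch; e1000g0 is assumed to be the client's interface name):

# ifconfig e1000g0 dhcp start
# ifconfig e1000g0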

RHCE EXAM POST - Apache

RHCE training notes - Apache

Under Linux, Apache is provided by the httpd package. The main configuration file is /etc/httpd/conf/httpd.conf, and its configuration directives fall into
three parts:
the part that controls the Apache server process as a whole (the 'global environment');
the directives that define the parameters of the main or default server;
the virtual host settings.
There is plenty of information about httpd elsewhere, so it is not elaborated here; this experiment combines DNS with the deployment of Apache virtual host sites.

The experimental platform is CentOS 6.4, with the following environment:

Combined Apache and DNS server
Host name: rmohan    IP address: 192.168.1.40

Client tester
Host name: station   IP address: 192.168.1.2

Preparation:

Install the appropriate DNS and Apache software (you can use yum directly) and set the services to start at boot:

[root @ rmohan ~] # yum install httpd bind bind-chroot bind-util*

[root @ rmohan ~] # chkconfig httpd on

[root @ rmohan ~] # chkconfig named on

[root @ rmohan ~] # service httpd start

[root @ rmohan ~] # service named start

First, configure the DNS server.

1. Configure the primary configuration file /etc/named.conf.

In the options {} block, locate and modify the following three lines:
listen-on port 53 { any; };        # change the bracketed content to any
listen-on-v6 port 53 { any; };     # change to any
allow-query { any; };              # change to any

2. Configure the zone configuration file, appending the custom zones at the end (only forward lookup is defined here; reverse lookup is not):

Modified as follows:

[root @ rmohan ~] # cat /etc/named.rfc1912.zones
...... part omitted ......
zone "msn.com" IN {
type master;
file "msn.com.zone";
allow-update {none;};
};
zone "rm.com" IN {
type master;
file "rm.com.zone";
allow-update {none;};
};

3. In /var/named, create the data files rm.com.zone and msn.com.zone:
[root @ rmohan ~] # cd /var/named/
[root @ rmohan ~] # cp -p named.localhost rm.com.zone
[root @ rmohan ~] # cp -p named.localhost msn.com.zone
The final contents of the two files are as follows (in fact they are essentially the same):
[root @ rmohan named] # cat rm.com.zone
$ORIGIN rm.com.
$TTL 1D
@      SOA ns1.rm.com.  root.rm.com. (
0       ; serial
1D      ; refresh
1H      ; retry
1W      ; expire
3H )    ; minimum
@     IN   NS       ns1.rm.com.
www    IN   A      192.168.1.40
ns1     IN   A      192.168.1.40
@        IN   A    192.168.1.40

[root @ rmohan named] # cat msn.com.zone
$ORIGIN msn.com.
$TTL 1D
@      SOA ns1.msn.com.  root.msn.com. (
0       ; serial
1D      ; refresh
1H      ; retry
1W      ; expire
3H )    ; minimum
@     IN   NS       ns1.msn.com.
www    IN   A      192.168.1.40
ns1     IN   A      192.168.1.40
@        IN   A    192.168.1.40

[root @ rmohan named] # cd
[root @ rmohan ~] #

4. Restart the named service:
[root @ rmohan ~] # service named restart
Stopping named: [OK]
Starting named: [OK]

Second, configure the Apache server.

1. First, create the test pages to be served, as follows:
[root @ rmohan html] # ls
index.html rm msn
[root @ rmohan html] # cat index.html
this is home page
server: 192.168.1.40

[root @ rmohan html] # cat rm/index.html
this is rm page
server: 192.168.1.40

[root @ rmohan html] # cat msn/index.html
this is msn page
server: 192.168.1.40

2. Modify the main configuration file /etc/httpd/conf/httpd.conf.

First enable the NameVirtualHost directive, which designates the IP address used for name-based virtual hosts; * means whatever IP address the machine is currently using.
NameVirtualHost *:80   # this line is commented out by default; remove the leading #
Then add the following at the end of the file:
<VirtualHost *:80>

# ServerAdmin webmaster@dummy-host.example.com
DocumentRoot /var/www/html   # document root; by default all requests are served from this directory
ServerName 192.168.1.40      # this machine's name, given as an IP address or domain name
# ErrorLog logs/dummy-host.example.com-error_log   # you can specify where the error log is stored;
# by default it goes under /etc/httpd/logs/
# CustomLog logs/dummy-host.example.com-access_log common
</VirtualHost>

<VirtualHost *:80>
DocumentRoot /var/www/html/rm
ServerName www.rm.com
</VirtualHost>

<VirtualHost *:80>
DocumentRoot /var/www/html/msn
ServerName www.msn.com
</VirtualHost>
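
As a quick sanity check before restarting, httpd itself can validate the configuration: httpd -t checks the syntax, and httpd -S dumps the parsed virtual host settings:

[root @ rmohan ~] # httpd -t
[root @ rmohan ~] # httpd -S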

3. Restart the httpd service:

[root @ rmohan ~] # service httpd restart
Stopping httpd: [OK]
Starting httpd: httpd: apr_sockaddr_info_get() failed for rmohan
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
[OK]

Third, test from the client.

1. Test that DNS resolves correctly:
[root @ station ~] # nslookup www.rm.com
Server: 192.168.1.40
Address: 192.168.1.40 # 53

Name: www.rm.com
Address: 192.168.1.40

[root @ station ~] # cat /etc/resolv.conf
nameserver 192.168.1.40

[root @ station ~] # nslookup www.msn.com
Server: 192.168.1.40
Address: 192.168.1.40 # 53

Name: www.msn.com
Address: 192.168.1.40

Both lookups return the expected results, which means the DNS server is running.

2. Test page views.
Visit http://192.168.1.40 using Firefox; the result returned is:
this is home page server: 192.168.1.40

Visit http://www.rm.com using Firefox; the result returned is:
this is rm page server: 192.168.1.40

Visit http://www.msn.com using Firefox; the result returned is:
this is msn page server: 192.168.1.40

Fourth, further debugging.

1. On the server, comment the NameVirtualHost line in /etc/httpd/conf/httpd.conf
back out (it was enabled earlier; now re-comment it), restart httpd, and view the test results again.

From the client you will find that,
whether you access http://192.168.1.40, http://www.rm.com, or http://www.msn.com,
the result returned is "this is home page server: 192.168.1.40", i.e. the content of the main page at http://192.168.1.40.

2. Building on experiment 1, delete the virtual host containing "ServerName 192.168.1.40", so that the virtual host configuration reads:
...... part omitted ......
# NameVirtualHost *:80
...... part omitted ......
<VirtualHost *:80>
DocumentRoot /var/www/html/rm
ServerName www.rm.com
</VirtualHost>

<VirtualHost *:80>
DocumentRoot /var/www/html/msn
ServerName www.msn.com
</VirtualHost>

After editing, restart httpd.

Then access the sites from the client browser again: whether you visit http://192.168.1.40, http://www.rm.com, or http://www.msn.com, the result returned is "this is rm page server: 192.168.1.40", i.e. the content of the main page at http://www.rm.com.

3. Building on experiment 2, re-enable NameVirtualHost by removing the leading comment character # from the NameVirtualHost line in /etc/httpd/conf/httpd.conf, then restart httpd.

Then access the sites from the client browser again: http://192.168.1.40 and http://www.rm.com both return "this is rm page server: 192.168.1.40", while http://www.msn.com returns "this is msn page server: 192.168.1.40".

Based on the above three experiments, we can conclude:

1. If the NameVirtualHost *:80 line in the main httpd configuration file is commented out (the default), name-based virtual hosting is not enabled. In that case, no matter how many virtual hosts you write in the main file (each with a DocumentRoot pointing at a directory whose home page must be index.html), only the first one (the one written at the top) takes effect: whichever ServerName the client requests, it is ultimately served the page of the first ServerName.

2. With virtual hosting enabled, the main site (here http://192.168.1.40, corresponding to /var/www/html/index.html) must also be written as a virtual host, otherwise it cannot be accessed. As in experiment 3, accessing http://192.168.1.40 does not actually reach the main site but the first virtual host, the http://www.rm.com page.

Fifth, access control.

The examples below use the www.msn.com virtual host from the configuration above.

Use a <Directory> container inside the virtual host definition to set access control.

1. Change the virtual host configuration to the following:

<VirtualHost *:80>
DocumentRoot /var/www/html/msn
ServerName www.msn.com
<Directory "/var/www/html/msn">
order allow,deny          # evaluate allow first, then deny
allow from 192.168.1.     # allow the 192.168.1.0 network
deny from 192.168.1.123   # deny the host 192.168.1.123
</Directory>
</VirtualHost>
Restart httpd.

Now the host with IP address 192.168.1.123 cannot access the www.msn.com home page and gets the Apache test page instead; in the /etc/httpd/logs/access_log log you can see the 403 error, as follows:
192.168.1.123 - [14/May/2013:07:13:55 +0800] "GET / HTTP/1.1" 403 5039 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.24) Gecko/20111104 Red Hat/3.6.24-3.el6_1 Firefox/3.6.24"
Apart from 192.168.1.123, the other hosts on the 192.168.1.0 subnet can access the www.msn.com home page normally.

2. Modify the virtual host configuration to the following (compared with example 1, the access-control order is reversed):
<VirtualHost *:80>
DocumentRoot /var/www/html/msn
ServerName www.msn.com
<Directory "/var/www/html/msn">
order deny,allow          # evaluate deny first, then allow
allow from 192.168.1.
deny from 192.168.1.123
</Directory>
</VirtualHost>

Restart httpd.

Now all hosts on the 192.168.1.0 subnet, including 192.168.1.123, can access the www.msn.com home page normally (with order deny,allow, a matching allow overrides the deny).

3. Change the virtual host configuration to the following (compared with example 2, the allow and deny contents are swapped):

<VirtualHost *:80>
DocumentRoot /var/www/html/msn
ServerName www.msn.com
<Directory "/var/www/html/msn">
order deny,allow          # evaluate deny first, then allow
allow from 192.168.1.123
deny from 192.168.1.
</Directory>
</VirtualHost>

Restart httpd.

Now the host with IP address 192.168.1.123 can access the www.msn.com home page, while the other hosts on the 192.168.1.0 subnet cannot.
4. Modify the virtual host configuration to the following (compared with example 3, the access-control order is reversed; compared with example 2, both the order and the allow/deny contents are swapped; compared with example 1, the allow and deny contents are swapped):

<VirtualHost *:80>
DocumentRoot /var/www/html/msn
ServerName www.msn.com
<Directory "/var/www/html/msn">
order allow,deny          # evaluate allow first, then deny
allow from 192.168.1.123
deny from 192.168.1.123
</Directory>
</VirtualHost>

Now no host on the 192.168.1.0 subnet, 192.168.1.123 included, can access the www.msn.com home page.

Confused by the four experiments above? Combining them with the situations that actually come up in practical work: for requirements on a single network segment like these,
using only deny statements
and no allow statements makes the access control much easier to get right.
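
Following that advice, a deny-only version of the example 1 restriction might look like the sketch below (with order deny,allow the default action is allow, so a single deny line is enough):

<Directory "/var/www/html/msn">
order deny,allow
deny from 192.168.1.123
</Directory>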

IBM HTTP Server

Installation

Ensure you have the IBM Developer Kit, Java Technology Edition Version 1.4, installed on your machine. Files included:

* gskit.sh
* setup.jar
* A GSKit run-time executable:
* Linux for Intel: gsk7bas_295-7.0-1.10.i386.rpm
Go to the directory where you uncompressed the install image and type:
java -jar setup.jar
To do a silent installation, type:
java -jar setup.jar -silent -options silent.res
To customize the install options, edit the silent.res text file. All options are set to true
by default. To disable an option, set its value to false.
* Choose the language in which to run the installation.
* Accept the license agreement.
* The default directory: /opt/IBMIHS/
* Type of installation: typical
cd IHS
cd IHS

./install launches the HTTP Server 6.0 installer.

Accept License agreement
Next

Install Directory Directory name
/opt/IBMIHS

Select Custom
Product Installation
HTTPServer base
Security

Click Next

IBM HTTP Server communicates using the port numbers below:

HTTP Port 80
HTTP Administration Port 8008

Click Next

IBM HTTP Server 6.0 will be installed in the following location:
/opt/IBMIHS with the following features:
HTTPServer base Security

Next

Installation completed.
A checkbox then offers to launch the WebSphere Application Server - Plugin Install.
Uninstall the IBM HTTP Server:
Go to the directory where you installed the IBM HTTP Server and change to the _uninst directory.
Type java -jar uninstall.jar
For a silent uninstall, type java -jar uninstall.jar -silent
Looking at known problems on the UNIX platform

Getting the suexec module to work
The suexec module does not work unless IBM HTTP Server V2.0 is installed to the default location.
Running the /<ihs install root>/bin/httpd command
Source the /<ihs install root>/bin/envvars file first to ensure you can run the /<ihs install root>/bin/httpd command to start the IBM HTTP Server. To source the envvars file, enter . /<ihs install root>/bin/envvars at the
command line. The envvars file contains the path to the libraries needed to run the /<ihs install root>/bin/httpd command.
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.ihs.doc/info/welcome_ihs.html

Enabling access to the administration server using the htpasswd utility

The administration server is installed with authentication enabled. This means that the administration server will not accept a connection without a valid user ID and password. This is done to protect the IBM HTTP Server
configuration file from unauthorized access.

Procedure
Launch the htpasswd utility that is shipped with the administration server. This utility creates and updates the files used to store user names and passwords for basic authentication of users who access your Web server. Locate htpasswd in the bin directory.
./htpasswd -cm <install_dir>/conf/admin.passwd [login name]
where <install_dir> is the IBM HTTP Server installation directory and [login name] is the user ID that you use to log in to the administration server.
Results
The password file is referenced in the admin.conf file with the AuthUserFile directive.

Running the setupadm script (/opt/IBMIHS/bin/setupadm)

The setupadm script establishes permissions for configuration file updates. About this task

You cannot update the configuration files after a default server installation, unless you run the setupadm script, or you set permissions manually.

The setupadm script prompts you for the following input:

* User ID – The user ID that you use to log on to the administration server. The script creates this user ID.
* Group name – The administration server accesses the configuration files and authentication files
through group file permissions. The script creates the specified group.
* Directory – The directory where you can find configuration files and authentication files.
* File name – The following file groups and file permissions change:
o Single file name
o File name with wildcard
o All (default) – All of the files in the specific directory
o Processing – The setupadm script changes the group and file permissions of the configuration files
and authentication files.
The administration server requires read and write access to configuration files and authentication files to perform Web server configuration data administration. In addition to the Web server files, you must change the
permissions to the targeted plug-in configuration files.
Setting Permissions manually

Once you have created a user and group, set up file permissions as follows:

1. Update the permissions for the targeted IBM HTTP Server conf directory.
At a command prompt, change to the directory where you installed IBM HTTP Server.
Type the following commands:
chgrp <group_name> <directory_name>
chmod g+rw <directory_name>

2. Update the file permission for the targeted IBM HTTP Server configuration files.
At a command prompt, change to the directory that contains the configuration files.
Type the following commands:
chgrp <group_name> <file_name>
chmod g+rw <file_name>

3. Update the admin.conf configuration file for the IBM HTTP Server administration server.
Change to the IBM HTTP Server administration server admin.conf directory.
Search for the following lines in the admin.conf file:

User nobody
Group nobody

Change those lines to reflect the user ID and unique group name.

4. Update the file permission for the targeted plug-in configuration files.
1. At a command prompt, change to the directory that contains the plug-in configuration files.
2. Type the following commands:
chgrp <group_name> <file_name>
chmod g+rw <file_name>

Key differences from the Apache HTTP Server

IBM HTTP Server is based on the Apache HTTP Server. IBM HTTP Server includes the following additional features not available in the Apache HTTP Server:

Support for the WebSphere administrative console.
InstallShield for multiple platforms enables consistent installation of the IBM HTTP Server on different platforms.
Dynamic content generation with FastCGI.
Operational differences between Apache and IBM HTTP Server
The apachectl command is the only supported command to start IBM HTTP Server. You cannot directly invoke the httpd command because it will not find the required libraries. The apachectl command is the preferred command to start Apache V2.0 and higher, but the httpd command might work on the Apache server as expected, depending on the platform and how Apache was built. You can specify httpd options on the apachectl command line.
IBM HTTP Server supports the suEXEC program, which provides for execution of CGI scripts under a particular user ID.
If you use the suEXEC program, you must install the IBM HTTP Server to the default installation directory only. The suEXEC program uses a security model which requires that all configuration paths are hard-coded in the executable file, and the paths chosen for IBM HTTP Server are those of the default installation directory.
When an Apache user chooses an installation location for Apache at compile time, the suEXEC program is pre-built with the chosen paths, so this issue is not seen by Apache users.
Customers who need to use the suEXEC program with arbitrary configuration paths can build it with Apache on their platform and use the generated suEXEC binary with IBM HTTP Server. Customers must save and restore their custom suEXEC file when applying IBM HTTP Server maintenance.

Configuring IBM HTTP Server

Special considerations for IBM HTTP Server.
The IBM HTTP Server and administration server configuration files, httpd.conf and admin.conf respectively, support only single-byte characters (SBCS). This restriction applies to all operating system platforms.

Learn about FastCGI

FastCGI is an interface between Web servers and applications which combines some of the performance characteristics of native Web server modules with the Web server independence of the Common Gateway Interface (CGI) programming interface. IBM HTTP Server provides FastCGI support with the mod_fastcgi module. The mod_fastcgi module implements the capability for IBM HTTP Server to manage FastCGI applications and to allow them to process requests.

A FastCGI application typically uses a programming library such as the FastCGI development kit from http://www.fastcgi.com/. IBM HTTP Server does not provide a FastCGI programming library for use by FastCGI applications.

Example of mod_fastcgi configuration

Load the mod_fastcgi module into the server, and then configure FastCGI using the FastCGI directives.
The following directive is required to load mod_fastcgi into the server
LoadModule fastcgi_module modules/mod_fastcgi.so

A complete configuration example for UNIX and Linux platforms follows. In this example, the /opt/IBM/HTTPServer/fcgi-bin/ directory contains FastCGI applications, including the echo application. Requests from Web browsers for the /fcgi-bin/echo URI will be handled by the FastCGI echo application.

LoadModule fastcgi_module modules/mod_fastcgi.so
<IfModule mod_fastcgi.c>
ScriptAlias /fcgi-bin/ "/opt/IBM/HTTPServer/fcgi-bin/"

<Directory "/opt/IBM/HTTPServer/fcgi-bin/">
AllowOverride None
Options +ExecCGI
SetHandler fastcgi-script
</Directory>

FastCGIServer "/opt/IBM/HTTPServer/fcgi-bin/echo" -processes 1
</IfModule>

IBM HTTP Server remote administration
IBM HTTP Server remote administration using WebSphere Application Server Network Deployment: You can administer and configure IBM HTTP Server using the WebSphere Administrative Console. The IBM HTTP Server installation includes the IBM administration server, which installs by default during a typical IBM
HTTP Server installation. When you install IBM HTTP Server on a machine without the WebSphere Application Server, the IBM administration server is necessary for administration. In order for the IBM administration server to handle requests for the administration of IBM HTTP Server, the IBM administration server must be started and defined to an unmanaged WebSphere Application Server node. Administration of IBM HTTP Server is available without the IBM administration server if the IBM HTTP Server is installed on a machine with a WebSphere managed node.

You must define IBM HTTP Server through the WebSphere administrative console. Once defined, an administrator can administer and configure IBM HTTP Server through the WebSphere administrative console. Administration includes the ability to start and stop the IBM HTTP Server. You can display and edit the
IBM HTTP Server configuration file, and you can view the IBM HTTP Server error and access logs. The plug-in configuration file can be generated for IBM HTTP Server and propagated to the remote or locally-installed IBM HTTP Server.

On Linux platforms – troubleshooting:
/opt/IBM/HTTPServer/logs/error_log
Setting Up SSL and Certs
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp

Steps for this task

Use the IBM HTTP Server IKEYMAN utility to create a CMS key database file and a self-signed server certificate.
Enable SSL directives in the IBM HTTP Server httpd.conf configuration file:
Uncomment the LoadModule ibm_ssl_module modules/mod_ibm_ssl.so configuration directive.
Create an SSL virtual host stanza in the httpd.conf file using the following examples and directives.

LoadModule ibm_ssl_module modules/mod_ibm_ssl.so
<IfModule mod_ibm_ssl.c>
Listen 443
<VirtualHost *:443>
SSLEnable
</VirtualHost>
</IfModule>
SSLDisable
KeyFile "c:/Program Files/IBM HTTP Server/key.kdb"

 

Setting up SSL-enabled HTTPS

On the host sql:
/opt/IBMIHS/conf/httpd.conf.sql

Edit the file to include:
ServerName sql
ServerRoot "/opt/IBMIHS"
LoadModule ibm_ssl_module modules/mod_ibm_ssl.so

<IfModule mod_ibm_ssl.c>
Listen 443
<VirtualHost *:443>
SSLEnable
</VirtualHost>
</IfModule>
SSLDisable
KeyFile "/opt/IBMIHS/keys/key.kdb"
User wasadmin
Group wwwwas

DocumentRoot "/opt/IBMIHS/htdocs/en_US"
ServerAdmin seela@cse.yorku.ca
To generate the key.kdb file, run /opt/IBMIHS/bin/ikeyman, which brings up a graphical interface:
Select Key Database File
New
GUI: key database type - select CMS
Filename: key.kdb
Location: /opt/IBMIHS/keys
Password: root password
Confirm

Set expiration time: 1460 days
Stash the password to a file.
Two files are generated:
key.kdb
key.sth

Now restart the Apache server from /opt/IBMIHS/bin:
./apachectl -k stop
./apachectl -k start -f /opt/IBMIHS/conf/httpd.conf.sql

Testing in a web browser: https://sql.cs.yorku.ca will not work yet.

Disable the firewall:
/sbin/iptables -F
(the -F option flushes the tables)

Now we can connect

Add firewall rules:
/etc/sysconfig/iptables - add the following line (repeat for each source network as needed):
-A RH-Firewall-1-INPUT -s 192.168.9.0/255.255.255.0 -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j ACCEPT

 

Secure Sockets Layer protocol
SSL ensures that the data transferred between a client and a server remains private. The protocol enables the client to authenticate the identity of the server, and SSL Version 3 also supports authentication of the client identity.
When your server has a digital certificate, SSL-enabled browsers can communicate securely with your server using SSL.
SSL uses a security handshake to initiate a secure connection between the client and the server.
During the handshake, the client and server agree on the security keys to use for the session.

After the handshake, SSL encrypts and decrypts all the information in both the HTTPS request and the server response, including:

* The URL requested by the client
* The contents of any submitted form
* Access authorization information, like user names and passwords
* All data sent between the client and the server

HTTPS represents a unique protocol that combines SSL and HTTP. Specify https:// as an anchor in HTML documents that link to SSL-protected documents
A client user can also open a URL by specifying https:// to request an SSL-protected document.

Because HTTPS (HTTP + SSL) and HTTP are different protocols and use different ports (443 and 80, respectively), you can run both SSL and non-SSL requests simultaneously. This capability enables you to provide information to users without security, while providing specific information only to browsers making
secure requests.

Uninstalling the IBM HTTP Server

This section contains procedures for uninstalling the IBM HTTP Server. The uninstaller program is customized for each product installation, with specific disk locations and routines for removing installed features. The uninstaller program does not remove configuration and log files

Steps for this task
1. Stop IBM HTTP Server.
2. Change directories to the directory where you installed the IBM HTTP Server, then go to the
_uninst directory
3. Double-click uninstall to launch the uninstaller program. You can also choose to do a silent uninstall
by running the uninstall -silent command. The uninstall process on Linux and UNIX systems does
not automatically uninstall the GSKit. You have to uninstall the GSKit manually by using the
native uninstall method.
4. Click Next to begin uninstalling the product. The Uninstaller wizard displays a Confirmation panel that
lists the product and features that you are uninstalling
5. Click Next to continue uninstalling the product. The Uninstaller wizard deletes existing profiles first.
After deleting profiles, the Uninstaller wizard deletes core product files by component.
6. Click Finish to close the wizard after the wizard removes the product.

Result

The IBM HTTP Server uninstallation is now complete. The removal is logged in the ihs_install_directory/ihsv6_uninstall.log file.
Starting and stopping IBM HTTP Server

You can use the WebSphere administrative console to start and stop IBM HTTP Server. You can also use commands. See the following topics for more information:

* Starting and stopping IBM HTTP Server with the WebSphere Application Server administrative console
* Starting IBM HTTP Server on Linux and UNIX platforms
* Starting IBM HTTP Server on Windows operating systems

Starting IBM HTTP Server on Linux and UNIX platforms

* /opt/IBMIHS/bin/apachectl start|stop

To start IBM HTTP Server using an alternate configuration file, run the
apachectl -k start -f path_to_configuration_file command.
To stop IBM HTTP Server using an alternate configuration file, run the
apachectl -k stop -f path_to_configuration_file command

 

 

 

DB2 Basics

How to get a snapshot for DB2

db2 get snapshot for dynamic sql on dbname write to file | tee output.file

or

db2 "select * from table(SNAPSHOT_DYN_SQL('dbname',-1)) as t"

How to check the DB2 licence

/export/home/ldapdb2/sqllib/adm/db2licm -l

How to get the DB2 version

db2level

How to make sure DB2 starts and works with WAS on AIX

Modify startServer.sh and add:

. ~db2inst1/sqllib/db2profile

or add the lines below to the file /etc/profile, and .dtprofile:

#The following three lines have been added by UDB DB2.
if [ -f /home/wasadmin/sqllib/db2profile ]; then
. /home/wasadmin/sqllib/db2profile
fi

How to move a DB2 database

chown ldapdb2 /ldapfs
su - ldapdb2
db2start
db2 force application all
db2 terminate
db2 backup db ldapdb2 to /Another_filesystem
db2 drop db ldapdb2
db2 create db ldapdb2 on /ldapfs
db2 force application all
db2 terminate
db2 restore db ldapdb2 from /Another_filesystem replace existing redirect
db2 "set tablespace containers for 3 using (path '/disks/1/3', path '/disks/2/3', path '/disks/3/3', path '/disks/4/3', path '/disks/5/3')"
db2 restore db ldapdb2 continue

Additionally, I recommend you to set the path to the DB2 log file directory on another file system to eliminate output I/O wait time as follows:

db2 update database configuration for ldapdb2 using newlogpath [path]

How to stop a db2 instance when it is still active

I often hit this problem when trying to stop the db2 instance using:

db2stop
SQL1025N The database manager was not stopped because databases are still active.

Here you can find a solution

This appendix describes the necessary steps to stop and start a DB2 instance. There are many ways to stop and start a DB2 instance, but the following steps will guide you to stop a DB2 instance to ensure that any defunct DB2 processes, interprocess communications, and defunct DARI processes have been removed successfully.

Current configuration: Instance: db2inst1; Database: sample; Server: phantom

Stop the DB2 instance. Check existing applications that are currently connected to the database by logging on to phantom server as DB2 instance owner db2inst1:

$ db2 list applications

Auth Id   Appl. Name   Appl. Handle    Appl. Id                       DB Name   # of Agents
-------   ----------   -------------   ----------------------------   -------   -----------
DB2INST1  db2bp        207             *LOCAL.db2inst1.010824003917   SAMPLE    1
DB2INST1  java         276             CCF21FFC.E5D8.010829004049     SAMPLE    1
DB2INST1  java         51              CCF21FFC.E5D9.010829004051     SAMPLE    1

If there is any application connected to the database, you can tell who is currently connected and from which location they are connected. In this case, there is one local connection from db2inst1 user ID, and there are two remote connections from IP address: xxx.xxx.31.252 converted from hex to decimal: CCF21FFC.

For remote connections, after you get the IP address, you can get the hostname by issuing the nslookup command:

$ /usr/sbin/nslookup xxx.242.31.252

Server:  charter.xxx.com
Address:  xxx.242.31.83

Name:    phantom.xxx.com
Address:  xxx.242.31.252

If there are any applications connected to the database, verify that they are not currently executing:

$ db2 list applications show detail | egrep -i "executing|pending"

If there are applications executing or pending, you can now force them off. Then verify to make sure there is no application connected to the database. If you see the following message, you’re ready to stop the DB2 instance:

$ db2 force application all

DB20000I  The FORCE APPLICATION command completed successfully.
DB21024I  This command is asynchronous and may not be effective immediately.

$ db2 list applications

SQL1611W  No data was returned by Database System Monitor.  SQLSTATE=00000

Now you can stop the DB2 instance. When you get the message “SQL1064N DB2STOP processing was successful” you’re ready to do the next step. If you get the message below, you must start this step again:

$ db2stop

SQL1025N  The database manager was not stopped because databases are still active.

LAST RESORT. If for some reason you cannot stop the DB2 instance or DB2 commands are hung, you must run this utility to remove the DB2 engine and client’s IPC resources for that instance. This is your lifesaver:

$ ipclean

ipclean: Removing DB2 engine and client's IPC resources for db2inst1.

Stop the DB2 Administration Server instance. Skip this step if DB2 Admin instance is not running; otherwise, execute this command:

$ db2admin stop

Remove defunct DARI processes, DB2 background processes, or other defunct threads. List all DB2 processes for this instance:

$ ps -ef | grep db2

db2as 23797 23796  0   Aug 28 ?        0:00 db2sysc
db2as 23800 23798  0   Aug 28 ?        0:00 db2sysc
db2inst1 22229     1  0 13:08:01 pts/5    0:00 /db2/dbhome/db2inst1/sqllib/bin/db2bp 20580 5
db2as 23802 23797  0   Aug 28 ?        0:00 db2sysc
db2as 23801 23797  0   Aug 28 ?        0:00 db2sysc
db2as 23799 23797  0   Aug 28 ?        0:00 db2sysc

From the list above, we notice that there are processes belonging to the DB2 Admin services instance, so you must leave them alone. There is only one process that belongs to db2inst1, and that is a DB2 background process that did not get cleaned up after executing ipclean. Get the PID number and kill that process:

$ kill -9 22229

Most of the time, you will see many defunct processes, and to save time, you should execute the following command instead of executing the kill -9 ${PID} command many times:

$ ps -ef | grep db2inst1 | awk '{print "kill -9 "$2}' > /tmp/kpid
$ chmod +x /tmp/kpid
$ /tmp/kpid
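
On systems that provide pkill, an alternative to the awk pipeline above is a single command (make sure it matches only the intended instance owner's processes):

$ pkill -9 -u db2inst1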

Verify that no defunct processes are left. Repeat this step if necessary:

$ ps -ef | grep db2inst1

Remove defunct interprocess communication segments.

List all memory segments:

$ ipcs -am | grep db2inst1
IPC status from  as of Thu Aug 30 13:16:55 2001
T      ID            KEY            MODE           OWNER          GROUP
Shared Memory:
m      9910          0x74006380     --rw-rw-rw-    db2inst1       db2grp
m      59714         0x61006380     --rw-------    db2inst1       db2grp

From the list above, you notice that there are two memory segments that were not removed when executing ipclean. You must remove them manually:

$ ipcrm -m 9910
$ ipcrm -m 59714

List all semaphore segments:

$ ipcs -as | grep db2inst1
IPC status from  as of Thu Aug 30 13:16:55 2001
T      ID          KEY          MODE         OWNER        GROUP
Shared Memory:
s      1900549     0x74006380   --ra-ra-ra-  db2inst1     db2grp   1
s      1310727     00000000     --ra-ra----  db2inst1     db2grp   1
s      2031624     0x73006380   --ra-ra-ra-  db2inst1     db2grp   1

From the list above, notice that there are three semaphore segments that were not removed after executing ipclean. You must remove them manually:

$ ipcrm -s 1900549
$ ipcrm -s 1310727
$ ipcrm -s 2031624

List all message queues:

$ ipcs -aq | grep db2inst1

IPC status from  as of Thu Aug 30 13:16:55 2001
T           ID         KEY         MODE         OWNER     GROUP
Message Queues:
q           1572868    0x01dadd16  -Rrw-------  db2inst1  db2grp 65535
q           901125     0x01eba5ed  --rw-------  db2inst1  db2grp 65535
q           1609739    00000000    --rw-------  db2inst1  db2grp 65535
q           659468     00000000    -Rrw-------  db2inst1  db2grp 65535

From the list above, notice that there are four message queues that were not removed after executing ipclean. You must remove them manually:

$ ipcrm -q 1572868
$ ipcrm -q 901125
$ ipcrm -q 1609739
$ ipcrm -q 659468

Verify that there are no defunct interprocess communications left. Repeat this step if necessary:

$ ipcs -a | grep db2inst1
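
If many resources are left behind, removing them one ID at a time gets tedious. Here is a minimal sketch that automates the three ipcrm passes above; it assumes an ipcs output format where the resource ID is in the second column, so check your platform’s ipcs output before running it:

for t in m s q
do
  # list each IPC type, keep only db2inst1's entries, remove by ID
  ipcs -$t | grep db2inst1 | awk '{print $2}' | while read id
  do
    ipcrm -$t $id
  done
done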

Before you start the DB2 instance, it is best practice to back up the previous db2diag.log, any event logs, the notification log, and the associated trap files, and start with fresh copies. Move the current diagnostic files to the backup directory:

$ mkdir -p /db2/backup/db2inst1/diaglogSep12
$ cd /db2/dbhome/db2inst1/sqllib/db2dump
$ mv db2diag.log /db2/backup/db2inst1/diaglogSep12/
$ mv db2eventlog* /db2/backup/db2inst1/diaglogSep12/
$ mv db2inst1.nfy /db2/backup/db2inst1/diaglogSep12/
$ touch db2diag.log db2inst1.nfy db2eventlog.nnn
$ chmod 664 db2diag.log db2inst1.nfy db2eventlog.*

Here nnn is the database partition number.

If there are any trap files, group them together:

$ cd /db2/dbhome/db2inst1/sqllib/db2dump
$ tar -cvf /db2/backup/db2inst1/diaglogSep12/trapAug292001.tar t* c* l* [0-9]*

Or execute this keepDiagLog.sh script:

#!/bin/ksh
#
# Clean up db2diag.log, trap files, dump files, etc
#
# Usage:  keepDiagLog.sh [instname] [rootdir] [dbname] [olddir]
#
# Execute as DB2 instance owner
#
LOGTIME=`date '+%y%m%d%H%M%S'`
DIAGDIR=${HOME}/sqllib/db2dump
typeset instname=${1-db2inst1}
typeset ROOTDIR=${2-/dbbackup}
typeset dbname=${3-sample}
typeset OLDDIR=${4-${ROOTDIR}/${instname}/${dbname}/db2diag${LOGTIME}}
mkdir -p ${OLDDIR}
cd ${DIAGDIR}
cp -r * ${OLDDIR}/
for j in `ls`
do
  if  [ -d "${j}" ]; then
    rm -r ${j}
  else
    rm ${j}
  fi
done
touch db2diag.log ${instname}.nfy
chmod 666 db2diag.log ${instname}.nfy
# You need to add the steps for the event log files based on the
# number of database partitions defined on your server.
exit 0
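
For example, to archive the diagnostic files for instance db2inst1 under /dbbackup with database name sample (these happen to be the script’s defaults):

$ ./keepDiagLog.sh db2inst1 /dbbackup sample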

Now you’re ready to start the DB2 instance:

$ db2start

SQL1063N  DB2START processing was successful.

Finally, start the DB2 Admin instance:

$ db2admin start

Verify the database connection.

Connect to the sample database:

$ db2 connect to sample

   Database Connection Information
 Database server        = DB2/SUN 8.1.0
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

Disconnect from the sample database:

$ db2 terminate

Reactivate the database to improve performance.

Activate the sample database:

$ db2 activate database sample
DB20000I  The ACTIVATE DATABASE command completed successfully.

Deploy a GeoTrust wildcard certificate on Zimbra

Download the root CA certificate and save it as mainca.crt.

Download the intermediate certificate and save it as intermediate.crt.

Then concatenate them into a single chain file and verify the server certificate against it:

cat intermediate.crt mainca.crt > rootEN.crt

/opt/zimbra/openssl/bin/openssl verify -purpose sslserver -CAfile rootEN.crt www.rmohan.com.crt

www.rmohan.com.crt: OK

-bash-4.1# /opt/zimbra/bin/zmcertmgr verifycrt comm www.rmohan.com.key www.rmohan.com.crt rootEN.crt
** Verifying www.rmohan.com.crt against www.rmohan.com.key
Certificate (www.rmohan.com.crt) and private key (www.rmohan.com.key) match.
Valid Certificate: www.rmohan.com.crt: OK

Take a backup of /opt/zimbra/ssl/zimbra/commercial, then copy the new files into that directory:

cp /root/software/ss11/www.rmohan.com.crt commercial.crt
cp /root/software/ss11/www.rmohan.com.key commercial.key
cp /root/software/ss11/rootEN.crt commercial_ca.crt

/opt/zimbra/bin/zmcertmgr verifycrt comm /opt/zimbra/ssl/zimbra/commercial/commercial.key /opt/zimbra/ssl/zimbra/commercial/commercial.crt /opt/zimbra/ssl/zimbra/commercial/commercial_ca.crt

/opt/zimbra/bin/zmcertmgr deployca /opt/zimbra/ssl/zimbra/commercial/commercial.crt /opt/zimbra/ssl/zimbra/commercial/commercial.key /opt/zimbra/ssl/zimbra/commercial/commercial_ca.crt

cd /opt/zimbra/ssl/zimbra/commercial

/opt/zimbra/bin/zmcertmgr verifycrt comm commercial.key commercial.crt commercial_ca.crt
/opt/zimbra/bin/zmcertmgr deploycrt comm commercial.crt commercial_ca.crt

su - zimbra

zmcontrol stop; zmcontrol start;
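
Once the services are back up, it is worth confirming that the new certificate is actually being served. A quick check using openssl s_client, where mail.rmohan.com stands in for your real mail host name:

echo | openssl s_client -connect mail.rmohan.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates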

RSYNC

What is rsync?

Rsync is a program for synchronizing two directory trees, even when they live on different file systems or on different computers. It can run its host-to-host communication over ssh to keep things secure and to provide key-based authentication. If a file is already present on the target and identical to the source, it is not transmitted at all; if it differs, only the changed parts are transferred. These features give rsync excellent performance over a network.
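
As a minimal illustration (the host name and paths are placeholders), mirroring a remote directory over ssh looks like this:

# rsync -av -e ssh rmohan:/home/rmohan/ /backup/home_rmohan/

Here -a preserves permissions, ownerships, and timestamps, and -e ssh runs the transfer through ssh.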

What are hard links?

Hard links are created with the same ln command as symlinks, just without the -s switch. A hard link is a second directory entry pointing at the same inode and disk blocks. Unlike a symlink, there isn’t a file and a pointer to the file, but rather two equal links to the same file. If you delete either entry, the other remains and still contains the data. Here is an example of both:

-------- Symbolic Link Demo --------
% echo foo > x
% ln -s x y
% ls -li ?
38062 -rw-r--r--  1 msn  admin 4 Jul 25 14:28 x
38066 lrwxrwxrwx  1 msn  admin 1 Jul 25 14:28 y -> x
-- As you can see, y is only a pointer to x.
% grep . ?
x:foo
y:foo
-- They contain the same data.
% rm x
% ls -li ?
38066 lrwxrwxrwx  1 msn  admin 1 Jul 25 14:28 y -> x
% grep . ?
grep: y: No such file or directory
-- Now that x is gone y is simply broken.
-------- Hard Link Demo --------
% echo foo > x
% ln x y
% ls -li ?
38062 -rw-r--r--  2 msn  admin 4 Jul 25 14:28 x
38062 -rw-r--r--  2 msn  admin 4 Jul 25 14:28 y
-- They are the same file occupying the same disk space.
% grep . ?
x:foo
y:foo
-- They contain the same data.
% rm x
% ls -li ?
38062 -rw-r--r--  1 msn  admin 4 Jul 25 14:28 y
% grep . ?
y:foo
-- Now y is simply an ordinary file.
-------- Breaking a Hard Link --------
% echo foo > x
% ln x y
% ls -li ?
38062 -rw-r--r--  2 msn  admin 4 Jul 25 14:34 x
38062 -rw-r--r--  2 msn  admin 4 Jul 25 14:34 y
% grep . ?
x:foo
y:foo
% rm y ; echo bar > y
% ls -li ?
38062 -rw-r--r--  1 msn  admin 4 Jul 25 14:34 x
38066 -rw-r--r--  1 msn  admin 4 Jul 25 14:34 y
% grep . ?
x:foo
y:bar
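
The number right after the permissions in the ls -li output is the link count. With GNU tools you can also look hard links up directly; for example:

% stat -c %h x        # print x's hard link count (GNU stat)
% find . -samefile x  # list every name that shares x's inode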

Why backup with rsync instead of something else?

Disk based: Rsync is a disk based backup system. It doesn’t use tapes which are too slow to backup (and more importantly restore) modern systems with large hard drives. Also, disk based backup solutions are much cheaper than equivalently sized tape backup systems.
Fast: Rsync only backs up what has changed since the last backup. It NEVER has to repeat the full backup unlike most other systems that have monthly/weekly/daily differential configurations.
Less work for the backup client: Most of the work in rsync backups including the rotation process is done on the backup server which is usually dedicated to doing backups. This means that the client system being backed up is not hit with as much load as with some other backup programs. The load can also be tailored to your particular needs through several rsync options and backup system design decisions.
Fastest restores possible: If you just need to restore a single file or set of files it is as simple as a cp or scp command. Restoring an entire file system is just a reverse of the backup procedure. Restoring an entire system is a bit long but is less work than backup systems that require you to reinstall your OS first and about the same as other manual backup systems like dump or tar.
Only one restore needed: Even though each backup is an incremental they are all accessible as full backups. This means you only restore the backup you want instead of restoring a full and an incremental or a monthly followed by a weekly followed by a daily.
Cross Platform: Rsync can backup and recover anything that can run rsync. I have used it to backup Linux, Windows, DOS, OpenBSD, Solaris, and even ancient SunOS 4 systems. The only limitation is that the file system that the backups are stored on must support all of the file metadata that the file systems containing files to be backed up supports. In other words if you were to use a vfat file system for your backups you would not be able to preserve file ownership when backing up an ext3 file system. If this is a problem for you try looking into rdiff-backup.
Cheap: It doesn’t seem like it would be cheap to have enough disk space for two copies of everything and then some, but it is. With tape drives you have to choose between a cheap drive with expensive tapes or an expensive drive with cheap tapes. In a hard drive based system you just buy cheap hard drives and use RAID to tie them together. My current backup server uses two 500GB IDE drives in a software RAID-0 configuration, for a total of 1TB at about $100; that is about 1/6th what I paid for the DDS3 tape drive I used to use, and it doesn’t even include the tapes, which cost about $10 per 12GB.
Internet: Since rsync can run over ssh and only transfers what has changed, it is well suited to backups across the internet: keeping a web site at a hosting company or a co-located server backed up, or feeding the increasingly popular internet backup services.
Do-it-yourself: There are FOSS backup packages out now that use rsync as their back end but the nice thing here is that you are using standard command line tools (rsync, ssh, rm) so you can engineer your own backup system that will do EXACTLY what you want and you don’t need a special tool to restore.

Why/When wouldn’t you want to use rsync for backups?

Databases: Rsync is a file level backup so it is not suitable for databases. If your primary data is databases then you should look somewhere else. If you have databases but they are not your primary data then there are procedures below to integrate database backups into the rsync backups.
Windows: If you plan to backup windows boxes then rsync probably isn’t for you. It is possible to backup Windows boxes with rsync but the system recovery process is UGLY and if you want a complete backup of the OS you will have to boot the computer into Linux or use Shadow Copy to be able to read some of the files. I have yet to find a simple comprehensive procedure for restoring a complete Windows system based on a copy of all files from C:. If someone has such a procedure I would love to see it.
Compression: Since rsync doesn’t put the files into any kind of archive, there is no compression at all. In most cases it is still more cost effective to store uncompressed data on a hard drive than compressed data on tape or some other media, but this might not be true for everyone. Also, most modern file formats are already compressed, so in many cases compression wouldn’t help anyway.
Commercial support: Like most of the stuff I talk about there is no real commercial support for this. If you want a backup software vendor that you can call and beg for help from then go buy some big commercial backup system but expect to pay a ton of money for something that isn’t anywhere near as flexible as rsync.
Security: Since rsync runs over ssh you would normally set it up so that root on your backup server can ssh into all of your other machines as root without a password. This means that the security of your backup server becomes very important as anyone who roots it can root any other server with one command. There are ways that you could design around this or you could simply require the person running the backup to type in the root passwords as it goes but those solutions all over-complicate things. Giving your backup server all of the keys isn’t really as bad as it sounds though when you consider that in any other backup system the backup server would still have some kind of root access to the other servers as well as a complete copy of them that a hacker could use to find vulnerabilities. Note that it is possible to restrict the ssh key used by the backups to only work from the backup server using the from= parameter in the authorized_keys file.
Do-it-yourself: Again, this is a do-it-yourself system. You have to decide how you want your backups to work and how you want them organized. If you don’t want to write/modify shell scripts then look for something else or look at the available backup systems that use rsync as their back end. Examples of less do-it-yourself oriented backup systems include rsnapshot, dirvish, and hopefully the one I created based on rsync.

Why not just use RAID / Is this like using RAID-1? / Is this like DRBD?

I don’t think I can ever say this enough times: RAID is NOT a backup system! RAID (other than level 0) does a wonderful job of protecting your data from disk failures. However, it provides absolutely NO protection against file corruption, files destroyed by a virus or a hacker, or the “oops, I deleted the wrong file” problem which most of us have encountered. There is a time and a place for RAID, and RAID is not always needed, but data should ALWAYS be backed up regardless of what media it is stored on or how redundant that media may be. Networked mirroring solutions like DRBD have the same drawbacks as RAID: they are a simple real-time mirror. My general rule of thumb is that if you can’t restore your data to the way it was last Monday or last night using a storage device other than the one the data was on last Monday, then you don’t have a backup system.

Do I need to backup the OS or just the data?

In my opinion, yes, you need to backup the OS as well as the data. Many people feel that the OS is easily recreated by doing a re-install plus loading a list of applications saved during the backup run. While this is true in theory, it isn’t so easy in practice. If you ever suffer the catastrophic loss of a server you will find out very quickly that every minute counts. If you have a backup of the OS and an established, practiced procedure for restoring it, the recovery will go quickly and will probably work the first time. If your recovery procedure includes “install the OS” and “install all the applications”, expect to add a full day of listening to complaints while you do those steps. Also, in terms of gigabytes, the OS is usually tiny compared to the data it supports; the extra disk space required to back it up will probably not even affect the choice of how big to make the backup system. With the typical ratio of OS to data it is just silly not to backup the OS. Finally, the worst case scenario is that your data-only backup system misses some configuration or data file that was assumed to be part of the OS but had been modified. If you aren’t backing it up, you will simply lose it.

Why all this talk of a backup server? Why not just use an external hard drive?

While it is completely possible to do the backups this way using these procedures (and I have done it this way myself) there are a couple of drawbacks…
Security: One of the reasons we have backups is because of the possibility of malicious activity (hackers, worms, trojans, etc). If your backup device is plugged into the computer being backed up then any malicious admin or software that can destroy your data can also destroy your backups. Keeping your backups on a separate isolated server protects them from this possibility. Note that this is also why I prefer to pull backups from a script running on the backup server rather than pushing backups from a script running on the backup clients.
Performance: Rsync’s ability to transfer only the parts of a file that have changed does not work on local transfers, because the feature would actually be counter-productive there: rsync would have to read and hash both versions of the file and then write out the new version, instead of simply reading from the source and writing to the target. Also, most external hard drives are USB, which is a pretty slow interface. Note that this is also true if you use a network mount (such as NFS, Samba, or CIFS) to access the remote data instead of a network transport like ssh or rsyncd. Read the rsync man page section on --whole-file for more information.

What about a Network Attached Storage (NAS) device instead of a server?

This depends on the quality of the NAS device. Rsync is designed to reduce network IO at the expense of disk IO and CPU cycles. It can do this because normally there are two instances of rsync running on two systems with a network protocol in between; both instances have local disk access to one side of the transfer, so they can do calculations that reduce the amount of data that needs to cross the network. If rsync is only running at one end of the connection, then disk IO really is network IO and those features are automatically disabled. Some of the higher end NAS devices support rsync directly and can be treated exactly like a standard rsync backup server. Unfortunately not all NAS devices are this smart: some only provide access via network mounts, and some only support CIFS rather than NFS. If yours doesn’t at least support NFS I would not suggest using it for rsync backups. If your best choice is NFS, note that --whole-file will be forced by rsync to reduce the performance impact of NFS.

How do you do off-site/off-line backups with rsync?

The best way to do an off-site or off-line backup is to do the rsync backup like normal and then backup the backup to tape or whatever media you want to use for your off-line/off-site backups. This gives you all the speed advantages of rsync during the actual backups and restores while allowing you to do the slower tape backups during the day when the backup server would otherwise be idle. Note that I do not recommend using removable hard drives for off-site rsync backups. Hard drives have very fragile moving parts and if you are constantly transporting them around they will not last long and will probably fail when you need them most as that is when they will be transported.

How do you handle databases?

Databases can’t just be backed up like files, because database engines are constantly making changes to the database files at the block level. If you backed them up with a file based tool like rsync, the backup would be inconsistent and possibly even unusable. The best way to backup most databases is to take an LVM snapshot of the volume holding the database and then rsync the snapshot. This gives you all the advantages of an rsync backup with as little impact on the running database as possible. If you can’t use LVM snapshots, your next best bet is to use the database-specific tools (mysqldump, pg_dump, etc.) to dump the database contents to files that can be backed up. If all else fails you can lock or shut down the database engine so the files are not changing during the backup, but that is a high-impact outage.
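
Here is a minimal sketch of the snapshot approach, assuming the database files live on an LVM logical volume /dev/vg0/dbdata with free space available in the volume group (all names are placeholders):

# lvcreate --snapshot --size 2G --name dbsnap /dev/vg0/dbdata
# mkdir -p /mnt/dbsnap
# mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
# rsync --archive --numeric-ids /mnt/dbsnap/ /backup/rsync/dbhost/_dbdata.incomplete/
# umount /mnt/dbsnap
# lvremove -f /dev/vg0/dbsnap

Some databases also want a brief lock or flush (for example FLUSH TABLES WITH READ LOCK on MySQL) at the moment the snapshot is created, so that the on-disk files are consistent.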

How much space does it take to do rsync backups while keeping old copies?

This completely depends on how much change there is between each backup and how many backups you retain. I have seen it as low as 5% and as high as 40%, but it is completely dependent on your data and your retention policy.

Organizing backups

Since this is a do-it-yourself system, the layout is totally up to you to design. I have my backup storage mounted under /backup and put all of my rsync backups under /backup/rsync. Within that directory I make a directory for each host that gets backed up. Then, for each backup of each file system, I change ‘/’ to ‘_’ in the mount point name and append a time stamp, so my backup of /home/rmohan done at 17:47 on 2005-07-25 is stored in /backup/rsync/rmohan/_home_rmohan.2005-07-25.17-47-42. When the backup is done I create a symlink from that directory to /backup/rsync/rmohan/_home_rmohan.current to make it easier to find, especially from scripts. While the backup is running I use “incomplete” in place of the date and time in the name, so that an aborted or failed backup does not look like a complete one and does not count against the number of old backups to keep.
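
In shell terms the naming scheme boils down to something like this sketch (the host and mount point are examples):

HOST=rmohan
FS=/home/rmohan
NAME=`echo ${FS} | tr / _`             # /home/rmohan becomes _home_rmohan
STAMP=`date '+%Y-%m-%d.%H-%M-%S'`
DEST=/backup/rsync/${HOST}/${NAME}.${STAMP}
# after a successful backup, repoint the .current symlink
ln -sfn ${NAME}.${STAMP} /backup/rsync/${HOST}/${NAME}.current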

Rotating backups

Rsync does the incremental backups using hard links and the --link-dest parameter. However, it has no mechanism for purging old backups once they reach a predefined age. The purging can be done with a simple rm -rf of the oldest backup(s) as needed, but deleting a large directory tree can take a significant amount of time, and it probably isn’t something you want to waste your backup window on. Therefore, instead of an rm -rf, I suggest just doing an mv to a deletion-pool directory within the same file system; you can then do all of the deletions later, after the backups have finished.

Here is how the organization with the hard links looks. You can determine the current backup with:

# readlink _home_rmohan.current
_home_rmohan.2005-07-25.15-32-42

Here is an example of 10 backups of my home directory:

# du -shc _home_rmohan.2*
9.7G    _home_rmohan.2005-06-21.15-29-25
161M    _home_rmohan.2005-06-22.20-12-01
207M    _home_rmohan.2005-06-30.18-36-21
125M    _home_rmohan.2005-07-01.12-15-05
173M    _home_rmohan.2005-07-05.11-05-34
181M    _home_rmohan.2005-07-07.13-43-22
176M    _home_rmohan.2005-07-07.17-22-09
234M    _home_rmohan.2005-07-13.11-14-32
160M    _home_rmohan.2005-07-18.16-32-54
168M    _home_rmohan.2005-07-25.15-32-42
12G     total
# foreach f (_home_rmohan.2*)
foreach? du -sh $f
foreach? end
9.7G    _home_rmohan.2005-06-21.15-29-25
9.7G    _home_rmohan.2005-06-22.20-12-01
9.7G    _home_rmohan.2005-06-30.18-36-21
9.7G    _home_rmohan.2005-07-01.12-15-05
9.7G    _home_rmohan.2005-07-05.11-05-34
9.8G    _home_rmohan.2005-07-07.13-43-22
9.8G    _home_rmohan.2005-07-07.17-22-09
9.8G    _home_rmohan.2005-07-13.11-14-32
9.7G    _home_rmohan.2005-07-18.16-32-54
9.8G    _home_rmohan.2005-07-25.15-32-42

Note that each backup individually is complete, but taken together they cause only a small increase in disk usage. This concept is the key to rsync incremental backups.

To purge old backups, simply count how many there are; if there are too many, move the oldest one to your deletion pool, and repeat until no more than the desired number remain.
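
Here is a hedged sketch of that purge logic, keeping the ten most recent backups and parking the rest in a .trash deletion pool on the same file system; it relies on the timestamped names sorting chronologically and on GNU head accepting a negative line count:

KEEP=10
cd /backup/rsync/rmohan
mkdir -p .trash
ls -d _home_rmohan.2* | head -n -${KEEP} | while read old
do
  mv "${old}" .trash/
done
# empty the pool later, outside the backup window
rm -rf .trash/*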

Actually backing up

Now we get to actually look at rsync. When you run rsync you will tell it to backup the live file system into a new empty directory and to look to the previous backup for files that have already been backed up. Whenever rsync finds a new file it will copy over that file. Whenever it finds a modified file it will copy over the differences making a new file in the new backup directory but leaving the old version of the file as it was in the old backup directory. When rsync finds a file that has not changed since the last backup it will simply be hard linked into the new backup directory requiring almost no additional disk space. There is a wide variety of options that can be used with rsync to tailor it to your specific needs but here is what my system uses by default:

# rsync --archive --one-file-system --hard-links \
--human-readable --inplace --numeric-ids --delete \
--delete-excluded --exclude-from=excludes.txt \
--link-dest=/backup/rsync/rmohan/_home_rmohan.2005-07-25.15-32-42 \
rmohan:/home/rmohan/ /backup/rsync/rmohan/_home_rmohan.incomplete/

I also add --verbose --progress --itemize-changes when I am watching the backup run instead of launching it from a cron job. Now I will explain the components of that rather long command…
rsync: Duh, the rsync command 😉
--archive: This causes rsync to backup (they call it “preserve”) things like file permissions, ownerships, and timestamps.
--one-file-system: This causes rsync to NOT recurse into other file systems. If you use this like I do then you must backup each file system (mount point) one at a time. The alternative is to simply backup / and exclude things you don’t want to backup (like /proc, /sys, /tmp, and any network or removable media mounts).
--hard-links: This causes rsync to maintain hard links that exist on the server being backed up. This has nothing to do with the hard links used during the rotation.
--human-readable: This tells rsync to output numbers of bytes with K, M, G, or T suffixes instead of just long strings of digits.
--inplace: This tells rsync to update files on the target at the block level instead of building a temporary replacement file. It is a significant performance improvement, however it should not be used for things other than backups, or if your version of rsync is old enough that --inplace is incompatible with --link-dest.
--numeric-ids: This tells rsync not to attempt to translate UID <-> user name or GID <-> group name. This is very important when doing backups and restores. If you do a restore from a live CD such as SystemRescueCD or Knoppix, your file ownerships will be completely screwed up if you leave this out.
--delete: This tells rsync to delete files from the backup that are no longer on the server. This is less important when using --link-dest because you should be backing up to an empty directory, so there would be nothing to delete; I include it because the *.incomplete directory I am backing up to may actually be left over from a previous failed run and may have things to delete.
--delete-excluded: This tells rsync that it can delete stuff from a previous backup that is now on the exclude list.
--exclude-from=excludes.txt: This is a plain text file with a list of paths that I do not want backed up, one path per line. I tend to add things that are always changing but unimportant, such as log and temp files. If you have a ~/.gvfs entry you should add it too, as it will cause a non-fatal error.
--link-dest=/backup/rsync/rmohan/_home_rmohan.2005-07-25.15-32-42: This is the most recent complete backup, the one that was current when we started. We are telling rsync to link to this backup for any files that have not changed.
rmohan:: This is the host name that rsync will ssh to.
/home/rmohan/: This is the path on the server that is to be backed up. Note that the trailing slash IS significant.
/backup/rsync/rmohan/_home_rmohan.incomplete/: This is the empty directory we are going to backup to. It should be created with mkdir -p first. If the directory exists from a previous failed or aborted backup, it will simply be completed. This trailing slash is not significant but I prefer to have it.
--verbose: This causes rsync to list each file that it touches.
--progress: This adds to the verbosity and tells rsync to print a % completion and transfer speed while transferring each file.
--itemize-changes: This adds to the file list a string of characters explaining why rsync believes each file needs to be touched. See the man page for the explanation of the characters.
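
Putting the pieces together, here is a minimal wrapper sketch for one backup run. It reuses the options explained above, backs up into the .incomplete directory, and only stamps the directory and repoints .current if rsync exits cleanly (the host, paths, and excludes.txt are examples; real error handling is left out):

HOST=rmohan
NAME=_home_rmohan
BASE=/backup/rsync/${HOST}
PREV=`readlink ${BASE}/${NAME}.current`
STAMP=`date '+%Y-%m-%d.%H-%M-%S'`
mkdir -p ${BASE}/${NAME}.incomplete
rsync --archive --one-file-system --hard-links \
--human-readable --inplace --numeric-ids --delete \
--delete-excluded --exclude-from=excludes.txt \
--link-dest=${BASE}/${PREV} \
${HOST}:/home/rmohan/ ${BASE}/${NAME}.incomplete/
if [ $? -eq 0 ]; then
  mv ${BASE}/${NAME}.incomplete ${BASE}/${NAME}.${STAMP}
  ln -sfn ${NAME}.${STAMP} ${BASE}/${NAME}.current
fi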

Recovering files from backups

Because rsync doesn’t put the backed up files into any kind of archive this is as simple as copying a file. Just find the file you need on the backup server and copy it to where you need it to be. If you are restoring it to another server just use rsync or scp to get it there. Here are 2 examples of files that can be restored from my home directory:

# ls -li _home_rmohan.2*/msn/bin/encode
3605946 5 msn  admin 2223 Jul  2 11:34 _home_rmohan.2005-07-05.11-05-34/msn/bin/encode
3605946 5 msn  admin 2223 Jul  2 11:34 _home_rmohan.2005-07-07.13-43-22/msn/bin/encode
3605946 5 msn  admin 2223 Jul  2 11:34 _home_rmohan.2005-07-07.17-22-09/msn/bin/encode
3605946 5 msn  admin 2223 Jul  2 11:34 _home_rmohan.2005-07-13.11-14-32/msn/bin/encode
3605946 5 msn  admin 2223 Jul  2 11:34 _home_rmohan.2005-07-18.16-32-54/msn/bin/encode
4853134 1 msn  admin 4012 Jul 21 19:31 _home_rmohan.2005-07-25.15-32-42/msn/bin/encode
# ls -li _home_rmohan.2*/msn/bin/mp3db
4074469 1 msn  admin 29598 Jun 19 16:01 _home_rmohan.2005-06-21.15-29-25/msn/bin/mp3db
4082467 1 msn  admin 29943 Jun 22 19:10 _home_rmohan.2005-06-22.20-12-01/msn/bin/mp3db
4124342 1 msn  admin 30570 Jun 30 17:22 _home_rmohan.2005-06-30.18-36-21/msn/bin/mp3db
2617551 1 msn  admin 30701 Jul  1 12:17 _home_rmohan.2005-07-01.12-15-05/msn/bin/mp3db
3605948 1 msn  admin 35604 Jul  1 16:50 _home_rmohan.2005-07-05.11-05-34/msn/bin/mp3db
4411207 2 msn  admin 35668 Jul  6 11:06 _home_rmohan.2005-07-07.13-43-22/msn/bin/mp3db
4411207 2 msn  admin 35668 Jul  6 11:06 _home_rmohan.2005-07-07.17-22-09/msn/bin/mp3db
4523360 1 msn  admin 37041 Jul  9 17:28 _home_rmohan.2005-07-13.11-14-32/msn/bin/mp3db
4675812 1 msn  admin 37201 Jul 18 09:50 _home_rmohan.2005-07-18.16-32-54/msn/bin/mp3db
4853138 1 msn  admin 37200 Jul 19 16:46 _home_rmohan.2005-07-25.15-32-42/msn/bin/mp3db

As you can see my encode script has been fairly constant while my mp3db script has changed almost every time I have run a backup. I can choose to restore whichever version I want as they are all just plain files.

Recovering entire file systems from backups

This is a simple reverse of the backup procedure. Just format the new file system and rsync the files back to it, making sure you use the same rsync options, especially --archive and --numeric-ids.
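
For example, restoring the most recent backup of /home/rmohan onto a freshly formatted file system on the client looks like this (the paths are the ones used earlier):

# rsync --archive --numeric-ids --hard-links \
/backup/rsync/rmohan/_home_rmohan.current/ rmohan:/home/rmohan/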

Recovering entire systems from backups

This is where things get a little ugly. Of course this is for times that are already ugly because you probably just lost your boot drive and have a brand new one installed that is completely blank. This procedure varies a bit depending on what OS you are restoring but here is the general idea:
Boot from some media that gives you an OS, networking, rsync, and ssh. SystemRescueCD, Knoppix, or most other live distributions can do the job for Linux systems. In the case of OpenBSD I boot the install disc and then use ftp to transfer a tarball of the rsync backup instead of using rsync. The same approach works on Solaris, although there it is usually easier to NFS-mount the backup repository.
Partition the new drive with fdisk or whatever you usually use. If you follow my advice in the advanced section you will have an .sfdisk file and you can duplicate the original partition table with 'sfdisk /dev/whatever < file.sfdisk'.
Format the new partitions. Linux choices are mke2fs, mkfs.ext4, mkfs.xfs, and mkswap. For most other operating systems it is simply newfs.
Mount up the new partitions in a convenient location with something like:

# mkdir /s
# mount -vt [fstype] /dev/[root partition] /s
# mkdir /s/usr /s/var /s/proc /s/dev /s/tmp
# chmod 1777 /s/tmp
# mount -vt [fstype] /dev/[var partition] /s/var
# mount -vt [fstype] /dev/[usr partition] /s/usr

Now run your file system level restores just like you would if you weren’t recovering the entire system. You will need to restore each file system that was on the old boot disk.
If you have made any changes such as device names, mount points, or partition layouts you should now update /s/etc/fstab and /s/boot/grub/menu.lst.
Fix up /dev if needed

# cp -av /dev/console /dev/null /s/dev/

Now you have to make the disk bootable again. This totally varies by operating system and boot loader…
For Linux systems using grub:

# grub-install --root-directory=/s /dev/sda

Or, if that doesn’t work:

# grub
root (hd0,0) # or whatever partition matches your boot disk
setup (hd0)
exit

For Linux systems using lilo (why are you still using lilo?):

# mount -vo bind /dev /s/dev
# mount -vo bind /proc /s/proc
# chroot /s /bin/bash
# lilo -v
# exit

For OpenBSD systems:

# cd /s/usr/mdec
# ./installboot /s/boot ./biosboot /dev/rwd0c (or /dev/rsd0c if using SCSI)

For Solaris systems:

# installboot /s/usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

Advanced topics

System Cloning: Once you have an rsync backup system and restore procedure in place it is easy to use your restore procedure as a method of cloning existing OS installs onto new systems. This has the added benefit of forcing you to practice your restore procedure so it will be well known and tested when the day comes that you need to restore something.
Format of backup repository: Assuming you are using a Linux box as your backup server, you have multiple choices of file system for the backup drive. I generally use ext4 because it is currently the fastest well-established file system, and it does a good job as long as there aren’t too many files for fsck to handle in a reasonable amount of time. However, XFS is also a good choice: it is better at dealing with large files and much better at doing the delete portion of the backup rotation. XFS also eliminates the need for the periodic off-line fsck, which may make it your only choice if you have many millions of files to deal with. You may want to play with these two choices a bit before you make your final decision. I do not recommend JFS, which has horrible performance, or reiserfs, which has horrible reliability.
ZFS: If you have many millions of files to deal with, you may discover that this system simply takes too long to delete old backups, and if you ever need an fsck you may even be down for days waiting for it to finish. ZFS on OpenSolaris is the answer to your prayers. ZFS can handle multiple LVM-like snapshots, so you can run rsync backups without the --link-dest parameter and simply overwrite the previous backup each run, then use ZFS snapshots to retain the old backups. Each old backup becomes a snapshot mount. The snapshots are created and deleted in less than a second, removing the need for the long rm operations to purge old backups and allowing rsync to just sync files without bothering to create hard links. Hopefully btrfs will soon give us this capability on Linux, but until then ZFS on OpenSolaris is what makes large scale rsync backups practical.
RAIDed backup repository: This is a somewhat interesting topic. There are many opinions out there about whether or not a backup repository should be made redundant using RAID. For many people (including me for personal use) the backups are an additional redundant copy of something that is already stored on a redundant RAID array and therefore the backups do not need to be redundant. That is why my personal backups are on a RAID-0 volume for pure speed and large capacity. For others (including me for professional use) the most important part of the backups is the old backups which contain data that no longer exists on the other systems. This means that the backups should be on redundant storage here. At work I use either RAID-1 or RAID-10 depending on the size of the backup system. It is of course also possible to use RAID-5 but depending on your hardware you may not like the performance. If you do use RAID other than RAID-1 you should set the stripe size fairly small as most of the work for rsync backups is done at the file system metadata level not the large file level.
Cross-platform handling of /dev and other device files: Since different operating systems handle major and minor numbers differently I suggest excluding /dev from the rsync backups. I keep a /dev.tar tarball on all of my boxes with a backup of /dev in it just in case I ever need to restore that. The tarball will be very small since there are no actual data in it. Note that this is completely unimportant on Linux systems that use udev for /dev.
What is different between 2 backups: I wrote a perl script that scans 2 backups of the same directory and lists what has changed between them. I have published that script at http://www.sanitarium.net/unix_stuff/Kevin%27s%20Rsync%20Backups/diff_backup.pl.txt
Storing data that isn’t kept in a file: I wrote a perl script that does backups of data that isn’t stored in files such as partition tables. My main backup script runs this “getinfo” script at the start of a backup if it detects an infotab file telling it what to backup. The script is published at http://www.sanitarium.net/unix_stuff/Kevin%27s%20Rsync%20Backups/getinfo.pl.txt. I also have an example of its tab file format published at http://www.sanitarium.net/unix_stuff/Kevin%27s%20Rsync%20Backups/rmohan/infotab
rsync --dry-run: This is rsync’s testing mode. You can add it to any other rsync command to have rsync tell you what it would have done without actually doing anything.
rsync --whole-file: This tells rsync to transfer entire files instead of using its block level comparison system. If you have a nice fast link (like a LAN) this can make things faster since rsync doesn’t have to checksum files at all, but if you are transferring across the internet you don’t want this.
rsync --checksum: Normally rsync compares the timestamp and the size of a file to determine if it has changed since the last backup. If you use --checksum, rsync will ignore the time stamps and checksum any files that are the same size to determine if they are different. Obviously this adds a significant slowdown to the backup process. You wouldn’t normally use this option, however it is good to have if you believe your backup data has become corrupted in a way that doesn’t affect the information you see in ls -l output.
rsync --size-only: This tells rsync to treat files of the same size as unchanged, ignoring timestamps entirely. It can be handy after a transfer that mangled modification times, but it will miss any change that doesn’t alter a file’s size, so use it with care.
rsync --sparse: This tells rsync to turn files with large chunks of null characters into sparse files as it transfers them. This is common for things like virtual machine images that have free space inside of them. Without this option such files will be larger on the backup than on the source.
rsync --delete-*: There are several options that control when rsync does the deletion process. Normally you would just use --delete and let rsync use the fastest method available in your version of rsync, however there are times when you want to force it to behave differently. As of version 3.0.6, --delete-during is the default for --delete. See the man page for more information.
rsync --temp-dir: If you have a tmpfs mount you can get a very small speed boost by using this parameter. It causes the partial files used during the delta transfers to be stored in an alternate (faster) location until the file is complete. This will only help if you are doing delta transfers and if the directory you specify is on a tmpfs mount. Note that your tmpfs mount must be big enough to hold any single file or rsync will fail with an insufficient disk space error. Also, if your tmpfs mount goes into swap you will completely kill your performance. IOW, don’t use this unless you are sure it is going to help. Also note that the --inplace parameter is even better than this.
rsync --bwlimit: Allows you to limit how much bandwidth rsync uses in its network communications. It is measured in KB per second.
rsync --ignore-errors: This overrides one of rsync’s built-in safety features. Normally, if there is a problem during the backup, rsync will NOT run its delete pass. If you use --ignore-errors the delete pass will run regardless of any other errors. Note that this isn’t as dangerous as it sounds, because rsync with --link-dest should be operating on an empty directory with nothing to delete anyway, and even if it does delete something it would not delete from the previous backup directories.
rsync --max-delete: This allows you to re-implement the safety feature above with a threshold. You can tell rsync how many files it can delete before it decides that something must be wrong and stops.
rsync --compress: This tells rsync to use zlib compression on its communications. This is good if you are backing up over the internet but usually counter-productive on a LAN. You can also do compression at the ssh level, however rsync’s is a little more efficient. Of course you should not do both.
rsync --acls: This tells rsync to transfer ACLs in addition to permissions. Note that this is a compile time option.
rsync --xattrs: This tells rsync to transfer extended attributes (the ones you set with setfattr) in addition to permissions. Note that this is a compile time option.
push instead of pull: Rsync can push data just as well as it can pull it, so it is possible to have all servers push their backups to the backup server instead of the backup server pulling from them. I personally don’t like this approach because it means all your servers hold a key to your backup server instead of the other way around, because the rotations become much more complicated to engineer, and because you have to make sure you don’t have 20 servers trying to back themselves up at the same time and flooding the backup server.
Buddy backups: If you don’t want to dedicate a box to running backups you could pair off your boxes and have them backup each other. You could also do this in a ring layout.
LVM Snapshots: It is often wise to use an LVM to take an instant shot of a file system and then backup that snapshot. This would remove any chance of a file system changing during the backup. As mentioned before this is the preferred method of backing up a database but it is also good for things like email servers.
Squashfs for archives: If you want to make a permanent archive of a particular backup (perhaps to burn it), squashfs is a great way to do it. Squashfs creates a compressed mountable archive of a directory tree. You create a squashfs archive with mksquashfs, which works much like mkisofs, and then you can mount the resulting file as a loopback device (see the example after this list).
FAT: I do not recommend backing up to a FAT file system using rsync, although rsync is perfectly capable of backing up from one. There are issues with how FAT stores time stamps: FAT can only store them with a 2 second resolution, and the easy fix is to run rsync with --modify-window=2. FAT also handles daylight saving time changes differently; when the time changes, FAT file systems will appear to be 1 hour off, and the easiest solution is --modify-window=3602.
Sudo: It is possible to use rsync under sudo, even on both ends. I personally believe this is ugly, insecure, and an abuse of the sudo system, but it can be done. If you run rsync under sudo, add --rsync-path='/usr/bin/sudo /usr/bin/rsync', and configure sudo to not prompt for a password when that user runs rsync --server, it will work.
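
Following up on the squashfs idea above, creating a compressed archive of one backup and mounting it read-only is as simple as (the archive name is an example):

# mksquashfs _home_rmohan.2005-07-25.15-32-42 home_rmohan_20050725.sqsh
# mount -o loop -t squashfs home_rmohan_20050725.sqsh /mnt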

Helpful links

http://rsync.samba.org/: The home page for rsync.
http://www.sanitarium.net/unix_stuff/Kevin%27s%20Rsync%20Backups/: My rsync based backup system (it could use a better name).
http://www.opensolaris.org: The official OpenSolaris web site.
http://www.sysresccd.org/: SystemRescueCD. This is a live Linux distribution that is designed specifically for system recovery. It has all the tools you should need to recover a Linux system from an rsync backup. It is also very easy to customize if you have a reason to.
irc://irc.freenode.net/#rsync: The Rsync support IRC channel. I am usually there as BasketCase.