RAID Partition How-To

RAID (Redundant Array of Inexpensive Disks)

Create three partitions for the RAID array using the fdisk command.

e.g. #fdisk /dev/hda

Press n to create the three new partitions, each 100 MB in size.

Press p to see the partition table.

Press t to change the partition ID of all three partitions to fd (Linux raid autodetect).

Press w to write the partition table and exit the fdisk utility.

#partprobe

Use fdisk -l to list the partition table.

Creating RAID

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda6 /dev/hda7 /dev/hda8

Press y to confirm and create the array.

To see the details of the RAID array, use the following commands:

# cat /proc/mdstat

# mdadm --detail /dev/md0

Creating the file system for your RAID devices

#mkfs.ext3 /dev/md0

Mounting the RAID partition

#mkdir data

# mount /dev/md0 data

#df -h /root/data (shows the space allocation).
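
To make the array and its mount point survive a reboot, a common approach (the paths here are illustrative) is to record the array in /etc/mdadm.conf and add an fstab entry:

# mdadm --detail --scan >> /etc/mdadm.conf
# echo "/dev/md0 /root/data ext3 defaults 0 0" >> /etc/fstab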

Failing (crashing) a RAID device

# mdadm --manage /dev/md0 --fail /dev/hda8

Removing raid devices

# mdadm --manage /dev/md0 --remove /dev/hda8

Adding raid devices

# mdadm --manage /dev/md0 --add /dev/hda8

View failed and working raid devices

# cat /proc/mdstat

# mdadm --detail /dev/md0

# tail /var/log/messages

To remove the RAID array, follow these steps:

1) Unmount the directory where the RAID device is mounted.

e.g. umount data

2) Stop the device

e.g. mdadm --stop /dev/md0

3) View the details of your RAID using the following commands:

#cat /proc/mdstat

#mdadm --detail /dev/md0
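
If the member partitions are to be reused afterwards, mdadm can also wipe the RAID metadata from them (run this only after the array has been stopped):

# mdadm --zero-superblock /dev/hda6 /dev/hda7 /dev/hda8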

Kdump for Linux Kernel Crash Analysis

Kdump is a utility used to capture a system core dump in the event of a system crash.
These captured core dumps can be used later to analyze the exact cause of the failure and implement the necessary fix to prevent crashes in the future.
Kdump reserves a small portion of memory for a secondary kernel called the crash kernel.
This secondary (crash) kernel is used to capture the core dump image whenever the system crashes.
1. Install Kdump Tools

First, install kdump, which is part of the kexec-tools package.
# yum install kexec-tools

2. Set crashkernel in grub.conf

Once the package is installed, edit the /boot/grub/grub.conf file and set the amount of memory to be reserved for the kdump crash kernel: set the crashkernel value to either auto or a user-specified value. A minimum of 128M is recommended for a machine with 2 GB of memory or more.
In the following example, look for the line that starts with “kernel”, where crashkernel=auto is set.
# vi /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux (2.6.32-419.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-419.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-419.el6.x86_64.img
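
If you prefer an explicit reservation over auto (for example the 128M minimum recommended above), only the crashkernel value on the kernel line changes; a sketch of the relevant fragment:

kernel /vmlinuz-2.6.32-419.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root ... crashkernel=128M ... rhgb quiet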
3. Configure Dump Location

Once the kernel crashes, the core dump can be captured to a local filesystem or a remote filesystem (NFS), based on the settings defined in /etc/kdump.conf (on the SLES operating system the path is /etc/sysconfig/kdump).
This file is automatically created when the kexec-tools package is installed.
All the entries in this file are commented out by default. Uncomment the ones that match your setup.
# vi /etc/kdump.conf
#raw /dev/sda5
#ext4 /dev/sda3
#ext4 LABEL=/boot
#ext4 UUID=03138356-5e61-4ab3-b58e-27507ac41937
#net my.server.com:/export/tmp
#net user@my.server.com
path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31
#core_collector scp
#core_collector cp --sparse=always
#extra_bins /bin/cp
#link_delay 60
#kdump_post /var/crash/scripts/kdump-post.sh
#extra_bins /usr/bin/lftp
#disk_timeout 30
#extra_modules gfs2
#options modulename options
#default shell
#debug_mem_level 0
#force_rebuild 1
#sshkey /root/.ssh/kdump_id_rsa
In the above file:
To write the dump to a raw device, you can uncomment “raw /dev/sda5” and change it to point to the correct dump location.
If you want to change the path of the dump location, uncomment and change “path /var/crash” to point to the new location.
For NFS, you can uncomment “#net my.server.com:/export/tmp” and point to the current NFS server location.
4. Configure Core Collector

The next step is to configure the core collector in Kdump configuration file. It is important to compress the data captured and filter all the unnecessary information from the captured core file.
To enable the core collector, uncomment the following line that starts with core_collector.
core_collector makedumpfile -c --message-level 1 -d 31

The makedumpfile program specified in core_collector makes a small DUMPFILE by compressing the data.
makedumpfile provides two DUMPFILE formats (the ELF format and the kdump-compressed format).
By default, makedumpfile makes a DUMPFILE in the kdump-compressed format.
The kdump-compressed format can be read only with the crash utility, and it can be smaller than the ELF format because of the compression support.
The ELF format is readable with GDB and the crash utility.
-c compresses the dump data page by page.
-d sets the dump level, which selects the types of unnecessary pages that can be excluded from the dump.
If you uncomment the line #default shell, a shell is invoked if kdump fails to collect the core, so the administrator can take the core dump manually using makedumpfile commands.
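
As a worked example of the -d value used above: the dump level is a bitmask, and 31 = 16 (free pages) + 8 (user pages) + 4 (private cache pages) + 2 (cache pages) + 1 (zero pages), so all five types of unnecessary pages are excluded, which is what keeps the resulting vmcore small.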
5. Restart kdump Services

Once kdump is configured, restart the kdump service:
# service kdump restart
Stopping kdump: [ OK ]
Starting kdump: [ OK ]

# service kdump status
Kdump is operational
If you have any issues starting the service, the kdump module or the crashkernel parameter has not been set up properly. Verify /proc/cmdline and make sure it includes the crashkernel value.
6. Manually Trigger the Core Dump

You can manually trigger the core dump using the following commands:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
The server will reboot itself and the crash dump will be generated.
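
Note that writing 1 to /proc/sys/kernel/sysrq enables the magic SysRq key only until the next reboot; to make the setting persistent, one option is to add it to /etc/sysctl.conf:

kernel.sysrq = 1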

7. View the Core Files

Once the server is rebooted, you will see the core files generated under the location defined in /etc/kdump.conf (/var/crash by default).
You will see a vmcore and a vmcore-dmesg.txt file:
# ls -lR /var/crash
drwxr-xr-x. 2 root root 4096 Mar 26 11:06 127.0.0.1-2014-03-26-11:06:43

/var/crash/127.0.0.1-2014-03-26-11:06:43:
-rw-------. 1 root root 33595159 Mar 26 11:06 vmcore
-rw-r--r--. 1 root root 79498 Mar 26 11:06 vmcore-dmesg.txt
8. Kdump analysis using crash

The crash utility is used to analyze the core file captured by kdump.
It can also be used to analyze core files created by other dump utilities such as netdump, diskdump, and xendump.
You need to ensure the “kernel-debuginfo” package is present and at the same level as the kernel.
Launch the crash tool as shown below. After you run this command, you will get a crash prompt, where you can execute crash commands:
# crash /var/crash/127.0.0.1-2014-03-26-12\:24\:39/vmcore /usr/lib/debug/lib/modules/<kernel-version>/vmlinux

crash>
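
A typical first command at this prompt is bt, which prints the kernel stack backtrace of the task that caused the panic and usually points straight at the failing function:

crash> bt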

9. View the Process when System Crashed

Execute the ps command at the crash prompt to display all the processes that were running when the system crashed.
crash> ps
PID PPID CPU TASK ST %MEM VSZ RSS COMM
0 0 0 ffffffff81a8d020 RU 0.0 0 0 [swapper]
1 0 0 ffff88013e7db500 IN 0.0 19356 1544 init
2 0 0 ffff88013e7daaa0 IN 0.0 0 0 [kthreadd]
3 2 0 ffff88013e7da040 IN 0.0 0 0 [migration/0]
4 2 0 ffff88013e7e9540 IN 0.0 0 0 [ksoftirqd/0]
7 2 0 ffff88013dc19500 IN 0.0 0 0 [events/0]

10. View Swap space when System Crashed

Execute the swap command at the crash prompt to display the swap space usage at the time of the crash.
crash> swap
FILENAME TYPE SIZE USED PCT PRIORITY
/dm-1 PARTITION 2064376k 0k 0% -1
11. View IPCS when System Crashed

Execute the ipcs command at the crash prompt to display the shared memory usage at the time of the crash.
crash> ipcs
SHMID_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
(none allocated)

SEM_ARRAY KEY SEMID UID PERMS NSEMS
ffff8801394c0990 00000000 0 0 600 1
ffff880138f09bd0 00000000 65537 0 600 1

MSG_QUEUE KEY MSQID UID PERMS USED-BYTES MESSAGES
(none allocated)

12. View IRQ when System Crashed

Execute the irq command at the crash prompt to display the IRQ stats at the time of the crash.
crash> irq -s
CPU0
0: 149 IO-APIC-edge timer
1: 453 IO-APIC-edge i8042
7: 0 IO-APIC-edge parport0
8: 0 IO-APIC-edge rtc0
9: 0 IO-APIC-fasteoi acpi
12: 111 IO-APIC-edge i8042
14: 108 IO-APIC-edge ata_piix
.
.

Some other useful crash commands:
vtop – translates a user or kernel virtual address to its physical address.
foreach – displays data for multiple tasks in the system.
waitq – displays all the tasks queued on a wait queue.
13. View the Virtual Memory when System Crashed

Execute the vm command at the crash prompt to display the virtual memory usage at the time of the crash.
crash> vm
PID: 5210 TASK: ffff8801396f6aa0 CPU: 0 COMMAND: “bash”
MM PGD RSS TOTAL_VM
ffff88013975d880 ffff88013a0c5000 1808k 108340k
VMA START END FLAGS FILE
ffff88013a0c4ed0 400000 4d4000 8001875 /bin/bash
ffff88013cd63210 3804800000 3804820000 8000875 /lib64/ld-2.12.so
ffff880138cf8ed0 3804c00000 3804c02000 8000075 /lib64/libdl-2.12.so
14. View the Open Files when System Crashed

Execute the files command at the crash prompt to display the open files at the time of the crash.
crash> files
PID: 5210 TASK: ffff8801396f6aa0 CPU: 0 COMMAND: “bash”
ROOT: / CWD: /root
FD FILE DENTRY INODE TYPE PATH
0 ffff88013cf76d40 ffff88013a836480 ffff880139b70d48 CHR /tty1
1 ffff88013c4a5d80 ffff88013c90a440 ffff880135992308 REG /proc/sysrq-trigger
255 ffff88013cf76d40 ffff88013a836480 ffff880139b70d48 CHR /tty1
..

15. View System Information when System Crashed

Execute the sys command at the crash prompt to display system information at the time of the crash.
crash> sys
KERNEL: /usr/lib/debug/lib/modules/2.6.32-431.5.1.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2014-03-26-12:24:39/vmcore [PARTIAL DUMP]
CPUS: 1
DATE: Wed Mar 26 12:24:36 2014
UPTIME: 00:01:32
LOAD AVERAGE: 0.17, 0.09, 0.03
TASKS: 159
NODENAME: elserver1.abc.com
RELEASE: 2.6.32-431.5.1.el6.x86_64
VERSION: #1 SMP Fri Jan 10 14:46:43 EST 2014
MACHINE: x86_64 (2132 Mhz)
MEMORY: 4 GB
PANIC: “Oops: 0002 [#1] SMP ” (check log for details)

Note: For kernel debugging, the following packages need to be installed:
kernel-debuginfo-2.6.32-220.el6.i686.rpm
kernel-debuginfo-common-i686-2.6.32-220.el6.i686.rpm

Listing 2: Panic Routine for NMI Event
# cat /proc/sys/kernel/unknown_nmi_panic
1
# sysctl kernel.unknown_nmi_panic
kernel.unknown_nmi_panic = 1
# grep nmi /etc/sysctl.conf
kernel.unknown_nmi_panic = 1

YUM SERVER

#################### Y U M S E R V E R ####################

[root@server~]#vi /etc/hosts

192.168.10.253 server.example.com server

[root@server~]#rpm -qa | grep yum #required packages

yum-3.0.1-5.el5
yum-metadata-parser-1.0-8.fc6
yum-rhn-plugin-0.4.3-1.el5
yum-updatesd-3.0.1-5.el5

[root@server~]#rpm -qa | grep createrepo

createrepo-0.4.4-2.fc6 # To create repository

[root@server~]#rpm -qa | grep vsftpd #FTP service for yum server

vsftpd-2.0.5-10.el5

#service vsftpd restart
#chkconfig vsftpd on

> Now mount the RHEL CD/DVD on the /mnt folder.

[root@server~]#mount /dev/cdrom /mnt (or you can copy all the RPMs into the pub directory)

> Copy Server directory from /mnt and paste into /var/ftp/pub directory.

[root@server~]#cp -ar /mnt/Server /var/ftp/pub

> Edit the /etc/yum.repos.d/rhel-debuginfo.repo file and modify it as given below:

[root@server~]#vi /etc/yum.repos.d/rhel-debuginfo.repo

[rhel-debuginfo]
name=Red Hat Enterprise Linux $releasever - $basearch - Debug
baseurl=ftp://server.example.com/pub/Server
enabled=1
gpgcheck=0

> Now create the repository for the packages.

[root@server~]#createrepo -v /var/ftp/pub/Server

Note: This can take a long time, depending on the number of RPM packages and the performance of the machine.

> After the repository is created, it may report an error asking you to “remove the .olddata directory manually”:

[root@server~]#rm -rf /var/ftp/pub/Server/.olddata

> If everything went well, try the yum utility using the yum command.

(Note: If yum is already configured and you are configuring a new yum server,
then run the #yum clean all command first.)

[root@server~]#yum list #list packages in the repository
[root@server~]#yum info vsftpd #Provide information about package
[root@server~]#yum install bind #To install packages
[root@server~]#yum install http*
[root@server~]#yum remove bind #To remove packages
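
Two more everyday yum commands that fit this workflow are yum search, to find packages by keyword, and yum update, to apply available updates:

[root@server~]#yum search ftp #search packages by keyword
[root@server~]#yum update #update all installed packages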

#################### Y U M S E R V E R ####################

—————————————————————————————————-

#################### Y U M C L I E N T ####################

> Make sure that the vsftpd package is installed.
> Suppose the client is client.example.com

[root@client~]#rpm -qa | grep vsftpd

#service vsftpd restart
#chkconfig vsftpd on

> Edit the /etc/yum.repos.d/rhel-debuginfo.repo file and modify it as given below:

[root@client~]#vi /etc/yum.repos.d/rhel-debuginfo.repo

[rhel-debuginfo]
name=Red Hat Enterprise Linux $releasever - $basearch - Debug
baseurl=ftp://server.example.com/pub/Server
enabled=1
gpgcheck=0

[root@client~]#yum list

#################### Y U M C L I E N T ####################

lsof command for monitoring
In Linux, lsof is a powerful tool for finding out the status of open files across the whole system. lsof provides many options that help a Linux admin in day-to-day work. In this post I capture the lsof examples I use most often in my current setup.

1. Open TCP and UDP ports with running protocols
To see open ports, both TCP and UDP. In this command's output, service names are shown rather than port numbers:

[root@server ~]# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
portmap 6805 rpc 3u IPv4 12002 UDP *:sunrpc
portmap 6805 rpc 4u IPv4 12010 TCP *:sunrpc (LISTEN)
rpc.statd 6844 root 3u IPv4 12088 UDP *:purenoise
rpc.statd 6844 root 6u IPv4 12072 UDP *:mac-srvr-admin
rpc.statd 6844 root 7u IPv4 12132 TCP *:mdqs (LISTEN)
hpiod 7086 root 0u IPv4 12657 TCP server:2208 (LISTEN)
python 7091 root 4u IPv4 12686 TCP server:2207 (LISTEN)
sshd 7121 root 3u IPv6 12743 TCP *:ssh (LISTEN)
cupsd 7161 root 3u IPv4 12812 TCP server:ipp (LISTEN)
cupsd 7161 root 5u IPv4 12815 UDP *:ipp
sendmail 7204 root 4u IPv4 12947 TCP server:smtp (LISTEN)
avahi-dae 7347 avahi 13u IPv4 13269 UDP *:mdns
avahi-dae 7347 avahi 14u IPv6 13270 UDP *:mdns
avahi-dae 7347 avahi 15u IPv4 13271 UDP *:filenet-tms
avahi-dae 7347 avahi 16u IPv6 13272 UDP *:filenet-rpc
sshd 7528 root 3u IPv6 15320 TCP 192.168.1.110:ssh->192.168.1.3:49232 (ESTABLISHED)

2. Open ports with port numbers
This is the same command, but with -P it displays port numbers instead of service names:
[root@server ~]# lsof -i -P
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
portmap 6805 rpc 3u IPv4 12002 UDP *:111
portmap 6805 rpc 4u IPv4 12010 TCP *:111 (LISTEN)
rpc.statd 6844 root 3u IPv4 12088 UDP *:663
rpc.statd 6844 root 6u IPv4 12072 UDP *:660
rpc.statd 6844 root 7u IPv4 12132 TCP *:666 (LISTEN)
hpiod 7086 root 0u IPv4 12657 TCP server:2208 (LISTEN)
python 7091 root 4u IPv4 12686 TCP server:2207 (LISTEN)
sshd 7121 root 3u IPv6 12743 TCP *:22 (LISTEN)
cupsd 7161 root 3u IPv4 12812 TCP server:631 (LISTEN)
cupsd 7161 root 5u IPv4 12815 UDP *:631
sendmail 7204 root 4u IPv4 12947 TCP server:25 (LISTEN)
avahi-dae 7347 avahi 13u IPv4 13269 UDP *:5353
avahi-dae 7347 avahi 14u IPv6 13270 UDP *:5353
avahi-dae 7347 avahi 15u IPv4 13271 UDP *:32768
avahi-dae 7347 avahi 16u IPv6 13272 UDP *:32769
sshd 7528 root 3u IPv6 15320 TCP 192.168.1.110:22->192.168.1.3:49232 (ESTABLISHED)

3. Open ports with their process and parent process IDs
The commands above display the process ID of the process holding each port; the command below also displays the parent process ID (PPID), since sometimes we need to know the parent of the process that opened the port:
[root@server ~]# lsof -i -P +R
COMMAND PID PPID USER FD TYPE DEVICE SIZE NODE NAME
portmap 6805 1 rpc 3u IPv4 12002 UDP *:111
portmap 6805 1 rpc 4u IPv4 12010 TCP *:111 (LISTEN)
rpc.statd 6844 1 root 3u IPv4 12088 UDP *:663
rpc.statd 6844 1 root 6u IPv4 12072 UDP *:660
rpc.statd 6844 1 root 7u IPv4 12132 TCP *:666 (LISTEN)
hpiod 7086 1 root 0u IPv4 12657 TCP server:2208 (LISTEN)
python 7091 1 root 4u IPv4 12686 TCP server:2207 (LISTEN)
sshd 7121 1 root 3u IPv6 12743 TCP *:22 (LISTEN)
cupsd 7161 1 root 3u IPv4 12812 TCP server:631 (LISTEN)
cupsd 7161 1 root 5u IPv4 12815 UDP *:631
sendmail 7204 1 root 4u IPv4 12947 TCP server:25 (LISTEN)
avahi-dae 7347 1 avahi 13u IPv4 13269 UDP *:5353
avahi-dae 7347 1 avahi 14u IPv6 13270 UDP *:5353
avahi-dae 7347 1 avahi 15u IPv4 13271 UDP *:32768
avahi-dae 7347 1 avahi 16u IPv6 13272 UDP *:32769
sshd 7528 7121 root 3u IPv6 15320 TCP 192.168.1.110:22->192.168.1.3:49232 (ESTABLISHED)

4. TCP ports only
The commands above cover both TCP and UDP ports; to display only TCP (or only UDP) ports, use:
[root@server ~]# lsof -itcp
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
portmap 6805 rpc 4u IPv4 12010 TCP *:sunrpc (LISTEN)
rpc.statd 6844 root 7u IPv4 12132 TCP *:mdqs (LISTEN)
hpiod 7086 root 0u IPv4 12657 TCP server:2208 (LISTEN)
python 7091 root 4u IPv4 12686 TCP server:2207 (LISTEN)
sshd 7121 root 3u IPv6 12743 TCP *:ssh (LISTEN)
cupsd 7161 root 3u IPv4 12812 TCP server:ipp (LISTEN)
sendmail 7204 root 4u IPv4 12947 TCP server:smtp (LISTEN)
sshd 7528 root 3u IPv6 15320 TCP 192.168.1.110:ssh->192.168.1.3:49232 (ESTABLISHED)
vsftpd 7648 root 3u IPv4 18398 TCP *:ftp (LISTEN)
rpc.rquot 7770 root 4u IPv4 18813 TCP *:netrcs (LISTEN)
rpc.mount 7813 root 7u IPv4 18896 TCP *:784 (LISTEN)
ypserv 7855 root 6u IPv4 19065 TCP *:itm-mcell-s (LISTEN)

5. Display a specific port
The commands above cover all ports, but sometimes we just need the status of a single port, as in the command below:
[root@server ~]# lsof -i :25
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sendmail 7204 root 4u IPv4 12947 TCP server:smtp (LISTEN)

6. Display a range of ports
Sometimes we need the status of a range of ports, such as ports 1-100. We can use the command below for this:
[root@client ~]# lsof -i :1-100
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 7125 root 3u IPv6 12891 TCP *:ssh (LISTEN)
sendmail 7196 root 4u IPv4 13081 TCP server:smtp (LISTEN)
sshd 7513 root 3u IPv6 15376 TCP 192.168.1.101:ssh->192.168.1.3:49298 (ESTABLISHED)
sshd 7607 root 3u IPv6 15570 TCP 192.168.1.101:ssh->192.168.1.3:49999 (ESTABLISHED)
sshd 19026 root 3u IPv6 58713 TCP 192.168.1.101:ssh->192.168.1.3:50134 (ESTABLISHED)
sshd 19838 root 3u IPv6 61673 TCP 192.168.1.101:ssh->192.168.1.3:50784 (ESTABLISHED)
sshd 19840 u1 3u IPv6 61673 TCP 192.168.1.101:ssh->192.168.1.3:50784 (ESTABLISHED)

7. Display open UDP ports
[root@server ~]# lsof -iudp
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
portmap 6805 rpc 3u IPv4 12002 UDP *:sunrpc
rpc.statd 6844 root 3u IPv4 12088 UDP *:purenoise
rpc.statd 6844 root 6u IPv4 12072 UDP *:mac-srvr-admin
cupsd 7161 root 5u IPv4 12815 UDP *:ipp
avahi-dae 7347 avahi 13u IPv4 13269 UDP *:mdns
avahi-dae 7347 avahi 14u IPv6 13270 UDP *:mdns
avahi-dae 7347 avahi 15u IPv4 13271 UDP *:filenet-tms
avahi-dae 7347 avahi 16u IPv6 13272 UDP *:filenet-rpc
rpc.rquot 7770 root 3u IPv4 18807 UDP *:739
rpc.mount 7813 root 6u IPv4 18893 UDP *:781
ypserv 7855 root 5u IPv4 19060 UDP *:825
rpc.yppas 7887 root 4u IPv4 19123 UDP *:856

8. Open files in a specific directory (+d does not search recursively)
[root@server ~]# lsof +d /var/run/
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
audispd 6717 root 5u unix 0xc1bf4900 11829 /var/run/audispd_events
rpc.statd 6844 root 8w REG 253,0 5 3202319 /var/run/rpc.statd.pid
sdpd 6939 root 5u unix 0xf6a45580 12350 /var/run/sdp
pcscd 7001 root mem REG 253,0 65537 3202338 /var/run/pcscd.pub
pcscd 7001 root 3u REG 253,0 65537 3202338 /var/run/pcscd.pub
pcscd 7001 root 5u unix 0xf63eae40 12465 /var/run/pcscd.comm
automount 7046 root 9u FIFO 253,0 3202343 /var/run/autofs.fifo-misc
automount 7046 root 15u FIFO 253,0 3202345 /var/run/autofs.fifo-net
acpid 7070 root 4u unix 0xf63ea900 12623 /var/run/acpid.socket
acpid 7070 root 5u unix 0xf3efac80 14735 /var/run/acpid.socket
hpiod 7086 root 3u REG 253,0 5 3202351 /var/run/hpiod.pid
python 7091 root 3u REG 253,0 5 3202354 /var/run/hpssd.pid
sendmail 7204 root 5wW REG 253,0 33 3202368 /var/run/sendmail.pid
sendmail 7212 smmsp 4wW REG 253,0 49 3202372 /var/run/sm-client.pid

9. Open files in a specific directory (+D searches recursively)
[root@server ~]# lsof +D /proc/
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
klogd 6753 root 0r REG 0,3 0 4026531849 /proc/kmsg
rpc.idmap 6890 root 4u REG 0,3 0 4026532773 /proc/net/rpc/nfs4.nametoid/channel
rpc.idmap 6890 root 9u REG 0,3 0 4026532769 /proc/net/rpc/nfs4.idtoname/channel
acpid 7070 root 3r REG 0,3 0 4026532142 /proc/acpi/event
hald 7364 haldaemon 11r REG 0,3 0 482607121 /proc/7364/mounts
rpc.mount 7813 root 3u REG 0,3 0 4026532568 /proc/net/rpc/auth.unix.ip/channel
rpc.mount 7813 root 4u REG 0,3 0 4026532761 /proc/net/rpc/nfsd.export/channel
rpc.mount 7813 root 5u REG 0,3 0 4026532765 /proc/net/rpc/nfsd.fh/channel
lsof 20404 root 3r DIR 0,3 0 1 /proc/
lsof 20404 root 6r DIR 0,3 0 1337196553 /proc/20404/fd

10. Display Established connections
[root@server ~]# lsof -i @192.168.1.3
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 7528 root 3u IPv6 15320 TCP 192.168.1.110:ssh->192.168.1.3:49232 (ESTABLISHED)

[root@client ~]# lsof -i @192.168.1.110
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ftp 20157 u1 3u IPv4 62569 TCP 192.168.1.101:36490->192.168.1.110:ftp (ESTABLISHED)
ftp 20157 u1 4u IPv4 62569 TCP 192.168.1.101:36490->192.168.1.110:ftp (ESTABLISHED)

11. Display open files per process ID (PID)
[root@server ~]# lsof -p 7121
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 7121 root cwd DIR 253,0 4096 2 /
sshd 7121 root rtd DIR 253,0 4096 2 /
sshd 7121 root txt REG 253,0 387308 2433014 /usr/sbin/sshd
sshd 7121 root mem REG 253,0 261433 /lib/libutil-2.5.so (path inode=263739)
sshd 7121 root mem REG 253,0 2423991 /usr/lib/libz.so.1.2.3 (path inode=2440370)
sshd 7121 root mem REG 253,0 2425597 /usr/lib/libnssutil3.so (path inode=2430740)
sshd 7121 root mem REG 253,0 46680 261417 /lib/libnss_files-2.5.so
sshd 7121 root mem REG 253,0 261579 /lib/libcom_err.so.2.1 (path inode=263737)
sshd 7121 root mem REG 253,0 2425593 /usr/lib/libplds4.so (path inode=2440382)
sshd 7121 root mem REG 253,0 261407 /lib/libdl-2.5.so (path inode=263715)
sshd 7121 root 0u CHR 1,3 1527 /dev/null
sshd 7121 root 1u CHR 1,3 1527 /dev/null
sshd 7121 root 2u CHR 1,3 1527 /dev/null
sshd 7121 root 3u IPv6 12743 TCP *:ssh (LISTEN)

12. Display open files per user (this is an NIS user; the user's processes will not display on the NIS server)
[root@client ~]# lsof -u u1
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 19190 u1 cwd DIR 253,0 4096 2 /
sshd 19190 u1 rtd DIR 253,0 4096 2 /
sshd 19190 u1 txt REG 253,0 387308 2433014 /usr/sbin/sshd
sshd 19190 u1 mem REG 253,0 45288 261596 /lib/libcrypt-2.5.so
sshd 19190 u1 mem REG 253,0 8072 263733 /lib/libkeyutils-1.2.so
sshd 19190 u1 mem REG 253,0 11460 2440345 /usr/lib/libplds4.so
sshd 19190 u1 mem REG 253,0 29856 2438824 /usr/lib/libcrack.so.2.8.0
sshd 19190 u1 mem REG 253,0 125736 263713 /lib/ld-2.5.so
sshd 19190 u1 mem REG 253,0 1242224 263719 /lib/libcrypto.so.0.9.8b
sshd 19190 u1 mem REG 253,0 190712 2440341 /usr/lib/libgssapi_krb5.so.2.2
sshd 19190 u1 mem REG 253,0 600084 2440340 /usr/lib/libkrb5.so.3.3
sshd 19190 u1 mem REG 253,0 228028 2434949 /usr/lib/libnspr4.so
sshd 19190 u1 mem REG 253,0 125744 263720 /lib/libpthread-2.5.so
sshd 19190 u1 3u IPv6 59141 TCP 192.168.1.101:ssh->192.168.1.3:50257 (ESTABLISHED)

13. Display processes holding a certain file descriptor
[root@server ~]# lsof -d 15
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
automount 7046 root 15u FIFO 253,0 3202345 /var/run/autofs.fifo-net
avahi-dae 7347 avahi 15u IPv4 13271 UDP *:filenet-tms
hald 7364 haldaemon 15u unix 0xf3efaac0 15006 socket

14. Display all IPv4 connections
[root@server ~]# lsof -i4
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
portmap 6805 rpc 3u IPv4 12002 UDP *:sunrpc
portmap 6805 rpc 4u IPv4 12010 TCP *:sunrpc (LISTEN)
rpc.statd 6844 root 3u IPv4 12088 UDP *:purenoise
rpc.statd 6844 root 6u IPv4 12072 UDP *:mac-srvr-admin
rpc.statd 6844 root 7u IPv4 12132 TCP *:mdqs (LISTEN)
hpiod 7086 root 0u IPv4 12657 TCP server:2208 (LISTEN)
python 7091 root 4u IPv4 12686 TCP server:2207 (LISTEN)
cupsd 7161 root 3u IPv4 12812 TCP server:ipp (LISTEN)
cupsd 7161 root 5u IPv4 12815 UDP *:ipp
sendmail 7204 root 4u IPv4 12947 TCP server:smtp (LISTEN)
avahi-dae 7347 avahi 13u IPv4 13269 UDP *:mdns
avahi-dae 7347 avahi 15u IPv4 13271 UDP *:filenet-tms
vsftpd 7648 root 3u IPv4 18398 TCP *:ftp (LISTEN)
rpc.rquot 7770 root 3u IPv4 18807 UDP *:739
rpc.rquot 7770 root 4u IPv4 18813 TCP *:netrcs (LISTEN)
rpc.mount 7813 root 6u IPv4 18893 UDP *:781
rpc.mount 7813 root 7u IPv4 18896 TCP *:784 (LISTEN)
ypserv 7855 root 5u IPv4 19060 UDP *:825
ypserv 7855 root 6u IPv4 19065 TCP *:itm-mcell-s (LISTEN)
rpc.yppas 7887 root 4u IPv4 19123 UDP *:856

15. Display the process using an open file
[root@client ~]# lsof /var/log/messages
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
syslogd 6742 root 1w REG 253,0 106636 3201966 /var/log/messages

16. Display processes started from a given command name
[root@server ~]# lsof -c ypserv
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ypserv 7855 root cwd DIR 253,0 4096 3201900 /var/yp
ypserv 7855 root rtd DIR 253,0 4096 2 /
ypserv 7855 root txt REG 253,0 44232 2440342 /usr/sbin/ypserv
ypserv 7855 root mem REG 253,0 2424344 /usr/lib/libgdbm.so.2.0.0 (path inode=2434857)
ypserv 7855 root 3uW REG 253,0 5 3202541 /var/run/ypserv.pid
ypserv 7855 root 5u IPv4 19060 UDP *:825
ypserv 7855 root 6u IPv4 19065 TCP *:itm-mcell-s (LISTEN)
ypserv 7855 root 7r REG 253,0 12503 4018684 /var/yp/linuxphobia/hosts.byaddr
ypserv 7855 root 8r REG 253,0 12472 4018679 /var/yp/linuxphobia/passwd.byname
ypserv 7855 root 9r REG 253,0 12472 4018680 /var/yp/linuxphobia/passwd.byuid
ypserv 7855 root 10r REG 253,0 12414 4018681 /var/yp/linuxphobia/group.byname
ypserv 7855 root 11r REG 253,0 12414 4018682 /var/yp/linuxphobia/group.bygid
ypserv 7855 root 12r REG 253,0 12586 4018683 /var/yp/linuxphobia/hosts.byname

17. Display processes working on a mount point
[root@client ~]# lsof /home
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
automount 7772 root 21r DIR 0,23 0 16112 /home

18. Display processes from all users except the mentioned users
[root@client ~]# lsof -u ^root -u ^u1 -u ^rpc -u ^haldaemon -u ^avahi -u ^xfs -u ^smmsp
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
dbus-daem 6911 dbus cwd DIR 253,0 4096 2 /
dbus-daem 6911 dbus rtd DIR 253,0 4096 2 /
dbus-daem 6911 dbus txt REG 253,0 351900 4639486 /bin/dbus-daemon
dbus-daem 6911 dbus mem REG 253,0 261425 /lib/libpthread-2.5.so (path inode=263720)
dbus-daem 6911 dbus mem REG 253,0 261407 /lib/libdl-2.5.so (path inode=263715)
dbus-daem 6911 dbus mem REG 253,0 46680 261417 /lib/libnss_files-2.5.so
dbus-daem 6911 dbus mem REG 253,0 261457 /lib/libcap.so.1.10 (path inode=263730)
dbus-daem 6911 dbus mem REG 253,0 261451 /lib/libaudit.so.0.0.0 (path inode=263716)
dbus-daem 6911 dbus mem REG 253,0 261567 /lib/libselinux.so.1 (path inode=263736)
dbus-daem 6911 dbus mem REG 253,0 261394 /lib/ld-2.5.so (path inode=263713)
dbus-daem 6911 dbus mem REG 253,0 261455 /lib/libexpat.so.0.5.0 (path inode=263725)
dbus-daem 6911 dbus mem REG 253,0 261472 /lib/libsepol.so.1 (path inode=263735)
dbus-daem 6911 dbus mem REG 253,0 261401 /lib/libc-2.5.so (path inode=263714)
dbus-daem 6911 dbus 0u CHR 1,3 1527 /dev/null
dbus-daem 6911 dbus 1u CHR 1,3 1527 /dev/null
dbus-daem 6911 dbus 2u CHR 1,3 1527 /dev/null
dbus-daem 6911 dbus 3u unix 0xf6b9ce40 12419 /var/run/dbus/system_bus_socket
dbus-daem 6911 dbus 4u CHR 1,3 1527 /dev/null
dbus-daem 6911 dbus 5r DIR 253,0 4096 4051635 /etc/dbus-1/system.d
dbus-daem 6911 dbus 6u unix 0xf6b9cc80 12421 socket
dbus-daem 6911 dbus 7u unix 0xf6b9cac0 12422 socket

19. Display only the process IDs (PIDs) for the same user filter
[root@client ~]# lsof -t -u ^root -u ^u1 -u ^rpc -u ^haldaemon -u ^avahi -u ^xfs -u ^smmsp
6911

20. Display a specific command for a specific user
Sometimes we want to display only a specific command for a specific user. With the -a option at the end, -c and -u work together (logical AND) to print only the bash processes of user u1:
[root@srv3 ~]# lsof -c bash -u u1 -a
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 6050 u1 cwd DIR 253,0 4096 1633607 /home/u1
bash 6050 u1 rtd DIR 253,0 4096 2 /
bash 6050 u1 txt REG 253,0 735804 326755 /bin/bash
bash 6050 u1 mem REG 253,0 130860 4410722 /lib/ld-2.5.so
bash 6050 u1 mem REG 253,0 1706232 4410738 /lib/libc-2.5.so
bash 6050 u1 mem REG 253,0 20668 4410769 /lib/libdl-2.5.so
bash 6050 u1 mem REG 253,0 13276 4410806 /lib/libtermcap.so.2.0.8
bash 6050 u1 mem REG 253,0 50848 4410760 /lib/libnss_files-2.5.so
bash 6050 u1 mem REG 253,0 56466992 1061342 /usr/lib/locale/locale-archive
bash 6050 u1 mem REG 253,0 25462 1143780 /usr/lib/gconv/gconv-modules.cache
bash 6050 u1 0u CHR 136,1 0t0 3 /dev/pts/1
bash 6050 u1 1u CHR 136,1 0t0 3 /dev/pts/1
bash 6050 u1 2u CHR 136,1 0t0 3 /dev/pts/1
bash 6050 u1 255u CHR 136,1 0t0 3 /dev/pts/1

21. Real-time lsof, like top
The default repeat interval is 15 seconds; with +r5 the listing repeats every 5 seconds until the process ends or lsof is terminated with an interrupt or quit signal.
[root@client ~]# lsof -u u1 -c cat -a +r5
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
cat 19868 u1 cwd DIR 0,24 4096 2842495 /home/u1 (192.168.1.110:/home/u1)
cat 19868 u1 rtd DIR 253,0 4096 2 /
cat 19868 u1 txt REG 253,0 23132 4639532 /bin/cat
cat 19868 u1 mem REG 253,0 125736 263713 /lib/ld-2.5.so
cat 19868 u1 mem REG 253,0 1602128 263714 /lib/libc-2.5.so
cat 19868 u1 mem REG 253,0 56417840 2423648 /usr/lib/locale/locale-archive
cat 19868 u1 0u CHR 136,0 2 /dev/pts/0
cat 19868 u1 1w REG 0,24 0 2842504 /home/u1/f1 (192.168.1.110:/home/u1)
cat 19868 u1 2u CHR 136,0 2 /dev/pts/0

22. NFS files used by user u1
[root@client ~]# lsof -N -u u1 -a
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
bash 19841 u1 cwd DIR 0,24 4096 2842495 /home/u1 (192.168.1.110:/home/u1)
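
One handy combination of the options above: because -t prints bare PIDs, lsof output can be fed straight to kill, for example to terminate whatever process is holding port 25 (an illustrative example, run with care):

# kill $(lsof -t -i :25)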


Backup with Rsnapshot tool

How To Set Up a Red Hat / CentOS Linux Remote Backup / Snapshot Server

rsnapshot is an easy, reliable backup and disaster-recovery solution. It is a remote backup program that uses rsync to take backup snapshots of filesystems. It uses hard links to save space on disk and offers the following features:
Filesystem snapshots – for local or remote systems.
Database backup – MySQL backup.
Secure – traffic to and from the remote backup server is always encrypted using OpenSSH.
Full backups – plus incrementals.
Easy to restore – files can be restored by the users who own them, without the root user getting involved.
Automated backups – runs in the background via cron.
Bandwidth friendly – rsync is used to save bandwidth.
Sample setup

snapshot.example.com – HP box with RAID 6, running Red Hat / CentOS Linux, acting as the backup server for the other clients.
DNS ns1.example.com – Red Hat server acting as the primary name server.
DNS ns2.example.com – Red Hat server acting as the secondary name server.
www.example.com – Red Hat server running the Apache web server.
mysql.example.com – Red Hat MySQL server.
Install rsnapshot

Login to snapshot.example.com and download the rsnapshot rpm files, enter:
# cd /tmp
# wget http://www.rsnapshot.org/downloads/rsnapshot-1.3.0-1.noarch.rpm
# wget http://www.rsnapshot.org/downloads/rsnapshot-1.3.0-1.noarch.rpm.md5

Configure rsnapshot

You need to perform following steps
Step # 1: Configure passwordless login

To perform remote backups you need to set up passwordless login using OpenSSH. Create an ssh RSA key and upload it to all servers using scp (note: you are overwriting the ~/.ssh/authorized_keys2 files). Type the following commands on the snapshot.example.com server:
# ssh-keygen -t rsa
# scp .ssh/id_rsa.pub root@ns1.example.com:.ssh/authorized_keys2
# scp .ssh/id_rsa.pub root@ns2.example.com:.ssh/authorized_keys2
# scp .ssh/id_rsa.pub root@www.example.com:.ssh/authorized_keys2
# scp .ssh/id_rsa.pub root@mysql.example.com:.ssh/authorized_keys2
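
As an alternative to overwriting authorized_keys2 with scp, the ssh-copy-id helper shipped with OpenSSH appends the public key to the remote ~/.ssh/authorized_keys file instead of replacing it, for example:

# ssh-copy-id root@ns1.example.com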
Step # 2: Configure rsnapshot

The default configuration file is located at /etc/rsnapshot.conf. Open configuration file using a text editor, enter:
# vi /etc/rsnapshot.conf

Configuration rules

You must follow two configuration rules:
The rsnapshot config file requires tabs between elements.
All directories require a trailing slash. For example, /home/ is the correct way to specify a directory; /home is wrong.
First, specify the root directory to store all snapshots, such as /snapshots/ or /dynvol/snapshot/ as per your RAID setup, enter:
snapshot_root /raiddisk/snapshots/
You must separate snapshot_root and /raiddisk/snapshots/ with a [tab] key, i.e. type snapshot_root, hit the [tab] key once, and then type /raiddisk/snapshots/.
Define snapshot intervals

You need to specify backup intervals i.e. specify hourly, daily, weekly and monthly intervals:
interval hourly 6
interval daily 7
interval weekly 4
interval monthly 3
The line “interval hourly 6” means six hourly backups a day. Feel free to adapt the configuration to your backup requirements and snapshot frequency.

Remote backup directories

To backup /var/named/ and /etc/ directory from ns1.example.com and ns2.example.com, enter:
backup root@ns1.example.com:/etc/ ns1.example.com/
backup root@ns1.example.com:/var/named/ ns1.example.com/
backup root@ns2.example.com:/etc/ ns2.example.com/
backup root@ns2.example.com:/var/named/ ns2.example.com/
To backup /var/www/, /var/log/httpd/ and /etc/ directory from www.example.com, enter
backup root@www.example.com:/var/www/ www.example.com/
backup root@www.example.com:/etc/ www.example.com/
backup root@www.example.com:/var/log/httpd/ www.example.com/
To backup mysql database files stored at /var/lib/mysql/, enter:
backup root@mysql.example.com:/var/lib/mysql/ mysql.example.com/dbdump/
Save and close the file. To test your configuration, enter:
# rsnapshot configtest
Sample output:
Syntax OK
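
Before scheduling anything, you can also do a test run with -t, which prints the commands rsnapshot would execute without actually running them:

# rsnapshot -t hourly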

Schedule cron job

Create the /etc/cron.d/rsnapshot cron file. The following values correspond to the examples in /etc/rsnapshot.conf:
0 */4 * * * /usr/bin/rsnapshot hourly
50 23 * * * /usr/bin/rsnapshot daily
40 23 * * 6 /usr/bin/rsnapshot weekly
30 23 1 * * /usr/bin/rsnapshot monthly
Save and close the file. Now rsnapshot will work as follows to backup files from remote boxes:
6 hourly backups a day (once every 4 hours, at 0,4,8,12,16,20)
1 daily backup every day, at 11:50PM
1 weekly backup every week, at 11:40PM, on Saturdays (6th day of week)
1 monthly backup every month, at 11:30PM on the 1st day of the month
How do I see backups?

To see the backups, change to the snapshot root directory:
# cd /raiddisk/snapshots/
# ls -l
Sample output:
drwxr-xr-x 4 root root 4096 2008-07-04 06:04 daily.0
drwxr-xr-x 4 root root 4096 2008-07-03 06:04 daily.1
drwxr-xr-x 4 root root 4096 2008-07-02 06:03 daily.2
drwxr-xr-x 4 root root 4096 2008-07-01 06:02 daily.3
drwxr-xr-x 4 root root 4096 2008-06-30 06:02 daily.4
drwxr-xr-x 4 root root 4096 2008-06-29 06:05 daily.5
drwxr-xr-x 4 root root 4096 2008-06-28 06:04 daily.6
drwxr-xr-x 4 root root 4096 2008-07-05 18:05 hourly.0
drwxr-xr-x 4 root root 4096 2008-07-05 15:06 hourly.1
drwxr-xr-x 4 root root 4096 2008-07-05 12:06 hourly.2
drwxr-xr-x 4 root root 4096 2008-07-05 09:05 hourly.3
drwxr-xr-x 4 root root 4096 2008-07-05 06:04 hourly.4
drwxr-xr-x 4 root root 4096 2008-07-05 03:04 hourly.5
drwxr-xr-x 4 root root 4096 2008-07-05 00:05 hourly.6
drwxr-xr-x 4 root root 4096 2008-07-04 21:05 hourly.7
drwxr-xr-x 4 root root 4096 2008-06-22 06:04 weekly.0
drwxr-xr-x 4 root root 4096 2008-06-15 09:05 weekly.1
drwxr-xr-x 4 root root 4096 2008-06-08 06:04 weekly.2

How do I restore backup?

Let us say you would like to restore a backup for www.example.com. Type the command as follows (select day and date from ls -l output):
# cd /raiddisk/snapshots/
# ls -l
# cd hourly.0/www.example.com/
# scp -r var/www/ root@www.example.com:/var/www/
# scp -r etc/httpd/ root@www.example.com:/etc/httpd/
How do I exclude files from backup?

To exclude files from backup, open rsnapshot.conf file and add following line:
exclude_file /etc/rsnapshot.exclude.www.example.com
Create /etc/rsnapshot.exclude.www.example.com as follows:
/var/www/tmp/
/var/www/*.cache
Further readings:

man rsnapshot, ssh, ssh-keygen

Cluster Admin: Interview Question

Cluster Administration
1 What is a Cluster
A cluster is two or more computers (called nodes or members) that work together to perform a task.
2 What are the types of cluster
Storage
High Availability
Load Balancing
High Performance
3 What is CMAN
CMAN is Cluster Manager. It manages cluster quorum and cluster membership.
CMAN runs on each node of a cluster
4 What is Cluster Quorum
Quorum is a voting algorithm used by CMAN.
CMAN keeps track of cluster quorum by monitoring the number of nodes in the cluster.
If more than half of the members of a cluster are active, the cluster is said to be in quorum.
If only half or fewer of the members are active, the cluster is said to be down and all cluster activities are stopped.
Quorum is defined as the minimum set of hosts required in order to provide service and is used to prevent split-brain situations.
The quorum algorithm used by the RHCS cluster is called “simple majority quorum”, which means that more than half of the hosts must be online and communicating in order to provide service.
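
For example, in a five-node cluster with one vote per node, quorum requires floor(5/2) + 1 = 3 votes, so the cluster keeps providing service with two failed nodes but stops if a third node fails.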
5 What is split-brain
It is a condition where two instances of the same cluster are running and trying to access same resource at the same time, resulting in corrupted cluster integrity
Cluster must maintain quorum to prevent split-brain issues
6 What is Quorum disk
In case of a 2 node cluster, quorum disk acts as a tie-breaker and prevents split-brain issue
If a node has access to network and quorum disk, it is active
If a node has lost access to network or quorum disk, it is inactive and can be fenced
A quorum disk, known as a qdisk, is a small partition on SAN storage used to enhance quorum. It generally carries enough votes to allow even a single node to take quorum during a cluster partition. It does this by using configured heuristics, that is, custom tests, to decide which node or partition is best suited for providing clustered services during a cluster reconfiguration.
7 What is RGManager
RGManager manages and provides failover capabilities for collections of cluster resources called services, resource groups, or resource trees.
In the event of a node failure, RGManager will relocate the clustered service to another node with minimal service disruption. You can also restrict services to certain nodes, such as restricting httpd to one group of nodes while mysql can be restricted to a separate set of nodes.
When the cluster membership changes, openais tells the cluster that it needs to recheck its resources. This causes rgmanager, the resource group manager, to run. It examines what changed and then starts, stops, migrates, or recovers cluster resources as needed.
Within rgmanager, one or more resources are brought together as a service. This service is then optionally assigned to a failover domain, a subset of nodes that can have preferential ordering.
8 What is Fencing
Fencing is the disconnection of a node from the cluster’s shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through the fence daemon, fenced.
Power fencing — A fencing method that uses a power controller to power off an inoperable node.
storage fencing — A fencing method that disables the Fibre Channel port that connects storage to an inoperable node.
Other fencing — Several other fencing methods that disable I/O or power of an inoperable node, including IBM Bladecenters, PAP, DRAC/MC, HP ILO, IPMI, IBM RSA II, and others.
9 How to manually fence an inactive node
# fence_ack_manual -n <node>
10 How to see a shared IP address (cluster resource) if ifconfig doesn't show it
# ip addr list
11 What is DLM
A lock manager is a traffic cop who controls access to resources in the cluster
As implied in its name, DLM is a distributed lock manager and runs in each cluster node; lock management is distributed across all nodes in the cluster. GFS2 and CLVM use locks from the lock manager.
12 What is Conga
This is a comprehensive user interface for installing, configuring, and managing Red Hat High Availability Add-On.
Luci — This is the application server that provides the user interface for Conga. It allows users to manage cluster services. It can be run from outside cluster environment.
Ricci — This is a service daemon that manages distribution of the cluster configuration. Users pass configuration details using the Luci interface, and the configuration is loaded in to corosync for distribution to cluster nodes. Luci is accessible only among cluster members.
13 What is OpenAis or Corosync
OpenAIS is the heart of the cluster. All other computers operate though this component, and no cluster component can work without it. Further, it is shared between both Pacemaker and RHCS clusters.
In Red Hat clusters, openais is configured via the central cluster.conf file. In Pacemaker clusters, it is configured directly in openais.conf.
14 What is ToTem
The totem protocol defines message passing within the cluster and it is used by openais. A token is passed around all the nodes in the cluster, and the timeout in fencing is actually a token timeout. The counter, then, is the number of lost tokens that are allowed before a node is considered dead.
The totem protocol supports something called ‘rrp’, the Redundant Ring Protocol. Through rrp, you can add a second backup ring on a separate network to take over in the event of a failure in the first ring. In RHCS, these rings are known as “ring 0” and “ring 1”.
15 What is CLVM
CLVM is ideal in that by using DLM, the distributed lock manager, it won’t allow access to cluster members outside of openais’s closed process group, which, in turn, requires quorum.
It is ideal because it can take one or more raw devices, known as “physical volumes”, or simply as PVs, and combine their raw space into one or more “volume groups”, known as VGs. These volume groups then act just like a typical hard drive and can be “partitioned” into one or more “logical volumes”, known as LVs. These LVs are where Xen’s domU virtual machines will exist and where we will create our GFS2 clustered file system.
16 What is GFS2
It works much like a standard filesystem, with user-land tools like mkfs.gfs2, fsck.gfs2, and so on. The major difference is that it and clvmd use the cluster’s distributed locking mechanism provided by the dlm_controld daemon. Once formatted, the GFS2-formatted partition can be mounted and used by any node in the cluster’s closed process group. All nodes can then safely read from and write to the data on the partition simultaneously.
17 What is the importance of DLM
One of the major roles of a cluster is to provide distributed locking on clustered storage. In fact, storage software can not be clustered without using DLM, as provided by the dlm_controld daemon and using openais’s virtual synchrony via CPG.
Through DLM, all nodes accessing clustered storage are guaranteed to get POSIX locks, called plocks, in the same order across all nodes. Both CLVM and GFS2 rely on DLM, though other clustered storage, like OCFS2, use it as well.
18 What is CCS_TOOL
We can use ccs_tool, the “cluster configuration system (tool)”, to push the new cluster.conf to the other nodes and upgrade the cluster’s configuration version in one shot:
# ccs_tool update /etc/cluster/cluster.conf
19 What is CMAN_TOOL
It is a cluster manager tool; it can be used to view the nodes and the status of the cluster:
# cman_tool nodes
# cman_tool status
20 What is clustat
clustat is used to see what state the cluster’s resources are in.
21 What is clusvcadm
clusvcadm is the tool used to manage resources in a cluster:
clusvcadm -e <service> -m <node> : Enable the <service> on the specified <node>. When a <node> is not specified, the local node where the command was run is assumed.
clusvcadm -d <service> : Disable the <service>.
clusvcadm -l <service> : Locks the <service> prior to a cluster shutdown. The only action allowed while a <service> is frozen is disabling it. This allows you to stop the <service> so that rgmanager doesn’t try to recover it (restart it, in our two services). Once quorum is dissolved and the cluster is shut down, the service is unlocked and returns to normal operation the next time the node regains quorum.
clusvcadm -u <service> : Unlocks a <service>, should you change your mind and decide not to stop the cluster.
22 What is luci_admin init
This command is run to create the Luci admin user and set its password.
# service luci start ; # chkconfig luci on
The default port for the Luci web server is 8084.

Linux File Systems: Ext2 vs Ext3 vs Ext4 vs Xfs

ext2, ext3 and ext4 are all filesystems created for Linux. This article explains the following:
High level difference between these filesystems.
How to create these filesystems.
How to convert from one filesystem type to another.

Ext2
Ext2 stands for second extended file system.
It was introduced in 1993. Developed by Rémy Card.
This was developed to overcome the limitation of the original ext file system.
Ext2 does not have journaling feature.
On flash drives and USB drives, ext2 is recommended, as it doesn't incur the overhead of journaling.
Maximum individual file size can be from 16 GB to 2 TB
Overall ext2 file system size can be from 2 TB to 32 TB
How to create an ext2 filesystem
# mke2fs /dev/sda1

Ext3
Ext3 stands for third extended file system.
It was introduced in 2001. Developed by Stephen Tweedie.
Starting from Linux Kernel 2.4.15 ext3 was available.
The main benefit of ext3 is that it allows journaling.
Journaling has a dedicated area in the file system, where all the changes are tracked. When the system crashes, the possibility of file system corruption is less because of journaling.
Maximum individual file size can be from 16 GB to 2 TB
Overall ext3 file system size can be from 2 TB to 32 TB
There are three types of journaling available in ext3 file system.
Journal – Metadata and content are saved in the journal.
Ordered – Only metadata is saved in the journal. Metadata are journaled only after writing the content to disk. This is the default.
Writeback – Only metadata is saved in the journal. Metadata might be journaled either before or after the content is written to the disk.
You can convert an ext2 file system to an ext3 file system directly (without backup/restore).

How to create ext3 file system :-
# mkfs.ext3 /dev/sda1
(or)
# mke2fs -j /dev/sda1
( -j for adding journaling capability )
How to convert ext2 to ext3 :-
# umount /dev/sda2
# tune2fs -j /dev/sda2
# mount /dev/sda2 /var

Ext4
Ext4 stands for fourth extended file system.
It was introduced in 2008.
Starting from Linux kernel 2.6.19 ext4 was available as a development version (ext4dev); it was declared stable in kernel 2.6.28.
Supports huge individual file size and overall file system size.
Maximum individual file size can be from 16 GB to 16 TB
Overall maximum ext4 file system size is 1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB = 1024 TB (terabyte).
Directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3)
You can also mount an existing ext3 fs as ext4 fs (without having to upgrade it).
Several other new features were introduced in ext4: multiblock allocation, delayed allocation, journal checksumming, fast fsck, etc. All you need to know is that these new features improve the performance and reliability of the filesystem compared to ext3.
In ext4, you also have the option of turning the journaling feature “off”.
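Turning journaling off is done with tune2fs on an unmounted file system. A sketch, assuming /dev/sda1 holds an ext4 file system:
# umount /dev/sda1
# tune2fs -O ^has_journal /dev/sda1    (the ^ prefix clears the feature)
# e2fsck -f /dev/sda1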

Creating ext4 file system :-
# mkfs.ext4 /dev/sda1
(or)
# mke2fs -t ext4 /dev/sda1
Converting ext3 to ext4
( Warning :- Never try this on live or production servers )
# umount /dev/sda2
# tune2fs -O extents,uninit_bg,dir_index /dev/sda2
# e2fsck -pf /dev/sda2
# mount /dev/sda2 /var

Find your server's filesystem type
We can find the filesystem type used on our servers with any one of the following commands.
# mount
/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
# file -sL /dev/sda1
/dev/sda1: Linux rev 1.0 ext3 filesystem data (needs journal recovery)
# df -T | awk '{print $1,$2,$7}' | grep "^/dev"
/dev/sda3 ext3 /
/dev/sda1 ext3 /boot
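Another quick check is blkid, which prints the detected TYPE of a device (illustrative output; the UUID shown is hypothetical):
# blkid /dev/sda1
/dev/sda1: UUID="4f25d8b2-..." TYPE="ext3"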

XFS

XFS is a high-performance file system which was designed by SGI for their IRIX platform. Since XFS was ported to the Linux kernel in 2001, it has remained a preferred choice for many enterprise systems, especially those with massive amounts of data, due to its high performance, architectural scalability and robustness. For example, RHEL/CentOS 7 and Oracle Linux have adopted XFS as their default file system, and SUSE/openSUSE have long been avid supporters of XFS.

XFS has a number of unique features that make it stand out among the file system crowd, such as scalable/parallel I/O, journaling for metadata operations, online defragmentation, suspend/resume I/O, delayed allocation for performance, etc.

If you want to create and mount an XFS file system on your Linux platform, here is how to do it.

XFS is packed full of cool features like guaranteed-rate I/O, online resizing and built-in quota enforcement, and it can theoretically support filesystems up to 8 exabytes in size. It has been used on Linux since about 2001, and is available as an install option on many popular Linux distributions. With variable block sizes, you can tune your system like a sliding scale, trading space efficiency against read performance.

Best for extremely large file systems, large files, and lots of files
Journaled (an asymmetric parallel cluster file system version is also available)
POSIX extended access controls

The XFS file system is Open Source and included in major Linux distributions. It originated from SGI (IRIX) and was designed specifically for large files and large volume scalability. Video and multimedia files are best handled by this file system. Scaling to petabyte volumes, it also handles large amounts of data. It is one of the few filesystems on Linux which support Data Migration (SGI contributed the Hierarchical Storage Management interfaces to the Linux kernel a number of years ago). SGI also offers a closed-source cluster parallel version of XFS called cXFS which, like cVxFS, is an asymmetrical model. It has the unique feature, however, that its slave nodes can run on Unix, Linux and Windows, making it a cross-platform file system. Its master node must run on SGI hardware.

Recommended Use: If you really like to tweak your system to meet your needs, XFS is a great way to go.

XFS is an extent-based, high-performance, 64-bit journaling file system. Support for XFS was merged into the Linux kernel around 2002, and in 2009 Red Hat Enterprise Linux 5.4 added support for the XFS file system.
Now RHEL 7.0 uses XFS as the default file system.

XFS supports a maximum file system size of 8 exbibytes for 64-bit file systems. Two known drawbacks of XFS are that the file system cannot be shrunk, and that performance is poor when deleting large numbers of files.

Limit          32-bit system    64-bit system
File size:     16 Terabytes     16 Exabytes
File system:   16 Terabytes     18 Exabytes

Creating an XFS file system

#fdisk /dev/sdb    (create the partition)
#mkfs.xfs -f /dev/sdb1
#mount -t xfs /dev/sdb1 /storage
#df -Th /storage
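Since an XFS file system cannot be shrunk, growing is the only resize operation; it is done online with xfs_growfs against the mount point. A sketch, assuming the underlying device has been enlarged first:
#xfs_growfs /storage    (grow the file system to fill the device)
#df -Th /storage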

Yum repository

Yum Repository (Yellowdog Updater, Modified)

1) mount /dev/cdrom /mnt

2)mkdir /data

3)rsync -prav /mnt/CentOS/ /data

4)rpm -ivh /mnt/CentOS/createrepo-0.4.11-3.el5.noarch.rpm

5)createrepo /data

6)cd /etc/yum.repos.d/

7)touch local.repo

8)vi local.repo
Entry:
[local]
name=local
baseurl=file:///data/
enabled=1
gpgcheck=0
:wq

9)yum install samba
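Finally, to confirm that yum is resolving packages from the new local repository, you can clear the cache and list the enabled repos; the [local] repo defined above should appear in the output:
yum clean all
yum repolist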

How to Configure a Primary DNS Server in Red Hat 6 Step by Step

Domain Name Server (DNS) Configuration and Administration

Domain Name System
The Domain Name System (DNS) is the crucial glue that keeps computer networks in harmony by converting human-friendly hostnames to the numerical IP addresses computers require to communicate with each other. DNS is one of the largest and most important distributed databases the world depends on, serving billions of DNS requests daily for public IP addresses. Most public DNS servers today are run by larger ISPs and commercial companies, but private DNS servers can also be useful for private home networks. This article will explain how to configure a primary DNS server on Red Hat Enterprise Linux 6, step by step.

To Check IP
[root@www Desktop]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:84:6D:8C
inet addr:10.90.12.1 Bcast:10.90.12.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe84:6d8c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6624 errors:0 dropped:0 overruns:0 frame:0
TX packets:1474 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:442710 (432.3 KiB) TX bytes:1901220 (1.8 MiB)
Interrupt:19 Base address:0x2000

eth1 Link encap:Ethernet HWaddr 00:0C:29:84:6D:96
inet addr:10.23.151.66 Bcast:10.23.159.255 Mask:255.255.224.0
inet6 addr: fe80::20c:29ff:fe84:6d96/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13927 errors:0 dropped:0 overruns:0 frame:0
TX packets:7518 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9215651 (8.7 MiB) TX bytes:948169 (925.9 KiB)
Interrupt:19 Base address:0x2080

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:480 (480.0 b) TX bytes:480 (480.0 b)

To Set DNS Server IP
vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
HWADDR=00:0c:29:84:6d:8c
NM_CONTROLLED=no
ONBOOT=yes
IPADDR=10.90.12.1
BOOTPROTO=none
NETMASK=255.255.255.0
DNS1=10.90.12.1
TYPE=Ethernet
IPV6INIT=no
USERCTL=no

save :wq

To Set Host Name
[root@station Desktop]# vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=station.example.com

save :wq

[root@station Desktop]# vim /etc/hosts
10.90.12.1 station.example.com station

save :wq

[root@station Desktop]# vim /etc/resolv.conf
search station.example.com
nameserver 10.90.12.1

save :wq

[root@station Desktop]# hostname station.example.com

[root@station Desktop]# hostname
station.example.com

To Install Package
[root@station Desktop]# yum install bind*
Loaded plugins: fastestmirror, refresh-packagekit, security
Repository 'yum' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Setting up Install Process
Package 32:bind-utils-9.7.3-8.P3.el6.i686 already installed and latest version
Package 32:bind-libs-9.7.3-8.P3.el6.i686 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package bind.i686 32:9.7.3-8.P3.el6 will be installed
---> Package bind-chroot.i686 32:9.7.3-8.P3.el6 will be installed
---> Package bind-dyndb-ldap.i686 0:0.2.0-7.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
bind i686 32:9.7.3-8.P3.el6 yum 3.9 M
bind-chroot i686 32:9.7.3-8.P3.el6 yum 67 k
bind-dyndb-ldap i686 0.2.0-7.el6 yum 49 k

Transaction Summary
================================================================================
Install 3 Package(s)

Total download size: 4.0 M
Installed size: 7.1 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): bind-9.7.3-8.P3.el6.i686.rpm | 3.9 MB 00:00
(2/3): bind-chroot-9.7.3-8.P3.el6.i686.rpm | 67 kB 00:00
(3/3): bind-dyndb-ldap-0.2.0-7.el6.i686.rpm | 49 kB 00:00
--------------------------------------------------------------------------------
Total 20 MB/s | 4.0 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
Installing : 32:bind-9.7.3-8.P3.el6.i686 1/3
Installing : 32:bind-chroot-9.7.3-8.P3.el6.i686 2/3
Installing : bind-dyndb-ldap-0.2.0-7.el6.i686 3/3

Installed:
bind.i686 32:9.7.3-8.P3.el6 bind-chroot.i686 32:9.7.3-8.P3.el6
bind-dyndb-ldap.i686 0:0.2.0-7.el6

Complete!
[root@station Desktop]#

To Copy named.conf file
[root@station Desktop]# cp /etc/named.conf /var/named/chroot/etc/named.conf

To Change directory
cd /var/named/chroot/etc/

To edit configuration file
[root@station etc]#vim named.conf
options {
directory "/var/named";
};

zone "example.com" IN {
type master;
file "for.zone";
};

zone "12.90.10.in-addr.arpa" IN {
type master;
file "rev.zone";
};

save :wq

To Change Group Name
[root@station etc]# chgrp named named.conf

To Copy File same Location
[root@station etc]# cp /var/named/named.localhost /var/named/chroot/var/named/for.zone
[root@station etc]# cp /var/named/named.loopback /var/named/chroot/var/named/rev.zone

To change directory
[root@station etc]# cd /var/named/chroot/var/named/

To edit configuration file
[root@station named]# vim for.zone
$TTL 1D
@ IN SOA example.com. root.example.com. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS station.example.com.
station IN A 10.90.12.1

save :wq

To edit configuration file
[root@station named]# vim rev.zone
$TTL 1D
@ IN SOA example.com. root.example.com. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS station.example.com.
1 IN PTR station.example.com.

save :wq

To Change Group Name
[root@station named]# chgrp named for.zone
[root@station named]# chgrp named rev.zone
[root@station named]# ll
total 8
-rw-r-----. 1 root named 190 Jun 1 19:12 for.zone
-rw-r-----. 1 root named 196 Jun 1 19:15 rev.zone
[root@station named]#
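
To Validate Configuration
Before restarting the service, the configuration and zone files can be validated with the BIND check tools; a sketch using the paths from this setup:
[root@station named]# named-checkconf /var/named/chroot/etc/named.conf
[root@station named]# named-checkzone example.com /var/named/chroot/var/named/for.zone
[root@station named]# named-checkzone 12.90.10.in-addr.arpa /var/named/chroot/var/named/rev.zone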

To Restart Service & On
[root@station named]# service named restart
Stopping named: [ OK ]
Starting named: [ OK ]

[root@station named]# chkconfig named on

To Check Named Server
[root@station named]# dig 10.90.12.1

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-8.P3.el6 <<>> 10.90.12.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 23819
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;10.90.12.1. IN A

;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012060501 1800 900 604800 86400

;; Query time: 193 msec
;; SERVER: 113.193.1.14#53(113.193.1.14)
;; WHEN: Fri Jun 1 19:17:27 2012
;; MSG SIZE rcvd: 103
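
Note that dig treats "10.90.12.1" as a hostname, which is why the query above returns NXDOMAIN. To test the reverse zone configured earlier, use the -x option instead; given the rev.zone above, this should return a PTR record for station.example.com:
[root@station named]# dig -x 10.90.12.1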

[root@station named]# dig station.example.com

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-8.P3.el6 <<>> station.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24133
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;station.example.com. IN A

;; ANSWER SECTION:
station.example.com. 86400 IN A 10.90.12.1

;; AUTHORITY SECTION:
example.com. 86400 IN NS station.example.com.

;; Query time: 1 msec
;; SERVER: 10.90.12.1#53(10.90.12.1)
;; WHEN: Fri Jun 1 19:17:47 2012
;; MSG SIZE rcvd: 67

[root@station named]#

Client end Setting

[admin@station1]$ vim /etc/resolv.conf

search station.example.com
nameserver 10.90.12.1
Save :wq
[admin@station1]$ dig station.example.com

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-8.P3.el6 <<>> station.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24133
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;station.example.com. IN A

;; ANSWER SECTION:
station.example.com. 86400 IN A 10.90.12.1

;; AUTHORITY SECTION:
example.com. 86400 IN NS station.example.com.

;; Query time: 1 msec
;; SERVER: 10.90.12.1#53(10.90.12.1)
;; WHEN: Fri Jun 1 19:17:47 2012
;; MSG SIZE rcvd: 67
Enjoy!