
Linux Interview Questions

1        Which command is used to check the number of files, the disk space used, and each user's defined quota

  • repquota; it shows the filesystem, number of blocks used, soft and hard block limits, number of files used, and soft and hard file limits

2        What is the name and path of the main system log

  • /var/log/messages (syslog)

3        Which command is used to review boot messages

  • dmesg, used as dmesg | more or dmesg | grep Memory, etc.

4        Which utility is used to automate rotation of logs

  • logrotate (/etc/logrotate.conf and /etc/logrotate.d)

5        What are the fields in the /etc/passwd file

  • Username, masked password (x), UID, GID, comment (GECOS), home directory, default shell

6        Which commands are used to set a processor-intensive job to use less CPU time

  • nice (and renice), used to set the scheduling priority of processes; -20 is the highest priority and 19 the lowest.
  • top can also be used for this: press r, then enter the PID and the new nice value.
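
A quick sketch of both approaches in a POSIX shell (renice may be refused to non-root users in some cases, so the example tolerates failure):

```shell
# Start a command at reduced priority (nice value 10); only root may
# use negative (higher-priority) values.
nice -n 10 sh -c 'echo "running at nice 10"'

# The NI column shows the nice value of the current shell.
ps -o pid,ni,comm -p $$

# Raise the nice value of an existing process (the current shell here).
# Unprivileged users may only increase it, so failures are tolerated.
renice -n 5 -p $$ || true
```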

7        How do you create a new user account

  • useradd -d /home/newuser -s /bin/ksh -c "New User" newuser

8        Which shell account do you assign to a POP3 mail-only account

  • /sbin/nologin

9        Which daemon is responsible for tracking events on a Linux system

  • syslogd; it logs events to /var/log/messages

10    Which daemon is used for scheduling commands

  • crond; jobs are scheduled with the crontab -e command

11    Which environment setting controls the file permissions automatically given to newly created files

  • umask; umask 000 gives the fullest permissions to newly created files, while umask 777 gives the least.
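
A small demonstration of how the umask bits are subtracted from the creation defaults (a sketch using GNU stat on Linux):

```shell
tmp=$(mktemp -d) && cd "$tmp"

# New files start from mode 666 and directories from 777;
# the umask bits are masked out of those defaults.
umask 022            # clear the group/other write bits
touch f1; mkdir d1
stat -c '%a %n' f1 d1    # 644 f1 and 755 d1

umask 077            # clear all group/other bits
touch f2
stat -c '%a %n' f2       # 600 f2
```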

12    Which key combination can you press to suspend a running job and place it in the background

  • Ctrl+z

13    What file would you edit in your home directory to change the default window manager

  • ~/.xinitrc

14    Which command can split long text files into smaller ones

  • split; it divides a file into pieces of equal size (by lines or by bytes)
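
For example (a sketch; the names after the part_ prefix are generated by split itself):

```shell
tmp=$(mktemp -d) && cd "$tmp"
seq 1 10 > numbers.txt

# -l splits by line count (use -b for bytes); "part_" is the prefix.
split -l 4 numbers.txt part_
ls part_*                    # part_aa part_ab part_ac

# Concatenating the pieces reproduces the original file.
cat part_* > rejoined.txt
cmp numbers.txt rejoined.txt && echo "pieces match the original"
```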

15    What is pwconv

  • The pwconv command creates /etc/shadow and replaces every password field in /etc/passwd with x

16    What is page in, page out, swap in, swap out

  • Page-ins and page-outs are pages moved in and out between RAM and disk
  • Swap-ins and swap-outs are whole processes moved in and out between RAM and disk
  • page-out = the system's free memory falls below a threshold ("lotsfree") and the vhand daemon uses the LRU (Least Recently Used) algorithm to move unused / least-used pages to the swap area.
    page-in = a running process requests a page that is not currently in memory (a page fault), and the daemon brings its pages back into memory.
  • They are similar in function to any other operating system: when a particular page is requested but is not present in main memory, a page fault occurs and that page is "paged in" to main memory. Similarly, pages that have been inactive for a while are "paged out" to page data sets on the auxiliary memory (swap).
  • swap-out = the system is thrashing and the swapper daemon has deactivated a process; its memory pages are moved into the swap area.
    swap-in = a deactivated process is back at work and its pages are being brought into memory.
  • Swapping moves a process's entire collection of in-memory data to a range of space on the backing store, often a swap file or swap partition. The process goes from being in memory to swapped out entirely; there is no in-between.
  • Swapping occurs when a whole process is transferred to disk, while paging transfers only part of a process while the rest stays in physical memory.
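
On Linux these counters can be inspected directly; a sketch (the paths are standard procfs files, while vmstat is part of the procps package and may not be installed everywhere):

```shell
# Total and free swap space
grep -E '^Swap(Total|Free)' /proc/meminfo

# Cumulative page-in/page-out (pgpgin/pgpgout) and
# swap-in/swap-out (pswpin/pswpout) counters since boot
grep -E '^(pgpgin|pgpgout|pswpin|pswpout) ' /proc/vmstat

# vmstat's si/so columns report swap-ins/swap-outs per second
vmstat 1 2 || true
```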

17    What is the tee command used for

  • It reads standard input and copies it to standard output while also saving the contents to a file
    sort inputfile.txt | tee outputfile.txt
    echo "Hello, I am output" | tee outputfile.txt
    who | tee userlist.txt
  • It can also write several files at the same time (-a appends instead of overwriting)
    date | tee -a file1 file2 file3

18    What are the $? and $! system variables

  • echo $? → shows zero if the last executed command was successful (otherwise its non-zero exit status)
  • echo $! → shows the PID of the most recently started background job
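
For instance (a sketch in plain sh):

```shell
true
echo $?                                    # 0: the last command succeeded

sh -c 'exit 3' || echo "exit status: $?"   # 3: status of the failed command

sleep 1 &
bgpid=$!
echo "last background PID: $bgpid"
wait "$bgpid"                              # block until the job finishes
```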

19    What is difference between find and grep

  • find is used to search for / locate files (by name, type, size, owner, etc.)
  • grep is used to search for a pattern inside files
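
A small illustration of the difference (a sketch; the file names are made up for the example):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir logs
echo "ERROR disk full" > logs/app.log
echo "all good"        > logs/notes.txt

# find locates files by name/attributes:
find . -name '*.log'                 # ./logs/app.log

# grep searches for a pattern inside files:
grep -r "ERROR" .                    # ./logs/app.log:ERROR disk full

# Combined: run grep only on the files that find selects
find . -name '*.log' -exec grep -l "ERROR" {} \;
```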

20    What are differences between Hard and Soft links

  • A hard link is another name (directory entry) for the original file, not a separate copy.
  • Hard links share the same inode.
  • Any change made through the original or the hard-linked name is reflected in the other.
  • If you delete either name, the other continues to work.
  • Hard links can’t cross file systems.
  • Soft Link is a symbolic link to the original file.
  • Soft Links will have a different Inode value.
  • A soft link points to the original file.
  • If you delete the original file, the soft link fails.
  • If you delete the soft link, nothing will happen.
  • Soft links can cross file systems.
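
The points above can be verified directly (a sketch using GNU stat's %i inode format):

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo "original data" > file.txt

ln    file.txt hard.txt        # hard link: another name, same inode
ln -s file.txt soft.txt        # soft link: its own inode, points by name

stat -c '%i %n' file.txt hard.txt soft.txt   # hard.txt shares the inode

rm file.txt
cat hard.txt                   # still readable through the hard link
cat soft.txt || echo "soft link is now dangling"
```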

21    Which file defines the level of logs written to system log

  • kernel.h defines the kernel log levels (KERN_EMERG through KERN_DEBUG) used by printk; which levels syslogd actually writes to the log files is configured in /etc/syslog.conf

22    Describe the boot process of Linux

  • BIOS (Basic Input/Output System) Loads from BIOS chip on motherboard
  • POST (Power On Self Test) Checks all connected devices
  • BIOS checks for Boot device availability
  • BIOS loads MBR (Master Boot Record) in Memory (which is first 512 bytes of primary disk)
  • MBR contains information about Boot Loader. MBR loads default boot loader i.e. GRUB
  • GRUB loads the operating system kernel (vmlinuz)
  • Here onwards Kernel controls booting process
  • Kernel mounts the initrd (Initial RAM Disk), which contains preloaded drivers for hardware
  • After loading drivers from the initrd, partitions are mounted (read-only)
  • Init process is started, it becomes first process of system (PID = 1)
  • INIT will mount root and other partitions(read/write) and does FSCK
  • INIT sets up System Clock and Hostname, etc
  • Based on runlevel it will load the services and startup scripts
  • Finally, it will run rc.local script
  • Now the Login Prompt will appear

23    What is DORA Process

  • DORA (Discover, Offer, Request, Acknowledge) is the process by which a client acquires a DHCP IP address

24    What is output of Kill -3 <PID> and Kill -0

  • kill -3 <PID> is used to take a thread dump of a running Java process (SIGQUIT)
  • kill -0 <PID> sends no signal at all; it only checks whether the process exists and whether you have permission to signal it (exit status 0 if both are true)

25    What is difference between Kill and Kill -9 command

  • kill <PID> → generates a SIGTERM signal requesting the process to terminate
  • kill -9 <PID> → generates a SIGKILL signal telling the kernel to terminate the process immediately
  • kill -9 force-kills a process because SIGKILL cannot be caught or ignored by the process

26    What is VLAN

  • Virtual LAN, a broadcast domain created by switches. With VLANs, a single switch can be divided into multiple broadcast domains. Separating large broadcast domains into smaller ones improves performance.

27    What are hard and soft mount

  • Hard mount is used to mount local filesystem. The filesystem will be in the mounted state until you unmount it manually.
  • Soft mount is an option that is very useful for mounting network filesystems(NFS). Soft mount will allow automatic unmount if the filesystem is idle for a specified time period.
  • NFS supports two types of mounts — hard mounts and soft mounts. If a mount is a hard mount, an NFS request affecting any part of the mounted resource is issued repeatedly until the request is satisfied (for example, the server crashes and comes back up at a later time). When a mount is a soft mount, an NFS request returns an error if it cannot be satisfied (for example, the server is down), then quits.
  • Hard mount ensures data integrity and soft mount causes data loss if NFS server is unreachable.
  • Soft mount improves performance and Hard mount improves reliability

28    What is PS1 in Linux

  • Bash supports 4 prompts:
    PS1 – the default prompt
    PS2 – for multi-line input
    PS3 – printed for the select command
    PS4 – printed before output if set -x is set
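
PS1 changes are visible only at an interactive prompt, but PS4 can be demonstrated in a script (a sketch; the TRACE prefix is arbitrary):

```shell
# PS1 example (takes effect at the next interactive bash prompt):
PS1='\u@\h:\w\$ '

# PS4 prefixes every trace line once `set -x` is enabled:
PS4='TRACE: '
set -x
echo "hello"
set +x
```

With tracing enabled, the shell prints `TRACE: echo hello` on standard error before running the command.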

29    What is the difference between a daemon and a server process

  • A daemon (Disk and Execution Monitor) is a software process that runs in the background (continuously) and provides the service to client upon request. For example named is a daemon. When requested it will provide DNS service.
    Other examples are:
    * xinetd (it is a super-daemon, it is responsible for invoking other Internet servers when they are needed)
    * inetd (same as xinetd, but with limited configuration options)
    * sendmail/postfix (to send/route email)
    * Apache/httpd (web server)
  • Running one standalone daemon for each service could significantly increase the load on a small system. However, if you are running a big site (with many users), a dedicated daemon is advisable, for example for the web server or the MySQL database server.
  • A server process runs one time, when called by a daemon. Once done it will stop. For example telnetd (in.telnetd) or ftpd called from xinetd/inetd daemon. By calling server process from daemon you can save the load and memory. Use a server process for small services such as ftpd, telnetd

30    Where is kernel located in Linux

  • The kernel file is stored in /boot with the name vmlinuz
  • When Linux OS is running, kernel is loaded into memory

31    Explain configure, make and make install

  • ./configure
  • The above command makes the shell run the script named 'configure' in the current directory. The configure script checks details about the machine on which the software is going to be installed, including a long list of dependencies: for the software to work properly, many things may need to exist on your machine already. When you run the configure script you see a lot of output on the screen, each line being some sort of check with a yes/no result. If any major requirement is missing, the configure script exits and you cannot proceed with the installation until you provide it.
  • The main job of the configure script is to create a ‘ Makefile ‘ . This is a very important file for the installation process. Depending on the results of the tests (checks) that the configure script performed it would write down the various steps that need to be taken (while compiling the software) in the file named Makefile.
  • If you get no errors and the configure script runs successfully (if there is any error the last few lines of the output would glaringly be stating the error) then you can proceed with the next command which is
  • make
  • 'make' is a utility which exists on almost all Unix systems. For the make utility to work it requires a file named Makefile in the directory in which you run make. As we have seen, the configure script's main job is to create the Makefile to be used with the make utility. (Sometimes the Makefile is named makefile instead.)
  • make would use the directions present in the Makefile and proceed with the installation. The Makefile indicates the sequence that Linux must follow to build various components / sub-programs of your software. The sequence depends on the way the software is designed as well as many other factors.
  • The Makefile actually has a lot of labels (sort of names for different sections). Hence depending on what needs to be done the control would be passed to the different sections within the Makefile or it is possible that at the end of one of the section there is a command to go to some next section.
  • Basically the make utility compiles all your program code and creates the executables. A particular part of the program may require some other part to be built first; the Makefile sets the sequence of events so that the build does not complain about missing dependencies.
  • One of the labels present in the Makefile happens to be named ‘install’.
  • If make ran successfully then you are almost done with the installation. Only the last step remains which is
  • make install
  • As indicated before make uses the file named Makefile in the same directory. When you run make without any parameters, the instruction in the Makefile begin executing from the start and as per the rules defined within the Makefile (particular sections of the code may execute after one another.. that’s why labels are used.. to jump from one section to another). But when you run make with install as the parameter, the make utility searches for a label named install within the Makefile, and executes only that section of the Makefile.
  • The install section happens to be the part where the executables and other required files created during the last step (i.e. make) are copied into their final directories on your machine. E.g. the executable that the user runs may be copied to /usr/local/apache2 so that all users are able to run the software. Similarly all the other files are copied to the standard directories in Linux. Remember that when you ran make, all the executables were created in the temporary directory where you had unzipped your original tarball. When you run make install, these executables are copied to the final directories.

32    What is LD_LIBRARY_PATH

  • LD_LIBRARY_PATH is an environment variable you set to give the run-time shared-library loader (ld.so) an extra set of directories to search for shared libraries; it is useful when debugging a new or non-standard library. Multiple directories can be listed, separated with a colon (:). This list is prepended to the existing list of compiled-in loader paths for a given executable, and any system default loader paths.

33    Explain RSync

  • rsync utility is used to synchronize the files and directories from one location to another in an effective way. Backup location could be on local server or on remote server.
  • # rsync <options> <source> <destination>

i)        -z is to enable compression

ii)       -a archive (recursive, preserve symbolic links, permissions, timestamps, owner and group)

iii)     -l copy symbolic links as well

iv)     -h output numbers in human readable format

v)      -v verbose

vi)     -r indicates recursive

vii)   -u Update (do not overwrite)

viii)  -d sync only directory structure(not the files)

ix)     -i only displays difference in source and destination

x)      --progress to view progress during transfer

xi)     --delete to delete files not present at source but present at destination

xii)   --exclude to exclude a file, directory, pattern or RELATIVE path

xiii)  --exclude-from <FileName> to exclude files/directories listed in FileName

xiv) --max-size not to transfer files larger than this limit

34    How to enable password-less authentication among two linux servers

  • Generate key on server1

i)        # ssh-keygen

  • copy public key to server2

i)        # ssh-copy-id -i ~/.ssh/id_rsa.pub <remote-server>

35    How to create users in Linux

  • Using useradd command
  • To see all the defaults of useradd command
  • # useradd -D

i)        GROUP=100

ii)       HOME=/home

iii)     INACTIVE=-1

iv)     EXPIRE=

v)      SHELL=/bin/bash

vi)     SKEL=/etc/skel

vii)   CREATE_MAIL_SPOOL=yes

  • Modify defaults of useradd
  • # useradd -D -s /bin/ksh
  • # useradd -D

i)        GROUP=100

ii)       HOME=/home

iii)     INACTIVE=-1

iv)     EXPIRE=

v)      SHELL=/bin/ksh

vi)     SKEL=/etc/skel

vii)   CREATE_MAIL_SPOOL=yes

  • Create customized users using useradd
  • # useradd -s <shell> -m -d <home> -g <primary group> username

i)        -s = shell

ii)       -m = create home directory, if not exists

iii)     -d = where to create home directory

iv)     -g = GID or name of the user's initial (primary) group

  • Adduser command
  • # adduser <username>
  • Creating n number of users
  • # newusers <file containing list of users>

36    How to define Password expiry

  • To see current settings for password age policy
  • # chage -l <user>

i)        Last password change                                    : Apr 01, 2009

ii)       Password expires                                        : never

iii)     Password inactive                                       : never

iv)     Account expires                                         : never

v)      Minimum number of days between password change          : 0

vi)     Maximum number of days between password change          : 99999

vii)   Number of days of warning before password expires       : 7

  • Set the maximum password age for a user using the -M option
  • # chage -M 10 <user>
  • This will change ‘password expires’ and ‘Max number of days between password change’
  • Set password expiry date for a user using -E option (YYYY-MM-DD)
  • # chage -E "2012-12-31" <user>
  • Set the user account to be locked after X days of inactivity
  • # chage -I 10 <user>
  • This will change ‘password inactive’
  • Force user to change password upon next logon
  • # chage -d 0 <user>

37     What is the use of login.defs

  • /etc/login.defs file contains defaults for a new user. Various options in login.defs file are

i)        MAIL_DIR /var/spool/mail

ii)       PASS_MAX_DAYS   99999

iii)     PASS_MIN_DAYS   0

iv)     PASS_MIN_LEN    5

v)      PASS_WARN_AGE   7

vi)     UID_MIN                   500

vii)   UID_MAX                 60000

viii)  GID_MIN                   500

ix)     GID_MAX                 60000

x)      CREATE_HOME     yes

xi)     UMASK           077

xii)   USERGROUPS_ENAB yes

xiii)  MD5_CRYPT_ENAB yes

38    What is the use of limits.conf

  • /etc/security/limits.conf file is used to describe limits for a user/group
  • Add session required /lib/security/pam_limits.so in /etc/pam.d/login
  • Limits defined in limits.conf

i)        core – limits the core file size KB

ii)       data – max data size KB

iii)     fsize – max file size KB

iv)     nofile – max number of open files

v)      cpu – max CPU time (Mins)

vi)     nproc – max number of process

vii)   maxlogins – max number of logins for this user

viii)  maxsyslogins – max number of logins on the system

ix)     priority – the priority to run user process with

x)      locks – max number of file locks the user can hold

xi)     nice – max nice priority allowed to raise to

  • ex.

i)        @students soft nproc 10

ii)       @students hard nproc 20

39     What is RAID and explain different RAID levels used

  • RAID is Redundant Array of Inexpensive Disks. It improves performance, redundancy and flexibility
  • RAID 0 = Striping

i)        Two or more disks

ii)       Data is broken into equal size chunks and distributed over all disks

iii)     Performance is improved because of simultaneous read and write disk operations

iv)     No fault tolerance (no redundancy)

v)      Suitable for intensive i/o tasks

vi)     Total size = sum of disks used

vii)   Two 80G disk = 160×1 = 160G available disk in RAID 0 (Space efficiency =1)

  • RAID 1 = Mirroring

i)        Two or more disks

ii)       Data is duplicated to disks simultaneously

iii)     Performance remains same

iv)     Provides fault tolerance if one disk fails, Redundancy increases

v)      Suitable for non-intensive i/o tasks

vi)     Total size = Size of smallest disk used

vii)   Two 80G disks = 160×1/2 = 80G available disk in RAID 1 (Space efficiency = 1/n = 1/2)

  • RAID 4 = Striping with dedicated Parity Disk

i)        Three or more disks

ii)       Data is broken into stripes and distributed over two disks

iii)     Parity bit is stored only in third disk i.e. Parity Disk

iv)     Performance also depends on performance of Parity Disk

v)      Provides fault tolerance if one disk fails

vi)     Suitable for intensive i/o tasks

vii)   3x80G disk = 240×2/3 = 160G available disk in RAID 4 (Space efficiency = 1-1/n = 1-1/3 = 2/3)

  • RAID 5 = Striping with distributed Parity

i)        Three or more disks

ii)       Data is broken into stripes and distributed over three disks

iii)     Parity bit is also distributed over three disks

iv)     Performance is improved with simultaneous i/o on three disks

v)      Provides fault tolerance if one disks fails

vi)     Suitable for intensive i/o tasks

vii)   3x80G disk = 240×2/3 = 160G available disk in RAID 5 (Space efficiency = 1-1/n = 1-1/3 = 2/3)

40     How to boot client with Kick Start file

  • Boot: linux ks=http://server.com/path/kickstart.cfg
  • Boot: linux ks=nfs:server.com:/path/kickstart.cfg

41    How to setup Kick Start server

  • Install DHCP and configure it
  • Install system-config-kickstart
  • Run system-config-kickstart
  • Provide answers to question in installation wizard
  • Save the file in NFS/HTTP path
  • Add the names of the groups and packages that need to be pre-installed on the remote server at the bottom of the file

42    How to check system boot / reboot time

  • # last reboot
  • # last shutdown
  • # who -b
  • # uptime
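
For example (last and who read /var/log/wtmp and /var/run/utmp, which may be empty in containers, hence the guards):

```shell
# Most recent reboot records
last reboot | head -3 || true

# Time of the last system boot
who -b || true

# Uptime, logged-in users and load averages in one line
uptime
```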

43    What is difference between ext2 and ext3 file systems

  • Ext3 supports journaling whereas ext2 doesn’t.
  • Journal is a type of log file which tracks all the file system changes
  • So that you can recover data in case of filesystem crash
  • Journal contains ‘metadata’ i.e. ownership, date stamp information etc

44    How to extend LVM with 2GB space (add 2GB)

  • # lvextend -L +2G <LVNAME>
  • # resize2fs <LVNAME>

45    How to extend LVM to a final of 2GB space

  • # lvextend -L 2G <LVNAME>
  • # resize2fs <LVNAME>

46    How do you check hardware errors in Linux

  • dmesg
  • /var/log/messages
  • dmidecode -t system
  • IML (Integrated Management Logs) – An iLO console feature
  • hpacucli – To check RAID array status
  • use grep or less commands on
  • /var/log/messages and /var/log/warn
  • /var/log/mcelog

47    How do you find BIOS version from Linux Command

  • # dmidecode --type 0

48    What is dmidecode command

  • dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-readable format. This table contains a description of the system's hardware components, as well as other useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can retrieve this information without having to probe for the actual hardware.

49    How do you find out server architecture

  • # uname -a
  • # arch

50    How to perform automatic reboot after kernel panic (10seconds)

  • # cat /proc/sys/kernel/panic
  • # sysctl -a | grep kernel.panic

i)        kernel.panic = 0

  • # echo "10" > /proc/sys/kernel/panic
  • # cat /etc/sysctl.conf | grep kernel.panic

i)        kernel.panic = 10

51    What are the general causes of kernel panic

  • Defective or incompatible RAM
  • Incompatible, obsolete, or corrupted kernel extensions.
  • Incompatible, obsolete, or corrupted drivers.
  • Incorrect permissions on System-related files or folders.
  • Hard disk corruption, including bad sectors, directory corruption, and other hard-disk ills.
  • Insufficient RAM and available hard disk space.
  • Improperly installed hardware or software.
  • Incompatible hardware

52    What are the uses of dd command

  • Disk Dump (copy all content from one disk to another)
  • # dd if=/dev/sda of=/dev/sdb
  • Partition Dump (copy all content from one partition to another)
  • # dd if=/dev/sda1 of=/dev/sda2
  • Creating empty file of specific size (File used as swap)
  • # dd if=/dev/zero of=/swapfile bs=1024 count=524288

i)        bs=1024 bytes × count=524288 = 536870912 bytes = 512MB
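
The size arithmetic can be checked with a smaller file (a sketch; /dev/zero and GNU stat assumed):

```shell
tmp=$(mktemp -d)

# 1024-byte blocks x 1024 blocks = 1 MiB of zeroes
dd if=/dev/zero of="$tmp/blank.img" bs=1024 count=1024 2>/dev/null

stat -c '%s' "$tmp/blank.img"      # 1048576 bytes

# The swap-file example above: bs=1024 x count=524288 = 536870912 bytes (512 MB)
```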

53    What is DMM

  • DMM or DM-Multipath or Device Mapper Multipathing allows you to configure multiple I/O paths between server nodes and storage arrays into a single device.
  • I/O paths are physical SAN connections; multipath combines these I/O paths and creates a new device
  • Redundancy

i)        Active/Passive configuration

ii)       Only half of the paths are used at a time for I/O

  • Improved Performance

i)        Active/Active mode

ii)       Round robin fashion

54    What is WWID in DM-Multipath

  • World Wide Identifier is a unique and unchanging name of every multipath device

55    What is use of multipath command

  • It lists and configures multipath devices

56    What is the procedure to configure your system with DM-Multipath

  • Install device-mapper-multipath rpm
  • Edit the /etc/multipath.conf configuration file:

i)        comment out the default blacklist  (it blacklists all devices)

ii)       change any of the existing defaults as needed

iii)     save the configuration file

  • Start the multipath daemons

i)        # modprobe dm-multipath

ii)       # service multipathd start

iii)     # multipath -v2

iv)     # chkconfig multipathd on

  • Create the multipath device with the multipath command

57    How to exclude local disk from multipath list

  • Modify /etc/multipath.conf and write local disk’s WWID in blacklist section

i)        blacklist {

ii)              wwid 26353900f02796769

iii)     }

  • You can also black list device by its Device Name and Device Type
  • # multipath -F → removes all multipath devices
  • # multipath -f <device> → removes the given device
  • # multipath -v2 → verbosity = 2
  • # multipath -l → displays info from sysfs and the device mapper
  • # multipath -ll → also displays additional components of the system

58    How to find WWID

  • # cat /var/lib/multipath/bindings

59    How to add devices to multipath database

  • Multipath by default includes support for most common storage arrays
  • This list can be found in multipath.conf.defaults file
  • If you want to add an unsupported device, edit /etc/multipath.conf

i)        devices {

ii)              device {

iii)                    vendor "HP"

iv)                    product "OPEN-V."

v)                     getuid_callout "/sbin/scsi_id -g -u -p0x80 -s /block/%n"

vi)            }

vii)   }

  • To know Vendor and Product information

i)        # cat /sys/block/sda/device/vendor

ii)       # cat /sys/block/sda/device/model

60    What is the use of DMSetup command

  • The dmsetup command is used to find out which device-mapper entries match the multipathed devices
  • # dmsetup ls

61    How do you troubleshoot multipath

  • # multipathd -k

i)        show config

ii)       reconfigure

iii)     show paths

iv)     CTRL+D

62    How to format, mount and use SAN Volumes

  • # fdisk /dev/sda
  • # kpartx -a /dev/mapper/mpath0
  • # ll /dev/mapper

i)        mpath0    mpath0p1

  • # mkfs.ext3 /dev/mapper/mpath0p1
  • # mount /dev/mapper/mpath0p1 /mnt/san
  • Kpartx creates device maps from partition tables
  • We must run the fdisk command on the underlying device /dev/sda

63    How to resize online multipath disk

  • Use the following command to find paths to LUNs

i)        # multipath -l

  • Now, resize your paths, for SCSI device

i)        # echo 1 > /sys/block/<device>/device/rescan

  • Resize multipath device

i)        # multipathd -k 'resize map mpath0'

  • Resize the file system (if there is no LVM configured upon mpath0)

i)        # resize2fs /dev/mapper/mpath0

  • If LVM resides on mpath0 then we should not resize the filesystem directly; we should resize the LVM volume

i)        # pvscan

ii)       # vgscan

iii)     # lvextend -L +<Size>G <LVNAME>

iv)     # resize2fs <LVNAME>

64    How to differentiate local storage from SAN

  • # ls -l /sys/block/*/device

65    How to upgrade Linux Kernel

  • Kernel can be upgraded either by compiling from source or by installing kernel rpm
  • Kernel should be compiled only in case if you need custom kernel with specific patch
  • Using rpm -ivh is safer than rpm -Uvh for kernels (-ivh preserves the old kernel to fall back on)

i)        # rpm -Uvh kernel-headers kernel-source kernel-devel

ii)       # rpm -ivh kernel kernel-smp → SMP is multi-core or multi-CPU

  • RPM command modifies grub.conf accordingly
  • A Linux OS can have many kernels installed but loads only one at a time

66    How to delete or remove unnecessary kernel

  • /boot/vmlinuz → kernel file
  • /boot/grub/grub.conf → edit to remove the kernel's entry
  • /lib/modules/kernel-VERSION → modules
  • If the kernel was installed using rpm, it can be removed via rpm -e

i)        # rpm -qa | grep kernel

ii)       # rpm -vv -e kernel-smp

67    Where are the Kernel Modules (Device Drivers in Windows terminology) stored

  • /lib/modules/kernel-version
  • /lib/modules/$(uname -r)

68    How to list all the loaded kernel modules

  • # lsmod
  • # less /proc/modules
  • # modinfo ipv6

69    How to add or remove modules from running kernel

  • modprobe is the command used to add or remove kernel modules on the fly
  • # modprobe ip_tables
  • # lsmod → uses the file /proc/modules
  • # modprobe -r ip_tables
  • # lsmod
  • Alternatively, we can use insmod and rmmod

i)        insmod → load a module

ii)       rmmod → unload a module

70    How to load a module in kernel automatically at system boot

  • If you want to load cdrom module in kernel upon next boot, modify modules.conf [old method]

i)        # vi /etc/modules.conf

ii)       ide-cd

iii)     ide-core

iv)     cdrom

v)      save and close file, reboot system

  • Or we can use the rc.modules file. We should use rc.modules and not rc.local for loading kernel modules, because rc.modules is read much earlier in the boot sequence

i)        # echo modprobe ide-cd >> /etc/rc.modules

ii)       # chmod u+x /etc/rc.modules

71    How to delete log files older than 10 days

  • # find /var/log/http/ -name "*.log" -mtime +10 -exec rm -f {} \;
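
A safe dry run of the same command (a sketch using GNU touch -d to back-date a file):

```shell
tmp=$(mktemp -d) && cd "$tmp"

touch -d "20 days ago" old.log    # mtime 20 days in the past
touch new.log

# Preview first with -print, then delete; quote the pattern so the
# shell does not expand *.log before find sees it.
find . -name "*.log" -mtime +10 -print
find . -name "*.log" -mtime +10 -exec rm -f {} \;

ls    # only new.log remains
```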

72    How to find Disk being used by a user

  • # find /directory -user <username> -type f -exec du -sh {} \;

73    How to find information about your hard disk from the Linux command line

  • # hdparm /dev/sda → info
  • # hdparm -I /dev/sda → more info
  • # hdparm -tT /dev/sda → read speed

i)        Timing cached reads:   9460 MB in  2.00 seconds = 4737.22 MB/sec

ii)       Timing buffered disk reads: 708 MB in  7.57 seconds =  93.49 MB/sec

74    How to mount ISO files in Linux

  • # mount -o loop linux-dvd.iso /mnt

75    Explain the output of PS command

  • S: State of the process

i)        S: Sleeping,

ii)       O: Running on processor,

iii)     R: Runnable (it is in run queue),

iv)     Z: Zombie,

v)      T: Stopped process (either by a job control signal or because it is being traced)

  • PID: Process ID
  • PPID: Parent process ID
  • USER: User name who initiated process
  • GROUP: Group name under which the user launched the job
  • RSS: The resident set size of the process, in kilobytes.
  • VSZ: The total size of the process in virtual memory, in kilobytes.
  • %CPU: Total % of CPU taken by this process
  • %MEM: Total % of Memory taken by this process
  • TIME: the cumulative CPU time of the process, in the form [DD-]hh:mm:ss
  • ELAPSED: Total time elapsed since this process is live
  • TT: Terminal ID
  • COMMAND: Command/daemon/process with args
  • # ps -eo s,pid,ppid,user,group,rss,vsz,pcpu,pmem,time,etime,tty,args
  • # ps L → lists all format codes like the above
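
For instance, restricted to the current shell (a sketch; --sort is a GNU procps extension):

```shell
# Selected columns for the current shell process
ps -o pid,ppid,user,rss,vsz,stat,time,comm -p $$

# Five most memory-hungry processes
ps -eo pid,pmem,comm --sort=-pmem | head -6
```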

76    Explain what is /proc file system

  • /proc file system contains information about

i)        Kernel

ii)       Hardware

iii)     Running Process

  • Important files under proc are: cpuinfo, mdstat, meminfo, modules, mounts, partitions, net, version, /proc/sys/kernel/hostname, /proc/sys/net/ipv4/ip_forward

77    What is a Zombie Process

  • A zombie is a child process that has terminated but whose exit status has not yet been read by its parent with wait(); the kernel keeps a small entry for it in the process table
  • A zombie process is dead but has not been removed from the process table; it shows up as Z / <defunct> in ps
  • A zombie process causes no CPU or memory load (it is already dead), though a large number of zombies can exhaust the process table
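The behaviour above can be demonstrated with a small shell sketch (timings are illustrative; `ps --ppid` is assumed to be procps ps):

```shell
# A subshell forks `true` in the background and then execs into sleep
# without ever calling wait, so the exited child lingers as a zombie.
( true & exec sleep 2 ) &
parent=$!
sleep 1                                      # give the child time to exit
state=$(ps -o state= --ppid "$parent" | tr -d ' ')
echo "child state: $state"                   # Z = zombie / <defunct>
kill "$parent" 2>/dev/null
```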

78    How to tune Linux kernel

  • # vi /etc/sysctl.conf → modify / add / remove kernel parameters
  • # /sbin/sysctl -p → load the new configuration
  • # sysctl -a → list all current kernel parameters
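Since every sysctl key is exposed as a file under /proc/sys (dots become slashes), parameters can also be read without the sysctl binary; the two keys below are just examples:

```shell
# sysctl keys map to files under /proc/sys, dots becoming slashes:
cat /proc/sys/kernel/ostype            # same as: sysctl -n kernel.ostype
cat /proc/sys/net/ipv4/ip_forward      # 1 = IP forwarding on, 0 = off
```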

79    How to configure ntp client

  • Open system-config-date, go to the Network Time Protocol tab and add the NTP server’s name/IP
  • Click OK
  • Run ntpq -p to check available NTP servers

i)        # ntpq -p

ii)      * is displayed against the currently selected NTP server

iii)    Stratum number 16 means you are not synchronized

iv)    The lower the stratum number, the closer the server is to the reference clock

  • Run ntpstat to see whether time is synchronized and what the time offset (seconds behind) is

i)        # ntpstat

  • To synchronize the client with the server manually

i)        # ntpdate -u <NTP Server>

80    How to unmount a file system when the resource is busy

  • # umount /dev/sda1 → fails with “device is busy”
  • # fuser -m /dev/sda1 → identify which PIDs are using the resource
  • # lsof | grep /dev/sda1 → identify which PIDs are using the resource
  • # kill -9 <PID> → kill the offending PID
  • # umount /dev/sda1

81    What is Network Bonding? What are the steps for Network Bonding?

  • Bonding is the creation of a single logical interface by combining two or more Ethernet interfaces. It provides high availability and can improve performance.
  • Step 1:
  • Create the file ifcfg-bond0 with the IP address, netmask and gateway.

i)         $ cat /etc/sysconfig/network-scripts/ifcfg-bond0

ii)       DEVICE=bond0

iii)     IPADDR=192.168.1.100

iv)     NETMASK=255.255.255.0

v)      GATEWAY=192.168.1.1

vi)     USERCTL=no → only root can control the interface (no to other users)

vii)   BOOTPROTO=none → can be static/dhcp or none

viii)  ONBOOT=yes → the device will start when the system starts

  • Step 2:
  • Modify eth0, eth1 and eth2 configuration as shown below. Comment out, or remove the ip address, netmask, gateway and hardware address from each one of these files, since settings should only come from the ifcfg-bond0 file above.

i)        $ cat /etc/sysconfig/network-scripts/ifcfg-eth0

ii)       DEVICE=eth0

iii)     BOOTPROTO=none

iv)     ONBOOT=yes

v)      MASTER=bond0

vi)     SLAVE=yes

vii)  $ cat /etc/sysconfig/network-scripts/ifcfg-eth1

viii)  DEVICE=eth1

ix)     BOOTPROTO=none

x)      ONBOOT=yes

xi)     USERCTL=no

xii)  MASTER=bond0

xiii)  SLAVE=yes

xiv) $ cat /etc/sysconfig/network-scripts/ifcfg-eth2

xv)   DEVICE=eth2

xvi) BOOTPROTO=none

xvii)            ONBOOT=yes

xviii)          MASTER=bond0

xix) SLAVE=yes

  • Step 3:
  • Set the parameters for the bond0 bonding kernel module. Add the following lines to /etc/modprobe.conf

i)        # bonding commands

ii)       alias bond0 bonding

iii)     options bond0 mode=balance-alb miimon=100

  • Here, balance-alb = Adaptive Load Balancing
  • Other options are, balance-rr = Balanced Round Robin
  • Note: Here we configured the bonding mode as “balance-alb”. All the available modes are given at the end and you should choose appropriate mode specific to your requirement.
  • Step 4:
  • Load the bond driver module from the command prompt.

i)        $ modprobe bonding

  • Step 5:
  • Restart the network, or restart the computer.

i)        $ service network restart  Or restart computer

  • When the machine boots up check the proc settings.

i)        $ cat /proc/net/bonding/bond0

ii)       Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

iii)     Bonding Mode: adaptive load balancing

iv)     Primary Slave: None

v)      Currently Active Slave: eth2

vi)     MII Status: up

vii)   MII Polling Interval (ms): 100

viii)  Up Delay (ms): 0

ix)     Down Delay (ms): 0

x)      Slave Interface: eth2

xi)     MII Status: up

xii)   Link Failure Count: 0

xiii)  Permanent HW addr: 00:14:72:80:62:f0

  • Look at ifconfig -a and check that your bond0 interface is active. You are done!
  • RHEL bonding supports 7 possible “modes” for bonded interfaces. These modes determine the way in which traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used among them.
  • * Mode 0 (balance-rr)
  • This mode transmits packets in a sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface the first will be transmitted on the first slave and the second frame will be transmitted on the second slave. The third packet will be sent on the first and so on. This provides load balancing and fault tolerance.
  • * Mode 1 (active-backup)
  • This mode places one of the interfaces into a backup state and will only make it active if the link is lost by the active interface. Only one slave in the bond is active at an instance of time. A different slave becomes active only when the active slave fails. This mode provides fault tolerance.
  • * Mode 2 (balance-xor)
  • Transmits based on an XOR formula: (source MAC address XOR destination MAC address) modulo slave count. This selects the same slave for each destination MAC address and provides load balancing and fault tolerance.
  • * Mode 3 (broadcast)
  • This mode transmits everything on all slave interfaces. This mode is least used (only for specific purpose) and provides only fault tolerance.
  • * Mode 4 (802.3ad)
  • This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad Dynamic link.
  • * Mode 5 (balance-tlb)
  • This is called as Adaptive transmit load balancing. The outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.
  • * Mode 6 (balance-alb)
  • This is Adaptive load balancing mode. This includes balance-tlb + receive load balancing (rlb) for IPV4 traffic. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the server on their way out and overwrites the src hw address with the unique hw address of one of the slaves in the bond such that different clients use different hw addresses for the server.

82    What are LVM Snapshots

  • lvcreate --size 100m --snapshot --name snap /dev/vg00/lvol1
  • Creates a snapshot logical volume named /dev/vg00/snap which has access to the contents of the original logical volume /dev/vg00/lvol1 as of snapshot creation time. If the original logical volume contains a file system, you can mount the snapshot logical volume on an arbitrary directory to access the filesystem contents, e.g. to run a backup while the original filesystem continues to be updated.

83    How to backup MySQL using LVM Snapshot

  • First log in to MySQL and lock all tables. This ensures that no write operations are performed on the LVM mount point

i)        mysql> flush tables with read lock;

ii)       mysql> flush logs;

  • Now create LVM Snapshot of /dev/vg01/mysql (mounted as /var/lib/mysql)

i)        # lvcreate --snapshot --size=1000M --name=db-snapshot /dev/vg01/mysql

  • Now login to MySQL and release the lock

i)        mysql> unlock tables;

  • Now move the backup to Tape or another server

i)        # find /dev/vg01/db-snapshot | cpio -o -H tar -F /dev/nst0

ii)       OR

iii)     # mount -o ro /dev/vg01/db-snapshot /mnt

iv)     # cd /mnt

v)      # tar cvfz mysql.tar * (# tar cvfz /dev/st0 /mnt)

vi)     # cd

vii)   # umount /mnt

viii)  # lvremove -f /dev/vg01/db-snapshot

84    Explain in detail what is LVM Snapshot

  • It is an LVM feature that creates a virtual, point-in-time image of a logical volume. The snapshot then keeps track of changes made to the origin volume (copy-on-write).
  • Example: you have a 1000 MB data logical volume and take an LVM snapshot of it with a snapshot size of 200 MB. The snapshot can then absorb up to 200 MB of changes to the origin volume; once more than 200 MB of the origin has changed, the snapshot becomes INVALID.
  • The snapshot size must be chosen by the admin based on the expected amount of change to the origin data during the snapshot’s lifetime

85    What does lvmdiskscan show

  • Shows block devices which can be used as physical volumes

86    How to scan for volumes

  • pvscan, vgscan, lvscan

Cluster Administration

1         What is a Cluster

  • A cluster is two or more computers (called nodes or members) that work together to perform a task.

2         What are the types of cluster

  • Storage
  • High Availability
  • Load Balancing
  • High Performance

3         What is CMAN

  • CMAN is Cluster Manager. It manages cluster quorum and cluster membership.
  • CMAN runs on each node of a cluster

4         What is Cluster Quorum

  • Quorum is a voting algorithm used by CMAN.
  • CMAN keeps track of quorum by monitoring the number of active nodes in the cluster.
  • If more than half of the members of a cluster are active, the cluster has quorum
  • If half or fewer of the members are active, the cluster is inquorate and all cluster activity is stopped
  • Quorum is the minimum set of hosts required to provide service; it is used to prevent split-brain situations.
  • The quorum algorithm used by the RHCS cluster is called “simple majority quorum”, which means that more than half of the hosts must be online and communicating in order to provide service.
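The “more than half” rule can be sketched with shell arithmetic (assuming one vote per node):

```shell
# Simple majority quorum: (votes / 2) + 1 nodes must be online,
# using integer division, assuming one vote per node.
for nodes in 2 3 4 5; do
  echo "$nodes nodes -> quorum = $(( nodes / 2 + 1 ))"
done
```

Note the two-node case: a majority of 2 is still 2, which is why two-node clusters need a tie-breaker such as a quorum disk (see below).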

5         What is split-brain

  • It is a condition where two instances of the same cluster are running and trying to access same resource at the same time, resulting in corrupted cluster integrity
  • Cluster must maintain quorum to prevent split-brain issues

6         What is Quorum disk

  • In case of a 2 node cluster, quorum disk acts as a tie-breaker and prevents split-brain issue
  • If a node has access to network and quorum disk, it is active
  • If a node has lost access to network or quorum disk, it is inactive and can be fenced
  • A quorum disk, known as a qdisk, is a small partition on SAN storage used to enhance quorum. It generally carries enough votes to allow even a single node to retain quorum during a cluster partition. It does this by using configured heuristics, that is custom tests, to decide which node or partition is best suited to provide clustered services during a cluster reconfiguration.

7         What is RGManager

  • RGManager manages and provides failover capabilities for collections of cluster resources called services, resource groups, or resource trees.
  • In the event of a node failure, RGManager will relocate the clustered service to another node with minimal service disruption. You can also restrict services to certain nodes, such as restricting  httpd to one group of nodes while  mysql can be restricted to a separate set of nodes.
  • When the cluster membership changes, openais tells the cluster that it needs to recheck its resources. This causes rgmanager, the resource group manager, to run. It examines what changed and then starts, stops, migrates or recovers cluster resources as needed.
  • Within rgmanager, one or more resources are brought together as a service. This service can then optionally be assigned to a failover domain, a subset of nodes that can have preferential ordering.

8         What is Fencing

  • Fencing is the disconnection of a node from the cluster’s shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through the fence daemon,  fenced.
  • Power fencing — A fencing method that uses a power controller to power off an inoperable node.
  • storage fencing — A fencing method that disables the Fibre Channel port that connects storage to an inoperable node.
  • Other fencing — Several other fencing methods that disable I/O or power of an inoperable node, including IBM Bladecenters, PAP, DRAC/MC, HP ILO, IPMI, IBM RSA II, and others.

9         How to manually fence an inactive node

  • # fence_ack_manual –n <node2>

10      How to see a shared IP address (Cluster Resource) if ifconfig doesn’t show it

  • # ip addr list

11      What is DLM

  • A lock manager is a traffic cop who controls access to resources in the cluster
  • As implied in its name, DLM is a distributed lock manager and runs in each cluster node; lock management is distributed across all nodes in the cluster. GFS2 and CLVM use locks from the lock manager.

12      What is Conga

  • Conga is a comprehensive user interface for installing, configuring, and managing the Red Hat High Availability Add-On.
  • Luci — The application server that provides the user interface for Conga. It allows users to manage cluster services and can run on a machine outside the cluster.
  • Ricci — A service daemon, running on each cluster node, that manages distribution of the cluster configuration. Users pass configuration details through the Luci interface, and the configuration is propagated to the cluster nodes.

13      What is OpenAis or Corosync

  • OpenAIS is the heart of the cluster. All other cluster components operate through it, and no cluster component can work without it. Further, it is shared between both Pacemaker and RHCS clusters.
  • In Red Hat clusters, openais is configured via the central cluster.conf file. In Pacemaker clusters, it is configured directly in openais.conf.

14      What is Totem

  • The totem protocol defines message passing within the cluster and is used by openais. A token is passed around all the nodes in the cluster, and the fencing timeout is actually a token timeout. The counter, then, is the number of lost tokens that are allowed before a node is considered dead.
  • The totem protocol supports something called ‘rrp’, the Redundant Ring Protocol. Through rrp, you can add a second, backup ring on a separate network to take over in the event of a failure in the first ring. In RHCS, these rings are known as “ring 0” and “ring 1”.

15      What is CLVM

  • CLVM is ideal in that by using DLM, the distributed lock manager, it won’t allow access to cluster members outside of openais’s closed process group, which, in turn, requires quorum.
  • It is ideal because it can take one or more raw devices, known as “physical volumes”, or simply PVs, and combine their raw space into one or more “volume groups”, known as VGs. These volume groups then act just like a typical hard drive and can be “partitioned” into one or more “logical volumes”, known as LVs. These LVs are where Xen’s domU virtual machines will exist and where we will create our GFS2 clustered file system.

16        What is GFS2

  • It works much like a standard filesystem, with user-land tools like mkfs.gfs2, fsck.gfs2 and so on. The major difference is that it and clvmd use the cluster’s distributed locking mechanism provided by the dlm_controld daemon. Once formatted, the GFS2 partition can be mounted and used by any node in the cluster’s closed process group, and all nodes can then safely read from and write to its data simultaneously.

17      What is the importance of DLM

  • One of the major roles of a cluster is to provide distributed locking on clustered storage. In fact, storage software cannot be clustered without using DLM, as provided by the dlm_controld daemon and using openais’s virtual synchrony via CPG.
  • Through DLM, all nodes accessing clustered storage are guaranteed to get POSIX locks, called plocks, in the same order across all nodes. Both CLVM and GFS2 rely on DLM, though other clustered storage, like OCFS2, use it as well.

18      What is CCS_TOOL

  • We can use ccs_tool, the “cluster configuration system (tool)”, to push a new cluster.conf to the other nodes and upgrade the cluster configuration version in one shot.
  • # ccs_tool update /etc/cluster/cluster.conf

19      What is CMAN_TOOL

  • It is a Cluster Manager tool; it can be used to view the nodes and status of the cluster
  • # cman_tool nodes
  • # cman_tool status

20      What is clustat

  • clustat is used to see what state the cluster’s resources are in

21      What is clusvcadm

  • clusvcadm is a tool to manage resources in a cluster
  • clusvcadm -e <service> -m <node>: Enable the <service> on the specified <node>. When a <node> is not specified, the local node where the command was run is assumed.
  • clusvcadm -d <service> -m <node>: Disable the <service>.
  • clusvcadm -l <service>: Locks the <service> prior to a cluster shutdown. The only action allowed on a locked <service> is disabling it; this lets you stop the <service> without rgmanager trying to recover (restart) it. Once quorum is dissolved and the cluster is shut down, the service is unlocked and returns to normal operation the next time the node regains quorum.
  • clusvcadm -u <service>: Unlocks a <service>, should you change your mind and decide not to stop the cluster.

22      What is luci_admin init

  • This command is run to create the Luci admin user and set a password for it
  • # service luci start; # chkconfig luci on
  • The default port for the Luci web server is 8084

 

Common Ports and Protocols

Port (IP Protocols) Service/Protocol
21 (TCP) FTP
22 (TCP/UDP) SSH/ SFTP
25 and 587 (TCP) SMTP
53 (TCP/UDP) DNS
80 (TCP/UDP) HTTP
110 (TCP) POP3
143 (TCP/UDP) IMAP
389 (TCP/UDP) LDAP
443 (TCP/UDP) HTTPS
465 (TCP) SMTPS
636 (TCP/UDP) LDAPS
694 (UDP) Heartbeat
873 (TCP) rsync
3306 (TCP/UDP) MySQL
5900 (TCP/UDP) VNC
6660-6669 (TCP) IRC
8080 (TCP) Apache Tomcat

The inetd Super Server

Programs that provide application services via the network are called network daemons. A daemon is a program that opens a port, most commonly a well-known service port, and waits for incoming connections on it. When a connection arrives, the daemon creates a child process that accepts the connection, while the parent continues to listen for further requests. This mechanism works well, but has a few disadvantages: at least one instance of every possible service you wish to provide must be active in memory at all times. In addition, the software routines that do the listening and port handling must be replicated in every network daemon.

To overcome these inefficiencies, most Unix installations run a special network daemon, what you might consider a “super server.” This daemon creates sockets on behalf of a number of services and listens on all of them simultaneously. When an incoming connection is received on any of these sockets, the super server accepts the connection and spawns the server specified for this port, passing the socket across to the child to manage. The server then returns to listening.

The most common super server is called inetd, the Internet Daemon. It is started at system boot time and takes the list of services it is to manage from a startup file named /etc/inetd.conf. In addition to those servers, there are a number of trivial services performed by inetd itself, called internal services. They include chargen, which simply generates a string of characters, and daytime, which returns the system’s idea of the time of day.

Services managed by inetd daemon are ftp, tftp, chargen, daytime, finger, etc
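As an illustrative sketch of the /etc/inetd.conf format (seven fields: service, socket type, protocol, flags, user, server path, arguments; the in.ftpd path is an assumption for your distribution, while internal services use the literal `internal` in place of a server path):

```
# service  socket  proto  flags   user  server-path         args
daytime    stream  tcp    nowait  root  internal
chargen    stream  tcp    nowait  root  internal
ftp        stream  tcp    nowait  root  /usr/sbin/in.ftpd   in.ftpd -l
```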

Running TOP Command in batch

top -b -d 10 -n 3 >> top-file

This runs top in batch (-b) mode with a delay (-d) of 10 seconds, for 3 iterations (-n), appending the output to top-file.

To write multiple files at the same time using TEE

ps | tee file1 file2 file3

This sends the output of the ps command to multiple files at the same time using the tee command.

Use IOStat to get Disk and CPU usage

iostat -x 10 10

This will show stats for 10 times, every 10 seconds

Memory usage monitoring using VMSTAT

vmstat 10 10

This command shows memory stats every 10 seconds for 10 times

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  5 375912  19548  17556 477472    0    1     0     0    1    1  1  0  0  1

proc:

r: Process that are waiting for CPU time

b: Process that are waiting for I/O

Memory:

Swpd: Shows how many blocks (1KB) are swapped out (paged) to disk

Free: Idle memory

Buff: Memory used as buffer, before/after I/O operation

Cache: Memory used as cache by OS

SWAP:

Si: Blocks per sec swapped in (From swap area(disk) to memory(RAM))

So: Blocks per sec swapped out (From memory(RAM) to swap area(disk))

IO:

Bi: Blocks per sec received from block device – Read Hard Disk

Bo: Blocks per sec sent to block device – Write Hard Disk

System:

In: No. of interrupts per sec

Cs: No. of context switches per sec (storing and restoring state of CPU. This enables multiple processes to share a single CPU)

CPU:

Us: % of CPU used for running non-kernel code (user process)

Sy: % of CPU used for running kernel code (system time, network, I/O, clock etc)

Id: CPU idle time in %

Wa: % of time spent by CPU in waiting for I/O

Listing Dynamic Dependencies (LDD)

ldd /bin/ls

This command lists the shared libraries that the ls command depends on, flagging any that are missing

List Open Files (LSOF)

To list all open files in system

lsof

To list all open files by a particular process

lsof -p <pid>

To list all open files by a user

lsof -u <name>

To list all open files in a partition

lsof | grep /dev/sda1

To list files/command/pid LISTENING to any port

lsof | grep LISTEN

To list files/command/pid listening to 6366

lsof | grep 6366

To list open IPv4 ports

lsof -i4

To list open IPv6 ports

lsof -i6

To list files/operations running on nas directory

lsof +d /mnt/nas

This is extremely useful in unmounting a directory when it shows message ‘device is busy’

Commands for checking System Load

  • uptime
  • top
  • vmstat
  • free
  • iotop
  • htop
  • atop

Using TOP

After running top command

Shift+m (or M) → sort by %MEM

n → 20 → show only 20 lines in the output

Shift+o (or O) → k → Enter → sort output by %CPU

Shift+w (or W) → save the configuration

P – Sort by CPU usage

T – Sort by cumulative time

z – Color display

k – Kill a process

q – quit

Understanding OUTPUT of TOP Command

The first line in top:

top – 22:09:08 up 14 min,  1 user,  load average: 0.21, 0.23, 0.30

“22:09:08” is the current time; “up 14 min” shows how long the system has been up; “1 user” is how many users are logged in; “load average: 0.21, 0.23, 0.30” is the load average of the system over the last 1, 5, and 15 minutes.

Load average is an extensive topic and its inner workings can be daunting. The simplest definition is that load average is the average number of processes using or waiting for the CPU over a period of time. On a single-CPU system, a load average of 1 means the CPU is fully utilized and processes are not having to wait; a load average above 1 indicates that processes must wait and your system will be less responsive. If your load average is consistently above 3 per CPU and your system is running slow, you may want to upgrade to more CPUs or a faster CPU.
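The three load-average figures come straight from /proc/loadavg, so they can be compared against the CPU count in a one-liner:

```shell
# Fields 1-3 of /proc/loadavg are the 1/5/15-minute load averages;
# comparing them against the CPU count shows whether processes wait.
read one five fifteen rest < /proc/loadavg
echo "1min=$one 5min=$five 15min=$fifteen cpus=$(grep -c ^processor /proc/cpuinfo)"
```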

The second line in top:

Tasks:  82 total,   1 running,  81 sleeping,   0 stopped,   0 zombie

Shows the number of processes and their current state.

The third line in top:

Cpu(s):  9.5%us, 31.2%sy,  0.0%ni, 27.0%id,  7.6%wa,  1.0%hi, 23.7%si,  0.0%st

Shows CPU utilization details. “9.5%us”: user processes are using 9.5%; “31.2%sy”: system (kernel) processes are using 31.2%; “27.0%id”: percentage of CPU that is idle; “7.6%wa”: time the CPU is waiting for IO.

When first analyzing the Cpu(s) line in top look at the %id to see how much cpu is available. If %id is low then focus on %us, %sy, and %wa to determine what is using the CPU.

The fourth and fifth lines in top:

Mem:    255592k total,   167568k used,    88024k free,    25068k buffers

Swap:   524280k total,        0k used,   524280k free,    85724k cached

Describes the memory usage. These numbers can be misleading. “255592k total” is total memory in the system; “167568K used” is the part of the RAM that currently contains information; “88024k free” is the part of RAM that contains no information; “25068K buffers and 85724k cached” is the buffered and cached data for IO.

So what is the actual amount of free RAM available for programs to use?

The answer is: free + (buffers + cached)

88024k + (25068k + 85724k) = 198816k

How much RAM is being used by programs?

The answer is: used – (buffers + cached)

167568k – (25068k + 85724k) = 56776k
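The same “free + (buffers + cached)” arithmetic can be taken live from /proc/meminfo (values in kB; note that modern kernels also report a ready-made MemAvailable figure):

```shell
# Sum MemFree + Buffers + Cached from /proc/meminfo, mirroring the
# "free + (buffers + cached)" calculation above. The ^ anchors keep
# SwapCached from matching the Cached pattern.
awk '/^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2}
     END {print "available for programs: " f + b + c " kB"}' /proc/meminfo
```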

The processes information:

Top displays processes in descending order of CPU usage. Let’s describe each column that represents a process.

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

3166 apache    15   0 29444 6112 1524 S  6.6  2.4   0:00.79 httpd

PID – process ID of the process

USER – User who is running the process

PR – The priority of the process

NI – Nice value of the process (higher value indicates lower priority, -20 is highest, 19 is lowest)

VIRT – The total amount of virtual memory used

RES – Resident memory used

SHR – Amount of shared memory used

S – State of the task. Values are S (sleeping), D (uninterruptible sleep), R (running), Z (zombie), or T (stopped or traced)

%CPU – Percentage of CPU used

%MEM – Percentage of Memory used

TIME+ – Total CPU time used by process

COMMAND – Command issued

Using Free

free -m

Actual Usage is shown

-/+ buffers/cache: 51 202

It means out of 254MB, 51MB is used by running programs. So I have 202MB to play with and for my other application to grow into.

Using Fuser

It gives information about file user or the process that is currently using the file/directory

fuser -v /etc/init.d/httpd

USER        PID ACCESS COMMAND

/etc/init.d/httpd:

root       2652 …e. httpd

apache    28592 …e. httpd

apache    28595 …e. httpd

  • c      current directory
  • e      executable being run
  • f      open file. f is omitted in default display mode
  • F      open file for writing. F is omitted in default display mode
  • r      root directory
  • m      mmap’ed file or shared library

To find and kill a PID using Fuser:

fuser -v -k -i /etc/init.d/httpd

TAR

Extract individual file from archive

tar xvjf dest.tar.bz2 textfile.txt

Add a file to existing archive

tar rvf dest.tar myfile.txt

Add a directory to existing archive

tar rvf dest.tar myfolder/

Delete a file from existing archive

tar --delete -vf dest.tar myfile.txt

Delete a folder from existing archive

tar --delete -vf dest.tar myfolder/

Exclude a file from being archived

tar cvf dest.tar --exclude='myfile.txt' myfolder/

Use an exclude list

tar cvf dest.tar -X exclude.txt myfolder/
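The create/exclude/extract commands above can be exercised round-trip in a scratch directory (file and folder names are illustrative):

```shell
# Build a scratch tree, archive it while excluding one file,
# then extract a single member back out.
tmp=$(mktemp -d) && cd "$tmp"
mkdir myfolder
echo hello  > myfolder/textfile.txt
echo secret > myfolder/myfile.txt
tar cf dest.tar --exclude='myfile.txt' myfolder/   # myfile.txt is skipped
tar tf dest.tar                                    # lists myfolder/ and textfile.txt only
rm -rf myfolder
tar xf dest.tar myfolder/textfile.txt              # extract one member
cat myfolder/textfile.txt                          # hello
```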

How to use CPIO

GNU cpio is a tool for creating and extracting archives, or copying files from one place to another. It handles a number of cpio formats as well as reading and writing tar files. cpio command works just like tar, only better. It can read input from the “find” command.

# find / -name "*.c" | cpio -o --format=tar > c-file.backup.tar

# find / -iname "*.pl" | cpio -o -H tar > perl-files.tar

# find / -iname "*.pl" | cpio -o -H tar -F perl-files.tar

# cpio -i -F perl-files.tar

# cpio -it -F perl-files.tar

  • -o: Create archive
  • -F: Archive filename to use instead of standard input or output. To use a tape drive on another machine as the archive.
  • -H format: Specify file format to use.
  • -i: Restore archive
  • -t: List files in archive

Archive contents to tape /dev/nst0

# find /home | cpio -o -H tar -F /dev/nst0

Restore contents from tape

# cpio -i -F /dev/nst0

Backup /home to remote system tape drive

# find /home | cpio -o -H tar -F user@backup.domain.com:/dev/nst0 --rsh-command=/usr/bin/ssh

Package installation using APT/Dpkg (Debian, Ubuntu)

  • Install package:
aptitude install PACKAGENAME
  • Reinstall package:
aptitude reinstall PACKAGENAME
  • Remove package (keep config files):
aptitude remove PACKAGENAME
  • Remove package and purge config files:
aptitude remove --purge PACKAGENAME
  • Update package list:
aptitude update
  • Upgrade system (security/bug fixes):
aptitude upgrade
  • Upgrade system to newest release (dangerous!):
aptitude dist-upgrade
  • Show info on an installed package:
aptitude show PACKAGENAME
  • Search package repositories:
aptitude search SEARCHSTRING

Package installation using Yum/RPM (CentOS, Fedora, Red Hat)

  • Install package:
yum install PACKAGENAME
  • Remove package:
yum remove PACKAGENAME
  • Update package:
yum update PACKAGENAME
  • List available updates:
yum list updates
  • Update system:
yum update
  • Upgrade system to newest release (dangerous!):
yum upgrade
  • Show package:
yum list PACKAGENAME
  • Search package repositories:
yum search SEARCHSTRING
  • List package groups:
yum grouplist
  • Install package group:
yum groupinstall 'GROUP NAME'
  • Update package group:
yum groupupdate 'GROUP NAME'
  • Remove package group:
yum groupremove 'GROUP NAME'
  • Download RPM file without installing it:
yum install yum-utils.noarch
yumdownloader httpd
  • How to extract files from RPM without installing it:
rpm2cpio httpd* | cpio -idmv
    • i = restore mode
    • d = create directories wherever necessary
    • m = retain time stamps
    • v = verbose mode
  • How to build RPM from tar
    • rpmbuild -ta abc.tar
    • rpm -ivh /usr/src/redhat/RPMS/[arch]/abc.xxx.[arch].rpm
  • How to build RPM from spec
    • rpmbuild -ba package.spec
    • If rpmbuild gives ‘command not found’, install it with ‘yum install rpm-build’

SUID, SGID, Sticky Bit

What’s that about SUID, SGID, and the sticky bit (oh my!)? Once again, a table seems appropriate…

Access              Effect on a file                                                  Effect on a directory
SUID (setuid) (4)   Executes with the rights of its owner (not the executing user)    Ignored
SGID (setgid) (2)   Executes with the rights of its group (not the executing user)    Files created within the directory inherit the directory’s group (rather than the creator’s group)
Sticky Bit (1)      Ignored (on Linux)                                                Files created within the directory may only be moved or deleted by their owner (or the directory’s owner)

This probably isn’t intuitive, so we’ll go over it in a bit more detail. First, the sticky bit. One place the sticky bit is commonly used on Unix-like systems is the /tmp directory. This directory needs to be world-writable, but you don’t want anyone going around and deleting everyone else’s files. The sticky bit offers exactly this protection.

The Sticky Bit is a permission bit that can be set on either a file or a directory.

If it is set on a file, the program was historically kept in memory after execution, thus “sticking” there so it would start faster for the next user, e.g. in a multi-user program such as a bulletin board system. This was a common tactic earlier in the history of computing, when memory speed and disk space were at a premium. The behavior is specific to older UNIX systems: Linux ignores the sticky bit on files, so on Linux the sticky bit is useful only on directories.

If the sticky bit is set on a directory, only the owner of files in that directory will be able to modify or delete files in that directory – even if the permissions set on those files would otherwise allow it.
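Both bits are easy to see with GNU stat on a scratch directory (the setuid example uses an empty placeholder file):

```shell
# Sticky bit on a directory and SUID on a file, shown via mode strings.
d=$(mktemp -d)
chmod 1777 "$d"                      # world-writable + sticky, like /tmp
dperm=$(stat -c '%A' "$d")
touch "$d/prog"
chmod 4755 "$d/prog"                 # setuid + rwxr-xr-x
fperm=$(stat -c '%A' "$d/prog")
echo "$dperm  $fperm"                # drwxrwxrwt  -rwsr-xr-x
rm -r "$d"
```

The trailing ‘t’ marks the sticky bit; the ‘s’ in the owner triplet marks SUID.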

RSync for backup

rsync -e 'ssh -p 30000' -avl --delete --stats --progress --exclude 'source' --exclude 'source/file.txt' --exclude-from '/root/exclude.txt' demo@123.45.67.890:/home/demo /backup

-e 'ssh -p 30000' → makes rsync use the SSH protocol and sets the port to 30000

-a → archive mode, retains permissions, ownership and timestamps

-v → verbose mode

-vv → double verbosity

-l → preserves symlinks

--delete → delete files from the destination folder that have been deleted from the source folder

--stats → gives transfer statistics

--progress → shows progress of each file transfer, useful when rsyncing large files

--exclude → exclude a directory or file from being backed up

--exclude-from → exclude the list of files/folders written in exclude.txt

Logs on Linux

Some of the common log files and directories you might see in /var/log:

Filename(s) Purpose
auth.log Authentication logs
boot.log Boot logs
btmp Invalid login attempts
cron Cron logs
daemon.log Logs for specific services (daemons)
dmesg Kernel boot messages
httpd/ Apache logs
kern.log Kernel logs
mail* Mail server logs
messages General/all logs
mysql* MySQL logs
secure Security/authentication logs
syslog All system logs
wtmp User logins and logouts

VPN Tunneling on CentOS using OpenVPN

3 Types of tunneling available:

·       Simple VPN (no security or encryption)

Server 1

/usr/sbin/openvpn --remote 10.100.1.50 --dev tun1 --ifconfig 172.16.1.1 172.16.1.2

Server 2

/usr/sbin/openvpn --remote 10.100.1.20 --dev tun1 --ifconfig 172.16.1.2 172.16.1.1

·       Static Key VPN (simple 128-bit encryption with a pre-shared key)

Server 1

openvpn --genkey --secret key

scp key root@10.100.1.20:/usr/share/doc/openvpn-2.0.9/

/usr/sbin/openvpn --remote 10.100.1.50 --dev tun1 --ifconfig 172.16.1.1 172.16.1.2 --secret key

Server 2

/usr/sbin/openvpn --remote 10.100.1.20 --dev tun1 --ifconfig 172.16.1.2 172.16.1.1 --secret key

·       Full TLS VPN (revolving-key encryption)
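Full TLS mode needs a certificate infrastructure (a CA, plus a certificate and key per endpoint). As a hedged sketch, a minimal server.conf might look like this – all paths and addresses are illustrative:

```
port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem
server 10.8.0.0 255.255.255.0   # VPN subnet handed out to clients
keepalive 10 120
persist-key
persist-tun
```

Clients then connect with their own cert/key pair signed by the same CA.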

Simple Load Balancing with APACHE MOD_PROXY

<VirtualHost *:80>
ProxyRequests off
ServerName domain.com
<Proxy balancer://mycluster>
# WebHead1
BalancerMember http://10.176.42.144:80
# WebHead2
BalancerMember http://10.176.42.148:80
# Security: technically we aren't blocking
# anyone, but this is the place to make those
# changes
Order Deny,Allow
Deny from none
Allow from all
# Load Balancer Settings
# We will be configuring a simple Round
# Robin style load balancer. This means
# that all webheads take an equal share
# of the load.
ProxySet lbmethod=byrequests
</Proxy>
# balancer-manager
# This tool is built into the mod_proxy_balancer
# module and will allow you to do some simple
# modifications to the balanced group via a GUI
# web interface.
<Location /balancer-manager>
SetHandler balancer-manager
# I recommend locking this one down to
# your office
Order deny,allow
Allow from all
</Location>
# Point of Balance
# This setting allows us to explicitly name
# the location in the site that we want to be
# balanced; in this example we balance "/",
# or everything in the site.
ProxyPass /balancer-manager !
ProxyPass / balancer://mycluster/
</VirtualHost>
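For the balancer block above to work, the proxy modules have to be loaded. On CentOS these lines typically go in httpd.conf (module paths vary by distribution and Apache version – this is a sketch, not a guaranteed set):

```
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
```

Run `apachectl configtest` after editing to catch a missing module before restarting Apache.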

mysqld and mysqld_safe

Behind the scenes there are actually two versions of the MySQL server, “mysqld” and “mysqld_safe”. Both read the same config sections. The main difference is that mysqld_safe launches with a few more safety features enabled to make it easier to recover from a crash or other problem.

Both mysqld and mysqld_safe will read config entries in the “mysqld” section. If you include a “mysqld_safe” section, then only mysqld_safe will read those values in.
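As a sketch, a my.cnf along these lines shows the split (the values themselves are the common CentOS defaults, shown here for illustration):

```
[mysqld]
# Read by both mysqld and mysqld_safe
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

[mysqld_safe]
# Read only by mysqld_safe
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
```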

To LOCK and UNLOCK all tables in MySQL (Useful for backup/LVM Snapshot)

mysql -u root -p"password" -e "FLUSH TABLES WITH READ LOCK;"

mysql -u root -p"password" -e "UNLOCK TABLES;"
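One caveat worth knowing: the read lock only lasts as long as the client session that took it, so two separate `mysql -e` invocations do not keep the tables locked in between. For a snapshot you need to hold one session open across the whole operation. A hedged sketch (credentials, the LVM volume name, and the sleep duration are all illustrative):

```shell
# Write the SQL that takes the lock, holds it, and releases it in ONE session
cat > /tmp/lock_and_hold.sql <<'EOF'
FLUSH TABLES WITH READ LOCK;
SELECT SLEEP(30); -- crude: keeps the session (and the lock) alive during the snapshot
UNLOCK TABLES;
EOF
# Hold the lock in the background, snapshot, then wait for the session to end:
#   mysql -u root -p"password" < /tmp/lock_and_hold.sql &
#   lvcreate -s -L 1G -n mysql_snap /dev/vg0/mysql
#   wait
```

A more robust approach keeps the session open interactively (or via a script that signals when the snapshot is done) rather than relying on a fixed sleep.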
