HTTP Status Codes for Beginners

All valid HTTP 1.1 Status Codes simply explained.
This article is part of the For Beginners series.

HTTP, the Hypertext Transfer Protocol, is the method by which clients (i.e. you) and servers communicate. When someone clicks a link, types in a URL or submits a form, their browser sends a request to a server for information. It might be asking for a page, or sending data, but either way, that is called an HTTP Request. When a server receives that request, it sends back an HTTP Response, with information for the client. Usually, this is invisible, though I'm sure you've seen one of the most common response codes: 404, indicating a page was not found. There are quite a few more status codes sent by servers, and the following is a list of the current ones in HTTP 1.1, along with an explanation of their meanings.

A more technical breakdown of HTTP 1.1 status codes and their meanings is available at http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html. There are several versions of HTTP, but currently HTTP 1.1 is the most widely used.
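All of these codes follow one simple rule: the first digit names the class, and the remaining two digits pick the specific condition. A quick sketch of that grouping (the helper function below is purely illustrative, not part of HTTP itself):

```shell
# Purely illustrative helper (not part of HTTP): the first digit of a status
# code tells you which class it belongs to.
status_class() {
  case "$1" in
    1[0-9][0-9]) echo "Informational" ;;
    2[0-9][0-9]) echo "Successful" ;;
    3[0-9][0-9]) echo "Redirection" ;;
    4[0-9][0-9]) echo "Client Error" ;;
    5[0-9][0-9]) echo "Server Error" ;;
    *)           echo "Unknown" ;;
  esac
}

status_class 404   # Client Error
status_class 301   # Redirection
```

HTTP itself encourages clients to treat a code they do not recognise as the x00 code of its class, so the class alone is often enough to act on.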

Informational

  • 100 – Continue
    A status code of 100 indicates that (usually the first) part of a request has been received without any problems, and that the rest of the request should now be sent.
  • 101 – Switching Protocols
    HTTP 1.1 is just one type of protocol for transferring data on the web, and a status code of 101 indicates that the server is changing to the protocol it defines in the “Upgrade” header it returns to the client. For example, when requesting a page, a browser might receive a status code of 101, followed by an “Upgrade” header showing that the server is changing to a different version of HTTP.

Successful

  • 200 – OK
    The 200 status code is by far the most common returned. It means, simply, that the request was received and understood and is being processed.
  • 201 – Created
    A 201 status code indicates that a request was successful and as a result, a resource has been created (for example a new page).
  • 202 – Accepted
    The status code 202 indicates that the server has received and understood the request, and that it has been accepted for processing, although it may not be processed immediately.
  • 203 – Non-Authoritative Information
    A 203 status code means that the request was received and understood, and that information sent back about the response is from a third party, rather than the original server. This is virtually identical in meaning to a 200 status code.
  • 204 – No Content
    The 204 status code means that the request was received and understood, but that there is no need to send any data back.
  • 205 – Reset Content
    The 205 status code is a request from the server to the client to reset the document from which the original request was sent. For example, if a user fills out a form, and submits it, a status code of 205 means the server is asking the browser to clear the form.
  • 206 – Partial Content
    A status code of 206 is a response to a request for part of a document. This is used by advanced caching tools, when a user agent requests only a small part of a page, and just that section is returned.

Redirection

  • 300 – Multiple Choices
    The 300 status code indicates that a resource has moved. The response will also include a list of locations from which the user agent can select the most appropriate.
  • 301 – Moved Permanently
    A status code of 301 tells a client that the resource they asked for has permanently moved to a new location. The response should also include this location. It tells the client to use the new URL the next time it wants to fetch the same resource.
  • 302 – Found
    A status code of 302 tells a client that the resource they asked for has temporarily moved to a new location. The response should also include this location. It tells the client that it should carry on using the same URL to access this resource.
  • 303 – See Other
    A 303 status code indicates that the response to the request can be found at the specified URL, and should be retrieved from there. It does not mean that something has moved – it is simply specifying the address at which the response to the request can be found.
  • 304 – Not Modified
    The 304 status code is sent in response to a request (for a document) that asked for the document only if it was newer than the one the client already had. Normally, when a document is cached, the date it was cached is stored. The next time the document is viewed, the client asks the server if the document has changed. If not, the client just reloads the document from the cache.
  • 305 – Use Proxy
    A 305 status code tells the client that the requested resource has to be reached through a proxy, which will be specified in the response.
  • 307 – Temporary Redirect
    307 is the status code that is sent when a document is temporarily available at a different URL, which is also returned. There is very little difference between a 302 status code and a 307 status code. 307 was created as another, less ambiguous, version of the 302 status code.
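One way to make the 3xx behaviour concrete is to watch a redirect with and without following it. The sketch below starts a throwaway local server purely for illustration (it assumes `python3` is available, and the port is chosen automatically):

```shell
OUT=$(python3 - <<'EOF'
import threading, urllib.request, urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/old':
            # permanent redirect to the new location
            self.send_response(301)
            self.send_header('Location', '/new')
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'ok')
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/old' % server.server_address[1]

# 1) refuse to follow redirects: we see the raw 301
class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None
try:
    urllib.request.build_opener(NoRedirect).open(url)
except urllib.error.HTTPError as e:
    print(e.code)

# 2) default behaviour: the redirect is followed and we end up with 200
print(urllib.request.urlopen(url).getcode())
server.shutdown()
EOF
)
echo "$OUT"   # 301, then 200
```

The first request shows the raw 301 the server actually sent; the second shows why you rarely notice redirects at all: the client quietly follows the Location header and reports the final 200.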

Client Error

  • 400 – Bad Request
    A status code of 400 indicates that the server did not understand the request due to bad syntax.
  • 401 – Unauthorized
    A 401 status code indicates that before a resource can be accessed, the client must be authorised by the server.
  • 402 – Payment Required
    The 402 status code is not currently in use, being listed as “reserved for future use”.
  • 403 – Forbidden
    A 403 status code indicates that the client cannot access the requested resource. That might mean that the wrong username and password were sent in the request, or that the permissions on the server do not allow what was being asked.
  • 404 – Not Found
    The best known of them all, the 404 status code indicates that the requested resource was not found at the given URL, and the server has no idea how long it will remain unavailable.
  • 405 – Method Not Allowed
    A 405 status code is returned when the client has tried to use a request method that the server does not allow. Request methods that are allowed should be sent with the response (common request methods are POST and GET).
  • 406 – Not Acceptable
    The 406 status code means that, although the server understood and processed the request, the response is of a form the client cannot understand. A client sends, as part of a request, headers indicating what types of data it can use, and a 406 error is returned when the response is of a type not in that list.
  • 407 – Proxy Authentication Required
    The 407 status code is very similar to the 401 status code, and means that the client must be authorised by the proxy before the request can proceed.
  • 408 – Request Timeout
    A 408 status code means that the client did not produce a request quickly enough. A server is set to only wait a certain amount of time for responses from clients, and a 408 status code indicates that time has passed.
  • 409 – Conflict
    A 409 status code indicates that the server was unable to complete the request, often because a file would need to be edited, created or deleted, and that file cannot be edited, created or deleted.
  • 410 – Gone
    A 410 status code is the 404’s lesser known cousin. It indicates that a resource has permanently gone (a 404 status code gives no indication of whether a resource has gone permanently or temporarily), and no new address is known for it.
  • 411 – Length Required
    The 411 status code occurs when a server refuses to process a request because a content length was not specified.
  • 412 – Precondition Failed
    A 412 status code indicates that one of the conditions the request was made under has failed.
  • 413 – Request Entity Too Large
    The 413 status code indicates that the request was larger than the server is able to handle, either due to physical constraints or to settings. Usually, this occurs when a file is sent using the POST method from a form, and the file is larger than the maximum size allowed in the server settings.
  • 414 – Request-URI Too Long
    The 414 status code indicates that the URL requested by the client was longer than the server can process.
  • 415 – Unsupported Media Type
    A 415 status code is returned by a server to indicate that part of the request was in an unsupported format.
  • 416 – Requested Range Not Satisfiable
    A 416 status code indicates that the server was unable to fulfill the request. This may be, for example, because the client asked for the 800th-900th bytes of a document, but the document was only 200 bytes long.
  • 417 – Expectation Failed
    The 417 status code means that the server was unable to properly complete the request. One of the headers sent to the server, the “Expect” header, indicated an expectation the server could not meet.

Server Error

  • 500 – Internal Server Error
    A 500 status code (all too often seen by Perl programmers) indicates that the server encountered something it didn’t expect and was unable to complete the request.
  • 501 – Not Implemented
    The 501 status code indicates that the server does not support all that is needed for the request to be completed.
  • 502 – Bad Gateway
    A 502 status code indicates that a server, while acting as a proxy, received a response from a server further upstream that it judged invalid.
  • 503 – Service Unavailable
    A 503 status code is most often seen on extremely busy servers, and it indicates that the server was unable to complete the request due to a server overload.
  • 504 – Gateway Timeout
    A 504 status code is returned when a server acting as a proxy has waited too long for a response from a server further upstream.
  • 505 – HTTP Version Not Supported
    A 505 status code is returned when the HTTP version indicated in the request is not supported. The response should indicate which HTTP versions are supported.

Nagios

Firewall and SELinux are disabled.

Server IP :- 192.168.0.10
Hostname :- shashi.example.com
 
Client IP :- 192.168.0.11
Hostname :- client.example.com
 
1.Package Requirement :-
 
# yum install httpd php
# yum install gcc glibc glibc-common
# yum install gd gd-devel
 
2.Create Nagios user and group :-
 
# useradd -m nagios
# passwd nagios
# usermod -G nagios  nagios
# groupadd nagcmd
# usermod -a -G nagcmd nagios
# usermod -a -G nagcmd apache
 
3.Download Some Package :-
 
# mkdir /opt/nagios
 
# cd /opt/nagios
 
Download nagios-3.2.3.tar.gz and nagios-plugins-1.4.11.tar.gz from the Nagios site into this directory.
 
# tar xvf nagios-3.2.3.tar.gz
 
# cd nagios-3.2.3
 
# ./configure --with-command-group=nagcmd
 
# make all
 
# make install
 
# make install-init
 
# make install-config
 
# make install-commandmode
 
# make install-webconf
 

 4. vim /usr/local/nagios/etc/objects/contacts.cfg

On line 35, change the email address to your admin mail ID.

6. Give the web password to the nagiosadmin user :-

 
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
New password :
Re-type new password :
 
Note:- If you want to change the admin name “nagiosadmin”, change it everywhere in the “/usr/local/nagios/etc/cgi.cfg” file too.
 
7. Go to this path :-
 
# cd /opt/nagios
 
# tar xvf nagios-plugins-1.4.11.tar.gz
 
# cd nagios-plugins-1.4.11
 
# ./configure --with-nagios-user=nagios --with-nagios-group=nagios
 
# make
 
# make install
 
# chkconfig --add nagios
 
# chkconfig nagios on
 
# /etc/init.d/nagios start
 
# /etc/init.d/nagios restart
 

# /etc/init.d/httpd restart

#chkconfig httpd on

 
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg (check nagios)
 
Total warnings: 0
Total Errors: 0
Things look okay – No serious problems
 
8. Log in to the web interface at http://192.168.0.10/nagios/ :-

username :- nagiosadmin
passwd :- *******
 
 
 
9.Package Requirement :-
 
# yum install openssl-devel xinetd
 
# cd /opt/shashi
 
# wget http://sourceforge.net/projects/nagios/files/nrpe-2.x/nrpe-2.13/nrpe-2.13.tar.gz/download
 
# tar xvf nrpe-2.13.tar.gz
 
# cd nrpe-2.13
 
# ./configure
 
General Options:
-------------------------
NRPE port: 5666
NRPE user: nagios
NRPE group: nagios
Nagios user: nagios
Nagios group: nagios
 
 
 
# make all
 
# make install-plugin
 
# make install-daemon
 
# make install-daemon-config
 
# make install-xinetd
 
 
10.Enter the following entry :-
 
# vim /etc/xinetd.d/nrpe
 
 
 
only_from = 127.0.0.1 192.168.0.10 (the Nagios server IP address)
 
11.Now, add entry for nrpe daemon to /etc/services file :-
 
# vim /etc/services
 
 
nrpe 5666/tcp # NRPE
 
# service xinetd restart
 
 
# chkconfig xinetd on
 
12.Test NRPE Daemon Install :-
 
# netstat -at |grep nrpe
Output should be :
 
tcp 0 0 *:nrpe *:* LISTEN
 
13.Check NRPE Service :-
 
# /usr/local/nagios/libexec/check_nrpe -H 192.168.0.10
 
Output should be the NRPE version:

NRPE v2.13
 
 
 
-: CLIENT SETUP :-
 
1.Package Requirement :-
 
# yum install openssl-devel xinetd
 
# yum install httpd php
 
# yum install gcc glibc glibc-common
 
# yum install gd gd-devel
 
2.Create Nagios user and group :-
 
# useradd -m nagios
# passwd nagios
# usermod -G nagios  nagios
# groupadd nagcmd
# usermod -a -G nagcmd nagios
# usermod -a -G nagcmd apache
 
3. Create a directory shashi :-
 
# mkdir /opt/shashi
 
# cd /opt/shashi
 
# wget http://sourceforge.net/projects/nagios/files/nrpe-2.x/nrpe-2.13/nrpe-2.13.tar.gz/download
 
 
# tar -xvf nrpe-2.13.tar.gz
# cd nrpe-2.13
 
# ./configure
 
General Options:
 
————————-
 
NRPE port: 5666
NRPE user: nagios
NRPE group: nagios
Nagios user: nagios
Nagios group: nagios
 
# make all
 
# make install-plugin
 
# make install-daemon
 
# make install-daemon-config
 
# make install-xinetd
 
Enter the following entry in :-
 
 
# vim /etc/xinetd.d/nrpe
 
 
only_from = 127.0.0.1 192.168.0.10 (the Nagios server IP address)
 
 
 
 
 
4.Now, add entry for nrpe daemon to /etc/services file
 
# vim /etc/services
nrpe 5666/tcp # NRPE
 
# service xinetd start
 
# service xinetd restart
 
 
# chkconfig xinetd on
 
5.Test NRPE Daemon Install
 
# netstat -at |grep nrpe
 
Output should be:
tcp 0 0 *:nrpe *:* LISTEN
 
Run this command from the server side :-
# /usr/local/nagios/libexec/check_nrpe -H 192.168.0.11 (client IP)

NRPE v2.13
 
# cd /opt/shashi
 
 
# tar xvf nagios-plugins-1.4.11.tar.gz
 
# cd nagios-plugins-1.4.11
 
# ./configure --with-nagios-user=nagios --with-nagios-group=nagios
 
# make
 
# make install
 
# chkconfig --add nagios
 
# chkconfig nagios on
 
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
 
Total Warnings: 0
 
Total Errors: 0
Things look okay – No serious problems were detected during the pre-flight check
 
6. All plugins :-
 
# cd /usr/local/nagios/libexec
 
# cp check_http check_tomcat
# cp check_http check_jboss
 
GO TO SERVER SIDE :-
 
# vim /usr/local/nagios/etc/nagios.cfg
 
# Definitions for monitoring the local (Linux) host
 
cfg_file=/usr/local/nagios/etc/objects/localhost.cfg (add line)
 
cfg_file=/usr/local/nagios/etc/objects/client.cfg (add line)
 
# vim /usr/local/nagios/etc/objects/client.cfg
 
 
define host {
    name                  client           ; Name of this template
    use                   generic-host     ; Inherit default values
    check_period          24x7
    check_interval        5
    retry_interval        1
    max_check_attempts    10
    check_command         check-host-alive
    notification_period   24x7
    notification_interval 30
    notification_options  d,r
    contact_groups        admins
    register              0                ; DON'T REGISTER THIS - IT'S A TEMPLATE
}
 
define host {
    use       client        ; Inherit default values from a template
    host_name client        ; The name we're giving to this server
    alias     client        ; A longer name for the server
    address   192.168.0.11  ; IP address of the server
}
define service {
    use                 local-service
    host_name           client
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}
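The `!` separators in a check_command pack arguments that Nagios substitutes into the $ARG1$ and $ARG2$ macros of the matching command definition in command.cfg. A purely illustrative sketch of that expansion for the PING service above (this mimics the macro substitution; it is not Nagios code):

```shell
# Illustrative only: expand check_ping!100.0,20%!500.0,60% the way Nagios
# would, using the check_ping command_line defined in command.cfg.
expand_check_ping() {
  hostaddress=$1; arg1=$2; arg2=$3
  echo "/usr/local/nagios/libexec/check_ping -H $hostaddress -w $arg1 -c $arg2 -p 5"
}

expand_check_ping 192.168.0.11 "100.0,20%" "500.0,60%"
# /usr/local/nagios/libexec/check_ping -H 192.168.0.11 -w 100.0,20% -c 500.0,60% -p 5
```

So the warning threshold (100 ms round trip / 20% loss) lands in -w and the critical threshold in -c; the same pattern applies to every `check_command ... !` line that follows.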
 
 
define service {
    use                 local-service ; Name of service template to use
    host_name           client
    service_description Disk Space
    check_command       check_disk!20%!10%!/
}

define service {
    use                 local-service ; Name of service template to use
    host_name           client
    service_description Total Processes
    check_command       check_local_procs!150!300!RSZDT
}

define service {
    use                   local-service ; Name of service template to use
    host_name             client
    service_description   HTTP
    check_command         check_http
    notification_interval 0 ; set > 0 if you want to be renotified
}

define service {
    use                   local-service
    host_name             client
    service_description   MySQL connection-time
    check_command         check_mysql_health!root!XXXX!connection-time
    notifications_enabled 1
}

define service {
    use                 local-service
    host_name           client
    service_description Tomcat
    check_command       check_tomcat
}

define service {
    use                 local-service
    host_name           client
    service_description Jboss
    check_command       check_jboss
}

define service {
    use                 local-service
    host_name           client
    service_description SSH
    check_command       check_ssh
}

define service {
    use                 local-service ; Name of service template to use
    host_name           client
    service_description Current Users
    check_command       check_local_users!20!50
}
 
 
# vim /usr/local/nagios/etc/objects/command.cfg
 
 
################################################################################
 
# NOTE: The following ‘check_…’ commands are used to monitor services on
 
# both local and remote hosts.
 
################################################################################
 
 
 
# 'check_procs' command definition

define command {
    command_name check_procs
    command_line $USER1$/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$
}
 
 
# 'check_ftp' command definition

define command {
    command_name check_ftp
    command_line $USER1$/check_ftp -H $HOSTADDRESS$ $ARG1$
}

# 'check_http' command definition

define command {
    command_name check_http
    command_line $USER1$/check_http -H $HOSTADDRESS$ -w 10 -c 20
}

# 'check_ssh' command definition

define command {
    command_name check_ssh
    command_line $USER1$/check_ssh $ARG1$ $HOSTADDRESS$
}

# 'check_ping' command definition

define command {
    command_name check_ping
    command_line $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5
}
 
 
# 'check_remote_users' command definition

define command {
    command_name check_remote_users
    command_line $USER1$/check_users -w $ARG1$ -c $ARG2$
}

# 'check_disk' command definition

define command {
    command_name check_disk
    command_line $USER1$/check_disk -w 20% -c 10% -p /dev/sda1
}

# 'check_mysql_health' command definition

define command {
    command_name check_mysql_health
    command_line $USER1$/check_mysql_health -H $HOSTADDRESS$ --user $ARG1$ --password $ARG2$ --mode $ARG3$
}

# 'check_tomcat' command definition

define command {
    command_name check_tomcat
    command_line $USER1$/check_tomcat -H $HOSTADDRESS$ -p 8080 -w 4 -c 5
}

# 'check_jboss' command definition

define command {
    command_name check_jboss
    command_line $USER1$/check_jboss -H $HOSTADDRESS$ -p 4444 -w 4 -c 5
}
 
 
 
Check Configuration :-
 
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
 
 
 
# /etc/init.d/nagios restart
 
######################### Now Enjoy Working with Nagios ###########################
 
 

Installing nagios with nrpe to monitor remote hosts

This post explains installing Nagios with NRPE to monitor remote hosts. Nagios is one of the most used monitoring tools today.

On Remote client server to be monitored:

 
Create the user nagios and set password:
# useradd nagios
# passwd nagios
 
Download the nagios plugin from http://www.nagios.org/download/plugins
 
# mkdir -p /opt/Nagios/Nagios_Plugins
# cd /opt/Nagios/Nagios_Plugins
# cd ..
# tar xzf nagios-plugins-1.4.15.tar.gz
# cd nagios-plugins-1.4.15
 
Compiling and Installing:
Prerequisite: the openssl-devel package.
#rpm -q openssl-devel
if not installed, then
# yum -y install openssl-devel
 
Configuring: 
 
# cd /opt/Nagios/nagios-plugins-1.4.15
# ./configure --with-nagios-user=nagios --with-nagios-group=nagios
If configure gets stuck at the ICMP ping check, run it as below:
./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-ping-command=ping
# make
# make install
 
Changing permissions:
# chown nagios.nagios /usr/local/nagios
# chown -R nagios.nagios /usr/local/nagios/libexec
 
Installing the xinetd super daemon if not installed
# yum install xinetd
 
Now download and install the NRPE daemon:
 
# mkdir -p /opt/Nagios/Nagios_NRPE
# cd /opt/Nagios/Nagios_NRPE
#cd ..
# tar -xzf nrpe-2.12.tar.gz
# cd nrpe-2.12
 
Compiling and Configuring nrpe
# cd /opt/Nagios/nrpe-2.12
# ./configure 
# make all
# make install-plugin
# make install-daemon
# make install-daemon-config
# make install-xinetd
 
Add Nagios Monitoring server to the “only_from” directive
# vi /etc/xinetd.d/nrpe
only_from =  
 
Add entry for nrpe daemon to services
# vi /etc/services
nrpe      5666/tcp    # NRPE
 
Restart Xinetd and set chkconfig on
# chkconfig xinetd on
# service xinetd restart
 
Checking whether NRPE daemon is running and listening on port 5666:
# netstat -at |grep nrpe
tcp    0    0 *:nrpe    *.*    LISTEN
 
Open Port 5666 on Firewall
if using csf add 5666 to TCP_IN and TCP_OUT in /etc/csf/csf.conf and restart as
#csf -r

And add the following lines to /usr/local/nagios/etc/nrpe.cfg

command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/sda
command[check_mem]=/usr/local/nagios/libexec/check_mem 85 95
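When the server runs check_nrpe!check_disk, the client's NRPE daemon looks up "check_disk" in its nrpe.cfg and executes the mapped command line. A toy sketch of that lookup (the grep-based parsing and the /tmp path are my own illustration, not how NRPE is actually implemented):

```shell
# Toy model of NRPE's command[...] lookup; not NRPE source code.
nrpe_lookup() {
  grep "^command\[$1\]=" "$2" | head -n1 | cut -d= -f2-
}

# A miniature nrpe.cfg with entries like the ones above:
cat > /tmp/nrpe_demo.cfg <<'EOF'
command[check_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/sda
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
EOF

nrpe_lookup check_disk /tmp/nrpe_demo.cfg
# /usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/sda
```

The point is that the command name crossing the wire is only a key; the actual plugin path and thresholds always come from the client's own nrpe.cfg, which is why a bad entry there shows up as a check failure on the server.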
 
 
Nagios server setup (on the main Nagios server)
Downloading and installing the NRPE daemon:
 
# mkdir -p /opt/Nagios/Nagios_NRPE
# cd /opt/Nagios/Nagios_NRPE
#cd ..
# tar -xzf nrpe-2.12.tar.gz
# cd nrpe-2.12
 
Compiling and Configuring nrpe
# cd /opt/Nagios/nrpe-2.12
# ./configure 
# make all
# make install-plugin
 
Check NRPE daemon is functioning from nagios server. 
# /usr/local/nagios/libexec/check_nrpe -H
Output:
NRPE v2.12
 
Check whether it is defined or not.
# vi /usr/local/nagios/etc/objects/commands.cfg
define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }
 
If you want to add a few IPs, define them in hosts.cfg,
make a hostgroup in hostgroups.cfg with all the needed hosts as members,
and then in services.cfg add the services you want, specifying the hostgroup name as follows:
 
### CPU LOAD/Load Average ###
define service{
        use                             basic-service
        hostgroup_name                  customer1
        contact_groups                  admins
        service_description             CPU LOAD
        check_command                   check_nrpe!check_load
}
 
### Disk Usage ###
define service{
        use                             basic-service
        hostgroup_name                  customer1
        contact_groups                  admins
        service_description             CHECK DISK
        check_command                   check_nrpe!check_disk
}
 
### RAM Usage ###
define service{
        use                             basic-service
        hostgroup_name                  customer1
        contact_groups                  admins
        service_description             CHECK MEM
        check_command                   check_nrpe!check_mem
}
Check the configuration  as :
#/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Now restart
#/etc/init.d/nagios restart
 
 
 
 How to set up URL or website monitoring on the Nagios server

 

First of all, create a configuration directory for writing the rules. You can also create the rules in localhost.cfg, but I recommend creating a separate directory and creating the files in it.

#mkdir /etc/nagios/monitor_websites
and cd to this directory

And create file host.cfg in this directory for setting the urls.
#vi host.cfg

Suppose I want to monitor three sites
www.abc.com, www.xyz.com, www.pqr.com

Configure host.cfg as below.
#vi host.cfg

define host{
host_name  abc.com
alias         abc
address    www.abc.com
use        generic-host
}

define host{
host_name  xyz.com
alias      xyz
address    www.xyz.com
use        generic-host
}

define host{
host_name  pqr.com
alias           pqr
address    www.pqr.com
use        generic-host
}

#Defining a group of urls – add this if you want to set up an HTTP check service.
#(members must match the host_name values defined above)
define hostgroup {
hostgroup_name    monitor_websites
alias             monitor_urls
members           abc.com, xyz.com, pqr.com
}
:wq #save it

And now create the file services.cfg for setting the service ( http_check )

#vi services.cfg
## Hostgroups services ##
define service {
hostgroup_name                 monitor_websites
service_description             HTTP
check_command                 check_http
use                             generic-service
notification_interval           0
}

Now give the permissions for directory and configuration files.
#chown  -R nagios:nagios monitor_websites

List and check.
[root@mail nagios]#  ll monitor_websites
total 16
-rw-r--r-- 1 nagios nagios 669 Apr 25 23:13 host.cfg
-rw-r--r-- 1 nagios nagios 253 Apr 25 23:15 services.cfg
[root@mail nagios]#

Now give the configuration directory path in main nagios configuration file.
#vi /etc/nagios/nagios.cfg
cfg_dir=/etc/nagios/monitor_websites
:wq

Now restart the nagios service.
#service nagios restart

That's it. Check the Nagios site. You are done.

 How to install and configure Nagios Monitoring tool in redhat linux rhel5 or centos

This article will help you to install and configure Nagios monitoring tool in redhat linux or other redhat distributions like fedora, centos etc.
Nagios Installation :
Installing packages. Apache, PHP, GCC & GD

Installing Apache web server:
#yum -y install httpd*
set the hostname as an FQDN
#chkconfig httpd on


Installing PHP, GCC and GD:
#yum -y install php*
#yum -y install gcc*
#yum -y install gd*

Getting the package:
Get the latest packages from the net. Move them to some directory and untar them.

#wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.0.tar.gz
#mv nagios-3.2.0.tar.gz  /usr/local/src
#tar xvf nagios-3.2.0.tar.gz

#wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.14.tar.gz
#mv nagios-plugins-1.4.14.tar.gz /usr/local/src
#tar xvf nagios-plugins-1.4.14.tar.gz

Adding nagios user and setting password for that user:
#useradd nagios
#passwd nagios
usermod -a -G nagios apache              //To permit the commands through web interface.

Configuration of Nagios:
cd /usr/local/src/nagios-3.2.0

./configure
make all
make install
make install-init
make install-config
make install-commandmode
make install-webconf

Admin account setting for nagios:
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin          //passwd for web interface
give the password
service httpd restart

Nagios Plugin installation:
install the nagios plugins

cd /usr/local/src/nagios-plugins-1.4.14
./configure
make
make install

Creating entry in /etc/init.d/:
chkconfig --add nagios
chkconfig nagios on

Checking the configuration:
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Nagios Alert Plugin – Mozilla Addon:
Name : Nagios Checker
[give name and url]

Configuring remotehost in nagios:
cp /usr/local/nagios/etc/objects/localhost.cfg  /usr/local/nagios/etc/objects/remotehost.cfg
[comment the hostgroup entries in order to prevent duplicate entries]
[change the localhost entries to the remote machine hostname and IP address.]

password protected directory in tomcat

How do you protect a web directory with a password? If we are using Apache, we can do it easily with .htaccess. It will prompt the user for credentials when entering the directory. But how do you password-protect a directory in the Tomcat web server? In this post we will discuss how to do it with Tomcat Realms. This example was tested in Tomcat 7 and Tomcat 6.

 

Steps :
1) Add user, password and role in conf/tomcat-users.xml
2) In the webapps/examples/WEB-INF/web.xml specify role, method and urls.
3) Restart Tomcat and check.

Step 1:
# vi conf/tomcat-users.xml

<tomcat-users>
<!-- webadmin is the role name of the users who can access the application -->
<role rolename="webadmin"/>
<user username="randeep" password="randeep" roles="webadmin"/>
</tomcat-users>

Step 2:

<security-constraint>
<display-name>Example Security Constraint</display-name>
<web-resource-collection>
<web-resource-name>application</web-resource-name>
<!-- applicable to all URLs in the application -->
<url-pattern>/*</url-pattern>
<http-method>DELETE</http-method>
<http-method>GET</http-method>
<http-method>POST</http-method>
<http-method>PUT</http-method>
</web-resource-collection>
<auth-constraint>
<role-name>webadmin</role-name>
</auth-constraint>
</security-constraint>

<login-config>
<!-- Authentication type -->
<auth-method>BASIC</auth-method>
<realm-name>application</realm-name>
</login-config>

Step 3:
Restart tomcat
/etc/init.d/tomcat restart
or
bin/shutdown.sh
bin/startup.sh

Configuring multiple domains or sub-domains in tomcat

We all know how to create multiple domains in Apache, by adding virtual host entries. But how do you configure multiple domains in Tomcat? We can do this by adding multiple Host tags in server.xml. It's very simple. See the example below. Edit server.xml under the conf directory. Suppose you want to set up three domains: domain1.com, domain2.com and domain3.com. Every domain points to the same IP address on the server.

 

<Engine defaultHost="arnid" name="Catalina">
<Host name="domain1.com" appBase="/usr/tomcat/apache-tomcat-6.0.26/webapps/domain_root1"
        unpackWARs="true" autoDeploy="true">
        <Alias>www.domain1.com</Alias>
        <Context path="" docBase="."/>
</Host>
<Host name="domain2.com" appBase="/usr/tomcat/apache-tomcat-6.0.26/webapps/domain_root2"
        unpackWARs="true" autoDeploy="true">
        <Alias>www.domain2.com</Alias>
        <Context path="" docBase="."/>
</Host>
<Host name="domain3.com" appBase="/usr/tomcat/apache-tomcat-6.0.26/webapps/domain_root3"
        unpackWARs="true" autoDeploy="true">
        <Alias>www.domain3.com</Alias>
        <Context path="" docBase="."/>
</Host>
</Engine>

Now restart tomcat and everything should be working fine.

url redirection in apache using proxypass

Url redirection in Apache web server.

Here is a small example of URL redirection in Apache using ProxyPass. I used this when I was using Apache as a proxy to Apache Tomcat using mod_jk.

Example:
you want to forward
www.yourdomain.com/abc      to www.yourdomain.com/linux/commands/abc

Here is how you can do it. Open httpd.conf in your favorite editor. Add the following lines.
ProxyPass /abc                   http://localhost/linux/commands/abc
ProxyPassReverse /abc    http://localhost/linux/commands/abc

But while using the above solution you may have to add a trailing "/"; to correct that, use the following rewrite rule.
RewriteEngine on
RewriteRule ^/abc$ /abc/ [R]

But to use the above solution you have to load the mod_proxy and mod_rewrite modules.
Load the modules, add the rules, restart Apache, and you are done. Please suggest if you know any better ways to do it. Remember that all my webapps are in Tomcat and Apache is just the front end.

Securing Tomcat with Apache Web Server mod_proxy


 

I wanted to enable SSL encryption to allow secure channels (https) to our tomcat server. There were 2 obvious ways to do this:

  1. Secure Tomcat directly
  2. Secure an Apache web server front-end that controls access to tomcat

Secure Tomcat directly

Securing tomcat directly is fairly straight-forward and is the easiest. But it does have some drawbacks. The major drawback for me was restricting access to other webapps running within the tomcat container. I had about 5 different webapps running, but I only wanted one to be publicly available. Now some will argue that you can restrict access by enforcing rules within the firewall, but I found that to be clunky. If you’re interested in going this route, here is a link describing how to enable security for tomcat directly:
http://tomcat.apache.org/tomcat-5.5-doc/ssl-howto.html

Secure an Apache web server front-end

 I prefer using the Apache web server as the front-end for many reasons, which have been discussed to death. I’ll note some of the more important ones:

  • Apache can serve static content much faster
  • Apache can run as a load balancer in front of a cluster of Tomcat instances
  • Apache can handle SSL encryption for a cluster of Tomcat instances
  • Apache has several modules that can easily be plugged in

For more reasons have a look at this article: http://wiki.apache.org/tomcat/FAQ/Connectors

In this instance I will be using Apache’s mod_proxy module to redirect traffic to the tomcat server and use Apache to provide the SSL encryption.

To get an idea of how it works: when a user visits our website on the default web port 80, Apache redirects the traffic to Tomcat on port 8080. Similarly, when the browser communicates on port 443 (https), Apache handles the encryption and redirects the traffic to Tomcat on port 8443.

In my setup of Apache, I have 2 main configuration files:

  1. httpd.conf
  2. ssl.conf

httpd.conf contains the configuration for handling traffic running on port 80:

 Listen 80
 ProxyRequests Off
 ProxyPreserveHost on
 <VirtualHost _default_:80>
     ServerName your_company_domain_name
     ProxyPass /app http://localhost:8080/app
     ProxyPassReverse /app http://localhost:8080/app
     RewriteEngine On
     RewriteRule ^(.*)/login$ https://%{SERVER_NAME}$1/login [L,R]
 </VirtualHost>

The ProxyPass and ProxyPassReverse directives are responsible for the redirection.
The RewriteEngine and RewriteRule directives redirect any requests for the login page on port 80 to the secure channel running on port 443.

ssl.conf contains the configuration for handling traffic running on port 443:

 Listen 443
 <VirtualHost _default_:443>
     SSLEngine on
     SSLProxyEngine on
     SSLCertificateFile /etc/pki/tls/certs/your_company_certificate.pem
     SSLCertificateKeyFile /etc/pki/tls/certs/your_company_private_key.pem
     ServerName your_company_domain_name
     ProxyPass /app http://localhost:8443/app
     ProxyPassReverse /app http://localhost:8443/app
 </VirtualHost>

The SSLCertificateFile and SSLCertificateKeyFile directives enable encryption; they require the private key and the certificate file provided by your certificate authority.
Just as before, the ProxyPass and ProxyPassReverse lines redirect traffic from port 443 to Tomcat on port 8443.

server.xml contains the Tomcat configuration details:

 <Connector port="8080" maxHttpHeaderSize="8192"
            maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
            enableLookups="true" redirectPort="443" acceptCount="100"
            connectionTimeout="20000" disableUploadTimeout="true"/>

 <Connector port="8443" maxHttpHeaderSize="8192"
            maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
            enableLookups="true" acceptCount="100"
            connectionTimeout="20000" disableUploadTimeout="true"
            scheme="https" secure="true" SSLEnabled="false"
            proxyPort="443" proxyName="your_company_domain_name"/>

Importing certificates into a keystore

keytool -import -alias auscert -trustcacerts -keystore <keystore> -file <certificate>

Extracting existing certificates and private keys from a keystore to be used in Apache in PEM format

Originally, I had set up encryption within Tomcat rather than Apache. When I wanted to migrate the control of security from Tomcat to Apache, I was faced with the issue that Tomcat and Apache each expect the certificates in different formats. After much research I found a tool that was helpful in extracting the private key and the certificate out of the keystore into the PEM format expected by Apache. The open-source tool can be downloaded here: http://sourceforge.net/projects/portecle
 

To extract the private key from a JKS keystore, you can also use:
http://www.softpedia.com/get/Security/Security-Related/KeyTool-IUI.shtml
Select Export -> Keystore’s entry -> Private key.
When identifying the target files, remember to choose ‘PEM’.
The rest is self-explanatory.

Piranha LVS

I needed a load-balanced and redundant solution for a squid proxy server. My primary goals were simplicity and redundancy. I started with a stand-alone squid proxy server (CentOS 6, squid, NTLM authentication). Alone, this works great for an AD environment; single sign-on authentication so no password prompt, integrates with AD groups for various access levels, auto-proxy config using DHCP and DNS, etc. (anyway, I should post this config later).

The first step was building an identical squid server (I just cloned the VM, changed the name and IP). I then set up a unison cron job to sync the squid configs, so any change on one would propagate to the other:

* * * * * /usr/bin/unison /etc/squid ssh://proxy2//etc/squid -batch >> /dev/null 2>&1

Once I had two functional proxy servers, my first attempt at redundancy relied on most browsers’ ability to switch to a second proxy server if the first goes down. I used an auto-configuration script to push out these settings. Note that I incorporated IP-based load balancing as well. In the proxy.pac:

function FindProxyForURL(url, host) {
 var proxy1="proxy1:8080"
 var proxy2="proxy2:8080"
 var myip=myIpAddress()
 var ipbits=myip.split(".")
 var myseg=parseInt(ipbits[3])
 if(myseg==Math.floor(myseg/2)*2) {
  var proxone=proxy1
  var proxtwo=proxy2
 }
 else {
  var proxone=proxy2
  var proxtwo=proxy1
 }
 return "PROXY "+proxone+"; PROXY "+proxtwo+";";
}
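The pac file splits clients by the parity of the last octet of their IP address. The same decision logic can be sketched in shell to see which proxy order a given client would get (the proxy names and port are the ones from the pac file above):

```shell
#!/bin/sh
# Mirror of the proxy.pac logic: an even last octet prefers proxy1,
# an odd last octet prefers proxy2.
pick_proxy() {
    last_octet=${1##*.}   # strip everything up to the final dot
    if [ $((last_octet % 2)) -eq 0 ]; then
        echo "PROXY proxy1:8080; PROXY proxy2:8080;"
    else
        echo "PROXY proxy2:8080; PROXY proxy1:8080;"
    fi
}

pick_proxy 10.120.100.60   # even last octet
pick_proxy 10.120.100.61   # odd last octet
```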

This works, but the problem I discovered is that some browsers can take a long time to determine whether a proxy is down – about 15 seconds. And some browsers check for every host. Most websites these days have links, images, and ads from several different hosts, so just loading a single home page can literally take minutes if one of the proxy servers is down. And if you close your browser and re-open it, it has to check all over again.

Then I thought I would try a real load-balancing solution. I found Piranha, which is Red Hat’s load-balancing solution with optional redundancy. It’s very similar to Microsoft’s NLB for those familiar with it, but my biggest complaint is that it is typically designed to run on servers separate from your proxy servers (or whatever IP service you are load balancing). That means adding two more servers to the mix. I wanted simple, and turning two servers into four isn’t my kind of simple. So, why not try to get Piranha running on the proxy servers themselves? Here’s my layout:

proxy1: 10.120.100.60
proxy2: 10.120.100.61
Squid port: 8080
Available IP address to be used as the virtual IP: 10.120.100.62

I installed Piranha on each server:

# yum install piranha

and set the Piranha password:

# piranha-passwd

I had to enable iptables; I had turned it off initially. I turned it on and wiped the config – of course, don’t wipe yours if you are using it for other stuff! (more on why we need iptables later):

# chkconfig iptables on
# service iptables start
# iptables -F
# iptables -t nat -F
# iptables -t mangle -F
# iptables -X
# service iptables save

Enable IP forwarding (not sure this is required for our all-in-one design, but it’s standard for Piranha configs anyway).

# sysctl -w net.ipv4.ip_forward=1

To make this persistent across reboots, edit that line in /etc/sysctl.conf.

Turned on the web-based Piranha config on the first server:

# service piranha-gui start

The Piranha GUI runs a web server on port 3636. Browse to it and log in. The username is “piranha” with the password you set earlier:

On the GLOBAL SETTINGS page, set the Primary server public IP to the first server’s IP address. Network type should be Direct Routing. Private IP should be blank.

 


On the REDUNDANCY page, plug in the IP address of your second server

Add a virtual server. Use an available unused IP address on the same subnet as your Piranha servers.

 



Add two Real Servers, which in this case are the same as your Piranha servers. Use their IP addresses, and you can leave the port blank, since that was defined on the virtual server.


In my case, using squid, the default Monitoring Scripts work, since an HTTP GET provides a response.

Now that the GUI config is done, copy the config from server 1 to server 2:

# scp /etc/sysconfig/ha/lvs.cf proxy2:/etc/sysconfig/ha/lvs.cf

The config needs to be identical on both servers. For reference, here’s what mine looks like:

serial_no = 26
primary = 10.120.100.60
service = lvs
backup_active = 1
backup = 10.120.100.61
heartbeat = 1
heartbeat_port = 539
keepalive = 3
deadtime = 6
network = direct
debug_level = NONE
monitor_links = 1
syncdaemon = 0
virtual proxy {
     active = 1
     address = 10.120.100.62 eth0:1
     vip_nmask = 255.255.255.0
     port = 8080
     persistent = 60
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server proxy1 {
         address = 10.120.100.60
         active = 1
         weight = 1
     }
     server proxy2 {
         address = 10.120.100.61
         active = 1
         weight = 1
     }
}

Now you’re ready to start. Let’s start the primary server first. On server 1:

# chkconfig pulse on
# service pulse start

Watch /var/log/messages for the following:

Feb  3 10:33:10 proxy1 pulse[2850]: STARTING PULSE AS MASTER
Feb  3 10:33:13 proxy1 pulse[2850]: partner dead: activating lvs
Feb  3 10:33:13 proxy1 lvs[2853]: starting virtual service Proxy active: 8080
Feb  3 10:33:13 proxy1 lvs[2853]: create_monitor for Proxy/proxy1 running as pid 2861
Feb  3 10:33:13 proxy1 lvs[2853]: create_monitor for Proxy/proxy2 running as pid 2862
Feb  3 10:33:13 proxy1 nanny[2862]: starting LVS client monitor for 10.120.100.62:8080 -> 10.120.100.61:8080
Feb  3 10:33:13 proxy1 nanny[2861]: starting LVS client monitor for 10.120.100.62:8080 -> 10.120.100.60:8080
Feb  3 10:33:14 proxy1 nanny[2861]: [ active ] making 10.120.100.60:8080 available
Feb  3 10:33:14 proxy1 nanny[2862]: [ active ] making 10.120.100.61:8080 available
Feb  3 10:33:14 proxy1 ntpd[1528]: Listening on interface #7 eth0:1, 10.120.100.62#123 Enabled
Feb  3 10:33:18 proxy1 pulse[2855]: gratuitous lvs arps finished

ifconfig should show the virtual IP address bound to eth0:1

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:C3:50:80
          inet addr:10.120.100.62  Bcast:10.120.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

Start pulse on the backup server:

# chkconfig pulse on
# service pulse start

All you should see in /var/log/messages is:

Feb  3 10:35:45 proxy2 pulse[2820]: STARTING PULSE AS BACKUP

If you try to use your new virtual IP address at this point, it won’t work. We need to do some iptables work. On each server:

# iptables -t nat -A PREROUTING -p tcp -d 10.120.100.62 --dport 8080 -j REDIRECT
# service iptables save

That’s it. Now test everything: make sure your clients still work while you stop/start pulse, stop/start squid, reboot, etc. Worst case, clients may be unable to use the service for a few seconds. If you ever make changes, be sure to copy the lvs.cf to the other server and then reload pulse. Unfortunately, the GUI does not provide a method to copy the configs and reload the services.

Notes:
I was using VMware 4.1 as the host servers. Originally, I could not get the Redundancy heartbeat monitor to work – both servers could not see each other, so they both became active, assigning themselves the same IP address. Turns out, I had to do two things to the VM guests:

  1. Use the E1000 Network card instead of the VMXNET.
  2. add “pci=nomsi” to the kernel line in /boot/grub/grub.conf (maybe – not sure if this did anything)

You may notice that even after you stop pulse on both servers, your clients can still connect. Why? iptables is still listening on that virtual IP address, even though it’s not bound to an adapter. And as long as your clients and/or switch have the MAC address cached, traffic will still be sent to the last known port with that IP.

 

Linux find command and xargs in detail

1. General form of the find command:

  find pathname -options [-print -exec -ok ...]


2. Parameters of the find command:

  pathname: the directory in which find starts searching. For example, use . for the current directory and / for the root of the file system.
 -print: writes the files that match to standard output.
 -exec: runs the given shell command on each matching file, in the form 'command' {} \; (note the space between {} and \;).
 -ok: the same as -exec, but in a safer mode: it prompts before each command is executed, allowing the user to decide whether to run it.


3. find command options

  -name

 Find files by file name.

 -perm
 Find files by permission mode.

 -prune
 Use this option to make find skip a specified directory. Be careful: if the -depth option is used at the same time, -prune is ignored by find.

 -user
 Find files by their owner.

 -group
 Find files by the group they belong to.

 -mtime -n / +n
 Find files by modification time: -n means modified within the last n days, +n means modified more than n days ago. The find command also has -atime and -ctime options, which work the same way for access time and status-change time.

 -nogroup
 Find files whose owning group does not exist in /etc/group.

 -nouser
 Find files whose owner does not exist in /etc/passwd.

 -newer file1 ! -newer file2
 Find files modified more recently than file1 but not more recently than file2.

 -type
 Find files of a certain type, such as:

 b - block device file.
 d - directory.
 c - character device file.
 p - pipe (FIFO) file.
 l - symbolic link file.
 f - regular file.

 -size n[c]: find files that are n blocks long; with the c suffix, the length is measured in bytes.
 -depth: process the files inside each directory before the directory entry itself.
 -fstype: find files on a particular type of file system; the file system types can usually be found in /etc/fstab, which records the file systems configured on the machine.
 -mount: stay on one file system; do not cross file system mount points.
 -follow: when find encounters a symbolic link file, follow it to the file the link points to.
 -cpio: write matching files to a tape device in cpio format.

In addition, distinguish the following time options:

  -amin n
 Find files last accessed n minutes ago.
 -atime n
 Find files last accessed n*24 hours ago.
 -cmin n
 Find files whose status last changed n minutes ago.
 -ctime n
 Find files whose status last changed n*24 hours ago.
 -mmin n
 Find files whose data was last modified n minutes ago.
 -mtime n
 Find files whose data was last modified n*24 hours ago.
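The -mtime behavior is easy to see with a scratch directory holding one fresh file and one backdated file. This is a small sketch; the directory is temporary and the timestamp is arbitrary:

```shell
#!/bin/sh
# Demonstration of -mtime -n (within n days) vs +n (more than n days ago).
dir=$(mktemp -d)
touch "$dir/fresh"                 # modified just now
touch -t 202001010000 "$dir/old"   # timestamped far in the past

recent=$(find "$dir" -type f -mtime -1)   # changed within the last 24 hours
older=$(find "$dir" -type f -mtime +1)    # changed more than a day ago
echo "recent: $recent"
echo "older:  $older"
rm -rf "$dir"
```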


4. Executing shell commands with -exec or -ok

With find, whenever you want to perform an operation on the files it locates, you can use -exec on the matches; it is very convenient.

Some operating systems only allow the -exec option to run commands such as ls or ls -l. Most users use this option to find old files and delete them. Before really executing the rm command to delete files, it is best to use the ls command first to confirm that they are the files you want to delete.

The -exec option is followed by the command or script to execute, then a pair of braces {}, a space, a \, and finally a semicolon. The -print option is not required in order to use -exec. If you check the output, you will see that find prints paths relative to the current directory.

For example, to list the matches with ls -l, put the ls -l command in the -exec option of find:

  # find . -type f -exec ls -l {} \;
 -rw-r--r--   1 root  root   34928 2003-02-25 ./conf/httpd.conf
 -rw-r--r--   1 root  root   12959 2003-02-25 ./conf/magic
 -rw-r--r--   1 root  root     180 2003-02-25 ./conf.d/README

In the example above, find matches all regular files in the current directory and lists them with the ls -l command given in the -exec option.
To find files in the logs directory that were modified more than five days ago and delete them:

  $ find logs -type f -mtime +5 -exec rm {} \;

Remember: before you delete a file in any way from the shell, you should view it first and be careful! With commands such as mv or rm, you can use the safe mode of the -exec option, i.e. -ok. It will prompt you before each matched file.

In the example below, find locates all files in the current directory whose names end in .log and were modified more than 5 days ago, and deletes them, prompting before each deletion:

  $ find . -name "*.log" -mtime +5 -ok rm {} \;
 < rm ... ./conf/httpd.conf > ? n

Press y to delete the file, or n to keep it.

Any command can be used with the -exec option.

In the following example we use the grep command. find first matches all files named “passwd*”, such as passwd, passwd.old, and passwd.bak, and then runs grep on each to see whether the user sam exists in them:

  # find /etc -name "passwd*" -exec grep "sam" {} \;
 sam:x:501:501::/usr/sam:/bin/bash
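The passwd example above can be reproduced safely in a scratch directory instead of /etc. The file contents here are made up to match the sample output:

```shell
#!/bin/sh
# Recreate the -exec grep pattern on throwaway files.
dir=$(mktemp -d)
printf 'sam:x:501:501::/usr/sam:/bin/bash\n' > "$dir/passwd.bak"
printf 'root:x:0:0:root:/root:/bin/bash\n'  > "$dir/passwd"

# Run grep on every file matching passwd*, as in the /etc example:
matches=$(find "$dir" -name "passwd*" -exec grep "sam" {} \;)
echo "$matches"
rm -rf "$dir"
```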


Part two: examples of the find command


1. To find all files in the current user’s home directory, either of the following two commands works:

  $ find $HOME -print
 $ find ~ -print


2. To find files in the current directory whose owner has read and write permission, and whose group and other users have read permission:

  $ find . -type f -perm 644 -exec ls -l {} \;


3. To find regular files of zero length anywhere on the system and list their complete paths:

  $ find / -type f -size 0 -exec ls -l {} \;


4. To find regular files in the directory /var/logs modified more than seven days ago, asking before deleting them:

  $ find /var/logs -type f -mtime +7 -ok rm {} \;


5. To find all files belonging to the root group:

  $ find . -group root -exec ls -l {} \;
 -rw-r--r--   1 root  root   595  10 31 01:09  ./fie1


6. To find admin.log files with a numeric suffix that were accessed within the last 7 days and delete them, with a prompt for each.

This command checks only three digits, so the suffix of the matching files should not exceed 999. After creating a few admin.log* files, use the following command:

  $ find . -name "admin.log[0-9][0-9][0-9]" -atime -7 -ok rm {} \;
 < rm ... ./admin.log001 > ? n
 < rm ... ./admin.log002 > ? n
 < rm ... ./admin.log042 > ? n
 < rm ... ./admin.log942 > ? n


7. To find all directories below the current directory and sort them:

  $ find . -type d | sort


8. To find all rmt tape devices on the system:

  $ find /dev/rmt -print


Part three: xargs

xargs builds and executes command lines from standard input.

When processing matched files with the -exec option, find passes all matched files to the command that -exec runs. Some systems limit the length of the command line that can be passed, so after find has been running for a few minutes an overflow error can occur. The error message is usually “argument list too long” or “argument list overflow”. This is where the xargs command is useful, particularly in combination with find.

find passes the matched files to the xargs command, and xargs runs the command on only a batch of the files at a time, not all of them at once as the -exec option does. It processes the first batch, then the next, and so on.

On some systems, the -exec option starts a separate process for every matched file rather than passing all the matched files as arguments to one execution; in some cases this leads to too many processes and degraded system performance, so it is not efficient.

With xargs there is only one process. How many arguments xargs takes at a time, and whether it takes all of them or fetches them in batches, is determined by the command’s options and by tunable limits in the system kernel.
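The batching behavior described above is easy to observe with the -n flag, which caps how many arguments xargs passes per invocation of the command:

```shell
#!/bin/sh
# Four arguments, two per invocation: echo runs twice, printing two lines.
out=$(printf 'a\nb\nc\nd\n' | xargs -n 2 echo)
echo "$out"
lines=$(echo "$out" | wc -l)
echo "echo was invoked $lines times"
```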

Let’s look at how the xargs command is used in combination with find, with some examples.

The following example finds every regular file and then uses the xargs command to test what kind of file each is:

  # find . -type f -print | xargs file
 ./.kde/Autostart/Autorun.desktop: UTF-8 Unicode English text
 ./.kde/Autostart/.directory:      ISO-8859 text
 ......

To find memory dump files (core dumps) across the whole system and save the list to the file /tmp/core.log:

  $ find / -name "core" -print | xargs echo "" > /tmp/core.log

The command above runs too slowly, so I change it to search from the current directory:

  # find . -name "file*" -print | xargs echo "" > /temp/core.log
 # cat /temp/core.log
 ./file6

To find files in the current directory on which all users have read, write, and execute permission, and revoke the corresponding write permission:

  # ls -l
 drwxrwxrwx   2 sam  adm   4096  10 30 20:14  file6
 -rwxrwxrwx   2 sam  adm      0  10 31 01:01  http3.conf
 -rwxrwxrwx   2 sam  adm      0  10 31 01:01  httpd.conf

 # find . -perm -7 -print | xargs chmod o-w
 # ls -l
 drwxrwxr-x   2 sam  adm   4096  10 30 20:14  file6
 -rwxrwxr-x   2 sam  adm      0  10 31 01:01  http3.conf
 -rwxrwxr-x   2 sam  adm      0  10 31 01:01  httpd.conf

To search all regular files for the word hostname with the grep command:

  # find . -type f -print | xargs grep "hostname"
 ./httpd1.conf: # different IP addresses or hostnames and have them handled by the
 ./httpd1.conf: # VirtualHost: If you want to maintain multiple domains/hostnames on your

To search all regular files in the current directory for the word hostnames with the grep command:

  # find . -name \* -type f -print | xargs grep "hostnames"
 ./httpd1.conf: # different IP addresses or hostnames and have them handled by the
 ./httpd1.conf: # VirtualHost: If you want to maintain multiple domains/hostnames on your

Note that in the example above, \ is used to stop the shell from giving * its special meaning, leaving it for the find command to interpret.

Using find together with -exec and xargs lets the user run almost any command on the matched files.


Part four: more on the parameters of the find command

Below are examples of some of the most commonly used find parameters. They are useful to look up as needed; several were already used in the sections above, and you can also consult the man page for find.


1. Using the -name option

The file name option is the most commonly used option of find, either on its own or together with other options.

You can use a file name pattern to match files; remember to quote the file name pattern.

No matter what the current path is, if you want to find files matching *.txt in your home directory $HOME, use ~ as the pathname parameter; the tilde ~ represents your $HOME directory:

  $ find ~ -name "*.txt" -print

To find all *.txt files in the current directory and its subdirectories:

  $ find . -name "*.txt" -print

To find files in the current directory and subdirectories whose names begin with a capital letter:

  $ find . -name "[A-Z]*" -print

To find files in the /etc directory whose names begin with host:

  $ find /etc -name "host*" -print

To find all files in your $HOME directory:

  $ find ~ -name "*" -print   or   find . -print

To put the system under a high load, find all files starting from the root directory:

  $ find / -name "*" -print

If you want to find files in the current directory whose names consist of two lowercase letters followed by two digits and then .txt, the following command would return a file named ax37.txt, for example:

  $ find . -name "[a-z][a-z][0-9][0-9].txt" -print

2. Using the -perm option

The -perm option finds files according to their permission mode. It is best to express the permission mode in octal notation.

To find files in the current directory with permissions set to 755, i.e. files their owner can read, write, and execute and other users can read and execute:

  $ find . -perm 755 -print

There is another form of expression: put a dash in front of the octal number to match files on which all of the given permission bits are set; for example, -perm -007 relates to mode 777 the same way -perm -006 relates to 666:

  # ls -l
 -rwxrwxr-x   2 sam  adm       0  10 31 01:01  http3.conf
 -rw-rw-rw-   1 sam  adm   34890  10 31 00:57  httpd1.conf
 -rwxrwxr-x   2 sam  adm       0  10 31 01:01  httpd.conf
 drw-rw-rw-   2 gem  group  4096  October 26 19:48  sam
 -rw-rw-rw-   1 root root   2792  10 31 20:19  temp

 # find . -perm 006
 # find . -perm -006
 ./sam
 ./httpd1.conf
 ./temp

-perm mode: the file permissions match mode exactly
-perm +mode: the file permissions match any of the bits in mode
-perm -mode: the file permissions match all of the bits in mode
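The difference between exact matching and all-bits matching can be demonstrated in a scratch directory with two files of known modes:

```shell
#!/bin/sh
# -perm 644 matches the mode exactly; -perm -006 matches any file where
# others have at least read and write.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
chmod 644 "$dir/a"   # others: read only
chmod 666 "$dir/b"   # others: read and write

exact=$(find "$dir" -type f -perm 644)
allbits=$(find "$dir" -type f -perm -006)
echo "exact 644:    $exact"
echo "all bits 006: $allbits"
rm -rf "$dir"
```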


3. Ignoring a directory

If you want to ignore a directory when searching for files, because you know it does not contain the files you are looking for, use the -prune option to point out the directory to skip. Be careful with -prune: if you also use the -depth option, -prune is ignored by find.

If you want to search for files under /apps, but skip the directory /apps/bin:

  $ find /apps -path "/apps/bin" -prune -o -print


How find avoids a given directory

For example, to find all files under /usr/sam that are not in the subdirectory dir1:

  find /usr/sam -path "/usr/sam/dir1" -prune -o -print
  find [-path ..] [expression]: the expression follows the path list.

-path "/usr/sam" -prune -o -print is shorthand for -path "/usr/sam" -a -prune -o -print, evaluated in order. -a and -o are short-circuit evaluated, similar to the shell’s && and ||. If -path "/usr/sam" is true, -prune is evaluated; -prune returns true, so the whole and-expression is true and -print is skipped. Otherwise -prune is not evaluated and the and-expression is false, so -print is evaluated; -print returns true, making the or-expression true. In short, -print runs only for paths where -path "/usr/sam" is false.

This combination of expressions can be written as the following pseudo code:

  if -path "/usr/sam" then
 -prune
 else
 -print

Avoiding multiple folders:

  find /usr/sam \( -path /usr/sam/dir1 -o -path /usr/sam/file1 \) -prune -o -print

The parentheses group the expressions into one combined expression.

  \ tells the shell not to give a special interpretation to the character right behind it, leaving that to the find command.

To find one particular file, add a -name option after the -o:

  # find /usr/sam \( -path /usr/sam/dir1 -o -path /usr/sam/file1 \) -prune -o -name "temp" -print
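The skip-one-directory pattern can be tried on throwaway directories; dir1 is the one being pruned, matching the example above:

```shell
#!/bin/sh
# -prune stops find from descending into dir1; the -o -print branch
# lists everything else.
dir=$(mktemp -d)
mkdir "$dir/dir1" "$dir/dir2"
touch "$dir/dir1/skipme" "$dir/dir2/keepme"

kept=$(find "$dir" -path "$dir/dir1" -prune -o -type f -print)
echo "$kept"
rm -rf "$dir"
```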


5. Using the -user and -nouser options

To find files by their owner, for example to find files in the $HOME directory owned by sam:

  $ find ~ -user sam -print

To find files in the /etc directory owned by uucp:

  $ find /etc -user uucp -print

To find files whose owner’s account has been deleted, use the -nouser option. This finds files whose owner has no valid entry in the /etc/passwd file. With -nouser there is no need to give a user name; find does the work for you.

For example, to find all such files in the /home directory:

  $ find /home -nouser -print


6. The -group and -nogroup options

For the group a file belongs to, find has options analogous to -user and -nouser. To find files under /apps belonging to the group gem:

  $ find /apps -group gem -print

To find all files that do not belong to a valid group, use the -nogroup option. The following finds such files from the root of the file system:

  $ find / -nogroup -print


7. Finding files by modification or access time

To find files by the time they were changed, use the -mtime, -atime, or -ctime options. If the system suddenly runs out of free space, it is likely that some file has grown rapidly in the meantime; the -mtime option can find it.

Use a minus sign - for files changed less than n days ago, and a plus sign + for files changed more than n days ago.

To search the system root for files changed within the last 5 days:

  $ find / -mtime -5 -print

To find files in /var/adm changed more than 3 days ago:

  $ find /var/adm -mtime +3 -print


8. Finding files newer or older than a given file

If you want to find all files changed more recently than one file but less recently than another, use the -newer option. Its general form is:

  -newer newest_file_name ! -newer oldest_file_name

Here, ! is the logical NOT symbol.

To find files changed more recently than the file httpd1.conf but less recently than the file temp:

Example: given these files

  -rw-r--r--   1 sam  adm       0  10 31 01:07  fiel
 -rw-rw-rw-   1 sam  adm   34890  10 31 00:57  httpd1.conf
 -rwxrwxr-x   2 sam  adm       0  10 31 01:01  httpd.conf
 drw-rw-rw-   2 gem  group  4096  October 26 19:48  sam
 -rw-rw-rw-   1 root root   2792  10 31 20:19  temp

 # find . -newer httpd1.conf ! -newer temp -ls
 1077669  0 -rwxrwxr-x  2 sam  adm      0  10 31 01:01  ./httpd.conf
 1077671  4 -rw-rw-rw-  1 root root  2792  10 31 20:19  ./temp
 1077673  0 -rw-r--r--  1 sam  adm      0  10 31 01:07  ./fiel

To find files changed more recently than the file temp:

  $ find . -newer temp -print
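With three files of known timestamps you can watch -newer a ! -newer b select only the middle one. The timestamps below are arbitrary:

```shell
#!/bin/sh
# old < mid < new; the expression selects files newer than "old"
# but not newer than "mid".
dir=$(mktemp -d)
touch -t 202001010000 "$dir/old"
touch -t 202006010000 "$dir/mid"
touch -t 202012010000 "$dir/new"

between=$(find "$dir" -type f -newer "$dir/old" ! -newer "$dir/mid")
echo "$between"
rm -rf "$dir"
```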


9. Using the -type option

To find all directories under the /etc directory:

  $ find /etc -type d -print

To find all files other than directories in the current directory:

  $ find . ! -type d -print

To find all symbolic link files in the /etc directory:

  $ find /etc -type l -print


10. Using the -size option

Files can be found by their length, measured either in blocks or in bytes. A length in bytes is expressed as Nc; a length measured in blocks is expressed with the number alone.

When finding files by length, the byte form is generally used, since it is easier to reason about than the block measurement.

To find files longer than 1 M bytes in the current directory:

  $ find . -size +1000000c -print

To find files larger than 10M in the current directory:

  find . -size +10000k -exec ls -ld {} \;

To find files and copy them to another place:

  find . -name "*.c" -exec cp '{}' /tmp ';'

To find files exactly 100 bytes long in the /home/apache directory:

  $ find /home/apache -size 100c -print

To find files longer than 10 blocks in the current directory (one block equals 512 bytes):

  $ find . -size +10 -print
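The byte-suffix behavior can be checked with files of known sizes in a scratch directory:

```shell
#!/bin/sh
# -size 100c matches exactly 100 bytes; -size +1000c matches larger files.
dir=$(mktemp -d)
head -c 100  /dev/zero > "$dir/exact100"
head -c 2000 /dev/zero > "$dir/big"

hundred=$(find "$dir" -type f -size 100c)
over1k=$(find "$dir" -type f -size +1000c)
echo "$hundred"
echo "$over1k"
rm -rf "$dir"
```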


11. Using the -depth option

When using find, you may want it to process the contents of each directory before the directory entry itself; the -depth option does exactly that. One reason to do so is when backing up a file system to tape with find: you want the files inside each directory written before the directory entry itself.

In the following example, find searches the whole file system from the root for a file named CON.FILE, processing each directory’s contents first:

  $ find / -name "CON.FILE" -depth -print


12. Using the -mount option

To find files without leaving the current file system (not entering other mounted file systems), use the -mount option of find.

To find files whose names end in XC, starting from the current directory and staying within its file system:

  $ find . -name "*.XC" -mount -print

IPTABLES firewall script generator websites

The following websites provide wizards that can automatically generate IPTABLES firewall scripts:

1. Bifrost - a GUI firewall management interface for iptables
http://bifrost.heimdalls.com/

2. LinWiz - Linux configuration file and scripting wizards
http://www.lowth.com/LinWiz/

3. GIPTables Firewall - an IPTABLES rules generator
http://www.giptables.org

4. Easy Firewall Generator for IPTables
http://morizot.net/firewall/gen

5. Firewall Builder
http://www.fwbuilder.org/index.html

This article comes from the “Countrymen home” blog; please keep this source attribution.

(13)Permission denied: make_sock: could not bind to address

Permission denied: make_sock: could not bind to address

 
Stopping httpd:                                            [FAILED]
Starting httpd: (13)Permission denied: make_sock: could not bind to address [::]:88
(13)Permission denied: make_sock: could not bind to address 0.0.0.0:88
no listening sockets available, shutting down
Unable to open logs
                                                           [FAILED]

In Apache, this type of error occurs when starting the service after editing the httpd.conf file to listen on a particular port number. The reason is that SELinux allows Apache to bind only to a specified list of HTTP port numbers, and the one you have given is not in that list. First check the list with:

semanage port -l|grep http

If the port number is not in the list (e.g. 4080), add it by using:

semanage port -a -t http_port_t -p tcp 4080

Now start Apache and you are done.