mod_mpmstats – IBM HTTP SERVER

mod_mpmstats

mod_mpmstats provides three capabilities:

  1. check if the web server is at or near the maximum concurrent clients setting (MaxClients for Unix and Linux; ThreadsPerChild for Windows)
  2. optionally report the IHS worker thread usage at a specified interval
  3. optionally report the plug-in module where requests are currently stalled

Often this type of monitoring has been performed with mod_status, which requires that a request be sent to the web server from a browser or script, and then be viewed or parsed to record the web server activity. mod_mpmstats can eliminate the need to constantly monitor the mod_status report to determine the current thread usage.

The ability to report the plug-in module where requests are stalled allows the administrator to quickly determine what components to check when the web server is unresponsive.

 

The mpmstats module reports on the health of the IHS server, displaying information about how its worker threads are being used.

Activation

To activate mod_mpmstats, load the module in the IHS configuration and set the monitoring interval:

LoadModule mpmstats_module modules/mod_mpmstats.so

# mod_mpmstats will write a report of thread usage every 30 seconds
ReportInterval 30

Once the server is configured, it must be restarted for the changes to take effect.

Information

Once enabled, statistics are written to the error log, one line per configured interval, as follows:

[Wed Aug 19 14:01:52 2004] [notice] mpmstats: rdy 712 bsy 312 rd 121 wr 173 ka 0 log 0 dns 0 cls 18
[Wed Aug 19 14:01:54 2004] [notice] mpmstats: rdy 718 bsy 306 rd 120 wr 169 ka 0 log 0 dns 0 cls 17
[Wed Aug 19 14:01:56 2004] [notice] mpmstats: rdy 729 bsy 295 rd 123 wr 135 ka 0 log 0 dns 0 cls 37
[Wed Aug 19 14:01:58 2004] [notice] mpmstats: rdy 759 bsy 265 rd 130 wr 67 ka 0 log 0 dns 0 cls 68
[Wed Aug 19 14:02:00 2004] [notice] mpmstats: rdy 809 bsy 215 rd 131 wr 44 ka 0 log 0 dns 0 cls 40
[Wed Aug 19 14:02:02 2004] [notice] mpmstats: rdy 781 bsy 243 rd 117 wr 106 ka 0 log 0 dns 0 cls 20
[Wed Aug 19 14:02:04 2004] [notice] mpmstats: rdy 813 bsy 211 rd 118 wr 72 ka 0 log 0 dns 0 cls 21
[Wed Aug 19 14:02:07 2004] [notice] mpmstats: rdy 685 bsy 339 rd 177 wr 122 ka 0 log 0 dns 0 cls 40

Where the fields have the following meanings:

field description
rdy (ready) number of threads initialized and ready to process client requests
bsy (busy) number of threads currently processing requests
rd (reading) number of busy threads currently reading the client request
wr (writing) number of busy threads that have read the request but are still processing it (e.g., awaiting a response from WebSphere Application Server)
ka (keepalive) number of busy threads not processing a request but waiting in case the client sends another request on the same connection
log (logging) number of busy threads writing to a log
dns (dns lookup) number of busy threads performing a DNS lookup
cls (closing) number of busy threads waiting for the ACK from the client indicating that the response has been received and the connection can be closed

Important note for IBM HTTP Server 7.0 and later users

On any platform where mod_mpmstats is supported, it is now shipped as part of the product (modules/debug/*.so) starting in IBM HTTP Server 7.0 and later. On these releases, diagnostic modules should not be downloaded or installed from this mustgather.

Supported server versions:

mod_mpmstats is provided for AIX, Linux, Solaris, HP-UX, Windows, and z/OS.

There are two levels of support, based on the capabilities of your level of IHS.

Base level of support

  • IHS 2.0.42.2 and above

With this level of support, the reports of busy conditions and the number of threads in each state can be generated. The additional module tracking capability enabled by the TrackModules directive is not available.

Module-tracking support

  • IHS 2.0.42.2 with PK01070 or later cumulative e-fix
  • IHS 2.0.47.1 with PK01070 or later cumulative e-fix
  • IHS 6.0.2 or later

In addition to the information provided with the base level of support, the report includes the name of the module in which each request is currently being processed.

The Recommended Updates for IBM HTTP Server page has links to download the latest maintenance.

IBM recommendation

Enable this module if you need to track web server thread usage over time but you don’t need extensive details such as what requests are being processed.

If your level of IHS includes module-tracking support, the module tracking feature can result in quicker diagnosis of web server hangs.

Customers with third-party modules and prior problems with unresponsive behavior need to use the module tracking feature.

Note that mod_status provides more details, including actual requests being processed, so we recommend that mod_status be enabled as well.

Installation

IBM HTTP Server 6.1 and earlier: Copy mod_mpmstats.so for your platform to the modules directory in the web server installation location (e.g., to /opt/IBMIHS/modules).

IBM HTTP Server 7.0 and later: On supported platforms, this module is provided with the product in the modules/debug/ subdirectory.

Activation

  1. Add the following directive to the end of your configuration file:
    LoadModule mpmstats_module modules/mod_mpmstats.so
    
  2. Add the ReportInterval directive if you want a report of thread usage written to the error log at intervals.
  3. Add the TrackModules directive if you want the count of requests by modules written to the error log along with the thread usage. Verify that mod_status is loaded and ExtendedStatus is set to On when this directive is used.
  4. Restart the server so that the updated configuration takes effect.

IBM HTTP Server 7.0 and later: See conf/httpd.conf.default for the example LoadModule directive referencing this module from the modules/debug/ subdirectory.
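Putting the activation steps together, the tail of httpd.conf might look as follows. This is a minimal sketch: the /opt/IBMIHS install root is an assumption, and TrackModules is optional (it requires mod_status with ExtendedStatus On, per step 3):

# append the mod_mpmstats directives (install root is an assumption)
cat >> /opt/IBMIHS/conf/httpd.conf <<'EOF'
LoadModule mpmstats_module modules/mod_mpmstats.so
ReportInterval 30
TrackModules On
EOF

# restart so the updated configuration takes effect
/opt/IBMIHS/bin/apachectl restart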

Deactivation

  1. Comment out the LoadModule, ReportInterval, and TrackModules directives added as part of the activation step.
  2. Restart the server so that the updated configuration takes effect.

Format of output and optional directives

Whenever enabled, mod_mpmstats will report to the error log when most or all of the web server threads are busy handling clients. Here is an example from Unix:

[Thu Aug 19 14:14:00 2004] [notice] mpmstats: approaching MaxClients (48/50)
[Thu Aug 19 14:14:05 2004] [error] server reached MaxClients setting, consider raising the MaxClients setting

(The second message would appear even without mod_mpmstats.)

Here is an example from Windows:

[Wed Aug 17 08:47:29 2005] [notice] mpmstats: approaching ThreadsPerChild (23/25)
[Wed Aug 17 08:47:42 2005] [warn] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting

(The second message would appear even without mod_mpmstats.)

Without this module, the web server will only report the busy condition when it actually reaches the maximum clients setting, and it will only report it once per restart of the server. mod_mpmstats will check every second, and will report the busy condition as often as once every 90 seconds.

ReportInterval

The ReportInterval directive specifies the interval in seconds between reports of the number of threads in different states. If the ReportInterval directive is not specified, reports of thread states will not be written.

Example:

# mod_mpmstats will write a report of thread usage every 30 seconds
ReportInterval 30

Here is an example section of the error log when mod_mpmstats has been configured to report the thread usage every two seconds with the ReportInterval 2 directive:

[Thu Aug 19 14:01:52 2004] [notice] mpmstats: rdy 712 bsy 312 rd 121 wr 173 ka 0 log 0 dns 0 cls 18
[Thu Aug 19 14:01:54 2004] [notice] mpmstats: rdy 718 bsy 306 rd 120 wr 169 ka 0 log 0 dns 0 cls 17
[Thu Aug 19 14:01:56 2004] [notice] mpmstats: rdy 729 bsy 295 rd 123 wr 135 ka 0 log 0 dns 0 cls 37
[Thu Aug 19 14:01:58 2004] [notice] mpmstats: rdy 759 bsy 265 rd 130 wr 67 ka 0 log 0 dns 0 cls 68
[Thu Aug 19 14:02:00 2004] [notice] mpmstats: rdy 809 bsy 215 rd 131 wr 44 ka 0 log 0 dns 0 cls 40
[Thu Aug 19 14:02:02 2004] [notice] mpmstats: rdy 781 bsy 243 rd 117 wr 106 ka 0 log 0 dns 0 cls 20
[Thu Aug 19 14:02:04 2004] [notice] mpmstats: rdy 813 bsy 211 rd 118 wr 72 ka 0 log 0 dns 0 cls 21
[Thu Aug 19 14:02:07 2004] [notice] mpmstats: rdy 685 bsy 339 rd 177 wr 122 ka 0 log 0 dns 0 cls 40
[Thu Aug 19 14:02:09 2004] [notice] mpmstats: rdy 707 bsy 317 rd 193 wr 97 ka 0 log 0 dns 0 cls 27
[Thu Aug 19 14:02:11 2004] [notice] mpmstats: rdy 731 bsy 293 rd 196 wr 39 ka 0 log 0 dns 0 cls 58
[Thu Aug 19 14:02:11 2004] [error] [client 9.27.164.15] File does not exist: /home/trawick/testihsbuild/install/htdocs/en_US/jakdsljadslkfjalkfdsasf
[Thu Aug 19 14:02:13 2004] [notice] mpmstats: rdy 747 bsy 277 rd 186 wr 71 ka 0 log 0 dns 0 cls 20
[Thu Aug 19 14:02:15 2004] [notice] mpmstats: rdy 749 bsy 275 rd 162 wr 89 ka 0 log 0 dns 0 cls 24
[Thu Aug 19 14:02:17 2004] [notice] mpmstats: rdy 764 bsy 260 rd 156 wr 83 ka 0 log 0 dns 0 cls 21
[Thu Aug 19 14:02:19 2004] [notice] mpmstats: rdy 784 bsy 240 rd 158 wr 33 ka 0 log 0 dns 0 cls 49
[Thu Aug 19 14:02:21 2004] [notice] mpmstats: rdy 788 bsy 236 rd 159 wr 59 ka 0 log 0 dns 0 cls 18
[Thu Aug 19 14:02:23 2004] [notice] mpmstats: rdy 735 bsy 289 rd 161 wr 117 ka 0 log 0 dns 0 cls 11

Note: The report will not be written when the web server is not actively processing any requests. This information does not require ExtendedStatus to be enabled.

The fields logged are described in the table below:

field description
rdy (ready) the number of web server threads started and ready to process new client connections
bsy (busy) the number of web server threads already processing a client connection
rd (reading) the number of busy web server threads currently reading the request from the client
wr (writing) the number of busy web server threads that have read the request from the client but are either processing the request (e.g., waiting on a response from WebSphere Application Server) or are writing the response back to the client
ka (keepalive) the number of busy web server threads that are not processing a request but instead are waiting to see if the client will send another request on the same connection; refer to the KeepAliveTimeout directive to decrease the amount of time that a web server thread remains in this state
log (logging) the number of busy web server threads that are writing to the access log
dns (dns lookup) the number of busy web server threads that are performing a dns lookup
cls (closing) the number of busy web server threads that are waiting for the client to acknowledge that the entire response has been received so that the connection can be closed
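Because each report is a single error-log line, the thread counts are easy to post-process with standard tools. As a small sketch (the awk field positions assume exactly the log format shown above), this prints the time of day and the busy-thread count from each report:

# extract "HH:MM:SS bsy=N" pairs from the error log
awk '/mpmstats: rdy/ { print $4, "bsy=" $11 }' error_log

The resulting pairs (e.g., "14:01:52 bsy=312") can be plotted or compared against the MaxClients setting over time.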

Here’s a comparison of displays by mod_mpmstats vs. mod_status for the same server state:

mod_mpmstats would simply write

[Thu Aug 19 15:22:43 2004] [notice] mpmstats: rdy 5 bsy 45 rd 10 wr 3 ka 0 log 0 dns 0 cls 31

The mod_status report would look like this:

Apache Server Status for b80-2.raleigh.ibm.com

Server Version: IBM_HTTP_Server/2.0.47.1-PQ90698 Apache/2.0.47 (Unix)
Server Built: Jun 28 2004 10:32:12

Current Time: Thursday, 19-Aug-2004 15:22:43 EDT
Restart Time: Thursday, 19-Aug-2004 15:22:03 EDT
Parent Server Generation: 0
Server uptime: 40 seconds
Total accesses: 372 – Total Traffic: 5.7 MB
CPU Usage: u44.02 s8.66 cu0 cs0 – 132% CPU load
9.3 requests/sec – 146.9 kB/second – 15.8 kB/request
45 requests currently being processed, 5 idle workers
CCCCCCCCRRCCRR___RWCCCCCCCCCCW_CCRCWRC_CCCCCRCCRRC..............
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................

Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process

If ExtendedStatus On were specified in httpd.conf, the actual requests being processed would be listed in the mod_status report.

The mod_status report is easier to read, but it is not in a form that can easily be saved at intervals to track thread usage over time.
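If mod_status data is wanted at intervals anyway, it can be captured in a machine-readable form. A sketch, assuming mod_status is mapped to /server-status on the local server (the ?auto form emits a parse-friendly report):

# append a machine-readable status snapshot every 30 seconds
while true; do
  curl -s "http://localhost/server-status?auto" >> status.log
  sleep 30
done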

TrackModules

The TrackModules directive is used to enable the reporting of how many requests are currently in each plug-in module. This directive has the following requirements:

  1. ReportInterval directive must be specified
  2. mod_status must be loaded and ExtendedStatus must be set to On
  3. a recent level of IHS must be used (see the support levels described above)

Example:

# mod_mpmstats will write a report of thread usage and counts by
# module every 15 seconds
ReportInterval 15
TrackModules On

Here is an example section of the error log when mod_mpmstats has been configured to report the thread and module usage every 15 seconds with the prior example:

[Sat Mar 19 07:47:12 2005] [notice] mpmstats: rdy 250 bsy 50 rd 0 wr 50 ka 0 log 0 dns 0 cls 0
[Sat Mar 19 07:47:12 2005] [notice] mpmstats: bsy: 50 in mod_cgid.c
[Sat Mar 19 07:47:27 2005] [notice] mpmstats: rdy 250 bsy 50 rd 0 wr 45 ka 5 log 0 dns 0 cls 0
[Sat Mar 19 07:47:27 2005] [notice] mpmstats: bsy: 45 in mod_cgid.c
[Sat Mar 19 07:47:42 2005] [notice] mpmstats: rdy 250 bsy 50 rd 0 wr 44 ka 6 log 0 dns 0 cls 0
[Sat Mar 19 07:47:42 2005] [notice] mpmstats: bsy: 44 in mod_cgid.c
[Sat Mar 19 07:47:57 2005] [notice] mpmstats: rdy 250 bsy 50 rd 0 wr 44 ka 6 log 0 dns 0 cls 0
[Sat Mar 19 07:47:57 2005] [notice] mpmstats: bsy: 44 in mod_cgid.c

In this example, the predominant type of request is a CGI request, which is handled by mod_cgid. Usually, only a subset of active threads will be attributed to a particular plug-in module. Threads not attributed to a plug-in module are busy performing I/O with the client or are handling basic request processing.

Interpreting the module counts

When web server requests are handled with reasonable response time, the module counts are not so important, and will reflect the types of processing which is being performed. If there is a meaningful number of CGI requests, then some number of IHS threads will typically be busy in mod_cgid.c. If there is a meaningful number of WebSphere requests, then some number of IHS threads will typically be busy in mod_was_ap20_http.c. If SiteMinder is handling authentication, then some number of IHS threads will typically be busy in mod_sm.c.

When response time starts to deteriorate or the web server stops responding altogether, the module counts are more important. Check the reports of module counts for the time leading up to the deteriorated or unresponsive behavior. If a particular module becomes the predominant module where IHS threads are busy, that is the likely cause of the response problem. For example, if a problem occurs with a WebSphere application which prevents it from responding as expected, the number of IHS threads busy in mod_was_ap20_http.c will begin increasing. If there are enough users making requests to that application, all IHS threads could eventually become busy in mod_was_ap20_http.c. Another example is with authentication requests. If SiteMinder is used and a problem occurs with a policy or LDAP server it uses, the number of IHS threads busy in mod_sm.c will begin increasing. If enough requests arrive before the server problem is resolved, all IHS threads could eventually become busy in mod_sm.c.
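The per-module lines are one line each as well, so the same error log can be scanned for a module whose count is steadily climbing. A sketch (field positions assume the exact format shown above) that prints the time, busy count, and module name:

awk '/mpmstats: bsy:/ { print $4, $9, $11 }' error_log

If the count for one module (for example mod_was_ap20_http.c) keeps growing across reports while the others stay flat, that module's backend is the first place to look.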

awk built-in variables

Variable Description
$0 the current record (the entire input line)
$1..$n the n-th field of the current record; fields are separated by FS
FS input field separator; defaults to a space
NF number of fields in the current record (i.e., the number of columns)
NR number of records read so far; for a single input this is the current line number, starting at 1
RS input record separator; defaults to a newline
OFS output field separator; defaults to a space
ORS output record separator; defaults to a newline
ARGC number of command-line arguments
ARGV array of command-line arguments
FILENAME name of the current input file
IGNORECASE if true, matches are case-insensitive (gawk)
ARGIND index in ARGV of the file currently being processed (gawk)
CONVFMT number-to-string conversion format; defaults to %.6g
ENVIRON array of environment variables
ERRNO system error message (gawk)
FIELDWIDTHS whitespace-separated list of field widths, for fixed-width input (gawk)
FNR record number within the current input file
OFMT output format for numbers; defaults to %.6g
RSTART start position of the string matched by the match() function
RLENGTH length of the string matched by the match() function
SUBSEP subscript separator for array keys; defaults to "\034"
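A quick way to see several of these variables in action (a sketch; any colon-delimited file will do):

# print file name, per-file line number, field count, and first field
awk -F: '{ print FILENAME, FNR, NF, $1 }' /etc/passwd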

IIS7 Installation Scenarios Chart

Default Server Install Components (Server Manager name: update name)
Static Content: IIS-StaticContent
Default Document: IIS-DefaultDocument
Directory Browsing: IIS-DirectoryBrowsing
HTTP Errors: IIS-HttpErrors
HTTP Logging: IIS-HttpLogging
Logging Tools: IIS-LoggingLibraries
Request Monitor: IIS-RequestMonitor
Request Filtering: IIS-RequestFiltering
Static Content Compression: IIS-HttpCompressionStatic
IIS Management Console: IIS-ManagementConsole
ASP.NET Workload Server Options (Server Manager name: update name)
Static Content: IIS-StaticContent
Default Document: IIS-DefaultDocument
Directory Browsing: IIS-DirectoryBrowsing
HTTP Errors: IIS-HttpErrors
HTTP Logging: IIS-HttpLogging
Logging Tools: IIS-LoggingLibraries
Request Monitor: IIS-RequestMonitor
Request Filtering: IIS-RequestFiltering
Static Content Compression: IIS-HttpCompressionStatic
IIS Management Console: IIS-ManagementConsole
ASP.NET: IIS-ASPNET
.NET Extensibility: IIS-NetFxExtensibility
ISAPI Filters: IIS-ISAPIFilter
ISAPI Extensions: IIS-ISAPIExtensions
Classic ASP Workload Server Options (Server Manager name: update name)
Static Content: IIS-StaticContent
Default Document: IIS-DefaultDocument
Directory Browsing: IIS-DirectoryBrowsing
HTTP Errors: IIS-HttpErrors
HTTP Logging: IIS-HttpLogging
Logging Tools: IIS-LoggingLibraries
Request Monitor: IIS-RequestMonitor
Request Filtering: IIS-RequestFiltering
Static Content Compression: IIS-HttpCompressionStatic
IIS Management Console: IIS-ManagementConsole
ASP: IIS-ASP
ISAPI Extensions: IIS-ISAPIExtensions
FastCGI Workload Server Options (Server Manager name: update name)
Static Content: IIS-StaticContent
Default Document: IIS-DefaultDocument
Directory Browsing: IIS-DirectoryBrowsing
HTTP Errors: IIS-HttpErrors
HTTP Logging: IIS-HttpLogging
Logging Tools: IIS-LoggingLibraries
Request Monitor: IIS-RequestMonitor
Request Filtering: IIS-RequestFiltering
Static Content Compression: IIS-HttpCompressionStatic
IIS Management Console: IIS-ManagementConsole
CGI: IIS-CGI
IIS Managed Modules and .NET Extensibility Server Workload (Server Manager name: update name)
Static Content: IIS-StaticContent
Default Document: IIS-DefaultDocument
Directory Browsing: IIS-DirectoryBrowsing
HTTP Errors: IIS-HttpErrors
HTTP Logging: IIS-HttpLogging
Logging Tools: IIS-LoggingLibraries
Request Monitor: IIS-RequestMonitor
Request Filtering: IIS-RequestFiltering
Static Content Compression: IIS-HttpCompressionStatic
IIS Management Console: IIS-ManagementConsole
.NET Extensibility: IIS-NetFxExtensibility
Full Server Install Components (Server Manager name: update name)
Internet Information Services: IIS-WebServerRole
World Wide Web Services: IIS-WebServer
Common HTTP Features: IIS-CommonHttpFeatures
Static Content: IIS-StaticContent
Default Document: IIS-DefaultDocument
Directory Browsing: IIS-DirectoryBrowsing
HTTP Errors: IIS-HttpErrors
HTTP Redirection: IIS-HttpRedirect
Application Development: IIS-ApplicationDevelopment
ASP.NET: IIS-ASPNET
.NET Extensibility: IIS-NetFxExtensibility
ASP: IIS-ASP
CGI: IIS-CGI
ISAPI Extensions: IIS-ISAPIExtensions
ISAPI Filters: IIS-ISAPIFilter
Server-Side Includes: IIS-ServerSideInclude
Health and Diagnostics: IIS-HealthAndDiagnostics
HTTP Logging: IIS-HttpLogging
Logging Tools: IIS-LoggingLibraries
Request Monitor: IIS-RequestMonitor
Tracing: IIS-HttpTracing
Custom Logging: IIS-CustomLogging
ODBC Logging: IIS-ODBCLogging
Security: IIS-Security
Basic Authentication: IIS-BasicAuthentication
Windows Authentication: IIS-WindowsAuthentication
Digest Authentication: IIS-DigestAuthentication
Client Certificate Mapping Authentication: IIS-ClientCertificateMappingAuthentication
IIS Client Certificate Mapping Authentication: IIS-IISCertificateMappingAuthentication
URL Authorization: IIS-URLAuthorization
Request Filtering: IIS-RequestFiltering
IP and Domain Restrictions: IIS-IPSecurity
Performance: IIS-Performance
Static Content Compression: IIS-HttpCompressionStatic
Dynamic Content Compression: IIS-HttpCompressionDynamic
Management Tools: IIS-WebServerManagementTools
IIS Management Console: IIS-ManagementConsole
IIS Management Scripts and Tools: IIS-ManagementScriptingTools
Management Service: IIS-ManagementService
IIS 6 Management Compatibility: IIS-IIS6ManagementCompatibility
IIS Metabase Compatibility: IIS-Metabase
IIS 6 WMI Compatibility: IIS-WMICompatibility
IIS 6 Scripting Tools: IIS-LegacyScripts
IIS 6 Management Console: IIS-LegacySnapin
FTP Publishing Service: IIS-FTPPublishingService
FTP Server: IIS-FTPServer
FTP Management Console: IIS-FTPManagement
Windows Process Activation Service: WAS-WindowsActivationService
Process Model: WAS-ProcessModel
.NET Environment: WAS-NetFxEnvironment
Configuration APIs: WAS-ConfigurationAPI
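These update names are the feature identifiers accepted by the Windows command-line package installer, so a scenario can be installed unattended. A sketch for the default install set (run from an elevated command prompt); pkgmgr does not install parent features automatically, so the parents from the full table are listed too:

start /w pkgmgr /iu:IIS-WebServerRole;IIS-WebServer;IIS-CommonHttpFeatures;IIS-StaticContent;IIS-DefaultDocument;IIS-DirectoryBrowsing;IIS-HttpErrors;IIS-HttpLogging;IIS-LoggingLibraries;IIS-RequestMonitor;IIS-RequestFiltering;IIS-HttpCompressionStatic;IIS-ManagementConsole;WAS-WindowsActivationService;WAS-ProcessModel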

 

sed example

File spacing:
--------

# double-space a file (add a blank line after each line)
sed G

# double-space a file which already has blank lines in it; delete the
# original blank lines first, so the output has no more than one blank
# line between lines of text
sed '/^$/d;G'

# triple-space a file (add two blank lines after each line)
sed 'G;G'

# undo double-spacing: delete all the blank lines created by the first
# script (i.e., delete all even-numbered lines)
sed 'n;d'

# insert a blank line above every line which matches "regex"
sed '/regex/{x;p;x;}'

# insert a blank line below every line which matches "regex"
sed '/regex/G'

# insert a blank line above and below every line which matches "regex"
sed '/regex/{x;p;x;G;}'

Numbering:
--------

# number each line of a file (simple left alignment). Here a "tab" is
# used instead of spaces to align the margin (see the note on '\t' at
# the end of this section).
sed = filename | sed 'N;s/\n/\t/'

# number each line of a file (number on left, right-aligned)
sed = filename | sed 'N; s/^/     /; s/ *\(.\{6,\}\)\n/\1  /'

# number each line of a file, printing the number only on non-blank lines
sed '/./=' filename | sed '/./N; s/\n/ /'

# count the number of lines (emulates "wc -l")
sed -n '$='

Text conversion and substitution:
--------

# in a Unix environment: convert DOS newlines (CR/LF) to Unix format
sed 's/.$//'               # assumes that all lines end with CR/LF
sed 's/^M$//'              # in bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//'            # works on ssed, gsed 3.02.80 or higher

# in a Unix environment: convert Unix newlines (LF) to DOS format
sed "s/$/`echo -e \\\r`/"        # command line under ksh
sed 's/$'"/`echo \\\r`/"         # command line under bash
sed "s/$/`echo \\\r`/"           # command line under zsh
sed 's/$/\r/'                    # gsed 3.02.80 or higher

# in a DOS environment: convert Unix newlines (LF) to DOS format
sed "s/$//"                      # method 1
sed -n p                         # method 2

# in a DOS environment: convert DOS newlines (CR/LF) to Unix format.
# The following script is only effective with UnxUtils sed 4.0.7 or
# higher. A UnxUtils sed can be identified by its unique "--text"
# option; use the help option ("--help") and look for "--text" to tell
# whether it is the UnxUtils version. Other DOS versions of sed cannot
# do this conversion, but "tr" can.
sed "s/\r//" infile >outfile     # UnxUtils sed v4.0.7 or higher
tr -d \r <infile >outfile        # GNU tr version 1.22 or higher

# delete leading whitespace (spaces, tabs) from the front of each line
# (aligns text flush left)
sed 's/^[ \t]*//'                # see the note on '\t' usage

# delete trailing whitespace (spaces, tabs) from the end of each line
sed 's/[ \t]*$//'                # see the note on '\t' usage

# delete BOTH leading and trailing whitespace from each line
sed 's/^[ \t]*//;s/[ \t]*$//'

# insert 5 blank spaces at the beginning of each line
# (moves the whole text 5 characters to the right)
sed 's/^/     /'

# align all text flush right on a 79-column width
sed -e :a -e 's/^.\{1,78\}$/ &/;ta'   # 78 characters plus a final space

# center all text in the middle of a 79-column width. In method 1,
# blanks are padded both in front of and behind each line of text. In
# method 2, blanks are padded only in front of the text while centering,
# and half of them are deleted at the end; no blanks are appended to
# any line.
sed -e :a -e 's/^.\{1,77\}$/ & /;ta'                       # method 1
sed -e :a -e 's/^.\{1,77\}$/ &/;ta' -e 's/\( *\)\1/\1/'    # method 2

# find the string "foo" on each line and replace it with "bar"
sed 's/foo/bar/'            # replaces only the 1st instance on each line
sed 's/foo/bar/4'           # replaces only the 4th instance on each line
sed 's/foo/bar/g'           # replaces ALL instances of "foo" with "bar"
sed 's/\(.*\)foo\(.*foo\)/\1bar\2/'   # replace the next-to-last "foo"
sed 's/\(.*\)foo/\1bar/'              # replace only the last "foo"

# replace "foo" with "bar" only on lines which contain the string "baz"
sed '/baz/s/foo/bar/g'

# replace "foo" with "bar" only on lines which do NOT contain "baz"
sed '/baz/!s/foo/bar/g'

# change "scarlet", "ruby" or "puce" to "red"
sed 's/scarlet/red/g;s/ruby/red/g;s/puce/red/g'   # works on most seds
gsed 's/scarlet\|ruby\|puce/red/g'                # GNU sed only

# reverse the order of all lines, making the first line the last, and
# so on (emulates "tac"). For some reason, HHsed v1.5 deletes blank
# lines in the file with these commands.
sed '1!G;h;$!d'       # method 1
sed -n '1!G;h;$p'     # method 2

# reverse each character on the line, making the first word the last,
# and so on (emulates "rev")
sed '/\n/!G;s/\(.\)\(.*\n\)/&\2\1/;//D;s/.//'

# join pairs of lines side-by-side (like "paste")
sed '$!N;s/\n/ /'

# if a line ends with a backslash "\", append the next line to it and
# delete the original trailing backslash
sed -e :a -e '/\\$/N; s/\\\n//; ta'

# if a line begins with an equal sign, append it to the previous line
# and replace the leading "=" with a single space
sed -e :a -e '$!N;s/\n=/ /;ta' -e 'P;D'

# add commas to number strings, changing "1234567" to "1,234,567"
gsed ':a;s/\B[0-9]\{3\}\>/,&/;ta'                       # GNU sed
sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'    # other seds

# add commas to numbers with decimal points and minus signs (GNU sed)
gsed -r ':a;s/(^|[^0-9.])([0-9]+)([0-9]{3})/\1\2,\3/g;ta'

# add a blank line every 5 lines (after lines 5, 10, 15, 20, etc.)
gsed '0~5G'         # GNU sed only
sed 'n;n;n;n;G;'    # other seds

Selectively printing certain lines:
--------

# print the first 10 lines of a file (emulates the behavior of "head")
sed 10q

# print the first line of a file (emulates "head -1")
sed q

# print the last 10 lines of a file (emulates "tail")
sed -e :a -e '$q;N;11,$D;ba'

# print the last 2 lines of a file (emulates "tail -2")
sed '$!N;$!D'

# print the last line of a file (emulates "tail -1")
sed '$!d'       # method 1
sed -n '$p'     # method 2

# print the next-to-last line of a file
sed -e '$!{h;d;}' -e x               # for 1-line files, print a blank line
sed -e '1{$q;}' -e '$!{h;d;}' -e x   # for 1-line files, print the line
sed -e '1{$d;}' -e '$!{h;d;}' -e x   # for 1-line files, print nothing

# print only lines which match a regular expression (emulates "grep")
sed -n '/regexp/p'      # method 1
sed '/regexp/!d'        # method 2

# print only lines which do NOT match a regular expression (emulates "grep -v")
sed -n '/regexp/!p'     # method 1, corresponds to the previous command
sed '/regexp/d'         # method 2, similar syntax

# print the line immediately before a line matching "regexp", but not
# the matching line itself
sed -n '/regexp/{g;1!p;};h'

# print the line immediately after a line matching "regexp", but not
# the matching line itself
sed -n '/regexp/{n;p;}'

# print one line of context before and after a line matching "regexp",
# with the matching line's number printed on the line before it
# (similar to "grep -A1 -B1")
sed -n -e '/regexp/{=;x;1!p;g;$!N;p;D;}' -e h

# print lines containing "AAA", "BBB" and "CCC" (in any order)
sed '/AAA/!d; /BBB/!d; /CCC/!d'    # string order does not affect results

# print lines containing "AAA", "BBB" and "CCC" (in that fixed order)
sed '/AAA.*BBB.*CCC/!d'

# print lines containing "AAA", "BBB" or "CCC" (emulates "egrep")
sed -e '/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d    # most seds
gsed '/AAA\|BBB\|CCC/!d'                        # GNU sed only

# print the paragraph containing "AAA" (paragraphs are separated by
# blank lines). HHsed v1.5 must insert a "G;" after "x;" here; the next
# three scripts are like this as well.
sed -e '/./{H;$!d;}' -e 'x;/AAA/!d;'

# print the paragraph containing all of "AAA", "BBB" and "CCC" (in any order)
sed -e '/./{H;$!d;}' -e 'x;/AAA/!d;/BBB/!d;/CCC/!d'

# print the paragraph containing any of "AAA", "BBB" or "CCC" (in any order)
sed -e '/./{H;$!d;}' -e 'x;/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d
gsed '/./{H;$!d;};x;/AAA\|BBB\|CCC/b;d'         # GNU sed only

# print lines of 65 characters or longer
sed -n '/^.\{65\}/p'

# print lines shorter than 65 characters
sed -n '/^.\{65\}/!p'    # method 1, corresponds to the above script
sed '/^.\{65\}/d'        # method 2, a slightly simpler way

# print a section of a file, from the line matching a regular expression
# to the end of the file
sed -n '/regexp/,$p'

# print a section of a file by line-number range (lines 8-12, inclusive)
sed -n '8,12p'    # method 1
sed '8,12!d'      # method 2

# print line number 52
sed -n '52p'      # method 1
sed '52!d'        # method 2
sed '52q;d'       # method 3, more efficient on large files

# beginning at line 3, print every 7th line
gsed -n '3~7p'                    # GNU sed only
sed -n '3,${p;n;n;n;n;n;n;}'      # other seds

# print the section of a file between two regular expressions (inclusive)
sed -n '/Iowa/,/Montana/p'        # case-sensitive

Selectively deleting certain lines:
--------

# print all of the file EXCEPT the section between two regular expressions
sed '/Iowa/,/Montana/d'

# delete duplicate, consecutive lines from a file (emulates "uniq");
# only the first line in a run of duplicates is kept, the rest are deleted
sed '$!N; /^\(.*\)\n\1$/!P; D'

# delete duplicate lines from a file, whether consecutive or not. Beware
# of the buffer size of the hold space, or use GNU sed.
sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'

# delete all lines except duplicate consecutive lines (emulates "uniq -d")
sed '$!N; s/^\(.*\)\n\1$/\1/; t; D'

# delete the first 10 lines of a file
sed '1,10d'

# delete the last line of a file
sed '$d'

# delete the last 2 lines of a file
sed 'N;$!P;$!D;$d'

# delete the last 10 lines of a file
sed -e :a -e '$d;N;2,10ba' -e 'P;D'      # method 1
sed -n -e :a -e '1,10!{P;N;D;};N;ba'     # method 2

# delete every 8th line
gsed '0~8d'               # GNU sed only
sed 'n;n;n;n;n;n;n;d;'    # other seds

# delete lines matching a pattern
sed '/pattern/d'          # deletes lines containing the pattern; of
                          # course, "pattern" can be any valid regex

# delete ALL blank lines in a file (same effect as "grep '.'")
sed '/^$/d'    # method 1
sed '/./!d'    # method 2

# squeeze runs of consecutive blank lines down to a single blank line;
# also delete blank lines at the top and end of the file (emulates "cat -s")
sed '/./,/^$/!d'      # method 1, allows 0 blanks at top, 1 at EOF
sed '/^$/N;/\n$/D'    # method 2, allows 1 blank at top, 0 at EOF

# squeeze runs of consecutive blank lines down to at most two blank lines
sed '/^$/N;/\n$/N;//D'

# delete all leading blank lines at the top of a file
sed '/./,$!d'

# delete all trailing blank lines at the end of a file
sed -e :a -e '/^\n*$/{$d;N;ba' -e '}'    # works on all seds
sed -e :a -e '/^\n*$/N;/\n$/ba'          # same, but only for gsed 3.02.*

# delete the last line of each paragraph
sed -n '/^$/{p;h;};/./{x;/./p;}'

Special applications:
--------

# remove nroff overstrikes (character, backspace) from man pages. The
# 'echo' command may need the -e option under Unix System V or bash.
sed "s/.`echo \\\b`//g"    # the outer double quotes are required (Unix)
sed 's/.^H//g'             # in bash or tcsh, press Ctrl-V then Ctrl-H
sed 's/.\x08//g'           # hex expression for sed 1.5, GNU sed, ssed

# extract the header of a newsgroup or e-mail message
sed '/^$/q'                # deletes everything after the first blank line

# extract the body of a newsgroup or e-mail message
sed '1,/^$/d'              # deletes everything up to the first blank line

# extract the "Subject" field from the message header, removing the
# initial "Subject:" portion
sed '/^Subject: */!d; s///;q'

# extract the return address from the message header
sed '/^Reply-To:/q; /^From:/h; /./d;g;q'

# parse out the e-mail address proper from the one-line return address
# produced by the preceding script, stripping the non-address portions
sed 's/ *(.*)//; s/>.*//; s/.*[:<] *//'

# add a leading angle bracket and space to each line (quote a message)
sed 's/^/> /'

# delete the leading angle bracket and space from each line (unquote a message)
sed 's/^> //'

# remove most HTML tags (accommodates multiple-line tags)
sed -e :a -e 's/<[^>]*>//g;/</N;//ba'

# DOS environment: zip up each .TXT file individually, naming each .ZIP
# file after the basename of its .TXT file ("dir /b" returns bare
# filenames in all caps)
echo @echo off >zipup.bat
dir /b *.txt | sed "s/^\(.*\)\.TXT/pkzip -mo \1 \1.TXT/" >>zipup.bat

USING SED: Sed takes one or more editing commands and applies all of them, in sequence, to each line of input. After the first input line is read, sed applies all the commands to it and outputs the result; it then reads the second line of input, applies all the commands to that, and repeats the process. In the examples above, sed gets its input from the standard input device (i.e., the command interpreter, usually in the form of a pipe). When one or more file names are given on the command line, those files replace standard input as sed's input. Sed's output is sent to standard output (the monitor). Thus:

cat filename | sed '10q'        # uses piped input
sed '10q' filename              # same effect, but without the pipe
sed '10q' filename > newfile    # redirects the output to disk

For instructions on using sed commands, including how to run them from a script file (rather than from the command line), see "sed & awk, 2nd Edition" by Dale Dougherty and Arnold Robbins (O'Reilly, 1997; http://www.ora.com), "UNIX Text Processing" by Dale Dougherty and Tim O'Reilly (Hayden Books, 1987), or the tutorials by Mike Arst, distributed in an archive named "U-SEDIT2.ZIP" (available on many sites). To explore the full potential of sed, a sufficient understanding of "regular expressions" is essential. For information on regular expressions, see "Mastering Regular Expressions" by Jeffrey Friedl (O'Reilly, 1997). The manual ("man") pages on Unix systems may also be helpful (try "man sed", "man regexp", or the section on regular expressions in "man ed"), but manual pages are famously "abstract" - this has always been a criticism of them. They are not meant to teach beginners how to use sed or regular expressions, but rather serve as reference texts for those already familiar with these tools.

QUOTING SYNTAX: The preceding examples of sed commands use single quotes ('...') rather than double quotes ("...") because sed is typically used on Unix platforms. Inside single quotes, the Unix shell (command interpreter) does not interpret or execute the dollar sign ($) or backquotes (`...`). Inside double quotes, the dollar sign is expanded to the value of a variable or parameter, and the contents of backquotes are executed as a command and replaced by the command's output. In the "csh" shell and its derivatives, an exclamation point (!) must be escaped with a preceding backslash (like this: \!) for the examples above to work correctly, even inside single quotes. DOS versions of sed always enclose commands in double quotes ("...") instead of single quotes.
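A minimal illustration of the difference under a Unix shell (a sketch; $USER is just a convenient shell variable):

echo hello | sed "s/hello/$USER/"    # double quotes: the shell expands $USER before sed runs
echo hello | sed 's/hello/$USER/'    # single quotes: sed sees the literal string $USER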

USE OF '\t': For clarity in this documentation, the scripts use '\t' to indicate a tab character. However, most versions of sed do not recognize the '\t' shorthand, so when typing these scripts at the command line, press the TAB key to enter a tab instead of typing '\t'. The following tools do support '\t' as a regular-expression metacharacter for a tab: awk, perl, HHsed, sedmod, and GNU sed v3.02.80.

VERSIONS OF SED: Versions of sed do differ, and some slight variation in syntax between them is to be expected. In particular, most of them do not support the use of labels (:name) or branch commands (b, t) inside editing commands, except at the end of those commands. This document tries to use highly portable syntax, so that users of most versions of sed can use these scripts. The GNU sed version, however, allows a more succinct syntax. Imagine how readers feel when they see this very long command:

sed -e '/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d

The good news is that GNU sed lets the command be written more compactly:

sed '/AAA/b;/BBB/b;/CCC/b;d'    # or even
sed '/AAA\|BBB\|CCC/b;d'

Also note that although many versions of sed accept a command with a space before the 's', such as "/one/ s/RE1/RE2/", some of them do not accept the negated form "/one/! s/RE1/RE2/". In that case, just remove the space in the middle of the command.

OPTIMIZING FOR SPEED: When execution speed needs to be improved for some reason (a large input file, a slow processor or hard disk), substitution will execute more quickly if the "find" expression is given as an address before the substitution command ("s/.../.../"). For example:

sed 's/foo/bar/g' filename         # standard replace command
sed '/foo/ s/foo/bar/g' filename   # executes more quickly
sed '/foo/ s//bar/g' filename      # shorthand sed syntax

When you only need to print the front part of a file, or to delete the back part, a "quit" command (q) in the script will drastically cut processing time on large files. Thus:

sed -n '45,50p' filename       # print lines 45-50 of a file
sed -n '51q;45,50p' filename   # same, but much faster

If you have another one-line script to share, or if you find an error in this document, please send e-mail to the compiler of this document (Eric Pement). Remember to state the version of sed you are using, the operating system, and an appropriate description of the problem. A one-line script as referred to here is a sed script whose command-line length is 65 characters or less.

TOMCAT SERVER

Reasons for Using a Web Server such as Apache
You may be wondering why a separate Web server is needed when Tomcat already has an HTTP connector. Following are some reasons:

  • Performance: Tomcat is inherently slower than a Web server. Therefore, it is better for the Web server to serve up static content while Tomcat handles the dynamic content (JSPs and Servlets). Passing requests for static HTML pages, images, and style sheets through a Servlet container written in Java is not as efficient as serving them from a Web server.
  • Security: A Web server such as Apache has been around much longer than Tomcat and has far fewer security holes.
  • Stability: Apache is much more stable than Tomcat. In the event of a Tomcat crash, the entire Web site will not come down; only the dynamic content served by Tomcat would be unavailable.
  • Configurability: Apache is also far more configurable than Tomcat. Using Apache as a front end enables you to take advantage of its rich functionality.
  • Legacy support: Web sites often have legacy code in the form of CGI programs. They might also use scripting languages (such as Perl or Python) to implement specific functionality. Web servers such as Apache have modules for Perl and Python, whereas Tomcat does not. Tomcat does have limited support for CGI, however, using a special CGIServlet that mimics CGI functionality.
___________________________________________________________________________________________________
TOMCAT INSTALLATION:
Download apache-tomcat-7.0.28.tar.gz and jdk-7u3-linux-i586.tar.gz
# tar -xvzf apache-tomcat-7.0.28.tar.gz -C /opt
# tar -xvzf jdk-7u3-linux-i586.tar.gz -C /opt
# vi .bash_profile
export JAVA_HOME=/opt/jdk1.7.0_03
export CATALINA_HOME=/opt/apache-tomcat-7.0.28
# source .bash_profile
# echo $JAVA_HOME; echo $CATALINA_HOME
/opt/jdk1.7.0_03
/opt/apache-tomcat-7.0.28
--> http://localhost:8080/
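
To confirm the instance is listening (a quick check, assuming curl is installed):

curl -I http://localhost:8080/
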
Apache & Tomcat with mod_jk
# tar -xvzf tomcat-connectors-1.2.37-src.tar.gz -C /opt/
# cd /opt/tomcat-connectors-1.2.37-src/native
# ./configure --with-apxs=/opt/apache2/bin/apxs
# make
# make install
# ls -l /opt/apache2/modules --> check for the mod_jk.so module in Apache
# vi /opt/apache2/conf/workers.properties
worker.list=node1
worker.node1.port=8009
worker.node1.host=localhost
worker.node1.type=ajp13
worker.node1.lbfactor=1

# vi /opt/apache2/conf/httpd.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties

Listen 83
<VirtualHost *:83>
ServerName test.sapient.com
DocumentRoot /opt/apache2/htdocs/test
JkLogFile logs/mod_jk.log
JkLogLevel error
JkMount /* node1
</VirtualHost>

# /opt/apache-tomcat-7.0.28/bin/startup.sh

User hits http://test.sapient.com:83/ --> we can see the Tomcat welcome page


Multiple instances of Tomcat
# cp -r apache-tomcat-7.0.28 apache-tomcat-7.0.28_1
# vi /opt/apache-tomcat-7.0.28_1/conf/server.xml
Each copy needs its own ports; for example, leave apache-tomcat-7.0.28 (server.xml) on the defaults (shutdown 8005, HTTP 8080, AJP 8009) and move apache-tomcat-7.0.28_1 (server.xml) to shutdown 8006, HTTP 8081, AJP 8010.

Then give each instance its own virtual host in httpd.conf, for example:

Listen 84
<VirtualHost *:83>
ServerName test.sapient.com
DocumentRoot /opt/apache2/htdocs/test
JkLogFile logs/mod_jk.log
JkLogLevel error
JkMount /* node1
</VirtualHost>

<VirtualHost *:84>
ServerName test.sapient.com
DocumentRoot /opt/apache2/htdocs/
JkLogFile logs/mod_jk.log
JkLogLevel error
JkMount /* node2
</VirtualHost>

# /opt/apache2/bin/apachectl -k restart
http://test.sapient.com:83/ --> connects to tomcat_1
http://test.sapient.com:84/ --> connects to tomcat_2
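
For the node2 mapping to work, workers.properties must also define the second worker. A minimal sketch, assuming the second instance's AJP connector was moved to port 8010 as in the example above:

# add node2 to the existing worker list
worker.list=node1,node2
# second Tomcat instance, listening on its own AJP port
worker.node2.port=8010
worker.node2.host=localhost
worker.node2.type=ajp13
worker.node2.lbfactor=1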

Workers Properties File
worker.list=tomcat

worker.reference.socket_timeout=10000
worker.reference.socket_keepalive=true
worker.reference.connect_timeout=5000
worker.reference.connection_pool_size=30
worker.reference.cachesize=1
worker.template.prepost_timeout=10000
worker.template.connect_timeout=10000
worker.template.connection_pool_size=30
worker.template.socket_timeout=10
worker.template.retries=20

# Define App1
# modify the host as your host IP or DNS name.

worker.tomcat.reference=worker.reference
# this value must be an AJP port number
worker.tomcat.port=8009
worker.tomcat.host=172.17.100.213
worker.tomcat.type=ajp13
worker.tomcat.lbfactor=1
#worker.tomcat.cachesize=10
worker.tomcat.connection_pool_timeout=20

Connection directives:
host=localhost             # hostname or IP address of the app server
port=8009                  # AJP port on the app server
socket_timeout=0           # socket timeout (in seconds) used between JK and the remote host;
                           # if the remote host does not respond within this time, JK generates
                           # an error and retries. If set to zero (the default), JK waits
                           # indefinitely on all socket operations.
socket_connect_timeout=socket_timeout*1000   # (in milliseconds)
socket_keepalive=False
____________________________________________________________________________________________________
Virtual Hosting in Tomcat
[root@test tomcat_1]# cp -r webapps host1
[root@test tomcat_1]# cp -r webapps host2
# vi /opt/tomcat_1/conf/server.xml
Add one Host element per site inside the Engine, for example:

<Host name="www.host1.com" appBase="host1" unpackWARs="true" autoDeploy="true"/>
<Host name="www.host2.com" appBase="host2" unpackWARs="true" autoDeploy="true"/>

# vi /opt/tomcat_1/host1/ROOT/host1.html
This is virtual host1
# vi /opt/tomcat_1/host2/ROOT/host2.html
This is virtual host2

# vi /etc/hosts
192.168.10.102 www.host1.com www
192.168.10.102 www.host2.com www

http://www.host1.com:8080/host1.html
http://www.host2.com:8080/host2.html
____________________________________________________________________________________________________
JNDI: The Java Naming and Directory Interface (JNDI) is an application programming interface (API) for accessing different kinds of naming services (such as CORBA, Java RMI, and EJB) and directory services (such as LDAP and NIS+).
The most common use case is to set up a database connection pool on a Java EE application server.
Difference between EAR, WAR, and JAR files
An EAR is an Enterprise Application archive and may contain EJB JAR files, WAR files, and RAR (connector) files. It may also contain third-party libraries, but you have to know how to manipulate the Java extension facilities (e.g., the MANIFEST.MF Class-Path directive) to make that work well.

A WAR is a Web Application archive and contains JSPs, "normal" HTTP-served files (HTML, images, etc.), servlets, tag libraries, and such.
A JAR is the "normal" Java archive; in this context it usually contains EJBs rather than code libraries or runnable (e.g., from outside an application container) applications.
JAR: Java Archive; it aggregates many files into one and usually holds Java classes in a lib.
WAR: Web Application Archive; it stores XML, Java classes, and JSPs for a Web application.
EAR: Enterprise Archive; it combines JAR and WAR files into one combined archive.
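All three formats are ordinary ZIP archives, so their contents can be listed with the jar tool (the file name here is just an example):

jar tf mywebapp.war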

JDBC connection pooling (DBCP)
1. MySQL configuration
mysql> GRANT ALL PRIVILEGES ON *.* TO javauser@localhost IDENTIFIED BY 'javadude' WITH GRANT OPTION;
mysql> create database javatest;
mysql> use javatest;
mysql> create table testdata (id int not null auto_increment primary key, foo varchar(25), bar int);
Note: the above user should be removed once testing is complete!
mysql> insert into testdata values(null, 'hello', 12345);
mysql> select * from testdata;
+----+-------+-------+
| ID | FOO   | BAR   |
+----+-------+-------+
|  1 | hello | 12345 |
+----+-------+-------+
mysql>
2. Context configuration: configure the JNDI DataSource in the Context.
maxActive: maximum number of database connections in the pool. Set to -1 for no limit.
maxIdle: maximum number of idle database connections to retain in the pool. Set to -1 for no limit.
maxWait: maximum time in ms to wait for a database connection to become available, e.g. 10000 for 10 seconds. An exception is thrown if this timeout is exceeded. Set to -1 to wait indefinitely.
# vi server.xml
Define the Resource inside the application's Context element, following the standard Tomcat DBCP example:

<Context>
  <Resource name="jdbc/TestDB" auth="Container" type="javax.sql.DataSource"
            maxActive="100" maxIdle="30" maxWait="10000"
            username="javauser" password="javadude"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/javatest"/>
</Context>
3. web.xml configuration
Now create a WEB-INF/web.xml for this test application, declaring a resource reference for the DataSource:

<web-app>
  <description>MySQL Test App</description>
  <resource-ref>
    <description>DB Connection</description>
    <res-ref-name>jdbc/TestDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
  </resource-ref>
</web-app>
4. Now create a simple test.jsp page for use later:

<%@ taglib uri="http://java.sun.com/jsp/jstl/sql" prefix="sql" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>

<sql:query var="rs" dataSource="jdbc/TestDB">
select id, foo, bar from testdata
</sql:query>

<html>
<head><title>DB Test</title></head>
<body>
<h2>Results</h2>
<c:forEach var="row" items="${rs.rows}">
  Foo ${row.foo}<br/>
  Bar ${row.bar}<br/>
</c:forEach>
</body>
</html>
____________________________________________________________________________________________________
Tomcat follow symbolic links
Set allowLinking="true" on the Context, for example:
# vi context.xml
<Context allowLinking="true">
</Context>

Allow or deny access to a virtual host in Tomcat
Use a valve to filter by IP or hostname so that only a subset of machines can connect, for example:
# vi server.xml
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192\.168\.10\.*"/>

Log path setting in Tomcat
# vi server.xml
An AccessLogValve on the Host sets the log directory and file names, for example:
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt" pattern="common"/>

Response time logging in Tomcat
Add %D (time taken to process the request, in milliseconds) to the AccessLogValve pattern:
pattern="%h %l %u %t &quot;%r&quot; %s %b %D"
Turn on Servlet Reloading
# vim $CATALINA_HOME/conf/server.xml
Add reloadable="true" to the application's Context element below the Host, for example:
<Context path="/myapp" docBase="myapp" reloadable="true"/>
Implement custom error pages
# vi web.xml
<error-page>
  <error-code>404</error-code>
  <location>/error/404.html</location>
</error-page>
To allow directory browsing via Apache Tomcat
change the parameter "listings" in the file conf/web.xml from false to true.
# vim $CATALINA_HOME/conf/web.xml
search with /listings

<servlet>
  <servlet-name>default</servlet-name>
  <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
  <init-param>
    <param-name>listings</param-name>
    <param-value>true</param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>

Note: To secure directory listings --> the listings value should be false

Session Timeout Configuration
# vi web.xml
<session-config>
  <session-timeout>30</session-timeout>
</session-config>
____________________________________________________________________________________________________
Tomcat Logging
Using logging.properties ( To debug more logs)
# vi logging.properties
org.apache.catalina.core.ContainerBase.[Catalina].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].handlers = java.util.logging.ConsoleHandler

Using Log4j: Tomcat 6.0 uses Commons Logging throughout its internal code, allowing the developer to choose a logging configuration. If we want to collect more detailed logging from Tomcat, we need to configure an external logging API such as log4j.
Create a file called log4j.properties with the following content and save it into $CATALINA_BASE/lib
# set the level to DEBUG for more detail
log4j.rootLogger=INFO, CATALINA
# Define all the appenders
log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.MaxFileSize=10MB

log4j.appender.CATALINA.MaxBackupIndex=10
log4j.appender.CATALINA.Encoding=UTF-8
# Roll-over the log once per day
log4j.appender.CATALINA.DatePattern=’.’yyyy-MM-dd’.log’
log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

# same for
log4j.appender.LOCALHOST=org.apache.log4j.DailyRollingFileAppender
……
log4j.appender.MANAGER=org.apache.log4j.DailyRollingFileAppender
……
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Encoding=UTF-8
log4j.appender.CONSOLE.layout = org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

# Configure which loggers log to which appenders
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost]=INFO, LOCALHOST
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager]=\
INFO, MANAGER
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager]=\
INFO, HOST-MANAGER
2. Download Log4j (v1.2 or later).
3. Download or build tomcat-juli.jar and tomcat-juli-adapters.jar that are available as an “extras” component for Tomcat. See Additional Components documentation for details.
4. This tomcat-juli.jar differs from the default one. It contains the full Apache Commons Logging implementation and thus is able to discover the presence of log4j and configure itself.
5. Put log4j.jar and tomcat-juli-adapters.jar from “extras” into $CATALINA_HOME/lib.
6. Replace $CATALINA_HOME/bin/tomcat-juli.jar with tomcat-juli.jar from “extras”.
7. Create $CATALINA_BASE/bin and $CATALINA_BASE/lib directories if they do not exist.
8. Put log4j.jar and tomcat-juli-adapters.jar from “extras” into $CATALINA_BASE/lib
9. Put tomcat-juli.jar from “extras” as $CATALINA_BASE/bin/tomcat-juli.jar
10. If you are running with a security manager, you would need to edit the $CATALINA_BASE/conf/catalina.policy file to adjust it to using a different copy of tomcat-juli.jar.
11. Delete $CATALINA_BASE/conf/logging.properties to prevent java.util.logging generating zero length log files.
12. Start Tomcat
____________________________________________________________________________________________________
Tomcat Security
# vi /opt/tomcat_1/conf/tomcat-users.xml
Define roles and users, for example:

<tomcat-users>
  <role rolename="manager-gui"/>
  <user username="admin" password="secret" roles="manager-gui"/>
</tomcat-users>
____________________________________________________________________________________________________
Configuring Tomcat for SSL
1. Generating the KeyStore file
# /usr/java/jdk1.7.0/bin/keytool -genkey -keyalg RSA -alias tomcat -keystore /opt/apache-tomcat-7.0.28/keys/tomcat.jks
Enter keystore password: sapient
Re-enter new password: sapient
What is your first and last name? [Unknown]: vivek srivastav
What is the name of your organizational unit? [Unknown]: ISST
What is the name of your organization? [Unknown]: sapient
What is the name of your City or Locality? [Unknown]: Gurgoan
What is the name of your State or Province? [Unknown]: Haryana
What is the two-letter country code for this unit? [Unknown]: IN
Is CN=vivek srivastav, OU=ISST, O=sapient, L=Gurgoan, ST=Haryana, C=IN correct? [no]: yes
Enter key password for
(RETURN if same as keystore password): sapient
Re-enter new password: sapient

2. Configuring Tomcat for using the Keystore file
# vi /opt/apache-tomcat-7.0.28/conf/server.xml
Enable the HTTPS connector and point it at the keystore, for example:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/opt/apache-tomcat-7.0.28/keys/tomcat.jks"
           keystorePass="sapient" clientAuth="false" sslProtocol="TLS"/>
--> https://localhost:8443/

If you get "Secure Connection Failed (Error code: sec_error_ca_cert_invalid)":
-> Click on "you can add an exception"
-> Click on "Add Exception"
-> Get Certificate
-> Confirm Security Exception

3. Import a certificate (e.g., server.crt)
# mv /opt/apache2/conf/server.crt /opt/apache-tomcat-7.0.28/keys/.
# cd /opt/apache-tomcat-7.0.28/keys/
# /usr/java/jdk1.7.0/bin/keytool -import -trustcacerts -alias cert -file server.crt -keystore tomcat.jks
Enter keystore password: sapient
Re-enter new password: sapient
Owner: CN=o2vb, OU=ISST, O=O2, L=DL, ST=GGN, C=IN
Issuer: CN=o2vb, OU=ISST, O=O2, L=DL, ST=GGN, C=IN
Serial number: fb3b6d2ecd8932db
Valid from: Tue Oct 09 23:49:00 IST 2012 until: Wed Oct 09 23:49:00 IST 2013
Certificate fingerprints:
MD5: 4C:4C:9C:93:F5:93:57:ED:2B:9D:B3:CA:CB:1D:97:C8
SHA1: 3B:70:18:A2:0D:4B:59:FF:4E:5C:64:6D:11:28:BA:49:BA:BA:BD:E2
SHA256: 3A:57:76:74:79:52:B7:81:FD:6F:2A:3D:A1:F0:FD:3C:36:C9:E9:F5:BD:B1:D5:6B:E5:15:09:73:63:3F:5D:D2
Signature algorithm name: SHA1withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore
Keystore certificates list
# /usr/java/jdk1.6.0_06/bin/keytool -list -v -keystore tomcat.jks
Generate a certificate signing request (CSR) for an existing Java keystore
# keytool -certreq -alias mydomain -keystore keystore.jks -file mydomain.csr
Check a stand-alone certificate
# keytool -printcert -v -file mydomain.crt
Check a particular keystore entry using an alias
# keytool -list -v -alias mydomain -keystore keystore.jks
Delete a certificate from a Java Keytool keystore
# keytool -delete -alias mydomain -keystore keystore.jks
Change a Java keystore password
# keytool -storepasswd -new new_storepass -keystore keystore.jks
Export a certificate from a keystore
# keytool -export -alias mydomain -file mydomain.crt -keystore keystore.jks
List Trusted CA Certs
# keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts
Import New CA into Trusted Certs
# keytool -import -trustcacerts -file /path/to/ca/ca.pem -alias CA_ALIAS -keystore $JAVA_HOME/jre/lib/security/cacerts
Source: https://www.sslshopper.com/article-most-common-java-keytool-keystore-commands.html
To check certificate validity
[tomcat@app4r]$ /usr/java/jdk1.6.0_06/bin/keytool -list -v -keystore Be.jks > abc
[tomcat@app4r]$ cat abc | grep -e "Alias" -e "Valid"
Alias name: o2_snowpatrol
Valid from: Fri Oct 08 10:25:11 BST 2010 until: Fri Apr 06 10:25:11 BST 2012
Alias name: tomcat
Valid from: Tue Nov 23 12:30:01 GMT 2010 until: Wed Nov 23 12:30:01 GMT 2011
Alias name: new_dsl_cert
Valid from: Mon Sep 15 01:00:00 BST 2008 until: Fri Sep 16 00:59:59 BST 2011
Alias name: imt_stage_snowpetrol
Certificate Handshake failure Issue
[tomcat@app4r certificates]$ wget 'https://sdpapi.ref.o2.co.uk/services/ViewPostalAddress_1_0'
--2013-05-10 14:37:17-- https://sdpapi.ref.o2.co.uk/services/ViewPostalAddress_1_0
Resolving sdpapi.ref.o2.co.uk… 82.132.158.136
Connecting to sdpapi.ref.o2.co.uk|82.132.158.136|:443… connected.
OpenSSL: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
Unable to establish SSL connection.
How to check
[tomcat@app4r certificates]$ openssl s_client -connect 82.132.158.136:443
CONNECTED(00000003)
depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
verify return:1
depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. – For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority – G5
verify return:1
depth=1 /C=US/O=Thawte, Inc./CN=Thawte SGC CA – G2
verify return:1
depth=0 /C=GB/ST=England/L=Berkshire/O=TELEFONICA UK LIMITED/OU=Operations/CN=sdpapi.ref.o2.co.uk
verify return:1
7172:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1086:SSL alert number 40
7172:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188:
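Alert number 40 (handshake_failure) frequently means the server demands a client certificate or rejects the offered protocol/cipher list. Two hedged ways to narrow it down (client.crt and client.key are placeholder file names):
# openssl s_client -connect 82.132.158.136:443 -tls1
# openssl s_client -connect 82.132.158.136:443 -cert client.crt -key client.key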
____________________________________________________________________________________________________
How to take a thread dump
# sh catalina.sh run (runs Tomcat in the foreground, so output goes to the console)
# kill -3 java_pid ; tail -f catalina.out
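To capture a series of dumps a few seconds apart, which makes stuck threads easier to spot (java_pid is a placeholder as above):
# for i in 1 2 3; do kill -3 java_pid; sleep 5; done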

JVM Heap

JCONSOLE (Monitoring Heap Size & Garbage Collector)
# ps -ef | grep java -> get Java_PID
# /opt/jdk1.7.0_03/bin/jconsole Java_PID
# pkill jconsole

For the HotSpot Java VM: The memory pools for serial garbage collection are the following.
Eden Space (heap): The pool from which memory is initially allocated for most objects.
Survivor Space (heap): The pool containing objects that have survived garbage collection of Eden space.
Tenured Generation or old Gen(heap): The pool containing objects that have existed for some time in the survivor space.
Permanent Generation (non-heap): The pool containing all the reflective data of the virtual machine itself, such as class and method objects. With Java VMs that use class data sharing, this generation is divided into read-only and read-write areas.
Code Cache (non-heap): The HotSpot Java VM also includes a code cache, containing memory that is used for compilation and storage of native code.
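The live usage of these pools can be sampled with jstat from the same JDK; this prints survivor, Eden, old and permanent generation utilization every 1000 ms:
# /opt/jdk1.7.0_03/bin/jstat -gcutil Java_PID 1000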

Garbage collection: In Java, unused objects remain in memory until a garbage collection occurs and frees up the memory used by them. The garbage collection process is primarily governed by the configuration parameters of the heap. (The heap is the part of physical memory used by the JVM to create objects.)


Major GC vs Minor GC:
The Young Generation holds newly created objects and is collected on its own when it fills up; this is a Minor GC. Objects that survive enough Minor GCs are promoted to the Old Generation, and a collection of the Old (tenured) Generation is a Major GC (usually reported as Full GC).
How to identify Major/Minor GC?
Suppose you start your application as:
# java HelloWorld
To see garbage collection activity, start it instead as:
# java -verbose:gc HelloWorld
The output should then contain lines like:
[GC 325407K->83000K(776768K), 0.2300771 secs]
GC – Indicates that it was a minor collection (young generation). If it had said Full GC then that indicates that it was a major collection (tenured generation).
325407K – The combined size of live objects before garbage collection.
83000K – The combined size of live objects after garbage collection.
(776768K) – the total available space, not counting the space in the permanent generation, which is the total heap minus one of the survivor spaces.
0.2300771 secs – time it took for garbage collection to occur.

How to increase heap size in Tomcat
Heap Size: A Java Virtual Machine on a 32-bit operating system typically defaults to a maximum heap size of 64MB. The JVM heap space is where all Java objects are stored, as well as memory used by the garbage collector.
To increase the min (-Xms) and max (-Xmx) heap size, set JAVA_OPTS:
# vi /opt/tomcat_1/bin/catalina.sh
export JAVA_OPTS="-Xms256m -Xmx512m"
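To confirm which values the JVM actually picked up, one option is:
# jps -lvm
# java -XX:+PrintFlagsFinal -version | grep -i maxheapsize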
How things look in a production environment:
# vi /app/tomcat-6.0.18/bin/catalina.sh
export JAVA_OPTS="-server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/app/tomcat-6.0.18/tomcat.hprof -Xms6144m -Xmx6144m -XX:MaxPermSize=512m -XX:NewRatio=2 -XX:SurvivorRatio=6 -Dcs.useEhcache=true -Dnet.sf.ehcache.enableShutdownHook=true -DnumOfDiskStores=10 -Dfile.encoding=UTF-8 -Dinsite.saveslotsonly=true -Dinsite.usemarkerassets=true"

How to monitor java heap configuration by Jmap
# ps -ef | grep java -> root 6092 1 0 17:28 pts/3 00:00:01 /opt/jdk1.7.0_03/bin/java
# /opt/jdk1.7.0_03/bin/jmap -heap 6092
Attaching to process ID 6092, please wait…
Mark Sweep Compact GC
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 266338304 (254.0MB)
NewSize = 1048576 (1.0MB)
MaxNewSize = 4294901760 (4095.9375MB)
OldSize = 4194304 (4.0MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 12582912 (12.0MB)
MaxPermSize = 67108864 (64.0MB)
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 4980736 (4.75MB)
used = 1721760 (1.641998291015625MB)
free = 3258976 (3.108001708984375MB)
34.56838507401316% used
Eden Space:
capacity = 4456448 (4.25MB)
used = 1201520 (1.1458587646484375MB)
free = 3254928 (3.1041412353515625MB)
26.961382697610293% used
From Space:
capacity = 524288 (0.5MB)
used = 520240 (0.4961395263671875MB)
free = 4048 (0.0038604736328125MB)
99.2279052734375% used
To Space:
capacity = 524288 (0.5MB)
used = 0 (0.0MB)
free = 524288 (0.5MB)
0.0% used
tenured generation:
capacity = 11075584 (10.5625MB)
used = 8240504 (7.858757019042969MB)
free = 2835080 (2.7037429809570312MB)
74.40243331638314% used
Perm Generation:
capacity = 12582912 (12.0MB)
used = 9002088 (8.585060119628906MB)
free = 3580824 (3.4149398803710938MB)
71.54216766357422% used
Jinfo
# /opt/jdk1.7.0_03/bin/jinfo 6092
Attaching to process ID 6092, please wait…
JVM version is 22.1-b02
sun.boot.library.path = /opt/jdk1.7.0_03/jre/lib/i386
java.vm.vendor = Oracle Corporation
os.version = 2.6.18-164.el5
user.home = /root
user.timezone = Asia/Kolkata
java.specification.version = 1.7
catalina.home = /opt/tomcat_2
java.class.path = /opt/tomcat_2/bin/bootstrap.jar:/opt/tomcat_2/bin/tomcat-juli.jar
user.name = root
-Djava.util.logging.config.file=/opt/tomcat_2/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager

Automatically Generating a Heap Dump on OutOfMemory (OOM) Errors
# vi /app/tomcat-6.0.18/bin/catalina.sh
export JAVA_OPTS="-server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/app/tomcat-6.0.18/tomcat.hprof"

Manually Generating a Heap Dump
# ./jmap -dump:file=<path to dump file> <java_pid>
# ./jmap -dump:file=/opt/tomcat_2/tomcat.hprof 6092
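A hedged one-liner that looks up the Tomcat PID itself and dumps only live objects (the pgrep pattern is an assumption about the process name):
# /opt/jdk1.7.0_03/bin/jmap -dump:live,format=b,file=/opt/tomcat_2/tomcat.hprof $(pgrep -f catalina)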

Heap dump analysis by jhat
# cd /opt/jdk1.7.0_03/bin
# ./jhat -port 7000 /opt/tomcat_2/tomcat.hprof
Reading from /opt/tomcat_2/tomcat.hprof…
Started HTTP server on port 7000
Server is ready.
http://localhost:7000/
The heap dump can now be analyzed in the browser.

Option and Default Value / Description
-XX:+AggressiveOpts
Turn on point performance compiler optimizations that are expected to be default in upcoming releases. (Introduced in 5.0 update 6.)
-XX:CompileThreshold=10000
Number of method invocations/branches before compiling [-client: 1,500]
-XX:MaxHeapFreeRatio=70
Maximum percentage of heap free after GC to avoid shrinking.
-XX:MaxNewSize=size
Maximum size of new generation (in bytes). Since 1.4, MaxNewSize is computed as a function of NewRatio. [1.3.1 Sparc: 32m; 1.3.1 x86: 2.5m.]
-XX:MaxPermSize=64m
Size of the Permanent Generation. [5.0 and newer: 64 bit VMs are scaled 30% larger; 1.4 amd64: 96m; 1.3.1 -client: 32m.]
-XX:MinHeapFreeRatio=40
Minimum percentage of heap free after GC to avoid expansion.
-XX:NewRatio=2
Ratio of new/old generation sizes. [Sparc -client: 8; x86 -server: 8; x86 -client: 12.]
-XX:NewSize=2.125m
Default size of new generation (in bytes) [5.0 and newer: 64 bit VMs are scaled 30% larger; x86: 1m; x86, 5.0 and older: 640k]
-XX:SurvivorRatio=8
Ratio of eden/survivor space size [Solaris amd64: 6; Sparc in 1.3.1: 25; other Solaris platforms in 5.0 and earlier: 32]
-XX:TargetSurvivorRatio=50
Desired percentage of survivor space used after scavenge.
-XX:-UseISM
Use Intimate Shared Memory
-XX:+UseLargePages
Use large page memory.
-XX:+UseStringCache
Enables caching of commonly allocated strings.
-XX:AllocatePrefetchLines=1
Number of cache lines to load after the last object allocation using prefetch instructions generated in JIT compiled code. Default values are 1 if the last allocated object was an instance and 3 if it was an array.
Pause Time in GC: The length of time during which application execution is stopped while garbage collection is occurring.
Design Choices
• Serial versus Parallel
With serial collection, even if multiple CPUs are available, only one is utilized to perform the collection. When parallel collection is used, the task of garbage collection is split into parts and those subparts are executed simultaneously, on different CPUs. The simultaneous operation enables the collection to be done more quickly.
• Concurrent versus Stop-the-world
When stop-the-world garbage collection is performed, execution of the application is completely
suspended during the collection. Alternatively, one or more garbage collection tasks can be executed
concurrently, that is, simultaneously, with the application. Typically, a concurrent garbage collector
does most of its work concurrently, but may also occasionally have to do a few short stop-the-world
pauses. Stop-the-world garbage collection is simpler than concurrent collection, since the heap is
frozen and objects are not changing during the collection. Its disadvantage is that it may be
undesirable for some applications to be paused. Correspondingly, the pause times are shorter when
garbage collection is done concurrently, but the collector must take extra care, as it is operating over
objects that might be updated at the same time by the application. This adds some overhead to
concurrent collectors that affects performance and requires a larger heap size.
• Compacting versus Non-compacting versus Copying
After a garbage collector has determined which objects in memory are live and which are garbage, it
can compact the memory, moving all the live objects together and completely reclaiming the
remaining memory. After compaction, it is easy and fast to allocate a new object at the first free
location. A simple pointer can be utilized to keep track of the next location available for object
allocation. In contrast with a compacting collector, a non-compacting collector releases the space
utilized by garbage objects in-place, i.e., it does not move all live objects to create a large reclaimed
region in the same way a compacting collector does. The benefit is faster completion of garbage
collection, but the drawback is potential fragmentation. In general, it is more expensive to allocate
from a heap with in-place deallocation than from a compacted heap. It may be necessary to search the
heap for a contiguous area of memory sufficiently large to accommodate the new object. A third
alternative is a copying collector, which copies (or evacuates) live objects to a different memory area.
The benefit is that the source area can then be considered empty and available for fast and easy
subsequent allocations, but the drawback is the additional time required for copying and the extra
space that may be required.

GC Logs
8746.664: [GC 8746.664: [ParNew: 1118528K->6000K(1258304K), 0.0692770 secs] 1118528K->6000K(4054528K), 0.0693720 secs] [Times: user=0.13 sys=0.01, real=0.08 secs]
19625.935: [Full GC 19625.935: [CMS: 0K->5886K(2796224K), 0.1273050 secs] 244248K->5886K(4054528K), [CMS Perm : 21247K->21092K(21248K)], 0.1274740 secs] [Times: user=0.10 sys=0.02, real=0.13 secs]
19626.075: [GC [1 CMS-initial-mark: 5886K(2796224K)] 17295K(4054528K), 0.0008640 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
19626.076: [CMS-concurrent-mark-start]
19627.828: [Full GC 19627.828: [CMS19627.890: [CMS-concurrent-mark: 0.062/1.814 secs] [Times: user=2.69 sys=0.07, real=1.82 secs]
21155.807: [CMS-concurrent-mark-start]
21155.912: [CMS-concurrent-mark: 0.105/0.105 secs] [Times: user=0.20 sys=0.01, real=0.10 secs]
21155.912: [CMS-concurrent-preclean-start]
21155.962: [CMS-concurrent-preclean: 0.047/0.050 secs] [Times: user=0.09 sys=0.00, real=0.05 secs]
21155.962: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 21160.988: [CMS-concurrent-abortable-preclean: 0.421/5.026 secs] [Times: user=0.90 sys=0.02, real=5.03 secs]
21160.988: [GC[YG occupancy: 530699 K (1258304 K)]21160.988: [Rescan (parallel) , 0.1831550 secs]21161.172: [weak refs processing, 0.0000470 secs] [1 CMS-remark: 9334K(2796224K)] 540034K(4054528K), 0.1833000 secs] [Times: user=0.19 sys=0.00, real=0.18 secs]
21161.172: [CMS-concurrent-sweep-start]
21161.178: [CMS-concurrent-sweep: 0.006/0.006 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
21161.178: [CMS-concurrent-reset-start]
21161.184: [CMS-concurrent-reset: 0.006/0.006 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
Heap
par new generation total 1258304K, used 733988K [0x00000006e0000000, 0x0000000735550000, 0x0000000735550000)
eden space 1118528K, 65% used [0x00000006e0000000, 0x000000070ccc9318, 0x0000000724450000)
from space 139776K, 0% used [0x000000072ccd0000, 0x000000072ccd0000, 0x0000000735550000)
to space 139776K, 0% used [0x0000000724450000, 0x0000000724450000, 0x000000072ccd0000)
concurrent mark-sweep generation total 2796224K, used 9285K [0x0000000735550000, 0x00000007e0000000, 0x00000007e0000000)
concurrent-mark-sweep perm gen total 94332K, used 62446K [0x00000007e0000000, 0x00000007e5c1f000, 0x0000000800000000)

Enable GC (Garbage Collection) Logging in Tomcat
# vi catalina.sh
export JAVA_OPTS="-Xloggc:/app/tomcat-6.0.18/CS7.0/cs_tomcat/logs/gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
Here -Xloggc: specifies the path of the GC log file.
-verbose:gc
Prints some GC info
-XX:+PrintHeapAtGC
Prints detailed GC info including heap occupancy before and after GC
-XX:+PrintGC
Outputs basic information at every garbage collection
-XX:+PrintGCDetails
Provide information such as the size of live objects before and after garbage collection for the various generations, the total available space for each generation, and the length of time the collection took.
-XX:+PrintGCTimeStamps
Prints the garbage collection time stamps to help with debugging.

Partial garbage collection:
1.612: [GC [PSYoungGen: 12998K->1568K(18496K)] 12998K->1568K(60864K),0.0054130 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
Full garbage collection:
1.617: [Full GC (System) [PSYoungGen: 1568K->0K(18496K)] [PSOldGen: 0K->1483K(42368K)] 1568K->1483K(60864K) [PSPermGen: 9458K->9458K(21248K)],0.0294590 secs] [Times: user=0.02 sys=0.00, real=0.03 secs]
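A quick way to summarize such a log (assuming the -XX:+PrintGCDetails/-XX:+PrintGCTimeStamps format above; grep -o requires GNU grep):
# awk '/Full GC/ {n++} END {print n, "full GCs"}' gc.log
# grep -o '[0-9.]* secs' gc.log | awk '{t+=$1} END {print t, "secs total pause"}'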

Types of Garbage Collector
The Serial Collector: -XX:+UseSerialGC
(default & stop-the-world collector)
1. Uses only one GC thread for the GC operation
2. Used for small applications.
3. Tenured Generation GC done in serial threads.
The Parallel Collector (Throughput Collector): -XX:+UseParallelGC
(stop-the-world collector)
1. Uses multiple GC threads for the GC operation.
2. Young Generation GC done in parallel threads.
Parallel Old Generation Collector: -XX:+UseParallelOldGC
1. This garbage collector is set for high throughput.
2. Certain phases of an 'Old Generation' collection can be performed in parallel, speeding up an old generation collection.
The Concurrent Low Pause Collector(CMS): -XX:+UseConcMarkSweepGC
Steps of GC: initial mark, concurrent marking, remark, concurrent sweeping
1. Uses only one GC thread for the GC operation
2. This garbage collector is set for low pause time. It will result in a Java application that has a lower average throughput, but much shorter CPU-intensive garbage collections. This option is required in environments that have response time constraints.
Incremental Low Pause Collector: -XX:+UseTrainGC
Serial vs Parallel collector
- Both the serial and parallel collectors cause a stop-the-world during the GC. A serial collector is a default copying collector which uses only one GC thread for the GC operation, while a parallel collector uses multiple GC threads for the GC operation.
Parallel vs CMS collectors:
- The parallel is a 'stop-the-world' collector, while the CMS stops the world only during the initial mark and remark phases. During the concurrent marking and sweeping phases, the CMS thread runs along with the application's threads.

If you wish to combine both parallelism and concurrency in your GC, you can use the following:
-XX:+UseParNewGC for the new generation (multiple GC threads)
-XX:+UseConcMarkSweepGC for the old generation (one GC thread, freezes the JVM only during the initial mark and remark phases)
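For example, a sketch of a combined setting (the heap sizes are placeholders, not recommendations):
export JAVA_OPTS="-Xms2048m -Xmx2048m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC"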

Collectors that operate on the young generation: -XX:+UseSerialGC, -XX:+UseParallelGC, -XX:+UseParNewGC
Collectors that operate on the old generation: -XX:+UseParallelOldGC, -XX:+UseConcMarkSweepGC
http://robaustin.wikidot.com/jvm-garbage-collector-overview

Java Thread Dump Analyser
Thread States: There are 6 thread states
NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED
Download jtda-cli.jar
To see the usage: /usr/java/jdk1.6.0_06/bin/java -jar jtda-cli.jar --help
To analyze a thread dump: # cat catalina.out_23_05_2013_02 | /usr/java/jdk1.6.0_06/bin/java -jar jtda-cli.jar
Source: http://mchr3k.github.io/javathreaddumpanalyser/
____________________________________________________________________________________________________
Tomcat with LDAP integration
# vi /opt/apache-tomcat-7.0.28/conf/server.xml
Add a JNDIRealm inside the Engine or Host element; the connection details below are illustrative placeholders for your directory server:
<Realm className="org.apache.catalina.realm.JNDIRealm"
       connectionURL="ldap://ldapserver:389"
       userPattern="uid={0},ou=people,dc=example,dc=com"
       roleBase="ou=groups,dc=example,dc=com"
       roleName="cn"
       roleSearch="(uniqueMember={0})" />

# vi ../webapps/ROOT/WEB-INF/web.xml
Protect the application with a security constraint and BASIC authentication, for example:
<display-name>Welcome to Tomcat</display-name>
<description>Welcome to Tomcat</description>
<security-constraint>
  <display-name>Logging Area</display-name>
  <web-resource-collection>
    <web-resource-name>Authentication for registered users.</web-resource-name>
    <url-pattern>/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>*</role-name>
  </auth-constraint>
</security-constraint>
<security-role>
  <role-name>*</role-name>
</security-role>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Please enter your Username</realm-name>
</login-config>
# ./catalina.sh start; tail -f ../logs/catalina.out
Once Tomcat is started and you visit the site, an authentication popup will be shown.
http://localhost:8080/
____________________________________________________________________________________________________
Single Sign-On Implementation
Using Single Sign-On, it is possible to eliminate repeated authentication prompts as a user moves between web applications (provided the user name and password are identical for each sign-on, and the applications authenticate against the same Tomcat Realm).
The Single Sign-On Valve caches credentials (passwords) on the server side, and will invisibly authenticate users as they traverse between web applications on a given virtual host. Without activating this Valve, the user will be prompted to authenticate for each and every protected web application.
It is enabled with a Valve element inside the Host element:
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
Attribute / Description:
className : Java class name of the implementation to use. This MUST be set to org.apache.catalina.authenticator.SingleSignOn.
requireReauthentication : Default false. Flag to determine whether each request needs to be reauthenticated to the security Realm. If "true", this Valve uses cached security credentials (username and password) to reauthenticate to the Realm for each request associated with an SSO session. If "false", the Valve can itself authenticate requests based on the presence of a valid SSO cookie, without rechecking with the Realm.
cookieDomain : Sets the host domain to be used for SSO cookies.

How SSO works across two applications

Web Application A (http://WebApplicationA/)
Web Application B (http://WebApplicationB/)

User logs into Web Application A. He clicks on a link inside Web Application A page (of the kind): http://WebApplicationB/go?sessionId=ABC&user=me@me.com

When Application B receives this request, it makes an HTTP call to Application A to verify this information.
In other words it sends an HTTP request (server to server) like: http://WebApplicationA/verifyUserSession?sessionId=ABC&user=me@me.com. Web Application A checks its list of logged-in users/sessions and responds with VERIFIED or FAILURE.

If the response was VERIFIED, WebApplicationB knows this is a logged in user inside WebApplicationA – and it proceeds to create a session for the user, and allows him in.
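The server-to-server verification step can be simulated with curl, reusing the hypothetical URL from the example above:
# curl "http://WebApplicationA/verifyUserSession?sessionId=ABC&user=me@me.com"
A VERIFIED response means Application B may create the session; FAILURE means it must refuse.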

Configuring Customized User Directories
Some sites like to allow individual users to publish a directory of web pages on the server. For example, a university department might want to give each student a public area, or an ISP might make some web space available on one of its servers to customers that don’t have a virtually hosted web server. In such cases, it is typical to use the tilde character (~) plus the user’s name as the virtual path of that user’s web site:
http://www.cs.myuniversity.edu/~username
http://members.mybigisp.com/~username
Tomcat gives you two ways to map this on a per-host basis, using a couple of special Listener elements. The Listener's className attribute should be org.apache.catalina.startup.UserConfig, with the userClass attribute specifying one of several mapping classes. If your system runs Unix, has a standard /etc/passwd file that is readable by the account running Tomcat, and that file specifies users' home directories, use the PasswdUserDatabase mapping class:
<Listener className="org.apache.catalina.startup.UserConfig"
          directoryName="public_html"
          userClass="org.apache.catalina.startup.PasswdUserDatabase" />
Web files would need to be in directories such as /home/users/ian/public_html or /users/jbrittain/public_html. Of course, you can change public_html to be whatever subdirectory into which your users put their personal web pages.
In fact, the directories don't have to be inside of a user's home directory at all. If you don't have a password file but want to map from a user name to a subdirectory of a common parent directory such as /home, use the HomesUserDatabase class:
<Listener className="org.apache.catalina.startup.UserConfig"
          directoryName="public_html"
          homeBase="/home"
          userClass="org.apache.catalina.startup.HomesUserDatabase" />
In this case, web files would be in directories such as /home/ian/public_html or /home/jasonb/public_html. This format is more useful on Windows, where you'd likely use a directory such as C:\home.
These Listener elements, if present, must be inside of a Host element, but not inside of a Context element, as they apply to the Host itself.

Tomcat Interview Questions Link

http://www.pagalbytes.com/?q=node/535&page=show
http://www.javaexperience.com/category/tomcat/
http://allinoneissues.blogspot.in/2012/07/tomcat-interview-questions-answers.html
____________________________________________________________________________________________________
Deploying a WAR with Ant
# vi .bash_profile
export ANT_HOME=/opt/apache-ant-1.9.1/
export PATH=$ANT_HOME/bin:$PATH
export JAVA_HOME=/usr/jdk1.7.0_03/

# vi /opt/build/build.xml
A minimal build.xml sketch for this setup (the project and target names are assumptions; adjust them to your build):
<project name="sample" default="deploy" basedir=".">
  <property file="build.properties"/>
  <target name="deploy">
    <copy file="sample.war" todir="${tomcat.deployment}"/>
  </target>
</project>
# vi /opt/build/build.properties
tomcat=/opt/apache-tomcat-7.0.40/
tomcat.lib=${tomcat}/lib
tomcat.deployment=${tomcat}/webapps
tomcat.bin=${tomcat}/bin

Put sample.war into /opt/build/
# ant deploy
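To check the result (assuming Tomcat listens on 8080 and the war unpacked as sample):
# curl -I http://localhost:8080/sample/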

____________________________________________________________________________________________________
Bash Reference Sheet

Contents
Bash Reference Sheet
Syntax
Basic Structures
Compound Commands
Command Lists
Expressions
Loops
Builtins
Dummies
Declarative
Input
Output
Execution
Jobs/Processes
Conditionals And Loops
Script Arguments
Streams
File Descriptors
Redirection
Piping
Expansions
Common Combinations
Tests
Exit Codes
Testing The Exit Code
Patterns
Glob Syntax
Testing
Parameters
Special Parameters
Parameter Operations
Arrays
Creating Arrays
Using Arrays
Examples: Basic Structures
Compound Commands
Command Lists
Expressions
Loops
Builtins
Dummies
Declarative
Input
Output
Execution

Syntax

[word] [space] [word]
Spaces separate words. In bash, a word is a group of characters that belongs together. Examples are command names and arguments to commands. To put spaces inside an argument (or word), quote the argument (see next point) with single or double quotes.
[command] ; [command] [newline]
Semi-colons and newlines separate synchronous commands from each other. Use a semi-colon or a new line to end a command and begin a new one. The first command will be executed synchronously, which means that Bash will wait for it to end before running the next command.
[command] & [command]
A single ampersand terminates an asynchronous command. An ampersand does the same thing as a semicolon or newline in that it indicates the end of a command, but it causes Bash to execute the command asynchronously. That means Bash will run it in the background and run the next command immediately after, without waiting for the former to end. Only the command before the & is executed asynchronously and you must not put a ; after the &, the & replaces the ;.
[command] | [command]
A vertical line or pipe-symbol connects the output of one command to the input of the next. Any characters streamed by the first command on stdout will be readable by the second command on stdin.
[command] && [command]
An AND conditional causes the second command to be executed only if the first command ends and exits successfully.
[command] || [command]
An OR conditional causes the second command to be executed only if the first command ends and exits with a failure exit code (any non-zero exit code).
' [Single quoted string] '
Disables syntactical meaning of all characters inside the string. Whenever you want literal strings in your code, it's good practice to wrap them in single quotes so you don't run the risk of accidentally using a character that also has a syntactical meaning to Bash.
" [Double quoted string] "
Disables syntactical meaning of all characters except expansions inside the string. Use this form instead of single quotes if you need to expand a parameter or command substitution into your string.
Remember: It's important to always wrap your expansions ("$var" or "$(command)") in double quotes. This will, in turn, safely disable meaning of syntactical characters that may occur inside the expanded result.
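A tiny illustration of why the quotes matter (safe to run in any bash shell):
var="foo   bar"
echo $var    # unquoted: word splitting collapses the spaces -> foo bar
echo "$var"  # quoted: the value is preserved -> foo   bar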
Basic Structures

See BashSheet#Examples:_Basic_Structures for some examples of the syntax below.

Compound Commands

Compound commands are statements that can execute several commands but are considered as a sort of command group by Bash.

Command Lists

{ [command list]; }
Execute the list of commands in the current shell as though they were one command.
Command grouping on its own isn’t very useful. However, it comes into play wherever Bash syntax accepts only one command while you need to execute multiple. For example, you may want to pass output of multiple commands via a pipe to another command’s input:
{ cmd1; cmd2; } | cmd3
Or you may want to execute multiple commands after a || operator:
rm file || { echo "Removal failed, aborting."; exit 1; }
It is also used for function bodies. Technically, this can also be used for loop bodies, though this is undocumented, not portable, and we normally prefer do ...; done for this:
for digit in 1 9 7; { echo "$digit"; } # non-portable, undocumented, unsupported
for digit in 1 9 7; do echo "$digit"; done # preferred
Note: You need a ; before the closing } (or it must be on a new line).
( [command list] )
Execute the list of commands in a subshell.
This is exactly the same thing as the command grouping above, only, the commands are executed in a subshell. Any code that affects the environment such as variable assignments, cd, export, etc. do not affect the main script’s environment but are scoped within the brackets.
Note: You do not need a ; before the closing ).
Expressions

(( [arithmetic expression] ))
Evaluates the given expression in an arithmetic context.
That means, strings are considered names of integer variables, all operators are considered arithmetic operators (such as ++, ==, >, <=, etc.). You should always use this for performing tests on numbers!
$(( [arithmetic expression] ))
Expands the result of the given expression in an arithmetic context.
This syntax is similar to the previous, but expands into the result of the expansion. We use it inside other commands when we want the result of the arithmetic expression to become part of another command.
[[ [test expression] ]]
Evaluates the given expression as a test-compatible expression.
All test operators are supported but you can also perform Glob pattern matching and several other more advanced tests. It is good to note that word splitting will not take place on unquoted parameter expansions here. You should always use this for performing tests on strings and filenames!
Loops

If you're new to loops or are looking for more details, explanation and/or examples of their usage, go read the BashGuide's section on Conditional Loops.

do [command list]; done
This constitutes the actual loop that is used by the next few commands. The list of commands between the do and done are the commands that will be executed in every iteration of the loop.
for [name] in [words]
The next loop will iterate over each word after the in keyword. The loop's commands will be executed with the value of the variable denoted by name set to the word.
for (( [arithmetic expression]; [arithmetic expression]; [arithmetic expression] ))
The next loop will run as long as the second arithmetic expression remains true. The first arithmetic expression will be run before the loop starts. The third arithmetic expression will be run after the last command in each iteration has been executed.
while [command list]
The next loop will be repeated for as long as the last command ran in the command list exits successfully.
until [command list]
The next loop will be repeated for as long as the last command ran in the command list exits unsuccessfully ("fails").
select [name] in [words]
The next loop will repeat forever, letting the user choose between the given words. The iteration's commands are executed with the variable denoted by name's value set to the word chosen by the user. Naturally, you can use break to end this loop.
Builtins

Builtins are commands that perform a certain function that has been compiled into Bash. Understandably, they are also the only types of commands (other than those above) that can modify the Bash shell's environment.

Dummies

true (or :): These commands do nothing at all. They are NOPs that always return successfully.
false: The same as above, except that the command always "fails". It returns an exit code of 1 indicating failure.

Declarative

alias: Sets up a Bash alias, or print the bash alias with the given name. Aliases replace a word in the beginning of a command by something else. They only work in interactive shells (not scripts).
declare (or typeset): Assign a value to a variable. Each argument is a new variable assignment. Each argument's part before the equal sign is the name of the variable, and after comes the data of the variable. Options to declare can be used to toggle special variable flags (like read-only/export/integer/array).
export: Export the given variable to the environment so that child processes inherit it. This is the same as declare -x. Remember that for the child process, the variable is not the same as the one you exported. It just holds the same data. Which means, you can't change the variable data and expect it to change in the parent process, too.
local: Declare a variable to have a scope limited to the current function. As soon as the function exits, the variable disappears. Assigning to it in a function also doesn't change a global variable with the same name, should one exist. The same options as taken by declare can be passed to local.
type: Show the type of the command name specified as argument. The type can be either: alias, keyword, function, builtin, or file.

Input

read: Read a line (unless the -d option is used to change the delimiter from newline to something else) and put it in the variables denoted by the arguments given to read. If more than one variable name is given, split the line up using the characters in IFS as delimiters. If less variable names are given than there are split chunks in the line, the last variable gets all data left unsplit.

Output

echo: Output each argument given to echo on one line, separated by a single space. The first arguments can be options that toggle special behaviour (like no newline at end/evaluate escape sequences).
printf: Use the first argument as a format specifier of how to output the other arguments. See help printf.
pwd: Output the absolute pathname of the current working directory. You can use the -P option to make pwd resolve any symlinks in the pathname.

Execution

cd: Changes the current directory to the given path. If the path doesn't start with a slash, it is relative to the current directory.
command: Run the first argument as a command. This tells Bash to skip looking for an alias, function or keyword by that name; and instead assume the command name is a builtin, or a program in PATH.
coproc: Run a command or compound command as a co-process. Runs in bg, setting up pipes for communication. See http://wiki.bash-hackers.org/syntax/keywords/coproc for details.
. or source: Makes Bash read the filename given as first argument and execute its contents in the current shell. This is kind of like include in other languages. If more arguments are given than just a filename to source, those arguments are set as the positional parameters during the execution of the sourced code. If the filename to source has no slash in it, PATH is searched for it.
exec: Run the command given as first argument and replace the current shell with it. Other arguments are passed to the command as its arguments. If no arguments are given to exec but you do specify Redirections on the exec command, the redirections will be applied to the current shell.
exit: End the execution of the current script. If an argument is given, it is the exit status of the current script (an integer between 0 and 255).
logout: End the execution of a login shell.
return: End the execution of the current function. An exit status may be specified just like with the exit builtin.
ulimit: Modify resource limitations of the current shell's process. These limits are inherited by child processes.

Jobs/Processes

jobs: List the current shell's active jobs.
bg: Send the previous job (or job denoted by the given argument) to run in the background. The shell continues to run while the job is running. The shell's input is handled by itself, not the job.
fg: Send the previous job (or job denoted by the given argument) to run in the foreground. The shell waits for the job to end and the job can receive the input from the shell.
kill: Send a signal(3) to a process or job. As argument, give the process ID of the process or the jobspec of the job you want to send the signal to.
trap: Handle a signal(3) sent to the current shell. The code that is in the first argument is executed whenever a signal is received denoted by any of the other arguments to trap.
suspend: Stops the execution of the current shell until it receives a SIGCONT signal. This is much like what happens when the shell receives a SIGSTOP signal.
wait: Stops the execution of the current shell until active jobs have finished. In arguments, you can specify which jobs (by jobspec) or processes (by PID) to wait for.

Conditionals And Loops

break: Break out of the current loop. When more than one loop is active, break out of the last one declared. When a number is given as argument to break, break out of that many loops, starting with the last one declared.
continue: Skip the code that is left in the current loop and start a new iteration of that loop. Just like with break, a number may be given to skip out more loops.

Script Arguments

set: The set command normally sets various Shell options, but can also set Positional parameters. Shell options are options that can be passed to the shell, such as bash -x or bash -e. set toggles shell options like this: set -x, set +x, set -e, ... Positional parameters are parameters that hold arguments that were passed to the script or shell, such as bash myscript -foo /bar. set assigns positional parameters like this: set -- -foo /bar.
shift: Moves all positional parameters' values one parameter back. This way, values that were in $1 are discarded, values from $2 go into $1, values from $3 go into $2, and so on. You can specify an argument to shift which is an integer that specifies how many times to repeat this shift.
getopts: Puts an option specified in the arguments in a variable. getopts uses the first argument as a specification for which options to look for in the arguments. It then takes the first option in the arguments that is mentioned in this option specification (or the next option, if getopts has been run before), and puts this option in the variable denoted by the name in the second argument to getopts. This command is pretty much always used in a loop:
while getopts abc opt
do
  case $opt in
    a) ...;;
    b) ...;;
    c) ...;;
  esac
done
This way all options in the arguments are parsed and when they are either -a, -b or -c, the respective code in the case statement is executed. The following short style is also valid for specifying multiple options in the arguments that getopts parses: -ac.

Streams

If you're new to handling input and output in bash or are looking for more examples, details and/or explanations, go read BashGuide/InputAndOutput.

Bash is an excellent tool for managing streams of data between processes. Thanks to its excellent operators for connecting file descriptors, we take data from almost anywhere and send it to almost anywhere. Understanding streams and how you manipulate them in Bash is key to the vastness of Bash's power.

File Descriptors

A file descriptor is like a road between a file and a process. It's used by the process to send data to the file or read data from the file. A process can have a great many file descriptors, but by default, there are three that are used for standard tasks.
0: Standard Input
This is where processes normally read information from. Eg. the process may ask you for your name; after you type it in, the information is read over FD 0.
1: Standard Output
This is where processes normally write all their output to. Eg. the process may explain what it's doing or output the result of an operation.
2: Standard Error
This is where processes normally write their error messages to. Eg. the process may complain about invalid input or invalid arguments.

Redirection

[command] > [file], [command] [n]> [file], [command] 2> [file]
File Redirection: The > operator redirects the command’s Standard Output (or FD n) to a given file.
This means all standard output generated by the command will be written to the file.
You can optionally specify a number in front of the > operator. If not specified, the number defaults to 1. The number indicates which file descriptor of the process to redirect output from.
Note: The file will be truncated (emptied) before the command is started!
[command] >&[fd], [command] [fd]>&[fd], [command] 2>&1
Duplicating File Descriptors: The x>&y operator copies FD y’s target to FD x.
For the last example, FD 1 (the command’s stdout)’s current target is copied to FD 2 (the command’s stderr).
As a result, when the command writes to its stderr, the bytes will end up in the same place as they would have if they had been written to the command’s stdout.
[command] >> [file], [command] [n]>> [file]
File Redirection: The >> operator redirects the command’s Standard Output to a given file, appending to it.
This means all standard output generated by the command will be added to the end of the file.
Note: The file is not truncated. Output is just added to the end of it.
[command] < [file], [command] [n]< [file]
File Redirection: The < operator redirects the given file to the command's Standard Input.
You can optionally specify a number in front of the < operator. If not specified, the number defaults to 0. The number indicates which file descriptor of the process to redirect input into.
[command] &> [file]
File Redirection: The &> operator redirects the command’s Standard Output and Standard Error to a given file.
This means all standard output and errors generated by the command will be written to the file.
[command] &>> [file] (Bash 4+)
File Redirection: The &>> operator redirects the command’s Standard Output and Standard Error to a given file, appending to it.
This means all standard output and errors generated by the command will be added to the end of the file.
[command] <<< "[line of data]"
Here-String: Redirects the single string of data to the command's Standard Input.
This is a good way to send a single line of text to a command's input. Note that since the string is quoted, you can also put newlines in it safely, and turn it into multiple lines of data.
[command] <<[WORD]
[lines of data]
[WORD]
Here-Document: Redirects the lines of data to the command's Standard Input.
This is a good way of sending multiple lines of text to a command's input.
Note: The word after << must be exactly the same as the word after the last line of data, and when you repeat that word after the last line of data, it must be in the beginning of the line, and there must be nothing else on that line.
Note: You can 'quote' the word after the <<. If you do so, anything in the lines of data that looks like expansions will not be expanded by bash.
Piping

[command] | [othercommand]
Pipe: The | operator connects the first command's Standard Output to the second command's Standard Input.
As a result, the second command will read its data from the first command's output.
[command] |& [othercommand] (Bash 4+)
Pipe: The |& operator connects the first command's Standard Output and Standard Error to the second command's Standard Input.
As a result, the second command will read its data from the first command's output and errors combined.
Expansions

[command] "$( [command list] )", [command] "` [command list] `"
Command Substitution: captures the output of a command and expands it inline.
We only use command substitution inside other commands when we want the output of one command to become part of another statement. An ancient and ill-advised alternative syntax for command substitution is the back-quote: `command`. This syntax has the same result, but it does not nest well and it's too easily confused with quotes (back-quotes have nothing to do with quoting!). Avoid this syntax and replace it with $(command) when you find it.
It's like running the second command, taking its output, and pasting it in the first command where you would put $(...).
[command] <([command list])
Process substitution: The <(...) operator expands into a new file created by bash that contains the other command's output.
The file provides whomever reads from it with the output from the second command. It's like redirecting the output of the second command to a file called foo, and then running the first command and giving it foo as argument. Only, in a single statement, and foo gets created and cleaned up automatically afterwards.
NOTE: DO NOT CONFUSE THIS WITH FILE REDIRECTION. The < here does not mean File Redirection. It is just a symbol that's part of the <(...) operator! This operator does not do any redirection. It merely expands into a path to a file.
[command] >([command list])
Process substitution: The >(…) operator expands into a new file created by bash that sends data you write to it to a second command’s Standard Input.
When the first command writes something to the file, that data is given to the second command as input.
It’s like redirecting a file called foo to the input of the second command, and then running the first command, giving it foo as argument. Only, in a single statement, and foo gets created and cleaned up automatically afterwards
Common Combinations

[command] < <([command list])
File Redirection and Process Substitution: The <(...) is replaced by a file created by bash, and the < operator takes that new file and redirects it to the command's Standard Input.
This is almost the same thing as piping the second command to the first (secondcommand | firstcommand), but the first command is not sub-shelled like it is in a pipe. It is mostly used when we need the first command to modify the shell's environment (which is impossible if it is subshelled). For example, reading into a variable: read var < <(grep foo file). This wouldn't work: grep foo file | read var, because the var will be assigned only in its tiny subshell, and will disappear as soon as the pipe is done.
Note: Do not forget the whitespace between the < operator and the <(...) operator. If you forget that space and turn it into <<(...), that will give errors!
Note: This creates (and cleans up) a temporary implementation-specific file (usually, a FIFO) that channels output from the second command to the first.
[command] <<< "$([command list])"
Here-String and Command Substitution: The $(...) is replaced by the output of the second command, and the <<< operator sends that string to the first command's Standard Input.
This is pretty much the same thing as the command above, with the small side-effect that $() strips all trailing newlines from the output and <<< adds one back to it.
Note: This first reads all output from the second command, storing it in memory. When the second command is complete, the first is invoked with the output. Depending on the amount of output, this can be more memory-consuming.

Tests

If you're new to bash, don't fully understand what commands and exit codes are or want some details, explanation and/or examples on testing commands, strings or files, go read the BashGuide's section on Tests and Conditionals.

Exit Codes

An Exit Code or Exit Status is an unsigned 8-bit integer returned by a command that indicates how its execution went. It is agreed that an Exit Code of 0 indicates the command was successful at what it was supposed to do. Any other Exit Code indicates that something went wrong. Applications can choose for themselves what number indicates what went wrong; so refer to the manual of the application to find out what the application's Exit Code means.

Testing The Exit Code

if [command list]; then [command list]; elif [command list]; then [command list]; else [command list]; fi
The if command tests whether the last command in the first command list had an exit code of 0. If so, it executes the command list that follows the then. If not, the next elif is tried in the same manner. If no elifs are present, the command list following else is executed, unless there is no else statement.
To summarize, if executes a list of commands. It tests the exit code. On success, the then commands are executed. elif and else parts are optional. The fi part ends the entire if block (don't forget it!).
while [command list], and until [command list]
Execute the next iteration depending on the exit code of the last command in the command list. We've discussed these before, but it's worth repeating them in this section, as they actually do the same thing as the if statement; except that they execute a loop for as long as the tested exit code is respectively 0 or non-0.

Patterns

Bash knows two types of patterns. Glob Patterns is the most important, most used and best readable one. Later versions of Bash also support the "trendy" Regular Expressions. However, it is ill-advised to use regular expressions in scripts unless you have absolutely no other choice or the advantages of using them are far greater than when using globs. Generally speaking, if you need a regular expression, you'll be using awk(1), sed(1), or grep(1) instead of Bash.

If you're new to bash or want some details, explanation and/or examples on pattern matching, go read the BashGuide's section on Patterns.

Glob Syntax

?: A question mark matches any character. That is one single character.
*: A star matches any amount of any characters. That is zero or more of whatever characters.
[...]: This matches *one of* any of the characters inside the braces. That is one character that is mentioned inside the braces.
[abc]: Matches either a, b, or c but not the string abc.
[a-c]: The dash tells Bash to use a range. Matches any character between (inclusive) a and c. So this is the same thing as the example just above.
[!a-c] or [^a-c]: The ! or ^ in the beginning tells Bash to invert the match. Matches any character that is *not* a, b or c. That means any other letter, but *also* a number, a period, a comma, or any other character you can think of.
[[:digit:]]: The [:class:] syntax tells Bash to use a character class. Character classes are groups of characters that are predefined and named for convenience. You can use the following classes: alnum, alpha, ascii, blank, cntrl, digit, graph, lower, print, punct, space, upper, word, xdigit

Testing

case [string] in [glob pattern]) [command list];; [glob pattern]) [command list];; esac:
Using case is handy if you want to test a certain string that could match either of several different glob patterns. The command list that follows the *first* glob pattern that matched your string will be executed. You can specify as many glob pattern and command list combos as you need.
[[ [string] = "[string]" ]], [[ [string] = [glob pattern] ]], or [[ [string] =~ [regular expression] ]]:
Test whether the left-hand STRING matches the right-hand STRING (if quoted), GLOB (if unquoted and using =) or REGEX (if unquoted and using =~).
[ and test are commands you often see in sh scripts to perform these tests. [[ can do all these things (but better and safer) and it also provides you with pattern matching. Do NOT use [ or test in bash code. Always use [[ instead. It has many benefits and no downsides. Do NOT use [[ for performing tests on commands or on numeric operations. For the first, use if and for the second use ((. [[ can do a bunch of other tests, such as on files. See help test for all the types of tests it can do for you.
(( [arithmetic expression] )): This keyword is specialized in performing numeric tests and operations. See ArithmeticExpression.

Parameters

Parameters are what Bash uses to store your script data in. There are Special Parameters and Variables. Any parameters you create will be variables, since special parameters are read-only parameters managed by Bash.
It is recommended you use lower-case names for your own parameters so as not to confuse them with the all-uppercase variable names used by Bash internal variables and environment variables. It is also recommended you use clear and transparent names for your variables. Avoid x, i, t, tmp, foo, etc. Instead, use the variable name to describe the kind of data the variable is supposed to hold.
It is also important that you understand the need for quoting. Generally speaking, whenever you use a parameter, you should quote it: echo "The file is in: $filePath". If you don't, bash will tear the contents of your parameter to bits, delete all the whitespace from it, and feed the bits as arguments to the command. Yes, Bash mutilates your parameter expansions by default - it's called Word Splitting - so use quotes to prevent this. The exception is keywords and assignment. After myvar= and inside [[, case, etc., you don't need the quotes, but they won't do any harm either - so if you're unsure: quote!
Last but not least: Remember that parameters are the data structures of bash. They hold your application data. They should NOT be used to hold your application logic. So while many ill-written scripts out there may use things like GREP=/usr/bin/grep, or command='mplayer -vo x11 -ao alsa', you should NOT do this. The main reason is because you cannot possibly do it completely right and safe and readable/maintainable. If you want to avoid retyping the same command multiple times, or make a single place to manage the command's command line, use a function instead. Not parameters.

Special Parameters

If you're new to bash or want some details, explanation and/or examples on parameters, go read the BashGuide's section on Special Parameters.

1, 2, ...: Positional Parameters are the arguments that were passed to your script or your function.
When your script is started with ./script foo bar, "$1" will become "foo" and "$2" will become "bar". A script ran as ./script "foo bar" hubble will expand "$1" as "foo bar" and "$2" as "hubble".
*: When expanded, it equals the single string that concatenates all positional parameters using the first character of IFS to separate them (by default, that's a space).
In short, "$*" is the same as "$1x$2x$3x$4x..." where x is the first character of IFS. With a default IFS, that will become a simple "$1 $2 $3 $4 ...".
@: This will expand into multiple arguments: Each positional parameter that is set will be expanded as a single argument.
So basically, "$@" is the same as "$1" "$2" "$3" ..., all quoted separately.
NOTE: You should always use "$@" before "$*", because "$@" preserves the fact that each argument is its separate entity. With "$*", you lose this data! "$*" is really only useful if you want to separate your arguments by something that's not a space; for instance, a comma: (IFS=,; echo "You ran the script with the arguments: $*") -- outputs all your arguments, separating them by commas.
#: This parameter expands into a number that represents how many positional parameters are set. A script executed with 5 arguments will have "$#" expand to 5. This is mostly only useful to test whether any arguments were set: if (( ! $# )); then echo "No arguments were passed." >&2; exit 1; fi
?: Expands into the exit code of the previously completed foreground command.
We use $? mostly if we want to use the exit code of a command in multiple places; or to test it against many possible values in a case statement.
-: The dash parameter expands into the option flags that are currently set on the Bash process.
See set for an explanation of what option flags are, which exist, and what they mean.
$: The dollar parameter expands into the Process ID of the Bash process.
Handy mostly for creating a PID file for your bash process (echo "$$" > /var/run/foo.pid); so you can easily terminate it from another bash process, for example.
!: Expands into the Process ID of the most recently backgrounded command.
Use this for managing backgrounded commands from your Bash script: foo ./bar & pid=$!; sleep 10; kill "$pid"; wait "$pid"
_: Expanding the underscore argument gives you the last argument of the last command you executed.
This one's used mostly in interactive shells to shorten typing a little: mkdir -p /foo/bar && mv myfile "$_".
Parameter Operations

If you’re new to bash or want some details, explanation and/or examples on parameter operations, go read the BashGuide’s section on Parameter Expansion and BashFAQ/073.

“$var”, “${var}”
Expand the value contained within the parameter var. The parameter expansion syntax is replaced by the contents of the variable.
“${var:-Default Expanded Value}”
Expand the value contained within the parameter var or the string Default Expanded Value if var is empty. Use this to expand a default value in case the value of the parameter is empty (unset or contains no characters).
“${var:=Default Expanded And Assigned Value}”
Expand the value contained within the parameter var but first assign Default Expanded And Assigned Value to the parameter if it is empty. This syntax is often used with the colon command (:): : “${name:=$USER}”, but a regular assignment with the above will do as well: name=”${name:-$USER}”.
“${var:?Error Message If Unset}”, “${name:?Error: name is required.}”
Expand the value contained within the parameter name or show an error message if it’s empty. The script (or function, if in an interactive shell) is aborted.
${name:+Replacement Value}, ${name:+–name “$name”}
Expand the given string if the parameter name is not empty. This expansion is used mainly for expanding the parameter along with some context. The example expands two arguments: notice how, unlike all other examples, the main expansion is unquoted, allowing word splitting of the inside string. Remember to quote the parameter in the inside string, though!
“${line:5}”, “${line:5:10}”, “${line:offset:length}”
Expand a substring of the value contained within the parameter line. The substring begins at character number 5 (or the number contained within parameter offset, in the second example) and has a length of 10 characters (or the number contained within parameter length). The offset is 0-based. If the length is omitted, the substring reaches til the end of the parameter’s value.
“${@:5}”, “${@:2:4}”, “${array:start:count}”
Expand elements from an array starting from a start index and expanding all or a given count of elements. All elements are expanded as separate arguments because of the quotes. If you use @ as the parameter name, the elements are taken from positional parameters (the arguments to your script – the second example becomes: “$2” “$3” “$4” “$5”).
"${!var}"
Expand the value of the parameter named by the value of the parameter var. This is bad practice! This expansion makes your code highly non-transparent and unpredictable in the future. You probably want an associative array instead.
"${#var}", "${#myarray[@]}"
Expand into the length of the value of the parameter var. The second example expands into the number of elements contained in the array named myarray.
"${var#A Prefix}", "${PWD#*/}", "${PWD##*/}"
Expand the value contained within the parameter var after removing the string A Prefix from the beginning of it. If the value doesn't have the given prefix, it is expanded as is. The prefix can also be a glob pattern, in which case the string that matches the pattern is removed from the front. You can double the # mark to make the pattern match greedy.
"${var%A Suffix}", "${PWD%/*}", "${PWD%%/*}"
Expand the value contained within the parameter var after removing the string A Suffix from the end of it. Works just like the prefix trimming operation, only it takes away from the end.
"${var/pattern/replacement}", "${HOME/$USER/bob}", "${PATH//:/ }"
Expand the value contained within the parameter var after replacing the given pattern with the given replacement string. The pattern is a glob used to search for the string to replace within var's value. The first match is replaced with the replacement string. You can double the first / to replace all matches: the third example replaces all colons in PATH's value by spaces.
"${var^}", "${var^^}", "${var^^[ac]}"
Expand the value contained within the parameter var after upper-casing all characters matching the pattern. The pattern must match a single character; the pattern ? (any character) is used if it is omitted. The first example upper-cases the first character of var's value, the second upper-cases all characters. The third upper-cases all characters that are either a or c.
"${var,}", "${var,,}", "${var,,[AC]}"
Expand the value contained within the parameter var after lower-casing all characters matching the pattern. Works just like the upper-casing operation, only it lower-cases matching characters.
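To illustrate the prefix and suffix trimming operations above, a small sketch (the path is just an example) that splits a pathname into its parts:

path=/home/lhunath/archive.tar.gz
echo "${path##*/}"   # archive.tar.gz -- greedy prefix removal: strip everything up to the last /
echo "${path%/*}"    # /home/lhunath  -- suffix removal: strip from the last / onward
echo "${path##*.}"   # gz             -- greedy prefix removal: keep only the text after the last .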
Arrays

Arrays are variables that contain multiple strings. Whenever you need to store multiple items in a variable, use an array and NOT a string variable. Arrays allow you to keep the elements nicely separated and allow you to cleanly expand the elements into separate arguments. This is impossible to do if you mash your items together in a string!

If you’re new to bash or don’t fully grasp what arrays are and why one would use them in favor of normal variables, or you’re looking for more explanation and/or examples on arrays, go read the BashGuide’s section on Arrays and BashFAQ/005

Creating Arrays

myarray=( foo bar quux )
Create an array myarray that contains three elements. Arrays are created using the x=(y) syntax and array elements are separated from each other by whitespace.
myarray=( "foo bar" quux )
Create an array myarray that contains two elements. To put elements in an array that contain whitespace, wrap quotes around them to indicate to bash that the quoted text belongs together in a single array element.
myfiles=( *.txt )
Create an array myfiles that contains all the filenames of the files in the current directory that end with .txt. We can use any type of expansion inside the array assignment syntax. The example uses pathname expansion to replace a glob pattern by all the filenames it matches. Once replaced, array assignment happens just like in the first two examples.
myfiles+=( *.html )
Add all HTML files from the current directory to the myfiles array. The x+=(y) syntax can be used the same way as the normal array assignment syntax, but appends elements to the end of the array.
names[5]="Big John", names[n + 1]="Long John"
Assign a string to a specific index in the array. Using this syntax, you explicitly tell Bash at what index in your array you want to store the string value. The index is actually interpreted as an arithmetic expression, so you can easily do math there.
read -ra myarray
Chop a line into fields and store the fields in an array myarray. The read command reads a line from stdin and uses each character in the IFS variable as a delimiter to split that line into fields.
IFS=, read -ra names <<< "John,Lucas,Smith,Yolanda"
Chop a line into fields using , as the delimiter and store the fields in the array named names. We use the <<< syntax to feed a string to the read command's stdin. IFS is set to , for the duration of the read command, causing it to split the input line into fields separated by a comma. Each field is stored as an element in the names array.
IFS=$'\n' read -d '' -ra lines
Read all lines from stdin into elements of the array named lines. We use read's -d '' switch to tell it not to stop reading after the first line, causing it to read in all of stdin. We then set IFS to a newline character, causing read to chop the input up into fields whenever a new line begins.
files=(); while IFS= read -d '' -r file; do files+=("$file"); done < <(find . -name '*.txt' -print0)
Safely read all TXT files contained recursively in the current directory into the array named files. We begin by creating an empty array named files. We then start a while loop which runs a read statement to read in a filename from stdin, and then appends that filename (contained in the variable file) to the files array. For the read statement we set IFS to empty, avoiding read's behavior of trimming leading whitespace from the input, and we set -d '' to tell read to continue reading until it sees a NUL byte (filenames CAN span multiple lines, so we don't want read to stop reading the filename after one line!). For the input, we attach the find command to while's stdin. The find command uses -print0 to output its filenames by separating them with NUL bytes (see the -d '' on read). NOTE: This is the only truly safe way of building an array of filenames from a command's output! You must delimit your filenames with NUL bytes, because it is the only byte that can't actually appear inside a filename! NEVER use ls to enumerate filenames! First try using the glob examples above; they are just as safe (no need to parse an external command), much simpler and faster.
declare -A homedirs=( ["Peter"]=~pete ["Johan"]=~jo ["Robert"]=~rob )
Create an associative array, mapping names to user home directories. Unlike normal arrays, associative array indices are strings (just like the values). Note: you must use declare -A when creating an associative array to indicate to bash that this array's indices are strings and not integers.
homedirs["John"]=~john
Add an element to an associative array, keyed at "John", mapped to john's home directory.

Using Arrays

echo "${names[5]}", echo "${names[n + 1]}"
Expand a single element from an array, referenced by its index. This syntax allows you to retrieve an element's value given the index of the element. The index is actually interpreted as an arithmetic expression, so you can easily do math there.
echo "${names[@]}"
Expand each array element as a separate argument. This is the preferred way of expanding arrays. Each element in the array is expanded as if passed as a new argument, properly quoted.
cp "${myfiles[@]}" /destinationdir/
Copy all files referenced by the filenames within the myfiles array into /destinationdir/. Expanding an array happens using the syntax "${array[@]}". It effectively replaces that expansion syntax by a list of all the elements contained within the array, properly quoted as separate arguments.
rm "./${myfiles[@]}"
Remove all files referenced by the filenames within the myfiles array. It's generally a bad idea to attach strings to an array expansion syntax.
What happens is: the string is only prefixed to the first element expanded from the array (or suffixed to the last, if you attached the string to the end of the array expansion syntax). If myfiles contained the elements -foo.txt and bar-.html, this command would expand into: rm "./-foo.txt" "bar-.html". Notice only the first element is prefixed with ./. In this particular instance, this is handy because rm fails if the first filename begins with a dash. Now it begins with a dot.
(IFS=,; echo "${names[*]}")
Expand the array names into a single string containing all elements in the array, merging them by separating them with a comma (,). The "${array[*]}" syntax is only very rarely useful. Generally, when you see it in scripts, it is a bug. The one use it has is to merge all elements of an array into a single string for displaying to the user. Notice we surrounded the statement with (brackets), causing a subshell: this will scope the IFS assignment, resetting it after the subshell ends.
for file in "${myfiles[@]}"; do read -p "Delete $file? " && [[ $REPLY = y ]] && rm "$file"; done
Iterate over all elements of the myfiles array after expanding them into the for statement. Then, for each file, ask the user whether he wants to delete it.
for index in "${!myfiles[@]}"; do echo "File number $index is ${myfiles[index]}"; done
Iterate over all keys of the myfiles array after expanding them into the for statement. The syntax "${!array[@]}" (notice the !) gets expanded into a list of array keys, not values. Keys of normal arrays are numbers starting at 0. The syntax for getting at a particular element within an array is "${array[index]}", where index is the key of the element you want to get at.
names=(John Pete Robert); echo "${names[@]/#/Long }"
Perform a parameter expansion operation on every element of the names array. When adding a parameter expansion operation to an array expansion, the operation is applied to every single array element as it is expanded.
names=(John Pete Robert); echo "${names[@]:start:length}"; echo "${names[@]:1:2}"
Expand length array elements, starting at index start. Similar to the simple "${names[@]}" but expands a sub-section of the array. If length is omitted, the rest of the array elements are expanded.
printf '%s\n' "${names[@]}"
Output each array element on a new line. This printf statement is a very handy technique for outputting array elements in a common way (in this case, appending a newline to each). The format string given to printf is applied to each element (unless multiple %s's appear in it, of course).
for name in "${!homedirs[@]}"; do echo "$name lives in ${homedirs[$name]}"; done
Iterate over all keys of the homedirs array after expanding them into the for statement. The syntax for getting at the keys of associative arrays is the same as that for normal arrays. Instead of numbers beginning at 0, we now get the keys for which we mapped our associative array's values. We can later use these keys to look up values within the array, just like with normal arrays.
printf '%s\n' "${#names[@]}"
Output the number of elements in the array. In this printf statement, the expansion expands to only one argument, regardless of the amount of elements in the array. The expanded argument is a number that indicates the amount of elements in the names array.

Examples: Basic Structures

Compound Commands

Command Lists

[[ $1 ]] || { echo "You need to specify an argument!" >&2; exit 1; }
We use a command group here because the || operator takes just one command.
We want both the echo and exit commands to run if $1 is empty.
(IFS=','; echo "The array contains these elements: ${array[*]}")
We use parentheses to trigger a subshell here.
When we set the IFS variable, it changes only in the subshell and not in our main script. That saves us from having to reset it to its default after the expansion in the echo statement (which we would otherwise have to do in order to avoid unexpected behaviour later on).
(cd "$1" && tar -cvjpf archive.tbz2 .)
Here we use the subshell to temporarily change the current directory to what’s in $1.
After the tar operation (when the subshell ends), we’re back to where we were before the cd command because the current directory of the main script never changed.
Expressions

((completion = current * 100 / total))
Note that arithmetic context follows completely different parsing rules than normal bash statements.
[[ $foo = /* ]] && echo "foo contains an absolute pathname."
We can use the [[ command to perform all tests that test(1) can do.
But as shown in the example it can do far more than test(1); such as glob pattern matching, regular expression matching, test grouping, etc.
Loops

for file in *.mp3; do openssl md5 "$file"; done > mysongs.md5
For loops iterate over all arguments after the in keyword.
One by one, each argument is put in the variable name file and the loop’s body is executed.

DO NOT PASS A COMMAND’S OUTPUT TO for BLINDLY!
for will iterate over the WORDS in the command’s output; which is almost NEVER what you really want!
for file; do cp “$file” /backup/; done
This concise version of the for loop iterates the positional parameters.
It’s basically the equivalent of for file in “$@”.
for (( i = 0; i < 50; i++ )); do printf "%02d," "$i"; done
Generates a comma-separated list of numbers zero-padded to two digits. (The last character will be a comma, yes; if you really want to get rid of it, you can - but it defeats the simplicity of this example.)
while read _ line; do echo "$line"; done < file
This while loop continues so long as the read command is successful (meaning, so long as lines can be read from the file). The example basically just throws out the first column of data from a file and prints the rest.
until myserver; do echo "My Server crashed with exit code: $?; restarting it in 2 seconds .."; sleep 2; done
This loop restarts myserver each time it exits with a non-successful exit code. It assumes that when myserver exits with a non-successful exit code, it crashed and needs to restart; and that if it exits with a successful exit code, you ordered it to shut down and it needn't be restarted.
select fruit in Apple Pear Grape Banana Strawberry; do (( credit -= 2, health += 5 )); echo "You purchased some $fruit. Enjoy!"; done
A simple program which converts credits into health. Amazing.

Builtins

Dummies

while true; do ssh lhunath@lyndir.com; done
Reconnect on failure.

Declarative

alias l='ls -al'
Make an alias called l which is replaced by ls -al. Handy for quickly viewing a directory's detailed contents.
declare -i myNumber=5
Declare an integer called myNumber initialized to the value 5.
export AUTOSSH_PORT=0
Export a variable on the bash process environment called AUTOSSH_PORT which will be inherited by any process this bash process invokes.
foo() { local bar=fooBar; echo "Inside foo(), bar is $bar"; }; echo "Setting bar to 'normalBar'"; bar=normalBar; foo; echo "Outside foo(), bar is $bar"
An exercise in variable scopes.
if ! type -P ssh >/dev/null; then echo "Please install OpenSSH." >&2; exit 1; fi
Check to see if ssh is available.
Suggest the user install OpenSSH if it is not, and exit.
Input

read firstName lastName phoneNumber address
Read data from a single line with four fields into the four named variables.
Output

echo "I really don't like $nick. He can be such a prick."
Output a simple string on standard output.
printf "I really don't like %s. He can be such a prick." "$nick"
Same thing using printf instead of echo, nicely separating the text from the data.
Execution

cd ~lhunath
Change the current directory to lhunath’s home directory.
cd() { command cd "$@" && echo "$PWD"; }
Inside the function, execute the builtin cd command, not the function (which would cause infinite recursion) and if it succeeds, echo out the new current working directory.
source bashlib; source ./.foorc
Run all the bash code in a file called bashlib which exists somewhere in PATH; then do the same for the file .foorc in the current directory.
exec 2>/var/log/foo.log
Send all output to standard error from now on to a log file.
echo "Fatal error occurred! Terminating!"; exit 1
Show an error message and exit the script.

Unix commands helps

grep
grep is a saviour command for unix users. grep can be used to search for patterns in a file or standard input. The pattern search includes finding the line number of a keyword, counting the number of occurrences of a keyword, and much more, as we will see in the following examples.

Example 1) grep in its simplest form, i.e. without any options, displays all the lines where the pattern occurs.

/home/kapoor $ cat AllPhones.lst
Samsung C5010 Squash : Rs. 3,245
Samsung C5130 : Rs. 3,500
LG GU285 : Rs. 3,750
Nokia C2 01 : Rs. 3,799
Nokia 2730 Classic : Rs. 3,960
INQ Mini 3G : Rs. 4,000
Spice G6500 : Rs. 4,295
Sony Ericsson Cedar : Rs. 5,100
Samsung Star Nano 3G S3370 : Rs. 5,100
Samsung L700 : Rs. 5,300
Spice QT95 : Rs. 5,500
nokia 7230 : Rs. 5,700
Sony Ericsson J105 Naite : Rs. 5,800
Samsung Metro 3G S5350 : Rs. 6,000

/home/kapoor $ grep Nokia AllPhones.lst
Nokia C2 01 : Rs. 3,799
Nokia 2730 Classic : Rs. 3,960

To make a case insensitive search, use grep with the -i option,
i.e. grep -i Nokia AllPhones.lst
Now the result would also include the line nokia 7230 : Rs. 5,700

Example 2)
It might be required for you to count the number of lines where a particular keyword occurs; use the -c option.
/home/kapoor $ grep -c -i Samsung AllPhones.lst
5

Remember that options such as -i must be placed before the search keyword.

Example 3)
You might be searching for something while excluding a particular word; then use the -v option.
/home/kapoor $ grep -v Samsung AllPhones.lst
LG GU285 : Rs. 3,750
Nokia C2 01 : Rs. 3,799
Nokia 2730 Classic : Rs. 3,960
INQ Mini 3G : Rs. 4,000
Spice G6500 : Rs. 4,295
Sony Ericsson Cedar : Rs. 5,100
Spice QT95 : Rs. 5,500
nokia 7230 : Rs. 5,700
Sony Ericsson J105 Naite : Rs. 5,800

Example 4)
When you wish to search for more than one keyword, use the -e option of grep before each keyword.

/home/kapoor $ grep -i -e LG -e spice AllPhones.lst
LG GU285 : Rs. 3,750
Spice G6500 : Rs. 4,295
Spice QT95 : Rs. 5,500

Similarly, if you require to search for all other phones other than LG and Spice, use
grep -v -i -e LG -e spice AllPhones.lst

Example 5)
To accomplish a search within already-matched lines, use pipes. Suppose you are planning to buy
a Nokia phone other than a Classic one; use grep as shown.
/home/kapoor $ grep -i nokia AllPhones.lst | grep -v -i Classic
Nokia C2 01 : Rs. 3,799
nokia 7230 : Rs. 5,700

What makes grep such a great command for searching out these results for you is that it does it amazingly fast.

Example 6)
You have seen in Example 4) how a multiple keyword search is done using the -e option, but it becomes very lengthy and untidy to use -e after each keyword if there are hundreds of things to be searched.
In such cases you can use grep with the -f option. The lines below show how.

Save the list of phones in a file.
/home/kapoor $cat >required_phones
Sony Ericsson
INQ
Nokia
^D
/home/kapoor $grep -f required_phones AllPhones.lst
Nokia C2 01 : Rs. 3,799
Nokia 2730 Classic : Rs. 3,960
INQ Mini 3G : Rs. 4,000
Sony Ericsson Cedar : Rs. 5,100
Sony Ericsson J105 Naite : Rs. 5,800

Similarly you can use -v -f to display all the lines except those matching the patterns in the file.

The situation is like this: you have a keyword and you want to search for it in a large number of
files in a path. The following examples (7-9) show various techniques and scenarios.

Example 7)
To get only a list of the files which contain your keyword, use the -l option of grep.
The command below lists all the scripts which use awk.
/home/kapoor $grep -l "awk" *.sh
Pattern-gen.sh
Large-box.sh

Example 8)
If grep is used to search in a group of files for a pattern, it gives the following default output.
/home/kapoor $grep sed *.sh
pgfile-2.sh:sed -n 1,$p $h2_file.txt
pgfile-2.sh:sed 's/;//g' $h3_file.txt
daily-b1.sh:sed 's/-/_/g' prime_f.js

If you do not want grep to display the matched filenames and only want the matching lines, use grep -h.
grep -h sed *.sh gives you the output:
sed -n 1,$p $h2_file.txt
sed 's/;//g' $h3_file.txt
sed 's/-/_/g' prime_f.js

Example 9)
To search for patterns recursively among the files of a directory and its subdirectories, use grep -r.

/home/kapoor $grep -l -r awk *
Pattern-gen.sh
Large-box.sh
Shell1/getfiles_2.sh
Shell1/remainder.sh
Shell1/byte-logs.sh
Shell1/new-scripts/byte-logs.sh
Shell2/vault-value.sh

Example 10)
In shell scripts you may need to search for a pattern without the command writing anything to
standard output; you only need to decide whether the search succeeded. Use grep -q.
….. #lines of the script
grep -q win victory-status.log
if [ $? -eq 0 ]
then
flag=1
else
flag=0
fi
..
..
Here grep does not give any output, but the exit status is 0 if the pattern is found and non-zero if it is not ($? stores the exit status of the previous command).
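Since if tests a command's exit status directly, the same logic can also be written more compactly (a small sketch using the same hypothetical log file):

if grep -q win victory-status.log
then
flag=1
else
flag=0
fi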

Example 11)
You can also find out the line numbers of the matched patterns through grep by using the -n option.

/home/kapoor $ grep -n error status-run.log
40:error cannot open file gt1.txt
160:error cannot open file uu6.txt
178:parse error

alias

alias

alias is one of those commands for people who want to be lazy. You can use alias
in situations where it is too time consuming to type the same commands again and again.
But avoid aliasing commands like rm, kill etc.

Example 1)
To always use vim instead of vi, and to make sure that whenever
there is a system crash, network failure etc. during editing
all the contents are recovered, use an alias as follows.
/home/viru$ alias vi='vim -r'

To make this happen every time you work after logging in, save the above line in
your .profile,
i.e. in the file $HOME/.profile

After saving it in .profile, do not forget to run it:
/home/viru$ . $HOME/.profile
The leading dot is necessary; it runs the file in the current shell.

Example 2) After running .profile, to view all the aliases that are currently set, just enter alias.

/home/viru$ alias
alias ls='ls -lrt'
alias psu='ps -fu $LOGNAME'
alias df='df -gt'
alias jbin='cd /home/viru/utils/java/bin'
alias jlib='cd /home/viru/utils/java/lib'

seems viru is so lazy..!!

Example 3)
To prevent the effect of an alias defined for a word, token or command, use unalias.
/home/viru$ unalias jbin
/home/viru$ jbin
jbin: not found

There is another way to accomplish this: quoting. If any part of the command word is quoted, alias expansion is suppressed for it. The following lines show how.
/home/viru$ alias same
alias same='/opt/bin/samefile.exe'
/home/viru$ same''
same: not found
The effect of the alias was nullified for the word same. Double quotes after an aliased command name work the same way (e.g. with alias df='df -gt', typing df"" would run /usr/bin/df instead of df -gt).

Aliases which are defined in your .profile or at the command line do not work inside scripts, so make sure not to rely on aliased words in a shell script; use their actual values instead.
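If you need the same shorthand inside a script, define a shell function instead; functions, unlike aliases, do work in scripts. A minimal sketch based on the psu alias above:

psu() { ps -fu "$LOGNAME"; }  # unlike alias psu='ps -fu $LOGNAME', this works inside scripts
psu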

df

df
The df command gives you information about disk space. It is a very good command for
system administration and file system monitoring. You can also view various parameters
like the total number of files in a particular file system and the maximum allocated values.

Reducing the space occupied by a file system helps to boost system performance, by making
the applications and utilities that use the file system take less time to access data,
so running df regularly helps.

Example 1)
df comes with various options like -g, -m and -k, which let you view file system sizes in gigabyte, megabyte or kilobyte blocks respectively; including t also gives the total allocated space.

Consider the output of df -gt.

/home/Prod$ df -gt
Filesystem GB blocks Used Free Used Mounted on
/dev/hd4 0.75 0.35 0.40 48% /
/dev/hd2 5.50 3.68 1.82 67% /usr
/dev/hd9var 4.00 0.63 3.37 16% /var
/dev/hd3 4.62 0.71 3.91 16% /tmp
/dev/hd1 8.25 3.96 4.29 48% /home
/dev/hd10opt 4.50 1.30 3.20 29% /opt
/dev/lv00 0.12 0.00 0.12 4% /var/adm/csd
/dev/fslv00 5.00 0.03 4.97 1% /utilities
/dev/ora10g_lv 15.00 5.18 9.82 35% /ora10g

The output of the command might not look organised on some unix systems, i.e. the alignment does not seem right.
Use awk for help.

/home/Prod$ df -gt | awk 'BEGIN{aa="Filesystem";bb="GB blocks";cc="Used";dd="Free";ee="Used";ff="Mounted on";printf("%-40s %9s %6s %6s %6s %-11s\n",aa,bb,cc,dd,ee,ff);} {if($1!="Filesystem"){printf("%-40s %9s %6s %6s %6s %-9s\n",$1,$2,$3,$4,$5,$6) } }'

Filesystem GB blocks Used Free Used Mounted on
/dev/hd4 0.75 0.35 0.40 48% /
/dev/hd2 5.50 3.68 1.82 67% /usr
/dev/hd9var 4.00 0.63 3.37 16% /var
/dev/hd3 4.62 0.71 3.91 16% /tmp
/dev/hd10opt 4.50 1.30 3.20 29% /opt
/dev/lv00 0.12 0.00 0.12 4% /var/adm/csd
/dev/fslv00 5.00 0.03 4.97 1% /utilities
/dev/hd1 8.25 3.96 4.29 48% /home
/dev/ora10g_lv 15.00 5.18 9.82 35% /ora10g

Now, that seems to be a much better organised output. Use this in a script, and keep an
alias named df pointing to that script.

Example 2)
To constantly monitor disk space you can make a script like the following and run it regularly.
/home/Prod$ cat disk-monitor.sh
#!/usr/bin/ksh
for Pspace in `df -gt | grep -v -e Filesystem -e /ora | awk '{print $5}' | tr -d '%'`
do
if [ $Pspace -gt 80 ]
then
alarmfunc #call the alarm function.
fi
done
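To actually run such a monitor constantly, one easy option is cron. A sample crontab entry (the path and the 15-minute interval are assumptions for illustration) would be:

0,15,30,45 * * * * /home/Prod/disk-monitor.sh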

Example 3)
df is also helpful in knowing the number of files present in a mount point, through inodes. Every file system mount point has an inode limit assigned to it, which gives the maximum number of files that can be placed in that file system. When the number of files grows too large, you get a 'parameter list is too long'
error with commands that use wildcards as filename arguments.

Every file consumes one inode; directories consume inodes as well (a newly created directory starts with a link count of 2, because of its '.' entry).

To see the inode values along with other attributes, use df -i, and of course with awk filtering (use different width values before %s for proper alignment).

/home/Prod$ df -i
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd0 19368 9976 48% 4714 5% /
/dev/hd1 24212 4808 80% 5031 19% /usr
/dev/hd2 9744 9352 4% 1900 4% /site
/dev/hd3 3868 3856 0% 986 0% /tmp
/dev/hd1 56890 23409 41% 2345 40% /home

wc

wc
wc is a very useful utility which can count the lines, words, characters and bytes in a plain text file
or on standard input. In shell scripts, it helps to store the total line count of a file, or of the output of
a command, in a variable for subsequent use. The examples below show how these things can be achieved.

Example 1)
To get the counts of lines, words and bytes of a file use wc as follows.

/home/mark$ wc Bulk-SMS-file.txt
45 140 990 Bulk-SMS-file.txt

The three numbers are, from left to right, the number of lines (45), words (140) and bytes (990).

The individual values can be retrieved as fields through awk, but wc provides options:

wc -l : Total lines

wc -w : Total words

wc -c : Total bytes

wc -m : Total characters (some systems also provide -k for this)

wc considers a word to be a string of characters of non-zero length delimited by white space; lines are counted by newline characters.

Example 2)
Using wc on multiple files.

/home/mark$ ls file_list-201111*
file_list-20111105.txt
file_list-20111113.txt
file_list-20111120.txt
file_list-20111128.txt

/home/mark$ wc -l file_list-201111*
98164 file_list-20111105.txt
531665 file_list-20111113.txt
527303 file_list-20111120.txt
564207 file_list-20111128.txt
1721339 total

This gives you the number of lines in every file as well as the sum of all the individual line counts.

The same command can be run without the -l option to get the counts of the other parameters and their totals.

Example 3)
wc can also count these values from standard output through pipes.

/home/mark$ echo "I Love You" | wc
1 3 11

Hmm! wc achieved something cool this time…

Similarly you can use wc -l as an alternative to grep -c:
/home/mark$ grep -c tremendous Director-speech.txt

is the same as

/home/mark$ grep tremendous Director-speech.txt | wc -l
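When you only want the number itself (for example, to store it in a variable), feed the file to wc on standard input; wc then omits the filename from its output. A small sketch (the count shown is made up for illustration):

/home/mark$ wc -l < Director-speech.txt
120
/home/mark$ lines=`wc -l < Director-speech.txt`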

Example 4)
wc -l may consume a lot of time counting the lines when the file is huge.
If we are sure that every line in the file contains an equal number of characters, there is a faster method.

Assume that you have such a continuous file, i.e. a file which has the same number of characters (or bytes) in each line.
/home/mark$ ls -lrt All_Customers.lst
-rw-r–r– 1 mark Administ 1321655460912 Dec 16 14:37 All_Customers.lst

/home/mark$ wc -l All_Customers.lst
23456898

The following steps explain the method.

Step1)
save the first thousand lines of the file in a separate file.
/home/mark$ head -1000 All_Customers.lst > All_Customers_1000.lst

Step2)
get the ratio of the total size of the file to the line count (1000) of the sample.
/home/mark$ ls -lrt All_Customers_1000.lst
-rw-r–r– 1 mark Administ 56344000 Dec 16 14:37 All_Customers_1000.lst

/home/mark$ fsize=`ls -lrt All_Customers_1000.lst | awk '{print $5}'`
/home/mark$ fratio=`expr $fsize / 1000`
/home/mark$ echo $fratio
56344

Now, fratio actually stores the number of bytes per line.

Step3)
Now divide the total size of All_Customers.lst by fratio.
/home/mark$ tot_lines=`expr 1321655460912 / $fratio`
/home/mark$echo $tot_lines
23456898

which is the same as calculated by wc -l.
All these steps can be used in a shell script for any given file of this type.

If there are multiple files of this type (such as file_list-201111*) and all have the same fratio (defined above), then you can write a script to get output similar to that of
wc -l file_list-201111*

The script is as shown.
fratio=313
tot_val=0
for stream in `ls file_list-201111*`
do
str=`ls -lrt $stream | awk '{print $5 " + "}' | tr -d "\n" | sed 's/$/0/'`
val=`echo "($str)/$fratio" | bc`
echo "$val $stream"
tot_val=`expr $tot_val + $val`
done
echo "$tot_val total"

Here, a ' + ' symbol is placed between the sizes of the files in the ls output and a 0 is appended at the end; the result is passed to the bc command as an expression whose value (the sum) is divided by the
ratio, just as explained in the 3 steps above.

tail

tail

The tail command is one of my favourite commands because of its usefulness in viewing the last few lines of a file and also what is being written into a file (using the -f option). It can also be used to watch the progress of cp, mv, tar and other commands.

tail by default shows the last ten lines of a file; it can be made to view the last n lines, where n is a number.

Example 1)
To view the last 1000 lines of a huge script page wise:
/home/k109$ tail -1000 valuefind.sh | pg
……………..
……………… #lines of scripts
………..
.
.
.
……………..
Standard input <--- #here pg waits and asks you to press a key before it displays the next page.

This is helpful in analysing a script or debugging it page wise.

Example 2)
To view the contents of a file while it is being written, use the -f option. If an application is writing continuously into a file, this option lets you see what is being written. One disadvantage of this option, however, is that it cannot be used inside scripts: once the application stops writing to the file, tail does not exit on its own and requires some signal like stop or kill to come out.

Suppose a file is being written by the cp command; you can use tail -f to know
exactly what is being written.

Example 3)
To view the contents of a file starting from a particular line, use tail +n.
Suppose you have a file worldsport.txt

/home/k109$cat worldsport.txt
The game is played on a rectangular field of grass or green artificial turf, with a goal in the middle of each of the short ends. The object of the game is to score by driving the ball into the opposing goal. In general play, the goalkeepers are the only players allowed to touch the ball with their hands or arms, while the field players typically use their feet to kick the ball into position, occasionally using their torso or head to intercept a ball in midair. The team that scores the most goals by the end of the match wins. If the score is tied at the end of the game, either a draw is declared or the game goes into extra time and/or a penalty shootout, depending on the format of the competition.

If it is required to view the file starting from the second line, use

/home/k109$tail +2 worldsport.txt
short ends. The object of the game is to score by driving the ball into the opposing goal. In general play, the goalkeepers are the only players allowed to touch the ball with their hands or arms, while the field players typically use their feet to kick the ball into position, occasionally using their torso or head to intercept a ball in midair. The team that scores the most goals by the end of the match wins. If the score is tied at the end of the game, either a draw is declared or the game goes into extra time and/or a penalty shootout, depending on the format of the competition.
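tail can also be combined with head to print an arbitrary range of lines. For example, to view lines 35 to 46 of a file (the same range printed with sed later in this article), a small sketch:

/home/k109$ tail +35 general-dates.prt | head -12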

sort

sort
sort, as the name suggests, sorts a file or the output of a command. Various options determine the sort criteria, which are called sort keys. If multiple files are passed as parameters, sort sorts their concatenated contents and writes one combined output.
Sorting is done in ascending lexicographic order, i.e. the way entries appear in a dictionary. If a file contains both numbers and letters, the default order follows the collating sequence (in the C locale, lines beginning with digits come before alphabetic lines).

Example1)
Consider a file containing values as shown.

/home/jones$ cat flowers.txt
rose
lily
tulip
marigold
hibiscus
Chrysanthemum
/home/jones$ sort flowers.txt
Chrysanthemum
hibiscus
lily
marigold
rose
tulip

If you want to sort it in reverse order, use
/home/jones$ sort -r flowers.txt
tulip
rose
marigold
lily
hibiscus
Chrysanthemum

Example2)
If you want the sort to be case insensitive, sort -f must be used.
/home/jones $ cat flowers.txt
rose
lily
Chrysanthemum
tulip
marigold
Rose
hibiscus
chrysanthemum

The default sort (without any option) would give
/home/jones $ sort flowers.txt
Chrysanthemum
Rose
chrysanthemum
hibiscus
lily
marigold
rose
tulip

Here it sorted the words starting with uppercase first and then sorted the lowercase ones. Now use sort -f.
/home/jones $ sort -f flowers.txt
Chrysanthemum
chrysanthemum
hibiscus
lily
marigold
Rose
rose
tulip

Example3)
A file may contain duplicate lines, and if you want to remove the duplicates while sorting, use sort -u.
/home/jones $ grep error cron.log
error code 45:invalid time
error code 35:Invalid name
error code 45:invalid time
error code 25:Invalid email-id
error code 25:Invalid email-id
error code 35:Invalid name

/home/jones $ grep error cron.log | sort -u
error code 25:Invalid email-id
error code 35:Invalid name
error code 45:invalid time

Example 4)
You may require to sort based on a particular field when fields are separated by a delimiter.
/home/jones $ cat detailed-list.csv
dolphin|mammal|12
giraffe|mammal|7
kingfisher|aves|3
moth|insecta|1
shark|fish|6
viper|reptile|2

Now, the output of the default sort command is
dolphin|mammal|12
giraffe|mammal|7
kingfisher|aves|3
moth|insecta|1
shark|fish|6
viper|reptile|2
It sorted on the basis of the first field (separated by '|') in alphabetic order. Suppose instead you need to sort it based on another column.
/home/jones$ sort -t "|" +1 detailed-list.csv
kingfisher|aves|3
shark|fish|6
moth|insecta|1
dolphin|mammal|12
giraffe|mammal|7
viper|reptile|2

Here -t "|" tells the sort command to do the sorting on fields delimited by the "|" character. If you do not use the -t option, a sequence of space characters is taken as the default delimiter. "+1" instructs sort to ignore the first field, i.e. to sort from the second field.

Similarly, if you wanted to sort based on the third column, just using +2 instead of +1 would not work in the given example. The output of that sort command would be as follows.

/home/jones$ sort -t "|" +2 detailed-list.csv
moth|insecta|1
dolphin|mammal|12
viper|reptile|2
kingfisher|aves|3
shark|fish|6
giraffe|mammal|7
It sorted the third column according to its alphabetic value and not its arithmetic value. To sort numerically you must use the -n option.

/home/jones$ sort -n -t "|" +2 detailed-list.csv
moth|insecta|1
viper|reptile|2
kingfisher|aves|3
shark|fish|6
giraffe|mammal|7
dolphin|mammal|12
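Note that the +pos notation used above is obsolescent; POSIX versions of sort express the same key with -k. The numeric sort on the third field can equivalently be written as the following sketch:

/home/jones$ sort -t "|" -k3,3n detailed-list.csv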

Example5)
Sorting based on fields and character positions can also be achieved using sort with the -k option. Consider a file containing numbers.
/home/jones$ cat num-luck
6758 987 456
2586 324 934
0437 235 417
2812 624 208

Suppose you want to sort this list such that the sort key starts at the 3rd character of the 1st field and ends at the 4th character of the 1st field. Fields here are separated by one or more spaces; 'columns' here refer to character positions.

/home/jones$ sort -k1.3,1.4 num-luck
2812 624 208
0437 235 417
6758 987 456
2586 324 934

To sort on the key from the 2nd character of the 1st field to the 3rd character of the 2nd field, in reverse order:

/home/jones$ sort -k1.2,2.3r num-luck
2812 624 208
6758 987 456
2586 324 934
0437 235 417

Similarly, to sort lines based on the 1st and 3rd fields, use
sort -k1 -k3

Example 6)
You may require to sort a particular file and rewrite it with its own sorted contents. A simple command of the form
sort filename > filename will not work and is dangerous: the redirection truncates the file to empty before sort ever reads it. Use sort with the -o option.

/home/jones$ sort -o flowers.txt flowers.txt

The syntax is sort -o outputfile inputfile; the output file may safely be the input file itself.

Example 7)
sort uses a lot of temporary space while sorting huge files, and by default it uses the /tmp directory. If sufficient space is not allocated to /tmp, the command aborts abruptly; so sort provides the -T option, by which you can use an alternative directory for storing temporary files.

The sort command sort -t "|" +1 detailed-list.csv in example 4 can be written as
sort -t "|" +1 -T /backup/jones detailed-list.csv
This will use the /backup/jones directory for storing temporary files.

sed

sed
sed, or stream editor, is an editing utility that modifies the lines of a file (or standard input) as specified by an instruction string and writes the result to standard output. sed can thus perform many of the functions of commands like tr, grep and awk, and it has functionality and options unique to itself.
Though the applications of sed are enormous, only some of the important uses of the command are discussed here, with examples.
Example 1)
If you require to print the contents of a file starting from a particular line number and ending at another, use sed as follows.
/home/UAT$ sed -n '35,46p' general-dates.prt
This command prints the contents of the file general-dates.prt starting from its 35th line up to its 46th.
To print only the 35th line of a file, use
/home/UAT$ sed -n '35p' general-dates.prt
More examples
· To print all the lines of a file, from its 1st line to the last line:
sed -n '1,$p' general-dates.prt
· To print all the lines of a file between the line numbers stored in the variables $start_line and $end_line (note that a hyphen is not valid in a variable name):
sed -n "${start_line},${end_line}p" general-dates.prt
· To print all the lines of the file except the 5th line:
sed '5d' general-dates.prt
· To print all the lines of the file except those between the 8th and the 40th (i.e. the 8th and 40th lines are also not printed):
sed '8,40d' general-dates.prt

Example2)
sed can also search for patterns in a file or standard input just like grep, but much more sophisticated pattern searches can be done, as shown in the following examples.
· A grep command can be implemented using sed as follows.
Consider the grep command
grep industry general-dates.prt
Alternatively, sed can perform the same using
sed -n '/industry/p' general-dates.prt

· To print all the lines of a file between the first occurrences of two patterns:
sed -n '/industry/,/station/p' general-dates.prt

This prints all the lines starting from the line which has the first occurrence of 'industry' down to the line with the first occurrence of 'station'.

· To print all the lines of a file from its first line to the line of the first occurrence of 'industry':
sed -n '1,/industry/p' general-dates.prt

· To print all the lines starting from the line with the first occurrence of the pattern 'industry' down to the last line of the file:

sed -n '/industry/,$p' general-dates.prt

· To print all the lines starting from the line containing the pattern string whose value is stored in a variable $ptrn, down to the 15th line of the file:

sed -n '/'$ptrn'/,15p' general-dates.prt

The pattern space can contain more complex regular expressions.

Example3)
sed can replace occurrences of a pattern with a new string and write the result to standard output.
· To replace the first occurrence on each line of 'teacher' with 'tutor' in the file groupsList.xml:
sed 's/teacher/tutor/' groupsList.xml
· To replace all the occurrences of 'teacher' with 'tutor' in the file groupsList.xml:
sed 's/teacher/tutor/g' groupsList.xml
Here g represents global.
· To replace all the occurrences of a string stored in a variable ${string_pattern} with ${replace_string} in a file:
sed 's/'${string_pattern}'/'${replace_string}'/g' groupsList.xml

· To add a "," character between the original pattern and the replacement string, defined by ${string_pattern} and ${replace_string} respectively:
sed 's/'${string_pattern}'/&,'${replace_string}'/g' groupsList.xml
The single quotes enclosing the variables are necessary.
In the replacement, & stands for the matched text, which matters when the string stored in ${string_pattern} is a regular expression.
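As a tiny illustration of &, the following wraps the first run of digits on the line in parentheses (the input string is made up):

/home/UAT$ echo "call 5551234 now" | sed 's/[0-9][0-9]*/(&)/'
call (5551234) now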

Example 4)
sed can be used to edit a particular line of a file. Amazing, isn't it?
/home/UAT$ cat dir_struct
1. 35467 vdn /home/vd2
2. 46788 ghi /var/gh1/gh
3. 89078 bjk /home/vd2
4. 56890 lod /home/lod/inter
5. 33456 bhj /home/bhind
6. 45790 krk /myhome/krk

Now if I want to change the directory structure on line number 4 from /home to /var, sed can be used as follows.

/home/UAT$ sed '4 s/home/var/' dir_struct
1. 35467 vdn /home/vd2
2. 46788 ghi /var/gh1/gh
3. 89078 bjk /home/vd2
4. 56890 lod /var/lod/inter
5. 33456 bhj /home/bhind
6. 45790 krk /myhome/krk

Here we substituted home with var only on the 4th line. If you need to substitute /home/vd2 with /var/vd1, you would write the command as follows.
sed '4 s/\/home\/vd2/\/var\/vd1/' dir_struct
This looks a little ugly because we used '/' as the delimiter, and to differentiate the '/' in /home/vd2 and /var/vd1 a backslash '\' had to be placed before each one.
Instead of '/' as the delimiter, other characters like : | { # ! etc. can be used in such scenarios.
The above sed can be written using # as follows.

sed '4 s#/home/vd2#/var/vd1#' dir_struct

Example 5)
You have many occurrences of a pattern in a line and you want to replace just the nth occurrence, or all occurrences from the nth onward. Use a number after the closing "/".
/home/jade$ echo "1st 2nd 3rd 4th 5th 6th" | sed "s/[0-9]th//2"
1st 2nd 3rd 4th 6th

The command above replaced only the 2nd occurrence of a digit followed by "th" (the "5th").
If you need to replace all occurrences starting from the 2nd, then use the following command.

/home/jade$ echo "1st 2nd 3rd 4th 5th 6th" | sed "s/[0-9]th//2g"
1st 2nd 3rd 4th

rsh

rsh

rsh is used to execute commands on a remote machine.
The rsh command executes a command or a program on another host from the current working machine, without having to log in to that remote machine and without entering a password as in ssh. You can run any unix command or shell script of a remote host.

Prerequisites
rsh command cannot be executed without making these changes.

1) Make a list of the users of both the local and remote hosts, and their $LOGNAMEs, who are going to use rsh to run commands or scripts.

2) Open the required ports that are going to be used for rsh and rcp.

3) Make an entry for the remote host in the /etc/hosts file of the local host, and an entry for the local host in /etc/hosts of the remote host. This change can be done by the root user; if you do not have root access, contact your unix admin.

A sample entry is as shown. Assume that your hostname is mahaprana and the IP address is 158.0.125.23; then the remote host's /etc/hosts file must contain the line
158.0.125.23 mahaprana #backup server

The comment (3rd column) is optional.
Similarly, make an entry for the remote host in the /etc/hosts file of the local host. If the remote hostname is alpaprana and its IP address is 158.0.125.45:
158.0.125.45 alpaprana

4) The list of users is to be placed in the $HOME/.rhosts file of both the local and remote hosts, for every user.
E.g.: if the local users are divya and ajay and the remote users are bhaskar and ajay,
make entries as shown for the respective users on the local host.
/home/divya$ hostname
mahaprana
/home/divya$ echo $HOME
/home/divya
/home/divya$ cat .rhosts
158.0.125.45 bhaskar
158.0.125.45 ajay

/home/ajay$ echo $HOME
/home/ajay
/home/ajay$ cat .rhosts
158.0.125.45 bhaskar
158.0.125.45 ajay

And entries shown below in the remote host.

/home/ajay$ hostname
alpaprana
/home/ajay$ cat $HOME/.rhosts
158.0.125.23 ajay
158.0.125.23 divya

/home/bhaskar$ cat $HOME/.rhosts
158.0.125.23 ajay
158.0.125.23 divya

After these changes are made, you can run the rsh command as shown in following examples.

Example 1)
Suppose you are working on the local host and the local and remote user names are the same.
/home/ajay$ echo $LOGNAME
ajay
/home/ajay$ rsh 158.0.125.45 "ls inter*"
interbranch.csv.gz
interschool.csv.gz
inter_college.txt
inter-dept.lst

This runs the command 'ls inter*' on the remote host 158.0.125.45 as user ajay.
By default the commands within the double quotes run in user ajay's home directory, defined by the $HOME variable.
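The standard output of the remote command comes back to the local host, so it can be captured like the output of any local command. A small sketch reusing the same host:

/home/ajay$ remote_count=`rsh 158.0.125.45 "ls inter* | wc -l"`
/home/ajay$ echo $remote_count
4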

Example 2)
To run commands or processes as a different remote user:

/home/divya$ rsh 158.0.125.45 -l bhaskar "grep -i beatles Albumlist/old/Film"
Here comes the sun – BEATLES
Beatles-Twist and shout

To run multiple commands on the remote host, use semicolons to separate them.
E.g., to run the above command with Albumlist/old as the working directory:
rsh 158.0.125.45 -l bhaskar "cd Albumlist/old; grep -i beatles Film"
Example 3)
You cannot use the environment variables of the remote user directly inside rsh: with double quotes, the variables are expanded by the local shell before the command is sent. Use single quotes, and initialize the remote variables by running .profile first.

/home/ajay$ rsh 158.0.125.45 -l bhaskar 'echo $ORACLE_HOME; sqlplus /'
ksh: sqlplus not found
This did not work, because .profile was never run.
/home/ajay$ rsh 158.0.125.45 -l bhaskar '. $HOME/.profile 1> /dev/null; echo $ORACLE_HOME; sqlplus /'
/oracle/app/product
SQL>
The 1> /dev/null redirects the standard output of running .profile to /dev/null.

rm

rm

rm is used to remove files or directories. It can be used interactively, asking for user input while removing multiple files or the contents of a directory.

Great care has to be taken before running the rm command: there is no way to recover the removed files unless they were backed up on tape.

Example 1)
To remove multiple files with a common keyword.

/home/jenny$ ls City_DETAIL*
City_DETAIL_MUM.lst
City_DETAIL_DEL.lst
City_DETAIL_BLR.lst
City_DETAIL_KOL.lst
City_DETAIL-ALL
..
To remove all these files except City_DETAIL-ALL enter
/home/jenny$rm City_DETAIL_???.lst

If a file does not belong to the current user, or if the file does not have write permissions, the
rm command asks you before removing it, i.e.

rm: remove City_DETAIL_KOL.lst?

If you do not want to answer this question for every file, use the -f option to remove forcefully.

rm -f City_DETAIL_???.lst

Example 2)
To remove directories and files use the -rf option.
/home/jenny$ rm -rf FILMS
This will remove all the contents of FILMS, its subdirectories, and the directory itself. This command is also the same as
cd /; rm -rf home/jenny/FILMS

Remember that you cannot remove a directory while it is your working directory, or while you are inside one of its subdirectories.
e.g. both of these don't work:

/home/jenny/FILMS$ rm -rf /home/jenny/FILMS #Does not work
/home/jenny/FILMS/K3g$ rm -rf /home/jenny/FILMS #Does not work

Example 3)
To view the files as they are being removed, when files are removed in bulk:

Suppose you have a file containing the list of files to be removed. Then
/home/jenny$ head file_to_remove_list
Jj12_2.txt
Df003.log
Awer_cat.lst
Sdifre.post
Mail34.log
TAB/Hourwise_list.doc
Sequence.log
Created/Sqlldr.bad
OLD_FILE/Jenny_old.log
Hearts_jai.mkv

/home/jenny$ sed 's/^/rm -ef /' file_to_remove_list > file_to_remove_list.sh
/home/jenny$ chmod +x file_to_remove_list.sh
/home/jenny$ ./file_to_remove_list.sh
removing Jj12_2.txt
removing Df003.log
removing Awer_cat.lst
removing Sdifre.post
removing Mail34.log
removing TAB/Hourwise_list.doc
removing Sequence.log
removing Created/Sqlldr.bad
removing OLD_FILE/Jenny_old.log
removing Hearts_jai.mkv
/home/jenny$
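Where xargs is available, the same bulk removal can be done without generating a script; a sketch, assuming one filename per line and no embedded spaces in the names:

/home/jenny$ xargs rm -ef < file_to_remove_list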

Example 4)
To remove files interactively use -i option.
/home/jenny$ rm -ir FILMS
rm: remove FILMS/Schindler's List? y
rm: remove FILMS/Chinatown? y
rm: remove FILMS/Taxi Driver? y
rm: remove FILMS/The Sixth Sense? n
/home/jenny$

y and n are the inputs given by you, to remove or not to remove respectively.

rcp

rcp
rcp is used to copy or send files or directories between two hosts. It can be used as an alternative to ftp and related protocols like sftp, but no password is required to send files, unlike ftp.
For rcp to work, various prerequisites are required, which are the same as for the rsh command (refer to rsh above).

Assume your hostname is Prod-serv with IP 192.158.0.140, and the remote hostname is
Test-serv with IP 192.158.0.120. The local user is kumar and the remote user is warren in all of the examples.

Example 1)
To send a file from a local host to remote host.

/home/kumar$ rcp Audit_document.txt warren@192.158.0.120:

This sends the local file Audit_document.txt to the host Test-serv (you can use Test-serv in place of the IP address) and places it in the $HOME directory of warren, with the same name Audit_document.txt and with warren as the owner of the file.

Example 2)
To receive a file from remote host to local host.
/home/kumar$ rcp warren@192.158.0.120:global/Meal_pass2.prt .
This copies the file Meal_pass2.prt in the /home/warren/global directory of 192.158.0.120
to the current directory (/home/kumar, denoted here by '.' or dot).

rcp, just like cp, changes the modification time (time stamp) of the destination file to the current time. To retain the original time stamp, use rcp -p. The above example can be written as

rcp -p warren@192.158.0.120:global/Meal_pass2.prt .

Example 3)
To copy directories between hosts use rcp with –r option.

/home/kumar$ ls -ld localdir
drwxr-xr-x 2 kumar tools 256 Jul 14 2009 localdir

/home/kumar$ rcp -r localdir warren@192.158.0.120:
This copies the entire directory localdir (with its subdirectories) to the $HOME directory of warren@192.158.0.120.

More examples:
To copy two files booked_ticket1.txt, booked_ticket2.txt to the /var/docs directory of 192.158.0.120:
rcp booked_ticket1.txt booked_ticket2.txt warren@192.158.0.120:/var/docs
or
rcp booked_ticket?.txt warren@192.158.0.120:/var/docs

To copy the file pattern-styles.tar.gz to the /home/warren/images directory of 192.158.0.120 as style1.tar.gz, with the same time stamp:
rcp -p pattern-styles.tar.gz warren@192.158.0.120:images/style1.tar.gz

ps

ps
ps displays the processes on a unix system. Every command, script or application is considered a process in unix. All processes run until the command or application completes or is stopped. Each process has a unique process id (PID).
Its output is similar to that of the Windows tasklist command.
Example 1)
To list only your own processes (i.e. all the commands and applications run in the current session):
/home/priya$ ps
PID TTY TIME CMD
217140 pts/2 0:00 -ksh
594062 pts/2 0:00 -ksh
652138 pts/2 2:03 java webfile.java
820120 pts/2 0:00 ps

Example 2)
To list all the processes running on the server page wise, use the -ef option:
/home/priya$ ps -ef | pg
UID PID PPID C STIME TTY TIME CMD
priya 184434 1 24 05:32:34 – 0:50 java /usr/bin/java/start-web.java
root 213146 1 0 05:32:31 – 1:21 java /usr/bin/dsmc
matt 327814 893836 0 12:28:09 – 0:00 sleep 120
root 352266 1 0 05:32:38 – 0:14 java /usr/bin/ftpd
priya 368734 1 0 05:32:42 – 0:07
priya 372912 1 0 05:32:43 – 0:10 java /usr/bin/java/start-web3.java
scripts 409806 1 0 05:32:39 – 0:13 java /usr/bin/java/start-web2.java -d final -f clear
scripts 418030 1 0 05:32:35 – 1:08 java /usr/bin/java/start-web.java -d final -f clear
scripts 430084 1 0 05:32:23 – 0:02 /usr/bin/perl -w getfile.cgi
scripts 442592 430084 0 12:17:38 – 0:00 sleep 900
scripts 454810 1 3 05:32:29 – 1:42 java /usr/bin/java/start-web.java -d final -f clear
standard input

Example 3)
To list all the processes run by a particular user:
/home/priya$ ps -fu matt
UID PID PPID C STIME TTY TIME CMD
matt 174431 1 24 05:32:34 – 0:50 wc -l
matt 723156 1 0 05:32:31 pts/2 1:21 cat items
matt 327814 893836 0 12:28:09 – 0:00 sleep 120
matt 1566 1 0 05:32:38 pts/1 0:14 sed 's/sim//'
matt 668739 1 0 05:32:42 – 0:07 /usr/bin/perl
matt 352912 1 0 05:32:43 – 0:10 java hello.java

All your processes and the processes of other users can be viewed.

In the output of the command above, the various column headings are defined as follows.
PID
The process ID of the process.

UID
The user ID of the process owner. The login name is printed under the -f flag.

PPID
The process ID of the parent process.

C
CPU utilization of process or thread, incremented each time the system clock ticks and the process or thread is found to be running. Large values indicate a CPU intensive process and result in lower process priority whereas small values indicate an I/O intensive process and result in a more favourable priority.

STIME
The starting time of the process.

TTY
The controlling terminal for the process
If there is a '-', it means the process is not associated with a terminal.
Otherwise it indicates the TTY number. For example, the entry pts/2 indicates terminal 2.

CMD
Contains the command name which started the process. The full command name and its parameters are displayed with the -f flag.
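A very common everyday use is combining ps -ef with grep to find a particular process; the second grep removes the grep process itself from the listing:

/home/priya$ ps -ef | grep start-web | grep -v grep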

paste

paste
paste is an excellent command for text processing, even though its uses are limited. It is mainly used to merge the contents of files line by line: it combines
individual files such that corresponding lines appear on the same row of the destination file, i.e. side by side.

Example 1)
You have two files as shown.
/users/higgs$ cat Start_Times.log
Getname( ) at Fri Dec 30 22:29:16 IST 2011
Getip( ) at Fri Dec 30 22:30:22 IST 2011
Checkip( ) at Fri Dec 30 22:30:23 IST 2011
Getfile( ) at Fri Dec 30 22:31:22 IST 2011

/users/higgs$ cat End_Times.log
Getname( ) at Fri Dec 30 22:29:17 IST 2011
Getip( ) at Fri Dec 30 22:30:28 IST 2011
Checkip( ) at Fri Dec 30 22:30:24 IST 2011
Getfile( ) at Fri Dec 30 22:31:42 IST 2011

If you have to combine these files such that the start and end times of a function appear on a common line, use paste as follows.

/users/higgs$paste Start_Times.log End_Times.log
Getname( ) at Fri Dec 30 22:29:16 IST 2011 Getname( ) at Fri Dec 30 22:29:17 IST 2011
Getip( ) at Fri Dec 30 22:30:22 IST 2011 Getip( ) at Fri Dec 30 22:30:28 IST 2011
Checkip( ) at Fri Dec 30 22:30:23 IST 2011 Checkip( ) at Fri Dec 30 22:30:24 IST 2011
Getfile( ) at Fri Dec 30 22:31:22 IST 2011 Getfile( ) at Fri Dec 30 22:31:42 IST 2011

Notice that a tab character was inserted between the corresponding lines of the two files. This
is the default delimiter for paste without any option.

In some cases you may need to place a different delimiter between lines of the row.

/users/higgs$ paste -d ',' Start_Times.log End_Times.log > Start_end_times.csv
/users/higgs$ cat Start_end_times.csv
Getname( ) at Fri Dec 30 22:29:16 IST 2011,Getname( ) at Fri Dec 30 22:29:17 IST 2011
Getip( ) at Fri Dec 30 22:30:22 IST 2011,Getip( ) at Fri Dec 30 22:30:28 IST 2011
Checkip( ) at Fri Dec 30 22:30:23 IST 2011,Checkip( ) at Fri Dec 30 22:30:24 IST 2011
Getfile( ) at Fri Dec 30 22:31:22 IST 2011,Getfile( ) at Fri Dec 30 22:31:42 IST 2011

Open this file Start_end_times.csv in Excel on Windows if you want a better view.

Example 2)
If you want the contents to appear serially, use paste -s.

Imagine you have two files
/users/higgs$ cat Booked.log
Don2
Sherlock_homes
Mission_Impossible4
The_Dirty_picture

/users/higgs$ cat Showtime.log
10:30
14:20
17:40
22:45

/users/higgs$paste -s Booked.log Showtime.log

Don2 Sherlock_homes Mission_Impossible4 The_Dirty_picture
10:30 14:20 17:40 22:45
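paste can also read standard input more than once: each - consumes one line in turn, which turns a single column into several. A small sketch using the booking list above:

/users/higgs$ paste - - < Booked.log
Don2    Sherlock_homes
Mission_Impossible4     The_Dirty_picture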

mv

mv
The mv command moves or renames files or directories within the same file system or between different file systems. You can also do this interactively when you are moving multiple files. mv is similar to doing cut and paste in Windows.

Example 1)
To rename a file or a directory.
/home/yogi$mv yog_LIST jeev_LIST

This renames the file from yog_LIST to jeev_LIST
Similarly a directory can be renamed.

Example 2)
To move a file to a directory.
/home/yogi$mv sqlnet.log oracle_logs

Here oracle_logs is a directory. If you want to save the log with a different name, enter
mv sqlnet.log oracle_logs/sqlnet_2011.log
In this case sqlnet.log is renamed to sqlnet_2011.log and saved in the oracle_logs directory.

Example 3)
A file, a directory, or a group of files or directories can be moved to a
destination path.

/home/yogi$ ls -lrt CONSIGNMENT*
-rw-r–r– 1 yogi Administ 58 Dec 24 00:40 CONSIGNMENT1
-rw-r–r– 1 yogi Administ 6 Dec 24 00:42 CONSIGNMENT2
-rw-r–r– 1 yogi Administ 978 Dec 25 00:59 CONSIGNMENT3

CONSIGNMENT-OLD:
-rw-r–r– 1 yogi Administ 27 Jul 25 01:00 lst2.txt
-rw-r–r– 1 yogi Administ 35 Sep 14 09:00 goods_list

/home/yogi$ mv CONSIGNMNENT* /ALL_CONS/DOCS

This moves all the entities listed above to the /ALL_CONS/DOCS directory if it exists; otherwise the command gives an error:
mv: cannot rename CONSIGNMENT* to /ALL_CONS/DOCS: a file or directory in the path does not exist.

Remember that when there are more than 2 arguments to mv, the last argument is considered the destination directory. When there are only 2 arguments and the 2nd argument contains a '/' in it, the directory structure up to it must exist.

Example 4)
If you want mv to prompt you before overwriting existing destination files, use mv -i.
/home/yogi$ mv -i Banks_Draft* DRAFTS/LIST

more

more

Sometimes, while viewing huge files, it is convenient to display the contents one page at a
time, moving on only when the user has examined the current page and wishes to go
to the next one. The user can use various keys to move forward or backward a page, or
a few lines ahead or behind.

Example 1)
Suppose you have a large file named greenlife.txt, containing a total of 1000 lines.

Then to open it and view it page wise enter
/home/g321$ more greenlife.txt
sustainable living
Sustainable living is a lifestyle that attempts to reduce an individual's or society's
use of natural and personal resources, for example by altering methods of transportation and
energy consumption. It aims to be in natural balance, respectful of humanity's symbiotic
relationship with nature, and is closely interrelated with the overall principles of sustainable development.
Lester R. Brown, a prominent environmentalist and founder of the Worldwatch Institute, described it
as a shift towards "a reuse/recycle economy with a diversified transport system."
Contents
1 Definition
2 History
3 Shelter
3.1 List of some sustainable materials
4 Power
4.1 List of organic matter than can be burned for fuel
5 Food
5.1 Environmental impacts of industrial agriculture
5.2 Conventional food distribution and long distance transport
5.3 Local and seasonal foods
5.4 Reducing meat consumption
5.5 Organic farming
5.6 Urban gardening
5.7 Food preservation and storage
6 Transportation
7 Water
7.1 Indoor home appliances
7.1.1 Toilets
7.1.2 Showers
7.1.3 Dishwasher/Sinks
7.1.4 Washing machines
7.2 Outdoor water usage
7.2.1 Conserving Water
7.2.2 Sequestering Water
8 Waste
Sustainable living is fundamentally the application of sustainability to lifestyle
— More (3%) —
Here it waits for user input before moving on. The user now has a choice of several actions (many more options are available):
To display the next line, press the return ("Enter") key.
To display the entire next page, press the space key.
To go forward one page press f, and to go back one page press b.
To go 100 pages ahead type 100f.
To go 10 pages back type 10b.
To jump to the end of the file press shift + g (in implementations that support it).
To exit press q.
To search for a pattern, type / and enter the keyword.

Example 2)
While searching for patterns using more, if you want to ignore case, use the -i option.
/home/g321$ more -i greenlife.txt

....
...... lines
/outdoor

This will search for all the occurrences of the pattern 'outdoor' in the file, ignoring case, and highlight them.

Example 3)
You can make the more command display a specified number of lines per window by giving the number as an option.
For example, to view 15 lines per window,

/home/g321$more -15 greenlife.txt
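Many implementations of more also let you start viewing at the first occurrence of a pattern instead of at the top (support varies by platform, so check your man page); a small sketch:

/home/g321$ more +/Transportation greenlife.txt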

mkdir

mkdir
mkdir is used to create one or more directories. It can create an entire directory structure if needed, and can also set the required permissions on a directory as it is created.

Example1)To simply create a directory or a directory structure.
/home/MUM-user$ mkdir channels
If you further want to create directory structures under it, the conventional method would be
/home/MUM-user$ cd channels
/home/MUM-user/channels$ mkdir STAR
/home/MUM-user/channels$ mkdir SONY
/home/MUM-user/channels$ cd SONY
/home/MUM-user/channels/SONY$ mkdir KBC
....
.....
i:e create all the individual directories in the tree.
mkdir provides the -p option to help you avoid all these commands.

/home/MUM-user$ mkdir -p channels/STAR/NGC/Speed
/home/MUM-user$ mkdir -p channels/SONY/KBC

These commands will create all the intermediate parent directories and the final directory.

Example 2)
To change permissions to directories, you can use chmod command. To do it while creating them, use mkdir -p

/home/MUM-user/ $ mkdir -p -m 775 channels/discovery
/home/MUM-user/ $ls –ld channels/discovery
drwxrwxr-x 2 MUM-user Tools 256 NOV 12 22:01 channels/discovery

man

man
The man command formats the manual pages (the documentation written for the commands) and delivers user-readable output.
Examples:
Example 1:
To read documentation on sed command page-wise
/home/b3456/$ man sed | pg

Example 2:
Suppose that you want to read the documentation of all the commands present in a Unix environment and save all of the pages in a single text file; you can run the following loop from the command line to do so.
/home/b3456/$ cd /usr/bin
/usr/bin/$ for i in `ls *`
>do
>man $i >> /home/b3456/man_all.txt 2>/dev/null
>done
Note: the ">" at the beginning of these lines is the shell's secondary prompt; it appears by default and is not typed.
Here 2> indicates standard error; it is redirected to /dev/null (i.e. discarded).
Now you can take a print out of the text file you have just generated (make full use of the office printer umm!!) and study all the commands at home.
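If you only need to find which command to read about, most systems provide a keyword search over the man page summaries (equivalent to the apropos command where available); a quick sketch, with the output format depending on your platform's whatis database:

/home/b3456/$ man -k archive
tar(1) - Manipulates archives
...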

ls

ls
The ls command is used to list the contents of a directory. ls writes to standard output
the information of the files and directories, or specifically of the parameters passed to it.
You can display these contents in alphabetically sorted order, or list only particular files in
combination with other commands. ls has many options which you may hardly use in normal cases;
here, only a few frequently used options are discussed.

Example 1)
To simply display the names of all the files in a directory.
/home/script-user$ ls
Joke-and-fun.xt
Send_script.sh
Mail-file
Rollercoaster
Jackson_note.tar

Example 2)
To display all the files that have a common pattern.
/home/script-user/items $ ls table_*.log
table_111.log
table_x.log
table_te1.log
table_adam.log

Example 3)
To get a long listing of files which contains all the details such as file size, permissions,
Time stamp, user details sorted by the modification time, use ls as follows.

/home/script-user $ ls -lrt
-rw-r--r-- 1 script-user Tools 56344 Dec 16 14:37 Joke-and-fun.xt
-rw-r--r-- 1 script-user Tools 105977 Dec 16 15:52 Send_script.sh
drw-r--r-- 2 script-user Tools 256 Dec 18 21:01 Mail-file
-rw-r--r-- 1 script-user Tools 740 Dec 20 01:03 Rollercoaster
-rw-r--r-- 1 script-user Tools 58 Dec 24 00:40 Jackson_note.tar

The first field here shows the file type and permissions, which are discussed under chmod.
The 2nd field is the number of links to the element.
The 3rd field gives the owner of the element (file, directory).
The 4th field gives the group which the owner belongs to.
The 5th field gives the size in bytes.
The 6th, 7th and 8th fields give the modification time.
The 9th field gives the name of the element.

You can get any field of the output of ls -lrt using awk.
For example, to get the name of the element from the output of ls -lrt, use awk as follows.
/home/script-user $ ls -lrt | awk '{print $9}'
Joke-and-fun.xt
Send_script.sh
Mail-file
Rollercoaster
Jackson_note.tar
Example 4)
To display only the directories in a path, use the output of ls -lrt as follows.
/home/script-user $ ls -lrt | grep "^d"
drw-r--r-- 2 script-user Tools 256 Dec 18 21:01 Mail-file

The '^' character represents the start of a line (a '$' represents the end); it is not physically seen, but an interpreted (regular expression) character.
In this case grep searched for the lines starting with a 'd', which stands for directories.

Example 5)
If you want to list the directories whose names share a common pattern, use ls -ld.
home/script-user/ebooks $ ls -ld j*
drwxr-xr-x 2 script-user Tools 256 Nov 9 12:10 j2me
drwxr-xr-x 2 script-user Tools 20480 Nov 9 12:10 java
drwxr-xr-x 2 script-user Tools 4096 Nov 9 12:10 java scripts

Suppose you want to view all the files of the directory java: use ls -lrt java instead of doing both cd java and ls -lrt.

Example 6)
To make a case-insensitive listing on a filename pattern, use ls along with grep as follows.
home/script-user/docs$ ls report*
report1.doc
report_eng.doc
reportslist.txt

home/script-user/docs$ ls | grep -i report
report1.doc
report_eng.doc
reportslist.txt
Reports_Bulk.wrt
Business_reports.doc

The above command is equivalent to ls *report* *Report* *REPORT* ... and so on through every upper- and lower-case combination of the word report.

Example 7)
To list all the files in a directory and its subdirectories recursively, use ls -R.
In this case ls lists all the files inside the directories starting with j, recursively through all their subdirectories.
home/script-user/ebooks $ ls -R j*
j2me:
j2me – the complete reference.pdf j2me—the-complete-reference.htm

java:<---------------------------- directory java
The J2EE Tutorial (Addison Wesley & Sun, 2002).pdf
(Addison Wesley 2002) - Java Network Programming and Distributed Computing.pdf
(Ebook) Java - Borland Jbuilder - Developing Database Applications - Inprise.pdf
(OReilly) Java Network Programming ,2nd Ed.pdf
(OReilly) Java Server Pages ,2nd Ed.pdf

java scripts:
[Java Script] O'Reilly- JavaScript The Definitive Guide.pdf

java scripts/newdir:<------------- subdirectory
[Java Script] O'Reilly - JavaScript The Definitive Guide 2ed.pdf
....
....
...
^C

Example 8)
ls by default does not show hidden files and directories. To show hidden files use the -a option.
Eg: files like .profile, .sh_history etc. are hidden files; if you have not used the -a option you will not be able to see them.
In every directory there are 2 common hidden entries: . (a dot), which represents the current directory, and .. (2 dots), which represents the parent of the current directory.

kill

kill
kill is used to send a signal to a process, though the name suggests that it is used only to terminate one. Still, kill is used for terminating processes more often than not. When some process is run unexpectedly or accidentally, it may have to be aborted, and kill does that job, much like CTRL+ALT+DEL in Windows.

Example 1)
The ps command gives you the process ids and other details of the processes running various commands. If you want to stop any command, just check the corresponding PID of the process you want to terminate, then use kill to stop that process.
Consider the output of ps,

/home/b6789$ ps -fu b6789
UID PID PPID C STIME TTY TIME CMD
b6789 352266 1 0 05:32:38 - 0:14 java /usr/bin/wc -l
b6789 368734 1 0 05:32:42 - 0:07 [wc]
b6789 372912 120 0 05:32:43 - 5:10 java /usr/bin/java/dev2.java <------ this process is consuming cpu and has to be terminated
b6789 409806 1 0 05:32:39 - 0:13 java /usr/bin/java/start-atm.java

/home/b6789$ kill 372912

This sends the signal SIGTERM to the process 372912, and terminates it.
Sometimes the process may refuse to terminate on SIGTERM alone; in that case use the SIGKILL signal (kill -9).
/home/b6789$ kill -9 372912
This signal cannot be ignored.
You can kill any of the processes initiated by you, i:e from your user id, this way. A root user has the privilege of killing any user's processes.

Example 2)
You can also get the specific PID for your command without having to search for it, by using grep and awk along with ps, and then kill the PID which you have retrieved.
If you had to kill the CMD start-atm.java above,
/home/b6789$ j_pid=`ps -fu $LOGNAME | grep start-atm.java | grep -v grep | awk '{print $2}'`
/home/b6789$ echo $j_pid
409806
/home/b6789$ kill -9 $j_pid

You can also use xargs to kill all processes of a particular type at once.

Example 3)
There is a method to kill all the processes initiated by you,
/home/b6789$ kill -9 -1
This will kill all the processes of your user id and also close your login session. It is a very dangerous command, so think thrice and check (using ps) whether any required process is still running before you press enter.

You can send many kinds of signals to a process (not all of them) by using kill -<signal number> <PID>. The various signal numbers and their names are listed below (for AIX) in detail. A short list can also be viewed
by using kill -l (lowercase alphabet L).

Name num Description
SIGHUP 1 hangup, generated when terminal disconnects
SIGINT 2 interrupt, generated from terminal special char
SIGQUIT 3 (*) quit, generated from terminal special char
SIGILL 4 (*) illegal instruction (not reset when caught)
SIGTRAP 5 (*) trace trap (not reset when caught)
SIGABRT 6 (*) abort process
SIGEMT 7 EMT instruction
SIGFPE 8 (*) floating point exception
SIGKILL 9 kill (cannot be caught or ignored)
SIGBUS 10 (*) bus error (specification exception)
SIGSEGV 11 (*) segmentation violation
SIGSYS 12 (*) bad argument to system call
SIGPIPE 13 write on a pipe with no one to read it
SIGALRM 14 alarm clock timeout
SIGTERM 15 software termination signal
SIGURG 16 (+) urgent condition on I/O channel
SIGSTOP 17 (@) stop (cannot be caught or ignored)
SIGTSTP 18 (@) interactive stop
SIGCONT 19 (!) continue (cannot be caught or ignored)
SIGCHLD 20 (+) sent to parent on child stop or exit
SIGTTIN 21 (@) background read attempted from control terminal
SIGTTOU 22 (@) background write attempted to control terminal
SIGIO 23 (+) I/O possible, or completed
SIGXCPU 24 cpu time limit exceeded
SIGXFSZ 25 file size limit exceeded
SIGMSG 27 input data is in the ring buffer
SIGWINCH 28 (+) window size changed
SIGPWR 29 (+) power-fail restart
SIGUSR1 30 user defined signal 1
SIGUSR2 31 user defined signal 2
SIGPROF 32 profiling time alarm
SIGDANGER 33 system crash imminent; free up some page space
SIGVTALRM 34 virtual time alarm
SIGMIGRATE 35 migrate process
SIGPRE 36 programming exception
SIGVIRT 37 AIX virtual time alarm
SIGALRM1 38 m:n condition variables – RESERVED – DON’T USE
SIGTALRM 38 per-thread alarm clock
SIGWAITING 39 m:n scheduling – RESERVED – DON’T USE
SIGRECONFIG 58 Reserved for Dynamic Reconfiguration Operations
SIGCPUFAIL 59 Predictive De-configuration of Processors – (RESERVED – DON’T USE )

SIGKAP 60 keep alive poll from native keyboard
SIGGRANT 60 monitor mode granted (same number as SIGKAP)
SIGRETRACT 61 monitor mode should be relinquished
SIGSOUND 62 sound control has completed
SIGSAK 63 secure attention key

Example 4)
You can stop a running process midway and then continue it from that point using the kill command.
Assume you are running an ftp command and want to pause the file transfer.
/home/b6789$ ftp 10.42.67.125
......
.......
put abigfile.txt
#####################################

Now, if you want to suspend it in between, open another session for the same user and run
kill -17 <PID>

where <PID> is the PID of the ftp command. This sends the signal SIGSTOP (number 17 on AIX, as in the table above) to the command. You can observe that the file transfer has stopped.

To resume the command, simply run
kill -19 <PID>, which sends the signal SIGCONT to continue the process. The file transfer starts from the point where it stopped.

Similarly, other commands, jobs or utilities can be stopped and continued using kill -17 and kill -19 respectively.

Example 5)
To kill all the child processes of a particular process, send the signal to its process group by negating the id.
For example, to kill all the processes in the process group of PID 204567:
kill -9 -204567
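Where a script has to do this unattended, a common pattern (a minimal sketch; 372912 is the example PID from above) is to try SIGTERM first and escalate to SIGKILL only if needed. kill -0 sends no signal at all; it merely tests whether the process can still be signalled:

kill 372912                                   # ask politely with SIGTERM
sleep 5                                       # allow time for cleanup
kill -0 372912 2>/dev/null && kill -9 372912  # force-kill only if still alive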

head

head
The head command is used to view or save a specified portion of the beginning of a file.
It does for the start of a file what tail does for the end.

Example 1)
Head by default displays first 10 lines of a file when no option is specified.
/home/ftpusr$ head blackwater.ns
Blackwater USA was formed in 1997, by Erik Prince in North Carolina, to provide training support to military and law enforcement organizations. In explaining Blackwater's purpose, Prince stated that ''We are trying to do for the national security apparatus what FedEx did for the Postal Service.'' After serving SEAL and SWAT teams, Blackwater USA received their first government contract after the bombing of the USS Cole off of the coast of Yemen in October of 2000. After winning the bid on the contract, Blackwater was able to train over 100,000 sailors safely. Prince purchased about 7,000 acres (28 km2) (from Dow Jones Executive, Sean Trotter) of the Great Dismal Swamp, a vast swamp on the North Carolina/Virginia border, now mostly a National Wildlife Refuge. "We needed 3,000 acres to make it safe," Prince told reporter Robert Young Pelton. There, he created his state-of-the-art private training facility and his contracting company, Blackwater, which he named for the peat-colored water of the swamp. The Blackwater Lodge and Training Center officially opened on May 15, 1998 with a 6,000 acre facility and cost $6.5 million.

If you wish to view only first two lines of the file, use
/home/ftpusr$ head -2 blackwater.ns
Blackwater USA was formed in 1997, by Erik Prince in North Carolina, to provide training support to military and law enforcement organizations. In explaining the Blackwater’s purpose, Prince stated that ‘‘We are trying to do for the

Example 2)
When you have a very large file (say of 100000 lines) it is difficult to open it in vi, but you may be concerned only with the first few lines of the file.
In that case redirect the first few lines of that huge file into another file and open that file in vi.

/home/ftpusr$ head -1000 migratory_note > migratory_note_1000
/home/ftpusr$vi migratory_note_1000
..
..

Example 3)
In scripts, there are times when you assign the output of grep to a variable, but grep searches and returns multiple values; you can keep only the first returned value by piping through head -1.
The example below shows how.

/home/ftpusr$
/home/ftpusr$ b=`grep 7 prime_numbers`
/home/ftpusr$ echo "$b"
7
17
37

Suppose you want only the first occurring prime number containing 7;
use head as follows.
/home/ftpusr$ b=`grep 7 prime_numbers | head -1`
/home/ftpusr$ echo "$b"
7

Example 4)
To print the nth line of a file, use head and tail in combination as follows.

/home/ftpusr$ head -100 operating-out.sh | tail -1
This prints the 100th line of the file.
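If sed is available, an equivalent one-liner prints the nth line directly; a small sketch for the same 100th line:

/home/ftpusr$ sed -n '100p' operating-out.sh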

gzip and tar

gzip and tar
gzip is used to compress files and tar is used to archive a file system. Though these two are different commands, using them together accomplishes a variety of tasks much more simply than running them separately. Here we will also discuss the combined use of the tar and cd commands.

First we will discuss with examples the individual command usages of gzip and tar and later use them in combination.
gzip

Example 1)
gzip is a compression utility (based on LZ77 and Huffman coding) that compresses files and saves them in the .gz format. The .gz file occupies less space than the original file. We will not discuss the .gz file format here, but only show how the command can be used along with its options.

To compress a file stake-holder.csv,
/home/gr-adm $ ls -lrt stake-holder.csv*
-rw-r--r-- 1 gr-adm Administ 56344 Dec 16 14:37 stake-holder.csv

/home/gr-adm$ gzip stake-holder.csv
/home/gr-adm$ ls -lrt stake-holder.csv*
-rw-r--r-- 1 gr-adm Administ 20758 Dec 16 14:37 stake-holder.csv.gz

Example 2)
If you want to retain the original file even after compression or want the .gz file to have a different name, use gzip –c.

To compress the file stake-holder.csv and save the .gz file in some other directory,

/home/gr-adm$ gzip -c stake-holder.csv > /backup/stakes/stake-holder.csv.gz

You can also append the compressed data to an existing .gz file.
gzip -c stake-holder.csv >> /backup/stakes/Old-stake-holder.csv.gz

Here, Old-stake-holder.csv.gz can be a pre-existing .gz file, or a new file with that name will be created.

Example 3)
To decompress a .gz file use gunzip,
/home/gr-adm$ gunzip /backup/stakes/stake-holder.csv.gz
This command will decompress the file stake-holder.csv.gz and place the original file
stake-holder.csv in /backup/stakes/ itself.

Example 4)
To view the contents of the file without permanently decompressing it, use gzip -dc.
Use the pg command to view it pagewise.
/home/gr-adm$ gzip -dc stake-holder.csv.gz | pg
A premji, wipro,67
N R murthy,Infosys,65
N Chandra,TCS,46

..
You can redirect the output into another file.
For example, to redirect the first 5 lines of the file stake-holder.csv.gz,

gzip -dc stake-holder.csv.gz | head -5 > stake-holder-five.csv

tar

tar is a short form of tape archive. It can pack a directory structure from one path into a single file, or extract such a file into a directory structure in another path.

Example1)
To create a tar file of a directory structure and to view the progress of tar use,

/home/gr-adm$ tar -cvf /archives/archive2/data-history.tar data/notes
a data/notes/Nov-10/exceptions.txt 200 bytes
a data/notes/Nov-12/promotions.csv 123 bytes
a data/notes/N0v-13/oracle-log/ora-errors.log 456 bytes
a data/notes/tea-time.in 2076 bytes
a data/notes/network/virtual-ips.bmp 1290bytes
a data/notes/levy.list 87bytes
/home/gr-adm $

This creates an archive file data-history.tar in the path /archives/archive2 which contains the directory data/notes and all its files and directory structure.

Example2)
To extract the contents of the archive file into a different location , first change your working directory to the location where you wish to extract the file, then extract and view the progress by running tar with –xvf switch.

/home/gr-adm$ cd /base-app/archive
/home/gr-adm$ tar -xvf /archives/archive2/data-history.tar
x data/notes/Nov-10/exceptions.txt 200 bytes
x data/notes/Nov-12/promotions.csv 123 bytes
x data/notes/N0v-13/oracle-log/ora-errors.log 456 bytes
x data/notes/tea-time.in
x data/notes/network/virtual-ips.bmp
x data/notes/levy.list
/home/gr-adm $

Example3)
Instead of giving an entire directory name as a parameter when creating a tar file, a list of the files which are to be archived can be passed to tar with the -L option (available on AIX tar).
A file containing the list of files to archive is shown below.
/home/gr-adm$ cat to-tar-list
data/notes/Nov-10/exceptions.txt
data/notes/N0v-13/oracle-log/ora-errors.log
data/notes/levy.list

Now to create an archive of only these specific files ,use tar in the following manner.

/home/gr-adm$ tar -cvf /archives/archive2/data-specific.tar -L to-tar-list
a data/notes/Nov-10/exceptions.txt 200 bytes
a data/notes/N0v-13/oracle-log/ora-errors.log 456 bytes
a data/notes/levy.list 87 bytes

Example4)
Sometimes you may need to exclude files from the tar archive. To do this, place the files and directories to be excluded in a list file and then use tar -X.

/home/gr-adm$ cat /home/fiuser/excl-list
data/notes/Nov-10/exceptions.txt
data/notes/Nov-12/promotions.csv
data/notes/N0v-13/oracle-log/ora-errors.log

/home/gr-adm$ tar -X /home/fiuser/excl-list -cvf /archives/data-specific2.tar data/notes
a data/notes/tea-time.in 2076 bytes
a data/notes/network/virtual-ips.bmp 1290bytes
a data/notes/levy.list 87bytes

Example5)
The contents of the tar file cannot be viewed directly with cat or more, but to view the details of the files and directories contained in the tar file, use tar -tvf.

/home/gr-adm$ tar -tvf /archives/data-specific2.tar
-rw-r--r-- gr-adm 2076 2011-12-16 15:52:55 data/notes/tea-time.in
-rw-r--r-- gr-adm 1290 2011-12-30 22:44:59 data/notes/network/virtual-ips.bmp
-rw-r--r-- gr-adm 87 2011-12-30 22:45:48 data/notes/levy.list

Now that we have gone through tar and gzip individually in detail, let us discuss, with examples, the combined forces of the tar and cd commands.

· to take the tar of the data/notes directory and untar the contents in /archives/archive2

tar cvf - data/notes | (cd /archives/archive2; tar xvf -)

'-' here refers to standard output for the first tar and standard input for the second, so you are piping the output of tar -cvf straight into tar -xvf, which runs after changing directory to /archives/archive2.

· to take the tar of the local directory data/notes and untar the contents in the directory /archives/archive2 of the remote machine 10.198.150.45

tar cvf - data/notes | rsh -l remuser 10.198.150.45 "cd /archives/archive2; tar xvf -"

· to take the tar of the files in the list file tar_list_1 and untar them in /archives/archive2

tar cvf - `cat tar_list_1` | (cd /archives/archive2; tar xvf -)

· to take the tar of the files in a local list file tar_list_1 and untar them in /archives/archive2 of the remote machine 10.198.150.45

tar cvf - `cat tar_list_1` | rsh -l remuser 10.198.150.45 "cd /archives/archive2; tar xvf -"

· to take the tar of the data/notes directory excluding the entries in excl_list_tar and untar its contents in /archives/archive2

tar -X excl_list_tar -cvf - data/notes | (cd /archives/archive2; tar xvf -)

Finally we will discuss with examples the combined use of gzip and tar.

· to gunzip a zipped tar file data_archive.tar.gz and untar it in /archives/archive2

gzip -dc data_archive.tar.gz | (cd /archives/archive2; tar xvf -)

· to gunzip a local zipped tar file data_archive.tar.gz and untar it in /archives/archive2 of a remote machine 10.198.150.45

gzip -dc data_archive.tar.gz | rsh -l fnsonlad 10.198.150.45 "cd /archives/archive2; tar xvf -"

· to view the contents of a zipped tar file page wise, without unzipping it

gzip -dc data_archive.tar.gz | tar tvf - | pg
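The reverse direction works the same way. A minimal sketch that creates a compressed archive in one pipeline, with no intermediate .tar file (the paths reuse the earlier examples):

tar cvf - data/notes | gzip -c > /archives/archive2/data-history.tar.gz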

find

find
The find command helps you to find files in a directory structure, often much faster than filtering the output of ls. It is widely used for getting the list of files in a directory and its subdirectories
that satisfy a given condition on file time stamp, size, name and so on.

Example 1) Suppose that you want to know the entire directory structure in a
particular path; use the find command as follows.
/home/Varsha $ find . -type d
./ebooks/perl
./ebooks/perl/1.Strings
./ebooks/perl/10.Subroutines
./ebooks/perl/11.Ref and REcs
./ebooks/perl/12.Packages, Libraries, and Modules
./ebooks/perl/13.Classes, Objects, and Ties
./ebooks/perl/14.Database Access
./songs/hindi/songs collection/15 jab we met
./songs/hindi/songs collection/1942 A Love Story
./songs/hindi/songs collection/20 hey baby
./songs/hindi/songs collection/22 jhoom barabar jhoom
./songs/hindi/songs collection/26 let the music play 3++
./songs/hindi/songs collection/Aa Dekhe Zara
./songs/hindi/songs collection/Aap Kaa Suroor
./songs/hindi/songs collection/Aashiq Banaya Aap Ne
./songs/hindi/songs collection/Rockstar
./songs/hindi/songs collection/3 idiots
.
.
.
.
^C
It seems Varsha listens to these songs while her boss is not around.

This command searched all the directories starting from /home/Varsha and displayed them.
Similarly, for displaying all the files you can use the -type f option. Here the '.' in the command refers to the current directory.
You can use any directory name as the base directory for find to search.

To display everything (i:e all directories and files) use
find . -print

Example 2)
If you want the output to be similar to that of the ls command, use find as follows.
/home/Varsha $ find . -type f -ls
244269 33 -rw-r--r-- 1 Varsha Trainee 60 Nov 9 23:58 ./ebooks/perl/1.Strings/chapter1
235915 86 -rw-r--r-- 1 Varsha Trainee 215 Nov 9 11:26 ./ebooks/perl/1.Strings/chapter2
104841 25 -rw-r--r-- 1 Varsha Trainee 554 Nov 9 11:26 ./ebooks/perl/1.Strings/chapter3
104838 32 -rw-r--r-- 1 Varsha Trainee 510 Nov 9 11:26 ./ebooks/perl/10.Subroutines/chapter1
104840 33 -rw-r--r-- 1 Varsha Trainee 55 Nov 9 11:26 ./ebooks/perl/10.Subroutines/chapter2
..
^C

The values in the output appear in the following order:
I-node number
Size in kilobytes (1024 bytes)
Protection mode
Number of hard links
User
Group
Size in bytes
Modification time
File name

Example 3)
find also helps in retrieving the files which were modified more than n days ago, or within the last n days, where n is any integer.
Suppose you want the list of all the files which were last modified more than 5 days ago, so that they can be removed after archiving.
/home/varsha $ date
Fri Dec 23 21:16:06 IST 2011
/home/varsha $ find . -type f -mtime +5 > file_list_5

Then use this file list (file_list_5) to take a tar of the files, or to remove those files and keep the list as a record of which files have been archived.
Similarly, you can use -5 instead of +5 for files modified within the last 5 days.

Example 4)
To find all the files which are greater or smaller than a particular size, use the -size option.

Suppose you want the list of files whose size is greater than 100 kB; use find as follows.
/home/varsha $ find . -size +102400c

(The c suffix counts the size in bytes; without a suffix the number is counted in 512-byte blocks.)

Example 5)
To find all files which are created after the modification time of a particular file,use find with -newer option.
If you want to list all the files that were created after 2013-05-31 06:25

/home/varsha/$ touch -t 201305310625 test_file

home/varsha/$ ls -lrt test_file
-rw-r--r-- 1 Varsha Trainee 0 May 31 06:25 test_file

home/varsha/$ find . -newer test_file
./BATCH
./BATCH/arguments.bat
./BATCH/arith.bat
./BATCH/for_loop.bat
./BATCH/if_condn.bat
./BATCH/if_condn.bat.bak
./misc
./misc/vim_list.txt
./PERL
./PERL/a1.txt
./PERL/a1.txt.orig
./PERL/a2.txt
./PERL/a2.txt.orig
./PERL/all_dir.txt

You can verify it by using -ls option.
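find can also run a command on each file it matches, via the -exec option. A small sketch combining it with the -mtime test from example 3 (swap ls -l for rm only after verifying the list):

/home/varsha $ find . -type f -mtime +5 -exec ls -l {} \;

Here {} is replaced by each matched file name, and the escaped semicolon marks the end of the command.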

echo

echo

echo is used to display characters on standard output. It can also be used to write to a file,
or just to print a newline character.

Example1)
It can be used to print a string and a variable together.

/home/b4578$ echo "my Home is $HOME"
my Home is /home/b4578
/home/b4578$

To print the $ symbol literally, use a backslash "\" character.
ie:-
/home/b4578$ echo "my \$HOME variable is set to $HOME"
my $HOME variable is set to /home/b4578

Example 2)
To display the double quote character " use a backslash.
Suppose you want to emit HTML form tags from a script (the exact markup below is illustrative); use echo as shown.
#!/usr/bin/sh
#..........lines of code
echo "<html>"
echo "<body>"
echo "<form name=\"entry\" method=\"post\">"
.......... # lines of html form
echo "</form>"
echo "</body>"
echo "</html>"

when you run the script the output will be
<html>
<body>
<form name="entry" method="post">
..........
</form>
</body>
</html>

Example 3)
to just display a newline in the output of a script, simply place an echo.

Example 4)
To write the output of the echo command to a file use ">" (or ">>" to append) to redirect the output of echo.
/home/b4578$ echo "my logpath is $LOGPATH" >> file_log.txt
/home/b4578$ tail -1 file_log.txt
my logpath is /home/b4578/user/logs
Example 5)
To give an audible alert (a bell) use \a inside echo.
For example, to make an alarm script which beeps continuously when there is any error, the echo command can be used as shown.
/home/b4578$ vi check_transactions.sh
....
alarmfunc( )
{
while true # loop until interrupted
do
echo "\a There is some problem" # beep continuously with a gap of one second
sleep 1
done
}
....

if [ $tran_time_before -gt 20 ]; then
alarmfunc
fi
....

/home/b4578$

Escape sequences which are recognised by echo are listed below.

\a Displays an alert character.

\b Displays a backspace character.

\c Suppresses the new-line character that otherwise follows the final argument in the output.
All characters following the \c sequence are ignored.

\f Displays a form-feed character.

\n Displays a new-line character.

\r Displays a carriage return character.

\t Displays a tab character.

\v Displays a vertical tab character.

\\ Displays a backslash character.
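Whether echo interprets these escape sequences varies between shells and platforms; printf handles them more portably. A minimal sketch of the alert from example 5 rewritten with printf:

/home/b4578$ printf "\aThere is some problem\n"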

date

date

date is used by a lot of applications in various forms; you may need to access the month, day, hour or second of a particular date.

You may use the following options to achieve these functionalities.

%% a literal %
%a locale’s abbreviated weekday name (Sun..Sat)
%A locale’s full weekday name, variable length (Sunday..Saturday)
%b locale’s abbreviated month name (Jan..Dec)
%B locale’s full month name, variable length (January..December)
%c locale’s date and time (Sat Nov 04 12:02:33 EST 1989)
%d day of month (01..31)
%D date (mm/dd/yy)
%e day of month, blank padded ( 1..31)
%h same as %b
%H hour (00..23)
%I hour (01..12)
%j day of year (001..366)
%k hour ( 0..23)
%l hour ( 1..12)
%m month (01..12)
%M minute (00..59)
%n a newline
%p locale’s AM or PM
%r time, 12-hour (hh:mm:ss [AP]M)
%s seconds since 00:00:00, Jan 1, 1970 (a GNU extension)
%S second (00..60)
%t a horizontal tab
%T time, 24-hour (hh:mm:ss)
%U week number of year with Sunday as first day of week (00..53)
%V week number of year with Monday as first day of week (01..53)
%w day of week (0..6); 0 represents Sunday
%W week number of year with Monday as first day of week (00..53)
%x locale’s date representation (mm/dd/yy)
%X locale’s time representation (%H:%M:%S)
%y last two digits of year (00..99)
%Y year (1970…)
%z RFC-822 style numeric timezone (-0500) (a nonstandard extension)
%Z time zone (e.g., EDT), or nothing if no time zone is determinable

To get the current month, day and year in mmddyyyy format, enter

date +'%m%d%Y'
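The same format strings are handy for timestamping files from scripts; a small sketch (the file name here is just illustrative):

LOGFILE=transactions_`date +'%Y%m%d_%H%M%S'`.log
echo "run started" > $LOGFILE

This creates a file such as transactions_20111230_223015.log.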

cp

cp
cp is used to copy a file to a different name in the same path or to a different path. It can also be used to recursively copy the files in a directory structure to a different directory.

Example1)
To copy file1 to file2

/home/justin$ cp sql_ind.ctl sql_ind2.ctl

This copies the contents of the file sql_ind.ctl to sql_ind2.ctl. If there is already a file named sql_ind2.ctl in the current directory, the file will be overwritten; if it does not exist, a new file will be created.

If you do not want files to be overwritten, use the -i option and give y or n as the reply.
Assume sql_ind2.ctl already exists.

/home/justin$ cp -i sql_ind.ctl sql_ind2.ctl
This will ask you whether to overwrite or not.

You can also use wildcards like ? and * along with parameters that define a group of source filenames.

Example 2)
When you copy files from a source to a different directory, you do not have to mention the
filename in the destination path if you want to keep the name of the file the same. Do it
only when the destination filename is to be different.

Suppose /home/raja/libs is a directory,

/home/justin$ cp Jlibs.all.csv /home/raja/libs # this will place Jlibs.all.csv in /home/raja/libs.

/home/justin$ cp Jlibs.all.csv /home/raja/libs/rlibs.all.csv #This will
Create rlibs.all.csv in /home/raja/libs directory

Example 3)
cp has the option -p for preserving file attributes like the modification time; without this option, cp by default puts a new time stamp on destination files.

/home/justin$ ls -lrt flower?.jpg
-rw-r--r-- 1 justin Admin 59254 Jan 03 12:14 flower3.jpg
-rw-r--r-- 1 justin Admin 59256 Dec 29 17:22 flower2.jpg

/home/justin$ cp -p flower3.jpg flower2.jpg
/home/justin$ ls -lrt flower?.jpg
-rw-r--r-- 1 justin Admin 59254 Jan 03 12:14 flower2.jpg
-rw-r--r-- 1 justin Admin 59254 Jan 03 12:14 flower3.jpg

Example 4)
If you want to completely copy a file system with all the files and directories in it and create
the same directory structure, use cp with the -R option.
/home$ cp -R justin /backup_justin
/home$ cd /backup_justin
/backup_justin$ find . -type f
./justin/flower3.jpg
./justin/flower2.jpg
./justin/sql_ind.ctl
./justin/sql_ind2.ctl
./justin/wildlife/tiger/video2.mpg
./justin/wildlife/tiger/video3.mpg
./justin/client-file/voucher1.txt

chmod

chmod

Using chmod you can change the permissions of single or multiple directories or files.
In Unix, nothing works without proper permissions. To run any script, the first thing you need to
do after writing it is to give it execute permission.

To view file permissions, use ls -lrt.

Example 1)
/home/d456/$ ls -lrt

-rw-r--r-- 1 d456 Readers 1181170 Apr 27 2011 apache_manual.tar.gz
-rw-r--r-- 1 d456 Readers 1128480 Apr 27 2011 perl cookbook.zip
-rw-r--r-- 1 d456 Readers 70228 May 3 2011 IMP.txt

All the files have read and write permissions for the user, and read for the group and others.
-rw-r--r-- can be interpreted in octal notation as 110-100-100, i.e. 644.
To change the permissions of the file IMP.txt to read and write for user and
group, and only read for others (rw-rw-r--), you can use:

D2-/home/d456/$ chmod 664 IMP.txt
D2-/home/d456/$ ls -lrt
-rw-r--r-- 1 d456 Readers 1181170 Apr 27 2011 apache_manual.tar.gz
-rw-r--r-- 1 d456 Readers 1128480 Apr 27 2011 perl cookbook.zip
-rw-rw-r-- 1 d456 Readers 70228 May 3 2011 IMP.txt
-rw-r--r-- 1 d456 Readers 5748626 May 12 2011 sg245511.pdf

Example 2)
If you want to allow all users to run the script named script_name.sh:
/home/d476/$ chmod a+x script_name.sh
/home/d476/$ ls -lrt script_name.sh
-rwxr-xr-x 1 d476 Readers 5748626 Jun 10 2011 script_name.sh

Here the a in 'a+x' represents all. Similarly, 'u' is for user, 'g' for group and 'o' for others; '+' grants a permission and '-' removes it.
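Several symbolic clauses can also be combined in a single chmod command, separated by commas; a quick sketch giving the user full access, the group read and execute, and denying others everything:

/home/d476/$ chmod u+rwx,g+rx,o-rwx script_name.sh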

Example 3)
You may have to give permissions to all the files in a directory tree; use the -R option.
To give all files and directories read, write and execute permissions in the /home directory,

D2- /home $ chmod -R 777 *
chmod: /home/d476/relativity.txt: operation not permitted
chmod: /home/d476/source3.sh: operation not permitted

You will get errors like these for files you do not own, unless you are logged in as an admin (root) user.

cd

cd

The cd command helps you to move around directories. Various options of cd help you
navigate from your current directory to your previous working directory, or one directory
up, and so on.

cd in Unix provides many more options than the cd in DOS, as you will see in these examples.

1) You are working in a directory other than your home and want to go to your home directory($HOME), just enter cd.

/home/dimuser/functions/standard/cpp$ echo $HOME
/home/b768

/home/dimuser/functions/standard/cpp$ cd
/home/b768 $
'cd' is the same as 'cd $HOME'.

2) You need to go to your last working directory.

/home/b768$ cd /var/utilities

/var/utilities$ cd -
/home/b768 # here cd echoes the directory into which you are moving
/home/b768 $ cd -
/var/utilities
/var/utilities$

3) To move one directory up use '..', i.e. two dots.
/home/b768$ cd ..
/home$

Note that '.' refers to the current directory and '..' refers to the directory one level up.

4) To interchange similar directory structures in your current working directory path, use cd old new (a ksh feature).
/home/b768/test/ALG/bin $ cd test prod
/home/b768/prod/ALG/bin
/home/b768/prod/ALG/bin$

Ensure that the destination directory structure exists.
You can also simply change any particular string or number in the directory name to a new one
and move to the new directory path if it exists.
ie :-
/home/b768/prod/ALG/bin$ cd 68 54
/home/b754/prod/ALG/bin
/home/b754/prod/ALG/bin$

cat

cat
cat can be used to display one file or a group of files, or to append files into one file. One of its special uses is to read an entire file line by line along with the read command. The examples will show how.
Example 1)
/home/b3456/$ cat branches1.txt (Note: here /home/b3456/$ is the prompt string)
35467 vdn /home/vd2
46788 ghi /home/gh1/gh
89078 bjk /home/vd2

/home/b3456/$ cat branches2.txt
56890 lod /home/lod/inter
33456 bhj /home/bhind
45790 krk /myhome/krk

Combining these two files into a single file can be done by
/home/b3456/$ cat branches1.txt branches2.txt

35467 vdn /home/vd2
46788 ghi /home/gh1/gh
89078 bjk /home/vd2
56890 lod /home/lod/inter
33456 bhj /home/bhind
45790 krk /myhome/krk

Or by
/home/b3456/$ cat branches[1-2].txt

Suppose you have many such files named branches1.txt, branches2.txt, branches3.txt, ... where the varying character is any single numeric or alphabetic character; you can use
cat branches?.txt to open all the files.

Example 2)
You want to write contents you have copied from one file into another file. One way to do it is to open the target in a vi editor, paste and save.
Another way is to use cat to create the file and paste the contents.

/home/b3456/$ cat >filetocopy
#the sentence of these lines
were copied from a different file
and pasted here.
#the contents here were typed manually
^D
/home/b3456/$

/home/b3456/$more filetocopy
#the sentence of these lines
were copied from a different file
and pasted here.
#the contents here were typed manually

/home/b3456/$

A ^D character (ctrl + d) must be typed at the end to mark the end of the file.
If you already have a file with the same name and you want to append the copied contents to that file,
then use ">>" instead of ">".

Example 3)
You can also use cat to read each line of a file into a variable, using the read command.
This cannot be achieved with a 'for ... in ...' loop, since it treats each word in a line as a separate value assigned to the index variable.

The following commands show how to achieve it using cat and read with a while loop.
Suppose you want to read the file branches1.txt (example 1) in such a way that the entire string in each line is stored in a variable, and that variable is then used to extract the individual fields (separated by spaces).

/home/b3456/$ cat branches1.txt | while read line # the variable line stores the contents of a line
do
num=`echo $line | awk '{print $1}'`
path=`echo $line | awk '{print $3}'`
echo "$path is the path for $num"
done

/home/vd2 is the path for 35467
/home/gh1/gh is the path for 46788
/home/vd2 is the path for 89078

The 'for ... in' loop imposes restrictions on the number of words it can take (the argument list is limited in size), but cat piped to while read has no such limitation. It can read any file, however large, until the system resources are exhausted.

Example 4)
The line numbers of a file can be displayed by using cat with the -n option.
/home/b3456/$ cat -n branches1.txt branches2.txt
1 35467 vdn /home/vd2
2 46788 ghi /home/gh1/gh
3 89078 bjk /home/vd2
4 56890 lod /home/lod/inter
5 33456 bhj /home/bhind
6 45790 krk /myhome/krk

cal

cal
The cal command is used to view the calendar. When you are working in Unix it sometimes becomes necessary to have a quick look at the calendar, and you need not minimize your terminal session to view it; just use cal as follows.
Ex1) To view the entire calendar of a year.
/home/steve $ cal 2011
          January                         February
Sun Mon Tue Wed Thu Fri Sat     Sun Mon Tue Wed Thu Fri Sat
                          1               1   2   3   4   5
  2   3   4   5   6   7   8       6   7   8   9  10  11  12
  9  10  11  12  13  14  15      13  14  15  16  17  18  19
 16  17  18  19  20  21  22      20  21  22  23  24  25  26
 23  24  25  26  27  28  29      27  28
 30  31

           March                           April
Sun Mon Tue Wed Thu Fri Sat     Sun Mon Tue Wed Thu Fri Sat
          1   2   3   4   5                           1   2
  6   7   8   9  10  11  12       3   4   5   6   7   8   9
 13  14  15  16  17  18  19      10  11  12  13  14  15  16
 20  21  22  23  24  25  26      17  18  19  20  21  22  23
 27  28  29  30  31              24  25  26  27  28  29  30

            May                             June
Sun Mon Tue Wed Thu Fri Sat     Sun Mon Tue Wed Thu Fri Sat
  1   2   3   4   5   6   7                   1   2   3   4
  8   9  10  11  12  13  14       5   6   7   8   9  10  11
 15  16  17  18  19  20  21      12  13  14  15  16  17  18
 22  23  24  25  26  27  28      19  20  21  22  23  24  25
 29  30  31                      26  27  28  29  30

           July                            August
Sun Mon Tue Wed Thu Fri Sat     Sun Mon Tue Wed Thu Fri Sat
                      1   2           1   2   3   4   5   6
  3   4   5   6   7   8   9       7   8   9  10  11  12  13
 10  11  12  13  14  15  16      14  15  16  17  18  19  20
 17  18  19  20  21  22  23      21  22  23  24  25  26  27
 24  25  26  27  28  29  30      28  29  30  31
 31

         September                        October
Sun Mon Tue Wed Thu Fri Sat     Sun Mon Tue Wed Thu Fri Sat
                  1   2   3                               1
  4   5   6   7   8   9  10       2   3   4   5   6   7   8
 11  12  13  14  15  16  17       9  10  11  12  13  14  15
 18  19  20  21  22  23  24      16  17  18  19  20  21  22
 25  26  27  28  29  30          23  24  25  26  27  28  29
                                 30  31

         November                         December
Sun Mon Tue Wed Thu Fri Sat     Sun Mon Tue Wed Thu Fri Sat
          1   2   3   4   5                       1   2   3
  6   7   8   9  10  11  12       4   5   6   7   8   9  10
 13  14  15  16  17  18  19      11  12  13  14  15  16  17
 20  21  22  23  24  25  26      18  19  20  21  22  23  24
 27  28  29  30                  25  26  27  28  29  30  31
Ex 2) To view the calendar of a particular month of a given year, use cal with the month as the first parameter and the year as the second.
/home/steve $ cal 8 1947
        August 1947
Sun Mon Tue Wed Thu Fri Sat
                      1   2
  3   4   5   6   7   8   9
 10  11  12  13  14  15  16
 17  18  19  20  21  22  23
 24  25  26  27  28  29  30
 31

bc

bc
bc is an important utility provided by Unix for doing mathematical calculations. It can be used as a one-line command, and you can also program bc to perform various arithmetic just like a C program. It also has a math library of its own, with functions like sine, cosine and tangent, which is enabled by specifying the -l option.

For more details on bc you can refer to these URLs:
http://www.gnu.org/software/bc/manual/html_mono/bc.html
http://www.thegeekstuff.com/2009/11/unix-bc-command-line-calculator-in-batch-mode/

Example1)
To perform a simple arithmetic operation such as 2 + 2, you can use bc as follows.
/home/MS$ bc
2+2 <---------------- type your expression here
4 <------------------ the answer

Example2)
To run a command containing an expression and evaluate it using bc,
/home/MS$ echo "35.2 - 43.6" | bc
-8.4

Example3)
To get digits after the decimal point you need to set the scale variable.
/home/MS$ bc
6/9
0
scale=4
6/9
.6666
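The -l option mentioned above loads the math library (s for sine, c for cosine, l for natural log, e for exponential) and raises the default scale; plain bc already has sqrt built in. A small sketch:

/home/MS$ echo "scale=4; sqrt(2)" | bc
1.4142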

basename

basename
basename is used to get the name of a file from its complete path.
It is also helpful for accessing the name of a script from within the script itself.

Example 1)
To get the name of the last path component from a complete path.
/home/gj876$ echo $CGI_PATH
/utilities/intranet/apache/cgi-docs/cowboy
/home/gj876$ basename $CGI_PATH
cowboy

Example2)
Suppose a script "/home/scripts/linkfiles.sh" is called from another script with its complete path; then to access the base filename of "/home/scripts/linkfiles.sh" within it, use
#!/usr/bin/ksh
File_name=`basename $0`
echo $File_name

The output will be linkfiles.sh
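The companion command dirname returns everything except the last path component; a quick sketch reusing the same $0 idea, useful when a script needs to know its own directory:

#!/usr/bin/ksh
Script_dir=`dirname $0`
echo $Script_dir

If the script is invoked as /home/scripts/linkfiles.sh, the output will be /home/scripts.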

awk

awk
awk is a very good programming language/command for manipulating files which have fields separated by spaces or other delimiters. It often does these operations faster than other basic Unix commands like sed and grep.
awk's power lies in its "one line" editing usage, which makes it feel more like a command than a programming language.
Examples:
Example1)
Suppose that you have a file which contains 4 columns of an SQL spool file.
/home/b3456/$ cat member_list.txt (Note: here /home/b3456/$ is the prompt string)
Ajit kumar 1000 BLR
sachin Sharma 2500 MUM
Ajay gupta 6560 MUM
Srinivasan R 4509 NDL
Ahmed javed 7845 CHE

To print only the city codes (the last field, here the fourth) enter the following command.
/home/b3456/$ awk '{print $4}' member_list.txt
BLR
MUM
MUM
NDL
CHE

Example 2)
If you want to get the member whose amount withdrawn (the third column) is greater than or equal to 6000 and less than 7000, enter the command
/home/b3456/$ awk '{ if ( $3 >= 6000 && $3 < 7000) print $1, $2 }' member_list.txt
Ajay gupta

Example 3)
To get the total size of a group of files in a directory.
/home/b3456/$ ls -lrt
-rw-r--r-- 1 b3456 Administ 19738 Feb 1 2011 NOV_2010_3.pdf
-rw-r--r-- 1 b3456 Administ 18176 Feb 1 2011 DEC_2010.pdf
-rw-r--r-- 1 b3456 Administ 739840 Jul 8 23:09 2011_ITR1_r2.xls
-rw-r--r-- 1 b3456 Administ 11455 Jul 8 23:13 ITR1_AJTPN1033E.xml
-rw-r--r-- 1 b3456 Administ 94058 Jul 9 14:08 09072011020807_13102006
-rw-r--r-- 1 b3456 Administ 101 Jul 9 14:10 address_TAX.txt
-rw-r--r-- 1 b3456 Administ 43237 Jul 9 14:17 XXXPN1033X_ITR-V.zip
-rw-r--r-- 1 b3456 Administ 1046954 Nov 21 14:29 EDU_LOAN_STMT.pdf

Enter the following command.
/home/b3456/$ ls -lrt | awk 'BEGIN {c = 0}{ c += $5 } END{print c}'
1973559

Note that you don't have to use the '$' symbol for printing a variable inside awk.

Example 4)
Many a time you may want to pass a shell variable to awk on the command line or inside a shell script, and do some conditional manipulations based on that variable.
Consider a situation where you need to compare a name variable of the shell with the names in the file member_list.txt.
The script:
#!/usr/bin/sh
...............#lines of commands
...............
p_name=`echo $m_name` # p_name is a shell variable
awk -v myp_name=$p_name '{ if ( $1 == myp_name) print $0 }' member_list.txt
...............
Here the awk command prints every line whose name field is the same as the value of the variable p_name.

Example 5)
A single space or a sequence of space characters is the default delimiter for fields; if the fields are separated by any other character, use awk with the -F option and mention the delimiter.
If the file member_list2.txt contained the following,
/home/b3456/$ cat member_list2.txt
Ajit kumar|1000|BLR
sachin Sharma|2500|MUM
Ajay gupta|6560|MUM
Srinivasan R|4509|NDL
Ahmed javed|7845|CHE

Then to print only the city codes (the third field or column) awk can be used as follows.
/home/b3456/$ awk -F "|" '{ print $3 }' member_list2.txt
BLR
MUM
MUM
NDL
CHE

AWK Examples

1) An awk script which performs arithmetic.
awk '{ x = 3; y = 3;
print ($1/7) + ($2 - y)/x
z = $2 - 4; print z }' file_1.txt

2) An awk script to print each filename and its size, and finally the total size occupied by all files.
ls -lrt | awk 'BEGIN { c = 0; siz = 0; print "FILENAME BYTES" }
NF == 9 && /^-/ { print $9 " " $5; c++; siz += $5 }
END{ print "\nTOTAL COUNT = " c "\nTOTAL SIZE = " siz }'

3) Consider a file (name_addr.txt) containing names and addresses, with records separated by a blank line and fields separated by newlines.
kaushik nayak
dl hsg socty
vashi

jose dmello
ms clny
blore

reema j
l highway rd
seattle

The awk command to extract only the name (the first field) from this file will be
awk 'BEGIN{FS ="\n";RS=""} { print $1 }' name_addr.txt

4) An awk command to count the total number of blank lines in a file.
awk 'BEGIN{ c = 0 } /^$/ { c += 1 } END { print c }' file_cont_blkline.txt

5) An awk script to count the total number of occurrences of a pattern.
awk -v ptr=0003 'BEGIN{c = 0}{c += gsub(ptr," ",$0) }END{print c}' sql2.txt
#here the pattern to count is '0003', whose value is passed in through the variable ptr.

6) An awk program to split the output of the netstat command into IP and port for a specific port.
netstat -an | awk '$4 ~ /\.1521$/ {split($4,arr1,".");print "IP = " arr1[1]"."arr1[2]"."arr1[3]"."arr1[4] "|PORT = "arr1[5]}'

7) An awk program to print the lines between two patterns (a pattern ZO and a blank line).
awk '$0 ~ /ZO/,/^$/{ print $0}' select_all_bRS.txt

8) An awk program to write conditional output to multiple files.
awk '{ if ($1 ~ /2013/ ){ print $0 > "date.txt"} else if ( $3 ~ /Branch/ ) {print $0 > "brch.txt"} }' branch_dt.txt

9) An awk script showing the use of next.
awk '$1 == "20130426" && $7 == "04060" && substr($2,1,2) == 10 {printf( "%s diff = %d \n",$0,$4 - $2);next }{print}' level_2.txt
#if next is not used then $0 will again be printed by {print}; next skips to the next record

alias

alias

alias is one of those commands for people who want to be lazy. you can use alias
in situations where it is too time consuming to type the same commands again and again.
But avoid aliases to commands like rm, kill etc.

Example. 1)
To always use vim instead of vi, and to make sure that whenever
there is a system crash, network failure etc during editing ,
all the contents are recovered, use the alias as follows.
/home/viru$ alias vi='vim -r'

To make this happen every time you work after logging in, save the above line in
your .profile
i:e in the file $HOME/.profile

After saving it in .profile do not forget to run (source) it:
ie /home/viru$ . $HOME/.profile

The dot at the beginning is necessary; it executes .profile in the current shell.

Example2) after running .profile ,to view all the aliases that are set globally, just enter alias.

/home/viru$ alias
alias ls='ls -lrt'
alias psu='ps -fu $LOGNAME'
alias df='df -gt'
alias jbin='cd /home/viru/utils/java/bin'
alias jlib='cd /home/viru/utils/java/lib'

seems viru is so lazy..!!

Example3)
To prevent the effect of an alias defined for a word, token or command, use unalias.
/home/viru$ unalias jbin
/home/viru$ jbin
jbin: not found

There is another way to accomplish this: quote the command name, since the shell does not expand aliases for quoted words. The following lines show how.
/home/viru$ alias same
alias same='/opt/bin/samefile.exe'
/home/viru$ "same"
same: not found.
The effect of the alias was nullified for the word same. If you quote an aliased command name (eg: with alias df='df -gt' defined), the actual command will be run (ie "df" would run /usr/bin/df instead of df -gt).

Aliases which are defined outside a script, for instance in your .profile, do not work inside scripts; so make sure not to use aliased words in a shell script expecting the values they have outside.

xargs

xargs
xargs is a command which can pass arguments from the output of one command to another. It can run commands with long argument lists taken from a single file. Its most important feature is that it creates the list of arguments to be passed to the command and runs the command for you.
Example1)
Consider that you have many files which each have to be renamed with a common suffix.

/home/Krishna $ ls account_[0-9]
account_4
account_5
account_7

Then to rename all these files as <filename>_prev, use xargs as follows.

/home/Krishna$ ls account_[0-9] | xargs -I { } mv { } { }_prev
/home/Krishna $ ls account_[0-9]*
account_4_prev
account_5_prev
account_7_prev
..
..

Here, xargs passed the output of ls, which is a list, as an argument list denoted by
'{ }' to the mv command. The commands effectively run were
mv account_4 account_4_prev
mv account_5 account_5_prev
....
..

Example2)
On some occasions, many unwanted processes of a common type may be running and it is necessary to kill all of them without killing any other process. One method would be to kill each of them by its individual PID, but that is not practical in scripts; xargs does the job.

When you need to kill all processes run by your login name whose names contain ftp, use
ps -fu $LOGNAME | grep ftp | awk '{ print $2 }' | xargs -I { } kill -9 { }

Example 3)
You want to copy a large number of files into a directory, placing all the filenames in a list file.
Consider a file containing the list of all the filenames; the last line contains the destination directory.

/home/Krishna$ cat candidates_logs.txt
Arjuna.log
Yudhistir.log
Bhim.log
Nakul.log
Sahadev.log

..
/var/tmp/logs

You want to copy all these files to the directory /var/tmp/logs, preserving their timestamps. Use xargs as follows.

/home/Krishna$ xargs cp -p < candidates_logs.txt

The command structure created by xargs was
cp -p Arjuna.log Yudhistir.log Bhim.log Nakul.log Sahadev.log ... /var/tmp/logs

This avoided typing all the filenames on one long command line. You may use this form of xargs in scripts, where it is easy to build the list file using simple echo and ">>".

Example 4)
You are redirecting output containing the contents of a file to another file and you want a delimiting character after every n words, where n is any natural number; use xargs as follows.
A point to note here is that a word can be a whole line if the file contains only one word per line.

/home/Krishna$ cat Train-Time.list
S29 F 12 Mumbai_CST Karjat 7:00pm
T113 F 12 (x) Mumbai_CST Thane 7:04pm
K95 S 9 (x) Mumbai_CST Kalyan 7:06pm
A59 S 9 Mumbai_CST Ambernath 7:15pm
BL39 F 12 Mumbai_CST Badlapur 7:17pm
T115 S 9 (x) Mumbai_CST Thane 7:20pm
N27 F 12 Mumbai_CST Kalyan 7:21pm

You can use awk to delimit the fields with a “|” or any other delimiting character, but xargs can perform it too.
/home/Krishna$ cat Train-Time.list | xargs -n1 | xargs -I { } echo "{ }|"
S29| F| 12| Mumbai_CST| Karjat| 7:00pm|
T113| F| 12| Mumbai_CST| Thane| 7:04pm|
K95| S| 9| Mumbai_CST| Kalyan| 7:06pm|
A59| S| 9| Mumbai_CST| Ambernath| 7:15pm|
BL39| F| 12| Mumbai_CST| Badlapur| 7:17pm|
T115| S| 9| Mumbai_CST| Thane| 7:20pm|
N27| F| 12| Mumbai_CST| Kalyan| 7:21pm|

You can trim the trailing "|" character by using sed.

You could also use a value greater than 1 with -n so that the "|" character is inserted after that many words instead of after every word.

Example 5)
When running commands with multiple arguments, it sometimes becomes necessary to confirm interactively whether to run the command for each argument, as seen in the commands cp -i and mv -i.
xargs, using its -p option, can perform this for any command you want to run interactively.

Consider a case where you need to take tar of files in a directory by adding every file interactively into the archive file.

/home/Krishna$ ls | xargs -p -n 1 tar -cvf /backup/ALL_KRISHNA.tar
tar -cvf /backup/ALL_KRISHNA.tar account_4_prev ?…y
tar -cvf /backup/ALL_KRISHNA.tar account_5_prev ?…y
tar -cvf /backup/ALL_KRISHNA.tar account_7_prev ?…n ———-> your reply.
tar -cvf /backup/ALL_KRISHNA.tar jokes_old ?…y

IFS – Internal Field Separator

It seems like an esoteric concept, but it’s actually very useful.

If your input file is “1 apple steve@example.com”, then your script could say:

while read qty product customer
do
echo "${customer} wants ${qty} ${product}(s)"
done
The read command will read in the three variables, because they’re spaced out from each other.

However, critical data is often presented in spreadsheet format. If you save these as CSV files, it will come out like this:

1,apple,steve@example.com
This contains no spaces, and the above code will not be able to understand it. It will take the whole thing as one item, assign it to the first variable, $qty (the quantity), and set the other two fields as blank.

The way around this, is to tell the entire shell, that “,” (the comma itself) separates fields; it’s the “internal field separator”, or IFS.

The IFS variable is set to space/tab/newline, which isn’t easy to set in the shell, so it’s best to save the original IFS to another variable, so you can put it back again after you’ve messed around with it. I tend to use “oIFS=$IFS” to save the current value into “oIFS”.

Also, when the IFS variable is set to something other than the default, it can really mess with other code.

Here’s a script I wrote today to parse a CSV file:

#!/bin/sh
oIFS=$IFS # Always keep the original IFS!
IFS="," # Now set it to what we want the "read" loop to use
while read qty product customer
do
IFS=$oIFS
# process the information
IFS="," # Put it back to the comma, for the loop to go around again
done < myfile.txt

It really is that easy, and it's very versatile. You do have to be careful to keep a copy of the original (I always use the name oIFS, but whatever suits you), and to put it back as soon as possible, because so many things invisibly use the IFS – grep, cut, you name it. It's surprising how many things within the "while read" loop actually did depend on the IFS being the default value.
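In many shells you can avoid the save/restore dance entirely by prefixing the assignment to the read command itself; since read is a regular built-in, the comma IFS then applies only to that read and to nothing else in the loop body. A minimal sketch:

while IFS=, read qty product customer
do
echo "${customer} wants ${qty} ${product}(s)"
done < myfile.txt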


Grep

Grep with large pattern files

When I investigated the run times of grep -f with large pattern files (hundreds of lines and more), it became apparent that one can speed grep up by splitting the pattern file into smaller chunks.

I tested this with GNU grep 2.5.3 on a machine with an Intel Xeon 2400 MHz CPU running Debian Lenny. grep -f with a pattern file of 1900 lines takes at least 72 seconds (with no match found).

$ TIMEFORMAT="%R seconds"
$ time grep -f pattern-file testmail_for_grep.txt
73.852 seconds
To speed things up, for instance, one can use split to split pattern-file into smaller chunks and run grep with each of them. Here is an example which uses chunks of 50 lines:

split -l 50 pattern-file pattern-file.split.
for CHUNK in pattern-file.split.* ; do
grep -f “$CHUNK” testmail_for_grep.txt
done
rm pattern-file.split.*
Using the same pattern file as above (1900 lines), running grep -f over all the chunk files produced by split -l 50 takes only about 1.1 seconds!

Optimal chunks

For the fun of it I tried to find the optimal size of chunks. I assume that it depends on a multitude of factors (such as the speed of your file system) but in this particular case I gained best results for chunks of about 20 lines. The run time for chunks of 20 lines was 0.70 seconds. I did the same tests with a similar pattern file of 900 lines. There, the optimal chunk size was about 20 lines, too.

Pattern by pattern

In the related bug report bug #16305: grep much less efficient …, Levi Waldron suggested to use

for line in `cat patterns.txt`;do grep $line data.txt >> matches.txt;done
This won't work in general (it e.g. breaks when patterns contain spaces). However, a while loop using bash's read might do better (testmail_for_grep.txt being data.txt).

while read line ; do grep "$line" testmail_for_grep.txt ; done < pattern-file

The run time of this method was a bit more than 4 seconds for 1900 patterns. This is much slower than using split, but in cases where one does not want to write temporary files it might come in handy. Note also that this method always scales linearly (in contrast to grep -f), which at least makes it easier to estimate run times of arbitrarily large pattern files.

Duplicates and nothing

BTW, grep does not check whether there are duplicate lines in your pattern file. If there are, you might want to run it through sort -u or uniq first. Also, it does not check whether your input file is empty. Running my pattern file of 1900 lines over an empty file still took nearly 10 seconds 😉

$ rm testmail_for_grep.txt ; touch testmail_for_grep.txt
$ time grep -f pattern-file testmail_for_grep.txt
9.879 seconds
while read line ; do grep “$line” testmail_for_grep.txt ; done < pattern-file The run time of this method was a bit more than 4 seconds for 1900 patterns. This is much slower than using split but in cases where one does not want to write temporary files it might come in handy. Note also that this method always scales linearly (in contrast to grep -f) which at least makes it easier to estimate run times of arbitrarily large pattern files. Duplicates and nothing BTW, grep does not check whether there are duplicate lines in your pattern file. If there are you might want to run it through sort -u or uniq first. Also, it does not check whether your input file is empty. Running my pattern file of 1900 lines over an empty file still took nearly 10 seconds 😉 $ rm testmail_for_grep.txt ; touch testmail_for_grep.txt $ time grep -f pattern-file testmail_for_grep.txt 9.879 seconds