control file of qmail

Presented by: DMZ
Last updated: December 31, 2003

The control files that govern qmail's behaviour live in the /var/qmail/control directory. Unlike sendmail and similar MTAs, qmail does not gather everything into a single configuration file; each setting is kept in its own control file.
The configuration files include the following.

Control file       Default             Used by        Purpose
badmailfrom        (none)              qmail-smtpd    blacklist of envelope sender addresses
bouncefrom         MAILER-DAEMON       qmail-send     username of the bounce sender
bouncehost         me                  qmail-send     host name of the bounce sender
concurrencylocal   10                  qmail-send     maximum number of simultaneous local deliveries
concurrencyremote  20                  qmail-send     maximum number of simultaneous remote deliveries
defaultdomain      me                  qmail-inject   default domain name
defaulthost        me                  qmail-inject   default host name
databytes          0                   qmail-smtpd    maximum message size in bytes (0 = no limit)
doublebouncehost   me                  qmail-send     host name used for double bounces
doublebounceto     postmaster          qmail-send     user who receives double bounces
envnoathost        me                  qmail-send     default domain for addresses without an "@"
helohost           me                  qmail-remote   host name used in the SMTP HELO
idhost             me                  qmail-inject   host name used in Message-IDs
localiphost        me                  qmail-smtpd    host name substituted for the local IP address
locals             me                  qmail-send     domain names treated as local
me                 FQDN of the system  (many)         default for many other control files (machine name plus the official domain name)
morercpthosts      (none)              qmail-smtpd    secondary rcpthosts database
percenthack        (none)              qmail-send     domains that may use "%"-style relaying
plusdomain         me                  qmail-inject   domain name substituted for a trailing "+"
qmqpservers        (none)              qmail-qmqpc    IP addresses of QMQP servers
queuelifetime      604800              qmail-send     how long a message may remain in the queue (seconds)
rcpthosts          (none)              qmail-smtpd    domain names for which mail is accepted
smtpgreeting       me                  qmail-smtpd    SMTP greeting message
smtproutes         (none)              qmail-remote   artificial SMTP routes; mail arriving for the listed destinations is forwarded to the host you specify
timeoutconnect     60                  qmail-remote   how long to wait for an SMTP connection (seconds)
timeoutremote      1200                qmail-remote   how long to wait for the remote server (seconds)
timeoutsmtpd       1200                qmail-smtpd    how long to wait for the SMTP client (seconds)
virtualdomains     (none)              qmail-send     virtual domains and their users
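qmail's control files are deliberately simple: one value per line, with a compiled-in default when the file is missing or empty. A minimal sketch of that lookup convention in Python (the helper name is my own, not part of qmail):

```python
def control_value(text, default):
    """Return the first non-empty line of a control file's contents,
    falling back to the compiled-in default when the file is empty."""
    for line in text.splitlines():
        line = line.strip()
        if line:
            return line
    return default

# concurrencylocal defaults to 10 when the file is absent or empty
print(control_value("", "10"))      # 10
print(control_value("20\n", "10"))  # 20
```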

For example, put the following in the defaulthost file:

hogehoge.co.jp

With this setting, the host name used in the header information of outgoing mail becomes

username@hogehoge.co.jp

so the other party sees only the name you specified. In the control file called locals, list your own domain name, and preferably also the host-qualified forms of that domain. Mail addressed to the names you specify here is treated as destined for local users on this server.

localhost
hogehoge.co.jp
ms.hogehoge.co.jp

With this, the server should be able to receive mail addressed to those names from external mail servers. The rcpthosts file specifies the domains (the part after the "@") for which the server accepts mail, so be sure to list everything you want to accept. Its contents are almost the same as those of locals. The difference is that mail for an address listed in locals is processed as mail destined for this server itself, whereas rcpthosts can also accept mail that is not addressed to this server and relay it onward after receipt.

localhost
hogehoge.co.jp
.hogehoge.co.jp

When the last entry is written as ".hogehoge.co.jp", with a leading dot, mail addressed to any subdomain of hogehoge.co.jp is also received.
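The matching rule shared by locals and rcpthosts can be sketched in Python (a simplified model of qmail's behaviour, not qmail code): an entry beginning with a dot matches any name ending in it, otherwise the match is exact.

```python
def accepts(recipient_domain, entries):
    """True when the domain matches a control-file entry; a leading dot
    makes the entry match every subdomain."""
    d = recipient_domain.lower()
    for entry in entries:
        e = entry.lower()
        if e.startswith("."):
            if d.endswith(e):
                return True
        elif d == e:
            return True
    return False

entries = ["localhost", "hogehoge.co.jp", ".hogehoge.co.jp"]
print(accepts("mail.hogehoge.co.jp", entries))  # True, via the dot entry
print(accepts("hogehoge.co.jp", entries))       # True, exact match
print(accepts("example.org", entries))          # False
```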

With the settings above, mail arriving at the qmail server is delivered to users on the qmail server. To send mail to another mail server *through* the qmail server, however, the server has to perform "mail relay". Simply allowing relaying is risky: external users could exploit the server to send spam to other external users, so qmail does not allow relaying by default.
Mail from internal users is exempted from this restriction by means of the RELAYCLIENT="" environment variable, as explained in section 2-1. And if you build the qmail server in a DMZ and want to forward mail arriving from outside the firewall to a host inside it, you can write the destination in the control file named smtproutes.

(Example of smtproutes)

.hogehoge.co.jp:betu-host.hogehoge.co.jp
hogehoge.co.jp:betu-host.hogehoge.co.jp
.hogehoge.co.jp:[202.12.30.144]
:betu-host.hogehoge.co.jp

With entries like these, the first line (the pattern begins with a dot) forwards all mail addressed to *@anything.hogehoge.co.jp to betu-host.
To use an IP address instead of a domain name, enclose it in square brackets ([]), as in ".hogehoge.co.jp:[192.xxx.xxx.xxx]".
A line of the form ":destination" (an empty pattern) specifies the default destination. For the precise meaning of each control file, see "qmail-control" under the "qmail Annex" at http://www.jp.qmail.org/.
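The route selection just described can be modelled in a few lines (a sketch of the smtproutes lookup with illustrative host names; real qmail applies some additional rules this ignores):

```python
def route_for(domain, routes):
    """Pick the smtproutes destination for a domain.  Each route is a
    'pattern:host' line: a pattern starting with '.' matches subdomains,
    a bare pattern matches exactly, and an empty pattern is the default."""
    d = domain.lower()
    default = None
    for line in routes:
        pattern, _, host = line.partition(":")
        p = pattern.lower().strip()
        if p == "":
            default = host            # ':host' line sets the default route
        elif p.startswith("."):
            if d.endswith(p):
                return host
        elif d == p:
            return host
    return default

routes = [
    ".hogehoge.co.jp:betu-host.hogehoge.co.jp",
    "hogehoge.co.jp:betu-host.hogehoge.co.jp",
    ":relay.example.com",             # illustrative default route
]
print(route_for("mail.hogehoge.co.jp", routes))  # betu-host.hogehoge.co.jp
print(route_for("other.com", routes))            # relay.example.com
```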

Exchange 2010 Relaying – How to use it, how to turn it off

Email Relay is one of the more annoying features of email servers. However, there are times it can be pretty useful. It’s annoying because Spammers love to exploit it, and it’s useful because it can allow you to centralize a lot of email services.

What is Email Relay?

Email relay is, quite simply, a feature that allows one email server to use another email server for sending mail. In a relay setup, one SMTP server is configured to send all outgoing mail through another email server, even when the recipient's address is not part of the second server's organization. In a relay situation, Server 1 connects to Server 2 and attempts to send an email using SMTP. However, unlike a normal SMTP session where Server 1 sets the recipient to an email address that "belongs" to Server 2, Server 1 tries to send an email to a recipient in a completely different organization. A successful relay basically means that Server 1 can use Server 2, which accepts email for company1.com, to send email to company2.com.
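The decision at the heart of relaying can be stated in a few lines (an illustration of the concept, not any server's actual code): a recipient outside the domains the server accepts mail for makes the session a relay attempt.

```python
def is_relay_attempt(rcpt_address, accepted_domains):
    """True when the recipient's domain is not one the server accepts
    mail for (real servers also consult authentication and client IP)."""
    domain = rcpt_address.rpartition("@")[2].lower()
    return domain not in {d.lower() for d in accepted_domains}

# Server 2 accepts mail for company1.com only:
print(is_relay_attempt("user@company1.com", ["company1.com"]))  # False: normal delivery
print(is_relay_attempt("user@company2.com", ["company1.com"]))  # True: relay attempt
```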

How is Relay Useful?

Usually, there’s very little, if any, need to use email relay. But there may be situations where you have an application or device that has its own email server solution built in that needs to be able to send email to various recipients. Without the ability to relay, that application or device would need to have wide open access to the Internet in order to send email. This is not always an optimal solution, especially if you already have an email solution in place. It’s simply more secure to have that application or device relay mail through the central email solution.

How is Relay a Pain?

Allowing relay on an email server can cause some major problems, though. The biggest problem is with spammers. Spammers have software that will go to as many public IP addresses as possible, looking for IPs that respond on port 25. If a server responds, the software will attempt to send an email to a recipient by creating a relay session. If the relay session succeeds, that server is tagged as an “Open Relay” and the software will attempt to use that server as a source for loads and loads of Spam. This often results in massive mail queues and the server that is being used to relay mail will often be blacklisted and legitimate mail from that server ends up getting blocked by email systems that use blacklists as a form of spam filtering. In other words, having an open relay can cripple your Email infrastructure in any number of ways.

Relaying with Exchange 2010

By default, Exchange 2010 does not allow relaying. In fact, the last Email server developed by Microsoft that allowed relay by default was Exchange 2003. However, it is possible to configure Exchange 2010 to work as a relay, but you have to be careful with it because you don’t want to turn your Exchange server into an open relay for spammers to use and abuse.

Relaying in Exchange 2010 (and 2007 if you haven’t made the jump to 2010) is accomplished through the use of a simple setting that exists on the Receive Connectors. It’s called Externally Secured Authentication. Unfortunately, MS didn’t do a very good job at explaining what that setting actually does. The setting exists on the Authentication tab of the Receive Connector properties screen in the Exchange Management Console. The image below shows this setting:

 

[Screenshot: the Receive Connector's Authentication tab, where the relay setting is configured]

 

By default, you can see that the entire IPv4 range shows up (I have IPv6 disabled on my email server; on yours, the whole IPv6 range may show here as well). Select all entries shown here and click the red X to remove them. Click Add and enter the IP address of the server you want to allow to relay. Click Next, then New to finish the wizard and create the connector.

6. Once the wizard is done and the connector is created, you should see it in the EMC. Right-click the new connector and go to the Permission Groups tab first. Select Anonymous users and Exchange servers (you have to do this for Externally Secured Authentication to be a valid selection). You can also check whatever other groups you want.

7. Click the Authentication tab, select the Externally Secured Authentication box, remove all other check marks, and click Apply. The connector will now be set up properly to allow relay.

8. Click the Network tab and make sure that only the servers you want to relay are listed under “Receive mail from remote servers that have these IP addresses.” If you still have the full IP range listed the server will be an Open Relay at this point.

Note that when you do this, all communications between the sending server and the relay server will be in clear text. This means that anyone sniffing traffic between the two mail servers can read the emails with ease. This is usually not a big deal on an internal network, but if the traffic crosses an untrusted network you'll want to make sure there actually *is* an external encryption layer between the two servers to secure the transmission of data.

How To Close An Open Relay In Exchange 2007 / 2010

If you have an Exchange 2007 or Exchange 2010 server and you discover that you are an Open Relay, there is a very simple command that you can run from the Exchange Management Shell to close this down.
The command is:
Get-ReceiveConnector "YourReceiveConnectorName" | Remove-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"
Replace "YourReceiveConnectorName" with the name of your Receive Connector and then run the command.
To test if you are an open relay, you can visit MXToolbox or Mailradar.
If you want to check to see if you are allowing “ms-Exch-SMTP-Accept-Any-Recipient” on any Receive Connector for Anonymous Users, run the following command from the Exchange Management Shell:
Get-ReceiveConnector | Get-ADPermission | where {($_.ExtendedRights -like "*SMTP-Accept-Any-Recipient*")} | where {$_.User -like '*anonymous*'} | ft identity,user,extendedrights
08/04/2014 Update - If you still have a problem after modifying your receive connector(s) accordingly, make sure you or someone else hasn't installed the SMTP Service on the Exchange server. I was emailed about such a problem with an Exchange 2010 server the other day: even after stopping ALL of the Exchange services, the server was STILL an open relay. A quick NETSTAT command to see what was listening on port 25 revealed that the SMTP service was present and enabled. After disabling that service and restarting all the Exchange services, the open relay problem disappeared immediately.

Block Spam Mail with Qmail

Qmail is a modern, secure and powerful SMTP email system. We used QmailRocks as a qmail installation resource.
I would like to introduce a few steps for "Blocking Spam Mail with Qmail".

1. Qmail block mail from spammers based on the envelope sender
Qmail has the ability to unconditionally block mail from spammers based on the envelope sender (which may not be the same as the “From:” field in the header, don’t be surprised if this approach misses some emails that you think it should catch). In other words, if the spammers don’t lie about their sending domain, qmail may be able to block them before the mail message is even transmitted. This cuts down on things like bounces, and hopefully spam!
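The envelope-sender check works roughly like this (a sketch of qmail's badmailfrom matching: an entry is either a complete address or "@domain", which blocks every sender at that domain, the format the sa-blacklist.current.at-domains file uses):

```python
def envelope_blocked(envelope_sender, badmailfrom):
    """True when the MAIL FROM address matches a badmailfrom entry,
    either exactly or via an '@domain' entry."""
    s = envelope_sender.lower()
    at_domain = "@" + s.rpartition("@")[2]
    return any(e.lower() in (s, at_domain) for e in badmailfrom)

blacklist = ["@zzzsoft.com", "spammer@example.net"]
print(envelope_blocked("testing@zzzsoft.com", blacklist))  # True -> 553 response
print(envelope_blocked("user@planetmy.com", blacklist))    # False -> 250 ok
```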

cd /var/qmail/control
Download the sa-blacklist.current.at-domains file

mv sa-blacklist.current.at-domains badmailfrom
OR append it to badmailfrom

/var/qmail/control/badmailfrom
is the file you should look at to block all mail
from a particular domain.
Restart qmail (e.g. qmailctl stop; qmailctl start)
Let’s test it

[root@planetmy]# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
220 planetmy.com ESMTP
MAIL FROM: testing@zzzsoft.com
250 ok
RCPT TO: user@planetmy.com
553 sorry, your envelope sender is in my badmailfrom list (#5.7.1)
In this case, we tried to send mail from an account at a known spammer domain, zzzsoft.com. We then told the mail server where the mail needed to go. The mail server refused to accept mail from zzzsoft.com because we'd correctly installed the qmail block list.
Congratulations! You're done!

How to disable spammer domain in QMAIL mail server with badmailto variable

I've recently noticed that one of the qmail SMTP servers I administer had logged plenty of spam emails originating from yahoo.com.tw, destined for random-looking (and probably nonexistent) addresses, again at *@yahoo.com.tw.

The spam the spammer is attempting is probably bounce spam, since there seems to be no web form or misconfiguration on the qmail server that might be causing the trouble.
As a result, some of the mail from this otherwise well-configured qmail server (which passes SPF checks, has correct MX and PTR records, and even has DomainKeys/DKIM configured) started being marked as spam, even when sent to legitimate *@yahoo.com addresses.

To deal with this mess, and since we don't have any Taiwanese (tw) clients, I decided to completely prohibit any mail sent through the server to *@yahoo.com.tw. This is done via the /var/qmail/control/badmailto qmail control file.

Here is the content of /var/qmail/control/badmailto after banning outgoing mail to yahoo.com.tw:

qmail:~# cat /var/qmail/control/badmailto
[!%#:*^]
[()]
[{}]
@.*@
*@yahoo.com.tw

The first four lines are default rules, which catch a lot of commonly abused badmailto patterns. Thank God, after a qmail restart:

qmail:~# qmailctl restart
….

Checking /var/log/qmail-sent/current, there is no more outgoing mail destined for *@yahoo.com.tw. Problem solved …
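For illustration, the badmailto entries above behave like a list of regular expressions applied to each recipient; the sketch below models that in Python (note I've written the last entry as a proper regex, `.*@yahoo\.com\.tw`, since a bare leading `*` is not valid regex syntax):

```python
import re

# The four default rules plus the yahoo.com.tw ban, as regexes
badmailto = [r"[!%#:*^]", r"[()]", r"[{}]", r"@.*@", r".*@yahoo\.com\.tw"]

def recipient_blocked(rcpt):
    """True when any badmailto pattern matches the recipient address."""
    return any(re.search(p, rcpt) for p in badmailto)

print(recipient_blocked("anything@yahoo.com.tw"))  # True: mail is refused
print(recipient_blocked("user@planetmy.com"))      # False
print(recipient_blocked("a@b@c.com"))              # True: multiple @'s
```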

qmail toaster

qmail is a secure, reliable, efficient and simple MTA written by Dan J. Bernstein. It has been security bug free since 1998 and is freely available.

But vanilla qmail does not support security mechanisms like SMTP authentication or SSL/TLS. While it supports RBLs via tcpserver, it has no anti-spam features like checking the envelope sender or tarpitting SMTP connections. It also has no hook for virus scanners or spam filters. And last but not least, it lacks some nice-to-have features.

Nevertheless qmail is one of the best choices for running an MTA.

There are several patches and patch collections that add single or multiple extensions to qmail. This zeitform qmail toaster is another one. Check what we provide and use this patch if it fits your needs. You are welcome.


FEATURE OVERVIEW

The zeitform qmail toaster adds the following features to qmail:

ANTI-SPAM AND ANTI-VIRUS

  • Block executable attachments at SMTP level
  • Hook for qmail-queue replacement (via QMAILQUEUE) enables qmail to run a virus scanner and/or spam filter on every message [*]
  • Check for resolvable domain within the Envelope-From
  • Tarpit SMTP dialog for a large number of mail recipients
  • Filter bad HELO-strings, envelope senders and recipients based on regular expressions

SECURITY ENHANCEMENTS

  • Support for STARTTLS and SMTP over SSL/TLS as Client and Server
  • SMTP authentication via LOGIN, PLAIN or CRAM-MD5
  • POP3 authentication via CRAM-MD5

OTHER ENHANCEMENTS

  • Standard compliant ESMTP SIZE command
  • CAPA command for POP3
  • Skip over MX servers that greet with 4xx or 5xx and try next MX (RFC-2821 compliance)
  • Support for Maildir++ (maildirquota) for vpopmail
  • Check existence of vpopmail user before accepting message at SMTP level

BUGFIXES AND WORKAROUNDS

  • Compile with the new glibc (2.3.1 or newer) [*]
  • Fix a bug when .qmail contains only tabs within a line [*]
  • Recognize 0.0.0.0 as a local IP address; this prevents spammers from spoofing it [*]
  • Support the sendmail -f flag [*]
  • Improve ISO C conformance [*]
  • Handle oversized DNS packets
  • Return correct number of messages on POP3 STAT command
  • Linux: reliability for EXT2 and ReiserFS

All features marked [*] are also included in netqmail-1.05.


DOWNLOAD

Download the following files:


INSTALLATION

Install qmail as explained in Life with qmail.

If everything works correctly install the patches:

cd qmail-1.03
patch -p0 < ../zeitform-qmail-toaster-0.21.patch
make
make setup check


USAGE AND CONFIGURATION

The zeitform qmail toaster adds or modifies the following configuration files:

Table 1: configuration files
signatures signatures of executable content to block
badhelo containing regular expressions of bad HELO strings
badmailfrom containing regular expressions of bad senders
badmailto containing regular expressions of bad recipients
databytes max message size for incoming SMTP
clientcert.pem SSL certificate when acting as SMTP client
servercert.pem SSL certificate when acting as SMTP server
dh1024.pem 1024 bit DH key
dh512.pem 512 bit DH key
rsa512.pem 512 bit RSA key
clientca.pem list of CAs for client authentication
clientcrl.pem list of CRLs for client authentication
tlsclients list of E-Mail addresses for client authentication
tlsclientciphers list of openssl cipher strings for client
tlsserverciphers list of openssl cipher strings for server
tlshosts/* certificates for servers with required authentication

And it adds the following environment variables:

Table 2: environment variables
EXECUTABLEOK disables the blocking of executable attachments
QMAILQUEUE path to the qmail-queue replacement
DATABYTES overrides control/databytes
NOBADHELO disables the checking of HELO strings
SMTPS starts SMTP over TLS

BLOCK EXECUTABLE ATTACHMENTS

The blocking of executable attachments is controlled with the configuration file control/signatures. This file contains BASE64 signatures of the MIME attachments you want to block. To create your own signatures, look at the raw mail and include the significant bytes of the attachment's first line in the control file. The following example blocks Windows executables and includes (commented-out) signatures for Zip archives:

cat <<EOF >/var/qmail/control/signatures
# Windows executables seen in active virii
TVqQAAMAA
TVpQAAIAA
# Additional windows executable signatures not yet 
# seen in virii
TVpAALQAc
TVpyAXkAX
TVrmAU4AA
TVrhARwAk
TVoFAQUAA
TVoAAAQAA
TVoIARMAA
TVouARsAA
TVrQAT8AA
# .ZIPfile signature seen in SoBig.E and mydoom:
#UEsDBBQAA
#UEsDBAoAAA
EOF

To disable the blocking of executables set the environment variable EXECUTABLEOK.
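The signature check is easy to reproduce: BASE64-encode the attachment and compare the prefix of its first encoded line against the entries in control/signatures. The sketch below shows why the TVqQAAMAA entry above catches Windows executables, whose DOS header starts with the bytes MZ 90 00 03 00 00:

```python
import base64

signatures = ["TVqQAAMAA", "TVpQAAIAA"]  # taken from the example above

def first_line_signature(data, width=9):
    """The prefix a MIME encoder would put on the attachment's first
    BASE64 line, truncated to the signature width used here."""
    return base64.b64encode(data).decode("ascii")[:width]

exe_head = b"\x4d\x5a\x90\x00\x03\x00\x00"  # 'MZ' DOS header of a PE file
sig = first_line_signature(exe_head)
print(sig)                                  # TVqQAAMAA
print(sig in signatures)                    # True: attachment is blocked
```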

USING A QMAIL-QUEUE REPLACEMENT

To use a replacement for qmail-queue, set the environment variable QMAILQUEUE to the path of the replacement. A good example is Qmail-Scanner. It allows you to run all incoming messages through one or more virus scanners (like Clam AntiVirus or a variety of commercial products) and/or SpamAssassin for spam filtering.

CHECKING THE ENVELOPE-FROM

If you receive mail from user@domain.com and the mail cannot be delivered to the recipient, it must be bounced. If domain.com does not exist, qmail sends a double bounce.

As most spammers fake the sender address — even to non-existent ones — it can be reasonable to check if the Envelope-From’s domain exists.

If domain.com can’t be resolved via DNS, qmail will not accept the message for delivery.
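The idea can be demonstrated with a plain DNS lookup (a crude stand-in: the patch checks whether the Envelope-From's domain resolves, while socket.getaddrinfo below only performs an address lookup, not an MX query):

```python
import socket

def sender_domain_resolves(envelope_from):
    """Accept the message only if the sender's domain resolves."""
    domain = envelope_from.rpartition("@")[2]
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

print(sender_domain_resolves("user@localhost"))               # True
print(sender_domain_resolves("user@no-such-domain.invalid"))  # False
```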

TARPITTING

Regular users won't send messages to a large number of recipients; spammers do. To make life a bit harder for spammers, tarpitting inserts a small delay between accepted recipients. With this feature, qmail can be configured to insert delays once a certain number of recipients is exceeded.
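The behaviour is easy to state as a toy model in Python (the threshold and delay values here are illustrative only; the real patch takes them from its own configuration):

```python
import time

def accept_recipients(recipients, tarpit_count=5, tarpit_delay=0.01):
    """Accept every RCPT TO, but sleep for tarpit_delay seconds on each
    recipient beyond the tarpit_count threshold."""
    accepted = []
    for i, rcpt in enumerate(recipients, start=1):
        if i > tarpit_count:
            time.sleep(tarpit_delay)  # the tarpit: slow bulk senders down
        accepted.append(rcpt)
    return accepted

rcpts = ["user%d@example.com" % i for i in range(8)]
print(len(accept_recipients(rcpts)))  # 8: all accepted, the last 3 delayed
```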

CHECKING HELO-STRINGS, SENDERS AND RECIPIENTS WITH REGULAR EXPRESSIONS

Vanilla qmail can filter incoming mail based on a list of bad senders in the file control/badmailfrom, but it does not support regular expressions.

With this patch control/badmailfrom is expanded to understand regular expressions and the files control/badmailto and control/badhelo are added that keep a regex based list of bad recipients and bad HELO-strings. For further details see the file README.qregex. Some examples:

# example for "badhelo"
# block host strings with no dot (not a FQDN)
!\.
# example for "badmailfrom"
# drop everything containing the word spam
.*spam.*
# force users to fully qualify themselves
# (ie deny "user", accept "user@domain")
!@
# example for "badmailto"
# must not contain invalid characters, brackets or multiple @'s
[!%#:*^(){}]
@.*@
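Per README.qregex, a rule with a leading "!" is negated: the value is rejected when it does *not* match the rest of the expression. A small sketch of that evaluation:

```python
import re

def rule_rejects(value, rule):
    """qregex-style test: '!regex' rejects values NOT matching regex,
    while a plain regex rejects values that do match."""
    if rule.startswith("!"):
        return re.search(rule[1:], value) is None
    return re.search(rule, value) is not None

print(rule_rejects("localhost", r"!\."))         # True: HELO with no dot is rejected
print(rule_rejects("mail.example.com", r"!\."))  # False
print(rule_rejects("user", "!@"))                # True: sender not fully qualified
```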

SMTP AND POP3 PROTOCOL EXTENSIONS

SMTP AUTH adds authentication to the SMTP protocol, and to qmail-smtpd in particular. This enables selective relaying for users on dynamic IP addresses. The applied patch supports authentication via PLAIN, LOGIN or CRAM-MD5 SASL. All mechanisms but CRAM-MD5 send the password unencrypted and should be avoided in unencrypted SMTP sessions.

SMTP SIZE adds the SIZE command to qmail. By default, qmail limits the size of incoming messages to the number of bytes given in control/databytes, but it does not advertise this limit. SMTP clients that observe the SIZE value will not start the DATA phase for larger messages, which saves traffic.

STARTTLS adds SSL/TLS encryption to the SMTP session after the command is issued. Please see README.tls for details and configuration issues.

Example:

220 mail.zeitform.de ESMTP
EHLO host.de
250-mail.zeitform.de
250-STARTTLS
250-AUTH LOGIN CRAM-MD5 PLAIN
250-AUTH=LOGIN CRAM-MD5 PLAIN
250-PIPELINING
250-8BITMIME
250 SIZE 50000000

POP3 CAPA is a command that shows the capabilities of a POP3 server. Vanilla qmail does not offer this command. It is required to advertise the available AUTH methods.

POP3 AUTH offers SASL authentication via CRAM-MD5. While this is not strictly necessary as APOP provides a secure way of authentication (without plaintext password), some clients support it and it is considered more secure than APOP. Using CRAM-MD5 authentication with vpopmail required a patch for vchkpw.

Example:

+OK <23137.1078842811@guildenstern.zeitform.de>
CAPA
+OK Capability list follows
TOP
UIDL
LAST
USER
APOP
SASL CRAM-MD5

For further information on the protocols POP3 and SMTP:

VPOPMAIL SUPPORT

The zeitform qmail toaster adds Maildir++ quota support to qmail. This improves the interoperability with vpopmail.

If a message arrives for a recipient address that has no valid user associated with it (neither as a POP3 account nor as a forward to a different address), vpopmail may deliver the message to a catch-all account (e.g. postmaster) or bounce it as undeliverable (bounce-no-mailbox). With the chkuser patch this check can be done at the SMTP level, i.e. after the client issues the RCPT TO command. If a message would be undeliverable, qmail-smtpd answers with an error message instead of accepting the message and handling the bounce. With the increase of spam, this looks like the better approach.

RCPT TO:<non-existant@domain.com>
550 sorry, no mailbox here by that name (#5.1.1 - chkusr)


LICENSE

Most of the patches within the zeitform qmail toaster are from other people. Most of them did not include any copyright or license information. So if you are in trouble, contact them for their lines of code.

This documentation and the merging of all patches was done by us. So we have some copyright after all. Where it applies, the license is either the GNU GPL or the GNU FDL, whichever fits better.

THE PATCH IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE PATCH OR THE USE OR OTHER DEALINGS IN THE PATCH.


REFERENCES AND CREDITS

The zeitform qmail toaster uses the patches, code or advice from a variety of people (in alphabetical order). The original patches are given as reference where it is possible.

CENTOS 7 LNMP (Nginx -PHP -MySQL)

rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

systemctl start nginx.service

systemctl stop httpd.service
yum remove httpd
systemctl disable httpd.service
[root@centos71 ~]# systemctl start nginx.service
[root@centos71 ~]# systemctl restart nginx.service
[root@centos71 ~]# systemctl status nginx.service
nginx.service - nginx - high performance web server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled)
Active: active (running) since Thu 2014-09-11 14:32:02 SGT; 8s ago
Docs: http://nginx.org/en/docs/
Process: 38314 ExecStop=/bin/kill -s QUIT $MAINPID (code=exited, status=0/SUCCESS)
Process: 38319 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
Process: 38318 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
Main PID: 38322 (nginx)
CGroup: /system.slice/nginx.service
├─38322 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
└─38323 nginx: worker process

Sep 11 14:32:02 centos71.rmohan.com systemd[1]: Starting nginx - high performance web server…
Sep 11 14:32:02 centos71.rmohan.com nginx[38318]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Sep 11 14:32:02 centos71.rmohan.com nginx[38318]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Sep 11 14:32:02 centos71.rmohan.com systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
Sep 11 14:32:02 centos71.rmohan.com systemd[1]: Started nginx - high performance web server.

Enable Firewall Rule

[root@centos71 ~]# firewall-cmd --permanent --zone=public --add-service=http
success

[root@centos71 ~]# firewall-cmd --permanent --zone=public --add-service=https
success

[root@centos71 ~]# firewall-cmd --reload
success
http://192.168.1.9/
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
[root@centos71 ~]# ip addr show eno16777736 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
192.168.1.9
fe80::20c:29ff:fe7c:a85a

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]
New password:
Re-enter new password:
Sorry, passwords do not match.

New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
… Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
… Success!

Normally, root should only be allowed to connect from ‘localhost’. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
… Success!

By default, MariaDB comes with a database named ‘test’ that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
– Dropping test database…
… Success!
– Removing privileges on test database…
… Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
… Success!

Cleaning up…

All done! If you’ve completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

[root@centos71 ~]# systemctl enable mariadb.service
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
[root@centos71 ~]# systemctl restart mariadb.service
[root@centos71 ~]# systemctl status mariadb.service
mariadb.service – MariaDB database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled)
Active: active (running) since Thu 2014-09-11 14:57:59 SGT; 6s ago
Process: 39423 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
Process: 39395 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
Main PID: 39422 (mysqld_safe)
CGroup: /system.slice/mariadb.service
├─39422 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
└─39580 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/…

Sep 11 14:57:57 centos71.rmohan.com mysqld_safe[39422]: 140911 14:57:57 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Sep 11 14:57:57 centos71.rmohan.com mysqld_safe[39422]: 140911 14:57:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Sep 11 14:57:59 centos71.rmohan.com systemd[1]: Started MariaDB database server.
[root@centos71 ~]#
[root@centos71 ~]# yum install php php-mysql php-fpm
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: buaya.klas.or.id
* extras: repo.apiknet.co.id
* updates: kartolo.sby.datautama.net.id
Resolving Dependencies
--> Running transaction check
---> Package php.x86_64 0:5.4.16-23.el7_0 will be installed
--> Processing Dependency: php-common(x86-64) = 5.4.16-23.el7_0 for package: php-5.4.16-23.el7_0.x86_64
--> Processing Dependency: php-cli(x86-64) = 5.4.16-23.el7_0 for package: php-5.4.16-23.el7_0.x86_64
--> Processing Dependency: httpd-mmn = 20120211x8664 for package: php-5.4.16-23.el7_0.x86_64
--> Processing Dependency: httpd for package: php-5.4.16-23.el7_0.x86_64
---> Package php-fpm.x86_64 0:5.4.16-23.el7_0 will be installed
---> Package php-mysql.x86_64 0:5.4.16-23.el7_0 will be installed
--> Processing Dependency: php-pdo(x86-64) = 5.4.16-23.el7_0 for package: php-mysql-5.4.16-23.el7_0.x86_64
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-18.el7.centos will be installed
--> Processing Dependency: httpd-tools = 2.4.6-18.el7.centos for package: httpd-2.4.6-18.el7.centos.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-18.el7.centos.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-18.el7.centos.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-18.el7.centos.x86_64
---> Package php-cli.x86_64 0:5.4.16-23.el7_0 will be installed
---> Package php-common.x86_64 0:5.4.16-23.el7_0 will be installed
--> Processing Dependency: libzip.so.2()(64bit) for package: php-common-5.4.16-23.el7_0.x86_64
---> Package php-pdo.x86_64 0:5.4.16-23.el7_0 will be installed
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-18.el7.centos will be installed
---> Package libzip.x86_64 0:0.10.1-8.el7 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================
Installing:
php x86_64 5.4.16-23.el7_0 updates 1.3 M
php-fpm x86_64 5.4.16-23.el7_0 updates 1.4 M
php-mysql x86_64 5.4.16-23.el7_0 updates 96 k
Installing for dependencies:
apr x86_64 1.4.8-3.el7 base 103 k
apr-util x86_64 1.5.2-6.el7 base 92 k
httpd x86_64 2.4.6-18.el7.centos updates 2.7 M
httpd-tools x86_64 2.4.6-18.el7.centos updates 77 k
libzip x86_64 0.10.1-8.el7 base 48 k
mailcap noarch 2.1.41-2.el7 base 31 k
php-cli x86_64 5.4.16-23.el7_0 updates 2.7 M
php-common x86_64 5.4.16-23.el7_0 updates 560 k
php-pdo x86_64 5.4.16-23.el7_0 updates 94 k

Transaction Summary
================================================================================================================================================================================
Install 3 Packages (+9 Dependent packages)

Total download size: 9.3 M
Installed size: 32 M
Is this ok [y/d/N]: y
Downloading packages:
(1/12): apr-util-1.5.2-6.el7.x86_64.rpm | 92 kB 00:00:00
(2/12): httpd-tools-2.4.6-18.el7.centos.x86_64.rpm | 77 kB 00:00:00
(3/12): libzip-0.10.1-8.el7.x86_64.rpm | 48 kB 00:00:00
(4/12): mailcap-2.1.41-2.el7.noarch.rpm | 31 kB 00:00:00
(5/12): apr-1.4.8-3.el7.x86_64.rpm | 103 kB 00:00:01
(6/12): php-5.4.16-23.el7_0.x86_64.rpm | 1.3 MB 00:00:01
(7/12): php-common-5.4.16-23.el7_0.x86_64.rpm | 560 kB 00:00:02
(8/12): php-mysql-5.4.16-23.el7_0.x86_64.rpm | 96 kB 00:00:05
(9/12): php-pdo-5.4.16-23.el7_0.x86_64.rpm | 94 kB 00:00:04
(10/12): php-fpm-5.4.16-23.el7_0.x86_64.rpm | 1.4 MB 00:00:15
(11/12): php-cli-5.4.16-23.el7_0.x86_64.rpm | 2.7 MB 00:00:18
(12/12): httpd-2.4.6-18.el7.centos.x86_64.rpm | 2.7 MB 00:00:24
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 392 kB/s | 9.3 MB 00:00:24
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : apr-1.4.8-3.el7.x86_64 1/12
Installing : apr-util-1.5.2-6.el7.x86_64 2/12
Installing : httpd-tools-2.4.6-18.el7.centos.x86_64 3/12
Installing : libzip-0.10.1-8.el7.x86_64 4/12
Installing : php-common-5.4.16-23.el7_0.x86_64 5/12
Installing : php-cli-5.4.16-23.el7_0.x86_64 6/12
Installing : php-pdo-5.4.16-23.el7_0.x86_64 7/12
Installing : mailcap-2.1.41-2.el7.noarch 8/12
Installing : httpd-2.4.6-18.el7.centos.x86_64 9/12
Installing : php-5.4.16-23.el7_0.x86_64 10/12
Installing : php-mysql-5.4.16-23.el7_0.x86_64 11/12
Installing : php-fpm-5.4.16-23.el7_0.x86_64 12/12
Verifying : php-common-5.4.16-23.el7_0.x86_64 1/12
Verifying : php-mysql-5.4.16-23.el7_0.x86_64 2/12
Verifying : apr-1.4.8-3.el7.x86_64 3/12
Verifying : mailcap-2.1.41-2.el7.noarch 4/12
Verifying : php-cli-5.4.16-23.el7_0.x86_64 5/12
Verifying : apr-util-1.5.2-6.el7.x86_64 6/12
Verifying : php-5.4.16-23.el7_0.x86_64 7/12
Verifying : libzip-0.10.1-8.el7.x86_64 8/12
Verifying : php-pdo-5.4.16-23.el7_0.x86_64 9/12
Verifying : httpd-tools-2.4.6-18.el7.centos.x86_64 10/12
Verifying : php-fpm-5.4.16-23.el7_0.x86_64 11/12
Verifying : httpd-2.4.6-18.el7.centos.x86_64 12/12

Installed:
php.x86_64 0:5.4.16-23.el7_0 php-fpm.x86_64 0:5.4.16-23.el7_0 php-mysql.x86_64 0:5.4.16-23.el7_0

Dependency Installed:
apr.x86_64 0:1.4.8-3.el7 apr-util.x86_64 0:1.5.2-6.el7 httpd.x86_64 0:2.4.6-18.el7.centos httpd-tools.x86_64 0:2.4.6-18.el7.centos libzip.x86_64 0:0.10.1-8.el7
mailcap.noarch 0:2.1.41-2.el7 php-cli.x86_64 0:5.4.16-23.el7_0 php-common.x86_64 0:5.4.16-23.el7_0 php-pdo.x86_64 0:5.4.16-23.el7_0

Complete!

vim /etc/php.ini
Find the cgi.fix_pathinfo line, uncomment it, and set it to "0" like this:

cgi.fix_pathinfo=0

vi /etc/php-fpm.d/www.conf
Find the line that specifies the listen parameter, and change it so it looks like the following:

listen = /var/run/php-fpm/php-fpm.sock
systemctl start php-fpm
systemctl restart php-fpm
systemctl status php-fpm
systemctl enable php-fpm.service
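The listen change above can also be scripted with sed; a sketch on a scratch copy (on a real host you would edit /etc/php-fpm.d/www.conf itself, after backing it up):

```shell
# Create a scratch copy with the stock TCP listen line for the demo
printf 'listen = 127.0.0.1:9000\n' > www.conf.demo
# Point php-fpm at a unix socket instead of a TCP port
sed -i 's|^listen = .*|listen = /var/run/php-fpm/php-fpm.sock|' www.conf.demo
grep '^listen' www.conf.demo
```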
cp /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.org
server {
listen 80;
server_name 192.168.1.9;

root /usr/share/nginx/html;
index index.php index.html index.htm;

location / {
try_files $uri $uri/ =404;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
We will call this script info.php. In order for Nginx to find the file and serve it correctly, it must be saved to a very specific directory, which is called the "web root".

In CentOS 7, this directory is located at /usr/share/nginx/html/.
We can create the file at that location by typing:
sudo vi /usr/share/nginx/html/info.php
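The file itself only needs a phpinfo() call; a minimal sketch (written to the current directory here, move it to the web root on a real host):

```shell
# Write a minimal phpinfo page
cat > info.php <<'EOF'
<?php
phpinfo();
EOF
cat info.php
```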

RAID

1. Software RAID vs hardware RAID:
a> software RAID is an abstraction layer in the OS between physical and logical disks, and this layer consumes some CPU; hardware RAID does not have this problem;
b> hardware RAID supports hot-swappable disks, so a damaged disk can be replaced online. Newer SATA software RAID can also hot-swap (but the disk must first be removed from the array on the command line before pulling it);
c> hardware RAID vendors usually also provide a number of extra features;
d> hardware RAID requires a dedicated controller, which means paying more money;

2. Besides software and hardware RAID there is also "semi-hard, semi-soft" RAID (host RAID). It hands the I/O work back to the CPU, and generally does not support RAID 5;

3. When building a software RAID, pay attention to the chunk size and the stride value passed to mkfs (details below);

4. RAID 0:
a> the best performance among non-nested RAID levels;
b> not limited to two disks; three or more work, but the gain has diminishing returns (e.g. if one disk delivers 50MB/s, two disks in RAID 0 give about 96MB/s, and three give perhaps 130MB/s rather than 150MB/s);
c> capacity = smallest disk capacity x number of disks (some software RAID implementations, including the Linux one, can lift this restriction); storage efficiency is 100%;
d> if one disk dies, all data is lost;
e> note: read performance does not improve as dramatically as you might expect; the main improvement is write performance;
f> use case: non-critical data that is backed up often but needs high-speed writes;

5. RAID 1:
a> mirroring. May consist of two or more disks; data is copied to every disk in the group, so as long as one disk survives, no data is lost. The strongest reliability;
b> the array counts only the smallest disk; storage efficiency is (100 / N)%, where N is the number of disks in the array. The lowest utilization of all RAID levels;
c> a minimum of two disks (two is the usual recommendation);
d> write performance drops slightly; read performance improves significantly (for software RAID this needs a multi-threaded OS, such as Linux);
e> the disks in the group should be close in performance (ideally identical discs), otherwise load balancing suffers;
f> can be used as a hot-backup scheme (add a disk to the RAID 1 set, let it sync, then pull it out as a backup disk);
g> use cases: high redundancy requirements where cost, performance, and capacity matter less;

6. RAID 2:
a> a modified version of RAID 0;
b> composed of a minimum of three disks;
c> data is encoded (Hamming code) and then distributed across the disks;
d> the added ECC checks mean it consumes more disk space than RAID 0;

7. RAID 3:
a> also encodes data and distributes it across different physical disks;
b> uses bit-interleaving to spread the data;
c> the parity value used for recovery is written to a separate, dedicated disk;
d> because of the bit-level partitioning, reading even a small amount of data may require reading all of the disks, so it suits large sequential reads;

8. RAID 4:
a> mostly the same as RAID 3;
b> the difference is that it uses block-interleaving instead of bit-interleaving for data partitioning;
c> every read and write must touch the dedicated parity disk, so its load is significantly higher than the other disks, and it is very easy to wear out;

9. RAID 5:
a> uses disk striping to partition data. The parity layouts are left-symmetric, left-asymmetric, right-symmetric, and right-asymmetric; RHEL defaults to left-symmetric because it needs less seeking, which gives the best read performance;
b> stores no copies of the data, only a parity value (computed with XOR); when a data block is lost, it is recovered from the remaining data blocks of the group plus the parity value;
c> the parity is distributed across every disk rather than kept on a single disk, which avoids one disk being loaded much more heavily than the others;
d> requires at least three disks; a compromise between RAID 0 and RAID 1, with some improvement in both performance and redundancy;
e> storage utilization is high: 100 x (1 - 1 / N)%, where N is the number of physical disks. The capacity equivalent of one disk is consumed, but that cost is spread across every disk rather than concentrated on one physical disk;
f> performance can be tuned by adjusting the stripe size (balancing reads and writes);
g> with one broken disk the data can still be reconstructed, but since rebuilding (degraded mode) has to recompute data from the parity, performance drops sharply during the rebuild, and rebuilding is a long process;
h> use cases: high read-performance requirements, modest write-performance requirements, cost-sensitive deployments;
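The utilization formula in (e) is easy to check; a quick shell sketch with hypothetical disk sizes:

```shell
# Usable capacity of an N-disk RAID 5 with equal disks: (N - 1) * disk_size
n=4; disk_gb=2000
echo "usable: $(( (n - 1) * disk_gb )) GB"   # usable: 6000 GB
```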

10. RAID 5 write performance is poor, because each small write actually triggers four (or more) I/Os, although caching reduces the real overhead. Roughly:
a> read the old data block from disk;
b> read the old parity block (or read all the other data blocks of the same stripe);
c> compute the new parity from the old data, the new data, and the old parity;
d> write the new data block and the new parity block back;

11. RAID 6:
a> most mechanisms, including the striping, are basically the same as RAID 5;
b> the difference is that it stores two parity values computed with different algorithms (textbooks describe them as computed over different groupings, e.g. one over blocks 1-3 and one over blocks 2-4), so the capacity of two disks is consumed (also spread across all the disks);
c> composed of a minimum of four disks; tolerates two simultaneous disk failures;
d> higher redundancy than RAID 5, but worse write performance (in practice each write triggers even more actual I/Os, which is very costly);
e> storage utilization is 100 x (1 - 2 / N)%, where N is the number of physical disks;
f> the risk of failure during a rebuild is lower than RAID 5 (in RAID 5 a second disk can easily die under the rebuild load);
g> use cases: workloads needing more redundancy than RAID 5, with high read-performance demands, low write-performance demands, and cost considerations;

12. RAID 7:
RAID 7 is not an open RAID standard but the name of a patented hardware product of Storage Computer Corporation.
It was developed from RAID 3 and RAID 4, strengthened to address some of their original limitations.
It uses a large cache and a dedicated real-time processor for asynchronous array management,
so RAID 7 can handle a large number of I/O requests, with performance exceeding many products implementing the standard RAID levels.
Because of this, it is also very expensive.
13. RAID 10:
a> nested RAID; comes in the 1+0 and 0+1 variants;
b> requires at least four disks; storage efficiency is (100 / N)%, where N is the number of mirrors;
c> good performance and redundancy, at the cost of lower utilization;
d> to create software RAID 10, the usual approach is to create two RAID 1 arrays and then build a RAID 0 on top of them;

14. Commonly seen nested levels also include RAID 50 and RAID 53;

15. RAID 2, 3, and 4 see little practical use, because RAID 5 covers the functionality they provide. They survive mostly in research; practical deployments use RAID 5 or RAID 6.

16. RAID-DP (dual parity) is NetApp's design based on the RAID 4 concept (not, as sometimes claimed, RAID 6, because it uses dedicated parity disks). Like RAID 6 it tolerates two simultaneous disk failures, and the WAFL file system they have developed over many years is specifically optimized for RAID-DP, making it more efficient than RAID 6;

17. Tuning RAID (striping) parameters:
a> adjusting the striping parameters has a large impact on RAID 0, RAID 5, and RAID 6;
b> chunk-size: the RAID fills one chunk on a disk before moving to the next one; it is effectively the striping granularity. The chunk size should be an integer multiple of the filesystem block size;
c> reducing the chunk-size means a file is split into more pieces spread over more physical disks. This raises transfer throughput but may hurt seek efficiency (some hardware waits until a full stripe is filled before actually writing, which offsets some of the seek cost);
d> increasing the chunk-size has the opposite effects;
e> stride is a value used by ext2-family filesystems to lay out their data structures efficiently on the RAID; it is specified as: mke2fs -E stride=N. N (the stride value) should be chunk-size / filesystem block size. (e.g. mke2fs -j -b 4096 -E stride=16 /dev/md0)
f> tuning these two values improves how well the RAID parallelizes across multiple disks, so overall performance scales better as the number of physical disks grows;
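The stride arithmetic from (e), worked for a hypothetical 64KB chunk and 4KB filesystem block:

```shell
# stride = chunk-size / filesystem block size (both in KB here)
chunk_kb=64; block_kb=4
echo "stride=$(( chunk_kb / block_kb ))"   # stride=16
```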

18, more RAID principle visible http://blog.chinaunix.net/space.php?uid=20023853&do=blog&id=1738605;

19. Software RAID information can be viewed in several places on the system:
a> /proc/mdstat — the easy place to see the rebuild progress bar, failed disks, etc.;
b> mdadm --detail /dev/md0 — shows the details;
c> /sys/block/mdX/md — cache_size and other information not available elsewhere;

20. Major/minor numbers: on Linux the major number identifies the device type/driver, and the minor number is the ID of the individual device among the devices sharing that driver;

21. RAID technology was originally developed at IBM, not invented at Bell Labs (http://en.wikipedia.org/wiki/RAID#History);

22. RAID supports online expansion;

23. Hard drives almost always ship with bad sectors, but vendors reserve part of the tracks (usually on the outer ring) to remap them, at the cost of some capacity. Enterprise-class disks reserve more capacity than consumer-class disks, which is one reason they are more expensive and last longer;

24. Under RHEL the software RAID configuration file (used to auto-discover arrays at boot) is /etc/mdadm.conf. It can be generated like this: mdadm --verbose --examine --scan > /etc/mdadm.conf. The administrator also needs to configure in this file how to send mail notification when the RAID breaks;

25. The RAID metadata is stored in the Critical Section. If it is lost, all data on the RAID is lost. Of all the RAID levels, only RAID 1 writes the Critical Section at the tail of the disk (the others write it at the head), which is why RAID 1 is better suited for the system disk (the metadata is easy to recover).

26. When reshaping a software RAID (adding a disk, changing the chunk size or RAID level), backing up the Critical Section is essential if you want to keep the existing data. Internally the reshape goes like this:
a> set the RAID read-only;
b> back up the Critical Section;
c> redefine the RAID;
d> on success, delete the original Critical Section and restore write access;

27. It is best to specify the Critical Section backup location manually (ideally somewhere outside the RAID being changed), because if the reshape fails the Critical Section must be restored by hand. If no location is given, the backup is kept on the RAID's spare disk, and if there is no spare it is kept in memory (easily lost, dangerous);

28. If the RAID is changed (for example grown), remember to make the corresponding change to the filesystem on top of it (for example, resize it);

29. The rc.sysinit startup script starts services in this default order: udev -> selinux -> time -> hostname -> lvm -> software RAID. So plan your LVM-on-RAID layering accordingly, unless you manually change the startup script;

30. To expand a RAID 5, replace its disks one at a time with larger ones, letting it rebuild each time (and do not forget to resize the filesystem);

31. Different software RAID groups on the same machine can share spare disks;

32. If you migrate a software RAID between machines, remember to first rename it to a RAID device name (mdX) that does not exist (does not conflict) on the target machine. When renaming, you can identify the RAID to rename by its minor number;

33. The Bitmap records, per region, whether each physical disk is in sync with the whole array. It is written periodically. Enabling it can greatly speed up RAID recovery;

34. The Bitmap can be stored inside or outside the RAID. If you point the RAID at a Bitmap by an absolute path that resolves inside the RAID itself, the RAID will deadlock. If stored outside the RAID, only extX filesystems are supported;

35. A Bitmap can be added to an active RAID only if its superblock is intact;

36. With a Bitmap enabled you can also enable the Write Behind mechanism: a write returns success once the specified disks have been written, and the remaining disks are written asynchronously. This is particularly useful for RAID 1;

37. The RAID layer does not proactively scan for bad blocks. Only when a read hits an unreadable bad block does it try to repair it (find a reserved block as a replacement); if the disk cannot repair it (the reserved blocks have run out), the disk is marked failed, kicked out of the RAID, and a spare is enabled;

38. Because bad-block handling is passive, a RAID 5 rebuild is likely to uncover hidden bad blocks, causing the recovery to fail. The larger the disk capacity, the greater the risk;

39. Scheduling a periodic bad-block check for the RAID with crontab is a good idea. Since kernel 2.6.16 a check can be triggered with:
echo check > /sys/block/mdX/md/sync_action
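The crontab suggestion could be wired up like this; a sketch only, where the device name md0, the file name, and the schedule are all assumptions to adapt:

```shell
# /etc/cron.d/raid-check (hypothetical file): scrub /dev/md0 every Sunday at 03:00
0 3 * * 0 root echo check > /sys/block/md0/md/sync_action
```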

48 TIPS FOR BASH

A collection of shell scripting tips I keep as study notes for my own reference.
They come from theunixschool, commandlinefu, and similar sites, plus some of my own ideas and experience.

0. shell debugging
sh -x somefile.sh
or add set -x / set +x inside somefile.sh
1. && || simplified if else

gzip -t a.tar.gz
if [[ 0 == $? ]]; then
echo "good zip"
else
echo "bad zip"
fi

Can be simplified to:

gzip -t a.tar.gz && echo "good zip" || echo "bad zip"

2. test that a file is not empty

if [[ -s $file ]]; then
echo "not empty"
fi

3. get file size

stat -c %s $file
stat --printf='%s\n' $file
wc -c $file

4. string replacement

${string//pattern/replacement}
a='a,b,c'
echo ${a//,/ }

5. contains substring?
string="My string"
if [[ $string == *My* ]]; then
echo "It's there!"
fi

6. rsync backup
rsync -r -t -v /source_folder /destination_folder
rsync -r -t -v /source_folder user@host:/destination_folder

7. batch rename files

Add a .bak suffix to all .txt files:
rename '.txt' '.txt.bak' *.txt
Remove all .bak suffixes:
rename '.bak' '' *.bak
Turn all spaces into underscores:
find path -type f -exec rename 's/ /_/g' {} \;

Change file names to uppercase:

find path -type f -exec rename 'y/a-z/A-Z/' {} \;

8. for / while loop

for ((i=0; i < 10; i++)); do echo $i; done
for line in $(cat a.txt); do echo $line; done
for f in *.txt; do echo $f; done
while read line ; do echo $line; done < a.txt
cat a.txt | while read line; do echo $line; done
9. delete blank lines

cat a.txt | sed -e '/^$/d'
(echo "abc"; echo ""; echo "ddd";) | awk '{if (0 != NF) print $0;}'

10. compare file modification time

[[ file1.txt -nt file2.txt ]] && echo true || echo false
[[ file1.txt -ot file2.txt ]] && echo true || echo false

11. implement a dictionary structure
hput () {
eval "hkey_$1=\"$2\""
}
hget () {
eval echo '${'"hkey_$1"'}'
}
$ hput k1 aaa
$ hget k1
aaa
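On bash 4+, an associative array is a cleaner alternative to the eval trick above; a sketch:

```shell
# bash 4+ associative array as a dictionary
declare -A h
h[k1]=aaa
echo "${h[k1]}"   # aaa
```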

12. remove the second field

$ echo 'a b c d e f' | cut -d ' ' -f1,3-
a c d e f
13. save stderr output to a variable

$ a=$( (echo 'out'; echo 'error' 1>&2) 2>&1 1>/dev/null)
$ echo $a
error

14. delete the first 3 lines

$ cat a.txt | sed 1,3d

15. read multiple fields into variables
read a b c <<< "xxx yyy zzz"
16. iterate over an array

array=( one two three )
for i in ${array[@]}
do
echo $i
done
17. view directory size

du -sh ~/apps

18. view CPU information

cat /proc/cpuinfo
19. date

$ date +%Y-%m-%d
2012-12-24
$ date +%Y-%m-%d --date '-1 day'
2012-12-23
$ date +%Y-%m-%d --date 'Dec 25'
2012-12-25
$ date +%Y-%m-%d --date 'Dec 25 - 10 days'
2012-12-15
20. get the path and file name

$ dirname '/home/lalor/a.txt'
/home/lalor
$ basename '/home/lalor/a.txt'
a.txt

21. union and intersection
comm can compute the union, intersection, and difference of two sorted files.
Suppose files a and b read as follows:

$ cat a
1
3
5

$ cat b
3
4
5
6
7

$ comm a b
1
3
4
5
6
7

$ comm -1 -2 a b # intersection
3
5

$ comm a b | sed 's/\t//g' # union
1
3
4
5
6
7

$ comm -1 -3 a b | sed 's/\t//g' # b-a
4
6
7
22. awk with complex delimiters

Multiple characters as the delimiter:

$ echo "a||b||c||d" | awk -F '[|][|]' '{print $3}'
c

Multiple different delimiters:

$ echo "a||b,#c d" | awk -F '[| ,#]+' '{print $4}'
d

$ echo "a||b##c|#d" | awk -F '([|][|])|([#][#])' '{print $NF}'
c|#d

23. generate a random number

echo $RANDOM
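$RANDOM yields 0..32767; to constrain it to a range, take a modulus. A sketch:

```shell
# Constrain $RANDOM to 0..99 with a modulus
n=$(( RANDOM % 100 ))
echo "$n"
```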

24. split a file by pattern

csplit server.log /PATTERN/ -n 2 -s {*} -f server_result -b "%02d.log" -z

/PATTERN/ matches the line where each split begins
{*} repeat the split for every match
-s silent mode
-n number of digits in the output file suffix
-f prefix for the output file names
-b suffix format
-z remove empty output files

25. get the file name or extension
var=hack.fun.book.txt
echo ${var%.*}
hack.fun.book
echo ${var%%.*}
hack
echo ${var#*.}
fun.book.txt
echo ${var##*.}
txt

26. run the previous command as root

$ sudo !!

Here: !! is the previous command; !$ is the last argument of the previous command; !* is all of its arguments; !:3 is its third argument.
For example:
$ ls /tmp/somedir
ls: cannot access /tmp/somedir: No such file or directory

$ mkdir !$
mkdir /tmp/somedir
27. use python to start a simple web server, reachable at http://$HOSTNAME:8000

python -m SimpleHTTPServer

28. save a file from vim when you lack write permission

:w !sudo tee %
29. replace foo with bar in the previous command and run it
^foo^bar

30. quickly back up or copy a file
cp filename{,.bak}
31. copy your ssh key to user@host to enable password-less SSH login
$ ssh-copy-id user@host
32. record a video of the linux desktop
ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq /tmp/out.mpg
33. man magic
man ascii
man test

34. edit the previous command in vim

fc
35. delete 0 byte files or junk files
Copy the code code below:

find . -type f -size 0 -delete
find . -type f -exec rm -rf {} \;
find . -type f -name “a.out” -exec rm -rf {} \;
find . type f -name “a.out” -delete
find . type f -name “*.txt” -print0 | xargs -0 rm -f
36. print a multi-line banner from a shell script

cat << EOF
+--------------------------------------------------------------+
| === Welcome to Tunoff services === |
+--------------------------------------------------------------+
EOF

Note that the terminator you specify must be the only content on its line, and the line must begin with it.
37 How to build mysql soft link

Copy the code code below:

cd /usr/local/mysql/bin
for i in *
do ln /usr/local/mysql/bin/$i /usr/bin/$i
done

38. get an IP address:
ifconfig eth0 | grep "inet addr:" | awk '{print $2}' | cut -c 6-

39. list open files
lsof
40. clean up zombie processes
ps -eal | awk '{ if ($2 == "Z") print $5 }' | xargs kill -9
# a zombie cannot be killed directly; $5 is its parent PID, and the zombie is reaped once the parent is gone
41. print unique lines, preserving order

awk '!a[$0]++' file
42. print odd-numbered lines
awk 'i=!i' file
awk 'NR%2' file

43. print the line a fixed distance after a match
seq 10 | awk '/4/{f=4};--f==0{print;exit}'

44. print the 100 lines after (or before) a match
cat file | grep -A100 string
cat file | grep -B100 string # before
cat file | grep -C100 string # before and after

sed -n '/string/,+100p'

awk '/string/{f=100}--f>=0'

45. get the last argument on the command line
echo ${!#}
echo ${$#} # wrong attempt
46. output redirection

If you want to redirect both STDERR and STDOUT to the same file,
bash provides the special redirection symbol &>

ls file nofile &> /dev/null

How do we redirect inside a script? Nothing special, the same as ordinary redirection:

#!/bin/bash
#redirecting output to different locations
echo "now redirecting all output to another location" &>/dev/null
But what if we want to redirect all of the script's output to a file,
without repeating the redirection on every single line?
We can use exec to redirect permanently, as follows:

#!/bin/bash
#redirecting output to different locations
exec 2>testerror
echo “This is the start of the script”
echo “now redirecting all output to another location”

exec 1>testout
echo “This output should go to testout file”
echo “but this should go the the testerror file” >& 2

The output is as follows:

This is the start of the script
now redirecting all output to another location
lalor@lalor:~/temp$ cat testout
This output should go to testout file
lalor@lalor:~/temp$ cat testerror
but this should go the the testerror file
lalor@lalor:~/temp$
Open an additional descriptor for appending:
exec 3>>testout
Close the descriptor:
exec 3>&-

47. function

All variables are global by default, even when defined inside a function; to define a local variable, add the local keyword.
Shell functions can also be used recursively:
Copy the code following code:
#!/bin/bash

function factorial {
if [[ $1 -eq 1 ]]; then
echo 1
else
local temp=$[ $1 - 1 ]
local result=`factorial $temp`
echo $[ $result * $1 ]
fi
}

result=`factorial 5`
echo $result
Create a library

Put a set of functions in another file, then load that file into the current shell with source.

Using functions at the command line

Define the function in ~/.bashrc
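A minimal sketch of the library pattern just described (the file name myfuncs.sh and the greet function are just for this example):

```shell
# myfuncs.sh holds the shared functions
cat > myfuncs.sh <<'EOF'
greet() { echo "hello $1"; }
EOF

# a caller loads the library with source, then calls the function
source ./myfuncs.sh
greet world   # hello world
```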

Passing arrays to functions
Copy the code the code below:

#!/bin/bash
#adding values in an array

function addarray {
local sum=0
local newarray
newarray=(`echo “$@”`)
for value in ${newarray[*]}
do
sum=$[ $sum+$value ]
done
echo $sum
}

myarray=(1 2 3 4 5)
echo “The original array is: ${myarray[*]}”
arg1=`echo ${myarray[*]}`
result=`addarray $arg1`
echo “The result is $result”
48. regex
Match an IP address: \d+\.\d+\.\d+\.\d+
Commentary: useful when extracting IP addresses
Match specific numbers:
^[1-9]\d*$ // match positive integers
^-[1-9]\d*$ // match negative integers
^-?[1-9]\d*$ // match integers
^[1-9]\d*|0$ // match non-negative integers (positive integers + 0)
^-[1-9]\d*|0$ // match non-positive integers (negative integers + 0)
^[1-9]\d*\.\d*|0\.\d*[1-9]\d*$ // match positive floats
^-([1-9]\d*\.\d*|0\.\d*[1-9]\d*)$ // match negative floats
^-?([1-9]\d*\.\d*|0\.\d*[1-9]\d*|0?\.0+|0)$ // match floats
^[1-9]\d*\.\d*|0\.\d*[1-9]\d*|0?\.0+|0$ // match non-negative floats (positive floats + 0)
^(-([1-9]\d*\.\d*|0\.\d*[1-9]\d*))|0?\.0+|0$ // match non-positive floats (negative floats + 0)
Commentary: useful when handling large amounts of data; adapt to the specific application
Match specific strings:
^[A-Za-z]+$ // match a string of 26 English letters
^[A-Z]+$ // match a string of 26 uppercase English letters
^[a-z]+$ // match a string of 26 lowercase English letters
^[A-Za-z0-9]+$ // match a string of digits and 26 English letters
^\w+$ // match a string of digits, 26 English letters, or underscores
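A quick sanity check of the IP pattern with grep -E (POSIX ERE has no \d, so it is spelled [0-9] here):

```shell
# Prints the line only when the whole line looks like an IPv4 address
echo "192.168.1.9" | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
```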

RHEL6.4 Course Summary

Unit1
Tracking Security Updates
Update the following three categories
RHSA
RHBA
RHEA
yum updateinfo list    view all updates
yum updateinfo list --cve=CVE-2013-0755    view one specific update
yum --security list updates    view security updates
yum updateinfo list | grep 'Critical' | cut -f1 -d ' ' | sort -u | wc -l

Unit2
Managing Software Updates
rpm -qa > /root/pre-update-software.$(date +%Y%m%d)    dump all installed packages into a file
yum updateinfo > /root/updateinfo-report.$(date +%Y%m%d)
yum update --security -y    update only security packages; turn on gpgcheck=1 before installing
yum update --cve=<CVE>    update for a specific CVE
rpm --import <GPG-KEY-FILE>    import the key for a package
rpm -qa | grep gpg-pubkey    view trusted GPG keys
rpm -qi gpg-pubkey    view key details
rpm -K <package>.rpm    check that the package digest/signature is correct
rpm -vvK <package>.rpm    same, with debugging information
rpm -qp --scripts <package>.rpm    check whether the package contains scripts

Unit3
Creating File Systems
lvcreate -l 100%FREE -n lvname vgname   -l, --extents LogicalExtentsNumber[%{VG|PVS|FREE|ORIGIN}]
cryptsetup luksFormat /dev/vgname/lvname   type YES to start the encrypted format, then enter the passphrase
cryptsetup luksOpen /dev/vgname/lvname luksname   open the device and give it a name
mkfs -t ext4 /dev/mapper/luksname   create the file system
mkdir /secret
mount /dev/mapper/luksname /secret
umount /secret
cryptsetup luksClose luksname   close the encrypted device

dd if=/dev/urandom of=/path/to/password/file bs=4096 count=1   a key file can be used instead of a plaintext passphrase
chmod 600 /path/to/password/file
cryptsetup luksAddKey /dev/vdaN /path/to/password/file   the existing passphrase must be entered here
touch /etc/crypttab
luksname /dev/vgname/lvname /path/to/password/file
Add the following to /etc/fstab:
/dev/mapper/luksname /secret ext4 defaults 1 2   so the encrypted partition is mounted automatically at boot
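The whole sequence above can be collected into one script. This is a sketch using the same placeholder names (vgname, lvname, luksname); the run helper only prints each command so the order can be reviewed safely — remove the echo to execute for real (root required).

```shell
#!/bin/sh
# Dry-run helper: prints the command instead of executing it.
run() { echo "+ $*"; }

VG=vgname; LV=lvname; NAME=luksname; MNT=/secret

run cryptsetup luksFormat /dev/$VG/$LV       # type YES, then set the passphrase
run cryptsetup luksOpen /dev/$VG/$LV $NAME   # appears as /dev/mapper/$NAME
run mkfs -t ext4 /dev/mapper/$NAME           # create the file system
run mkdir -p $MNT
run mount /dev/mapper/$NAME $MNT
run umount $MNT
run cryptsetup luksClose $NAME               # close the encrypted device
```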

Unit4
Managing File Systems
nosuid, noexec   mount options that disable suid bits and execute permission
tune2fs -l /dev/vda1 | head -n 19
tune2fs -l /dev/vda1 | grep 'mount options'
tune2fs -o user_xattr,acl /dev/vda1   add acl support to the partition; /etc/fstab can also be modified
lsattr   view special file attributes
chattr +/- syntax:
a   append only
i   immutable (no modification allowed)

Unit5
Managing Special Permissions
suid   setUID
sgid   setGID
chmod u+s /path/to/program   everyone runs the program with the owner's privileges
chmod g+s /path/to/dir   files created in the directory keep the directory's group
find /bin -perm /7000   find everything under /bin with any special bit set
find /bin -perm 4000   exact match
find /bin -perm -4000   setUID
find /bin -perm -2000   setGID
find /bin -perm -6000   setUID and setGID; /6000 matches either bit

Unit6
Managing Additional File Access Controls
umask   view the umask value
getfacl somefile   view the file's ACLs (access control lists)
setfacl -m u:bob:rwx /path/to/file   give bob rwx on the file
setfacl -m d:u:smith:rx subdir   set a default ACL: user smith gets rx on new files in subdir
setfacl -m o::r /path/to/file   set "other" to read-only

Unit7
Monitoring For File System Changes
AIDE (Advanced Intrusion Detection Environment)
Its main function is to check file integrity
yum install -y aide   install aide to monitor file permissions
grep PERMS /etc/aide.conf   rules for the files to monitor
PERMS = p+i+u+g+acl+selinux   p permissions, i inode, u user, g group
/etc PERMS
/root/\..* PERMS
aide --init   initialize the database
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
aide --check   verify files against the database after changes

Unit8
Managing User Accounts
chage -m 0 -M 90 -W 7 -I 14 username
-m min days
-M max days
-W warn days
-I inactive days
chage -d 0 username   force the user to change the password at next login
chage -l username   list the user's password aging information
userdel -r username   delete the user together with the home directory
grep PASS_M /etc/login.defs   defaults in this file can be changed and take effect for newly added users
# PASS_MAX_DAYS Maximum number of days a password may be used.
# PASS_MIN_DAYS Minimum number of days allowed between password changes.
# PASS_MIN_LEN Minimum acceptable password length.
PASS_MAX_DAYS 30
PASS_MIN_DAYS 3
PASS_MIN_LEN 8
getent passwd | cut -d: -f3 | sort -n | uniq -d   check whether the system has duplicate UIDs
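The duplicate-UID check can be tried against a sample passwd-format file (made-up entries, not a real /etc/passwd):

```shell
# Two accounts sharing UID 500 — uniq -d prints only the duplicated UID.
cat > sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
alice:x:500:500::/home/alice:/bin/bash
bob:x:500:501::/home/bob:/bin/bash
EOF
cut -d: -f3 sample_passwd | sort -n | uniq -d   # -> 500
```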

Unit9
Managing Pluggable Authentication Modules
PAM (Pluggable Authentication Modules) allows the checks used for authentication to be changed dynamically, which greatly improves flexibility. PAM covers four management areas:
1. authentication management
Accepts the user name and password, authenticates the user's password, and is responsible for setting some of the user's secret information
2. account management
Checks whether the account is allowed to log into the system, whether it has expired, whether logins are restricted to certain time periods, and so on
3. password management
Mainly used to change the user's password
4. session management
Provides session management and accounting

On most Linux distributions, the PAM modules are normally stored in the /lib/security/ directory (/lib64/security/ on 64-bit systems); use ls there to see which modules this machine supports. Module names look like pam_unix.so. A module can be added to this directory at any time; doing so does not directly affect running programs — the actual effect is governed by the PAM configuration.
PAM configuration files are usually stored in the /etc/pam.d/ directory.
To check whether a program supports PAM, use:
ldd `which cmd` | grep libpam   // cmd is the name of the program to check
If the output includes the libpam library, the program supports PAM authentication.
ldd `which login` | grep libpam
libpam.so.0 => /lib64/libpam.so.0 (0x000000326d200000)
libpam_misc.so.0 => /lib64/libpam_misc.so.0 (0x000000326b600000)
/etc/pam.d configuration file syntax:
type control module [module arguments]

grep maxlogins /etc/security/limits.conf
# <domain> <type> <item> <value>
# - maxlogins - max number of logins for this user
student - maxlogins 4   limit this user to 4 simultaneous logins
qa hard cpu 1   limit cpu time
Limiting the number of failed password attempts:
cat /etc/pam.d/system-auth   the same change must also be made in password-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth required pam_tally2.so onerr=fail deny=3 unlock_time=180   3 failures lock the account for three minutes
auth sufficient pam_fprintd.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth required pam_deny.so

Also note where the account line for pam_tally2.so is placed, relative to:
account required pam_unix.so

pam_tally2   show users' failure counts
pam_tally2 --reset -u username   reset the count and unlock a disabled user

Unit10
Securing Console Access
The hash formats differ: $6$ is SHA-512, $1$ is MD5
grub-crypt
Password:
Retype password:
$6$01wSV5m9GBdGdQ3J$oroEE6jjedQ59yQqJlxwAc1MBPSrdm6ufuUJil5rJaXmLgYNbsjz1F.kQlcrZYcrO5y9h014VkGCsH5PN7TTg.
grub-md5-crypt
Password:
Retype password:
$1$HqxBl1$DVC9jyW6HXZ8.vAlPo2QR1
cat /etc/grub.conf
password --encrypted $6$01wSV5m9GBdGdQ3J$oroEE6jjedQ59yQqJlxwAc1MBPSrdm6ufuUJil5rJaXmLgYNbsjz1F.kQlcrZYcrO5y9h014VkGCsH5PN7TTg.
/etc/issue   shown before authentication
/etc/motd   Message Of The Day, historically shown after successful authentication
/etc/ssh/sshd_config PermitRootLogin no   forbid ssh login as the root user

Unit11
Installing Central Authentication
IdM (Identity Management)
chkconfig NetworkManager off; service NetworkManager stop   turn NetworkManager off, otherwise the ipa-server installation will not go on
/etc/sysconfig/network-scripts/ifcfg-eth0   configure a static IP, gateway, and DNS on the NIC; NM_CONTROLLED=no must be set
/etc/hosts   add a resolution entry for this machine: ip server.example.com server
yum -y install ipa-server
ipa-server-install --idstart=2000 --idmax=20000   note: give the uids a range

After the installation completes, the following ports need to be opened:
TCP Ports:
80,443 HTTP / HTTPS
389,636 LDAP / LDAPS
88,464 kerberos

UDP Ports:
88,464 kerberos
123 ntp
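The port list above can be turned into iptables rules with a short loop. This sketch only prints the rules for review; pipe the output to sh as root to apply them.

```shell
# TCP: HTTP/HTTPS, LDAP/LDAPS, kerberos; UDP: kerberos, ntp.
for p in 80 443 389 636 88 464; do
  echo "iptables -A INPUT -m state --state NEW -p tcp --dport $p -j ACCEPT"
done
for p in 88 464 123; do
  echo "iptables -A INPUT -p udp --dport $p -j ACCEPT"
done
```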

Specific values can also be given on the command line, so they do not need to be entered interactively as above:
ipa-server-install --hostname=server.example.com -n example.com -r EXAMPLE.COM -p RedHat123 -a redhat123 -U

service sshd restart

kinit admin   initialize; an ordinary user must change the password at first login
ipa user-find admin   verify

The remaining tasks, adding users, adding groups, and changing configuration, can be done in a browser at https://server.example.com, logging in as admin with the password redhat123

Client installation as follows:

yum -y install ipa-client

ipa-client-install --mkhomedir   note: generate home directories for new users

During the installation, authenticate as admin with the password redhat123; a non-interactive installation is also possible:

ipa-client-install --domain=example.com --server=server.example.com --realm=EXAMPLE.COM -p admin -w redhat123 --mkhomedir -U

Finally, IdM users can log in on the client; the home directory is generated automatically after login
Unit12
Managing Central Authentication
kinit admin
ipa pwpolicy-show   view the password policy on the command line
kpasswd bob   change a user's password
These operations, including sudoers and the commands a user may run, can also be done in the browser

Unit13
Configuring System Logging
rsyslog-gnutls   after installation, TLS is supported on port 6514
Start the rsyslog service on both the server and the client
Server configuration is as follows:
/etc/rsyslog.conf   the module supports TCP and UDP; open TCP here
# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
/etc/rsyslog.d/remote.conf   create this new file in the rsyslog.d directory
:fromhost, isequal, "client.example.com" /var/log/client/messages
:fromhost, isequal, "client.example.com" ~   with these lines added, the client's messages are stored only in the file above

Client configuration is as follows:
/etc/rsyslog.conf   forward all logs to the server's port; nothing is kept locally
*.* @@(o)server.example.com:514   two @ means TCP; (o) is an option enabling octet-counted framing
logrotate   log rotation tool

There is also the logwatch tool, which can mail a daily summary of important information to a specified mailbox

Unit14
Configuring System Auditing
/etc/sysconfig/auditd
/etc/audit/auditd.conf   the default remote listening port is tcp 60
/etc/audit/audit.rules   see man audit.rules for the syntax
For remote logging with auditd, set active = yes in /etc/audisp/plugins.d/syslog.conf and restart the auditd service

This can be used to send information to a remote syslog server
After installing the audispd-plugins package (on each client; this is for multi-node setups), set active = yes in /etc/audisp/plugins.d/au-remote.conf to send the audit log to the log server
The concrete syntax is in man auditctl
/etc/audit/audit.rules
-w /path/to/file -p rwxa -k key
-e 2
-w   path of the file to audit
-p   accesses to audit: r read, w write, x execute, a attribute change
-k   key used to tag matching events
-e   set the enabled flag to 0, 1, or 2; after 2 is set the rules (including watches on /etc and other files) are locked, and a reboot is required to change them
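A minimal rules file matching the syntax above, written here to a local example file (the real file is /etc/audit/audit.rules; the /etc/passwd watch and the key name are illustrative):

```shell
cat > audit.rules.example <<'EOF'
# watch /etc/passwd for writes and attribute changes, tagged with key "identity"
-w /etc/passwd -p wa -k identity
# lock the configuration; changing the rules again requires a reboot
-e 2
EOF
grep -c '^-w' audit.rules.example   # count of watch rules
```

Matching events could later be searched with ausearch -k identity.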

Unit15
Controlling Access to Network Services
iptables firewall
iptables -L   list the rules
iptables -F   flush the rules
iptables -X   delete user-defined chains
iptables -Z   zero the counters
iptables -A INPUT -i lo -j ACCEPT   needed by local system services
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp --dport 22 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp --dport 80 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp --dport 514 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -j LOG
iptables -A INPUT -j REJECT
service iptables save   save the rules persistently
cat /etc/sysconfig/iptables   default save location
iptables -nvL --line-numbers   view and verify
Another useful case is a PPTP server with Internet access problems: clients dialing into the PPTP server are automatically assigned IPs from the -s subnet, and NAT lets them reach the Internet
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

IBM HTTP Server Performance Tuning


1. Determining maximum simultaneous connections

The first tuning decision you’ll need to make is determining how many simultaneous connections your IBM HTTP Server installation will need to support. Many other tuning decisions are dependent on this value.

For some IBM HTTP Server deployments, the amount of load on the web server is directly related to the typical business day, and may show a load pattern such as the following:

    Simultaneous
    connections

            |
       2000 |
            |
            |                            **********
            |                        ****          ***
       1500 |                   *****                 **
            |               ****                        ***
            |            ***                               ***
            |           *                                     **
       1000 |          *                                        **
            |         *                                           *
            |         *                                           *
            |        *                                             *
        500 |        *                                             *
            |        *                                              *
            |      **                                                *
            |   ***                                                  ***
          1 |***                                                        **
 Time of    +-------------------------------------------------------------
   day         7am  8am  9am  10am  11am  12pm  1pm  2pm  3pm  4pm  5pm

For other IBM HTTP Server deployments, providing applications which are used in many time zones, load on the server varies much less during the day.

The maximum number of simultaneous connections must be based on the busiest part of the day. This maximum number of simultaneous connections is only loosely related to the number of users accessing the site. At any given moment, a single user can require anywhere from zero to four independent TCP connections.

The typical way to determine the maximum number of simultaneous connections is to monitor mod_status reports during the day until typical behavior is understood, or to use mod_mpmstats (2.0.42.2 and later).

The information in this document assumes the tuning exercise is occurring on an individual instance of IBM HTTP Server. When scaled horizontally across a farm of webservers, you may need to account for a swelling of connections when some capacity is removed for maintenance or otherwise taken offline.

Monitoring with mod_status
  1. Add these directives to httpd.conf, or uncomment the ones already there:
    # This example is for IBM HTTP Server 2.0 and above
    # Similar directives are in older default configuration files.
    
    LoadModule status_module modules/mod_status.so
    <Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from .example.com    <--- replace with "." + your domain name
    </Location>
    
  2. Request the /server-status page (http://www.example.com/server-status/) from the web server at busy times of the day and look for a line like the following:
    192 requests currently being processed, 287 idle workers
    

    The number of requests currently being processed is the number of simultaneous connections at this time. Taking this reading at different times of the day can be used to determine the maximum number of connections that must be handled.
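The reading can be scripted. Assuming the server-status output has been captured (here the sample line from above; with a live server you would fetch the page, e.g. with curl, first), the busy count is the first field:

```shell
# Sample status line as shown above; a live value would come from the
# /server-status page of your own server (hostname is an assumption).
line='192 requests currently being processed, 287 idle workers'
busy=$(echo "$line" | awk '{print $1}')
idle=$(echo "$line" | awk '{print $6}')
echo "busy=$busy idle=$idle"
```

Logging these values periodically (for example from cron) gives the daily profile described above.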

Monitoring with mod_mpmstats (IBM HTTP Server 2.0.42.2 and later)
  1. IHS 6.1 and earlier: Copy the version of mod_mpmstats.so for your operating system from the ihsdiag package to the IBM HTTP Server modules directory. (Example filename: ihsdiag-1.4.1/2.0/aix/mod_mpmstats.so)
  2. Add these directives to the bottom of httpd.conf:
    LoadModule mpmstats_module modules/mod_mpmstats.so
    ReportInterval 90
    

    For IBM HTTP Server 7.0 and later, mod_mpmstats is enabled automatically.

  3. Check entries like this in the error log to determine how many simultaneous connections were in use at different times of the day:
    [Thu Aug 19 14:01:00 2004] [notice] mpmstats: rdy 712 bsy 312 rd 121 wr 173 ka 0 log 0 dns 0 cls 18
    [Thu Aug 19 14:02:30 2004] [notice] mpmstats: rdy 809 bsy 215 rd 131 wr 44 ka 0 log 0 dns 0 cls 40
    [Thu Aug 19 14:04:01 2004] [notice] mpmstats: rdy 707 bsy 317 rd 193 wr 97 ka 0 log 0 dns 0 cls 27
    [Thu Aug 19 14:05:32 2004] [notice] mpmstats: rdy 731 bsy 293 rd 196 wr 39 ka 0 log 0 dns 0 cls 58
    
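The mpmstats entries lend themselves to scripting: the number after "bsy" is the count of simultaneously busy threads, so the peak over a log can be pulled out with awk (sample lines copied from above):

```shell
cat > mpmstats.sample <<'EOF'
[Thu Aug 19 14:01:00 2004] [notice] mpmstats: rdy 712 bsy 312 rd 121 wr 173 ka 0 log 0 dns 0 cls 18
[Thu Aug 19 14:04:01 2004] [notice] mpmstats: rdy 707 bsy 317 rd 193 wr 97 ka 0 log 0 dns 0 cls 27
EOF
# Scan each line for the "bsy" token and keep the running maximum.
awk '{for(i=1;i<NF;i++) if($i=="bsy" && $(i+1)+0>max) max=$(i+1)+0}
     END{print "peak busy threads:", max}' mpmstats.sample
```

Run against the real error log, this reports the maximum simultaneous connections observed between restarts.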

Note that if the web server has not been configured to support enough simultaneous connections, one of the following messages will be logged to the web server error log and clients will experience delays accessing the server.

Windows
[warn] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting

Linux and Unix
[error] server reached MaxClients setting, consider raising the MaxClients setting

Check the error log for a message like this to determine if the IBM HTTP Server configuration needs to be changed. When MaxClients has been reached, new connections are buffered by the operating system and do not consume a request processing thread until the current work can complete. The length of this queue is controlled by the OS and can be influenced by the ListenBacklog directive.

Once the maximum number of simultaneous connections has been determined, add 25% as a safety factor. The next section discusses how to use this number in the web server configuration file.
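As a worked example, with a measured peak of 480 connections and a ThreadsPerChild of 25 (both numbers illustrative), the arithmetic looks like this; the final rounding to a multiple of ThreadsPerChild matches the MaxClients recommendation given later:

```shell
peak=480   # measured maximum simultaneous connections (illustrative)
tpc=25     # ThreadsPerChild
need=$(( peak * 125 / 100 ))                    # add the 25% safety factor
maxclients=$(( (need + tpc - 1) / tpc * tpc ))  # round up to a multiple of tpc
echo "MaxClients $maxclients"                   # prints: MaxClients 600
```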

As you approach a requirement exceeding 3000-4000 simultaneous connections, you will have to instead fan out to more instances of IHS, generally on different OS instances. Exceeding this soft limit is not recommended and may result in unpredictable/unintuitive behavior.

Note: Setting of the KeepAliveTimeout can affect the apparent number of simultaneous requests being processed by the server. Increasing KeepAliveTimeout effectively reduces the number of threads available to service new inbound requests, and will result in a higher maximum number of simultaneous connections which must be supported by the web server. Decreasing KeepAliveTimeout can drive extra load on the server handling unnecessary TCP connection setup overhead. A setting of 5 to 10 seconds is reasonable for serving requests over high speed, low latency networks.

Looking at output from mod_mpmstats (2.0.42.2 and later) can help see if KeepAliveTimeout is set correctly.

For example:

[Thu Aug 28 10:12:17 2008] [notice] mpmstats: rdy 0 bsy 600 rd 1 wr 70 ka 484 log 0 dns 0 cls 45

shows that all threads are busy (0 threads are ready “rdy”, 600 are busy “bsy”). 484 of them are just waiting for another keepalive request (“ka”), and yet the server will be rejecting requests because it has no threads available to work on them. Lowering KeepAliveTimeout would cause those threads to close their connections sooner and become available for more work.

1.1. TCP connection states and thread/process requirements

The netstat command can be used to show the state of TCP connections between clients and IBM HTTP Server. For some of these connection states, a web server thread (or child process, with 1.3.x on Unix) is consumed. For other states, no web server thread is consumed. See the following table to determine if a TCP connection in a particular state requires a web server thread.

TCP state meaning is a web server thread utilized?
LISTEN no connection no
SYN_RCVD not ready to be processed no
ESTABLISHED ready for web server to accept and process requests, or already processing requests yes, as soon as the web server realizes that connection is established; but if there aren’t enough configured web server threads (e.g., MaxClients is too small), the connection may stall until a thread becomes ready
FIN_WAIT1 web server has closed the socket; the connection remains in this state until an ACK is received from the client. A web server thread can be utilized for up to two seconds in this state if FIN is not received from the client, after which the web server gives up and the web server thread is no longer utilized.
CLOSE_WAIT client has closed the socket, web server hasn’t yet noticed yes
LAST_ACK client closed socket then web server closed socket no
FIN_WAIT2 web server closed the socket then client ACKed; the connection remains in this state until a FIN is received from the client or an OS-specific timeout occurs; see Connections in the FIN_WAIT_2 state and Apache for more information A web server thread can be utilized for up to two seconds in this state if FIN is not received from the client, after which the web server gives up and the web server thread is no longer utilized.
TIME_WAIT waiting for 2*MSL timeout before allowing quad to be reused no
CLOSING web server and client closed at the same time no

1.2. Handling enough simultaneous connections with IBM HTTP Server on Windows / ThreadsPerChild setting

IBM HTTP Server on Windows has a Parent process and a single multi-threaded Child process.

On 64-bit Windows OS’es, each instance of IHS is limited to approximately 2000 ThreadsPerChild. On 32-bit Windows, this number can be closer to 4000. These numbers are not exact limits, because the real limits are the sum of the fixed startup cost of memory for each thread + the maximum runtime memory usage per thread, which varies based on configuration and workload. Raising ThreadsPerChild and approaching these limits risks child process crashes when runtime memory usage puts the process address space over the 2GB or 3GB barrier. The upper limits become even more restrictive when loading other modules such as mod_mem_cache, or when there are many RewriteCond/RewriteRule directives. When approaching the limits, the server may run for awhile before the memory limit is exceeded and causes the child process to crash. Values even higher than this would probably cause an immediate crash during startup.

No specific limits can be provided, but it is suggested that anything over a value of 2000 for ThreadsPerChild on a Windows operating system could be at risk.
It should be noted that IHS on Windows is a 32-bit application even when running on a 64-bit Windows OS.

The relevant config directives for tuning the thread values described in this section on Windows are:

  • ThreadsPerChild
    The ThreadsPerChild directive places an upper limit on the number of simultaneous connections the server can handle. ThreadsPerChild should be set according to the maximum number of expected simultaneous connections, but within the restrictions described above.
  • ThreadLimit (valid with IHS 2.0 and above)
    ThreadsPerChild has a built in upper limit. Use ThreadLimit to increase the upper limit of ThreadsPerChild. The value of ThreadLimit affects the size of the shared memory segment the server uses to perform inter-process communication between the parent and the single child process. Do not increase ThreadLimit beyond what is required for ThreadsPerChild.

With PI04922, it’s possible to use nearly twice as much memory (nearly twice as many threads). See PI04922.

Recommended settings:

Directive Value
ThreadsPerChild maximum number of simultaneous connections (within memory constraints)
ThreadLimit same as ThreadsPerChild (2.0 and above)

1.3. Handling enough simultaneous connections with IBM HTTP Server 2.0 and above on Linux and Unix systems

On UNIX and Linux platforms, a running instance of IBM HTTP Server will consist of one single threaded Parent process which starts and maintains one or more multi-threaded Child processes. HTTP requests are received and processed by threads running in the Child processes. Each simultaneous request (TCP connection) consumes a thread. You need to use the appropriate configuration directives to control how many threads the server starts to handle requests and on UNIX and Linux, you can control how the threads are distributed amongst the Child processes.

Relevant config directives on UNIX platforms:

  • StartServers
    The StartServers directive controls how many Child Processes are started when the web server initializes. The recommended value is 1. Do not set this higher than MaxSpareThreads divided by ThreadsPerChild; otherwise, processes will be started at initialization and terminated immediately thereafter. Every second, IHS checks whether new child processes are needed, so tuning of StartServers is generally moot as early as a minute after IHS has started.
  • ServerLimit
    There is a built-in upper limit on the number of child processes. At runtime, the actual upper limit on the number of child processes is MaxClients divided by ThreadsPerChild. This should only be changed when you have reason to change MaxClients or ThreadsPerChild; it does not directly dictate the number of child processes created at runtime.

    It is possible to see more child processes than this if some of them are gracefully stopping. If there are many of them, it probably means that MaxSpareThreads is set too small, or that MaxRequestsPerChild is non-zero and not large enough; see below for more information on both these directives.

  • ThreadsPerChild
    Use the ThreadsPerChild directive to control how many threads each Child process starts. More information on strategies for distributing threads amongst child processes is included below.
  • ThreadLimit
    ThreadsPerChild has a built-in upper limit. Use ThreadLimit to increase the upper limit of ThreadsPerChild. The value of ThreadLimit affects the size of the shared memory segment the server uses to perform inter-process communication between the parent and child processes. Do not increase ThreadLimit beyond what is required for ThreadsPerChild.
  • MaxClients
    The MaxClients directive places an upper limit on the number of simultaneous connections the server can handle. MaxClients should be set according to the expected load.

The MaxSpareThreads and MinSpareThreads directives affect how the server reacts to changes in server load. You can use these directives to instruct the server to automatically increase the number of Child processes when server load increases (subject to limits imposed by ServerLimit and MaxClients) and to decrease the number of Child processes when server load is low. This feature can be useful for managing overall system memory utilization when your server is being used for tasks other than serving HTTP requests.

Setting MaxSpareThreads to a relatively small value has a performance penalty: Extra CPU to terminate and create child processes. During normal operation, the load on the server may vary widely (e.g., from 150 busy threads to 450 busy threads). If MaxSpareThreads is smaller than this variance (e.g., 450-150=300), then the web server will terminate and create child processes frequently, resulting in reduced performance.

Recommended settings:

Directive Value
ThreadsPerChild Leave at the default value, or increase to a larger proportion of MaxClients for better coordination of WebSphere Plugin processing threads (via fewer child processes). Larger ThreadsPerChild (and fewer processes) also results in fewer dedicated web container threads being used by the ESI invalidation feature of the WebSphere Plugin. Increasing ThreadsPerChild too high on heavily loaded SSL servers may incur more CPU and throughput issues, as there is additional contention for memory.
MaxClients maximum number of simultaneous connections, rounded up to an even multiple of ThreadsPerChild
StartServers 2
MinSpareThreads The greater of 25 or 10% of MaxClients. Since IHS checks this value approximately once per second, MinSpareThreads should safely exceed the number of new requests you might receive in a second. Setting MinSpareThreads too high with the WebSphere Plugin may trigger premature spawning of new child processes, over which AppServer requests will be distributed with no sharing of MaxConnections counts or markdowns. If the ESI invalidation servlet is configured in the WebSphere Plugin, each additional process results in a dedicated web container thread being consumed. Setting MinSpareThreads too low may induce delays of a few seconds if IHS runs out of processing threads.
MaxSpareThreads There are multiple approaches to the tuning of this directive:

  • Preallocation: The system must have enough resources to handle MaxClients anyway, so let the web server retain idle threads/processes so that they are immediately ready to serve requests when load increases again. Set MaxSpareThreads to the same value as MaxClients.

    This approach should be used if there are extremely long-running application requests that would keep child processes from being able to terminate gracefully.

  • Reduce web server resource utilization during idle periods and increase the coordination between WebSphere Plugin threads: Allow the web server to clean up idle threads after load subsides so that the resources can be used for other applications. When the load increases again, it will reclaim the resources as it creates new child processes.

Set MaxSpareThreads to 25-30% of MaxClients. If it is too small a fraction of MaxClients, child processes will be terminated and recreated frequently.

ServerLimit MaxClients divided by ThreadsPerChild, or the default if that is high enough
ThreadLimit ThreadsPerChild

Note: ThreadLimit and ServerLimit need to appear before these other directives in the configuration file.

Default settings in recent default configuration files:

<IfModule worker.c>
ThreadLimit         25
ServerLimit         64
StartServers         2
MaxClients         600
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>
1.3.1. If memory is constrained

If there is concern about available memory on the server, some additional tuning can be done.

Increasing ThreadsPerChild (and ThreadLimit) will reduce the number of total server processes needed, reducing the per-server memory overhead. However, there are a number of possible drawbacks to increasing ThreadsPerChild. Search this document for ThreadsPerChild and consider all the warnings before changing it. Notably, this increases the per-process footprint which can be detrimental on 32-bit httpds.

On Linux, VSZ can appear very large if ulimit -s is left as a large value or unlimited. Reduce it in bin/envvars to e.g. 512 with ulimit -s if this is a concern. A high VSZ has no real cost, it does not consume memory, but it is sometimes noticed.

Setting MaxMemFree to e.g. 512 will limit the memory retained by each thread.

1.4. Handling enough simultaneous connections with IBM HTTP Server 1.3.x on Linux and Unix systems

IBM HTTP Server 1.3 on Linux and Unix systems uses one single-threaded child process per concurrent connection.

Recommended settings:

Directive Value
MaxClients maximum number of simultaneous connections
MinSpareServers 1
MaxSpareServers same value as MaxClients
StartServers default value

2. Out of the box tuning concerns

2.1. All platforms

MaxClients, ThreadsPerChild, etc.

Refer to the previous section.

cipher ordering (SSL only)

The default SSLCipherSpec ordering enables maximum strength SSL connections at a significant performance penalty. A much better performing and reasonably strong SSLCipherSpec configuration is given below.

Sendfile (non-SSL only)

With IBM HTTP Server 2.0 and above, Sendfile usage is disabled in the current default configuration files. This avoids some occasional platform-specific problems, but it may also increase CPU utilization on platforms on which sendfile is supported (Windows, AIX, Linux, HP-UX, and Solaris/x64).

If you enable sendfile usage on AIX, ensure that the nbc_limit setting displayed by the no program is not too high for your system. On many systems, the AIX system default is 768MB. We recommend setting this to a much more conservative value, such as 256MB. If the limit is too high, and the web server use of sendfile results in a large amount of network buffer cache memory utilization, a wide range of other system functions may fail. In situations like that, the best diagnostic step is to check network buffer cache utilization by running netstat -c. If it is relatively high (hundreds of megabytes), disable sendfile usage and see if the problem occurs again. Alternately, nbc_limit can be lowered significantly but sendfile still be enabled.

Some Apache users on Solaris have noted that sendfile is slower than the normal file handling, and that sendfile may not function properly on that platform with ZFS or some Ethernet drivers. IBM HTTP Server provides support for sendfile on Solaris/x64 but not Solaris/SPARC.

2.2. AIX

With IBM HTTP Server 2.0.42 and above, the default IHSROOT/bin/envvars file specifies the setting MALLOCMULTIHEAP=considersize,heaps:8. This enables a memory management scheme for the AIX heap library which is better for multithreaded applications, and configures it to try to minimize memory use and to use a moderate number of heaps. For configurations with extensive heap operations (SSL or certain third-party modules), CPU utilization can be lowered by changing this setting to the following: MALLOCMULTIHEAP=true. This may increase the memory usage slightly.
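A minimal sketch of this change in bin/envvars (the file is a shell script sourced at startup):

```shell
# bin/envvars: favor lower CPU over slightly higher memory use for
# heap-intensive workloads such as SSL (default is considersize,heaps:8)
MALLOCMULTIHEAP=true
export MALLOCMULTIHEAP
echo "MALLOCMULTIHEAP=$MALLOCMULTIHEAP"
```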

2.3. Windows

The Fast Response Cache Accelerator (FRCA, aka AFPA) is disabled in the current default configuration files because some common Windows extensions, such as Norton Antivirus, are not compatible with it. FRCA is a kernel resident micro HTTP server optimized for serving static, non-access protected files directly out of the file system. The use of FRCA can dramatically reduce CPU utilization in some configurations. FRCA cannot be used for serving content over HTTPS/SSL connections.


3. Configuration features to avoid

IBM HTTP Server supports some features and configuration directives that can have a severe impact on server performance. Use of these features should be avoided unless there are compelling reasons to enable them.

  • HostnameLookups On. Performance penalty: extra DNS lookups per request. This is disabled by default in the sample configuration files.
  • IdentityCheck On. Performance penalty: delays introduced in the request to contact an RFC 1413 ident daemon possibly running on the client machine. This is disabled by default in the sample configuration files.
  • mod_mime_magic. Performance penalty: extra CPU and disk I/O to try to determine the file type. This is disabled by default in the sample configuration files.
  • ContentDigest On (1.3 only). Performance penalty: extra CPU to compute the MD5 hash of the response. This is disabled by default in the sample configuration files.
  • setting MaxRequestsPerChild to non-zero. Performance penalty:
    • Extra CPU to terminate and create child processes
    • With IHS 2 or higher on Linux and Unix, this can lead to an excessive number of child processes, which in turn can lead to excessive swap space usage. Once a child process reaches MaxRequestsPerChild it will not handle any new connections, but existing connections are allowed to complete. In other words, even a single long-running request in the process will keep the process active, sometimes indefinitely. In environments where long-running requests are not unusual, a large number of exiting child processes can build up.

    This is set to the optimal setting (0) in default configuration files for recent releases.

    In rare cases, IHS support will recommend setting MaxRequestsPerChild to non-zero to work around a growth in resources, based on an understanding of what type of resource is growing in use, and what other mechanisms are available to address that growth.

    With IBM HTTP Server 1.3 on Linux and Unix, a setting of a high value such as 10000 is not a concern. The child processes each handle only a single connection, so they cannot be prevented from exiting by long-running requests.

    With IBM HTTP Server 2.0 and above on Linux and Unix, if the feature must be used, set it only to a relatively high value such as 50000 or more, to limit the risk of building up a large number of child processes that are trying to exit but cannot because of a long-running request that has not completed.

  • .htaccess files. Performance penalty: extra CPU and disk I/O to locate .htaccess files in directories where static files are served. .htaccess files are disabled in the sample configuration files.
  • detailed logging. Detailed logging (SSLTrace, plug-in LogLevel=trace, GSKit trace, third-party module logging) is often enabled as part of problem diagnosis. When one or more of these traces is left enabled after the problem is resolved, CPU utilization is higher than normal. Detailed logging is disabled in the sample configuration files.
  • disabling Options FollowSymLinks. If the static files are maintained by untrusted users, you may want to disable this option in the configuration file, in order to prevent those untrusted users from creating symbolic links to private files that should not ordinarily be served. But disabling FollowSymLinks to prevent this problem will degrade performance, since the web server then has to check every component of the pathname to determine whether it is a symbolic link. Following symbolic links is enabled in the sample configuration files.
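A sketch of a static-content directory configured to avoid the two filesystem penalties above (the path is illustrative):

```apache
<Directory "/opt/IBMIHS/htdocs">
    # No per-directory .htaccess lookups
    AllowOverride None
    # Follow symlinks without checking each path component
    Options +FollowSymLinks
</Directory>
```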

4. Common configuration changes and their implications

4.1. IBM HTTP Server 2.0 and above on Linux and Unix systems: ThreadsPerChild

This directive is commonly modified as part of tuning the web server. There are advantages and disadvantages for different values of ThreadsPerChild:

  • Higher values for ThreadsPerChild result in lower overall memory use for the server, as long as the value of ThreadsPerChild isn’t higher than the normal number of concurrent TCP connections handled by the server.
  • Extremely high values for ThreadsPerChild may result in encountering address space limitations.
  • Higher values for ThreadsPerChild often result in fewer connections maintained by the WebSphere plug-in to the application server and better sharing of markdown information.
  • Higher values for ThreadsPerChild result in higher CPU utilization for SSL processing.
  • On older Linux distributions such as RedHat Advanced Server 2.1 and SuSE SLES 8 which use the linuxthreads library, higher values for ThreadsPerChild result in higher CPU utilization in the threads library. Some features may exacerbate this problem, such as RewriteMap or the following modules: mod_mem_cache, mod_ibm_ldap, or mod_ext_filter.
  • Higher ThreadsPerChild results in a more effective use of the cache and connection pooling in mod_ibm_ldap.
  • Higher ThreadsPerChild results in a more effective use of the cache in mod_mem_cache, because each child must fill its own cache. MaxSpareThreads = MaxClients is also beneficial for mod_mem_cache because it prevents child processes that have built up large caches from being gracefully terminated.

System tuning changes may be necessary to run with higher values for ThreadsPerChild. If IBM HTTP Server fails to start after increasing ThreadsPerChild, check the error log for any error messages. A common failure is a failed attempt to create a worker thread.

4.2. IBM HTTP Server 2.0 and above on Linux and Unix systems: MaxClients

This directive is commonly modified as part of tuning the web server to handle a greater client load (more concurrent TCP connections).

When MaxClients is increased, the value for MaxSpareThreads should be scaled up as well. Otherwise, extra CPU will be spent terminating and creating child processes when the load changes by a relatively small amount.
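For example (illustrative values), scaling MaxSpareThreads along with MaxClients:

```apache
<IfModule worker.c>
    ThreadsPerChild   100
    MaxClients        600   # 6 child processes at peak
    MinSpareThreads   100
    MaxSpareThreads   600   # scaled up with MaxClients to avoid churn
</IfModule>
```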

4.3. ExtendedStatus

This directive controls whether some important information is saved in the scoreboard for use by mod_status and diagnostic modules. When this is set to On, web server CPU usage may increase by as much as one percent. However, it can make mod_status reports and some other diagnostic tools more useful.


5. WebSphere plug-in concerns on Linux and Unix systems

5.1. Tuning IHS to make the MaxConnections parameter more effective

The use of the MaxConnections parameter in the WebSphere plug-in configuration is most effective when IBM HTTP Server 2.0 and above is used and there is a single IHS child process. However, there are other tradeoffs:

It is usually much more effective to actively prevent backend systems from accepting more connections than they can reliably handle, performing the throttling at the TCP level. When it is done at the client (HTTP plug-in) side instead, there is no cross-system or cross-process coordination, which makes the limits ineffective.

  • linuxthreads (traditional pthread library on Linux): ThreadsPerChild greater than about 100 results in high CPU overhead
  • SSL on any platform: ThreadsPerChild greater than about 100 results in high CPU overhead
  • WebSphere 5.x plug-in has a file descriptor limitation which will be encountered on Linux and Solaris if ThreadsPerChild is greater than 500

Using MaxConnections with more than one child process, or across a web server farm, introduces a number of complications. Each IHS child process must have a high enough MaxConnections value to allow each thread to be able to find a backend server, but in aggregate the child processes should not be able to overrun an individual application server.

Choosing a value for MaxConnections
  • MaxConnections has no effect if it exceeds ThreadsPerChild, because no child could try to use that many connections in the first place.
  • Upper limit. If you are concerned about a single HTTP server overloading an application server, you must first determine "N", the maximum number of requests the single AppServer can handle.

    MaxConnections would then be (N / (MaxClients / ThreadsPerChild)), i.e. N divided by the maximum number of child processes in your configuration. This represents the worst-case number of connections from IHS to a single Application Server. As the number of backends grows, the likelihood of the worst-case scenario decreases, since even the uncoordinated child processes still distribute load with respect to session affinity and load balancing.

    For example, if you wish to restrict each Application Server to a total of 200 connections, spread out among 4 child processes, you must set the MaxConnections parameter to 50 because each child process keeps its own count.

  • Lower limit. If MaxConnections is too small, a child process may start returning errors because it has no AppServers to use.

    To prevent problems, MaxConnections * (number of usable backend servers) should exceed ThreadsPerChild.

    For example, if each child process has 128 ThreadsPerChild and MaxConnections is only 50 with two backend AppServers, a single child process may not be able to fulfill all 128 requests because only 50 * 2 connections can be made.
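The upper-limit arithmetic above can be sketched in shell (all values are illustrative):

```shell
# Worst-case sizing: MaxConnections = N / (MaxClients / ThreadsPerChild)
N=200                # total connections one AppServer should accept
MAXCLIENTS=500
THREADSPERCHILD=125
CHILDREN=$((MAXCLIENTS / THREADSPERCHILD))   # 4 child processes
MAXCONN=$((N / CHILDREN))                    # 50 connections per child
echo "MaxConnections=$MAXCONN"
```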

To use MaxConnections, IHS should be configured to use a small, fixed number of child processes, and not to vary them in response to changes in load. This provides a consistent, predictable number of child processes that each have a fixed MaxConnections parameter.

  • MinSpareServers and MaxSpareServers should be set to the same value as MaxClients.
  • StartServers should be set to MaxClients / ThreadsPerChild.
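Put together, this might look like the following (illustrative values; with the worker MPM the spare limits are expressed in threads):

```apache
<IfModule worker.c>
    ThreadsPerChild   50
    MaxClients        200   # 4 child processes, never more
    StartServers      4     # MaxClients / ThreadsPerChild
    MinSpareThreads   200   # = MaxClients: never reap children
    MaxSpareThreads   200   # = MaxClients: never reap children
</IfModule>
```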

When more than one child process is configured (the number of child processes is MaxClients/ThreadsPerChild), setting MaxSpareServers equal to MaxClients can have the effect of keeping multiple child processes alive when they are not strictly needed. This can be detrimental to the WebSphere plug-in's markdown detection, because the threads in each child process must independently discover that a server should be marked down. See section 5.2 below.

5.2 Tuning IHS for efficiency of Plugin markdown handling

Only the WebSphere plug-in threads within a single IHS child process share information about AppServer markdowns, so some customers wish to aggressively limit the number of child processes running at any given time. If markdowns have to be discovered by many different child processes in the same web server, consider increasing ThreadsPerChild and reducing MinSpareThreads and MaxSpareThreads as detailed below.

  • One approach is to use a single child process, where MaxClients and ThreadsPerChild are set to the same value. IHS will never create or destroy child processes in response to load. Cautions:
    • A web server crash impacts 100% of the clients.
    • Some types of hangs may affect 100% of the clients.
    • CPU usage may increase if SSL is used and ThreadsPerChild exceeds a few hundred.
    • More ramifications of high ThreadsPerChild are discussed in section 4.1 above.
  • A second approach is to use a variable number of child processes, but to aggressively limit the number created by IHS in response to demand (and aggressively remove unneeded processes). This is accomplished by setting ThreadsPerChild to 25% or 50% of MaxClients, and setting MinSpareThreads and MaxSpareThreads low (relative to the recommendations here). Cautions:
    • MaxSpareThreads < MaxClients causes IHS to routinely kill off child processes; however, it may take some time for these processes to exit while slow requests finish processing.
    • A lower MaxSpareThreads can cause extra CPU usage for the creation of replacement child processes.
    • Caches for ESI and mod_mem_cache are thrown away when child processes exit.


5.3 Tuning IHS for efficiency of ESI invalidation servlet / web container threads

As the number of child processes increases (the ratio of ThreadsPerChild to MaxClients shrinks), if the ESI Invalidation Servlet is used with the WebSphere plug-in, more and more Web Container threads will be permanently consumed. Each child process uses one ESI invalidation thread (when the feature is configured), and this thread is used synchronously in the web container.

This requires careful consideration of the number of child processes per webserver, the number of webservers, and the number of configured Web Container threads.


6. SSL Performance

6.1. ciphers

When an SSL connection is established, the client (web browser) and the web server negotiate the cipher to use for the connection. The web server has an ordered list of ciphers, and the first cipher in that list which is supported by the client will be selected.

By default, IBM HTTP Server prefers AES and RC4 ciphers over the computationally expensive Triple-DES (3DES) cipher suite, and tuning of the order of SSL directives for performance reasons is generally not needed.

The following is a table of most of the ciphers IHS supports. The table is sorted in approximate descending cipher strength. For a full list of ciphers IHS supports, refer to the appropriate infocenter for your version of IHS.

SSLV3 and TLSV1 ciphers
shortname longname Meaning
Available on v8 and later, distributed
C024 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 Elliptic Curve DSA AES SHA2 (256-bit)
C02c TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 Elliptic Curve DSA AES SHA2 (256-bit)
C028 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 Elliptic Curve RSA AES SHA2 (256-bit)
C030 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 Elliptic Curve RSA AES SHA2 (256-bit)
C00a TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA Elliptic Curve DSA AES SHA (256-bit)
C014 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA Elliptic Curve RSA AES SHA (256-bit)
C008 TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA Elliptic Curve DSA Triple-DES SHA (168-bit)
C012 TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA Elliptic Curve RSA Triple-DES SHA (168-bit)
C023 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 Elliptic Curve DSA AES SHA2 (128-bit)
C02b TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 Elliptic Curve DSA AES SHA2 (128-bit)
C009 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA Elliptic Curve DSA AES SHA (128-bit)
C027 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 Elliptic Curve RSA AES SHA2 (128-bit)
C02f TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 Elliptic Curve RSA AES SHA2 (128-bit)
C013 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA Elliptic Curve RSA AES SHA (128-bit)
default on v8 and later, distributed
9D TLS_RSA_WITH_AES_256_GCM_SHA384 RSA AES SHA2 (256-bit)
3D TLS_RSA_WITH_AES_256_CBC_SHA256 RSA AES SHA2 (256-bit)
9C TLS_RSA_WITH_AES_128_GCM_SHA256 RSA AES SHA2 (128-bit)
3C TLS_RSA_WITH_AES_128_CBC_SHA256 RSA AES SHA2 (128-bit)
All versions, all systems
3A SSL_RSA_WITH_3DES_EDE_CBC_SHA RSA Triple-DES SHA (168 bit)
35 TLS_RSA_WITH_AES_256_CBC_SHA RSA AES SHA (256 bit)

The following configuration directs the server to prefer strong 128-bit RC4 ciphers first and will provide a significant performance improvement over the default configuration. This configuration does not support the weaker 40-bit, 56-bit, or NULL/Plaintext ciphers that security scanners may complain about.

The order of the SSLCipherSpec directives dictates the priority of the ciphers, so we order them in a way that causes IHS to prefer less CPU-intensive ciphers. SSLv2 is disabled implicitly by not including any SSLv2 ciphers.

<VirtualHost *:443>
  SSLEnable
  Keyfile keyfile.kdb

  ## SSLv3 128 bit Ciphers 
  SSLCipherSpec SSL_RSA_WITH_RC4_128_MD5 
  SSLCipherSpec SSL_RSA_WITH_RC4_128_SHA

  ## FIPS approved SSLV3 and TLSv1 128 bit AES Cipher
  SSLCipherSpec TLS_RSA_WITH_AES_128_CBC_SHA
  
  ## FIPS approved SSLV3 and TLSv1 256 bit AES Cipher
  SSLCipherSpec TLS_RSA_WITH_AES_256_CBC_SHA

  ## Triple DES 168 bit Ciphers
  ## These can still be used, but only if the client does
  ## not support any of the ciphers listed above.
  SSLCipherSpec SSL_RSA_WITH_3DES_EDE_CBC_SHA

  ## The following block enables SSLv2. Excluding it in the presence of 
  ## the SSLv3 configuration above disables SSLv2 support.

  ## Uncomment to enable SSLv2 (with 128 bit Ciphers)
  #SSLCipherSpec SSL_RC4_128_WITH_MD5 
  #SSLCipherSpec SSL_RC4_128_WITH_SHA
  #SSLCipherSpec SSL_DES_192_EDE3_CBC_WITH_MD5
 
</VirtualHost>

For IHS V8R0 and later, new ciphers and a new syntax for SSLCipherSpec are supported. This is a required change, since new TLS protocols with a disjoint set of ciphers are supported. These same releases of IHS also favor strong ciphers, and completely disable weak and export ciphers.

To configure weaker, but less CPU intensive ciphers, in V8R0 and later:

SSLCipherSpec SSLv3 SSL_RSA_WITH_RC4_128_SHA SSL_RSA_WITH_RC4_128_MD5 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA SSL_RSA_WITH_3DES_EDE_CBC_SHA

SSLCipherSpec TLSv10 SSL_RSA_WITH_RC4_128_SHA SSL_RSA_WITH_RC4_128_MD5 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA SSL_RSA_WITH_3DES_EDE_CBC_SHA

SSLCipherSpec TLSv11 SSL_RSA_WITH_RC4_128_SHA SSL_RSA_WITH_RC4_128_MD5 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA SSL_RSA_WITH_3DES_EDE_CBC_SHA

# TLSv12 is left at the default TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_256_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA SSL_RSA_WITH_3DES_EDE_CBC_SHA

You can use the following LogFormat directive to view and log the SSL cipher negotiated for each connection:

LogFormat "%h %l %u %t \"%r\" %>s %b \"SSL=%{HTTPS}e\" \"%{HTTPS_CIPHER}e\" \"%{HTTPS_KEYSIZE}e\" \"%{HTTPS_SECRETKEYSIZE}e\"" ssl_common
CustomLog logs/ssl_cipher.log ssl_common

This LogFormat will produce output in ssl_cipher.log that looks something like this:

127.0.0.1 - - [18/Feb/2005:10:02:05 -0500] "GET / HTTP/1.1" 200 1582 "SSL=ON" "SSL_RSA_WITH_RC4_128_MD5" "128" "128"

6.2. Server certificate size

Larger server certificates are also costly. Every doubling of key size costs 4-8 times more CPU for the required computation.

Unfortunately, you don’t have a lot of choice in the size of your server certificate; the industry is currently (2010) moving from 1024-bit to 2048-bit certificates to keep up with the increasing compute power available to those trying to break SSL. But there are some SSL performance tuning tips that can help.

The primary cost of the computation associated with a larger server certificate size is in the SSL handshake when a new session is created, so using keep-alive and re-using SSL sessions can make a significant difference in performance. See more about that below.

6.3. Linux and Unix systems, IBM HTTP Server 2.0 and higher: ThreadsPerChild

The SSL CPU utilization will be lower with lower values of ThreadsPerChild. We recommend using a maximum of 100 if your server handles a lot of SSL traffic, so that the client load is spread among multiple child processes. (Note: This optimization is not possible on Windows, which supports only a single child process.)

6.4. AIX, IBM HTTP Server 2.0 and higher: MALLOCMULTIHEAP setting in IHSROOT/bin/envvars

Set this to the value true when there is significant SSL work-load, as this will result in better performance for the heap operations used by SSL processing.

6.5. Should I use a cryptographic accelerator?

The preferred approach to improving SSL performance is to use software tuning to the greatest extent possible. Installation and maintenance of crypto cards is relatively complex and usually results in a relatively small reduction in CPU usage. We have observed many situations where the improvement is less than 10%.

6.6. HTTP keep-alive and SSL

HTTP keep-alive has a much larger benefit for SSL than for non-SSL. If the goal is to limit the number of worker threads utilized for keep-alive handling, performance will be much better if KeepAlive is enabled with a small timeout for SSL-enabled virtual hosts, than if keep-alive is disabled altogether.

Example:

<VirtualHost *:443>
normal configuration
# enable keepalive support, but with very small timeout 
# to minimize the use of worker threads
KeepAlive On
KeepAliveTimeout 1
</VirtualHost>

Warning! We are not recommending “KeepAliveTimeout 1” in general. We are suggesting that this is much better than setting KeepAlive Off. Larger values for KeepAliveTimeout will result in slightly better SSL session utilization at the expense of tying up a worker thread for a longer period of time in case the browser sends in another request before the timeout is over. There are diminishing returns for larger values, and the optimal values are dependent upon the interaction between your application and client browsers.

6.7. SSL Sessions and Load Balancers

An SSL session is a logical connection between the client and web server for secure communications. During the establishment of the SSL session, public key cryptography is used to exchange a shared secret master key between the client and the server, and other characteristics of the communication, such as the cipher, are determined. Later data transfer over the session is encrypted and decrypted with symmetric key cryptography, using the shared key created during the SSL handshake.

The generation of the shared key is very CPU intensive. In order to avoid generating the shared key for every TCP connection, there is a capability to reuse the same SSL session for multiple connections. The client must request to reuse the same SSL session in the subsequent handshake, and the server must have the SSL session identifier cached. When these requirements are met, the handshake for the subsequent TCP connection requires far less server CPU (80% less in some tests). All web browsers in general use are able to reuse the same SSL session. Custom web clients sometimes do not have the necessary support, however.

The use of load balancers between web clients and web servers presents a special problem. IBM HTTP Server cannot share a session id cache across machines. Thus, the SSL session can be reused only if a subsequent TCP connection from the same client is sent by the load balancer to the same web server. If it goes to another web server, the session cannot be reused and the shared key must be regenerated, at great CPU expense.

Because of the importance of reusing the same SSL session, load balancer products generally provide the capability of establishing affinity between a particular web client and a particular web server, as long as the web client tries to reuse an existing SSL session. Without the affinity, subsequent connections from a client will often be handled by a different web server, which will require that a new shared key be generated because a new SSL session will be required.

Some load balancer products refer to this feature as SSL Sticky or Session Affinity. Other products may use their own terminology. It is important to activate the appropriate feature to avoid unnecessary CPU usage in the web server, by increasing the frequency that SSL sessions can be reused on subsequent TCP connections.

End users will generally not be aware that SSL session is not being reused unless the overhead of continually negotiating new sessions causes excessive delay in responses. Web server administrators will generally only become aware of this situation when they observe the CPU utilization approaching 100%. The point at which this becomes noticeable will depend on the performance of the web server hardware, and whether or not a cryptographic accelerator is being used.

When SSL is being used and excessive web server CPU utilization is noticed, it is important to first confirm that Session Affinity is enabled if a load balancer is being used.

Checking the actual reuse of SSL sessions

First, get the number of new sessions and reused sessions. LogLevel must be set to info or debug.

IBM HTTP Server 2.0.42 or 2.0.47 up through cumulative fix PK07831, and IBM HTTP Server 6 up through 6.0.2, write messages of this format for each handshake:

[Sat Jul 09 10:37:22 2005] [info] New Session ID: 0
[Sat Jul 09 10:37:22 2005] [info] New Session ID: 1

0 means that an existing SSL session was re-used. 1 means that a new SSL session was created.

Getting the number of each type of handshake:

$ grep "New Session ID: 0" logs/error_log  | wc -l
1115
$ grep "New Session ID: 1" logs/error_log  | wc -l
163

IBM HTTP Server 2.0.42 or 2.0.47 with cumulative fix PK13230 or later, and IBM HTTP Server 6.0.2.1 and later, write messages of this format for each handshake:

[Sat Oct 01 15:30:17 2005] [info] [client 9.49.202.236] Session ID: YT8AAPUJ4gWir+U4v2mZFaw5KDlYWFhYyOM+QwAAAAA= (new)
[Sat Oct 01 15:30:32 2005] [info] [client 9.49.202.236] Session ID: YT8AAPUJ4gWir+U4v2mZFaw5KDlYWFhYyOM+QwAAAAA= (reused)

To get the relative stats:

$ grep "Session ID.*reused" logs/error_log  | wc -l
1115
$ grep "Session ID:.*new" logs/error_log  | wc -l
163

The percentage of expensive handshakes for this test run is 163 / (1115 + 163), or 12.8%. To confirm that the load balancer is not impeding the reuse of SSL sessions, perform a load test with and without the load balancer*, and compare the percentage of expensive handshakes in both tests.

*Alternately, use the load balancer for both tests, but for one load test have the load balancer send all connections to a particular web server, and for the other load test have it load balance between multiple web servers.
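The percentage above can be computed directly from the two grep counts (numbers taken from the sample run):

```shell
REUSED=1115   # "Session ID: ... (reused)" count
NEW=163       # "Session ID: ... (new)" count
# Share of connections that paid for a full (expensive) handshake
PCT=$(awk -v n="$NEW" -v r="$REUSED" 'BEGIN { printf "%.1f", 100 * n / (n + r) }')
echo "$PCT% full handshakes"
```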

6.8. Session ID cache limits

IBM HTTP Server uses an external session ID cache with no practical limits on the number of session IDs unless the operating system is Windows or the directive SSLCacheDisable is present in the IHS configuration.

When the operating system is Windows or the SSLCacheDisable directive is present, IBM HTTP Server uses the GSKit internal session ID cache which is limited to 512 entries by default.

This limit can be increased to a maximum of 4095 (64000 for z/OS) entries by setting the environment variable GSK_V3_SIDCACHE_SIZE to the desired value.
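In bin/envvars (a shell script), the variable would be set and exported, e.g.:

```shell
# Enlarge the GSKit internal session ID cache (maximum 4095 on
# distributed platforms; only used when the external cache is unavailable)
GSK_V3_SIDCACHE_SIZE=4095
export GSK_V3_SIDCACHE_SIZE
echo "GSK_V3_SIDCACHE_SIZE=$GSK_V3_SIDCACHE_SIZE"
```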


7. Network Tuning

7.1 All platforms

Problem Description

Low data transfer rates handling large POST requests.

This problem can be caused by a small TCP receive buffer size being used for web server sockets. This results in the client being limited in how much data it can send before the server machine has to acknowledge it, resulting in poor network utilization.

Resolution

Some data transfer performance problems can be solved using the native operating system mechanism for increasing the default size of TCP receive buffers. IBM HTTP Server must be restarted after making the change.

Platform Tuning parameter Instructions
AIX tcp_recvspace Run no -o tcp_recvspace to display the old value. Run no -o tcp_recvspace=new_value to set a larger value.
Solaris tcp_recv_hiwat Run ndd /dev/tcp tcp_recv_hiwat to display the old value. Run ndd -set /dev/tcp tcp_recv_hiwat new_value to set a larger value.
HP-UX tcp_recv_hiwater_def Run ndd /dev/tcp tcp_recv_hiwater_def to display the old value. Run ndd -set /dev/tcp tcp_recv_hiwater_def new_value to set a larger value.
Linux rmem_default Run cat /proc/sys/net/core/rmem_default to display the old value. Run echo new_value > /proc/sys/net/core/rmem_default to set a larger value.

The following levels of IBM HTTP Server contain a ReceiveBufferSize directive for setting this value in a platform-independent manner, and only for the web server:

  • 2.0.42.2 with cumulative e-fix PK07831 or later
  • 2.0.47.1 with cumulative e-fix PK07831 or later
  • 6.0.2 or later
    (6.0.2.1 or later on Windows)

Usage:

ReceiveBufferSize number-of-bytes

This directive must appear at global scope in the configuration file.

Making the adjustment
  1. Check the current system default using the platform-specific command in the previous table.
  2. Use either 131072 bytes, or twice the current system default, whichever is greater.
    Example ReceiveBufferSize directive:
    ReceiveBufferSize 131072
    If the ReceiveBufferSize directive is not available, use the platform-specific command in the previous table to change the system default.
  3. Restart the web server, then retry the testcase.
  4. If POST performance did not improve enough, double the receive buffer value and try again.
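Step 2's rule (the larger of 131072 bytes or twice the current system default) can be sketched as follows; 87380 is an illustrative Linux rmem_default:

```shell
DEFAULT=87380                  # e.g. from cat /proc/sys/net/core/rmem_default
DOUBLED=$((DEFAULT * 2))
# Use 131072 bytes or twice the system default, whichever is greater
if [ "$DOUBLED" -gt 131072 ]; then SIZE=$DOUBLED; else SIZE=131072; fi
echo "ReceiveBufferSize $SIZE"
```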

7.2 AIX

Problem Description

Low data transfer rates running on AIX 5 when handling large (multi-megabyte) POST requests from Windows machines. Network traces show large delays (~150 ms) between packet acknowledgments.

Resolution

This performance problem can be corrected by setting an AIX network tuning option and applying AIX maintenance.

For all releases of AIX, set the tcp_nodelayack network option to 1 by using the following command:

no -o tcp_nodelayack=1

For AIX 5.1, apply the fix for APAR IY53226. For more information, see: IY53226

For AIX 5.2, apply the fix for APAR IY53254. For more information, see: IY53254

Problem Description

Unexpected network latency when the application is somewhat slow. Network traces show a normal HTTP 200 OK message for the first part of the response, then AIX waits ~150ms for a delayed ACK from the client.

Resolution

This performance problem can be corrected by setting an AIX network tuning option.

Set the rfc2414 network option to 1 by using the following command:

no -o rfc2414=1

8. Operating System Tuning Reference Materials

Instructions for tuning some operating system parameters are available in the WebSphere InfoCenter. Many of these parameters, such as TCP layer configuration or file descriptor configuration, apply to IBM HTTP Server as well.

9. Memory use comparison between IBM HTTP Server 1.3 and IBM HTTP Server 2.0

This comparison is not applicable to IBM HTTP Server on Windows, where memory usage is much more similar between 1.3 and 2.0.

Many customers on Unix systems have encountered paging (swap) space or physical memory problems with IBM HTTP Server 1.3 due to the large number of child processes which may be required, and the memory overhead per child process.

On AIX and Solaris, paging space is allocated based on the virtual memory size of the process, even for pages which are shared with the httpd parent process and will never be modified. For IBM HTTP Server 1.3, the majority of the virtual memory in a child process is shared with the parent process and never modified in the child, so while it contributes to the paging space usage (a disk allocation issue) it does not contribute to active paging (a performance issue).

This information can help determine how much paging space is required, as well as show some of the benefits of migrating to IBM HTTP Server 2.0 or later.

Customers should expect high virtual memory use for the entire set of IBM HTTP Server 1.3 processes; customers should check paging space utilization when encountering problems related to virtual memory, and ensure that enough paging space has been allocated to support the maximum configured number of httpd processes.

Scenario 1 for comparison

IBM HTTP Server versions: 1.3.28.1 with latest maintenance; 2.0.47.1 with latest maintenance; WebSphere 5.1.1.8 plug-in
OS: Solaris 9
MaxClients: 500; IBM HTTP Server 2.0.47.1 requires two child processes, while IBM HTTP Server 1.3.28.1 requires 500 child processes
https transports in WebSphere plug-in: one https transport will be configured. Note: each additional https transport adds around 400KB of memory per child process, so a configuration with multiple https transports sees much greater benefit when switching to IBM HTTP Server 2.0 or above.
SSL-enabled virtual hosts in web server: one SSL-enabled virtual host will be configured. Note: each additional SSL-enabled virtual host adds around 400KB of memory per child process, so a configuration with multiple SSL-enabled virtual hosts sees much greater benefit when switching to IBM HTTP Server 2.0 or above.
Memory-based caching: none enabled. The WebSphere plug-in ESI cache feature is available with either 1.3.28.1 or 2.0.47.1; there is one copy of the cache per child process, so it is much more memory-efficient with IBM HTTP Server 2.0 or above.

Process management configuration

1.3.28.1
StartServers        5
MaxClients        500
MaxSpareServers   500
MinSpareServers     1
MaxRequestsPerChild 0
2.0.47.1
<IfModule worker.c>
ServerLimit          2
ThreadLimit        250
StartServers         2
MaxClients         500
MinSpareThreads      1
MaxSpareThreads    500 
ThreadsPerChild    250
MaxRequestsPerChild  0
</IfModule>
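
As a quick consistency check on the worker configuration above: the worker MPM needs only ServerLimit = 2 child processes because each child runs ThreadsPerChild = 250 threads, and 2 × 250 covers MaxClients = 500.

```python
# Worker MPM sizing taken from the 2.0.47.1 configuration above.
server_limit = 2         # ServerLimit
threads_per_child = 250  # ThreadsPerChild
max_clients = 500        # MaxClients

# Child processes needed to serve max_clients concurrent connections
# (ceiling division):
children_needed = -(-max_clients // threads_per_child)
print(children_needed)  # 2
assert children_needed <= server_limit
```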

Memory use measurements

1.3.28.1

(from ps -A -o pid,ppid,vsz,rss,comm)

  PID  PPID  VSZ  RSS COMMAND
22729 22676 13448 4768 bin/httpd
22721 22676 13448 4768 bin/httpd
22734 22676 13448 4768 bin/httpd
22745 22676 13448 4768 bin/httpd
22737 22676 13448 4768 bin/httpd
22719 22676 13448 4768 bin/httpd
22740 22676 13448 4768 bin/httpd
22731 22676 13448 4768 bin/httpd
22728 22676 13448 4768 bin/httpd
22741 22676 13448 4768 bin/httpd
22720 22676 13448 4768 bin/httpd
22724 22676 13448 4768 bin/httpd
22746 22676 13448 4768 bin/httpd
22717 22676 13448 4768 bin/httpd
22730 22676 13448 4768 bin/httpd
22718 22676 13448 4768 bin/httpd
22722 22676 13448 4768 bin/httpd
22732 22676 13448 4768 bin/httpd
22743 22676 13448 4768 bin/httpd
22739 22676 13448 4768 bin/httpd
22733 22676 13448 4768 bin/httpd
22676     1 13448 8760 bin/httpd
22742 22676 13448 4768 bin/httpd
(and 478 more children)

Totals: Total virtual memory size is about 6.7 GB. Total resident set size is about 2.4 GB.
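
These totals can be sanity-checked directly from the per-process KB figures in the ps listing (500 children plus the parent):

```python
# KB figures taken from the ps listing above.
children = 500
child_vsz_kb, child_rss_kb = 13448, 4768
parent_vsz_kb, parent_rss_kb = 13448, 8760

total_vsz_gb = (children * child_vsz_kb + parent_vsz_kb) / 1e6
total_rss_gb = (children * child_rss_kb + parent_rss_kb) / 1e6
print(round(total_vsz_gb, 1), round(total_rss_gb, 1))  # 6.7 2.4
```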

2.0.47.1

(from ps -A -o pid,ppid,vsz,rss,comm; on AIX, use ps -A -o pid,ppid,vsz,rssize,comm)

  PID  PPID  VSZ  RSS COMMAND
  394   390 44240 36696 /home/trawick/testihsbuild/ihsinstall/bin/httpd
  390     1 15136 9528 /home/trawick/testihsbuild/ihsinstall/bin/httpd
  393   390 44144 36536 /home/trawick/testihsbuild/ihsinstall/bin/httpd
  392   390 14552 3328 /home/trawick/testihsbuild/ihsinstall/bin/httpd

Totals: Total virtual memory size is about 117 MB. Total resident set size is about 86 MB. Note that IBM HTTP Server 2.0 and above has an extra child process when CGI requests are enabled.

Scenario 2 for comparison

Scenario 2 is identical to Scenario 1 except for the operating system, which is AIX 5.3.

Memory use measurements

1.3.28.1

(from ps auxw then post-processing and picking the largest child)

USER      SZ       RSS
root     11800     800
nobody   12956    1976
(and 499 more children like this)

Totals: Total virtual memory size is about 6.5 GB. Total resident set size is about 1 GB.

2.0.47.1

(from ps auxw then post-processing)

USER     SZ      RSS
nobody   32876   32932
nobody   33348   33384
root     632     668
nobody   636     808

Totals: Total virtual memory size is about 68 MB. Total resident set size is about 68 MB. Note that IBM HTTP Server 2.0 and above has an extra child process when CGI requests are enabled.

10. Slow startup, or slow response time from proxy or LDAP with IBM HTTP Server 2.0 or above on AIX

In support of IPv6 networking, these levels of IBM HTTP Server query the resolver library for both IPv4 and IPv6 addresses for a host. This can result in extra DNS queries on AIX, even when the IPv4 address is defined in /etc/hosts. To work around this issue, IPv6 lookups can be disabled.

System-wide setting

Edit /etc/netsvc.conf, which configures the resolver system-wide. Add or modify the lookup rule for hosts so that it has this setting:

hosts=local4,bind4

That will disable IPv6 lookups. Now restart IBM HTTP Server and confirm that the delays with proxy requests or LDAP have been resolved.

IHS-specific setting

Add this to the end of ihsroot/bin/envvars:

  NSORDER=local4,bind4
  export NSORDER

11. High disk I/O with IBM HTTP Server on AIX

A customer reported that an internal disk mirror showed a high level of write I/O every 60 seconds which was related empirically to client load on the web server and which was determined to be unrelated to logging. AIX support narrowed down the specific web server activity related to the high write I/O and determined that it was due to file access times being updated by the filesystem when the web server served the page.

IBM HTTP Server 2.0 and above can send static files using the AIX send_file() API, which in turn can enable the AIX kernel to deliver the file contents to the client from the network buffer cache. This results in the file access time remaining unchanged, which solved this particular disk I/O problem.

The use of send_file() is controlled with the EnableSendfile directive. Several potential problems must be considered when IBM HTTP Server uses send_file(); thus it is disabled by default in the configuration files provided with the last several releases.
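
For the AIX access-time problem described above, the directive can be enabled explicitly in httpd.conf (a sketch; test carefully, since sendfile can misbehave on some filesystems such as NFS):

```apache
# Serve static files via send_file() so file access times are not updated.
EnableSendfile On
```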

12. High CPU in child processes after WebSphere plugin config is updated

The WebSphere plug-in normally reloads its configuration file (plugin-cfg.xml) during steady-state operation if the file is modified. When the reload occurs, it must happen in every web server child process serving requests. Initialization of https transports is particularly CPU-intensive, so if there are many such transports defined or many child processes, the CPU impact can be high.

One way to address the issue on Unix and Linux platforms is to disable automatic reload by setting RefreshInterval to -1 in plugin-cfg.xml, then use the apachectl graceful command to restart the web server when the new plug-in configuration must be activated. This results in the reload occurring only once, in the IHS parent process. The new configuration is then inherited by the new child processes created by the restart operation.
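
As a sketch, the RefreshInterval edit looks like this (shown here on a sample Config line rather than a real plugin-cfg.xml; after editing the actual file, run apachectl graceful from the IHS bin directory):

```shell
# Force RefreshInterval to -1 so the plug-in never reloads on its own.
# In practice, run the sed against your actual plugin-cfg.xml.
echo '<Config RefreshInterval="60" IgnoreDNSFailures="false">' \
  | sed 's/RefreshInterval="[0-9]*"/RefreshInterval="-1"/'
```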

Another way to address this issue is to use WebSphere 6.1 (or later) webserver definitions. These allow smaller plug-in configuration files, because each plugin-cfg.xml is generated with only the transports relevant to that web server. When a reload occurs, it does not reinitialize all the transports; only the transports in the changed configuration are reinitialized.

13. Access logs

Disk writes to the access logs can become a bottleneck at high loads on any operating system. mpmstats output will show many busy threads in log state. Specify

BufferedLogs on

in the IHS configuration file to reduce the disk I/O rate due to access logging.

14. z/OS Considerations

HFS disk syncing

IHS threads can block due to disk filesystem sync activity even with logging disabled. This can be caused by updates to file access times for IHS static content files on an HFS. A mount parameter of SYNC(99999) prevents the metadata updates. zFS uses asynchronous disk writes and should not be affected as much by sync activity.

USS save areas

Currently USS has 14,400 save areas per LPAR in a cell pool. One of these is used for every Unix system call. If the LPAR supports more concurrent web connections than this, the cell pool may become depleted. Once this happens, subsequent system calls still work, but they must first obtain the local lock and use slower methods of allocating storage. Because the cell pool is an LPAR-wide resource, other Unix applications (WAS, for example) running on the same LPAR are affected as well.

If your LPAR needs to support tens of thousands of concurrent web connections, check the mpmstats output in the error log for each IHS server. The bsy count shows the total number of busy threads; each busy thread will typically need a USS save area. The ka count shows the number of threads that are busy due to keepalive reads. With IHS 8.5 and later, the ka count will always be zero because keepalive reads are handled asynchronously and do not tie up a thread. With IHS 8.0 and earlier, you may be able to reduce the KeepAliveTimeout value to lower the number of busy threads. Disabling KeepAlive altogether is not recommended because of the increased CPU overhead of re-establishing a connection each time the client requests another web object; this overhead is much greater for SSL connections.
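
A sketch for pulling the bsy and ka counts out of mpmstats lines (the sample line below is illustrative; in practice pipe the actual error log through the awk filter, e.g. grep mpmstats error_log):

```shell
# Extract busy (bsy) and keepalive (ka) thread counts from an mpmstats line.
echo '[Mon Jan 01 00:00:00 2018] [notice] mpmstats: rdy 236 bsy 14 rd 1 wr 10 ka 2 log 0 dns 0 cls 1' \
  | awk '{ for (i = 1; i < NF; i++) if ($i == "bsy" || $i == "ka") print $i "=" $(i+1) }'
```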

Default stack size for 31-bit IHS

IHS 8.5.5.0 introduced an optional 31-bit IHS for compatibility with 31-bit-only libraries required by some DGW plugins. If this 31-bit IHS is used, the default stack size should be increased, e.g. to STACK(1M,128K,ANY,KEEP,1M,128K) in CEE_RUNOPTS. Otherwise, high CPU can be observed in CEE@HDPSO (XPLINK stack overflow).

zEDC compression offload

Compressing large files on the fly with mod_deflate can use significant CPU. IHS 8.5.5.4 (PI24424) and later includes an alternate mod_deflate.so module, mod_deflate_z.so, that can use zEnterprise Data Compression (zEDC) when configured. In future releases, the default mod_deflate.so will contain zEDC support.

In IHS benchmarks, more than 10x CPU savings were measured in configurations where mod_deflate drives high CPU usage.

zEDC FAQ