Prevent Bruteforce attacks with Fail2ban

Vigilant system administrators will notice many failed login attempts on their internet-connected servers. While it's good to know that you are preventing these logins, they fill your logs and can make it harder to spot other problems. These failed logins also consume bandwidth as attackers try over and over again to get into your system. Fortunately, there is a way to stop these attacks from continuing on a Linux-based system. The following tutorial sets up Fail2ban on a Red Hat-based system. We will monitor failed SSH logins and failed Webmin logins, and we will also set up a unique jail that blocks persistent attackers for a longer period of time.

We will begin by installing Fail2ban and its required dependencies. At the time this was published, Fail2ban was at version 0.8.3.

yum install fail2ban

Though it's installed, no jails are active yet. A jail tells Fail2ban what to monitor. We are going to activate the SSH jail first. Please substitute your text editor of choice for nano below; it is used in this example because of its ease of use for new users.

Enabling SSH Monitoring

nano /etc/fail2ban/jail.conf

Look for the line that begins [ssh-iptables]. Under this line, change the enabled value to true. Additionally, on a Red Hat system, we need to change the log that is monitored for SSH failures: change the logpath value to /var/log/secure.
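For reference, after those two edits the jail should look roughly like the block below. Treat this as a sketch rather than a literal copy – the default filter and action lines can differ slightly between Fail2ban versions, and only the enabled and logpath values actually need to change:

[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=example@example.com, sender=example@example.com]
logpath  = /var/log/secure
maxretry = 5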

Press Ctrl+O (or F3) to save your configuration. This is specific to nano; change as appropriate for your text editor.

Enabling Webmin Monitoring

Next we are going to add an additional jail. This is only needed, and will only function, if you have Webmin installed. If not, skip to the next section.

At the tail end of the [ssh-iptables] jail that you just edited above, add the following block.

[webmin-iptables]
enabled  = true
filter   = webmin-auth
action   = iptables[name=webmin, port=10000, protocol=tcp]
 sendmail-whois[name=WEBMIN, dest=example@example.com, sender=example@example.com]
logpath  = /var/log/secure

Modify the two instances of example@example.com with the destination and sender email addresses. This jail will monitor attempted logins to the Webmin user interface, which runs on port 10000, and if there are too many failures, issue a ban on the IP address. The email address supplied in dest= will receive an email saying the ban has been issued. If you moved your install of Webmin to run on something other than port 10000, change the port= value as appropriate.

Deal with Persistent Attackers

After you’ve had Fail2ban installed for a while, you will notice that you are banning the same IP address(es) over and over again. By default, Fail2ban issues an IP block for 10 minutes. This is often long enough to deter someone running an automated scan against your particular network segment. This length is also configurable in the jail.conf file. Look for the bantime value at the top of the configuration file. Additionally, individual jails can override this, as we are about to do.

After banning the same IP many times, I have decided that I don’t want to see that IP address again for a while. Using a jail found on the Fail2ban website, we will issue a month long ban if we block the same IP ten times in a week.

First we will add another jail to jail.conf. After the [ssh-iptables] jail, paste the following.

[fail2ban]
enabled = true
filter = fail2ban
action = iptables-allports[name=fail2ban]
 sendmail-whois[name=fail2ban, dest=example@example.com, sender=example@example.com]
logpath = /var/log/fail2ban.log
maxretry = 10
# Search past week
findtime = 604800
# Ban for 30 days
bantime = 2592000

As stated above, this will issue a 30 day block of an IP address if it is blocked 10 times within a week. Again, change the example@example.com to your email address to receive notification of blocks.

Save and exit nano. Now we need to create one more file – the code behind the last jail we created.

nano /etc/fail2ban/filter.d/fail2ban.conf

Paste the following into this file

# fail2ban configuration file
#
# Author: Tom Hendrikx
#
# $Revision$
#
[Definition]
# Option:  failregex
# Notes.:  regex to match the password failures messages in the logfile. The
#          host must be matched by a group named "host". The tag "<HOST>" can
#          be used for standard IP/hostname matching and is only an alias for
#          (?:::f{4,6}:)?(?P<host>\S+)
# Values:  TEXT
#
# Count all bans in the logfile
failregex = fail2ban.actions: WARNING \[(.*)\] Ban <HOST>
# Option:  ignoreregex
# Notes.:  regex to ignore. If this regex matches, the line is ignored.
# Values:  TEXT
#
# Ignore our own bans, to keep our counts exact. This means it doesn't count any bans this jail issues.
# In your config, name your jail 'fail2ban', or change this line! This means in the jail added to jail.conf, the jail must be like this:
# [fail2ban], else this won't work.
ignoreregex = fail2ban.actions: WARNING \[fail2ban\] Ban <HOST>

Save and exit nano.
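If you would like to confirm that the filter matches before relying on it, Fail2ban ships with a small test utility, fail2ban-regex, that runs a filter file against a log file and reports how many lines matched. Assuming the paths used above, a quick check looks like this:

fail2ban-regex /var/log/fail2ban.log /etc/fail2ban/filter.d/fail2ban.conf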

Last, we set Fail2ban to run at boot and then start the service:

chkconfig --levels 2345 fail2ban on
service fail2ban start

To see the status of Fail2ban, run fail2ban-client:

fail2ban-client status

It should output something similar to

Status
|- Number of jail:      3
`- Jail list:           webmin-iptables, fail2ban, ssh-iptables

Fail2ban is now running.
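You can also ask fail2ban-client for the status of a single jail, which lists the currently banned IP addresses and the failure totals for that jail. For example, for the SSH jail enabled earlier:

fail2ban-client status ssh-iptables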

For more information on Fail2ban, check the Fail2ban project website.
If you are interested in other things to block or how to do something with Fail2ban, check their HOWTOs.

Forcing HTTP traffic over HTTPS using Apache

I recently was required to redirect all traffic for a domain over an HTTPS connection. Using Apache's .htaccess files and the RewriteEngine, this task is trivial.

Insert this block of code into the .htaccess file at the root of your domain. This will redirect all HTTP traffic to use an HTTPS connection instead:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
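If you prefer the redirect to be marked as permanent and to stop further rewrite processing, you can optionally add the [R=301,L] flags to the rule; the shorter rule above already triggers an external (temporary) redirect because the target is a full URL:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]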

You also need to make sure you have this line in your httpd.conf file (installed by default on most Linux distributions):

LoadModule rewrite_module modules/mod_rewrite.so

Email on shutdown and restart (Linux)

 

The ability to know when a system shut down (gracefully) and when it came back up can be invaluable to a System Administrator. The tips below will send out an email when a Linux system is gracefully shut down and again when the system has restarted, meaning that for a single reboot the Administrator will receive two emails. This solution will not send out an email if the system loses power; however, an email will still be sent on reboot, which should indicate to the Administrator that something went wrong with the shutdown. The tips below are for a Red Hat-based system but should be similar on other distributions.

Start Up Email (Easy Way)

First up is the blurb that will send an email on system start only. This is a simple crontab entry, using one of the special time specifications that cron provides.

crontab -e

Hit the ‘i’ key (crontab -e typically opens the vi editor) and on the first empty line paste the following to send an email to example@example.com on system start up (replace with your address):

@reboot  echo "Server has restarted "`hostname` | mail -s "System Restart" example@example.com

Hit the Escape key and then type :wq to save and quit.

Now restart the system. After a few minutes you should receive an email saying your system restarted successfully. Nice and easy, but it doesn't quite do what we want: send an email on start up AND shutdown. That brings us to:

Start Up and Shutdown Email (More advanced)

To get an email at both start up and shut down, we need to write an init script. The tips below are specific to Red Hat-based systems (Red Hat, Fedora, CentOS, etc.) but should be fairly similar on others.

First we need to create a new script (the example below uses nano, but vi, emacs or any other editor can be used). The following is the complete script used in this example – explanations follow. In your home directory (or root's home directory – at the end of this you want the script to be owned by root; this tutorial does not cover permissions), type:

nano SystemEmail

Paste the following code into that document and save it.

 
#!/bin/sh
# chkconfig: 2345 99 01
# Description: Sends an email at system start and shutdown

#############################################
#                                           #
# Send an email on system start/stop to     #
# a user.                                   #
#                                           #
#############################################

EMAIL="example@example.com"
RESTARTSUBJECT="["`hostname`"] - System Startup"
SHUTDOWNSUBJECT="["`hostname`"] - System Shutdown"
RESTARTBODY="This is an automated message to notify you that "`hostname`" started successfully.

Start up Date and Time: "`date`
SHUTDOWNBODY="This is an automated message to notify you that "`hostname`" is shutting down.
Shutdown Date and Time: "`date`
LOCKFILE=/var/lock/subsys/SystemEmail
RETVAL=0

# Source function library.
. /etc/init.d/functions

stop()
{
    echo -n $"Sending Shutdown Email: "
    echo "${SHUTDOWNBODY}" | mutt -s "${SHUTDOWNSUBJECT}" ${EMAIL}
    RETVAL=$?

    if [ ${RETVAL} -eq 0 ]; then
        rm -f ${LOCKFILE}
        success
    else
        failure
    fi
    echo
    return ${RETVAL}
}

start()
{
    echo -n $"Sending Startup Email: "
    echo "${RESTARTBODY}" | mutt -s "${RESTARTSUBJECT}" ${EMAIL}
    RETVAL=$?

    if [ ${RETVAL} -eq 0 ]; then
        touch ${LOCKFILE}
        success
    else
        failure
    fi
    echo
    return ${RETVAL}
}

case "$1" in
    stop)
        stop
        ;;
    start)
        start
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        ;;
esac
exit ${RETVAL}

The main chunk of the code is the case statement at the bottom. When this script is set up, it will automatically be passed either “start” or “stop” as a parameter. Depending on that value, the case statement will either call the start() or stop() function. So, now that we have the script done, we need to set this up to run.

First, we need to make it executable:

chmod u+x SystemEmail

At this point you can test your code by running either of the two following commands. Both should send you an email, if you changed the appropriate variable in the script.

./SystemEmail start
./SystemEmail stop

Now we want to set this up to run at start up and shut down. Copy the script from your home directory to the init.d directory. Once it is there, you want the script to be owned by root; this will ensure that only the root user can make changes to it. Since this will run every time you start and stop your machine, this is a wise precaution.

cp SystemEmail /etc/init.d/
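To actually hand the copy over to root and make sure it stays executable, something along these lines should do it (run as root; the path matches the copy above):

chown root:root /etc/init.d/SystemEmail
chmod 755 /etc/init.d/SystemEmail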

Last thing we do is set this up to run automatically by configuring it via chkconfig.

chkconfig --levels 3 SystemEmail on
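If chkconfig complains that the service is unknown, register the script first with chkconfig --add (it reads the runlevel header at the top of the script); either way, you can verify the result afterwards. The listing should show the service switched on for the level(s) you specified, e.g. 3:on.

chkconfig --add SystemEmail
chkconfig --list SystemEmail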

You will now receive two emails during a normal system reboot. Congratulations.

Send Email on Root Login

Since root should not have direct login access via SSH, and we have set up our user to use sudo, root should get logged into very rarely. In an effort to alert the System Administrator when someone logs in as root, I have set up my system to send out an email on root login.

  • Log in as root
    su -
  • Change to the root user’s home directory
    cd ~
  • Edit the root user’s .bashrc file (in this example I use nano, but using vi, emacs, pico, etc. is fine)
    nano .bashrc
  • Add the following block of code to the end of .bashrc. This will send an email to example@example.com (change as appropriate)
    echo 'ALERT - Root Shell Access () on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d"(" -f2 | cut -d")" -f1`" example@example.com
  • When root logs in, you will receive a message similar to this
    ALERT - Root Shell Access () on: Tue Jun 16 11:04:10 CDT 2009 user123 pts/0 2009-06-16 11:04

Word of warning: send this to an email account that is not hosted on the same machine. If someone can log in as root, they can see the mail spools on the entire server. It would be a trivial matter to delete this message from the spool so the real System Administrator never sees it.

Multiple IP Addresses on the same physical connection (Linux)

 

There are times when a server is allocated more than one IP address even though it contains only one physical network card. To associate these IP addresses with the server, some networking settings need to be modified. The steps outlined in this walk-through are for Red Hat-based systems and for statically assigned IP addresses (as a server generally will have).
For this walk-through we are going to add one additional IP address to eth0. Navigate to the network-scripts directory:

cd /etc/sysconfig/network-scripts

Copy ifcfg-eth0 to ifcfg-eth0:0

cp ifcfg-eth0 ifcfg-eth0:0

Now we need to modify the new file slightly so that it gets its own IP address.

nano ifcfg-eth0:0

DEVICE=eth0:0     <-- Change this to match the new eth0:0 file we just created
BOOTPROTO=none
BROADCAST=x.x.x.x    <-- This is the broadcast address for the subnet the new IP is on
DNS1=x.x.x.x    <-- This is the main DNS server you are using (example: 64.120.14.26)
GATEWAY=x.x.x.x   <-- This is the gateway address for the subnet the new IP is on
HWADDR=<DO NOT CHANGE>   <-- Don't change this from what is existing; the hardware address is the same as the physical card's
IPADDR=x.x.x.x   <-- This is your new IP address
NETMASK=x.x.x.x   <-- This is the netmask for the subnet the new IP is on
ONBOOT=yes    <-- Leave as yes
OPTIONS=layer2=1
TYPE=Ethernet
PREFIX=29
DEFROUTE=yes
NAME="System eth0:0"    <-- Change to reflect the new name of the device

Save your file with the new settings. Now we need to restart the networking service:

service network restart

When the network components come back up, you should see your new device in the output of the ifconfig command. To add more IPs, copy and replace values as specified above.
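Purely as an illustration, a completed ifcfg-eth0:0 might end up looking like this, using made-up addresses from the 192.0.2.8/29 documentation range – your addresses, netmask, DNS server and hardware address will of course differ:

DEVICE=eth0:0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.248
BROADCAST=192.0.2.15
GATEWAY=192.0.2.9
DNS1=192.0.2.1
TYPE=Ethernet
NAME="System eth0:0"
# HWADDR, OPTIONS, DEFROUTE and PREFIX stay exactly as they were copied from ifcfg-eth0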

Namebench: cross-platform DNS benchmarking tool

Namebench is a cross-platform tool written in Python that makes it possible to easily select the fastest DNS server available in your area, as well as run benchmark tests against DNS servers.

All you need to start using namebench is Python and the Tk library, e.g. if you use Ubuntu or Debian, just run the following command to meet namebench's requirements:

sudo apt-get install python python-tk -y

When done, go to namebench's official website and download the latest tarball from there. For example, 1.3.1 is the latest version at the time of writing, so you can download it directly from the project's downloads page. Or you can just take the steps below:

cd /usr/src
sudo -s
wget http://code.google.com/p/namebench/downloads/detail?name=namebench-1.3.1-source.tgz
tar -xvzf namebench-1.3.1-source.tgz
cd namebench-1.3.1
./namebench.py

Increase port range available for applications

By default, an average Linux distribution allows applications to use the TCP port range 32,768-61,000 for outgoing connections, which is why your system can handle up to 28,232 outgoing TCP sessions at a time. This is more than enough if your Linux system is installed on a laptop or desktop and you just use it for occasional visits to facebook.com, gmail.com and linuxscrew.com (yeah!). But if you run a proxy/web cache like squid, or some other service that opens a lot of outgoing TCP connections, you will likely hit the ceiling of 28,232 soon.

First of all, let’s see current port range available for TCP sessions:

cat /proc/sys/net/ipv4/ip_local_port_range

Most likely the output will show something like "32768 61000". In order to expand this range you can either echo a modified range into the above file in the /proc filesystem (a temporary solution) or add a corresponding line to /etc/sysctl.conf (a permanent solution).

To temporarily expand the port range from 28,232 to 40,000 ports, do the following:

sudo -s
echo "25000 65000" > /proc/sys/net/ipv4/ip_local_port_range

To make sure the new port range will be applied after a reboot, add the following line to /etc/sysctl.conf:

net.ipv4.ip_local_port_range = 25000 65000

or just execute this:

sudo sysctl -w net.ipv4.ip_local_port_range="25000 65000"
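If you went the /etc/sysctl.conf route, you can apply the new value immediately, without waiting for a reboot, by reloading the file:

sudo sysctl -p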

How to disable directory browsing in apache/httpd?

How can I disable the building of directory indexes in apache/httpd? In other words, how can I prevent users from seeing the contents of published directories?

 

Actually, you are totally right to want to disable this feature. One of the "must do's" when setting up a secure Apache web server is to disable directory browsing. Apache usually comes with this feature enabled, but it's always a good idea to disable it unless you really need it.

First of all, find where Apache's main config file, httpd.conf, is located. If you use Debian, it should be here: /etc/apache/httpd.conf. Using a text editor like Vim or Nano, open this file and find the line that looks as follows:

Options Includes Indexes FollowSymLinks MultiViews

then remove the word Indexes and save the file. The line should look like this one:

Options Includes FollowSymLinks MultiViews

After it is done, restart Apache (e.g. /etc/init.d/apache restart).
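If you only want to turn off listings for a particular directory or site (or you cannot edit httpd.conf), the same effect can usually be achieved with a one-line .htaccess file in that directory, assuming AllowOverride permits changing Options:

Options -Indexes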

Grub Fallback: Boot good kernel if new one crashes

It's hard to believe, but I didn't know about the Grub fallback feature. So every time I needed to reboot a remote server into a new kernel, I had to test it on a local server to make sure it wouldn't panic on the remote unit. And if a kernel panic still happened, I had to ask somebody with physical access to the server to reboot the hardware and choose the proper kernel in Grub. It's all tedious – it's much better to use Grub's native fallback feature.

Grub is the default boot loader in most Linux distributions today; at least the major distros like Centos/Fedora/RedHat, Debian/Ubuntu/Mint and Arch use Grub. This makes it possible to use the Grub fallback feature out of the box. Here is an example scenario.

There is a remote server hosted in New Zealand and you (sitting in Denmark) have access to it over the network only (no console server). In this case you cannot afford for a new kernel to make the server unreachable, e.g. if the new kernel crashes during boot it won't load the network interface drivers, so your Linux box won't appear online until somebody reboots it into a workable kernel. Thankfully, Grub can be configured to try loading the new kernel once and, if it fails, load another kernel according to the configuration. You can see my example grub.conf below:

default=saved
timeout=5
splashimage=(hd0,1)/boot/grub/splash.xpm.gz
hiddenmenu
fallback 0 1
title Fedora OpenVZ (2.6.32-042stab053.5)
        root (hd0,1)
        kernel /boot/vmlinuz-2.6.32-042stab053.5 ro root=UUID=6fbdddf9-307c-49eb-83f5-ca1a4a63f584 rd_MD_UUID=1b9dc11a:d5a084b5:83f6d993:3366bbe4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=sv-latin1 rhgb quiet crashkernel=auto
        initrd /boot/initramfs-2.6.32-042stab053.5.img
        savedefault fallback
title Fedora (2.6.35.12-88.fc14.i686)
        root (hd0,1)
        kernel /boot/vmlinuz-2.6.35.12-88.fc14.i686 ro root=UUID=6fbdddf9-307c-49eb-83f5-ca1a4a63f584 rd_MD_UUID=1b9dc11a:d5a084b5:83f6d993:3366bbe4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=sv-latin1 rhgb quiet
        initrd /boot/initramfs-2.6.35.12-88.fc14.i686.img
        savedefault fallback

According to this configuration, Grub will try to load the 'Fedora OpenVZ' kernel once and if it fails the system will be loaded into the good 'Fedora' kernel. If 'Fedora OpenVZ' loads well, you'll be able to reach the server over the network after the reboot. Notice the lines 'default=saved' and 'savedefault fallback', which are mandatory to make the fallback feature work.

Alternative way

I've heard that the official Grub fallback feature may work incorrectly on RHEL5 (and Centos 5), so there is an elegant workaround (found here):

1. Add the param 'panic=5' to your new kernel line so it looks like below:

title Fedora OpenVZ (2.6.32-042stab053.5)
        root (hd0,1)
        kernel /boot/vmlinuz-2.6.32-042stab053.5 ro root=UUID=6fbdddf9-307c-49eb-83f5-ca1a4a63f584 rd_MD_UUID=1b9dc11a:d5a084b5:83f6d993:3366bbe4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=sv-latin1 rhgb quiet crashkernel=auto panic=5
        initrd /boot/initramfs-2.6.32-042stab053.5.img

This param will make a crashed kernel reboot itself after 5 seconds.

2. Point the default Grub param to the good kernel, e.g. 'default=0'.

3. Type in the following commands (the good kernel appears first in grub.conf and the new kernel is the second one):

# grub

grub> savedefault --default=1 --once
savedefault --default=1 --once
grub> quit

This will make Grub boot into the new kernel once and, if it fails, it will load the good kernel. Now you can reboot the server and be sure it will appear online again in a few minutes. I usually prefer the native Grub fallback feature, but if you find it doesn't work for you, it makes sense to try the above-mentioned workaround.

Load balancing method

Load balancer terminology

Load Balancer : An IP-based traffic manager for clusters
VIP : The Virtual IP address that a cluster is contactable on (Virtual Server)
RIP : The Real IP address of a back-end server in the cluster (Real Server)
GW : The Default Gateway for a back-end server in the cluster
WUI : Web User Interface
Floating IP : An IP address shared by the master & slave load balancer when in a high-availability configuration (shared IP)
Layer 4 : Part of the seven layer OSI model; descriptive term for a network device that can route packets based on TCP/IP header information
Layer 7 : Part of the seven layer OSI model; descriptive term for a network device that can read and write the entire TCP/IP header and payload information at the application layer
DR : Direct Routing – a standard load balancing technique that distributes packets by altering only the destination MAC address of the packet
NAT : Network Address Translation – a standard load balancing technique that changes the destination of packets to and from the VIP (external subnet to internal cluster subnet)
SNAT (HAProxy) : Source Network Address Translation – the load balancer acts as a proxy for all incoming & outgoing traffic
SSL Termination (Pound) : The SSL certificate is installed on the load balancer in order to decrypt HTTPS traffic on behalf of the cluster
MASQUERADE : Descriptive term for the standard firewall technique where internal servers are represented as an external public IP address. Sometimes referred to as a combination of SNAT & DNAT rules
One Arm : The load balancer has one physical network card connected to one subnet
Two Arm : The load balancer has two physical network cards connected to two subnets

What is a virtual IP address?
Most load balancer vendors use the term virtual IP address (VIP) to describe the address that the cluster is
accessed from.
It is important to understand that the virtual IP (VIP) refers both to the physical IP address and also to the
logical load balancer configuration. Likewise the real IP (RIP) address refers both to the real server's physical
IP address and its representation in the logical load balancer configuration.

What is a floating IP address?
The floating IP address is shared by the master and slave load balancer when in a high-availability
configuration. The network knows that the master controls the floating IP address and all traffic will be sent to
this address. The logical VIP matches this address and is used to load balance the traffic to the application
cluster. If the master has a hardware failure then the slave will take over the floating IP address and
seamlessly handle the load balancing for the cluster. In scenarios that only have a master load balancer
there can still be a floating IP address, but in this case it would remain active on the master unit only.

What are your objectives?

It is important to have a clear focus on your objectives and the required outcome of the successful
implementation of your load balancing solution. If the objective is clear and measurable, you know when you
have achieved the goal.
Hardware load balancers have a number of flexible features and benefits for your technical infrastructure and
applications. The first question to ask is:

Are you looking for increased performance, reliability, ease of maintenance or all
three?
Performance
A load balancer can increase performance by
allowing you to utilize several commodity
servers to handle the workload of one
application.

Reliability
Running an application on one server gives you
a single point of failure. Utilizing a load
balancer moves the point of failure to the load
balancer. At Loadbalancer.org we advise that
you only deploy load balancers as clustered
pairs to remove this single point of failure.

Maintenance
Using the appliance, you can easily bring
servers on and off line to perform maintenance
tasks, without disrupting your users.
In order to achieve all three objectives of performance, reliability & maintenance in a web
based application, your application must handle persistence correctly (see page 31 for more
details).

What is the difference between a one-arm and a two-arm configuration?

The number of ‘arms’ is a descriptive term for how many physical connections (Ethernet ports or cables) are
used to connect the load balancers to the network. It is very common for load balancers that use a routing
method (NAT) to have a two-arm configuration. Proxy-based load balancers (SNAT) commonly use a one-arm configuration.
NB: To add even more confusion, having a ‘one-arm’ or ‘two-arm’ solution may or may not imply the same
number of network cards.

Topology definition:
One-Arm The load balancer has one physical network card connected to one subnet
Two-Arm The load balancer has two physical network cards connected to two subnets

What are the different load balancing methods supported?

Layer 4 DR (Direct Routing) : Ultra-fast local server based load balancing; requires handling the ARP issue on the real servers. (1 arm)
Layer 4 NAT (Network Address Translation) : Fast Layer 4 load balancing; the appliance becomes the default gateway for the real servers. (2 or 1 arm)
Layer 4 TUN : Similar to DR but works across IP encapsulated tunnels. (1 arm)
Layer 7 SSL Termination : Usually required in order to process cookie persistence in HTTPS streams on the load balancer; processor intensive. (1 arm)
Layer 7 SNAT (HAProxy) : Layer 7 allows great flexibility including full SNAT and WAN load balancing, HTTP or RDP cookie insertion and URL switching. (1 or 2 arm)

Key:

  • High performance: recommended for high performance, fully transparent and scalable solutions
  • Only required for Direct Routing implementations across routed networks (rarely used)
  • Recommended if HTTP cookie persistence is required; also used for numerous Microsoft applications such as Terminal Services (RDP cookie persistence) and Exchange, which require SNAT mode

Direct Routing (DR) load balancing method

The one-arm direct routing (DR) mode is the recommended mode for a Loadbalancer.org installation because it's a very high performance solution requiring very little change to your existing infrastructure. NB: Foundry Networks call this Direct Server Return and F5 call it N-Path.

  • Direct routing works by changing the destination MAC address of the incoming packet on the fly which is very fast.
  • However, it means that when the packet reaches the real server it expects it to own the VIP. This means you need to make sure the real server responds to the VIP, but does not respond to ARP requests.
  • On average, DR mode is 8 times quicker than NAT for HTTP, 50 times quicker for terminal services and much, much faster for streaming media or FTP.
  • Direct routing mode enables servers on a connected network to access either the VIPs or RIPs. No extra subnets or routes are required on the network.
  • The real server must be configured to respond to both the VIP & its own IP address.
  • Port translation is not possible in DR mode, i.e. having a different RIP port than the VIP port.

When using a load balancer in one-arm DR mode all load balanced services can be configured on the same subnet as the real servers. The real servers must be configured to respond to the virtual server IP address as well as their own IP address.
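How you handle the ARP issue depends on the real server's operating system; your load balancer's administration manual will have the exact procedure it recommends. Purely as a rough sketch for a Linux real server, one common approach is to add the VIP to the loopback interface and tell the kernel not to answer ARP for it (192.0.2.100 is a made-up VIP):

# Accept traffic addressed to the VIP without answering ARP for it
ip addr add 192.0.2.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2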

Network Address Translation (NAT) load balancing method

Sometimes it is not possible to use DR mode. The two most common reasons are: the application cannot bind to the RIP & VIP at the same time, or the host operating system cannot be modified to handle the ARP issue. The second choice is Network Address Translation (NAT) mode. This is also a fairly high performance solution, but it requires the implementation of a two-arm infrastructure with an internal and external subnet to carry out the translation (the same way a firewall works). Network engineers with experience of hardware load balancers will often have used this method.

  • In two-arm NAT mode the load balancer translates all requests from the external virtual server to the internal real servers.
  • The real servers must have their default gateway configured to point at the load balancer.
  • For the real servers to be able to access the internet on their own, i.e. browse the web, the setup wizard automatically adds the required MASQUERADE rule in the firewall script (some vendors incorrectly call this S-NAT).
  • If you want real servers to be accessible on their own IP address for non-load balanced services, i.e. SMTP, you will need to set up individual SNAT and DNAT firewall script rules for each real server (a rough sketch follows at the end of this section), or you can set up a dedicated virtual server with just one real server as the target.
  • Please see the advanced NAT considerations section of our administration manual for more details on these two issues.

When using a load balancer in two-arm NAT mode, all load balanced services can be configured on the external IP. The real servers must also have their default gateways directed to the internal IP. You can also configure the load balancers in one-arm NAT mode, but in order to make the servers accessible from the local network you need to change some routing information on the real servers.

It is possible to add routing rules to the real servers in order to perform NAT load balancing on a single subnet (1 arm); refer to the administration manual for details.
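The per-server SNAT and DNAT rules mentioned above belong in the appliance's firewall script; a hand-written pair might look roughly like this sketch, which maps SMTP on a made-up external address 203.0.113.10 to a real server at 10.0.0.10:

# Inbound: deliver SMTP arriving on the external address to the real server
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 25 -j DNAT --to-destination 10.0.0.10:25
# Outbound: make the real server's SMTP traffic appear to come from the external address
iptables -t nat -A POSTROUTING -s 10.0.0.10 -p tcp --sport 25 -j SNAT --to-source 203.0.113.10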

Source Network Address Translation (SNAT) load balancing method

If your application requires that the load balancer handles cookie insertion then you need to use the SNAT configuration. This also has the advantage of a one-arm configuration and does not require any changes to the application servers. However, as the load balancer is acting as a full proxy it doesn't have the same raw throughput as the routing-based methods.

The network diagram for the Layer 7 HAProxy SNAT mode is very similar to the Direct Routing example except that no re-configuration of the real servers is required. The load balancer proxies the application traffic to the servers so that the source of all traffic becomes the load balancer.

  • As with other modes a single unit does not require a Floating IP.
  • SNAT is a full proxy and therefore load balanced servers do not need to be changed in any way.

Because SNAT is a full proxy any server in the cluster can be on any accessible subnet including across the Internet or WAN.
SNAT is not TRANSPARENT by default, i.e. the real servers will see the source address of each request as the load balancer's IP address. The client's source IP address will be in the X-Forwarded-For header (see the TPROXY method).


Transparent Source Network Address Translation (SNAT-TPROXY) load balancing method

If the source address of the client is a requirement, then HAProxy can be forced into transparent mode using TPROXY. This requires that the real servers use the load balancer as their default gateway (as in NAT mode) and only works for directly attached subnets (as in NAT mode).

  • As with other modes a single unit does not require a Floating IP.
  • SNAT acts as a full proxy but in TPROXY mode all server traffic must pass through the load balancer.
  • The real servers must have their default gateway configured to point at the load balancer.

Transparent proxy is impossible to implement over a routed network, i.e. a wide area network such as the Internet. To get transparent load balancing over the WAN you can use the TUN load balancing method (Direct Routing over a secure tunnel) with Linux or UNIX based systems only.

SSL Termination or Acceleration (SSL) with or without TPROXY

All of the Layer 4 and Layer 7 load balancing methods can handle SSL traffic in pass-through mode, i.e. the backend servers do the decryption and encryption of the traffic. This is very scalable as you can just add more servers to the cluster to gain higher transactions per second (TPS). However, if you want to inspect HTTPS traffic in order to read or insert cookies you will need to decode (terminate) the SSL traffic on the load balancer. You can do this by importing your secure key and signed certificate to the load balancer, giving it the authority to decrypt traffic. The load balancer uses standard Apache/PEM format certificates.
You can define a Pound SSL virtual server with a single backend, either a Layer 4 NAT mode virtual server or, more usually, a Layer 7 HAProxy VIP which can then insert cookies.


Pound-SSL is not TRANSPARENT by default, i.e. the backend will see the source address of each request as the load balancer's IP address. The client's source IP address will be in the X-Forwarded-For header. However, Pound-SSL can also be configured with TPROXY to ensure that the backend can see the source IP address of all traffic.