Load balancing method

Load balancer terminology

Load Balancer : An IP-based traffic manager for clusters
VIP : The Virtual IP address that a cluster is contactable on (Virtual Server)
RIP : The Real IP address of a back-end server in the cluster (Real Server)
GW : The Default Gateway for a back-end server in the cluster
WUI : Web User Interface
Floating IP : An IP address shared by the master and slave load balancers when in a high-availability configuration (shared IP)
Layer 4 : Part of the seven-layer OSI model; a descriptive term for a network device that can route packets based on TCP/IP header information
Layer 7 : Part of the seven-layer OSI model; a descriptive term for a network device that can read and write the entire TCP/IP header and payload information at the application layer
DR : Direct Routing, a standard load balancing technique that distributes packets by altering only the destination MAC address of the packet
NAT : Network Address Translation, a standard load balancing technique that changes the destination of packets to and from the VIP (external subnet to internal cluster subnet)
SNAT (HAProxy) : Source Network Address Translation, in which the load balancer acts as a proxy for all incoming and outgoing traffic
SSL Termination (Pound) : The SSL certificate is installed on the load balancer in order to decrypt HTTPS traffic on behalf of the cluster
MASQUERADE : A descriptive term for the standard firewall technique where internal servers are represented as an external public IP address; sometimes referred to as a combination of SNAT and DNAT rules
One Arm : The load balancer has one physical network card connected to one subnet
Two Arm : The load balancer has two physical network cards connected to two subnets

What is a virtual IP address?
Most load balancer vendors use the term virtual IP address (VIP) to describe the address that the cluster is accessed from.
It is important to understand that the virtual IP (VIP) refers both to the physical IP address and also to the logical load balancer configuration. Likewise, the real IP (RIP) address refers both to the real server's physical IP address and to its representation in the logical load balancer configuration.

What is a floating IP address?
The floating IP address is shared by the master and slave load balancer when in a high-availability
configuration. The network knows that the master controls the floating IP address and all traffic will be sent to
this address. The logical VIP matches this address and is used to load balance the traffic to the application
cluster. If the master has a hardware failure then the slave will take over the floating IP address and
seamlessly handle the load balancing for the cluster. In scenarios that only have a master load balancer
there can still be a floating IP address, but in this case it would remain active on the master unit only.

What are your objectives?

It is important to have a clear focus on your objectives and the required outcome of the successful
implementation of your load balancing solution. If the objective is clear and measurable, you know when you
have achieved the goal.
Hardware load balancers have a number of flexible features and benefits for your technical infrastructure and
applications. The first question to ask is:

Are you looking for increased performance, reliability, ease of maintenance or all three?

Performance
A load balancer can increase performance by allowing you to utilize several commodity servers to handle the workload of one application.

Reliability
Running an application on one server gives you a single point of failure. Utilizing a load balancer moves the point of failure to the load balancer. At Loadbalancer.org we advise that you only deploy load balancers as clustered pairs to remove this single point of failure.

Maintenance
Using the appliance, you can easily bring servers on and off line to perform maintenance tasks, without disrupting your users.

In order to achieve all three objectives of performance, reliability & maintenance in a web-based application, your application must handle persistence correctly (see page 31 for more details).

What is the difference between a one-arm and a two-arm configuration?

The number of ‘arms’ is a descriptive term for how many physical connections (Ethernet ports or cables) are used to connect the load balancers to the network. It is very common for load balancers that use a routing method (NAT) to have a two-arm configuration. Proxy-based load balancers (SNAT) commonly use a one-arm configuration.
NB: To add even more confusion, having a ‘one-arm’ or ‘two-arm’ solution may or may not imply the same number of network cards.

Topology definition:
One-Arm : The load balancer has one physical network card connected to one subnet
Two-Arm : The load balancer has two physical network cards connected to two subnets

What are the different load balancing methods supported?

Layer 4 – DR (Direct Routing) – 1 arm
Ultra-fast local server based load balancing. Requires handling the ARP issue on the real servers.

Layer 4 – NAT (Network Address Translation) – 2 or 1 arm
Fast Layer 4 load balancing; the appliance becomes the default gateway for the real servers.

Layer 4 – TUN – 1 arm
Similar to DR but works across IP encapsulated tunnels.

Layer 7 – SSL Termination (Pound) – 1 arm
Usually required in order to process cookie persistence in HTTPS streams on the load balancer. Processor intensive.

Layer 7 – SNAT (HAProxy) – 1 or 2 arm
Layer 7 allows great flexibility, including full SNAT and WAN load balancing, HTTP or RDP cookie insertion and URL switching.

Key:

DR : Recommended for high performance, fully transparent and scalable solutions.
TUN : Only required for Direct Routing implementations across routed networks (rarely used).
SNAT (HAProxy) : Recommended if HTTP cookie persistence is required; also used for numerous Microsoft applications such as Terminal Services (RDP cookie persistence) and Exchange that require SNAT mode.

Direct Routing (DR) load balancing method

The one-arm direct routing (DR) mode is the recommended mode for a Loadbalancer.org installation because it is a very high performance solution that requires very little change to your existing infrastructure. NB: Foundry Networks calls this Direct Server Return and F5 calls it N-Path.

  • Direct routing works by changing the destination MAC address of the incoming packet on the fly, which is very fast.
  • However, it means that when the packet reaches the real server, the real server is expected to own the VIP. This means you need to make sure the real server responds to the VIP, but does not respond to ARP requests for it.
  • On average, DR mode is 8 times quicker than NAT for HTTP, 50 times quicker for Terminal Services and much, much faster for streaming media or FTP.
  • Direct routing mode enables servers on a connected network to access either the VIPs or RIPs. No extra subnets or routes are required on the network.
  • The real server must be configured to respond to both the VIP and its own IP address.
  • Port translation is not possible in DR mode, i.e. having a different RIP port than the VIP port.

When using a load balancer in one-arm DR mode all load balanced services can be configured on the same subnet as the real servers. The real servers must be configured to respond to the virtual server IP address as well as their own IP address.
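As an illustration only, on a Linux real server the ARP issue is commonly handled by tightening the kernel's ARP behaviour and adding the VIP to the loopback interface. The VIP address and interface name below are hypothetical placeholders; consult the administration manual for the exact procedure for your operating system.

    # Hypothetical example: VIP is 192.168.1.100, real server NIC is eth0.
    # Stop the server answering ARP requests for addresses held on loopback.
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.eth0.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2
    sysctl -w net.ipv4.conf.eth0.arp_announce=2

    # Add the VIP to the loopback interface so the server accepts traffic
    # for it without advertising the address on the network.
    ip addr add 192.168.1.100/32 dev lo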

Network Address Translation (NAT) load balancing method

Sometimes it is not possible to use DR mode. The two most common reasons are that the application cannot bind to the RIP and the VIP at the same time, or that the host operating system cannot be modified to handle the ARP issue. The second choice is Network Address Translation (NAT) mode. This is also a fairly high performance solution, but it requires the implementation of a two-arm infrastructure with an internal and external subnet to carry out the translation (the same way a firewall works). Network engineers with experience of hardware load balancers will often have used this method.

  • In two-arm NAT mode the load balancer translates all requests from the external virtual server to the internal real servers.
  • The real servers must have their default gateway configured to point at the load balancer.
  • For the real servers to be able to access the internet on their own, i.e. browse the web, the setup wizard automatically adds the required MASQUERADE rule in the firewall script (some vendors incorrectly call this S-NAT).
  • If you want real servers to be accessible on their own IP address for non-load balanced services, e.g. SMTP, you will need to set up individual SNAT and DNAT firewall script rules for each real server, or you can set up a dedicated virtual server with just one real server as the target.
  • Please see the advanced NAT considerations section of our administration manual for more details on these two issues.

When using a load balancer in two-arm NAT mode, all load balanced services can be configured on the external IP. The real servers must also have their default gateways directed to the internal IP. You can also configure the load balancers in one-arm NAT mode, but in order to make the servers accessible from the local network you need to change some routing information on the real servers.

It is possible to add routing rules to the real servers in order to perform NAT load balancing on a single subnet (one-arm); refer to the administration manual for details.
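For illustration, the MASQUERADE and per-server DNAT/SNAT rules mentioned above might look something like the following in an iptables firewall script. The interface names and addresses are hypothetical; the appliance's setup wizard generates its own rules.

    # eth0 = external interface, eth1 = internal interface (hypothetical).
    # Let real servers on the internal subnet browse the internet via the
    # load balancer's external address (MASQUERADE).
    iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/24 -j MASQUERADE

    # Make one real server reachable on its own address for a non-load
    # balanced service, e.g. SMTP on 192.168.2.10 mapped to 10.0.0.50.
    iptables -t nat -A PREROUTING  -d 192.168.2.10 -p tcp --dport 25 -j DNAT --to-destination 10.0.0.50
    iptables -t nat -A POSTROUTING -s 10.0.0.50 -p tcp --sport 25 -j SNAT --to-source 192.168.2.10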

Source Network Address Translation (SNAT) load balancing method

If your application requires that the load balancer handles cookie insertion, then you need to use the SNAT configuration. This also has the advantage of a one-arm configuration and does not require any changes to the application servers. However, as the load balancer is acting as a full proxy it does not have the same raw throughput as the routing based methods.

The network diagram for the Layer 7 HAProxy SNAT mode is very similar to the Direct Routing example except that no re-configuration of the real servers is required. The load balancer proxies the application traffic to the servers so that the source of all traffic becomes the load balancer.

  • As with other modes a single unit does not require a Floating IP.
  • SNAT is a full proxy and therefore load balanced servers do not need to be changed in any way.

Because SNAT is a full proxy any server in the cluster can be on any accessible subnet including across the Internet or WAN.
SNAT is not TRANSPARENT by default, i.e. the real servers will see the source address of each request as the load balancer's IP address. The client's source IP address will be in the X-Forwarded-For header (see the TPROXY method).
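A minimal, illustrative haproxy.cfg fragment for this mode might look like the sketch below; the VIP, real server addresses and cookie name are hypothetical, and the appliance normally generates its own configuration through the WUI.

    # Listen on the VIP, insert a cookie for persistence and pass the
    # client's address to the real servers in the X-Forwarded-For header.
    listen http_vip
        bind 192.168.1.100:80
        mode http
        balance roundrobin
        option forwardfor
        cookie SERVERID insert indirect nocache
        server web1 192.168.1.50:80 cookie web1 check
        server web2 192.168.1.51:80 cookie web2 check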


Transparent Source Network Address Translation (SNAT-TPROXY) load balancing method

If the source address of the client is a requirement then HAProxy can be forced into transparent mode using TPROXY. This requires that the real servers use the load balancer as their default gateway (as in NAT mode) and only works for directly attached subnets (as in NAT mode).

  • As with other modes a single unit does not require a Floating IP.
  • SNAT acts as a full proxy but in TPROXY mode all server traffic must pass through the load balancer.
  • The real servers must have their default gateway configured to point at the load balancer.

Transparent proxying is impossible to implement over a routed network, i.e. a wide area network such as the Internet. To get transparent load balancing over the WAN you can use the TUN load balancing method (Direct Routing over a secure tunnel), with Linux or UNIX based systems only.
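In HAProxy terms, transparent mode is typically enabled per backend with a source directive such as the one sketched below (addresses are hypothetical; it also relies on TPROXY support in the kernel and firewall, and on the real servers' default gateway pointing at the load balancer).

    backend web_cluster
        mode http
        balance roundrobin
        # Connect to the real servers using the client's own IP as the
        # source address, so the backend sees the true client address.
        source 0.0.0.0 usesrc clientip
        server web1 192.168.1.50:80 check
        server web2 192.168.1.51:80 check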

SSL Termination or Acceleration (SSL) with or without TPROXY

All of the Layer 4 and Layer 7 load balancing methods can handle SSL traffic in pass-through mode, i.e. the backend servers do the decryption and encryption of the traffic. This is very scalable, as you can just add more servers to the cluster to gain higher transactions per second (TPS). However, if you want to inspect HTTPS traffic in order to read or insert cookies you will need to decode (terminate) the SSL traffic on the load balancer. You can do this by importing your secure key and signed certificate to the load balancer, giving it the authority to decrypt traffic. The load balancer uses standard Apache/PEM format certificates.
You can define a Pound SSL virtual server with a single backend: either a Layer 4 NAT mode virtual server or, more usually, a Layer 7 HAProxy VIP, which can then insert cookies.


Pound-SSL is not TRANSPARENT by default, i.e. the backend will see the source address of each request as the load balancer's IP address. The client's source IP address will be in the X-Forwarded-For header. However, Pound-SSL can also be configured with TPROXY to ensure that the backend can see the source IP address of all traffic.
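A simplified, hypothetical Pound configuration for this arrangement could look like the sketch below: HTTPS is terminated on the VIP and the decrypted traffic is handed to a local HAProxy VIP on port 80. The addresses and certificate path are placeholders.

    # Terminate SSL on the VIP and forward plain HTTP to the HAProxy VIP.
    ListenHTTPS
        Address 192.168.1.100
        Port    443
        Cert    "/etc/pound/vip.pem"

        Service
            BackEnd
                Address 192.168.1.100
                Port    80
            End
        End
    End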
