Bond Technology Load Balancing in Linux

Problem introduction

When an enterprise provides NFS, Samba, or vsftpd services, the system must deliver network transmission around the clock (7x24). A single network card can provide at most about 100 MB/s, so when a large number of users access the server at the same time, the access pressure is high and the transfer rate becomes very slow.

Solution

We can therefore use bonding to spread the load across multiple network cards, giving the network both automatic failover and load balancing. This keeps the network reliable and keeps file transfers fast in day-to-day operations and maintenance.

There are seven bonding modes, numbered 0 through 6. The three most commonly used driver modes are:

  • Mode 0 (balanced load): both network cards work and back each other up automatically, but the switch ports facing the server must be aggregated, i.e. the switch has to support bonding.
  • Mode 1 (automatic backup): only one network card works at a time; when it fails, the other card takes over automatically.
  • Mode 6 (balanced load): both network cards work and back each other up automatically, and no support from the switch is required.

This article mainly covers mode 6, because it lets both network cards carry traffic at the same time, fails over automatically when one of them breaks, and needs no support from the switch, which gives reliable network transmission.

The following walks through bonding two network cards on RHEL 7 inside a VMware virtual machine.

  1. Add a second network card to the virtual machine and set both cards to the same network connection mode in VMware. Only network cards in the same mode can be bonded; otherwise the two cards cannot exchange data with each other.
  2. Configure the binding parameters of each network card. Note that each physical card must be configured as a "slave" that serves the "master" (bond) device, so it must not carry its own IP address. After the following initialization, the devices are ready for bonding.
    cd /etc/sysconfig/network-scripts/

    vim ifcfg-eno16777728 # Edit NIC 1 configuration file

    TYPE=Ethernet
    BOOTPROTO=none
    DEVICE=eno16777728
    ONBOOT=yes
    HWADDR=00:0C:29:E2:25:2D
    USERCTL=no
    MASTER=bond0
    SLAVE=yes

    vim ifcfg-eno33554968 # Edit NIC 2 configuration file

    TYPE=Ethernet
    BOOTPROTO=none
    DEVICE=eno33554968
    ONBOOT=yes
    HWADDR=00:0C:29:E2:25:2D    # use this NIC's own MAC address here
    MASTER=bond0
    SLAVE=yes

  3. Create a new device file, ifcfg-bond0, and configure the IP address and related settings on it. When users access the corresponding services, the two physical network cards then serve the traffic together.

    vim ifcfg-bond0 # Create a new ifcfg-bond0 configuration file in the current directory.

    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes
    USERCTL=no
    DEVICE=bond0
    IPADDR=192.168.100.5
    PREFIX=24
    DNS=192.168.100.1
    NM_CONTROLLED=no

  4. Set the network card bonding driver mode; here we use mode 6 (balanced load mode).

    vim /etc/modprobe.d/bond.conf # Configure the mode of the bonding driver

    alias bond0 bonding
    options bond0 miimon=100 mode=6
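
    Before restarting the network in the next step, it can be worth confirming that the bonding module is available and that the options file is in place; a minimal check using the file names from above:

    lsmod | grep bonding               # is the bonding module already loaded?
    modprobe bonding                   # load it if it is not (no output means success)
    cat /etc/modprobe.d/bond.conf      # double-check the alias and options lines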

  5. Restart the network service so that the configuration takes effect.

    systemctl restart network

  6. Test the bond.
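
    A minimal test sketch, assuming the addresses configured above, i.e. bond0 at 192.168.100.5/24, and that the DNS host 192.168.100.1 listed in ifcfg-bond0 is reachable:

    cat /proc/net/bonding/bond0    # both slave NICs should be listed with "MII Status: up"
    ip addr show bond0             # bond0 should carry 192.168.100.5/24
    ping 192.168.100.1             # leave this running while testing failover

    With the ping still running, disconnect one of the two virtual NICs in VMware; mode 6 fails over to the remaining card automatically, so the ping should continue with at most a brief interruption.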

Part 1: Bonding technology

Bonding is a Linux technique for binding network cards: it aggregates n physical NICs on a server into one logical network card, which improves network throughput and provides redundancy and load balancing, among other benefits.

Bonding is implemented at the kernel level of the Linux system as a kernel module (driver). To use it, the system needs this module; the modinfo command shows its details, and it is included in most distributions.

# modinfo bonding
filename:       /lib/modules/2.6.32-642.1.1.el6.x86_64/kernel/drivers/net/bonding/bonding.ko
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
srcversion:     F6C1815876DCB3094C27C71
depends:        
vermagic:       2.6.32-642.1.1.el6.x86_64 SMP mod_unload modversions 
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm:           packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm:           lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)

The seven working modes of bonding: 

Bonding offers seven working modes; the mode has to be specified when the bond is set up, and each mode has its own advantages and disadvantages.

  1. balance-rr (mode=0): the default; provides both high availability (fault tolerance) and load balancing, but requires configuration on the switch. Packets are sent out over the slaves in round-robin order, so traffic is spread evenly.
  2. active-backup (mode=1): provides high availability (fault tolerance) only and needs no switch configuration. Only one NIC is active at a time and only one MAC address is visible externally; the drawback is low port utilization.
  3. balance-xor (mode=2): rarely used.
  4. broadcast (mode=3): rarely used.
  5. 802.3ad (mode=4): IEEE 802.3ad dynamic link aggregation; requires switch (LACP) configuration.
  6. balance-tlb (mode=5): rarely used.
  7. balance-alb (mode=6): provides both high availability (fault tolerance) and load balancing and needs no switch configuration, though traffic is not spread perfectly evenly across the interfaces.

There is plenty of information online about the details of each mode; pick one based on its characteristics and your own needs. In practice, modes 0, 1, 4, and 6 are the ones most commonly used.
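
Once a bond interface exists, the mode it is actually running in can be read back from the kernel; a quick check, assuming the bond is named bond0:

cat /sys/class/net/bond0/bonding/mode         # prints e.g. "balance-alb 6" or "active-backup 1"
grep "Bonding Mode" /proc/net/bonding/bond0   # states the mode in words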

Part 2: Configuring bonding on CentOS 7

Environment:

System: CentOS 7
Network cards: em1, em2
bond0: 172.16.0.183
Load mode: mode6 (adaptive load balancing)

The two physical network cards em1 and em2 on the server are bound into one logical interface, bond0, using bonding mode 6.

Note: the IP address is configured on bond0; the physical network cards do not need IP addresses of their own.

1. Stop and disable the NetworkManager service

systemctl stop NetworkManager.service       # stop the NetworkManager service
systemctl disable NetworkManager.service    # keep it from starting at boot

Note: NetworkManager must be stopped and disabled, otherwise it will interfere with the bonding setup.
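
A quick way to confirm NetworkManager really is out of the way before continuing:

systemctl is-active NetworkManager     # should print "inactive" (or "unknown")
systemctl is-enabled NetworkManager    # should print "disabled"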

2. Load the bonding module

modprobe --first-time bonding

No output means the module loaded successfully. If you see modprobe: ERROR: could not insert 'bonding': Module already in kernel, the module was already loaded.

You can also use lsmod | grep bonding to see if the module is loaded

lsmod | grep bonding
bonding               136705  0

3. Create the bond0 interface configuration file

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Modify it as follows, depending on your situation:

DEVICE=bond0
TYPE=Bond
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"

The line BONDING_OPTS="mode=6 miimon=100" sets the working mode to mode 6 (adaptive load balancing); miimon is how often the link state is checked, in milliseconds, set here to 100 ms. Adjust the interval, and the mode, to your needs.
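
For reference, switching to another load mode only requires changing the BONDING_OPTS line; for example (illustrative values, adjust to your environment and switch):

BONDING_OPTS="mode=1 miimon=100"                 # active-backup, no switch support needed
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"     # 802.3ad, requires LACP configured on the switch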

4. Modify the em1 interface configuration file

vim /etc/sysconfig/network-scripts/ifcfg-em1

Modify it as follows:

DEVICE=em1
USERCTL=no
ONBOOT=yes
MASTER=bond0                    # must match the DEVICE value in ifcfg-bond0 above
SLAVE=yes
BOOTPROTO=none

5. Modify the em2 interface configuration file

vim /etc/sysconfig/network-scripts/ifcfg-em2

Modify it as follows:

DEVICE=em2
USERCTL=no
ONBOOT=yes
MASTER=bond0                    # must match the DEVICE value in ifcfg-bond0 above
SLAVE=yes
BOOTPROTO=none

6. Test

Restart the network service:

systemctl restart network

View the status information of the bond0 interface (if this reports an error, the bond0 interface most likely did not come up):

# cat /proc/net/bonding/bond0

Bonding Mode: adaptive load balancing     // bonding mode: alb (mode 6), i.e. fault tolerance plus load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up                            // interface status (MII = Media Independent Interface)
MII Polling Interval (ms): 100            // link polling interval, here 100 ms
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1                      // first slave interface: em1
MII Status: up
Speed: 1000 Mbps                          // port speed
Duplex: full                              // full duplex
Link Failure Count: 0                     // number of link failures
Permanent HW addr: 84:2b:2b:6a:76:d4      // the NIC's real (permanent) MAC address
Slave queue ID: 0

Slave Interface: em2                      // second slave interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 84:2b:2b:6a:76:d5
Slave queue ID: 0

Check the network interfaces with the ifconfig command:

# ifconfig

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 172.16.0.183  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4  txqueuelen 0  (Ethernet)
        RX packets 11183  bytes 1050708 (1.0 MiB)
        RX errors 0  dropped 5152  overruns 0  frame 0
        TX packets 5329  bytes 452979 (442.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 3505  bytes 335210 (327.3 KiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 2852  bytes 259910 (253.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d5  txqueuelen 1000  (Ethernet)
        RX packets 5356  bytes 495583 (483.9 KiB)
        RX errors 0  dropped 4390  overruns 0  frame 0
        TX packets 1546  bytes 110385 (107.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 17  bytes 2196 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17  bytes 2196 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We tested high availability by unplugging one of the network cables; a simple way to reproduce this measurement is sketched after the list. The conclusions were:
  • With mode=6, one packet is lost on failover. When the link is restored (the cable is plugged back in), roughly 5-6 packets are lost, so failover works but recovery drops noticeably more packets.
  • With mode=1, one packet is lost on failover, and essentially nothing is lost when the link is restored, so both failover and recovery behave well.
  • Mode 6 is a very good load-balancing mode except for the packet loss during recovery; if that can be tolerated, it is a fine choice. Mode 1 switches over and recovers quickly with essentially no loss or delay, but port utilization is lower because only one NIC is active in this active-backup mode.
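
A simple way to reproduce this measurement, assuming the gateway 172.16.0.1 configured above is reachable:

ping 172.16.0.1     # leave running; pull one cable, then plug it back in
# press Ctrl+C afterwards: the summary line reports the total packet loss,
# and gaps in the icmp_seq numbers show exactly when packets were dropped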

Part 3: Configuring bonding on CentOS 6

Configuring bonding on CentOS 6 is basically the same as on CentOS 7 above, with some small differences in the configuration files.

System: CentOS 6
Network cards: em1, em2
bond0: 172.16.0.183
Load mode: mode1 (active-backup)    # this time the load mode is 1, i.e. active/standby

1. Stop and disable the NetworkManager service

service  NetworkManager stop
chkconfig NetworkManager off

Note: if NetworkManager is installed, stop and disable it; if the commands complain that it is not installed, there is nothing to do.
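
Whether NetworkManager is installed at all can be checked first (assuming the stock package name NetworkManager):

rpm -q NetworkManager    # prints the package version if installed, otherwise "package NetworkManager is not installed"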

2. Load the bonding module

modprobe --first-time bonding

3. Create the bond0 interface configuration file

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Modify the following (according to your needs):

DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"

4. Bind the bond0 interface to the bonding kernel module

vi /etc/modprobe.d/bonding.conf

Modify it as follows:

alias bond0 bonding
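
If BONDING_OPTS were not set in ifcfg-bond0, the mode and miimon options could instead be passed to the module here, as in the RHEL 7 example earlier; one of the two places is enough:

alias bond0 bonding
options bond0 miimon=100 mode=1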

5. Edit the em1 and em2 interface files

vim /etc/sysconfig/network-scripts/ifcfg-em1

Modify it as follows:

DEVICE=em1
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

vim /etc/sysconfig/network-scripts/ifcfg-em2

Modify it as follows:

DEVICE=em2
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

6. Load the module, restart the network, and test

modprobe bonding
service network restart

Check the status of the bond0 interface:

cat /proc/net/bonding/bond0

Bonding Mode: fault-tolerance (active-backup)    # bond0 is currently running in active/backup mode
Primary Slave: None
Currently Active Slave: em2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 84:2b:2b:6a:76:d4
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 84:2b:2b:6a:76:d5
Slave queue ID: 0

Use the ifconfig command to look at the interfaces again. Notice that in mode=1 all the interfaces report the same MAC address, i.e. the bond presents a single MAC address to the outside world.

ifconfig 
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4  txqueuelen 0  (Ethernet)
        RX packets 147436  bytes 14519215 (13.8 MiB)
        RX errors 0  dropped 70285  overruns 0  frame 0
        TX packets 10344  bytes 970333 (947.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 63702  bytes 6302768 (6.0 MiB)
        RX errors 0  dropped 64285  overruns 0  frame 0
        TX packets 344  bytes 35116 (34.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 65658  bytes 6508173 (6.2 MiB)
        RX errors 0  dropped 6001  overruns 0  frame 0
        TX packets 1708  bytes 187627 (183.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 31  bytes 3126 (3.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 31  bytes 3126 (3.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
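
The same can be confirmed without ifconfig: while a card is enslaved, ip link reports the bond's shared MAC for it, and the card's real hardware address only appears as "Permanent HW addr" in /proc. A quick check, assuming the interface names em1 and em2 used above:

ip link show em1 | grep link/ether                   # shows the bond's shared MAC
ip link show em2 | grep link/ether
grep "Permanent HW addr" /proc/net/bonding/bond0     # the NICs' real hardware addresses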

Perform a high-availability test: unplug one of the cables and watch the packet loss and delay, then plug the cable back in (simulating recovery) and watch the packet loss and delay again.
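
While doing this, it helps to watch which slave the bond currently considers active; the Link Failure Count of the unplugged card should also go up by one each time:

watch -n 1 'grep "Currently Active Slave" /proc/net/bonding/bond0'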
