{"id":7583,"date":"2018-06-16T18:48:32","date_gmt":"2018-06-16T10:48:32","guid":{"rendered":"http:\/\/rmohan.com\/?p=7583"},"modified":"2018-06-16T18:52:05","modified_gmt":"2018-06-16T10:52:05","slug":"bond-technology-load-balancing-in-linux","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=7583","title":{"rendered":"Bond Technology Load Balancing in Linux"},"content":{"rendered":"<h3>Problem introduction<\/h3>\n<p>When the general enterprise is used to provide NFS service, samba service or vsftpd service, the system must provide 7*24 hours of network transmission service.\u00a0The maximum network transmission speed it can provide is 100MB\/s, but when there are a large number of users accessing, the server&#8217;s access pressure is very high, and the network transmission rate is particularly slow.<\/p>\n<h3>Solution<\/h3>\n<p>Therefore, we can use bond technology to achieve load balancing of multiple network cards to ensure automatic backup and load balancing of the network.\u00a0In this way, the reliability of the network and the high-speed transmission of files in the actual operation and maintenance work are guaranteed.<\/p>\n<p><strong>There are seven (0~6) network card binding modes: bond0, bond1, bond2, bond3, bond4, bond5, bond6.\u00a0<\/strong><br \/>\n<strong>The common network card binding driver has the following three modes:<\/strong><\/p>\n<ul>\n<li><strong>Mode0 Balanced load mode:<\/strong>\u00a0Usually two network cards work and are automatically\u00a0<strong>backed<\/strong>\u00a0up, but port aggregation is required on the switch devices connected to the server&#8217;s local network card to support bonding technology.<\/li>\n<li><strong>Mode1 automatic backup technology:<\/strong>\u00a0usually only one network card works, after it fails, it is automatically replaced with another network card;<\/li>\n<li><strong>Mode6 Balanced load mode:<\/strong>\u00a0Normally, both network cards work, and they are automatically backed up. 
No auxiliary support from the switch device is required.<\/li>\n<\/ul>\n<p>This article mainly describes the mode6 bonding driver mode, because it lets the two network cards work at the same time, fails over automatically when one card breaks, and needs no switch support, thereby ensuring reliable network transmission.<\/p>\n<h4>The following is the NIC bonding procedure for RHEL 7 in a VMware virtual machine<\/h4>\n<ol>\n<li>Add a second network card device to the virtual machine and set both network cards to the same network connection mode. As shown in the following figure, only network cards in the same mode can be bonded; otherwise, the two network cards cannot send data to each other.<\/li>\n<li>Configure the bonding parameters of the network card devices. Note that each physical network card must be configured as a &#8220;slave&#8221; network card. 
Serving the &#8220;master&#8221; bond device, a slave network card should not have its own IP address.\u00a0After the following initialization, the devices support network card bonding.\n<pre><code class=\"hljs bash\"><span class=\"hljs-built_in\">cd<\/span> \/etc\/sysconfig\/network-scripts\/<\/code><\/pre>\n<p><span class=\"hljs-comment\">vim ifcfg-eno16777728 # edit the configuration file of NIC 1<\/span><\/p>\n<blockquote><p>TYPE=Ethernet<br \/>\nBOOTPROTO=none<br \/>\nDEVICE=eno16777728<br \/>\nONBOOT=yes<br \/>\nHWADDR=00:0C:29:E2:25:2D<br \/>\nUSERCTL=no<br \/>\nMASTER=bond0<br \/>\nSLAVE=yes<\/p>\n<p>vim ifcfg-eno33554968 # edit the configuration file of NIC 2<\/p><\/blockquote>\n<blockquote><p>TYPE=Ethernet<br \/>\nBOOTPROTO=none<br \/>\nDEVICE=eno33554968<br \/>\nONBOOT=yes<br \/>\nMASTER=bond0<br \/>\nSLAVE=yes<\/p><\/blockquote>\n<ol start=\"3\">\n<li>Create a new network card device file ifcfg-bond0 and configure the IP address and related information on it. When users access the corresponding service, the two network card devices then provide the service together.<\/li>\n<\/ol>\n<p>vim ifcfg-bond0 # create a new ifcfg-bond0 configuration file in the current directory<\/p>\n<blockquote><p>TYPE=Ethernet<br \/>\nBOOTPROTO=none<br \/>\nONBOOT=yes<br \/>\nUSERCTL=no<br \/>\nDEVICE=bond0<br \/>\nIPADDR=192.168.100.5<br \/>\nPREFIX=24<br \/>\nDNS=192.168.100.1<br \/>\nNM_CONTROLLED=no<\/p><\/blockquote>\n<ol start=\"4\">\n<li>Set the bonding driver mode; here we use mode6 (balanced load mode)<\/li>\n<\/ol>\n<p>vim \/etc\/modprobe.d\/bond.conf # configure the mode of the bonding driver<\/p>\n<blockquote><p>alias bond0 bonding<br \/>\noptions bond0 miimon=100 mode=6<\/p><\/blockquote>\n<ol start=\"5\">\n<li>Restart the network service so that the configuration takes effect<\/li>\n<\/ol>\n<blockquote><p>systemctl restart network<\/p><\/blockquote>\n<ol start=\"6\">\n<li>Test the bond.<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<div id=\"li_all\">\n<div 
id=\"li_1\"><ins class=\"adsbygoogle\" data-ad-client=\"ca-pub-5195587195407606\" data-ad-slot=\"9714521574\" data-adsbygoogle-status=\"done\"><ins id=\"aswift_0_expand\"><ins id=\"aswift_0_anchor\"><iframe id=\"aswift_0\" name=\"aswift_0\" width=\"336\" height=\"280\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><\/iframe><\/ins><\/ins><\/ins><\/div>\n<div id=\"li_2\"><\/div>\n<\/div>\n<div id=\"content\">\n<h4><strong>First, bonding technology<\/strong><\/h4>\n<p>Bonding is a network card binding technology in a Linux system. It can abstract (bind) n physical NICs on the server into a logical network card, which can improve network throughput and achieve network redundancy. , load and other functions have many advantages.<\/p>\n<p>Bonding technology is implemented at the kernel level of the Linux system. It is a kernel module (driver).\u00a0To use it, the system needs to have this module. We can use modinfo command to view the information of this module. 
Generally, it is supported.<\/p>\n<div class=\"linuxidc_code\">\n<div id=\"linuxidc_code_open_ab7ff889-a396-4153-9ed0-8dd49c4716eb\" class=\"linuxidc_code_hide\">\n<div class=\"linuxidc_code_toolbar\"><\/div>\n<pre># modinfo bonding\r\nfilename:       \/lib\/modules\/2.6.32-642.1.1.el6.x86_64\/kernel\/drivers\/net\/bonding\/bonding.ko\r\nauthor:         Thomas Davis, tadavis@lbl.gov and many others\r\ndescription:    Ethernet Channel Bonding Driver, v3.7.1\r\nversion:        3.7.1\r\nlicense:        GPL\r\nalias:          rtnl-link-bond\r\nsrcversion:     F6C1815876DCB3094C27C71\r\ndepends:        \r\nvermagic:       2.6.32-642.1.1.el6.x86_64 SMP mod_unload modversions \r\nparm:           max_bonds:Max number of bonded devices (int)\r\nparm:           tx_queues:Max number of transmit queues (default = 16) (int)\r\nparm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)\r\nparm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)\r\nparm:           miimon:Link check interval in milliseconds (int)\r\nparm:           updelay:Delay before considering link up, in milliseconds (int)\r\nparm:           downdelay:Delay before considering link down, in milliseconds (int)\r\nparm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)\r\nparm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)\r\nparm:           primary:Primary network device to use (charp)\r\nparm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)\r\nparm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)\r\nparm:           ad_select:803.ad aggregation 
selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)\r\nparm:           min_links:Minimum number of available links before turning on carrier (int)\r\nparm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)\r\nparm:           arp_interval:arp interval in milliseconds (int)\r\nparm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)\r\nparm:           arp_validate:validate src\/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)\r\nparm:           arp_all_targets:fail on any\/all arp targets timeout; 0 for any (default), 1 for all (charp)\r\nparm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)\r\nparm:           all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)\r\nparm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)\r\nparm:           packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), &gt;1 packets per slave. (int)\r\nparm:           lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)<\/pre>\n<\/div>\n<\/div>\n<p><strong>The seven working modes of bonding:\u00a0<\/strong><\/p>\n<p>Bonding technology provides seven operating modes that need to be specified when used. 
Each has its own advantages and disadvantages.<\/p>\n<ol>\n<li>balance-rr (mode=0): the default mode, providing both high availability (fault tolerance) and load balancing; it requires switch configuration, and packets are delivered over the slaves in round-robin order (balanced traffic distribution).<\/li>\n<li>active-backup (mode=1): provides only high availability (fault tolerance) and requires no switch configuration. In this mode only one network card works at a time, and only one MAC address is visible externally.\u00a0The disadvantage is the relatively low port utilization.<\/li>\n<li>balance-xor (mode=2): not commonly used.<\/li>\n<li>broadcast (mode=3): not commonly used.<\/li>\n<li>802.3ad (mode=4): IEEE 802.3ad dynamic link aggregation; requires switch (LACP) configuration.<\/li>\n<li>balance-tlb (mode=5): not commonly used.<\/li>\n<li>balance-alb (mode=6): provides high availability (fault tolerance) and load balancing and requires no switch configuration (traffic is not distributed perfectly evenly across the interfaces).<\/li>\n<\/ol>\n<p>There is plenty of detailed information online; understand the characteristics of each mode and choose according to your needs. Modes 0, 1, 4 and 6 are the ones generally used.<\/p>\n<h4><strong>Second,\u00a0<a title=\"CentOS\" href=\"http:\/\/www.linuxidc.com\/topicnews.aspx?tid=14\" target=\"_blank\" rel=\"noopener\">CentOS<\/a>\u00a07 configuration bonding<\/strong><\/h4>\n<p><strong>Environment:<\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>System: CentOS 7\r\nNetwork cards: em1, em2\r\nbond0: 172.16.0.183 \r\nLoad mode: mode6 (adaptive load balancing)\r\n\r\n<\/pre>\n<p>The two physical network cards em1 and em2 on the server are bound into one logical network card bond0, using bonding mode6.<\/p>\n<p>Note: The IP address is configured on bond0. 
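<\/p>\n<p>The mode numbers above map onto the bonding driver&#8217;s mode names. As a quick illustration, the mapping can be written as a small shell helper (bond_mode_name is a hypothetical function for reference, not part of the bonding driver):<\/p>\n<div class=\"linuxidc_code\">\n<pre>bond_mode_name() {\r\n  # map a bonding mode number to the driver's mode name\r\n  case $1 in\r\n    0) echo balance-rr ;;\r\n    1) echo active-backup ;;\r\n    2) echo balance-xor ;;\r\n    3) echo broadcast ;;\r\n    4) echo 802.3ad ;;\r\n    5) echo balance-tlb ;;\r\n    6) echo balance-alb ;;\r\n    *) echo unknown ;;\r\n  esac\r\n}\r\nbond_mode_name 6    # prints balance-alb<\/pre>\n<\/div>\n<p>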
The physical network cards do not need their own IP addresses.<\/p>\n<p><strong>1, stop and disable the NetworkManager service<\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>systemctl stop NetworkManager.service      # stop the NetworkManager service \r\nsystemctl disable NetworkManager.service   # prevent the NetworkManager service from starting at boot<\/pre>\n<\/div>\n<p class=\"brush\">Note: NetworkManager must be stopped so that it does not interfere with the bonding setup.<\/p>\n<p><strong>2, load the bonding module<\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>modprobe --first-time bonding<\/pre>\n<\/div>\n<p>No output means the module loaded successfully. The message modprobe: ERROR: could not insert &#8216;bonding&#8217;: Module already in kernel means the module was already loaded.<\/p>\n<p>You can also use lsmod | grep bonding to check whether the module is loaded:<\/p>\n<div class=\"linuxidc_code\">\n<pre>lsmod | grep bonding\r\nbonding               136705  0<\/pre>\n<\/div>\n<p><strong>3, create a configuration file for the bond0 interface<\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>vim \/etc\/sysconfig\/network-scripts\/ifcfg-bond0<\/pre>\n<\/div>\n<p>Modify it as follows, depending on your situation:<\/p>\n<div class=\"linuxidc_code\">\n<pre>DEVICE=bond0\r\nTYPE=Bond\r\nIPADDR=172.16.0.183\r\nNETMASK=255.255.255.0\r\nGATEWAY=172.16.0.1\r\nDNS1=114.114.114.114\r\nUSERCTL=no\r\nBOOTPROTO=none\r\nONBOOT=yes\r\nBONDING_MASTER=yes\r\nBONDING_OPTS=\"mode=6 miimon=100\"<\/pre>\n<\/div>\n<p>BONDING_OPTS=&#8220;mode=6 miimon=100&#8221; sets the\u00a0working mode to mode6 (adaptive load balancing); miimon is the link-monitoring interval in milliseconds, set here to 100 ms. Adjust the interval, and the mode, to your needs.<\/p>\n<p><strong>4, modify the em1 interface configuration file<\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>vim \/etc\/sysconfig\/network-scripts\/ifcfg-em1<\/pre>\n<\/div>\n<p>Modify it as follows:<\/p>\n<div class=\"linuxidc_code\">\n<pre>DEVICE=em1\r\nUSERCTL=no\r\nONBOOT=yes\r\nMASTER=bond0                   # must match the DEVICE value in the ifcfg-bond0 file above \r\nSLAVE=yes\r\nBOOTPROTO=none<\/pre>\n<\/div>\n<p><strong>5, modify the em2 interface configuration file<\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>vim \/etc\/sysconfig\/network-scripts\/ifcfg-em2<\/pre>\n<\/div>\n<p>Modify it as follows:<\/p>\n<div class=\"linuxidc_code\">\n<pre>DEVICE=em2\r\nUSERCTL=no\r\nONBOOT=yes\r\nMASTER=bond0                  # must match the DEVICE value in the ifcfg-bond0 file above \r\nSLAVE=yes\r\nBOOTPROTO=none<\/pre>\n<\/div>\n<p><strong>6, test<\/strong><\/p>\n<p>Restart the network service<\/p>\n<div class=\"linuxidc_code\">\n<pre>systemctl restart network<\/pre>\n<\/div>\n<p>View the interface status information of bond0 (if an error is shown, the most likely cause is that the bond0 
interface is not up)<\/p>\n<div class=\"linuxidc_code\">\n<pre># cat \/proc\/net\/bonding\/bond0 \r\n\r\nBonding Mode: adaptive load balancing     \/\/ bonding mode: alb (mode 6), i.e. high availability plus load balancing\r\nPrimary Slave: None\r\nCurrently Active Slave: em1 \r\nMII Status: up                            \/\/ interface status: up (MII = Media Independent Interface)\r\nMII Polling Interval (ms): 100            \/\/ interval at which the links are polled (here 100 ms)\r\nUp Delay (ms): 0\r\nDown Delay (ms): 0\r\n\r\nSlave Interface: em1                      \/\/ slave interface: em1\r\nMII Status: up\r\nSpeed: 1000 Mbps                          \/\/ port speed is 1000 Mbps\r\nDuplex: full                              \/\/ full duplex\r\nLink Failure Count: 0                     \/\/ number of link failures: 0\r\nPermanent HW addr: 84:2b:2b:6a:76:d4      \/\/ permanent MAC address\r\nSlave queue ID: 0\r\n\r\nSlave Interface: em2                      \/\/ slave interface: em2\r\nMII Status: up\r\nSpeed: 1000 Mbps\r\nDuplex: full                              \/\/ full duplex\r\nLink Failure Count: 0                     \/\/ number of link failures: 0\r\nPermanent HW addr: 84:2b:2b:6a:76:d5      \/\/ permanent MAC address\r\nSlave queue ID: 0<\/pre>\n<\/div>\n<p>Check the network interface information with the ifconfig command<\/p>\n<div class=\"linuxidc_code\">\n<pre># ifconfig\r\n\r\nbond0: flags=5187&lt;UP,BROADCAST,RUNNING,MASTER,MULTICAST&gt;  mtu 1500\r\n        inet 172.16.0.183  netmask 255.255.255.0  broadcast 172.16.0.255\r\n        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20&lt;link&gt;\r\n        ether 84:2b:2b:6a:76:d4  txqueuelen 0  
(Ethernet)\r\n        RX packets 11183  bytes 1050708 (1.0 MiB)\r\n        RX errors 0  dropped 5152  overruns 0  frame 0\r\n        TX packets 5329  bytes 452979 (442.3 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\r\n\r\nem1: flags=6211&lt;UP,BROADCAST,RUNNING,SLAVE,MULTICAST&gt;  mtu 1500\r\n        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)\r\n        RX packets 3505  bytes 335210 (327.3 KiB)\r\n        RX errors 0  dropped 1  overruns 0  frame 0\r\n        TX packets 2852  bytes 259910 (253.8 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\r\n\r\nem2: flags=6211&lt;UP,BROADCAST,RUNNING,SLAVE,MULTICAST&gt;  mtu 1500\r\n        ether 84:2b:2b:6a:76:d5  txqueuelen 1000  (Ethernet)\r\n        RX packets 5356  bytes 495583 (483.9 KiB)\r\n        RX errors 0  dropped 4390  overruns 0  frame 0\r\n        TX packets 1546  bytes 110385 (107.7 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\r\n\r\nlo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536\r\n        inet 127.0.0.1  netmask 255.0.0.0\r\n        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;\r\n        loop  txqueuelen 0  (Local Loopback)\r\n        RX packets 17  bytes 2196 (2.1 KiB)\r\n        RX errors 0  dropped 0  overruns 0  frame 0\r\n        TX packets 17  bytes 2196 (2.1 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0<\/pre>\n<div class=\"linuxidc_code_toolbar\">To test high availability, we unplugged one of the network cables. The conclusions:<\/div>\n<\/div>\n<ul>\n<li>In mode=6, one packet was lost at failover. When the network recovered (the cable was plugged back in), about 5-6 packets were lost. High availability works, but more packets are dropped while the link recovers.<\/li>\n<li>In the mode=1 test, one packet was lost at failover. 
When the network recovered (the cable was plugged back in), there was basically no packet loss, indicating that both failover and recovery work normally.<\/li>\n<li>Mode6 performs well except for the packet loss during fault recovery; if that can be tolerated, this mode is a good choice. Mode1 switches over and recovers quickly with essentially no packet loss or delay, but its port utilization is relatively low, because in this active-backup mode only one network card works at a time.<\/li>\n<\/ul>\n<h4><span>Third,\u00a0<\/span><a title=\"CentOS\" href=\"http:\/\/www.linuxidc.com\/topicnews.aspx?tid=14\" target=\"_blank\" rel=\"noopener\"><span>CentOS<\/span><\/a><span>\u00a06 configuration bonding<\/span><\/h4>\n<p><span>Configuring bonding on CentOS 6 is basically the same as on CentOS 7 above, with a few differences in the configuration.\u00a0<\/span><\/p>\n<div class=\"linuxidc_code\">\n<pre>System: CentOS 6\r\nNetwork cards: em1, em2\r\nbond0: 172.16.0.183 \r\nLoad mode: mode1 (active-backup)    # here the load mode is 1, i.e. active\/standby mode<\/pre>\n<\/div>\n<p><strong><span>1, stop and disable the NetworkManager service<\/span><\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>service  NetworkManager stop\r\nchkconfig NetworkManager off<\/pre>\n<\/div>\n<p><span>Note: if NetworkManager is installed, stop it. 
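<\/span><\/p>\n<p>The em1 and em2 slave files edited in these walk-throughs differ only in their DEVICE line, so they can be generated from a single template. A minimal sketch (generate_slave_cfg is a hypothetical helper; it prints to stdout so the output can be reviewed before being written under \/etc\/sysconfig\/network-scripts):<\/p>\n<div class=\"linuxidc_code\">\n<pre>generate_slave_cfg() {\r\n  # print an ifcfg-style slave configuration for the NIC named in $1\r\n  printf 'DEVICE=%s\\nMASTER=bond0\\nSLAVE=yes\\nUSERCTL=no\\nONBOOT=yes\\nBOOTPROTO=none\\n' \"$1\"\r\n}\r\ngenerate_slave_cfg em1    # redirect into ifcfg-em1 once verified<\/pre>\n<\/div>\n<p><span>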
If NetworkManager is not installed, this step can be skipped.<\/span><\/p>\n<p><strong><span>2,\u00a0<\/span><span>load the bonding module<\/span><\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>modprobe --first-time bonding<\/pre>\n<\/div>\n<p><strong><span>3, create the bond0 interface configuration file<\/span><\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>vim \/etc\/sysconfig\/network-scripts\/ifcfg-bond0<\/pre>\n<\/div>\n<p><span>Modify it as follows (according to your needs):<\/span><\/p>\n<div class=\"linuxidc_code\">\n<pre>DEVICE=bond0\r\nTYPE=Bond\r\nBOOTPROTO=none\r\nONBOOT=yes\r\nIPADDR=172.16.0.183\r\nNETMASK=255.255.255.0\r\nGATEWAY=172.16.0.1\r\nDNS1=114.114.114.114\r\nUSERCTL=no\r\nBONDING_OPTS=\"mode=1 miimon=100\"<\/pre>\n<\/div>\n<p><strong><span>4, map the bond0 interface to the bonding module<\/span><\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>vi \/etc\/modprobe.d\/bonding.conf<\/pre>\n<\/div>\n<p><span>Modify it as follows:<\/span><\/p>\n<div class=\"linuxidc_code\">\n<pre>alias bond0 bonding<\/pre>\n<\/div>\n<p><strong><span>5, edit the em1 and em2 interface files<\/span><\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>vim \/etc\/sysconfig\/network-scripts\/ifcfg-em1<\/pre>\n<\/div>\n<p><span>Modify it as follows:<\/span><\/p>\n<div class=\"linuxidc_code\">\n<pre>DEVICE=em1\r\nMASTER=bond0\r\nSLAVE=yes\r\nUSERCTL=no\r\nONBOOT=yes\r\nBOOTPROTO=none<\/pre>\n<\/div>\n<div class=\"linuxidc_code\">\n<pre>vim \/etc\/sysconfig\/network-scripts\/ifcfg-em2<\/pre>\n<\/div>\n<p><span>Modify it as follows:<\/span><\/p>\n<div class=\"linuxidc_code\">\n<pre>DEVICE=em2\r\nMASTER=bond0\r\nSLAVE=yes\r\nUSERCTL=no\r\nONBOOT=yes\r\nBOOTPROTO=none<\/pre>\n<\/div>\n<p><strong><span>6, load the module, restart the network and test<\/span><\/strong><\/p>\n<div class=\"linuxidc_code\">\n<pre>modprobe bonding\r\nservice network restart<\/pre>\n<\/div>\n<p><span>Check the status of the bond0 interface<\/span><\/p>\n<div class=\"linuxidc_code\">\n<pre>cat \/proc\/net\/bonding\/bond0<\/pre>\n<pre>Bonding Mode: fault-tolerance (active-backup)    # the bond0 interface is currently in active\/standby mode\r\nPrimary Slave: None\r\nCurrently Active Slave: em2 \r\nMII Status: up \r\nMII Polling Interval (ms): 100\r\nUp Delay (ms): 0\r\nDown Delay (ms): 0\r\n\r\nSlave Interface: em1\r\nMII Status: up\r\nSpeed: 1000 Mbps\r\nDuplex: full\r\nLink Failure Count: 2\r\nPermanent HW addr: 84:2b:2b:6a:76:d4\r\nSlave queue ID: 0\r\n\r\nSlave Interface: em2\r\nMII Status: up\r\nSpeed: 1000 Mbps\r\nDuplex: full\r\nLink Failure Count: 0\r\nPermanent HW addr: 84:2b:2b:6a:76:d5\r\nSlave queue ID: 0<\/pre>\n<\/div>\n<p><span>Use the ifconfig command to view the interface status. 
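<\/span><\/p>\n<p>In monitoring scripts, the currently active slave can be pulled straight out of the \/proc\/net\/bonding\/bond0 text. A minimal sketch (active_slave is a hypothetical helper; it takes the file as an argument so it can also be tried against a saved copy of the output):<\/p>\n<div class=\"linuxidc_code\">\n<pre>active_slave() {\r\n  # print the value of the \"Currently Active Slave\" line\r\n  awk -F': ' '\/^Currently Active Slave\/ {print $2}' \"$1\"\r\n}\r\nactive_slave \/proc\/net\/bonding\/bond0<\/pre>\n<\/div>\n<p><span>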
You will find that in mode=1 all the slave MAC addresses are identical, meaning that externally the bond presents a single MAC address.<\/span><\/p>\n<div class=\"linuxidc_code\">\n<pre>ifconfig \r\nbond0: flags=5187&lt;UP,BROADCAST,RUNNING,MASTER,MULTICAST&gt;  mtu 1500\r\n        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20&lt;link&gt;\r\n        ether 84:2b:2b:6a:76:d4  txqueuelen 0  (Ethernet)\r\n        RX packets 147436  bytes 14519215 (13.8 MiB)\r\n        RX errors 0  dropped 70285  overruns 0  frame 0\r\n        TX packets 10344  bytes 970333 (947.5 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\r\n\r\nem1: flags=6211&lt;UP,BROADCAST,RUNNING,SLAVE,MULTICAST&gt;  mtu 1500\r\n        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)\r\n        RX packets 63702  bytes 6302768 (6.0 MiB)\r\n        RX errors 0  dropped 64285  overruns 0  frame 0\r\n        TX packets 344  bytes 35116 (34.2 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\r\n\r\nem2: flags=6211&lt;UP,BROADCAST,RUNNING,SLAVE,MULTICAST&gt;  mtu 1500\r\n        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)\r\n        RX packets 65658  bytes 6508173 (6.2 MiB)\r\n        RX errors 0  dropped 6001  overruns 0  frame 0\r\n        TX packets 1708  bytes 187627 (183.2 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\r\n\r\nlo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536\r\n        inet 127.0.0.1  netmask 255.0.0.0\r\n        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;\r\n        loop  txqueuelen 0  (Local Loopback)\r\n        RX packets 31  bytes 3126 (3.0 KiB)\r\n        RX errors 0  dropped 0  overruns 0  frame 0\r\n        TX packets 31  bytes 3126 (3.0 KiB)\r\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0<\/pre>\n<\/div>\n<p><span>Perform a high-availability test: unplug one of the cables to see the packet loss and delay, then plug the network cable back in (simulating 
recovery), and then watch the packet loss and delay.<\/span><\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Problem introduction <\/p>\n<p>When the general enterprise is used to provide NFS service, samba service or vsftpd service, the system must provide 7*24 hours of network transmission service. The maximum network transmission speed it can provide is 100MB\/s, but when there are a large number of users accessing, the server&#8217;s access pressure is very high, and [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[73],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7583"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7583"}],"version-history":[{"count":3,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7583\/revisions"}],"predecessor-version":[{"id":7586,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7583\/revisions\/7586"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7583"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7583"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7583"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}