{"id":7528,"date":"2018-06-03T10:01:14","date_gmt":"2018-06-03T02:01:14","guid":{"rendered":"http:\/\/rmohan.com\/?p=7528"},"modified":"2018-06-03T10:01:14","modified_gmt":"2018-06-03T02:01:14","slug":"nginx-load-balancing-and-configuration-2","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=7528","title":{"rendered":"Nginx load balancing and configuration"},"content":{"rendered":"<p><strong>Nginx load balancing and configuration<\/strong><\/p>\n<p>1 Load Balancing Overview<br \/>\nLoad balancing exists because a server that receives a large volume of traffic per unit time comes under great pressure; once the load exceeds its capacity, the server crashes. Load balancing was created to share this pressure among several servers, avoiding crashes and giving users a better experience.<\/p>\n<p>Load balancing is essentially an application of the reverse-proxy principle. It is a technique for making good use of server resources and handling high concurrency: it balances the pressure across servers, reduces the time users wait for a response, and provides fault tolerance. Nginx is commonly used as an efficient HTTP load balancer that distributes traffic to multiple application servers to improve performance, scalability, and availability.<\/p>\n<p>Principle: a large number of servers on the internal network form a server cluster. When a user visits the site, the request first reaches an intermediate server on the public network, which assigns it to an intranet server according to a scheduling algorithm. Each visit is thus distributed so that the pressure on every server in the cluster tends toward balance, sharing the load and preventing any single server from collapsing.<\/p>\n<p>The nginx reverse proxy can load-balance HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached backends.<br \/>\nTo configure load balancing for HTTPS instead of HTTP, simply use &#8216;https&#8217; as the protocol.<br \/>\nWhen setting up load balancing for FastCGI, uwsgi, SCGI, or memcached, use the fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives, respectively.<\/p>\n<p>2 Common Load Balancing Methods<\/p>\n<p>1 round-robin: requests are distributed to the different servers in turn; each request is assigned to a back-end server in order. If a back-end server goes down, it is automatically removed from rotation so that service continues normally.<\/p>\n<p>Configuration 1:<br \/>\nupstream server_back { # nginx distributes service requests<br \/>\nserver 192.168.2.49;<br \/>\nserver 192.168.2.50;<br \/>\n}<\/p>\n<p>Configuration 2:<br \/>\nhttp {<br \/>\nupstream servergroup { # server group that accepts requests; nginx distributes them in turn<br \/>\nserver srv1.rmohan.com;<br \/>\nserver srv2.rmohan.com;<br \/>\nserver srv3.rmohan.com;<br \/>\n}<br \/>\nserver {<br \/>\nlisten 80;<br \/>\nlocation \/ {<br \/>\nproxy_pass http:\/\/servergroup; # all requests are proxied to the servergroup group<br \/>\n}<br \/>\n}<br \/>\n}<br \/>\nproxy_pass is followed by the address of the proxied server group; it can also be a hostname, a domain name, or an ip:port pair.<br \/>\nupstream defines the list of back-end servers used for load balancing.<\/p>\n<p>2 weight: if no weight is configured, every server carries the same share of the load. When server performance is uneven, weighted round-robin is used: the weight parameter determines what share of the load each server receives, and a server with a higher weight carries a larger part of it.<br \/>\nupstream server_back {<br \/>\nserver 192.168.2.49 weight=3;<br \/>\nserver 192.168.2.50 weight=7;<br \/>\n}<\/p>\n<p>3 least-connected: the next request is assigned to the server with the fewest active connections. When some requests take longer to complete, least-connected balances the load across application instances more fairly: nginx forwards new requests to the less busy servers.<br \/>\nupstream servergroup {<br \/>\nleast_conn;<br \/>\nserver srv1.rmohan.com;<br \/>\nserver srv2.rmohan.com;<br \/>\nserver srv3.rmohan.com;<br \/>\n}<\/p>\n<p>4 ip-hash: based on the client&#8217;s IP address. With the methods above, each request may be directed to a different server in the cluster, so a user who has logged in on one server may be relocated to another and lose the login session, which is clearly inappropriate. ip_hash solves this problem: once a client has accessed a server, subsequent requests from that client are directed to the same server by a hash algorithm.<\/p>\n<p>Each request is assigned according to the hash of the client IP, so requests from one client are fixed to a certain back-end server, which also solves the session problem.<br \/>\nupstream servergroup {<br \/>\nip_hash;<br \/>\nserver srv1.rmohan.com;<br \/>\nserver srv2.rmohan.com;<br \/>\nserver srv3.rmohan.com;<br \/>\n}<\/p>\n<p>A complete example:<br \/>\n#user nobody;<br \/>\nworker_processes 4;<br \/>\nevents {<br \/>\n# maximum number of concurrent connections<br \/>\nworker_connections 1024;<br \/>\n}<br \/>\nhttp {<br \/>\n# the list of back-end servers<br \/>\nupstream myserver {<br \/>\n# the ip_hash directive keeps the same user on the same server<br \/>\nip_hash;<br \/>\nserver 125.219.42.4 fail_timeout=60s; # after max_fails failures, the server is suspended for 60s<br \/>\nserver 172.31.2.183;<br \/>\n}<\/p>\n<p>server {<br \/>\n# listening port<br \/>\nlisten 80;<br \/>\n# root location<br \/>\nlocation \/ {<br \/>\n# select which upstream server list to use<br \/>\nproxy_pass http:\/\/myserver;<br \/>\n}<br \/>\n}<br \/>\n}<\/p>\n<p>max_fails: the number of failed requests allowed before a server is considered unavailable; the default is 1.<br \/>\nfail_timeout=60s: how long a failed server is taken out of rotation.<br \/>\ndown: marks the current server as not participating in load balancing.<br \/>\nbackup: the server receives requests only when all non-backup machines are busy or down, so its load is the lightest.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Nginx load balancing and configuration<\/p>\n<p>1 Load Balancing Overview Load balancing exists because a server that receives a large volume of traffic per unit time comes under great pressure; once the load exceeds its capacity, the server crashes. Load balancing was created to share this pressure among several servers, avoiding crashes and giving users a better [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[70],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7528"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7528"}],"version-history":[{"count":1,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7528\/revisions"}],"predecessor-version":[{"id":7529,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7528\/revisions\/7529"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7528"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7528"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7528"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}