{"id":7805,"date":"2019-04-23T18:00:46","date_gmt":"2019-04-23T10:00:46","guid":{"rendered":"http:\/\/rmohan.com\/?p=7805"},"modified":"2019-04-29T13:58:00","modified_gmt":"2019-04-29T05:58:00","slug":"nginx-server-configuration","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=7805","title":{"rendered":"Nginx server configuration"},"content":{"rendered":"\n<p>yum -y install make gcc gcc-c++ openssl openssl-devel pcre-devel zlib-devel<\/p>\n\n\n\n<p>wget -c http:\/\/nginx.org\/download\/nginx-1.14.2.tar.gz<\/p>\n\n\n\n<p>tar zxvf nginx-1.14.2.tar.gz<\/p>\n\n\n\n<p>cd nginx-1.14.2<\/p>\n\n\n\n<p>.\/configure &#8211;prefix=\/usr\/local\/nginx<\/p>\n\n\n\n<p>make &amp;&amp; make install<\/p>\n\n\n\n<p>cd \/usr\/local\/nginx<\/p>\n\n\n\n<p>.\/sbin\/nginx<\/p>\n\n\n\n<p>ps aux|grep nginx<\/p>\n\n\n\n<p>Nginx load balancing configuration example<\/p>\n\n\n\n<p>Load balancing is mainly achieved through specialized hardware devices or through software algorithms. The load balancing effect achieved by the hardware device is good, the efficiency is high, and the performance is stable, but the cost is relatively high. The load balancing implemented by software mainly depends on the selection of the equalization algorithm and the robustness of the program. Equalization algorithms are also diverse, and there are two common types: static load balancing algorithms and dynamic load balancing algorithms. The static algorithm is relatively simple to implement, and it can achieve better results in the general network environment, mainly including general polling algorithm, ratio-based weighted rounding algorithm and priority-based weighted rounding algorithm. The dynamic load balancing algorithm is more adaptable and effective in more complex network environments. 
It mainly has a minimum connection priority algorithm based on task volume, a performance-based fastest response priority algorithm, a prediction algorithm and a dynamic performance allocation algorithm.<\/p>\n\n\n\n<p>The general principle of network load balancing technology is to use a certain allocation strategy to distribute the network load to each operating unit of the network cluster in a balanced manner, so that a single heavy load task can be distributed to multiple units for parallel processing, or a large number of concurrent access or data. Traffic sharing is handled separately on multiple units, thereby reducing the user&#8217;s waiting response time.<\/p>\n\n\n\n<p>Nginx server load balancing configuration<br>\nThe Nginx server implements a static priority-based weighted round-robin algorithm. The main configuration is the proxy_pass command and the upstream command. These contents are actually easy to understand. The key point is that the configuration of the Nginx server is flexible and diverse. How to configure load balancing? At the same time, rationally integrate other functions to form a set of configuration solutions that can meet actual needs.<\/p>\n\n\n\n<p>The following are some basic example fragments. Of course, it is impossible to include all the configuration situations. I hope that it can be used as a brick-and-mortar effect. At the same time, we need to summarize and accumulate more in the actual application process. The places to note in the configuration will be added as comments.<\/p>\n\n\n\n<p>Configuration Example 1: Load balancing of general polling rules for all requests<br>\n    In the following example fragment, all servers in the backend server group are configured with the default weight=1, so they receive the request tasks in turn according to the general polling policy. This configuration is the easiest configuration to implement Nginx server load balancing. 
All requests to access www.rmohan.com will be load balanced across the backend server group. The example code is as follows:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\u2026\nupstream backend              # configure the backend server group\n{\n   server 192.168.1.2:80;\n   server 192.168.1.3:80;\n   server 192.168.1.4:80;     # default weight=1\n}\nserver\n{\n   listen 80;\n   server_name www.rmohan.com;\n   index index.html index.htm;\n   location \/ {\n       proxy_pass http:\/\/backend;\n       proxy_set_header Host $host;\n   }\n   \u2026\n}<\/code><\/pre>\n\n\n\n<p>Configuration Example 2: weighted round-robin load balancing for all requests<br>\n    Compared with &#8220;Configuration Example 1&#8221;, in this fragment the servers in the backend server group are given different priority levels; the value of the weight parameter is the &#8220;weight&#8221; used in the round-robin policy. Here 192.168.1.2:80 has the highest weight and receives and processes client requests preferentially; 192.168.1.4:80 has the lowest weight and receives and processes the fewest client requests; 192.168.1.3:80 falls between the two. All requests to access www.rmohan.com will be load balanced with weights across the backend server group. 
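<\/p>\n\n\n\n<p>To make the weighting concrete, the policy can be modeled in a few lines of Python. This is a simplified illustration of static weighted round robin only, not Nginx&#8217;s internal scheduler (which uses a smooth weighted variant), and the backend list is hypothetical:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import itertools\n\ndef weighted_round_robin(servers):\n    # Expand each (address, weight) pair so an address appears weight times,\n    # then cycle over the expanded list forever.\n    expanded = [addr for addr, weight in servers for _ in range(weight)]\n    return itertools.cycle(expanded)\n\n# Hypothetical backends mirroring weights 5, 2 and 1.\nbackends = [(\"192.168.1.2:80\", 5), (\"192.168.1.3:80\", 2), (\"192.168.1.4:80\", 1)]\npicker = weighted_round_robin(backends)\nfirst_cycle = [next(picker) for _ in range(8)]\n# Of every 8 consecutive requests, 5 go to .2, 2 to .3 and 1 to .4.<\/code><\/pre>\n\n\n\n<p>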
The example code is as follows:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\u2026\nupstream backend              # configure the backend server group\n{\n   server 192.168.1.2:80 weight=5;\n   server 192.168.1.3:80 weight=2;\n   server 192.168.1.4:80;     # default weight=1\n}\nserver\n{\n   listen 80;\n   server_name www.rmohan.com;\n   index index.html index.htm;\n   location \/ {\n       proxy_pass http:\/\/backend;\n       proxy_set_header Host $host;\n   }\n   \u2026\n}<\/code><\/pre>\n\n\n\n<p>Configuration Example 3: load balancing requests for specific resources<br>\n In this fragment we set up two proxied server groups: one named &#8220;videobackend&#8221; load balances the client requests that ask for video resources, and the other, named &#8220;filebackend&#8221;, load balances the client requests that ask for file resources. All requests for &#8220;http:\/\/www.mywebname\/video\/*&#8221; are balanced across the videobackend server group, and all requests for &#8220;http:\/\/www.mywebname\/file\/*&#8221; across the filebackend server group. This example shows plain load balancing; for weighted load balancing, refer to Configuration Example 2.<\/p>\n\n\n\n<p>In the location \/file\/ {\u2026} block, we populate the client&#8217;s real information into the &#8220;Host&#8221;, &#8220;X-Real-IP&#8221;, and &#8220;X-Forwarded-For&#8221; header fields of the forwarded request, so the requests received by the backend server group carry the real information of the client rather than that of the Nginx server. The example code is as follows:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\u2026\nupstream videobackend         # configure backend server group 1\n{\n   server 192.168.1.2:80;\n   server 192.168.1.3:80;\n   server 192.168.1.4:80;\n}\nupstream filebackend          # configure backend server group 2\n{\n   server 192.168.1.5:80;\n   server 192.168.1.6:80;\n   server 192.168.1.7:80;\n}\nserver\n{\n   listen 80;\n   server_name www.rmohan.com;\n   index index.html index.htm;\n   location \/video\/ {\n       proxy_pass http:\/\/videobackend;   # use backend server group 1\n       proxy_set_header Host $host;\n       \u2026\n   }\n   location \/file\/ {\n       proxy_pass http:\/\/filebackend;    # use backend server group 2\n       # retain the real information of the client\n       proxy_set_header Host $host;\n       proxy_set_header X-Real-IP $remote_addr;\n       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n       \u2026\n   }\n}<\/code><\/pre>\n\n\n\n<p>Configuration Example 4: load balancing different domain names<br>\nIn this fragment we set up two virtual servers and two backend proxied server groups to receive requests for different domain names and load balance them. If the client requests the domain name &#8220;home.rmohan.com&#8221;, virtual server server1 receives the request and forwards it to the homebackend server group for load balancing; if the client requests &#8220;bbs.rmohan.com&#8221;, virtual server server2 receives it and forwards it to the bbsbackend server group. This achieves load balancing per domain name.<\/p>\n\n\n\n<p>Note that one server, 192.168.1.4:80, appears in both backend server groups. All resources under both domain names need to be deployed on this server so that client requests are handled without problems. 
The example code is as follows:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\u2026\nupstream bbsbackend           # configure backend server group 1\n{\n   server 192.168.1.2:80 weight=2;\n   server 192.168.1.3:80 weight=2;\n   server 192.168.1.4:80;\n}\nupstream homebackend          # configure backend server group 2\n{\n   server 192.168.1.4:80;\n   server 192.168.1.5:80;\n   server 192.168.1.6:80;\n}\nserver                        # start configuring server 1\n{\n   listen 80;\n   server_name home.rmohan.com;\n   index index.html index.htm;\n   location \/ {\n       proxy_pass http:\/\/homebackend;\n       proxy_set_header Host $host;\n       \u2026\n   }\n   \u2026\n}\nserver                        # start configuring server 2\n{\n   listen 80;\n   server_name bbs.rmohan.com;\n   index index.html index.htm;\n   location \/ {\n       proxy_pass http:\/\/bbsbackend;\n       proxy_set_header Host $host;\n       \u2026\n   }\n   \u2026\n}<\/code><\/pre>\n\n\n\n<p>Configuration Example 5: load balancing combined with URL rewriting<br>\n    First, let&#8217;s look at the source code. It is a modification of Configuration Example 1:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\u2026\nupstream backend              # configure the backend server group\n{\n   server 192.168.1.2:80;\n   server 192.168.1.3:80;\n   server 192.168.1.4:80;     # default weight=1\n}\nserver\n{\n   listen 80;\n   server_name www.rmohan.com;\n   index index.html index.htm;\n\n   location \/file\/ {\n       rewrite ^(\/file\/.*)\/media\/(.*)\\..*$ $1\/mp3\/$2.mp3 last;\n   }\n\n   location \/ {\n       proxy_pass http:\/\/backend;\n       proxy_set_header Host $host;\n   }\n   \u2026\n}<\/code><\/pre>\n\n\n\n<p>Compared with &#8220;Configuration Example 1&#8221;, this fragment adds URL rewriting for URIs containing &#8220;\/file\/&#8221;. For example, when the URL requested by the client is &#8220;http:\/\/www.rmohan.com\/file\/download\/media\/1.mp3&#8221;, the virtual server first uses the location \/file\/ {\u2026} block to rewrite it to &#8220;http:\/\/www.rmohan.com\/file\/download\/mp3\/1.mp3&#8221; and then forwards the request to the backend server group, where load balancing is carried out. In this way, load balancing with URL rewriting is easily achieved. In this configuration scheme it is important to grasp the difference between the last flag and the break flag of the rewrite directive in order to achieve the expected effect.<\/p>\n\n\n\n<p>The five configuration examples above show the basic ways of configuring Nginx server load balancing under different conditions. Since the features of the Nginx server compose incrementally, we can continue to add more functions on top of these configurations, such as Web caching, Gzip compression, identity authentication, and rights management. 
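<\/p>\n\n\n\n<p>For instance, Gzip compression can be layered onto any of the examples above with only a few extra directives in the http block. The following fragment is an illustrative sketch; the values shown are common defaults, not tuned recommendations:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>http\n{\n    gzip on;                   # compress responses on the fly\n    gzip_comp_level 4;         # 1 (fastest) .. 9 (smallest)\n    gzip_min_length 1024;      # do not compress very small responses\n    gzip_types text\/css application\/javascript application\/json;\n    \u2026\n}<\/code><\/pre>\n\n\n\n<p>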
At the same time, when configuring server groups with the upstream directive, you can make full use of each directive&#8217;s options to build an Nginx server that meets the requirements and is efficient, stable, and feature-rich.<\/p>\n\n\n\n<p>Location configuration<br>\n        Syntax: location [ = | ~ | ~* | ^~ ] uri { \u2026 }<br>\n        uri is the request string to match; it can be a plain string without regular expressions or a string containing a regular expression.<br>\n        The bracketed modifier is optional and changes how the request string is matched against the uri:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>    =   placed before a standard uri; requires the request string to match the uri exactly; once the match succeeds, the search stops and the request is processed immediately\n\n    ~   indicates that the uri contains a regular expression and matching is case-sensitive\n\n    ~*  indicates that the uri contains a regular expression and matching is not case-sensitive\n\n    ^~  requires the server to find the location whose uri has the longest prefix match with the request string, and to process the request there immediately without checking regular-expression locations<\/code><\/pre>\n\n\n\n<p>Website error pages<br>\n        1xx: informational &#8211; the request has been received and processing continues<br>\n        2xx: success &#8211; the request has been successfully received, understood, and accepted<br>\n        3xx: redirection &#8211; further action must be taken to complete the request<br>\n        4xx: client error &#8211; the request has a syntax error or cannot be fulfilled<br>\n        5xx: server error &#8211; the server failed to fulfil a legitimate request<\/p>\n\n\n\n<p>Common status codes:<br>\n        200 OK \/\/ the client request succeeded<br>\n        301 Moved Permanently \/\/ the requested resource has a new permanent location<br>\n        302 Found \/\/ the requested resource temporarily resides at a different location<br>\n        400 Bad Request \/\/ the request has a syntax error and cannot be understood by the server<br>\n        401 Unauthorized \/\/ the request is unauthorized; this status code must be used together with the WWW-Authenticate header field<br>\n        403 Forbidden \/\/ the server received the request but refuses to provide the service<br>\n        404 Not Found \/\/ the requested resource does not exist, e.g. a mistyped URL<br>\n        405 Method Not Allowed \/\/ the method used in the request is not allowed for this resource<br>\n        500 Internal Server Error \/\/ the server encountered an unexpected error<br>\n        501 Not Implemented \/\/ the server does not support the functionality required by the request<br>\n        503 Service Unavailable \/\/ the server is temporarily unable to handle the client&#8217;s request and may recover after a while<br>\n        505 HTTP Version Not Supported \/\/ the protocol version used in the request is not supported<br>\n        eg: HTTP\/1.1 200 OK (CRLF)<\/p>\n\n\n\n<p>Common configuration directives<br>\n        1. error_log file | stderr [ debug | info | notice | warn | error | crit | alert | emerg ]<br>\n          debug &#8211; debug level, the most complete log output<br>\n          info &#8211; informational messages<br>\n          notice &#8211; noteworthy messages<br>\n          warn &#8211; warnings about insignificant errors<br>\n          error &#8211; errors<br>\n          crit &#8211; critical errors affecting normal operation of the service<br>\n          alert &#8211; very serious errors<br>\n          emerg &#8211; emergencies<br>\n        The Nginx server log can be written to a file or to the standard error output (stderr). The option after the file is the log level; levels run from low to high, debug through emerg, and once a level is set, messages at that level and the more severe levels are recorded while lower levels are not.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>    2. user user [group]\n\n    configures the user (and group) that may start Nginx; if omitted, any user can start it\n\n    3. worker_processes number | auto\n\n    specifies how many worker processes Nginx generates;\n    auto lets Nginx detect the number automatically\n\n    4. pid file\n\n    specifies the file that stores the pid of the master process,\n    e.g. pid logs\/nginx.pid; take care to set the file name correctly, or it cannot be found\n\n    5. include file\n\n    includes another configuration file, merging other configuration in\n\n    6. accept_mutex on | off\n\n    serializes the acceptance of new network connections\n\n    7. multi_accept on | off\n\n    allows a worker to accept multiple new network connections at the same time\n\n    8. use method\n\n    selects the event-driven model\n\n    9. worker_connections number\n\n    configures the maximum number of connections allowed per worker process; the default is 1024\n\n    10. mime-type\n\n    configures resource types; a mime-type is the media type of a network resource\n    format: default_type mime-type\n\n    
11. access_log path [format [buffer=size]]\n\n    defines the server&#8217;s access log\n    path: the path and name of the file storing the server log\n    format: optional, a format string defined for the server log\n    size: the size of the memory buffer used to temporarily store log entries\n\n    12. log_format name string ...;\n\n    used together with access_log; dedicated to defining the format of the server log,\n    and a name can be given to the format so that access_log can refer to it easily\n\n    name: the name of the format string; the default is combined\n    string: the format string of the service log\n\nlog_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n                '$status $body_bytes_sent \"$http_referer\" '\n                '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n\n    $remote_addr 10.0.0.1 --- the source address of the visitor<\/code><\/pre>\n\n\n\n<p>$remote_user &#8211; the authentication information shown when the Nginx server requires authenticated access<\/p>\n\n\n\n<p>[$time_local] &#8211; the time of the access<\/p>\n\n\n\n<p>&#8220;$request&#8221; &#8220;GET \/ HTTP\/1.1&#8221; &#8211; the request line<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>            $status 200 --- the status information; 304 is shown when the response is served from cache<\/code><\/pre>\n\n\n\n<p>$body_bytes_sent 256 &#8211; the size of the response body<\/p>\n\n\n\n<p>&#8220;$http_referer&#8221; &#8211; the page that linked to the request<br>\n        &#8220;$http_user_agent&#8221; &#8211; the browser information of the client<\/p>\n\n\n\n<p>&#8220;$http_x_forwarded_for&#8221; is referred to as the XFF header. It identifies the client, that is, the real IP of the HTTP requester, and is only added when the request passes through an HTTP proxy or load balancing server. It is not a standard request header defined in the RFCs; it can be found in the Squid cache proxy development documentation.<\/p>\n\n\n\n<p>13. sendfile on | off<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>    allows files to be transferred in sendfile mode\n\n    14. sendfile_max_chunk size\n\n    configures the maximum amount of data each worker process of Nginx may transfer per sendfile() call; a transfer cannot exceed this size\n\n    15. keepalive_timeout timeout [header_timeout];\n\n    configures the connection timeout\n    timeout: how long the server keeps the connection alive\n    header_timeout: sets the timeout reported in the Keep-Alive field of the response header\n\n    16. keepalive_requests number\n\n    the number of requests allowed on a single keep-alive connection\n\n    17. listen\n\n    there are three ways to configure network listening:\n        listening on an IP address:\n        listen address[:port] [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [deferred]\n        listening on a port:\n        listen port [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred]\n        listening on a socket:\n        listen unix:path [default_server] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred]\n\n        address: IP address\n        port: port number\n        path: socket file path\n        default_server: identifier; makes this virtual host the default host for the given address:port\n        setfib=number: only useful on FreeBSD; associates the listening socket with routing table number (since version 0.8.44)\n        backlog=number: sets the maximum length of the pending-connection queue for listen(); the default is -1 on FreeBSD and 511 elsewhere\n        rcvbuf=size: receive buffer size of the listening socket\n        sndbuf=size: send buffer size of the listening socket\n        deferred: identifier; sets accept() to deferred mode\n        accept_filter=filter: sets a filter for requests arriving on the listening port; only useful on FreeBSD and NetBSD 5.0+\n        bind: identifier; uses a separate bind() call to handle this address:port\n        ssl: identifier; sets connections on this port to use SSL mode\n\n    18. server_name name\n\n    configures a name-based virtual host\n    priority when several server_names match:\n        an exact server_name match\n        a server_name with a wildcard at the beginning matches\n        a server_name with a wildcard at the end matches\n        a regular-expression server_name matches\n    if a request matches several server_names within the same mode above, the first one that matches handles the request\n\n    19. root path\n\n    configures the root path used to serve requests<\/code><\/pre>\n\n\n\n<p>The server needs to find the requested resource in the directory specified by this directive. 
<br>\nThis path specifies the directory that holds the site&#8217;s files. <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>    20. alias path (used inside a location block)\n\n    changes the request path of the URI received by the location; the path can be followed by variable information\n\n    21. index file ...;\n\n    sets the default home page(s) of the website\n\n    22. error_page code ... [=[response]] uri\n\n    sets the error page information\n        code: the http error code to handle\n        response: optional; changes the error code returned to the client into a new code\n        uri: the path of the error page or a website address\n\n    23. allow address | CIDR | all\n\n    configures ip-based access permission\n        address: the ip of a client allowed to access; setting several in one directive is not supported\n        CIDR: a CIDR range of clients allowed to access, such as 185.199.110.153\/24\n        all: all clients may access\n\n    24. deny address | CIDR | all\n\n    configures ip-based access denial\n        address: the ip of a client denied access; setting several in one directive is not supported\n        CIDR: a CIDR range of clients denied access, such as 185.199.110.153\/24\n        all: all clients are denied\n\n    25. auth_basic string | off\n\n    configures password-based access to nginx\n        string: enables authentication and sets the realm shown during verification\n        off: disables authentication\n\n    26. auth_basic_user_file file\n\n    configures the password file used for password-based access to nginx\n        file: the password file, which must use an absolute path<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"\n<p>yum -y install make gcc gcc-c++ openssl openssl-devel pcre-devel zlib-devel<\/p>\n<p>wget -c http:\/\/nginx.org\/download\/nginx-1.14.2.tar.gz<\/p>\n<p>tar zxvf nginx-1.14.2.tar.gz<\/p>\n<p>cd nginx-1.14.2<\/p>\n<p>.\/configure --prefix=\/usr\/local\/nginx<\/p>\n<p>make &amp;&amp; make install<\/p>\n<p>cd 
\/usr\/local\/nginx<\/p>\n<p>.\/sbin\/nginx<\/p>\n<p>ps aux|grep nginx<\/p>\n<p>Nginx load balancing configuration example<\/p>\n<p>Load balancing is mainly achieved through specialized hardware devices or through software algorithms. The load balancing effect achieved by [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[70],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7805"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7805"}],"version-history":[{"count":1,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7805\/revisions"}],"predecessor-version":[{"id":7806,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7805\/revisions\/7806"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7805"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7805"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7805"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}