yum -y install make gcc gcc-c++ openssl openssl-devel pcre-devel zlib-devel
wget -c http://nginx.org/download/nginx-1.14.2.tar.gz
tar zxvf nginx-1.14.2.tar.gz
cd nginx-1.14.2
./configure --prefix=/usr/local/nginx
make && make install
cd /usr/local/nginx
./sbin/nginx
ps aux|grep nginx
Nginx load balancing configuration example
Load balancing can be implemented with dedicated hardware devices or in software. Hardware load balancers deliver good results, high efficiency, and stable performance, but at a relatively high cost. Software load balancing depends mainly on the choice of balancing algorithm and the robustness of the program. Balancing algorithms are diverse, but fall into two common categories: static and dynamic. Static algorithms are relatively simple to implement and work well in typical network environments; they include plain round-robin, ratio-based weighted round-robin, and priority-based weighted round-robin. Dynamic algorithms adapt better and are more effective in complex network environments; they include least-connections (based on current task volume), fastest-response (based on measured performance), prediction algorithms, and dynamic performance allocation algorithms.
The general principle of network load balancing is to use an allocation strategy to distribute network load evenly across the operating units of a cluster: a single heavy task can be split across multiple units for parallel processing, or a large volume of concurrent requests or data traffic can be shared among multiple units and handled separately, thereby reducing the user's waiting time for a response.
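Nginx supports both families described above: plain and weighted round-robin are built into the upstream block, and the least_conn directive selects a least-connections (dynamic) strategy. A minimal sketch, with placeholder server addresses:

```nginx
# Static weighted round-robin: weights set each server's share of requests.
upstream static_pool {
    server 192.168.1.2:80 weight=3;
    server 192.168.1.3:80 weight=1;
}

# Dynamic least-connections: each request goes to the server
# that currently has the fewest active connections.
upstream dynamic_pool {
    least_conn;
    server 192.168.1.5:80;
    server 192.168.1.6:80;
}
```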
Nginx server load balancing configuration
The Nginx server implements a static, priority-based weighted round-robin algorithm. The key pieces of configuration are the proxy_pass and upstream directives. These are easy to understand in isolation; the key point is that Nginx configuration is flexible and diverse, so the real question is how to configure load balancing while rationally integrating other features into a configuration scheme that meets actual needs.
The following are some basic example fragments. They cannot cover every configuration scenario, but I hope they serve as a useful starting point; further experience needs to be summarized and accumulated in actual use. Points worth noting in each configuration are added as comments.
Configuration Example 1: round-robin load balancing for all requests
In the following fragment, every server in the backend group keeps the default weight=1, so they receive request tasks in turn under the plain round-robin policy. This is the simplest load-balancing configuration for Nginx: all requests for www.rmohan.com are balanced across the backend server group. The example code is as follows:
…
upstream backend    # configure the backend server group
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;    # default weight=1
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    …
}
Configuration Example 2: weighted round-robin load balancing for all requests
Compared with Configuration Example 1, the servers in the backend group here are given different priorities: the value of the weight parameter is the "weight" in the round-robin policy. 192.168.1.2:80 has the highest weight and receives and processes client requests preferentially; 192.168.1.4:80 has the lowest weight and handles the fewest requests; 192.168.1.3:80 falls in between. All requests for www.rmohan.com are balanced across the backend server group with these weights. The example code is as follows:
…
upstream backend    # configure the backend server group
{
    server 192.168.1.2:80 weight=5;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80;    # default weight=1
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    …
}
Configuration Example 3: Load balancing a specific resource
In this fragment we set up two proxied server groups: one named "videobackend" balances client requests for video resources, and the other balances client requests for file resources. All requests for "http://www.mywebname/video/" are balanced across the videobackend group, and all requests for "http://www.mywebname/file/" are balanced across the filebackend group. This example shows plain round-robin balancing; for weighted balancing, refer to Configuration Example 2.
In the location /file/ {...} block, we pass the client's real information to the backend through the "Host", "X-Real-IP", and "X-Forwarded-For" request header fields, so the requests received by the backend server group carry the real client information rather than that of the Nginx server. The example code is as follows:
…
upstream videobackend    # configure backend server group 1
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;
}
upstream filebackend    # configure backend server group 2
{
    server 192.168.1.5:80;
    server 192.168.1.6:80;
    server 192.168.1.7:80;
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;
    location /video/ {
        proxy_pass http://videobackend;    # use backend server group 1
        proxy_set_header Host $host;
        …
    }
    location /file/ {
        proxy_pass http://filebackend;     # use backend server group 2
        # pass on the client's real information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        …
    }
}
Configuration Example 4: Load balancing different domain names
In this fragment we set up two virtual servers and two backend server groups to receive requests for different domain names and balance them. If the client requests the domain home.rmohan.com, the first server block receives the request and forwards it to the homebackend group for balancing; if the client requests bbs.rmohan.com, the second server block receives it and forwards it to the bbsbackend group. This achieves load balancing per domain name.
Note that the server 192.168.1.4:80 appears in both backend groups. All resources of both domains must be deployed on this shared server, otherwise client requests routed to it may fail. The example code is as follows:
…
upstream bbsbackend    # configure backend server group 1
{
    server 192.168.1.2:80 weight=2;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80;
}
upstream homebackend    # configure backend server group 2
{
    server 192.168.1.4:80;
    server 192.168.1.5:80;
    server 192.168.1.6:80;
}
# start configuring server 1
server
{
    listen 80;
    server_name home.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://homebackend;
        proxy_set_header Host $host;
        …
    }
    …
}
# start configuring server 2
server
{
    listen 80;
    server_name bbs.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://bbsbackend;
        proxy_set_header Host $host;
        …
    }
    …
}
Configuration Example 5: Implementing load balancing with URL rewriting
First, let's look at the source code. This is a modification of Configuration Example 1:
…
upstream backend    # configure the backend server group
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;    # default weight=1
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;
    location /file/ {
        rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
    }
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    …
}
This instance fragment adds a URL rewriting function to the URI containing “/file/” compared to “Configuration One”. For example, when the URL requested by the client is “http://www.rmohan.com/file/downlaod/media/1.mp3”, the virtual server first uses the location file/ {…} block to forward to the post. Load balancing is implemented in the backend server group. In this way, load balancing with URL rewriting is easily implemented in the car. In this configuration scheme, it is necessary to grasp the difference between the last flag and the break flag in the rewrite instruction in order to achieve the expected effect.
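The difference between the two flags matters here: last re-submits the rewritten URI to location matching, while break stops rewrite processing and serves the URI inside the current location. A hedged sketch:

```nginx
location /file/ {
    # last: restart location matching with the rewritten URI, so the
    # rewritten request can be picked up by "location /" and proxied.
    rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;

    # break (alternative): stop rewriting and handle the rewritten URI
    # inside this same location block; the proxy_pass in "location /"
    # would never see it.
    # rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 break;
}
```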
The five configuration examples above show the basic ways the Nginx server implements load balancing under different conditions. Since Nginx features compose incrementally, we can keep adding functionality on top of these configurations, such as web caching, Gzip compression, authentication, and rights management. Likewise, when configuring a server group with the upstream directive, the various parameters can be exploited fully to build an Nginx server that meets requirements and is efficient, stable, and feature-rich.
location configuration
syntax: location [ = | ~ | ~* | ^~ ] uri { … }
uri is the request string to match; it can be a plain string without regular expressions or a string containing one.
The optional bracketed modifier changes how the request string is matched against uri:
= placed before a standard uri requires the request string to match it exactly; if the match succeeds, searching stops and the request is processed immediately
~ indicates that uri contains a regular expression and matching is case-sensitive
~* indicates that uri contains a regular expression and matching is case-insensitive
^~ requires Nginx to find the location whose standard uri is the longest prefix match for the request string, use it, and skip checking regular-expression locations
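The four modifiers can be sketched side by side in one server block (paths and responses are illustrative only):

```nginx
server {
    location = /exact { return 200 "exact match only\n"; }   # '=' exact match
    location ^~ /static/ { root /var/www; }    # longest prefix, regexes skipped
    location ~ \.php$ { return 403; }          # case-sensitive regex
    location ~* \.(jpg|png)$ { expires 30d; }  # case-insensitive regex
}
```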
Website error pages
1xx: Informational: the request has been received; processing continues
2xx: Success: the request has been successfully received, understood, and accepted
3xx: Redirection: further action must be taken to complete the request
4xx: Client error: the request has a syntax error or cannot be fulfilled
5xx: Server error: the server failed to fulfil a legitimate request
HTTP status code meanings:
301: Moved Permanently: the requested data has a new location and the change is permanent
302: Found: the requested data has temporarily moved to a different location
400: Bad Request: the server can be reached, but the request is malformed and the page cannot be returned
404: Not Found: the website can be reached, but the requested page cannot be found
405: Method Not Allowed: the website can be reached, but the page content cannot be retrieved because the request method is not allowed
500: Internal Server Error: the page cannot be displayed because of a problem on the server
501: Not Implemented: the server does not support the functionality required to fulfil the request
505: HTTP Version Not Supported: the server does not support the HTTP protocol version used in the request
200 OK // the client request succeeded
400 Bad Request // the client request has a syntax error and cannot be understood by the server
401 Unauthorized // the request is unauthorized; this status code must be used together with the WWW-Authenticate header field
403 Forbidden // the server received the request but refuses to provide the service
404 Not Found // the requested resource does not exist, e.g. a wrong URL was entered
500 Internal Server Error // an unexpected error occurred on the server
503 Service Unavailable // the server cannot currently handle the client's request and may recover after a while
e.g.: HTTP/1.1 200 OK (CRLF)
Description of common configuration directives
1. error_log file | stderr [debug | info | notice | warn | error | crit | alert | emerg];
Sends the Nginx error log to a file, or to standard error with stderr. The level option follows, from low to high:
debug: debug level, the most complete log output
info: informational level, general messages
notice: notice level, noteworthy events
warn: warning level, minor problems
error: error level, errors in operation
crit: critical level, serious errors affecting normal operation of the service
alert: alert level, very serious errors
emerg: emergency level, the most serious errors
Once a level is set, messages at that level and at more severe levels are recorded; lower levels are not.
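For example (the log path is a placeholder):

```nginx
error_log /var/log/nginx/error.log warn;  # record warn and more severe levels
# error_log stderr debug;                 # most verbose output, to stderr
```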
2. user user [group];
Configures the user and group that the worker processes run as; if omitted, the default is used and any user can start Nginx.
3. worker_processes number | auto;
Specifies how many worker processes Nginx spawns; auto lets Nginx detect a suitable number automatically.
4. pid file;
Specifies the file where the process ID is stored, e.g. pid logs/nginx.pid; take care when setting the file name, otherwise the file cannot be found.
5. include file;
Includes another configuration file, merging other configuration into this one.
6. accept_mutex on | off;
Enables or disables serialization of accepting new network connections.
7. multi_accept on | off;
Sets whether a worker process may accept multiple new network connections at the same time.
8. use method;
Selects the event-driven model, e.g. select, poll, epoll, or kqueue.
9. worker_connections number;
Configures the maximum number of connections each worker process may handle; the default is 1024.
10. mime-type
Configures resource types; a MIME type is the media type of a network resource.
Format: default_type MIME-type;
11. access_log path [format [buffer=size]];
Customizes the server access log:
path: the path and file name where the access log is stored
format: optional; the name of a format string for the access log entries
size: the size of the memory buffer used to hold log entries temporarily
12. log_format name string ...;
Used together with access_log; defines the format of the access log and gives it a name so access_log can refer to it easily.
name: the name of the format string; the default is combined
string: the format string of the access log

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
$remote_addr (e.g. 10.0.0.1): the visitor's source address
$remote_user: the authentication information shown when the Nginx server performs authenticated access
[$time_local]: the local time of the access
"$request" (e.g. "GET / HTTP/1.1"): the request line
$status (e.g. 200): the response status; 304 appears when the client read its cache
$body_bytes_sent (e.g. 256): the size of the response body sent
"$http_referer": the page the request was linked from
"$http_user_agent": the client's browser information
"$http_x_forwarded_for": known as the XFF header; it carries the real IP of the HTTP requester, i.e. the client. It is only added when the request passes through an HTTP proxy or load balancer. It is not a standard request header defined in the RFCs; details can be found in the Squid cache proxy development documentation.
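Putting items 11 and 12 together, a sketch that assumes a log format named main has been defined (the log path is a placeholder):

```nginx
# write the access log in the "main" format, buffering entries in 32k of memory
access_log /var/log/nginx/access.log main buffer=32k;
```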
13. sendfile on | off;
Enables or disables the sendfile() mode of transferring files.
14. sendfile_max_chunk size;
Configures the maximum amount of data a worker process may transfer in a single sendfile() call.
15. keepalive_timeout timeout [header_timeout];
Configures the connection timeout:
timeout: how long the server keeps the connection alive
header_timeout: optional; sets the timeout sent in the Keep-Alive field of the response message header
16. keepalive_requests number;
Sets the maximum number of requests that can be served over a single keep-alive connection.
17. listen
There are three forms of the network listening configuration:
Listening on an IP address:
listen address[:port] [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [deferred]
Listening on a port:
listen port [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter]
Listening on a UNIX socket:
listen unix:path [default_server] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred]
address: IP address
port: port number
path: socket file path
default_server: identifier; makes this virtual host the default for the given address:port
setfib=number: only useful on FreeBSD; since version 0.8.44 it associates the listening socket with a routing table
backlog=number: sets the maximum length of the pending-connection queue for listen(); the default is -1 on FreeBSD and 511 elsewhere
rcvbuf=size: receive buffer size of the listening socket
sndbuf=size: send buffer size of the listening socket
deferred: identifier; sets accept() to deferred mode
accept_filter=filter: sets a filter on the listening port to screen incoming requests; only available on FreeBSD and NetBSD 5.0+
bind: identifier; uses a separate bind() call for this address:port
ssl: identifier; specifies that connections on this port use SSL mode
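The three forms side by side (addresses and paths are placeholders):

```nginx
listen 192.168.1.10:8000;   # IP address and port
listen 80 default_server;   # port only; default virtual host for this port
listen unix:/var/run/nginx.sock backlog=511;  # UNIX domain socket
```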
18. server_name name ...;
Configures name-based virtual hosts. When a request matches more than one server_name, processing priority is:
an exact server_name match first
then the longest matching server_name starting with a wildcard
then the longest matching server_name ending with a wildcard
then the first matching regular-expression server_name
If several names match within the same class, the request is handled by the first one that matched.
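One example per priority class (the names are placeholders):

```nginx
server { server_name www.rmohan.com; }      # 1. exact match
server { server_name *.rmohan.com; }        # 2. wildcard at the beginning
server { server_name www.rmohan.*; }        # 3. wildcard at the end
server { server_name ~^.+\.rmohan\.com$; }  # 4. regular expression
```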
19. root path;
Configures the root path for requests: after receiving a request, the server looks for the requested resource in the file directory specified by path.
20. alias path; (used inside a location block)
Changes the request path for URIs received by that location; the path may be followed by variable information.
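The practical difference between root and alias (paths are placeholders): root appends the full URI to the path, while alias replaces the matched location prefix:

```nginx
location /img/ {
    root /data;        # /img/a.png is served from /data/img/a.png
}
location /pic/ {
    alias /data/img/;  # /pic/a.png is served from /data/img/a.png
}
```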
21. index file ...;
Sets the website's default index pages.
22. error_page code ... [=[response]] uri;
Configures error pages:
code: the HTTP error code(s) to handle
response: optional; replaces the given error code with a new response code
uri: the path or URL of the error page
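A sketch of item 22 (the page paths are placeholders):

```nginx
error_page 404 /404.html;                    # custom page for 404
error_page 500 502 503 504 =200 /50x.html;   # serve the page with code 200
```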
23. allow address | CIDR | all;
Configures IP-based access permission:
address: the IP of a client allowed access; multiple addresses cannot be given in one value
CIDR: a CIDR block of clients allowed access, e.g. 185.199.110.153/24
all: all clients may access
24. deny address | CIDR | all;
Configures IP-based access denial:
address: the IP of a client denied access; multiple addresses cannot be given in one value
CIDR: a CIDR block of clients denied access, e.g. 185.199.110.153/24
all: all clients are denied
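Items 23 and 24 are usually combined; rules are evaluated in order and the first match wins (the network is a placeholder):

```nginx
location /admin/ {
    allow 192.168.1.0/24;  # permit the internal network
    deny all;              # reject everyone else
}
```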
25. auth_basic string | off;
Configures password-based access to Nginx:
string: enables authentication; the string is the realm text shown in the authentication prompt
off: disables authentication
26. auth_basic_user_file file;
Configures the password file for password-based access; file must be an absolute path.
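A sketch combining items 25 and 26 (the file path is a placeholder; such a file can be generated with the htpasswd utility):

```nginx
location /secure/ {
    auth_basic "Restricted Area";               # enables auth; realm text
    auth_basic_user_file /etc/nginx/.htpasswd;  # absolute path required
}
```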