Introduction
With the rapid growth of the Internet, website stability and performance have become core competitive strengths for businesses. Load balancing is a key technique for improving a site's availability and processing capacity, and it is widely used in Internet architectures. Nginx, a high-performance HTTP and reverse proxy server, has become a first choice for many developers thanks to its light weight, high concurrency, and rich module ecosystem. This article introduces how to use Nginx's Keepalive capabilities to build a highly available load balancing setup.
What is Keepalive
Keepalive is a connection-reuse technique: the TCP connection between client and server is kept open across requests instead of being closed after each one. This avoids the overhead of repeatedly setting up and tearing down TCP connections, which improves performance. In Nginx, the Keepalive feature can be combined with load balancing so that a pool of long-lived connections is maintained while requests are distributed across multiple backend servers.
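The effect of connection reuse can be seen outside Nginx as well. As a rough illustration (the tiny local server here is hypothetical and only exists for the demo), the sketch below issues two HTTP/1.1 requests over a single `http.client.HTTPConnection`; because HTTP/1.1 keeps the connection alive by default, the second request reuses the same TCP socket instead of performing a new handshake:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 connections are persistent by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Start a throwaway local server on a random free port
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
first = conn.getresponse().read()
sock_id = id(conn.sock)            # remember which socket object was used
conn.request("GET", "/")           # second request: no new TCP handshake
second = conn.getresponse().read()
reused = id(conn.sock) == sock_id  # same socket object => connection was reused

print(first, second, reused)
server.shutdown()
```

This is exactly the saving that Nginx's Keepalive support aims for, applied both to client connections and (with the right configuration) to connections to the backend.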
Configure Nginx Keepalive
1. Nginx main configuration file
First, we need to enable Keepalive in Nginx's configuration. In the main configuration file (usually under /etc/nginx/), add the following:
```nginx
http {
    # ...other configuration...

    upstream backend {
        # Load balancing policy (round robin, least connections, etc.);
        # the specific policy and server list are omitted here
        # Keep a pool of idle connections to the backend servers open for reuse
        keepalive 32;
    }

    server {
        # ...other server configuration...

        location / {
            proxy_pass http://backend;
            # Enable Keepalive towards the backend
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            # Set the Keepalive timeout
            proxy_set_header Keep-Alive "timeout=60";
            # ...other proxy-related configuration...
        }
    }
}
```

In the configuration above, we define an upstream group named backend that holds the load balancing policy for the backend servers; the keepalive directive tells Nginx to keep a pool of idle connections to those servers open for reuse (32 here is an illustrative value). In the server block we proxy requests for the / path to that group and enable Keepalive: proxy_http_version must be set to 1.1, since upstream keepalive requires HTTP/1.1, and the Connection and Keep-Alive headers are set via the proxy_set_header directive. The timeout value is how long an idle connection stays open when no data is being transferred.
2. Nginx subconfig file
If you manage different virtual hosts with separate Nginx sub-configuration files, enable Keepalive in the corresponding file. For example, for a virtual host you can create a sub-configuration file under /etc/nginx/ and add the following:

```nginx
server {
    listen 80;
    server_name ;

    location / {
        # ...other location configuration...
        proxy_pass http://backend;
        # Enable Keepalive towards the backend
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        # Set the Keepalive timeout
        proxy_set_header Keep-Alive "timeout=60";
        # ...other proxy-related configuration...
    }
}
```
3. Load balancing strategy
In addition to Keepalive, we also need to configure an appropriate load balancing policy in Nginx. Nginx supports several load balancing algorithms, such as round robin, least connections, and IP hash. Choose the strategy that fits your application's needs.
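To make the difference between the two most common algorithms concrete, the sketch below models their selection logic in a few lines of Python. This is a deliberately simplified, hypothetical model (the server names are placeholders, and real Nginx also tracks weights and failures), not Nginx's implementation:

```python
import itertools

servers = ["app1", "app2", "app3"]  # placeholder backend names

# Round robin: cycle through the servers in order.
rr = itertools.cycle(servers)
round_robin_picks = [next(rr) for _ in range(5)]

# Least connections: pick the server with the fewest active connections.
active = {"app1": 4, "app2": 1, "app3": 2}

def least_conn(active_counts):
    return min(active_counts, key=active_counts.get)

pick = least_conn(active)

print(round_robin_picks, pick)
# round robin rotates: app1, app2, app3, app1, app2
# least connections chooses app2, which has the fewest active connections
```

Round robin is a good default for uniform backends; least connections tends to behave better when request durations vary widely.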
For example, a round-robin configuration (Nginx's default policy) looks like this:

```nginx
upstream backend {
    server ;
    server ;
    # ...other backend servers...
}
```
4. Health check
To ensure backend availability, we can configure Nginx to health-check the backend servers. When a backend server becomes unavailable, Nginx redirects requests to the remaining healthy servers.
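The bookkeeping behind such a check — mark a server down after repeated failures within a window, then try it again after a cooldown — can be sketched as follows. This is a simplified, hypothetical model of Nginx's passive health checking, not its actual implementation:

```python
import time

class BackendState:
    """Simplified model: after max_fails failures within fail_timeout
    seconds, the server is considered down for fail_timeout seconds."""

    def __init__(self, max_fails=3, fail_timeout=30.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.failures = []      # timestamps of recent failures
        self.down_until = 0.0   # time until which the server is marked down

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        # keep only failures inside the fail_timeout window
        self.failures = [t for t in self.failures if now - t < self.fail_timeout]
        self.failures.append(now)
        if len(self.failures) >= self.max_fails:
            self.down_until = now + self.fail_timeout
            self.failures.clear()

    def is_available(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.down_until

backend = BackendState(max_fails=3, fail_timeout=30.0)
for t in (0.0, 1.0, 2.0):          # three failures in quick succession
    backend.record_failure(now=t)

down_now = not backend.is_available(now=2.0)  # marked down after 3 failures
up_later = backend.is_available(now=40.0)     # eligible again after fail_timeout
print(down_now, up_later)
```

Real deployments often add active health checks (periodic probe requests) on top of this passive scheme.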
Nginx's built-in passive health checks are configured with the max_fails and fail_timeout parameters on each server entry:

```nginx
upstream backend {
    server  max_fails=3 fail_timeout=30s;
    server  max_fails=3 fail_timeout=30s;
}
```

In Nginx, load balancing is implemented with the `upstream` module, while the `keepalive` directive maintains the connections to the backend servers. Here is a simple Nginx configuration example that shows how to combine these two features to achieve highly available load balancing:

```nginx
http {
    upstream backend {
        # Define the load-balanced server group
        server ;
        server ;
        # More servers can be added to this group
    }

    server {
        listen 80;
        server_name ;

        location / {
            # Proxy to the load-balanced backend group
            proxy_pass http://backend;
            # Keep the connection to the backend server alive
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            # Timeouts that prevent the connection from being closed
            # after long periods of inactivity
            proxy_read_timeout 60s;
            proxy_send_timeout 60s;
        }
    }
}
```
In this example, we define a load balancing group named backend containing two servers. We then set up a virtual host listening on port 80 and use the proxy_pass directive to proxy all requests for the root path (/) to the backend group.
To maintain long connections to the backend servers, we use the proxy_http_version directive to set the HTTP protocol version to 1.1, so that persistent connections and the Connection header can be used. We then use the proxy_set_header directive to set Connection to Keep-Alive, which tells the backend server that we want to keep the connection open.
We also set the proxy_read_timeout and proxy_send_timeout directives, which control how long Nginx waits when reading a response from, or sending a request to, the backend. Tuning these prevents connections from being dropped after short periods of inactivity, helping keep the long connections usable.
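At the socket level, a read timeout like proxy_read_timeout simply means "give up if the peer sends nothing for this long". The hypothetical demo below reproduces that behavior directly: a throwaway server accepts a connection but stays silent, and the client's 0.5-second read timeout fires:

```python
import socket
import threading
import time

# A server that accepts a connection but never responds (demo only)
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

def silent_server():
    conn, _ = listener.accept()
    time.sleep(2.0)  # stay silent longer than the client is willing to wait
    conn.close()

threading.Thread(target=silent_server, daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.settimeout(0.5)  # analogous to proxy_read_timeout 0.5s
timed_out = False
try:
    client.recv(1)      # blocks: the server never sends anything
except socket.timeout:
    timed_out = True    # this is what a read timeout looks like
client.close()

print(timed_out)
```

Set the timeout too low and healthy-but-slow backends get cut off; too high and dead connections linger. The 60s values in the example above are a common middle ground, not a universal answer.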
In practice you may need to adjust these timeouts to your specific needs and add health check mechanisms to ensure backend availability. A third variation combines Keepalive with an explicit load balancing algorithm: Keepalive keeps idle connections open to reduce latency and improve performance, while the balancer distributes traffic across multiple backend servers. The example below uses the least-connections algorithm together with Keepalive to improve service availability and performance.
First, make sure your Nginx version supports Keepalive and load balancing. Then, add the following configuration blocks to your Nginx configuration file:
```nginx
http {
    upstream backend {
        # Use the least-connections algorithm to decide
        # which backend server gets each request
        least_conn;
        # Addresses and ports of the backend servers
        server :80;
        server :80;
        # More backend servers can be added
    }

    server {
        listen 80;

        # How long an idle client connection is kept open
        keepalive_timeout 60s;
        # Maximum number of requests served over one connection
        keepalive_requests 1000;

        location / {
            # Distribute incoming requests to the backend group
            proxy_pass http://backend;
            # Proxy-related headers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # ...other proxy configuration...
        }
    }
}
```
In the above configuration:
- The `upstream backend` block defines a load balancing group named `backend` containing multiple backend servers.
- The `least_conn` directive tells Nginx to use the least-connections algorithm when selecting a backend server.
- The `server` block defines the port Nginx listens on and its Keepalive parameters.
- The `keepalive_timeout` directive sets how long an idle connection between the client and Nginx stays open.
- The `keepalive_requests` directive sets the maximum number of requests allowed on each connection.
- The `location` block defines how requests are proxied to the backend servers.
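The effect of the keepalive_requests limit can be illustrated with a small model (hypothetical and simplified; the class and its names are invented for this sketch): once a connection has served the configured number of requests, it is closed, and the next request must open a fresh connection.

```python
class KeepaliveConnection:
    """Toy model of a connection governed by a keepalive_requests limit."""

    def __init__(self, keepalive_requests=1000):
        self.limit = keepalive_requests
        self.served = 0
        self.open = True

    def handle_request(self):
        if not self.open:
            raise RuntimeError("connection already closed")
        self.served += 1
        if self.served >= self.limit:
            self.open = False  # the connection is closed once the limit is hit
        return self.open       # whether the connection survives this request

conn = KeepaliveConnection(keepalive_requests=3)
results = [conn.handle_request() for _ in range(3)]
print(results)
# the third request is served, but the connection does not stay open after it
```

Periodically recycling connections like this bounds per-connection memory growth, which is why the limit exists at all.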
Please note that this is just a basic configuration example, and it may need to be adjusted according to your specific needs in the actual production environment. For example, you may need to add health checks to ensure the availability of the backend server, or adjust the Keepalive parameters according to your performance requirements.
In addition, if your backend is an HTTP/HTTPS service, you may also need to configure Nginx's proxy and SSL settings. For HTTPS services, a common pattern is to terminate the SSL connection at the Nginx reverse proxy and use plain HTTP between Nginx and the backend servers.
Finally, make sure to perform adequate testing in the test environment before deploying any new configuration to ensure the validity and security of the configuration.
This concludes the detailed walkthrough of using Nginx with Keepalive to achieve highly available load balancing. For more on Nginx Keepalive load balancing, please check out my other related articles!