SoFunction
Updated on 2025-03-03

Implementation of polling mechanism in Nginx

Nginx is a high-performance web server and reverse proxy server that performs particularly well under large-scale concurrency. Load balancing is a key feature when using Nginx as a reverse proxy, and round robin (polling) is the most common and basic load balancing algorithm. This article introduces Nginx's round-robin mechanism in detail, along with its behavior and configuration in practice.

1. Introduction to the ordinary polling mechanism

Round Robin is a simple load balancing algorithm that distributes client requests to the backend servers in turn. Given multiple backend servers, Nginx assigns each new request to the next server in order; when the last server is reached, it loops back to the first server and continues assigning requests.

The advantage of this mechanism lies in its simplicity and balance, which is suitable for scenarios with relatively balanced loads, especially when the back-end server configuration is similar and there is no significant performance difference.
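The rotation described above can be sketched in a few lines of Python (the addresses below are simply the example backends used later in this article; Nginx's real implementation is in C inside its upstream module):

```python
from itertools import cycle

# Hypothetical backend pool mirroring the upstream block shown later
backends = ["127.0.0.1:8081", "127.0.0.1:8082", "127.0.0.1:8083"]

# cycle() yields the servers in a fixed, repeating order:
# after the last server, it wraps around to the first again
picker = cycle(backends)

first_six = [next(picker) for _ in range(6)]
print(first_six)
# Six requests cover the three backends exactly twice, in order
```

This is the entire idea of plain round robin: no server state is consulted, only the position in the rotation.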

1.1 Verification of ordinary polling mechanism

We can start multiple backend servers, configure Nginx to use round robin, and then send requests from a client to see whether they are evenly distributed across the backends.

Nginx configuration:

http {
    upstream backend_servers {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}

Backend server (simple Python Flask server):

Create three Flask servers listening on different ports.

# backend1.py (listens on port 8081)
from flask import Flask
app = Flask(__name__)

@app.route('/')
def home():
    return 'Response from backend 1'

if __name__ == '__main__':
    app.run(port=8081)

# backend2.py (listens on port 8082)
from flask import Flask
app = Flask(__name__)

@app.route('/')
def home():
    return 'Response from backend 2'

if __name__ == '__main__':
    app.run(port=8082)

# backend3.py (listens on port 8083)
from flask import Flask
app = Flask(__name__)

@app.route('/')
def home():
    return 'Response from backend 3'

if __name__ == '__main__':
    app.run(port=8083)

verify:

You can use curl, or write a simple Python script, to send requests multiple times and verify whether they are evenly distributed.

# Use curl to send multiple requests
for i in {1..10}; do curl http://localhost; done

Each request will get output similar to the following, and the polling mechanism will return the responses of different backend servers in turn:

Response from backend 1
Response from backend 2
Response from backend 3
Response from backend 1
...
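Rather than eyeballing the curl output, you can tally the collected response bodies with Python's collections.Counter. The list below is hypothetical stand-in data; in practice you would append each response body you receive to it:

```python
from collections import Counter

# Hypothetical bodies collected from 9 requests against http://localhost
responses = [
    'Response from backend 1', 'Response from backend 2', 'Response from backend 3',
    'Response from backend 1', 'Response from backend 2', 'Response from backend 3',
    'Response from backend 1', 'Response from backend 2', 'Response from backend 3',
]

counts = Counter(responses)
for backend, n in sorted(counts.items()):
    print(f'{backend}: {n}')
# With plain round robin, each backend should appear roughly the same number of times
```

A strongly skewed tally would indicate that a backend is down (Nginx skips failed servers) or that a non-default balancing policy is in effect.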

2. Polling mechanism configuration in Nginx

Nginx uses the polling mechanism to configure load balancing. When you only need to define the backend server cluster and do not set up other specific load balancing policies, Nginx uses the polling algorithm by default.

Sample configuration:

http {
    upstream backend_servers {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}

In this configuration, the upstream block defines a server group named backend_servers containing three backend servers. When a client sends a request to the address Nginx listens on, Nginx distributes the requests to the three backend servers in sequence, implementing the basic polling mechanism.

2.1 Weighted polling verification

Verify the weighted polling configuration to confirm that servers with higher weights receive more requests.

Nginx configuration:

http {
    upstream backend_servers {
        server 127.0.0.1:8081 weight=3;
        server 127.0.0.1:8082 weight=1;
        server 127.0.0.1:8083 weight=2;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}

In this configuration, backend1 (port 8081) has a weight of 3, backend2 (port 8082) a weight of 1, and backend3 (port 8083) a weight of 2. Requests are allocated in proportion to these weights: backend1 receives the most requests, backend3 the next most, and backend2 the fewest.

verify:

You can again use curl, or write a script, to test.

# Use curl to send multiple requests
for i in {1..10}; do curl http://localhost; done

After sending multiple requests, you should see that backend1 receives more requests, with results similar to the following:

Response from backend 1
Response from backend 1
Response from backend 3
Response from backend 1
Response from backend 2
Response from backend 1
Response from backend 3

3. Weighted Round Robin

In practical applications, there may be differences in the hardware configuration and processing capabilities of the backend server. For example, some servers have higher performance and are able to handle more requests. In this case, the normal polling mechanism is not ideal because it will evenly assign requests to all servers without taking into account the processing power of each server.

To handle this, Nginx provides a Weighted Round Robin, where you can assign a weight value to each server. The higher the weight value, the more requests the server is assigned.

Example configuration for weight polling:

http {
    upstream backend_servers {
        server 127.0.0.1:8081 weight=3;  # Weight is 3
        server 127.0.0.1:8082 weight=1;  # Weight is 1
        server 127.0.0.1:8083 weight=2;  # Weight is 2
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}

In this configuration, backend1 has a weight of 3, backend2 a weight of 1, and backend3 a weight of 2. Nginx will therefore assign the most requests to backend1, followed by backend3, and finally backend2. This ensures that higher-performance servers handle more requests, making resource allocation more reasonable.
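Nginx's default upstream module implements a "smooth" weighted round robin that interleaves picks rather than sending all of a heavy server's share in a burst. A simplified Python sketch of that selection rule (server names are illustrative):

```python
def smooth_weighted_rr(weights, n):
    """Simplified sketch of Nginx-style smooth weighted round robin.

    weights: mapping of server name -> configured weight
    Returns the first n picks.
    """
    current = {s: 0 for s in weights}       # per-server running score
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        # Each round, every server's score grows by its weight...
        for s, w in weights.items():
            current[s] += w
        # ...the highest score wins the round...
        best = max(current, key=current.get)
        # ...and the winner is penalized by the total weight,
        # which spreads its turns out instead of clustering them.
        current[best] -= total
        picks.append(best)
    return picks

print(smooth_weighted_rr({'backend1': 3, 'backend2': 1, 'backend3': 2}, 6))
```

Over one full cycle of 6 picks, each backend is chosen exactly as many times as its weight, but backend1's three turns are spaced out across the cycle.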

4. Pros and cons analysis

advantage:

  • Simple and efficient: The implementation of the polling mechanism is very simple, without the need for additional complex algorithms and calculations.
  • Fairness: With the same performance of the backend server, the polling mechanism can ensure that the requests are evenly distributed to each server.
  • Easy to configure: Polling can be used under the default configuration, which is suitable for beginners to get started quickly.

shortcoming:

  • Ignore server performance differences: Normal polling cannot be optimized for allocation based on the actual load and performance differences of the server, which may cause some servers to be overloaded while others are underprocessed.
  • Not suitable for high dynamic scenarios: The polling mechanism does not consider the real-time state of the server, such as the current load, number of connections, etc., and it is difficult to adapt to some dynamically changing scenarios.

5. Optimization direction of polling mechanism

Although the polling mechanism is simple and efficient, in some complex scenarios, optimizing the load balancing algorithm can bring better results. In addition to providing a polling mechanism, Nginx also supports other load balancing algorithms:

  • least_conn: Assign requests to the server with the lowest number of active connections, which is more suitable for long connection scenarios.
  • ip_hash: According to the client's IP address, the requests for the same IP are always assigned to the same server, suitable for scenarios where sessions need to be maintained.

You can choose appropriate load balancing strategies based on actual needs to improve the performance and stability of the system.
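For reference, switching to either alternative only requires one extra directive inside the upstream block (the addresses here are the example backends used earlier in this article):

```nginx
upstream backend_servers {
    least_conn;            # or: ip_hash;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}
```

With least_conn, weights from the previous section can still be combined with the directive; ip_hash instead keys the choice on the client address, so the same client keeps hitting the same backend.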

6. Summary

Nginx's polling mechanism is the basis of its load balancing function, and is especially suitable for simple load balancing scenarios. By using the polling mechanism, Nginx can evenly distribute traffic to the backend server and solve server performance differences through weight polling. However, for more complex scenarios, it can also be optimized in combination with other load balancing algorithms.

In practical applications, it is very important to choose the appropriate load balancing algorithm, which directly affects the performance and reliability of the system. Although the polling mechanism is simple, its efficiency and ease of use make it one of the preferred methods for load balancing, especially suitable for small and medium-sized projects or back-end server performance balance scenarios.

By rationally configuring and optimizing Nginx's load balancing strategy, the processing capabilities and stability of web services can be effectively improved.

This is the end of this article about the implementation of the polling mechanism in Nginx.