
How to use nginx to proxy ws or wss requests

Problem encountered: How to use nginx to proxy ws or wss requests

The background: to cut costs and improve efficiency, two servers had to be merged, but after the merge the processes from the two servers had conflicting externally facing listening ports. Changing a port would also mean changing the client, and since the client version is very old, modifying and redistributing it would take a very long time, so we looked for another way to avoid changing the ports.

Because this deployment is quite old, the clients connect directly to the server instead of going through a load balancer first. For this problem, we came up with the following two solutions.

  • Multiple network cards: for example, the listening ports of server A are all bound to IP A, and the listening ports of the processes migrated in from server B are all bound to IP B, so the same port numbers can be listened on within one server. We then only need to point different domain names at IP A and IP B respectively, and the server migration is invisible to the client.
  • Use nginx for port mapping

In the end we chose the lower-cost option, nginx port mapping. The configuration is as follows:

server {
    listen 7865 ssl;                     # port to listen on (the original port the client connects to)
    server_name ;                        # domain name to serve; not required
    ssl_certificate      /etc/nginx/;    # certificate
    ssl_certificate_key  /etc/nginx/;    # certificate key
    ssl_session_timeout 20m;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_verify_client off;

    location / {
        proxy_pass https://127.0.0.1:7899;  # the new backend port; key point: use https here if the backend uses SSL, otherwise use http
        proxy_http_version 1.1;             # HTTP/1.1 is required for the WebSocket upgrade
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
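As noted in the proxy_pass comment, when no SSL is involved the same mapping works for plain ws. A minimal sketch, reusing the same illustrative port numbers:

server {
    listen 7865;                            # original port the client connects to with ws://
    location / {
        proxy_pass http://127.0.0.1:7899;   # plain http to the backend when there is no SSL
        proxy_http_version 1.1;             # still required for the WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}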

We can treat this layer of nginx reverse proxy as transparent: the client keeps a long-lived connection to nginx, and nginx keeps a long-lived connection to the backend server. Note that if the backend process is restarted, the client will still notice, because its connection is dropped and has to be re-established.
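One practical note (not part of the configuration above, just a common adjustment): by default nginx closes a proxied connection after 60 seconds without data (proxy_read_timeout), which can cut off idle WebSocket connections. If connections may stay idle for long periods, the timeouts in the location block can be raised, for example:

        proxy_read_timeout 3600s;   # how long nginx waits for data from the backend before closing
        proxy_send_timeout 3600s;   # how long nginx waits while sending data to the backend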

Expansion 1: Load balancing classification

If we classify load balancers by the layer of the OSI seven-layer model at which they operate, they can be divided into layer-4 load balancing and layer-7 load balancing.

Layer-4 load balancing

For example, LVS and AWS NLB are layer-4 load balancers. The principle: the load balancer reads the source IP address and port and the destination IP address and port from the packet, picks a backend server (the algorithm may be round-robin, weighted, and so on), and rewrites the destination IP address and port in the packet so that the traffic is forwarded to that backend.

The request path is:

  • The client sends the data packet to the load balancer
  • The load balancer decides which backend server the request is ultimately forwarded to, based on the source IP address + port and the destination IP address + port
  • The load balancer rewrites the destination IP and port
  • The packet is forwarded to the specified backend server

The response path is:

  • The backend program processes the business logic
  • If the load balancer does NAT (the purpose is to keep the backend transparent and hide the backend servers; the principle is to rewrite the source IP address and port of the response to the load balancer's own IP address and port), the response packet goes back through the load balancer
  • Return to the client

If the load balancer does not do NAT, or is configured for direct return, the response packets will not pass through the load balancer. This can be confirmed by checking the source IP address and port of captured traffic.

The benefit of a layer-4 load balancer: it does not parse the packet payload, it only decodes the headers to obtain IP addresses and ports, so it is comparatively fast. When do we need a layer-4 load balancer?

  • When the protocol is not HTTP
  • When higher performance is needed
  • When the scenario is simple
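nginx itself can also act as a layer-4 load balancer through its stream module. A minimal sketch, assuming two hypothetical TCP backends; the stream block sits at the same level as the http block in nginx.conf:

stream {
    upstream tcp_backend {
        # layer-4 balancing: only IP/port are considered, the payload is never inspected
        server 127.0.0.1:7899 weight=2;   # hypothetical backend, receives twice the traffic
        server 127.0.0.1:7900;            # hypothetical backend
    }

    server {
        listen 9000;                 # TCP port exposed to clients
        proxy_pass tcp_backend;      # forward the raw TCP stream to the chosen backend
    }
}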

Layer-7 load balancing

For example, nginx and IIS are layer-7 load balancers.

A layer-7 load balancer is mainly used to handle application-layer protocols such as HTTP/HTTPS. It can not only balance based on transport-layer information (such as IP addresses and ports), but can also analyze application-layer data in depth, providing more flexible and advanced traffic management.

Principle: a layer-7 load balancer can parse and inspect application-layer data such as HTTP headers, URLs, cookies, and SSL information, which allows it to make intelligent routing decisions based on that content. Because it works at a higher level, it can do more:

  • Content-based forwarding: requests can be directed to different server pools based on the URL path, domain name, HTTP method, or other application-layer data; for example, routing all requests under "/images" to the image servers (see the nginx sketch after this list)
  • Session persistence: continuity of user sessions is maintained via cookies or other mechanisms, ensuring that all requests from the same client are forwarded to the same backend server
  • Request redirection: the request content or path can be modified, or the request can be redirected to another resource or server
  • Data compression and caching: responses can be compressed to reduce the amount of data transmitted, and responses to common requests can be cached to reduce the load on the backend servers
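To make the first two points concrete, here is a rough nginx sketch of content-based forwarding plus a simple form of session persistence (the upstream names and backend addresses are hypothetical):

http {
    upstream app_servers {
        ip_hash;                           # simple session persistence: same client IP -> same backend
        server 10.0.0.11:8080;             # hypothetical application backend
        server 10.0.0.12:8080;
    }

    upstream image_servers {
        server 10.0.0.21:8080;             # hypothetical image backend
    }

    server {
        listen 80;

        location /images/ {
            proxy_pass http://image_servers;   # content-based forwarding by URL path
        }

        location / {
            proxy_pass http://app_servers;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}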

Request path:

  • The client sends a request to the load balancer
  • The load balancer analyzes the request content and forwards it accordingly
  • The request reaches the chosen backend process

Response path:

  • The backend processes the business logic
  • The response goes back through the load balancer
  • The load balancer applies optimizations such as caching and compression
  • The response is returned to the client

Notice that the response passes through the load balancer. In effect, the client establishes a connection with nginx, and nginx establishes a connection with the backend process; together these two connections carry the forwarded requests and their responses.

Expansion 2: What other kinds of load balancing are there?

Load balancing can also be achieved with multiple RabbitMQ consumers on the same queue (the broker spreads messages across the consumers).

Load balancing can also be achieved with gRPC plus etcd (etcd provides service discovery, and the gRPC client balances across the discovered instances).
