1. Why choose Nginx
Nginx is a high-performance HTTP and reverse proxy server that has the following advantages:
- 1. High performance: Nginx excels at serving static files and can respond quickly to large numbers of concurrent requests.
- 2. Reverse proxy: Requests can be forwarded to different back-end services, enabling front-end/back-end separation.
- 3. Load balancing: Multiple load-balancing strategies are supported, improving service availability and performance.
- 4. Flexible configuration: The syntax is concise and easy to understand and maintain.
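Point 3 is visible directly in Nginx's configuration syntax. As a quick illustration (the upstream name and server addresses are placeholders, not part of this article's deployment), the built-in strategies are selected inside an `upstream` block:

```nginx
# Hypothetical upstream showing Nginx's built-in balancing strategies.
upstream api_pool {
    # Default strategy is round-robin. Uncomment ONE line to change it:
    # least_conn;    # prefer the server with the fewest active connections
    # ip_hash;       # pin each client IP to the same backend server

    server 10.0.0.1:8080 weight=2;  # weighted round-robin: ~2x the traffic
    server 10.0.0.2:8080;
}
```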
2. Basic configuration
Suppose we have a front-end project (e.g. React) and a back-end API service (in any language) that we need to deploy together with Nginx.
First, install Nginx:
```shell
# Ubuntu system
sudo apt-get update
sudo apt-get install nginx
```
Then, modify the Nginx configuration file (usually located at `/etc/nginx/sites-available/default`):
```nginx
server {
    listen 80;
    server_name your_domain.com;

    # Front-end static file configuration
    location / {
        root /path/to/your/frontend/dist;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # Backend API request forwarding
    location /api/ {
        proxy_pass http://your_backend_server:port/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
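One subtlety in the configuration above is worth calling out: the trailing slash on `proxy_pass` changes how the request URI is rewritten before it reaches the backend. A minimal sketch (the backend address `backend:3000` is a placeholder):

```nginx
location /api/ {
    # With a trailing slash, the /api/ prefix is stripped:
    #   GET /api/users  ->  http://backend:3000/users
    proxy_pass http://backend:3000/;
}

# location /api/ {
#     # Without a trailing slash, the full URI is passed through:
#     #   GET /api/users  ->  http://backend:3000/api/users
#     proxy_pass http://backend:3000;
# }
```

Pick the variant that matches how your backend's routes are defined; mixing them up is a common cause of unexpected 404s.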
In this configuration, we did the following:
- Configured the root directory for front-end static files, so all requests to `/` are served from that directory.
- Used the `try_files` directive to handle front-end routing, ensuring that single-page application routes resolve correctly.
- Forwarded all requests beginning with `/api/` to the backend server.
3. Optimize configuration
1. Static resource cache
To improve performance, we can set up caches for static resources:
```nginx
location / {
    root /path/to/your/frontend/dist;
    index index.html;
    try_files $uri $uri/ /index.html;

    # Set cache headers
    expires 7d;
    add_header Cache-Control "public, no-transform";
}
```
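Note that a blanket 7-day cache also applies to the HTML entry point, which can delay users from seeing new releases. A common refinement (the `/assets/` path is an assumption based on typical build output; adjust it to your bundler's layout) is to cache fingerprinted assets aggressively while keeping the HTML uncacheable:

```nginx
# Long cache for fingerprinted build assets (e.g. app.3f2a1b.js),
# which change their filename on every build.
location /assets/ {
    root /path/to/your/frontend/dist;
    expires 1y;
    add_header Cache-Control "public, immutable";
}

# Never cache the HTML entry point, so each deploy is picked up immediately.
location = /index.html {
    root /path/to/your/frontend/dist;
    add_header Cache-Control "no-cache";
}
```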
2. Gzip compression
Turn on Gzip compression to reduce the amount of data transmitted:
```nginx
gzip on;
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
```
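In practice these two directives are usually paired with a few tuning options. A hedged sketch (the values shown are common starting points, not requirements):

```nginx
gzip on;
gzip_comp_level 5;     # 1-9; 5 is a common CPU/size trade-off
gzip_min_length 1024;  # skip tiny responses where gzip overhead dominates
gzip_vary on;          # emit "Vary: Accept-Encoding" for downstream caches
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
```

`text/html` is always compressed when gzip is on, so it does not need to appear in `gzip_types`.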
3. Load balancing
If there are multiple backend instances, you can configure load balancing:
```nginx
upstream backend_servers {
    server backend1:port;
    server backend2:port;
    # More servers can be added
}

location /api/ {
    proxy_pass http://backend_servers/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
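The `upstream` block also accepts per-server parameters for weighting and failover. A sketch with placeholder addresses and ports:

```nginx
upstream backend_servers {
    least_conn;                     # send traffic to the least-busy instance

    server backend1:8080 weight=3;  # receives roughly 3x the traffic of backend2
    server backend2:8080 max_fails=3 fail_timeout=30s;  # taken out of rotation
                                    # for 30s after 3 consecutive failures
    server backend3:8080 backup;    # only used when all primary servers are down
}
```

This gives basic passive health checking and failover without any extra tooling.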
4. Testing and deployment
- 1. Test configuration: After modifying the configuration, first test whether the configuration file is correct.
```shell
sudo nginx -t
```
- 2. Reload Nginx: If the configuration is correct, reload Nginx to make the configuration take effect.
```shell
sudo systemctl reload nginx
```
- 3. Verify access: Visit the front-end page and the back-end API separately to confirm that both work properly.
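For step 3, a convenient trick is to expose a tiny health endpoint directly from Nginx, so you can confirm the server itself is up before debugging the applications behind it. This is a sketch, and the `/healthz` path is an assumption, not part of the original configuration:

```nginx
# Add inside the server block from section 2.
location = /healthz {
    access_log off;          # keep the logs free of health-check noise
    default_type text/plain;
    return 200 "ok\n";       # answered by Nginx itself, no backend involved
}
```

Then `curl -i http://your_domain.com/healthz` should return `200 ok` whenever Nginx is running, regardless of the state of the front-end files or the backend service.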
5. Summary
With Nginx, we can easily deploy the front end and back end together. This not only improves development efficiency but also optimizes the user experience. I hope this article helps you make better use of Nginx in your projects.