
Summary of 9 ways to deploy python web programs

The mainstream web servers can be counted on one hand: apache, lighttpd, nginx, iis

Application refers to the application server, that is, the application code you write on top of some web framework. DB server refers to storage services; MySQL is the most common in web development, and in recent years, as websites have grown in scale, storage such as memcache and redis has also become popular.
The web server placed in front serves three purposes

Efficiently serving static files: web servers are written in C, call native functions, and have targeted optimizations for IO and file transfer.

Acting as a simple network firewall: it can deny certain IPs, roughly limit the number of concurrent connections, and so on, which is better than nothing.

Handling highly concurrent short connections, forwarding the requests of thousands of users to the backend over a few dozen long-lived connections on the intranet. One reason is that the web server is very good at handling high concurrency; the other is that the frameworks most applications use cannot handle high concurrency themselves.

In fact, some web frameworks on the market have efficient network libraries such as epoll/kqueue built in and can handle high concurrency on their own, for example python's tornado, or java's tomcat and jetty. Some people drop the front-end web server and run these directly, but when deploying an application on the public network it is best not to, because the network conditions between users' browsers and your server are stranger than you can imagine.
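As a concrete illustration, here is roughly what running tornado as its own HTTP server looks like (a minimal sketch only; the handler and port 8888 are illustrative, not from the original post):

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)                  # serve HTTP directly on port 8888
    tornado.ioloop.IOLoop.instance().start()  # start the epoll/kqueue event loop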

For the web server, I strongly recommend nginx, for three reasons

Excellent performance, very stable
Simple installation, few dependencies
The conf file is easy to configure, simpler than apache/lighttpd
There are 9 ways to deploy a web program developed in python

mod_python, a built-in apache module. It is tightly bound to the python version it was compiled against and has to be used together with apache. Not recommended.

cgi, which is simply too old and not recommended. Moreover, nginx does not support the cgi method, so you can only use it with lighttpd or apache.

fastcgi, currently the most popular approach, supported on the python side by the flup module. The corresponding nginx configuration directive is fastcgi_pass.
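For instance, a minimal flup-based FastCGI setup might look like this (a sketch only; the app function and the 127.0.0.1:9000 address are assumptions, and on the nginx side fastcgi_pass would point at that same address):

from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    # trivial WSGI application, used only to show the wiring
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello over FastCGI\n']

if __name__ == '__main__':
    # nginx side: fastcgi_pass 127.0.0.1:9000;
    WSGIServer(app, bindAddress=('127.0.0.1', 9000)).run()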

spawn-fcgi, a multi-process manager for fastcgi programs, shipped with the lighttpd package. The effect is the same as flup; the difference is that flup is brought in at the python code level, while spawn-fcgi is an external program. spawn-fcgi is very useful and can support code written in any language, php, python, perl: as long as your code implements the fastcgi interface, it can manage the processes for you.

scgi, short for Simple Common Gateway Interface, is another alternative to cgi. The scgi protocol is very simple and, in my view, similar to fastcgi, but it has never been promoted much. The corresponding nginx directive is scgi_pass. You can use it if you want, and flup supports it too.

http, where nginx forwards with proxy_pass. This requires the backend application to run an http server that can itself handle high concurrency; among python web frameworks, tornado is essentially the only choice.

Python programmers like to reinvent the wheel. Besides being a web framework, tornado can also be used on its own as a high-performance http server. So even if you write your code with another python framework, such as bottle, you can import tornado to get a high-performance http server and deploy with the http protocol plus nginx in the same way. More broadly, there are plenty of python packages that provide an http server able to handle high concurrency; gevent, for example, can also be pulled into other frameworks to support http deployment.
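For example, a bottle app can be put behind tornado's HTTP server with something like this (a rough sketch; the route and port 8080 are made up, and nginx would proxy_pass to that port):

import bottle
import tornado.httpserver
import tornado.ioloop
import tornado.wsgi

@bottle.route('/')
def index():
    return 'Hello from bottle, served by tornado'

# wrap the bottle WSGI app in tornado's HTTP server
container = tornado.wsgi.WSGIContainer(bottle.default_app())
server = tornado.httpserver.HTTPServer(container)
server.listen(8080)
tornado.ioloop.IOLoop.instance().start()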

In practice, web programs written in Java are usually deployed the same way, http plus nginx, with tomcat or jetty as the application server.

uwsgi, which consists of 4 parts:

The uwsgi protocol
The web server's built-in protocol support module
The application server's protocol support module
The process control program

nginx has had built-in support for the uwsgi protocol since 0.8.4. The uwsgi protocol is very simple: a 4-byte header plus a body, and the body can encapsulate many protocols, such as http, cgi, etc. (indicated by a field in the header). I once ran a small-scale performance comparison, and the results showed no obvious advantage of uwsgi over fastcgi, possibly because the data set was too small.

The distinguishing feature of uwsgi is its own process manager. It is written in C and uses native functions, which makes it quite similar to spawn-fcgi/php-fpm. As a result, uwsgi can support many application frameworks and languages, including python, lua, ruby, erlang, go, and so on.
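To make the uwsgi workflow concrete, here is a minimal sketch (the file name app.py, the socket address and the worker count are illustrative assumptions, not from the original post):

# app.py - a plain WSGI application for the uwsgi process manager to load
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello over the uwsgi protocol\n']

# started by the uwsgi process manager, with nginx pointing uwsgi_pass at the same socket:
#   uwsgi --socket 127.0.0.1:3031 --wsgi-file app.py --master --processes 4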

Gunicorn, a tool similar to uwsgi, ported from the Rails deployment tool Unicorn. However, the protocol it uses is WSGI, short for Python Web Server Gateway Interface, the official standard defined in Python 2.5 (PEP 333). It has a solid pedigree and is relatively simple to deploy; there is a detailed tutorial for it.
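A minimal sketch of the Gunicorn case (the module name wsgi_app.py, the bind address and the worker count are assumptions for illustration):

# wsgi_app.py - a plain PEP 333 WSGI application
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from gunicorn\n']

# started with, for example:
#   gunicorn --workers 4 --bind 127.0.0.1:8000 wsgi_app:app
# nginx then forwards to it with proxy_pass, since gunicorn speaks plain http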

mod_wsgi, an apache module that also supports the WSGI protocol.

Comparing the advantages and disadvantages of the fastcgi and http protocols for deployment

Although fastcgi is a binary protocol, it does not save resources compared to the http protocol. A binary protocol only saves space when encoding numbers: for example, 1234567 needs 7 bytes as a string but only 4 bytes as a 32-bit integer, while a string is the same size either way.
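A quick way to check that arithmetic (pure illustration):

import struct

n = 1234567
print(len(str(n)))                 # 7 bytes as a decimal string
print(len(struct.pack('!I', n)))   # 4 bytes as a 32-bit unsigned integer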

When fastcgi transmits data, it also has to carry a pile of CGI environment variables in order to stay compatible with the cgi protocol. So compared with the http protocol, fastcgi does not transmit less data; it actually transmits more.

The real advantage of fastcgi is long-lived connections. If 1000 user requests arrive concurrently, fastcgi may forward them to the backend application over just 10 connections; with the http protocol, every request is passed straight through, and 1000 connections are opened to the backend application.

The http proxy forwarding approach runs into trouble under extremely high concurrency, because in the TCP stack a port is a 16-bit integer: every new outbound connection consumes a local port, and there are at most 65536 of them. Fire off hundreds of thousands of requests, exhaust the port pool, and your server can only refuse to respond.

Summary

My personal habit is to deploy python programs with the fastcgi protocol, which is simple and easy to use. When choosing a technical solution, always pick the simplest and most common one. The fastcgi startup script for this blog is as follows:

# the <...> parts are placeholders: fill in your own pid file, worker counts, port and log paths
kill `cat /tmp/<pidfile>`
echo 'restart django....'
python ./manage.py runfcgi --settings=lutaf.settings_r maxchildren=<N> maxspare=<N> minspare=<N> method=prefork pidfile=/tmp/<pidfile> host=127.0.0.1 port=<port> outlog=/tmp/<outlog> errlog=/tmp/<errlog>

I recommend everyone try Gunicorn; that is the direction things are heading.