Background: the company needs to migrate an existing monolithic project to microservices. After splitting it, the release must go out in batches: a subset of users is routed to the microservice modules while the remaining users stay on the traditional project. Once the microservices are stable and bug-free, all users are moved to the microservice system.

Given that background, this post implements a grayscale (canary) release with nginx + lua + redis: nginx + lua acts as the reverse proxy and obtains the client IP, while redis stores the IP information (the IPs allowed to access the microservices).
There are two ways to implement this.

The first: nginx + lua obtains the user IP, then a Lua script accesses the redis cluster directly, queries the IP, and returns the result.

The second: nginx + lua obtains the user IP, then a Lua script sends a request to a separate Redis cache service (a small standalone microservice), and the cache service returns the IP address.

At first I considered the first solution, but OpenResty does not have a well-supported package for redis cluster, and there is little documentation. With the second solution, the redis cache service can be factored out on its own, so it can serve not only nginx but other services as well.

Both solutions are discussed below, but the first one uses a standalone (single-node) redis.

I assume OpenResty and redis environments are already set up.
The first solution:
Add the following to the `http` block:

```nginx
# external configuration (the include path was omitted in the original)
include ;

# new system address
upstream new {
    server 192.168.1.103:8081;
}

# old system address
upstream old {
    server 192.168.1.103:8080;
}
```
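These upstreams are presumably referenced from a location that runs the Lua script plus two named locations. The original does not show this wiring, so the following is a sketch; the script path `/usr/local/openresty/nginx/conf/gray.lua` is an assumption:

```nginx
server {
    listen 80;

    location / {
        # route every request through the grayscale script (assumed path)
        content_by_lua_file /usr/local/openresty/nginx/conf/gray.lua;
    }

    # microservice (new system)
    location @new {
        proxy_pass http://new;
    }

    # traditional project (old system)
    location @old {
        proxy_pass http://old;
    }
}
```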
The Lua script:

```lua
-- Load the redis module (resty.redis supports single-node connections only)
local redis = require "resty.redis"
local cache = redis:new()
cache:set_timeout(60000)

-- Connect; if the connection fails, forward to @old (the traditional service)
local ok, err = cache:connect("192.168.19.10", 6379)
if not ok then
    ngx.exec("@old")
    return
end

-- If nginx has only one proxy layer, the next four lines can be omitted
local local_ip = ngx.req.get_headers()["X-Real-IP"]
if local_ip == nil then
    local_ip = ngx.req.get_headers()["x_forwarded_for"]
end

-- Fall back to the remote_addr variable; with a single nginx layer this is
-- the client IP, behind multiple layers it is not
if local_ip == nil then
    local_ip = ngx.var.remote_addr
end

-- Look the client IP up in redis; redis stores key:ip -> val:ip, and any
-- stored IP is allowed to access the microservice
local intercept = cache:get(local_ip)

-- Close the connection before the internal redirect (ngx.exec does not return,
-- so closing after it would never run)
local ok, err = cache:close()
if not ok then
    ngx.log(ngx.ERR, "failed to close: ", err)
end

-- Hit: forward to @new (the microservice)
if intercept == local_ip then
    ngx.exec("@new")
    return
end

-- Miss: forward to @old (the traditional service)
ngx.exec("@old")
```
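For completeness, the key:ip val:ip entries this script expects can be seeded with redis-cli (the sample IP is illustrative):

```shell
# allow 192.168.1.50 to use the microservice (key and value are both the IP)
redis-cli -h 192.168.19.10 -p 6379 set 192.168.1.50 192.168.1.50
```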
The logic is simple, but there are some problems: 1) for a redis cluster, multiple IPs need to be configured to survive a node going down; 2) connection handling — ideally connections would come from a pool. I won't go further into it here.
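On point 2, resty.redis does ship a built-in cosocket connection pool: instead of `cache:close()`, the connection can be returned to the pool with `set_keepalive`. The pool parameters below are illustrative:

```lua
-- return the connection to the per-worker pool instead of closing it:
-- set_keepalive(max_idle_timeout_ms, pool_size)
local ok, err = cache:set_keepalive(10000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end
```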
The second solution:
The redis lookup is replaced by an HTTP call to the cache service. The code is as follows:
```lua
-- address of the cache service
local backend = "http://192.168.1.156:8080"

-- access path of the cache service
local method = "httptest"
local requestBody = "/" .. method

-- load the http module (lua-resty-http)
local http = require("resty.http")
local httpc = http.new()

-- set the timeout (ms)
httpc:set_timeout(1000)

-- send the request
local resp, err = httpc:request_uri(backend, {
    method = "GET",
    path = requestBody,
    keepalive = false
})

-- if the request fails, fall back to the old system
if not resp then
    ngx.exec("@old")
    return
end

-- the cache service returns the matched IP (if any) in the response body
local isHave = resp.body

-- close the connection
httpc:close()

-- client IP
local local_ip = ngx.var.remote_addr

-- hit: route to the microservice
if isHave == local_ip then
    ngx.exec("@new")
    return
end

-- miss: route to the old system
ngx.exec("@old")
```
Only one cache address is shown here; in reality there are several, and one can be picked with a random value. The timeout must be set: if the cache service has not responded in time or is down, fall back directly to the old system.
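Picking one of several cache addresses at random can be sketched like this (the address list is illustrative):

```lua
-- hypothetical list of cache-service instances
local backends = {
    "http://192.168.1.156:8080",
    "http://192.168.1.157:8080",
}
-- pick one at random per request
local backend = backends[math.random(#backends)]
```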
In the example the cache service stores only exact IPs; in practice it stores IP network segments, and nginx matches the client IP string against them.
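A minimal sketch of such a match, assuming the cache service returns a dotted prefix such as `192.168.1.` (the segment format is my assumption; the original does not specify it):

```lua
-- hypothetical helper: true if client_ip falls inside the returned segment,
-- where the segment is a dotted prefix such as "192.168.1."
local function in_segment(client_ip, segment)
    if segment == nil or segment == "" then
        return false
    end
    -- plain (non-pattern) prefix comparison
    return string.sub(client_ip, 1, #segment) == segment
end

-- in_segment("192.168.1.57", "192.168.1.")  --> true
-- in_segment("192.168.2.57", "192.168.1.")  --> false
```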
Two optimizations are not shown here:
1. A connection pool could be used for the HTTP calls.
2. nginx's shared-memory cache can hold all hit IPs, so a request checks the local cache first and only falls through to the cache service on a miss, improving performance.
So far I have not found a suitable connection pool for the HTTP client, so it has been running without one, and the performance is acceptable.
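If the `resty.http` module in use is lua-resty-http, it does in fact support connection reuse: `request_uri` accepts `keepalive`, `keepalive_timeout`, and `keepalive_pool` options. The parameter values below are illustrative:

```lua
local resp, err = httpc:request_uri(backend, {
    method = "GET",
    path = requestBody,
    keepalive = true,            -- reuse the connection via the cosocket pool
    keepalive_timeout = 60000,   -- idle timeout in ms (illustrative value)
    keepalive_pool = 10,         -- pool size per worker (illustrative value)
})
```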
The second optimization caches directly in nginx.
Add the following to the `http` block. The syntax is `lua_shared_dict <name> <size>`:

```nginx
lua_shared_dict rediscache 100m;
```
The changed code is as follows:
```lua
local backend = "http://192.168.1.156:8080"

-- load the shared-memory zone declared by lua_shared_dict
local cache_ngx = ngx.shared.rediscache
local local_ip = ngx.var.remote_addr

-- try the local cache first
local cacheip = cache_ngx:get(local_ip)

-- on a local-cache miss, ask the cache service, then populate the local cache
if cacheip == "" or cacheip == nil then
    local http = require("resty.http")
    local httpc = http.new()
    httpc:set_timeout(1000)
    local method = "httptest"
    local requestBody = "/" .. method
    local resp, err = httpc:request_uri(backend, {
        method = "GET",
        path = requestBody,
        keepalive = false
    })
    -- if the cache service is unreachable, fall back to the old system
    if not resp then
        ngx.exec("@old")
        return
    end
    -- the cache service returns the matched IP (if any) in the response body
    cacheip = resp.body
    httpc:close()
    -- load into the local cache with a 10-minute expiry
    cache_ngx:set(local_ip, cacheip, 10 * 60)
end

-- hit: route to the microservice
if cacheip == local_ip then
    ngx.exec("@new")
    return
end

-- miss: route to the old system
ngx.exec("@old")
```
That concludes this article on implementing a grayscale release with nginx + lua + redis. For more on nginx, lua, and redis grayscale releases, search my earlier articles or browse the related articles below. I hope you will continue to support me!