Implementation of Docker Swarm combined with Docker Compose to deploy a cluster
Deploying a cluster with Docker Swarm combined with Docker Compose
1) Prepare the compose file; an example is shown below
version: "3" services: mysql_c: image: mysql environment: MYSQL_ROOT_PASSWORD: 123456 restart: always ports: - 3306:3306 volumes: - /root/mysql/:/etc/mysql/ - /root/mysql/data:/var/lib/mysql goweb1: image: gowebimg restart: always deploy: replicas: 6 # Number of copies resources: # resource limits: # Configure CPU cpus: "0.3" # Set this container to use up to 30% of CPU memory: 500M # Set this container to use up to 500M memory restart_policy: # Define container restart policy to replace restart parameter condition: on-failure # Restart only if there is a problem with the application inside the container depends_on: - mysql_c nginx: image: nginx restart: always ports: - 80:80 depends_on: - goweb1 volumes: - /root/nginx//:/etc/nginx/ deploy: replicas: 6 #Number of copies resources: #resource limits: #Configure cpu cpus: "0.3" # Set this container to use up to 30% of CPU memory: 500M # Set this container to use up to 500M memory restart_policy: # Define container restart policy to replace restart parameter condition: on-failure #Restart only if there is a problem with the application inside the container
- With docker-compose alone, multiple containers can be created on a single server
- If you want to create containers on multiple servers at once, you need to combine it with Swarm
- $
docker stack deploy --compose-file docker-compose.yml swarmName
- Here docker-compose.yml is the file prepared above, and swarmName is the name of the stack (deployment); you can choose it freely, for example goWebSwarm
- The problem with the above configuration is that mysql is not clustered
- MySQL needs its own separate cluster, which involves master-slave replication
- For convenience, it is configured directly in the yml file here
- The number of mysql replicas cannot be set to more than one here
- If there were multiple replicas on multiple servers, they would run on different hosts and the data would become inconsistent (see the placement sketch after this list)
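- To keep the single mysql replica on a fixed host, one option (a sketch using standard compose v3 placement constraints, not part of the original configuration) is to pin mysql_c to the manager node, under services:

  mysql_c:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    volumes:
      - /root/mysql/data:/var/lib/mysql
    deploy:
      replicas: 1                  # a single replica avoids inconsistent data files
      placement:
        constraints:
          - node.role == manager   # assumption: run the database only on the manager node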
2) Build a cluster
- $
docker swarm init --advertise-addr 192.168.1.10
Initialize the cluster and create a manager node (the host with the specified IP becomes the manager node); fill in your own host's IP
- $
docker swarm join --token SWMTKN-1-52tr219htvsg1volky2tej7pj8bjs2j78q4b6wc9fnt72kkchd-29ohn4mgz191f6oznldvjiw47 192.168.1.10:2377
- Run this command on the other hosts to join them to the cluster (the token comes from the output of docker swarm init); verification commands follow this list
- According to the yml file, the cluster here is a cluster of nginx and goweb. They all have 6 replicas, and the mysql service has only one replica.
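- To confirm that every host has joined, list the nodes on the manager (standard Docker CLI commands, not shown in the original); docker swarm join-token worker reprints the join command if the token is lost
- $
docker node ls
- $
docker swarm join-token worker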
3) Deployment and verification
- $
docker stack deploy --compose-file docker-compose.yml goWebSwarm
Start deploying the services
- When this command runs, the overlay network is created first
- Then the 3 services are created
- These 3 services share the same network, so by default they can reach each other directly
- $
docker service ls
You can view the currently running services
- $
docker service ps goWebSwarm_goweb1
Check a single service; services created by the stack are named stackName_serviceName, e.g. goWebSwarm_goweb1
- From this deployment it can be seen that 3 services have been generated, and these three services use the same network
- The problem seen earlier also exists here: the mysql service is started, and then the goWeb service is started as well, but mysql is not yet fully available
- It can be solved with the startup-wait scripts used earlier
- You can also restart the goWeb service (or scale it down and back up to restart it; see the commands after this list)
- Or deploy the mysql cluster separately first, and then deploy the goWeb and nginx services
- That mysql cluster would not be part of this stack; it is a separate cluster
- The configuration file of the goWeb application must also be updated accordingly
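- A sketch of the restart/scale approach mentioned above; the service name goWebSwarm_goweb1 assumes the stack was deployed as goWebSwarm
- $
docker service update --force goWebSwarm_goweb1
- Forces a rolling restart of the goWeb tasks once mysql is ready
- $
docker service scale goWebSwarm_goweb1=0 && docker service scale goWebSwarm_goweb1=6
- Scaling down to zero and back up has a similar effect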
4) Configure and deploy an additional Nginx server (not in the cluster)
- The additional nginx server routes and forwards requests to the nginx servers inside the cluster
- Each host in the cluster has the following file in its /root/nginx/ directory (the directory mounted into the cluster's nginx service):
upstream backend {
    ip_hash;
    server goweb1:8080;          # goweb1 is the host alias (service name) of the goweb1 container service
}
server {
    listen       80;
    server_name  localhost;      # your domain name

    location / {
        # Set the host header and the client's real address so the backend can obtain the real client IP
        # Disable buffering
        proxy_buffering off;
        # Address of the reverse proxy
        proxy_pass http://backend;
    }

    #error_page 404 /;
    # redirect server error pages to the static page /
    # error_page 500 502 503 504 /;
    location = / {
        root html;
    }
}
- When accessed, this nginx forwards port 80 to the goweb service on port 8080
- Now we need to configure the nginx on the host outside the cluster, that is, forward requests to the cluster and configure load balancing:
upstream backend {
    ip_hash;
    server 192.168.1.10 weight=1;    # IPs of the hosts in the cluster; each of them runs an nginx replica
    server 192.168.1.11 weight=1;
    server 192.168.1.12 weight=1;
    server 192.168.1.13 weight=1;
}
server {
    listen       80;
    server_name  ;                   # your domain name; fill it in here if DNS resolution points to this server

    add_header backendCode $upstream_status;
    add_header BackendIP "$upstream_addr;" always;

    location / {
        # Set the host header and the client's real address so the backend can obtain the real client IP
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Connection timeout between nginx and the proxied service; if one upstream does not respond within 1s,
        # the request is forwarded to another IP
        proxy_connect_timeout 1s;

        # Disable buffering
        proxy_buffering off;

        # Address of the reverse proxy
        proxy_pass http://backend;
    }

    #error_page 404 /;
    # redirect server error pages to the static page /
    # error_page 500 502 503 504 /;
    location = / {
        root html;
    }
}
- The configuration above also lets you see which node the nginx server forwarded the request to
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- These headers pass the client's real IP on to the backend; together with the add_header BackendIP "$upstream_addr;" always; directive, you can see the forwarding server in the browser: the response headers will contain a BackendIP field
- Start the nginx on the external host
- $
docker run -itd --name nginxweb -p 80:80 -v /root/nginx/:/etc/nginx/ nginx
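- To verify the mounted configuration and reload nginx after editing it (using the container name nginxweb from the command above):
- $
docker exec nginxweb nginx -t
- $
docker exec nginxweb nginx -s reload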
5) The overall structure is as follows
- Layer 1) After receiving a request, the outer nginx server forwards it to the hosts in the cluster
- This is the outermost nginx acting as the load balancer
- This nginx server is a high-performance machine that is only responsible for forwarding and has no other processing tasks
- Layer 2) The various services inside the cluster
- nginx + goWeb
- goWeb connects to mysql or the mysql cluster service
- nginx + goWeb
- Same as above
- nginx + goWeb
- Same as above
- …
- The nginx service inside the cluster dynamically load-balances across the goWeb services
- nginx + goWeb
- Layer 3) mysql or a mysql cluster
- Currently our mysql is only a single service, configured in the compose file
These three layers are a common general design for the server side; of course, a mysql cluster is not actually used above.
This architecture can support on the order of 1,000,000 (100W) visits (when using a mysql cluster).
- If more load is required, you can replicate this architecture across multiple regions
- Generally speaking, domain name resolution can only point to one server
- Of course, dynamic DNS resolution can support multiple servers
- That way, which servers to forward to can be decided based on the request
About Docker Swarm's Raft consensus algorithm
- Raft: a consensus algorithm; the cluster is only usable when a majority of the manager nodes are alive
- Therefore, for the cluster to stay available, there should be at least 3 manager nodes (see the promotion example after this list)
- manager: manager nodes, used to manage the worker nodes
- With only two managers, if one goes down, the remaining one cannot form a majority, so the entire cluster becomes unavailable
- To take advantage of swarm mode's fault tolerance, Docker recommends an odd number of manager nodes, chosen according to your organization's high-availability requirements
- When you have multiple managers, you can recover from manager node failures without downtime
- A cluster of 3 managers can tolerate the loss of at most 1 manager
- A cluster of 5 managers can lose at most 2 managers at the same time
- A cluster of N managers can tolerate the loss of at most (N - 1) / 2 managers
- Docker recommends that a Swarm cluster use at most 7 manager nodes
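- As a hedged example, already-joined workers can be promoted to managers to reach an odd count of at least 3; the node names below are placeholders taken from the output of docker node ls
- $
docker node ls
- $
docker node promote <node2> <node3>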
This is the end of this article about deploying a cluster with Docker Swarm combined with Docker Compose. For more content on Docker Swarm and Docker Compose clusters, please search my previous articles or continue browsing my related articles. I hope everyone will continue to support me!