
Deploying a MySQL 8 (PXC 8.0) distributed cluster with Docker

Environment description

    IP            Hostname   Swarm node             Service   Role
    192.168.1.1   host1      docker swarm-master    PXC1      First node (default master node)
    192.168.1.2   host2      docker swarm-worker1   PXC2      Second node
    192.168.1.3   host3      docker swarm-worker2   PXC3      Third node

1. Create a docker swarm cluster

Communication between the PXC services by hostname relies on an overlay network. Docker Swarm is deployed here directly, and the cluster uses the overlay network that Docker Swarm provides.

1.1 Install Docker via yum

Docker needs to be installed on all three hosts:

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum makecache
    rpm --import https://download.docker.com/linux/centos/gpg
    yum -y install docker-ce
    systemctl enable docker && systemctl start docker
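
Once the installation finishes, it is worth confirming on each host that the Docker daemon is actually running before continuing; a quick check:

    systemctl status docker --no-pager
    docker version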

1.2 Docker swarm cluster deployment

- On the master, execute the command to create the swarm: docker swarm init --advertise-addr {master host IP}

    [root@host1 ~]# docker swarm init --advertise-addr 192.168.1.1
    Swarm initialized: current node (l2wbku2vxecmbz8cz94ewjw6d) is now a manager.
    To add a worker to this swarm, run the following command:
        docker swarm join --token SWMTKN-1-27aqwb4w55kfkcjhwwt1dv0tdf106twja2gu92g6rr9j22bz74-2ld61yjb69uokj3kiwvpfo2il 192.168.1.1:2377
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    [root@host1 ~]# docker node ls
    ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
    l2wbku2vxecmbz8cz94ewjw6d *   host1   Ready               Active              Leader              19.03.3
Note: if the complete docker swarm join command for adding a worker was not recorded when running docker swarm init, it can be retrieved later with docker swarm join-token worker.

- Copy the docker swarm join command printed above and execute it on swarm-worker1 and swarm-worker2 to add them to the swarm.

The command output is as follows:

    [root@host2 ~]# docker swarm join --token SWMTKN-1-27aqwb4w55kfkcjhwwt1dv0tdf106twja2gu92g6rr9j22bz74-2ld61yjb69uokj3kiwvpfo2il 192.168.1.1:2377
    This node joined a swarm as a worker.
    [root@host3 ~]# docker swarm join --token SWMTKN-1-27aqwb4w55kfkcjhwwt1dv0tdf106twja2gu92g6rr9j22bz74-2ld61yjb69uokj3kiwvpfo2il 192.168.1.1:2377
    This node joined a swarm as a worker.

- Run docker node ls on the master to confirm that the two worker nodes have been added.

    [root@host1 ~]# docker node ls 
    ID                            HOSTNAME            STATUS              AVAILABILITY            MANAGER STATUS      ENGINE VERSION
    ju6g4tkd8vxgnrw5zvmwmlevw     host2               Ready                   Active                                  19.03.3
    asrqc03tcl4bvxzdxgfs5h60q     host3               Ready                   Active                                  19.03.3
    l2wbku2vxecmbz8cz94ewjw6d *   host1               Ready                   Active              Leader              19.03.3

2. Deploy the PXC cluster

2.1 Create the CA, server, and client certificates

Execute the following command on the swarm-master:

1. mkdir -p /data/ssl && cd /data/ssl && openssl genrsa 2048 > ca-key.pem
2. openssl req -new -x509 -nodes -days 3600 \
    -key ca-key.pem -out ca.pem
3. openssl req -newkey rsa:2048 -days 3600 \
    -nodes -keyout server-key.pem -out server-req.pem
4. openssl rsa -in server-key.pem -out server-key.pem
5. openssl x509 -req -in server-req.pem -days 3600 \
    -CA ca.pem -CAkey ca-key.pem -set_serial 01 \
    -out server-cert.pem
6. openssl req -newkey rsa:2048 -days 3600 \
    -nodes -keyout client-key.pem -out client-req.pem
7. openssl rsa -in client-key.pem -out client-key.pem
8. openssl x509 -req -in client-req.pem -days 3600 \
    -CA ca.pem -CAkey ca-key.pem -set_serial 01 \
    -out client-cert.pem
9. openssl verify -CAfile ca.pem server-cert.pem client-cert.pem   (two OK lines should appear)
server-cert.pem: OK
client-cert.pem: OK

Question: step 9 reports "error 18 at 0 depth lookup: self signed certificate".

Solution: when generating the server and client certificates (steps 3 and 6), openssl asks for a Common Name. The Common Name of the server and client certificates must be different from the Common Name used for the CA certificate; if they are identical, the verification in step 9 fails.
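
To confirm that the Common Names really differ, the subjects of the generated certificates can be inspected (assuming the file names used in the steps above):

    cd /data/ssl
    openssl x509 -noout -subject -in ca.pem
    openssl x509 -noout -subject -in server-cert.pem
    openssl x509 -noout -subject -in client-cert.pem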

Copy everything in /data/ssl to the same directory on swarm-worker1 and swarm-worker2:

    scp * 192.168.1.2:`pwd`/
    scp * 192.168.1.3:`pwd`/
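
scp does not create the destination directory, so /data/ssl has to exist on the two workers first; one way to prepare it (assuming root SSH access from the master):

    ssh 192.168.1.2 "mkdir -p /data/ssl"
    ssh 192.168.1.3 "mkdir -p /data/ssl"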

2.2 Create the MySQL configuration file

Run the following on all three hosts. Any file name ending in .cnf works, since the whole /data/ssl/cert directory is mounted into the container later; custom.cnf is used here as an example:

    mkdir -p /data/ssl/cert && vi /data/ssl/cert/custom.cnf

    [mysqld]
    skip-name-resolve
    ssl-ca = /cert/ca.pem
    ssl-cert = /cert/server-cert.pem
    ssl-key = /cert/server-key.pem

    [client]
    ssl-ca = /cert/ca.pem
    ssl-cert = /cert/client-cert.pem
    ssl-key = /cert/client-key.pem

    [sst]
    encrypt = 4
    ssl-ca = /cert/ca.pem
    ssl-cert = /cert/server-cert.pem
    ssl-key = /cert/server-key.pem
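
At this point /data/ssl on each host should hold the certificates and the config directory; a quick sanity check (file names as chosen above):

    ls /data/ssl
    # expected: ca-key.pem  ca.pem  cert  client-cert.pem  client-key.pem  client-req.pem  server-cert.pem  server-key.pem  server-req.pem
    ls /data/ssl/cert
    # expected: custom.cnf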

2.3 Create a docker overlay network

Execute the following command on the swarm-master:

docker network create -d overlay --attachable swarm_mysql
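
To verify that the network was created with the overlay driver and is attachable:

    docker network ls --filter name=swarm_mysql
    docker network inspect swarm_mysql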

2.4 Create the containers

  • 2.4.1 Initialize the cluster and create the first node

Execute the following command on the swarm-master:

docker run -d --name=pn1 \
 --net=swarm_mysql \
 --restart=always \
 -p 9001:3306 \
 --privileged \
 -e TZ=Asia/Shanghai \
 -e MYSQL_ROOT_PASSWORD=mT123456 \
 -e CLUSTER_NAME=PXC1 \
 -v /data/ssl:/cert \
 -v mysql:/var/lib/mysql/ \
 -v /data/ssl/cert:/etc/percona-xtradb-cluster.conf.d \
 percona/percona-xtradb-cluster:8.0
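
Before creating the other nodes, wait until pn1 has finished initializing; one way to watch for this (the exact log wording may vary between image versions):

    docker logs -f pn1
    # wait for a line similar to: "ready for connections"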
  • 2.4.2 Create a second node

Execute the following command on swarm-worker1:

docker run -d --name=pn2 \
--net=swarm_mysql \
--restart=always \
-p 9001:3306 \
--privileged \
-e TZ=Asia/Shanghai \
-e MYSQL_ROOT_PASSWORD=mT123456 \
-e CLUSTER_NAME=PXC1 \
-e CLUSTER_JOIN=pn1 \
-v /data/ssl:/cert \
-v mysql:/var/lib/mysql/ \
-v /data/ssl/cert:/etc/percona-xtradb-cluster.conf.d \
percona/percona-xtradb-cluster:8.0
  • 2.4.3 Create a third node

Execute the following command on swarm-worker2:

docker run -d --name=pn3 \
--net=swarm_mysql \
--restart=always \
-p 9001:3306 \
--privileged \
-e TZ=Asia/Shanghai \
-e MYSQL_ROOT_PASSWORD=mT123456 \
-e CLUSTER_NAME=PXC1 \
-e CLUSTER_JOIN=pn1 \
-v /data/ssl:/cert \
-v mysql:/var/lib/mysql/ \
-v /data/ssl/cert:/etc/percona-xtradb-cluster.conf.d \
percona/percona-xtradb-cluster:8.0
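
Before querying the replication status, it can be confirmed that each container is up (each command runs on the host that owns the container):

    docker ps --filter name=pn1    # on host1
    docker ps --filter name=pn2    # on host2
    docker ps --filter name=pn3    # on host3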
  • 2.4.4 Check the status of the cluster

Execute the following command on the swarm-master to check the status of the cluster:

docker exec -it pn1 /usr/bin/mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 19
Server version: 8.0.25-15.1 Percona XtraDB Cluster (GPL), Release rel15, Revision 8638bb0, WSREP version 26.4.3

Copyright (c) 2009-2021 Percona LLC and/or its affiliates
Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

You are enforcing ssl connection via unix socket. Please consider
switching ssl off as it does not make connection via unix socket
any more secure.
mysql> show status like 'wsrep%';

+------------------------------+-------------------------------------------------+
| Variable_name                | Value                                           |
+------------------------------+-------------------------------------------------+
| wsrep_local_state_uuid       | 625318e2-9e1c-11e7-9d07-aee70d98d8ac            |
...
| wsrep_local_state_comment   | Synced                                          |
...
| wsrep_incoming_addresses     | ce9ffe81e7cb:3306,89ba74cac488:3306,0e35f30ba764:3306 |
...
| wsrep_cluster_conf_id        | 3                                               |
| wsrep_cluster_size           | 3                                               |
| wsrep_cluster_state_uuid     | 625318e2-9e1c-11e7-9d07-aee70d98d8ac            |
| wsrep_cluster_status         | Primary                                         |
| wsrep_connected              | ON                                              |
...
| wsrep_ready                  | ON                                              |
+------------------------------+-------------------------------------------------+
59 rows in set (0.02 sec)
# The cluster is healthy when wsrep_local_state_comment is Synced, wsrep_incoming_addresses lists three addresses, wsrep_connected is ON, and wsrep_ready is ON.
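
For routine checks, the same key variables can be queried without opening an interactive shell, for example:

    docker exec pn1 mysql -uroot -pmT123456 \
      -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment','wsrep_ready');"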

2.5 High availability test

  • 2.5.1 Non-master restart

While pn1 (the first node) is running normally, restarting pn2 or pn3 causes no problems. The detailed test steps are omitted here; the main point is to verify that all nodes still synchronize data normally after the restart (a minimal check is sketched below).

docker restart pn2 or docker restart pn3
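
A minimal synchronization check after the restart: write on one node and read it back on the others (the database and table names below are arbitrary examples):

    # on host1: write a test row through pn1
    docker exec pn1 mysql -uroot -pmT123456 \
      -e "CREATE DATABASE IF NOT EXISTS sync_test; CREATE TABLE IF NOT EXISTS sync_test.t1 (id INT PRIMARY KEY, note VARCHAR(32)); INSERT INTO sync_test.t1 VALUES (1, 'after restart');"

    # on host2 / host3: the row should be visible on the restarted nodes
    docker exec pn2 mysql -uroot -pmT123456 -e "SELECT * FROM sync_test.t1;"
    docker exec pn3 mysql -uroot -pmT123456 -e "SELECT * FROM sync_test.t1;"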
  • 2.5.2 Master node restart
docker restart pn1

After pn1 (the original bootstrap/master node) is restarted, it cannot start normally on its own. It has to be re-created as an ordinary member that joins the running cluster (note CLUSTER_JOIN=pn2 below). The rebuild command is as follows:

docker run -d --name=pn1 \
--net=swarm_mysql \
--restart=always \
-p 9001:3306 \
--privileged \
-e TZ=Asia/Shanghai \
-e MYSQL_ROOT_PASSWORD=mT123456 \
-e CLUSTER_NAME=PXC1 \
-e CLUSTER_JOIN=pn2 \
-v /data/ssl:/cert \
-v mysql:/var/lib/mysql/ \
-v /data/ssl/cert:/etc/percona-xtradb-cluster.conf.d \
percona/percona-xtradb-cluster:8.0
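
Because the old pn1 container still exists (and was started with --restart=always), it has to be stopped and removed before the docker run above can reuse the name; the data itself is preserved in the named mysql volume:

    docker stop pn1 && docker rm pn1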

Summary

The above is based on my personal experience. I hope it provides a useful reference.