#### PostgreSQL Logical Replication ####
Logical replication is a feature built into PostgreSQL itself (available since version 10). It follows a publish/subscribe model: one side creates a publication for a set of tables, and the other side creates a subscription that receives those tables' changes.
| Server | Role |
| --- | --- |
| 10.10.56.16 | Publisher (table P1) |
| 10.10.56.17 | Publisher (table P2) |
| 10.10.56.19 | Subscriber (table P) |
**Background**
The P data is split by business: database A (on 16) writes one part of table P, and database B (on 17) writes another part (there may be more). We now need to query the complete P table, i.e. a summary database that aggregates all of the P data, and we implement that with logical replication.
The environment setup follows the earlier PostgreSQL installation tutorials.
Add the following entries to pg_hba.conf on all three servers:

```
host    all          all    10.10.56.0/24    md5
host    replication  all    10.10.56.0/24    trust
```

These rules allow every user on the 10.10.56.0/24 network segment to connect with password authentication and to open replication connections.
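After editing pg_hba.conf on a running instance, the configuration must be reloaded before the new rules take effect; one way, from any superuser session:

```sql
-- Reloads pg_hba.conf (and postgresql.conf) without a restart.
SELECT pg_reload_conf();
```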
Start the service on servers 16, 17, and 19, connect to each database, and check that each single instance is running as a primary:
```
pocdb=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)
```
The query above reports whether the instance is in recovery: f (false) means it is a primary; t (true) would mean it is a standby.
On each single instance, create the database and the replication user, and grant privileges:
```
postgres@clw-db3:/home/postgres> /opt/pgsql-10/bin/psql -p 5432 postgres
psql (10.3)
Type "help" for help.

postgres=# create database pocdb;
CREATE DATABASE
postgres=# CREATE USER l_repl PASSWORD '123456' REPLICATION;
CREATE ROLE
```
On servers 16 and 17, create the p table (the table each side's business writes to) and grant privileges to the logical-replication user l_repl:
```
pocdb=# CREATE TABLE p(id bigint primary key, ival int);
CREATE TABLE
pocdb=# GRANT ALL ON p TO l_repl;
GRANT
pocdb=# \d+ p
                                     Table "public.p"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 id     | bigint  |           | not null |         | plain   |              |
 ival   | integer |           |          |         | plain   |              |
Indexes:
    "p_pkey" PRIMARY KEY, btree (id)
```
Create a publication on servers 16 and 17, i.e. publish each side's p table so the subscriber can subscribe to it:
```
pocdb=# CREATE PUBLICATION p_pub FOR TABLE p WITH (publish = 'insert,update,delete');
CREATE PUBLICATION
```
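To confirm what a publisher exposes, the built-in catalog views can be queried; a quick check on 16 or 17:

```sql
-- Lists the publications defined in this database,
-- and the tables carried by p_pub.
SELECT * FROM pg_publication;
SELECT * FROM pg_publication_tables WHERE pubname = 'p_pub';
```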
Create the subscriptions on server 19, one for each of the tables published by servers 16 and 17:
```
pocdb=# CREATE SUBSCRIPTION p1_sub
          CONNECTION 'host=10.10.56.16 port=5432 user=l_repl dbname=pocdb password=123456'
          PUBLICATION p_pub;
CREATE SUBSCRIPTION
pocdb=# CREATE SUBSCRIPTION p2_sub
          CONNECTION 'host=10.10.56.17 port=5432 user=l_repl dbname=pocdb password=123456'
          PUBLICATION p_pub;
CREATE SUBSCRIPTION
```
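Once the subscriptions exist, their state can be checked on the subscriber; a quick check using the pg_stat_subscription view that ships with PostgreSQL 10:

```sql
-- On 10.10.56.19: one row per subscription worker; a recent
-- last_msg_receipt_time shows the worker is receiving changes.
SELECT subname, received_lsn, last_msg_receipt_time
FROM pg_stat_subscription;
```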
Insert a row with an odd id on server 16:
```
pocdb=# INSERT INTO p (id,ival) VALUES (1,1);
INSERT 0 1
pocdb=# select * from p;
 id | ival
----+------
  1 |    1
(1 row)
```
Insert a row with an even id on server 17:
```
pocdb=# INSERT INTO p (id,ival) VALUES (2,2);
INSERT 0 1
```
**Note**

When inserting data on servers 16 and 17, the primary keys must never collide; a duplicate key makes the subscriber's apply fail and replication stops. In real business the inserted primary keys are guaranteed not to conflict. Here we insert odd primary keys on server 16 and even primary keys on server 17.
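If a duplicate key does slip through, the subscriber's apply worker raises a unique-violation error and keeps retrying, so replication stalls until the conflict is removed. A minimal recovery sketch, assuming the conflicting key value is 3 (a made-up value for illustration):

```sql
-- On the subscriber (10.10.56.19): a locally inserted row with id = 3
-- blocks the incoming change; deleting it lets the apply worker's
-- automatic retry succeed and replication resumes.
DELETE FROM p WHERE id = 3;
```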
Now look at the aggregated data in table p on server 19:
```
pocdb=# select * from p;
 id | ival
----+------
  1 |    1
  2 |    2
(2 rows)
```
Both rows have been replicated, which satisfies the requirement.
#### Creating the R Table ####
**Background**
In real business there are shared tables such as data dictionaries: independent databases A, B, and C all need a common dictionary table R, and R must have a single write entry point, for example only database A writes to it, while the other databases keep a synchronized copy. Logical replication implements this too.
```
pocdb=# CREATE PUBLICATION r_pub FOR TABLE r WITH (publish = 'insert,update,delete');
CREATE PUBLICATION
```

Note that in PostgreSQL 10 the publish option only accepts insert, update, and delete; TRUNCATE is not replicated (truncate replication was added in PostgreSQL 11), so truncating R on the publisher will not propagate to the subscribers.
To replicate the R table, first create it on servers 16, 17, and 19:
```
CREATE TABLE r (id bigint, age int);
```
The pg_hba.conf entries were already configured above. The procedure is the same as for p: server 16 publishes the R table, and servers 17 and 19 each subscribe to it, which implements the requirement described above.
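The subscription statements for R are not shown in the original steps; a sketch, assuming the subscription name r_sub and the same connection credentials as before:

```sql
-- Run on 10.10.56.17 and on 10.10.56.19 (the name r_sub is an assumption):
CREATE SUBSCRIPTION r_sub
  CONNECTION 'host=10.10.56.16 port=5432 user=l_repl dbname=pocdb password=123456'
  PUBLICATION r_pub;

-- r has no primary key, so with the default REPLICA IDENTITY the publisher
-- will reject UPDATE/DELETE on r; if those are needed, give r an identity:
ALTER TABLE r REPLICA IDENTITY FULL;  -- run on the publisher (10.10.56.16)
```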
#### Supplement: PostgreSQL Logical Replication Stress Test ####
Measuring logical replication lag under load. This stress test builds on the setup above.
Lag monitoring script for 56.16 -> 56.19
Create the script query_logical_lag.sh and make it executable:
```
#!/bin/bash
/opt/pgsql-10/bin/psql pocdb <<EOF
select now();
select client_addr, application_name, write_lag, flush_lag, replay_lag
from pg_stat_replication
where usename='l_repl' and application_name='p1_sub';
\q
EOF
```
```
postgres@clw-db1:~> chmod +x query_logical_lag.sh
postgres@clw-db1:~> ls -l
total 4
-rwxr-xr-x 1 postgres postgres 218 May  8 16:49 query_logical_lag.sh
```
Lag monitoring script for 56.17 -> 56.19
Create the script query_logical_lag2.sh:
```
#!/bin/bash
/opt/pgsql-10/bin/psql pocdb <<EOF
select now();
select client_addr, application_name, write_lag, flush_lag, replay_lag
from pg_stat_replication
where usename='l_repl' and application_name='p2_sub';
\q
EOF
```
Make it executable, otherwise the script cannot run:
```
postgres@clw-db2:~> chmod +x query_logical_lag2.sh
postgres@clw-db2:~> ls -l
total 4
-rwxr-xr-x 1 postgres postgres 218 May  8 16:49 query_logical_lag2.sh
```
Run the lag monitoring. Execute on server 16:
```
postgres@clw-db1:~> for i in {1..100000}
> do
>   /home/postgres/query_logical_lag.sh >> /home/postgres/query_logical_lag1
>   sleep 5
> done
```
This runs the query_logical_lag.sh script 100,000 times, sleeping 5 seconds between runs, and appends each result to the file query_logical_lag1.
Execute on server 17:
```
postgres@clw-db2:~> for i in {1..100000}
> do
>   /home/postgres/query_logical_lag2.sh >> /home/postgres/query_logical_lag2
>   sleep 5
> done
```
Create sequences; the sequences guarantee that the values inserted by the different nodes are always odd on one node and even on the other.
On 56.16:
```
create sequence p_seq1 increment by 2 minvalue 1 maxvalue 100000000000000 cache 50 no cycle;
-- consider whether the cache needs to be larger
```
On 56.17, create the sequence:
```
create sequence p_seq1 increment by 2 minvalue 2 maxvalue 100000000000000 cache 50 no cycle;
```
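As a sanity check, the first values drawn from each sequence should have opposite parity on the two nodes; assuming freshly created sequences:

```sql
-- Returns 1 on 56.16 and 2 on 56.17; each subsequent call adds 2,
-- so 16 only ever generates odd ids and 17 only even ids.
SELECT nextval('p_seq1');
```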
On servers 16 and 17, create the stress-test script bench_script_for_insert.sql:
```
\sleep 500ms
\set ival random(1, 500000000)
INSERT INTO p(id, ival) VALUES (nextval('p_seq1'), :ival);
```
On 16 and 17, run the pgbench stress test:
```
# -c 150: 150 client connections; -j 120: 120 worker threads;
# -T 600: run for 600 seconds; -f: custom transaction script
/opt/pgsql-10/bin/pgbench -c 150 -j 120 -T 600 -f /pgdata/10/poc/scripts/bench_script_for_insert.sql pocdb
```
Use nmon to monitor CPU, memory, and network:
```
chmod +x /home/super/pgsoft/nmon_x86_64_sles11
/home/super/pgsoft/nmon_x86_64_sles11 -f -c 120 -s 10
```
Note:

- -f: write the results to an automatically generated file in the current directory (name ends in .nmon)
- -c: collect 120 samples in total
- -s: sample every 10 seconds
The above is based on personal experience; I hope it serves as a useful reference. If anything is wrong or incomplete, corrections and suggestions are welcome.