Scenario 1: primary host crash and switchover anomaly
1. Cluster state
[postgres@monitor ~]$ pg_autoctl show state
Name | Node | Host:Port | LSN | Reachable | Current State | Assigned State
-------+-------+--------------------+-----------+-----------+---------------------+--------------------
node_1 | 1 | 172.20.70.204:5432 | 0/706AA30 | yes | primary | primary
node_8 | 8 | 172.20.70.205:5432 | 0/706AA30 | yes | secondary | secondary
2. Failure simulation (power off the primary host)
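Step 2 shows no command; one way to reproduce a hard primary outage is sketched below. This assumes shell access on the primary host and the PGDATA path used elsewhere in this document.

```shell
# Full host outage, as used in this test:
sudo poweroff
# Or crash only the Postgres instance (skips the shutdown checkpoint):
pg_ctl stop -m immediate -D /data/pg_data
```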
3. The monitor initiates a switchover (the primary is demoted)
[postgres@monitor ~]$ pg_autoctl show state
Name | Node | Host:Port | LSN | Reachable | Current State | Assigned State
-------+-------+--------------------+-----------+-----------+---------------------+--------------------
node_1 | 1 | 172.20.70.204:5432 | 0/706AA68 | no | primary | demoted
node_8 | 8 | 172.20.70.205:5432 | 0/706C4F0 | yes | wait_primary | wait_primary
4. Monitor logs
[postgres@monitor pg_data]$ 09:18:21 51157 INFO Node 1 (172.20.70.204:5432) is marked as unhealthy by the monitor
09:18:21 51157 INFO Setting goal state of node 1 "node_1" (172.20.70.204:5432) to catchingup after it became unhealthy.
09:18:21 51157 INFO New state for node 1 "node_1" (172.20.70.204:5432): secondary ➜ catchingup
09:18:21 51157 INFO Setting goal state of node 8 "node_8" (172.20.70.205:5432) to wait_primary because none of the 1 standby candidate nodes are healthy at the moment.
09:18:21 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): primary ➜ wait_primary
09:18:21 51157 INFO New state is reported by node 8 "node_8" (172.20.70.205:5432): "wait_primary"
09:18:21 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): wait_primary ➜ wait_primary
pg_autoctl show state
Name | Node | Host:Port | LSN | Reachable | Current State | Assigned State
-------+-------+--------------------+-----------+-----------+---------------------+--------------------
node_1 | 1 | 172.20.70.204:5432 | 0/B000148 | no | secondary | catchingup
node_8 | 8 | 172.20.70.205:5432 | 0/B000148 | yes | wait_primary | wait_primary
5. Status of the former standby (now read-write)
postgres=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
f
(1 row)
postgres=# select name,setting from pg_settings where name='transaction_read_only';
name | setting
-----------------------+---------
transaction_read_only | off
(1 row)
6. With the primary down and unreachable, a failover attempt fails
[postgres@db2 log]$ pg_autoctl perform failover
09:20:07 27761 INFO Targetting group 0 in formation "default"
09:20:07 27761 ERROR Monitor ERROR: couldn't find the primary node in formation "default", group 0
09:20:07 27761 ERROR SQL query: SELECT pgautofailover.perform_failover($1, $2)
09:20:07 27761 ERROR SQL params: 'default', '0'
09:20:07 27761 ERROR Failed to perform failover for formation default and group 0
09:20:07 27761 FATAL Failed to perform failover/switchover, see above for details
7. A promotion attempt on the standby also fails
[postgres@db2 log]$ pg_autoctl perform promotion
09:21:07 27887 ERROR Monitor ERROR: cannot perform promotion: node node_8 in formation default has reported state "wait_primary" and is assigned state "wait_primary", promotion can only be performed when in state "secondary".
09:21:07 27887 ERROR SQL query: SELECT pgautofailover.perform_promotion($1, $2)
09:21:07 27887 ERROR SQL params: 'default', 'node_8'
09:21:07 27887 ERROR Failed to perform failover for node node_8 in formation default
8. Drop the unavailable host from the monitor; once it is recovered, it can be brought up stand-alone (split brain) as a new primary
[postgres@monitor pg_data]$ pg_autoctl drop node --hostname=172.20.70.204 --pgdata=/data/pg_data/ --pgport=5432
09:24:31 51157 INFO Removing node 1 "node_1" (172.20.70.204:5432) from formation "default" and group 0
09:24:31 51157 INFO Setting number_sync_standbys to 0 for formation "default" now that we have -1 standby nodes set with replication-quorum.
09:24:31 51157 INFO Setting goal state of node 8 "node_8" (172.20.70.205:5432) to single as there is no other node.
09:24:31 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): wait_primary ➜ single
09:24:31 51157 INFO New state is reported by node 8 "node_8" (172.20.70.205:5432): "single"
09:24:31 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): single ➜ single
9. Node 2 is now a single-node formation
[postgres@monitor pg_data]$ pg_autoctl show state
Name | Node | Host:Port | LSN | Reachable | Current State | Assigned State
-------+-------+--------------------+-----------+-----------+---------------------+--------------------
node_8 | 8 | 172.20.70.205:5432 | 0/B000148 | yes | single | single
10. Attempt to promote node 2 (once the dead host is dropped from the monitor, the former standby has already become primary)
[postgres@monitor pg_data]$ pg_autoctl perform promotion
09:27:57 63646 FATAL Please use --name to target a specific node
[postgres@monitor pg_data]$ pg_autoctl perform promotion --name node_8
09:28:03 63653 WARN NOTICE: cannot perform promotion: node node_8 in formation default is already a primary.
11. Start the old primary host again and bring its service up, creating a split brain (the old primary starts successfully as a read-only standalone)
11.1) Check the split-brained host's status: it is in recovery and transactions are read-only
postgres=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
t
(1 row)
postgres=# select name,setting from pg_settings where name='transaction_read_only';
name | setting
-----------------------+---------
transaction_read_only | on
(1 row)
11.2) Fix the recovery state of the split-brained host (promote it out of recovery)
[postgres@db1 ~]$ pg_ctl promote
waiting for server to promote.... done
server promoted
--check whether it is still in recovery mode (recovery state fixed)
postgres=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
f
(1 row)
11.3) Fix the read-only status of the split-brained host
a) Comment out the two pg_auto_failover include lines at the top of postgresql.conf
#include 'postgresql-auto-failover-standby.conf' # Auto-generated by pg_auto_failover, do not remove
#include 'postgresql-auto-failover.conf' # Auto-generated by pg_auto_failover, do not remove
b) Clean up the PAF-related config files -- move them aside
mv postgresql-auto-failover.conf postgresql-auto-failover.conf.bak
mv postgresql-auto-failover-standby.conf postgresql-auto-failover-standby.conf.bak
c) Edit postgresql.conf to disable the streaming-replication settings
#hot_standby=off
#synchronous_standby_names=''
d) Restart to apply
pg_ctl restart -m fast -D /postgres/data
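Steps a) through d) can be scripted roughly as follows. This is a sketch: it assumes PGDATA is /data/pg_data as elsewhere in this document, and the sed pattern targets exactly the two include lines quoted in step a).

```shell
cd /data/pg_data
# a) comment out the two auto-generated pg_auto_failover include lines
sed -i "s|^include 'postgresql-auto-failover|#&|" postgresql.conf
# b) move the PAF config fragments aside
mv postgresql-auto-failover.conf postgresql-auto-failover.conf.bak
mv postgresql-auto-failover-standby.conf postgresql-auto-failover-standby.conf.bak
# c) make sure no synchronous standby is required
echo "synchronous_standby_names = ''" >> postgresql.conf
# d) restart to apply
pg_ctl restart -m fast -D /data/pg_data
```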
11.4) Verify the database is now a standalone, read-write instance
postgres=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
f
(1 row)
postgres=# select name,setting from pg_settings where name='transaction_read_only';
name | setting
-----------------------+---------
transaction_read_only | off
(1 row)
postgres=# create table t(id int);
CREATE TABLE
postgres=# insert into t select generate_series(1,2);
INSERT 0 2
11.5) At this point, the split-brained host is fully recovered as a standalone new primary.
--
--Rebuilding the cluster afterwards
Continuing from step 10: add a new node back into the PAF cluster, with the new primary (the former standby) in place.
1) Check cluster state
[postgres@monitor pg_data]$ pg_autoctl show state
Name | Node | Host:Port | LSN | Reachable | Current State | Assigned State
-------+-------+--------------------+-----------+-----------+---------------------+--------------------
node_8 | 8 | 172.20.70.205:5432 | 0/B000148 | yes | single | single
2) Join the cluster from the new node (the old crashed node, with its environment wiped), in asynchronous mode
--delete the old state files first, otherwise joining fails with the error below
"10:01:16 47610 FATAL The state file "/home/postgres/.local/share/pg_autoctl/data/pg_data/pg_autoctl.state" exists and there's no init in progress"
rm -Rf /home/postgres/.local
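A narrower cleanup than removing the whole ~/.local tree is to delete only the directories pg_autoctl owns. A sketch: the state path matches the one in the error message above, and ~/.config/pg_autoctl is where the keeper stores its configuration by XDG convention.

```shell
rm -rf ~/.local/share/pg_autoctl   # keeper state files (pg_autoctl.state, .init)
rm -rf ~/.config/pg_autoctl        # keeper configuration
```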
[postgres@db1 data]$ /usr/pgsql-12/bin/pg_autoctl create postgres --pgdata /data/pg_data/ --auth trust --ssl-self-signed \
> --username postgres --dbname postgres --hostname 172.20.70.204 --pgctl /usr/pgsql-12/bin/pg_ctl \
> --monitor 'postgres://autoctl_node@172.20.70.203:5432/pg_auto_failover?sslmode=require' --replication-quorum false
10:04:46 47615 INFO Using default --ssl-mode "require"
10:04:46 47615 INFO Using --ssl-self-signed: pg_autoctl will create self-signed certificates, allowing for encrypted network traffic
10:04:46 47615 WARN Self-signed certificates provide protection against eavesdropping; this setup does NOT protect against Man-In-The-Middle attacks nor Impersonation attacks.
10:04:46 47615 WARN See https://www.postgresql.org/docs/current/libpq-ssl.html for details
10:04:46 47615 INFO Started pg_autoctl postgres service with pid 47618
10:04:46 47618 INFO /usr/pgsql-12/bin/pg_autoctl do service postgres --pgdata /data/pg_data/ -v
10:04:46 47615 INFO Started pg_autoctl node-init service with pid 47619
10:04:46 47619 INFO Registered node 9 (172.20.70.204:5432) with name "node_1" in formation "default", group 0, state "wait_standby"
10:04:46 47619 INFO Writing keeper state file at "/home/postgres/.local/share/pg_autoctl/data/pg_data/pg_autoctl.state"
10:04:46 47619 INFO Writing keeper init state file at "/home/postgres/.local/share/pg_autoctl/data/pg_data/pg_autoctl.init"
10:04:46 47619 INFO Successfully registered as "wait_standby" to the monitor.
10:04:46 47619 INFO FSM transition from "init" to "wait_standby": Start following a primary
10:04:46 47619 INFO Transition complete: current state is now "wait_standby"
10:04:46 47619 INFO New state for node 8 "node_8" (172.20.70.205:5432): single ➜ wait_primary
10:04:46 47619 INFO New state for node 8 "node_8" (172.20.70.205:5432): wait_primary ➜ wait_primary
10:04:46 47619 INFO Still waiting for the monitor to drive us to state "catchingup"
10:04:46 47619 WARN Please make sure that the primary node is currently running `pg_autoctl run` and contacting the monitor.
10:04:46 47619 INFO FSM transition from "wait_standby" to "catchingup": The primary is now ready to accept a standby
10:04:46 47619 INFO Initialising PostgreSQL as a hot standby
10:04:46 47619 INFO /usr/pgsql-12/bin/pg_basebackup -w -d application_name=pgautofailover_standby_9 host=172.20.70.205 port=5432 user=pgautofailover_replicator sslmode=require --pgdata /data/backup/node_1 -U pgautofailover_replicator --verbose --progress --max-rate 100M --wal-method=stream --slot pgautofailover_standby_9
10:04:46 47619 INFO pg_basebackup: initiating base backup, waiting for checkpoint to complete
10:04:46 47619 INFO pg_basebackup: checkpoint completed
10:04:46 47619 INFO pg_basebackup: write-ahead log start point: 0/C000028 on timeline 12
10:04:46 47619 INFO pg_basebackup: starting background WAL receiver
10:04:46 47619 INFO 0/33603 kB (0%), 0/1 tablespace (/data/backup/node_1/backup_label )
10:04:46 47619 INFO 33612/33612 kB (100%), 0/1 tablespace (.../backup/node_1/global/pg_control)
10:04:46 47619 INFO 33612/33612 kB (100%), 1/1 tablespace
10:04:46 47619 INFO pg_basebackup: write-ahead log end point: 0/C000100
10:04:46 47619 INFO pg_basebackup: waiting for background process to finish streaming ...
10:04:46 47619 INFO pg_basebackup: syncing data to disk ...
10:04:46 47619 INFO pg_basebackup: base backup completed
10:04:46 47619 INFO Creating the standby signal file at "/data/pg_data/standby.signal", and replication setup at "/data/pg_data/postgresql-auto-failover-standby.conf"
10:04:46 47619 INFO Contents of "/data/pg_data/postgresql-auto-failover-standby.conf" have changed, overwriting
10:04:46 47619 INFO /bin/openssl req -new -x509 -days 365 -nodes -text -out /data/pg_data/server.crt -keyout /data/pg_data/server.key -subj "/CN=172.20.70.204"
10:04:47 47627 INFO /usr/pgsql-12/bin/postgres -D /data/pg_data -p 5432 -h *
10:04:47 47618 INFO Postgres is now serving PGDATA "/data/pg_data" on port 5432 with pid 47627
10:04:47 47619 INFO PostgreSQL started on port 5432
10:04:47 47619 INFO Fetched current list of 1 other nodes from the monitor to update HBA rules, including 1 changes.
10:04:47 47619 INFO Ensuring HBA rules for node 8 "node_8" (172.20.70.205:5432)
10:04:47 47619 INFO Transition complete: current state is now "catchingup"
10:04:47 47619 INFO keeper has been successfully initialized.
10:04:47 47615 WARN pg_autoctl service node-init exited with exit status 0
10:04:47 47618 INFO Postgres controller service received signal SIGTERM, terminating
10:04:47 47618 INFO Stopping pg_autoctl postgres service
10:04:47 47618 INFO /usr/pgsql-12/bin/pg_ctl --pgdata /data/pg_data --wait stop --mode fast
10:04:47 47615 INFO Stop pg_autoctl
--the monitor logs the new node joining the cluster
[postgres@monitor pg_data]$ 10:04:46 51157 INFO Registering node 9 "node_1" (172.20.70.204:5432) to formation "default" with replication quorum false and candidate priority 50 [50]
10:04:46 51157 INFO New state is reported by node 9 "node_1" (172.20.70.204:5432): "wait_standby"
10:04:46 51157 INFO New state for node 9 "node_1" (172.20.70.204:5432): wait_standby ➜ wait_standby
10:04:46 51157 INFO Setting goal state of node 8 "node_8" (172.20.70.205:5432) to wait_primary after node 9 "node_1" (172.20.70.204:5432) joined.
10:04:46 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): single ➜ wait_primary
10:04:46 51157 INFO New state is reported by node 8 "node_8" (172.20.70.205:5432): "wait_primary"
10:04:46 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): wait_primary ➜ wait_primary
10:04:46 51157 INFO Setting goal state of node 9 "node_1" (172.20.70.204:5432) to catchingup after node 8 "node_8" (172.20.70.205:5432) converged to wait_primary.
10:04:46 51157 INFO New state for node 9 "node_1" (172.20.70.204:5432): wait_standby ➜ catchingup
10:04:47 51157 INFO New state is reported by node 9 "node_1" (172.20.70.204:5432): "catchingup"
10:04:47 51157 INFO New state for node 9 "node_1" (172.20.70.204:5432): catchingup ➜ catchingup
3) Start pg_autoctl on the newly joined node
/usr/pgsql-12/bin/pg_autoctl run &
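Backgrounding with `&` does not survive logouts or reboots; pg_autoctl can generate a systemd unit instead. A sketch (run as the postgres OS user; the unit name is whatever you choose for the file):

```shell
pg_autoctl show systemd --pgdata /data/pg_data > pgautofailover.service
sudo mv pgautofailover.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now pgautofailover
```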
--the monitor shows the new member has joined and drives the cluster back to primary/secondary
10:06:23 51157 INFO Node 9 (172.20.70.204:5432) is marked as unhealthy by the monitor
10:06:23 51157 INFO Node 9 (172.20.70.204:5432) is marked as healthy by the monitor
10:06:23 51157 INFO Setting goal state of node 9 "node_1" (172.20.70.204:5432) to secondary after it caught up.
10:06:23 51157 INFO New state for node 9 "node_1" (172.20.70.204:5432): catchingup ➜ secondary
10:06:23 51157 INFO New state is reported by node 9 "node_1" (172.20.70.204:5432): "secondary"
10:06:23 51157 INFO New state for node 9 "node_1" (172.20.70.204:5432): secondary ➜ secondary
10:06:23 51157 INFO Setting goal state of node 8 "node_8" (172.20.70.205:5432) to primary now that at least one secondary candidate node is healthy.
10:06:23 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): wait_primary ➜ primary
10:06:23 51157 INFO New state is reported by node 8 "node_8" (172.20.70.205:5432): "primary"
10:06:23 51157 INFO New state for node 8 "node_8" (172.20.70.205:5432): primary ➜ primary
4) Check state: the node has rejoined and the cluster is healthy
[postgres@monitor pg_data]$ pg_autoctl show state
Name | Node | Host:Port | LSN | Reachable | Current State | Assigned State
-------+-------+--------------------+-----------+-----------+---------------------+--------------------
node_8 | 8 | 172.20.70.205:5432 | 0/D000060 | yes | primary | primary
node_1 | 9 | 172.20.70.204:5432 | 0/D000060 | yes | secondary | secondary
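Because the node was re-added with --replication-quorum false (asynchronous), two optional follow-ups are sketched below: verify replication from the primary, and put the rejoined node back into the synchronous replication quorum if desired.

```shell
# Formation-wide replication settings, from any node
pg_autoctl show settings --pgdata /data/pg_data
# Streaming replication status, run on the current primary (node_8)
psql -c "select client_addr, state, sync_state from pg_stat_replication;"
# On the rejoined node: opt back into the replication quorum
pg_autoctl set node replication-quorum true --pgdata /data/pg_data
```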