Deploying a High-Availability PostgreSQL Architecture with Pgpool-II

1 Architecture Overview

This document uses three Pgpool-II servers to manage the PostgreSQL servers, building a robust cluster system that avoids both single points of failure and split-brain.

Requirement: all Pgpool-II servers and PostgreSQL servers must be on the same subnet.

Environment

Node     Address         Spec                 Components
server1  192.168.16.220  CentOS 7.9, 4c/8GB   postgresql14, pgpool-II
server2  192.168.16.221  CentOS 7.9, 4c/8GB   postgresql14, pgpool-II
server3  192.168.16.222  CentOS 7.9, 4c/8GB   postgresql14, pgpool-II
VIP      192.168.16.250

Architecture diagram

Note: the Active/Standby (Pgpool-II) and Primary/Standby (PostgreSQL) roles in this architecture are not fixed; they may change as the cluster operates.

PostgreSQL deployment details

Item                 Value                    Description
PostgreSQL Version   14.0                     Installed database version
port                 5432                     Database listen port
$PGDATA              /var/lib/pgsql/14/data   Database data directory
Archive mode         on                       WAL archiving enabled
Replication Slots    enable                   Replication slots enabled
Start automatically  enable                   Start on boot

Pgpool-II installation details

Item                  Value                               Description
Pgpool-II Version     4.3.0                               Pgpool-II version
port                  9999                                Pgpool-II client listen port
pcp_port              9898                                PCP process listen port
wd_port               9000                                Watchdog listen port
heartbeat_port        9694                                UDP port for watchdog heartbeat between nodes
Config file           /etc/pgpool-II/pgpool.conf          Pgpool-II configuration file path
Pgpool-II start user  postgres (Pgpool-II 4.1 or later)   Pgpool-II 4.0 and earlier start as root by default
Running mode          streaming replication mode          Streaming replication enabled
Watchdog              on                                  Watchdog heartbeat life check enabled
Start automatically   enable                              Start on boot

2 Deployment

2.1 Component Installation

Scope: run on all nodes

This installation is based on RPM packages.

# Fetch the required dependency packages
wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/l/libzstd-1.5.5-1.el7.x86_64.rpm
wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/l/llvm5.0-devel-5.0.1-7.el7.x86_64.rpm
wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/l/llvm5.0-5.0.1-7.el7.x86_64.rpm
wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/l/llvm5.0-libs-5.0.1-7.el7.x86_64.rpm
 
yum install -y ./libzstd-1.5.5-1.el7.x86_64.rpm
yum install -y centos-release-scl-rh llvm5*
yum install -y epel-release

# Install PostgreSQL from the PostgreSQL YUM repository
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
yum install -y postgresql14-server postgresql14-devel

# The system's current GPG key does not match the key of the pgpool RPM repository,
# so download the latest GPG key from the official site:
https://www.pgpool.net/yum/RPM-GPG-KEY-PGPOOL2
# Copy the official key's contents into /etc/pki/rpm-gpg/RPM-GPG-KEY-PGPOOL2 on each node
mv /etc/pki/rpm-gpg/RPM-GPG-KEY-PGPOOL2 /etc/pki/rpm-gpg/RPM-GPG-KEY-PGPOOL2-bak
vi /etc/pki/rpm-gpg/RPM-GPG-KEY-PGPOOL2
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQGNBGN8G+0BDADaAkhk1nmzA/kYZr0ZTR755VJ1ER6mh3uRGNWmb5ZPVE6lapmh
SUwgnfQYQqDVIPedcRMzn8U9cP21PWbIE+mb0egH1+5sOVYqxzh5UpAX/dPmz0sm
yFDVuRahYKpICZQ9OnTSBXvkIaCMlcWHbU8gRxmKz3NcpD5/x62I+wdN5KL7v5Oj
STbS5djypO8UzchmuKJKgumezUvY7Bk4TvUPyHvkKe8Ww4GgKNeJjd5j8SwYn2st
KrTNJBvoWmHXC6/PbUgS3zcFOEAZV9MnNwzkvu/3Yn/gbuxNUA7nDIVolVp3RgyJ
qgCksdjX+paTiUKZCyAe6XfeQWPREst5Mi6wPBaDSS50tyFsJDwsEX48WlQV+dCu
34MnowtpFiUydSw5Wg8KcRnSWl7mkGawIuKzSGdbRAyfr2dK+T0Y4Rjpk9G0wY2A
rnCLH27cfxbiBYAScW+83scoFJAo5o4/yaNmQWyXjqqQNlQ9+X5v/vs5oAfGVDQs
4G61ojBXf+dQw6EAEQEAAbRKcGdwb29sLUlJIEtleSAocGdwb29sLUlJIE9mZmlj
aWFsIFNpZ25pbmcgS2V5KSA8cGdwb29sLWhhY2tlcnNAcGdwb29sLm5ldD6JAdIE
EwEIADwWIQQiPz6IYg+m/ERoIp2KskEI8VO/1gUCY3wb7QIbAwULCQgHAgMiAgEG
FQoJCAsCBBYCAwECHgcCF4AACgkQirJBCPFTv9aB1Qv/TwqpB1WB9Jt79J6Td88y
h3k1x8oh9hsQQY36Bu4szNJ9GXMOnEyW0OnI2T2SyF7z+63IzgW3BhcSJ3VeF+KE
SlxwLjzts14mXUTiP5OvPrPYe7nm9ESFnO1E35hDwFhxatxV7MZUYCe/VBXzMVEq
ZWUPnepkqTPbn+4xpfywoOmsjePcEDZ5qH9Hw3f/dCXbyJ5Uun4pDJ+uo4eXgbAg
a6lnQtm6xX4LoQj85SMAX3oUgkRYzN+RsNl0E5GGV3QEF992cWwKWD5alwS0hiA5
9KUCd3LJ9hYB95BlMhBGqUvebekKzlnRiFlsCPcuhkpcaDyH1xZEF9T/kaCwlMna
4YKQTITWBwW1PvEbKrlTok0ASkJgc0gwdlRX489sy9idRtL0jXrZJCO5JRsA2aPm
oSivLpIOUknRyNqiMldaEE1Xq6kafsXobwrA9BVLgHspA/g5VlPIcckeSFPByPjY
OVMi4TrwByMHCFYEZJHTWL2QVoTt4pRMNrovMktYRGG/uQGNBGN8G+0BDAC080Uk
REGGGs/6xa8n5PBS890o2gGvFkotqilkC1VlUqX/A5P4vt591f27sQsncOwRUUt2
3MNSk/19u4pkV4wPws9ROjjbJquhVqr0/BBn9oYlT4xYMGAHC4b7xLoZIvEBTfZ2
ffcNEZTAnm/ocx9exUORzm/nrh27Ptz8veK1V18bC1gx1yetGJ8Dcq1qWciAetE6
bAv/7EDiShXxfoRhAoezLk9ZJOh6ipripv6OrV6592s7+oup8GVrIpVky6pIKff/
I6aJFLXX+4Rb16dWyI+Kj5+0ycb/1fhKh2p2pNwAYrvlIByy2SKHdXvQbidouU3v
Gu6Z88qMi3P5PACyqNJxfP5h93CuxlzZxqbPC1TaRPN7lnuKst1Pd/uKB4uPiRkQ
TRYFKHAT1c3sbcMaZ6S4eBpEmVdjqeAyFYRsMLzTARkoc4974EfGQpy/DOPru6o5
/AaALewzCUOdEpWBAJEy6Q5dFhjnmiVT1Sf6iZvIZfOKApSDWwN0lq8tuk0AEQEA
AYkBtgQYAQgAIBYhBCI/PohiD6b8RGginYqyQQjxU7/WBQJjfBvtAhsMAAoJEIqy
QQjxU7/WO54L/0v2mtA9S/SFnE4brst2f2zAETL3+Ox7jQjbKpcMip+ngZAU3wJy
FfFL1C2fJz9SdHjRVP6vpz5dliyYrVeCwtz/+S9sn1c/AR5x/2tMsZ5SfOVDIq7b
8/MPxBNUBLLjqhAZE8Nqudpg/ukeAXRfLgf2Ts6qAnCQl95/gPspwZFQifj6C7+q
iv4eJooMC41dt+cUFCrWGjJljCxk/jMFjis1O4ZkRSKaI83N3CvKISW1dyQdgeCD
+GYp6E0XQXar0uG+kiFWu/NEJfsThO7P5wbFeGJBi2rWvRedL2zqyx0hUai5mwNX
/2k6p+Kkyh8lrCBU/LccHG4VCVSv85qFJ5HwLXBc6MWFSN4L76jQzvRJ1L7LPs6Q
LU6Ib699pSBscrbBIVp/7xk1Cwi8EWmugt470GB5tKo4KvVkJERtjJqvUWKlszfZ
t9g1FJ2n8WdBFcirYHV73went5UyXLD4+BoDTwETK5DN0iR3HptNGIjV3s4hIQWk
+7YDDv6iX/0JLw==
=9S61
-----END PGP PUBLIC KEY BLOCK-----

# Rebuild the yum cache
yum clean all
yum makecache

# Install Pgpool-II from the Pgpool-II YUM repository
sudo yum install -y https://www.pgpool.net/yum/rpms/4.3/redhat/rhel-7-x86_64/pgpool-II-release-4.3-1.noarch.rpm 
sudo yum install -y https://www.pgpool.net/yum/rpms/4.3/redhat/rhel-7-x86_64/pgpool-II-pg14-extensions-4.3.0-1pgdg.rhel7.x86_64.rpm
yum install -y pgpool-II-pg14-4.3.8-*
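As an optional sanity check (paths assume the default RPM layout used above), confirm the installed versions on each node:

# Verify the installed PostgreSQL and Pgpool-II versions
/usr/pgsql-14/bin/postgres --version
pgpool --version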

2.2 Configuration Initialization

2.2.1 Database Initialization

Scope: run on the primary node

# Initialize the database on the primary node
su - "postgres" -c "/usr/pgsql-14/bin/initdb -k -D /var/lib/pgsql/14/data"
# Create the archive directory (run on all three nodes)
su - postgres
mkdir /var/lib/pgsql/archivedir

2.2.2 Modify the Configuration

Scope: run on the primary node

vi $PGDATA/postgresql.conf
listen_addresses = '*'
archive_mode = on
archive_command = 'cp "%p" "/var/lib/pgsql/archivedir/%f"'
max_wal_senders = 10
max_replication_slots = 10
wal_level = replica
hot_standby = on
wal_log_hints = on

2.2.3 Start the Database

Scope: run on the primary node

systemctl daemon-reload 
systemctl start postgresql-14
systemctl enable postgresql-14
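As an optional check (a minimal sketch, assuming local peer authentication for the postgres user), confirm the server accepts connections and that the settings above took effect:

# Verify the server is up and key parameters are active
su - postgres -c "psql -p 5432 -c 'SHOW wal_level;' -c 'SHOW archive_mode;'"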

2.2.4 Create Users

Scope: run on the primary node

User      Password   Purpose
replica   replica    Streaming replication user
pgpool    pgpool     Pgpool-II health check and replication check user
postgres  1qaz@WSX   Online recovery user

psql -U postgres -p 5432
psql (14.10)
Type "help" for help.
# Set the password encryption method
postgres=# SET password_encryption = 'scram-sha-256';
SET
# Create the health check user
postgres=# CREATE ROLE pgpool WITH LOGIN;
CREATE ROLE
# Create the replication user
postgres=# CREATE ROLE replica WITH REPLICATION LOGIN;
CREATE ROLE
# Set each user's password
postgres=# \password pgpool
Enter new password for user "pgpool": 
Enter it again: 
postgres=# \password replica
Enter new password for user "replica": 
Enter it again: 
postgres=# \password postgres
Enter new password for user "postgres": 
Enter it again:
# Grant monitoring privileges to pgpool
postgres=# GRANT pg_monitor TO pgpool;
GRANT ROLE
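Optionally, verify the roles from within the same session; \du lists each role and its memberships (pgpool should show pg_monitor under "Member of"):
postgres=# \du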

2.2.5 Configure the Access Policy

Scope: run on the primary node

vi $PGDATA/pg_hba.conf
host    all             all             192.168.16.0/24         scram-sha-256
host    replication     replica         192.168.16.0/24         scram-sha-256
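Note that pg_hba.conf is only re-read on reload or restart; a reload is enough for these entries to take effect (a minimal example, assuming local access as postgres):

su - postgres -c "psql -c 'SELECT pg_reload_conf();'"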

2.2.6 Configure Passwordless SSH

Scope: run on all nodes

To use Pgpool-II's automatic failover and online recovery, passwordless SSH must be set up from the Pgpool-II execution user (root by default) to the postgres user on every backend server, and between the postgres users on all backend servers. Run the following commands on all nodes to set up passwordless SSH; the generated key file is named id_rsa_pgpool.

# Set the hostname with the command matching each node
# Node 1
hostnamectl set-hostname server1
# Node 2
hostnamectl set-hostname server2
# Node 3
hostnamectl set-hostname server3

# Configure hostname mapping
vi /etc/hosts
192.168.16.220 server1
192.168.16.221 server2
192.168.16.222 server3
# Set the postgres user's password
echo "2lp7VaR9um6g" |passwd --stdin postgres
# Configure passwordless SSH from root to postgres
cd ~/.ssh
ssh-keygen -t rsa -f id_rsa_pgpool
ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
ssh-copy-id -i id_rsa_pgpool.pub postgres@server3
# Configure passwordless SSH from postgres to postgres
su - postgres
cd ~/.ssh
ssh-keygen -t rsa -f id_rsa_pgpool
ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

# After all nodes are configured, verify passwordless login between nodes
ssh postgres@server1 -i ~/.ssh/id_rsa_pgpool
ssh postgres@server2 -i ~/.ssh/id_rsa_pgpool
ssh postgres@server3 -i ~/.ssh/id_rsa_pgpool

2.2.7 Configure Password Files for Replication and Online Recovery

Scope: run on all nodes

To let the replica user perform streaming replication and online recovery without specifying a password, and to run pg_rewind as postgres, create a .pgpass file in the postgres user's home directory on each PostgreSQL server and change its permissions to 600.

su - postgres
vi /var/lib/pgsql/.pgpass
server1:5432:replication:replica:replica
server2:5432:replication:replica:replica
server3:5432:replication:replica:replica
server1:5432:postgres:postgres:1qaz@WSX
server2:5432:postgres:postgres:1qaz@WSX
server3:5432:postgres:postgres:1qaz@WSX

# Restrict the file permissions
chmod 600 /var/lib/pgsql/.pgpass
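As an optional check (assuming the pg_hba.conf entries from section 2.2.5 are active), the following should connect without prompting for a password when run as the postgres user:

psql -h server1 -p 5432 -U postgres -c "SELECT 1;"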

2.2.8 Configure the Node Firewalls

Scope: run on all nodes

Note: if the nodes do not run a firewall, skip this step.

When connecting to Pgpool-II and the PostgreSQL servers, the target ports must be opened in the firewall. The following is an example for CentOS/RHEL 7.

firewall-cmd --permanent --zone=public --add-service=postgresql
firewall-cmd --permanent --zone=public --add-port=9999/tcp --add-port=9898/tcp --add-port=9000/tcp --add-port=9694/udp
firewall-cmd --reload
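Optionally verify that the rules were applied:

firewall-cmd --zone=public --list-services
firewall-cmd --zone=public --list-ports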

2.2.9 Enable Pgpool-II at Boot

Scope: run on all nodes

# Configure Pgpool-II to start automatically on all servers
systemctl enable pgpool.service

2.3 Create Node IDs

Scope: run on all nodes

Starting with Pgpool-II 4.2, all configuration parameters are identical on all nodes. When the watchdog feature is enabled, a pgpool_node_id file is required to distinguish the hosts: create the file and set the pgpool (watchdog) node number (e.g. 0, 1, 2, ...) that identifies each pgpool host.

# Run on server1
echo "0" > /etc/pgpool-II/pgpool_node_id
# Run on server2
echo "1" > /etc/pgpool-II/pgpool_node_id
# Run on server3
echo "2" > /etc/pgpool-II/pgpool_node_id

2.4 Pgpool-II Configuration

With a yum installation, the Pgpool-II configuration file pgpool.conf is installed in /etc/pgpool-II.

Since Pgpool-II 4.2, all configuration parameters are identical on all hosts, so you can edit pgpool.conf on any pgpool node and copy the edited file to the other pgpool nodes.

2.4.1 Cluster Mode

Scope: run on all nodes

Pgpool-II supports several cluster modes, selected with backend_clustering_mode. This example uses streaming replication mode.

vi /etc/pgpool-II/pgpool.conf 
backend_clustering_mode = 'streaming_replication'

2.4.2 Listen Addresses and Ports

Scope: run on all nodes

To allow Pgpool-II to accept all incoming connections, set listen_addresses = '*'.

vi /etc/pgpool-II/pgpool.conf 
listen_addresses = '*'
port = 9999
pcp_listen_addresses = '*'
pcp_port = 9898

2.4.3 Streaming Replication Check

Scope: run on all nodes

Specify the user and password for the replication-delay check in sr_check_user and sr_check_password. In this example we leave sr_check_password empty and create the entry in pool_passwd instead. Since Pgpool-II 4.0, if these parameters are left empty, Pgpool-II first tries to get the password for that user from the pool_passwd file before falling back to an empty password.

vi /etc/pgpool-II/pgpool.conf 
sr_check_period = 10
sr_check_user = 'pgpool'
sr_check_password = ''

2.4.4 Health Check

Scope: run on all nodes

Enable health checks so that Pgpool-II can perform failover. Note that on an unstable network, health checks can fail even when the backends are running normally, which may trigger failover or degeneration. To protect against such false positives we set health_check_max_retries = 3. Specify health_check_user and health_check_password the same way as sr_check_user and sr_check_password.

vi /etc/pgpool-II/pgpool.conf 
health_check_period = 5
health_check_timeout = 30
health_check_user = 'pgpool'
health_check_password = ''
health_check_max_retries = 3

2.4.5 Backend Configuration

Scope: run on all nodes

Configure the PostgreSQL backend information. Multiple backends can be specified by appending a number (0, 1, 2, ...) to the end of each parameter name.

vi /etc/pgpool-II/pgpool.conf 
backend_hostname0 = 'server1' 
backend_port0 = 5432 
backend_weight0 = 1 
backend_data_directory0 = '/var/lib/pgsql/14/data' 
backend_flag0 = 'ALLOW_TO_FAILOVER' 

backend_hostname1 = 'server2' 
backend_port1 = 5432 
backend_weight1 = 1 
backend_data_directory1 = '/var/lib/pgsql/14/data'
backend_flag1 = 'ALLOW_TO_FAILOVER' 

backend_hostname2 = 'server3' 
backend_port2 = 5432 
backend_weight2 = 1 
backend_data_directory2 = '/var/lib/pgsql/14/data' 
backend_flag2 = 'ALLOW_TO_FAILOVER'

If you want the "replication_state" and "replication_sync_state" columns to appear in the SHOW POOL_NODES output, the backend_application_name parameter is required (Pgpool-II 4.1 or later). Here we set it to each backend's hostname.

backend_application_name0 = 'server1'
backend_application_name1 = 'server2'
backend_application_name2 = 'server3'

2.4.6 Failover Configuration

Scope: run on all nodes

Specify the failover.sh script to run after failover in the failover_command parameter. Because we use three PostgreSQL servers, follow_primary_command must also be set to run after a primary failover; with only two PostgreSQL servers, follow_primary_command is not needed.

vi /etc/pgpool-II/pgpool.conf 
failover_command = '/etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R %N %S'
follow_primary_command = '/etc/pgpool-II/follow_primary.sh %d %h %p %D %m %H %M %P %r %R'

After installation, sample failover and follow-primary scripts are available in /etc/pgpool-II; copy them into place and set ownership:

cp -p /etc/pgpool-II/failover.sh{.sample,}
cp -p /etc/pgpool-II/follow_primary.sh{.sample,}
chown postgres:postgres /etc/pgpool-II/{failover.sh,follow_primary.sh}

In the scripts, change the PGHOME variable to the actual PostgreSQL installation directory:

vi /etc/pgpool-II/failover.sh
PGHOME=/usr/pgsql-14


vi /etc/pgpool-II/follow_primary.sh 
PGHOME=/usr/pgsql-14
REPLUSER=replica

Because the PCP commands used in the follow_primary_command script require user authentication, specify the username and md5-encrypted password in pcp.conf, in the format "username:encrypted password".

cat /etc/pgpool-II/follow_primary.sh
PCP_USER=pgpool

# Use pg_md5 to create the encrypted password entry for the pgpool user; here the PCP password is set to 123456
echo 'pgpool:'`pg_md5 123456` >> /etc/pgpool-II/pcp.conf

Because the follow_primary.sh script must execute PCP commands without entering a password, we create a .pcppass file in the home directory of the Pgpool-II startup user (the postgres user) on every server.

su - postgres
echo 'localhost:9898:pgpool:123456' > ~/.pcppass
chmod 600 ~/.pcppass

Note: the follow_primary.sh script does not support tablespaces. If you use tablespaces, modify the script to support them.

Script listings:

failover.sh

#!/bin/bash
# This script is run by failover_command.

set -o xtrace

# Special values:
# 1)  %d = failed node id
# 2)  %h = failed node hostname
# 3)  %p = failed node port number
# 4)  %D = failed node database cluster path
# 5)  %m = new main node id
# 6)  %H = new main node hostname
# 7)  %M = old main node id
# 8)  %P = old primary node id
# 9)  %r = new main port number
# 10) %R = new main database cluster path
# 11) %N = old primary node hostname
# 12) %S = old primary node port number
# 13) %% = '%' character

FAILED_NODE_ID="$1"
FAILED_NODE_HOST="$2"
FAILED_NODE_PORT="$3"
FAILED_NODE_PGDATA="$4"
NEW_MAIN_NODE_ID="$5"
NEW_MAIN_NODE_HOST="$6"
OLD_MAIN_NODE_ID="$7"
OLD_PRIMARY_NODE_ID="$8"
NEW_MAIN_NODE_PORT="$9"
NEW_MAIN_NODE_PGDATA="${10}"
OLD_PRIMARY_NODE_HOST="${11}"
OLD_PRIMARY_NODE_PORT="${12}"

PGHOME=/usr/pgsql-14
REPL_SLOT_NAME=$(echo ${FAILED_NODE_HOST,,} | tr -- -. _)
POSTGRESQL_STARTUP_USER=postgres
SSH_KEY_FILE=id_rsa_pgpool
SSH_OPTIONS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/${SSH_KEY_FILE}"


echo failover.sh: start: failed_node_id=$FAILED_NODE_ID failed_host=$FAILED_NODE_HOST \
    old_primary_node_id=$OLD_PRIMARY_NODE_ID new_main_node_id=$NEW_MAIN_NODE_ID new_main_host=$NEW_MAIN_NODE_HOST

## If there's no main node anymore, skip failover.
if [ $NEW_MAIN_NODE_ID -lt 0 ]; then
    echo failover.sh: All nodes are down. Skipping failover.
    exit 0
fi

## Test passwordless SSH
ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${NEW_MAIN_NODE_HOST} ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    echo failover.sh: passwordless SSH to ${POSTGRESQL_STARTUP_USER}@${NEW_MAIN_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## If Standby node is down, skip failover.
if [ $OLD_PRIMARY_NODE_ID != "-1" -a $FAILED_NODE_ID != $OLD_PRIMARY_NODE_ID ]; then

    # If Standby node is down, drop replication slot.
    ${PGHOME}/bin/psql -h ${OLD_PRIMARY_NODE_HOST} -p ${OLD_PRIMARY_NODE_PORT} postgres \
        -c "SELECT pg_drop_replication_slot('${REPL_SLOT_NAME}');"  >/dev/null 2>&1

    if [ $? -ne 0 ]; then
        echo ERROR: failover.sh: drop replication slot \"${REPL_SLOT_NAME}\" failed. You may need to drop replication slot manually.
    fi

    echo failover.sh: end: standby node is down. Skipping failover.
    exit 0
fi

## Promote Standby node.
echo failover.sh: primary node is down, promote new_main_node_id=$NEW_MAIN_NODE_ID on ${NEW_MAIN_NODE_HOST}.

ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${NEW_MAIN_NODE_HOST} ${PGHOME}/bin/pg_ctl -D ${NEW_MAIN_NODE_PGDATA} -w promote

if [ $? -ne 0 ]; then
    echo ERROR: failover.sh: promote failed
    exit 1
fi

echo failover.sh: end: new_main_node_id=$NEW_MAIN_NODE_ID on ${NEW_MAIN_NODE_HOST} was successfully promoted to primary
exit 0

follow_primary.sh

#!/bin/bash
# This script is run after failover_command to synchronize the Standby with the new Primary.
# First try pg_rewind. If pg_rewind failed, use pg_basebackup.

set -o xtrace

# Special values:
# 1)  %d = node id
# 2)  %h = hostname
# 3)  %p = port number
# 4)  %D = node database cluster path
# 5)  %m = new primary node id
# 6)  %H = new primary node hostname
# 7)  %M = old main node id
# 8)  %P = old primary node id
# 9)  %r = new primary port number
# 10) %R = new primary database cluster path
# 11) %N = old primary node hostname
# 12) %S = old primary node port number
# 13) %% = '%' character

NODE_ID="$1"
NODE_HOST="$2"
NODE_PORT="$3"
NODE_PGDATA="$4"
NEW_PRIMARY_NODE_ID="$5"
NEW_PRIMARY_NODE_HOST="$6"
OLD_MAIN_NODE_ID="$7"
OLD_PRIMARY_NODE_ID="$8"
NEW_PRIMARY_NODE_PORT="$9"
NEW_PRIMARY_NODE_PGDATA="${10}"

PGHOME=/usr/pgsql-14
ARCHIVEDIR=/var/lib/pgsql/archivedir
REPLUSER=replica
PCP_USER=pgpool
PGPOOL_PATH=/usr/bin
PCP_PORT=9898
REPL_SLOT_NAME=$(echo ${NODE_HOST,,} | tr -- -. _)
POSTGRESQL_STARTUP_USER=postgres
SSH_KEY_FILE=id_rsa_pgpool
SSH_OPTIONS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/${SSH_KEY_FILE}"

echo follow_primary.sh: start: Standby node ${NODE_ID}

# Check the connection status of Standby
${PGHOME}/bin/pg_isready -h ${NODE_HOST} -p ${NODE_PORT} > /dev/null 2>&1

if [ $? -ne 0 ]; then
    echo follow_primary.sh: node_id=${NODE_ID} is not running. skipping follow primary command
    exit 0
fi

# Test passwordless SSH
ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${NEW_PRIMARY_NODE_HOST} ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    echo follow_primary.sh: passwordless SSH to ${POSTGRESQL_STARTUP_USER}@${NEW_PRIMARY_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

# Get PostgreSQL major version
PGVERSION=`${PGHOME}/bin/initdb -V | awk '{print $3}' | sed 's/\..*//' | sed 's/\([0-9]*\)[a-zA-Z].*/\1/'`

if [ $PGVERSION -ge 12 ]; then
    RECOVERYCONF=${NODE_PGDATA}/myrecovery.conf
else
    RECOVERYCONF=${NODE_PGDATA}/recovery.conf
fi

# Synchronize Standby with the new Primary.
echo follow_primary.sh: pg_rewind for node ${NODE_ID}

# Run checkpoint command to update control file before running pg_rewind
${PGHOME}/bin/psql -h ${NEW_PRIMARY_NODE_HOST} -p ${NEW_PRIMARY_NODE_PORT} postgres -c "checkpoint;"

# Create replication slot "${REPL_SLOT_NAME}"
${PGHOME}/bin/psql -h ${NEW_PRIMARY_NODE_HOST} -p ${NEW_PRIMARY_NODE_PORT} postgres \
    -c "SELECT pg_create_physical_replication_slot('${REPL_SLOT_NAME}');"  >/dev/null 2>&1

if [ $? -ne 0 ]; then
    echo follow_primary.sh: create replication slot \"${REPL_SLOT_NAME}\" failed. You may need to create replication slot manually.
fi

ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${NODE_HOST} "

    set -o errexit

    ${PGHOME}/bin/pg_ctl -w -m f -D ${NODE_PGDATA} stop

    ${PGHOME}/bin/pg_rewind -D ${NODE_PGDATA} --source-server=\"user=${POSTGRESQL_STARTUP_USER} host=${NEW_PRIMARY_NODE_HOST} port=${NEW_PRIMARY_NODE_PORT} dbname=postgres\"
    [ -d \"${NODE_PGDATA}\" ] && rm -rf ${NODE_PGDATA}/pg_replslot/*

    cat > ${RECOVERYCONF} << EOT
primary_conninfo = 'host=${NEW_PRIMARY_NODE_HOST} port=${NEW_PRIMARY_NODE_PORT} user=${REPLUSER} application_name=${NODE_HOST} passfile=''/var/lib/pgsql/.pgpass'''
recovery_target_timeline = 'latest'
restore_command = 'scp ${SSH_OPTIONS} ${NEW_PRIMARY_NODE_HOST}:${ARCHIVEDIR}/%f %p'
primary_slot_name = '${REPL_SLOT_NAME}'
EOT

    if [ ${PGVERSION} -ge 12 ]; then
        sed -i -e \"\\\$ainclude_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'\" \
               -e \"/^include_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'/d\" ${NODE_PGDATA}/postgresql.conf
        touch ${NODE_PGDATA}/standby.signal
    else
        echo \"standby_mode = 'on'\" >> ${RECOVERYCONF}
    fi

    ${PGHOME}/bin/pg_ctl -l /dev/null -w -D ${NODE_PGDATA} start

"

# If pg_rewind failed, try pg_basebackup 
if [ $? -ne 0 ]; then
    echo follow_primary.sh: end: pg_rewind failed. Try pg_basebackup.

    ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${NODE_HOST} "

        set -o errexit

        [ -d \"${NODE_PGDATA}\" ] && rm -rf ${NODE_PGDATA}
        [ -d \"${ARCHIVEDIR}\" ] && rm -rf ${ARCHIVEDIR}/*
        ${PGHOME}/bin/pg_basebackup -h ${NEW_PRIMARY_NODE_HOST} -U $REPLUSER -p ${NEW_PRIMARY_NODE_PORT} -D ${NODE_PGDATA} -X stream

        cat > ${RECOVERYCONF} << EOT
primary_conninfo = 'host=${NEW_PRIMARY_NODE_HOST} port=${NEW_PRIMARY_NODE_PORT} user=${REPLUSER} application_name=${NODE_HOST} passfile=''/var/lib/pgsql/.pgpass'''
recovery_target_timeline = 'latest'
restore_command = 'scp ${SSH_OPTIONS} ${NEW_PRIMARY_NODE_HOST}:${ARCHIVEDIR}/%f %p'
primary_slot_name = '${REPL_SLOT_NAME}'
EOT

        if [ ${PGVERSION} -ge 12 ]; then
            sed -i -e \"\\\$ainclude_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'\" \
                   -e \"/^include_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'/d\" ${NODE_PGDATA}/postgresql.conf
            touch ${NODE_PGDATA}/standby.signal
        else
            echo \"standby_mode = 'on'\" >> ${RECOVERYCONF}
        fi
        sed -i \
            -e \"s/#*port = .*/port = ${NODE_PORT}/\" \
            -e \"s@#*archive_command = .*@archive_command = 'cp \\\"%p\\\" \\\"${ARCHIVEDIR}/%f\\\"'@\" \
            ${NODE_PGDATA}/postgresql.conf
    "

    if [ $? -ne 0 ]; then

        # drop replication slot
        ${PGHOME}/bin/psql -h ${NEW_PRIMARY_NODE_HOST} -p ${NEW_PRIMARY_NODE_PORT} postgres \
            -c "SELECT pg_drop_replication_slot('${REPL_SLOT_NAME}');"  >/dev/null 2>&1

        if [ $? -ne 0 ]; then
            echo ERROR: follow_primary.sh: drop replication slot \"${REPL_SLOT_NAME}\" failed. You may need to drop replication slot manually.
        fi

        echo follow_primary.sh: end: pg_basebackup failed
        exit 1
    fi

    # start Standby node on ${NODE_HOST}
    ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${NODE_HOST} $PGHOME/bin/pg_ctl -l /dev/null -w -D ${NODE_PGDATA} start

fi

# If start Standby successfully, attach this node
if [ $? -eq 0 ]; then

    # Run pcp_attach_node to attach the Standby node to Pgpool-II.
    ${PGPOOL_PATH}/pcp_attach_node -w -h localhost -U $PCP_USER -p ${PCP_PORT} -n ${NODE_ID}

    if [ $? -ne 0 ]; then
        echo ERROR: follow_primary.sh: end: pcp_attach_node failed
        exit 1
    fi

else

    # If start Standby failed, drop replication slot "${REPL_SLOT_NAME}"
    ${PGHOME}/bin/psql -h ${NEW_PRIMARY_NODE_HOST} -p ${NEW_PRIMARY_NODE_PORT} postgres \
        -c "SELECT pg_drop_replication_slot('${REPL_SLOT_NAME}');"  >/dev/null 2>&1

    if [ $? -ne 0 ]; then
        echo ERROR: follow_primary.sh: drop replication slot \"${REPL_SLOT_NAME}\" failed. You may need to drop replication slot manually.
    fi

    echo ERROR: follow_primary.sh: end: follow primary command failed
    exit 1
fi

echo follow_primary.sh: end: follow primary command is completed successfully
exit 0

2.4.7 Pgpool-II Online Recovery Configuration

Scope: run on all nodes

To perform online recovery with Pgpool-II, specify the PostgreSQL username and the online recovery command recovery_1st_stage. Because online recovery requires superuser privileges in PostgreSQL, we set recovery_user to postgres. We then create recovery_1st_stage and pgpool_remote_start under the database cluster directory of the PostgreSQL primary (server1) and make them executable. If recovery_password is empty, the user's password is looked up in pool_passwd by default.

vi /etc/pgpool-II/pgpool.conf 
recovery_user = 'postgres' 
recovery_password = '' 
recovery_1st_stage_command = 'recovery_1st_stage'

# Copy the script templates
cp -p /etc/pgpool-II/recovery_1st_stage.sample /var/lib/pgsql/14/data/recovery_1st_stage 
cp -p /etc/pgpool-II/pgpool_remote_start.sample /var/lib/pgsql/14/data/pgpool_remote_start 
chown postgres:postgres /var/lib/pgsql/14/data/{recovery_1st_stage,pgpool_remote_start}

Note: the recovery_1st_stage script does not support tablespaces. If you use tablespaces, you need to modify the script to support them.

In the scripts, change the PGHOME variable to the actual PostgreSQL installation directory and set the replication user:

vi /var/lib/pgsql/14/data/recovery_1st_stage
PGHOME=/usr/pgsql-14
REPLUSER=replica

vi /var/lib/pgsql/14/data/pgpool_remote_start
PGHOME=/usr/pgsql-14

To use the online recovery feature, the functions pgpool_recovery, pgpool_remote_start, and pgpool_switch_xlog are required, so we install the pgpool_recovery extension into template1 on server1.

su - postgres
psql template1 -c "CREATE EXTENSION pgpool_recovery"
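Optionally confirm the extension is registered (a minimal check; extensions installed in template1 are copied into databases created afterwards):

psql template1 -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'pgpool_recovery'"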

Script listings:

recovery_1st_stage

#!/bin/bash
# This script is executed by "recovery_1st_stage" to recover a Standby node.

set -o xtrace

PRIMARY_NODE_PGDATA="$1"
DEST_NODE_HOST="$2"
DEST_NODE_PGDATA="$3"
PRIMARY_NODE_PORT="$4"
DEST_NODE_ID="$5"
DEST_NODE_PORT="$6"
PRIMARY_NODE_HOST="$7"

PGHOME=/usr/pgsql-14
ARCHIVEDIR=/var/lib/pgsql/archivedir
REPLUSER=replica
REPL_SLOT_NAME=$(echo ${DEST_NODE_HOST,,} | tr -- -. _)
POSTGRESQL_STARTUP_USER=postgres
SSH_KEY_FILE=id_rsa_pgpool
SSH_OPTIONS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/${SSH_KEY_FILE}"

echo recovery_1st_stage: start: pg_basebackup for Standby node $DEST_NODE_ID

## Test passwordless SSH
ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${DEST_NODE_HOST} ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    echo recovery_1st_stage: passwordless SSH to ${POSTGRESQL_STARTUP_USER}@${DEST_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Get PostgreSQL major version
PGVERSION=`${PGHOME}/bin/initdb -V | awk '{print $3}' | sed 's/\..*//' | sed 's/\([0-9]*\)[a-zA-Z].*/\1/'`
if [ $PGVERSION -ge 12 ]; then
    RECOVERYCONF=${DEST_NODE_PGDATA}/myrecovery.conf
else
    RECOVERYCONF=${DEST_NODE_PGDATA}/recovery.conf
fi

## Create replication slot "${REPL_SLOT_NAME}"
${PGHOME}/bin/psql -h ${PRIMARY_NODE_HOST} -p ${PRIMARY_NODE_PORT} postgres \
    -c "SELECT pg_create_physical_replication_slot('${REPL_SLOT_NAME}');"  >/dev/null 2>&1

if [ $? -ne 0 ]; then
    echo ERROR: recovery_1st_stage: create replication slot \"${REPL_SLOT_NAME}\" failed. You may need to create replication slot manually.
fi

## Execute pg_basebackup to recovery Standby node
ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@$DEST_NODE_HOST "

    set -o errexit

    rm -rf $DEST_NODE_PGDATA
    rm -rf $ARCHIVEDIR/*

    ${PGHOME}/bin/pg_basebackup -h $PRIMARY_NODE_HOST -U $REPLUSER -p $PRIMARY_NODE_PORT -D $DEST_NODE_PGDATA -X stream

    cat > ${RECOVERYCONF} << EOT
primary_conninfo = 'host=${PRIMARY_NODE_HOST} port=${PRIMARY_NODE_PORT} user=${REPLUSER} application_name=${DEST_NODE_HOST} passfile=''/var/lib/pgsql/.pgpass'''
recovery_target_timeline = 'latest'
restore_command = 'scp ${SSH_OPTIONS} ${PRIMARY_NODE_HOST}:${ARCHIVEDIR}/%f %p'
primary_slot_name = '${REPL_SLOT_NAME}'
EOT

    if [ ${PGVERSION} -ge 12 ]; then
        sed -i -e \"\\\$ainclude_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'\" \
               -e \"/^include_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'/d\" ${DEST_NODE_PGDATA}/postgresql.conf
        touch ${DEST_NODE_PGDATA}/standby.signal
    else
        echo \"standby_mode = 'on'\" >> ${RECOVERYCONF}
    fi

    sed -i \
        -e \"s/#*port = .*/port = ${DEST_NODE_PORT}/\" \
        -e \"s@#*archive_command = .*@archive_command = 'cp \\\"%p\\\" \\\"${ARCHIVEDIR}/%f\\\"'@\" \
        ${DEST_NODE_PGDATA}/postgresql.conf
"

if [ $? -ne 0 ]; then

    ${PGHOME}/bin/psql -h ${PRIMARY_NODE_HOST} -p ${PRIMARY_NODE_PORT} postgres \
        -c "SELECT pg_drop_replication_slot('${REPL_SLOT_NAME}');"  >/dev/null 2>&1

    if [ $? -ne 0 ]; then
        echo ERROR: recovery_1st_stage: drop replication slot \"${REPL_SLOT_NAME}\" failed. You may need to drop replication slot manually.
    fi

    echo ERROR: recovery_1st_stage: end: pg_basebackup failed. online recovery failed
    exit 1
fi

echo recovery_1st_stage: end: recovery_1st_stage is completed successfully
exit 0

pgpool_remote_start

#!/bin/bash
# This script is run after recovery_1st_stage to start Standby node.

set -o xtrace

DEST_NODE_HOST="$1"
DEST_NODE_PGDATA="$2"

PGHOME=/usr/pgsql-14
POSTGRESQL_STARTUP_USER=postgres
SSH_KEY_FILE=id_rsa_pgpool
SSH_OPTIONS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/${SSH_KEY_FILE}"

echo pgpool_remote_start: start: remote start Standby node $DEST_NODE_HOST

## Test passwordless SSH
ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${DEST_NODE_HOST} ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    echo ERROR: pgpool_remote_start: passwordless SSH to ${POSTGRESQL_STARTUP_USER}@${DEST_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Start Standby node
ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${DEST_NODE_HOST} "
    $PGHOME/bin/pg_ctl -l /dev/null -w -D ${DEST_NODE_PGDATA} status

    if [ \$? -eq 0 ]; then
        exit 0
    fi

    $PGHOME/bin/pg_ctl -l /dev/null -w -D ${DEST_NODE_PGDATA} start
"

if [ $? -ne 0 ]; then
    echo ERROR: pgpool_remote_start: ${DEST_NODE_HOST} PostgreSQL start failed.
    exit 1
fi

echo pgpool_remote_start: end: PostgreSQL on ${DEST_NODE_HOST} is started successfully.
exit 0

2.4.8 Client Authentication

Scope: run on all nodes

Because we set the PostgreSQL authentication method to scram-sha-256 in the initialization chapter, client authentication must also be configured through Pgpool-II in order to connect to the backend nodes. With an RPM installation, the Pgpool-II configuration file pool_hba.conf is located in /etc/pgpool-II. pool_hba authentication is disabled by default; set enable_pool_hba = on to enable it.

vi /etc/pgpool-II/pgpool.conf
enable_pool_hba = on

vi /etc/pgpool-II/pool_hba.conf
host    all         pgpool      192.168.16.0/24       scram-sha-256
host    all         postgres    192.168.16.0/24       scram-sha-256

The default password file for authentication is pool_passwd. To use scram-sha-256 authentication, a decryption key is needed to decrypt the stored passwords. Create a .pgpoolkey file in the home directory of the Pgpool-II startup user, postgres (Pgpool-II 4.1 or later; with Pgpool-II 4.0 or earlier, Pgpool-II starts as root by default).

su - postgres 
echo 'asdfasdfasdfasdfa' > ~/.pgpoolkey
chmod 600 ~/.pgpoolkey

Run pg_enc -m -k /path/to/.pgpoolkey -u username -p to register a username and AES-encrypted password in the pool_passwd file. If pool_passwd does not exist yet, it is created in the same directory as pgpool.conf.

# The passwords entered here are the database passwords created earlier
su - postgres
pg_enc -m -k ~/.pgpoolkey -u pgpool -p
db password: (enter the pgpool password)
pg_enc -m -k ~/.pgpoolkey -u postgres -p
db password: (enter the postgres password)


cat /etc/pgpool-II/pool_passwd 
pgpool:AESsRYTuf0H/fATkILBq1Us0A==
postgres:AES7Gd9cCfEFJp4ePP0YI7m3w==

2.4.9 Watchdog Configuration

Scope: run on all nodes

vi /etc/pgpool-II/pgpool.conf
# Enable the watchdog
use_watchdog = on
# Virtual IP
delegate_IP = '192.168.16.250'
# Command paths
if_cmd_path = '/sbin'
arping_path = '/usr/sbin'
# Commands used to bring the VIP up and down
if_up_cmd = '/usr/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'
if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth0'
arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth0'
# Watchdog node information
hostname0 = 'server1'
wd_port0 = 9000
pgpool_port0 = 9999

hostname1 = 'server2'
wd_port1 = 9000
pgpool_port1 = 9999

hostname2 = 'server3'
wd_port2 = 9000
pgpool_port2 = 9999
# Life-check method and check interval
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
# Heartbeat send/receive hosts
heartbeat_hostname0 = 'server1'
heartbeat_port0 = 9694
heartbeat_device0 = 'eth0'
heartbeat_hostname1 = 'server2'
heartbeat_port1 = 9694
heartbeat_device1 = 'eth0'
heartbeat_hostname2 = 'server3'
heartbeat_port2 = 9694
heartbeat_device2 = 'eth0'
# Heartbeat keepalive interval and dead-time timeout
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30

Notes:

1) If the number of watchdog nodes is even, the enable_consensus_with_half_votes parameter must be enabled.

2) If use_watchdog = on, make sure the pgpool node number is specified in the pgpool_node_id file.

If Pgpool-II was installed from RPM, the postgres system user is already configured to run ip/arping via sudo without a password:

visudo
postgres ALL=NOPASSWD: /sbin/ip 
postgres ALL=NOPASSWD: /usr/sbin/arping
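A quick way to confirm the sudo rules work is to run ip via sudo as postgres; with -n, sudo fails instead of prompting if a password would still be required:

su - postgres -c "sudo -n /sbin/ip addr show"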

When the watchdog process terminates abnormally, the virtual IP may be "up" on both the old and the new active pgpool node. To prevent this, configure wd_escalation_command to bring down the virtual IP on the other pgpool nodes before bringing it up on the new active pgpool node.

# Copy the script template
cp -p /etc/pgpool-II/escalation.sh{.sample,}
# Edit the network device and VIP settings in the script
# Note: PGPOOLS holds the node hostnames, not IPs, because the script compares against the hostname
vi /etc/pgpool-II/escalation.sh
PGPOOLS=(server1 server2 server3)

VIP=192.168.16.250
DEVICE=eth0
# Make the script owned by the postgres user
chown postgres:postgres /etc/pgpool-II/escalation.sh

# Configure the script path
vi /etc/pgpool-II/pgpool.conf
wd_escalation_command = '/etc/pgpool-II/escalation.sh'

Script listing:

escalation.sh

#!/bin/bash
# This script is run by wd_escalation_command to bring down the virtual IP on other pgpool nodes
# before bringing up the virtual IP on the new active pgpool node.

set -o xtrace

POSTGRESQL_STARTUP_USER=postgres
SSH_KEY_FILE=id_rsa_pgpool
SSH_OPTIONS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.ssh/${SSH_KEY_FILE}"
SSH_TIMEOUT=5
PGPOOLS=(server1 server2 server3)

VIP=192.168.16.250
DEVICE=eth0

for pgpool in "${PGPOOLS[@]}"; do
    [ "$HOSTNAME" = "${pgpool}" ] && continue

    timeout ${SSH_TIMEOUT} ssh -T ${SSH_OPTIONS} ${POSTGRESQL_STARTUP_USER}@${pgpool} "
        /usr/bin/sudo /sbin/ip addr del ${VIP}/24 dev ${DEVICE}
    "
    if [ $? -ne 0 ]; then
        echo ERROR: escalation.sh: failed to release VIP on ${pgpool}.
    fi
done
exit 0

2.4.10 Logging Configuration

Scope: run on all nodes

Pgpool-II 4.2 introduced a log collector process. In this example we enable the logging collector.

vi /etc/pgpool-II/pgpool.conf
log_destination = 'stderr'
logging_collector = on
log_directory = '/var/log/pgpool_log'
log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 10MB

# Create the log directory
mkdir /var/log/pgpool_log/
chown postgres:postgres /var/log/pgpool_log/

2.4.11 Configure /etc/sysconfig/pgpool

Scope: run on all nodes

When Pgpool-II starts, if the pgpool_status file exists, Pgpool-II reads the backend status (up/down) from that file.

If you want Pgpool-II to ignore the pgpool_status file at startup, add "-D" to the OPTS startup options in /etc/sysconfig/pgpool.

vi /etc/sysconfig/pgpool
OPTS=" -D -n"

2.5 Starting and Stopping Pgpool-II

Scope: run on all nodes

# Start pgpool
# Automatic startup was already enabled in an earlier section; to start Pgpool-II, either reboot the whole system or run:
systemctl start pgpool.service


# To stop pgpool:
systemctl stop pgpool.service

Check that the VIP has been created.
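For example, on the node that currently holds the active (leader) watchdog, the delegate IP should appear on eth0:

ip addr show eth0 | grep 192.168.16.250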

2.6 Set Up the PostgreSQL Standby Servers

Scope: run on the primary node

We set up the PostgreSQL standby servers using Pgpool-II's online recovery feature. Make sure the recovery_1st_stage and pgpool_remote_start scripts used by the pcp_recovery_node command are in the database cluster directory of the PostgreSQL primary (server1).

# pcp_recovery_node -h 192.168.16.220 -p 9898 -U pgpool -n 1
Password: 123456
pcp_recovery_node -- Command Successful

# pcp_recovery_node -h 192.168.16.220 -p 9898 -U pgpool -n 2
Password:
pcp_recovery_node -- Command Successful

After running pcp_recovery_node, verify that server2 and server3 have started as PostgreSQL standby servers.

psql -h 192.168.16.250 -p 9999 -U pgpool postgres -c "show pool_nodes"

3 Cluster Failover

3.1 Watchdog Active/Standby Switchover

Check the current watchdog status with the pcp_watchdog_info command.

# Run on server1
pcp_watchdog_info -h 192.168.16.220 -p 9898 -U pgpool

Stop the active watchdog's Pgpool-II on server1; server2 or server3 will then be promoted to the active watchdog.

# Run on server1
# Stop the active watchdog's Pgpool-II
systemctl stop pgpool.service

# Run on server3
# Check the watchdog status
pcp_watchdog_info -h 192.168.16.222 -p 9898 -U pgpool
# Check the VIP
ip a

Check the VIP on server3.

Start the stopped Pgpool-II on server1 again and verify that server1 now runs as a standby watchdog.

# Run on server1
# Start Pgpool-II again
systemctl start pgpool.service
# Check the watchdog status
pcp_watchdog_info -h 192.168.16.220 -p 9898 -U pgpool

3.2 Database Failover

Connect to the backend databases through the VIP and check the node information.

# Check from server3
psql -h 192.168.16.250 -p 9999 -U pgpool postgres -c "show pool_nodes"

The output shows that server1 is the primary node; stop it to simulate an outage.

systemctl stop postgresql-14
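The failover can be observed in the Pgpool-II log on the active node (log location as configured in section 2.4.10; because the scripts run with xtrace, the failover.sh output also lands there):

tail -f /var/log/pgpool_log/pgpool-*.log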

After the server1 primary is stopped, server2 is promoted to primary.

psql -h 192.168.16.250 -p 9999 -U pgpool postgres -c "show pool_nodes"

server3 keeps running as a standby and now follows the new primary, server2.

# Check the primary/standby status of server3 and server2
psql -h server3 -p 5432 -U pgpool postgres -c "select pg_is_in_recovery()"
psql -h server2 -p 5432 -U pgpool postgres -c "select pg_is_in_recovery()"
# Check the streaming replication status on server2
psql -h server2 -p 5432 -U pgpool postgres -c "select * from pg_stat_replication" -x

3.3 Online Recovery of the Database

Here we use Pgpool-II's online recovery feature to restore server1 (the old primary) as a standby. Before recovering the old primary, make sure the recovery_1st_stage and pgpool_remote_start scripts exist in the database cluster directory of the current primary, server2.

# The following commands can be run on any of the three nodes whose pgpool service is running
pcp_recovery_node -h 192.168.16.220 -p 9898 -U pgpool -n 0
Password:
pcp_recovery_node -- Command Successful
# Check the node information
psql -h 192.168.16.250 -p 9999 -U pgpool postgres -c "show pool_nodes"

3.4 Data Synchronization Verification

Log in to the database through the VIP, create a database and a table, insert some rows, and then check the results on the standby nodes (a direct standby query follows the session below).

psql -h 192.168.16.250 -p 9999 -U postgres postgres 

# Create a database
create database test owner postgres encoding UTF8;
# Switch to the test database
\c test
# Create a table
create table tb_1 (id int8,create_time timestamp(0) without time zone);
# Insert rows
insert into tb_1 values (1,now());
insert into tb_1 values (2,now());
insert into tb_1 values (3,now());
# Query the data
select * from tb_1 ;

Check how the queries were distributed across the pool nodes:

psql -h 192.168.16.250 -p 9999 -U pgpool postgres -c "show pool_nodes"
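To confirm the rows actually reached the standbys, you can also bypass Pgpool-II and query a standby directly (here server3, assumed to still be a standby at this point):

psql -h server3 -p 5432 -U postgres -d test -c "select * from tb_1"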

Official reference: Pgpool-II + Watchdog Setup Example
