Pacemaker + Corosync + PostgreSQL: Building a One-Master, Two-Slave Cluster

Test Environment

OS: CentOS Linux release 7.7.1908 (Core)
Database: PostgreSQL 9.6.12

I. Operating System Configuration (pg1 && pg2 && pg3)

  1. Disable the firewall and SELinux (a command sketch follows at the end of this section)
  2. Synchronize the system time
  3. Edit the hosts file
 # more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.176 pg1
192.168.10.177 pg2
192.168.10.178 pg3
192.168.10.170 vip-master
192.168.10.171 vip-slave
  4. Configure system limits and kernel parameters
# vim /etc/security/limits.conf
*  soft   stack    10240
*  hard   stack    10240
*  soft    nofile  131072  
*  hard    nofile  131072  
*  soft    nproc   131072  
*  hard    nproc   131072  
*  soft    core    unlimited  
*  hard    core    unlimited 
# vim /etc/sysctl.conf
kernel.sem = 50100 64128000 50100 1280
# sysctl -p
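
Steps 1 and 2 above do not list explicit commands. A minimal sketch, assuming firewalld is the firewall and chronyd handles time synchronization (adjust to your environment), plus a quick check that the limits and semaphore settings are in effect:

# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0                      # SELinux permissive for the current boot
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots
# systemctl enable chronyd && systemctl restart chronyd
# ulimit -n ; ulimit -u             # re-login first; should show 131072 for nofile/nproc
# sysctl kernel.sem                 # should show 50100 64128000 50100 1280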

II. Install the High-Availability Cluster Packages (pg1 && pg2 && pg3)

  1. Install the cluster packages
# yum install -y pacemaker corosync pcs ipvsadm
  2. Start pcsd and enable the cluster services
# systemctl start pcsd
# systemctl enable pcsd
# systemctl enable corosync
# systemctl enable pacemaker
  3. Set the hacluster user's password
# echo hacluster|passwd hacluster --stdin
  4. Authenticate the cluster nodes (run on any one node)
# pcs cluster auth -u hacluster -p hacluster pg1 pg2 pg3
pg3: Authorized
pg2: Authorized
pg1: Authorized
#
  5. Set up the cluster and synchronize the configuration (run on any one node)
# pcs cluster setup --last_man_standing=1 --name pgcluster pg1 pg2 pg3
Destroying cluster on nodes: pg1, pg2, pg3...
pg1: Stopping Cluster (pacemaker)...
pg3: Stopping Cluster (pacemaker)...
pg2: Stopping Cluster (pacemaker)...
pg2: Successfully destroyed cluster
pg1: Successfully destroyed cluster
pg3: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'pg1', 'pg2', 'pg3'
pg2: successful distribution of the file 'pacemaker_remote authkey'
pg1: successful distribution of the file 'pacemaker_remote authkey'
pg3: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
pg1: Succeeded
pg2: Succeeded
pg3: Succeeded
Synchronizing pcsd certificates on nodes pg1, pg2, pg3...
pg3: Success
pg2: Success
pg1: Success
Restarting pcsd on the nodes in order to reload the certificates...
pg3: Success
pg2: Success
pg1: Success
# 
  6. Start the cluster (run on any one node)
# pcs cluster start --all
pg1: Starting Cluster (corosync)...
pg2: Starting Cluster (corosync)...
pg3: Starting Cluster (corosync)...
pg1: Starting Cluster (pacemaker)...
pg3: Starting Cluster (pacemaker)...
pg2: Starting Cluster (pacemaker)...
#
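
Before adding resources, it is worth confirming that all three nodes have come online; for example, on any node:

# pcs status                     # all three nodes should show as Online
# corosync-cfgtool -s            # ring status of the local corosync instance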

III. Install the PostgreSQL Database (pg1 && pg2 && pg3)

  1. Install PostgreSQL

Database version: PostgreSQL releases below 11.x (9.6.12 is used here)

# yum install gcc-c++ readline-devel zlib-devel
# tar zxvf postgresql-9.6.12.tar.gz
# cd postgresql-9.6.12
# ./configure --prefix=/opt/PostgreSQL/9.6/
# make -j 2
# make install
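
A quick sanity check of the freshly built binaries, assuming the prefix used above:

# /opt/PostgreSQL/9.6/bin/psql --version         # should report the 9.6.x release just compiled
# /opt/PostgreSQL/9.6/bin/pg_config --version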
  2. Configure the OS user and data directory (pg1 && pg2 && pg3)
# useradd postgres
# passwd postgres
# vim /home/postgres/.bash_profile
Add:
export PGDATA=/opt/PostgreSQL/9.6/data
export PATH=/opt/PostgreSQL/9.6/bin:$PATH
# mkdir /opt/PostgreSQL/9.6/data
# chown postgres.postgres /opt/PostgreSQL/9.6/data
# chmod 700 /opt/PostgreSQL/9.6/data
  3. Initialize the master node (pg1)
# su - postgres
$ initdb -D /opt/PostgreSQL/9.6/data
$ cd /opt/PostgreSQL/9.6/data
  4. Configure database parameters

# vim postgresql.conf

hot_standby = on       # versions before PG 10 must enable this parameter explicitly
listen_addresses = '*'
wal_level = logical
wal_log_hints = on
max_wal_size = 10GB
min_wal_size = 80MB
checkpoint_completion_target = 0.9 
archive_mode = on
archive_command = '/bin/true'
wal_keep_segments = 1000
synchronous_standby_names = ''
hot_standby_feedback = on 
logging_collector = on
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_size = 0
log_min_duration_statement = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_line_prefix = '%t [%p]: db=%d,user=%u,app=%a,client=%h '
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
lc_messages = 'en_US.UTF-8'

# vim pg_hba.conf

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             0.0.0.0/0               md5
# IPv6 local connections:
#host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            md5
host    replication     repluser        192.168.10.0/24         md5
  5. Start the master database
[root@pg1 ~]# su - postgres
[postgres@pg1 ~]$ 
[postgres@pg1 ~]$ pg_ctl start -D /opt/PostgreSQL/9.6/data
  6. Create the replication user
postgres=# create user repluser with replication password 'repluser';
CREATE ROLE
postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 repluser  | Replication                                                | {}
postgres=# 
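
Before taking the base backups, it can help to confirm from a standby host that pg_hba.conf and the network allow repluser in. A minimal check from pg2 (this exercises the ordinary md5 rule; the pg_basebackup in the next step is the real test of the replication entry), with an optional ~/.pgpass entry to avoid interactive password prompts:

[postgres@pg2 ~]$ psql -h pg1 -p 5432 -U repluser -d postgres -c 'select 1;'
[postgres@pg2 ~]$ echo 'pg1:5432:replication:repluser:repluser' >> ~/.pgpass && chmod 600 ~/.pgpass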
  7. Create the slaves (pg2 && pg3)
# su - postgres
$ pg_basebackup -h pg1 -U repluser -p 5432 -D /opt/PostgreSQL/9.6/data --xlog-method=stream --checkpoint=fast --progress --verbose   # 9.6 uses --xlog-method; it is named --wal-method only from PG 10 on
  8. Stop the master (pg1)
[postgres@pg1 data]$ pg_ctl stop -D /opt/PostgreSQL/9.6/data
waiting for server to shut down.... done
server stopped
[postgres@pg1 data]$ pg_ctl status -D /opt/PostgreSQL/9.6/data
pg_ctl: no server running
[postgres@pg1 data]$ 

Note:
Do not configure the database to start automatically at boot; the cluster software is responsible for starting it.

IV. Configure the Cluster Resources

# vim cluster_setup.sh

pcs cluster cib pgsql_cfg
pcs -f pgsql_cfg property set no-quorum-policy="ignore"           
pcs -f pgsql_cfg property set stonith-enabled="false"                      
pcs -f pgsql_cfg resource defaults resource-stickiness="INFINITY"      
pcs -f pgsql_cfg resource defaults migration-threshold="3"                         
#### vip-master ###               
pcs -f pgsql_cfg resource create vip-master IPaddr2 ip="192.168.10.170" cidr_netmask="24" \
op start  timeout="60s" interval="0s"  on-fail="restart"    \
op monitor timeout="60s" interval="10s" on-fail="restart"    \
op stop    timeout="60s" interval="0s"  on-fail="block"  
#### vip-slave ###                                         
pcs -f pgsql_cfg resource create vip-slave IPaddr2 ip="192.168.10.171" cidr_netmask="24" \
op start   timeout="60s" interval="0s"  on-fail="restart"    \
op monitor timeout="60s" interval="10s" on-fail="restart"    \
op stop    timeout="60s" interval="0s"  on-fail="block" 
#### pgsql resource ####                            
pcs -f pgsql_cfg resource create pgsql pgsql \
pgctl="/opt/PostgreSQL/9.6/bin/pg_ctl" \
psql="/opt/PostgreSQL/9.6/bin/psql" \
pgdata="/opt/PostgreSQL/9.6/data/" \
config="/opt/PostgreSQL/9.6/data/postgresql.conf" \
rep_mode="sync" node_list="pg1 pg2 pg3" master_ip="192.168.10.170"  \
repuser="repluser" \
primary_conninfo_opt="password=repluser \
keepalives_idle=60 keepalives_interval=5 keepalives_count=5" \
restart_on_promote='true' \
op start   timeout="60s" interval="0s"  on-fail="restart" \
op monitor timeout="60s" interval="4s" on-fail="restart" \
op monitor timeout="60s" interval="3s" on-fail="restart" role="Master" \
op promote timeout="60s" interval="0s"  on-fail="restart" \
op demote  timeout="60s" interval="0s"  on-fail="stop" \
op stop    timeout="60s" interval="0s"  on-fail="block"  
#### setting master #####
pcs -f pgsql_cfg resource master pgsql-cluster pgsql master-max=1 master-node-max=1 clone-max=3 clone-node-max=1 notify=true
#### master group #####
pcs -f pgsql_cfg resource group add master-group vip-master 
#### slave group #####      
pcs -f pgsql_cfg resource group add slave-group vip-slave 
#### master group setting #####             
pcs -f pgsql_cfg constraint colocation add master-group with master pgsql-cluster INFINITY  
pcs -f pgsql_cfg constraint order promote pgsql-cluster then start master-group symmetrical=false score=INFINITY                                                                                              
pcs -f pgsql_cfg constraint order demote  pgsql-cluster then stop  master-group symmetrical=false score=0 
#### slave-group  setting  #####                                                                                              
pcs -f pgsql_cfg constraint colocation add slave-group with slave pgsql-cluster INFINITY        
pcs -f pgsql_cfg constraint order promote pgsql-cluster then start slave-group symmetrical=false score=INFINITY                                                                                               
pcs -f pgsql_cfg constraint order demote  pgsql-cluster then stop  slave-group symmetrical=false score=0
#### push config ####
pcs cluster cib-push pgsql_cfg

# chmod +x cluster_setup.sh
# sh cluster_setup.sh
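
After the script runs, the resources and constraints it pushed can be reviewed before relying on them, e.g.:

# pcs resource show --full       # full definitions of pgsql-cluster, vip-master, vip-slave
# pcs constraint show            # the colocation and ordering rules created above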

V. Common Commands for Checking Cluster Status

# pcs status corosync

Membership information
----------------------
    Nodeid      Votes Name
         1          1 pg1 (local)
         2          1 pg2
         3          1 pg3

# pcs property list

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: pgcluster
 dc-version: 1.1.23-1.el7_9.1-9acf116022
 have-watchdog: false
 last-lrm-refresh: 1614233039
 no-quorum-policy: ignore
 stonith-enabled: false
Node Attributes:
 pg1: pgsql-data-status=LATEST
 pg2: pgsql-data-status=STREAMING|SYNC
 pg3: pgsql-data-status=STREAMING|ASYNC

# pcs resource show

 Master/Slave Set: pgsql-cluster [pgsql]
     Masters: [ pg1 ]
     Slaves: [ pg2 pg3 ]
 Resource Group: master-group
     vip-master	(ocf::heartbeat:IPaddr2):	Started pg1
 Resource Group: slave-group
     vip-slave	(ocf::heartbeat:IPaddr2):	Started pg2

# crm_mon -Arf -1

Stack: corosync
Current DC: pg1 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum

Last updated: Thu Feb 25 17:43:35 2021
Last change: Thu Feb 25 14:05:07 2021 by root via crm_attribute on pg1

3 nodes configured
5 resource instances configured

Online: [ pg1 pg2 pg3 ]

Full list of resources:

 Master/Slave Set: pgsql-cluster [pgsql]
     Masters: [ pg1 ]
     Slaves: [ pg2 pg3 ]
 Resource Group: master-group
     vip-master	(ocf::heartbeat:IPaddr2):	Started pg1
 Resource Group: slave-group
     vip-slave	(ocf::heartbeat:IPaddr2):	Started pg2

Node Attributes:
* Node pg1:
    + master-pgsql                    	: 1000      
    + pgsql-data-status               	: LATEST    
    + pgsql-master-baseline           	: 000000000C000098
    + pgsql-status                    	: PRI       
* Node pg2:
    + master-pgsql                    	: 100       
    + pgsql-data-status               	: STREAMING|SYNC
    + pgsql-status                    	: HS:sync   
* Node pg3:
    + master-pgsql                    	: -INFINITY 
    + pgsql-data-status               	: STREAMING|ASYNC
    + pgsql-status                    	: HS:async  

Migration Summary:
* Node pg1:
* Node pg2:
* Node pg3:
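
In addition to the pacemaker view above, replication can be checked from PostgreSQL itself on whichever node currently holds the master role (pg1 here). With rep_mode="sync", one standby should report sync_state = sync and the other async, matching the pgsql-data-status attributes shown by crm_mon:

[postgres@pg1 ~]$ psql -c "select application_name, client_addr, state, sync_state from pg_stat_replication;"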

  • Common management commands
pcs status                                 // show cluster status
pcs resource show                          // show configured resources
pcs resource create ClusterIP IPaddr2 ip=192.168.0.120 cidr_netmask=32    // create a virtual IP resource
pcs resource cleanup <resource_id>         // clear failure records for the named resource and reset its status (useful when a resource is stuck in a failed/unmanaged state)
pcs resource list                          // list the available resource agents
pcs resource restart <resource_id>         // restart a resource
pcs resource enable <resource_id>          // enable (start) a resource
pcs resource disable <resource_id>         // disable (stop) a resource
pcs resource delete <resource_id>          // delete a resource
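
As a concrete example, after a failed node has been repaired, the failure history of the master/slave set can be cleared so pacemaker re-evaluates where it may run (resource names as configured above). Note that with the pgsql resource agent, a node that previously ran as master commonly also needs its stale lock file (by default under /var/lib/pgsql/tmp/) removed before it can rejoin as a standby:

# pcs resource cleanup pgsql-cluster
# crm_mon -Arf -1                 # the node should come back as HS:sync or HS:async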