rhcs + gfs2 + corosync + pacemaker + postgres_streaming_replication
Part 1
Configure Red Hat 7.3 with a GFS2 shared file system so that two or more servers can mount and read/write the same file system at the same time, providing shared storage for the PG cluster test.

1、搭建环境:

操作系统版本:
[root@hgdb1 pgsql]# uname -a
Linux hgdb1 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@hgdb1 pgsql]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.3 (Maipo)
All three servers run the operating system shown above.

IP:
hgdb1: 192.168.100.108
hgdb2: 192.168.100.110
openfiler: 192.168.100.111 (this server was originally meant to run Openfiler for shared storage, but hardware problems prevented the Openfiler installation, so the same OS as the other two servers was installed instead; during partitioning a 1.5 TB /highgo partition was created, which is exported to the other two servers.)

2. Stop and disable the following operating system services.
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl stop  NetworkManager.service
systemctl disable NetworkManager.service
systemctl stop avahi-daemon

[root@hgdb1 ]# cat /etc/selinux/config 
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled    # changed to disabled here
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

setenforce 0
After making the above changes it is recommended to reboot the operating system so they take effect, and then verify that the configuration is really in effect.
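To confirm that SELinux is actually off after the reboot, a quick check on each node might look like this (standard RHEL commands):

getenforce                               # expected output: Disabled
grep '^SELINUX=' /etc/selinux/config     # expected output: SELINUX=disabled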

3. Yum repository configuration (unless otherwise noted, this must be done on all three servers)
[root@hgdb1 ]# vi /etc/yum.repos.d/rhel.repo 
[rhel] 
name=Red Hat Enterprise Linux $releasever - $basearch - Debug 
baseurl=file:///media/rhel
enabled=1 
gpgcheck=1 
gpgkey=file:///media/rhel/RPM-GPG-KEY-redhat-release

[repo-ha]
gpgcheck=0
enabled=1
baseurl=file:///media/rhel/addons/HighAvailability
name=repo-ha

[repo-storage]
gpgcheck=0
enabled=1
baseurl=file:///media/rhel/addons/ResilientStorage
name=repo-storage
Mount the installation ISO:
mount /dev/sr0 /media/rhel/
yum clean all
yum list
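Before installing packages it can be worth confirming that all three repositories are actually visible; a simple check might be:

yum repolist enabled        # should list rhel, repo-ha and repo-storage
yum info gfs2-utils pcs     # the cluster packages should resolve from the local repos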

yum install *scsi* -y          (on openfiler)

yum install gfs2-utils pacemaker pcs lvm2-cluster *scsi* python-clufter corosync crm -y           (on hgdb1 and hgdb2)

Installing crm on hgdb1 and hgdb2 may fail; if it does, download the packages manually. The three packages depend on each other and must be installed together.
Download URL: http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/RedHat_RHEL-7/noarch/

python-parallax-1.0.1-29.1.noarch.rpm
crmsh-scripts-3.0.0-6.1.noarch.rpm
crmsh-3.0.0-6.1.noarch.rpm
After installation, the corresponding OCF resource scripts appear under /usr/lib/ocf/resource.d, as shown below:
[root@hgdb1 ~]# cd /usr/lib/ocf/resource.d
[root@hgdb1 resource.d]# ll
total 4
drwxr-xr-x. 2 root root 4096 Sep  5 09:35 heartbeat
drwxr-xr-x. 2 root root   51 Aug 22 13:26 openstack
drwxr-xr-x. 2 root root  179 Aug 23 14:51 pacemaker

[root@hgdb1 resource.d]# crm ra list ocf
CTDB                ClusterMon          Delay               Dummy
Filesystem          HealthCPU           HealthSMART         IPaddr
IPaddr2             IPsrcaddr           LVM                 MailTo
NovaEvacuate        Route               SendArp             Squid
Stateful            SysInfo             SystemHealth        VirtualDomain
Xinetd              apache              clvm                conntrackd
controld            db2                 dhcpd               docker
ethmonitor          exportfs            galera              garbd
iSCSILogicalUnit    iSCSITarget         iface-vlan          mysql
nagios              named               nfsnotify           nfsserver
nginx               nova-compute-wait   oracle              oralsnr
pgsql               pgsql.bak           ping                pingd
portblock           postfix             rabbitmq-cluster    redis
remote              rsyncd              slapd               symlink
tomcat  
Although the packages above provide a pgsql resource script, it is too old and cannot perform automatic failover, so after installing pacemaker/corosync it must be replaced with a newer version downloaded from the internet, as follows:

https://github.com/ClusterLabs/resource-agents/tree/master/heartbeat
Download pgsql and ocf-shellfuncs.in.
Replace the existing files:
# cp pgsql /usr/lib/ocf/resource.d/heartbeat/
# cp ocf-shellfuncs.in /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs
{Note: rename ocf-shellfuncs.in to ocf-shellfuncs, otherwise pgsql may not find the functions it needs. The newly downloaded function library adds several new helper functions, such as ocf_local_nodename.}
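After copying the files, a quick sanity check (not part of the original steps) is to make sure the replaced agent is executable and that crmsh can read its metadata:

chmod 755 /usr/lib/ocf/resource.d/heartbeat/pgsql
crm ra info ocf:heartbeat:pgsql | head    # should print the agent description without errors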


4. Install PG and configure streaming replication
hosts file settings (required on both nodes):
192.168.100.108         hgdb1
192.168.100.110         hgdb2
192.168.100.127         vip1
192.168.100.128         vip2
192.168.100.130         rep-vip
Note (a connection sketch follows this list):
rep-vip: streaming-replication virtual IP. The standby connects to the primary through this address for streaming replication, so this IP moves with the primary when roles change.
vip1:    primary-side virtual IP used to accept application traffic (moves with the primary).
vip2:    standby-side virtual IP used to accept application traffic (moves with the standby).
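As a rough sketch of how the three addresses are used (assuming the HGDB port 5866 and the highgo user/database configured below), clients and the standby would connect like this:

# application read/write traffic always reaches the current primary via vip1
psql -h 192.168.100.127 -p 5866 -U highgo highgo
# read-only traffic can be sent to the standby via vip2
psql -h 192.168.100.128 -p 5866 -U highgo highgo
# the standby's walreceiver connects to the primary through rep-vip (192.168.100.130),
# which the pgsql resource agent moves together with the master role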


Generate the Corosync authentication key
{By default the key is generated from /dev/random; if the system is short on entropy this can take a very long time, in which case /dev/urandom can be used in place of /dev/random.}
[root@hgdb1 ~]# cd /etc/corosync/
[root@hgdb1 corosync]# mv /dev/random /dev/random.bak
[root@hgdb1 corosync]# ln -s /dev/urandom /dev/random
[root@hgdb1 corosync]# corosync-keygen 
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
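Because /dev/random was replaced with a symlink only to speed up key generation, it is reasonable to restore the original device afterwards (this step is not shown in the original text):

rm /dev/random                     # drop the temporary symlink to /dev/urandom
mv /dev/random.bak /dev/random     # put the real random device back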
Configure SSH mutual trust
hgdb1 -> hgdb2 :
[root@hgdb1 ~]# cd .ssh/
[root@hgdb1 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2c:ed:1e:a6:a7:cd:e3:b2:7c:de:aa:ff:63:28:9a:19 root@node1
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|       o         |
|      . S        |
|       o         |
|    E   +.       |
|     =o*=oo      |
|    +.*%O=o.     |
+-----------------+
[root@hgdb1 .ssh]# ssh-copy-id -i id_rsa.pub hgdb2
The authenticity of host 'hgdb2 (192.168.100.202)' can't be established.
RSA key fingerprint is be:76:cd:29:af:59:76:11:6a:c7:7d:72:27:df:d1:02.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hgdb2,192.168.100.202' (RSA) to the list of known hosts.
root@hgdb2's password: 
Now try logging into the machine, with "ssh 'hgdb2'", and check in:
 
  .ssh/authorized_keys
 
to make sure we haven't added extra keys that you weren't expecting.
[root@hgdb1 .ssh]# ssh hgdb2 date
Sat Jan 18 06:36:21 CST 2014
hgdb2 -> hgdb1 :
[root@hgdb2 ~]# cd .ssh/
[root@hgdb2 .ssh]# ssh-keygen -t rsa
[root@hgdb2 .ssh]# ssh-copy-id -i id_rsa.pub hgdb1
[root@hgdb2 .ssh]# ssh hgdb1 date
Sat Jan 18 06:37:31 CST 2014
Synchronize the Corosync configuration to the other node
[root@hgdb1 corosync]# scp authkey corosync.conf hgdb2:/etc/corosync/
authkey         100%  128     0.1KB/s   00:00  
corosync.conf   100% 2808     2.7KB/s   00:00
Unless otherwise noted, the configuration below must be done on both nodes.
Environment variables for the highgo user:
[highgo@hgdb1 data]$ cat ~/.bash_profile
export LANG=C
export PGHOME=/data/highgo/4.1.1
export PGUSER=highgo
export PGPORT=5866
export PGDATA=/data/highgo/4.1.1/data
export PATH=$PGHOME/bin:$PATH:$HOME/bin
export LD_LIBRARY_PATH=$PGHOME/lib:$LD_LIBRARY_PATH
export PATH
Note: on both hgdb1 and hgdb2 the HGDB instance lives in a local directory; only the archive path is on shared storage.
The database installation steps are omitted.
Streaming replication is configured as follows:

1. Modify the primary's parameters (a note on the archive directory follows the list)
listen_addresses = '*'
log_min_messages = warning 
log_directory = 'hgdb_log'  
logging_collector=on
log_destination = 'csvlog'
log_statement = 'ddl'
log_hostname = on   
log_filename='hgdb-%a.log'
log_rotation_age=1d
log_truncate_on_rotation=on
fsync = on    
synchronous_commit = on 
wal_level = hot_standby
archive_mode = on
archive_directory = '/hgdbdata/master/archive'
max_wal_senders = 5 
max_replication_slots=5
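Note that archive_directory points into the GFS2 file system configured later; once /hgdbdata is mounted, the directory must exist and be writable by the database owner. A minimal sketch, assuming the highgo user and group:

mkdir -p /hgdbdata/master/archive
chown -R highgo:highgo /hgdbdata/master
chmod 700 /hgdbdata/master/archive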

2. Create a replication slot on the primary
highgo=# SELECT * FROM pg_create_physical_replication_slot ('node_a_slot');
  slot_name  | xlog_position
-------------+---------------
 node_a_slot |
(1 row)
After creating it, verify the replication slot:
highgo=# SELECT * FROM pg_replication_slots;

3. Modify the primary's pg_hba.conf and add the following entries
host    all             all             0.0.0.0/0            		trust
host    replication     highgo          0.0.0.0/0                 trust

4. Restart the primary database after the changes.
5. On the standby, stop the database service and delete everything under the data directory, keeping the data directory itself.
6. On the standby, run the following command to copy the data from the primary.
pg_basebackup -h 192.168.100.108 -U highgo -F p -P -x -R -D $PGDATA -l  hgdbbak20170906
7. Modify the relevant parameter/configuration files on the standby.
Set the following parameter in the standby's postgresql.conf:
hot_standby = on  
8. Start the standby and check that replication works (see also the primary-side check after these commands).
select pg_is_in_recovery();
ps -ef |grep postgres
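Replication can also be confirmed from the primary side using the standard statistics views, for example (run as the highgo user on the primary):

psql -p 5866 -c "select client_addr, state, sync_state from pg_stat_replication;"
psql -p 5866 -c "select slot_name, active from pg_replication_slots;"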
Once the streaming replication environment is in place, shut down all databases.
[highgo@hgdb2 ~]$ pg_ctl stop -m f
[highgo@hgdb1 ~]$ pg_ctl stop -m f
5. Configure shared storage (if you are on virtual machines, other methods can be used)
The following operations are performed on the openfiler server:
[root@openfiler ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/openfiler-root   xfs       300G  6.6G  293G   3% /
devtmpfs                devtmpfs   48G     0   48G   0% /dev
tmpfs                   tmpfs      48G  224K   48G   1% /dev/shm
tmpfs                   tmpfs      48G  9.9M   48G   1% /run
tmpfs                   tmpfs      48G     0   48G   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  166M  332M  34% /boot
/dev/mapper/openfiler-home   xfs        10G   36M   10G   1% /home
/dev/mapper/openfiler-highgo xfs       1.5T   33M  1.5T   1% /highgo
tmpfs                   tmpfs     9.5G   32K  9.5G   1% /run/user/0
tmpfs                   tmpfs     9.5G   28K  9.5G   1% /run/user/1000
/dev/loop0              iso9660   3.6G  3.6G     0 100% /media/cdrom
[root@openfiler ~]# umount /highgo/
[root@openfiler ~]# df -Th
Filesystem            Type      Size  Used Avail Use% Mounted on
/dev/mapper/openfiler-root xfs       300G  6.6G  293G   3% /
devtmpfs              devtmpfs   48G     0   48G   0% /dev
tmpfs                 tmpfs      48G  224K   48G   1% /dev/shm
tmpfs                 tmpfs      48G  9.9M   48G   1% /run
tmpfs                 tmpfs      48G     0   48G   0% /sys/fs/cgroup
/dev/sda1             xfs       497M  166M  332M  34% /boot
/dev/mapper/openfiler-home xfs        10G   36M   10G   1% /home
tmpfs                 tmpfs     9.5G   32K  9.5G   1% /run/user/0
tmpfs                 tmpfs     9.5G   28K  9.5G   1% /run/user/1000
/dev/loop0            iso9660   3.6G  3.6G     0 100% /media/cdrom

[root@openfiler ~]# targetcli 
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/backstores/block> cd /backstores/block
/backstores/block> create dev=/dev/mapper/openfiler-highgo highgo
Created block storage object highgo using /dev/mapper/openfiler-highgo.
/backstores/block> ls
o- block ...................................................................................................... [Storage Objects: 1]
  o- highgo .............................................................. [/dev/mapper/openfiler-highgo (1.5TiB) write-thru deactivated]
/backstores/block> cd /iscsi 
/iscsi> create wwn=iqn.2017-08.com.highgo:highgo
Created target iqn.2017-08.com.highgo:highgo.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> ls
o- iscsi .............................................................................................................. [Targets: 1]
  o- iqn.2017-08.com.highgo:highgo ....................................................................................... [TPGs: 1]
    o- tpg1 ................................................................................................. [no-gen-acls, no-auth]
      o- acls ............................................................................................................ [ACLs: 0]
      o- luns ............................................................................................................ [LUNs: 0]
      o- portals ...................................................................................................... [Portals: 1]
        o- 0.0.0.0:3260 ....................................................................................................... [OK]
/iscsi> cd iqn.2017-08.com.highgo:highgo
/iscsi/iqn.20...highgo:highgo> ls
o- iqn.2017-08.com.highgo:highgo ......................................................................................... [TPGs: 1]
  o- tpg1 ................................................................................................... [no-gen-acls, no-auth]
    o- acls .............................................................................................................. [ACLs: 0]
    o- luns .............................................................................................................. [LUNs: 0]
    o- portals ........................................................................................................ [Portals: 1]
      o- 0.0.0.0:3260 ......................................................................................................... [OK]
/iscsi/iqn.20...highgo:highgo> cd tpg1/
/iscsi/iqn.20...o:highgo/tpg1> ls
o- tpg1 ..................................................................................................... [no-gen-acls, no-auth]
  o- acls ................................................................................................................ [ACLs: 0]
  o- luns ................................................................................................................ [LUNs: 0]
  o- portals .......................................................................................................... [Portals: 1]
    o- 0.0.0.0:3260 ........................................................................................................... [OK]
/iscsi/iqn.20...o:highgo/tpg1> cd luns
/iscsi/iqn.20...hgo/tpg1/luns>  create /backstores/block/highgo
Created LUN 0.
/iscsi/iqn.20...hgo/tpg1/luns> ls
o- luns .................................................................................................................. [LUNs: 1]
  o- lun0 ................................................................................. [block/highgo (/dev/mapper/openfiler-highgo)]
/iscsi/iqn.20...hgo/tpg1/luns> cd ..
/iscsi/iqn.20...o:highgo/tpg1> ls
o- tpg1 ..................................................................................................... [no-gen-acls, no-auth]
  o- acls ................................................................................................................ [ACLs: 0]
  o- luns ................................................................................................................ [LUNs: 1]
  | o- lun0 ............................................................................... [block/highgo (/dev/mapper/openfiler-highgo)]
  o- portals .......................................................................................................... [Portals: 1]
    o- 0.0.0.0:3260 ........................................................................................................... [OK]
/iscsi/iqn.20...o:highgo/tpg1> cd acls 
/iscsi/iqn.20...hgo/tpg1/acls> create iqn.2017-08.com.highgo:highgo
Created Node ACL for iqn.2017-08.com.highgo:highgo
Created mapped LUN 0.
/iscsi/iqn.20...hgo/tpg1/acls>  cd /iscsi/
/iscsi> cd iqn.2017-08.com.highgo:highgo/tpg1/portals/0.0.0.0:3260 
/iscsi/iqn.20.../0.0.0.0:3260> enable_iser true
iSER enable now: True
/iscsi/iqn.20.../0.0.0.0:3260> cd ..
/iscsi/iqn.20.../tpg1/portals> cd ..
/iscsi/iqn.20...o:highgo/tpg1> set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
Parameter demo_mode_write_protect is now '0'.
Parameter authentication is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
/iscsi/iqn.20...o:highgo/tpg1> cd /
/>  saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@openfiler ~]# systemctl start target
[root@openfiler ~]# systemctl enable target

This completes the configuration on openfiler.

Run the following commands on both hgdb1 and hgdb2:
iscsiadm -m discovery -t st -p 192.168.100.111:3260 -I iser
iscsiadm -m node -p 192.168.100.111 -l
Run fdisk -l to check that the new disk is visible.
 
Use fdisk to partition the shared disk /dev/sdb into /dev/sdb1 (steps omitted; a scripted sketch follows below).
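A scripted sketch of the omitted partitioning step, using parted instead of interactive fdisk and assuming the iSCSI disk really appears as /dev/sdb (adjust the device name if it differs):

parted -s /dev/sdb mklabel msdos mkpart primary 0% 100%   # create /dev/sdb1 spanning the whole disk
partprobe /dev/sdb                                        # re-read the partition table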
 
6. Create the cluster
echo redhat | passwd --stdin hacluster   # on every node, set the hacluster password to redhat
pcs cluster auth hgdb1 hgdb2
Username: hacluster     (enter the cluster user hacluster)
Password:               (enter the cluster user's password, redhat)
hgdb1: Authorized
hgdb2: Authorized

[root@hgdb1 ~]#  cd /etc/corosync
[root@hgdb1 corosync]# cat corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: hgcluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: hgdb1
        nodeid: 1
    }

    node {
        ring0_addr: hgdb2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

[root@hgdb1 ~]# pcs cluster start
Log in to the management web UI and create the cluster node information, using the following URL:
https://192.168.100.108:2224
The login page prompts for the cluster user hacluster and the password redhat.
After logging in, click "Create New".
Enter the cluster name and the names of the nodes to be added, then click "Create Cluster".

At this point the status looks like this:
[root@hgdb1 ~]# pcs status
Cluster name: hgcluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: hgdb2 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Thu Jun 29 14:01:31 2017          Last change: Thu Jun 29 14:00:25 2017 by hacluster via crmd on hgdb2

2 nodes and 0 resources configured

Online: [ hgdb1 hgdb2 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
Create the cluster file system with the following command:
# mkfs.gfs2 -j 2 -p lock_dlm -t hgcluster:sdb1 /dev/sdb1
Here -j 2 creates two journals, one for each node sharing the disk, and hgcluster is the cluster name created earlier.
Example:
[root@hgdb1 ]#  mkfs.gfs2 -j 2 -p lock_dlm -t hgcluster:sdb1 /dev/sdb1
This will destroy any data on /dev/sdb1
Are you sure you want to proceed? [y/n]y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done 
Building resource groups: Done 
Creating quota file: Done
Writing superblock and syncing: Done
Device:                    /dev/sdb1
Block size:                4096
Device size:               1.00 GB (261888 blocks)
Filesystem size:           1.00 GB (261887 blocks)
Journals:                  2
Resource groups:           5
Locking protocol:          "lock_dlm"
Lock table:                "hgcluster:sdb1"
UUID:                      53d27f65-cf0a-5406-0646-bde784fa07e7
The partitioning and formatting are done on the hgdb1 node; on hgdb2 run:
[root@hgdb2 ]# partprobe
Add a pcs resource so that the file system is mounted automatically:
pcs resource create clusterfs Filesystem device="/dev/sdb1" directory="/hgdbdata" fstype="gfs2" "options=noatime" op monitor interval=10s on-fail=fence clone interleave=true
Once this resource is configured, the shared disk is mounted automatically on both nodes when the cluster services start.
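A quick way to verify the automatic mount on both nodes might be:

pcs status resources | grep clusterfs   # the clone should be Started on hgdb1 and hgdb2
mount | grep /hgdbdata                  # /dev/sdb1 should be mounted there as gfs2
df -h /hgdbdata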

 
About fence (STONITH) devices. (The latter two methods are recommended; with fencing disabled the GFS2 file system may well fail to mount. Whichever option you choose, a verification sketch follows this list.)

1. STONITH can be disabled (run on one node only):
[root@hgdb1 ~]# crm configure property stonith-enabled=false
2. The iDRAC of a Dell server can be used as the fence device.
3. A fence device can be created as follows. (If you use the command below, remove the fence-related parts of pgsql.crm later; otherwise loading pgsql.crm will overwrite this configuration.)
[root@hgdb1 ~]# pcs stonith create scsi-stonith-device fence_scsi devices=/dev/mapper/fence pcmk_monitor_action=metadata pcmk_reboot_action=off pcmk_host_list="hgdb1 hgdb2" meta provides=unfencing

[root@hgdb1 ~]# pcs property set stonith-enabled=true
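Whichever option is chosen, it is worth verifying the fencing setup before loading the full crm script; with the pcs 0.9 syntax used here that could be:

pcs property show stonith-enabled    # should report true (unless option 1 was used)
pcs stonith show                     # lists the configured stonith resources
# a real fence test (pcs stonith fence <node>) will power-cycle that node, so only run it deliberately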
7. Configure Pacemaker
{Pacemaker can be configured in several ways, e.g. crmsh, hb_gui or pcs; this exercise uses crmsh.}
Write the crm configuration script. (This exercise uses a Dell iDRAC as the fence device; if you use another method, the fence-related parts of the script must be changed.)

[root@hgdb1 resource.d]# cd

[root@hgdb1 ~]#	vi pgsql.crm
node 1: hgdb1 \
        attributes hgdb-data-status="STREAMING|SYNC"
node 2: hgdb2 \
        attributes hgdb-data-status=LATEST
primitive clusterfs Filesystem \
        params device="/dev/sdb1" directory="/hgdbdata" fstype=gfs2 options=noatime \
        op start interval=0s timeout=60 \
        op stop interval=0s timeout=60 \
        op monitor interval=10s on-fail=fence
primitive clvmd clvm \
        params \
        op start interval=0s timeout=90 \
        op stop interval=0s timeout=90 \
        op monitor interval=30s on-fail=fence
primitive dlm ocf:pacemaker:controld \
        params \
        op start interval=0s timeout=90 \
        op stop interval=0s timeout=100 \
        op monitor interval=30s on-fail=fence
primitive fence1 stonith:fence_idrac \
        params ipaddr=192.168.100.121 passwd=highgo login=root \
        op monitor interval=60s
primitive fence2 stonith:fence_idrac \
        params ipaddr=192.168.100.122 passwd=highgo login=root \
        op monitor interval=60s
primitive hgdb pgsql \
        params pgctl="/data/highgo/4.1.1/bin/pg_ctl" start_opt="-p 5866" psql="/data/highgo/4.1.1/bin/psql" pgdata="/data/highgo/4.1.1/data" pgdba=highgo pgport=5866 pglibs="/data/highgo/4.1.1/lib" config="/data/highgo/4.1.1/data/postgresql.conf" pgdb=highgo stop_escalate=0 rep_mode=sync node_list="hgdb1 hgdb2" archive_directory="/hgdbdata/master/archive" master_ip=192.168.100.130 repuser=highgo primary_conninfo_opt="keepalives_idle=60 keepalives_interval=5 keepalives_count=5" \
        op start interval=0s timeout=120 \
        op stop interval=0s timeout=120 \
        op monitor interval=30 timeout=30 \
        op monitor interval=29 role=Master timeout=30 \
        op promote interval=0s timeout=120 \
        op demote interval=0s timeout=120 \
        meta
primitive pingCheck ocf:pacemaker:ping \
        params name=default_ping_set host_list=192.168.100.1 multiplier=100 \
        op start interval=0s on-fail=restart timeout=60s \
        op monitor interval=10s on-fail=restart timeout=60s \
        op stop interval=0s on-fail=ignore timeout=60s
primitive vip-rep IPaddr2 \
        params ip=192.168.100.130 nic=em1 cidr_netmask=24 \
        meta migration-threshold=0 \
        op start interval=0s on-fail=restart timeout=60s \
        op monitor interval=10s on-fail=restart timeout=60s \
        op stop interval=0s on-fail=block timeout=60s
primitive vip1 IPaddr2 \
        params ip=192.168.100.127 \
        op start interval=0s timeout=20s \
        op stop interval=0s timeout=20s \
        op monitor interval=10s timeout=20s
primitive vip2 IPaddr2 \
        params ip=192.168.100.128 \
        op start interval=0s timeout=20s \
        op stop interval=0s timeout=20s \
        op monitor interval=10s timeout=20s \
        meta
group mastergroup vip1 vip-rep
group slavegroup vip2
ms hgdb-master hgdb \
        meta
clone clnPingCheck pingCheck
clone clusterfs-clone clusterfs \
        meta interleave=true
clone clvmd-clone clvmd \
        meta interleave=true ordered=true
clone dlm-clone dlm \
        meta interleave=true ordered=true
colocation colocation-clvmd-clone-dlm-clone-INFINITY inf: clvmd-clone dlm-clone
colocation colocation-mastergroup-hgdb-master-INFINITY inf: mastergroup:Started hgdb-master:Master
colocation colocation-slavegroup-hgdb-master-INFINITY inf: slavegroup:Started hgdb-master:Slave
order order-clusterfs-clone-hgdb-master-100 100: clusterfs-clone:start hgdb-master:start symmetrical=true
order order-clvmd-clone-clusterfs-clone-100 100: clvmd-clone:start clusterfs-clone:start symmetrical=true
order order-dlm-clone-clvmd-clone-mandatory dlm-clone:start clvmd-clone:start
xml <fencing-topology> \
  <fencing-level devices="fence1" id="fl-hgdb1-1" index="1" target="hgdb1"/> \
  <fencing-level devices="fence2" id="fl-hgdb2-2" index="2" target="hgdb2"/> \
</fencing-topology>
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.15-11.el7-e174ec8 \
        cluster-infrastructure=corosync \
        cluster-name=hgcluster \
        no-quorum-policy=freeze \
        stonith-enabled=true \
        last-lrm-refresh=1503546329 \
        crmd-transition-delay=0s
rsc_defaults rsc-options: \
        resource-stickiness=INFINITY \
        migration-threshold=1





Adjust the paths and IP addresses in the script to match your own environment.

Update the configuration:

[root@hgdb1 ~]# crm configure load update pgsql.crm
WARNING: pgsql: specified timeout 60s for stop is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for start is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for notify is smaller than the advised 90
WARNING: pgsql: specified timeout 60s for demote is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for promote is smaller than the advised 120
	
After a short while, check the cluster status.
[root@hgdb1 ~]# pcs status                         
Cluster name: hgcluster
Stack: corosync
Current DC: hgdb2 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Wed Sep  6 17:33:42 2017          Last change: Wed Sep  6 16:16:24 2017 by root via crm_attribute on hgdb2

2 nodes and 15 resources configured

Online: [ hgdb1 hgdb2 ]

Full list of resources:

 fence1 (stonith:fence_idrac):  Started hgdb1
 fence2 (stonith:fence_idrac):  Started hgdb2
 Resource Group: mastergroup
     vip1       (ocf::heartbeat:IPaddr2):       Started hgdb2
     vip-rep    (ocf::heartbeat:IPaddr2):       Started hgdb2
 Resource Group: slavegroup
     vip2       (ocf::heartbeat:IPaddr2):       Started hgdb1
 Clone Set: clnPingCheck [pingCheck]
     Started: [ hgdb1 hgdb2 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ hgdb1 hgdb2 ]
 Clone Set: dlm-clone [dlm]
     Started: [ hgdb1 hgdb2 ]
 Clone Set: clusterfs-clone [clusterfs]
     Started: [ hgdb1 hgdb2 ]
 Master/Slave Set: hgdb-master [hgdb]
     Masters: [ hgdb2 ]
     Slaves: [ hgdb1 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Note: right after startup both nodes are Slaves; after a while one node (hgdb2 in the output above) is automatically promoted to Master.

This completes the entire installation and configuration.
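As a final check (not part of the original write-up), a simple failover test is to stop the database on the current master and watch Pacemaker promote the other node; a sketch, assuming the pgsql resource agent's default tmpdir:

# on the current master (hgdb2 in the status above), simulate a crash
su - highgo -c "pg_ctl stop -m immediate"
# on either node, watch the promotion and the VIPs move
crm_mon -Afr -1
# once the old master has been rebuilt as a standby, clear its lock file and failed state
rm -f /var/lib/pgsql/tmp/PGSQL.lock     # default tmpdir of the pgsql resource agent; adjust if changed
crm resource cleanup hgdb-master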
