Scripted Deployment of an OpenStack Cloud Platform: Galera High-Availability Cluster Configuration (Part 2)

Contents

1. Overview

2. Pacemaker Configuration

3. HAProxy Configuration

4. MariaDB Galera Configuration

5. References

6. Articles in This Series


1. Overview

The community documentation recommends a "Pacemaker + HAProxy + Galera" architecture for OpenStack high availability. Database high availability comes from a Galera synchronous multi-master cluster: MariaDB is installed on each of the three controller nodes and joined into a Galera multi-master cluster, and updates are synchronously replicated between the databases in the background. HAProxy lets OpenStack reach the database, as well as the other OpenStack base services, through a single virtual IP (VIP). Pacemaker clusters the controller nodes and keeps the backend services highly available by managing and monitoring their services and resources.

2. Pacemaker Configuration

Pacemaker is a cluster resource manager that provides resource-level monitoring and recovery. Here it manages the OpenStack services running on the controller nodes: each service on each controller node is a resource in the Pacemaker cluster, and Pacemaker automatically restarts any managed resource that fails, keeping the cloud services highly available.

The Pacemaker installation and configuration script, install-configure-pacemaker.sh:


#!/bin/sh
. ../0-set-config.sh
./style/print-split.sh "Pacemaker Installation"
### [all controller nodes] install the packages
./pssh-exe C "yum install -y pcs pacemaker corosync fence-agents-all resource-agents"
### [all controller nodes] enable and start the pcsd service
./pssh-exe C "systemctl enable pcsd && systemctl start pcsd"
### [all controller nodes] set the password of the hacluster user
./pssh-exe C "echo $password_ha_user | passwd --stdin hacluster"
### [controller01] authenticate against the cluster nodes
pcs cluster auth ${controller_name[@]} -u hacluster -p $password_ha_user --force
### [controller01] create and start the cluster
pcs cluster setup --force --name openstack-cluster ${controller_name[@]}
pcs cluster start --all
pcs cluster enable --all
sleep 5
### [controller01] set the cluster properties
pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min
### [controller01] disable STONITH for now, otherwise resources cannot start
pcs property set stonith-enabled=false
### [controller01] create the VIP resource; the VIP can float between cluster nodes
pcs resource create vip ocf:heartbeat:IPaddr2 ip=$virtual_ip cidr_netmask="24" op monitor interval="30s"
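After the script finishes, it is worth checking the cluster state by hand before moving on. A minimal verification sketch (assuming the script above completed and that $virtual_ip is set by 0-set-config.sh):

pcs status                      # all three controllers online, resource "vip" Started
pcs property list               # confirms stonith-enabled=false and the pe-*-series-max values
ip addr | grep "$virtual_ip"    # shows the address on the node currently holding the VIP
ping -c 3 "$virtual_ip"         # the VIP should answer from the management network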

Later steps of the deployment repeatedly need to restart the Pacemaker-managed cluster, so we write a small script that restarts the pcs cluster: restart-pcs-cluster.sh


#!/bin/sh
# Restart the pcs-managed cluster and watch the resources come back up.
pcs cluster stop --all
sleep 10
#ps aux|grep "pcs cluster stop --all"|grep -v grep|awk '{print $2 }'|xargs kill
# Force-kill cluster services on any node that did not stop cleanly.
./pssh-exe C "pcs cluster kill"
pcs cluster stop --all
pcs cluster start --all
sleep 5
# Watch the resources come back up (interactive; quit with Ctrl-C),
# then print a summary and any resources still stopped or failed.
watch -n 0.5 pcs resource
echo "pcs resource"
pcs resource
pcs resource|grep Stop
pcs resource|grep FAILED
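Note that watch -n 0.5 pcs resource blocks until it is interrupted, which makes the script interactive. If the restart needs to run unattended, a polling loop along these lines could replace the watch call (a sketch, not part of the original script):

# Poll for up to 60 seconds until no resource is reported Stopped or FAILED.
for i in $(seq 1 60); do
    if ! pcs resource | grep -qE 'Stop|FAILED'; then
        echo "all pcs resources are started"
        break
    fi
    sleep 1
done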

3. HAProxy Configuration

HAProxy is open-source, high-performance load-balancing and high-availability proxy software for TCP and HTTP applications. Combining a single virtual IP address with an HAProxy load balancer gives us both request load balancing and service high availability.

The Galera cluster section of the HAProxy configuration looks like this:


listen galera_cluster
    bind 192.168.2.241:3306
    balance source
    option httpchk
    server controller01 192.168.2.11:3306 check port 9200 inter 2000 rise 2 fall 5
    server controller02 192.168.2.12:3306 check port 9200 inter 2000 rise 2 fall 5 backup
    server controller03 192.168.2.13:3306 check port 9200 inter 2000 rise 2 fall 5 backup
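Two details matter here: option httpchk together with check port 9200 makes HAProxy probe each backend through the clustercheck HTTP service set up in section 4 instead of opening a raw MySQL connection, and the backup keyword keeps controller02 and controller03 idle unless controller01 fails, so all writes normally land on a single node. Once clustercheck is running, a probe can be simulated by hand, for example:

curl -i http://192.168.2.11:9200/    # a healthy node answers "HTTP/1.1 200 OK ... synced"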

HAProxy installation and configuration: install-configure-haproxy.sh


#!/bin/sh
. ../0-set-config.sh
./style/print-split.sh "Haproxy Installation"
### [all controller nodes] generate the /etc/haproxy/haproxy.cfg file in one pass
. ./1-gen-haproxy-cfg.sh
### copy the file to the other nodes
./scp-exe C ../conf/haproxy.cfg /etc/haproxy/haproxy.cfg
### [all controller nodes] install the packages
./pssh-exe C "yum install -y haproxy"
### [all controller nodes] install the /etc/rsyslog.d/haproxy.conf file
./scp-exe C ../conf/rsyslog_haproxy.conf /etc/rsyslog.d/haproxy.conf
### [all controller nodes] modify the /etc/sysconfig/rsyslog file
./pssh-exe C "sed -i -e 's#SYSLOGD_OPTIONS=\"\"#SYSLOGD_OPTIONS=\"-c 2 -r -m 0\"#g' /etc/sysconfig/rsyslog"
### [all controller nodes] restart the rsyslog service
./pssh-exe C "systemctl restart rsyslog"
### [controller01] add the haproxy resource to the pacemaker cluster
pcs resource create haproxy systemd:haproxy --clone
pcs constraint order start vip then haproxy-clone kind=Optional
pcs constraint colocation add haproxy-clone with vip
### restart the pcs cluster and test that the VIP is reachable
. restart-pcs-cluster.sh
ping -c 3 $virtual_ip
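A quick sanity check after the resource and constraints are created (the resource names match the pcs resource create calls above):

pcs status       # haproxy-clone should be Started on every controller
pcs constraint   # lists the order rule (vip before haproxy-clone) and the colocation rule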

4. MariaDB Galera Configuration

Galera is synchronous multi-master clustering software for MySQL, MariaDB, and Percona databases. It offers synchronous, parallel replication: every node can read and write at the same time, newly joined nodes receive the data automatically, failed nodes are evicted automatically, and clients connect to the cluster directly and experience it exactly as they would a single MySQL server. Here Galera provides high availability for MariaDB, the OpenStack backend database.
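The script below generates a per-node server.cnf from ../conf/server.cnf.template, which is not reproduced in this article. As a rough sketch of what such a template contains (hypothetical values; the empty bind-address=, wsrep_node_name= and wsrep_node_address= fields are what the script's sed commands fill in):

[mysqld]
bind-address=
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="openstack-cluster"
wsrep_cluster_address="gcomm://192.168.2.11,192.168.2.12,192.168.2.13"
wsrep_node_name=
wsrep_node_address=
wsrep_sst_method=rsync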

Installation, configuration, and replication-verification script for the highly available database cluster: install-configure-galera.sh


#!/bin/sh
. ../0-set-config.sh
./style/print-split.sh "Galera Installation"
### install mariadb
./pssh-exe C "yum install -y MariaDB-server xinetd"
### back up the configuration file
./pssh-exe C "cp /etc/my.cnf.d/server.cnf /etc/my.cnf.d/bak.server.cnf"
### generate a server.cnf for each controller node and distribute it
for ((i=0; i<${#controller_map[@]}; i+=1));
do
  name=${controller_name[$i]};
  ip=${controller_map[$name]};
  rm -rf ../conf/server.cnf
  cp ../conf/server.cnf.template ../conf/server.cnf
  sed -i -e 's#bind-address=*#bind-address='"${ip}"'#g' ../conf/server.cnf
  sed -i -e 's#wsrep_node_name=#wsrep_node_name='"${name}"'#g' ../conf/server.cnf
  sed -i -e 's#wsrep_node_address=#wsrep_node_address='"${ip}"'#g' ../conf/server.cnf
  scp ../conf/server.cnf root@$ip:/etc/my.cnf.d/server.cnf
done
##### [controller01] bootstrap the new Galera cluster
galera_new_cluster
#### configure galera: secure controller01, then start mariadb on the other nodes
for ((i=0; i<${#controller_map[@]}; i+=1));
do
  name=${controller_name[$i]};
  ip=${controller_map[$name]};
  if [ $name = "controller01" ]; then
    ./style/print-warnning.sh "Please set the database password to $password_galera_root"
    mysql_secure_installation
  else
    ssh root@$ip systemctl start mariadb
  fi
done;
. restart-pcs-cluster.sh
### verify the cluster size through the VIP
mysql -uroot -p$password_galera_root -h $virtual_ip -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
#### Galera cluster check
rm -rf ../conf/clustercheck
cp ../conf/clustercheck.template ../conf/clustercheck
sed -i -e 's#MYSQL_PASSWORD=#MYSQL_PASSWORD='"$password_galera_root"'#g' ../conf/clustercheck
sed -i -e 's#MYSQL_HOST=#MYSQL_HOST='"$virtual_ip"'#g' ../conf/clustercheck
./scp-exe C "../conf/clustercheck" "/etc/sysconfig/clustercheck"
### create the clustercheck database user
mysql -uroot -p$password_galera_root -e "GRANT ALL PRIVILEGES ON *.* TO 'clustercheck_user'@'localhost' IDENTIFIED BY '"$password_galera_root"';GRANT ALL PRIVILEGES ON *.* TO 'clustercheck_user'@'"$virtual_ip"' IDENTIFIED BY '"$password_galera_root"';FLUSH PRIVILEGES;"
rm -rf ../conf/clustercheck.sh
cp ../conf/clustercheck.sh.template ../conf/clustercheck.sh
sed -i -e 's#MYSQL_PASSWORD=#MYSQL_PASSWORD='"\"\${2-$password_galera_root}\""'#g' ../conf/clustercheck.sh
./scp-exe C ../conf/clustercheck.sh /usr/bin/clustercheck
./pssh-exe C "chmod a+x /usr/bin/clustercheck && chmod 755 /usr/bin/clustercheck && chown nobody /usr/bin/clustercheck"
### register the check service on port 9200
./pssh-exe C "sed -i -e '/9200\/[udp,tcp]/d' /etc/services"
./pssh-exe C "echo 'mysqlchk    9200/tcp # mysqlchk' >> /etc/services"
./scp-exe C "../conf/mysqlchk" "/etc/xinetd.d/mysqlchk"
./pssh-exe C "systemctl stop xinetd && systemctl enable xinetd && systemctl start xinetd"
### test the checking service
./pssh-exe C /usr/bin/clustercheck
telnet $virtual_ip 9200
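The ../conf/mysqlchk file copied to /etc/xinetd.d/mysqlchk is not shown here either; a typical xinetd service definition for clustercheck (a sketch following the standard percona-clustercheck setup) looks like this:

# /etc/xinetd.d/mysqlchk (sketch)
service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}

With this in place, xinetd runs /usr/bin/clustercheck for every connection on port 9200 and returns its HTTP response, which is exactly what the HAProxy check port 9200 directives consume.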

The health-check script ../conf/clustercheck.sh:


#!/bin/bash
#
# Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
#
# Author: Olaf van Zandwijk <olaf.vanzandwijk@nedap.com>
# Author: Raghavendra Prabhu <raghavendra.prabhu@percona.com>
#
# Documentation and download: https://github.com/olafz/percona-clustercheck
#
# Based on the original script from Unai Rodriguez
#
if [[ $1 == '-h' || $1 == '--help' ]];then
    echo "Usage: $0 <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
    exit
fi

# if the disabled file is present, return 503. This allows
# admins to manually remove a node from a cluster easily.
if [ -e "/var/tmp/clustercheck.disabled" ]; then
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 51\r\n"
    echo -en "\r\n"
    echo -en "Percona XtraDB Cluster Node is manually disabled.\r\n"
    sleep 0.1
    exit 1
fi

MYSQL_USERNAME="${1-clustercheck_user}"
MYSQL_PASSWORD="${2-a263f6a89fa2}"
AVAILABLE_WHEN_DONOR=${3:-0}
ERR_FILE="${4:-/dev/null}"
AVAILABLE_WHEN_READONLY=${5:-1}
DEFAULTS_EXTRA_FILE=${6:-/etc/my.cnf}

#Timeout exists for instances where mysqld may be hung
TIMEOUT=10

EXTRA_ARGS=""
if [[ -n "$MYSQL_USERNAME" ]]; then
    EXTRA_ARGS="$EXTRA_ARGS --user=${MYSQL_USERNAME}"
fi
if [[ -n "$MYSQL_PASSWORD" ]]; then
    EXTRA_ARGS="$EXTRA_ARGS --password=${MYSQL_PASSWORD}"
fi

if [[ -r $DEFAULTS_EXTRA_FILE ]];then
    MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
                    ${EXTRA_ARGS}"
else
    MYSQL_CMDLINE="mysql -nNE --connect-timeout=$TIMEOUT ${EXTRA_ARGS}"
fi

#
# Perform the query to check the wsrep_local_state
#
WSREP_STATUS=$($MYSQL_CMDLINE -e "SHOW STATUS LIKE 'wsrep_local_state';" \
    2>${ERR_FILE} | tail -1 2>>${ERR_FILE})

if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
then
    # Check only when set to 0 to avoid latency in response.
    if [[ $AVAILABLE_WHEN_READONLY -eq 0 ]];then
        READ_ONLY=$($MYSQL_CMDLINE -e "SHOW GLOBAL VARIABLES LIKE 'read_only';" \
                    2>${ERR_FILE} | tail -1 2>>${ERR_FILE})
        if [[ "${READ_ONLY}" == "ON" ]];then
            # Percona XtraDB Cluster node local state is 'Synced', but it is in
            # read-only mode. The variable AVAILABLE_WHEN_READONLY is set to 0.
            # => return HTTP 503
            # Shell return-code is 1
            echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            echo -en "Content-Type: text/plain\r\n"
            echo -en "Connection: close\r\n"
            echo -en "Content-Length: 43\r\n"
            echo -en "\r\n"
            echo -en "Percona XtraDB Cluster Node is read-only.\r\n"
            sleep 0.1
            exit 1
        fi
    fi
    # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
    # Shell return-code is 0
    echo -en "HTTP/1.1 200 OK\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 40\r\n"
    echo -en "\r\n"
    echo -en "Percona XtraDB Cluster Node is synced.\r\n"
    sleep 0.1
    exit 0
else
    # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 44\r\n"
    echo -en "\r\n"
    echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
    sleep 0.1
    exit 1
fi
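The script prints an HTTP response and sets its exit code accordingly, so it can be run directly as well as through xinetd. For example, with the clustercheck_user grants created earlier:

/usr/bin/clustercheck clustercheck_user "$password_galera_root"
echo $?    # 0 when the node is synced, 1 otherwise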

5. References

OpenStack HA configuration guide:

https://docs.openstack.org/ha-guide/controller-ha-haproxy.html

Galera configuration reference:

https://www.howtoforge.com/tutorial/how-to-setup-haproxy-as-load-balancer-for-mariadb-on-centos-7/

Source of the clustercheck script:

https://raw.githubusercontent.com/olafz/percona-clustercheck/master/clustercheck

6. Articles in This Series

The articles in the "Scripted Deployment of an OpenStack Cloud Platform" series are:

Scripted Deployment of an OpenStack Cloud Platform: Overview (Part 0)

Scripted Deployment of an OpenStack Cloud Platform: Base Environment Configuration (Part 1)

Scripted Deployment of an OpenStack Cloud Platform: Galera High-Availability Cluster Configuration (Part 2)

Scripted Deployment of an OpenStack Cloud Platform: RabbitMQ High-Availability Cluster Deployment (Part 3)

Scripted Deployment of an OpenStack Cloud Platform: MongoDB Configuration (Part 4)

Scripted Deployment of an OpenStack Cloud Platform: Memcached Configuration (Part 5)

Scripted Deployment of an OpenStack Cloud Platform: Keystone Identity Service Configuration (Part 6)

Scripted Deployment of an OpenStack Cloud Platform: Glance Image Service Configuration (Part 7)

Scripted Deployment of an OpenStack Cloud Platform: Nova Compute Service Configuration (Part 8)

Scripted Deployment of an OpenStack Cloud Platform: Neutron Networking Service Configuration (Part 9)

Scripted Deployment of an OpenStack Cloud Platform: Dashboard Configuration (Part 10)

Scripted Deployment of an OpenStack Cloud Platform: Cinder Block Storage Service Configuration (Part 11)

Scripted Deployment of an OpenStack Cloud Platform: Ceilometer Data Collection Service Configuration (Part 12)

Scripted Deployment of an OpenStack Cloud Platform: Aodh Alarming Service Configuration (Part 13)

Scripted Deployment of an OpenStack Cloud Platform: Ceph Storage Cluster Configuration (Part 14)

Scripted Deployment of an OpenStack Cloud Platform: Compute Node Service Configuration (Part 15)

Scripted Deployment of an OpenStack Cloud Platform: Adding Compute Nodes (Part 16)

Scripted Deployment of an OpenStack Cloud Platform: Testing and Validation (Part 17)

Scripted Deployment of an OpenStack Cloud Platform: Ganglia Monitoring (Part 18)

Scripted Deployment of an OpenStack Cloud Platform: Nagios Monitoring (Part 19)
