Configuring HAProxy + Keepalived + PXC 5.7 on RHEL 6.3

HAProxy + Keepalived + PXC is one of the more popular MySQL high-availability stacks today. At the bottom, a Percona XtraDB Cluster (PXC) keeps the database layer available; on top of it, HAProxy (at least two instances) load-balances the database traffic; finally, Keepalived (at least two instances; in a test environment it can share machines with HAProxy) fronts the HAProxy services. When one HAProxy fails, Keepalived's virtual IP automatically floats to the other HAProxy. Clients connect to the Keepalived virtual IP, and the result is a highly available environment.
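In other words, the traffic path looks like this (the VIP sits on whichever HAProxy node currently holds the VRRP MASTER role):

                          client
                            |
                   VIP 172.17.61.140
                (held by keepalived MASTER)
                            |
            +---------------+---------------+
            |                               |
      haproxy on qht134             haproxy on qht135
            |                               |
            +-------+-----------+-----------+
                    |           |           |
                 qht131      qht132      qht133
                      (3-node PXC cluster)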

After all the preparation in earlier posts, today let's actually install it:

1. Current environment:

HAProxy / Keepalived IPs (both hosts run HAProxy as well as Keepalived)
172.17.61.134 qht134
172.17.61.135 qht135
172.17.61.140 vip

PXC IP
172.17.61.131 qht131
172.17.61.132 qht132
172.17.61.133 qht133

[root@qht131 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

[root@qht131 ~]# mysql -V
mysql  Ver 14.14 Distrib 5.7.21-20, for Linux (x86_64) using  6.0

2. Configure PXC node monitoring

2.1 Create the status-check user (on any PXC node)

[root@qht131 ~]# locate clustercheck
/usr/bin/clustercheck
[root@qht131 ~]# grep clustercheckuser /usr/bin/clustercheck
# GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';

[root@qht131 ~]# mysql -uroot -p
mysql> GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
Query OK, 0 rows affected, 1 warning (0.54 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.08 sec)

2.2 Run clustercheck manually on all three PXC nodes and confirm it returns 200

[root@qht131 ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.
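clustercheck by itself only prints this HTTP response to stdout; for HAProxy to consume it as a health check, it has to be served on TCP port 9200, which is normally done through xinetd. A stock PXC install ships an /etc/xinetd.d/mysqlchk for this purpose; a minimal sketch of what it should look like (option values may differ slightly between versions):

# /etc/xinetd.d/mysqlchk
service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /usr/bin/clustercheck
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        per_source      = UNLIMITED
}

Make sure /etc/services maps mysqlchk to 9200/tcp and run service xinetd restart on all three PXC nodes.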

2.3 Disable the firewall and SELinux

[root@qht131 ~]# chkconfig --level 2345 iptables off
[root@qht131 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled  # make sure this is disabled or permissive
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
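
Note that chkconfig only affects the next boot, and the /etc/selinux/config edit likewise takes effect at boot time. To apply both on the running system right away:

[root@qht131 ~]# service iptables stop
[root@qht131 ~]# setenforce 0     # switches SELinux to permissive immediately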

3. Install HAProxy (both nodes)

First make sure MySQL is not installed on the HAProxy nodes, because haproxy.cfg also binds port 3306; if it is installed, either shut it down or bind HAProxy to a different port in haproxy.cfg.

Taking one of the nodes as an example:

[root@qht135 ~]# yum install haproxy -y
[root@qht135 ~]# rpm -aq | grep haproxy
haproxy-1.5.18-1.el6.x86_64

Disable the firewall:

[root@qht135 ~]# chkconfig --level 2345 iptables off

Add the following to the haproxy.cfg configuration file:

[root@qht135 ~]# vim /etc/haproxy/haproxy.cfg  
frontend pxc-front     ## frontend: name, port, protocol, and the backend it maps to
bind *:3306               
mode tcp     
default_backend pxc-back

frontend stats-front    ## web status-monitoring frontend
bind *:8080
mode http
default_backend stats-back

backend pxc-back   ### backend configuration
mode tcp
balance leastconn
option httpchk
server node131 172.17.61.131:3306 check port 9200 inter 12000 rise 3 fall 3
server node132 172.17.61.132:3306 check port 9200 inter 12000 rise 3 fall 3
server node133 172.17.61.133:3306 check port 9200 inter 12000 rise 3 fall 3

backend stats-back    ### web monitoring access
mode http
balance roundrobin
stats uri /haproxy/stats
stats refresh 5s
stats auth pxcstats:secret ## user and password for the port-8080 login
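
Before starting HAProxy, it is worth confirming from this node that the port-9200 health check (the xinetd-wrapped clustercheck from section 2) is reachable; a quick probe against one of the PXC nodes:

[root@qht135 ~]# curl http://172.17.61.131:9200

It should answer with the same "Percona XtraDB Cluster Node is synced." response seen in section 2.2.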

Start the HAProxy service:

[root@qht135 rc5.d]# service haproxy start
Starting haproxy:                                          [  OK  ]

If startup produces warnings, comment out the related parameters in haproxy.cfg as the messages indicate.

[root@qht135 rc5.d]# netstat -nltp|grep haproxy
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN      5595/haproxy
tcp        0      0 0.0.0.0:8080                0.0.0.0:*                   LISTEN      5595/haproxy

4. Verify HAProxy

First add a test user (on one of the PXC nodes):

[root@node131 ~]# mysql -uroot -pxxx -hlocalhost
mysql> grant all privileges on *.* to 'robin'@'172.17.%' identified by 'xxx';

Connect to the database repeatedly in a loop:

[root@qht135 ~]# for i in `seq 1 1000`; do mysql -hlocalhost -P3306 -urobin -pxxx --protocol=tcp -e "select now()"; done
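
To see how the leastconn balancer spreads these connections over the cluster, a variant of the same loop can print wsrep_node_name, which reports the PXC node that served each connection; over a few runs all three nodes should appear:

[root@qht135 ~]# for i in `seq 1 6`; do mysql -hlocalhost -P3306 -urobin -pxxx --protocol=tcp -N -e "show variables like 'wsrep_node_name'"; done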

Open the stats page in a browser (per the configuration above, http://172.17.61.135:8080/haproxy/stats) and log in with the user and password from the configuration file; all three PXC backends should show as UP.


5. Install Keepalived (both nodes)

5.1 For the detailed procedure see https://blog.csdn.net/jolly10/article/details/80704973

[root@qht134 home]# wget http://www.keepalived.org/software/keepalived-1.2.2.tar.gz  
[root@qht134 home]# tar -zxvf keepalived-1.2.2.tar.gz  
[root@qht134 home]# cd keepalived-1.2.2  
[root@qht134 keepalived-1.2.2]# yum -y install gcc openssl-devel popt-devel  
[root@qht134 keepalived-1.2.2]# ./configure --prefix=/ && make && make install  
[root@qht134 keepalived-1.2.2]# chkconfig --add keepalived  
[root@qht134 keepalived-1.2.2]# chkconfig keepalived on  

Edit the Keepalived configuration file.

Check the current network interface name; here it is eth2:

[root@qht134 keepalived-1.2.2]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8b:fb:c0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.135/24 brd 172.17.61.255 scope global eth2
    inet6 fe80::20c:29ff:fe8b:fbc0/64 scope link
       valid_lft forever preferred_lft forever
[root@qht134 keepalived]# cat  /etc/keepalived/keepalived.conf
! Configuration File for keepalived


global_defs {
    router_id LVS_DEVEL
}
# monitor the haproxy process, run every 2s
vrrp_script chk_haproxy {
    script "/etc/keepalived/chk_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER          #qht134 set MASTER,qht135 set BACKUP
    interface eth2
    virtual_router_id 51
    priority 100             #qht134 set 100,qht135 set 50
    advert_int 1
    mcast_src_ip 172.17.61.134    #real ip
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    track_script {
        chk_haproxy    #monitor status of haproxy process
    }
    virtual_ipaddress {
        172.17.61.140    #VIP
    }
}
[root@qht134 keepalived]# cat /etc/keepalived/chk_haproxy.sh
#!/bin/bash
# If the haproxy process has died, try to restart it once; if it still
# is not running, stop keepalived so the VIP fails over to the peer node.
status=$(ps aux | grep haproxy | grep -v grep | grep -v bash | wc -l)
if [ "${status}" = "0" ]; then
    /etc/init.d/haproxy start
    status2=$(ps aux | grep haproxy | grep -v grep | grep -v bash | wc -l)
    if [ "${status2}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
[root@qht134 ~]# chmod u+x /etc/keepalived/chk_haproxy.sh

[root@qht134 keepalived-1.2.2]# scp /etc/keepalived/chk_haproxy.sh 172.17.61.135:/etc/keepalived/

Through the chk_haproxy.sh script, when haproxy on this node fails and cannot be restarted, the keepalived service is stopped, letting the VIP float over to the other HAProxy node.
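
As an aside, an alternative to stopping keepalived is to let the VRRP priority do the work: keepalived can run the check inline and apply weight as a penalty. A sketch under the priorities used here (100 vs 50; the penalty has to exceed the 50-point gap for the BACKUP to win the election):

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits non-zero when no haproxy process exists
    interval 2
    weight -60                    # on failure MASTER drops from 100 to 40, below BACKUP's 50
}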


5.2 Configure Keepalived on the other node; the installation steps are the same as on qht134.

Only three places in the configuration file need to change (a quick grep check follows the listing):

[root@qht135 keepalived]# cat  /etc/keepalived/keepalived.conf
! Configuration File for keepalived


global_defs {
    router_id LVS_DEVEL
}
# monitor the haproxy process, run every 2s
vrrp_script chk_haproxy {
    script "/etc/keepalived/chk_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP          #qht134 set MASTER,qht135 set BACKUP
    interface eth2
    virtual_router_id 51
    priority 50             #qht134 set 100,qht135 set 50
    advert_int 1
    mcast_src_ip 172.17.61.135    #real ip
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    track_script {
        chk_haproxy    #monitor status of haproxy process
    }
    virtual_ipaddress {
        172.17.61.140    #VIP
    }
}
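
A quick way to eyeball the three node-specific lines on qht135:

[root@qht135 keepalived]# grep -E 'state|priority|mcast_src_ip' /etc/keepalived/keepalived.conf
    state BACKUP          #qht134 set MASTER,qht135 set BACKUP
    priority 50             #qht134 set 100,qht135 set 50
    mcast_src_ip 172.17.61.135    #real ip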

Start Keepalived on both nodes:

[root@qht134 keepalived-1.2.2]#  /etc/init.d/keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@qht135 keepalived-1.2.2]#  /etc/init.d/keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]

5.3 Test Keepalived

[root@qht134 keepalived-1.2.2]#  ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:42:74:a0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.134/24 brd 172.17.61.255 scope global eth2
    inet 172.17.61.140/32 scope global eth2
    inet6 fe80::20c:29ff:fe42:74a0/64 scope link
       valid_lft forever preferred_lft forever

The VIP currently sits on qht134. Let's test whether it floats over to qht135 when haproxy on qht134 is interrupted:

[root@qht134 keepalived-1.2.2]# service haproxy stop
Stopping haproxy:                                          [  OK  ]
[root@qht134 keepalived]#  ip addr show eth2
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:42:74:a0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.134/24 brd 172.17.61.255 scope global eth2
    inet 172.17.61.140/32 scope global eth2
    inet6 fe80::20c:29ff:fe42:74a0/64 scope link
       valid_lft forever preferred_lft forever

After the haproxy service was stopped, the VIP did not float to qht135. This is because chk_haproxy.sh keeps checking the haproxy process and restarts the service as soon as it finds haproxy down.

[root@qht134 keepalived]#   service haproxy status
haproxy (pid  17203) is running...

So the service was simply restarted right after being stopped.

So instead, let's stop the keepalived service directly and see whether the VIP floats to qht135:

[root@qht134 keepalived]#  service keepalived  stop
Stopping keepalived:                                       [  OK  ]

/var/log/messages shows the VIP has already been removed from this host:

Jun 25 18:30:37 qht134 Keepalived: Terminating on signal
Jun 25 18:30:37 qht134 Keepalived_vrrp: Terminating VRRP child process on signal
Jun 25 18:30:37 qht134 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Jun 25 18:30:37 qht134 Keepalived: Stopping Keepalived v1.2.2 (06/22,2018)

Check qht134's network state:

[root@qht134 keepalived]#  ip addr show eth2
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:42:74:a0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.134/24 brd 172.17.61.255 scope global eth2
    inet6 fe80::20c:29ff:fe42:74a0/64 scope link
       valid_lft forever preferred_lft forever

Check qht135's network state:

[root@qht135 keepalived-1.2.2]# ip addr show eth2
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8b:fb:c0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.135/24 brd 172.17.61.255 scope global eth2
    inet 172.17.61.140/32 scope global eth2
    inet6 fe80::20c:29ff:fe8b:fbc0/64 scope link
       valid_lft forever preferred_lft forever
[root@qht135 keepalived-1.2.2]# tail -n5 /var/log/messages
Jun 25 18:30:41 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Jun 25 18:30:41 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Jun 25 18:30:41 qht131 avahi-daemon[1326]: Registering new address record for 172.17.61.140 on eth2.IPv4.
Jun 25 18:30:41 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth2 for 172.17.61.140
Jun 25 18:30:46 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth2 for 172.17.61.140

The VIP has been registered by qht135, so the failover succeeded!

Continuing the test: what happens when keepalived on qht134 is started again?

[root@qht134 keepalived]#  service keepalived start
Starting keepalived:                                       [  OK  ]
[root@qht134 keepalived]#  ip addr show eth2
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:42:74:a0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.134/24 brd 172.17.61.255 scope global eth2
    inet 172.17.61.140/32 scope global eth2
    inet6 fe80::20c:29ff:fe42:74a0/64 scope link
       valid_lft forever preferred_lft forever
[root@qht135 keepalived-1.2.2]#  ip addr show eth2
2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8b:fb:c0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.135/24 brd 172.17.61.255 scope global eth2
    inet6 fe80::20c:29ff:fe8b:fbc0/64 scope link
       valid_lft forever preferred_lft forever
[root@qht135 keepalived-1.2.2]#  tail -n5 /var/log/messages
Jun 25 18:30:46 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth2 for 172.17.61.140
Jun 25 18:42:46 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Jun 25 18:42:46 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Jun 25 18:42:46 qht131 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Jun 25 18:42:46 qht131 avahi-daemon[1326]: Withdrawing address record for 172.17.61.140 on eth2.

Judging from qht135's log, because qht134 has the higher priority, the VIP was snatched back by qht134 as soon as the master's service came back up (VRRP preemption, which is the default behavior).
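
If having the VIP flap back like this is undesirable, keepalived offers nopreempt, which stops a recovered node from reclaiming the VIP; a sketch (nopreempt is only honored when the instance is declared with state BACKUP, so both nodes would use BACKUP and differ only in priority):

vrrp_instance VI_1 {
    state BACKUP     # nopreempt requires state BACKUP, even on the preferred node
    nopreempt        # a recovered node no longer takes the VIP back
    priority 100     # still 100 on qht134 and 50 on qht135
    ...              # rest of the instance unchanged
}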


6. With Keepalived and HAProxy both installed and tested, clients only need to connect to the VIP to reach the PXC database. The MySQL high-availability setup is now complete!
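
As a final sanity check from any client machine, reusing the robin test user from section 4, @@hostname reports which PXC node actually served the statement:

[root@qht131 ~]# mysql -h172.17.61.140 -P3306 -urobin -pxxx --protocol=tcp -e "select @@hostname"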


References:

https://blog.csdn.net/leshami/article/details/79105893

https://www.cnblogs.com/kgdxpr/p/3325788.html
