MySQL Read/Write Splitting and Load Balancing

The official MySQL documentation contains this statement:

MySQL Proxy is currently an Alpha release and should not be used within production environments.

So... we use HAProxy for this instead. The following is for reference only:

Environment

master    192.168.1.106    master1

slave1    192.168.1.107    master2 <--> master1 (master-master replication with master1)

slave2    192.168.1.110    slave2 ----> master1 (slave of master1)

slave3    192.168.1.111    slave3 ----> master1 (slave of master1)

slave4    192.168.1.112    slave4 ----> master2 (slave of master2)

monitor   192.168.1.200

192.168.1.105    eth1    write IP

192.168.1.113    eth2    read IP

Notes:

When master stops replicating, slave1 becomes the primary; HAProxy stops sending requests to master, slave2, and slave3, while slave1, slave2, and slave3 can still receive binary logs from master.

When slave1 stops replicating, master becomes the primary; HAProxy stops sending requests to slave1 and slave4, while master and slave4 can still receive binary logs from slave1.

When master and slave1 both stop replicating, both primaries switch to read-only mode and the database can no longer accept writes; HAProxy stops sending requests to slave2, slave3, and slave4 (split-brain).

When slave1 goes offline, master enters backup mode and HAProxy stops sending requests to slave1 and slave4.

When master goes offline, slave1 enters backup mode and HAProxy stops sending requests to master, slave2, and slave3.

When master and slave1 are both offline, the entire DB cluster stops serving.

1. Master/slave replication setup (omitted)
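The replication setup is the standard MySQL procedure; as a minimal sketch only (the repl account, its password, and the binlog coordinates below are placeholders, not values from this environment):

-- On the master side (run on both master1 and master2 for the master-master pair):
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.%' IDENTIFIED BY 'repl_pass';
FLUSH PRIVILEGES;

-- On each replica (e.g. slave2 pointing at master1), then start the threads:
CHANGE MASTER TO
    MASTER_HOST='192.168.1.106',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_pass',
    MASTER_LOG_FILE='mysql-bin.000001',   -- placeholder coordinates
    MASTER_LOG_POS=4;
START SLAVE;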

2. Install xinetd and configure the mysqlchk services
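On a RHEL/CentOS-style system of this era, xinetd is available from the base repositories (assuming yum):

# on every database node
yum -y install xinetd
chkconfig xinetd on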

vi /etc/xinetd.d/mysqlchk

-- Configuration on the two masters:

service mysqlchk-write
{
        flags           = REUSE
        socket_type     = stream
        port            = 9201
        wait            = no
        user            = root
        server          = /opt/script/mysqlchk_status.sh
        log_on_failure  += USERID
        disable         = no
        only_from       = 192.168.1.0/24   # recommended: list only the IPs that need
                                           # to connect (security purposes)
        per_source      = UNLIMITED
}

service mysqlchk-read
{
        flags           = REUSE
        socket_type     = stream
        port            = 9202
        wait            = no
        user            = root
        server          = /opt/script/mysqlchk_replication.sh
        log_on_failure  += USERID
        disable         = no
        only_from       = 192.168.1.0/24   # recommended: list only the IPs that need
                                           # to connect (security purposes)
        per_source      = UNLIMITED
}

-- All slaves only need the replication-status check service:

service mysqlchk-read
{
        flags           = REUSE
        socket_type     = stream
        port            = 9202
        wait            = no
        user            = root
        server          = /opt/script/mysqlchk_replication.sh
        log_on_failure  += USERID
        disable         = no
        only_from       = 192.168.1.0/24   # recommended: list only the IPs that need
                                           # to connect (security purposes)
        per_source      = UNLIMITED
}
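The two check scripts referenced by the xinetd services above are not shown in the original post. Below is a minimal sketch of each, assuming a dedicated mysqlchk monitoring account (placeholder credentials). xinetd attaches the TCP socket to the script's stdout, so each script answers with a raw HTTP response, which is exactly what HAProxy's httpchk inspects:

#!/bin/bash
# /opt/script/mysqlchk_status.sh -- sketch
# Answers 200 OK if mysqld accepts connections, 503 otherwise.
MYSQL_USER="mysqlchk"        # placeholder monitoring account
MYSQL_PASS="mysqlchk_pass"   # placeholder password

if mysql -h127.0.0.1 -u"$MYSQL_USER" -p"$MYSQL_PASS" -e "SELECT 1;" >/dev/null 2>&1; then
    echo -en "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nMySQL is up.\r\n"
else
    echo -en "HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nMySQL is down.\r\n"
fi

#!/bin/bash
# /opt/script/mysqlchk_replication.sh -- sketch
# Answers 200 OK only if both replication threads are running, 503 otherwise.
MYSQL_USER="mysqlchk"        # placeholder monitoring account
MYSQL_PASS="mysqlchk_pass"   # placeholder password

STATUS=$(mysql -h127.0.0.1 -u"$MYSQL_USER" -p"$MYSQL_PASS" -e "SHOW SLAVE STATUS\G" 2>/dev/null)
IO_OK=$(echo "$STATUS"  | grep -c "Slave_IO_Running: Yes")
SQL_OK=$(echo "$STATUS" | grep -c "Slave_SQL_Running: Yes")

if [ "$IO_OK" -eq 1 ] && [ "$SQL_OK" -eq 1 ]; then
    echo -en "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nReplication is running.\r\n"
else
    echo -en "HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nReplication is broken.\r\n"
fi

Remember to make both scripts executable:

chmod +x /opt/script/mysqlchk_status.sh /opt/script/mysqlchk_replication.sh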

vi /etc/services

-- On the two masters, add:

mysqlchk-write    9201/tcp         # MySQL status check

mysqlchk-read     9202/tcp         # MySQL replication check

-- On all slaves, add:

mysqlchk-read     9202/tcp         # MySQL replication check

Restart xinetd:

# /etc/init.d/xinetd stop

# /etc/init.d/xinetd start

Check the listening ports to confirm:

[root@master xinetd.d]#  netstat -antup|grep xinetd

tcp        0      0 0.0.0.0:9201                0.0.0.0:*                   LISTEN      3077/xinetd

tcp        0      0 0.0.0.0:9202                0.0.0.0:*                   LISTEN      3077/xinetd
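The checks can also be exercised end-to-end from the monitor host (curl used here as an assumed convenience, since the scripts speak plain HTTP); a healthy node answers 200 OK, a failed check 503:

curl -i http://192.168.1.106:9201    # mysql status check
curl -i http://192.168.1.106:9202    # replication check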

3. Install HAProxy on the monitor host

tar zxvf haproxy-1.4.23.tar.gz

cd haproxy-1.4.23

make TARGET=linux26 ARCH=x86_64

make install

4. Write the HAProxy configuration file

vi /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg

# HAProxy configuration - haproxy-db.cfg

global
    maxconn 4096
    daemon
    pidfile /usr/local/haproxy-1.4.23/haproxy.pid
    #debug
    #quiet
    #chroot /usr/share/haproxy

defaults
    log     global
    mode    http
    #option httplog
    option  dontlognull
    log 127.0.0.1 local0
    retries 3
    option  redispatch
    maxconn 4096
    timeout connect 1000ms
    timeout client  50000ms
    timeout server  50000ms

listen stats :8011
    balance
    mode http
    stats enable
    stats auth root:monitor

##
## FRONTEND ##
##

# Load-balanced IPs for DB writes and reads
#
frontend db_write
    mode tcp
    bind 192.168.1.105:3306
    default_backend cluster_db_write

frontend db_read
    mode tcp
    bind 192.168.1.113:3306
    default_backend cluster_db_read

# Monitor DB server availability
#
frontend monitor_master
    #
    # set master_backup to 'up' or 'down'
    #
    bind 127.0.0.1:9301
    mode http
    #option nolinger
    acl no_repl_master nbsrv(master_replication) eq 0
    acl no_repl_slave1 nbsrv(slave1_replication) eq 0
    acl no_master nbsrv(master_status) eq 0
    acl no_slave1 nbsrv(slave1_status) eq 0
    monitor-uri /monitor
    monitor fail unless no_repl_master no_repl_slave1 no_slave1
    monitor fail if no_master no_slave1

frontend monitor_slave1
    #
    # set slave1_backup to 'up' or 'down'
    #
    bind 127.0.0.1:9302
    mode http
    #option nolinger
    acl no_repl_master nbsrv(master_replication) eq 0
    acl no_repl_slave1 nbsrv(slave1_replication) eq 0
    acl no_master nbsrv(master_status) eq 0
    acl no_slave1 nbsrv(slave1_status) eq 0
    monitor-uri /monitor
    monitor fail unless no_repl_master no_repl_slave1 no_master
    monitor fail if no_master no_slave1

frontend monitor_slave2
    #
    # set slave2 (read-only slave) to 'down'
    #
    bind 127.0.0.1:9303
    mode http
    #option nolinger
    acl no_repl_slave2 nbsrv(slave2_replication) eq 0
    acl no_repl_master nbsrv(master_replication) eq 0
    acl slave1 nbsrv(slave1_status) eq 1
    monitor-uri /monitor
    monitor fail if no_repl_slave2
    monitor fail if no_repl_master slave1

frontend monitor_slave3
    #
    # set slave3 (read-only slave) to 'down'
    #
    bind 127.0.0.1:9304
    mode http
    #option nolinger
    acl no_repl_slave3 nbsrv(slave3_replication) eq 0
    acl no_repl_master nbsrv(master_replication) eq 0
    acl slave1 nbsrv(slave1_status) eq 1
    monitor-uri /monitor
    monitor fail if no_repl_slave3
    monitor fail if no_repl_master slave1

frontend monitor_slave4
    #
    # set slave4 (read-only slave) to 'down'
    #
    bind 127.0.0.1:9305
    mode http
    #option nolinger
    acl no_repl_slave4 nbsrv(slave4_replication) eq 0
    acl no_repl_slave1 nbsrv(slave1_replication) eq 0
    acl master nbsrv(master_status) eq 1
    monitor-uri /monitor
    monitor fail if no_repl_slave4
    monitor fail if no_repl_slave1 master

# Monitor for split-brain syndrome
#
frontend monitor_splitbrain
    #
    # set master_splitbrain and slave1_splitbrain to 'up'
    #
    bind 127.0.0.1:9300
    mode http
    #option nolinger
    acl no_repl01 nbsrv(master_replication) eq 0
    acl no_repl02 nbsrv(slave1_replication) eq 0
    acl master nbsrv(master_status) eq 1
    acl slave1 nbsrv(slave1_status) eq 1
    monitor-uri /monitor
    monitor fail unless no_repl01 no_repl02 master slave1

##
## BACKEND ##
##

# Check every DB server's replication status
# - perform an http check on port 9202 (replication status)
# - set to 'down' if the response is '503 Service Unavailable'
# - set to 'up' if the response is '200 OK'
#

backend master_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server master 192.168.1.106:3306 check port 9202 inter 5s rise 1 fall 1

backend slave1_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave1 192.168.1.107:3306 check port 9202 inter 5s rise 1 fall 1

backend slave2_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave2 192.168.1.110:3306 check port 9202 inter 5s rise 1 fall 1

backend slave3_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave3 192.168.1.111:3306 check port 9202 inter 5s rise 1 fall 1

backend slave4_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave4 192.168.1.112:3306 check port 9202 inter 5s rise 1 fall 1

# Check the two master DB servers' mysql status
# - perform an http check on port 9201 (mysql status)
# - set to 'down' if the response is '503 Service Unavailable'
# - set to 'up' if the response is '200 OK'
#
backend master_status
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server master 192.168.1.106:3306 check port 9201 inter 5s rise 2 fall 2

backend slave1_status
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave1 192.168.1.107:3306 check port 9201 inter 5s rise 2 fall 2

# DB write cluster
# Failure scenarios:
# - replication 'up' on master & slave1   = writes to master
# - replication 'down' on slave1          = writes to master
# - replication 'down' on master          = writes to slave1
# - replication 'down' on master & slave1 = go nowhere, split-brain, cluster FAIL!
# - mysql 'down' on slave1                = writes to master_backup
# - mysql 'down' on master                = writes to slave1_backup
# - mysql 'down' on master & slave1       = go nowhere, cluster FAIL!
#
backend cluster_db_write
    #
    # - max 1 db server available at all times
    # - master is preferred (top of list)
    # - the *_backup servers set their 'up' or 'down' based on results from the
    #   monitor_master / monitor_slave1 frontends
    #
    mode tcp
    option tcpka
    balance roundrobin
    option httpchk GET /monitor
    server master 192.168.1.106:3306 weight 1 check port 9202 inter 5s rise 2 fall 1
    server slave1 192.168.1.107:3306 weight 1 check port 9202 inter 5s rise 2 fall 1 backup
    server master_backup 192.168.1.106:3306 weight 1 check port 9301 inter 5s rise 2 fall 2 addr 127.0.0.1 backup
    server slave1_backup 192.168.1.107:3306 weight 1 check port 9302 inter 5s rise 2 fall 2 addr 127.0.0.1 backup

# DB read cluster
# Failure scenarios:
# - replication 'up' on master & slave1   = reads on master, slave1, all db slaves
# - replication 'down' on slave1          = reads on master, slaves of master
# - replication 'down' on master          = reads on slave1, slaves of slave1
# - replication 'down' on master & slave1 = reads on master_splitbrain and slave1_splitbrain only
# - mysql 'down' on slave1                = reads on master_backup, slaves of master
# - mysql 'down' on master                = reads on slave1_backup, slaves of slave1
# - mysql 'down' on master & slave1       = go nowhere, cluster FAIL!
#
backend cluster_db_read
    #
    # - max 2 master db servers available at all times
    # - max N slave db servers available at all times, except during split-brain
    # - master and slave1 track the 'up'/'down' state of the same servers in cluster_db_write
    # - the *_backup servers track the 'up'/'down' state of their counterparts in cluster_db_write
    # - the *_splitbrain servers set their 'up' or 'down' based on results from monitor_splitbrain
    #
    mode tcp
    option tcpka
    balance roundrobin
    option httpchk GET /monitor
    server master 192.168.1.106:3306 weight 1 track cluster_db_write/master
    server slave1 192.168.1.107:3306 weight 1 track cluster_db_write/slave1
    server master_backup 192.168.1.106:3306 weight 1 track cluster_db_write/master_backup
    server slave1_backup 192.168.1.107:3306 weight 1 track cluster_db_write/slave1_backup
    server master_splitbrain 192.168.1.106:3306 weight 1 check port 9300 inter 5s rise 1 fall 2 addr 127.0.0.1
    server slave1_splitbrain 192.168.1.107:3306 weight 1 check port 9300 inter 5s rise 1 fall 2 addr 127.0.0.1
    #
    # Scaling & redundancy options
    # - the db slaves set their 'up' or 'down' based on results from the monitor_slave2/3/4 frontends
    # - the db slaves should take longer to rise
    #
    server slave2_slave 192.168.1.110:3306 weight 1 check port 9303 inter 5s rise 5 fall 1 addr 127.0.0.1
    server slave3_slave 192.168.1.111:3306 weight 1 check port 9304 inter 5s rise 5 fall 1 addr 127.0.0.1
    server slave4_slave 192.168.1.112:3306 weight 1 check port 9305 inter 5s rise 5 fall 1 addr 127.0.0.1

5. Start HAProxy
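Before launching the daemon, it is worth validating the file; HAProxy's -c flag parses the configuration and reports errors without starting anything:

haproxy -c -f /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg

If the check passes, start it: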

haproxy -f /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg

Stats page: http://192.168.1.200:8011/haproxy?stats

Username: root    Password: monitor
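To verify the routing itself (assuming a test account allowed to connect through the two service IPs): queries against the write IP should always land on the current primary, while repeated connections to the read IP should rotate across the read pool:

mysql -h192.168.1.105 -P3306 -utest -p -e "SELECT @@hostname;"   # write IP -> primary
mysql -h192.168.1.113 -P3306 -utest -p -e "SELECT @@hostname;"   # read IP -> varies per connection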

Notes on some of the parameters:

maxconn <number>
  Sets the maximum per-process number of concurrent connections to <number>. It
  is equivalent to the command-line argument "-n". Proxies will stop accepting
  connections when this limit is reached.

daemon
  Makes the process fork into background. This is the recommended mode of
  operation. It is equivalent to the command line "-D" argument. It can be
  disabled by the command line "-db" argument.

pidfile <pidfile>
  Writes the pids of all daemons into file <pidfile>. This option is equivalent
  to the "-p" command line argument. The file must be accessible to the user
  starting the process.

retries <value>
  Set the number of retries to perform on a server after a connection failure.
  May be used in sections:    defaults | frontend | listen | backend
                                 yes   |    no    |   yes  |   yes
  Arguments :
    <value>   is the number of times a connection attempt should be retried on
              a server when a connection either is refused or times out. The
              default value is 3.

  It is important to understand that this value applies to the number of
  connection attempts, not full requests. When a connection has effectively
  been established to a server, there will be no more retry.

  In order to avoid immediate reconnections to a server which is restarting,
  a turn-around timer of 1 second is applied before a retry occurs.

  When "option redispatch" is set, the last retry may be performed on another
  server even if a cookie references a different server.

  See also : "option redispatch"

option redispatch
no option redispatch
  Enable or disable session redistribution in case of connection failure.
  May be used in sections:    defaults | frontend | listen | backend
                                 yes   |    no    |   yes  |   yes
  Arguments : none

  In HTTP mode, if a server designated by a cookie is down, clients may
  definitely stick to it because they cannot flush the cookie, so they will
  not be able to access the service anymore.

  Specifying "option redispatch" will allow the proxy to break their
  persistence and redistribute them to a working server.

  It also allows the proxy to retry the last connection to another server in
  case of multiple connection failures. Of course, it requires having "retries"
  set to a nonzero value.

  This form is the preferred form, which replaces both the "redispatch" and
  "redisp" keywords.

  If this option has been enabled in a "defaults" section, it can be disabled
  in a specific instance by prepending the "no" keyword before it.

option dontlognull
no option dontlognull
  Enable or disable logging of null connections.
  May be used in sections :   defaults | frontend | listen | backend
                                 yes   |    yes   |   yes  |   no
  Arguments : none

  In certain environments, there are components which will regularly connect
  to various systems to ensure that they are still alive. It can be the case
  from another load balancer as well as from monitoring systems. By default,
  even a simple port probe or scan will produce a log. If those connections
  pollute the logs too much, it is possible to enable option "dontlognull" to
  indicate that a connection on which no data has been transferred will not be
  logged, which typically corresponds to those probes.

  It is generally recommended not to use this option in uncontrolled
  environments (eg: internet), otherwise scans and other malicious activities
  would not be logged.

  If this option has been enabled in a "defaults" section, it can be disabled
  in a specific instance by prepending the "no" keyword before it.

In addition, use keepalived to provide HA for the proxy layer itself.
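The keepalived side is not covered in the original post; a minimal sketch for floating the two service IPs between a pair of HAProxy nodes might look like the following (the interface name, priorities, router id, and password are assumptions):

# /etc/keepalived/keepalived.conf on the primary proxy node
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # healthy while an haproxy process exists
    interval 2
}

vrrp_instance VI_DB {
    state MASTER                   # BACKUP on the standby proxy
    interface eth0                 # assumed NIC
    virtual_router_id 51
    priority 101                   # e.g. 100 on the standby
    authentication {
        auth_type PASS
        auth_pass db_ha            # placeholder
    }
    virtual_ipaddress {
        192.168.1.105              # write IP
        192.168.1.113              # read IP
    }
    track_script {
        chk_haproxy
    }
}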
