mycat + keepalived + haproxy + mmm

This article describes how to deploy and configure a highly available cluster built from Mycat, a distributed database middleware, and MMM (Master-Master replication manager for MySQL). Mycat provides read/write splitting and sharding (splitting databases and tables), while MMM builds on MySQL master-master replication to keep the data service highly available. The deployment steps, the configuration files, and common errors with their fixes are covered in detail.

Overall environment

MMM (Master-Master replication manager for MySQL) is a high-availability cluster architecture based on MySQL master-master replication. It keeps the cluster's service available, but it cannot fully guarantee data consistency. When the primary node goes down, the monitor detects the failure and triggers a VIP failover, so the data service stays continuously reachable. It is not recommended for environments with high replication lag or strict data-safety requirements; MHA can be considered instead.

Mycat is an open-source distributed database system: a server that implements the MySQL protocol. To front-end users it looks like a database proxy that can be accessed with the ordinary MySQL client tools and command line, while on the back end it talks to multiple MySQL servers over the native MySQL protocol, or to most mainstream database servers over JDBC. Its core feature is sharding: splitting a large table horizontally into N smaller tables stored on back-end MySQL servers or other databases.
Features Mycat provides:

  • Plain read/write splitting: the simplest configuration, supporting read/write splitting and master/slave switchover;
  • Sharding (splitting tables and databases): tables with more than 10 million rows can be sharded, with support for up to 100 billion rows in a single sharded table;
  • Multi-tenant applications: each tenant gets its own database, but the application connects only to Mycat, so multi-tenancy is achieved without modifying the application itself;
  • Reporting systems: Mycat's sharding handles statistics over large-scale reports; it can replace HBase for analyzing big data;
  • A simple and effective option for real-time queries over massive data sets, e.g. returning results from 10 billion frequently queried records within 3 seconds, where besides primary-key lookups there may also be range queries or queries on other attributes; in such cases Mycat may be the simplest effective choice.
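As a quick illustration of the proxy behaviour described above, the commands below connect to Mycat with the ordinary mysql client. They assume Mycat runs on the local host with its default ports (8066 for the service port, 9066 for the management port) and the test/123456 account configured in server.xml later in this article.

# connect to the logical schema through the service port, exactly like a normal MySQL server
mysql -utest -p123456 -h127.0.0.1 -P8066 db

# the management port accepts Mycat admin commands such as "show @@datanode;" and "show @@heartbeat;"
mysql -utest -p123456 -h127.0.0.1 -P9066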

keepalived + haproxy: keepalived detects service health and automatically moves a VIP between nodes; combined with haproxy for load balancing, this keeps the whole Mycat cluster highly available (minimal configuration sketches are given in the keepalived and haproxy sections below).

Topology
With the components above working together, we get a highly available cluster architecture with flexible sharding; depending on the environment, read/write splitting can also be implemented at the Mycat layer.

Deploying Mycat

Mycat version:
Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz

tar -xzvf Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz -C /usr/local/
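Mycat is a Java application, so a JDK must be available before it will start. A quick sanity check (the directory listing shown is what a 1.6.x release tarball typically contains):

java -version          # a JDK 1.7+ on the PATH is required
ls /usr/local/mycat    # expected layout: bin  catlet  conf  lib  logs  version.txt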

The main configuration files are server.xml, rule.xml, and schema.xml.

# cat schema.xml 
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!-- Logical schema "db" and the tables it contains -->
	<schema name="db" checkSQLschema="true" sqlMaxLimit="100">
		<table name="travelrecord" dataNode="dn1,dn2,dn3" rule="auto-sharding-long" />
		<table name="company" primaryKey="ID" type="global" dataNode="dn1,dn2,dn3" />
		<table name="goods" primaryKey="ID" type="global" dataNode="dn1,dn2" />
		<table name="hotnews" primaryKey="ID" autoIncrement="true" dataNode="dn1,dn2,dn3" rule="mod-long" />
		<table name="employee" primaryKey="ID" dataNode="dn1,dn2" rule="sharding-by-intfile" />
		<table name="customer" primaryKey="ID" dataNode="dn1,dn2" rule="sharding-by-intfile">
			<childTable name="orders" primaryKey="ID" joinKey="customer_id" parentKey="id">
				<childTable name="order_items" joinKey="order_id" parentKey="id" />
			</childTable>
			<childTable name="customer_addr" primaryKey="ID" joinKey="customer_id" parentKey="id" />
		</table>
	</schema>
	<!-- Logical dataNode (shard) definitions -->
	<dataNode name="dn1" dataHost="mm1" database="db2" />
	<dataNode name="dn2" dataHost="mm1" database="db3" />
	<dataNode name="dn3" dataHost="dba3" database="db1" />
    <!-- Physical dataHost definitions referenced by the dataNodes above -->
	<dataHost name="mm1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="centos1" url="192.168.56.50:3306" user="test" password="123456">
			<readHost host="centos2" url="192.168.56.52:3306" user="test" password="123456" />
		</writeHost>
	</dataHost>
	<dataHost name="dba3" maxCon="1000" minCon="10" balance="0" writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
	<heartbeat>select user()</heartbeat>
        <writeHost host="centos3" url="192.168.56.123:3306" user="test"  password="123456" />
	</dataHost>
		dbDriver="jdbc"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1"
		url="jdbc:mysql://localhost:3306" user="root" password="123456"> </writeHost>
		</dataHost> -->
</mycat:schema>
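The dataNodes above map to the physical databases db2/db3 on dataHost mm1 and db1 on dataHost dba3; these databases must already exist on the back-end MySQL servers before Mycat can route to them. A minimal sketch, assuming the test account from the configuration has enough privileges:

# physical schemas behind dn1/dn2, created on the mm1 write host (192.168.56.50)
mysql -utest -p123456 -h192.168.56.50 -e "CREATE DATABASE IF NOT EXISTS db2; CREATE DATABASE IF NOT EXISTS db3;"

# physical schema behind dn3, created on the dba3 host (192.168.56.123)
mysql -utest -p123456 -h192.168.56.123 -e "CREATE DATABASE IF NOT EXISTS db1;"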

# cat server.xml 
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
	<system>
	<property name="nonePasswordLogin">0</property> <!-- 0为需要密码登陆、1为不需要密码登陆 ,默认为0,设置为1则需要指定默认账户-->
	<property name="useHandshakeV10">1</property>
	<property name="useSqlStat">0</property>  <!-- 1为开启实时统计、0为关闭 -->
	<property name="useGlobleTableCheck">0</property>  <!-- 1为开启全加班一致性检测、0为关闭 -->		<property name="sequnceHandlerType">2</property>
	<property name="subqueryRelationshipCheck">false</property> <!-- 子查询中存在关联查询的情况下,检查关联字段中是否有分片字段 .默认 false -->		<property name="processorBufferPoolType">0</property>
		<!--分布式事务开关,0为不过滤分布式事务,1为过滤分布式事务(如果分布式事务内只涉及全局表,则不过滤),2为不过滤分布式事务,但是记录分布式事务日志-->
		<property name="handleDistributedTransactions">0</property>
		
		<!-- off-heap for merge/order/group/limit: 1 = on, 0 = off -->
		<property name="useOffHeapForMerge">1</property>
		<!-- unit: m -->
		<property name="memoryPageSize">64k</property>
		<!-- unit: k -->
		<property name="spillsFileBufferSize">1k</property>
		<property name="useStreamOutput">0</property>
		<!-- unit: m -->
		<property name="systemReserveMemorySize">384m</property>
		<!-- whether to use ZooKeeper to coordinate switchover -->
		<property name="useZKSwitch">false</property>
		<property name="strictTxIsolation">false</property>
		
	</system>
	
		<user name="test" defaultAccount="true">
		<property name="password">123456</property>
		<property name="schemas">db</property>

	</user></mycat:server>


For rule.xml, the default sharding rules are sufficient if no custom rules are needed. In addition, if the database-backed sequence method is chosen, sequence_db_conf.properties must be configured.

# cat sequence_db_conf.properties 
#sequence stored in datanode
GLOBAL=dn1
COMPANY=dn1
CUSTOMER=dn1
ORDERS=dn1
travelrecord=dn1
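With server.xml, rule.xml and schema.xml in place, Mycat can be started with the wrapper script shipped in bin/. A short sketch, assuming the tarball was extracted to /usr/local/mycat as above:

cd /usr/local/mycat
bin/mycat start        # also supports stop | restart | status | console

# watch the startup log; a successful start typically reports "MyCAT Server startup successfully"
tail -f logs/wrapper.log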

Errors and fixes

  1. A table with an auto-increment primary key reports an error on insert
Query: insert into travelrecord(aname) values ('aa'),('bb'),('cc')

Error code: 1064
bad insert sql (sharding column:ID not provided,INSERT INTO travelrecord (aname)
VALUES ('aa'),
	('bb'),
	('cc')

Fix

Add the primaryKey and autoIncrement attributes to the table definition in schema.xml:
 <table name="travelrecord" dataNode="dn1,dn2,dn3" primaryKey="ID" autoIncrement="true" rule="auto-sharding-long" />
  2. Error when inserting a value:
mysql> insert into travelrecord (aname) values ( 'dd');
ERROR 1064 (HY000): can't find any valid datanode :TRAVELRECORD -> ID -> 1087887822743318529

Fix:

The wrong sharding rule was chosen. With rule="auto-sharding-long", an id produced by the global sequence (such as 1087887822743318529) falls outside the ranges defined for that rule in autopartition-long.txt, so no dataNode matches. Change the rule to mod-long, which maps any id onto a dataNode by modulo.

Deploying keepalived
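The article does not record the keepalived configuration itself, so the following is only an illustrative minimal /etc/keepalived/keepalived.conf for the active haproxy node. The interface name (eth0) matches the rest of the article, while the Mycat-layer VIP 192.168.56.60 is a hypothetical address chosen for the sketch; the standby node would use state BACKUP and a lower priority.

yum -y install keepalived psmisc

cat > /etc/keepalived/keepalived.conf <<'EOF'
! mark the node unhealthy when haproxy is no longer running
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER               # BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100               # use a lower value on the standby node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass mycat
    }
    virtual_ipaddress {
        192.168.56.60          # hypothetical VIP for the Mycat/haproxy layer
    }
    track_script {
        chk_haproxy
    }
}
EOF

systemctl start keepalived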

Deploying haproxy
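Likewise, no haproxy configuration was recorded. The sketch below is a minimal TCP-mode haproxy.cfg that balances client connections across two Mycat instances; the listen port (48066) and the Mycat node addresses (192.168.56.111/192.168.56.222 on service port 8066) are assumptions made only for illustration.

yum -y install haproxy

cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
    daemon
    maxconn 4096

defaults
    mode    tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

# forward client connections to the Mycat service port on each node
listen mycat_service
    bind 0.0.0.0:48066
    balance roundrobin
    server mycat1 192.168.56.111:8066 check inter 5s rise 2 fall 3
    server mycat2 192.168.56.222:8066 check inter 5s rise 2 fall 3
EOF

systemctl start haproxy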

Deploying MMM

Environment:

Hostname   Role                         IP              server_id   VIP
centos1    master (read/write)          192.168.56.111  111111      192.168.56.51 / 50 (write)
centos2    standby master (read-only)   192.168.56.222  222222      192.168.56.52

Versions
mysql: 5.7.24
mmm: 2.2.1

Implementation steps

  1. Create the users:
create user replication@'192.168.56.%' identified by 'slave';
grant REPLICATION SLAVE on *.* to replication@'192.168.56.%';

create user mmm_agent@'192.168.56.%' identified by 'RepAgent';
grant SUPER, REPLICATION CLIENT, PROCESS on *.* to mmm_agent@'192.168.56.%';

create user mmm_monitor@'192.168.56.%' identified by 'RepMonitor';
grant REPLICATION CLIENT on *.* to mmm_monitor@'192.168.56.%';

create user mmm_tools@'192.168.56.%' identified by 'RepTools';
grant SUPER, REPLICATION CLIENT, RELOAD on *.* to mmm_tools@'192.168.56.%';
  2. Set up GTID-based master-master replication between the two nodes; a minimal sketch is given below.
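The article does not detail this step, so the following is only a minimal sketch of GTID-based master-master replication between centos1 (192.168.56.111, server_id 111111) and centos2 (192.168.56.222, server_id 222222), reusing the replication account created in step 1; option names follow MySQL 5.7.

# required in /etc/my.cnf on both nodes (server_id differs per node), then restart mysqld:
#   server_id                = 111111   # 222222 on centos2
#   log_bin                  = mysql-bin
#   log_slave_updates        = ON
#   gtid_mode                = ON
#   enforce_gtid_consistency = ON

# on centos1: replicate from centos2
mysql -uroot -p -e "CHANGE MASTER TO MASTER_HOST='192.168.56.222', MASTER_USER='replication', MASTER_PASSWORD='slave', MASTER_AUTO_POSITION=1; START SLAVE;"

# on centos2: replicate from centos1
mysql -uroot -p -e "CHANGE MASTER TO MASTER_HOST='192.168.56.111', MASTER_USER='replication', MASTER_PASSWORD='slave', MASTER_AUTO_POSITION=1; START SLAVE;"

# verify on both nodes: Slave_IO_Running and Slave_SQL_Running should both be Yes
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E 'Running|Executed_Gtid'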
  3. Install the Perl packages required by MMM:
yum -y install gcc* perl-Date-Manip perl-XML-DOM-XPath perl-XML-Parser perl-XML-RegExp rrdtool perl-Class-Singleton perl perl-DBD-MySQL perl-Params-Validate perl-MailTools perl-Time-HiRes

Download separately:
Proc-Daemon-0.23
Log-Log4perl-1.49
Algorithm-Diff-1.1903
  4. Install the MMM software:
# tar -xzvf mysql-mmm-2.2.1.tar.gz
# cd mysql-mmm-2.2.1/
# make && make install
  5. Edit the configuration files.
    Agent configuration file:
# cat mmm_agent.conf 
include mmm_common.conf
this db1 # the local node's name (db1 on centos1, db2 on centos2)

Common configuration file:

# cat mmm_common.conf 
active_master_role	writer 


<host default>
	cluster_interface		eth0

	pid_path				/var/run/mmm_agentd.pid
	bin_path				/usr/lib/mysql-mmm/

    replication_user        replication
    replication_password    slave

	agent_user				mmm_agent
	agent_password			RepAgent
</host>

<host db1>
	ip						192.168.56.111 # IP of db1
	mode					master
	peer					db2 # the other host in the master-master pair
</host>

<host db2>
	ip						192.168.56.222 # IP of db2
	mode					master
	peer					db1
</host>


<role writer>
	hosts					db1, db2 # all hosts that may hold the writer role
	ips						192.168.56.50 # the writer VIP
	mode					exclusive
</role>

<role reader>
	hosts					db1, db2 # all hosts that serve reads
	ips						192.168.56.111,192.168.56.222 
	mode					balanced
</role>
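mmm_common.conf is read by both the agents and the monitor and has to be identical on every host, so it is normally edited once and copied out; a small sketch, assuming the default /etc/mysql-mmm/ configuration directory:

# distribute the shared configuration to the other database node (and the monitor host, if separate)
scp /etc/mysql-mmm/mmm_common.conf centos2:/etc/mysql-mmm/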

Monitor node configuration file:

# cat mmm_mon.conf 
include mmm_common.conf

<monitor>
	ip			 		192.168.56.111
	pid_path				/var/run/mmm_mond.pid
	bin_path				/usr/lib/mysql-mmm/
	status_path				/var/lib/misc/mmm_mond.status
	ping_ips				192.168.56.111,192.168.56.222
</monitor>

<host default>
	monitor_user			mmm_monitor
	monitor_password		RepMonitor
</host>

debug 0

  6. Start the agent on each node:
# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
  7. Start the monitor:
# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok

Status after startup:

[root@centos1 mysql]# mmm_control show 
  db1(192.168.56.111) master/AWAITING_RECOVERY. Roles: 
  db2(192.168.56.222) master/AWAITING_RECOVERY. Roles:
  8. Set the nodes online:
[root@centos1 mysql]# mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@centos1 mysql]# mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@centos1 mysql]# mmm_control show 
  db1(192.168.56.111) master/ONLINE. Roles: reader(192.168.56.111), writer(192.168.56.50)
  db2(192.168.56.222) master/ONLINE. Roles: reader(192.168.56.222)
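Besides mmm_control show, the role assignment can be cross-checked with the monitor's built-in checks and by looking for the VIP on the writer node; a short sketch (interface eth0 as in mmm_common.conf):

# run the monitor's health checks (ping, mysql, replication threads) against all hosts
mmm_control checks all

# on the node currently holding the writer role, the write VIP should be bound to eth0
ip addr show eth0 | grep 192.168.56.50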

Error handling

  1. Error when starting the agent
# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.
BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.
failed 

Proc::Daemon needs to be installed separately:

# cd Proc-Daemon-0.23
# perl Makefile.PL 
Checking if your kit is complete...
Looks good
Warning: prerequisite Proc::ProcessTable 0 not found.
Writing Makefile for Proc::Daemon
[root@centos1 Proc-Daemon-0.23]# make && make install 
cp lib/Proc/Daemon.pm blib/lib/Proc/Daemon.pm
cp lib/Proc/Daemon.pod blib/lib/Proc/Daemon.pod
Manifying blib/man3/Proc::Daemon.3pm
Installing /usr/local/share/perl5/Proc/Daemon.pod
Installing /usr/local/share/perl5/Proc/Daemon.pm
Installing /usr/local/share/man/man3/Proc::Daemon.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
The following also need to be installed:
Log-Log4perl-1.49
Algorithm-Diff-1.1903
  2. After the configuration is complete and the nodes are set online, no VIP appears on the host's network interface.
Check the logs under /var/log/mysql-mmm:
# ls
mmm_agentd.log  mmm_mond.log

The agent log shows:

2019/01/21 12:04:00 FATAL Couldn't configure IP '192.168.56.52' on interface 'eth0': undef

Enable debug mode and check the output:

2019/01/21 12:03:54 DEBUG Executing /usr/lib/mysql-mmm//agent/configure_ip eth0 192.168.56.52
Can't locate Net/ARP.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/share/perl5/vendor_perl/MMM/Agent/Helpers/Network.pm line 11.
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Agent/Helpers/Network.pm line 11.
Compilation failed in require at /usr/share/perl5/vendor_perl/MMM/Agent/Helpers/Actions.pm line 5.
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Agent/Helpers/Actions.pm line 5.
Compilation failed in require at /usr/lib/mysql-mmm//agent/configure_ip line 6.
BEGIN failed--compilation aborted at /usr/lib/mysql-mmm//agent/configure_ip line 6.
2019/01/21 12:03:54 FATAL Couldn't configure IP '192.168.56.52' on interface 'eth0': undef
2019/01/21 12:03:54 DEBUG Executing /usr/lib/mysql-mmm//agent/sync_with_master 

Fix:
Install Net-ARP-1.0.7; after that the problem is resolved.
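When the machines have internet access, the missing Perl modules (Proc::Daemon, Log::Log4perl, Algorithm::Diff, Net::ARP) can also be pulled from CPAN instead of being built by hand from the downloaded tarballs; a sketch:

# Net::ARP contains XS code, so the gcc toolchain installed earlier is required
cpan Proc::Daemon Log::Log4perl Algorithm::Diff Net::ARP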
