MMM MySQL High Availability: A Detailed Configuration Guide

I. Introduction to MMM for MySQL
   MMM (Master-Master Replication Manager for MySQL) consists of one management node (the monitor) and multiple agent nodes. It monitors and manages MySQL master-master replication and service state, can also watch the replication and health of any number of slave nodes, and fails over automatically when any node goes down.
   The MMM suite has three main scripts:
       mmm_mond: the monitor process. Runs on the management node, watches all databases, and decides and executes every role switch.
       mmm_agent: the agent process. Runs on every MySQL server, answers the monitor's probes, and applies remote configuration changes.
       mmm_control: the management script. Shows and manages cluster state and controls the mmm_mond process.
II. Typical MMM Application Architecture

   (Figure: typical MMM application architecture — a monitor node managing two masters in master-master replication plus slaves, with one writer VIP and several reader VIPs.)
III. Configuring a Dual-Master Multi-Slave MySQL Architecture with MMM
   The architecture is shown in the figure above.
   Read/write-split VIP assignments for the dual-master, dual-slave architecture:

   Role      Physical IP     server_id   Virtual IP      VIP role     Function
   Master1   192.168.1.166   1           192.168.1.160   writer VIP   Write VIP; single point of writes
                                         192.168.1.151   reader VIP   Read VIP; one per node, can be balanced by a load balancer
   Master2   192.168.1.168   2           192.168.1.152   reader VIP
   Slave1    192.168.1.186   3           192.168.1.153   reader VIP
   Slave2    192.168.1.188   4           192.168.1.154   reader VIP
   Monitor   192.168.1.180   -           -               -            Management node

   0. Preparation

       Synchronize the clocks:

       # Install the ntpdate tool
       yum install ntpdate -y
       # Sync the clock (the argument is an NTP server)
       ntpdate pool.ntp.org
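       Clock drift will creep back after a one-shot sync; a cron entry keeps the nodes aligned (a suggested addition, assuming crond is running — not part of the original steps):

       # Hypothetical cron entry: re-sync against pool.ntp.org every day at 03:00
       echo '0 3 * * * root /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1' >> /etc/crontab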
       Disable SELinux:

       setenforce 0
       sed -i 's/enforcing/disabled/g' /etc/selinux/config
   1. Installing and configuring MMM
       1. Install the MMM suite
           1. On the Monitor node, install all MMM components:

           yum install epel-release.noarch -y
           yum install mysql-mmm mysql-mmm-agent mysql-mmm-tools mysql-mmm-monitor -y

           2. On every other node, install only the agent:

           yum install epel-release.noarch -y
           yum install mysql-mmm-agent -y
       2. Set up master-master replication between Master1 and Master2, and master-slave replication from Master1 to Slave1 and Slave2
       (for the replication setup, see "MySQL Master-Slave Replication Configuration" and "MySQL + Keepalived Dual-Master High Availability").
       3. Add the following to /etc/my.cnf on every MySQL node (a restart is required):

       read_only=1
       # read_only is needed because MMM requires strict read/write control over the data.
       # It does not affect replication, and users with SUPER (e.g. root) can still write.
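       After the restart, a quick check (not part of the original steps) confirms the setting is active:

       mysql -uroot -p -e "SHOW VARIABLES LIKE 'read_only';"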
       4. Create the monitor account (for health checks) and the agent account (for toggling read-only mode and syncing master information). Run this only on the write master; the grants replicate to all other nodes automatically:

       grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by 'monitorpasswd';
       grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.%' identified by 'agentpasswd';
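       To verify that the accounts reached the other nodes through replication (a sanity check, not in the original), run on any slave:

       mysql -uroot -p -e "SHOW GRANTS FOR 'mmm_monitor'@'192.168.1.%';"
       mysql -uroot -p -e "SHOW GRANTS FOR 'mmm_agent'@'192.168.1.%';"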
       5. Configure mmm_common.conf on all MMM nodes. Note: the inline # comments below are explanatory only — they will cause errors if left in the configuration files, so strip them before use with sed -i '/^#/d;s/#.*//g' <file>.

       vim /etc/mysql-mmm/mmm_common.conf
    
       # With active_master_role set, every MySQL node starts as "read_only=1"; MMM decides
       # per role whether to run "set global read_only=0" on a node.
       active_master_role      writer

       <host default>
           cluster_interface       eno16777736                 # network interface to bind VIPs on
           pid_path                /run/mysql-mmm-agent.pid    # PID file location
           bin_path                /usr/libexec/mysql-mmm/     # path to the MMM executables
           replication_user        slave_cp                    # replication user
           replication_password    pass                        # replication user's password
           agent_user              mmm_agent                   # user that toggles read_only
           agent_password          agentpasswd                 # that user's password
       </host>

       <host db1>                          # DB1
           ip      192.168.1.166
           mode    master
           peer    db2                     # DB1's peer master
       </host>

       <host db2>
           ip      192.168.1.168
           mode    master
           peer    db1
       </host>

       <host db3>
           ip      192.168.1.186
           mode    slave
       </host>

       <host db4>
           ip      192.168.1.188
           mode    slave
       </host>

       <role writer>                       # the writer role
           hosts   db1, db2                # either DB1 or DB2 may hold it
           ips     192.168.1.160           # the writable VIP
           mode    exclusive               # exclusive: a single IP held by one host at a time
       </role>

       <role reader>                       # the reader role
           hosts   db1, db2, db3, db4      # hosts eligible for reads
           ips     192.168.1.151, 192.168.1.152, 192.168.1.153, 192.168.1.154
           mode    balanced                # balanced: the IPs are distributed dynamically across the hosts
       </role>
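       mmm_common.conf must be identical on every node. One way to push it from the monitor to the four database hosts (a sketch; host list taken from the table above, assuming root SSH access):

       for h in 192.168.1.166 192.168.1.168 192.168.1.186 192.168.1.188; do
           scp /etc/mysql-mmm/mmm_common.conf root@$h:/etc/mysql-mmm/
       done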
       6. On the MMM management node only, configure mmm_mon.conf:

       vim /etc/mysql-mmm/mmm_mon.conf

       include mmm_common.conf

       <monitor>
           ip          127.0.0.1                       # for safety, listen on localhost only; default port 9988
           pid_path    /run/mysql-mmm-monitor.pid
           bin_path    /usr/libexec/mysql-mmm
           status_path /var/lib/mysql-mmm/mmm_mond.status
           # connectivity probes: the network counts as up if any one of these answers
           ping_ips    192.168.1.1, 192.168.1.166, 192.168.1.168, 192.168.1.186, 192.168.1.188
           flap_duration       3600                    # flap-detection window (seconds)
           flap_count          3                       # maximum flaps allowed inside the window
           auto_set_online     8                       # if > 0, a flapping host is set back online automatically once the window has passed
           # The kill_host_bin does not exist by default, though the monitor will
           # throw a warning about it missing. See the section 5.10 "Kill Host
           # Functionality" in the PDF documentation.
           #
           # kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
           #
       </monitor>

       <host default>
           monitor_user        mmm_monitor
           monitor_password    monitorpasswd
       </host>

       debug 0                                         # monitor run mode: 0 = normal, 1 = debug

     
     
       # Open port 9988 on the MMM management node
       firewall-cmd --permanent --add-port=9988/tcp
       firewall-cmd --reload
       # Open port 9989 on every MySQL node
       firewall-cmd --permanent --add-port=9989/tcp
       firewall-cmd --reload
       7. Configure mmm_agent.conf on all MySQL nodes:

       vim /etc/mysql-mmm/mmm_agent.conf

       include mmm_common.conf
       this db1        # on each of the four MySQL nodes use its own name: db1, db2, db3 or db4
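       Editing four files by hand is error-prone; a sketch of doing it from the monitor over SSH (an assumption, not an original step: root SSH access, host order matching db1..db4):

       i=1
       for h in 192.168.1.166 192.168.1.168 192.168.1.186 192.168.1.188; do
           # rewrite the "this ..." line on each node to its own identity
           ssh root@$h "sed -i 's/^this .*/this db$i/' /etc/mysql-mmm/mmm_agent.conf"
           i=$((i+1))
       done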
       8. Start the MMM services

       # on the MMM management node
       systemctl restart mysql-mmm-monitor
       systemctl enable mysql-mmm-monitor

       # on every MySQL node
       systemctl restart mysql-mmm-agent
       systemctl enable mysql-mmm-agent
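       Each agent should now be listening on port 9989 (the port opened in the firewall above); one way to confirm on a MySQL node:

       ss -tlnp | grep 9989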
       
       
       # Basic management commands on the MMM management node
       mmm_control show
         db1(192.168.1.166) master/AWAITING_RECOVERY. Roles:
         db2(192.168.1.168) master/AWAITING_RECOVERY. Roles:
         db3(192.168.1.186) slave/AWAITING_RECOVERY. Roles:
         db4(192.168.1.188) slave/AWAITING_RECOVERY. Roles:

       # If the nodes stay in the waiting state, set them online manually
       mmm_control set_online db1
       mmm_control set_online db2
       mmm_control set_online db3
       mmm_control set_online db4
         
         
       mmm_control show
         db1(192.168.1.166) master/ONLINE. Roles: reader(192.168.1.154), writer(192.168.1.160)
         db2(192.168.1.168) master/ONLINE. Roles: reader(192.168.1.151)
         db3(192.168.1.186) slave/ONLINE. Roles: reader(192.168.1.153)
         db4(192.168.1.188) slave/ONLINE. Roles: reader(192.168.1.152)
          
          
       # Check the health of every node
       mmm_control checks all
         db4  ping         [last change: 2017/03/25 22:55:49]  OK
         db4  mysql        [last change: 2017/03/25 22:55:49]  OK
         db4  rep_threads  [last change: 2017/03/25 22:55:49]  OK
         db4  rep_backlog  [last change: 2017/03/25 22:55:49]  OK: Backlog is null
         db2  ping         [last change: 2017/03/25 22:55:49]  OK
         db2  mysql        [last change: 2017/03/25 22:55:49]  OK
         db2  rep_threads  [last change: 2017/03/25 22:55:49]  OK
         db2  rep_backlog  [last change: 2017/03/25 22:55:49]  OK: Backlog is null
         db3  ping         [last change: 2017/03/25 22:55:49]  OK
         db3  mysql        [last change: 2017/03/25 22:55:49]  OK
         db3  rep_threads  [last change: 2017/03/25 22:55:49]  OK
         db3  rep_backlog  [last change: 2017/03/25 22:55:49]  OK: Backlog is null
         db1  ping         [last change: 2017/03/25 22:55:49]  OK
         db1  mysql        [last change: 2017/03/25 22:55:49]  OK
         db1  rep_threads  [last change: 2017/03/25 22:55:49]  OK
         db1  rep_backlog  [last change: 2017/03/25 22:55:49]  OK: Backlog is null

       # Check which VIPs are bound on each MySQL node
       ip a
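       For example, filtering for the cluster interface and subnet (interface name from mmm_common.conf):

       ip addr show eno16777736 | grep '192.168.1.'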
   9. Testing
       1. Read/write split test

       # Create a test user (on the write master)
       mysql -uroot -p
       create database test;
       create user test@"192.168.1.%" identified by '123';
       grant all on test.* to test@"192.168.1.%";
       exit

       # Log in through the write VIP, create a table and insert a row
       mysql -utest -p -h192.168.1.160
       use test;
       create table mmm_test(id varchar(60));
       insert into mmm_test (id) values ("masetr");
       exit

       # Log in through the read VIPs and query
       mysql -utest -p -h192.168.1.151
       show tables from test;
         +----------------+
         | Tables_in_test |
         +----------------+
         | mmm_test       |
         +----------------+
       select * from test.mmm_test;
       mysql -utest -p -h192.168.1.152
       select * from test.mmm_test;
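       To see which backend actually served each read (not in the original; it works because each node has a distinct server_id, per the table in section III):

       mysql -utest -p123 -h192.168.1.151 -e 'SELECT @@hostname, @@server_id;'
       mysql -utest -p123 -h192.168.1.152 -e 'SELECT @@hostname, @@server_id;'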
       2. Failover test

       # State before the failure
       [root@monitor ~]# mmm_control show
         db1(192.168.1.166) master/ONLINE. Roles: reader(192.168.1.152), writer(192.168.1.160)
         db2(192.168.1.168) master/ONLINE. Roles: reader(192.168.1.151)
         db3(192.168.1.186) slave/ONLINE. Roles: reader(192.168.1.154)
         db4(192.168.1.188) slave/ONLINE. Roles: reader(192.168.1.153)

       # Simulate a failure: stop MySQL (MariaDB) on DB1
       [root@DB1 ~]# systemctl stop mariadb

       # DB1 goes HARD_OFFLINE and is stripped of its roles
       [root@monitor ~]# mmm_control show
         db1(192.168.1.166) master/HARD_OFFLINE. Roles:
         db2(192.168.1.168) master/ONLINE. Roles: reader(192.168.1.151)
         db3(192.168.1.186) slave/ONLINE. Roles: reader(192.168.1.154)
         db4(192.168.1.188) slave/ONLINE. Roles: reader(192.168.1.153)

       # Moments later, DB1's writer and reader VIPs have been reassigned
       [root@monitor ~]# mmm_control show
         db1(192.168.1.166) master/HARD_OFFLINE. Roles:
         db2(192.168.1.168) master/ONLINE. Roles: reader(192.168.1.151), writer(192.168.1.160)
         db3(192.168.1.186) slave/ONLINE. Roles: reader(192.168.1.152), reader(192.168.1.154)
         db4(192.168.1.188) slave/ONLINE. Roles: reader(192.168.1.153)

       # Bring DB1 back
       [root@DB1 ~]# systemctl restart mariadb

       # Note: although DB1 has recovered, the writer VIP stays on DB2
       [root@monitor ~]# mmm_control show
         db1(192.168.1.166) master/ONLINE. Roles: reader(192.168.1.152)
         db2(192.168.1.168) master/ONLINE. Roles: reader(192.168.1.151), writer(192.168.1.160)
         db3(192.168.1.186) slave/ONLINE. Roles: reader(192.168.1.154)
         db4(192.168.1.188) slave/ONLINE. Roles: reader(192.168.1.153)
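       If you want the writer role back on db1 after it recovers, mmm_control can reassign it explicitly:

       mmm_control move_role writer db1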
IV. Optimizing the MMM Architecture with Amoeba (server IP 192.168.1.199)

   1. Install Java, the runtime Amoeba needs

       # Create the install directory
       mkdir /usr/java
       cd /usr/java
       # Official download: http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.rpm
       wget -c --no-check-certificate --no-cookie --header "Cookie: s_nr=1420682671945; s_cc=true; oraclelicense=accept-securebackup-cookie; gpw_e24=http%3A%2F%2Fwww.oracle.com%2Ftechnetwork%2Fjava%2Fjavase%2Fdownloads%2Fjdk7-downloads-1880260.html;s_sq=%5B%5BB%5D%5D" http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.rpm
       chmod +x jdk-8u92-linux-x64.rpm
       rpm -ivh jdk-8u92-linux-x64.rpm
       vim /etc/profile
       # Append the following lines
       export JAVA_HOME=/usr/java/jdk1.8.0_92
       export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
       export PATH=$PATH:$JAVA_HOME/bin
       # Apply immediately
       source /etc/profile
       # Check the version
       java -version
         java version "1.8.0_92"
         Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
         Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
   2. Install Amoeba

       wget https://sourceforge.net/projects/amoeba/files/Amoeba%20for%20mysql/3.x/amoeba-mysql-3.0.5-RC-distribution.zip
       unzip amoeba-mysql-3.0.5-RC-distribution.zip
       mv amoeba-mysql-3.0.5-RC /usr/local/amoeba/
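       Depending on how the archive was unpacked, the scripts under bin/ may have lost their execute bit (an assumption, not an original step); restore it just in case:

       chmod +x /usr/local/amoeba/bin/*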
   3. Configure Amoeba

       vim /usr/local/amoeba/conf/dbServers.xml

   In the file below, the values to adapt (shown in bold in the original) are the backend schema/user/password, each dbServer's ipAddress, and the read pool's poolNames:
      
      
       <?xml version="1.0" encoding="gbk"?>
       <!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">
       <amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">
           <!--
               Each dbServer needs to be configured into a Pool,
               If you need to configure multiple dbServer with load balancing that can be simplified
               by the following configuration:
               add attribute with name virtual = "true" in dbServer, but the configuration does not
               allow the element with name factoryConfig
               such as 'multiPool' dbServer
           -->
           <dbServer name="abstractServer" abstractive="true">
               <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">
                   <property name="connectionManager">${defaultManager}</property>
                   <property name="sendBufferSize">64</property>
                   <property name="receiveBufferSize">128</property>
                   <!-- mysql port -->
                   <property name="port">3306</property>
                   <!-- mysql schema -->
                   <property name="schema">amoeba</property>
                   <!-- mysql user -->
                   <property name="user">amoeba</property>
                   <property name="password">12345</property>
               </factoryConfig>
               <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">
                   <property name="maxActive">500</property>
                   <property name="maxIdle">500</property>
                   <property name="minIdle">1</property>
                   <property name="minEvictableIdleTimeMillis">600000</property>
                   <property name="timeBetweenEvictionRunsMillis">600000</property>
                   <property name="testOnBorrow">true</property>
                   <property name="testOnReturn">true</property>
                   <property name="testWhileIdle">true</property>
               </poolConfig>
           </dbServer>
           <dbServer name="writedb" parent="abstractServer">
               <factoryConfig>
                   <!-- mysql ip -->
                   <property name="ipAddress">192.168.1.160</property>
               </factoryConfig>
           </dbServer>
           <dbServer name="slave1" parent="abstractServer">
               <factoryConfig>
                   <!-- mysql ip -->
                   <property name="ipAddress">192.168.1.151</property>
               </factoryConfig>
           </dbServer>
           <dbServer name="slave2" parent="abstractServer">
               <factoryConfig>
                   <!-- mysql ip -->
                   <property name="ipAddress">192.168.1.152</property>
               </factoryConfig>
           </dbServer>
           <dbServer name="slave3" parent="abstractServer">
               <factoryConfig>
                   <!-- mysql ip -->
                   <property name="ipAddress">192.168.1.153</property>
               </factoryConfig>
           </dbServer>
           <dbServer name="slave4" parent="abstractServer">
               <factoryConfig>
                   <!-- mysql ip -->
                   <property name="ipAddress">192.168.1.154</property>
               </factoryConfig>
           </dbServer>
           <dbServer name="myslaves" virtual="true">
               <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
                   <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->
                   <property name="loadbalance">1</property>
                   <!-- Separated by commas,such as: server1,server2,server1 -->
                   <property name="poolNames">slave1,slave2,slave3,slave4</property>
               </poolConfig>
           </dbServer>
       </amoeba:dbServers>

       
       
       vim /usr/local/amoeba/conf/amoeba.xml

   In this file, the values to adapt (shown in bold in the original) are the proxy port and the client user/password that applications will use when connecting to Amoeba:
      
      
       <?xml version="1.0" encoding="gbk"?>
       <!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">
       <amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">
           <proxy>
               <!-- service class must implements com.meidusa.amoeba.service.Service -->
               <service name="Amoeba for Mysql" class="com.meidusa.amoeba.mysql.server.MySQLService">
                   <!-- port -->
                   <property name="port">8066</property>
                   <!-- bind ipAddress -->
                   <!--
                   <property name="ipAddress">127.0.0.1</property>
                   -->
                   <property name="connectionFactory">
                       <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">
                           <property name="sendBufferSize">128</property>
                           <property name="receiveBufferSize">64</property>
                       </bean>
                   </property>
                   <property name="authenticateProvider">
                       <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">
                           <property name="user">root</property>
                           <property name="password">1234567890</property>
                           <property name="filter">
                               <bean class="com.meidusa.toolkit.net.authenticate.server.IPAccessController">
                                   <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>
                               </bean>
                           </property>
                       </bean>
                   </property>
               </service>
               <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">
                   <!-- proxy server client process thread size -->
                   <property name="executeThreadSize">128</property>
                   <!-- per connection cache prepared statement size -->
                   <property name="statementCacheSize">500</property>
                   <!-- default charset -->
                   <property name="serverCharset">utf8</property>
                   <!-- query timeout( default: 60 second , TimeUnit:second) -->
                   <property name="queryTimeout">60</property>
               </runtime>
           </proxy>
           <!--
               Each ConnectionManager will start as thread
               manager responsible for the Connection IO read , Death Detection
           -->
           <connectionManagerList>
               <connectionManager name="defaultManager" class="com.meidusa.toolkit.net.MultiConnectionManagerWrapper">
                   <property name="subManagerClassName">com.meidusa.toolkit.net.AuthingableConnectionManager</property>
               </connectionManager>
           </connectionManagerList>
           <!-- default using file loader -->
           <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">
               <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>
           </dbServerLoader>
           <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">
               <property name="ruleLoader">
                   <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">
                       <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>
                       <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>
                   </bean>
               </property>
               <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
               <property name="LRUMapSize">1500</property>
               <property name="defaultPool">writedb</property>
               <property name="writePool">writedb</property>
               <property name="readPool">myslaves</property>
               <property name="needParse">true</property>
           </queryRouter>
       </amoeba:configuration>
   4. Grant Amoeba access to the databases (run on the write master only; the grants replicate to the other nodes):

       mysql -uroot -p
       create database amoeba;
       create user amoeba@"192.168.1.199" identified by '12345';
       grant all on amoeba.* to amoeba@"192.168.1.199";
       # Grant privileges on everything (the tests below need it)
       grant all on *.* to amoeba@"192.168.1.199";
       flush privileges;
       exit
   5. Open the Amoeba port in the firewall:

       firewall-cmd --permanent --add-port=8066/tcp
       firewall-cmd --reload
   6. Start Amoeba

       /usr/local/amoeba/bin/launcher &
       netstat -tlunp | grep java
         tcp6       0      0 :::8066      :::*      LISTEN      2666/java
     If the launcher fails to start (commonly with a JVM stack-size complaint), edit the JVM options:

       vim /usr/local/amoeba/jvm.properties
       # Change this line:
       JVM_OPTIONS="-server -Xms256m -Xmx1024m -Xss196k -XX:PermSize=16m -XX:MaxPermSize=96m"
       # to:
       JVM_OPTIONS="-server -Xms1024m -Xmx1024m -Xss256k -XX:PermSize=16m -XX:MaxPermSize=96m"
   7. Verification

       # On the MySQL side, insert a marker row directly on the slave1 and slave2 read nodes
       insert into test.mmm_test (id) values ("slave");
       # Then query repeatedly through Amoeba
       mysql -uroot -p1234567890 -h192.168.1.199 -P8066
       select * from test.mmm_test;

       MySQL [(none)]> select * from test.mmm_test;
       +--------+
       | id     |
       +--------+
       | masetr |
       | slave  |
       +--------+
       2 rows in set (0.07 sec)

       MySQL [(none)]> select * from test.mmm_test;
       +--------+
       | id     |
       +--------+
       | masetr |
       | slave  |
       +--------+
       2 rows in set (0.03 sec)

       MySQL [(none)]> select * from test.mmm_test;
       +--------+
       | id     |
       +--------+
       | masetr |
       +--------+
       1 row in set (0.04 sec)

       MySQL [(none)]> select * from test.mmm_test;
       +--------+
       | id     |
       +--------+
       | masetr |
       +--------+
       1 row in set (0.04 sec)

       # The alternating row counts show successive reads landing on different backends (round-robin).
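   To also confirm that writes travel through the writedb pool (the writer VIP), an extra check not in the original — insert an illustrative marker row through Amoeba, then read it back directly on the write VIP:

       # through Amoeba (port 8066); "via-amoeba" is a hypothetical marker value
       mysql -uroot -p1234567890 -h192.168.1.199 -P8066 -e 'insert into test.mmm_test (id) values ("via-amoeba");'
       # directly on the writer VIP: the new row should be present
       mysql -utest -p123 -h192.168.1.160 -e 'select * from test.mmm_test;'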

V. A Complete Highly Available MySQL Read/Write-Splitting Cluster Architecture
   (My test machine can run at most six virtual machines — any more and it keels over — so this final architecture is left untested here.)


