Docker Compose MySQL master-slave replication and orchestrator high availability

1. Orchestrator feature demo:

1.1 Multi-level cascading replication:

1.2 Master-slave switchover:

After the switchover completes, the old master is shown in red. On its configuration page, click "Start Replication" to reconnect it to the new master.
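A command-line equivalent, as a rough sketch under the assumptions of the Compose file shown later (container name mysql-master, root password test123456):

# Start replication on the demoted master; orchestrator should already have
# repointed it at the new master, as the UI step above implies.
docker exec mysql-master mysql -uroot -ptest123456 -e "START SLAVE; SHOW SLAVE STATUS\G"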

1.3 Master failure: the replicas automatically fail over to a new master

2. Setting up MySQL master-slave replication

Reference: https://www.jb51.net/server/3105779cp.htm

The master's my.cnf configuration file:

[mysqld]
log-bin=mysql-bin
server-id=1
report_host=172.22.0.103
log-slave-updates=1
# Enable GTID mode
gtid_mode=ON
# Enforce GTID consistency
enforce_gtid_consistency=ON
# Store master info in a table
master_info_repository=TABLE
# Store relay log info in a table
relay_log_info_repository=TABLE
# Use ROW-based binlog format
binlog_format=ROW

For the other instances, my.cnf only needs a different server-id and report_host (e.g. server-id=2 / report_host=172.22.0.104 for mysql-slave and server-id=3 / report_host=172.22.0.105 for mysql-slave2, matching the Compose file below).

The docker compose file:

version: "3"

services:

  mysql-master:
    container_name: mysql-master
    hostname: mysql-master
    image: mysql:5.7.30
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test123456
    ports:
      - "3307:3306"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.103
    volumes:
      - ./mysql1/conf/my.cnf:/etc/my.cnf
      - ./mysql1/data:/var/lib/mysql
  
  mysql-slave:
    container_name: mysql-slave
    hostname: mysql-slave
    image: mysql:5.7.30
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test123456
    ports:
      - "3308:3306"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.104
    volumes:
      - ./mysql2/conf/my.cnf:/etc/my.cnf
      - ./mysql2/data:/var/lib/mysql
  mysql-slave2:
    container_name: mysql-slave2
    hostname: mysql-slave2
    image: mysql:5.7.30
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test123456
    ports:
      - "3309:3306"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.105
    volumes:
      - ./mysql3/conf/my.cnf:/etc/my.cnf
      - ./mysql3/data:/var/lib/mysql
networks:
  mysql-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/16

Replication is set up with GTID auto-positioning.

SQL to run on the master:

create user 'repl_user'@'%' identified by 'test123456';
grant replication slave on *.* to 'repl_user'@'%';

SQL to run on each replica:

stop slave;

CHANGE MASTER TO MASTER_HOST='172.22.0.103', MASTER_PORT=3306, MASTER_USER='repl_user',
MASTER_PASSWORD='test123456', MASTER_AUTO_POSITION=1;

start slave;

show slave status;

-- Needed for a later master-slave switchover: create the replication user on the replicas in advance as well

create user 'repl_user'@'%' identified by 'test123456';
grant replication slave on *.* to 'repl_user'@'%';
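Before moving on, it is worth confirming from the host that both replicas are actually replicating; a sketch, assuming the container names and root password from the Compose file above:

# Slave_IO_Running and Slave_SQL_Running should both report "Yes" on each replica.
for c in mysql-slave mysql-slave2; do
  echo "== $c =="
  docker exec "$c" mysql -uroot -ptest123456 \
    -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
done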

To verify that replication is set up correctly, run the following SQL on the master:

show slave hosts;

When replication is healthy, each connected replica is listed in the output (the original screenshot is not reproduced here).
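The same check can be run from the host as follows (container name and root password from the Compose file above):

# Expect one row per connected replica, listing 172.22.0.104 and 172.22.0.105.
docker exec mysql-master mysql -uroot -ptest123456 -e "SHOW SLAVE HOSTS;"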

3. Setting up orchestrator

Reference: Orchestrator实现MySQL故障切换 - 墨天轮 (modb.pro)

The orchestrator.conf.json configuration file:

{
  "Debug": true,
  "EnableSyslog": false,
  "ListenAddress": ":3000",
  "MySQLTopologyUser": "orc_client_user",
  "MySQLTopologyPassword": "test123456",
  "MySQLTopologyCredentialsConfigFile": "",
  "MySQLTopologySSLPrivateKeyFile": "",
  "MySQLTopologySSLCertFile": "",
  "MySQLTopologySSLCAFile": "",
  "MySQLTopologySSLSkipVerify": true,
  "MySQLTopologyUseMutualTLS": false,
  "MySQLOrchestratorHost": "172.22.0.102",
  "MySQLOrchestratorPort": 3306,
  "MySQLOrchestratorDatabase": "orcdb",
  "MySQLOrchestratorUser": "orc1",
  "MySQLOrchestratorPassword": "orc123456",
  "MySQLOrchestratorCredentialsConfigFile": "",
  "MySQLOrchestratorSSLPrivateKeyFile": "",
  "MySQLOrchestratorSSLCertFile": "",
  "MySQLOrchestratorSSLCAFile": "",
  "MySQLOrchestratorSSLSkipVerify": true,
  "MySQLOrchestratorUseMutualTLS": false,
  "MySQLConnectTimeoutSeconds": 1,
  "RaftEnabled": true,
  "RaftDataDir": "/var/lib/orchestrator",
  "RaftBind": "172.22.0.91",	
  "DefaultRaftPort": 10008,
  "RaftNodes": [
    "172.22.0.91",
    "172.22.0.92",
    "172.22.0.93"
  ], 
  "DefaultInstancePort": 3306,
  "DiscoverByShowSlaveHosts": false,
  "InstancePollSeconds": 5,
  "DiscoveryIgnoreReplicaHostnameFilters": [
    "a_host_i_want_to_ignore[.]example[.]com",
    ".*[.]ignore_all_hosts_from_this_domain[.]example[.]com",
    "a_host_with_extra_port_i_want_to_ignore[.]example[.]com:3307"
  ],
  "UnseenInstanceForgetHours": 240,
  "SnapshotTopologiesIntervalHours": 0,
  "InstanceBulkOperationsWaitTimeoutSeconds": 10,
  "HostnameResolveMethod": "default",
  "MySQLHostnameResolveMethod": "@@hostname",
  "SkipBinlogServerUnresolveCheck": true,
  "ExpiryHostnameResolvesMinutes": 60,
  "RejectHostnameResolvePattern": "",
  "ReasonableReplicationLagSeconds": 10,
  "ProblemIgnoreHostnameFilters": [],
  "VerifyReplicationFilters": false,
  "ReasonableMaintenanceReplicationLagSeconds": 20,
  "CandidateInstanceExpireMinutes": 60,
  "AuditLogFile": "",
  "AuditToSyslog": false,
  "RemoveTextFromHostnameDisplay": ".mydomain.com:3306",
  "ReadOnly": false,
  "AuthenticationMethod": "",
  "HTTPAuthUser": "",
  "HTTPAuthPassword": "",
  "AuthUserHeader": "",
  "PowerAuthUsers": [
    "*"
  ],
  "ClusterNameToAlias": {
    "127.0.0.1": "test suite"
  },
  "ReplicationLagQuery": "",
  "DetectClusterAliasQuery": "SELECT SUBSTRING_INDEX(@@hostname, '.', 1)",
  "DetectClusterDomainQuery": "",
  "DetectInstanceAliasQuery": "",
  "DetectPromotionRuleQuery": "",
  "DataCenterPattern": "[.]([^.]+)[.][^.]+[.]mydomain[.]com",
  "PhysicalEnvironmentPattern": "[.]([^.]+[.][^.]+)[.]mydomain[.]com",
  "PromotionIgnoreHostnameFilters": [],
  "DetectSemiSyncEnforcedQuery": "",
  "ServeAgentsHttp": false,
  "AgentsServerPort": ":3001",
  "AgentsUseSSL": false,
  "AgentsUseMutualTLS": false,
  "AgentSSLSkipVerify": false,
  "AgentSSLPrivateKeyFile": "",
  "AgentSSLCertFile": "",
  "AgentSSLCAFile": "",
  "AgentSSLValidOUs": [],
  "UseSSL": false,
  "UseMutualTLS": false,
  "SSLSkipVerify": false,
  "SSLPrivateKeyFile": "",
  "SSLCertFile": "",
  "SSLCAFile": "",
  "SSLValidOUs": [],
  "URLPrefix": "",
  "StatusEndpoint": "/api/status",
  "StatusSimpleHealth": true,
  "StatusOUVerify": false,
  "AgentPollMinutes": 60,
  "UnseenAgentForgetHours": 6,
  "StaleSeedFailMinutes": 60,
  "SeedAcceptableBytesDiff": 8192,
  "PseudoGTIDPattern": "",
  "PseudoGTIDPatternIsFixedSubstring": false,
  "PseudoGTIDMonotonicHint": "asc:",
  "DetectPseudoGTIDQuery": "",
  "BinlogEventsChunkSize": 10000,
  "SkipBinlogEventsContaining": [],
  "ReduceReplicationAnalysisCount": true,
  "FailureDetectionPeriodBlockMinutes": 60,
  "FailMasterPromotionOnLagMinutes": 0,
  "RecoveryPeriodBlockSeconds": 60,
  "RecoveryIgnoreHostnameFilters": [],
  "RecoverMasterClusterFilters": [
    "*"
  ],
  "RecoverIntermediateMasterClusterFilters": [
    "*"
  ],
  "OnFailureDetectionProcesses": [
    "echo 'Detected {failureType} on {failureCluster}. Affected replicas: {countSlaves}' >> /tmp/recovery.log"
  ],
  "PreGracefulTakeoverProcesses": [
    "echo 'Planned takeover about to take place on {failureCluster}. Master will switch to read_only' >> /tmp/recovery.log"
  ],
  "PreFailoverProcesses": [
    "echo 'Will recover from {failureType} on {failureCluster}' >> /tmp/recovery.log"
  ],
  "PostFailoverProcesses": [
    "echo '(for all types) Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
  ],
  "PostUnsuccessfulFailoverProcesses": [],
  "PostMasterFailoverProcesses": [
    "echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Promoted: {successorHost}:{successorPort}' >> /tmp/recovery.log"
  ],
  "PostIntermediateMasterFailoverProcesses": [
    "echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
  ],
  "PostGracefulTakeoverProcesses": [
    "echo 'Planned takeover complete' >> /tmp/recovery.log"
  ],
  "CoMasterRecoveryMustPromoteOtherCoMaster": true,
  "DetachLostSlavesAfterMasterFailover": true,
  "ApplyMySQLPromotionAfterMasterFailover": true,
  "PreventCrossDataCenterMasterFailover": false,
  "PreventCrossRegionMasterFailover": false,
  "MasterFailoverDetachReplicaMasterHost": false,
  "MasterFailoverLostInstancesDowntimeMinutes": 0,
  "PostponeReplicaRecoveryOnLagMinutes": 0,
  "OSCIgnoreHostnameFilters": [],
  "GraphiteAddr": "",
  "GraphitePath": "",
  "GraphiteConvertHostnameDotsToUnderscores": true,
  "ConsulAddress": "",
  "ConsulAclToken": "",
  "ConsulKVStoreProvider": "consul"
}
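Before copying the file to the three orchestrator nodes, it is worth making sure it is still valid JSON after editing; a quick sketch, assuming jq is installed (any JSON parser works):

# Prints "OK" when the file parses; otherwise jq reports the parse error.
jq . orchestrator.conf.json > /dev/null && echo "orchestrator.conf.json: OK"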

The settings that need to differ or be filled in across the multiple orchestrator.conf.json files (one per orchestrator node) are listed below; in particular, RaftBind must be set to each node's own address (172.22.0.91 / .92 / .93):

  "MySQLTopologyUser": "orc_client_user",
  "MySQLTopologyPassword": "test123456",

  ......

  "MySQLOrchestratorHost": "172.22.0.102",
  "MySQLOrchestratorPort": 3306,
  "MySQLOrchestratorDatabase": "orcdb",
  "MySQLOrchestratorUser": "orc1",
  "MySQLOrchestratorPassword": "orc123456",

  ......

  "AuthenticationMethod": "basic",
  "HTTPAuthUser": "admin",
  "HTTPAuthPassword": "uq81sgca1da",

  "RaftEnabled": true,
  "RaftDataDir": "/usr/local/orchestrator/raftdata",
  "RaftBind": "172.22.0.91",
  "DefaultRaftPort": 10008,
  "RaftNodes": [
    "172.22.0.91",
    "172.22.0.92",
    "172.22.0.93"
  ],

  ......

  "RecoveryPeriodBlockSeconds": 60,
  "RecoveryIgnoreHostnameFilters": [],
  "RecoverMasterClusterFilters": [
    "*"
  ],
  "RecoverIntermediateMasterClusterFilters": [
    "*"
  ],

  ......

MySQLTopologyUser is the account orchestrator uses to monitor the MySQL replication topology.

MySQLOrchestratorUser is the account for orchestrator's own backend database.

Run the following SQL on the master and on every replica to create the MySQLTopologyUser:

create user 'orc_client_user'@'%' identified by 'test123456'; 
GRANT ALL PRIVILEGES ON *.* TO 'orc_client_user'@'%';
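GRANT ALL is the simplest way to get going; the upstream orchestrator documentation recommends a narrower set of privileges for the topology user. A sketch from memory of those docs (verify against the orchestrator version you deploy), run here through the master container:

# Narrower alternative to GRANT ALL for the topology user (per the upstream docs, from memory).
docker exec mysql-master mysql -uroot -ptest123456 -e "
  GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orc_client_user'@'%';
  GRANT SELECT ON mysql.slave_master_info TO 'orc_client_user'@'%';"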

Run the following SQL on orchestrator's own backend database (a standalone instance, not one of the three replication nodes) to create the MySQLOrchestratorUser:

CREATE USER 'orc1'@'%' IDENTIFIED BY 'orc123456';
GRANT ALL PRIVILEGES ON *.* TO 'orc1'@'%';
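Before starting the orchestrator containers it is easy to confirm that this account works (container name, user, and password come from the Compose file in the next section):

# The orcdb schema created via MYSQL_DATABASE should appear in the list.
docker exec orc-mysql mysql -uorc1 -porc123456 -e "SHOW DATABASES;"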

After a successful start, all three orchestrator nodes join the raft cluster and the web UI is reachable on each of them (the original screenshot is not reproduced here).
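A quick way to confirm this from the host is to query the StatusEndpoint configured above on each published port (3000, 3002 and 3003 in the Compose file of the next section):

# Each orchestrator node should respond on its status endpoint.
for port in 3000 3002 3003; do
  echo "== orchestrator on port $port =="
  curl -s "http://127.0.0.1:${port}/api/status"
  echo
done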

4. The complete docker-compose.yml file

version: "3"

services:

  # orchestrator monitoring nodes
  orchestrator-test1:
    container_name: orchestrator-test1
    image: openarkcode/orchestrator:latest
    restart: always
    ports:
      - "3000:3000"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.91
    volumes:
      - ./orchestrator.conf.json:/etc/orchestrator.conf.json

  orchestrator-test2:
    container_name: orchestrator-test2
    image: openarkcode/orchestrator:latest
    restart: always
    ports:
      - "3002:3000"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.92
    volumes:
      - ./orchestrator2.conf.json:/etc/orchestrator.conf.json

  orchestrator-test3:
    container_name: orchestrator-test3
    image: openarkcode/orchestrator:latest
    restart: always
    ports:
      - "3003:3000"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.93
    volumes:
      - ./orchestrator3.conf.json:/etc/orchestrator.conf.json

  # backend database used by orchestrator
  orc-mysql:
    container_name: orc-mysql
    image: mysql:5.7.30
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test123456
      MYSQL_PASSWORD: orc123456
      MYSQL_USER: orc1
      MYSQL_DATABASE: orcdb
    ports:
      - "3306:3306"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.102
    volumes:
      - ./mysql:/var/lib/mysql

  mysql-master:
    container_name: mysql-master
    hostname: mysql-master
    image: mysql:5.7.30
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test123456
    ports:
      - "3307:3306"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.103
    volumes:
      - ./mysql1/conf/my.cnf:/etc/my.cnf
      - ./mysql1/data:/var/lib/mysql
  
  mysql-slave:
    container_name: mysql-slave
    hostname: mysql-slave
    image: mysql:5.7.30
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test123456
    ports:
      - "3308:3306"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.104
    volumes:
      - ./mysql2/conf/my.cnf:/etc/my.cnf
      - ./mysql2/data:/var/lib/mysql
  mysql-slave2:
    container_name: mysql-slave2
    hostname: mysql-slave2
    image: mysql:5.7.30
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test123456
    ports:
      - "3309:3306"
    networks:
      mysql-network:
        ipv4_address: 172.22.0.105
    volumes:
      - ./mysql3/conf/my.cnf:/etc/my.cnf
      - ./mysql3/data:/var/lib/mysql
networks:
  mysql-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/16
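With the configuration files in place, the whole stack can be started and inspected from the directory containing docker-compose.yml:

# Bring up the three replication nodes, orchestrator's backend MySQL, and the three orchestrator nodes.
docker compose up -d

# All seven containers should be listed as running, with the ports published as declared above.
docker compose ps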

5. Common problems

5.1 Dragging an instance fails with "Relocating m03:3306 below m02:3306 turns to be too complex; please do it manually"

Solution: check that GTID mode is enabled on every MySQL instance.
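To check this per instance from the host (container names and password from the Compose file; both variables should be ON):

# GTID must be enabled on every instance in this setup.
for c in mysql-master mysql-slave mysql-slave2; do
  echo "== $c =="
  docker exec "$c" mysql -uroot -ptest123456 \
    -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('gtid_mode','enforce_gtid_consistency');"
done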

5.2 An instance cannot be dragged under another instance; the error is "ERROR m04:3306 cannot replicate from m03:3306. Reason: instance does not have log_slave_updates enabled: m03:3306"

Solution: enable log-slave-updates on the instance that is supposed to feed replicas of its own.
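To see whether the variable is enabled on the instance you are trying to put replicas under (mysql-slave is just an example here):

# Must be ON on any instance expected to have its own replicas.
docker exec mysql-slave mysql -uroot -ptest123456 \
  -e "SHOW GLOBAL VARIABLES LIKE 'log_slave_updates';"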

5.3 After a master failure, a replica reports an "errant gtid found" error

Solution 1: run RESET MASTER on the affected replica (this clears its binary logs and local GTID history).

Solution 2: skip the replica's extra GTIDs; see https://blog.csdn.net/weixin_48154829/article/details/124200051
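A hedged sketch of solution 2: make the current master "own" the replica's extra transaction by injecting an empty transaction with the same GTID, so it propagates to the whole topology (the UUID:NUMBER below is a placeholder; take the real errant GTID from the error message or from comparing gtid_executed between the replica and its master):

# Run on the current master; replace the placeholder GTID with the actual errant one.
docker exec mysql-master mysql -uroot -ptest123456 -e "
  SET GTID_NEXT='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:5';
  BEGIN; COMMIT;
  SET GTID_NEXT='AUTOMATIC';"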

5.4 orchestrator cannot discover the MySQL replication cluster; the log reports that hostnames cannot be resolved

Solution: add the MySQL hostnames to the hosts file on the machine(s) where orchestrator runs.
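In this Compose setup the MySQL hostnames are the container hostnames, so the entries simply map them to the fixed container IPs from the Compose file; a sketch for a host that runs orchestrator outside the Compose network (inside the network, Docker's DNS already resolves these names):

# Map the MySQL hostnames reported via @@hostname / report_host to reachable addresses.
cat >> /etc/hosts <<'EOF'
172.22.0.103 mysql-master
172.22.0.104 mysql-slave
172.22.0.105 mysql-slave2
EOF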
