Using DBus in Depth

The MySQL instance in the middle (hadoop002) is both a master and a slave, so be sure to add the log_slave_updates option. Otherwise, when data is inserted on the MySQL master (hadoop001), the slave will still replicate it, but it will not cascade those changes into its own binlog, and Canal will have nothing to capture.
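Once the configuration below is in place, you can confirm the option is active on hadoop002 at any time (a quick check):

mysql> SHOW VARIABLES LIKE 'log_slave_updates';

It should report ON after the slave's my.cnf has been updated and mysqld restarted.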

Run the following on the hadoop001 node:

[hadoop@hadoop001 ~]$ sudo vim /etc/my.cnf

server-id=1
log-bin=mysql-bin
binlog-format=Row
binlog-ignore-db=information_schema
binlog-ignore-db=ambari
binlog-ignore-db=dbusmgr
binlog-ignore-db=hive
binlog-ignore-db=mysql
binlog-ignore-db=performance_schema
binlog-ignore-db=sys

[hadoop@hadoop001 ~]$ sudo systemctl restart mysqld

[hadoop@hadoop001 ~]$ mysql -uroot -p

mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_mixed_case_count=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_number_count=3;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_special_char_count=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_length=3;
Query OK, 0 rows affected (0.00 sec)
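The statements above relax the validate_password plugin so the short demo passwords used below are accepted. You can inspect the effective values (a quick check):

mysql> SHOW VARIABLES LIKE 'validate_password%';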

mysql> CREATE USER 'repl'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+---------------------------------------------------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB                                                    | Executed_Gtid_Set |
+------------------+----------+--------------+---------------------------------------------------------------------+-------------------+
| mysql-bin.000001 |      834 |              | information_schema,ambari,dbusmgr,hive,mysql,performance_schema,sys |                   |
+------------------+----------+--------------+---------------------------------------------------------------------+-------------------+
1 row in set (0.00 sec)
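Note the File and Position values (mysql-bin.000001 and 834); they are exactly what the CHANGE MASTER TO statement on hadoop002 will use below.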

Log in to hadoop002 and edit its MySQL configuration file:

[hadoop@hadoop002 ~]$ sudo vim /etc/my.cnf

#must be different from the master's server-id
server-id=2
log-bin=mysql-bin

#required: MySQL on hadoop002 is both a slave and a master; only with this option will it write replicated (cascaded) changes to its own binlog
log_slave_updates
binlog-format=Row
binlog-ignore-db=information_schema
binlog-ignore-db=ambari
binlog-ignore-db=dbusmgr
binlog-ignore-db=hive
binlog-ignore-db=mysql
binlog-ignore-db=performance_schema
binlog-ignore-db=sys

[hadoop@hadoop002 ~]$ sudo systemctl restart mysqld

[hadoop@hadoop002 ~]$ mysql -uroot -p

mysql> CHANGE MASTER TO
    -> MASTER_HOST='hadoop001',
    -> MASTER_USER='repl',
    -> MASTER_PASSWORD='123456',
    -> MASTER_LOG_FILE='mysql-bin.000001',
    -> MASTER_LOG_POS=834;
Query OK, 0 rows affected, 2 warnings (0.03 sec)

mysql> START SLAVE;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: hadoop001
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 834
               Relay_Log_File: hadoop002-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 834
              Relay_Log_Space: 531
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: f6423798-3c25-11ea-a925-00163e0426ac
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)
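The key fields here are Slave_IO_Running and Slave_SQL_Running, which must both be Yes; Seconds_Behind_Master should be at or near 0.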

mysql> show master status;
+------------------+----------+--------------+---------------------------------------------------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB                                                    | Executed_Gtid_Set |
+------------------+----------+--------------+---------------------------------------------------------------------+-------------------+
| mysql-bin.000002 |      154 |              | information_schema,ambari,dbusmgr,hive,mysql,performance_schema,sys |                   |
+------------------+----------+--------------+---------------------------------------------------------------------+-------------------+
1 row in set (0.00 sec)

mysql> SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user;
+------------------------------------+
| query                              |
+------------------------------------+
| User: 'repl'@'%';                  |
| User: 'mysql.session'@'localhost'; |
| User: 'mysql.sys'@'localhost';     |
| User: 'root'@'localhost';          |
+------------------------------------+
4 rows in set (0.00 sec)

Log back in to MySQL on hadoop001:

mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_mixed_case_count=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_number_count=3;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_special_char_count=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_length=3;
Query OK, 0 rows affected (0.00 sec)

mysql> create database dbus;
Query OK, 1 row affected (0.01 sec)

mysql> CREATE USER 'dbus'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON dbus.* TO dbus@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> use dbus;
Database changed

DROP TABLE IF EXISTS `db_full_pull_requests`;
CREATE TABLE `db_full_pull_requests` (
  `seqno` bigint(19) NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) DEFAULT NULL,
  `table_name` varchar(64) NOT NULL,
  `physical_tables` varchar(10240) DEFAULT NULL,
  `scn_no` int(11) DEFAULT NULL,
  `split_col` varchar(50) DEFAULT NULL,
  `split_bounding_query` varchar(512) DEFAULT NULL,
  `pull_target_cols` varchar(512) DEFAULT NULL,
  `pull_req_create_time` timestamp(6) NOT NULL,
  `pull_start_time` timestamp(6) NULL DEFAULT NULL,
  `pull_end_time` timestamp(6) NULL DEFAULT NULL,
  `pull_status` varchar(16) DEFAULT NULL,
  `pull_remark` varchar(1024) DEFAULT NULL,
  PRIMARY KEY (`seqno`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
SET FOREIGN_KEY_CHECKS=0;

DROP TABLE IF EXISTS `db_heartbeat_monitor`;
CREATE TABLE `db_heartbeat_monitor` (
    `ID` bigint(19) NOT NULL AUTO_INCREMENT COMMENT 'heartbeat table primary key',
    `DS_NAME` varchar(64) NOT NULL COMMENT 'data source name',
    `SCHEMA_NAME` varchar(64) NOT NULL COMMENT 'schema (user) name',
    `TABLE_NAME` varchar(64) NOT NULL COMMENT 'table name',
    `PACKET` varchar(256) NOT NULL COMMENT 'data packet',
    `CREATE_TIME` datetime NOT NULL COMMENT 'creation time',
    `UPDATE_TIME` datetime NOT NULL COMMENT 'last update time',
    PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
SET FOREIGN_KEY_CHECKS=0;

Note: none of the above needs to be run on the slave (hadoop002). Master-slave replication is already set up, so everything replicates automatically; you can log in to the slave and verify this yourself.
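For example, a quick check on the slave:

[hadoop@hadoop002 ~]$ mysql -uroot -p -e "SHOW TABLES FROM dbus;"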

Logging in to MySQL on hadoop002 confirms that the tables just created on hadoop001 have already been synchronized there.

Now log back in to MySQL on hadoop001 (the master):

mysql> create database hello;
Query OK, 1 row affected (0.00 sec)

mysql> use hello
Database changed
mysql> create table t1(a int, b varchar(50));
Query OK, 0 rows affected (0.03 sec)

mysql> CREATE USER 'test_user'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON hello.* TO test_user@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)


mysql> GRANT select on test_schema1.* TO dbus;
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
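Before moving on, you can sanity-check the whole chain by inserting a test row on the master and reading it back on the slave (a sketch; the row values are arbitrary):

mysql> INSERT INTO hello.t1 VALUES (1, 'replication test');

[hadoop@hadoop002 ~]$ mysql -uroot -p -e "SELECT * FROM hello.t1;"

Because hadoop002 has log_slave_updates enabled, this change also lands in hadoop002's own binlog, which is exactly what Canal will tail.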

Log in to MySQL on hadoop002 (the slave):

mysql> CREATE USER 'canal'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
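You can confirm the grants took effect (the output should show SELECT, REPLICATION SLAVE, REPLICATION CLIENT on *.*):

mysql> SHOW GRANTS FOR 'canal'@'%';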

When adding a data line in the DBus-keeper UI you can choose to have Canal deployed automatically, but that approach is strongly discouraged. For one-click deployment to work, all we would need to do is place the Canal package in the designated directory on each target machine; we are going to deploy Canal under /home/hadoop/app/dbus on hadoop002 and hadoop003. Upload dbus-canal-auto-0.5.0.tar.gz to /home/hadoop/app/dbus on both hadoop002 and hadoop003 yourself, then extract it:

cd /home/hadoop/app/dbus
tar -zxvf dbus-canal-auto-0.5.0.tar.gz

Manual deployment is recommended instead; it must be done on both hadoop002 and hadoop003:

[root@hadoop001 software]# scp dbus-canal-auto-0.5.0.tar.gz root@hadoop002:/home/hadoop/app/dbus

[root@hadoop001 software]# scp dbus-canal-auto-0.5.0.tar.gz root@hadoop003:/home/hadoop/app/dbus

[hadoop@hadoop002 dbus]$ tar -zxvf dbus-canal-auto-0.5.0.tar.gz

[hadoop@hadoop003 dbus]$ tar -zxvf dbus-canal-auto-0.5.0.tar.gz

[hadoop@hadoop002 conf]$ vim canal-auto.properties

#data source name; must match the one added in DBus-keeper
dsname=hello_db
#ZooKeeper address; replace with your own
zk.path=hadoop001:2181
#address Canal connects to, i.e. the slave of the source database Canal is to sync from
canal.address=hadoop002:3306
#Canal user name
canal.user=canal
#Canal password; replace with the one you configured
canal.pwd=123456

Run the deploy.sh script:
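For example (a sketch; the name of the extracted directory is an assumption, check what tar actually created):

[hadoop@hadoop002 dbus]$ cd dbus-canal-auto-0.5.0
[hadoop@hadoop002 dbus-canal-auto-0.5.0]$ ./deploy.sh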


After it finishes, a few new entries appear in the directory:

canal-hello-db: the Canal instance deployed by the script; it is actually a copy of the adjacent canal directory, with the configuration generated automatically.

canal.log and hello_db.log: two symlinks for convenient log viewing, so you don't have to descend into deeply nested directories.

reports: stores the reports generated by the automatic deployment and by deploy.sh check; if Canal misbehaves, look here for deployment-time error messages.

The start and stop scripts live in the canal-hello-db directory; cd into it and you will see them.
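For example (a sketch; the exact script names are an assumption, list the directory to confirm):

[hadoop@hadoop002 dbus]$ cd canal-hello-db
[hadoop@hadoop002 canal-hello-db]$ ./start.sh
[hadoop@hadoop002 canal-hello-db]$ tail -f ../canal.log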

After the deployment has run once, do not execute deploy.sh again.

The JDBC URL for the dbus database created above, as entered in the DBus-keeper configuration, is:

jdbc:mysql://hadoop001:3306/dbus?useUnicode=true&characterEncoding=utf8&autoReconnect=true&failOverReadOnly=false&noAccessToProcedureBodies=true&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false
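Before pasting the URL into DBus-keeper, you can verify that the dbus account created earlier can actually reach the database (a quick check with the mysql client):

[hadoop@hadoop003 ~]$ mysql -h hadoop001 -P 3306 -udbus -p123456 -e "SELECT 1;" dbus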

Finally, copy the DBus jars into Storm's client directory on hadoop003 and then over to hadoop002:

[hadoop@hadoop003 dbuskeeper_web]$ sudo cp -r dbus_jars /usr/hdp/current/storm-client/

[hadoop@hadoop003 dbuskeeper_web]$ sudo scp -r dbus_jars root@hadoop002:/usr/hdp/current/storm-client/
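To confirm the jars landed on both nodes (a quick check):

[hadoop@hadoop003 dbuskeeper_web]$ ls /usr/hdp/current/storm-client/dbus_jars
[hadoop@hadoop003 dbuskeeper_web]$ ssh root@hadoop002 ls /usr/hdp/current/storm-client/dbus_jars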
