1. Trying the canal middleware example on Linux: capturing and displaying binlog events in real time
After reinstalling MySQL today I re-ran the canal experiment. I previously used MySQL 5.7; this time I simply went with 5.6, since the test server's MySQL is rarely used anyway. After canal started, the example log showed "prepare to find start position just show master status", meaning canal is looking for a starting point. Because canal.instance.master.position was left empty in the configuration, canal stays at this message until the MySQL database actually changes; in other words, the moment canal starts is treated as the starting point for data changes. publish: October 23, 2018 - Tuesday.
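For reference, the relevant keys live in conf/example/instance.properties. A hedged sketch (the values below are illustrative, borrowed from the log output further down; leaving them blank is exactly what makes canal fall back to the current master status):

```properties
# Leave journal.name/position empty to start from "now"
# (canal runs SHOW MASTER STATUS), or pin an explicit start point.
canal.instance.master.address=192.168.90.123:3306
canal.instance.master.journal.name=mysql-bin.000007
canal.instance.master.position=4961
canal.instance.master.timestamp=
```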
==> canal/canal.log <==
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=96m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: UseCMSCompactAtFullCollection is deprecated and will likely be removed in a future release.
2018-10-23 17:05:17.093 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2018-10-23 17:05:17.183 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2018-10-23 17:05:17.186 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## start the canal server.
2018-10-23 17:05:17.325 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[192.168.90.123:11111]

==> example/example.log <==
2018-10-23 17:05:18.261 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [canal.properties]
2018-10-23 17:05:18.268 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [example/instance.properties]
2018-10-23 17:05:18.675 [main] WARN o.s.beans.GenericTypeAwarePropertyDescriptor - Invalid JavaBean property 'connectionCharset' being accessed! Ambiguous write methods found next to actually used [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.lang.String)]: [public void com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.setConnectionCharset(java.nio.charset.Charset)]
2018-10-23 17:05:18.804 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [canal.properties]
2018-10-23 17:05:18.805 [main] INFO c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [example/instance.properties]
2018-10-23 17:05:19.164 [main] ERROR com.alibaba.druid.pool.DruidDataSource - testWhileIdle is true, validationQuery not set
2018-10-23 17:05:19.741 [main] INFO c.a.otter.canal.instance.spring.CanalInstanceWithSpring - start CannalInstance for 1-example
2018-10-23 17:05:19.977 [main] INFO c.a.otter.canal.instance.core.AbstractCanalInstance - start successful....
2018-10-23 17:05:19.977 [destination = example , address = /192.168.90.123:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - prepare to find start position just show master status

==> canal/canal.log <==
2018-10-23 17:05:20.090 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## the canal server is running now ......

==> example/example.log <==
2018-10-23 17:06:09.908 [destination = example , address = /192.168.90.123:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - find start position : EntryPosition[included=false,journalName=mysql-bin.000007,position=4961,serverId=1,gtid=<null>,timestamp=1540285200000]
Once data changed, the example log showed the connection to the master along with the position information, and then printed nothing further. I had assumed the MySQL binlog contents would also appear here; no wonder a canal client must connect to the canal server, as described in an earlier article: https://linge.blog.csdn.net/article/details/141367606 So next, download the canal.example package: https://github.com/alibaba/canal/releases/download/canal-1.1.0/canal.example-1.1.0.tar.gz
Unpack it into the /opt/modules/canal_example directory, start it, and watch the logs:
cd /opt/modules/canal_example
bin/startup.sh
cd logs
tail -f canal/entry.log example/entry.log
# After data changes in MySQL, the binlog events are printed here:
****************************************************
* Batch Id: [7] ,count : [3] , memsize : [149] , Time : 2018-10-23 17:25:35
* Start : [mysql-bin.000007:6180:1540286735000(2018-10-23 17:25:35)]
* End : [mysql-bin.000007:6356:1540286735000(2018-10-23 17:25:35)]
****************************************************
================> binlog[mysql-bin.000007:6180] , executeTime : 1540286735000(2018-10-23 17:25:35) , gtid : () , delay : 393ms
BEGIN ----> Thread id: 43
----------------> binlog[mysql-bin.000007:6311] , name[canal,canal_table] , eventType : DELETE , executeTime : 1540286735000(2018-10-23 17:25:35) , gtid : () , delay : 393 ms
id : 8    type=int(10) unsigned
name : 512    type=varchar(255)
----------------
END ----> transaction id: 249
================> binlog[mysql-bin.000007:6356] , executeTime : 1540286735000(2018-10-23 17:25:35) , gtid : () , delay : 394ms
****************************************************
* Batch Id: [8] ,count : [3] , memsize : [149] , Time : 2018-10-23 17:27:49
* Start : [mysql-bin.000007:6387:1540286869000(2018-10-23 17:27:49)]
* End : [mysql-bin.000007:6563:1540286869000(2018-10-23 17:27:49)]
****************************************************
================> binlog[mysql-bin.000007:6387] , executeTime : 1540286869000(2018-10-23 17:27:49) , gtid : () , delay : 976ms
BEGIN ----> Thread id: 43
----------------> binlog[mysql-bin.000007:6518] , name[canal,canal_table] , eventType : INSERT , executeTime : 1540286869000(2018-10-23 17:27:49) , gtid : () , delay : 976 ms
id : 21    type=int(10) unsigned    update=true
name : aaa    type=varchar(255)    update=true
----------------
END ----> transaction id: 250
================> binlog[mysql-bin.000007:6563] , executeTime : 1540286869000(2018-10-23 17:27:49) , gtid : () , delay : 977ms
****************************************************
* Batch Id: [9] ,count : [3] , memsize : [161] , Time : 2018-10-23 17:28:22
* Start : [mysql-bin.000007:6594:1540286902000(2018-10-23 17:28:22)]
* End : [mysql-bin.000007:6782:1540286902000(2018-10-23 17:28:22)]
****************************************************
================> binlog[mysql-bin.000007:6594] , executeTime : 1540286902000(2018-10-23 17:28:22) , gtid : () , delay : 712ms
BEGIN ----> Thread id: 43
----------------> binlog[mysql-bin.000007:6725] , name[canal,canal_table] , eventType : UPDATE , executeTime : 1540286902000(2018-10-23 17:28:22) , gtid : () , delay : 712 ms
id : 21    type=int(10) unsigned
name : aaac    type=varchar(255)    update=true
----------------
END ----> transaction id: 252
================> binlog[mysql-bin.000007:6782] , executeTime : 1540286902000(2018-10-23 17:28:22) , gtid : () , delay : 713ms
Several important fields stand out in this log:
eventType gives the kind of row operation, such as UPDATE, INSERT, or DELETE, together with the column values involved (here the id).
name shows which table of which database was touched; in the output above it is the canal_table table of the canal database. With that, canal's basic workflow has been exercised end to end.
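These fields are regular enough to scrape from the log. A minimal sketch (assuming the exact line layout shown above; this is not an official canal API, and spacing may differ across canal versions) that extracts them with a regex:

```python
import re

# Matches one row-change line from canal.example's entry.log, e.g.
# "... name[canal,canal_table] , eventType : DELETE , ..."
ROW_LINE = re.compile(
    r"name\[(?P<db>[^,\]]+),(?P<table>[^\]]+)\]\s*,\s*"
    r"eventType\s*:\s*(?P<event>\w+)"
)

def parse_row_change(line: str):
    """Return (database, table, eventType) or None if the line doesn't match."""
    m = ROW_LINE.search(line)
    if m is None:
        return None
    return m.group("db"), m.group("table"), m.group("event")

sample = ("----------------> binlog[mysql-bin.000007:6311] , "
          "name[canal,canal_table] , eventType : DELETE , "
          "executeTime : 1540286735000(2018-10-23 17:25:35) , "
          "gtid : () , delay : 393 ms")
print(parse_row_change(sample))  # ('canal', 'canal_table', 'DELETE')
```

Lines without a name[...] segment (BEGIN/END markers, column dumps) simply return None, so the function can be mapped over the whole log file.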
2. Feeding mysql-bin logs straight into a Kafka queue with Alibaba's canal.kafka middleware
publish: October 25, 2018 - Thursday
There is no need to install canal separately: Alibaba ships a single package, canal.kafka-1.1.0.tar.gz, that bundles the canal server together with the Kafka bridge. GitHub releases page:
https://github.com/alibaba/canal/releases
This package has been shipped since v1.0.26-alpha-4. Unpack canal.kafka-1.1.0, adjust the configuration, and it hooks directly into Kafka, pushing the captured mysql-bin events onto a Kafka queue for any downstream use. The QuickStart for running the canal server with Kafka or RocketMQ is on the GitHub wiki:
https://github.com/alibaba/canal/wiki/Canal-Kafka-RocketMQ-QuickStart
To be clear about the architecture: with canal.kafka, shipping mysql-bin events into a Kafka queue no longer requires a standalone canal installation; canal.kafka + MySQL + Kafka is all you need. Starting canal.kafka brings up the canal server and the Kafka transport channel in one process, so all that remains is to start Kafka and a consumer. In effect, canal.kafka plays the role of the Kafka producer.
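One prerequisite worth spelling out: per the canal QuickStart, the source MySQL must write row-format binlogs, or canal has nothing to parse. A typical my.cnf fragment (standard MySQL options; the server_id value is illustrative):

```ini
[mysqld]
log-bin=mysql-bin    # enable binlog
binlog-format=ROW    # canal requires ROW format
server_id=1          # must differ from canal.instance.mysql.slaveId
```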
#下载安装canal.kafka-1.1.0
[root@123 download]# mwget https://github.com/alibaba/canal/releases/download/canal-1.1.0/canal.kafka-1.1.0.tar.gz
[root@123 download]# mkdir /opt/modules/canal_kafka
[root@123 canal_kafka]# tar zxvf canal.kafka-1.1.0.tar.gz -C /opt/modules/canal_kafka
[root@123 canal_kafka]# cd /opt/modules/canal_kafka/
[root@123 canal_kafka]# ll
total 16
drwxr-xr-x 2 root root 4096 Oct 24 15:47 bin
drwxr-xr-x 5 root root 4096 Oct 24 15:47 conf
drwxr-xr-x 2 root root 4096 Oct 24 15:47 lib
drwxrwxrwx 2 root root 4096 Aug 20 13:55 logs
[root@123 canal_kafka]# ll conf/
total 24
-rwxrwxrwx 1 root root 3528 Aug 20 13:31 canal.properties
drwxrwxrwx 2 root root 4096 Oct 24 15:47 example
-rwxrwxrwx 1 root root 403 Aug 20 13:31 kafka.yml
-rwxrwxrwx 1 root root 3094 Aug 20 13:31 logback.xml
drwxrwxrwx 2 root root 4096 Oct 24 15:47 metrics
drwxrwxrwx 3 root root 4096 Oct 24 15:47 spring
[root@123 canal_kafka]# vim conf/example/instance.properties
## mysql serverId , v1.0.26+ will autoGen
#pick a slaveId that differs from the MySQL server-id and from every other canal id
canal.instance.mysql.slaveId=3
# position info: the MySQL host and port
canal.instance.master.address=192.168.90.123:3306
# username/password: the account canal uses to connect to MySQL
canal.instance.dbUsername=your_username
canal.instance.dbPassword=your_password
canal.instance.connectionCharset=UTF-8
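The account configured above needs replication privileges on the source MySQL. The canal QuickStart suggests grants along these lines (user name and password are placeholders):

```sql
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
```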
#edit canal_kafka's Kafka settings: mainly servers and topic (the topic must match the one the consumer subscribes to later)
[root@123 canal_kafka]# vim conf/kafka.yml
servers: 127.0.0.1:9092
canalDestinations:
- canalDestination: example
topic: kermitMQ
partition:
[root@123 canal_kafka]# vim conf/canal.properties
#this file can be left at its defaults.
canal.id= 3
canal.ip=
canal.port=11111
canal.metrics.pull.port=11112
#now start canal_kafka; this launches the canal server together with the Kafka transport. Next, start Kafka.
[root@123 canal_kafka]# bin/startup.sh
#in another terminal, go to the Kafka directory (Kafka is already installed here)
[root@123 canal_kafka]# cd /opt/modules/kafka_2.11-2.0.0/bin
#start a console consumer on the topic
[root@123 bin]# ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kermitMQ
After changing data in MySQL, messages appear in the consumer terminal, so the pipeline works end to end. The output, however, is garbled, just like reading the raw mysql-bin file directly, as the screenshot below shows:
For real-world use, Alibaba provides sample projects for consuming the corresponding MQ data, including message encoding/decoding; see the documentation links below:
kafka模式: com.alibaba.otter.canal.client.running.kafka.CanalKafkaClientExample
rocketMQ模式: com.alibaba.otter.canal.client.running.rocketmq.CanalRocketMQClientExample
https://github.com/alibaba/canal/blob/master/client/src/test/java/com/alibaba/otter/canal/client/running/kafka/CanalKafkaClientExample.java