I have recently been studying the Connect feature in Kafka 0.9; the test process is as follows:
1. Create the container (this walkthrough builds the Kafka environment in a Docker container):
docker run -p 10924:9092 -p 21814:2181 --name confluent -i -t -d java /bin/bash
2. Copy the Confluent installation package into the container:
docker cp confluent.zip confluent:/root
3. Enter the confluent container:
docker exec -it confluent /bin/bash
4. Unpack the Confluent archive:
unzip confluent.zip
5. Start Kafka (redirect each service's output to a log file, then background it):
/root/confluent/bin/zookeeper-server-start /root/confluent/etc/kafka/zookeeper.properties > zookeeper.log 2>&1 &
/root/confluent/bin/kafka-server-start /root/confluent/etc/kafka/server.properties > server.log 2>&1 &
/root/confluent/bin/schema-registry-start /root/confluent/etc/schema-registry/schema-registry.properties > schema.log 2>&1 &
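A note on the redirections: in bash, `command & > log` backgrounds the command first and then performs `> log` as a separate, empty redirection, so the log file ends up empty while the output goes to the terminal; the intended form is `command > log 2>&1 &`. A quick self-contained demonstration:

```shell
# Wrong: "&" terminates the command; "> bad.log" is then an empty
# redirection, so the command's output never reaches the file.
echo hello & > bad.log

# Right: redirect stdout and stderr first, then background.
echo hello > good.log 2>&1 &

wait                 # let the background jobs finish
cat bad.log          # empty
cat good.log         # hello
```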
6. Verify that Kafka works. Open two docker exec sessions: run the producer in one and the consumer in the other:
/root/confluent/bin/kafka-avro-console-producer --broker-list localhost:9092 --topic test --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
/root/confluent/bin/kafka-avro-console-consumer --topic test --zookeeper localhost:2181 --from-beginning
Enter the following records on the producer side, one at a time, and confirm the consumer displays them correctly:
{"f1": "value1"}
{"f1": "value2"}
{"f1": "value3"}
That completes the Kafka installation; next, test the JDBC connector.
Before testing, obtain the MySQL JDBC driver and place it in the jre/lib folder of the Java installation used by the Kafka environment.
-----------------------------------
Testing the JDBC connector
1. Create a configuration file quickstart-mysql.properties with the following content:
name=test-mysql-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://192.168.99.100:33061/test1?user=root&password=welcome1
mode=incrementing
incrementing.column.name=id
topic.prefix=test-mysql-jdbc-
Note: MySQL runs in another container; jdbc:mysql://192.168.99.100:33061/test1?user=root&password=welcome1 is the connection string for reaching that containerized MySQL.
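In incrementing mode the connector needs a source table with a strictly increasing, unique column matching incrementing.column.name=id. The topic name test-mysql-jdbc-accounts consumed in step 3 implies a table named accounts; a minimal sketch of such a table (the name column is purely illustrative):

```sql
-- Hypothetical schema: any table works as long as it has an
-- auto-increment id column matching incrementing.column.name.
CREATE TABLE accounts (
  id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255)
);
```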
2. Run ./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-mysql.properties
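The worker config connect-avro-standalone.properties ships with the Confluent distribution; you do not need to write it, but it is worth knowing that it wires Connect to the broker and Schema Registry roughly like this (a sketch of the stock file, paths and ports may differ in your install):

```properties
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
# standalone mode keeps source offsets in a local file
offset.storage.file.filename=/tmp/connect.offsets
```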
3. Run ./bin/kafka-avro-console-consumer --new-consumer --bootstrap-server 192.168.99.100:10924 --topic test-mysql-jdbc-accounts --from-beginning
Insert a row into the database table; the consumer displays the newly added record.
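"Insert a row" can be done from any MySQL client connected to the test1 database; assuming a table accounts with an auto-increment id and a name column (a hypothetical schema, not one given above), for example:

```sql
-- The connector polls the table, sees id advance past its stored offset,
-- and publishes the new row to topic test-mysql-jdbc-accounts.
INSERT INTO accounts (name) VALUES ('alice');
```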