I. MySQL, ZooKeeper, and Kafka Installation and Configuration
(Omitted.)
II. MySQL Database Preparation
1. Enable binlog in MySQL
sudo vim /etc/my.cnf
[mysqld] // add the following three lines
server_id=1
log-bin=mysql-bin
binlog_format=row
Note:
- Never change the permissions of /etc/my.cnf, otherwise MySQL will ignore this configuration file.
- After making the changes, restart MySQL:
sudo systemctl restart mysqld
- Verify that binlog was enabled successfully:
mysql -uroot -paaaaaa -e "show variables like '%log_bin%'"
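If binlog is enabled, the output should include a row similar to the following (several log_bin_* path variables will also be listed; their values depend on your installation):
log_bin    ON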
2. Prepare a test database and table
Start the MySQL client and create a database and table for testing:
CREATE DATABASE `atguigu2022` CHARACTER SET utf8 COLLATE utf8_general_ci;
use atguigu2022;
create table stu(id int primary key,name varchar(255),age int);
III. Install the MySQL Connector
1. Download the MySQL Connector
wget https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/1.7.1.Final/debezium-connector-mysql-1.7.1.Final-plugin.tar.gz
2. Extract the MySQL Connector
mkdir -p /opt/module/debezium/connector
tar -zxvf debezium-connector-mysql-1.7.1.Final-plugin.tar.gz -C /opt/module/debezium/connector/
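To confirm the extraction, you can list the plugin directory; the 1.7.1.Final tarball unpacks into a debezium-connector-mysql subdirectory containing the connector jar and its dependencies:
ls /opt/module/debezium/connector/debezium-connector-mysql/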
3. Configure the MySQL Connector plugin
Open the Kafka Connect configuration file connect-distributed.properties and edit the following settings:
vim /opt/module/kafka-2.4.1/config/connect-distributed.properties
bootstrap.servers=192.168.8.105:9092,192.168.8.106:9092,192.168.8.107:9092 // Kafka broker addresses
group.id=connect-mysql // custom name
key.converter=org.apache.kafka.connect.json.JsonConverter // default, no change needed
value.converter=org.apache.kafka.connect.json.JsonConverter // default, no change needed
key.converter.schemas.enable=false // change to false
value.converter.schemas.enable=false // change to false
status.storage.topic=connect-mysql-status // status storage topic (customized here)
status.storage.replication.factor=2 // change to 2 for redundancy
offset.flush.interval.ms=10000 // default, no change needed
plugin.path=/opt/module/debezium/connector // must be set
Note: do not forget to distribute this configuration file to the other Kafka nodes.
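One way to do the distribution (a sketch, assuming the other brokers are 192.168.8.106 and 192.168.8.107, the install paths match, and SSH access is configured):
scp /opt/module/kafka-2.4.1/config/connect-distributed.properties 192.168.8.106:/opt/module/kafka-2.4.1/config/
scp /opt/module/kafka-2.4.1/config/connect-distributed.properties 192.168.8.107:/opt/module/kafka-2.4.1/config/
If you plan to run Connect workers on those nodes as well, also copy the plugin directory:
scp -r /opt/module/debezium 192.168.8.106:/opt/module/
scp -r /opt/module/debezium 192.168.8.107:/opt/module/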
IV. Start the Components
1. Start MySQL, ZooKeeper, and Kafka
2. Start Kafka Connect
Start Kafka Connect on hadoop162:
/opt/module/kafka-2.4.1/bin/connect-distributed.sh -daemon /opt/module/kafka-2.4.1/config/connect-distributed.properties
Check with jps; if the process list contains ConnectDistributed, Kafka Connect started successfully.
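Once the worker is running, you can also confirm that the Debezium plugin was loaded from plugin.path by listing the installed connector plugins; the response should include io.debezium.connector.mysql.MySqlConnector:
curl -H "Accept:application/json" 192.168.8.105:8083/connector-plugins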
3. Verify that Kafka Connect is working properly
1. Check the status of the Kafka Connect service
curl -H "Accept:application/json" 192.168.8.105:8083/
Result:
{"version":"2.4.1","commit":"c57222ae8cd7866b","kafka_cluster_id":"n-wGQaVeQWSTyevj6vVx_Q"}
2. List the connectors registered with Kafka Connect
curl -H "Accept:application/json" 192.168.8.105:8083/connectors
Result:
[]
V. Deploy the Debezium MySQL Connector
1. Configuration
{
  "name": "atguigu-mysql-connector", // custom connector name
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "hadoop162", // database host/IP
    "database.port": "3306",
    "database.user": "root", // database username
    "database.password": "debezium", // database password
    "database.server.id": "184054", // custom numeric ID
    "database.server.name": "bigdata", // custom; used later in topic names
    "database.include.list": "atguigu2022", // custom; used later in topic names
    "database.history.kafka.bootstrap.servers": "hadoop162:9092", // Kafka broker addresses
    "database.history.kafka.topic": "schema-changes.inventory" // default, no change needed
  }
}
- name: the connector's name
- database.include.list: the list of databases to monitor. If omitted, all databases are captured; separate multiple databases with commas.
- database.server.name: the logical server name; it becomes the prefix of the topic names (for example, changes to the stu table go to the topic bigdata.atguigu2022.stu, which is consumed at the end of this section).
2. Register the connector
Submit a POST request to the /connectors resource of the Kafka Connect REST API, containing a JSON document that describes the new connector (here named atguigu-mysql-connector):
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" 192.168.8.105:8083/connectors/ -d '{
"name": "atguigu-mysql-connector",
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"tasks.max": "1",
"database.hostname": "192.168.8.104",
"database.port": "3306",
"database.user": "root",
"database.password": "root",
"database.server.id": "184054",
"database.server.name": "bigdata",
"database.include.list": "atguigu2022",
"database.history.kafka.bootstrap.servers": "192.168.8.105:9092,192.168.8.106:9092,192.168.8.105:9092",
"database.history.kafka.topic": "schema-changes.inventory"
}
}'
Result: HTTP/1.1 201 Created indicates success.
When running on Windows, execute the command as follows:
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" 192.168.8.105:8083/connectors/ -d "{\"name\":\"atguigu-mysql-connector\",\"config\":{\"connector.class\":\"io.debezium.connector.mysql.MySqlConnector\",\"tasks.max\":\"1\",\"database.hostname\":\"192.168.8.104\",\"database.port\":\"3306\",\"database.user\":\"root\",\"database.password\":\"root\",\"database.server.id\":\"184054\",\"database.server.name\":\"bigdata\",\"database.include.list\":\"atguigu2022\",\"database.history.kafka.bootstrap.servers\":\"192.168.8.105:9092,192.168.8.106:9092,192.168.8.107:9092\",\"database.history.kafka.topic\":\"schema-changes.inventory\"}}"
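If registration fails or the configuration needs to change, you can delete the existing connector and register it again:
curl -X DELETE 192.168.8.105:8083/connectors/atguigu-mysql-connector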
3. Test
curl -H "Accept:application/json" 192.168.8.105:8083/connectors
Result:
["atguigu-mysql-connector"]
Command to consume the change messages:
/opt/module/kafka-2.4.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.8.105:9092 --topic bigdata.atguigu2022.stu --offset latest --partition 0
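To see a change event end to end, insert a row in another session and watch the consumer output. A minimal sketch, reusing the root credentials from the binlog check above (adjust the password for your environment; the values 1001/zhangsan/18 are arbitrary test data):
mysql -uroot -paaaaaa -e "insert into atguigu2022.stu values(1001,'zhangsan',18);"
The consumer should then print a JSON change event whose payload has "before": null, an "after" object containing the inserted id, name, and age, "op": "c" for an insert, and a "source" block describing the binlog position.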