Extracting MySQL data into Kafka with Flume

This post walks through configuring Flume to pull data from a MySQL database and push it into a Kafka topic. First, set up the Kafka and ZooKeeper environment; then configure Flume's MySQL source and Kafka sink; finally, add the MySQL driver to Flume's lib directory, create the Kafka topic, and start the Flume agent to complete the data transfer.

For the Kafka + ZooKeeper setup, see this earlier article:

tutorial URL

Flume installation:

1. Download link

2. Installation (see the screenshot below)

(screenshot: Flume installation)

Create the database and table

(screenshot: creating the database and table)
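The original screenshot is not reproduced here. As a sketch, the database and table implied by the Flume configuration in step 3 (database `chenhuachao`, table `test` with `id` and `name` columns) could be created as follows; the names come from that configuration, but the exact column types in the original are an assumption:

```shell
# Create the database and table referenced by conf/mysql-flume.conf.
# Database, table, and column names are taken from the config in step 3;
# the column types are assumed (the original screenshot defines them).
mysql -h 192.168.3.191 -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS chenhuachao;
USE chenhuachao;
CREATE TABLE IF NOT EXISTS test (
  id   INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(255)
);
SQL
```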

3. Add the configuration file conf/mysql-flume.conf

[[email protected] apache-flume-1.8.0-bin]# cat conf/mysql-flume.conf
a1.channels = ch-1
a1.sources = src-1
a1.sinks = k1

###########sql source#################
# For each one of the sources, the type is defined
a1.sources.src-1.type = org.keedio.flume.source.SQLSource
a1.sources.src-1.hibernate.connection.url = jdbc:mysql://192.168.3.191:3306/chenhuachao

# Hibernate Database connection properties
a1.sources.src-1.hibernate.connection.user = root
a1.sources.src-1.hibernate.connection.password = [email protected]
a1.sources.src-1.hibernate.connection.autocommit = true
a1.sources.src-1.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
a1.sources.src-1.hibernate.connection.driver_class = com.mysql.jdbc.Driver
a1.sources.src-1.run.query.delay = 5000
a1.sources.src-1.status.file.path = /opt/apache-flume-1.8.0-bin
a1.sources.src-1.status.file.name = sqlSource.status

# Custom query
a1.sources.src-1.start.from = 0
a1.sources.src-1.custom.query = select `id`, `name` from test
a1.sources.src-1.batch.size = 1000
a1.sources.src-1.max.rows = 1000
a1.sources.src-1.hibernate.connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider
a1.sources.src-1.hibernate.c3p0.min_size = 1
a1.sources.src-1.hibernate.c3p0.max_size = 10

################################################################
a1.channels.ch-1.type = memory
a1.channels.ch-1.capacity = 10000
a1.channels.ch-1.transactionCapacity = 10000
a1.channels.ch-1.byteCapacityBufferPercentage = 20
a1.channels.ch-1.byteCapacity = 800000

################################################################
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = TestTopic
a1.sinks.k1.brokerList = 192.168.3.191:9092,192.168.3.193:9092,192.168.3.194:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20

# Bind the sink and the source to the channel
a1.sinks.k1.channel = ch-1
a1.sources.src-1.channels = ch-1

4. Add the MySQL driver to Flume's lib directory

$ wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.35.tar.gz

$ tar xzf mysql-connector-java-5.1.35.tar.gz

$ cp mysql-connector-java-5.1.35-bin.jar lib/

5. Create the Kafka topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic TestTopic
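Once the topic is created, the same tool can confirm it exists and show its partition/replica layout; the ZooKeeper address matches the creation command above:

```shell
# Confirm the topic exists and inspect its partition/replica assignment
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic TestTopic

# List all topics as a sanity check
bin/kafka-topics.sh --list --zookeeper localhost:2181
```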

6. Start the Flume agent

./bin/flume-ng agent -n a1 -c conf -f conf/mysql-flume.conf -Dflume.root.logger=INFO,console

7. Insert new rows into the table and check that they arrive on the topic

(screenshot: data arriving on the TestTopic topic)
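To watch the rows from step 7 land on the topic, a console consumer can tail it; the broker address is reused from the sink configuration (on older Kafka builds the consumer takes `--zookeeper` instead of `--bootstrap-server`):

```shell
# Tail TestTopic from the beginning; each polled MySQL row is emitted
# by the SQL source as one comma-separated line
bin/kafka-console-consumer.sh --bootstrap-server 192.168.3.191:9092 \
  --topic TestTopic --from-beginning
```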
