Prerequisites:
1. Flume must be installed, along with the flume-ng-sql-source plugin that provides org.keedio.flume.source.SQLSource (see the setup sketch after this list).
2. The MySQL JDBC driver jar (mysql-connector-java-5.1.35-bin.jar) must be on Flume's classpath, and its version must be compatible with the MySQL server you connect to.
3. The directory configured as status.file.path must be writable by the user running Flume (chmod/chown it accordingly).
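A minimal setup sketch for these prerequisites, assuming Flume is installed under /opt/flume, the jars are already in the current directory, and the agent runs as a "flume" user (paths, plugin jar name, and user are assumptions; adjust to your environment):

# Put the flume-ng-sql-source plugin and the MySQL JDBC driver on Flume's classpath
cp flume-ng-sql-source-*.jar /opt/flume/lib/
cp mysql-connector-java-5.1.35-bin.jar /opt/flume/lib/
# Make the status.file.path directory writable by the user that runs Flume
mkdir -p /var/flume
chown flume:flume /var/flume
chmod 755 /var/flume

The Flume agent configuration (conf/flume-kafka.conf), agent a1: SQL source r1 -> memory channel c1 -> Kafka sink k1: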
a1.channels = c1
a1.sources = r1
a1.sinks = k1
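# Source: flume-ng-sql-source, polling the bigdatabi MySQL database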
a1.sources.r1.type = org.keedio.flume.source.SQLSource
a1.sources.r1.hibernate.connection.url = jdbc:mysql://rr-bp1to6k8d31463pu83o.mysql.rds.aliyuncs.com:3306/bigdatabi
a1.sources.r1.hibernate.connection.user = spagobi
a1.sources.r1.hibernate.connection.password = DDzQXZD/VwVkeA==
a1.sources.r1.hibernate.connection.autocommit = true
a1.sources.r1.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
a1.sources.r1.hibernate.connection.driver_class = com.mysql.jdbc.Driver
a1.sources.r1.run.query.delay=5000
a1.sources.r1.status.file.path = /var/flume
a1.sources.r1.status.file.name = sqlSource.status
a1.sources.r1.start.from = 1
a1.sources.r1.custom.query = select * from aaa
a1.sources.r1.batch.size = 1000
a1.sources.r1.max.rows = 1000
a1.sources.r1.hibernate.connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider
a1.sources.r1.hibernate.c3p0.min_size=1
a1.sources.r1.hibernate.c3p0.max_size=10
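# Channel: in-memory buffer between the SQL source and the Kafka sink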
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000
a1.channels.c1.byteCapacityBufferPercentage = 20
a1.channels.c1.byteCapacity = 800000
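# Sink: publish each event to the Kafka topic realtime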
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = realtime
a1.sinks.k1.brokerList = datanode1:9092,datanode2:9092,sparkmaster:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1
a1.sources.r1.channels=c1
4. Create the Kafka topic and verify it with describe:
./kafka-topics --describe --zookeeper 10.10.0.27:2181,10.10.0.8:2181,10.10.0.127:2181 --topic realtime
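The describe command above only inspects a topic that already exists. If realtime has not been created yet, a command along the following lines creates it first (the partition count and replication factor are illustrative, not taken from the original setup):

./kafka-topics --create --zookeeper 10.10.0.27:2181,10.10.0.8:2181,10.10.0.127:2181 --replication-factor 1 --partitions 3 --topic realtime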
5. Start a Kafka console consumer on the topic to watch the incoming data:
./kafka-console-consumer --zookeeper 10.10.0.27:2181,10.10.0.8:2181,10.10.0.127:2181 --from-beginning --topic realtime
6. Start the Flume agent with this configuration:
bin/flume-ng agent -c conf -f conf/flume-kafka.conf -n a1 -Dflume.root.logger=INFO,console
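Once the agent is running, the SQL source keeps its read position in the status file configured above; a quick way to check progress (path and file name taken from the config):

cat /var/flume/sqlSource.status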
Caveats:
1. The queries run directly against the source database, so this approach is intrusive to it.
2. Incremental capture is done by polling, so it is only near-real-time; the shorter the polling interval, the heavier the load on the source database.
3. Only newly inserted rows are picked up; deletes and updates are not detected.
4. The source table must have a column that can serve as the incremental marker (see the sketch below).
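For caveat 4, a sketch of how the incremental column is usually wired up with flume-ng-sql-source: the plugin replaces the $@$ placeholder in custom.query with the last value recorded in the status file (start.from on the first run). The column names below are hypothetical; table aaa would need a monotonically increasing column such as id, returned first in the select list:

# id, col1, col2 are hypothetical column names; keep the incremental column first
a1.sources.r1.custom.query = SELECT id, col1, col2 FROM aaa WHERE id > $@$ ORDER BY id ASC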