Flume + Oracle: Connecting Flume to Oracle and Pushing Data to Kafka in Real Time

Versions:

RedHat6.5   JDK1.8    flume-1.6.0   kafka_2.11-0.8.2.1

Flume installation

For installing standalone Flume 1.6 on RedHat 6.5, see the separate post “RedHat6.5安装单机flume1.6”.

Kafka installation

For installing a Kafka cluster on RedHat 6.5, see the separate post “RedHat6.5安装kafka集群”.

1. Download flume-ng-sql-source-1.4.3.jar

flume-ng-sql-source-1.4.3.jar is the supporting jar that Flume relies on to connect to a database.

2. Put flume-ng-sql-source-1.4.3.jar into Flume's lib directory, as sketched below.
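A minimal sketch, assuming the jar was downloaded to the current directory and Flume lives under /usr/local/flume/apache-flume-1.6.0-bin (the path used later in this post):

# Copy the SQL source jar into Flume's lib directory
cp flume-ng-sql-source-1.4.3.jar /usr/local/flume/apache-flume-1.6.0-bin/lib/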

3. Put the Oracle JDBC driver into Flume's lib directory (this post uses an Oracle database).

The Oracle JDBC driver ships with the Oracle installation; on the database host it lives at D:\app\product\11.2.0\dbhome_1\jdbc\lib.

Copy ojdbc5.jar from there into Flume's lib directory.
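However the driver reaches the Linux machine (scp, a Windows file-transfer client, etc.), the final step is a plain copy into lib. A minimal sketch, assuming ojdbc5.jar has already been transferred to /tmp on the Flume host:

# Copy the Oracle JDBC driver into Flume's lib directory
cp /tmp/ojdbc5.jar /usr/local/flume/apache-flume-1.6.0-bin/lib/

# Both supporting jars should now be present
ls /usr/local/flume/apache-flume-1.6.0-bin/lib/ | grep -E 'ojdbc|flume-ng-sql'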

4. Create flume-sql.conf

Create flume-sql.conf in the conf directory:

touch /usr/local/flume/apache-flume-1.6.0-bin/conf/flume-sql.conf

sudo gedit /usr/local/flume/apache-flume-1.6.0-bin/conf/flume-sql.conf

Put the following into flume-sql.conf:

agentOne.channels = channelOne
agentOne.sources = sourceOne
agentOne.sinks = sinkOne

########### sql source #################
# For each one of the sources, the type is defined
agentOne.sources.sourceOne.type = org.keedio.flume.source.SQLSource
agentOne.sources.sourceOne.hibernate.connection.url = jdbc:oracle:thin:@192.168.168.100:1521/orcl

# Hibernate database connection properties
agentOne.sources.sourceOne.hibernate.connection.user = flume
agentOne.sources.sourceOne.hibernate.connection.password = 1234
agentOne.sources.sourceOne.hibernate.connection.autocommit = true
agentOne.sources.sourceOne.hibernate.dialect = org.hibernate.dialect.Oracle10gDialect
agentOne.sources.sourceOne.hibernate.connection.driver_class = oracle.jdbc.driver.OracleDriver
agentOne.sources.sourceOne.run.query.delay = 10000
agentOne.sources.sourceOne.status.file.path = /tmp
agentOne.sources.sourceOne.status.file.name = sqlSource.status

# Custom query
agentOne.sources.sourceOne.start.from = 0
agentOne.sources.sourceOne.custom.query = select sysdate from dual
agentOne.sources.sourceOne.batch.size = 1000
agentOne.sources.sourceOne.max.rows = 1000
agentOne.sources.sourceOne.hibernate.connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider
agentOne.sources.sourceOne.hibernate.c3p0.min_size = 1
agentOne.sources.sourceOne.hibernate.c3p0.max_size = 10

########### memory channel #################
agentOne.channels.channelOne.type = memory
agentOne.channels.channelOne.capacity = 10000
agentOne.channels.channelOne.transactionCapacity = 10000
agentOne.channels.channelOne.byteCapacityBufferPercentage = 20
agentOne.channels.channelOne.byteCapacity = 800000

########### kafka sink #################
agentOne.sinks.sinkOne.type = org.apache.flume.sink.kafka.KafkaSink
agentOne.sinks.sinkOne.topic = test
agentOne.sinks.sinkOne.brokerList = 192.168.168.200:9092
agentOne.sinks.sinkOne.requiredAcks = 1
agentOne.sinks.sinkOne.batchSize = 20
agentOne.sinks.sinkOne.channel = channelOne
agentOne.sources.sourceOne.channels = channelOne
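Note that the demo query, select sysdate from dual, simply emits the database timestamp once per poll (run.query.delay = 10000 ms), which is convenient for smoke-testing the pipeline. For a real table, flume-ng-sql-source records its position in the status file and, per the project's README, substitutes the last stored value for the special marker $@$ in custom.query, with the tracked column returned first. An illustrative incremental query (the ORDERS table and its columns are hypothetical):

# Hypothetical incremental pull: ORDER_ID is an example monotonically increasing key
agentOne.sources.sourceOne.custom.query = SELECT ORDER_ID, CUSTOMER, AMOUNT FROM ORDERS WHERE ORDER_ID > $@$ ORDER BY ORDER_ID ASC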

5. Start flume-sql.conf with flume-ng and test

cd /usr/local/flume/apache-flume-1.6.0-bin

bin/flume-ng agent --conf conf --conf-file conf/flume-sql.conf --name agentOne -Dflume.root.logger=INFO,console
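With -Dflume.root.logger=INFO,console the agent logs to the terminal and stops when the session ends. For a longer-running test you might background it instead; a sketch (the log path is illustrative):

# Run the agent in the background and capture its output in a file
nohup bin/flume-ng agent --conf conf --conf-file conf/flume-sql.conf --name agentOne > /tmp/flume-sql.log 2>&1 &

# Follow the agent's output
tail -f /tmp/flume-sql.log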

On success, the log looks like this:

2017-07-08 00:12:55,393 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: sinkOne: Successfully registered new MBean.

2017-07-08 00:12:55,394 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: sinkOne started

2017-07-08 00:12:55,463 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Fetching metadata from broker id:0,host:localhost,port:9092 with correlation id 0 for 1 topic(s) Set(test)

2017-07-08 00:12:55,528 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Connected to localhost:9092 for producing

2017-07-08 00:12:55,551 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Disconnecting from localhost:9092

2017-07-08 00:12:55,582 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Connected to slave2:9092 for producing

Start a Kafka console consumer to watch the topic:

kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
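This assumes the test topic already exists, or that the broker auto-creates topics on first use. If neither holds, create it beforehand; the partition and replication values below are illustrative:

# Create the 'test' topic if it does not exist yet
kafka-topics.sh --create --zookeeper localhost:2181 --topic test --partitions 1 --replication-factor 1

# Confirm the topic is visible
kafka-topics.sh --list --zookeeper localhost:2181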

On success, the consumer prints one message per polled row:

[root@master kafka_2.11-0.9.0.0]# kafka-console-consumer.sh --zookeeper localhost:2181 --topic test

"2017-07-08 00:28:53.0"

"2017-07-08 00:29:03.0"

"2017-07-08 00:29:13.0"

"2017-07-08 00:29:23.0"

"2017-07-08 00:29:33.0"

"2017-07-08 00:29:43.0"

"2017-07-08 00:29:53.0"

"2017-07-08 00:30:03.0"

6. Common errors and fixes

2017-06-27 16:26:01,293 (C3P0PooledConnectionPoolManager[identityToken->1hgey889o1sjxqn51anc3fr|29938ba5]-AdminTaskTimer) [WARN - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector.run(ThreadPoolAsynchronousRunner.java:759)] com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@2d6227f3 -- APPARENT DEADLOCK!!! Complete Status:

This is a connection timeout surfacing as an apparent deadlock in the C3P0 pool. Carefully check the connection URL jdbc:oracle:thin:@192.168.168.100:1521/orcl and the username/password.

If those are correct and the connection still fails, check whether a firewall on the Oracle host is blocking the listener; if so, add an inbound rule for port 1521 or turn the firewall off.
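Two quick checks, sketched under the assumption that the Oracle host from this post (Windows, 192.168.168.100) is the one being probed:

# From the Flume host: verify the Oracle listener port answers at all
telnet 192.168.168.100 1521

# On the Windows Oracle host (elevated prompt): allow inbound traffic on 1521
netsh advfirewall firewall add rule name="Oracle listener" dir=in action=allow protocol=TCP localport=1521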
