Sending data to Flume from Java: using Flume to dump the data into S3

When the HDFS sink tries to write to S3, the Flume agent fails with:

org.apache.flume.EventDeliveryException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3native.NativeS3FileSystem not found

I added the following to core-site.xml:

<property>
  <name>fs.s3n.impl</name>
  <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
  <description>The FileSystem for s3n: (Native S3) uris.</description>
</property>
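Declaring fs.s3n.impl in core-site.xml only maps the s3n:// scheme to a class name; the class itself still has to be on the Flume agent's classpath. In a stock Hadoop 2.7.1 distribution, NativeS3FileSystem is packaged in hadoop-aws-2.7.1.jar under share/hadoop/tools/lib (together with the jets3t jar it depends on), and that directory is not something the flume-ng launcher picks up on its own. A minimal sketch of conf/flume-env.sh that puts those jars on the agent's classpath, assuming the /opt/hadoop/hadoop prefix shown in the classpath output below (adjust paths and jar versions to your install):

# conf/flume-env.sh (sketch): make the s3n filesystem classes visible to the Flume agent
FLUME_CLASSPATH="/opt/hadoop/hadoop/share/hadoop/tools/lib/hadoop-aws-2.7.1.jar:/opt/hadoop/hadoop/share/hadoop/tools/lib/jets3t-0.9.0.jar:$FLUME_CLASSPATH"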

Output of bin/hadoop classpath:

/opt/hadoop/hadoop/etc/hadoop:/opt/hadoop/hadoop/share/hadoop/common/lib/*:/opt/hadoop/hadoop/share/hadoop/common/*:/opt/hadoop/hadoop/share/hadoop/hdfs:/opt/hadoop/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/hadoop/share/hadoop/hdfs/*:/opt/hadoop/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/hadoop/share/hadoop/yarn/*:/opt/hadoop/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/hadoop/share/hadoop/mapreduce/*::/opt/hadoop/hadoop/share/hadoop/tools/lib/*:/opt/hadoop/hadoop//contrib/capacity-scheduler/*.jar

Hadoop version: 2.7.1

Flume version: 1.6.0

Flume agent configuration:

flume1.sources =
flume1.channels = kafka-channel-allMsgs kafka-channel-user-match-served-stream
flume1.sinks = s3-sink-user-match-served-stream s3-sink-allMsgs

flume1.channels.kafka-channel-allMsgs.type = org.apache.flume.channel.kafka.KafkaChannel
flume1.channels.kafka-channel-allMsgs.brokerList = 10.0.1.175:9092,10.0.1.229:9092
flume1.channels.kafka-channel-allMsgs.zookeeperConnect = 10.0.1.60:2181
flume1.channels.kafka-channel-allMsgs.topic = allMsgs
flume1.channels.kafka-channel-allMsgs.groupId = s3_flume_events
flume1.channels.kafka-channel-allMsgs.readSmallestOffset = false
flume1.channels.kafka-channel-allMsgs.parseAsFlumeEvent = false

flume1.channels.kafka-channel-user-match-served-stream.type = org.apache.flume.channel.kafka.KafkaChannel
flume1.channels.kafka-channel-user-match-served-stream.brokerList = 10.0.1.175:9092,10.0.1.229:9092
flume1.channels.kafka-channel-user-match-served-stream.zookeeperConnect = 10.0.1.60:2181
flume1.channels.kafka-channel-user-match-served-stream.topic = user_match_served_stream
flume1.channels.kafka-channel-user-match-served-stream.groupId = s3_flume_matched_served_stream
flume1.channels.kafka-channel-user-match-served-stream.readSmallestOffset = false
flume1.channels.kafka-channel-user-match-served-stream.parseAsFlumeEvent = false

flume1.sinks.s3-sink-user-match-served-stream.channel = kafka-channel-user-match-served-stream
flume1.sinks.s3-sink-user-match-served-stream.type = hdfs
flume1.sinks.s3-sink-user-match-served-stream.hdfs.filePrefix = user_match
flume1.sinks.s3-sink-user-match-served-stream.hdfs.useLocalTimeStamp = true
flume1.sinks.s3-sink-user-match-served-stream.hdfs.path = s3n://:@bucket/served_message/%y-%m/%y-%m-%d
flume1.sinks.s3-sink-user-match-served-stream.hdfs.batchSize = 1024
flume1.sinks.s3-sink-user-match-served-stream.hdfs.rollCount = 1270000
flume1.sinks.s3-sink-user-match-served-stream.hdfs.rollInterval = 1800
flume1.sinks.s3-sink-user-match-served-stream.hdfs.rollSize = 133169152
flume1.sinks.s3-sink-user-match-served-stream.hdfs.codeC = bzip2
flume1.sinks.s3-sink-user-match-served-stream.hdfs.fileType = SequenceFile

flume1.sinks.s3-sink-allMsgs.channel = kafka-channel-allMsgs
flume1.sinks.s3-sink-allMsgs.type = hdfs
flume1.sinks.s3-sink-allMsgs.hdfs.filePrefix = allMsgs
flume1.sinks.s3-sink-allMsgs.hdfs.useLocalTimeStamp = true
flume1.sinks.s3-sink-allMsgs.hdfs.path = s3n://:@bucket/all_Msgs/%y-%m/%y-%m-%d
flume1.sinks.s3-sink-allMsgs.hdfs.batchSize = 1024
flume1.sinks.s3-sink-allMsgs.hdfs.rollCount = 1270000
flume1.sinks.s3-sink-allMsgs.hdfs.rollInterval = 1800
flume1.sinks.s3-sink-allMsgs.hdfs.rollSize = 133169152
flume1.sinks.s3-sink-allMsgs.hdfs.codeC = bzip2
flume1.sinks.s3-sink-allMsgs.hdfs.fileType = SequenceFile
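Since both channels are Kafka channels with parseAsFlumeEvent = false and no Flume source is configured, "sending data to Flume from Java" here just means producing raw messages to the Kafka topics the channels read from. A minimal, hypothetical producer for the allMsgs topic is sketched below; it assumes the kafka-clients library on the classpath and reuses the broker addresses from the channel configuration (AllMsgsProducer is an illustrative name, not part of the original setup):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AllMsgsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Brokers taken from the kafka-channel-allMsgs configuration above.
        props.put("bootstrap.servers", "10.0.1.175:9092,10.0.1.229:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // With parseAsFlumeEvent = false the raw message body becomes the Flume event body
            // that the HDFS sink eventually writes to s3n://.
            producer.send(new ProducerRecord<>("allMsgs", "example payload"));
        } finally {
            producer.close(); // close() flushes any buffered records
        }
    }
}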
