org.apache.flume.EventDeliveryException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3native.NativeS3FileSystem not found
I added the following to core-site.xml:

<property>
  <name>fs.s3n.impl</name>
  <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
  <description>The FileSystem for s3n: (Native S3) uris.</description>
</property>
Output of bin/hadoop classpath:

/opt/hadoop/hadoop/etc/hadoop:/opt/hadoop/hadoop/share/hadoop/common/lib/*:/opt/hadoop/hadoop/share/hadoop/common/*:/opt/hadoop/hadoop/share/hadoop/hdfs:/opt/hadoop/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/hadoop/share/hadoop/hdfs/*:/opt/hadoop/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/hadoop/share/hadoop/yarn/*:/opt/hadoop/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/hadoop/share/hadoop/tools/lib/*:/opt/hadoop/hadoop/contrib/capacity-scheduler/*.jar
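For reference, in Hadoop 2.7.x the missing class NativeS3FileSystem is provided by hadoop-aws-*.jar under share/hadoop/tools/lib, which the classpath output above does cover; the Flume JVM, however, only sees jars on its own classpath. A minimal conf/flume-env.sh sketch (an assumption, not verified here; FLUME_CLASSPATH is the variable the flume-ng launcher script reads):

# conf/flume-env.sh -- sketch: reuse the Hadoop classpath shown above,
# which includes share/hadoop/tools/lib/hadoop-aws-*.jar
# (the jar that contains org.apache.hadoop.fs.s3native.NativeS3FileSystem)
export FLUME_CLASSPATH="$(/opt/hadoop/hadoop/bin/hadoop classpath)"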
Hadoop version: 2.7.1
Flume version: 1.6.0
Flume agent configuration:
flume1.sources =
flume1.channels = kafka-channel-allMsgs kafka-channel-user-match-served-stream
flume1.sinks = s3-sink-user-match-served-stream s3-sink-allMsgs
flume1.channels.kafka-channel-allMsgs.type = org.apache.flume.channel.kafka.KafkaChannel
flume1.channels.kafka-channel-allMsgs.brokerList = 10.0.1.175:9092,10.0.1.229:9092
flume1.channels.kafka-channel-allMsgs.zookeeperConnect = 10.0.1.60:2181
flume1.channels.kafka-channel-allMsgs.topic = allMsgs
flume1.channels.kafka-channel-allMsgs.groupId = s3_flume_events
flume1.channels.kafka-channel-allMsgs.readSmallestOffset = false
flume1.channels.kafka-channel-allMsgs.parseAsFlumeEvent = false
flume1.channels.kafka-channel-user-match-served-stream.type = org.apache.flume.channel.kafka.KafkaChannel
flume1.channels.kafka-channel-user-match-served-stream.brokerList = 10.0.1.175:9092,10.0.1.229:9092
flume1.channels.kafka-channel-user-match-served-stream.zookeeperConnect = 10.0.1.60:2181
flume1.channels.kafka-channel-user-match-served-stream.topic = user_match_served_stream
flume1.channels.kafka-channel-user-match-served-stream.groupId = s3_flume_matched_served_stream
flume1.channels.kafka-channel-user-match-served-stream.readSmallestOffset = false
flume1.channels.kafka-channel-user-match-served-stream.parseAsFlumeEvent = false
flume1.sinks.s3-sink-user-match-served-stream.channel = kafka-channel-user-match-served-stream
flume1.sinks.s3-sink-user-match-served-stream.type = hdfs
flume1.sinks.s3-sink-user-match-served-stream.hdfs.filePrefix = user_match
flume1.sinks.s3-sink-user-match-served-stream.hdfs.useLocalTimeStamp = true
flume1.sinks.s3-sink-user-match-served-stream.hdfs.path=s3n://:@bucket/served_message/%y-%m/%y-%m-%d
flume1.sinks.s3-sink-user-match-served-stream.hdfs.batchSize = 1024
flume1.sinks.s3-sink-user-match-served-stream.hdfs.rollCount=1270000
flume1.sinks.s3-sink-user-match-served-stream.hdfs.rollInterval=1800
flume1.sinks.s3-sink-user-match-served-stream.hdfs.rollSize=133169152
flume1.sinks.s3-sink-user-match-served-stream.hdfs.codeC=bzip2
flume1.sinks.s3-sink-user-match-served-stream.hdfs.fileType=SequenceFile
flume1.sinks.s3-sink-allMsgs.channel = kafka-channel-allMsgs
flume1.sinks.s3-sink-allMsgs.type = hdfs
flume1.sinks.s3-sink-allMsgs.hdfs.filePrefix = allMsgs
flume1.sinks.s3-sink-allMsgs.hdfs.useLocalTimeStamp = true
flume1.sinks.s3-sink-allMsgs.hdfs.path=s3n://:@bucket/all_Msgs/%y-%m/%y-%m-%d
flume1.sinks.s3-sink-allMsgs.hdfs.batchSize = 1024
flume1.sinks.s3-sink-allMsgs.hdfs.rollCount=1270000
flume1.sinks.s3-sink-allMsgs.hdfs.rollInterval=1800
flume1.sinks.s3-sink-allMsgs.hdfs.rollSize=133169152
flume1.sinks.s3-sink-allMsgs.hdfs.codeC=bzip2
flume1.sinks.s3-sink-allMsgs.hdfs.fileType=SequenceFile
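As an aside on the hdfs.path values above: instead of embedding the AWS credentials in the s3n:// URI, they can be supplied in core-site.xml. A sketch using the standard s3n property names in Hadoop 2.x (the values here are placeholders):

<!-- Alternative to s3n://ACCESS_KEY:SECRET_KEY@bucket/... in hdfs.path -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>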