DataNode Logs -- Flume -- Kafka

I. Flume collects the DataNode logs and produces them into Kafka; exec-memory-kafka.conf is as follows:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Custom exec source: tails the DataNode log and wraps each line in JSON
a1.sources.r1.type = com.onlinelog.analysis.ExecSource_JSON
a1.sources.r1.command = tail -F /var/log/hadoop-hdfs/hadoop-cmf-hdfs-DATANODE-hadoop001.log.out
a1.sources.r1.hostname = hadoop001
a1.sources.r1.servicename = DataNode
a1.sources.r1.channels = c1

# Memory channel: at most 10000 events; event bodies capped at byteCapacity bytes,
# with 20% of that reserved as headroom for event headers
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000
a1.channels.c1.byteCapacityBufferPercentage = 20
a1.channels.c1.byteCapacity = 800000

# Kafka sink: publishes the events to the kafkatest topic
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList = 172.16.15.80:9093
a1.sinks.k1.topic = kafkatest
a1.sinks.k1.serializer.class = kafka.serializer.StringEncoder
a1.sinks.k1.channel = c1
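
Note that com.onlinelog.analysis.ExecSource_JSON is a custom source, not a stock Flume component: judging from the consumer output in section II, it tails the command's output and wraps each line in a JSON envelope carrying the configured hostname and servicename. Below is a minimal, self-contained sketch of that wrapping logic, assuming a standard log4j line layout; the class name, regex, and lack of JSON escaping are illustrative simplifications, not the actual source code.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogLineToJson {
    // Assumed log4j layout: "2018-01-18 17:23:31,569 INFO org.apache.Foo: message"
    private static final Pattern LOG4J = Pattern.compile(
            "^(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3})\\s+(\\w+)\\s+(\\S+): (.*)$");

    // Builds the same envelope seen in the consumer output in section II
    public static String wrap(String hostname, String servicename, String line) {
        Matcher m = LOG4J.matcher(line);
        if (!m.matches()) {
            return null; // continuation lines (e.g. stack traces) would need extra handling
        }
        return String.format(
                "{\"hostname\":\"%s\",\"servicename\":\"%s\",\"time\":\"%s\","
                + "\"logtype\":\"%s\",\"loginfo\":\"%s:%s\"}",
                hostname, servicename, m.group(1), m.group(2), m.group(3), m.group(4));
    }

    public static void main(String[] args) {
        System.out.println(wrap("hadoop001", "DataNode",
                "2018-01-18 17:24:20,722 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: "
                + "Receiving blk_1074115986_375189"));
    }
}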

Start the Flume agent, which acts as the Kafka producer (using exec-memory-kafka.conf):

flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-kafka.conf \
-Dflume.root.logger=INFO,console

II. Viewing the collected log messages with a Kafka consumer
1. Start Kafka (this Kafka version depends on ZooKeeper, which the topic and consumer commands below expect at hadoop001:2181)

bin/kafka-server-start.sh config/server.properties

2. Create the topic

bin/kafka-topics.sh --create --zookeeper hadoop001:2181 --replication-factor 1 --partitions 1 --topic kafkatest
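
As a quick sanity check that the new topic accepts writes independently of Flume, one can push a test message with the Kafka Java producer. A minimal sketch, assuming kafka-clients 0.9+ on the classpath; the class name is illustrative, and the broker address matches the sink's brokerList above.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicSmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "172.16.15.80:9093"); // same broker the Flume sink targets
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // try-with-resources closes the producer, which also flushes the pending send
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("kafkatest", "smoke test"));
        }
    }
}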

3. Start the consumer

bin/kafka-console-consumer.sh --zookeeper hadoop001:2181 --topic kafkatest --from-beginning
{"hostname":"hadoop001","servicename":"DataNode","time":"2018-01-18 17:23:31,569","logtype":"INFO","loginfo":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:Scheduling blk_1074115985_375188 file /data/1/dn/current/BP-1517073770-172.16.15.80-1508233672475/current/finalized/subdir5/subdir181/blk_1074115985 for deletion"}
{"hostname":"hadoop001","servicename":"DataNode","time":"2018-01-18 17:23:31,570","logtype":"INFO","loginfo":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:Deleted BP-1517073770-172.16.15.80-1508233672475 blk_1074115985_375188 file /data/1/dn/current/BP-1517073770-172.16.15.80-1508233672475/current/finalized/subdir5/subdir181/blk_1074115985"}
{"hostname":"hadoop001","servicename":"DataNode","time":"2018-01-18 17:24:20,722","logtype":"INFO","loginfo":"org.apache.hadoop.hdfs.server.datanode.DataNode:Receiving BP-1517073770-172.16.15.80-1508233672475:blk_1074115986_375189 src: /172.16.15.80:35481 dest: /172.16.15.80:50010"}
{"hostname":"hadoop001","servicename":"DataNode","time":"2018-01-18 17:24:20,732","logtype":"INFO","loginfo":"org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:src: /172.16.15.80:35481, dest: /172.16.15.80:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-920005246_125617, offset: 0, srvID: 18833c0a-94fd-4203-b396-675ce5c962d1, blockid: BP-1517073770-172.16.15.80-1508233672475:blk_1074115986_375189, duration: 6642862"}
{"hostname":"hadoop001","servicename":"DataNode","time":"2018-01-18 17:24:20,732","logtype":"INFO","loginfo":"org.apache.hadoop.hdfs.server.datanode.DataNode:PacketResponder: BP-1517073770-172.16.15.80-1508233672475:blk_1074115986_375189, type=HAS_DOWNSTREAM_IN_PIPELINE terminating"}
{"hostname":"hadoop001","servicename":"DataNode","time":"2018-01-18 17:24:25,571","logtype":"INFO","loginfo":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:Scheduling blk_1074115986_375189 file /data/1/dn/current/BP-1517073770-172.16.15.80-1508233672475/current/finalized/subdir5/subdir181/blk_1074115986 for deletion"}