Flume personal study notes

Flume
Introduction
Flume is a distributed, reliable, and highly available system for collecting, aggregating, and moving large volumes of log data.
The agent is the basic deployment unit; it hosts a source (where data is collected), a channel (a buffer in between), and a sink (the destination the data is delivered to). Events flow source -> channel -> sink.

Installation
tar -zxvf apache-flume-1.6.0-bin.tar.gz -C /apps/
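After unpacking, it helps to point Flume at the JDK and verify the install. A minimal sketch; the JAVA_HOME path below is an assumption, adjust to your machine:
cd /apps/apache-flume-1.6.0-bin
cp conf/flume-env.sh.template conf/flume-env.sh
# in conf/flume-env.sh set, e.g.: export JAVA_HOME=/usr/local/jdk
bin/flume-ng version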

Receiving data from a network port, sinking to the logger
In the conf directory:
vi netcat-logger.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = netcat
a1.sources.r1.bind = node1
a1.sources.r1.port = 44444

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
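# capacity: max events buffered in the channel; transactionCapacity: max events per source put / sink take transaction (must not exceed capacity)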
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start command:
$bin/flume-ng agent --conf conf --conf-file conf/netcat-logger.conf --name a1 -Dflume.root.logger=INFO,console
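To test, open another shell and send some lines to the port; each line should show up as an event in the agent's console log. telnet is just one convenient client here (assuming it is installed and node1 resolves):
telnet node1 44444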

Reading data from a spooled directory
vi spool-logger.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/flumespool
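# fileHeader = true adds a header (key "file") carrying the absolute path of the ingested file to each event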
a1.sources.r1.fileHeader = true

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

$bin/flume-ng agent --conf conf --conf-file conf/spool-logger.conf --name a1 -Dflume.root.logger=INFO,console
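To test, drop a file into the monitored directory, e.g.:
cp /etc/hosts /home/hadoop/flumespool/
The spooldir source ingests the file and renames it with a .COMPLETED suffix; it is not meant for files that are still being written to.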

Tailing a file into HDFS (despite the conf file's name, this example uses an exec source with tail -F, not spooldir)
vi spool-hdfs.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/log/test.log

a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-
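# round the timestamp used for path escapes down to 10-minute buckets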
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# file roll interval, in seconds
a1.sinks.k1.hdfs.rollInterval = 3
# file roll size threshold, in bytes
a1.sinks.k1.hdfs.rollSize = 500
# roll after this many events have been written
a1.sinks.k1.hdfs.rollCount = 20
a1.sinks.k1.hdfs.batchSize = 1
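# the %y-%m-%d/%H%M escapes require a timestamp header; useLocalTimeStamp takes it from the local clock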
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# output file type; the default is SequenceFile; DataStream writes plain text
a1.sinks.k1.hdfs.fileType = DataStream

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

$bin/flume-ng agent --conf conf --conf-file conf/spool-hdfs.conf --name a1
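With these settings a file is rolled as soon as any of the three roll thresholds (3 s, 500 bytes, 20 events) is hit. To generate data for the tail, append to the log in a loop and then check HDFS; a quick sketch:
while true; do echo $(date) >> /home/hadoop/log/test.log; sleep 1; done
hdfs dfs -ls /flume/events/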

Tailing a file and forwarding over avro
vi tail-avro.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/log/test.log

## the avro sink acts as a data sender
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = node2
a1.sinks.k1.port = 4141
a1.sinks.k1.batch-size = 2

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

$bin/flume-ng agent --conf conf --conf-file conf/tail-avro.conf --name a1
$bin/flume-ng agent -c conf -f conf/tail-avro.conf -n a1
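The two commands are equivalent; -c, -f and -n are short forms of --conf, --conf-file and --name. Start the avro-source agent on node2 (below) first, otherwise the avro sink has nothing to connect to.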

On node2, an avro source receives the data and logs it
vi avro-logger.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

$bin/flume-ng agent --conf conf --conf-file conf/avro-logger.conf --name a1 -Dflume.root.logger=INFO,console
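Besides chaining from the tail-avro agent, you can push a file straight into the avro source with the bundled avro client; for example, reusing the test log path from above:
$bin/flume-ng avro-client -H node2 -p 4141 -F /home/hadoop/log/test.log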
