1. Multiple servers, multiple data sources
Requirements
flume-1 on hadoop103 tails the file /opt/module/datas/flume_tmp.log.
flume-2 on hadoop104 listens for a data stream on a network port.
flume-1 and flume-2 both send their data to flume-3 on hadoop102, which prints the merged stream to the console.
On hadoop102, hadoop103, and hadoop104, create a group2 directory under /opt/module/flume-1.7.0/job:
mkdir group2
Create the configuration files
On hadoop103, create flume1.conf: the source tails the file /opt/module/datas/flume_tmp.log and the sink forwards events to the next-hop Flume.
vim flume1.conf
Add the following:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/datas/flume_tmp.log
a1.sources.r1.shell = /bin/bash -c
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop102
a1.sinks.k1.port = 4141
# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
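The exec source above simply runs tail -F and turns every newly appended line into an event. A minimal Python sketch of that follow-the-file behavior (a stand-in for illustration, not Flume's implementation):

```python
import os
import tempfile

def follow_new_lines(f):
    """Yield lines appended to an already-open file, like `tail -F` does."""
    while True:
        line = f.readline()
        if not line:
            break  # no new data yet; a real tailer would sleep and retry
        yield line.rstrip("\n")

# Demo against a temporary file standing in for flume_tmp.log
path = os.path.join(tempfile.mkdtemp(), "flume_tmp.log")
open(path, "w").close()

f = open(path)
f.seek(0, os.SEEK_END)      # start at the end of the file, as tail -F does

with open(path, "a") as w:  # some other process appends a line
    w.write("hello\n")

events = list(follow_new_lines(f))
f.close()
print(events)               # ['hello']
```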
On hadoop104, create flume2.conf: the source listens for a data stream on port 44444 and the sink forwards events to the next-hop Flume.
vim flume2.conf
Add the following:
# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Describe/configure the source
a2.sources.r1.type = netcat
a2.sources.r1.bind = hadoop104
a2.sources.r1.port = 44444
# Describe the sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop102
a2.sinks.k1.port = 4141
# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
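The netcat source reads plain newline-terminated text over TCP and acknowledges each line with OK, which is why a telnet session works as a test client. A Python sketch of that exchange against a local stand-in server (the server here is illustrative, not the real source):

```python
import socket
import threading

received = []

def fake_netcat_source(server):
    """Stand-in for Flume's netcat source: read one line, reply with OK."""
    conn, _ = server.accept()
    line = conn.makefile().readline()
    received.append(line.rstrip("\n"))
    conn.sendall(b"OK\n")   # the real netcat source acks each event with OK
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port instead of 44444
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=fake_netcat_source, args=(server,))
t.start()

# What `telnet hadoop104 44444` does under the hood:
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello flume\n")
ack = client.makefile().readline().strip()
t.join()
client.close()
server.close()
print(received, ack)
```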
On hadoop102, create flume3.conf: the source receives the data streams sent by flume1 and flume2, and the merged data is sunk to the console.
vim flume3.conf
Add the following:
# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1
# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop102
a3.sources.r1.port = 4141
# Describe the sink
a3.sinks.k1.type = logger
# Describe the channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
Run the configuration files
Start the downstream agent first so the avro sinks have something to connect to: flume3.conf, then flume2.conf, then flume1.conf.
hadoop102
bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group2/flume3.conf -Dflume.root.logger=INFO,console
hadoop103
bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group2/flume1.conf
hadoop104
bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group2/flume2.conf
Test
On hadoop103, append content to /opt/module/datas/flume_tmp.log:
echo 'hello' >> /opt/module/datas/flume_tmp.log
On hadoop104, send data to port 44444:
telnet hadoop104 44444
The console window on hadoop102 now shows the content sent from both hadoop103 and hadoop104.
https://blog.csdn.net/yljphp/article/details/90347257
2. One server, multiple data sources (multiple srcs)
First create a directory named app2 under dome/flume, then create a file named app1.cf in the app2 directory with the following content:
app1.cf
agent1.sources=src1 src2
agent1.sinks=sin1
agent1.channels=chn1
#src1
agent1.sources.src1.type=netcat
agent1.sources.src1.bind=192.168.208.128
agent1.sources.src1.port=6666
agent1.sources.src1.channels=chn1
#src2
agent1.sources.src2.type=syslogtcp
agent1.sources.src2.host=192.168.208.128
agent1.sources.src2.port=5140
agent1.sources.src2.channels=chn1
#channel
agent1.channels.chn1.type=memory
#sink
agent1.sinks.sin1.type=logger
agent1.sinks.sin1.channel=chn1
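Both sources write into the same channel, so the single sink sees one interleaved stream. The fan-in can be sketched with two producer threads and one shared queue (names mirror the config above, but this is not the Flume API):

```python
import queue
import threading

chn1 = queue.Queue()  # stands in for the shared memory channel

def src(name, events):
    """A source pushes its events into the shared channel."""
    for e in events:
        chn1.put(f"{name}:{e}")

# src1 (netcat) and src2 (syslogtcp) feed the same channel concurrently
t1 = threading.Thread(target=src, args=("src1", ["a", "b"]))
t2 = threading.Thread(target=src, args=("src2", ["x"]))
t1.start(); t2.start(); t1.join(); t2.join()

# sin1 (logger) drains the events in whatever order they arrived
drained = sorted(chn1.queue)
print(drained)  # ['src1:a', 'src1:b', 'src2:x']
```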
Start the service (note that the agent name must match the one defined in the config file, agent1):
r.sh
flume-ng agent --conf-file app1.cf --name agent1 -Dflume.root.logger=INFO,console
Then test the netcat source by connecting to port 6666, either locally:
telnet 0 6666
or by address:
telnet 192.168.208.128 6666
Then test the syslogtcp source by running:
logger "neirong"
Note that logger writes to the local syslog daemon, so this only reaches the source if syslog is configured to forward messages to TCP port 5140.
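The syslogtcp source expects standard syslog framing, where each message begins with a <PRI> field encoding facility and severity as PRI = facility * 8 + severity. The arithmetic in sketch form:

```python
def encode_pri(facility, severity):
    """RFC 3164 priority value: facility * 8 + severity."""
    return facility * 8 + severity

def decode_pri(pri):
    """Recover (facility, severity) from a PRI value."""
    return divmod(pri, 8)

# `logger "neirong"` sends facility user (1), severity notice (5) by default
pri = encode_pri(1, 5)
print(pri, decode_pri(pri))  # 13 (1, 5)
```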
3. One source, multiple channels, multiple sinks
Hadoop1 writes to multiple targets at the same time: an HDFS file on Hadoop2 and a plain log file on Hadoop3.
Run on Hadoop1:
flume-ng agent --conf ./ -f consolidation-accepter.conf -n agent1 -Dflume.root.logger=INFO,console
The contents of consolidation-accepter.conf are as follows:
# Finally, now that we've defined all of our components, tell agent1 which ones we want to activate.
agent1.sources = source1
agent1.channels = ch1 ch2
agent1.sinks = hdfssink1 sink2
# A: a single data source; replicating copies every event to all of the channels
agent1.sources.source1.selector.type = replicating
# (replicating is the default)
# B: multiplexing routes events to different channels based on a header value
# agent1.sources.source1.selector.type = multiplexing
# agent1.sources.source1.selector.header = state (the header field to inspect)
# agent1.sources.source1.selector.mapping.CZ = ch1 (events whose state header is CZ)
# agent1.sources.source1.selector.mapping.US = ch2 ch3
# agent1.sources.source1.selector.default = ch4 (everything else)
# Define an Avro source called source1 on agent1, bind it to port 44444, and connect it to channels ch1 and ch2.
agent1.sources.source1.channels = ch1 ch2
agent1.sources.source1.type = avro
agent1.sources.source1.bind = con
agent1.sources.source1.port = 44444
agent1.sources.source1.threads = 5
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000000
agent1.channels.ch1.transactionCapacity = 1000000
agent1.channels.ch1.keep-alive = 10
agent1.channels.ch2.type = memory
agent1.channels.ch2.capacity = 1000000
agent1.channels.ch2.transactionCapacity = 100000
agent1.channels.ch2.keep-alive = 10
# Define the sinks: an HDFS sink fed by ch1 and a local rolling-file sink fed by ch2.
agent1.sinks.hdfssink1.channel = ch1
agent1.sinks.hdfssink1.type = hdfs
agent1.sinks.hdfssink1.hdfs.path = hdfs://mycluster/flume/%Y-%m-%d/%H%M
agent1.sinks.hdfssink1.hdfs.filePrefix = S1PA124-consolidation-accesslog-%H-%M-%S
agent1.sinks.hdfssink1.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfssink1.hdfs.writeFormat = Text
agent1.sinks.hdfssink1.hdfs.fileType = DataStream
agent1.sinks.hdfssink1.hdfs.rollInterval = 1800
agent1.sinks.hdfssink1.hdfs.rollSize = 5073741824
agent1.sinks.hdfssink1.hdfs.batchSize = 10000
agent1.sinks.hdfssink1.hdfs.rollCount = 0
agent1.sinks.hdfssink1.hdfs.round = true
agent1.sinks.hdfssink1.hdfs.roundValue = 60
agent1.sinks.hdfssink1.hdfs.roundUnit = minute
# file_roll writes rolling files into a local directory (a logger sink would only print to Flume's own log)
agent1.sinks.sink2.type = file_roll
agent1.sinks.sink2.batchSize = 10000
agent1.sinks.sink2.sink.rollInterval = 1000
agent1.sinks.sink2.sink.directory = /root/data/flume-logs/
agent1.sinks.sink2.sink.pathManager.prefix = accesslog
agent1.sinks.sink2.channel = ch2
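The two selector types behave quite differently: replicating copies every event into every channel, while multiplexing routes each event by a header value. Both policies in sketch form (channel names follow the commented example above; these are not Flume APIs):

```python
def replicating(event, channels):
    """Copy the event into every channel."""
    return {ch: event for ch in channels}

def multiplexing(event, mapping, default):
    """Route by the 'state' header, falling back to the default channels."""
    chans = mapping.get(event["headers"].get("state"), default)
    return {ch: event for ch in chans}

mapping = {"CZ": ["ch1"], "US": ["ch2", "ch3"]}
ev = {"headers": {"state": "US"}, "body": b"..."}

print(sorted(replicating(ev, ["ch1", "ch2"])))     # ['ch1', 'ch2']
print(sorted(multiplexing(ev, mapping, ["ch4"])))  # ['ch2', 'ch3']
```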
2. On Hadoop2 and Hadoop3, run the following command to start the collectors that feed data to Hadoop1 (watching their output is optional):
flume-ng agent --conf ./ --conf-file collect-send.conf --name agent2
The Flume sender configuration file collect-send.conf is as follows:
agent2.sources = source2
agent2.sinks = sink1
agent2.channels = ch2
agent2.sources.source2.type = exec
agent2.sources.source2.command = tail -F /root/data/flume.log
agent2.sources.source2.channels = ch2
#channels configuration
agent2.channels.ch2.type = memory
agent2.channels.ch2.capacity = 10000
agent2.channels.ch2.transactionCapacity = 10000
agent2.channels.ch2.keep-alive = 3
#sinks configuration
agent2.sinks.sink1.type = avro
# consolidationIpAddress is the address of the consolidation agent (Hadoop1)
agent2.sinks.sink1.hostname = consolidationIpAddress
agent2.sinks.sink1.port = 44444
agent2.sinks.sink1.channel = ch2
1. Start the Flume consolidation agent:
flume-ng agent --conf ./ -f consolidation-accepter.conf -n agent1 -Dflume.root.logger=INFO,console
2. Start the Flume collection agents:
flume-ng agent --conf ./ --conf-file collect-send.conf --name agent2
3. Notes on the roll parameters (the two conditions are OR'd: whichever is met first triggers a roll):
(1) Every half hour, flush the channel data to the sink and start a new file:
agent1.sinks.hdfssink1.hdfs.rollInterval = 1800
(2) When a file reaches 5073741824 bytes (5 GB), start a new file:
agent1.sinks.hdfssink1.hdfs.rollSize = 5073741824
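The roll settings can be sketched as a single OR'd predicate (illustrative only, not Flume's implementation):

```python
ROLL_INTERVAL_S = 1800        # hdfs.rollInterval
ROLL_SIZE_BYTES = 5073741824  # hdfs.rollSize

def should_roll(seconds_open, bytes_written):
    """Roll when EITHER condition is met (they are OR'd, not AND'd)."""
    return seconds_open >= ROLL_INTERVAL_S or bytes_written >= ROLL_SIZE_BYTES

print(should_roll(1800, 0))          # True  (half hour elapsed)
print(should_roll(60, 5073741824))   # True  (file reached 5 GB)
print(should_roll(60, 1024))         # False (neither threshold hit)
```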
4. Multiple sources, multiple channels, and multiple sinks can be combined flexibly.
There are also sink groups, used for load balancing across sinks, though they are not that commonly used:
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
# events are distributed across the sinks (round_robin selection by default)
# the following variant uses priorities instead:
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000
# events go to the highest-priority sink first; only when it fails do they fall back to a lower-priority sink
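The failover processor's selection rule, sketched (sink names come from the config above; this is not the Flume API):

```python
def pick_sink(priorities, failed):
    """Choose the highest-priority sink that is not currently failed."""
    live = [s for s in priorities if s not in failed]
    return max(live, key=priorities.get) if live else None

priorities = {"k1": 5, "k2": 10}  # from processor.priority.k1 / .k2

print(pick_sink(priorities, set()))         # k2 (higher priority wins)
print(pick_sink(priorities, {"k2"}))        # k1 (fallback after k2 fails)
print(pick_sink(priorities, {"k1", "k2"}))  # None (no live sink left)
```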