Apache Flume: Two Agents Working Together

The first agent collects data from a file and sends it over the network to the second agent.

The second agent receives the data sent by the first agent and writes it to HDFS.

Step 1: Install Flume on the hadoop02 node by copying the existing installation over from hadoop01:

scp -r apache-flume-1.8.0-bin/ hadoop02:$PWD

Step 2: Create the Flume configuration file on hadoop01:

cd /export/servers/apache-flume-1.8.0-bin/tmpconf
vim tail-avro-avro-logger.conf
##################
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/taillogs/test.log

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
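These two numbers are easy to conflate: capacity is the most events the channel may hold at once, while transactionCapacity caps how many events a single source put or sink take transaction may move. A toy Python sketch of that relationship (purely illustrative; Flume's real MemoryChannel is far more involved):

```python
from collections import deque

class MemoryChannelSketch:
    """Toy model: capacity bounds the buffer, transactionCapacity bounds a batch."""
    def __init__(self, capacity=1000, transaction_capacity=100):
        self.capacity = capacity
        self.transaction_capacity = transaction_capacity
        self.buffer = deque()

    def put_batch(self, events):
        if len(events) > self.transaction_capacity:
            raise ValueError("batch exceeds transactionCapacity")
        if len(self.buffer) + len(events) > self.capacity:
            raise RuntimeError("channel full")  # Flume would fail the transaction
        self.buffer.extend(events)

    def take_batch(self):
        # A take moves at most transactionCapacity events in one transaction
        n = min(self.transaction_capacity, len(self.buffer))
        return [self.buffer.popleft() for _ in range(n)]

ch = MemoryChannelSketch(capacity=1000, transaction_capacity=100)
ch.put_batch([f"event-{i}" for i in range(100)])
print(len(ch.take_batch()))  # 100
```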

## The avro sink acts as a data sender
a1.sinks.k1.type = avro
# 192.168.100.202 is the IP of hadoop02
a1.sinks.k1.hostname = 192.168.100.202
a1.sinks.k1.port = 4141

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
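Conceptually, the avro sink above is just a network sender, and the avro source on hadoop02 (configured in the next step) is the matching listener on port 4141. A minimal Python sketch of that handoff over a plain TCP socket (this is not Flume's actual Avro RPC protocol, only the shape of the sender/receiver pair, using a local ephemeral port in place of 4141):

```python
import socket
import threading

received = []

def receiver(server_sock):
    # Accept one connection and collect everything the sender writes
    conn, _ = server_sock.accept()
    with conn:
        chunks = b""
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks += data
    received.extend(chunks.decode().splitlines())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port standing in for 4141
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=receiver, args=(srv,))
t.start()

with socket.socket() as snd:          # the "avro sink" side
    snd.connect(("127.0.0.1", port))
    snd.sendall(b"event-1\nevent-2\n")
t.join()
srv.close()
print(received)  # ['event-1', 'event-2']
```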

Step 3: Create the Flume configuration file on hadoop02:

cd /export/servers/apache-flume-1.8.0-bin/tmpconf
vim avro-hdfs.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

## The avro source acts as a receiver service
a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.100.202
# receive data on this port of the local machine
a1.sources.r1.port = 4141

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop01:8020/avro
# the data is written into /avro on HDFS
# Bind the source and sink to the channel 
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
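With only `type` and `path` set, the HDFS sink falls back to its defaults (roll a new file every 30 seconds, every 1024 bytes, or every 10 events, written as a SequenceFile), which is why step 5 ends up with many small `FlumeData.*` files. If fewer, larger files are wanted, the roll triggers can be tuned; the values below are illustrative, not required:

```
# Roll once per minute or at 128 MB; setting a trigger to 0 disables it
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0
# Write plain text instead of the default SequenceFile
a1.sinks.k1.hdfs.fileType = DataStream
```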

Step 4: On hadoop01, write a script that keeps appending data to test.log:

vim tail-file.sh
#!/bin/bash
while true
do
  date >> /home/taillogs/test.log
  sleep 0.5
done
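This loop appends one `date` line to the log roughly every half second, forever; the exec source's `tail -F` then picks up each new line. A bounded, self-cleaning variant (writing to a temporary file as a stand-in for /home/taillogs/test.log) for a quick local check:

```shell
#!/bin/bash
# Bounded variant of tail-file.sh: append a few timestamped lines to a
# temporary file, count them, then clean up.
LOG="$(mktemp)"
for i in 1 2 3; do
  date >> "$LOG"
  sleep 0.1
done
grep -c '' "$LOG"   # prints 3
rm -f "$LOG"
```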

Step 5: Start in order (receiver first, then sender).

Start the Flume process on hadoop02:

cd /export/servers/apache-flume-1.8.0-bin
bin/flume-ng agent -c conf -f tmpconf/avro-hdfs.conf -n a1 -Dflume.root.logger=INFO,console

Start the Flume process on hadoop01:

cd /export/servers/apache-flume-1.8.0-bin
bin/flume-ng agent -c conf -f tmpconf/tail-avro-avro-logger.conf -n a1 -Dflume.root.logger=INFO,console

Run the shell script on hadoop01 to generate the log data:

mkdir -p /home/taillogs/
cd /home
sh tail-file.sh

Then check the output of both Flume agents.

As you can see, hadoop01 prints no event output (its sink only forwards events over Avro), while hadoop02 logs the events it receives, and a number of new files appear in HDFS (the trailing `.tmp` file is the one the HDFS sink is still writing; it loses the suffix once the file is rolled):

[root@hadoop01 home]# hadoop fs -ls /avro
Found 8 items
-rw-r--r--   2 root supergroup        735 2019-12-05 15:33 /avro/FlumeData.1575531215867
-rw-r--r--   2 root supergroup        775 2019-12-05 15:33 /avro/FlumeData.1575531215868
-rw-r--r--   2 root supergroup        755 2019-12-05 15:33 /avro/FlumeData.1575531215869
-rw-r--r--   2 root supergroup        755 2019-12-05 15:34 /avro/FlumeData.1575531215870
-rw-r--r--   2 root supergroup        755 2019-12-05 15:34 /avro/FlumeData.1575531215871
-rw-r--r--   2 root supergroup        735 2019-12-05 15:34 /avro/FlumeData.1575531215872
-rw-r--r--   2 root supergroup        735 2019-12-05 15:34 /avro/FlumeData.1575531215873
-rw-r--r--   2 root supergroup        549 2019-12-05 15:34 /avro/FlumeData.1575531215874.tmp
[root@hadoop01 home]# 