Flume Combined Deployment Patterns: High-Availability Configuration

  Here we build a Flume HA cluster on five machines. Add the following entries to /etc/hosts on every node:

    192.168.1.71 node01
    192.168.1.72 node02
    192.168.1.73 node03
    192.168.1.74 node04
    192.168.1.75 node05

 Case description: Consolidation (combined, high availability)
    Suppose node01, node02 and node03 are the servers running our web application.
    Flume is deployed on each of them in the Agent role to collect the logs the web
    application produces and forward them to a Collector. node04 and node05 run Flume
    in the Collector role: they receive the logs sent by every Agent and upload them to HDFS.

3. Download and install
    Download from http://flume.apache.org/download.html; here we use the latest stable release, apache-flume-1.8.0-bin.tar.gz.

    On node01, under /opt/bigdata/, extract it:

 tar -zxvf apache-flume-1.8.0-bin.tar.gz
    For convenience, rename it:

mv apache-flume-1.8.0-bin ./flume-1.8.0

4. Configure Flume
(1) Configure environment variables
    Under /opt/bigdata/, Flume ships a default flume-env.sh.template; copy it to flume-env.sh:

cp flume-1.8.0/conf/flume-env.sh.template  flume-1.8.0/conf/flume-env.sh
    Edit it with vim flume-1.8.0/conf/flume-env.sh and set the JDK path:

export JAVA_HOME=$JAVA_HOME    # or point directly at the JDK path, e.g. export JAVA_HOME=/usr/local/jdk1.8
(2) Configure startup parameters
On node01, under /opt/bigdata/, Flume ships a default flume-conf.properties.template; copy it to flume-conf.properties:
cp flume-1.8.0/conf/flume-conf.properties.template  flume-1.8.0/conf/flume-conf.properties
Edit it with vim flume-1.8.0/conf/flume-conf.properties and set the parameters below.
    The Flume instances on node01, node02 and node03 all play the Agent role, so flume-conf.properties is identical on those three machines. Note that this is a Java properties file: a # only starts a comment at the beginning of a line, so the comments below sit on their own lines rather than after values.

#agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

#set channel
agent1.channels.c1.type = memory
#maximum number of events the channel can hold
agent1.channels.c1.capacity = 1000
#transaction capacity: must not exceed capacity, and must not be smaller than the batch size
agent1.channels.c1.transactionCapacity = 100

#alternative: tail a single file in real time with an exec source
#agent1.sources.r1.type = exec
#agent1.sources.r1.command = tail -F /data/logdfs/f.log
#agent1.sources.r1.channels = c1

#spooldir source: every new file dropped into this directory is collected automatically
agent1.sources.r1.type = spooldir
agent1.sources.r1.spoolDir = /data/logdfs
agent1.sources.r1.fileHeader = true
agent1.sources.r1.channels = c1

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = node04
agent1.sinks.k1.port = 52020
# set sink2 (avro protocol)
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = node05
agent1.sinks.k2.port = 52020

#set group
agent1.sinkgroups = g1
#set sink group
agent1.sinkgroups.g1.sinks = k1 k2
#set failover: if node04 fails, node05 automatically takes over its work
agent1.sinkgroups.g1.processor.type = failover
#priority 10 (the higher priority wins)
agent1.sinkgroups.g1.processor.priority.k1 = 10
#priority 5
agent1.sinkgroups.g1.processor.priority.k2 = 5
#a failed sink is backed off for at most 10,000 ms (10 s) before being retried
agent1.sinkgroups.g1.processor.maxpenalty = 10000
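The spooldir source requires its directory to exist before the agent starts, or startup fails, so create it on each web server (a step the listing above takes for granted):

mkdir -p /data/logdfs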
Distribute Flume to the other nodes (note -r, since we are copying a directory):
scp -r flume-1.8.0 node02:`pwd`
scp -r flume-1.8.0 node03:`pwd`
scp -r flume-1.8.0 node04:`pwd`
scp -r flume-1.8.0 node05:`pwd`

The Flume instances on node04 and node05 play the Collector role, with node04 as the master and node05 as the standby (slave),
    so their configuration differs from the agents'. Clear flume-conf.properties on node04 and node05 and reconfigure them.

    node04 is configured as follows:

a1.sources = r1
a1.channels = kafka_c1 hdfs_c2
a1.sinks = kafka_k1 hdfs_k2

#properties of avro-AppSrv-source
a1.sources.r1.type = avro
a1.sources.r1.bind = node04
a1.sources.r1.port = 52020
#the source feeds both channels
a1.sources.r1.channels = kafka_c1 hdfs_c2
#static interceptor: adds a fixed header to every event, like "headers":{"key":"value"} in JSON
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
#key and value are user-defined
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = node04

#set kafka channel
a1.channels.kafka_c1.type = memory
a1.channels.kafka_c1.capacity = 1000
a1.channels.kafka_c1.transactionCapacity = 100
#set hdfs channel
a1.channels.hdfs_c2.type = memory
a1.channels.hdfs_c2.capacity = 1000
a1.channels.hdfs_c2.transactionCapacity = 100

#set sink to kafka
a1.sinks.kafka_k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.kafka_k1.channel = kafka_c1
#the kafka topic to write to
a1.sinks.kafka_k1.topic = rwb_topic
a1.sinks.kafka_k1.brokerList = node01:9092,node02:9092,node03:9092
a1.sinks.kafka_k1.requiredAcks = 1
a1.sinks.kafka_k1.batchSize = 1000
#set sink to hdfs
a1.sinks.hdfs_k2.type = hdfs
a1.sinks.hdfs_k2.channel = hdfs_c2
#the Hadoop cluster is an HA cluster, so the path uses the cluster nameservice
a1.sinks.hdfs_k2.hdfs.path = hdfs://mycluster:8020/flume/logdfs
a1.sinks.hdfs_k2.hdfs.fileType = DataStream
#plain text output
a1.sinks.hdfs_k2.hdfs.writeFormat = TEXT
#roll to a new file every 1 second
a1.sinks.hdfs_k2.hdfs.rollInterval = 1
#file name prefix and suffix
a1.sinks.hdfs_k2.hdfs.filePrefix = %Y-%m-%d
a1.sinks.hdfs_k2.hdfs.fileSuffix = .txt
a1.sinks.hdfs_k2.hdfs.useLocalTimeStamp = true
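One note on the HDFS sink: to resolve the HA nameservice mycluster, the collector needs the Hadoop client configuration on its classpath. If the hadoop command is not already installed on node04/node05, a simple sketch is to copy the site files into Flume's conf directory, which is on the classpath (the Hadoop install path here is an assumption):

cp /opt/bigdata/hadoop/etc/hadoop/core-site.xml flume-1.8.0/conf/
cp /opt/bigdata/hadoop/etc/hadoop/hdfs-site.xml flume-1.8.0/conf/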
    node05 is configured as follows (identical apart from the bind address; the interceptor value should also be node05, correcting a copy-paste slip in the original):

a1.sources = r1
a1.channels = kafka_c1 hdfs_c2
a1.sinks = kafka_k1 hdfs_k2

#properties of avro-AppSrv-source
a1.sources.r1.type = avro
a1.sources.r1.bind = node05
a1.sources.r1.port = 52020
#the source feeds both channels
a1.sources.r1.channels = kafka_c1 hdfs_c2
#static interceptor: adds a fixed header to every event, like "headers":{"key":"value"} in JSON
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
#key and value are user-defined
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = node05

#set kafka channel
a1.channels.kafka_c1.type = memory
a1.channels.kafka_c1.capacity = 1000
a1.channels.kafka_c1.transactionCapacity = 100
#set hdfs channel
a1.channels.hdfs_c2.type = memory
a1.channels.hdfs_c2.capacity = 1000
a1.channels.hdfs_c2.transactionCapacity = 100

#set sink to kafka
a1.sinks.kafka_k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.kafka_k1.channel = kafka_c1
#the kafka topic to write to
a1.sinks.kafka_k1.topic = rwb_topic
a1.sinks.kafka_k1.brokerList = node01:9092,node02:9092,node03:9092
a1.sinks.kafka_k1.requiredAcks = 1
a1.sinks.kafka_k1.batchSize = 1000
#set sink to hdfs
a1.sinks.hdfs_k2.type = hdfs
a1.sinks.hdfs_k2.channel = hdfs_c2
#the Hadoop cluster is an HA cluster, so the path uses the cluster nameservice
a1.sinks.hdfs_k2.hdfs.path = hdfs://mycluster:8020/flume/logdfs
a1.sinks.hdfs_k2.hdfs.fileType = DataStream
#plain text output
a1.sinks.hdfs_k2.hdfs.writeFormat = TEXT
#roll to a new file every 1 second
a1.sinks.hdfs_k2.hdfs.rollInterval = 1
#file name prefix and suffix
a1.sinks.hdfs_k2.hdfs.filePrefix = %Y-%m-%d
a1.sinks.hdfs_k2.hdfs.fileSuffix = .txt
a1.sinks.hdfs_k2.hdfs.useLocalTimeStamp = true
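The KafkaSink assumes the topic rwb_topic already exists on the brokers (node01-node03). If automatic topic creation is disabled, create it first; this sketch assumes a Kafka release of that era with ZooKeeper on the same three nodes, so the script name and ZooKeeper address are assumptions:

kafka-topics.sh --create --zookeeper node01:2181,node02:2181,node03:2181 --replication-factor 2 --partitions 3 --topic rwb_topic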
(3) Start Flume
Start the Collectors
    On node04 and node05, under /opt/bigdata/, start each one with the command below.
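If flume-1.8.0/logs does not exist yet, create it first on every node; the output redirection in the commands below writes there:

mkdir -p flume-1.8.0/logs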

flume-1.8.0/bin/flume-ng agent --conf flume-1.8.0/conf --conf-file flume-1.8.0/conf/flume-conf.properties --name a1 -Dflume.root.logger=INFO,console > flume-1.8.0/logs/flume-server.log 2>&1 &
Start the Agents
    On node01, node02 and node03, under /opt/bigdata/, start each one with:

flume-1.8.0/bin/flume-ng agent --conf flume-1.8.0/conf --conf-file flume-1.8.0/conf/flume-conf.properties --name agent1 -Dflume.root.logger=DEBUG,console > flume-1.8.0/logs/flume-server.log 2>&1 &
    a1 and agent1 are the agent names of the respective Flume instances (they must match the --name argument).
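A quick sanity check: Flume's JVM runs the main class org.apache.flume.node.Application, so each instance shows up as Application in jps, and once the agents connect, the collector's log (path as in the startup commands) records the connections:

jps | grep Application
grep CONNECTED flume-1.8.0/logs/flume-server.log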
5. High-availability test
    Create a new file under /data/logdfs on any one of node01, node02 or node03;
    the Flume agent on that node will collect it and forward it to the Flume collector on node04.

Here, on node01, we copy /root/install.log into /data/logdfs:
cp /root/install.log /data/logdfs/
    Now run tail -f flume-1.8.0/logs/flume-server.log to watch node04's Flume log; an excerpt:

18/06/17 14:54:22 INFO hdfs.HDFSEventSink: Writer callback called.
18/06/17 14:58:28 INFO ipc.NettyServer: Connection to /192.168.1.73:34166 disconnected.
18/06/17 14:59:21 INFO ipc.NettyServer: [id: 0xeffc35bb, /192.168.1.71:34175 => /192.168.1.74:52020] OPEN
18/06/17 14:59:21 INFO ipc.NettyServer: [id: 0xeffc35bb, /192.168.1.71:34175 => /192.168.1.74:52020] BOUND: /192.168.1.74:52020
18/06/17 14:59:21 INFO ipc.NettyServer: [id: 0xeffc35bb, /192.168.1.71:34175 => /192.168.1.74:52020] CONNECTED: /192.168.1.73:34175
18/06/17 14:59:21 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
18/06/17 14:59:21 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961816.txt.tmp
18/06/17 14:59:21 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961816.txt.tmp
18/06/17 14:59:22 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961816.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961816.txt
18/06/17 14:59:22 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961817.txt.tmp
18/06/17 14:59:22 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961817.txt.tmp
18/06/17 14:59:22 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961817.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961817.txt
18/06/17 14:59:22 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961818.txt.tmp
18/06/17 14:59:22 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961818.txt.tmp
18/06/17 14:59:22 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961818.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961818.txt
18/06/17 14:59:22 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961819.txt.tmp
18/06/17 14:59:22 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961819.txt.tmp
18/06/17 14:59:22 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961819.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529261961819.txt
    Meanwhile node05's Flume log shows no activity at all, i.e. node05's Flume is not doing any work.

Next, kill the Flume process on node04.
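One way to do it (Flume's JVM runs the main class org.apache.flume.node.Application, which is how it shows up in the process list):

kill -9 $(ps -ef | grep org.apache.flume.node.Application | grep -v grep | awk '{print $2}')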
    Then, on node01, copy another file, /root/install.log.syslog, into /data/logdfs; an excerpt of node05's Flume log:

18/06/17 14:24:17 INFO hdfs.HDFSEventSink: Writer callback called.
18/06/17 14:39:42 INFO ipc.NettyServer: [id: 0x7079374a, /192.168.1.71:51692 => /192.168.1.75:52020] OPEN
18/06/17 14:39:42 INFO ipc.NettyServer: [id: 0x7079374a, /192.168.1.71:51692 => /192.168.1.75:52020] BOUND: /192.168.1.75:52020
18/06/17 14:39:42 INFO ipc.NettyServer: [id: 0x7079374a, /192.168.1.71:51692 => /192.168.1.75:52020] CONNECTED: /192.168.1.71:51692
18/06/17 14:39:45 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
18/06/17 14:39:45 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt
    You can see node01 establish a connection to node05, and node05 begin uploading the collected files to HDFS:

    node05's Flume took over node04's work; the failover happened automatically.
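You can also verify the result directly on HDFS (assuming the hdfs client is available); the path matches hdfs.path in the collector config:

hdfs dfs -ls /flume/logdfs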

Now restart Flume on node04. This time we switch machines and create a new file on node02:
    copy /root/nohup.out into /data/logdfs, then check node04's Flume log; an excerpt:

18/06/17 14:24:17 INFO hdfs.HDFSEventSink: Writer callback called.
18/06/17 14:39:42 INFO ipc.NettyServer: [id: 0x7079374a, /192.168.1.72:51692 => /192.168.1.74:52020] OPEN
18/06/17 14:39:42 INFO ipc.NettyServer: [id: 0x7079374a, /192.168.1.72:51692 => /192.168.1.74:52020] BOUND: /192.168.1.74:52020
18/06/17 14:39:42 INFO ipc.NettyServer: [id: 0x7079374a, /192.168.1.72:51692 => /192.168.1.74:52020] CONNECTED: /192.168.1.72:51692
18/06/17 14:39:45 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
18/06/17 14:39:45 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785844.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785845.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785846.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785847.txt
18/06/17 14:39:46 INFO hdfs.BucketWriter: Creating hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Closing hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt.tmp
18/06/17 14:39:46 INFO hdfs.BucketWriter: Renaming hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt.tmp to hdfs://mycluster:8020/flume/logdfs/2018-06-17.1529260785848.txt
You can see node02 establish a connection to node04, and node04's Flume begin uploading the collected files to HDFS: after restarting, node04's Flume resumed its normal work as the higher-priority collector.
