Flume Notes (2): Transactions / Agent Internals / Topologies (Replicating and Multiplexing, Load Balancing and Failover, Aggregation)

Contents

Flume transactions

Flume agent internals

Flume topologies

Simple chaining

Replicating and multiplexing

Load balancing and failover

Aggregation


Flume transactions

 

Put transaction flow

doPut: first write the batch of events into the temporary buffer putList;

doCommit: check whether the channel's memory queue has enough free space to merge the batch in;

doRollback: if the channel's memory queue does not have enough space, roll the data back.

Take transaction flow

doTake: pull events into the temporary buffer takeList and send the data onward (e.g. to HDFS);

doCommit: if all of the data is sent successfully, clear the temporary buffer takeList;

doRollback: if an exception occurs while sending, the rollback returns the events in takeList to the channel's memory queue.
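The sizes of putList and takeList are bounded by the channel's transactionCapacity setting, while capacity bounds the channel's memory queue itself. A minimal memory-channel sketch (the agent name a1 and channel name c1 are assumed for illustration):

# capacity: maximum number of events held in the channel's memory queue
# transactionCapacity: maximum number of events in one put/take transaction,
#                      i.e. the size of putList/takeList
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100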

 

Flume agent internals

Key components:

ChannelSelector

The job of the ChannelSelector is to decide which Channel(s) an Event will be sent to. There are two types: Replicating and Multiplexing.

The ReplicatingSelector sends the same Event to every configured Channel;

the Multiplexing selector routes different Events to different Channels according to configured rules, typically based on a value in the Event header.
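
As an example, a hedged multiplexing sketch (the agent/source/channel names and the "state" header are assumptions for illustration, not taken from this post): events whose state header is CZ go to channel c1, US to c2, and everything else to c3.

a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = state
a1.sources.r1.selector.mapping.CZ = c1
a1.sources.r1.selector.mapping.US = c2
a1.sources.r1.selector.default = c3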

SinkProcessor

There are three types of SinkProcessor: DefaultSinkProcessor, LoadBalancingSinkProcessor, and FailoverSinkProcessor.

DefaultSinkProcessor handles a single Sink;

LoadBalancingSinkProcessor and FailoverSinkProcessor work with a Sink Group:

LoadBalancingSinkProcessor provides load balancing;

FailoverSinkProcessor provides failover (error recovery).
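
A failover sink group is configured step by step later in this post; for load balancing, a hedged sketch (agent and sink names assumed) looks like this:

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin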

Flume topologies

Simple chaining

In this mode, several Flume agents are connected in sequence, from the very first source through to the storage system that the final sink delivers to. Chaining too many agents is not recommended: an excessive number of agents not only reduces transfer speed, but also means that if any agent in the chain goes down during transfer, the whole pipeline is affected.

Replicating and multiplexing

Flume supports sending an event stream to one or more destinations. In this mode the same data can be replicated into multiple channels, or different data can be routed to different channels, and each sink can then deliver to a different destination.

Example:

(1) Requirement

Use Flume-1 to monitor changes to a file. Flume-1 passes the changed content to Flume-2, which stores it in HDFS; at the same time, Flume-1 passes the changed content to Flume-3, which writes it to the local file system.

(2) Analysis

Replicating Channel Selector (replication)

(3) Implementation steps

1) Create flume-file-flume.conf

Configure one source that reads the file, two channels, and two sinks; the two sinks send data to the agents defined in flume-flume-hdfs.conf and flume-flume-dir.conf respectively.

2) Edit flume-file-flume.conf

# agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# replicate the data flow to all channels
a1.sources.r1.selector.type = replicating

# source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /tmp/root/hive.log
a1.sources.r1.shell = /bin/bash -c

# sink
# an avro sink acts as a data sender
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop01
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop01
a1.sinks.k2.port = 4142

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

3) Create and edit flume-flume-hdfs.conf

# agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# source
# an avro source acts as a data-receiving service
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop01
a2.sources.r1.port = 4141

# sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop01:9000/flume2/%Y%m%d/%H
# prefix for files uploaded to HDFS
a2.sinks.k1.hdfs.filePrefix = flume2-
# whether to roll directories based on time
a2.sinks.k1.hdfs.round = true
# number of time units before a new directory is created
a2.sinks.k1.hdfs.roundValue = 1
# the time unit used for rounding
a2.sinks.k1.hdfs.roundUnit = hour
# whether to use the local timestamp
a2.sinks.k1.hdfs.useLocalTimeStamp = true
# number of Events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
# file type; compression is also supported
a2.sinks.k1.hdfs.fileType = DataStream
# how many seconds before rolling to a new file
a2.sinks.k1.hdfs.rollInterval = 30
# roll each file at roughly 128 MB
a2.sinks.k1.hdfs.rollSize = 134217700
# rolling is independent of the number of Events
a2.sinks.k1.hdfs.rollCount = 0

# channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

4) Create and edit flume-flume-dir.conf

# agent
a3.sources = r1
a3.sinks = k1
a3.channels = c2

# source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop01
a3.sources.r1.port = 4142

# sink
a3.sinks.k1.type = file_roll
a3.sinks.k1.sink.directory = /opt/module/flume3

# channel
a3.channels.c2.type = memory
a3.channels.c2.capacity = 1000
a3.channels.c2.transactionCapacity = 100

# source and sink to the channel
a3.sources.r1.channels = c2
a3.sinks.k1.channel = c2

Note: the local output directory must already exist; if it does not, Flume will not create it.
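
For example, create it beforehand on the machine running a3 (the path is the one configured above):

mkdir -p /opt/module/flume3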

5) Run the configuration files

Start the corresponding Flume agents one by one (in the order shown, downstream receivers first):

flume-flume-dir.conf

bin/flume-ng agent --conf conf/ --name a3 --conf-file conf/flume-flume-dir.conf

flume-flume-hdfs.conf

bin/flume-ng agent --conf conf/ --name a2 --conf-file conf/flume-flume-hdfs.conf

 flume-file-flume.conf

bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/flume-file-flume.conf

6) Start Hive so that /tmp/root/hive.log is updated, then check the data
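
A hedged verification sketch (the paths come from the configs above; adjust to your environment):

# files written by a2 through the HDFS sink
hdfs dfs -ls /flume2
# files rolled by a3 through the file_roll sink
ls /opt/module/flume3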

Load balancing and failover

Flume supports grouping multiple sinks logically into one sink group; combined with the different SinkProcessors, a sink group can provide load balancing or failover (error recovery).

Example:

(1) Requirement

Use Flume1 to monitor a port; the two sinks in its sink group connect to Flume2 and Flume3 respectively, and a FailoverSinkProcessor is used to implement failover.

(2) Analysis

FailoverSinkProcessor (provides failover / error recovery)

 

(3) Implementation steps

1) Create flume-netcat-flume.conf

Configure one netcat source, one channel, and one sink group (with two sinks) whose sinks deliver to the agents defined in flume-flume-console1.conf and flume-flume-console2.conf respectively.

2) Edit flume-netcat-flume.conf

# agent
a1.sources = r1
a1.channels = c1
a1.sinkgroups = g1
a1.sinks = k1 k2

# source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop01
a1.sources.r1.port = 44444

a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000

# sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop01
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop01
a1.sinks.k2.port = 4142

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# source and sink to the channel
a1.sources.r1.channels = c1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1

3) Create and edit flume-flume-console1.conf

# agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# source
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop01
a2.sources.r1.port = 4141

# sink
a2.sinks.k1.type = logger

# channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

4) Create and edit flume-flume-console2.conf

# agent
a3.sources = r1
a3.sinks = k1
a3.channels = c2

# source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop01
a3.sources.r1.port = 4142

# sink
a3.sinks.k1.type = logger

# channel
a3.channels.c2.type = memory
a3.channels.c2.capacity = 1000
a3.channels.c2.transactionCapacity = 100

# source and sink to the channel
a3.sources.r1.channels = c2
a3.sinks.k1.channel = c2

5) Run the configuration files

flume-flume-console2.conf

bin/flume-ng agent --conf conf/ --name a3 --conf-file conf/flume-flume-console2.conf -Dflume.root.logger=INFO,console

flume-flume-console1.conf 

bin/flume-ng agent --conf conf/ --name a2 --conf-file conf/flume-flume-console1.conf -Dflume.root.logger=INFO,console

flume-netcat-flume.conf 

bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/flume-netcat-flume.conf

6) Use netcat to send data to port 44444:

nc hadoop01 44444
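
After connecting, type a few lines. With the priorities configured above (k1 = 5, k2 = 10), the FailoverSinkProcessor delivers events to the highest-priority sink that is still alive, so while both downstream agents are up the typed lines should appear on the agent behind k2.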

7) Check the console log output of console1 and console2

8) Kill the agent behind console1, then check the console log output of console2

Note: use jps -ml to check the running Flume processes.
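
A hedged sketch of simulating the failure (each running agent appears in jps -ml as org.apache.flume.node.Application; the PID placeholder below is whatever that command reports):

jps -ml
kill -9 <pid-of-the-agent-to-stop>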

Aggregation

A typical web application is spread across hundreds of servers, in larger deployments even thousands or tens of thousands, and the logs they produce are painful to process. This aggregation topology solves the problem well: a Flume agent is deployed on every server to collect its logs and forward them to a central log-collecting Flume agent, which then uploads everything to HDFS, Hive, HBase, etc. for log analysis.

Example:

(1) Requirement

Flume-1 on hadoop01 monitors the file /tmp/root/hive.log;

Flume-2 on hadoop02 monitors the data stream on a given port;

Flume-1 and Flume-2 send their data to Flume-3 on hadoop03, and Flume-3 prints the final data to the console.

(2) Analysis

 

(3) Implementation steps

1) Create and edit flume-file-flume.conf

# agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /tmp/root/hive.log
a1.sources.r1.shell = /bin/bash -c

# sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop03
a1.sinks.k1.port = 4141

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

2) Create and edit flume-netcat-flume.conf

# agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# source
a2.sources.r1.type = netcat
a2.sources.r1.bind = hadoop02
a2.sources.r1.port = 44444

# sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop03
a2.sinks.k1.port = 4141

# channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

3) Create and edit flume-flume-logger.conf

# agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

# source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop03
a3.sources.r1.port = 4141

# sink
a3.sinks.k1.type = logger

# channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

4) Run the configuration files

flume-flume-logger.conf

bin/flume-ng agent --conf conf/ --name a3 --conf-file conf/flume-flume-logger.conf -Dflume.root.logger=INFO,console

flume-file-flume.conf

bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/flume-file-flume.conf

flume-netcat-flume.conf

bin/flume-ng agent --conf conf/ --name a2 --conf-file conf/flume-netcat-flume.conf

5) Check the data on hadoop03
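
A hedged test sketch (hosts, paths and ports are the ones given in the requirement above):

# on hadoop01: append to the monitored log file
echo "hello from hadoop01" >> /tmp/root/hive.log
# on hadoop02: send a line to the netcat source
nc hadoop02 44444

Both entries should then be printed by the logger sink on hadoop03's console.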

This post is a set of study notes!!!
