1. The concept of load balancing
① Load balancing addresses the situation where one machine (one process) cannot handle all requests on its own, so multiple processes handle them together. Each request is handed to exactly one process, which avoids duplicated data.
As shown in the figure below, Agent1 acts as a routing node: it distributes the events buffered in its channel evenly across several Sink components, and each Sink connects to a separate downstream Agent.
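The topology built in this tutorial looks roughly like this (host names and ports are taken from the configurations in section 2):
hadoop01  agent1: exec source (tail -F 123.log) -> memory channel c1 -> sink group g1
            |-- sink k1 (avro) --> hadoop02:52020  a1: avro source -> memory channel -> logger sink
            |-- sink k2 (avro) --> hadoop03:52020  a1: avro source -> memory channel -> logger sink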
② How requests are distributed:
Load-balancing algorithms: 1) round robin (round_robin); 2) random (random). A minimal sketch of both strategies follows.
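To make the two selector strategies concrete, here is a hypothetical bash sketch (not Flume code; the sink names k1/k2 simply mirror the configuration below). round_robin hands events to the sinks in turn, while random picks a sink independently for each event:
#!/bin/bash
# Illustration only: two ways a load balancer could pick a sink for each event
sinks=(k1 k2)
i=0
for event in event1 event2 event3 event4; do
  rr=${sinks[$(( i % ${#sinks[@]} ))]}        # round_robin: cycle through the sinks in order
  rnd=${sinks[$(( RANDOM % ${#sinks[@]} ))]}  # random: pick any sink for this event
  echo "$event -> round_robin: $rr, random: $rnd"
  i=$(( i + 1 ))
done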
2. Writing the scripts
hadoop01:
Go to the flume_scripts folder on hadoop01 and create the exec-avro.properties script.
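For example (assuming flume_scripts lives under the user's home directory, as implied by the start-up commands in section 3):
[xiaokang@hadoop01 ~]$ cd /home/xiaokang/flume_scripts
[xiaokang@hadoop01 flume_scripts]$ vim exec-avro.properties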
#agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2
#set group
agent1.sinkgroups = g1
#set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000000
agent1.channels.c1.transactionCapacity = 1000
agent1.channels.c1.byteCapacityBufferPercentage = 20
agent1.channels.c1.byteCapacity = 800000
agent1.channels.c1.keep-alive = 60
agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/xiaokang/logs/123.log
# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop02
agent1.sinks.k1.port = 52020
# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop03
agent1.sinks.k2.port = 52020
#set sink group
agent1.sinkgroups.g1.sinks = k1 k2
#set load balance
agent1.sinkgroups.g1.processor.type = load_balance
# if enabled, a failed sink is temporarily blacklisted (backed off)
agent1.sinkgroups.g1.processor.backoff = true
# round_robin selector
agent1.sinkgroups.g1.processor.selector = round_robin
# upper limit (ms) of the exponential backoff for blacklisted sinks; each further failure doubles the backoff up to this value
agent1.sinkgroups.g1.processor.selector.maxTimeOut = 10000
hadoop02:
Go to the flume_scripts folder on hadoop02 and create the avro-logger.properties script.
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52020
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
hadoop03:
Go to the flume_scripts folder on hadoop03 and create the avro-logger.properties script.
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52020
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
3. Starting Flume
[xiaokang@hadoop03 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop01 ~]$ flume-ng agent -n agent1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/exec-avro.properties -Dflume.root.logger=INFO,console
As shown in the figure, the agents are connected successfully.
4. Verification
Create 123.log in the logs folder on hadoop01 and keep appending the current time to it.
[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/123.log; sleep 1; done
[xiaokang@hadoop01 ~]$ cat logs/123.log
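If load balancing works, the logger sinks on both hadoop02 and hadoop03 should print events on their consoles, with round_robin alternating events between the two agents. Optionally, Flume's built-in HTTP/JSON monitoring can be enabled to compare event counts on the two collectors; for example (the port 34545 is an arbitrary choice):
[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger.properties -Dflume.root.logger=INFO,console -Dflume.monitoring.type=http -Dflume.monitoring.port=34545
[xiaokang@hadoop01 ~]$ curl http://hadoop02:34545/metrics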