Load Balancing
Load balancing is a technique for handling workloads that a single machine (a single process) cannot serve on its own.
In Flume, the Load Balancing Sink Processor provides this capability.
As shown in the figure below, Agent1 acts as a routing node: it distributes the Events buffered in its Channel across multiple Sink components, and each Sink connects to a separate downstream Agent. A sample configuration is shown below:
Here, three machines are used to simulate Flume load balancing:
hadoop01: collects data and sends it to hadoop02 and hadoop03
hadoop02: receives part of the data from hadoop01
hadoop03: receives part of the data from hadoop01
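The round_robin selector configured below simply hands each successive event to the next sink in the group. A tiny shell sketch (purely illustrative, not Flume code) of that selection order:

```shell
# Illustration only: the round_robin selector cycles through the sinks
# in the group, so consecutive events alternate between the two
# downstream agents, hadoop02 and hadoop03.
sinks=(hadoop02 hadoop03)
i=0
for event in e1 e2 e3 e4; do
  # pick the next sink in round-robin order
  target=${sinks[i % ${#sinks[@]}]}
  echo "$event -> $target"
  i=$((i + 1))
done
```

With `backoff = true`, a sink that fails is additionally skipped for a growing penalty window (capped by `maxTimeOut`), so events keep flowing through the remaining healthy sink.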
Step 1: Create the Flume configuration on hadoop01
cd /export/servers/apache-flume-1.8.0-bin/tmpconf
vim load_banlancer_client.conf
#agent name
a1.channels = c1
a1.sources = r1
a1.sinks = k1 k2
#set group
a1.sinkgroups = g1
#set sink group
a1.sinkgroups.g1.sinks = k1 k2
#set sources
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/taillogs/test.log
#set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# set sink1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop02
a1.sinks.k1.port = 52021
# set sink2
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop03
a1.sinks.k2.port = 52021
#set load-balancing sink processor
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin
a1.sinkgroups.g1.processor.selector.maxTimeOut = 10000
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
Step 2: Create the Flume configuration on hadoop02
cd /export/servers/apache-flume-1.8.0-bin/tmpconf
vim load_banlancer_server.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52021
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Describe the sink
a1.sinks.k1.type = logger
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Step 3: Create the Flume configuration on hadoop03
cd /export/servers/apache-flume-1.8.0-bin/tmpconf
vim load_banlancer_server.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52021
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Describe the sink
a1.sinks.k1.type = logger
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Step 4: Start the Flume services
Start the two downstream agents (hadoop03 and hadoop02) first, so that their Avro sources are already listening when hadoop01 begins sending.
Start the Flume service on hadoop03
cd /export/servers/apache-flume-1.8.0-bin/
bin/flume-ng agent -n a1 -c conf -f tmpconf/load_banlancer_server.conf -Dflume.root.logger=DEBUG,console
Start the Flume service on hadoop02
cd /export/servers/apache-flume-1.8.0-bin/
bin/flume-ng agent -n a1 -c conf -f tmpconf/load_banlancer_server.conf -Dflume.root.logger=DEBUG,console
Start the Flume service on hadoop01
cd /export/servers/apache-flume-1.8.0-bin/
bin/flume-ng agent -n a1 -c conf -f tmpconf/load_banlancer_client.conf -Dflume.root.logger=DEBUG,console
Step 5: Run the data-generation script on hadoop01
cd /home
sh tail-file.sh
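The contents of `tail-file.sh` are not shown above. A minimal sketch of what such a script might look like (hypothetical; the path and line format are assumptions based on the exec source configured in Step 1) is:

```shell
# Hypothetical sketch of tail-file.sh: append lines to the file that the
# exec source tails (tail -F /home/taillogs/test.log). The real script
# presumably loops forever; this sketch writes only a handful of lines.
LOG=${LOG:-/home/taillogs/test.log}
mkdir -p "$(dirname "$LOG")"
for n in 1 2 3 4 5; do
  echo "test data $n $(date)" >> "$LOG"
  sleep 1
done
```

Each appended line becomes one Flume Event, which the load-balancing sink processor then routes to hadoop02 or hadoop03 in turn.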
Then observe the state of each running Flume agent.
You should see both hadoop02 and hadoop03 receiving data, which confirms that load balancing is working.