Flume Load Balancing

1. The concept of load balancing

① Load balancing is an algorithm for the scenario where a single machine (a single process) cannot handle all requests, so multiple processes handle them together. Each request is handed to exactly one process, which avoids duplicate data.
As shown in the figure below, Agent1 is a routing node: it balances the events staged in its channel across multiple Sink components, and each Sink connects to a separate downstream Agent.
(figure: load-balancing topology, Agent1 fanning out to two downstream agents)
② How requests are distributed:
Load-balancing algorithms: 1) round robin (round_robin); 2) random (random).
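To make the two strategies concrete, here is a minimal, illustrative Python sketch of how a sink processor might pick a sink per event (this is a toy model, not Flume's actual LoadBalancingSinkProcessor code; the sink names `k1`/`k2` match the configuration below):

```python
import itertools
import random

sinks = ["k1", "k2"]  # the two avro sinks in the sink group

# round_robin: cycle through the sinks in a fixed order
rr = itertools.cycle(sinks)
def pick_round_robin():
    return next(rr)

# random: pick a sink uniformly at random
def pick_random():
    return random.choice(sinks)

picks = [pick_round_robin() for _ in range(4)]
print(picks)  # ['k1', 'k2', 'k1', 'k2']
```

With round robin, consecutive events alternate between the sinks; with random, each sink receives roughly half of the events on average but in no fixed order.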

2. Writing the scripts

hadoop01:
Go to the flume_scripts folder on hadoop01 and create the exec-avro.properties script:

#agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

# set group
agent1.sinkgroups = g1

#set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000000
agent1.channels.c1.transactionCapacity = 1000
agent1.channels.c1.byteCapacityBufferPercentage = 20
agent1.channels.c1.byteCapacity = 800000
agent1.channels.c1.keep-alive = 60

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/xiaokang/logs/123.log

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop02
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop03
agent1.sinks.k2.port = 52020

#set sink group
agent1.sinkgroups.g1.sinks = k1 k2

# set load balancing
# (comments must be on their own lines: in a Java properties file,
# text after the value would be read as part of the value)
agent1.sinkgroups.g1.processor.type = load_balance
# if true, a failed sink is temporarily blacklisted
agent1.sinkgroups.g1.processor.backoff = true
# selection strategy: round_robin or random
agent1.sinkgroups.g1.processor.selector = round_robin
# upper bound (ms) on the blacklist timeout; on consecutive failures
# the penalty grows exponentially up to this cap
agent1.sinkgroups.g1.processor.selector.maxTimeOut = 10000
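The backoff behavior can be sketched as follows: when a sink fails, it is blacklisted for a timeout that doubles on each consecutive failure, capped at maxTimeOut. This is a simplified illustrative model (the class name, the 1000 ms initial penalty, and the doubling rule are assumptions for illustration; Flume's real logic lives in its LoadBalancingSinkProcessor):

```python
MAX_TIMEOUT_MS = 10000  # mirrors processor.selector.maxTimeOut above

class SinkBackoff:
    """Toy model of per-sink backoff blacklisting."""
    def __init__(self):
        self.timeout_ms = 0        # current blacklist duration
        self.blacklisted_until = 0

    def on_failure(self, now_ms):
        # double the penalty on each consecutive failure, up to the cap
        self.timeout_ms = min(max(self.timeout_ms * 2, 1000), MAX_TIMEOUT_MS)
        self.blacklisted_until = now_ms + self.timeout_ms

    def is_available(self, now_ms):
        # the sink rejoins the rotation once its blacklist window expires
        return now_ms >= self.blacklisted_until

b = SinkBackoff()
b.on_failure(0)        # first failure: blacklisted for 1000 ms
print(b.timeout_ms)    # 1000
b.on_failure(1000)     # still failing: penalty doubles to 2000 ms
print(b.timeout_ms)    # 2000
for _ in range(5):     # repeated failures hit the cap
    b.on_failure(0)
print(b.timeout_ms)    # 10000
```

While a sink is blacklisted, the selector simply skips it and sends events to the remaining sinks in the group.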

hadoop02:
Go to the flume_scripts folder on hadoop02 and create the avro-logger.properties script:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

hadoop03:
Go to the flume_scripts folder on hadoop03 and create the avro-logger.properties script:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Starting Flume

Start the collector agents on hadoop02 and hadoop03 first, then the routing agent on hadoop01:

[xiaokang@hadoop03 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop01 ~]$ flume-ng agent -n agent1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/exec-avro.properties -Dflume.root.logger=INFO,console

As shown in the screenshots, the agents connected successfully.
(screenshots: agent consoles after startup)

4. Verification

In the logs folder on hadoop01, create 123.log and continuously append the current time to it (press Ctrl+C to stop the loop):

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/123.log ; done
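Because the selector is round_robin, batches of events taken from the channel land alternately on the two collectors, so both the hadoop02 and hadoop03 consoles print a share of the timestamps (alternation is per batch, not guaranteed per line). A toy sketch of the expected split, assuming one-event batches:

```python
from itertools import cycle

# alternate delivery between the two collector agents
collectors = cycle(["hadoop02", "hadoop03"])
log_lines = [f"ts-{i}" for i in range(6)]  # stand-ins for `date` output

received = {"hadoop02": [], "hadoop03": []}
for line in log_lines:
    received[next(collectors)].append(line)

print(received["hadoop02"])  # ['ts-0', 'ts-2', 'ts-4']
print(received["hadoop03"])  # ['ts-1', 'ts-3', 'ts-5']
```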

(screenshots: hadoop02 and hadoop03 consoles each printing a share of the events)

[xiaokang@hadoop01 ~]$ cat logs/123.log 

(screenshot: contents of 123.log)
