Big Data Technology: Flume


1. Introduction to Flume

(1) Flume provides a distributed, reliable service for efficiently collecting, aggregating, and moving large volumes of log data. Flume runs only in Unix-like environments.

(2) Flume is built on a streaming architecture; it is fault-tolerant, flexible, and simple.

(3) Flume and Kafka are used for real-time data collection, Spark and Flink for real-time processing, and Impala for real-time queries.

2. Flume Components


2.1 Source

Collects the data. The Source is where the data flow originates; it forwards the data it produces to the Channel, somewhat like a Channel in Java NIO.

2.2 Channel

Bridges the Sources and the Sinks; it behaves like a queue.

2.3 Sink

Pulls data from the Channel and writes it to the destination, which can be the Source of the next agent, or a store such as HDFS or HBase.

2.4 Event

The transfer unit. An Event is the basic unit of data transfer in Flume; data travels from the source to the destination in the form of events.
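For intuition: an Event carries a map of string headers plus a byte-array body. Assuming the logger sink used in Case 1 below, an event whose body is the string "hello" is printed on the console roughly like this (exact formatting varies by Flume version):

Event: { headers:{} body: 68 65 6C 6C 6F    hello }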

3. Flume Data Flow

The Source monitors a file or a data stream. When the data source produces new data, the Source picks it up, wraps it in an Event, puts it into the Channel, and commits the transaction. The Channel is a first-in, first-out queue; the Sink pulls events off the Channel and writes them out, for example to HDFS.
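This wiring is declared in a properties file per agent. A minimal skeleton, following the naming convention used in the cases below (a1, r1, c1, k1 are arbitrary labels):

# Declare the components of agent a1
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# (type-specific source/channel/sink properties go here)

# Wire them together: one source can feed several channels,
# but each sink drains exactly one channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1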

4. Flume Deployment

4.1 Environment setup

(1) Find the Java installation path with echo $JAVA_HOME:

/opt/module/jdk1.8.0_144

(2) Install Flume

[itstar@bigdata113 software]$ tar -zxvf apache-flume-1.8.0-bin.tar.gz -C /opt/module/

Rename the environment template:

[itstar@bigdata113 conf]$ mv flume-env.sh.template flume-env.sh

(3) Edit the Flume configuration file flume-env.sh:

export JAVA_HOME=/opt/module/jdk1.8.0_144

4.2 Hands-on cases

4.2.1 Case 1: Monitoring port data

Goal: Flume listens on one console while another console sends messages; the messages are displayed on the listening side in real time.

(1) Install the telnet tool
(2) Create the Flume agent configuration file flume-telnet.conf

# Define the agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Define the netcat source
a1.sources.r1.type = netcat
a1.sources.r1.bind = bigdata111
a1.sources.r1.port = 44445

# Define the sink
a1.sinks.k1.type = logger

# Define the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1                 

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(3) Check whether port 44445 is already in use

$ netstat -tunlp | grep 44445

(4) Start Flume with this configuration file

$ bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/flume-telnet.conf -Dflume.root.logger=INFO,console

(5) Use the telnet tool to send data to port 44445 on bigdata111

$ telnet bigdata111 44445

4.2.2 Case 2: Streaming a local file to HDFS in real time

(1) Create the flume-hdfs.conf file

# 1 agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# 2 source
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /opt/plus
a2.sources.r2.shell = /bin/bash -c

# 3 sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://bigdata111:9000/flume/%Y%m%d/%H

# Prefix for files uploaded to HDFS
a2.sinks.k2.hdfs.filePrefix = logs-

# Whether to round down the timestamp used in the directory path
a2.sinks.k2.hdfs.round = true

# How many time units to round down to
a2.sinks.k2.hdfs.roundValue = 1

# The unit of the rounding value
a2.sinks.k2.hdfs.roundUnit = hour

# Whether to use the local timestamp (instead of an event header)
a2.sinks.k2.hdfs.useLocalTimeStamp = true

# Number of events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000

# File type; compressed formats are also supported
a2.sinks.k2.hdfs.fileType = DataStream

# Roll to a new file after this many seconds
a2.sinks.k2.hdfs.rollInterval = 600

# Roll to a new file once it reaches this size (about 128 MB)
a2.sinks.k2.hdfs.rollSize = 134217700

# Do not roll files based on event count
a2.sinks.k2.hdfs.rollCount = 0

# Minimum number of HDFS block replicas
a2.sinks.k2.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 1000

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
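Note on the round* settings above: with round = true, roundValue = 1, and roundUnit = hour, the timestamp used to resolve the %Y%m%d/%H escapes in hdfs.path is rounded down to the hour, so all events from the same hour land in the same directory. For example (the date is purely illustrative), an event arriving at 2018-06-12 10:37 would be written under:

hdfs://bigdata111:9000/flume/20180612/10/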

(2) Run the agent with this configuration

/opt/module/flume1.8.0/bin/flume-ng agent \
--conf /opt/module/flume1.8.0/conf/ \
--name a2 \
--conf-file /opt/module/flume1.8.0/jobconf/flume-hdfs.conf
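To verify the pipeline, append a line to the tailed file and list the output directory (the test string is arbitrary):

echo "hello flume" >> /opt/plus
hdfs dfs -ls /flume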

4.2.3 Case 3: Streaming a directory's files to HDFS in real time

Goal: use Flume to watch an entire directory for new files

(1) Create the configuration file flume-dir.conf

# 1 agent
a3.sources = r3
a3.sinks = k3
a3.channels = c3

# 2 source

# Source type for watching a directory
a3.sources.r3.type = spooldir

# Directory to watch
a3.sources.r3.spoolDir = /opt/module/flume1.8.0/upload

# Suffix appended to a file once Flume has finished ingesting it
a3.sources.r3.fileSuffix = .COMPLETED
a3.sources.r3.fileHeader = true

# Ignore (do not upload) files ending in .tmp (optional)
a3.sources.r3.ignorePattern = ([^ ]*\.tmp)

# 3 sink
a3.sinks.k3.type = hdfs
a3.sinks.k3.hdfs.path = hdfs://bigdata111:9000/flume/%H

# Prefix for files uploaded to HDFS
a3.sinks.k3.hdfs.filePrefix = upload-

# Whether to round down the timestamp used in the directory path
a3.sinks.k3.hdfs.round = true

# How many time units to round down to
a3.sinks.k3.hdfs.roundValue = 1

# The unit of the rounding value
a3.sinks.k3.hdfs.roundUnit = hour

# Whether to use the local timestamp (instead of an event header)
a3.sinks.k3.hdfs.useLocalTimeStamp = true

# Number of events to accumulate before flushing to HDFS
a3.sinks.k3.hdfs.batchSize = 100

# File type; compressed formats are also supported
a3.sinks.k3.hdfs.fileType = DataStream

# Roll to a new file after this many seconds
a3.sinks.k3.hdfs.rollInterval = 600

# Roll to a new file once it reaches this size (about 128 MB)
a3.sinks.k3.hdfs.rollSize = 134217700

# Do not roll files based on event count
a3.sinks.k3.hdfs.rollCount = 0

# Minimum number of HDFS block replicas
a3.sinks.k3.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a3.channels.c3.type = memory
a3.channels.c3.capacity = 1000
a3.channels.c3.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3

(2) Run the test: after starting the agent with the command below, try adding files to the upload directory

/opt/module/flume1.8.0/bin/flume-ng agent \
--conf /opt/module/flume1.8.0/conf/ \
--name a3 \
--conf-file /opt/module/flume1.8.0/jobconf/flume-dir.conf

Heads-up: when using the Spooling Directory Source (a quick test is sketched after this list):

  1. Do not create and then keep modifying files inside the watched directory.
  2. Files that have been fully ingested are renamed with a .COMPLETED suffix.
  3. The watched directory is scanned for changes every 500 milliseconds.
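A quick smoke test, assuming the agent from step (2) is running (the file name is illustrative):

cp /opt/plus /opt/module/flume1.8.0/upload/plus.log
ls /opt/module/flume1.8.0/upload     # shows plus.log.COMPLETED once ingested
hdfs dfs -ls /flume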

4.2.4 Case 4: Passing data between Flume agents: one Flume with multiple channels and sinks

Goal: flume1 watches a file for changes and passes the changed content to flume-2, which stores it in HDFS; at the same time flume1 passes the changed content to flume-3, which writes it to the local file system.

(1) Create flume1.conf, which watches a file for changes and fans the data out over two channels and two sinks, feeding flume-2 and flume-3 respectively:

# 1. agent    a source can map to n channels (1:n); a sink maps to exactly one channel (1:1)

a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# Replicate the data flow to multiple channels
a1.sources.r1.selector.type = replicating

# 2.source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/plus
a1.sources.r1.shell = /bin/bash -c

# 3.sink1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = bigdata112
a1.sinks.k1.port = 4141

# sink2
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = bigdata113
a1.sinks.k2.port = 4141

# 4. channel 1
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# 4. channel 2
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

(2) Create flume-2.conf, which receives events from flume1, with one channel and one sink, and delivers the data to HDFS:

# 1 agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# 2 source
a2.sources.r1.type = avro
a2.sources.r1.bind = bigdata112
a2.sources.r1.port = 4141

# 3 sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://bigdata111:9000/flume2/%H

# Prefix for files uploaded to HDFS
a2.sinks.k1.hdfs.filePrefix = flume2-

# Whether to round down the timestamp used in the directory path
a2.sinks.k1.hdfs.round = true

# How many time units to round down to
a2.sinks.k1.hdfs.roundValue = 1

# The unit of the rounding value
a2.sinks.k1.hdfs.roundUnit = hour

# Whether to use the local timestamp (instead of an event header)
a2.sinks.k1.hdfs.useLocalTimeStamp = true

# Number of events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100

# File type; compressed formats are also supported
a2.sinks.k1.hdfs.fileType = DataStream

# Roll to a new file after this many seconds
a2.sinks.k1.hdfs.rollInterval = 600

# Roll to a new file once it reaches this size (about 128 MB)
a2.sinks.k1.hdfs.rollSize = 134217700

# Do not roll files based on event count
a2.sinks.k1.hdfs.rollCount = 0

# Minimum number of HDFS block replicas
a2.sinks.k1.hdfs.minBlockReplicas = 1

# 4 channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# 5 Bind
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

(3) Create flume-3.conf, which receives events from flume1, with one channel and one sink, and delivers the data to a local directory:

# 1 agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

# 2 source
a3.sources.r1.type = avro
a3.sources.r1.bind = bigdata113
a3.sources.r1.port = 4141

# 3 sink
a3.sinks.k1.type = file_roll
# Note: this directory must be created in advance
a3.sinks.k1.sink.directory = /opt/flume3

# 4 channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# 5 Bind
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

Heads-up: the local output directory must already exist; if it does not, Flume will not create it for you (see the command below).
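So create it before starting flume-3:

mkdir -p /opt/flume3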

(4) Run the test: start the receiving agents flume-2 and flume-3 first, then flume1; modify the watched file and observe the results:

$ bin/flume-ng agent --conf conf/ --name a1 --conf-file jobconf/flume1.conf

$ bin/flume-ng agent --conf conf/ --name a2 --conf-file jobconf/flume2.conf

$ bin/flume-ng agent --conf conf/ --name a3 --conf-file jobconf/flume3.conf
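Then append to the watched file and check both destinations (the test string is arbitrary):

echo "replication test" >> /opt/plus
hdfs dfs -ls /flume2
ls /opt/flume3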


4.2.5 Case 5: Passing data between Flume agents: multiple Flume agents consolidating into one


Goal: flume11 tails a local file (the config below uses /opt/plus), and flume-22 listens for data on a network port; both send their data to flume-33, which writes the final, merged data to HDFS.

(1) Create flume11.conf, which tails the local file and sinks the data to flume-33:

# 1 agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# 2 source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/plus
a1.sources.r1.shell = /bin/bash -c

# 3 sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = bigdata113
a1.sinks.k1.port = 4141

# 4 channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# 5. Bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(2) Create flume-22.conf, which listens for data on port 44444 and sinks it to flume-33:

# 1 agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# 2 source
a2.sources.r1.type = netcat
a2.sources.r1.bind = bigdata112
a2.sources.r1.port = 44444

# 3 sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = bigdata113
a2.sinks.k1.port = 4141

# 4 channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# 5 Bind
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

(3) Create flume33.conf, which receives the streams sent by flume11 and flume-22 and sinks the merged data to HDFS:

# 1 agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

# 2 source
a3.sources.r1.type = avro
a3.sources.r1.bind = bigdata113
a3.sources.r1.port = 4141

# 3 sink
a3.sinks.k1.type = hdfs
a3.sinks.k1.hdfs.path = hdfs://bigdata111:9000/flume3/%H

# Prefix for files uploaded to HDFS
a3.sinks.k1.hdfs.filePrefix = flume3-

# Whether to round down the timestamp used in the directory path
a3.sinks.k1.hdfs.round = true

# How many time units to round down to
a3.sinks.k1.hdfs.roundValue = 1

# The unit of the rounding value
a3.sinks.k1.hdfs.roundUnit = hour

# Whether to use the local timestamp (instead of an event header)
a3.sinks.k1.hdfs.useLocalTimeStamp = true

# Number of events to accumulate before flushing to HDFS
a3.sinks.k1.hdfs.batchSize = 100

# File type; compressed formats are also supported
a3.sinks.k1.hdfs.fileType = DataStream

# Roll to a new file after this many seconds
a3.sinks.k1.hdfs.rollInterval = 600

# Roll to a new file once it reaches this size (about 128 MB)
a3.sinks.k1.hdfs.rollSize = 134217700

# Do not roll files based on event count
a3.sinks.k1.hdfs.rollCount = 0

# Minimum number of HDFS block replicas
a3.sinks.k1.hdfs.minBlockReplicas = 1

# 4 channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# 5 Bind
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

(4) Run the test: start the agents in order (flume-33 first, then flume-22, then flume11), generate data, and observe the results:

$ bin/flume-ng agent --conf conf/ --name a3 --conf-file jobconf/flume33.conf

$ bin/flume-ng agent --conf conf/ --name a2 --conf-file jobconf/flume22.conf

$ bin/flume-ng agent --conf conf/ --name a1 --conf-file jobconf/flume11.conf

Send some data:

1. Open telnet bigdata112 44444 and type 5555555 (the flume-22 netcat source is bound to bigdata112)
2. Append 666666 to /opt/plus (concrete commands are sketched below)
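Concretely, using the values from the steps above:

telnet bigdata112 44444      # then type: 5555555
echo 666666 >> /opt/plus
hdfs dfs -ls /flume3         # check the merged output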
