Big Data Learning: Flume

Introduction to Flume

Concept: Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple, flexible architecture based on streaming data flows. It is robust and fault tolerant, with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple, extensible data model that allows for online analytic applications.
Summary: Flume is a distributed, highly reliable, highly available framework for collecting, aggregating, and moving log data, with a simple streaming character: the input is streaming data, but Flume performs no computation on it.

Key Points

(Figure: Flume architecture overview)

  1. event: Flume wraps incoming log data one-to-one; each record becomes one Event of the form {"headers": {...}, "body": ...} (see the example after this list).
  2. agent: each machine node in a Flume cluster runs an agent. A standard agent contains three parts, source, channel, and sink, which together cover a node's whole receive, wrap, buffer, and transmit workflow.
  3. source: the data source; it receives data from upstream, wraps it into Events, and writes them to the channel for buffering.
  4. channel: buffers the data between source and sink; since throughput is the priority here, memory is typically used.
  5. sink: consumes events from the channel and delivers them to the configured destination.
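
For concreteness, here is one event in the JSON form that Flume's HTTP source accepts; it is exactly the payload used in the curl test later in this article:

[{"headers":{"tester":"tony"},"body":"hello http flume"}]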

Installing Flume

Extract and install

Download version 1.9 from the official website and upload it to the virtual machine.

# Extract the archive to install Flume
tar -xvf apache-flume-1.9.0-bin.tar.gz

Add a flume.properties file to the conf directory.

flume.properties explained

# a1 is the agent's custom name; it matches the -n option in the start command

a1.sources  =  r1     # the agent's sources; there can be more than one
a1.sinks  =  k1       # the agent's sinks; there can be more than one
a1.channels  =  c1    # the agent's channels; generally one channel per sink

a1.sources.r1.type  =  avro      # source type: avro
a1.sources.r1.bind  =  0.0.0.0   # address to listen on; usually the local machine, receiving passively
a1.sources.r1.port  =  22222     # port to listen on

a1.sinks.k1.type  =  avro                    # sink type: avro
a1.sinks.k1.hostname  =  192.168.65.162      # IP of the sink's target node
a1.sinks.k1.port  =  22222                   # target port

a1.channels.c1.type  =  memory               # channel type: memory
a1.channels.c1.capacity  =  1000             # storage capacity, so the channel cannot seize unbounded memory and disturb other processes
a1.channels.c1.transactionCapacity  =  100   # transaction capacity

a1.sources.r1.channels  =  c1    # bind the source to the channel
a1.sinks.k1.channel  =  c1       # bind the sink to the channel

Start:

# Run from the /home/app/apache-flume-1.9.0-bin/conf directory
../bin/flume-ng agent -c ./ -f ./flume.properties -n a1 -Dflume.root.logger=INFO,console

Test

# Once started, Flume occupies the current terminal; open a new terminal and run the following from any directory.
# Note: this curl test needs an http source; with the avro source configured above, change a1.sources.r1.type to http first (as in the Channel exercise below).
curl -X POST -d '[{"headers":{"tester":"tony"},"body":"hello http flume"}]' http://hadoop01:22222

Result: the received data shows up in the Flume console output.
Monitoring a directory

# In another terminal, create the spool directory and cd into it
mkdir -p /home/data/spooldir
cd /home/data/spooldir

Modify the configuration file

# Modify flume.properties as follows
a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  spooldir
a1.sources.r1.spoolDir = /home/data/spooldir
 
a1.sinks.k1.type  =  logger
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1
  1. Start the agent from the conf directory of the Flume install:
../bin/flume-ng agent -c ./ -f ./flume.properties -n a1 -Dflume.root.logger=INFO,console
  2. Drop a file with some content into /home/data/spooldir; the spooldir source picks it up and the logger sink prints it to the console.

Case Exercises

Source exercises

Avro
Modify flume.properties:

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  avro
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  logger
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1
  1. Create a file log.txt under /home/data and add some data to it.
  2. Start the agent from the conf directory of the Flume install.
  3. To simulate sending avro data, run from Flume's bin directory:
# Start the avro client
./flume-ng avro-client -c ../conf -H hadoop01 -p 22222 -F /home/data/log.txt

Spooldir

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  spooldir
a1.sources.r1.spoolDir = /home/data/spooldir
 
a1.sinks.k1.type  =  logger
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Create the spooldir folder under /home/data.
Start the agent.
Drop a file with content into spooldir; its contents appear in Flume's log output. (Note: the spooling directory source expects files to be complete and unmodified once placed, so rather than editing a file in place with vim, write it elsewhere and move it in, as sketched below.)
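
A minimal sketch of that workflow (file name and content are arbitrary):

# Write the file outside the spool directory, then move it in atomically
echo "hello spooldir" > /tmp/a.log
mv /tmp/a.log /home/data/spooldir/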

Channel exercises

Channels are generally backed by memory. A memory channel favors speed but loses buffered events if the agent process dies; Flume also ships a durable file channel for cases where reliability matters more than throughput.

a1.sources  =  r1
a1.sinks  =  k1 
a1.channels  =  c1

a1.sources.r1.type  =  http 
a1.sources.r1.bind  =  0.0.0.0 
a1.sources.r1.port  =  22222

a1.sinks.k1.type  =  logger

a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100

a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Sink exercises

Logger

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  http
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  logger
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Avro

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 22222

a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop02
a1.sinks.k1.port = 22222

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

HDFS

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = http
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 22222

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop01:9000/flume/data

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
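
Since the source here is http, the earlier curl test can be reused to push an event, after which files should appear under the configured HDFS path (the payload text is arbitrary):

curl -X POST -d '[{"headers":{"tester":"tony"},"body":"hello hdfs flume"}]' http://hadoop01:22222
hdfs dfs -ls /flume/data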

Cluster deployment

Hadoop01: JDK, Hadoop, Flume

Hadoop02: JDK, Flume

Hadoop03: JDK, Flume

Simply copy the Flume directory already installed on hadoop01 to the corresponding location on hadoop02 and hadoop03.
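
A minimal sketch with scp, assuming the install path from the earlier steps and that ssh access between the nodes is already set up:

# Copy the installed Flume directory from hadoop01 to the other two nodes
scp -r /home/app/apache-flume-1.9.0-bin root@hadoop02:/home/app/
scp -r /home/app/apache-flume-1.9.0-bin root@hadoop03:/home/app/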

Case exercises

Multi-hop

Hadoop01

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  http
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  avro
a1.sinks.k1.hostname  =  hadoop02
a1.sinks.k1.port  =  22222
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Hadoop02

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  avro
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  avro
a1.sinks.k1.hostname  =  hadoop03
a1.sinks.k1.port  =  22222
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Hadoop03

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  avro
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  logger
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100

a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1
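
It is easiest to start the three agents downstream-first (hadoop03, then hadoop02, then hadoop01), so that each avro sink finds the next hop's avro source already listening. Then send a quick smoke test (the payload text is arbitrary):

curl -X POST -d '[{"headers":{},"body":"multi-hop test"}]' http://hadoop01:22222

The event should travel hadoop01 -> hadoop02 -> hadoop03 and show up in hadoop03's logger output.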

Fan-in

Hadoop01

a1.sources  =  r1 
a1.sinks  =  k1 
a1.channels  =  c1

a1.sources.r1.type  =  http 
a1.sources.r1.bind  =  0.0.0.0 
a1.sources.r1.port  =  22222
  
a1.sinks.k1.type  =  avro
a1.sinks.k1.hostname  =  hadoop03
a1.sinks.k1.port  =  22222

a1.channels.c1.type  =  memory 
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Hadoop02

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  http
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  avro 
a1.sinks.k1.hostname  =  hadoop03 
a1.sinks.k1.port  =  22222
  
a1.channels.c1.type  =  memory 
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1 
a1.sinks.k1.channel  =  c1

Hadoop03

a1.sources  =  r1 
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  avro
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222

a1.sinks.k1.type  =  logger

a1.channels.c1.type  =  memory 
a1.channels.c1.capacity  =  1000 
a1.channels.c1.transactionCapacity  =  100

a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1
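
To test, start hadoop03 first, then hadoop01 and hadoop02, and post an event to each upstream agent (the payload text is arbitrary); both should show up in hadoop03's logger output:

curl -X POST -d '[{"headers":{},"body":"from hadoop01"}]' http://hadoop01:22222
curl -X POST -d '[{"headers":{},"body":"from hadoop02"}]' http://hadoop02:22222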

Fan-out

Hadoop01

a1.sources  =  r1 
a1.sinks  =  k1 k2 
a1.channels  =  c1 c2

a1.sources.r1.type  =  http
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222

a1.sinks.k1.type  =  avro 
a1.sinks.k1.hostname  =  hadoop02
a1.sinks.k1.port  =  22222 
 
a1.sinks.k2.type  =  avro 
a1.sinks.k2.hostname  =  hadoop03 
a1.sinks.k2.port  =  22222

a1.channels.c1.type  =  memory 
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100
 
a1.channels.c2.type  =  memory
a1.channels.c2.capacity  =  1000 
a1.channels.c2.transactionCapacity  =  100

a1.sources.r1.channels  =  c1 c2 
a1.sinks.k1.channel  =  c1 
a1.sinks.k2.channel  =  c2

Hadoop02

a1.sources  =  r1 
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  avro 
a1.sources.r1.bind  =  0.0.0.0 
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  logger
 
a1.channels.c1.type  =  memory 
a1.channels.c1.capacity  =  1000 
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Hadoop03

a1.sources  =  r1 
a1.sinks  =  k1 
a1.channels  =  c1

a1.sources.r1.type  =  avro
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222
 
a1.sinks.k1.type  =  logger
 
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000 
a1.channels.c1.transactionCapacity  =  100
  
a1.sources.r1.channels  =  c1 
a1.sinks.k1.channel  =  c1
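
Because r1 is bound to both c1 and c2 and no channel selector is configured, Flume's default replicating selector copies every event into both channels, so a single post to hadoop01 should appear in the logger output of both hadoop02 and hadoop03:

curl -X POST -d '[{"headers":{},"body":"fan-out test"}]' http://hadoop01:22222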

Integrating an application, Flume, and HDFS
Integrating log4j with Flume
Configure log4j.properties

log4j.rootLogger = info,stdout,flume

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = %m%n

# appender flume
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = hadoop01
log4j.appender.flume.Port = 22222
log4j.appender.flume.UnsafeMode = true
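
The Log4jAppender above ships in Flume's log4j appender client library, which must be on the application's classpath (in Maven this should be the org.apache.flume.flume-ng-clients:flume-ng-log4jappender artifact). A minimal sketch of application code, using a hypothetical class name, showing that an ordinary log4j call now also sends the message to the avro source on hadoop01:22222:

package cn.tedu.flume;

import org.apache.log4j.Logger;

public class LogDemo {
    private static final Logger LOG = Logger.getLogger(LogDemo.class);

    public static void main(String[] args) {
        // Printed to stdout by the console appender and, at the same time,
        // sent to hadoop01:22222 by the flume appender.
        LOG.info("hello log4j flume");
    }
}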

Integrating Flume with HDFS

Configure flume.properties:

# Name the components of agent a1
a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1

# Describe/configure the source
a1.sources.r1.type  =  avro
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222

# Describe the sink
a1.sinks.k1.type  =  hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop01:9000/jt/data
a1.sinks.k1.hdfs.fileType=DataStream

# Describe the memory channel
a1.channels.c1.type  =  memory
a1.channels.c1.capacity  =  1000
a1.channels.c1.transactionCapacity  =  100

# Bind the source and sink to the channel
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1
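
After starting this agent and running the log4j-enabled application, the log output should land in HDFS; a quick check against the path configured above:

hdfs dfs -ls /jt/data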


Advanced: a custom Sink

pom

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
 
    <groupId>cn.tedu</groupId>
    <artifactId>flume</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.flume</groupId>
            <artifactId>flume-ng-core</artifactId>
            <version>1.9.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flume.flume-ng-sinks</groupId>
            <artifactId>flume-hdfs-sink</artifactId>
            <version>1.9.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flume.flume-ng-sinks</groupId>
            <artifactId>flume-hive-sink</artifactId>
            <version>1.9.0</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.14</version>
        </dependency>
 
    </dependencies>
</project>

Code

package cn.tedu.flume;
 
import org.apache.flume.*;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
 
/**
 * A custom Flume sink that simply logs each event's body.
 */
public class MySink extends AbstractSink implements Configurable {
    private static final Logger LOG = LoggerFactory.getLogger(MySink.class);
 
    @Override
    public Status process() throws EventDeliveryException {
        Status status = Status.READY;
        Channel channel = getChannel();
        Transaction transaction = channel.getTransaction();
        transaction.begin();
        try {
            Event event = channel.take();
            if (event == null) {
                // Nothing in the channel right now; tell the sink runner to
                // back off instead of busy-waiting inside the transaction.
                status = Status.BACKOFF;
            } else {
                LOG.info(new String(event.getBody()));
                LOG.info("output goes here");
            }
            transaction.commit();
        } catch (Exception e) {
            transaction.rollback();
            LOG.error("Failed to process event", e);
            status = Status.BACKOFF;
        } finally {
            transaction.close();
        }
        return status;
    }
 
    @Override
    public void configure(Context context) {
        // No configurable properties for this sink.
    }
}

Package the code into a jar and upload it to the lib directory under the Flume install directory; a build sketch follows.
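
A minimal build-and-deploy sketch, assuming the install path from the earlier steps; the jar name follows the artifactId and version in the pom above:

# Build the jar and drop it into Flume's lib directory
mvn clean package
cp target/flume-1.0-SNAPSHOT.jar /home/app/apache-flume-1.9.0-bin/lib/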
Modify the configuration file:

mysink.properties

a1.sources  =  r1
a1.sinks  =  k1
a1.channels  =  c1
 
a1.sources.r1.type  =  http
a1.sources.r1.bind  =  0.0.0.0
a1.sources.r1.port  =  22222 
 
a1.sinks.k1.type  =  cn.tedu.flume.MySink
 
a1.channels.c1.type  =  memory 
a1.channels.c1.capacity  =  1000 
a1.channels.c1.transactionCapacity  =  100
 
a1.sources.r1.channels  =  c1
a1.sinks.k1.channel  =  c1

Start:

../bin/flume-ng agent -c ./ -f ./mysink.properties -n a1 -Dflume.root.logger=INFO,console
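
As before, post a test event from another terminal (the payload text is arbitrary) and watch the agent's console; the sink's log lines should appear for each event:

curl -X POST -d '[{"headers":{},"body":"hello custom sink"}]' http://hadoop01:22222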