Flume Notes 2: Examples

Example 1: Replication and Multiplexing


vim a1.conf
# First-hop Flume agent (a1) on hadoop102
# Name the components on this agent
a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

#Source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 4444

#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 10000
a1.channels.c2.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

# Bind the source and sinks to the channels
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
vim a2.conf
# Second-hop Flume agent (a2) on hadoop103: write to HDFS
# Name the components on this agent
a2.sources = r1
a2.channels = c1
a2.sinks = k1

#Source
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop103
a2.sources.r1.port = 6666

#Channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 10000
a2.channels.c1.transactionCapacity = 100

#Sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop102:9820/flume/%Y%m%d/%H
# Prefix for files uploaded to HDFS
a2.sinks.k1.hdfs.filePrefix = logs-
# Whether to roll folders based on time
a2.sinks.k1.hdfs.round = true
# How many time units before a new folder is created
a2.sinks.k1.hdfs.roundValue = 1
# Time unit used for rolling folders
a2.sinks.k1.hdfs.roundUnit = hour
# Whether to use the local timestamp
a2.sinks.k1.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
# File type; compression is supported
a2.sinks.k1.hdfs.fileType = DataStream
# How often (seconds) to roll to a new file
a2.sinks.k1.hdfs.rollInterval = 60
# Roll the file when it reaches this size (bytes)
a2.sinks.k1.hdfs.rollSize = 134217700
# Do not roll files based on the number of events
a2.sinks.k1.hdfs.rollCount = 0
# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
vim a3.conf
# Second-hop Flume agent (a3) on hadoop104: output to the console
# Name the components on this agent
a3.sources = r1
a3.channels = c1
a3.sinks = k1

#Source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop104
a3.sources.r1.port = 8888


#Channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 10000
a3.channels.c1.transactionCapacity = 100

#Sink
a3.sinks.k1.type = logger


# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
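
The three agents can then be started, downstream agents first, and tested from hadoop102. A minimal sketch, assuming the conf files above are saved in a datas/ directory under the Flume home on their respective hosts (a3 on hadoop104, a2 on hadoop103, a1 on hadoop102):

# hadoop104: console agent
bin/flume-ng agent -c conf/ -f datas/a3.conf -n a3 -Dflume.root.logger=INFO,console
# hadoop103: HDFS agent
bin/flume-ng agent -c conf/ -f datas/a2.conf -n a2
# hadoop102: netcat source agent
bin/flume-ng agent -c conf/ -f datas/a1.conf -n a1
# send test data from hadoop102; with the default replicating selector,
# each line should show up both on the hadoop104 console and in HDFS
nc hadoop102 4444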

Example 2: Load Balancing

Only a1 is modified.

vim a1.conf
# Name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1 k2

#Source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 4444
#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = random

# Bind the source and sinks to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
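
To see the load balancing in action, a rough sketch assuming a2 and a3 from Example 1 are already running on hadoop103 and hadoop104:

nc hadoop102 4444
1
2
3

With the random selector and backoff enabled, each line goes to only one of the two sinks, so the events are split between HDFS (via hadoop103) and the hadoop104 console rather than replicated to both.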

Example 3: Failover


Only a1 is modified.

vim a1.conf
# Name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1 k2

#Source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 4444
#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000

# Bind the source and sinks to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
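
To verify the failover, a rough sketch assuming a2 and a3 from Example 1 are running; k2 has the higher priority (10), so hadoop104 receives the events first:

nc hadoop102 4444
hello

Then stop the a3 agent on hadoop104 (for example with Ctrl+C) and keep typing into the same nc session: the sink processor fails over to k1, and the new events are delivered to hadoop103 and land in HDFS.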

Example 4: Aggregation


vim a1.conf
-----------------------
# Name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1

#Source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/flume-1.9.0/datas/hive.log

#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop104
a1.sinks.k1.port = 8888

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

vim a2.conf
---------------------
# Name the components on this agent
a2.sources = r1
a2.channels = c1
a2.sinks = k1

#Source
a2.sources.r1.type = netcat
a2.sources.r1.bind = hadoop103
a2.sources.r1.port = 6666

#Channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 10000
a2.channels.c1.transactionCapacity = 100

#Sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop104
a2.sinks.k1.port = 8888


# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
vim a3.conf
--------------------------
# Name the components on this agent
a3.sources = r1
a3.channels = c1
a3.sinks = k1

#Source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop104
a3.sources.r1.port = 8888

#Channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 10000
a3.channels.c1.transactionCapacity = 100

#Sink
a3.sinks.k1.type = hdfs
a3.sinks.k1.hdfs.path = hdfs://hadoop102:9820/flume/%Y%m%d/%H
# Prefix for files uploaded to HDFS
a3.sinks.k1.hdfs.filePrefix = logs-
# Whether to roll folders based on time
a3.sinks.k1.hdfs.round = true
# How many time units before a new folder is created
a3.sinks.k1.hdfs.roundValue = 1
# Time unit used for rolling folders
a3.sinks.k1.hdfs.roundUnit = hour
# Whether to use the local timestamp
a3.sinks.k1.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a3.sinks.k1.hdfs.batchSize = 100
# File type; compression is supported
a3.sinks.k1.hdfs.fileType = DataStream
# How often (seconds) to roll to a new file
a3.sinks.k1.hdfs.rollInterval = 60
# Roll the file when it reaches this size (bytes)
a3.sinks.k1.hdfs.rollSize = 134217700
# Do not roll files based on the number of events
a3.sinks.k1.hdfs.rollCount = 0

# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
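
To test the aggregation, a minimal sketch: start a3 on hadoop104 first, then a1 on hadoop102 and a2 on hadoop103, and feed both sources:

# hadoop102: append to the file tailed by the exec source
echo "hello from hadoop102" >> /opt/module/flume-1.9.0/datas/hive.log
# hadoop103: send data through the netcat source
nc hadoop103 6666

Both streams should end up in the same HDFS directory, /flume/%Y%m%d/%H on hadoop102.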

Example 5: Custom Interceptor

**1) Requirement**

Use Flume to collect local server logs. Logs of different types must be sent to different analysis systems.

**2) Analysis**

In real projects, a single server may produce many different types of logs, and different types may need to be sent to different analysis systems. This is where Flume's Multiplexing topology is used: the multiplexing channel selector routes each event to a channel according to the value of a given key in the event's headers. We therefore need a custom Interceptor that assigns different values to that header key for different types of events.

In this example we use port data to simulate logs, with single digits and single letters standing for different log types. The custom interceptor distinguishes digits from letters and sends them to different analysis systems (channels).

**3) Implementation Steps**

(1) Create a Maven project and add the following dependency.

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
</dependency>

(2) Define a LogInterceptor class that implements the Interceptor interface.

package com.atguigu.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;
import org.apache.flume.interceptor.Interceptor.Builder;

import java.util.List;
import java.util.Map;

public class LogInterceptor implements Interceptor {
    @Override
    public void initialize() {

    }

    /**
     * Process a single event: tag it in the headers so that the
     * multiplexing channel selector can route it.
     */
    @Override
    public Event intercept(Event event) {
        //1. Get the body
        String body = new String(event.getBody());
        //2. Get the headers
        Map<String, String> headers = event.getHeaders();
        //3. Tag the event: letters vs. digits (matches selector.header = type)
        if (body.length() > 0 && Character.isLetter(body.charAt(0))) {
            headers.put("type", "letter");
        } else {
            headers.put("type", "number");
        }

        return event;
    }

    /**
     * Process a batch of events by delegating to the single-event method.
     */
    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {

    }


    public static class MyBuilder implements Builder{

        @Override
        public Interceptor build() {
            return new LogInterceptor();
        }

        /**
         * Reads configuration for the interceptor (nothing to configure here).
         */
        @Override
        public void configure(Context context) {

        }
    }
}

(3) Edit the Flume configuration files.

On hadoop102, configure Flume1 with one netcat source and one sink group (two avro sinks), together with the corresponding channel selector and interceptor:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.atguigu.flume.interceptor.LogInterceptor$MyBuilder
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = type
a1.sources.r1.selector.mapping.letter = c1
a1.sources.r1.selector.mapping.number = c2
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type=avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Use a channel which buffers events in memory
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100


# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

On hadoop103, configure Flume2 with one avro source and one logger sink.

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop103
a1.sources.r1.port = 6666

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.channel = c1
a1.sources.r1.channels = c1

On hadoop104, configure Flume3 with one avro source and one logger sink.

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop104
a1.sources.r1.port = 8888

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.channel = c1
a1.sources.r1.channels = c1

(4) Start the Flume agents on hadoop102, hadoop103, and hadoop104; the start-up order matters (downstream agents first), as sketched below.

(5) On hadoop102, use netcat to send letters and digits to localhost:44444.

(6) Observe the logs printed on hadoop103 and hadoop104.
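
A minimal sketch of steps (4) and (5); the conf file names (flume1.conf, flume2.conf, flume3.conf), the jar name, and the Flume home are placeholders for whatever you actually used:

# hadoop102: put the packaged interceptor jar into Flume's lib directory first
cp loginterceptor.jar /opt/module/flume-1.9.0/lib/
# hadoop103 and hadoop104: start the logger agents first
bin/flume-ng agent -c conf/ -f datas/flume2.conf -n a1 -Dflume.root.logger=INFO,console
bin/flume-ng agent -c conf/ -f datas/flume3.conf -n a1 -Dflume.root.logger=INFO,console
# hadoop102: start the agent with the interceptor, then send single letters and digits
bin/flume-ng agent -c conf/ -f datas/flume1.conf -n a1
nc localhost 44444

With the mappings above, letters should be printed by the agent on hadoop103 and digits by the agent on hadoop104.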

Example 6: Custom Source

**1) Introduction**

A Source is the component that receives data into the Flume agent. It can handle log data of various types and formats, including avro, thrift, exec, jms, spooling directory, netcat, sequence generator, syslog, http, and legacy. The source types provided out of the box already cover a lot, but sometimes they cannot meet the needs of real development, and we have to implement a custom source.

The official documentation also describes the interface for custom sources:

https://flume.apache.org/FlumeDeveloperGuide.html#source

According to the official guide, a custom MySource must extend the AbstractSource class and implement the Configurable and PollableSource interfaces.

Implement the following methods:

getBackOffSleepIncrement()   // backoff step size

getMaxBackOffSleepInterval() // maximum backoff time

configure(Context context)   // initialization: read settings from the configuration file

process()                    // fetch data, wrap it in an Event, and write it to the channel; called in a loop

Typical use case: reading data from MySQL or other file systems.

**2) Requirement**

Use Flume to receive data, add a prefix to every record, and output it to the console. The prefix is configurable in the Flume configuration file.


**4) Coding**

(1) Add the pom dependency

<dependencies>
    <dependency>
        <groupId>org.apache.flume</groupId>
        <artifactId>flume-ng-core</artifactId>
        <version>1.9.0</version>
    </dependency>
</dependencies>

(2) Write the code

package com.atguigu;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.SimpleEvent;
import org.apache.flume.source.AbstractSource;

import java.util.UUID;
import java.util.concurrent.TimeUnit;

/**
 * A custom Source extends Flume's AbstractSource class and implements
 * the Configurable and PollableSource interfaces.
 */
public class MySource extends AbstractSource implements Configurable, PollableSource {

    private String prefix;
    /**
     * The core processing method of the source.
     *
     * Flume calls this method in a loop.
     * @return Status.READY or Status.BACKOFF
     * @throws EventDeliveryException
     */
    @Override
    public Status process() throws EventDeliveryException {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        Status status = null;
        try {
            // Build an event
            Event event = getSomeData();

            // Hand the event to the channel processor, which puts it into the channel(s)
            getChannelProcessor().processEvent(event);

            // Processed successfully
            status = Status.READY;
        } catch (Throwable t) {
            // Something went wrong; back off
            status = Status.BACKOFF;
        }

        return status;
    }

    /**
     * The actual data-collection logic goes here.
     *
     * For this demo: generate a random UUID on every call.
     */
    public  Event getSomeData() {
        String uuid = UUID.randomUUID().toString();

        Event event = new SimpleEvent();

        event.setBody((prefix +"--" +  uuid).getBytes());
        event.getHeaders().put("flume","NB");

        return event ;
    }




    /**
     * Backoff increment added after each failed attempt.
     */
    @Override
    public long getBackOffSleepIncrement() {
        return 1;
    }

    /**
     * Maximum backoff interval.
     */
    @Override
    public long getMaxBackOffSleepInterval() {
        return 10;
    }

    /**
     * Reads configuration items from the agent's configuration file,
     * e.g. a1.sources.r1.prefix for this source.
     */
    @Override
    public void configure(Context context) {
        prefix = context.getString("prefix","AT");
    }
}

**5) Testing**

(1) Package the code

Package the code and put the resulting jar into Flume's lib directory (under /opt/module/flume).
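
A rough sketch of this step; the jar name depends on your project's artifactId and version:

# on the development machine
mvn clean package
# copy the jar into the Flume host's lib directory
cp target/mysource-1.0-SNAPSHOT.jar /opt/module/flume/lib/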

(2) Configuration file

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = com.atguigu.MySource
# Optional prefix read by MySource.configure(); defaults to "AT"
#a1.sources.r1.prefix = atguigu

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(3) Start the agent

bin/flume-ng agent -c conf/ -f job/mysource.conf -n a1 -Dflume.root.logger=INFO,console

Example 7: Custom Sink

**1) Introduction**

A Sink continuously polls the Channel for events, removes them in batches, and writes them in batches to a storage or indexing system, or sends them on to another Flume agent.

The Sink is fully transactional. Before removing a batch of data from the Channel, each Sink starts a transaction with the Channel. Once the batch of events has been successfully written out to the storage system or to the next Flume agent, the Sink commits the transaction, and the Channel then deletes those events from its internal buffer.

Sink destinations include hdfs, logger, avro, thrift, ipc, file, null, HBase, solr, and custom sinks. The sink types provided out of the box already cover a lot, but sometimes they cannot meet the needs of real development, and we have to implement a custom sink.

The official documentation also describes the interface for custom sinks:

https://flume.apache.org/FlumeDeveloperGuide.html#sink

According to the official guide, a custom MySink must extend the AbstractSink class and implement the Configurable interface.

Implement the following methods:

configure(Context context) // initialization: read settings from the configuration file

process()                  // take events from the Channel and process them; called in a loop

Typical use case: reading data from the Channel and writing it to MySQL or other file systems.

**2) Requirement**

Use Flume to receive data and, in the Sink, add a prefix and suffix to every record before printing it to the console. The prefix and suffix are configurable in the Flume job configuration file.

**3) Coding**

package com.atguigu;

import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.TimeUnit;

/**
 * A custom Sink extends Flume's AbstractSink class and implements
 * the Configurable interface.
 */
public class MySink extends AbstractSink implements Configurable {

    // Logger used to print events to the console
    Logger logger = LoggerFactory.getLogger(MySink.class);

    // Prefix and suffix added to every record (read from the configuration)
    private String prefix;
    private String suffix;

    /**
     * The core processing method of the sink.
     *
     * Flume calls this method in a loop.
     * @return Status.READY or Status.BACKOFF
     * @throws EventDeliveryException
     */
    @Override
    public Status process() throws EventDeliveryException {
        Status status = null;
        // Get the channel this sink is attached to
        Channel channel = getChannel();
        // Get a transaction from the channel
        Transaction transaction = channel.getTransaction();
        try {
            // Start the transaction
            transaction.begin();

            // Take an event from the channel
            Event event;
            while (true) {
                event = channel.take();
                if (event != null) {
                    break;
                }
                // Nothing to take yet; wait a moment
                TimeUnit.SECONDS.sleep(1);
            }
            // Process the event
            processEvent(event);

            // Commit the transaction
            transaction.commit();
            // Processed successfully
            status = Status.READY;
        } catch (Throwable t) {
            // Roll back the transaction
            transaction.rollback();
            // Something went wrong; back off
            status = Status.BACKOFF;
        } finally {
            // Close the transaction
            transaction.close();
        }

        return status;
    }

    /**
     * Process a single event.
     *
     * Requirement: print it to the console via the Logger,
     * wrapped with the configured prefix and suffix.
     */
    public void processEvent(Event event) {
        logger.info(prefix + new String(event.getBody()) + suffix);
    }

    @Override
    public void configure(Context context) {
        // Read the prefix and suffix from the job configuration (empty by default)
        prefix = context.getString("prefix", "");
        suffix = context.getString("suffix", "");
    }
}

**4) Testing**

(1) Package the code

Package the code and put the resulting jar into Flume's lib directory (under /opt/module/flume).

(2) Configuration file

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = com.atguigu.MySink
#a1.sinks.k1.prefix = atguigu:
a1.sinks.k1.suffix = :atguigu

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(3) Start the agent

bin/flume-ng agent -c conf/ -f job/mysink.conf -n a1 -Dflume.root.logger=INFO,console
nc localhost 44444
hello
OK

Flume Data Flow Monitoring

Installing and Deploying Ganglia

Ganglia consists of three parts: gmond, gmetad, and gweb.

gmond (Ganglia Monitoring Daemon) is a lightweight service installed on every node whose metrics you want to collect. With gmond you can easily gather many system metrics, such as CPU, memory, disk, network, and active-process data.

gmetad (Ganglia Meta Daemon) is the service that aggregates all of this information and stores it on disk in RRD format.

gweb (Ganglia Web) is Ganglia's visualization tool, a PHP front end that displays the data stored by gmetad in a browser, presenting the cluster's metrics as charts.

**1) Install Ganglia**

(1) Planning

hadoop102:     gweb  gmetad  gmond

hadoop103:     gmond

hadoop104:     gmond

(2) Install epel-release on hadoop102, hadoop103, and hadoop104:

sudo yum -y install epel-release

(3) On hadoop102, install:

sudo yum -y install ganglia-gmetad

sudo yum -y install ganglia-web

sudo yum -y install ganglia-gmond

(4) On hadoop103 and hadoop104, install:

sudo yum -y install ganglia-gmond

**2) On hadoop102, edit /etc/httpd/conf.d/ganglia.conf**

sudo vim /etc/httpd/conf.d/ganglia.conf

Modify it as follows; the Require ip line should allow the machine you will browse from (here, the IP of the Windows host used to access Ganglia):

# Ganglia monitoring system php web frontend
#
Alias /ganglia /usr/share/ganglia
<Location /ganglia>
  # Require local
  # To access Ganglia from Windows, allow the IP of the Windows host
  Require ip 192.168.202.1
  # Require ip 10.1.2.3
  # Require host example.org
</Location>

**5) On hadoop102, edit /etc/ganglia/gmetad.conf**

sudo vim /etc/ganglia/gmetad.conf

Modify it to:

data_source "my cluster" hadoop102

**6) On hadoop102, hadoop103, and hadoop104, edit /etc/ganglia/gmond.conf**

sudo vim /etc/ganglia/gmond.conf
Modify it to:
cluster {
  name = "my cluster"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
udp_send_channel {
  #bind_hostname = yes # Highly recommended, soon to be default.
                       # This option tells gmond to use a source address
                       # that resolves to the machine's hostname.  Without
                       # this, the metrics may appear to come from any
                       # interface and the DNS names associated with
                       # those IPs will be used to create the RRDs.
  # mcast_join = 239.2.11.71
  # Send metric data to hadoop102
  host = hadoop102
  port = 8649
  ttl = 1
}
udp_recv_channel {
  # mcast_join = 239.2.11.71
  port = 8649
  # Accept data from any address
  bind = 0.0.0.0
  retry_bind = true
  # Size of the UDP buffer. If you are handling lots of metrics you really
  # should bump it up to e.g. 10MB or even higher.
  # buffer = 10485760
}

**7) On hadoop102, edit /etc/selinux/config**

sudo vim /etc/selinux/config
Modify it to:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Note: disabling SELinux this way only takes effect after a reboot. To turn it off temporarily without rebooting, run:

sudo setenforce 0

**8) Start Ganglia**

(1) On hadoop102, hadoop103, and hadoop104, start gmond:

sudo systemctl start gmond

(2) On hadoop102, start httpd and gmetad:

sudo systemctl start httpd

sudo systemctl start gmetad

**9) Open the Ganglia web page**

http://hadoop102/ganglia

Testing Flume Monitoring

**1) Start a Flume job with Ganglia reporting enabled**

bin/flume-ng agent \
-c conf/ \
-n a1 \
-f datas/netcat-flume-logger.conf \
-Dflume.root.logger=INFO,console \
-Dflume.monitoring.type=ganglia \
-Dflume.monitoring.hosts=hadoop102:8649

**2) Send data and watch the Ganglia monitoring charts**

nc localhost 44444