Custom Interceptor
(1) Case Requirements
Use Flume to collect local server logs and, according to the log type, send different kinds of logs to different analysis systems.
(2) Requirements Analysis
In real-world development, a single server may produce many different types of logs, and different types may need to be sent to different analysis systems. This calls for the Multiplexing structure in the Flume topology. Multiplexing works by routing each event to a different Channel according to the value of a chosen key in the event's Header, so we need a custom Interceptor that writes a different value for that key into the Header of each type of event.
In this case we use netcat port data to simulate logs, and whether the event body contains "hello" to simulate different log types. The custom interceptor checks whether "hello" is present, and the events are then routed to different analysis systems (Channels) accordingly.
(3) Implementation Steps:
Create a Maven project and add the following dependency:
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
</dependency>
Define a MyInterceptor class that implements the Interceptor interface:
package com.atguigu.flume;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MyInterceptor implements Interceptor {

    private ArrayList<Event> list;

    // initialize() is called once when the interceptor is created
    @Override
    public void initialize() {
        // create the reusable list used for batch processing
        list = new ArrayList<Event>();
    }

    // process a single event
    @Override
    public Event intercept(Event event) {
        // 1. get the header map
        Map<String, String> headers = event.getHeaders();
        // 2. get the body as a string
        String body = new String(event.getBody());
        // 3. tag the event according to whether the body contains "hello"
        if (body.contains("hello")) {
            headers.put("type", "hello");
        } else {
            headers.put("type", "nohello");
        }
        return event;
    }

    // process a batch of events
    @Override
    public List<Event> intercept(List<Event> events) {
        // clear the member list (not the incoming batch), then refill it
        list.clear();
        for (Event event : events) {
            list.add(intercept(event));
        }
        return list;
    }

    @Override
    public void close() {
    }

    public static class Builder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new MyInterceptor();
        }

        @Override
        public void configure(Context context) {
        }
    }
}
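Before packaging, you can sanity-check the tagging logic outside of Flume. The following is a minimal local sketch (the test class name and sample bodies are made up for illustration and are not part of the case) that runs two events through the interceptor and prints the resulting type header:
package com.atguigu.flume;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class MyInterceptorTest {
    public static void main(String[] args) {
        MyInterceptor interceptor = new MyInterceptor();
        interceptor.initialize();

        // build two fake events: one body contains "hello", the other does not
        List<Event> events = Arrays.asList(
                EventBuilder.withBody("hello flume", StandardCharsets.UTF_8),
                EventBuilder.withBody("world", StandardCharsets.UTF_8));

        // expected output: "hello flume -> hello" and "world -> nohello"
        for (Event e : interceptor.intercept(events)) {
            System.out.println(new String(e.getBody()) + " -> " + e.getHeaders().get("type"));
        }

        interceptor.close();
    }
}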
In the Maven project, run install to package the MyInterceptor class into a jar, and put the jar into the /opt/module/flume/lib directory.
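If you build from the command line instead of from the IDE, the equivalent is roughly the following (the jar file name depends on your own artifactId and version, so the name below is only a placeholder):
mvn clean install
cp target/flume-interceptor-1.0-SNAPSHOT.jar /opt/module/flume/lib/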
Edit the Flume configuration files.
On hadoop102, configure the Flume agent (a2) with one netcat source, two avro sinks (each with its own memory channel), and the corresponding ChannelSelector and interceptor.
[atguigu@hadoop102 job]$ vim netcat-flume-loggers4.conf
Add the following content:
# Name the components on this agent
a2.sources = r1
a2.sinks = k1 k2
a2.channels = c1 c2
# Describe/configure the source
a2.sources.r1.type = netcat
a2.sources.r1.bind = localhost
a2.sources.r1.port = 44444
a2.sources.r1.interceptors = i1
a2.sources.r1.interceptors.i1.type = com.atguigu.flume.MyInterceptor$Builder
a2.sources.r1.selector.type = multiplexing
a2.sources.r1.selector.header = type
a2.sources.r1.selector.mapping.hello = c1
a2.sources.r1.selector.mapping.nohello = c2
# Describe the sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop103
a2.sinks.k1.port = 4141
a2.sinks.k2.type = avro
a2.sinks.k2.hostname = hadoop104
a2.sinks.k2.port = 4141
# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r1.channels = c1 c2
a2.sinks.k1.channel = c1
a2.sinks.k2.channel = c2
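In this case the interceptor always sets the type header to either hello or nohello, so every event matches one of the two mappings above. If other header values could appear, the multiplexing selector also supports a fallback channel via selector.default, for example:
a2.sources.r1.selector.default = c2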
On hadoop103, configure a Flume agent with one avro source and one logger sink.
[atguigu@hadoop103 job]$ vim netcat-flume-loggers4.conf
a3.sources = r1
a3.sinks = k1
a3.channels = c1
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop103
a3.sources.r1.port = 4141
a3.sinks.k1.type = logger
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
a3.sinks.k1.channel = c1
a3.sources.r1.channels = c1
On hadoop104, configure a Flume agent with one avro source and one logger sink.
[atguigu@hadoop104 job]$ vim netcat-flume-loggers4.conf
a4.sources = r1
a4.sinks = k1
a4.channels = c1
a4.sources.r1.type = avro
a4.sources.r1.bind = hadoop104
a4.sources.r1.port = 4141
a4.sinks.k1.type = logger
a4.channels.c1.type = memory
a4.channels.c1.capacity = 1000
a4.channels.c1.transactionCapacity = 100
a4.sinks.k1.channel = c1
a4.sources.r1.channels = c1
Start the Flume processes on hadoop102, hadoop103, and hadoop104, paying attention to the startup order: start the downstream agents on hadoop103 and hadoop104 first, then the upstream agent on hadoop102.
[atguigu@hadoop103 flume]$ bin/flume-ng agent -n a3 -c conf/ -f job/netcat-flume-loggers4.conf -Dflume.root.logger=INFO,console
[atguigu@hadoop104 flume]$ bin/flume-ng agent -n a4 -c conf/ -f job/netcat-flume-loggers4.conf -Dflume.root.logger=INFO,console
[atguigu@hadoop102 flume]$ bin/flume-ng agent -n a2 -c conf/ -f job/netcat-flume-loggers4.conf
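The netcat source binds to localhost:44444, so open the nc window on hadoop102 itself:
[atguigu@hadoop102 ~]$ nc localhost 44444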
Type hello in the nc window; the log printed on hadoop103:
Type world in the nc window; the log printed on hadoop104: