Collecting topic_log data into HDFS
Technology selection
Flume: KafkaSource (with interceptor) -> FileChannel -> HDFSSink
Flume hands-on
1) Create the Flume configuration file
[atguigu@hadoop104 flume]$ vim job/kafka_to_hdfs_log.conf
2) The configuration file content is as follows
## Components
a1.sources=r1
a1.channels=c1
a1.sinks=k1
## source1
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 5000
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sources.r1.kafka.topics=topic_log
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.atguigu.interceptor.TimestampInterceptor$Builder
## channel1
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /opt/module/flume/checkpoint/behavior2
a1.channels.c1.dataDirs = /opt/module/flume/data/behavior2/
a1.channels.c1.maxFileSize = 2146435071
a1.channels.c1.capacity = 1000000
a1.channels.c1.keep-alive = 6
## sink1
a1.sinks.k1.type = hdfs
# HA configuration: mycluster is the HDFS nameservice (NameNode high availability), not a single NameNode host
a1.sinks.k1.hdfs.path = hdfs://mycluster/origin_data/edu/log/topic_log/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = log-
a1.sinks.k1.hdfs.round = false
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0
## Write compressed output files (gzip), matching fileType = CompressedStream below
a1.sinks.k1.hdfs.fileType = CompressedStream
a1.sinks.k1.hdfs.codeC = gzip
## Bind source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Then copy the HA cluster's core-site.xml and hdfs-site.xml into Flume's conf directory so the HDFS sink can resolve the mycluster nameservice.
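The interceptor class configured above (i1) exists to copy each event's ts field into the Flume event's "timestamp" header, which the HDFS sink reads to resolve the %Y-%m-%d escape in hdfs.path by event time rather than write time. Below is a minimal JDK-only sketch of that mapping; the class name TimestampHeaderSketch and the regex-based extraction are illustrative assumptions, not the project's actual Gson-based interceptor:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TimestampHeaderSketch {
    // Matches "ts":1666...  or  "ts":"1666..."  in a JSON log line.
    private static final Pattern TS = Pattern.compile("\"ts\"\\s*:\\s*\"?(\\d+)\"?");

    // Returns the headers an interceptor like this would attach to the event.
    public static Map<String, String> headersFor(String body) {
        Map<String, String> headers = new HashMap<>();
        Matcher m = TS.matcher(body);
        if (m.find()) {
            // "timestamp" is the header key the HDFS sink consults for %Y-%m-%d.
            headers.put("timestamp", m.group(1));
        }
        return headers;
    }

    public static void main(String[] args) {
        String line = "{\"common\":{\"mid\":\"mid_01\"},\"ts\":1666000000000}";
        System.out.println(headersFor(line).get("timestamp"));
    }
}
```

Without this header, events that arrive just after midnight would be filed under the wrong day's directory (the "zero-point drift" problem).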
3) Write the interceptor code
Add the Flume dependency (the code also uses Gson for JSON parsing).
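A sketch of the Maven coordinates this implies; the version numbers here are assumptions and should be aligned with the Flume and Gson versions deployed on your cluster:

```xml
<dependencies>
    <!-- Flume API: scope provided, since the agent supplies it at runtime -->
    <dependency>
        <groupId>org.apache.flume</groupId>
        <artifactId>flume-ng-core</artifactId>
        <version>1.9.0</version> <!-- assumed version -->
        <scope>provided</scope>
    </dependency>
    <!-- Gson, used by the interceptor to parse each JSON log line -->
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.6</version> <!-- assumed version -->
    </dependency>
</dependencies>
```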
package com.atguigu.interceptor;

import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;
import java.nio.charset.StandardCharsets;

public class TimestampInterceptor implements Interceptor {
    private JsonParser jsonParser;

    @Override
    public void initialize() {
        jsonParser = new JsonParser();
    }

    @Override
    public Event intercept(Event event) {
        // Each event body is one JSON log line; parse it to read the ts field.
        byte[] body = event.getBody();
        String line = new String(body, StandardCharsets.UTF_8);
        JsonElement element = jsonParser.parse(line);
        JsonObject jsonObject = element.getAsJsonObject();
        String ts = jsonObject