9. Data Collection Module
(III). Log Generation
(1). Launching the Log Generator
1) Code parameter description
// Argument 1: the delay applied between sending records; defaults to 0
Long delay = args.length > 0 ? Long.parseLong(args[0]) : 0L;
// Argument 2: the number of loop iterations; defaults to 1000
int loop_len = args.length > 1 ? Integer.parseInt(args[1]) : 1000;
2) Copy the generated jar, log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar, to the hadoop102 server, then sync it to /opt/module on hadoop103:
[weiwei@hadoop102 module]$ xsync log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar
3) Run the jar program on hadoop102:
[weiwei@hadoop102 module]$ java -classpath log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar com.weiwei.appclient.AppMain >/opt/module/test.log
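To pass the two parameters described above, append them to the command. For example (a hypothetical invocation; the delay unit is assumed to be milliseconds), the following would generate 2000 records with a 300 ms pause between sends:
[weiwei@hadoop102 module]$ java -classpath log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar com.weiwei.appclient.AppMain 300 2000 >/opt/module/test.log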
4) Check the generated log file under /tmp/logs:
[weiwei@hadoop102 module]$ cd /tmp/logs/
[weiwei@hadoop102 logs]$ ls
app-2019-02-10.log
(2). Cluster Log-Generation Startup Script
1) Create the script lg.sh in /home/weiwei/bin:
[weiwei@hadoop102 bin]$ vim lg.sh
2) Put the following in the script:
#! /bin/bash
# Launch the log generator on every host, forwarding the delay ($1) and loop count ($2)
for i in hadoop102 hadoop103
do
ssh $i "java -classpath /opt/module/log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar com.weiwei.appclient.AppMain $1 $2 >/opt/module/test.log &"
done
3) Make the script executable:
[weiwei@hadoop102 bin]$ chmod 777 lg.sh
4) Run the script:
[weiwei@hadoop102 module]$ lg.sh
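lg.sh forwards its first two arguments to AppMain as the delay and loop count, so lg.sh 100 3000, for example, would generate 3000 records per host with a 100 ms delay between sends (again assuming the delay is in milliseconds).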
5) Check the generated data under /tmp/logs on hadoop102 and hadoop103:
[weiwei@hadoop102 logs]$ ls
app-2019-02-10.log
[weiwei@hadoop103 logs]$ ls
app-2019-02-10.log
(3). Cluster Time Modification Script
1) Create the script dt.sh in /home/weiwei/bin:
[weiwei@hadoop102 bin]$ vim dt.sh
2) Put the following in the script:
#!/bin/bash
# Set the OS date on every node to the given value (used to simulate log dates)
log_date=$1
for i in hadoop102 hadoop103 hadoop104
do
ssh -t $i "sudo date -s '$log_date'"
done
Note on ssh -t: sudo needs a terminal in order to prompt for a password, and -t forces a pseudo-terminal to be allocated on the remote side. See: https://www.cnblogs.com/kevingrace/p/6110842.html
3) Make the script executable:
[weiwei@hadoop102 bin]$ chmod 777 dt.sh
4) Run the script:
[weiwei@hadoop102 bin]$ dt.sh 2019-02-10
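If you pass a date together with a time of day, quote the whole value so it reaches the script as a single argument, e.g. dt.sh '2019-02-10 10:30:00'.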
(4). Cluster-Wide Process Viewer Script
1) Create the script xcall.sh in /home/weiwei/bin:
[weiwei@hadoop102 bin]$ vim xcall.sh
2) Put the following in the script:
#! /bin/bash
# Run the given command on every node, labeling each node's output
for i in hadoop102 hadoop103 hadoop104
do
echo --------- $i ----------
ssh $i "$*"
done
3) Make the script executable:
[weiwei@hadoop102 bin]$ chmod 777 xcall.sh
4) Run the script:
[weiwei@hadoop102 bin]$ xcall.sh jps
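jps lists the JVM processes on each node, so this single command verifies which Hadoop and Java daemons are running across the whole cluster.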
(IV). Log Collection with Flume
(1). Project Experience: Flume Components
1) Source
① What advantages does the Taildir Source have over the Exec Source and the Spooling Directory Source?
A: The Taildir Source supports resumable reads and multiple directories. Before the Taildir Source was added (in Flume 1.7), you had to write a custom Source that recorded the read position of each file in order to resume after a restart.
The Exec Source can collect data in real time, but data is lost if the Flume agent is not running or the shell command fails.
The Spooling Directory Source monitors a directory, but it does not support resuming from a recorded position.
② How should batchSize be set?
A: With events of roughly 1 KB, 500-1000 is a good range (the default is 100).
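For example, the batch size could be raised with a line such as a1.sources.r1.batchSize = 500 in the source configuration (illustrative only; the configuration below keeps the default).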
2) Channel
Using a Kafka Channel removes the need for a Sink and improves efficiency.
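With a Kafka Channel, the Source writes events directly into a Kafka topic and downstream consumers read from Kafka, so the usual Channel-to-Sink hop is eliminated.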
(2). Log Collection Flume Configuration
1) Configuration analysis
Flume reads the log data directly; the log files are named app-yyyy-mm-dd.log.
2) The concrete Flume configuration is as follows
① Create the file file-flume-kafka.conf in /opt/module/flume/conf
[weiwei@hadoop102 conf]$ vim file-flume-kafka.conf
② Configure the file with the following content
# Component definitions
a1.sources = r1
a1.channels = c1 c2

# Configure the source: read the logs with a Taildir source
a1.sources.r1.type = TAILDIR
# File recording how far each log has been read (enables resumable reads)
a1.sources.r1.positionFile = /opt/module/flume/test/log_position.json
# Location of the logs to read
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /tmp/logs/app.+
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1 c2

# Interceptors: the ETL interceptor (i1), then the log-type interceptor (i2)
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = com.weiwei.flume.interceptor.LogETLInterceptor$Builder
a1.sources.r1.interceptors.i2.type = com.weiwei.flume.interceptor.LogTypeInterceptor$Builder

# Route events by log type, using the topic header set by the interceptor
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = topic
a1.sources.r1.selector.mapping.topic_start = c1
a1.sources.r1.selector.mapping.topic_event = c2

# Configure the channels
# Start logs (topic_start) go to channel c1
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.channels.c1.kafka.topic = topic_start
a1.channels.c1.parseAsFlumeEvent = false
a1.channels.c1.kafka.consumer.group.id = flume-consumer

# Event logs (topic_event) go to channel c2
a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c2.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.channels.c2.kafka.topic = topic_event
a1.channels.c2.parseAsFlumeEvent = false
a1.channels.c2.kafka.consumer.group.id = flume-consumer
Note: com.weiwei.flume.interceptor.LogETLInterceptor and com.weiwei.flume.interceptor.LogTypeInterceptor are the fully qualified class names of the custom interceptors; adjust them to match your own interceptor classes. Also note that comments in a Flume properties file must sit on their own lines: a trailing # comment on the same line becomes part of the property value.
3) Flume ETL and log-type interceptors
Two custom interceptors are defined in this project: an ETL interceptor and a log-type interceptor.
(a) The ETL interceptor filters out logs whose timestamp is invalid or whose JSON payload is incomplete.
(b) The log-type interceptor separates start logs from event logs so they can be sent to different Kafka topics.
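Interceptors run in the order they are declared in the configuration (i1 before i2), so events are cleaned by the ETL interceptor before the type interceptor tags them.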
① Create a Maven project named flume-interceptor
② Create the package com.weiwei.flume.interceptor
③ Add the following configuration to pom.xml
<dependencies>
<dependency>
<groupId>org.apache.flume</groupId>
<artifactId>flume-ng-core</artifactId>
<version>1.7.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
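Optionally, the flume-ng-core dependency can be marked with <scope>provided</scope>: as the packaging step below notes, the Flume runtime already ships these jars in its lib directory, so they do not need to be bundled into the built artifact.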
④ Create the class LogETLInterceptor in the com.weiwei.flume.interceptor package
Flume ETL interceptor LogETLInterceptor:
package com.weiwei.flume.interceptor;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
public class LogETLInterceptor implements Interceptor {
@Override
public void initialize() {
}
@Override
public Event intercept(Event event) {
// 1. Get the event body
byte[] body = event.getBody();
String log = new String(body, Charset.forName("UTF-8"));
// 2. Validate the log; start logs and event logs are checked differently
if (log.contains("start")) {
if (LogUtils.validateStart(log)){
return event;
}
}else {
if (LogUtils.validateEvent(log)){
return event;
}
}
// 3. Returning null discards events that failed validation
return null;
}
@Override
public List<Event> intercept(List<Event> events) {
ArrayList<Event> interceptors = new ArrayList<>();
for (Event event : events) {
Event intercept1 = intercept(event);
if (intercept1 != null){
interceptors.add(intercept1);
}
}
return interceptors;
}
@Override
public void close() {
}
public static class Builder implements Interceptor.Builder{
@Override
public Interceptor build() {
return new LogETLInterceptor();
}
@Override
public void configure(Context context) {
}
}
}
⑤ Flume log-validation utility class LogUtils
package com.weiwei.flume.interceptor;
import org.apache.commons.lang.math.NumberUtils;
public class LogUtils {
public static boolean validateEvent(String log) {
// Server timestamp | JSON payload, e.g.:
// 1549696569054 | {"cm":{"ln":"-89.2","sv":"V2.0.4","os":"8.2.0","g":"M67B4QYU@gmail.com","nw":"4G","l":"en","vc":"18","hw":"1080*1920","ar":"MX","uid":"u8678","t":"1549679122062","la":"-27.4","md":"sumsung-12","vn":"1.1.3","ba":"Sumsung","sr":"Y"},"ap":"weather","et":[]}
// 1. Split on the '|' separator
String[] logContents = log.split("\\|");
// 2. There must be exactly two parts
if(logContents.length != 2){
return false;
}
// 3. The server timestamp must be exactly 13 digits
if (logContents[0].length()!=13 || !NumberUtils.isDigits(logContents[0])){
return false;
}
// 4. The payload must look like complete JSON
if (!logContents[1].trim().startsWith("{") || !logContents[1].trim().endsWith("}")){
return false;
}
return true;
}
public static boolean validateStart(String log) {
if (log == null){
return false;
}
// A start log must look like complete JSON
if (!log.trim().startsWith("{") || !log.trim().endsWith("}")){
return false;
}
return true;
}
}
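The following is a minimal sketch (a hypothetical helper class, not part of the project) that exercises LogUtils against lines shaped like the sample above; the expected results are noted in the comments.
package com.weiwei.flume.interceptor;

public class LogUtilsDemo {
    public static void main(String[] args) {
        // A valid event log: 13-digit server timestamp, '|', complete JSON
        String ok = "1549696569054|{\"cm\":{\"ln\":\"-89.2\"},\"ap\":\"weather\",\"et\":[]}";
        // Invalid: the timestamp is not 13 digits
        String badTime = "154969|{\"ap\":\"weather\",\"et\":[]}";
        // Invalid: the JSON payload is truncated
        String badJson = "1549696569054|{\"ap\":\"weather\"";

        System.out.println(LogUtils.validateEvent(ok));      // true
        System.out.println(LogUtils.validateEvent(badTime)); // false
        System.out.println(LogUtils.validateEvent(badJson)); // false
        System.out.println(LogUtils.validateStart("{\"en\":\"start\"}")); // true
        System.out.println(LogUtils.validateStart(null));    // false
    }
}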
⑥ Flume log-type interceptor LogTypeInterceptor
package com.weiwei.flume.interceptor;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
public class LogTypeInterceptor implements Interceptor {
@Override
public void initialize() {
}
@Override
public Event intercept(Event event) {
// Distinguish the log type: inspect the body, tag the header
// 1. Get the body
byte[] body = event.getBody();
String log = new String(body, Charset.forName("UTF-8"));
// 2. Get the headers map
Map<String, String> headers = event.getHeaders();
// 3. Determine the log type and write it into the header
if (log.contains("start")) {
headers.put("topic","topic_start");
}else {
headers.put("topic","topic_event");
}
return event;
}
@Override
public List<Event> intercept(List<Event> events) {
ArrayList<Event> interceptors = new ArrayList<>();
for (Event event : events) {
Event intercept1 = intercept(event);
interceptors.add(intercept1);
}
return interceptors;
}
@Override
public void close() {
}
public static class Builder implements Interceptor.Builder{
@Override
public Interceptor build() {
return new LogTypeInterceptor();
}
@Override
public void configure(Context context) {
}
}
}
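Below is a minimal sketch (a hypothetical test class, not part of the project) showing how the interceptor stamps the topic header that the multiplexing selector later reads; EventBuilder is Flume's standard helper for constructing events.
package com.weiwei.flume.interceptor;

import java.nio.charset.StandardCharsets;
import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.interceptor.Interceptor;

public class LogTypeInterceptorDemo {
    public static void main(String[] args) {
        Interceptor interceptor = new LogTypeInterceptor.Builder().build();
        interceptor.initialize();

        // A body containing "start" is tagged for topic_start ...
        Event start = EventBuilder.withBody("{\"en\":\"start\"}", StandardCharsets.UTF_8);
        interceptor.intercept(start);
        System.out.println(start.getHeaders().get("topic")); // topic_start

        // ... every other body is tagged for topic_event
        Event event = EventBuilder.withBody("1549696569054|{\"ap\":\"weather\",\"et\":[]}", StandardCharsets.UTF_8);
        interceptor.intercept(event);
        System.out.println(event.getHeaders().get("topic")); // topic_event

        interceptor.close();
    }
}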
⑦ Packaging
After packaging, only the plain interceptor jar is needed; the dependency jars do not have to be uploaded. The built jar must then be placed in Flume's lib folder.
Note:
Why are the dependency jars unnecessary? Because they already exist in Flume's lib directory.
⑧ First place the built jar into the /opt/module/flume/lib folder on hadoop102.
[weiwei@hadoop102 lib]$ ls | grep interceptor
flume-interceptor-1.0-SNAPSHOT.jar
⑨ Distribute Flume to hadoop103 and hadoop104
[weiwei@hadoop102 module]$ xsync flume/
⑩ Start the Flume agent on hadoop102:
[weiwei@hadoop102 flume]$ bin/flume-ng agent --name a1 --conf-file conf/file-flume-kafka.conf &
(3). Flume Log Collection Start/Stop Script
1) Create the script f1.sh in /home/weiwei/bin:
[weiwei@hadoop102 bin]$ vim f1.sh
2) Put the following in the script:
#! /bin/bash
case $1 in
"start"){
for i in hadoop102 hadoop103
do
echo " -------- starting collection Flume on $i --------"
ssh $i "nohup /opt/module/flume/bin/flume-ng agent --conf-file /opt/module/flume/conf/file-flume-kafka.conf --name a1 -Dflume.root.logger=INFO,LOGFILE >/dev/null 2>&1 &"
done
};;
"stop"){
for i in hadoop102 hadoop103
do
echo " -------- stopping collection Flume on $i --------"
ssh $i "ps -ef | grep file-flume-kafka | grep -v grep | awk '{print \$2}' | xargs kill"
done
};;
esac
Note 1: nohup keeps a command running after you log out or close the terminal; the name means "no hang up".
Note 2: /dev/null is the Linux null device; everything written to it is discarded, hence the nickname "black hole". Combined with the redirection above, >/dev/null 2>&1 discards both standard output and standard error.
Standard input (fd 0): input from the keyboard, /proc/self/fd/0
Standard output (fd 1): output to the screen (console), /proc/self/fd/1
Standard error (fd 2): error output to the screen (console), /proc/self/fd/2
3) Make the script executable:
[weiwei@hadoop102 bin]$ chmod 777 f1.sh
4) Start the collection Flume agents across the cluster:
[weiwei@hadoop102 module]$ f1.sh start
5) Stop the collection Flume agents across the cluster:
[weiwei@hadoop102 module]$ f1.sh stop