Approach:
1. Route the log output to Flume
2. Integrate Flume with Kafka
3. Integrate Kafka with Spark Streaming
4. Process the received data in Spark Streaming
First, the servers in the cluster send their log output to a Flume source at a fixed hostname and port. Flume then sinks the data from its channel to Kafka in batches, acting as the Kafka producer, and Kafka stores the produced records on its brokers. Next, Kafka is connected to Spark Streaming, which acts as the consumer and processes the data (there are two main ways to do this integration, covered in an earlier post). Finally, the processed results are stored in a database, and a web UI presents the database contents as analysis charts.
1. Simulating Logs
Add the flume-ng-log4jappender dependency to the project's pom.xml:
<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.6.0</version>
</dependency>
import org.apache.log4j.Logger;

/**
 * Simulates log generation: writes one log line per second.
 */
public class LoggerGenerator {

    private static Logger logger = Logger.getLogger(LoggerGenerator.class.getName());

    public static void main(String[] args) throws Exception {
        int index = 0;
        while (true) {
            Thread.sleep(1000);
            logger.info("value : " + index++);
        }
    }
}
log4j.properties:
log4j.rootLogger=INFO,stdout,flume
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.target = System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c] [%p] - %m%n
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = hadoop01
log4j.appender.flume.Port = 41414
log4j.appender.flume.UnsafeMode = true
The stdout appender above handles console output; the flume appender settings that follow forward the log4j logs to the Flume source. Set Hostname and Port to the host where the Flume agent runs and the port of its avro source.
2. Writing the Flume Configuration (testing that Flume receives the log4j logs)
In Flume's conf directory, create a file named streaming.conf:
vim streaming.conf
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
a1.channels.c1.type = memory
a1.sinks.k1.type = logger
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start Flume:
cd bin
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/streaming.conf \
-Dflume.root.logger=INFO,console
Start the log simulator:
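LoggerGenerator can be run straight from the IDE; from a shell it would look roughly like the following, where the classpath is hypothetical and depends on how the project is built:
# classpath below is an assumption; adjust it to your build layout
java -cp "target/classes:target/dependency/*" LoggerGenerator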
Flume receives the log events and prints them to its console:
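With the logger sink, the agent's console shows events of roughly this form (headers and hex dump abbreviated here; the exact layout depends on the Flume version):
Event: { headers:{...} body: 76 61 6C 75 65 20 3A 20 30    value : 0 }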
3. Writing the Flume Configuration (integrating Flume with Kafka)
3.1 In Flume's installation directory, go into the conf folder and create a configuration file; here I named it streaming2.conf, with the following contents:
vim streaming2.conf
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
a1.channels.c1.type = memory
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop01:9092,hadoop02:9092,hadoop03:9092
a1.sinks.k1.kafka.topic = kafka-streaming_topic
a1.sinks.k1.flumeBatchSize = 5
a1.sinks.k1.kafka.producer.acks = 1
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
3.2 Start Kafka
a. Start ZooKeeper on each node in turn:
zkServer.sh start
b. Start Kafka in the background on each node in turn:
bin/kafka-server-start.sh -daemon config/server.properties
Check that everything started:
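A quick way to verify from the shell (assuming the JDK's jps tool is on the PATH of each node):
jps
# expect a QuorumPeerMain (ZooKeeper) process and a Kafka process on every node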
3.3 Start a consumer to test message consumption
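Before attaching the consumer, make sure the topic exists. If automatic topic creation is disabled on the brokers, create it first; a minimal sketch (the partition and replication-factor values are assumptions, size them for your cluster):
# partitions/replication below are assumptions
/opt/kafka/kafka_2.12-2.4.0/bin/kafka-topics.sh --create \
--bootstrap-server hadoop01:9092 \
--replication-factor 1 \
--partitions 3 \
--topic kafka-streaming_topic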
/opt/kafka/kafka_2.12-2.4.0/bin/kafka-console-consumer.sh --bootstrap-server hadoop01:9092 --topic kafka-streaming_topic
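With the console consumer attached, restart the Flume agent against the Flume-to-Kafka configuration and run LoggerGenerator again; this is the same start command as in step 2, just pointing at streaming2.conf:
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/streaming2.conf \
-Dflume.root.logger=INFO,console
The consumer should then print the raw log messages, one per second, e.g. value : 0, value : 1, ... (the Log4jAppender sends the rendered log message as the event body).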
4. Integrating Kafka with Spark Streaming and processing the received data
Use the Direct approach (recommended) from the earlier post on the two ways of integrating Spark Streaming with Kafka.
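A minimal sketch of what KafkaDirectWordCount might look like with the spark-streaming-kafka-0-8 Direct API is shown below; the word-count logic, batch interval, and package name are assumptions inferred from the spark-submit command that follows:
package com.kinglone.streaming

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical sketch of the Direct-approach driver; args = <brokers> <topics>
object KafkaDirectWordCount {
  def main(args: Array[String]): Unit = {
    if (args.length != 2) {
      System.err.println("Usage: KafkaDirectWordCount <brokers> <topics>")
      System.exit(1)
    }
    val Array(brokers, topics) = args

    val sparkConf = new SparkConf().setAppName("KafkaDirectWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(5)) // 5-second batches (assumption)

    // Direct approach: Spark tracks Kafka offsets itself, no receiver
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    val topicSet = topics.split(",").toSet
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicSet)

    // Simple processing of the received data: word count per batch
    messages.map(_._2)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}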
Submit the code to test it:
./spark-submit --class com.kinglone.streaming.KafkaDirectWordCount \
--master local[2] \
--name KafkaDirectWordCount \
--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 \
/opt/script/kafkaDirectWordCount.jar hadoop01:9092 kafka-streaming_topic