Preface
The installation packages and the Flume configuration files this post needs have all been uploaded by the author; please download them from the accompanying link (installation packages & Flume configuration files).
- Flume, as a real-time log collection framework, can be integrated with the Spark Streaming real-time processing framework.
- Flume produces data in real time; Spark Streaming processes it in real time.
- Spark Streaming can be wired to Flume in two ways: Flume pushes events to Spark Streaming (push mode), or Spark Streaming pulls data from Flume (poll mode).
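At the API level the two modes map to two different FlumeUtils entry points. The following is a minimal sketch for orientation only; the hostnames and ports are placeholders, and the complete push-mode program appears in section 1.2:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeModesSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("FlumeModesSketch").setMaster("local[2]"), Seconds(5))
    // Push mode: Spark Streaming runs a receiver on this host/port,
    // and Flume's avro sink sends events to it.
    val pushStream = FlumeUtils.createStream(ssc, "172.16.43.63", 9999)
    // Poll mode: Flume buffers events in its Spark sink, and Spark Streaming
    // pulls them from the Flume agent's address ("node01"/8888 are placeholders).
    val pollStream = FlumeUtils.createPollingStream(ssc, "node01", 8888)
  }
}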
1. Flume pushes data to Spark Streaming (push mode)
1.1 Flume preparation
- Install Flume 1.6 or later.
- Download the connector sink jar spark-streaming-flume-sink_2.11-2.0.2.jar and put it into Flume's lib directory.
- Fix the Scala dependency version under flume/lib: take scala-library-2.11.8.jar from the jars directory of the Spark installation and use it to replace the older scala-library jar that ships with Flume (both jar steps are sketched in the shell snippet after the configuration file below).
- Write the Flume agent. Note that since this is push mode, the sink must point at the machine where the Spark Streaming program runs, not necessarily the machine Flume itself runs on.
- Write the flume-push.conf configuration file.
  Note: because Flume actively pushes data to Spark Streaming, the hostname and port configured on the sink must be the IP address and port on which the Spark Streaming program listens, i.e. the address of the server running the Spark application.
# push mode
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.channels = c1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /root/test.txt
a1.sources.r1.fileHeader = true
# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
# sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
# IP address and port on which the Spark Streaming program listens
a1.sinks.k1.hostname = 172.16.43.63
a1.sinks.k1.port = 9999
a1.sinks.k1.batchSize = 2000
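The jar preparation from section 1.1, plus a quick way to feed the exec source, can be scripted. A minimal sketch, assuming Flume lives at /opt/bigdata/flume (matching the flume-ng command in section 1.3) and Spark at /opt/bigdata/spark (an assumption, adjust to your installation):

# Put the connector sink jar into Flume's lib directory
# (spark-streaming-flume-sink_2.11-2.0.2.jar, downloaded beforehand)
cp spark-streaming-flume-sink_2.11-2.0.2.jar /opt/bigdata/flume/lib/
# Swap Flume's bundled scala-library for the one Spark ships
# (the exact version of the bundled jar depends on the Flume release)
rm /opt/bigdata/flume/lib/scala-library-*.jar
cp /opt/bigdata/spark/jars/scala-library-2.11.8.jar /opt/bigdata/flume/lib/
# Give the exec source something to tail once the agent is running
echo "hadoop spark hadoop flume" >> /root/test.txt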
1.2 Spark Streaming preparation: write the Spark Streaming program
- Add the pom dependency
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.0.2</version>
</dependency>
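Note that spark-streaming-flume_2.11 does not bundle Spark itself (spark-streaming is a provided-scope dependency of the connector), so the project also needs the core Spark artifacts. A sketch assuming the same 2.0.2/Scala 2.11 versions as above:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.0.2</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.2</version>
</dependency>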
Note: the program must bind to the IP address and port of the machine it runs on, and they must match the hostname and port specified on the sink in the Flume configuration file flume-push.conf.
- Write the program in Scala
package cn.acece.sparkStreamingtest

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Spark Streaming integrated with Flume in push mode
 */
object SparkStreaming_Flume_Push {

  // newValues: all the 1s for one word from the current batch's (word, 1) pairs
  // runningCount: the historical running total for the same key
  def updateFunction(newValues: Seq[Int], runningCount: Option[Int]): Option[Int] = {
    val newCount = runningCount.getOrElse(0) + newValues.sum
    Some(newCount)
  }

  def main(args: Array[String]): Unit = {
    // Configure SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreaming_Flume_Push").setMaster("local[2]")
    // Build the SparkContext
    val sc: SparkContext = new SparkContext(sparkConf)
    // Build the StreamingContext with the batch interval
    val scc: StreamingContext = new StreamingContext(sc, Seconds(5))
    // Set the log output level
    sc.setLogLevel("WARN")
    // Set the checkpoint directory (required by updateStateByKey)
    scc.checkpoint("./")
    // Receive the data Flume pushes over;
    // the IP address is the server this application is deployed on, matching the Flume config
    val flumeStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createStream(scc, "172.16.43.63", 9999, StorageLevel.MEMORY_AND_DISK)
    // The Flume data lives in the event body; turn it into a String
    val lineStream: DStream[String] = flumeStream.map(x => new String(x.event.getBody.array()))
    // Word count, accumulated across batches
    val result: DStream[(String, Int)] = lineStream.flatMap(_.split(" ")).map((_, 1)).updateStateByKey(updateFunction)
    result.print()
    scc.start()
    scc.awaitTermination()
  }
}
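A quick way to see what updateFunction does across batches is to call it directly with the kinds of arguments updateStateByKey passes it for a single word; a hypothetical check:

// First batch: the word appeared 3 times, no previous state yet
SparkStreaming_Flume_Push.updateFunction(Seq(1, 1, 1), None)   // => Some(3)
// Later batch: 2 more occurrences on top of the running total of 3
SparkStreaming_Flume_Push.updateFunction(Seq(1, 1), Some(3))   // => Some(5)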
1.3 Pushing data from Flume to Spark Streaming: the Spark Streaming program must be started first
- Start the Spark Streaming program first, by running it in IDEA.
- Then start Flume: first rename **/root/data/data.txt.COMPLETED** to **data.txt** so there is data to collect, then run the following shell command
flume-ng agent -n a1 \
-c /opt/bigdata/flume/conf \
-f /opt/bigdata/flume/conf/flume-push.conf \
-Dflume.root.logger=INFO,console
1.4 Check the output in the IDEA console
Flume successfully pushes data to Spark Streaming; everything runs as expected~
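For reference, if a line like "hadoop spark hadoop flume" is appended to the tailed file, the batches printed by result.print() look roughly like this (the timestamp is illustrative):

-------------------------------------------
Time: 1496837405000 ms
-------------------------------------------
(hadoop,2)
(spark,1)
(flume,1)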