Spark Streaming Receiving Flume Data
Receiving Data from Flume's SparkSink
Step 1: Download the Spark Streaming / Flume integration jar
Using SparkSink requires an additional jar.
Search for spark-streaming-flume-sink on mvnrepository.com.
Download spark-streaming-flume-sink_2.11-2.2.0.jar and put it into Flume's lib directory.
The jar's version must match your Spark installation (here Scala 2.11, Spark 2.2.0).
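If you are not sure which Scala version your Spark build ships with (and therefore whether to download the _2.10 or _2.11 artifact), one quick check is to print the Scala runtime version from spark-shell; for the Spark 2.2.0 build used here this prints:

scala> scala.util.Properties.versionString
res0: String = version 2.11.8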
Step 2: Replace Flume's Scala jar
Copy the scala-library-x.x.x jar from Spark's jars directory into Flume's lib directory, and delete the scala-library-x.x.x jar that originally shipped with Flume:
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/lib
rm -rf scala-library-2.10.5.jar
cp /export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0/jars/scala-library-2.11.8.jar /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/lib/
Step 3: Write the Flume configuration file
vim flume-poll.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /export/servers/flume/flume-poll
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = node03
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000
Step 4: Start the Flume agent (in poll mode, start Flume before the Spark program, since the Spark receiver connects to the SparkSink)
bin/flume-ng agent -c conf -f conf/flume-poll.conf \
-n a1 -Dflume.root.logger=DEBUG,CONSOLE
Step 5: Prepare the data directory and a test file, and place the file in the directory Flume watches
mkdir -p /export/servers/flume/flume-poll
cd /export/servers/flume/
vi hello.txt
hadoop spark hive spark
hadoop sqoop spark storm
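Save the file, then copy it into the spoolDir configured above so Flume picks it up:
cp hello.txt flume-poll/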
Step 6: Develop the Spark program that pulls (polls) data from Flume
Reuse the pom from the earlier post Spark (Part 7): Spark Streaming and DStream introduction; receiving socket, file, custom-source, and RDD-queue data, and add the following dependency:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
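If the project builds with sbt instead of Maven, the equivalent line (assuming the same Scala 2.11 / Spark 2.2.0 versions) would be:

libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "2.2.0"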
import org.apache.spark.streaming.dstream.ReceiverInputDStream
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
object SparkPollFlume {
  // Used by updateStateByKey: merges this batch's counts for a key (inputSum)
  // with the key's accumulated historical count (historySum)
  def updateFunc(inputSum: Seq[Int], historySum: Option[Int]): Option[Int] = {
    Option(inputSum.sum + historySum.getOrElse(0))
  }
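  // For example, if a word appears twice in the current batch and its running
  // total so far is 3, updateStateByKey calls updateFunc(Seq(1, 1), Some(3)),
  // which returns Some(5); for a first-seen word, historySum is None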
  def main(args: Array[String]): Unit = {
    val sparkContext = new SparkContext(new SparkConf()
      .setAppName("SparkPollFlume").setMaster("local[2]"))
    sparkContext.setLogLevel("WARN")
    val streamingContext = new StreamingContext(sparkContext, Seconds(5))
    // Set a checkpoint directory (required by updateStateByKey)
    streamingContext.checkpoint("./poll-checkpoint")
    // Pull data from Flume's SparkSink via FlumeUtils.createPollingStream;
    // it returns a ReceiverInputDStream[SparkFlumeEvent]
    val stream: ReceiverInputDStream[SparkFlumeEvent] =
      FlumeUtils.createPollingStream(streamingContext, hostname = "node03", port = 8888)
    // Turn each received Flume event into a line of text
    val lines = stream.map(x => {
      // Get the byte array from the event's body
      val byteArr = x.event.getBody.array()
      // Convert the byte array to a string
      new String(byteArr)
    })
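    // The rest of the pipeline follows from the pieces above (updateFunc and
    // the checkpoint call): a minimal word-count sketch with running totals
    val result = lines.flatMap(_.split(" "))
      .map((_, 1))
      .updateStateByKey(updateFunc)
    result.print()
    streamingContext.start()
    streamingContext.awaitTermination()
  }
}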