Kafka is a distributed message queue that produces and consumes messages in real time. Here we use the Spark Streaming real-time computing framework to read data from Kafka and process it as it arrives.
Dependencies required for this integration (the two Spark artifacts should use the same Spark version; here 2.0.2):
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.0.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
        <version>2.0.2</version>
    </dependency>
</dependencies>
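For reference, the equivalent sbt coordinates (a sketch, assuming a Scala 2.11 build so that %% resolves to the _2.11 artifacts):
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "2.0.2",
  "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.0.2"
)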
Cluster operations:
1. Start the ZooKeeper cluster:
zkServer.sh start
2. Start the Kafka cluster:
kafka-server-start.sh /opt/software/kafka/config/server.properties
3. Create the topic:
kafka-topics.sh --create --zookeeper node01:2181 --replication-factor 1 --partitions 3 --topic kafka_spark
4. Produce data to the topic (a quick end-to-end check follows this list):
kafka-console-producer.sh --broker-list node01:9092 --topic kafka_spark
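Before starting the Spark program, it is worth confirming that messages actually flow through the topic. A quick sanity check with the standard console tools of this Kafka generation (same hosts as above):
kafka-topics.sh --list --zookeeper node01:2181
kafka-console-consumer.sh --zookeeper node01:2181 --topic kafka_spark --from-beginning
Anything typed into the console producer should appear in the console consumer; if it does not, fix the cluster before debugging the Spark side.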
Writing the Spark Streaming program:
package com.nb.lpq

import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
import scala.collection.immutable

// TODO: use Spark Streaming to receive data from Kafka and implement word count -- receiver-based approach
object SparkStreamingKafka_Receiver_checkpoint {

  // Merge the counts of the current batch (a) into the running total (b)
  def updateFunc(a: Seq[Int], b: Option[Int]): Option[Int] = {
    Some(a.sum + b.getOrElse(0))
  }

  def main(args: Array[String]): Unit = {
    val checkpointPath = "./kafka-receiver"
    // Restore the StreamingContext from the checkpoint if one exists,
    // otherwise build a fresh one with createFunc
    val ssc = StreamingContext.getOrCreate(checkpointPath, () => {
      createFunc(checkpointPath)
    })
    ssc.start()
    ssc.awaitTermination()
  }

  def createFunc(checkpointPath: String): StreamingContext = {
    // 1. Create the SparkConf
    val sparkConf: SparkConf = new SparkConf()
      .setAppName("SparkStreamingKafka_Receiver_checkpoint")
      .setMaster("local[4]")
      // Enable the write-ahead log (WAL) so received data survives driver failures
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")

    // 2. Create the SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")

    // 3. Create the StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint(checkpointPath)

    // 4. Specify the ZooKeeper quorum used by the receiver
    val zkServer = "node02:2181,node03:2181,node04:2181"

    // 5. Specify the consumer group id
    val groupId = "spark-kafka-receiver01"

    // 6. Specify the topics; a single consumer group can consume several topics.
    //    The map is (topic_name -> number of receiver threads for that topic)
    val topics = Map("kafka_spark" -> 1)

    // 7. Run several receivers in parallel to read from the Kafka topic; here we use 3
    val resultDStream: immutable.IndexedSeq[DStream[String]] = (1 to 3).map(x => {
      // 8. KafkaUtils.createStream receives data from the Kafka topic as a DStream
      //    of (key, message) pairs; keep only the message
      val kafkaDataDStream: DStream[String] = KafkaUtils.createStream(ssc, zkServer, groupId, topics).map(x => x._2)
      kafkaDataDStream
    })

    // 9. Union all receiver DStreams into a single DStream
    val kafkaDStream: DStream[String] = ssc.union(resultDStream)

    // 10. Split each line into words and map each word to a count of 1
    val wordAndOne: DStream[(String, Int)] = kafkaDStream.flatMap(_.split(" ")).map((_, 1))

    // 11. Accumulate the counts of identical words across batches
    val result: DStream[(String, Int)] = wordAndOne.updateStateByKey(updateFunc)

    // Print each batch's result
    result.print()

    ssc
  }
}
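A note on the design choice: createStream is the receiver-based API, which is why the WAL and multiple parallel receivers are needed for reliability and throughput. The same spark-streaming-kafka-0-8 artifact also ships a receiver-less "direct" API in which Spark tracks the offsets itself; a minimal sketch, assuming the same ssc and the broker host from the cluster commands above:
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Direct API: no receiver threads and no WAL needed; Spark tracks the offsets
val kafkaParams = Map("metadata.broker.list" -> "node01:9092")
val directLines: DStream[String] =
  KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, Set("kafka_spark")).map(_._2)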
Run the code and check the console:
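With the console producer from step 4 running, type a line such as hello spark hello; within one 5-second batch the console should show output roughly like the following (the timestamp is illustrative):
-------------------------------------------
Time: 1496740800000 ms
-------------------------------------------
(hello,2)
(spark,1)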
Problems encountered in this integration:
The program and the Kafka service both run without errors, yet no messages are received.
When the program runs, it writes checkpoint files to the ./kafka-receiver directory (see the figure below). On the next start, StreamingContext.getOrCreate finds these files and restores the old context from the checkpoint instead of building a fresh one, so stale state in that directory can stop new messages from showing up. Deleting the checkpoint directory and rerunning the program restores real-time consumption.
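The fix, as a minimal sketch (the path matches checkpointPath in the code above):
rm -rf ./kafka-receiver
Note that deleting the checkpoint directory also discards the accumulated word-count state and any unreplayed WAL data, so this is a development-time workaround rather than something to do on a production job.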