First, create a new Scala project managed by Maven.
Add the following dependencies to the pom file:
<properties>
    <scala.version>2.11.8</scala.version>
    <hadoop.version>2.7.4</hadoop.version>
    <spark.version>2.0.2</spark.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>com.github.sgroschupf</groupId>
        <artifactId>zkclient</artifactId>
        <version>0.1</version>
    </dependency>
</dependencies>
Some of the dependencies above may not be strictly necessary, but they are all included here so the program runs without problems.
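Since this is a Maven-managed Scala project, the pom also needs a Scala compiler plugin, otherwise mvn package will skip the Scala sources. A minimal sketch using the commonly used scala-maven-plugin (the version number here is an assumption; pick whatever your project standardizes on):
<build>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <!-- assumed version; adjust to your environment -->
            <version>3.2.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>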
(4) Start the ZooKeeper cluster
zkServer.sh start
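To confirm ZooKeeper is actually up, you can run the status subcommand on every node; in a healthy ensemble one node reports leader and the others follower:
zkServer.sh status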
(5) Start the Kafka cluster (use whichever start command matches your own cluster's Kafka installation)
kafka-server-start.sh /export/servers/kafka/config/server.properties
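If you would rather not keep a terminal attached to each broker, kafka-server-start.sh also accepts a -daemon flag to run in the background:
kafka-server-start.sh -daemon /export/servers/kafka/config/server.properties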
(6) Create a topic
kafka-topics.sh --create --zookeeper hostname:2181 --replication-factor 1 --partitions 3 --topic topic_name
(The application below consumes the topic kafka_spark, so use that as the topic name.)
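Before moving on, you can verify that the topic exists and has the expected partition count:
kafka-topics.sh --list --zookeeper hostname:2181
kafka-topics.sh --describe --zookeeper hostname:2181 --topic topic_name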
(7) Create a producer
/opt/software/kafka/bin/kafka-console-producer.sh --broker-list hostname:9092 --topic topic_name
(Adjust the leading path to your own Kafka installation directory; --broker-list expects host:port, and 9092 is the default broker port.)
Create a consumer
/opt/software/kafka/bin/kafka-console-consumer.sh --bootstrap-server hostname:9092 --topic topic_name --from-beginning
(Adjust the leading path to your own Kafka installation directory.)
Send a few messages from the producer and check whether the consumer receives them.
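For example, type a couple of lines into the producer terminal (illustrative input):
> hello spark hello kafka
> hello streaming
If everything is wired up correctly, the consumer terminal prints the same lines back.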
(8) Write the Spark Streaming application
package cn.bw.kafka

import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

import scala.collection.immutable

//todo: use Spark Streaming to read data from Kafka and do a word count -- receiver-based approach
object SparkStreamingKafka_Receiver_checkpoint {

  // Add the counts from the current batch (a) onto the running total (b)
  def updateFunc(a: Seq[Int], b: Option[Int]): Option[Int] = {
    Some(a.sum + b.getOrElse(0))
  }

  def main(args: Array[String]): Unit = {
    val checkpointPath = "./kafka-receiver"
    // Recover the StreamingContext from the checkpoint if one exists, otherwise create a new one
    val ssc = StreamingContext.getOrCreate(checkpointPath, () => {
      createFunc(checkpointPath)
    })
    ssc.start()
    ssc.awaitTermination()
  }

  def createFunc(checkpointPath: String): StreamingContext = {
    //todo: 1. create the SparkConf
    val sparkConf: SparkConf = new SparkConf()
      .setAppName("SparkStreamingKafka_Receiver_checkpoint")
      .setMaster("local[4]")
      //todo: enable the WAL (write-ahead log)
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")
    //todo: 2. create the SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    //todo: 3. create the StreamingContext
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint(checkpointPath)
    //todo: 4. specify the ZooKeeper quorum
    val zkServer = "node1:2181,node2:2181,node3:2181"
    //todo: 5. specify the consumer group id
    val groupId = "spark-kafka-receiver01"
    //todo: 6. specify the topics; one consumer group can consume multiple topics
    // (topic_name -> numPartitions): the number of receiver threads per topic
    val topics = Map("kafka_spark" -> 1)
    //todo: 7. run several receivers in parallel to read from the Kafka topic; here we use 3
    val resultDStream: immutable.IndexedSeq[DStream[String]] = (1 to 3).map(x => {
      //todo: 8. use KafkaUtils.createStream to receive data from the Kafka topic as a DStream
      val kafkaDataDStream: DStream[String] = KafkaUtils.createStream(ssc, zkServer, groupId, topics).map(x => x._2)
      kafkaDataDStream
    })
    //todo: 9. use the StreamingContext to union all the DStreams into one
    val kafkaDStream: DStream[String] = ssc.union(resultDStream)
    //todo: 10. split each line and map every word to a count of 1
    val wordAndOne: DStream[(String, Int)] = kafkaDStream.flatMap(_.split(" ")).map((_, 1))
    //todo: 11. accumulate the counts for each word across batches
    val result: DStream[(String, Int)] = wordAndOne.updateStateByKey(updateFunc)
    //todo: print the result
    result.print()
    ssc
  }
}
(9) Run the code and watch the result data in the console.
If results show up, it worked!
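For reference, result.print() emits one block per 5-second batch; with the illustrative producer input from step (7), the console output looks roughly like this (the timestamp and counts will differ on your machine, and the counts keep accumulating across batches because of updateStateByKey):
-------------------------------------------
Time: 1500000000000 ms
-------------------------------------------
(hello,3)
(spark,1)
(kafka,1)
(streaming,1)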