1) Requirement: use SparkStreaming to read data from Kafka, perform a simple computation on it, and print the result to the console.
2) Import the dependency
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
3) Write the code
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Consume Kafka data through the 0-10 Direct API.
 * The consumed offsets are stored in the __consumer_offsets topic.
 */
object DirectAPI {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[*]").setAppName("direct")
    val ssc = new StreamingContext(sparkConf, Seconds(3))

    // Define the Kafka parameters
    val kafkaPara: Map[String, Object] = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "node01:9092,node02:9092,node03:9092",
      ConsumerConfig.GROUP_ID_CONFIG -> "kafka",
      "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
      "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
    )

    // Create a DStream by reading data from Kafka
    val kafkaDStream: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Set("kafka"), kafkaPara)
    )

    // Extract the value part of each record
    val valueDStream: DStream[String] = kafkaDStream.map(record => record.value())

    // Word count logic
    valueDStream.flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
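The class comment notes that the consumed offsets are tracked in Kafka's __consumer_offsets topic; with the parameters above the underlying Kafka consumer auto-commits them periodically (enable.auto.commit defaults to true). If you prefer to commit offsets only after a batch has actually been processed, the 0-10 integration exposes HasOffsetRanges and CanCommitOffsets. A minimal sketch, assuming "enable.auto.commit" -> "false" is added to kafkaPara and reusing the kafkaDStream created above:

import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

// Inside main, as an alternative output path:
kafkaDStream.foreachRDD { rdd =>
  // Capture the offset ranges backing this batch (only works on the stream returned by createDirectStream)
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // ... process rdd here ...
  // Commit the offsets back to __consumer_offsets once the batch has been handled
  kafkaDStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}

commitAsync is asynchronous, and like Kafka's own commit API it gives at-least-once rather than exactly-once semantics, so outputs should be idempotent.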
4) Start the Kafka cluster
5) Start a Kafka console producer and generate data
kafka-console-producer.sh --broker-list node01:9092,node02:9092,node03:9092 --topic kafka
Type lines of space-separated words into the producer; each line is sent as a message to the kafka topic and counted by the streaming job.
6) Run the program to receive the data produced to Kafka and process it; the word counts are printed to the console once per 3-second batch.
7) Check the consumption progress
kafka-consumer-groups.sh --describe --bootstrap-server node01:9092,node02:9092,node03:9092 --group kafka
For each partition of the topic, the output shows CURRENT-OFFSET, LOG-END-OFFSET, and LAG; LAG indicates how far the kafka consumer group is behind the latest data.