Integrating Structured Streaming with Kafka
Official documentation
http://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html
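The page above documents Kafka as a streaming source. Stripped to its essentials, subscribing to a topic looks like the sketch below (a minimal sketch, assuming a SparkSession named spark; the broker address and topic name are taken from the setup commands that follow):

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "hadoop01:9092")
  .option("subscribe", "18BD3401")
  .load()
// key and value arrive as binary columns; cast them to strings before use
val lines = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")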
Environment preparation
●Start Kafka
/export/servers/kafka/bin/kafka-server-start.sh -daemon /export/servers/kafka/config/server.properties
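●Create the topic if it does not exist yet (an assumed extra step; the ZooKeeper address, partition count, and replication factor below are placeholders for this cluster):
/export/servers/kafka/bin/kafka-topics.sh --create --zookeeper hadoop01:2181 --replication-factor 1 --partitions 3 --topic 18BD3401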
●Produce data into the topic
/export/servers/kafka/bin/kafka-console-producer.sh --broker-list hadoop01:9092 --topic 18BD3401
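●(Optional sanity check, not part of the original steps) consume the topic from another shell to confirm messages are flowing before wiring up Spark:
/export/servers/kafka/bin/kafka-console-consumer.sh --bootstrap-server hadoop01:9092 --topic 18BD3401 --from-beginning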
Code example:
package StructuredStreaming_4_16
import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}
import org.apache.spark.sql.streaming.{DataStreamReader, Trigger}
object StructStreaming_kafka {
def main(args: Array[String]): Unit = {
//Create a SparkSession
val spark: SparkSession = SparkSession.builder().master