Overview
In the previous post we gave a brief introduction to Spark Streaming, tested it with a socket source, took a closer look at DStreams and the architecture, and walked through some simple source-code analysis. At the end we mentioned that Spark Streaming provides two categories of built-in streaming sources:
1. Basic sources: sources directly available in the StreamingContext API. Examples: file systems and socket connections.
2. Advanced sources: sources like Kafka, Flume, and Kinesis, available through extra utility classes. These require the additional dependencies discussed in the linking section (we will cover them later).
1 socketTextStream vs. socketStream vs. rawSocketStream
val DStream=ssc.socketTextStream("192.168.137.130",9998)
Given a hostname and port, this returns a DStream; it is what we used in the previous post. Let's analyze the source:
/**
 * Creates an input stream from TCP source hostname:port. Data is received using
 * a TCP socket and the receive bytes is interpreted as UTF8 encoded `\n` delimited
 * lines.
 * @param hostname      Hostname to connect to for receiving data
 * @param port          Port to connect to for receiving data
 * @param storageLevel  Storage level to use for storing the received objects
 *                      (default: StorageLevel.MEMORY_AND_DISK_SER_2)
 * @see [[socketStream]]
 */
def socketTextStream(
    hostname: String,
    port: Int,
    storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
  ): ReceiverInputDStream[String] = withNamedScope("socket text stream") {
  socketStream[String](hostname, port, SocketReceiver.bytesToLines, storageLevel)
}
As you can see, socketTextStream simply delegates to socketStream, passing hostname, port, SocketReceiver.bytesToLines, and storageLevel. We already know hostname, port, and storageLevel, but what is SocketReceiver.bytesToLines? It is the bytesToLines method on the SocketReceiver object; let's look at its source.
/**
 * This methods translates the data from an inputstream (say, from a socket)
 * to '\n' delimited strings and returns an iterator to access the strings.
 */
def bytesToLines(inputStream: InputStream): Iterator[String] = {
  val dataInputStream = new BufferedReader(
    new InputStreamReader(inputStream, StandardCharsets.UTF_8))
  new NextIterator[String] {
    protected override def getNext() = {
      val nextValue = dataInputStream.readLine()
      if (nextValue == null) {
        finished = true
      }
      nextValue
    }

    protected override def close() {
      dataInputStream.close()
    }
  }
}
This method turns the data from an input stream (e.g. from a socket) into '\n'-delimited strings and returns an iterator for accessing them.
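The same read-until-EOF loop can be sketched without Spark's NextIterator; this stand-alone snippet (plain Scala, no Spark dependency) illustrates what bytesToLines does to a byte stream:

```scala
import java.io.{BufferedReader, ByteArrayInputStream, InputStream, InputStreamReader}
import java.nio.charset.StandardCharsets

// Same idea as SocketReceiver.bytesToLines: wrap the stream in a
// BufferedReader and emit one element per '\n'-delimited line until EOF.
def bytesToLines(in: InputStream): Iterator[String] = {
  val reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))
  Iterator.continually(reader.readLine()).takeWhile(_ != null)
}

val data = new ByteArrayInputStream("hello\nworld\n".getBytes(StandardCharsets.UTF_8))
val lines = bytesToLines(data).toList
println(lines) // List(hello, world)
```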
/**
 * Creates an input stream from TCP source hostname:port. Data is received using
 * a TCP socket and the receive bytes it interpreted as object using the given
 * converter.
 * @param hostname      Hostname to connect to for receiving data
 * @param port          Port to connect to for receiving data
 * @param converter     Function to convert the byte stream to objects
 * @param storageLevel  Storage level to use for storing the received objects
 * @tparam T            Type of the objects received (after converting bytes to objects)
 */
def socketStream[T: ClassTag](
    hostname: String,
    port: Int,
    converter: (InputStream) => Iterator[T],
    storageLevel: StorageLevel
  ): ReceiverInputDStream[T] = {
  new SocketInputDStream[T](this, hostname, port, converter, storageLevel)
}
Notice that socketStream has no default arguments: you must supply all four parameters yourself, including a converter function from an InputStream to an iterator. For example:
import java.io.InputStream
import scala.io.Source

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object socketStreamData {
  def main(args: Array[String]): Unit = {
    // create the SparkConf
    val conf = new SparkConf().setAppName("socketStreamData").setMaster("local[2]")
    // build the StreamingContext from the conf; under the hood this creates a SparkContext
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint("/Res")
    val SocketData = ssc.socketStream[String]("master", 9999, myDeserialize,
      StorageLevel.MEMORY_AND_DISK_SER)
    SocketData.print()
    ssc.start()
    ssc.awaitTermination()
  }

  // custom converter: read the stream as UTF-8 text, one element per line
  def myDeserialize(data: InputStream): Iterator[String] = {
    Source.fromInputStream(data, "UTF-8").getLines()
  }
}
By now the pattern should be clear. Did you notice which design pattern this is? These are static factory methods.
- rawSocketStream
/**
 * Create an input stream from network source hostname:port, where data is received
 * as serialized blocks (serialized using the Spark's serializer) that can be directly
 * pushed into the block manager without deserializing them. This is the most efficient
 * way to receive data.
 * @param hostname      Hostname to connect to for receiving data
 * @param port          Port to connect to for receiving data
 * @param storageLevel  Storage level to use for storing the received objects
 *                      (default: StorageLevel.MEMORY_AND_DISK_SER_2)
 * @tparam T            Type of the objects in the received blocks
 */
def rawSocketStream[T: ClassTag](
    hostname: String,
    port: Int,
    storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
  ): ReceiverInputDStream[T] = withNamedScope("raw socket stream") {
  new RawInputDStream[T](this, hostname, port, storageLevel)
}
This method receives data that has already been serialized with Spark's serializer and pushes the blocks straight into the block manager without deserializing them, which makes it the most efficient way to receive data. The catch is that the sender must produce data in exactly that serialized block format, so use it with care.
2 Processing data from a file system
Note: file-based streams do not need a receiver, so even local[1] works.
A quick test first:
import org.apache.spark.streaming.{Seconds, StreamingContext}
val ssc = new StreamingContext(sc, Seconds(10))
val lines = ssc.textFileStream("file:///home/hadoop/data/streaming/")
lines.flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).print()
ssc.start()
ssc.awaitTermination()
This monitors the local directory /home/hadoop/data/streaming/:
cat input.txt
hello java
hello hadoop
hello hive
hello sqoop
hello hdfs
hello spark
cp ../input.txt .
Result:
(hive,1)
(hello,6)
(java,1)
(sqoop,1)
(spark,1)
(hadoop,1)
(hdfs,1)
Notes:
1. During the test I then modified the contents of input.txt. Do you expect new results to show up?
vi input.txt
hello java
hello hadoop
hello hive
hello sqoop
hello hdfs
hello spark
a a
b
b c c
After the modification above I saw no new output. Once a file has been processed, changes to it within the current window do not cause it to be re-read: updates are ignored.
- Next I created a subdirectory under /home/hadoop/data/streaming/ and copied a file into it. Will that file be processed?
[hadoop@hadoop streaming]$ mkdir 1
[hadoop@hadoop streaming]$ cp ../input.txt 1/
[hadoop@hadoop 1]$ pwd
/home/hadoop/data/streaming/1
[hadoop@hadoop 1]$ ll
total 4
-rwxrwxr-x. 1 hadoop hadoop 70 Apr 24 10:12 input.txt
The answer is no. Only files placed directly inside the monitored directory are picked up; files in newly created subdirectories are not. Once the monitored directory is set, only the files immediately under it are watched.
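This matching rule is easy to mimic with plain java.nio: only regular files directly under the monitored directory qualify, never entries inside subdirectories. A stand-alone sketch (the temporary directory stands in for /home/hadoop/data/streaming/):

```scala
import java.nio.file.Files

// Simulate which entries textFileStream would even consider: only regular
// files directly under the monitored directory.
val dir = Files.createTempDirectory("streaming")
Files.createFile(dir.resolve("input.txt"))   // would be picked up
val sub = Files.createDirectory(dir.resolve("1"))
Files.createFile(sub.resolve("input.txt"))   // ignored: lives inside a subdirectory

val candidates = Files.list(dir)
  .filter(p => Files.isRegularFile(p))
  .count()
println(candidates) // 1
```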
3 Transformations on DStreams
The transformations available on a DStream are essentially the same as those on an RDD, with two notable additions:
Transformation | Meaning |
---|---|
transform(func) | Returns a new DStream by applying an RDD-to-RDD function to every RDD of the source DStream; it is the bridge between DStream code and RDD code. |
updateStateByKey(func) | Returns a new "state" DStream where the state for each key is updated by applying the given function to the previous state of the key and the new values for the key. This can be used to maintain arbitrary state data for each key. |
Don't worry if these aren't clear yet; the code below demonstrates both.
I won't repeat the other operations here; refer back to the earlier article on RDD transformations if needed.
- transform
transform bridges DStreams and RDDs: inside it you work with the plain RDD of each batch. Let's demonstrate it by filtering input against a simulated blacklist.
1. First, the same logic written with plain RDDs:
import org.apache.spark.{SparkConf, SparkContext}

object RDDFilter {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("RDDFilter").setMaster("local[2]")
    val sc = new SparkContext(conf)
    // blacklist
    val black = Array(
      "laowang",
      "lisi"
    )
    // input data
    val input = Array(
      "1,zhangsan,20",
      "2,lisi,30",
      "3,wangwu,40",
      "4,laowang,50"
    )
    // turn the blacklist into (name, true) pairs so it can be joined
    val blackRDD = sc.parallelize(black).map(x => (x, true))
    val inputRDD = sc.parallelize(input)
    // key each line by the name field, then left-outer-join with the blacklist:
    //   (zhangsan,(1,zhangsan,20,None))
    //   (wangwu,(3,wangwu,40,None))
    //   (laowang,(4,laowang,50,Some(true)))
    //   (lisi,(2,lisi,30,Some(true)))
    val dataRDD = inputRDD.map(x => (x.split(",")(1), x)).leftOuterJoin(blackRDD)
    // keep only the rows that did not match the blacklist
    dataRDD.filter(x => x._2._2.getOrElse(false) != true)
      .map(x => x._2._1)
      .sortBy(x => x(0), ascending = false)
      .foreach(println)
    sc.stop()
  }
}
2. Now the same filtering with transform:
package cn.zhangyu

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Created by grace on 2018/6/8.
 */
object Transform {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Transform").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    // blacklist
    val black = Array(
      "laowang",
      "lisi"
    )
    // build the blacklist RDD
    val blackRDD = ssc.sparkContext.parallelize(black).map(x => (x, true))
    // input arriving on the socket:
    // "1,zhangsan,20", "2,lisi,30", "3,wangwu,40", "4,laowang,50"
    val DStream = ssc.socketTextStream("192.168.137.130", 9998)
    val output = DStream.map(x => (x.split(",")(1), x)).transform(rdd => {
      rdd.leftOuterJoin(blackRDD)
        .filter(x => x._2._2.getOrElse(false) != true)
        .map(x => x._2._1)
    })
    // print the filtered lines
    output.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
- UpdateStateByKey Operation
This operation maintains running state: it aggregates over everything the streaming job has seen since it started. An example, using word count:
The monitored directory initially contains one file:
hello hello
world world world
First computation:
hello 2
world 3
Then another file is added:
hello hello
world world world
welcome
Second computation (the new file adds 2 hello, 3 world, 1 welcome to the running totals):
hello 4
world 6
welcome 1
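The running totals follow from a single rule: each batch's per-key counts are added to the key's previous state. A plain-Scala simulation of that bookkeeping (no Spark involved) reproduces the numbers above:

```scala
// Simulate updateStateByKey: for each batch, group the new words by key
// and add each key's batch count to its previous running total.
def updateState(state: Map[String, Int], batch: Seq[String]): Map[String, Int] =
  batch.groupBy(identity).foldLeft(state) { case (acc, (word, occurrences)) =>
    acc.updated(word, acc.getOrElse(word, 0) + occurrences.size)
  }

val batch1 = Seq("hello", "hello", "world", "world", "world")
val batch2 = Seq("hello", "hello", "world", "world", "world", "welcome")

val first = updateState(Map.empty, batch1)
val second = updateState(first, batch2)
println((first("hello"), first("world")))                      // (2,3)
println((second("hello"), second("world"), second("welcome"))) // (4,6,1)
```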
How do you use it? Two steps:
- Define the state: the state can be an arbitrary data type.
- Define the state update function: specify, as a function, how to combine a key's previous state with its new values into the new state.
Code:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UpdateStateByKey {
  def main(args: Array[String]): Unit = {
    // create the SparkConf
    val conf = new SparkConf().setAppName("UpdateStateByKey").setMaster("local[2]")
    // build the StreamingContext; under the hood this creates a SparkContext
    val ssc = new StreamingContext(conf, Seconds(10))
    // socketTextStream returns a ReceiverInputDStream[String], as we saw in the source above
    val DStream = ssc.socketTextStream("192.168.137.130", 9998)
    DStream.flatMap(_.split(",")).map((_, 1))
      .updateStateByKey(updateFunction).print()
    ssc.start() // required
    ssc.awaitTermination()
  }

  // add this batch's counts to the key's previous running total
  def updateFunction(currentValues: Seq[Int], preValues: Option[Int]): Option[Int] = {
    val curr = currentValues.sum
    val pre = preValues.getOrElse(0)
    Some(curr + pre)
  }
}
Running this code fails with the following error:
18/06/05 20:32:34 ERROR StreamingContext: Error starting the context, marking it as stopped
java.lang.IllegalArgumentException: requirement failed: The checkpoint directory has not been set. Please set it by StreamingContext.checkpoint().
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.streaming.dstream.DStream.validateAtStart(DStream.scala:243)
at org.apache.spark.streaming.dstream.DStream$$anonfun$validateAtStart$8.apply(DStream.scala:276)
at org.apache.spark.streaming.dstream.DStream$$anonfun$validateAtStart$8.apply(DStream.scala:276)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.streaming.dstream.DStream.validateAtStart(DStream.scala:276)
at org.apache.spark.streaming.DStreamGraph$$anonfun$start$4.apply(DStreamGraph.scala:51)
at org.apache.spark.streaming.DStreamGraph$$anonfun$start$4.apply(DStreamGraph.scala:51)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.streaming.DStreamGraph.start(DStreamGraph.scala:51)
at org.apache.spark.streaming.scheduler.JobGenerator.startFirstTime(JobGenerator.scala:194)
at org.apache.spark.streaming.scheduler.JobGenerator.start(JobGenerator.scala:100)
at org.apache.spark.streaming.scheduler.JobScheduler.start(JobScheduler.scala:103)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply$mcV$sp(StreamingContext.scala:583)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:578)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:578)
at ... run in separate thread using org.apache.spark.util.ThreadUtils ... ()
at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:578)
at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:572)
"Please set it by StreamingContext.checkpoint()" — updateStateByKey requires a checkpoint directory, and we never set one.
- Corrected code
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UpdateStateByKey {
  def main(args: Array[String]): Unit = {
    // create the SparkConf
    val conf = new SparkConf().setAppName("UpdateStateByKey").setMaster("local[2]")
    // build the StreamingContext; under the hood this creates a SparkContext
    val ssc = new StreamingContext(conf, Seconds(10))
    // checkpoint data lands under e.g.
    // hdfs://192.168.137.130:9000/home/hadoop/sparkStreaming_UpdateStateByKey_out/checkpoint-1524599640000
    ssc.checkpoint("/home/hadoop/sparkStreaming_UpdateStateByKey_out")
    val DStream = ssc.socketTextStream("192.168.137.130", 9998)
    DStream.flatMap(_.split(",")).map((_, 1))
      .updateStateByKey(updateFunction).print()
    ssc.start() // required
    ssc.awaitTermination()
  }

  def updateFunction(currentValues: Seq[Int], preValues: Option[Int]): Option[Int] = {
    val curr = currentValues.sum
    val pre = preValues.getOrElse(0)
    Some(curr + pre)
  }
}
Package it into a jar and test on the cluster. Upload the jar:
sudo rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
100% 12 KB 12 KB/s 00:00:01 0 Errors
ll
total 40
-rw-rw-r--. 1 hadoop hadoop 705 Apr 21 03:01 derby.log
drwxrwxr-x. 5 hadoop hadoop 4096 Apr 21 03:01 metastore_db
-rw-r--r--. 1 root root 12847 Jun 5 2018 spark_streaming-1.0-SNAPSHOT.jar
-rw-r--r--. 1 hadoop hadoop 2254 May 21 2018 spark_test-1.0.jar
-rw-r--r--. 1 hadoop hadoop 9499 Apr 27 2018 wordcount-1.0-SNAPSHOT.jar
Submit:
spark-submit --master local[2] \
--class cn.zhangyu.UpdateStateByKey \
--name UpdateStateByKey \
/home/hadoop/lib/spark_streaming-1.0-SNAPSHOT.jar
Input:
nc -lp 9998
a,a,a,b,b,c,c //first input
a,a,a,b,b,c,c //second input
a,a,a,b,b,c,c
a,a,a,b,b,c,c //third input: two lines at once
Result:
First batch:
(b,2)
(a,3)
(c,2)
Second batch:
(b,4)
(a,6)
(c,4)
Third batch:
(b,8)
(a,12)
(c,8)
4 Output Operations on DStreams
Output operations push a DStream's data to external systems such as databases or file systems. Since they are what actually delivers the transformed data to an external system, they trigger the execution of all the DStream transformations, playing the same role that actions play for RDDs. The main one is:
Output operation | Meaning |
---|---|
foreachRDD(func) | Applies the function func to each RDD generated from the stream. The function should push the data in each RDD to an external system, such as saving the RDD to files or writing it over the network to a database. |
dstream.foreachRDD is powerful: it lets you send data to any external system. But understanding it well enough to use it correctly and efficiently matters; used badly it produces errors and large performance gaps. Let's walk through code.
- Requirement: listen on a socket, count words, and store the results in MySQL.
import java.sql.{Connection, DriverManager}

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Created by grace on 2018/6/6.
 */
object ForEachRDD {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ForEachRDD").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    val DStream = ssc.socketTextStream("192.168.137.130", 9998)
    // word count
    val result = DStream.flatMap(x => x.split(",")).map(x => (x, 1)).reduceByKey(_ + _)
    // write the result to MySQL: foreachRDD applies the function to every RDD
    result.foreachRDD(rdd => {
      rdd.foreach(x => {
        val con = getConnection()
        val word = x._1
        val count = x._2.toInt
        val sql = s"insert into wc values('$word',$count)"
        // insert the row
        val pstmt = con.prepareStatement(sql)
        pstmt.executeUpdate()
        // clean up
        pstmt.close()
        con.close()
      })
    })
    ssc.start()
    ssc.awaitTermination()
  }

  def getConnection(): Connection = {
    // load the JDBC driver
    Class.forName("com.mysql.jdbc.Driver")
    val url = "jdbc:mysql://localhost:3306/test"
    val username = "root"
    val password = "123456"
    DriverManager.getConnection(url, username, password)
  }
}
The target table in MySQL:
create table wc(
  word char(10),
  count int
);
Check the result by feeding some input:
nc -lp 9998
a,a,a,c,c,c,s,s,b,b,b
a,a,a,c,c,c,s,s,b,b,b
a,a,a,c,c,c,s,s,b,b,b
Note what happens if we hoist the connection out of rdd.foreach:
result.foreachRDD(rdd => {
  val con = getConnection()
  rdd.foreach(x => {
    val word = x._1
    val count = x._2.toInt
    val sql = s"insert into wc values('$word',$count)"
    // insert the row
    val pstmt = con.prepareStatement(sql)
    pstmt.executeUpdate()
    // clean up
    pstmt.close()
    con.close()
  })
})
With val con = getConnection() moved outside the per-record loop like this, the job fails with the error below:
Caused by: java.io.NotSerializableException: com.mysql.jdbc.SingleByteCharsetConverter
Serialization stack:
- object not serializable (class: com.mysql.jdbc.SingleByteCharsetConverter, value: com.mysql.jdbc.SingleByteCharsetConverter@620da7ee)
- writeObject data (class: java.util.HashMap)
- object (class java.util.HashMap, {Cp1252=com.mysql.jdbc.SingleByteCharsetConverter@620da7ee, UTF-8=java.lang.Object@2aa4a8ac, US-ASCII=com.mysql.jdbc.SingleByteCharsetConverter@32aa9b4a, utf-8=java.lang.Object@2aa4a8ac})
- field (class: com.mysql.jdbc.ConnectionImpl, name: charsetConverterMap, type: interface java.util.Map)
- object (class com.mysql.jdbc.JDBC4Connection, com.mysql.jdbc.JDBC4Connection@2f73c578)
- field (class: cn.zhangyu.ForEachRDD$$anonfun$main$1$$anonfun$apply$1, name: con$1, type: interface java.sql.Connection)
- object (class cn.zhangyu.ForEachRDD$$anonfun$main$1$$anonfun$apply$1, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 30 more
- Why the error occurs
dstream.foreachRDD { rdd =>
  val connection = createNewConnection() // executed at the driver
  rdd.foreach { record =>
    connection.send(record) // executed at the worker
  }
}
val connection = createNewConnection() runs on the driver, while connection.send(record) runs on the workers. The connection object would therefore have to be serialized and shipped across the network, and connection objects are not serializable, hence the NotSerializableException. The correct form creates the connection on the worker:
dstream.foreachRDD { rdd =>
  rdd.foreach { record =>
    val connection = createNewConnection()
    connection.send(record)
    connection.close()
  }
}
- Optimization 1
Creating the connection inside the loop is correct, but can you spot the remaining problem? Recall the basics of Spark tuning: prefer high-performance operators. foreach processes the RDD one element at a time, creating one connection per record; with foreachPartition we can do the work once per partition instead.
- Optimization 2
Still see a problem? I have hit this on a past project: constantly creating connections can even lead to OOM. Creating a connection is an expensive operation; in some cases it costs more time than the actual work, yet we keep creating and tearing them down. The answer is a connection pool.
Best practice:
dstream.foreachRDD { rdd =>
  rdd.foreachPartition { partitionOfRecords =>
    // ConnectionPool is a static, lazily initialized pool of connections
    val connection = ConnectionPool.getConnection()
    partitionOfRecords.foreach(record => connection.send(record))
    ConnectionPool.returnConnection(connection) // return to the pool for future reuse
  }
}
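ConnectionPool in the snippet above is not a Spark or JDBC class; you supply it yourself. Below is a hypothetical minimal sketch of the idea in plain Scala (a real job would pool java.sql.Connection objects, ideally via a proven library such as HikariCP or commons-pool):

```scala
import java.util.concurrent.ConcurrentLinkedQueue

// Hypothetical minimal pool: hands out idle "connections" and takes them
// back for reuse, so partitions do not each pay the full setup cost.
object ConnectionPool {
  // Stand-in for a real connection; in practice this would wrap java.sql.Connection.
  final case class Conn(id: Int) { def send(record: String): Unit = () }

  private val idle = new ConcurrentLinkedQueue[Conn]()
  private var created = 0

  def getConnection(): Conn = synchronized {
    Option(idle.poll()).getOrElse { created += 1; Conn(created) }
  }
  def returnConnection(c: Conn): Unit = idle.offer(c)
  def totalCreated: Int = synchronized(created)
}

// Two successive "partitions" reuse one connection instead of creating two.
val c1 = ConnectionPool.getConnection()
c1.send("(a,3)")
ConnectionPool.returnConnection(c1)
val c2 = ConnectionPool.getConnection()
println(c1 == c2)                    // true: the same pooled connection
println(ConnectionPool.totalCreated) // 1
```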
5 Window Operations
Spark Streaming also provides windowed computations, which let you apply transformations over a sliding window of data. What exactly is a sliding window? Look at the figure first: whenever the window slides over the source DStream, the RDDs that fall inside it are combined; in the figure, each computation covers three time units (configurable) and fires every two time units. So two parameters must be set:
1. Window length: the duration of the window (the small boxes in the figure).
2. Sliding interval: the interval at which the window operation is performed (the gap between the boxes).
As the figure suggests, if the slide is shorter than the window length the same data is counted in more than one window, and if it is longer some data is never counted; both are expected behaviors, not bugs.
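Before the Spark API, the window arithmetic itself can be sketched in plain Scala: with a window length of 3 batches and a slide of 2 batches, every second batch triggers a count over the last three. (The batch contents here are made up for illustration.)

```scala
// Plain-Scala sketch of reduceByKeyAndWindow semantics: every `slide` batches,
// combine the last `windowLength` batches and count words across them.
val batches: Vector[Seq[String]] = Vector(
  Seq("a"), Seq("a", "b"), Seq("b"), Seq("c"), Seq("c", "c")
)
val windowLength = 3
val slide = 2

val windows =
  for (end <- slide to batches.length by slide) yield {
    batches.slice(math.max(0, end - windowLength), end)
      .flatten
      .groupBy(identity)
      .map { case (k, vs) => k -> vs.size }
  }

// Batch 2 in the middle of both windows is counted twice; batch 5 never fires.
windows.foreach(println)
```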
API
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Created by grace on 2018/6/7.
 */
object WindowOperations {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowOperations").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    val DStream = ssc.socketTextStream("192.168.137.130", 9998)
    // word count over a 30-second window, recomputed every 10 seconds
    DStream.flatMap(_.split(",")).map((_, 1))
      .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
      .print()
    ssc.start()
    ssc.awaitTermination()
  }
}
- Input
nc -lp 9998
a,a,a,a,b,b,b,c,c,c,d
a,a,a,a,b,b,b,c,c,c,d
a,a,a,a,b,b,b,c,c,c,d
a,a,a,a,b,b,b,c,c,c,d
a,a,a,a,b,b,b,c,c,c,d
- Output
(d,1)
(b,3)
(a,4)
(c,3)
(d,2)
(b,6)
(a,8)
(c,6)
(d,3)
(b,9)
(a,12)
(c,9)
(d,4)
(b,12)
(a,16)
(c,12)
6 DataFrame and SQL Operations
We can use DataFrames and SQL on streaming data, but the SparkSession must be created lazily from the SparkContext that the StreamingContext is using; only then can it be re-created after a driver failure and restart.
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Created by grace on 2018/6/7.
 */
object DataFrameAndSQLOperations {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DataFrameAndSQLOperations").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    val DStream = ssc.socketTextStream("192.168.137.130", 9998)
    val result = DStream.flatMap(_.split(","))
    result.foreachRDD(rdd => {
      val spark = SparkSession.builder().config(rdd.sparkContext.getConf).getOrCreate()
      import spark.implicits._
      // Convert RDD[String] to DataFrame
      val wordsDataFrame = rdd.toDF("word")
      // Create a temporary view
      wordsDataFrame.createOrReplaceTempView("words")
      // Do word count on DataFrame using SQL and print it
      val wordCountsDataFrame =
        spark.sql("select word, count(*) as total from words group by word")
      wordCountsDataFrame.show()
    })
    ssc.start()
    ssc.awaitTermination()
  }
}
- Input
a,a,a,a,b,b,b,c,c,c,d
a,a,a,a,b,b,b,c,c,c,d
a,a,a,a,b,b,b,c,c,c,d
a,a,a,a,b,b,b,c,c,c,d
- Result
+----+-----+
|word|total|
+----+-----+
| d| 4|
| c| 12|
| b| 12|
| a| 16|
+----+-----+