Chapter 8: Spark Streaming Advanced Topics and Hands-on Cases

8-1 Course Outline

Stateful operator: updateStateByKey

Hands-on: count the cumulative occurrences of each word so far and write the results to MySQL

Window-based aggregation

Hands-on: blacklist filtering

Hands-on: integrating Spark Streaming with Spark SQL

 

8-2 Hands-on: using the updateStateByKey operator

UpdateStateByKey Operation

The updateStateByKey operation allows you to maintain arbitrary state while continuously updating it with new information. To use this, you will have to do two steps.

  1. Define the state - The state can be an arbitrary data type.
  2. Define the state update function - Specify with a function how to update the state using the previous state and the new values from an input stream.

In every batch, Spark will apply the state update function for all existing keys, regardless of whether they have new data in a batch or not. If the update function returns None then the key-value pair will be eliminated.

Let’s illustrate this with an example. Say you want to maintain a running count of each word seen in a text data stream. Here, the running count is the state and it is an integer. We define the update function as:
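
A minimal sketch of that update function, following the running word count example in the Spark documentation; it assumes pairs is a DStream of (word, 1) pairs and that a checkpoint directory has been set with ssc.checkpoint(...), which updateStateByKey requires:

def updateFunction(newValues: Seq[Int], runningCount: Option[Int]): Option[Int] = {
  // add the counts from the current batch to the previous running count (0 if the key is new)
  Some(newValues.sum + runningCount.getOrElse(0))
}

val runningCounts = pairs.updateStateByKey[Int](updateFunction _)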

 

8-3 Hands-on: writing the results to MySQL

Use Spark Streaming to compute the word counts.

Write the Spark Streaming results to MySQL.

Check the documentation.

 

Requirement: write the word-count results to MySQL.

create table wordcount(
  word varchar(50) default null,
  wordcount int(10) default null
);

Source code URL:

https://gitee.com/sag888/big_data/blob/master/Spark%20Streaming%E5%AE%9E%E6%97%B6%E6%B5%81%E5%A4%84%E7%90%86%E9%A1%B9%E7%9B%AE%E5%AE%9E%E6%88%98/project/l2118i/sparktrain/src/main/scala/com/imooc/spark/ForeachRDDApp.scala

Source code:

package com.imooc.spark

import java.sql.DriverManager

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Word count with Spark Streaming, writing the results to MySQL.
 */
object ForeachRDDApp {

  def main(args: Array[String]): Unit = {

    val sparkConf = new SparkConf().setAppName("ForeachRDDApp").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    val lines = ssc.socketTextStream("localhost", 6789)

    val result = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)

    // result.print() // this only prints the results to the console

    // TODO... write the results to MySQL
    // The naive version below does not work: the connection is created at the driver
    // but used inside rdd.foreach, which runs on the executors.
    // result.foreachRDD(rdd => {
    //   val connection = createConnection() // executed at the driver
    //   rdd.foreach { record =>
    //     val sql = "insert into wordcount(word, wordcount) values('" + record._1 + "'," + record._2 + ")"
    //     connection.createStatement().execute(sql)
    //   }
    // })

    result.print()

    // create one connection per partition on the executors, then write each record
    result.foreachRDD(rdd => {
      rdd.foreachPartition(partitionOfRecords => {
        val connection = createConnection()
        partitionOfRecords.foreach(record => {
          val sql = "insert into wordcount(word, wordcount) values('" + record._1 + "'," + record._2 + ")"
          connection.createStatement().execute(sql)
        })
        connection.close()
      })
    })

    ssc.start()
    ssc.awaitTermination()
  }

  /**
   * Get a MySQL connection.
   */
  def createConnection() = {
    Class.forName("com.mysql.jdbc.Driver")
    DriverManager.getConnection("jdbc:mysql://localhost:3306/imooc_spark", "root", "root")
  }
}
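
To try it locally (assuming netcat is installed and the wordcount table above exists): start a socket source with `nc -lk 6789`, run the app, type a few words into the netcat session, and then verify the results in MySQL with `select * from wordcount;`.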

Write the results to the database with plain SQL:

"insert into wordcount(word, wordcount) values('" + record._1 + "'," + record._2 + ")"

Remaining problems

1) Existing rows are never updated; every record is written as a new insert.

Improvement ideas:

a) Before inserting, check whether the word already exists: update the row if it does, insert it otherwise (or let the database do the upsert).

b) In production, write to a store such as HBase or Redis instead.

2) A new connection is created for every RDD partition; switch to a connection pool instead. (A sketch of both improvements follows.)
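
A hedged sketch of both improvements, assuming a UNIQUE index on wordcount(word) so that MySQL's INSERT ... ON DUPLICATE KEY UPDATE performs the upsert; ConnectionPool here is a hypothetical helper standing in for a real pool (e.g. a DataSource-backed one), not part of the course code:

result.foreachRDD(rdd => {
  rdd.foreachPartition(partitionOfRecords => {
    val connection = ConnectionPool.getConnection() // hypothetical: borrow instead of creating per partition
    // upsert with a PreparedStatement instead of string concatenation
    val sql = "insert into wordcount(word, wordcount) values(?, ?) " +
      "on duplicate key update wordcount = wordcount + values(wordcount)"
    val pstmt = connection.prepareStatement(sql)
    partitionOfRecords.foreach { case (word, count) =>
      pstmt.setString(1, word)
      pstmt.setInt(2, count)
      pstmt.addBatch()
    }
    pstmt.executeBatch()
    pstmt.close()
    ConnectionPool.returnConnection(connection) // hypothetical: give the connection back to the pool
  })
})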

 

8-4 Hands-on: using window operations

window: periodically process the data that falls within a given time range

Window Operations

Spark Streaming also provides windowed computations, which allow you to apply transformations over a sliding window of data. The following figure illustrates this sliding window.

 

As shown in the figure, every time the window slides over a source DStream, the source RDDs that fall within the window are combined and operated upon to produce the RDDs of the windowed DStream. In this specific case, the operation is applied over the last 3 time units of data, and slides by 2 time units. This shows that any window operation needs to specify two parameters.

  • window length - The duration of the window (3 in the figure).
  • sliding interval - The interval at which the window operation is performed (2 in the figure).

These two parameters must be multiples of the batch interval of the source DStream (1 in the figure).

Let’s illustrate the window operations with an example. Say, you want to extend the earlier example by generating word counts over the last 30 seconds of data, every 10 seconds. To do this, we have to apply the reduceByKey operation on the pairs DStream of (word, 1) pairs over the last 30 seconds of data. This is done using the operation reduceByKeyAndWindow.

window length: the duration of the window

sliding interval: the interval at which the window operation is performed

Both parameters must be multiples of the batch interval of the source DStream.

In other words: how often to compute, and over what range of data, e.g. every 10 seconds compute the word count of the previous 10 minutes.

==> every sliding interval, aggregate the data of the previous window length

// Reduce last 30 seconds of data, every 10 seconds
val windowedWordCounts = pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
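
For context, a minimal sketch that places this call in a complete job; the object name is made up for illustration, and it reuses the socket source on localhost:6789 from the other examples with a 5-second batch interval (30 and 10 are both multiples of 5):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowWordCountApp {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("WindowWordCountApp").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(5)) // batch interval: 5 seconds

    val lines = ssc.socketTextStream("localhost", 6789)
    val pairs = lines.flatMap(_.split(" ")).map((_, 1))

    // every 10 seconds, count the words seen in the last 30 seconds
    val windowedWordCounts = pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
    windowedWordCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}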

 

8-5 Hands-on: blacklist filtering

Using the transform operator

Combining Spark Streaming with a plain RDD

Blacklist filtering

Access log ==> DStream

20180808,zs

20180808,ls

20180808,ww

==>(zs:20180808,zs)(ls:20180808,ls)(ww:20180808,ww)

Blacklist ==> RDD

zs

ls

==> (zs: true) (ls: true)

Desired output after filtering ==> 20180808,ww

leftOuterJoin

(zs: (<20180808,zs>, Some(true)))  x  dropped

(ls: (<20180808,ls>, Some(true)))  x  dropped

(ww: (<20180808,ww>, None))  ==> kept; emit tuple._2._1

https://gitee.com/sag888/big_data/blob/master/Spark%20Streaming%E5%AE%9E%E6%97%B6%E6%B5%81%E5%A4%84%E7%90%86%E9%A1%B9%E7%9B%AE%E5%AE%9E%E6%88%98/project/l2118i/sparktrain/src/main/scala/com/imooc/spark/TransformApp.scala

Code:

package com.imooc.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Blacklist filtering.
 */
object TransformApp {

  def main(args: Array[String]): Unit = {

    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")

    /**
     * Creating a StreamingContext requires two parameters: a SparkConf and the batch interval.
     */
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    /**
     * Build the blacklist as an RDD of (name, true) pairs.
     */
    val blacks = List("zs", "ls")
    val blacksRDD = ssc.sparkContext.parallelize(blacks).map(x => (x, true))

    val lines = ssc.socketTextStream("localhost", 6789)

    // key each log line by the name, join against the blacklist RDD inside transform,
    // drop the blacklisted names, and keep only the original log line
    val clicklog = lines.map(x => (x.split(",")(1), x)).transform(rdd => {
      rdd.leftOuterJoin(blacksRDD)
        .filter(x => x._2._2.getOrElse(false) != true)
        .map(x => x._2._1)
    })

    clicklog.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
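
To test (assuming netcat is available): start `nc -lk 6789`, run the app, and paste the three log lines from above (20180808,zs / 20180808,ls / 20180808,ww). Only 20180808,ww should be printed, since zs and ls are on the blacklist.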

 

8-6 Hands-on: integrating Spark Streaming with Spark SQL

 

DataFrame and SQL Operations

Source code URL:

https://gitee.com/sag888/big_data/blob/master/Spark%20Streaming%E5%AE%9E%E6%97%B6%E6%B5%81%E5%A4%84%E7%90%86%E9%A1%B9%E7%9B%AE%E5%AE%9E%E6%88%98/project/l2118i/sparktrain/src/main/scala/com/imooc/spark/SqlNetworkWordCount.scala

Source code:

package com.imooc.spark

import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext, Time}

/**
 * Word count by integrating Spark Streaming with Spark SQL.
 */
object SqlNetworkWordCount {

  def main(args: Array[String]): Unit = {

    val sparkConf = new SparkConf().setAppName("ForeachRDDApp").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    val lines = ssc.socketTextStream("localhost", 6789)
    val words = lines.flatMap(_.split(" "))

    // Convert RDDs of the words DStream to DataFrame and run SQL query
    words.foreachRDD { (rdd: RDD[String], time: Time) =>
      val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
      import spark.implicits._

      // Convert RDD[String] to RDD[case class] to DataFrame
      val wordsDataFrame = rdd.map(w => Record(w)).toDF()

      // Creates a temporary view using the DataFrame
      wordsDataFrame.createOrReplaceTempView("words")

      // Do word count on table using SQL and print it
      val wordCountsDataFrame =
        spark.sql("select word, count(*) as total from words group by word")
      println(s"========= $time =========")
      wordCountsDataFrame.show()
    }

    ssc.start()
    ssc.awaitTermination()
  }

  /** Case class for converting RDD to DataFrame */
  case class Record(word: String)

  /** Lazily instantiated singleton instance of SparkSession */
  object SparkSessionSingleton {

    @transient private var instance: SparkSession = _

    def getInstance(sparkConf: SparkConf): SparkSession = {
      if (instance == null) {
        instance = SparkSession
          .builder
          .config(sparkConf)
          .getOrCreate()
      }
      instance
    }
  }
}
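
Design note: the function passed to foreachRDD runs on the driver, so SparkSessionSingleton lazily creates a single SparkSession the first time a batch is processed and reuses it for every subsequent micro-batch, instead of rebuilding a session inside each foreachRDD call.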

 

 

 
