pom.xml file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.wang</groupId>
    <artifactId>spark_streaming</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.2.0</spark.version>
        <hadoop.version>2.7.1</hadoop.version>
        <scala.compat.version>2.11</scala.compat.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <!--<testSourceDirectory>src/test/scala</testSourceDirectory>-->
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass></mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
Case 1: Counting word frequencies in HDFS files with Spark Streaming
package com.wang.mytest

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object HDFSInputDstreamDemo extends App {
  private val conf: SparkConf = new SparkConf().setMaster("local[2]").setAppName("test01")
  private val ssc = new StreamingContext(conf, Seconds(5))
  private val lines: DStream[String] = ssc.textFileStream("hdfs://hadoop1:9000/data")
  private val wordcounts: DStream[(String, Int)] = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
  wordcounts.print()
  ssc.start()
  ssc.awaitTermination()
}
Push the data in from Linux with hdfs dfs -put courses.txt /data/ (note that textFileStream only picks up files added to the directory after the streaming job has started). The data is as follows:
hadoop hdfs mapreduce hbase hive zookeeper
spark spark core spark sql spark ml spark streaming
redis mongodb cassandra
mysql oracle
python java shell linux
Result:
-------------------------------------------
Time: 1597912120000 ms
-------------------------------------------
(hive,1)
(python,1)
(cassandra,1)
(mapreduce,1)
(zookeeper,1)
(mysql,1)
(oracle,1)
(mongodb,1)
(linux,1)
(java,1)
...
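If you also want to persist each batch's counts instead of only printing them, DStream.saveAsTextFiles writes one output directory per batch. A minimal sketch, assuming the wordcounts stream from the code above and a hypothetical HDFS output prefix:

// Each batch produces a directory named <prefix>-<batch time in ms>
wordcounts.saveAsTextFiles("hdfs://hadoop1:9000/out/wordcount")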
Case 2: Processing stateful data with Spark Streaming
Requirement: compute the cumulative word count up to the present
Analysis: DStream transformations fall into stateless and stateful transformations
Stateless transformation: each batch is processed without depending on data from earlier batches
Stateful transformation: processing the current batch requires data from earlier batches
updateStateByKey is a stateful transformation; it can track how the state changes over time
Implementation points:
Define the state: the state data can be of any type
Define the state update function: its parameters are the stream's previous state and the new data from the current batch
package com.wang.mytest

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UpdateStateByKeyDemo extends App {
  private val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("test02")
  private val ssc = new StreamingContext(conf, Seconds(5))
  private val input: ReceiverInputDStream[String] = ssc.socketTextStream("hadoop1", 44444)
  private val resl: DStream[(String, Int)] = input.flatMap(_.split(" ")).map((_, 1))

  // Set a checkpoint directory: the accumulated state from earlier batches is persisted there
  ssc.checkpoint("src\\data\\ck1")

  // State update function: currentValue holds this batch's values for a key,
  // preValue holds the key's accumulated count from earlier batches
  def updateFunc(currentValue: Seq[Int], preValue: Option[Int]): Option[Int] = {
    val currsum: Int = currentValue.sum
    val pre: Int = preValue.getOrElse(0)
    Some(currsum + pre)
  }

  // updateStateByKey is a stateful transformation that tracks state changes across batches
  private val state: DStream[(String, Int)] = resl.updateStateByKey(updateFunc)
  state.print()
  ssc.start()
  ssc.awaitTermination()
}
Result:
(spark,2)
(world,1)
(hello,4)
(test,1)
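An alternative worth knowing: mapWithState (available since Spark 1.6) also keeps per-key state, but it only processes keys that actually appear in the current batch, so it scales better than updateStateByKey. A minimal sketch, assuming the same resl stream and checkpoint directory as in the code above:

import org.apache.spark.streaming.{State, StateSpec}

// word: the key; one: the new value for this key in the current batch (if any);
// state: the running total kept by Spark for this key
def mappingFunc(word: String, one: Option[Int], state: State[Int]): (String, Int) = {
  val sum = one.getOrElse(0) + state.getOption.getOrElse(0)
  state.update(sum)
  (word, sum)
}

val stateDstream: DStream[(String, Int)] = resl.mapWithState(StateSpec.function(mappingFunc _))
stateDstream.print()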
Case 3: Integrating Spark Streaming with Spark SQL
Requirement: implement WordCount with Spark Streaming + Spark SQL
Analysis: convert each RDD into a DataFrame
package com.wang.mytest

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkSQLSparkStreamingDemo extends App {
  private val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("test03")
  private val ssc = new StreamingContext(conf, Seconds(5))
  private val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
  private val inputDstream: ReceiverInputDStream[String] = ssc.socketTextStream("hadoop1", 9999)
  val wordDstream = inputDstream.flatMap(_.split(" "))

  import spark.implicits._

  // For each micro-batch RDD, convert it to a DataFrame and run SQL on it
  wordDstream.foreachRDD(
    rdd => {
      if (rdd.count() != 0) {
        val df1 = rdd.map(x => WordCount(x)).toDF()
        df1.createOrReplaceTempView("tb_word")
        spark.sql(
          """
            |select name,count(*) from tb_word group by name
          """.stripMargin
        ).show()
      }
    }
  )
  ssc.start()
  ssc.awaitTermination()
}

case class WordCount(name: String)
Data:
hello test hello spark
Result:
+-----+--------+
| name|count(1)|
+-----+--------+
|hello| 2|
|spark| 1|
| test| 1|
+-----+--------+
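One refinement: the Spark Streaming programming guide suggests obtaining a lazily instantiated singleton SparkSession inside foreachRDD, built from the RDD's SparkConf, rather than capturing one created on the driver up front; among other things this plays better with checkpoint recovery. A sketch of that variant, reusing the wordDstream and the WordCount case class from the code above:

wordDstream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    // Get (or create) the singleton SparkSession from this RDD's configuration
    val spark = SparkSession.builder().config(rdd.sparkContext.getConf).getOrCreate()
    import spark.implicits._

    val df = rdd.map(x => WordCount(x)).toDF()
    df.createOrReplaceTempView("tb_word")
    spark.sql("select name, count(*) as cnt from tb_word group by name").show()
  }
}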
Case 4: Integrating Spark Streaming with Flume
Approach 1 (push): Flume's Avro sink pushes data to Spark Streaming
- Write the Spark Streaming code
package com.wang.mytest

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.ReceiverInputDStream
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkFlumePushDemo extends App {
  private val conf: SparkConf = new SparkConf().setMaster("local[2]").setAppName("flumeDemo01")
  private val ssc = new StreamingContext(conf, Seconds(5))
  // FlumeUtils.createStream receives the events pushed by Flume's Avro sink
  private val flumeStream: ReceiverInputDStream[SparkFlumeEvent] =
    FlumeUtils.createStream(ssc, "hadoop1", 55555)
  flumeStream.map(x => new String(x.event.getBody.array()).trim)
    .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
  ssc.start()
  ssc.awaitTermination()
}
Build a fat JAR (mvn clean package; the shade plugin runs during the package phase) and copy it to Linux (any directory is fine).
Write the Flume configuration file:
vi stream-flume.conf
a1.sources = s1
a1.channels = c1
a1.sinks = k1
a1.sources.s1.type = netcat
a1.sources.s1.bind = hadoop1
a1.sources.s1.port = 44444
a1.sources.s1.channels = c1
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
# The Avro sink pushes data to Spark on port 55555
# Push mode: paired with FlumeUtils.createStream
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop1
a1.sinks.k1.port = 55555
a1.sinks.k1.channel = c1
Start Flume:
flume-ng agent --name a1 --conf conf/ --conf-file /opt/soft/flume160/conf/job/stream-flume.conf -Dflume.root.logger=INFO,console
Then run the uploaded Spark JAR:
spark-submit --class com.wang.mytest.SparkFlumePushDemo /data/jar/spark_streaming-1.0-SNAPSHOT.jar
Then start telnet and type some test words:
telnet hadoop1 44444
Approach 2 (pull): the Flume sink uses Spark Streaming's SparkSink JAR, and Spark Streaming pulls the data from it
The following dependency is needed in IDEA (already included in the pom above):
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
If the Flume installed on Linux ships older versions, copy the following 3 JARs into Flume's lib directory and delete the corresponding lower-version JARs.
Related JAR download:
Link: https://pan.baidu.com/s/1ii9Y-ypTX-cYCOb3tuOXXw   Extraction code: ubxc
package com.wang.spark_streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.ReceiverInputDStream
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkFlumePollDemo extends App {
  private val conf: SparkConf = new SparkConf().setMaster("local[2]").setAppName("flumeDemo01")
  private val ssc = new StreamingContext(conf, Seconds(5))
  // createPollingStream makes Spark Streaming pull data from Flume's SparkSink
  private val flumePollStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils
    .createPollingStream(ssc, "hadoop1", 56789)
  flumePollStream.map(x => new String(x.event.getBody.array()).trim)
    .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
  ssc.start()
  ssc.awaitTermination()
}
agent.sources = s1
agent.channels = c1
agent.sinks = sk1
# The source type is netcat; it uses channel c1
agent.sources.s1.type = netcat
agent.sources.s1.bind = 192.168.108.181
agent.sources.s1.port = 44444
agent.sources.s1.channels = c1
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000
# The SparkSink exposes the channel's events on port 56789
# Pull mode: Spark Streaming fetches them with FlumeUtils.createPollingStream
agent.sinks.sk1.type=org.apache.spark.streaming.flume.sink.SparkSink
agent.sinks.sk1.hostname=192.168.108.181
agent.sinks.sk1.port=56789
agent.sinks.sk1.channel = c1
Build the JAR and copy it to Linux.
Start Flume (note that the agent in this configuration is named "agent"):
flume-ng agent --name agent --conf conf/ --conf-file /opt/soft/flume160/conf/job/stream-flume-jar.conf -Dflume.root.logger=INFO,console
Then run the uploaded Spark JAR:
spark-submit --class com.wang.spark_streaming.SparkFlumePollDemo /data/jar/spark_streaming-1.0-SNAPSHOT.jar
Then start telnet:
telnet hadoop1 44444
Case 5: Integrating Spark Streaming with Kafka (Streaming reads data from Kafka)
package com.wang.mytest

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkKafkaDirectDemo extends App {
  private val conf: SparkConf = new SparkConf().setAppName("SparkKafkaDirectDemo").setMaster("local[*]")
  private val ssc = new StreamingContext(conf, Seconds(5))

  // Streaming acts as a Kafka consumer, so it needs StringDeserializer for keys/values and a GROUP_ID_CONFIG
  val kafkaParams = Map(
    ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "hadoop1:9092",
    "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
    "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
    ConsumerConfig.GROUP_ID_CONFIG -> "testGroup"
  )

  private val message: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
    ssc,
    LocationStrategies.PreferConsistent,
    ConsumerStrategies.Subscribe[String, String](Set("kb07demo"), kafkaParams)
  )
  message.map(x => x.value()).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
  ssc.start()
  ssc.awaitTermination()
}
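When the direct stream is used in production, a common follow-up is to commit offsets back to Kafka yourself after each batch has been processed (typically with enable.auto.commit set to false in kafkaParams so the consumer does not auto-commit). A minimal sketch, assuming the message stream above:

import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

message.foreachRDD { rdd =>
  // Capture this batch's offset ranges before doing anything else with the RDD
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // ... process the batch here ...

  // Commit the offsets asynchronously once the batch has been handled
  message.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}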