SparkStreaming
A summary of SparkCore, SparkSQL, and SparkStreaming
SparkCore:
1. Core abstraction (RDD)
2. Program entry point (new SparkContext)
3. Various method calls
SparkSQL:
1. Core abstraction (DataFrame/DataSet)
2. Program entry point (new SparkSession)
3. Various method calls
SparkStreaming:
1. Core abstraction (DStream)
2. Program entry point (new StreamingContext(sc, Seconds(1)), which every 1 second computes over the data received in that second)
3. Various method calls
Note:
blockInterval: 200ms by default, i.e. the interval at which block files (blocks) are generated
batchInterval: 1, 2, etc. seconds, i.e. the same effect as Seconds(1)
Parameters set in the program override system parameters (or parameters from configuration files)
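As a sketch of how the two intervals interact (the property name `spark.streaming.blockInterval` comes from Spark's configuration; the 100ms value here is just an illustration):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("WordCount")
  // setting the block interval in code overrides spark-defaults.conf
  .set("spark.streaming.blockInterval", "100ms")
val ssc = new StreamingContext(conf, Seconds(1))
// with batchInterval = 1s and blockInterval = 100ms, each batch RDD
// is made of roughly 1000 / 100 = 10 blocks, i.e. 10 partitions per receiver
```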
RDD:
mapPartitions
foreachPartition
DStream:
transform (map over the underlying RDDs)
foreachRDD
Underlying conversion relationships in Spark:
RDD -> DataFrame/DataSet -> table -> SQL
DataFrame/DataSet -> RDD -> RDD (transformation)
DStream -> (transform, foreachRDD) RDD -> DataFrame/DataSet -> table -> SQL
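The last chain can be sketched as follows (a sketch only, assuming `wordDStream` is an existing DStream[String] and a Spark 2.x SparkSession is available):

```scala
import org.apache.spark.sql.SparkSession

// DStream -> (foreachRDD) RDD -> DataFrame -> table -> SQL
wordDStream.foreachRDD { rdd =>
  val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
  import spark.implicits._
  val df = rdd.toDF("word")            // RDD -> DataFrame
  df.createOrReplaceTempView("words")  // DataFrame -> table
  spark.sql("SELECT word, COUNT(*) AS cnt FROM words GROUP BY word").show()
}
```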
Introduction to real-time jobs
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant processing of live data streams. Data can be ingested from many sources, such as Kafka, Flume, Twitter, ZeroMQ, Kinesis, or TCP sockets, and can be processed using complex algorithms expressed with high-level functions like map, reduce, join, and window. Finally, the processed data can be pushed out to file systems, databases, and HDFS.
The SparkStreaming program entry point
val conf = new SparkConf().setMaster("local[2]").setAppName("WordCount")
val ssc = new StreamingContext(conf, Seconds(1))
What is a DStream
A discretized stream, or DStream, is the basic abstraction provided by SparkStreaming. It represents a continuous stream of data: either the input data stream received from a source, or the processed data stream generated by transforming the input stream. Internally, a DStream is represented by a continuous series of RDDs, Spark's abstraction of an immutable, distributed dataset. Each RDD in a DStream contains data from a certain interval, and any operation applied on a DStream translates into operations on the underlying RDDs.
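One way to make "a DStream is a series of RDDs" concrete is `queueStream`, which builds an input stream directly from a queue of RDDs (a sketch assuming an existing `ssc`):

```scala
import scala.collection.mutable
import org.apache.spark.rdd.RDD

val rddQueue = new mutable.Queue[RDD[Int]]()
val inputStream = ssc.queueStream(rddQueue)  // each dequeued RDD becomes one batch
inputStream.foreachRDD(rdd => println(s"batch with ${rdd.count()} records"))
ssc.start()
// every batch interval, one RDD is taken from the queue and processed
for (_ <- 1 to 3) rddQueue += ssc.sparkContext.makeRDD(1 to 100)
```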
How SparkStreaming runs
This describes the receiver-based model; a simplified run flow follows.
1. The real-time job is submitted and run, and the program entry point is initialized.
2. Once the job starts running, the Driver sends receivers to the Executors. Multiple receivers can be configured; a receiver is simply a component that receives data. By default there is only one receiver, and a running receiver manifests as a long-running task. (When running locally, setMaster must be at least local[2], because one thread receives data and another processes it; with just local the job cannot run.)
3. Each receiver takes the incoming data and writes it out as blocks (every 200ms by default), then writes those blocks into the Executor's memory; by default the blocks are replicated.
4. While running, the receivers send block reports to the Driver, so the two sides exchange information, respond, and monitor each other.
5. At each batch interval (for example Seconds(1)), the StreamingContext combines all blocks received within that interval into one RDD, which is then processed.
Getting started: a wordcount program
First configure the pom file (before this, prepare the IDEA development environment, set up the Hadoop cluster, configure the Spark on YARN cluster, and open a socket server port to send data; adjust the socket port number below as needed).
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<scala.version>2.11.8</scala.version>
<spark.version>2.2.1</spark.version>
<hadoop.version>2.7.5</hadoop.version>
<encoding>UTF-8</encoding>
</properties>
<dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.2.2</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
</plugin>
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<executions>
<execution>
<id>scala-compile-first</id>
<phase>process-resources</phase>
<goals>
<goal>add-source</goal>
<goal>compile</goal>
</goals>
</execution>
<execution>
<id>scala-test-compile</id>
<phase>process-test-resources</phase>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<executions>
<execution>
<phase>compile</phase>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
Steps of the WordCount real-time streaming program
Step 1: initialize the program entry point
Step 2: obtain the data stream
Step 3: process the data
Step 4: output the data
Step 5: start the job
In practice, most jobs follow these five steps, plus extracting and encapsulating the shared parts.
Scala version
object WordCount {
def main(args: Array[String]): Unit = {
//Step 1: initialize the program entry point
val conf = new SparkConf().setMaster("local[2]").setAppName("WordCount")
val ssc = new StreamingContext(conf, Seconds(1))
//Step 2: obtain the data stream
val lines = ssc.socketTextStream("localhost", 9999)
//Step 3: process the data
val words = lines.flatMap(_.split(" "))
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
//Step 4: output the data
wordCounts.print()
//Step 5: start the job
ssc.start()
ssc.awaitTermination()
ssc.stop()
}
}
Java version
public class WordCount {
public static void main(String[] args) throws Exception{
//Step 1: initialize the program entry point
SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("WordCount");
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
//Step 2: obtain the data source
JavaReceiverInputDStream<String> lines = jssc.socketTextStream("10.148.15.10", 9999);
//Step 3: process the data
JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());
JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);
//Step 4: output the data
wordCounts.print();
//Step 5: start the program
jssc.start();
jssc.awaitTermination();
jssc.stop();
}
}
Data sources for real-time SparkStreaming programs
Socket source
See the getting-started program above.
HDFS source
Note: if HDFS runs in high-availability mode, the cluster's core-site.xml and hdfs-site.xml files must be copied into the project's resources directory.
object WordCountForHDFSSource {
def main(args: Array[String]): Unit = {
//Step 1: initialize the program entry point
val conf = new SparkConf().setMaster("local[2]").setAppName("WordCount")
val ssc = new StreamingContext(conf, Seconds(1))
//Step 2: obtain the data stream
val lines = ssc.textFileStream("/tmp")
//Step 3: process the data
val words = lines.flatMap(_.split(" "))
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
//Step 4: output the data
wordCounts.print()
//Step 5: start the job
ssc.start()
ssc.awaitTermination()
ssc.stop()
}
}
Custom data source
/**
* A custom Receiver that receives data from a socket.
* The received data is parsed as text separated by \n.
* Usage: nc -lk 8888
*/
object CustomReceiver {
def main(args: Array[String]): Unit = {
//Set the log level
Logger.getLogger("org").setLevel(Level.ERROR)
// Create the context with a 1 second batch size
val sparkConf = new SparkConf().setAppName("CustomReceiver").setMaster("local[2]")
val sc = new SparkContext(sparkConf)
val ssc = new StreamingContext(sc, Seconds(1))
val lines = ssc.receiverStream(new MyCustomReceiver("localhost", 8888))
val words = lines.flatMap(_.split(","))
val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
wordCounts.print()
ssc.start()
ssc.awaitTermination()
ssc.stop(false)
}
}
//A socket-based receiver
class MyCustomReceiver(host: String, port: Int) extends Receiver[String](StorageLevel.MEMORY_ONLY) with Logging {
def onStart() {
// Start a thread that receives the data
new Thread("Socket Receiver") {
override def run() { receive() }
}.start()
}
def onStop() {
// There is nothing much to do as the thread calling receive()
// is designed to stop by itself if isStopped() returns false
// close the connection here if needed
}
/** Create a socket connection and receive data until receiver is stopped */
private def receive() {
var socket: Socket = null
var userInput: String = null
try {
logInfo("Connecting to " + host + ":" + port)
socket = new Socket(host, port)
logInfo("Connected to " + host + ":" + port)
val reader = new BufferedReader(
new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))
userInput = reader.readLine()
while(!isStopped && userInput != null) {
//hand the received data downstream to Spark
store(userInput)
userInput = reader.readLine()
}
reader.close()
socket.close()
logInfo("Stopped receiving")
restart("Trying to connect again")
} catch {
case e: java.net.ConnectException =>
restart("Error connecting to " + host + ":" + port, e)
case t: Throwable =>
restart("Error receiving data", t)
}
}
}
Kafka source
This covers integrating SparkStreaming with Kafka 0.8 and with Kafka 0.10.
SparkStreaming with Kafka-0-8 integration supports version 0.8 and higher
Add the following to pom.xml:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
<version>2.3.3</version>
</dependency>
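With the 0-8 artifact, the receiver-based API looks roughly like this (a sketch assuming an existing `ssc`; the ZooKeeper address, group id, and topic name are placeholders):

```scala
import org.apache.spark.streaming.kafka.KafkaUtils

// receiver-based stream: (ZooKeeper quorum, consumer group, topic -> receiver threads)
val kafkaStream = KafkaUtils.createStream(
  ssc,
  "hadoop1:2181",        // ZooKeeper quorum (placeholder)
  "wordcount_group",     // consumer group id (placeholder)
  Map("wordcount" -> 1)  // topic -> number of receiver threads
)
val lines = kafkaStream.map(_._2)  // keep only the message values
```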
SparkStreaming with Kafka-0-10 integration supports version 0.10 and higher
Add the following to pom.xml:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
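With the 0-10 artifact, the direct (receiver-less) API looks roughly like this (a sketch following the pattern in Spark's integration guide, assuming an existing `ssc`; the broker address, group id, and topic are placeholders):

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "hadoop1:9092",             // broker list (placeholder)
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "wordcount_group",                   // consumer group (placeholder)
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](Array("wordcount"), kafkaParams)
)
val lines = stream.map(_.value)  // keep only the message values
```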