Spark Streaming Overview
Spark Streaming is a built-in Spark module for real-time computation.
It is a micro-batch streaming framework: incoming data is cut into small time-sliced batches and processed as a series of small jobs rather than record by record.
Spark Streaming Architecture
Two kinds of work run side by side: a receiver (collection) thread and processing tasks.
The receiver collects data and reports it to the Driver, which then schedules jobs so that the Executors perform the actual processing.
Spark Streaming Backpressure
Backpressure dynamically adjusts the ingestion rate according to how fast the Executors can actually process the data, so the system is not flooded when processing falls behind.
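Backpressure is disabled by default. A minimal sketch of enabling it on the SparkConf used for the streaming job (the per-partition rate limit is only an illustrative value):

import org.apache.spark.SparkConf

// Enable backpressure so Spark tunes the ingestion rate to the actual processing speed
val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("BackpressureDemo")
  .set("spark.streaming.backpressure.enabled", "true")
  // Optional hard ceiling for the 0-10 direct Kafka stream, in records/second/partition (illustrative value)
  .set("spark.streaming.kafka.maxRatePerPartition", "1000")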
Ways to Create a DStream
A DStream is Spark Streaming's abstraction over the data being processed: a continuous stream represented as a sequence of RDDs.
- From a socket: read data from a specified host and port (see the sketch after this list).
- From an RDD queue (rarely used).
- From a custom Receiver that reads a user-defined data source.
- From Kafka.
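As a quick illustration of the first option, here is a minimal socket word count, assuming something like nc -lk 9999 is feeding text to the port (host and port are placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SocketWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("SocketWordCount")
    val ssc = new StreamingContext(conf, Seconds(3))

    // Create a DStream from a TCP source and run a word count on each batch
    ssc.socketTextStream("localhost", 9999)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}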
The official docs provide two Kafka integrations:
spark-streaming-kafka-0-8 (deprecated since Spark 2.3.0)
- Receiver DStream: by default, offsets are maintained in ZooKeeper.
- Direct DStream: by default, offsets are maintained in the checkpoint, which changes how the StreamingContext must be created (see the sketch below). The offset location can also be specified manually; to guarantee exactly-once semantics, keep the offsets in a transactional store.
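A minimal sketch of the checkpoint-based approach, assuming a local checkpoint directory (./ck is an illustrative path; in production it would normally be HDFS). StreamingContext.getActiveOrCreate rebuilds the context, including offsets, from the checkpoint after a restart:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointRecovery {
  val checkpointDir = "./ck" // illustrative checkpoint location

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("CheckpointRecovery")
    val ssc = new StreamingContext(conf, Seconds(3))
    ssc.checkpoint(checkpointDir)
    // Any DStream and output operation would do; a socket stream keeps the sketch short
    ssc.socketTextStream("localhost", 9999).count().print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Reuse an active context, rebuild one from the checkpoint, or create a new one
    val ssc = StreamingContext.getActiveOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}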
spark-streaming-kafka-0-10 (the recommended version)
See the official integration guide for details.
Create a Kafka topic and produce test data
# Kafka command-line tools shipped with CDH
cd /opt/cloudera/parcels/CDH/bin
# Create the topic
./kafka-topics --create --zookeeper 10.0.0.205:2181 --replication-factor 2 --partitions 3 --topic streamingLLd
# List topics to confirm the creation
./kafka-topics --list --zookeeper 10.0.0.205:2181
# Describe the topic
./kafka-topics --describe --topic streamingLLd --zookeeper 10.0.0.205:2181
# Start a console producer on the topic to send test data
./kafka-console-producer --broker-list cdh05:9092 --topic streamingLLd
Consumer code with spark-streaming-kafka-0-10
package ypl.com.streaming

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreamDirectAPI {
  def main(args: Array[String]): Unit = {
    // Spark configuration
    val conf = new SparkConf().setMaster("local[*]").setAppName("SparkDirectAPI")
    // StreamingContext with a 3-second batch interval
    val ssc = new StreamingContext(conf, Seconds(3))

    // Kafka connection parameters
    val kafkaParams = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "cdh03:9092,cdh04:9092,cdh05:9092",
      ConsumerConfig.GROUP_ID_CONFIG -> "lld",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer]
    )

    val kafkaDStream: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
      ssc,
      // Location strategy: distribute partitions evenly across available Executors
      LocationStrategies.PreferConsistent,
      // Consumer strategy: subscribe to the topic with the parameters above
      ConsumerStrategies.Subscribe[String, String](Set("streamingLLd"), kafkaParams)
    )

    // Word count over the record values
    kafkaDStream.map(_.value())
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
A log4j.properties file placed under the resources directory controls the log output level; the configuration below routes Spark's INFO logs to a rolling file so they do not clutter the console.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Route all application logging to a rolling file
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.append=true
# log4j.appender.file.file=${spark.yarn.app.container.log.dir}/spark.log
log4j.appender.file.file=./spark.log
log4j.appender.file.MaxFileSize=50MB
log4j.appender.file.MaxBackupIndex=5
log4j.logger.org.apache.spark=INFO
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %p [%t] %c{1}:%L - %m%n
# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=INFO
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
pom.xml for the IDEA project
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>ypl.com</groupId>
    <artifactId>Spark-API</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <spark.version>2.1.1</spark.version>
        <scala.version>2.11.12</scala.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
            <!-- <scope>provided</scope>-->
        </dependency>
        <!-- <dependency>-->
        <!--     <groupId>org.apache.spark</groupId>-->
        <!--     <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>-->
        <!--     <version>2.4.5</version>-->
        <!-- </dependency>-->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
With the job running in IDEA, type messages into the console producer and the word-count results appear in the application output.
The job can also be observed in the Spark UI.
The UI address is printed in the application's log file.
Summary of Kafka consumption modes
0-8 ReceiverAPI
- A dedicated Executor reads the data, so reading speed and processing speed can diverge.
- Data is shipped across machines, so a WAL (write-ahead log) is needed to avoid loss.
- The receiving Executor reads with multiple threads; to increase parallelism, multiple streams must be unioned.
- Offsets are stored in ZooKeeper.
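A minimal sketch of the 0-8 Receiver-based stream; it needs the spark-streaming-kafka-0-8 dependency that is commented out in the pom above, and the ZooKeeper address, group id, and thread count are illustrative:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils // 0-8 integration package

object ReceiverAPIDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("ReceiverAPIDemo")
    val ssc = new StreamingContext(conf, Seconds(3))

    // topic -> number of receiver threads; union several such streams to raise parallelism
    val stream = KafkaUtils.createStream(ssc, "10.0.0.205:2181", "lld", Map("streamingLLd" -> 1))
    stream.map(_._2).print() // records arrive as (key, value) pairs

    ssc.start()
    ssc.awaitTermination()
  }
}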
0-8 DirectAPI
- Executors both read the data and compute on it.
- Increase the number of Executors to increase consumption parallelism.
- Offset storage:
  Checkpoint (create the StreamingContext via getActiveOrCreate).
  Manual maintenance (e.g. in a database).
  Offsets must be obtained in the first operator applied to the stream: offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
0-10 DirectAPI
- Executors both read the data and compute on it.
- Increase the number of Executors to increase consumption parallelism.
- Offsets are stored in the __consumer_offsets internal topic, or maintained manually (in a transactional store); a sketch of the manual approach follows.
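A minimal sketch of manual offset handling with the 0-10 API, reusing the kafkaDStream from the consumer code above. It would go before ssc.start(), and assumes enable.auto.commit is left off so offsets are committed only after processing succeeds:

import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, OffsetRange}

kafkaDStream.foreachRDD { rdd =>
  // Offsets must be read from the original Kafka RDD, i.e. in the first operation on the stream
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // Process the batch (illustrative: print the record values)
  rdd.map(_.value()).foreach(println)

  // Commit the consumed offsets back to Kafka's __consumer_offsets topic once processing is done;
  // writing them to a transactional store instead would follow the same foreachRDD pattern
  kafkaDStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}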