Apache Spark

Author: Lijb
Email: lijb1121@163.com
WeChat: ljb1121

Apache Spark is a fast, general-purpose compute engine designed for large-scale data processing. It was open-sourced by UC Berkeley's AMP Lab as a Hadoop-MapReduce-like general parallel framework. Spark has the advantages of Hadoop MapReduce, but unlike MapReduce it can keep intermediate job results in memory, so it no longer needs to read and write HDFS between steps; this makes Spark much better suited to iterative MapReduce-style algorithms such as data mining and machine learning. Spark introduced the in-memory distributed dataset (Resilient Distributed Dataset, RDD), which optimizes iterative workloads in addition to supporting interactive queries. Spark is implemented in Scala and uses Scala as its application framework; unlike Hadoop, Spark and Scala are tightly integrated, so Scala can manipulate distributed datasets as easily as local collection objects. Spark has since grown into a rapidly evolving and widely used ecosystem.

Spark Technology Stack

The Relationship Between Spark and Hadoop

Spark computing is an upgrade and optimization of Hadoop's traditional MapReduce computing; without the hands-on experience and evolution of Hadoop big data there would be no Spark. Loosely speaking, Hadoop MapReduce satisfied big data's basic "material" needs: in the early days people only required that a first-pass computation over the data finish within a reasonable amount of time. Spark, on the other hand, was built with speed and efficiency in mind, so in that sense it addresses the "quality of life" level of the problem.

Hadoop vs. Spark

Hadoop: disk-based iterative computation. Across n iterations, all intermediate results are stored on disk, which adds latency to every iteration. Process-based.

Spark: memory-based iterative computation. Data can be cached in memory, which makes subsequent iterations convenient. Thread-based.

Spark's in-memory computation does not mean that the available memory must match the size of the data (when memory is insufficient, Spark can spill to disk as a cache); Spark can compute over data of any size.
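For example, a dataset larger than the available memory can still be cached by choosing a storage level that spills to disk. A minimal sketch you could run later in spark-shell (the HDFS path is assumed):

import org.apache.spark.storage.StorageLevel

// Cache in memory, spilling partitions that do not fit to local disk (path assumed)
val logs = sc.textFile("hdfs://CentOS:9000/demo/src")
logs.persist(StorageLevel.MEMORY_AND_DISK)
println(logs.count())   // the first action materializes the cache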

Spark Development History

  • 2009: created at UC Berkeley's AMP Lab
  • 2010: first open-sourced
  • June 2013: entered the Apache Incubator
  • February 2014: became an Apache top-level project
  • 2018: Spark 2.3.2 released

Setting Up the Spark Environment

The Big Data Ecosystem

Spark on Yarn

  • Make sure HDFS and YARN are running properly
  • Install and configure Spark
[root@CentOS ~]# tar -zxf spark-2.3.0-bin-hadoop2.6.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.3.0-bin-hadoop2.6/ /usr/spark-2.3.0
[root@CentOS ~]# vi /root/.bashrc 
SPARK_HOME=/usr/spark-2.3.0
HBASE_MANAGES_ZK=false
HBASE_HOME=/usr/hbase-1.2.4
HADOOP_CLASSPATH=/usr/hbase-1.2.4/lib/*
HADOOP_HOME=/usr/hadoop-2.6.0
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$SPARK_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
export HADOOP_CLASSPATH
export HBASE_MANAGES_ZK
export HBASE_HOME
export SPARK_HOME
[root@CentOS ~]# source .bashrc 
[root@CentOS ~]# cd /usr/spark-2.3.0/
[root@CentOS spark-2.3.0]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.3.0]# mv conf/slaves.template conf/slaves
[root@CentOS spark-2.3.0]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.3.0]# vi conf/slaves
CentOS
[root@CentOS spark-2.3.0]# vi conf/spark-env.sh
HADOOP_CONF_DIR=/usr/hadoop-2.6.0/etc/hadoop
YARN_CONF_DIR=/usr/hadoop-2.6.0/etc/hadoop
SPARK_EXECUTOR_CORES=2
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=1G
LD_LIBRARY_PATH=/usr/hadoop-2.6.0/lib/native
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
  • Connect to Spark
[root@CentOS ~]# spark-shell --master yarn --deploy-mode client

If the following error appears:

2018-10-29 18:38:28 ERROR YarnClientSchedulerBackend:70 - Yarn application has already exited with state FINISHED!
2018-10-29 18:38:28 ERROR TransportClient:233 - Failed to send RPC 4830215201639506599 to /192.168.29.128:48563: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
	at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
2018-10-29 18:38:28 ERROR YarnSchedulerBackend$YarnSchedulerEndpoint:91 - Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC 4830215201639506599 to /192.168.29.128:48563: java.nio.channels.ClosedChannelException
	at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)

Solution

<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

Stop the YARN cluster, add the configuration above to yarn-site.xml, and then start YARN again.

If you see "Unable to load native":

2018-10-29 18:35:29 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Solution

LD_LIBRARY_PATH=/usr/hadoop-2.6.0/lib/native
export LD_LIBRARY_PATH

Add the variables above to spark-env.sh.

When it starts normally you should see the following:

[root@CentOS ~]# spark-shell --master yarn --deploy-mode client
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2018-10-29 18:45:44 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = yarn, app id = application_1540809754248_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.
scala >

Alternatively, you can visit http://centos:8088 (Hadoop) and http://centos:8080 (Spark).

Spark Standalone

  • Install and configure Spark
[root@CentOS ~]# tar -zxf spark-2.3.0-bin-hadoop2.6.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.3.0-bin-hadoop2.6/ /usr/spark-2.3.0
[root@CentOS ~]# vi /root/.bashrc 
SPARK_HOME=/usr/spark-2.3.0
HADOOP_HOME=/usr/hadoop-2.6.0
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
export SPARK_HOME
[root@CentOS ~]# source .bashrc 
[root@CentOS ~]# cd /usr/spark-2.3.0/
[root@CentOS spark-2.3.0]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.3.0]# mv conf/slaves.template conf/slaves
[root@CentOS spark-2.3.0]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.3.0]# vi conf/slaves
CentOS
[root@CentOS spark-2.3.0]# vi conf/spark-env.sh
SPARK_MASTER_HOST=CentOS
SPARK_WORKER_CORES=2
SPARK_WORKER_MEMORY=2g
SPARK_MASTER_PORT=7077
export SPARK_MASTER_HOST
export SPARK_WORKER_CORES
export SPARK_MASTER_PORT
export SPARK_WORKER_MEMORY

Start Spark

[root@CentOS ~]# cd /usr/spark-2.3.0/
[root@CentOS spark-2.3.0]# ./sbin/start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/spark-2.3.0/logs/spark-root-org.apache.spark.deploy.master.Master-1-CentOS.out
[root@CentOS spark-2.3.0]# ./sbin/start-slave.sh spark://CentOS:7077
starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-2.3.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-CentOS.out

Connect to the Spark service for computation

Connect to the Spark cluster

[root@CentOS spark-2.3.0]# ./bin/spark-shell --master spark://CentOS:7077 --total-executor-cores 5
2018-11-02 19:33:51 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = spark://CentOS:7077, app id = app-20181102193402-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala>sc.textFile("file:///root/worlds.log").flatMap(_.split(" ")).map((_,1)).groupByKey().map(x=>(x._1,x._2.sum)).collect().foreach(println)
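The same word count can also be written with reduceByKey, which combines values on the map side before the shuffle and is usually preferred over groupByKey for simple aggregations. A sketch against the same assumed worlds.log file:

// Word count with reduceByKey instead of groupByKey (same assumed input file)
sc.textFile("file:///root/worlds.log")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)
  .collect()
  .foreach(println)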

Local Mode

[root@CentOS spark-2.3.0]# ./bin/spark-shell --master local[5] 
2018-11-02 19:51:43 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = local[5], app id = local-1541159513505).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

Spark Architecture

spark on standalone

Terminology:

  • Roles that exist in Standalone mode.
Client: the client process, responsible for submitting jobs to the Master.

Master: the control node in Standalone mode; it receives jobs submitted by the Client, manages the Workers, and instructs Workers to launch the Driver and Executors.

Worker: the daemon on a slave node in Standalone mode; it manages the node's resources, sends periodic heartbeats to the Master, and launches the Driver and Executors on the Master's command.

Driver: each Spark job runs one Driver process, the job's main process; it parses the job, generates Stages, and schedules Tasks onto Executors. It contains the DAGScheduler and the TaskScheduler.

Executor: where the work actually runs. A cluster generally contains multiple Executors; each Executor receives "Launch Task" commands from the Driver and can execute one or more Tasks.
  • Job-related terminology
Stage: a Spark job generally consists of one or more Stages.
Task: a Stage contains one or more Tasks; running multiple Tasks is what provides parallelism.
DAGScheduler: splits a Spark job into one or more Stages; the number of Tasks in each Stage is determined by the number of partitions of the RDD, and the resulting TaskSet is handed to the TaskScheduler.
TaskScheduler: assigns Tasks to Executors for execution.
There are two ways to submit a job: the Driver (the job's master, responsible for parsing the job, generating Stages and scheduling Tasks; it contains the DAGScheduler) runs on a Worker, or the Driver runs on the client. The execution flow for each is described below.
SparkContext: the context of the whole application; it controls the application's life cycle.
The Client submits the application; the Master finds a Worker to start the Driver; the Driver requests resources from the Master or another resource manager, then converts the application into a DAG of RDDs; the DAGScheduler turns the RDD DAG into a DAG of Stages and submits it to the TaskScheduler, which submits tasks to the Executors for execution. While the tasks run, the other components cooperate to ensure the whole application executes smoothly.
  • A few useful notes about this architecture:
1. Each application gets its own executor processes, which stay up for the duration of the whole application and run tasks in multiple threads. This isolates applications from each other, both on the scheduling side (each driver schedules its own tasks) and the executor side (tasks from different applications run in different JVMs). However, it also means that data cannot be shared across different Spark applications (instances of SparkContext) without writing it to an external storage system.

2. Spark is agnostic to the underlying cluster manager. As long as it can acquire executor processes, and these communicate with each other, it is relatively easy to run it even on a cluster manager that also supports other applications (e.g. Mesos/YARN).

3. The driver program must listen for and accept incoming connections from its executors throughout its lifetime (e.g., see spark.driver.port in the networking configuration section: http://spark.apache.org/docs/latest/configuration.html#networking). As such, the driver program must be network addressable from the worker nodes.

4. Because the driver schedules tasks on the cluster, it should run close to the worker nodes, preferably on the same local area network. If you would like to send requests to the cluster remotely, it is better to open an RPC to the driver and have it submit operations from nearby than to run the driver far away from the worker nodes.
Driver running on a Worker

  • Job execution flow:
1. The client submits the job to the Master.
2. The Master instructs one Worker to start the Driver, i.e. the SchedulerBackend. The Worker creates a DriverRunner thread, and the DriverRunner starts the SchedulerBackend process.
3. The Master also instructs the remaining Workers to start Executors, i.e. ExecutorBackends. Each Worker creates an ExecutorRunner thread, and the ExecutorRunner starts the ExecutorBackend process.
4. After starting, each ExecutorBackend registers with the Driver's SchedulerBackend. The SchedulerBackend process contains the DAGScheduler, which generates the execution plan from the user program and schedules it. The tasks of each stage are kept in the TaskScheduler; when the ExecutorBackends report to the SchedulerBackend, the tasks in the TaskScheduler are dispatched to them for execution.
5. The job ends when all stages have completed.
Driver running on the Client


  • Job flow:
1. The client starts and runs the user program directly, launching the Driver-side components: DAGScheduler, BlockManagerMaster, and so on.
2. The client-side Driver registers with the Master.
3. The Master instructs the Workers to start Executors. Each Worker creates an ExecutorRunner thread, which starts the ExecutorBackend process.
4. After starting, each ExecutorBackend registers with the Driver's SchedulerBackend. The Driver's DAGScheduler parses the job and generates the Stages; the Tasks of each Stage are assigned to Executors by the TaskScheduler.
5. The job ends when all stages have completed.
spark on yarn


Here the Spark AppMaster corresponds to the SchedulerBackend in Standalone mode, and the Executors correspond to the standalone ExecutorBackends; the Spark AppMaster contains the DAGScheduler and the YarnClusterScheduler. For the Spark on YARN execution flow, see the "spark on Yarn" part of http://www.csdn.net/article/2013-12-04/2817706--YARN.

  • Job flow:
1. Set the environment variables spark.local.dir and spark.ui.port. When the NodeManager starts the ApplicationMaster it passes the LOCAL_DIRS (YARN_LOCAL_DIRS) variable, whose value is used for spark.local.dir; temporary files are later written to this directory.
2. Obtain the appAttemptId that the NodeManager passes to the ApplicationMaster.
3. Create the AMRMClient, i.e. the communication channel between the ApplicationMaster and the ResourceManager.
4. Start the user program via startUserClass(): a thread invokes the user program's main method through reflection. At this point the user program initializes the SparkContext, which contains the DAGScheduler and the TaskScheduler.
5. Register with the ResourceManager.
6. Request containers from the ResourceManager, which schedules Executors onto the appropriate NodeManagers based on the input data and the requested resources; the scheduling algorithm takes the locality of the input data into account.

Spark RDD Programming

A resilient distributed dataset (RDD) is the core abstraction of Spark computation; all RDD-related computations run in parallel.

Creating RDDs in Spark

  • Create an RDD from a Scala collection
scala> var list=Array("hello world","ni hao")
list: Array[String] = Array(hello world, ni hao)

scala> var rdd1=sc.parallelize(list) //parallelize the collection
rdd1: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[6] at parallelize at <console>:27
scala> rdd1.partitions.length  //the number of partitions comes from the local[5] setting used earlier
res5: Int = 5

Note: you can specify the number of slices/partitions manually, e.g. var rdd1=sc.parallelize(list,3)

scala> sc.parallelize(List(1,2,4,5),3).partitions.length
res13: Int = 3
  • Create an RDD from external data
scala> sc.textFile("hdfs://CentOS:9000/demo/src")
res16: org.apache.spark.rdd.RDD[String] = hdfs://CentOS:9000/demo/src MapPartitionsRDD[22] at textFile at <console>:25

scala> sc.textFile("hdfs://CentOS:9000/demo/src").map(_.split(" ").length).reduce(_+_)
res19: Int = 13

1. If you read a local file, the file must be copied to every worker node.

2. textFile supports reading files, directories and gz files: textFile("/my/directory"), textFile("/my/directory/*.txt"), textFile("/my/directory/*.gz")

3. sc.textFile("hdfs://CentOS:9000/xxx/xx", numPartitions): the specified number of partitions must be >= the number of HDFS blocks.

  • Other ways to read data

wholeTextFiles

scala> sc.wholeTextFiles("/root/src/").collect().foreach(t=>println(t._1+"=>"+t._2))

The return type of this method is RDD[(filename, content)].

sequenceFile

scala> sc.sequenceFile[String,String]("/root/part-r-00000").collect().foreach(println)
(192.168.0.1,总数:5)
(192.168.0.3,总数:10)
(192.168.0.4,总数:5)
(192.168.0.5,总数:5)

Note: the key and value of this SequenceFile are known to be Text; Spark can automatically convert them to the specified compatible types.
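As a small sketch (output path assumed), a pair RDD can be written out as a SequenceFile and then read back the same way:

// Write and re-read a SequenceFile from spark-shell (output path assumed)
val pairs = sc.parallelize(List(("192.168.0.1", 5), ("192.168.0.3", 10)))
pairs.saveAsSequenceFile("file:///root/seq_out")                  // keys/values are written as Writables
sc.sequenceFile[String, Int]("file:///root/seq_out").collect()    // converted back to Scala types
  .foreach(println)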

newAPIHadoopRDD

Setting Up a Spark Development Environment

  • Create an empty Maven project
  • Add the following dependencies
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <encoding>UTF-8</encoding>
    <scala.version>2.11.12</scala.version>
    <spark.version>2.3.0</spark.version>
    <hadoop.version>2.6.0</hadoop.version>
</properties>
<dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
</dependencies>
  • Add the following plugins
<build>
        <plugins>
            <plugin>
                <!-- compiles the Scala code -->
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.1</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <!-- compiles the Java code -->
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
                <executions>
                    <execution>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>  
Local Mode
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object TestRDD01 {
  def main(args: Array[String]): Unit = {
    //create the SparkContext
    val conf = new SparkConf().setAppName("my spark").setMaster("local[5]")
    val sc = new SparkContext(conf)
   var arr= Array("Hello world","good good study","day day up")
   sc.parallelize(arr,2)
       .flatMap(_.split(" "))
       .map((_,1))
       .groupByKey()
       .map(tuple => (tuple._1,tuple._2.sum))
       .sortBy(_._2,false)
       .collect()
       .foreach(println)
    sc.stop()
  }
}
Remote Deployment
[root@CentOS spark-2.3.0]# ./bin/spark-submit \
						--class com.baizhi.rdd01.TestRDD01 \
						--master spark://CentOS:7077 \
						--deploy-mode cluster \
						--supervise \
						--executor-memory 1g \
						--total-executor-cores 2 \
						/root/rdd-1.0-SNAPSHOT.jar

For more deployment modes, see http://spark.apache.org/docs/latest/submitting-applications.html

Reading Third-Party Data with Spark

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.LongWritable
import org.apache.hadoop.mapreduce.lib.db.{DBConfiguration, DBInputFormat}
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

//create the SparkContext
    val conf = new SparkConf().setAppName("mysql rdd").setMaster("local[5]")
    val sc = new SparkContext(conf)

    var jobConf=new Configuration()
    //configure the database connection
    DBConfiguration.configureDB(jobConf,
      "com.mysql.jdbc.Driver",
      "jdbc:mysql://CentOS:3306/test",
    "root","root")

    jobConf.set(DBConfiguration.INPUT_CLASS_PROPERTY,"com.baizhi.rdd01.UserDBWritable")
    jobConf.set(DBConfiguration.INPUT_QUERY,"select * from t_user")
    jobConf.set(DBConfiguration.INPUT_COUNT_QUERY,"select count(*) from t_user")

    val userRDD = sc.newAPIHadoopRDD(jobConf,classOf[DBInputFormat[UserDBWritable]],classOf[LongWritable],classOf[UserDBWritable])
    userRDD.map(tuple=>(tuple._2.id,tuple._2.salary))
      .groupByKey()
      .map(tuple=>(tuple._1,tuple._2.sum))
      .saveAsTextFile("file:///D:/spark_result")
    sc.stop()
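The UserDBWritable referenced above is not shown in the original snippet. A hypothetical sketch of what it might look like, assuming a t_user table with id, name and salary columns:

import java.sql.{PreparedStatement, ResultSet}
import org.apache.hadoop.mapreduce.lib.db.DBWritable

// Hypothetical value class for DBInputFormat; field names assume the columns id, name, salary
class UserDBWritable extends DBWritable with Serializable {
  var id: Int = _
  var name: String = _
  var salary: Double = _

  // Used when writing back to the database; not needed for reads
  override def write(statement: PreparedStatement): Unit = {
    statement.setInt(1, id)
    statement.setString(2, name)
    statement.setDouble(3, salary)
  }

  // Populates the object from one row of the INPUT_QUERY result set
  override def readFields(resultSet: ResultSet): Unit = {
    id = resultSet.getInt("id")
    name = resultSet.getString("name")
    salary = resultSet.getDouble("salary")
  }
}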

Remote Deployment

Option 1

Copy the required jar files into the jars directory of the Spark installation.

Option 2 (spark-2.3.2-bin-without-hadoop.tgz)

Configure the following in spark-env.sh:

SPARK_DIST_CLASSPATH=$(/usr/hadoop-2.6.0/bin/hadoop classpath)
export SPARK_DIST_CLASSPATH

At the same time, add the paths of the jars the application needs to the current Hadoop classpath:

HADOOP_CLASSPATH=/xxx/xxx.jar
export HADOOP_CLASSPATH
./bin/spark-submit --master spark://CentOS:7077 --class com.baizhi.rdd01.TestRDD01 --jars /root/mysql-connector-java-5.1.6.jar  --packages 'mysql:mysql-connector-java:5.1.38' --driver-memory 1g --driver-library-path /root/mysql-connector-java-5.1.6.jar --executor-memory 1g --total-executor-cores 2 /root/rdd-1.0-SNAPSHOT.jar

Dependencies

<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>${spark.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>${hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>${hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
    <version>${hadoop.version}</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.38</version>
</dependency>

Spark RDD Operators

  • Transformations

A transformation only turns one RDD into a new RDD; it has no effect on the original RDD.

  • Actions

An action triggers all of the preceding transformations so the RDDs are actually computed, and returns the results to the Driver.
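A minimal sketch of this laziness, runnable in spark-shell:

// Transformations are lazy; nothing runs until an action is called
val rdd = sc.parallelize(1 to 5)
val doubled = rdd.map(_ * 2)      // transformation: only records the lineage, no job yet
println(doubled.toDebugString)    // shows the planned lineage
val result = doubled.collect()    // action: triggers the computation and returns the data to the Driver
result.foreach(println)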

Transformations

  • map(func)

Return a new distributed dataset formed by passing each element of the source through a function func.


scala> val list=Array("a","b","c","a")
scala> val rdd=sc.parallelize(list)
scala> rdd.map(x=>(x,1)).collect().foreach(println)
(a,1)
(b,1)
(c,1)
(a,1)
  • filter(func)

Return a new dataset formed by selecting those elements of the source on which func returns true.


scala> val rdd=sc.parallelize(Array(1,2,3,4,5,6))
scala> rdd.filter(x=> x%2==0).collect()
res14: Array[Int] = Array(2, 4, 6)

  • flatMap(func)

Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).


scala> val rdd=sc.parallelize(Array("hello world","hello boy"))
scala> rdd.flatMap(line=> line.split(" ")).collect()
res15: Array[String] = Array(hello, world, hello, boy)
  • mapPartitions(func)

Similar to map, but runs separately on each partition (block) of the RDD, so func must be of type Iterator[T] => Iterator[U] when running on an RDD of type T.


scala>  val fun:(Iterator[String])=>Iterator[(String,Int)]=(x)=>{
     |     var lst=List[(String,Int)]()
     |     for(i <- x){
     |       lst ::= i->1
     |     }
     |     lst.iterator
     |   }
fun: Iterator[String] => Iterator[(String, Int)] = <function1>

scala> sc.parallelize(List("a","b","c","d")).mapPartitions(fun).collect()
res5: Array[(String, Int)] = Array((b,1), (a,1), (d,1), (c,1))

  • mapPartitionsWithIndex(func)

Similar to mapPartitions, but also provides func with an integer value representing the index of the partition, so func must be of type (Int, Iterator[T]) => Iterator[U] when running on an RDD of type T.

scala> val fun:(Int,Iterator[String])=>Iterator[(String,Int)]=(part,x)=>{
     |     var lst=List[(String,Int)]()
     |     for(i <- x){
     |       lst ::= i-> part
     |     }
     |     lst.iterator
     |   }
scala> sc.parallelize(List("a","b","c","d"),3).mapPartitionsWithIndex(fun).collect()
res8: Array[(String, Int)] = Array((a,0), (b,1), (d,2), (c,2))
  • union(otherDataset)|intersection(otherDataset)
scala> var rdd1=sc.parallelize(Array(("张三",1000),("李四",100),("赵六",300)))
scala> var rdd2=sc.parallelize(Array(("张三",1000),("王五",100),("温晓琪",500)))
scala> rdd1.union(rdd2).collect()
res9: Array[(String, Int)] = Array((张三,1000), (李四,100), (赵六,300), (张三,1000), (王五,100), (温晓琪,500))
scala> rdd1.intersection(rdd2).collect()
res10: Array[(String, Int)] = Array((张三,1000))
  • groupByKey([numPartitions])

When called on a dataset of (K, V) pairs, returns a dataset of (K, Iterable[V]) pairs.


scala> var rdd=sc.parallelize(Array(("张三",1000),("李四",100),("赵六",300),("张三",500)))

scala> rdd.groupByKey().collect
res13: Array[(String, Iterable[Int])] = Array((赵六,CompactBuffer(300)), (张三,CompactBuffer(1000, 500)), (李四,CompactBuffer(100)))

scala> rdd.groupByKey().map(x=>(x._1,x._2.sum)).collect
res14: Array[(String, Int)] = Array((赵六,300), (张三,1500), (李四,100))

  • reduceByKey(func, [numPartitions])

When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func, which must be of type (V,V) => V. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.


scala> rdd.reduceByKey((x,y)=>x+y).collect()
res15: Array[(String, Int)] = Array((赵六,300), (张三,1500), (李四,100))

  • aggregateByKey(zeroValue)(seqOp, combOp, [numPartitions])

When called on a dataset of (K, V) pairs, returns a dataset of (K, U) pairs where the values for each key are aggregated using the given combine functions and a neutral “zero” value. Allows an aggregated value type that is different than the input value type, while avoiding unnecessary allocations. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.

scala> var rdd=sc.parallelize(Array(("张三",1000),("李四",100),("赵六",300),("张三",500)))
scala> rdd.aggregateByKey(0)((x,y)=>x+y,(x,y)=>x+y).collect()
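As a sketch building on the same rdd, the zero value can also be a compound buffer; here a (sum, count) pair computes the average per key (seqOp merges a value into the per-partition buffer, combOp merges buffers across partitions):

// Average per key with aggregateByKey: the buffer is a (sum, count) pair
val sumCount = rdd.aggregateByKey((0, 0))(
  (acc, v) => (acc._1 + v, acc._2 + 1),            // seqOp: fold a value into the partition buffer
  (a, b)   => (a._1 + b._1, a._2 + b._2))          // combOp: merge buffers from different partitions
sumCount.mapValues { case (sum, count) => sum.toDouble / count }.collect()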
  • sortByKey([ascending], [numPartitions])

When called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean ascending argument.

scala> var rdd=sc.parallelize(Array(("a",1000),("b",100),("d",300),("c",500)))
scala> rdd.sortByKey(true).collect()
res21: Array[(String, Int)] = Array((a,1000), (b,100), (c,500), (d,300))

scala> rdd.sortByKey(false).collect()
res22: Array[(String, Int)] = Array((d,300), (c,500), (b,100), (a,1000))

  • join(otherDataset, [numPartitions])

When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are supported through leftOuterJoin, rightOuterJoin, and fullOuterJoin.


scala> var rdd1=sc.parallelize(Array(("001","张三"),("002","李四"),("003","王五")))
scala> var rdd2=sc.parallelize(Array(("001","苹果"),("002","手机"),("001","橘子")))
scala> rdd1.join(rdd2).collect()
res23: Array[(String, (String, String))] = Array((002,(李四,手机)), (001,(张三,苹果)), (001,(张三,橘子)))
  • cogroup(otherDataset, [numPartitions])|groupWith

When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (Iterable[V], Iterable[W])) tuples. This operation is also called groupWith.


scala> var rdd1=sc.parallelize(Array(("001","张三"),("002","李四"),("003","王五")))
scala> var rdd2=sc.parallelize(Array(("001","苹果"),("002","手机"),("001","橘子")))
scala> rdd1.cogroup(rdd2).collect()
res24: Array[(String, (Iterable[String], Iterable[String]))] = Array((002,(CompactBuffer(李四),CompactBuffer(手机))), (003,(CompactBuffer(王五),CompactBuffer())), (001,(CompactBuffer(张三),CompactBuffer(苹果, 橘子))))

Actions

  • reduce(func)

Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel.


scala> var rdd=sc.parallelize(List("a","b","c"))
scala> rdd.reduce(_+","+_)
res27: String = a,b,c
  • collect()

Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.


  • count()

Return the number of elements in the dataset.


scala> var rdd=sc.parallelize(List("a","b","c"))
scala> rdd.count()
res28: Long = 3
  • first()|take(n)

Return the first element of the dataset (similar to take(1)).

scala> var rdd=sc.parallelize(List("a","b","c"))
scala> rdd.first()
res29: String = a

scala> rdd.take(1)
res30: Array[String] = Array(a)

scala> rdd.take(2)
res31: Array[String] = Array(a, b)

  • takeOrdered(n, [ordering])

Return the first n elements of the RDD using either their natural order or a custom comparator.

scala> var rdd= sc.parallelize(Array(("a",3),("b",1),("c",4)),5)
scala> val s=new Ordering[(String, Int)]{
     | override def compare(x: (String, Int), y: (String, Int)): Int = {
     | return -1 * (x._2-y._2)
     | }
     | }
scala> rdd.takeOrdered(2)(s)
res37: Array[(String, Int)] = Array((c,4), (a,3))
  • saveAsTextFile(path)

Write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system. Spark will call toString on each element to convert it to a line of text in the file.

scala> sc.textFile("file:///root/worlds.log").flatMap(_.split(" ")).map(x=>(x,1)).reduceByKey(_+_,1).saveAsTextFile("file:///cc")

  • countByKey()

Only available on RDDs of type (K, V). Returns a hashmap of (K, Int) pairs with the count of each key.

scala>  sc.textFile("file:///root/worlds.log").flatMap(_.split(" ")).map(x=>(x,1)).countByKey()
res55: scala.collection.Map[String,Long] = Map(this -> 1, demo -> 1, is -> 1, good -> 2, up -> 1, a -> 1, come -> 1, babay -> 1, on -> 1, day -> 2, study -> 1)
  • foreach(func)

Run a function func on each element of the dataset. This is usually done for side effects such as updating an Accumulator or interacting with external storage systems.
Note: modifying variables other than Accumulators outside of the foreach() may result in undefined behavior. See Understanding closures for more details.
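Since foreach runs on the executors, side effects should go through an accumulator rather than a local driver variable. A minimal sketch using the longAccumulator available on SparkContext in Spark 2.x:

// Count even numbers with an accumulator; only the Driver reads acc.value
val nums = sc.parallelize(1 to 100)
val acc = sc.longAccumulator("evens")
nums.foreach(x => if (x % 2 == 0) acc.add(1))   // executed on the executors
println(acc.value)                              // 50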

Census Example

Requirement: a directory contains log files named yob<year>.txt; each line of a log has the following format:

name,sex,count
....

Compute, for each year, the ratio of newborn boys to girls and produce a report.

Hadoop MapReduce

Read with TextInputFormat
//Mapper
class UserSexCountMapper extends Mapper[LongWritable,Text,Text,Text]{
  override def map(key: LongWritable, value: Text, context: Mapper[LongWritable, Text, Text, Text]#Context): Unit ={
    val path = context.getInputSplit().asInstanceOf[FileSplit].getPath()
    var filename=path.getName()
    var year=filename.substring(filename.lastIndexOf(".")-4,filename.lastIndexOf("."))
    val tokens = value.toString.split(",")
    context.write(new Text(year),new Text(tokens(1)+":"+tokens(2)))
  }
}
//Reducer
class UserSexReducer extends Reducer[Text,Text,Text,Text]{
  override def reduce(key: Text, values: lang.Iterable[Text], context: Reducer[Text, Text, Text, Text]#Context): Unit = {
    var mtotal=0
    var ftotal=0
    for(i <- values){
       var value:Text= i
       var sex=value.toString.split(":")(0)
      if(sex.equals("M")){
        mtotal += value.toString.split(":")(1).toInt
      }else{
        ftotal += value.toString.split(":")(1).toInt
      }
    }
    context.write(key,new Text("男:"+mtotal+",女:"+ftotal))
  }
}
//submit the job
......

Spark Solution

import org.apache.spark.storage.StorageLevel
import org.apache.spark.{SparkConf, SparkContext}
/**
  * SexAndCountVector represents the number of boys and the number of girls.
  * A case class does not need any method implementations.
  * @param m number of boys
  * @param f number of girls
  */
case class SexAndCountVector(var m:Int,var f:Int)

object TestNamesDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    conf.setMaster("local[10]") //local mode --> for remote deployment use "spark://CentOS:7077"
    conf.setAppName("names counts")
    val sc = new SparkContext(conf)

    var cacheRDD = sc.wholeTextFiles("file:///D:/demo/names")
        .map(touple=>(getYear(touple._1),touple._2.split("\r\n")))
        .flatMap(tupe=>for(i<-tupe._2) yield (tupe._1,{
          var s=i.split(",")
          //s(1)+":"+s(2) // would turn the "2016,F,14772" format into "2016,F:14772"
          if (s(1).equals("M")) {

            new SexAndCountVector(s(2).toInt, 0) //vector for the male count
          } else {
            new SexAndCountVector(0, s(2).toInt)  //vector for the female count
          }
        })).reduceByKey((s1,s2)=>{
              s1.f=s1.f+s2.f  //sum the female component
              s1.m=s1.m+s2.m  //sum the male component
              s1  //return s1

            },1).persist(StorageLevel.DISK_ONLY) //use a single partition; cache()/persist() caches -> cacheRDD.unpersist() clears the cache

    cacheRDD.map(tuple=>tuple._1+"\t"+"男:"+tuple._2.m+",女:"+tuple._2.f) //format as e.g. "2017  男:1,女:1"
      .saveAsTextFile("file:///D:/demo/names_result")//for remote deployment: "file:///root/names_result"
    sc.stop()

  }
  def getYear(name:String):String={
    val i = name.lastIndexOf(".")
    return name.substring(i-4,i)//for yob2017.txt this is the substring at indices 3 to 7
  }

}

RDD Dependencies (Lineage): Narrow and Wide Dependencies, DAG Generation, Partitions and Parallelism

How RDDs Compute in Parallel

  • Dependencies between RDDs (lineage)
Narrow dependency: a "one-to-one / many-to-one" relationship.
	Each partition of the parent RDD is used by at most one partition of the child RDD; map, filter, union and similar operations produce narrow dependencies. For RDDs with narrow dependencies, all parent partitions can be computed in a pipelined fashion without shuffling data across the network.

Wide dependency: a "one-to-many / many-to-many" relationship.
	A partition of the parent RDD is used by multiple partitions of the child RDD; groupByKey, reduceByKey, sortByKey and similar operations produce wide dependencies. Wide dependencies are usually accompanied by a shuffle: the parent partitions must be computed first and the data is then shuffled between nodes.
In other words, if a partition of the parent RDD is used by only a single partition of a single child RDD it is a narrow dependency; otherwise it is a wide dependency (see the sketch after this list).

What is the benefit of this design?
This dependency design gives Spark built-in fault tolerance and greatly speeds up execution. Through its lineage an RDD remembers how it was derived from other RDDs; lineage records coarse-grained transformation behaviour, so when some partitions of an RDD are lost, Spark has enough information to recompute and recover just those partitions, which improves performance. Of the two kinds of dependencies, failure recovery is cheaper for narrow dependencies: only the lost partitions need to be recomputed from the corresponding parent partitions (not all partitions), and the recomputation can run in parallel on different nodes. For wide dependencies, the failure of a single node usually means the recomputation involves multiple parent partitions, which is more expensive. In addition, Spark provides data checkpointing and logging to persist intermediate RDDs, so failure recovery does not have to trace back to the very beginning. During recovery Spark compares the cost of using a checkpoint with the cost of recomputing the RDD partitions and automatically picks the better strategy.
  • How the DAG is divided into stages
	Stages are generally divided at shuffle boundaries. Within each stage every node takes part in the parallel computation; whenever a wide dependency is encountered a new stage is created automatically, and the stages are actually executed when an action operator runs.

	Stage division algorithm:
	Work backwards from the RDD on which the action was triggered. First create a stage for the last RDD, then walk backwards through the lineage: whenever an RDD with a wide dependency is found, create a new stage with that RDD as the last RDD of the new stage, and so on, dividing stages according to wide and narrow dependencies until all RDDs have been visited.
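A small sketch you can try in spark-shell to see the two kinds of dependencies and how the lineage splits into stages (the output shown in comments is indicative):

// Inspect dependency types and lineage
val nums    = sc.parallelize(1 to 10, 2)
val mapped  = nums.map(x => (x % 3, x))      // map -> narrow dependency
val grouped = mapped.groupByKey()            // groupByKey -> wide (shuffle) dependency

println(mapped.dependencies)                 // e.g. List(org.apache.spark.OneToOneDependency@...)
println(grouped.dependencies)                // e.g. List(org.apache.spark.ShuffleDependency@...)
println(grouped.toDebugString)               // lineage printed with a stage boundary at the shuffle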

Stage division: source code walkthrough

/**
 * Submit a job to the job scheduler and get a JobWaiter object back. The JobWaiter object
 * can be used to block until the job finishes executing or can be used to cancel the job.
 */
def submitJob[T, U](
    rdd: RDD[T],
    func: (TaskContext, Iterator[T]) => U,
    partitions: Seq[Int],
    callSite: CallSite,
    resultHandler: (Int, U) => Unit,
    properties: Properties): JobWaiter[U] = {
  // Check to make sure we are not launching a task on a partition that does not exist.
  val maxPartitions = rdd.partitions.length
  partitions.find(p => p >= maxPartitions || p < 0).foreach { p =>
    throw new IllegalArgumentException(
      "Attempting to access a non-existent partition: " + p + ". " +
        "Total number of partitions: " + maxPartitions)
  }

  val jobId = nextJobId.getAndIncrement()
  if (partitions.size == 0) {
    return new JobWaiter[U](this, jobId, 0, resultHandler)
  }

  assert(partitions.size > 0)
  val func2 = func.asInstanceOf[(TaskContext, Iterator[_]) => _]
  val waiter = new JobWaiter(this, jobId, partitions.size, resultHandler)
  eventProcessLoop.post(JobSubmitted(
    jobId, rdd, func2, partitions.toArray, callSite, waiter,
    SerializationUtils.clone(properties)))
  waiter
}
The important piece in this code is eventProcessLoop (DAGSchedulerEventProcessLoop, an inner class of DAGScheduler): its post method is called to send a JobSubmitted message. From the source you can see that when the message is received, the dagScheduler's handleJobSubmitted method is invoked. This method is the core entry point of DAGScheduler job scheduling.

private[scheduler] def handleJobSubmitted(jobId: Int,
    finalRDD: RDD[_],
    func: (TaskContext, Iterator[_]) => _,
    partitions: Array[Int],
    callSite: CallSite,
    listener: JobListener,
    properties: Properties) {
  var finalStage: ResultStage = null
  try {
    // New stage creation may throw an exception if, for example, jobs are run on a
    // HadoopRDD whose underlying HDFS files have been deleted.
//Step 1: use the last RDD that triggered the job to create finalStage. This simply creates a stage
//and adds it to the DAGScheduler's cache (the stage carries an important flag, isShuffleMap).
    finalStage = newResultStage(finalRDD, partitions.length, jobId, callSite)
  } catch {
    case e: Exception =>
      logWarning("Creating new stage failed due to exception - job: " + jobId, e)
      listener.jobFailed(e)
      return
  }
  if (finalStage != null) {
//Step 2: create a job from finalStage
    val job = new ActiveJob(jobId, finalStage, func, partitions, callSite, listener, properties)
    clearCacheLocs()
    logInfo("Got job %s (%s) with %d output partitions".format(
      job.jobId, callSite.shortForm, partitions.length))
    logInfo("Final stage: " + finalStage + "(" + finalStage.name + ")")
    logInfo("Parents of final stage: " + finalStage.parents)
    logInfo("Missing parents: " + getMissingParentStages(finalStage))
    val jobSubmissionTime = clock.getTimeMillis()
    jobIdToActiveJob(jobId) = job
//Step 3: add the job to the in-memory cache
    activeJobs += job
    finalStage.resultOfJob = Some(job)
    val stageIds = jobIdToStageIds(jobId).toArray
    val stageInfos = stageIds.flatMap(id => stageIdToStage.get(id).map(_.latestInfo))
    listenerBus.post(
      SparkListenerJobStart(job.jobId, jobSubmissionTime, stageInfos, properties))
//Step 4 (crucial): submit finalStage with the submitStage method.
//This causes the first stage to be submitted; the other stages go into the waitingStages queue, and parent stages are submitted first via recursion.
    submitStage(finalStage)
  }
//submit the queue of waiting stages
  submitWaitingStages()
}
Next, look at the submitStage method called in step 4. It is the entry point of the stage-division algorithm, although the algorithm is actually made up of the submitStage and getMissingParentStages methods together.

private def submitStage(stage: Stage) {
  val jobId = activeJobForStage(stage)
  if (jobId.isDefined) {
    logDebug("submitStage(" + stage + ")")
    if (!waitingStages(stage) && !runningStages(stage) && !failedStages(stage)) {
//a crucial line: call getMissingParentStages to obtain this stage's parent stages
      val missing = getMissingParentStages(stage).sortBy(_.id)
      logDebug("missing: " + missing)
//this recurses until the earliest stage has no parent stage; the remaining stages are kept in waitingStages
      if (missing.isEmpty) {
        logInfo("Submitting " + stage + " (" + stage.rdd + "), which has no missing parents")
//this is the method that actually submits a stage, covered later
        submitMissingTasks(stage, jobId.get)
      } else {
//if it is not empty there are parent stages: recursively call submitStage to submit them first; this is the essence of the stage-division algorithm
        for (parent <- missing) {
          submitStage(parent)
        }
//and put the current stage into the queue of stages waiting to run
        waitingStages += stage
      }
    }
  } else {
    abortStage(stage, "No active job for stage " + stage.id, None)
  }
}
Now look at the getMissingParentStages method.

private def getMissingParentStages(stage: Stage): List[Stage] = {
  val missing = new HashSet[Stage]
  val visited = new HashSet[RDD[_]]
  // We are manually maintaining a stack here to prevent StackOverflowError
  // caused by recursively visiting (the stack is first-in, last-out)
  val waitingForVisit = new Stack[RDD[_]]
  //define the visit method, called in the loop below on the stage's RDDs
  def visit(rdd: RDD[_]) {
    if (!visited(rdd)) {
      visited += rdd
      val rddHasUncachedPartitions = getCacheLocs(rdd).contains(Nil)
      if (rddHasUncachedPartitions) {
//walk the RDD's dependencies
        for (dep <- rdd.dependencies) {
          dep match {
//for a wide dependency, create a new stage from that RDD and set the isShuffleMap flag to true;
//by default only the last stage is not a ShuffleMap stage
            case shufDep: ShuffleDependency[_, _, _] =>
              val mapStage = getShuffleMapStage(shufDep, stage.firstJobId)
              if (!mapStage.isAvailable) {
//collect the stage
                missing += mapStage
              }
//for a narrow dependency, push the rdd onto the stack; the loop pops one rdd, but another one is pushed here
            case narrowDep: NarrowDependency[_] =>
              waitingForVisit.push(narrowDep.rdd)
          }
        }
      }
    }
  }
//first, push the stage's last rdd onto the stack
  waitingForVisit.push(stage.rdd)
//loop
  while (waitingForVisit.nonEmpty) {
//call the locally defined visit method (defined above) on the rdd popped from the stack
    visit(waitingForVisit.pop())
  }
//return the newly found stages
  missing.toList
}

Submitting a stage: submitMissingTasks

This method creates a batch of tasks for the stage; the number of tasks equals the number of partitions.
It works out the partitions to compute and adds the stage to the runningStages queue.
It also involves a best-location algorithm for each Task, implemented by the getPreferredLocs method.
The idea: starting from the stage's last RDD, look for an RDD whose partitions have been cached or checkpointed; the Task's best location is then the location of that cache or checkpoint, because on that node the Task does not need to recompute the earlier RDDs. If there is none, the TaskScheduler decides which node to run on.
Finally, a TaskSet is created for the stage's tasks and submitted via the TaskScheduler's submitTasks method.

Spark RDD Persistence

How persistence works

A very important feature of Spark is the ability to persist an RDD in memory. When an RDD is persisted, each node keeps the partitions of that RDD it computed in memory and reuses them in later operations on the RDD. So for workloads that run several operations against the same RDD, the RDD only needs to be computed once and can then be used directly, instead of being recomputed every time.

Used well, RDD persistence can improve the performance of a Spark application by as much as 10x in some scenarios. For iterative algorithms and fast interactive use, RDD persistence is essential.

To persist an RDD, simply call its cache() or persist() method. The RDD is cached on each node the first time it is computed. Spark's persistence is also fault tolerant: if any partition of a persisted RDD is lost, Spark automatically recomputes it from its source RDDs using the original transformations.

The difference between cache() and persist() is that cache() is a shorthand for persist(): under the hood cache() calls the no-argument persist(), i.e. persist(MEMORY_ONLY), which keeps the data in memory. To remove an RDD from the cache, use the unpersist() method.
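A minimal sketch of the API just described, runnable in spark-shell (the input path is assumed):

import org.apache.spark.storage.StorageLevel

val lines = sc.textFile("file:///root/worlds.log")   // path assumed
lines.persist(StorageLevel.MEMORY_ONLY)              // equivalent to lines.cache()
println(lines.count())                               // first action computes and caches the partitions
println(lines.filter(_.nonEmpty).count())            // reuses the cached partitions
lines.unpersist()                                    // drop the cached data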

When to use persistence
  • When a large dataset is loaded into an RDD for the first time and reused, persistence is a good choice.

  • RDDs or cached data that are frequently and dynamically updated are not a good fit for cache persistence.

RDD persistence levels
MEMORY_ONLY:
	Store the data in memory as unserialized Java objects. If memory cannot hold all of the data, some partitions will simply not be persisted, and the next time an operator runs on this RDD those unpersisted partitions are recomputed from the source. This is the default persistence strategy, and it is what cache() actually uses.

MEMORY_AND_DISK:
	Store the data as unserialized Java objects, preferring memory; partitions that do not fit in memory are written to disk files and read back from disk the next time an operator runs on the RDD.

MEMORY_ONLY_SER:
	Same as MEMORY_ONLY except that the RDD data is serialized: each partition is stored as a single byte array. This saves a lot of memory and helps avoid the frequent GC that persisted data can otherwise cause.

MEMORY_AND_DISK_SER:
	Same as MEMORY_AND_DISK except that the RDD data is serialized: each partition is stored as a single byte array, which saves memory and helps avoid frequent GC.
DISK_ONLY:
	Write all of the data to disk.
MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc.:
	For any of the levels above, adding the _2 suffix replicates each persisted partition to another node. This replica-based persistence is mainly for fault tolerance: if a node fails and the persisted data in its memory or on its disk is lost, later computations on the RDD can still use the replica on another node; without a replica the data would have to be recomputed from the source.
Choosing a persistence strategy
  • By default MEMORY_ONLY gives the best performance, but only if your memory is comfortably large enough to hold the entire RDD. Since there is no serialization or deserialization, that overhead is avoided; subsequent operators work on pure in-memory data with no disk reads, and no replica needs to be copied and sent to other nodes. Note, however, that in real production environments the scenarios where this level can be used directly are limited; if the RDD contains a lot of data (say, billions of records), using it directly can cause JVM OOM errors.

  • If MEMORY_ONLY overflows memory, try MEMORY_ONLY_SER. This level serializes the RDD data before keeping it in memory, so each partition is just one byte array, which greatly reduces the number of objects and the memory footprint. The extra cost compared with MEMORY_ONLY is serialization and deserialization, but subsequent operators still work on pure in-memory data, so overall performance remains fairly high. As above, if the RDD contains too much data, OOM is still possible.

  • If none of the pure-memory levels can be used, prefer MEMORY_AND_DISK_SER over MEMORY_AND_DISK. At this point the RDD is clearly large and will not fit in memory; serialized data is smaller, saving both memory and disk space. This level also tries memory first and only spills to disk what does not fit.

  • DISK_ONLY and the _2 levels are usually not recommended: reading and writing entirely from disk files degrades performance so sharply that recomputing all of the RDDs is sometimes faster, and the _2 levels must copy every piece of data and send it to other nodes, where the replication and network transfer add significant overhead. Use them only when high availability of the job is required.

public class PersistApp {
public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName(PersistApp.class.getSimpleName()).setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<String> linesRDD = sc.textFile("E:\\test\\scala\\access_2016-05-30.log");
    linesRDD.cache();

    long start = System.currentTimeMillis();
    List<String> list = linesRDD.take(10);
    long end = System.currentTimeMillis();
    System.out.println("first times cost" + (end - start) + "ms");
    System.out.println("-----------------------------------");
    start = System.currentTimeMillis();
    long count = linesRDD.count();
    end = System.currentTimeMillis();
    System.out.println("second times cost" + (end - start) + "ms");
    sc.close();
 }
}
Shared Variables

Normally, when a function is passed to a Spark operation (such as map or reduce), it is executed on a remote cluster node and works on copies of all the variables used in the function. These variables are copied to every machine, and updates to them on the remote machines are not propagated back to the driver program. Supporting general, read-write shared variables across tasks would be inefficient. Spark does, however, provide two limited types of shared variables: broadcast variables and accumulators.

  • Broadcast variables

    Broadcast variables are Spark's other kind of shared variable. Normally, when many operations on an RDD need a variable defined in the driver, the driver sends the variable to the worker nodes once for every operation; if the variable holds a lot of data this creates a heavy transfer load and lowers efficiency. A broadcast variable lets the program efficiently ship a large read-only value to many worker nodes, transferring it only once per node; each executor then reads its locally stored copy directly, with no repeated transfers.
    


Creating and using a broadcast variable works as follows:

Call SparkContext.broadcast(obj) on an object obj of type T to create a broadcast variable of type Broadcast[T]; obj must be Serializable. The value is accessed through the broadcast variable's value method.

Note that broadcasting can become a bottleneck because of the time spent serializing the variable and transferring the serialized data. The default Java serialization used by Spark/Scala is often inefficient, so the spark.serializer property can be set to a more specialized serializer (such as Kryo) for the relevant data types to optimize this step.

object BroadCastApp {
def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("BroadCastApp")
    val sc = new SparkContext(conf)
    val list = List(1, 2, 4, 6, 0, 9)
    val set = mutable.HashSet[Int]()
    val num = 7
    val bset = sc.broadcast(set)
    val bNum = sc.broadcast(7)
    val listRDD = sc.parallelize(list)
    listRDD.map(x => {
        bset.value.+=(x)
        x * bNum.value
    }).foreach(x => print(x + " "))
    println("----------------------")
    for (s <- set) {
        println(s)
    }
    sc.stop()
    }
}
  • Accumulators

Spark's Accumulator is mainly used to let multiple nodes operate on one shared variable. An Accumulator only supports adding, but it does allow many tasks to operate on the same variable in parallel. Tasks can only add to an Accumulator; they cannot read its value. Only the Driver program can read the Accumulator's value.

object AccumulatorApp {
def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("AccumulatorApp").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val list = List(1, 2, 4, 6, 0, 9)
    val listRDD = sc.parallelize(list)
    val acc = sc.accumulator(0)
    listRDD.map(x => {
        /**
          * Inside a task the accumulator can only be written to, never read.
          * count --> action
          * An accumulator can replace a count() used only to measure how much data a
          * transformation processed: count is an action, and once an action has run the
          * data in the preceding RDD partitions is released, so any further operation
          * would have to reload and recompute the data, which hurts performance.
          */
        acc.add(1)
        (x, 1)
    }).count() // an action is required so the map (and the accumulator updates) actually run
    println("Accumulated result: " + acc.value)
    sc.stop()
 }
}

Spark SQL

Spark SQL is a module built on top of Spark Core that translates SQL statements into jobs for the Spark compute engine to execute; the results are usually returned as Datasets or DataFrames.

Entry point: SparkSession

The entry point to all functionality in Spark SQL is the SparkSession class. A basic SparkSession is created with SparkSession.builder(). In Spark 2.0, SparkSession provides built-in support for Hive features, including writing queries in HiveQL, accessing Hive UDFs and reading data from Hive tables; with these features users no longer need an existing Hive setup.

Datasets and DataFrames

A Dataset is a distributed collection of data. The Dataset API was introduced in Spark 1.6; it combines some of the benefits of Spark RDDs (strong typing, powerful lambda/higher-order functions) with the optimizations of the Spark SQL execution engine. A Dataset can be created from JVM objects and then processed with higher-order functions (map, flatMap, filter and so on).

Dataset[(Int,String,Boolean)] --> a collection of arbitrary tuples whose fields have no names

A DataFrame is simply a Dataset with named columns. DataFrames can be built from a very wide range of sources: structured data files, Hive tables, external databases or existing RDDs. When using the Java or Scala API, a DataFrame can be thought of as a dataset of Row objects; in Scala a DataFrame is just Dataset[Row].

Dataset[Row], where Row is a named tuple type: Row = (Int,String,Boolean) + Schema(id,name,sex)

Hello World

import org.apache.spark.sql.SparkSession

object HelloSparkSQL {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder() //create the entry point
      .master("local[5]") //local mode
      .appName("Spark SQL basic example")
      .getOrCreate()
    import spark.implicits._	//bring in the implicit conversions -> must come after the SparkSession is created

    val dataFrame = spark.read.json("file:///D:/examples/src/main/resources/people.json")
    dataFrame.printSchema()
    dataFrame.map(row => row.getAs[String]("name")).collect().foreach(println)

    spark.stop()
  }
}

Creating a Dataset

  • Convert an arbitrary collection to a Dataset
//convert an arbitrary collection to a Dataset
var persons= new Person("zhansgan",18)::new Person("wangwu",26)::Nil
var personDataset=persons.toDS()
personDataset.map( person=> person.name ).collect().foreach(println)
  • Convert a DataFrame to a Dataset
//convert a DataFrame to a Dataset
val personDataset = spark.read.json("file:///D:/examples/src/main/resources/people.json").as[Person]
    personDataset.show()
val value = List(1,2,3).toDS()
value.show()

Creating a DataFrame

Any RDD can be turned into a DataFrame, and a DataFrame corresponds to a table.

  • Create a DataFrame by loading a JSON file
//load a JSON file
val df = spark.read.json("file:///<path to file>")
df.show()
  • Create a DataFrame directly by converting RDD elements to a case class

The case class defines the table schema. The parameter names of the case class are read via reflection and become the column names. Case classes can also be nested or contain complex types such as Seq or Array. The RDD can be implicitly converted to a DataFrame and registered as a table; the registered table can then be used in subsequent SQL statements.

//create a DataFrame directly from RDD elements converted to a case class
val dataFrame = spark.sparkContext.textFile("file:///D:/person.txt")
      .map(line => line.split(","))
      .map(tokens => new Person(tokens(0).toInt, tokens(1), tokens(2).toBoolean, tokens(3).toInt, tokens(4).toFloat))
      .toDF()
  • Create a DataFrame directly from an RDD of tuples
//create a DataFrame directly from an RDD of tuples
val dataFrame = spark.sparkContext.textFile("file:///D:/person.txt")
      .map(line => line.split(","))
      .map(tokens=>(tokens(0),tokens(1),tokens(2),tokens(3),tokens(4)))
      .toDF("id","name","sex","age","salary")
  • Create a DataFrame programmatically
//create a DataFrame programmatically
val dataRDD = spark.sparkContext.textFile("file:///D:/person.txt")
      .map(line => line.split(","))
      .map(tokens=>Row(tokens(0).toInt,tokens(1),tokens(2).toBoolean,tokens(3).toInt,tokens(4).toFloat))
    var fields=StructField("id",IntegerType,true)::StructField("name",StringType,true)::StructField("sex",BooleanType,true)::StructField("age",IntegerType,true)::StructField("salary",FloatType,true)::Nil
    var schema=StructType(fields)
val dataFrame = spark.createDataFrame(dataRDD,schema)

Summary: ways to create a DataFrame

  • spark.read.json("path")
  • RDD[Person].toDF() //Person is a case class
  • RDD[tuple].toDF(column names)
  • RDD[Row] + Schema, via spark.createDataFrame(dataRDD,schema)

Common DataFrame Operations

1,zhangsan,true,18,15000
2,lisi,true,20,20000
3,wangwu,false,18,10000
4,zhaoliu,false,18,10000
//create the SparkSession
    val spark = SparkSession
      .builder() //create the entry point
      .master("local[5]") //local mode
      .appName("Spark SQL basic example")
      .getOrCreate()


//convert an RDD of tuples directly to a DataFrame
val dataFrame = spark.sparkContext.textFile("file:///D:/person.txt")
      .map(line => line.split(","))
      .map(tokens=>(tokens(0).toInt,tokens(1),tokens(2).toBoolean,tokens(3).toInt,tokens(4).toFloat))
        .toDF("id","name","sex","age","salary")
dataFrame.show()
+---+--------+-----+---+-------+
| id|    name|  sex|age| salary|
+---+--------+-----+---+-------+
|  1|zhangsan| true| 18|15000.0|
|  2|    lisi| true| 20|20000.0|
|  3|  wangwu|false| 18|10000.0|
|  4| zhaoliu|false| 18|10000.0|
+---+--------+-----+---+-------+

//query operations
dataFrame.select("id","name","salary")
.where($"name" ==="lisi" or $"salary" > 10000)
.filter($"id" === 1)
.show()
+---+--------+-------+
| id|    name| salary|
+---+--------+-------+
|  1|zhangsan|15000.0|
+---+--------+-------+
//average salary by sex
dataFrame.select("sex","salary").groupBy("sex").mean("salary").show()
+-----+-----------+
|  sex|avg(salary)|
+-----+-----------+
| true|    17500.0|
|false|    10000.0|
+-----+-----------+
//common aggregations
 dataFrame.select("sex","salary")
             .groupBy("sex")
             .agg(("salary","max"),("salary","min"),("salary","mean"))
             .show()
+-----+-----------+-----------+-----------+
|  sex|max(salary)|min(salary)|avg(salary)|
+-----+-----------+-----------+-----------+
| true|    20000.0|    15000.0|    17500.0|
|false|    10000.0|    10000.0|    10000.0|
+-----+-----------+-----------+-----------+
import org.apache.spark.sql.functions._

dataFrame.select("sex","salary")
             .groupBy("sex")
             .agg($"sex".alias("性别"),sum("salary").alias("总薪资"),avg("salary").alias("平均薪资"))
             .drop("sex")
             .sort($"总薪资".desc)
             .limit(2)
             .show()
+-----+-------+-------+
| 性别| 总薪资 |平均薪资|
+-----+-------+-------+
| true|35000.0|17500.0|
|false|20000.0|10000.0|
+-----+-------+-------+

For more, see the API docs: http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset

Operating on DataFrames with SQL

 val dataFrame = spark.sparkContext.textFile("file:///D:/person.txt")
      .map(line => line.split(","))
      .map(tokens=>(tokens(0).toInt,tokens(1),tokens(2).toBoolean,tokens(3).toInt,tokens(4).toFloat))
        .toDF("id","name","sex","age","salary")
//create a local temporary view; it is only visible to the current SparkSession
dataFrame.createOrReplaceTempView("t_user")

spark.sql("select * from t_user where id=1 or name='lisi' order by salary desc limit 2").show()

//create a global view; it can be accessed across sessions by prefixing the name with global_temp
dataFrame.createGlobalTempView("t_user")
spark.sql("select * from global_temp.t_user where id=1 or name='lisi' order by salary desc limit 2").show()

//read the value of a specific column; remember to import spark.implicits._
 spark.sql("select * from global_temp.t_user where id=1 or name='lisi' order by salary desc limit 2")
.map(row => row.getAs[String]("name"))
.foreach(name=>println("name:"+name))

//read several values; by default no implicit Encoder is provided for Map[String,Any]
 implicit var e=Encoders.kryo[Map[String,Any]] 
 spark.sql("select * from t_user where id=1 or name='lisi' order by salary desc limit 2")
        .map(row => row.getValuesMap(List("id","name","sex")))
        .foreach(row => println(row))

User-Defined Aggregate Functions

1,苹果,4.5,2,001
2,橘子,2.5,5,001
3,机械键盘,800,1,002
val dataFrame = spark.sparkContext.textFile("file:///D:/order.log")
      .map(line => line.split(","))
      .map(tokens=>(tokens(0).toInt,tokens(1),tokens(2).toFloat,tokens(3).toInt,tokens(4)))
        .toDF("id","name","price","count","uid")
    dataFrame.createTempView("t_order")
spark.sql("select uid,sum(price * count) cost from t_order group by uid").show()

A custom sum function

DataFrame aggregate function

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class SumUserDefinedAggregateFunction extends UserDefinedAggregateFunction{
  //declare the input fields
  override def inputSchema: StructType = StructType(StructField("price",FloatType,true)::StructField("count",IntegerType,true)::Nil)

  //declare the buffer schema used during aggregation
  override def bufferSchema: StructType = StructType(StructField("totalCost",FloatType,true)::Nil)

  //the result type of the aggregation
  override def dataType: DataType = FloatType

  override def deterministic: Boolean = true

  //initialize the zero value
  override def initialize(buffer: MutableAggregationBuffer): Unit = {
     buffer(0)=0.0f
  }
  //partial (per-partition) accumulation
  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if(!input.isNullAt(0) && !input.isNullAt(1)){
       buffer(0)=buffer.getFloat(0)+ (input.getAs[Float](0) * input.getAs[Int](1))
    }
  }
  //merge partial results; the result must be assigned back to buffer1
  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0)=buffer1.getFloat(0)+buffer2.getFloat(0)
  }

  override def evaluate(buffer: Row): Any = buffer.getFloat(0)
}

val dataFrame = spark.sparkContext.textFile("file:///D:/order.log")
      .map(line => line.split(","))
      .map(tokens=>(tokens(0).toInt,tokens(1),tokens(2).toFloat,tokens(3).toInt,tokens(4)))
        .toDF("id","name","price","count","uid")
    dataFrame.createTempView("t_order")
   
//register the user-defined aggregate function
spark.udf.register("mysum", new SumUserDefinedAggregateFunction())

spark.sql("select uid , mysum(price,count) totalCost  from t_order group by uid").show()

Custom Aggregate Functions for Datasets

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator
case class MyAvgBuffer(var sum:Float,var count:Int)
class MyAvgAggregator extends Aggregator[(Int,String,Boolean,Int,Float),MyAvgBuffer,Float]{
  //initialize the buffer
  override def zero: MyAvgBuffer = new MyAvgBuffer(0.0f,0)
  //per-partition reduction
  override def reduce(b: MyAvgBuffer, a: (Int, String, Boolean, Int, Float)): MyAvgBuffer = {
      b.sum=b.sum + a._5
      b.count += 1
      return  b
  }
  //merge the buffers
  override def merge(b1: MyAvgBuffer, b2: MyAvgBuffer): MyAvgBuffer = {
     b1.sum=b1.sum+b2.sum
     b1.count=b1.count+b2.count
     return b1
  }

  override def finish(reduction: MyAvgBuffer): Float = {
     return reduction.sum/reduction.count
  }

  override def bufferEncoder: Encoder[MyAvgBuffer] = {
    return Encoders.kryo[MyAvgBuffer]
  }

  override def outputEncoder: Encoder[Float] = {
     return Encoders.scalaFloat
  }
}

 val dataset = spark.sparkContext.textFile("file:///D:/person.txt")
.map(line => line.split(","))
.map(tokens=>(tokens(0).toInt,tokens(1),tokens(2).toBoolean,tokens(3).toInt,tokens(4).toFloat))
.toDS()

val aggregator = new MyAvgAggregator()
val value = aggregator.toColumn.name("avgSalary")
dataset.select(value).show()

Reading and Saving Data

Use case --> data migration: read a large amount of data from Oracle and write it to MySQL, or migrate data from HDFS to MySQL.

  • Load data from MySQL
  val jdbcDF = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://CentOS:3306/test") //for MySQL on Windows use localhost:3306/test
      .option("dbtable", "t_user")
      .option("user", "root")
      .option("password", "root")
      .load()
  jdbcDF.select("id","name","salary").show()
  • Read CSV data from local disk
val frame = spark.read
.option("header", "true") //treat the first row as the header
.csv("D:/user.csv")
frame.show()
  • Write data to MySQL
//create the SparkSession
    val spark = SparkSession
      .builder() //create the entry point
      .master("local[5]") //local mode
      .appName("Spark SQL basic example")
      .getOrCreate()
    import spark.implicits._

//build the data
val personDF = spark.sparkContext.parallelize(Array("14 tom 1500", "15 jerry 20000", "16 kitty 26000"))
      .map(_.split(" "))
      .map(p => (p(0).toInt, p(1).trim, p(2).toDouble))
      .toDF("id","name","salary")

//database connection properties
val props=new Properties()
props.put("user", "root")
props.put("password", "root")

//write the data to the specified table
personDF.write.format("jdbc").mode(SaveMode.Append)
        .jdbc("jdbc:mysql://CentOS:3306/test","t_user",props)
  • Write JSON output
val personDF = spark.sparkContext.parallelize(Array("14 tom 1500", "15 jerry 20000", "16 kitty 26000"),1)
      .map(_.split(" "))
      .map(p => (p(0).toInt, p(1).trim, p(2).toDouble))
      .toDF("id","name","salary")

personDF.write.format("json").mode(SaveMode.Overwrite)
        .save("D://userjson")
  • Write CSV output
 val personDF = spark.sparkContext.parallelize(Array("14 tom 1500", "15 jerry 20000", "16 kitty 26000"),1)
      .map(_.split(" "))
      .map(p => (p(0).toInt, p(1).trim, p(2).toDouble))
      .toDF("id","name","salary")

personDF.write.format("csv").mode(SaveMode.Overwrite)
.option("header","true")
.save("D://usercsv")
  • Write a Parquet file
val personDF = spark.sparkContext.parallelize(Array("14 tom 1500", "15 jerry 20000", "16 kitty 26000"),1)
      .map(_.split(" "))
      .map(p => (p(0).toInt, p(1).trim, p(2).toDouble))
      .toDF("id","name","salary")
personDF.write.mode(SaveMode.Overwrite)
        .parquet("file:///D:/parquet")
  • Read a Parquet file
val dataFrame = spark.read.parquet("file:///D:/parquet")
dataFrame.show()
  • Partitioned storage
val frame: DataFrame = spark.sparkContext.textFile("D:/order.log")
    .map(_.split(","))
    .map(x => (x(0).toInt, x(1), x(2).toDouble, x(3).toInt, x(4)))
    .toDF("id", "name","price","count","uid")
frame.write.format("json").mode(SaveMode.Overwrite).partitionBy("uid").save("D:/res")

Spark Streaming

Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant processing of live data streams (comparable to Storm or Kafka Streams). Data can be ingested from message queues, log-collection systems, or TCP sockets, processed with complex algorithms expressed through high-level functions such as map, reduce and window, and finally pushed out to databases, file systems, or dashboards.

Internally, Spark Streaming receives the live input data stream and divides it into batches. These batches are then handed to the Spark engine, which produces the final results, also in batches.

Spark Streaming provides a high-level abstraction called a discretized stream, or DStream, which represents a continuous stream of data. A DStream can be created from input sources such as message queues, log collectors, or TCP sockets, or derived from other DStreams by applying high-level operations such as map, reduce and window. Internally, a DStream is represented as a sequence of RDDs.

QuickExample

import org.apache.spark._
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.streaming._

object QuickExample {
  def main(args: Array[String]): Unit = {
    // local mode
    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val sc = new SparkContext(conf)
    sc.setLogLevel("FATAL") // silence log output
    // create the StreamingContext with a 1-second batch interval
    val ssc = new StreamingContext(sc, Seconds(1))
    // create a SparkSession so RDDs can be converted to DataFrames
    val spark = SparkSession.builder().master("local[5]").appName("xx").getOrCreate()
    import spark.implicits._
    // connect to the service listening on port 9999 of the Linux host
    val lines = ssc.socketTextStream("CentOS", 9999)
    // word count with RDD-style operations
    val words = lines.flatMap(_.split(" "))
    val pair = words.map(word => (word, 1))
    val wordCounts = pair.reduceByKey(_ + _)
    // only write non-empty batches; the sink could also be HDFS, local disk or a relational database
    wordCounts.foreachRDD(rdd => {
      if (rdd.count() > 0) {
        val dataFrame = rdd.toDF("key", "count")
        dataFrame.write.mode(SaveMode.Append).json("file:///D:/json")
      }
    })
    // start the computation
    ssc.start()
    ssc.awaitTermination()
  }
}

On CentOS, install nc first with yum install nc -y, run nc -lk 9999, then start the main program and watch the console output.

Discretized Streams

A DStream represents a sequence of RDDs; each RDD in the DStream contains the data of one time interval.

Any operation applied to a DStream is translated by Spark Streaming into operations on the underlying RDDs, as in the sketch below.
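
For example, transform exposes the RDD behind each batch directly, so any RDD operation can be applied. A minimal sketch, assuming lines is the socket DStream from the QuickExample above:

// arbitrary RDD operations per batch via transform
val distinctUpper = lines.transform(rdd =>
  rdd.flatMap(_.split(" ")).map(_.toUpperCase).distinct()
)
distinctUpper.print()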

Transformations on DStreams

Similar to that of RDDs, transformations allow the data from the input DStream to be modified. DStreams support many of the transformations available on normal Spark RDD’s. Some of the common ones are as follows.

  • map(func): Return a new DStream by passing each element of the source DStream through a function func.
  • flatMap(func): Similar to map, but each input item can be mapped to 0 or more output items.
  • filter(func): Return a new DStream by selecting only the records of the source DStream on which func returns true.
  • repartition(numPartitions): Changes the level of parallelism in this DStream by creating more or fewer partitions.
  • union(otherStream): Return a new DStream that contains the union of the elements in the source DStream and otherDStream.
  • count(): Return a new DStream of single-element RDDs by counting the number of elements in each RDD of the source DStream.
  • reduce(func): Return a new DStream of single-element RDDs by aggregating the elements in each RDD of the source DStream using a function func (which takes two arguments and returns one). The function should be associative and commutative so that it can be computed in parallel.
  • countByValue(): When called on a DStream of elements of type K, return a new DStream of (K, Long) pairs where the value of each key is its frequency in each RDD of the source DStream.
  • reduceByKey(func, [numTasks]): When called on a DStream of (K, V) pairs, return a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function. Note: By default, this uses Spark's default number of parallel tasks (2 for local mode, and in cluster mode the number is determined by the config property spark.default.parallelism) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.
  • join(otherStream, [numTasks]): When called on two DStreams of (K, V) and (K, W) pairs, return a new DStream of (K, (V, W)) pairs with all pairs of elements for each key.
  • cogroup(otherStream, [numTasks]): When called on a DStream of (K, V) and (K, W) pairs, return a new DStream of (K, Seq[V], Seq[W]) tuples.
  • transform(func): Return a new DStream by applying a RDD-to-RDD function to every RDD of the source DStream. This can be used to do arbitrary RDD operations on the DStream.
  • updateStateByKey(func): Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values for the key. This can be used to maintain arbitrary state data for each key.
UpdateStateByKey Operation

How does Spark Streaming keep state, and how does it recover from failures?

Stateful computations in Spark Streaming (for example UV counts) normally use DStream.updateStateByKey (which is actually added to DStream through PairDStreamFunctions); concrete implementations are widely covered online. Because Spark Streaming is a continuous computation, stateful jobs cannot rely on simple DAG/lineage fault tolerance, so a checkpoint directory must be configured (otherwise the job fails at startup).
The checkpoint persists a snapshot of the current batch's RDDs, the state of unfinished tasks, and so on. The StreamingContext can be rebuilt from the checkpoint, so even if the Driver crashes, StreamingContext.getOrCreate can restore the previous state after a restart. If the upstream source does not lose data (as with Kafka), processing can in principle resume where it left off.
This sounds ideal, but real environments still bring problems.
In practice, stateful aggregation is built with updateStateByKey plus checkpointing, and failure recovery is usually enabled through StreamingContext.getOrCreate(checkpointPath, createFunction _).

What does the checkpoint store?
- Metadata of the RDD computation
- The RDD data itself
- The driver program code that defines the computation

When StreamingContext.getOrCreate(checkpointPath, createFunction _) is called, it first checks whether the checkpoint directory already contains data. If it does, createFunction is not invoked; instead the data under the checkpoint directory is read and the job's state is recovered from it.

This operation lets you maintain arbitrary state per key and keep updating it with new information. To use it, you have to do two things:

  • Define the state - The state can be an arbitrary data type.

  • Define the state update function - Specify with a function how to update the state using the previous state and the new values from an input stream.

 val updateFunction:(Seq[Int], Option[Int])=>Option[Int] = (newValues,runningCount)=>{
    val newCount = runningCount.getOrElse(0)+newValues.sum
    Some(newCount)
  }
import org.apache.spark._
import org.apache.spark.streaming._

object QuickExample {
  def main(args: Array[String]): Unit = {
    var checkpoint="file:///D://checkpoint"
    def createStreamContext():StreamingContext={
        val conf = new SparkConf()
          .setMaster("local[2]")
          .setAppName("NetworkWordCount")
        var sc=new SparkContext(conf)
        sc.setLogLevel("FATAL")
        val ssc = new StreamingContext(sc, Seconds(3))
        ssc.checkpoint(checkpoint)
        ssc.socketTextStream("CentOS", 9999)
        .flatMap(_.split(" "))
        .map((_,1))
        .updateStateByKey(updateFunction)
         .checkpoint(Seconds(30)) // how often to checkpoint the state; 5~10x the batch interval is recommended
        .print()

      ssc
    }
    val ssc=StreamingContext.getOrCreate(checkpoint,createStreamContext _)

    // start the computation
    ssc.start()
    ssc.awaitTermination()

  }
  val updateFunction:(Seq[Int], Option[Int])=>Option[Int] = (newValues,runningCount)=>{
    val newCount = runningCount.getOrElse(0)+newValues.sum
    Some(newCount)
  }
}

Checkpoint Notes

(figure: checkpoint)

Window Operations

Spark Streaming also provides windowed computations, which allow you to apply transformations over a sliding window of data. A short example follows the table below.

  • window(windowLength, slideInterval): Return a new DStream which is computed based on windowed batches of the source DStream.
  • countByWindow(windowLength, slideInterval): Return a sliding window count of elements in the stream.
  • reduceByWindow(func, windowLength, slideInterval): Return a new single-element stream, created by aggregating elements in the stream over a sliding interval using func. The function should be associative and commutative so that it can be computed correctly in parallel.
  • reduceByKeyAndWindow(func, windowLength, slideInterval, [numTasks]): When called on a DStream of (K, V) pairs, returns a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function func over batches in a sliding window. Note: By default, this uses Spark's default number of parallel tasks (2 for local mode, and in cluster mode the number is determined by the config property spark.default.parallelism) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.
  • reduceByKeyAndWindow(func, invFunc, windowLength, slideInterval, [numTasks]): A more efficient version of the above reduceByKeyAndWindow() where the reduce value of each window is calculated incrementally using the reduce values of the previous window. This is done by reducing the new data that enters the sliding window, and "inverse reducing" the old data that leaves the window. An example would be that of "adding" and "subtracting" counts of keys as the window slides. However, it is applicable only to "invertible reduce functions", that is, those reduce functions which have a corresponding "inverse reduce" function (taken as parameter invFunc). Like in reduceByKeyAndWindow, the number of reduce tasks is configurable through an optional argument. Note that checkpointing must be enabled for using this operation.
  • countByValueAndWindow(windowLength, slideInterval, [numTasks]): When called on a DStream of (K, V) pairs, returns a new DStream of (K, Long) pairs where the value of each key is its frequency within a sliding window. Like in reduceByKeyAndWindow, the number of reduce tasks is configurable through an optional argument.
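
A minimal windowed word-count sketch, using the same socket source as the QuickExample; the window length (30s) and slide interval (10s) must both be multiples of the batch interval:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// counts over the last 30 seconds, recomputed every 10 seconds
val conf = new SparkConf().setMaster("local[2]").setAppName("WindowedWordCount")
val ssc = new StreamingContext(conf, Seconds(5))
val pairs = ssc.socketTextStream("CentOS", 9999)
  .flatMap(_.split(" "))
  .map((_, 1))
val windowedCounts = pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
windowedCounts.print()
ssc.start()
ssc.awaitTermination()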
Output Operations on DStreams

Output operations allow DStream's data to be pushed out to external systems like a database or a file system. Since the output operations actually allow the transformed data to be consumed by external systems, they trigger the actual execution of all the DStream transformations (similar to actions for RDDs). Currently, the following output operations are defined:

  • print(): Prints the first ten elements of every batch of data in a DStream on the driver node running the streaming application. This is useful for development and debugging. (In the Python API this is called pprint().)
  • saveAsTextFiles(prefix, [suffix]): Save this DStream's contents as text files. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS[.suffix]".
  • saveAsObjectFiles(prefix, [suffix]): Save this DStream's contents as SequenceFiles of serialized Java objects. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS[.suffix]". (Not available in the Python API.)
  • saveAsHadoopFiles(prefix, [suffix]): Save this DStream's contents as Hadoop files. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS[.suffix]". (Not available in the Python API.)
  • foreachRDD(func): The most generic output operator that applies a function, func, to each RDD generated from the stream. This function should push the data in each RDD to an external system, such as saving the RDD to files, or writing it over the network to a database. Note that the function func is executed in the driver process running the streaming application, and will usually have RDD actions in it that will force the computation of the streaming RDDs.
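
For instance, the wordCounts DStream from the QuickExample could be persisted with saveAsTextFiles; the path below is a placeholder, and one output directory is created per batch interval:

// each batch is written to a directory named file:///D:/streaming/wc-<TIME_IN_MS>.txt
wordCounts.saveAsTextFiles("file:///D:/streaming/wc", "txt")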
Input DStreams and Receivers

Every Spark Streaming program needs at least one Input DStream. Each Input DStream is associated with a Receiver object, which receives the data and stores it in Spark's memory for later processing.

Spark currently provides two kinds of input DStreams:

  • Basic sources:Sources directly available in the StreamingContext API. Examples: file systems, and socket connections.
  • Advanced sources: Sources like Kafka, Flume, Kinesis, etc. are available through extra utility classes. These require linking against extra dependencies as discussed in the linking section.

FileStreams

streamingContext.textFileStream(dataDirectory)

or

streamingContext.fileStream[KeyClass, ValueClass, InputFormatClass](dataDirectory)
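
A minimal sketch of a file-source word count, assuming ssc is the StreamingContext from the earlier examples; Spark Streaming monitors the directory and processes files newly moved into it (the path is a placeholder):

// watch a directory for new text files and count words in each batch
val fileLines = ssc.textFileStream("hdfs://CentOS:9000/streaming/input") // placeholder path
fileLines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()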
Custom Receivers
import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import java.nio.charset.StandardCharsets

import org.apache.spark.internal.Logging
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class CustomReceiver(host: String, port: Int) extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) with Logging {

  def onStart() {
    // Start the thread that receives data over a connection
    new Thread("Socket Receiver") {
      override def run() { receive() }
    }.start()
  }

  def onStop() {
    // There is nothing much to do as the thread calling receive()
    // is designed to stop by itself if isStopped() returns false
  }

  /** Create a socket connection and receive data until receiver is stopped */
  private def receive() {
    var socket: Socket = null
    var userInput: String = null
    try {
      // Connect to host:port
      socket = new Socket(host, port)

      // Until stopped or connection broken continue reading
      val reader = new BufferedReader(
        new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))
      userInput = reader.readLine()
      while(!isStopped && userInput != null) {
        store(userInput)
        userInput = reader.readLine()
      }
      reader.close()
      socket.close()

      // Restart in an attempt to connect again when server is active again
      restart("Trying to connect again")
    } catch {
      case e: java.net.ConnectException =>
        // restart if could not connect to server
        restart("Error connecting to " + host + ":" + port, e)
      case t: Throwable =>
        // restart if there is any other error
        restart("Error receiving data", t)
    }
  }
}
import org.apache.spark._
import org.apache.spark.streaming._

object QuickExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("NetworkWordCount")
    conf.set("spark.io.compression.codec","lz4")
    var sc=new SparkContext(conf)
    sc.setLogLevel("FATAL")


    val ssc = new StreamingContext(sc, Seconds(1))
    ssc.receiverStream(new CustomReceiver("CentOS", 9999)).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    // start the computation
    ssc.start()
    ssc.awaitTermination()

  }
}

Spark Streaming Integration with Kafka

Reference: http://spark.apache.org/docs/latest/streaming-kafka-integration.html

http://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html

import org.apache.kafka.clients.consumer.ConsumerConfig._
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object KafkaStreamingDemo {
  def main(args: Array[String]): Unit = {
    var checkpoint="file:///D://checkpoint"
    def createStreamContext():StreamingContext={
      val conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("NetworkWordCount")
      val sc = new SparkContext(conf)
      sc.setLogLevel("FATAL")
      val ssc = new StreamingContext(sc, Seconds(3))
      ssc.checkpoint(checkpoint)
      val kafkaParams = Map[String, Object](
        BOOTSTRAP_SERVERS_CONFIG -> "CentOS:9092,CentOS:9093,CentOS:9094",
        KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
        VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
        GROUP_ID_CONFIG -> "g1",
        AUTO_OFFSET_RESET_CONFIG -> "latest",
        ENABLE_AUTO_COMMIT_CONFIG -> (false: java.lang.Boolean)
      )
      val topics = Array("topic01")
      val stream  = KafkaUtils.createDirectStream(ssc,PreferConsistent,Subscribe[String, String](topics, kafkaParams))
      stream.flatMap(record =>record.value().split(" "))
        .map((_,1))
        .updateStateByKey(updateFunction)
        .checkpoint(Seconds(30))
        .reduceByKey(_+_).print()
      ssc
    }
    val ssc= StreamingContext.getOrCreate(checkpoint,createStreamContext _)

    ssc.start()
    ssc.awaitTermination()

  }
  val updateFunction:(Seq[Int], Option[Int])=>Option[Int] = (newValues,runningCount)=>{
    val newCount = runningCount.getOrElse(0)+newValues.sum
    Some(newCount)
  }
}

Resolving a jar conflict

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.1</version>
    <exclusions>
        <exclusion>
            <groupId>net.jpountz.lz4</groupId>
            <artifactId>lz4</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
     <version>2.3.2</version>
     <exclusions>
         <exclusion>
             <groupId>org.apache.kafka</groupId>
             <artifactId>kafka-clients</artifactId>
         </exclusion>
     </exclusions>
</dependency>

This exclusion is needed because net.jpountz.lz4 conflicts with the lz4 version bundled with Spark.

Spark Streaming Integration with Flume

Reference: http://spark.apache.org/docs/latest/streaming-flume-integration.html

import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object FlumeStreamingDemo {
  def main(args: Array[String]): Unit = {
    var checkpoint="file:///D://checkpoint"
    def createStreamContext():StreamingContext={
      val conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("NetworkWordCount")
      val sc = new SparkContext(conf)
      sc.setLogLevel("FATAL")
      val ssc = new StreamingContext(sc, Seconds(3))
      ssc.checkpoint(checkpoint)

      // push-based receiver: Flume must point an Avro sink at localhost:44444
      val stream = FlumeUtils.createStream(ssc, "localhost", 44444)
      stream.map(event => new String(event.event.getBody.array()))
        .flatMap(lines=> lines.split(" "))
        .map((_,1))
        .updateStateByKey(updateFunction)
        .checkpoint(Seconds(30))
        .reduceByKey(_+_).print()
      ssc
    }
    val ssc= StreamingContext.getOrCreate(checkpoint,createStreamContext _)

    ssc.start()
    ssc.awaitTermination()

  }
  val updateFunction:(Seq[Int], Option[Int])=>Option[Int] = (newValues,runningCount)=>{
    val newCount = runningCount.getOrElse(0)+newValues.sum
    Some(newCount)
  }
}
 <!-- direct (push-based) Flume integration -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>${spark.version}</version>
</dependency>
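
FlumeUtils.createStream is the push-based approach, so the Flume agent needs an Avro sink pointing at the host/port the Spark receiver listens on (localhost:44444 above). A minimal agent configuration sketch; the agent name, the netcat source and its port are assumptions for illustration:

# flume-spark.conf (hypothetical): netcat source -> memory channel -> avro sink to the Spark receiver
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 8888
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = avro
a1.sinks.k1.hostname = localhost
a1.sinks.k1.port = 44444
a1.sinks.k1.channel = c1

Start the agent with, for example, flume-ng agent --conf conf --conf-file flume-spark.conf --name a1, then send test data to port 8888 with nc and watch the Spark console output.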