Spark RDD Study Notes

Apache Spark

Framework Overview

Apache Spark is a lightning-fast, unified analytics engine. It is only a compute engine and does not provide storage services.

Fast: compared with MapReduce, the first-generation disk-based offline analysis framework, Spark computes in memory and is therefore much faster.

Unified: Spark exposes a unified API that covers both batch and stream processing, and it also provides ETL capabilities.

It offers a full-stack solution for large-scale datasets: batch processing, stream processing, SQL, machine learning, and graph analytics.

Why Spark Is Fast

  • An advanced DAG (directed acyclic graph) execution model

    MapReduce: a fixed two-step pipeline, start (Map parallelism) —> end (Reduce parallelism)

    Spark: a DAG, start —> stage 1 (parallelism) —> stage 2 (parallelism) —> ... —> stage n (parallelism) —> end

  • Unlike MapReduce, whose intermediate results (spill files) are written to disk before the next round of computation starts (so performance is largely bound by disk I/O), Spark splits a job into several stages whose results can be cached in memory. In iterative computations those cached results are reused, which drastically cuts the time spent on recomputation and disk I/O (see the sketch after this list).

  • Spark SQL can optimize user SQL with its query optimizer
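A minimal, self-contained sketch of that in-memory reuse idea (an illustration written for these notes, assuming a live SparkContext sc such as the one provided by spark-shell): the RDD is cached after the first action, so later iterations read it from memory instead of re-reading the source.

// Cache once, then iterate: only the first action pays the cost of building the RDD.
val nums = sc.parallelize(1 to 1000000).cache()
var sum = 0L
for (_ <- 1 to 10) {
  // from the second iteration on, the partitions are served from the in-memory cache
  sum = nums.map(_.toLong).reduce(_ + _)
}
println(sum)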

Further Reading

As the first generation of big-data processing frameworks, MapReduce was designed simply to meet the urgent need for computation over massive datasets. It was split out of the Nutch project (a Java search engine) in 2006 and mainly addressed the problems of people's early, basic understanding of big data.

MapReduce computation is essentially built on disk I/O. As big-data technology spread, people redefined how big data should be processed: it was no longer enough to finish a computation within a reasonable time, and much stricter demands were placed on timeliness. People began using MapReduce to implement complex, higher-order algorithms that usually cannot be completed in a single MapReduce pass. Because the MapReduce model always writes intermediate results to disk, every iteration has to reload data from disk into memory, which adds more and more latency to subsequent iterations.

Spark was born in 2009 at UC Berkeley's AMP Lab and first open-sourced in 2010, quickly winning over many developers. It entered the Apache Incubator in June 2013 and became a top-level Apache project in February 2014. Spark grew so fast because its computation clearly outperforms Hadoop's disk-based MapReduce iteration: Spark can compute on data in memory and cache intermediate results in memory as well, which saves time in subsequent iterations and dramatically improves efficiency on massive datasets.

Spark's own comparison of MapReduce and Spark on linear-regression workloads (an algorithm that needs n iterations) shows Spark running roughly 10 to 100 times faster than MapReduce.

Strategic Positioning

Spark was adopted so quickly not only because it is an in-memory batch-processing framework; its strategic positioning matters just as much. Spark's design philosophy is "One stack to rule them all": on top of the core batch engine it provides a family of computing services such as interactive SQL queries, near-real-time stream processing, machine learning, and GraphX graph computation.

Apache Spark sits at the compute layer of the stack and plays a bridging role: it does not abandon the existing Hadoop-centric big-data solutions, because Spark can read data from HDFS, HBase, Cassandra, Amazon S3, and other file services. Using Spark as the compute layer therefore requires no change to the user's existing storage layer.

Compute Architecture

A Spark computing task (an Application, equivalent to a Job in MapReduce) runs on the cluster as an independent set of processes. Each application has its own processes (compute resources), which are coordinated by a SparkContext object created in the user's main program (the main entry point), known as the Driver Program.

Application: equivalent to a Job in MapReduce; it consists of a Driver plus a set of Executor processes.

Driver: the user's main program; it creates the SparkContext and coordinates the whole computation (comparable to MRAppMaster).

ClusterManager: manages the cluster's compute resources; it does not schedule individual tasks (comparable to ResourceManager).

WorkerNode: any machine in the cluster that can run Application code.

Executor: each Application has its own set of Executor processes, which run Tasks (threads) and store intermediate results.

Task: a single unit of computation; a job is split into Stages, each with its own parallelism, and the parallelism corresponds to the number of Tasks (threads).

Stage: each job is split into stages, each with its own parallelism; stages depend on one another (the RDD lineage).

Reference: http://spark.apache.org/docs/latest/cluster-overview.html
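A minimal sketch of these roles in code (an illustration for these notes, assuming the standalone master spark://centos:7077 configured below): the user's main program is the Driver, creating the SparkContext registers the Application with the cluster manager, and the parallelize call produces 4 tasks in a single stage.

import org.apache.spark.{SparkConf, SparkContext}

object MiniDriver {
  def main(args: Array[String]): Unit = {
    // Driver: building the SparkContext registers this Application with the master,
    // which then launches Executor processes on the worker nodes.
    val conf = new SparkConf()
      .setAppName("mini-driver")
      .setMaster("spark://centos:7077")   // assumption: the standalone master set up below
    val sc = new SparkContext(conf)

    // 4 partitions -> 4 Tasks, executed by the Executors in parallel
    println(sc.parallelize(1 to 100, 4).sum())

    sc.stop()
  }
}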

Cluster Setup

Standalone (primary)

Environment (CentOS 6.x)
  • Disable the firewall
[root@centos ~]# service iptables stop # stop the firewall
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@centos ~]# chkconfig iptables off # disable start on boot
  • Set the hostname

    [root@centos ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=centos
    
  • Map the hostname to the IP address
[root@centos ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.58.24 centos
  • Reboot the system
  • Configure passwordless SSH login
[root@centos ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
d7:22:ce:b2:2c:d6:ee:cd:50:4b:2f:ff:52:0b:7d:df root@centos
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|           .     |
|        S o..    |
|       = =..o .  |
|     .o = .o o ..|
|    o..* o. .   E|
|   . += o .o.    |
+-----------------+
[root@centos ~]# ssh-copy-id centos
The authenticity of host 'centos (192.168.58.24)' can't be established.
RSA key fingerprint is c8:64:53:f9:ed:e8:a4:2f:f7:13:cf:ad:59:68:c1:b8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos,192.168.58.24' (RSA) to the list of known hosts.
root@centos's password:
Now try logging into the machine, with "ssh 'centos'", and check in
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
  • Install the JDK and configure JAVA_HOME
[root@centos ~]# rpm -ivh jdk-8u191-linux-x64.rpm
warning: jdk-8u191-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:jdk1.8                 ########################################### [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
[root@centos ~]# vi ~/.bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@centos ~]# source .bashrc
[root@centos ~]# jps
1866 Jps
Install HDFS
[root@centos ~]# tar -zxf hadoop-2.9.2.tar.gz -C /usr/
[root@centos ~]# vi ~/.bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
[root@centos ~]# source .bashrc
[root@centos ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml
<!-- NameNode access URI -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://centos:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
[root@centos ~]# vi /usr/hadoop-2.9.2/etc/hadoop/slaves
centos
[root@centos ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
<!-- block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- host that runs the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>centos:50090</value>
</property>
<!-- maximum number of files a DataNode serves concurrently -->
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
</property>
<!-- DataNode server handler (thread) count -->
<property>
    <name>dfs.datanode.handler.count</name>
    <value>6</value>
</property>
[root@centos ~]# hdfs namenode -format # create the fsimage needed to start the NameNode
[root@centos ~]# start-dfs.sh 
Install Spark
[root@centos ~]# tar -zxf spark-2.4.3-bin-without-hadoop.tgz -C /usr/
[root@centos ~]# mv /usr/spark-2.4.3-bin-without-hadoop/ /usr/spark-2.4.3
[root@centos ~]# cd /usr/spark-2.4.3/
[root@centos spark-2.4.3]# mv conf/slaves.template conf/slaves
[root@centos spark-2.4.3]# vi conf/slaves
centos
[root@centos spark-2.4.3]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@centos spark-2.4.3]# vi conf/spark-env.sh
SPARK_MASTER_HOST=centos
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4
SPARK_WORKER_MEMORY=2g
SPARK_WORKER_INSTANCES=2
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_MASTER_HOST
export SPARK_MASTER_PORT
export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export SPARK_WORKER_INSTANCES
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH
[root@centos spark-2.4.3]# ./sbin/start-all.sh  # only needed in Standalone mode
[root@centos spark-2.4.3]# jps
8064 Jps
2066 NameNode
2323 SecondaryNameNode
7912 Worker
7801 Master
7981 Worker
2157 DataNode

The Spark master web UI is now available at http://centos:8080/

Test the cluster:

[root@centos spark-2.4.3]# ./bin/spark-shell 
	--master spark://centos:7077  # connect to the cluster master
	--deploy-mode client          # driver deploy mode: must be client for spark-shell
	--total-executor-cores 4      # total cores allocated to this application
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://centos:4040
Spark context available as 'sc' (master = spark://centos:7077, app id = app-20190924232452-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sc.textFile("hdfs:///demo/words")
        .flatMap(_.split(" "))
        .map((_,1))
        .groupBy(t=>t._1)
        .map(t=>(t._1,t._2.size))
        .sortBy(t=>t._2,false,4)
        .saveAsTextFile("hdfs:///demo/results")

YARN (secondary)

The environment preparation (disable the firewall, set the hostname, map the hostname to the IP, reboot, configure passwordless SSH, install the JDK and set JAVA_HOME) and the HDFS installation are identical to the Standalone setup above; complete those steps first.
Install YARN
  • Edit yarn-site.xml

    [root@centos ~]# vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml
    <!-- auxiliary shuffle service required by the MapReduce framework -->
    <property> 
        <name>yarn.nodemanager.aux-services</name> 
        <value>mapreduce_shuffle</value> 
    </property> 
    <!-- host that runs the ResourceManager -->
    <property> 
        <name>yarn.resourcemanager.hostname</name> 
        <value>centos</value> 
    </property> 
    <!-- disable the physical memory check -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name> 
        <value>false</value> 
    </property> 
    <!-- disable the virtual memory check -->
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    
  • Edit mapred-site.xml

    [root@centos ~]# mv /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml.template /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml 
    [root@centos ~]# vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml
    <!-- run the MapReduce framework on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    
    • Start the YARN services
[root@centos ~]# start-yarn.sh
Install Spark
[root@CentOS ~]# tar -zxf spark-2.4.3-bin-without-hadoop.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.4.3-bin-without-hadoop/ /usr/spark-2.4.3
[root@CentOS ~]# cd /usr/spark-2.4.3/
[root@CentOS spark-2.4.3]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.4.3]# vi conf/spark-env.sh
HADOOP_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
SPARK_EXECUTOR_CORES=4
SPARK_EXECUTOR_MEMORY=1g
SPARK_DRIVER_MEMORY=1g
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
SPARK_DIST_CLASSPATH=$(hadoop classpath)
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH

Note: unlike Standalone mode, you do not need to run start-all.sh here, because task execution is delegated to YARN.

[root@centos spark-2.4.3]# ./bin/spark-shell
	--master yarn                 # run on the YARN cluster
	--deploy-mode client          # driver deploy mode: must be client for spark-shell
	--executor-cores 4            # cores per Executor process
	--num-executors 2             # number of Executor processes

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/09/25 00:14:40 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/09/25 00:14:43 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1252)
        at java.lang.Thread.join(Thread.java:1326)
        at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980)
        at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807)
Spark context Web UI available at http://centos:4040
Spark context available as 'sc' (master = yarn, app id = application_1569341195065_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sc.textFile("hdfs:///demo/words")
        .flatMap(_.split(" "))
        .map((_,1))
        .groupBy(t=>t._1)
        .map(t=>(t._1,t._2.size))
        .sortBy(t=>t._2,false,4)
        .saveAsTextFile("hdfs:///demo/results")

More details: https://blog.csdn.net/weixin_38231448/article/details/89382345

Packaging and Deployment

Remote (cluster) testing

  • Add the Spark development dependency
<properties>
    <spark.version>2.4.3</spark.version>
    <scala.version>2.11</scala.version>
</properties>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.version}</artifactId>
    <version>${spark.version}</version>
    <!-- provided: excluded from the jar when Maven packages the project -->
    <scope>provided</scope>
</dependency>

//1. Create the SparkContext
val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("spark://centos:7077")
val sc = new SparkContext(sparkConf)
//2. Create the distributed RDD
val lines:RDD[String] = sc.textFile("hdfs:///demo/words")
//3. Transform the dataset
val transformRDD:RDD[(String,Int)] = lines.flatMap(_.split(" "))
.map((_, 1))
.groupBy(t => t._1)
.map(t => (t._1, t._2.size))
.sortBy(t => t._2, false, 4)
//4. Trigger the job with an action on the RDD
transformRDD.saveAsTextFile("hdfs:///demo/results")
//5. Release resources
sc.stop()
  • Add the Maven plugins
<!-- compile the Scala sources into the jar during package -->
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>4.0.1</version>
    <executions>
        <execution>
            <id>scala-compile-first</id>
            <phase>process-resources</phase>
            <goals>
                <goal>add-source</goal>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<!-- bundle dependency jars into the final jar (fat jar) -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <filters>
                    <filter>
                        <artifact>*:*</artifact>
                        <excludes>
                            <exclude>META-INF/*.SF</exclude>
                            <exclude>META-INF/*.DSA</exclude>
                            <exclude>META-INF/*.RSA</exclude>
                        </excludes>
                    </filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>
  • Run the Maven package command
mvn package
  • Two jars are produced under the project's target directory
original-xxx-1.0-SNAPSHOT.jar // without third-party dependencies
xxx-1.0-SNAPSHOT.jar          // all dependencies except provided ones are bundled in ---> fat jar
  • Submit the Spark job with spark-submit
[root@centos spark-2.4.3]# ./bin/spark-submit
							--master spark://centos:7077 
							--deploy-mode cluster 
							--class com.baizhi.demo01.SparkWordCount 
							--driver-cores 2  
							--total-executor-cores 4 
							/root/rdd-1.0-SNAPSHOT.jar

Local testing

//1. Create the SparkContext
val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)
//2. Create the distributed RDD
val lines:RDD[String] = sc.textFile("file:///D:/demo/words")
//3. Transform the dataset
val transformRDD:RDD[(String,Int)] = lines.flatMap(_.split(" "))
.map((_, 1))
.groupBy(t => t._1)
.map(t => (t._1, t._2.size))
.sortBy(t => t._2, false, 4)
//4. Trigger the job with an action on the RDD
transformRDD.saveAsTextFile("file:///D:/demo/results")
//5. Release resources
sc.stop()

Note: running locally does not require a Spark cluster; just comment out <scope>provided</scope> so that spark-core is on the runtime classpath.

./bin/spark-shell 
	--master local[6]         # run locally with 6 worker threads
	--deploy-mode client      # driver deploy mode: client
	--total-executor-cores 4  # compute resources

History Server

Records the historical state of applications during and after execution.

  • Add to spark-env.sh
[root@centos spark-2.4.3]# vi conf/spark-env.sh
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
export SPARK_HISTORY_OPTS
  • Edit spark-defaults.conf
[root@centos spark-2.4.3]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@centos spark-2.4.3]# vi conf/spark-defaults.conf
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs
  • Create the spark-logs directory on HDFS (where the history server stores event logs)
[root@centos ~]# hdfs dfs -mkdir /spark-logs
  • Start the history server
[root@centos spark-2.4.3]# ./sbin/start-history-server.sh

Verify that the service is up at http://centos:18080

RDD Programming Guide (key topic)

Overview

At a high level, every Spark application consists of a driver program that runs the user's main function and executes various parallel operations on a cluster. Spark's core abstraction is the resilient distributed dataset (RDD): a parallel, distributed collection whose data can be partitioned across nodes, with all RDD operations executed in parallel on the cluster's compute nodes. RDDs can be created from files in Hadoop's file system (or any Hadoop-supported file system) or from a Scala collection defined in the main function. Spark can cache an RDD's data in memory so it can be reused in later distributed computations, which improves program efficiency, and an RDD can recover automatically when a compute node fails. (RDD creation / RDD caching / RDD fault recovery)

Basic Structure

(figure: the basic structure of an RDD)

Creating RDDs

An RDD is a parallel, fault-tolerant distributed dataset. There are two ways to create one: ① parallelize a local Scala collection from the Driver; ② load a dataset from an external system, typically through a Hadoop InputFormat.

Building an RDD from a collection (for testing)
val lines=List("this is a demo","hello world","good good")
val linesRDD:RDD[String]=sc.parallelize(lines,3)

Here 3 means the collection is split into 3 partitions with the data spread evenly across them. Alternatively, makeRDD can be used:

val lines=List("this is a demo","hello world","good good")
val linesRDD:RDD[String]=sc.makeRDD(lines,3)
Data from external systems (must master)
  • √textFile
val linesRDD:RDD[String] = sc.textFile("file:///D:/demo/words",10)

Here file:// reads from the local file system; to read from HDFS, use hdfs://. When reading text data from HDFS you generally do not need to specify a partition count: if you omit it, the parallelism of the load defaults to the number of blocks in the file. If you do specify a partition count, it should not be smaller than the file's block count (you can only increase the parallelism this way).
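A quick way to see this behaviour in spark-shell (a small illustration written for these notes; it assumes the sample file hdfs:///demo/words used elsewhere in this guide):

val byBlocks = sc.textFile("hdfs:///demo/words")       // partitions default to the file's block count
val byHint   = sc.textFile("hdfs:///demo/words", 10)   // ask for at least 10 partitions
println(byBlocks.getNumPartitions)
println(byHint.getNumPartitions)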

  • wholeTextFiles
val wholeFileRDD:RDD[(String,String)]=sc.wholeTextFiles("hdfs:///demo/words")
val linesRDD=wholeFileRDD.flatMap(t=>t._2.split("\n"))
  • newAPIHadoopRDD(MySQL)
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.47</version>
</dependency>
class FlowDBWRiteable extends DBWritable with Serializable {
    var id:Int=_
    var phone:String=_
    var resource:String=_
    var upFlow:Double=_
    var downFlow:Double=_
    var currentTime:Date=_
    //write (used when writing back to the database; left empty here because we only read)
    override def write(preparedStatement: PreparedStatement): Unit = {

    }
    //read: map a JDBC ResultSet row onto this object's fields
    override def readFields(resultSet: ResultSet): Unit = {
        id=resultSet.getInt("id")
        phone=resultSet.getString("phone")
        resource=resultSet.getString("resource")
        upFlow=resultSet.getDouble("up_flow")
        downFlow=resultSet.getDouble("down_flow")
        currentTime=resultSet.getDate("current_times")
    }
}
//1. Create the SparkContext
val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)
//2. Create the distributed RDD from MySQL
val hConf = new Configuration()
DBConfiguration.configureDB(hConf,"com.mysql.jdbc.Driver",
                            "jdbc:mysql://localhost:3306/vue",
                            "root",
                            "root")

hConf.set(DBConfiguration.INPUT_COUNT_QUERY,"select count(*) from h_flow")
hConf.set(DBConfiguration.INPUT_QUERY,"select *  from h_flow")
hConf.set(DBConfiguration.INPUT_CLASS_PROPERTY,"com.baizhi.demo04.FlowDBWRiteable")

val flowRDD: RDD[(LongWritable, FlowDBWRiteable)] = sc.newAPIHadoopRDD[LongWritable,FlowDBWRiteable,DBInputFormat[FlowDBWRiteable]](hConf,classOf[DBInputFormat[FlowDBWRiteable]],classOf[LongWritable],classOf[FlowDBWRiteable])
flowRDD.map(t=>(t._1.get(),t._2.id,t._2.phone,t._2.downFlow,t._2.upFlow,t._2.resource,t._2.currentTime)).collect().foreach(println)
//5. Release resources
sc.stop()
  • √Reading from HBase
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.version}</artifactId>
    <version>${spark.version}</version>
    <!--<scope>provided</scope>-->
    <exclusions>
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.47</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-auth</artifactId>
    <version>2.9.2</version> </dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.4</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.2.4</version>
</dependency>
//1. Create the SparkContext
val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)
//2. Create the distributed RDD from HBase
val hConf = new Configuration()
hConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")
hConf.set(TableInputFormat.INPUT_TABLE,"baizhi:t_user")

val userRDD:RDD[(ImmutableBytesWritable,Result)] = sc.newAPIHadoopRDD(hConf,classOf[TableInputFormat],classOf[ImmutableBytesWritable],classOf[Result])

userRDD.map(t=>{
    val key=Bytes.toString(t._1.get())
    val name=Bytes.toString(t._2.getValue("cf1".getBytes(),"name".getBytes()))
    (key,name)
}).collect().foreach(println)
//5. Release resources
sc.stop()

Anatomy of an RDD (interview prep)

An RDD is a resilient, distributed dataset.

Resilient: emphasizes fault tolerance. By default Spark has three fault-tolerance strategies for RDDs:

  • Recomputation, the default strategy: if something fails during the computation, the system can follow the RDD transformation relationships back upstream and re-derive how the RDD was computed. Spark can trace back to the upstream (parent) RDDs because it records the dependencies between RDDs (the RDD lineage).
  • Because recomputation is expensive, Spark also provides a caching mechanism that stores intermediate RDD results. Cached results allow fast state recovery after a failure and improve efficiency; caching is also worthwhile whenever a computation is reused several times.
  • A cache can be evicted, so if the cached data is gone a failure still forces recomputation. For expensive, time-consuming computations Spark therefore offers a safer, more reliable mechanism called Checkpoint. Unlike a cache, which can expire when unused for a long time, checkpointing persists the RDD's computed result directly to disk, where it stays until it is deleted manually.

Distributed: emphasizes the division into Stages. Spark splits a job into several stages and executes the computation stage by stage, in an orderly fashion. The stages are derived from the dependencies between RDDs, called the RDD lineage. Lineage dependencies come in two forms, wide and narrow: consecutive narrow dependencies are merged into a single Stage, while a wide dependency starts a new Stage.

Dataset: emphasizes how simple and convenient RDD operations are; working with a parallel collection is as easy as working with a local Scala collection.

  • Code analysis
    Consider a word-count pipeline: textFile creates the RDD (a partition count may be passed; if omitted it defaults to the number of HDFS blocks, and it cannot be smaller than the block count), and the flatMap and map operators then transform the partitioned data. Spark groups textFile -> flatMap -> map into Stage 0; the reduceByKey transformation introduces Stage 1; and only when the collect action runs does Spark submit the job, at which point the DAGScheduler works out the two stages, Stage 0 and Stage 1 (see the sketch below).
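A small spark-shell sketch of that stage split (written for these notes, assuming the hdfs:///demo/words sample file): toDebugString prints the lineage, and the indentation added at the ShuffledRDD marks where the DAGScheduler cuts the job into Stage 0 and Stage 1.

val counts = sc.textFile("hdfs:///demo/words")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)

// Nothing has run yet; toDebugString only inspects the lineage.
println(counts.toDebugString)
// (2) ShuffledRDD[4] at reduceByKey ...            <- Stage 1
//  +-(2) MapPartitionsRDD[3] at map ...            <- Stage 0: textFile -> flatMap -> map
//     |  MapPartitionsRDD[2] at flatMap ...
//     |  hdfs:///demo/words HadoopRDD[0] at textFile ...

counts.collect()   // the action submits the job; only now are the two stages executed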
RDD Fault Tolerance

To understand how the DAGScheduler divides a job into stages, you first need the term lineage, often described as an RDD's "ancestry". The original notes illustrate it with a cartoon of programmer evolution, where each generation is derived from the previous one.

That evolution picture is a rough analogy for how RDDs are transformed: Spark computation is essentially a series of transformations over RDDs. Because an RDD is an immutable, partitioned, read-only collection, every transformation takes the previous RDD's data as its input, so an RDD's lineage describes the dependency relationships between RDDs. To keep RDD data robust, a dataset remembers, through this lineage, how it was derived from other RDDs. Spark classifies the relationships between RDDs into wide dependencies and narrow dependencies, and uses the dependency information recorded in the lineage for fault tolerance. Spark's current fault-tolerance techniques are: recomputing an RDD from its dependencies, caching the RDD, and checkpointing the RDD.

Wide Dependencies | Narrow Dependencies

In terms of lineage, RDD dependencies are divided into Narrow Dependencies and Wide Dependencies, which makes fault recovery efficient. A narrow dependency means that each partition of the parent RDD is used by at most one partition of the child RDD: one parent partition maps to one child partition, or several parent partitions map to a single child partition; a single parent partition never feeds multiple child partitions. A wide dependency means that one parent partition is used by multiple child partitions. When a job is submitted, the DAGScheduler works backwards from the final RDD and derives the job's stages from these wide and narrow dependencies.
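A minimal spark-shell sketch (illustrative, assuming the sc provided by the shell) showing how the two dependency types surface in the API: map keeps a narrow one-to-one dependency, while reduceByKey introduces a shuffle (wide) dependency.

val nums    = sc.parallelize(1 to 10, 2)
val mapped  = nums.map(_ * 2)                               // narrow: one parent partition -> one child partition
val reduced = nums.map(n => (n % 3, n)).reduceByKey(_ + _)  // wide: requires a shuffle

println(mapped.dependencies)   // List(org.apache.spark.OneToOneDependency@...)
println(reduced.dependencies)  // List(org.apache.spark.ShuffleDependency@...)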

RDD Caching

Caching is one way an RDD achieves fault tolerance during computation: if RDD data is lost, the program can quickly recompute the current RDD's value from the cache instead of re-deriving every upstream RDD. So whenever an RDD will be used several times, consider the RDD cache mechanism to improve the program's efficiency.

//1. Create the SparkContext
val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)
//2. Create the distributed RDD
val lines:RDD[String] = sc.textFile("file:///D:/demo/words")

val cacheRDD = lines.flatMap(_.split(" "))// cache the data
.map((_, 1))
.persist(StorageLevel.MEMORY_ONLY)
// run the aggregation once so the cache is populated
cacheRDD.reduceByKey(_ + _, 4).collect()

var start=System.currentTimeMillis()

for(i <- 0 to 100){// time 100 repeated aggregations over the cached RDD
    cacheRDD.reduceByKey(_ + _, 4).collect()
}
var end=System.currentTimeMillis()
//5. Release resources
sc.stop()
println("total time: "+(end-start))
  • Clear the cache
cacheRDD.unpersist()
  • Question: with a large dataset, can caching the RDD directly in memory cause an out-of-memory error?

By default, Spark's cache method keeps the RDD's data in memory, which greatly speeds up the program; but with a large dataset this can cause OOM (Out of Memory) errors on the compute nodes. If the data is still very large after the transformations, cache is not recommended; Spark provides the storage levels below instead:

rdd#cache <==> rdd.persist(StorageLevel.MEMORY_ONLY)

By default, calling cache is equivalent to rdd.persist(StorageLevel.MEMORY_ONLY). Spark also offers other storage levels that save memory or replicate the cached data; a short usage sketch follows the list.

StorageLevel.MEMORY_ONLY           # store the RDD in memory only: fastest, largest footprint
StorageLevel.MEMORY_ONLY_2         # same, but keep two replicas
StorageLevel.MEMORY_ONLY_SER       # serialize the RDD first: somewhat slower, smaller footprint
StorageLevel.MEMORY_ONLY_SER_2     # same, but keep two replicas

StorageLevel.MEMORY_AND_DISK
StorageLevel.MEMORY_AND_DISK_2
StorageLevel.MEMORY_AND_DISK_SER
StorageLevel.MEMORY_AND_DISK_SER_2 # a reasonable choice when you are not sure

StorageLevel.DISK_ONLY             # disk-based storage (always serialized)
StorageLevel.DISK_ONLY_2
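A short sketch of choosing a more memory-friendly level (illustrative only; the file path is the local sample used in the example above):

import org.apache.spark.storage.StorageLevel

val words = sc.textFile("file:///D:/demo/words").flatMap(_.split(" "))
// serialize in memory, spill to disk when memory is short, keep two replicas
words.persist(StorageLevel.MEMORY_AND_DISK_SER_2)
words.count()      // the first action populates the cache
words.count()      // served from the cache
words.unpersist()  // release it when it is no longer needed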
Checkpoint

Caching is an effective way to support RDD failure recovery, but if the cache is lost the system still has to recompute the results. So for RDDs with a long lineage and an expensive computation, consider the checkpoint mechanism to store the RDD's computed result. The biggest difference from caching is that a checkpointed RDD's data is persisted directly to a file system, usually HDFS, and the checkpoint is never cleaned up automatically. Another difference: cached data is stored immediately as the result is computed, whereas checkpointing does not happen during the job itself; checkpoint() merely marks the RDD, and once the job finishes the marked RDD is recomputed and only then persisted to storage.

scala> sc.setCheckpointDir("hdfs:///checkpoints")
scala> val mapRDD=sc.textFile("hdfs:///demo/words/").flatMap(_.split(" ")).map((_,1)).cache()
scala> mapRDD.checkpoint()
scala> mapRDD.reduceByKey(_+_,4).collect
scala> mapRDD.unpersist() // then delete /demo/words/ from HDFS
scala> mapRDD.reduceByKey(_+_,4).collect // still works, because the data is read from the checkpoint; it only fails once the checkpoint data itself is deleted

Exam point: be able to explain the difference between checkpoint and cache.

Stage Splitting: Source Code Walkthrough

private def submitStage(stage: Stage) {
    val jobId = activeJobForStage(stage)
    if (jobId.isDefined) {
        if (!waitingStages(stage) && !runningStages(stage) && !failedStages(stage)) {
            val missing = getMissingParentStages(stage).sortBy(_.id)
            if (missing.isEmpty) {
                submitMissingTasks(stage, jobId.get)
            } else {
                for (parent <- missing) {
                    submitStage(parent)
                }
                waitingStages += stage // park the current Stage in waitingStages until its missing parents are submitted
            }
        }
    } else {
        abortStage(stage, "No active job for stage " + stage.id, None)
    }
}

ShuffleDependency | NarrowDependency    ResultStage | ShuffleMapStage

private def submitMissingTasks(stage: Stage, jobId: Int) {
    // work out which partitions still need to be computed
    val partitionsToCompute: Seq[Int] = stage.findMissingPartitions()
    ...
    // add the current Stage to runningStages
    runningStages += stage
    ...
    // compute the preferred (data-local) location for each partition's task
    val taskIdToLocations: Map[Int, Seq[TaskLocation]] = try {
        stage match {
            case s: ShuffleMapStage =>
            partitionsToCompute.map { id => (id, getPreferredLocs(stage.rdd, id))}.toMap
            case s: ResultStage =>
            partitionsToCompute.map { id =>
                val p = s.partitions(id)
                (id, getPreferredLocs(stage.rdd, p))
            }.toMap
        }
    } catch {
        case NonFatal(e) =>
        stage.makeNewStageAttempt(partitionsToCompute.size)
        listenerBus.post(SparkListenerStageSubmitted(stage.latestInfo, properties))
        abortStage(stage, s"Task creation failed: $e\n${Utils.exceptionString(e)}", Some(e))
        runningStages -= stage
        return
    }
    ...
    // build the set of tasks
    val tasks: Seq[Task[_]] = try {
        val serializedTaskMetrics = closureSerializer.serialize(stage.latestInfo.taskMetrics).array()
        stage match {
            case stage: ShuffleMapStage =>
            stage.pendingPartitions.clear()
            partitionsToCompute.map { id =>
                val locs = taskIdToLocations(id)
                val part = partitions(id)
                stage.pendingPartitions += id
                new ShuffleMapTask(stage.id, stage.latestInfo.attemptNumber,
                                   taskBinary, part, locs, properties, serializedTaskMetrics, Option(jobId),
                                   Option(sc.applicationId), sc.applicationAttemptId, stage.rdd.isBarrier())
            }

            case stage: ResultStage =>
            partitionsToCompute.map { id =>
                val p: Int = stage.partitions(id)
                val part = partitions(p)
                val locs = taskIdToLocations(id)
                new ResultTask(stage.id, stage.latestInfo.attemptNumber,
                               taskBinary, part, locs, id, properties, serializedTaskMetrics,
                               Option(jobId), Option(sc.applicationId), sc.applicationAttemptId,
                               stage.rdd.isBarrier())
            }
        }
    } catch {
        case NonFatal(e) =>
        abortStage(stage, s"Task creation failed: $e\n${Utils.exceptionString(e)}", Some(e))
        runningStages -= stage
        return
    }
    ...
    // submit the task set
    if (tasks.size > 0) {

        taskScheduler.submitTasks(new TaskSet(
            tasks.toArray, stage.id, stage.latestInfo.attemptNumber, jobId, properties))
    } else {
        // mark the stage as finished
        markStageAsFinished(stage, None)
        stage match {
            case stage: ShuffleMapStage =>
            logDebug(s"Stage ${stage} is actually done; " +
                     s"(available: ${stage.isAvailable}," +
                     s"available outputs: ${stage.numAvailableOutputs}," +
                     s"partitions: ${stage.numPartitions})")
            markMapStageJobsAsFinished(stage)
            case stage : ResultStage =>
            logDebug(s"Stage ${stage} is actually done; (partitions: ${stage.numPartitions})")
        }
        // submit all waiting child stages of the current Stage
        submitWaitingChildStages(stage)
    }
}

ShuffleMapTask|ResultTask

RDD Operators in Practice

RDDs support two types of operations: transformations, which create a new dataset from an existing one, and actions, which return a value to the driver program after running a computation on the dataset. For example, map is a transformation that passes each dataset element through a function and returns a new RDD representing the results. On the other hand, reduce is an action that aggregates all the elements of the RDD using some function and returns the final result to the driver program (although there is also a parallel reduceByKey that returns a distributed dataset).

All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program. This design enables Spark to run more efficiently. For example, we can realize that a dataset created through map will be used in a reduce and return only the result of the reduce to the driver, rather than the larger mapped dataset.

By default, each transformed RDD may be recomputed each time you run an action on it. However, you may also persist an RDD in memory using the persist (or cache) method, in which case Spark will keep the elements around on the cluster for much faster access the next time you query it. There is also support for persisting RDDs on disk, or replicated across multiple nodes.
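A tiny illustration of that laziness (written for these notes, assuming a spark-shell sc): the function passed to map does not run until an action is invoked.

val data   = sc.parallelize(1 to 3)
val mapped = data.map { x => println(s"mapping $x"); x * 2 }  // nothing is printed yet: map is lazy
mapped.collect()  // the action triggers the job and the println side effect finally runs (on the executors, or locally in local mode)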

Transformations (key topic)
√map(func)

Return a new distributed dataset formed by passing each element of the source through a function func.

scala> val rdd=sc.parallelize(List(1,2,3),3)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.map(item=>item*2).collect()
res0: Array[Int] = Array(2, 4, 6)
√filter(func)

Return a new dataset formed by selecting those elements of the source on which func returns true.

scala> val rdd=sc.parallelize(List(1,2,3),3)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.filter(item=>item%2==0).collect()
res1: Array[Int] = Array(2)
√flatMap(func)

Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).

scala> sc.makeRDD(List("this is a demo","hello world"))
         .flatMap(line=>line.split(" "))
         .collect()
res2: Array[String] = Array(this, is, a, demo, hello, world)

√mapPartitions(func)

Similar to map, but runs separately on each partition (block) of the RDD, so func must be of type Iterator<T> => Iterator<U> when running on an RDD of type T.

scala> val rdd=sc.makeRDD(List("a","b","c","d","e"),3)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[5] at makeRDD at <console>:24
scala> rdd.mapPartitions(vs=> vs.map(_.toUpperCase)).collect()
res3: Array[String] = Array(A, B, C, D, E)

√mapPartitionsWithIndex(func)

Similar to mapPartitions, but also provides func with an integer value representing the index of the partition, so func must be of type (Int, Iterator<T>) => Iterator<U> when running on an RDD of type T.

scala> val rdd=sc.makeRDD(List("a","b","c","d","e"),3)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[5] at makeRDD at <console>:24

scala> rdd.mapPartitionsWithIndex((p,vs)=> vs.map((_,p))).collect
res4: Array[(String, Int)] = Array((a,0), (b,1), (c,1), (d,2), (e,2))
sample(withReplacement, fraction, seed) - sampling

Sample a fraction fraction of the data, with or without replacement, using a given random number generator seed.

scala> val rdd=sc.makeRDD(List("a","b","c","d","e"),3)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[8] at makeRDD at <console>:24

scala> rdd.sample(true,0.8,1L).collect()
res5: Array[String] = Array(a, a, d, d, e, e)

scala> rdd.sample(false,0.8,1L).collect()
res6: Array[String] = Array(a, c, d, e)

withReplacement: whether to sample with replacement; fraction: the sampling fraction; seed: the random seed

union(otherDataset)

Return a new dataset that contains the union of the elements in the source dataset and the argument.

scala> val rdd1=sc.makeRDD(List("a","b"))
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[12] at makeRDD at <console>:24

scala> val rdd2=sc.makeRDD(List("c","d"))
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[13] at makeRDD at <console>:24

scala> rdd1.union(rdd2).collect
res7: Array[String] = Array(a, b, c, d)
intersection(otherDataset)

Return a new RDD that contains the intersection of elements in the source dataset and the argument.

scala> val rdd1=sc.makeRDD(List("a","b"))
rdd1: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[15] at makeRDD at <console>:24

scala> val rdd2=sc.makeRDD(List("a","c"))
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[16] at makeRDD at <console>:24

scala> rdd1.intersection(rdd2).collect()
res8: Array[String] = Array(a)

√distinct([numPartitions])

Return a new dataset that contains the distinct elements of the source dataset.

scala> val rdd=sc.makeRDD(List("a","b","a"))
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[29] at makeRDD at <console>:24

scala> rdd.distinct.collect()
res9: Array[String] = Array(a, b)
√coalesce(numPartitions) - shrink the number of partitions

Decrease the number of partitions in the RDD to numPartitions. Useful for running operations more efficiently after filtering down a large dataset.

scala> val rdd=sc.makeRDD(List("a","b","a"),3)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[36] at makeRDD at <console>:24

scala> rdd.coalesce(2).getNumPartitions
res10: Int = 2

scala> rdd.coalesce(5).getNumPartitions
res11: Int = 3
√repartition(numPartitions)

Reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them. This always shuffles all data over the network.

scala> val rdd=sc.makeRDD(List("a","b","a"),3)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[39] at makeRDD at <console>:24

scala> rdd.repartition(5).getNumPartitions
res12: Int = 5

scala> rdd.repartition(1).getNumPartitions
res13: Int = 1

√groupByKey([numPartitions])

When called on a dataset of (K, V) pairs, returns a dataset of (K, Iterable<V>) pairs.
Note: If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using reduceByKey or aggregateByKey will yield much better performance.
Note: By default, the level of parallelism in the output depends on the number of partitions of the parent RDD. You can pass an optional numPartitions argument to set a different number of tasks.

scala> val wordpair=sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1))
wordpair: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[57] at map at <console>:24

scala> wordpair.groupBy(t=>t._1)
res14: org.apache.spark.rdd.RDD[(String, Iterable[(String, Int)])] = ShuffledRDD[59] at groupBy at <console>:26

scala> wordpair.groupByKey
res15: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = ShuffledRDD[60] at groupByKey at <console>:26

scala> wordpair.groupByKey.map(t=>(t._1,t._2.sum)).collect()
res16: Array[(String, Int)] = Array((this,1), (is,1), (day,2), (come,1), (baby,1), (up,1), (a,1), (on,1), (demo,1), (good,2), (study,1))
√reduceByKey(func, [numPartitions])

When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func, which must be of type (V,V) => V. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.

scala> val wordpair=sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1))
wordpair: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[68] at map at <console>:24

scala> wordpair.reduceByKey((v1,v2)=>v1+v2,3).collect
res17: Array[(String, Int)] = Array((day,2), (come,1), (baby,1), (up,1), (is,1), (a,1), (demo,1), (this,1), (on,1), (good,2), (study,1))

√aggregateByKey(zeroValue)(seqOp, combOp, [numPartitions])

When called on a dataset of (K, V) pairs, returns a dataset of (K, U) pairs where the values for each key are aggregated using the given combine functions and a neutral “zero” value. Allows an aggregated value type that is different than the input value type, while avoiding unnecessary allocations. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.

scala> val wordpair=sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1))
wordpair: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[68] at map at <console>:24

scala> wordpair.aggregateByKey(0)((z,v)=> z+v,(b1,b2)=> b1+b2).collect
res18: Array[(String, Int)] = Array((this,1), (is,1), (day,2), (come,1), (baby,1), (up,1), (a,1), (on,1), (demo,1), (good,2), (study,1))

scala> wordpair.aggregateByKey(0)(_+_,_+_).collect
res19: Array[(String, Int)] = Array((this,1), (is,1), (day,2), (come,1), (baby,1), (up,1), (a,1), (on,1), (demo,1), (good,2), (study,1))
√sortByKey([ascending], [numPartitions])
scala> val wordcount=sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_)

wordcount: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[76] at reduceByKey at <console>:24

scala> wordcount.sortByKey(true,4).collect
res20: Array[(String, Int)] = Array((a,1), (baby,1), (come,1), (day,2), (demo,1), (good,2), (is,1), (on,1), (study,1), (this,1), (up,1))

scala> wordcount.sortByKey(false,4).collect
res21: Array[(String, Int)] = Array((up,1), (this,1), (study,1), (on,1), (is,1), (good,2), (demo,1), (day,2), (come,1), (baby,1), (a,1))

scala> wordcount.sortBy(t=>t._2,false,4).collect
res22: Array[(String, Int)] = Array((good,2), (day,2), (up,1), (a,1), (on,1), (demo,1), (study,1), (this,1), (is,1), (come,1), (baby,1))

√join(otherDataset, [numPartitions])

When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are supported through leftOuterJoin, rightOuterJoin, and fullOuterJoin.

scala> val userRDD=sc.makeRDD(List((1,"zs"),(2,"ls"),(3,"ww")))
userRDD: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[88] at makeRDD at <console>:24

scala> val costRDD=sc.makeRDD(List((1,100),(2,200),(1,150)))
costRDD: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[89] at makeRDD at <console>:24

scala> userRDD.join(costRDD).collect
res23: Array[(Int, (String, Int))] = Array((1,(zs,100)), (1,(zs,150)), (2,(ls,200)))

scala> userRDD.leftOuterJoin(costRDD).collect
res24: Array[(Int, (String, Option[Int]))] = Array((1,(zs,Some(100))), (1,(zs,Some(150))), (2,(ls,Some(200))), (3,(ww,None)))

cogroup(otherDataset, [numPartitions])

When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (Iterable, Iterable)) tuples. This operation is also called groupWith.

scala> val userRDD=sc.makeRDD(List((1,"zs"),(2,"ls"),(3,"ww")))
scala> val costRDD=sc.makeRDD(List((1,100),(2,200),(1,150)))
scala> userRDD.cogroup(costRDD).collect
res25: Array[(Int, (Iterable[String], Iterable[Int]))] = Array((1,(CompactBuffer(zs),CompactBuffer(100, 150))), (2,(CompactBuffer(ls),CompactBuffer(200))), (3,(CompactBuffer(ww),CompactBuffer())))

scala> userRDD.cogroup(costRDD).map(t=>(t._1,t._2._1.toList(0),t._2._2.sum))
res26: Array[(Int, String, Int)] = Array((1,zs,250), (2,ls,200), (3,ww,0))
Actions

Actions trigger job execution. Their return values are usually Unit, an Array, or a simple value (AnyVal). Each action submits a separate job, and a Spark program needs at least one action for anything to run.

reduce(func)

RDD[T], func: (T, T) => T

scala> sc.makeRDD(List(1,2,3)).reduce((v1,v2)=> v1+v2 )
res0: Int = 6

scala> sc.makeRDD(List("a","b","c")).reduce((v1,v2)=> v1+","+v2 )
res1: String = a,c,b
collect() (for testing)

Simply downloads the data from the remote compute nodes to the Driver.

scala> sc.makeRDD(List("a","b","c")).collect()
res2: Array[String] = Array(a, b, c)

The RDD should generally be small: RDD[T] -> Array[T], and T must be serializable.

foreach(func)

An action whose return value is Unit; this operator performs its work on the compute nodes, not on the Driver.

scala> sc.makeRDD(List(1,2,3)).foreach(println) // nothing printed on the Driver: the println runs on the executors

scala> sc.makeRDD(List(1,2,3)).collect().foreach(println)
1
2
3

count()

Counts the number of elements in the RDD.

scala> sc.makeRDD(List("a" ,"b")).count()
res8: Long = 2

first()

Returns the first element; similar to take(1), except that first returns the element itself while take(1) returns an Array.

scala> sc.makeRDD(List(1,2,3)).first
res9: Int = 1

scala> sc.makeRDD(List(1,2,3)).take(1)
res10: Array[Int] = Array(1)

scala> sc.makeRDD(List(1,2,3)).take(2)
res11: Array[Int] = Array(1, 2)

takeSample(withReplacement, num, [seed])

Randomly draws num elements from the RDD and returns them to the Driver as an Array (unlike sample, which is a transformation and returns an RDD).

scala> sc.makeRDD(List(1,2,3,4,5,6)).takeSample(false,10,1L)
res16: Array[Int] = Array(5, 3, 1, 2, 6, 4)

scala> sc.makeRDD(List(1,2,3,4,5,6)).sample(true,0.8,1L).collect()
res12: Array[Int] = Array(1, 1, 4, 4, 6, 6)

takeOrdered(n, [ordering])
scala> sc.makeRDD(List(("zhangsan",10000,18),("zhangsan",10000,19),("lisi",15000,20))).takeOrdered(3)(new Ordering[(String,Int,Int)]{
     |       override def compare(x: (String, Int, Int), y: (String, Int, Int)): Int = {
     |         if(x._2 != y._2){
     |           (x._2-y._2)* -1
     |         }else{
     |           (x._3-y._3) * -1
     |         }
     |       }
     |     })
res18: Array[(String, Int, Int)] = Array((lisi,15000,20), (zhangsan,10000,19), (zhangsan,10000,18))

scala>  sc.makeRDD(List(1,3,2,8,9,6)).takeOrdered(2)
res19: Array[Int] = Array(1, 2)
countByKey()

Note that the number of distinct keys should not be too large, because the result is returned to the Driver as a Map.

scala> sc.textFile("hdfs:///demo/words")
        .flatMap(_.split("\\s+"))
        .map((_,1))
        .countByKey()

saveAsTextFile(path) (close to real-world use)
scala> sc.textFile("hdfs:///demo/words")
            .flatMap(_.split("\\s+"))
            .map(word=>word+","+1).saveAsTextFile("hdfs:///results01")
INFO com.baizhi.service.IUserSerice#login UserRisk 001 123456 1000,15000,2000 219.143.103.186 2019-09-27 10:10:00 Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Mobile Safari/537.36

Exercise ideas on logs like the one above:
  • user distribution by region
  • monthly login statistics per user
  • statistics on users' app-usage habits
Writing out with foreach(func)

The connection would have to be serializable: in the version below the HBase connection and table are created on the Driver but used inside the foreach closure, so they must be shipped (serialized) to the executors, which normally does not work.

val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)
// HBase connection parameters
val hConf = HBaseConfiguration.create()
hConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")
val conn = ConnectionFactory.createConnection(hConf)
val table = conn.getTable(TableName.valueOf("baizhi:t_wordcount"))

sc.textFile("hdfs://CentOS:9000/demo/words")
.flatMap(_.split("\\s+"))
.map((_,1))
.reduceByKey(_+_)
.foreach(wordpair=>{
    // write to HBase
    val put = new Put(wordpair._1.getBytes())
    put.add("cf1".getBytes(),"key".getBytes(),wordpair._1.getBytes())
    put.add("cf1".getBytes(),"count".getBytes(),(wordpair._2+"").getBytes())
    table.put(put)

})
table.close()
conn.close()
sc.stop()

Variables defined on the Driver must be treated as read-only; when a distributed operator uses a Driver variable, that variable has to be downloaded over the network to the executors.

Opening and closing the connection for every record (works, but is expensive):

val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)

val value = new util.HashMap()
sc.textFile("hdfs://CentOS:9000/demo/words")
.flatMap(_.split("\\s+"))
.map((_,1))
.reduceByKey(_+_)
.foreach(wordpair=>{
    // HBase connection parameters (created per record: very expensive)
    val hConf = HBaseConfiguration.create()
    hConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")
    val conn = ConnectionFactory.createConnection(hConf)
    val table = conn.getTable(TableName.valueOf("baizhi:t_wordcount"))

    // write to HBase
    val put = new Put(wordpair._1.getBytes())
    put.add("cf1".getBytes(),"key".getBytes(),wordpair._1.getBytes())
    put.add("cf1".getBytes(),"count".getBytes(),(wordpair._2+"").getBytes())
    table.put(put)

    table.close()
    conn.close()
})
sc.stop()

One connection per partition: foreachPartition

val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)


sc.textFile("hdfs://CentOS:9000/demo/words")
.flatMap(_.split("\\s+"))
.map((_,1))
.reduceByKey(_+_)
.foreachPartition(wordpairs=>{
    // HBase connection parameters (created once per partition)
    val hConf = HBaseConfiguration.create()
    hConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")
    val conn = ConnectionFactory.createConnection(hConf)
    val table = conn.getTable(TableName.valueOf("baizhi:t_wordcount"))

    wordpairs.foreach(wordpair=>{
        // write to HBase
        val put = new Put(wordpair._1.getBytes())
        put.add("cf1".getBytes(),"key".getBytes(),wordpair._1.getBytes())
        put.add("cf1".getBytes(),"count".getBytes(),(wordpair._2+"").getBytes())
        table.put(put)
    })

    table.close()
    conn.close()
})

sc.stop()

Use the class-loading mechanism to make the connection a static (per-JVM) singleton

If one node is responsible for several partitions, they can then share a single connection.

object HBaseSink {
    def createConnection(): Connection = {
        val hConf = HBaseConfiguration.create()
        hConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")
        ConnectionFactory.createConnection(hConf)
    }
    val conn:Connection=createConnection()
    Runtime.getRuntime.addShutdownHook(new Thread(){
        override def run(): Unit = {
            println("=-===close=====")
            conn.close()
        }
    })
}

val sparkConf = new SparkConf()
.setAppName("wordcount")
.setMaster("local[6]")
val sc = new SparkContext(sparkConf)

sc.textFile("hdfs://CentOS:9000/demo/words")
.flatMap(_.split("\\s+"))
.map((_,1))
.reduceByKey(_+_)
.foreachPartition(wordpairs=>{
    // obtain the table from the shared, per-JVM HBase connection
    val table = HBaseSink.conn.getTable(TableName.valueOf("baizhi:t_wordcount"))
    wordpairs.foreach(wordpair=>{
        // write to HBase
        val put = new Put(wordpair._1.getBytes())
        put.add("cf1".getBytes(),"key".getBytes(),wordpair._1.getBytes())
        put.add("cf1".getBytes(),"count".getBytes(),(wordpair._2+"").getBytes())
        table.put(put)
    })
    table.close()
})
sc.stop()

Shared Variables

If a transformation uses a variable defined on the Driver, Spark copies that variable to every compute node, and modifications made to those copies are never propagated back to the Driver's variable.

scala> var count:Int=0
count: Int = 0

scala> sc.textFile("hdfs:///demo/words").foreach(line=> count += 1)

scala> println(count)
0

As the demo above shows, we define an Int variable count on the Driver and use it inside the foreach operator. Under the hood Spark merely ships count over the network to the compute nodes; each node works on its own copy, and when the task finishes the Driver's variable is unchanged. In effect every compute node maintains its own replica of count and never modifies the Driver's copy. The conclusion: any variable defined on the Driver is, in general, read-only from the operators' point of view, and a parallel operator downloads a fresh local copy each time it is used.

Scenario 1

A few dozen MB of user data:

001 zhangsan
002 lisi

One TB of order data:

001 50
002 100
001 120
...

We need to ETL the order data, enriching it with the user names, and output:

001 zhangsan 50
002 lisi 100
001 zhangsan 120
...
scala> val userRDD=sc.makeRDD(List(("001","zhangsan"),("002","lisi")))
scala> val orderRDD=sc.makeRDD(List(("001",500),("002",1000),("001",100)))
scala> userRDD.join(orderRDD).collect
res2: Array[(String, (String, Int))] = Array((001,(zhangsan,500)), (001,(zhangsan,100)), (002,(lisi,1000)))

A join like this normally triggers a shuffle, and shuffling the order data consumes a great deal of network bandwidth. The way around it is a map-side lookup:

scala> val map=Map(("001","zhangsan"),("002","lisi"))
scala> val orderRDD=sc.makeRDD(List(("001",500),("002",1000),("001",100)))
scala> orderRDD.map(t=>(t._1,map.get(t._1).getOrElse(""),t._2)).collect
res3: Array[(String, String, Int)] = Array((001,zhangsan,500), (002,lisi,1000), (001,zhangsan,100))

If orderRDD has many partitions, every subsequent parallel task would have to download map from the Driver, and since several tasks may run on the same compute node there is no need to download it for every single computation. Spark therefore provides broadcast variables: the data is broadcast to all compute nodes ahead of time, so later computations that use the variable read it locally, saving the network cost of talking to the Driver.

scala> val map=Map(("001","zhangsan"),("002","lisi"))
map: scala.collection.immutable.Map[String,String] = Map(001 -> zhangsan, 002 -> lisi)

scala> val bmap=sc.broadcast(map) // broadcast variable
bmap: org.apache.spark.broadcast.Broadcast[scala.collection.immutable.Map[String,String]] = Broadcast(6)

scala> orderRDD.map(t=>(t._1,bmap.value.get(t._1).getOrElse(""),t._2)).collect
res5: Array[(String, String, Int)] = Array((001,zhangsan,500), (002,lisi,1000), (001,zhangsan,100))

bmap.value returns the map that was broadcast to the local node, so the program no longer downloads the variable from the Driver.

Scenario 2

TB-scale user data:

001 zhangsan
002 lisi

TB-scale order data:

001 100
002 1000

Approach: pre-partition both datasets to be joined, making sure they use exactly the same number of partitions (for example 10,000), and then join the matching partition files pair by pair, so each individual join only has to process on the order of 100 MB & 100 MB of data:

sc.textFile("hdfs:///users-log/").map(...).repartition(1000).saveAsTextFile("hdfs:///users")
sc.textFile("hdfs:///orders-log/").map(...).repartition(1000).saveAsTextFile("hdfs:///orders")

sc.textFile("hdfs://CentOS:9000/users/part-00000").map(t=>t.split("\\s+")).map(ts=>(ts(0),ts(1)))
.join(sc.textFile("hdfs://CentOS:9000/orders/part-00000").map(t=>t.split("\\s+")).map(ts=>(ts(0),ts(1))))
...

sc.textFile("hdfs://CentOS:9000/users/part-00001").map(t=>t.split("\\s+")).map(ts=>(ts(0),ts(1)))
.join(sc.textFile("hdfs://CentOS:9000/orders/part-00002").map(t=>t.split("\\s+")).map(ts=>(ts(0),ts(1))))
...

Accumulators

scala> var count:Int=0
count: Int = 0

scala> sc.textFile("hdfs:///demo/words").foreach(line=> count += 1)

scala> println(count)
0

Because modified values are not sent back to the Driver, use the accumulators that Spark provides:

scala> val count = sc.longAccumulator("count")
count: org.apache.spark.util.LongAccumulator = LongAccumulator(id: 525, name: Some(count), value: 0)

scala> sc.textFile("hdfs:///demo/words").foreach(line=> count.add(1))

scala> count.value
res14: Long = 4
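A short sketch of a typical accumulator use, counting malformed records while parsing (illustrative only; the data and the "id,amount" format are made up for these notes). Note that accumulator updates made inside transformations may be re-applied if a task is retried, so updating inside an action such as foreach is the safer pattern.

val badRecords = sc.longAccumulator("badRecords")
val raw = sc.parallelize(List("1,100", "2,abc", "3,250", "oops"))

raw.foreach { line =>
  val parts = line.split(",")
  // count lines that do not parse as (id, numeric amount)
  if (parts.length != 2 || scala.util.Try(parts(1).toDouble).isFailure) badRecords.add(1)
}

println(badRecords.value)   // 2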
