Spark
1. Overview
Spark is a lightning-fast unified analytics engine (compute framework) for large-scale data processing.
For batch processing, Spark is roughly 10 to 100 times faster than Hadoop MapReduce, because Spark uses a more advanced DAG-based
task scheduler: it splits a job into several stages and hands those stages to the cluster's compute nodes batch by batch.
DAG
DAG is short for "directed acyclic graph": "directed" means every edge points in one direction, and "acyclic" means that following the edges can never lead back to where you started. Spark's scheduler models a job as a DAG of RDD transformations: narrow dependencies are pipelined together into a single stage, while a wide dependency (a shuffle) ends one stage and starts the next. Because the graph contains no cycles, the stages can be scheduled in topological order, and stages that do not depend on each other can run in parallel, which is what gives this model its scalability. A sketch showing how a shuffle creates a stage boundary appears just below.
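As a hedged illustration (run in spark-shell, where sc is predefined; the HDFS path matches the word-count test later in these notes), toDebugString prints the lineage DAG that the scheduler builds, and the shuffle required by reduceByKey is exactly where one stage ends and the next begins:

// Narrow transformations (flatMap, map) are pipelined into a single stage;
// the shuffle behind reduceByKey introduces a stage boundary in the DAG.
val counts = sc.textFile("hdfs:///demo/words")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)         // wide dependency => new stage
println(counts.toDebugString) // indentation in the output marks the stage boundary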
Differences between MapReduce and Spark
Comparison of the computation flow:
MapReduce:
1) Although MapReduce is based on a vector-programming style, its computation model is too simple: a job is split only into a Map stage and a Reduce stage, with no provision for iterative computation.
2) The intermediate results of Map tasks are written to local disk, so the many I/O calls make data reads and writes inefficient.
3) MapReduce submits the job first and then requests resources during the computation, and its execution model is heavyweight: each unit of parallelism is implemented by its own JVM process.
Spark:
1) Intelligent DAG task splitting breaks a complex computation into several stages, which supports iterative computation.
2) Spark provides caching and fault-tolerance strategies: intermediate results can be kept in memory or on disk, which speeds up each stage and improves overall efficiency (a caching sketch follows this list).
3) Spark requests its compute resources up front, when the job starts. Parallelism is achieved by running threads inside Executor processes, which makes computation much lighter-weight than in MapReduce.
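To make point 2 above concrete, here is a minimal sketch (made-up numbers, runnable in spark-shell where sc is provided) of an iterative job that caches its working set: after the first pass, every further pass reads from memory instead of recomputing the lineage from the source:

// Hypothetical iterative computation: cache() keeps the RDD in memory between passes.
val points = sc.parallelize(1 to 1000000).map(_.toDouble).cache()
var total = 0.0
for (_ <- 1 to 10) {    // 10 passes over the same dataset
  total += points.sum() // served from the in-memory cache after the first pass
}
println(total)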
2. Environment Setup
①. Tune Linux limits
# Raise the per-user open-file and process limits (reboot required)
[root@CentOS ~]# vi /etc/security/limits.conf
* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800
②. Prerequisites
# Configure the hostname
vi /etc/sysconfig/network
# Set up IP-to-hostname mapping
vi /etc/hosts
# Stop the firewall service for the current session
[root@CentOS ~]# service iptables stop
# Disable it at boot
[root@CentOS ~]# chkconfig iptables off
[root@CentOS ~]# chkconfig --list | grep iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
③. Java environment
[root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm
[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOS ~]# source ~/.bashrc
# Passwordless SSH login
[root@CentOS ~]# ssh-keygen -t rsa
[root@CentOS ~]# ssh-copy-id CentOS
④. HDFS environment
1. core-site
vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml
<!-- NameNode access endpoint -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://CentOS:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
2. hdfs-site and slaves (slaves omitted)
vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
<!-- Block replication factor -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- Host running the Secondary NameNode -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>CentOS:50090</value>
</property>
<!-- Maximum concurrent file transfers per DataNode -->
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
<!-- DataNode handler (parallel processing) thread count -->
<property>
<name>dfs.datanode.handler.count</name>
<value>6</value>
</property>
3. yarn-site
vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml
<!-- Enable the shuffle auxiliary service required by MapReduce -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Host running the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>CentOS</value>
</property>
<!-- Disable the physical-memory check -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<!-- Disable the virtual-memory check -->
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
4.mapred-site
vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml
<!-- Run the MapReduce framework on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
5. Configure environment variables
[root@CentOS ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc
6. Start Hadoop
[root@CentOS ~]# hdfs namenode -format # Create the fsimage file needed for initialization
[root@CentOS ~]# start-dfs.sh
[root@CentOS ~]# start-yarn.sh
⑤.Spark
Ⅰ. Spark on YARN
Download spark-2.4.5-bin-without-hadoop.tgz.
Extract it and rename the extracted directory to spark-2.4.5.
[root@CentOS ~]# cd /usr/spark-2.4.5/
[root@CentOS spark-2.4.5]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.4.5]# vi conf/spark-env.sh
1.spark-env.sh
# Options read in YARN client/cluster mode
# - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
HADOOP_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
SPARK_EXECUTOR_CORES=2 # threads (cores) per executor
SPARK_EXECUTOR_MEMORY=1G # maximum memory per executor
SPARK_DRIVER_MEMORY=1G # maximum memory for the driver
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
2. spark-defaults.conf
[root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs
3. Create the log output directory
[root@CentOS ~]# hdfs dfs -mkdir /spark-logs
4. Start the Spark history server
[root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
☆☆☆ Web UI port: 18080
5. Test the environment
# --master: the resource manager to connect to (here YARN)
# --deploy-mode: client or cluster
# --class: the main class to run
# --num-executors: number of executor processes for the computation
# --executor-cores: maximum CPU cores per executor
[root@CentOS spark-2.4.5]# ./bin/spark-submit \
--master yarn \
--deploy-mode client \
--class org.apache.spark.examples.SparkPi \
--num-executors 2 \
--executor-cores 3 \
/usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar
Ⅱ. Spark Standalone
spark-env.sh
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_DAEMON_CLASSPATH, to set the classpath for all daemons
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
SPARK_MASTER_HOST=CentOS
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4 # cores per worker
SPARK_WORKER_INSTANCES=2 # number of worker instances on this node
SPARK_WORKER_MEMORY=2g # maximum memory per worker
export SPARK_MASTER_HOST
export SPARK_MASTER_PORT
export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export SPARK_WORKER_INSTANCES
export LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
Start:
[root@CentOS spark-2.4.5]# ./sbin/start-all.sh
☆☆☆ Web UI port: 8080
Test:
./bin/spark-submit \
--master spark://CentOS:7077 \
--deploy-mode client \
--class org.apache.spark.examples.SparkPi \
--total-executor-cores 6 \
/usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar
⑥. Spark Shell
Ⅰ. YARN mode
./bin/spark-shell \
--master yarn \
--deploy-mode client \
--executor-cores 4 \
--num-executors 3
Ⅱ. Standalone mode
./bin/spark-shell \
--master spark://CentOS:7077 \
--total-executor-cores 6
Test case:
this is a dome
hello world
good good study
day day up
come on baby
Upload the file to /demo/words on HDFS.
sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2,true).saveAsTextFile("hdfs:///demo/results")
# Result (contents of the output files):
(this,1), (is,1), (come,1), (hello,1), (baby,1), (world,1), (up,1), (a,1), (on,1), (dome,1), (study,1), (good,2), (day,2)
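To verify the result from the same shell session, the saved part files can be read back (a small usage sketch, using the output path above):

sc.textFile("hdfs:///demo/results").collect().foreach(println)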
3. Spark RDD
①. RDD
Creation
The most fundamental data abstraction in Spark is the RDD: the Resilient Distributed Dataset.
https://www.jianshu.com/p/6411fff954cf
At a high level, every Spark application consists of a Driver program that runs the user's main method and performs operations on the cluster.
There are two ways to create an RDD:
1. parallelize an existing Scala collection in the Driver; 2. reference a dataset in an external storage system (see the sketch below).
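A minimal sketch of both creation paths in spark-shell (the collection contents and the HDFS path are illustrative, not prescribed):

// 1. Parallelize an existing Scala collection held by the Driver.
val fromCollection = sc.parallelize(List(1, 2, 3, 4, 5), numSlices = 2)
// 2. Reference a dataset in an external storage system (here HDFS).
val fromHdfs = sc.textFile("hdfs:///demo/words")
println(fromCollection.count()) // 5
println(fromHdfs.first())       // first line of the file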
1. HDFS
1. Maven coordinates
<!-- Spark RDD dependency -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.4.5</version>
</dependency>
<!-- HDFS integration -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.9.2</version>
</dependency>
<!-- Build plugins -->
<build>
<plugins>
<!-- Scala compiler plugin -->
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>4.0.1</version>
<executions>
<execution>
<id>scala-compile-first</id>
<phase>process-resources</phase>
<goals>
<goal>add-source</goal>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Fat-jar (shade) plugin -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>