Spark Overview and Environment Setup (YARN | Standalone)

Overview

Spark is a lightning-fast unified analytics engine (compute framework) for large-scale data processing. For batch workloads, Spark is roughly 10 to 100 times faster than Hadoop MapReduce, because Spark uses more advanced DAG-based task scheduling: a job is split into stages, and the stages are handed in batches to the cluster's compute nodes for execution.

MapReduce VS Spark

MapReduce, the first-generation big-data processing framework, was originally designed simply to meet the then-urgent need for computation over massive data sets. Spun off from the Nutch project (a Java search engine) in 2006, it mainly solved the problems that came with people's early, rather basic understanding of big data.

MapReduce computation is implemented entirely on top of disk I/O. As big-data technology spread, people began to rethink how big data should be processed: it was no longer enough to finish a computation within a reasonable time, and much stricter demands were placed on timeliness, because MapReduce started being used for complex, higher-order algorithms that usually cannot be completed in a single MapReduce pass. Since the MapReduce model always writes its results to disk, every iteration must reload the data from disk into memory, which adds ever more latency to subsequent iterations.
Spark was born in 2009 at UC Berkeley's AMP Lab and was first open-sourced in 2010, quickly winning over many developers. It entered the Apache incubator in June 2013 and became an Apache top-level project in February 2014. Spark grew so fast because, at the computation layer, it is clearly superior to Hadoop's disk-based iterative MapReduce: Spark computes on data in memory and can cache intermediate results in memory as well, which saves time across iterations and greatly improves computation efficiency on massive data sets.
Spark has also published a comparison of MapReduce and Spark on linear regression (an algorithm that needs n iterations), in which Spark ran roughly 10 to 100 times faster than MapReduce.
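To make the contrast concrete, here is a minimal sketch of an iterative Spark job (the HDFS path, the learning rate and the iteration count are illustrative assumptions): the data set is parsed once, cached in memory, and then rescanned on every iteration instead of being re-read from disk as a chain of MapReduce jobs would require.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: single-feature linear regression trained by gradient descent.
object IterativeDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("iterative-demo"))

    // One "x,y" pair per line, e.g. "1.0,2.1" (placeholder input path).
    val points = sc.textFile("hdfs:///demo/points")
      .map { line => val Array(x, y) = line.split(","); (x.toDouble, y.toDouble) }
      .cache()                                   // keep the parsed data in memory

    var w = 0.0                                  // model: y ~ w * x
    for (_ <- 1 to 10) {                         // each iteration reuses the cached RDD
      val gradient = points.map { case (x, y) => (w * x - y) * x }.mean()
      w -= 0.1 * gradient
    }
    println(s"w = $w")
    sc.stop()
  }
}
```

Without the cache() call, each of the ten iterations would re-read and re-parse the input from HDFS, which is essentially the per-pass overhead that the MapReduce model always pays.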
Beyond that, Spark's design philosophy follows a "one stack to rule them all" strategy: on top of Spark's batch engine it offers a family of compute services such as interactive queries (Spark SQL), near-real-time stream processing (Spark Streaming), machine learning (MLlib) and graph computation (GraphX).

Apache Spark sits at the computation layer; strategically, the Spark project acts as a bridge rather than discarding the existing Hadoop-centric big-data stack. Spark can read data from HDFS, HBase, Cassandra and Amazon S3, which means that when Spark is adopted as the computation layer, the existing storage-layer architecture does not need to change.
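As a rough illustration (the paths below are placeholders, and reading from S3 additionally assumes the hadoop-aws connector is on the classpath), the same RDD API is used no matter where the data lives; only the URI scheme changes:

```scala
// Run inside spark-shell, where the SparkContext `sc` is already defined.
val fromHdfs  = sc.textFile("hdfs://CentOS:9000/demo/words")   // data on HDFS
val fromLocal = sc.textFile("file:///tmp/words.txt")           // data on the local file system
// val fromS3 = sc.textFile("s3a://some-bucket/words")         // data on Amazon S3 (needs hadoop-aws)
println(fromHdfs.count())
```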

Computation Flow

Because Spark arrived after MapReduce, it absorbed the lessons of MapReduce's design and avoided most of the pain points of the MapReduce computation process. Let's first recall how a MapReduce computation flows.

Its main drawbacks can be summarized as follows:

1) Although MapReduce is built on a vector-programming idea, its computation model is overly simple: a job is merely split into a Map stage and a Reduce stage, with no consideration for iterative computation scenarios.

2) The intermediate results of Map tasks are written to local disk, causing excessive I/O and poor read/write efficiency.

3) MapReduce submits the job first and only requests resources during the computation, and its execution model is heavyweight: each unit of parallelism is implemented by a separate JVM process.

Even this short list makes the problems with MapReduce apparent. At the computation layer Spark therefore drew on MapReduce's design experience and introduced the DAGScheduler and TaskScheduler, breaking away from the rigid two-phase Map/Reduce model of a MapReduce job, which fits poorly with workloads that need many iterations. Spark splits a job by stage: at the start of a computation the DAGScheduler works out the stages of the job and wraps each stage into a TaskSet, and the TaskScheduler then submits the TaskSets to the cluster for execution.
Compared with MapReduce, Spark computation has the following advantages (a minimal code sketch follows the list):

1) Intelligent DAG-based task splitting: a complex computation is broken into stages, which suits iterative workloads.
2) Spark provides caching and fault-tolerance strategies: intermediate results can be kept in memory or on disk, which speeds up each stage and improves overall efficiency.
3) Spark acquires its compute resources up front, when the computation starts. Parallelism is achieved by running threads inside Executor processes, which is far lighter than MapReduce's process-per-task model.
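Here is a minimal sketch of the stage splitting described above, meant to be run in spark-shell (the input path is a placeholder): reduceByKey introduces a shuffle, so the DAGScheduler cuts this word count into two stages, each of which is wrapped into a TaskSet and handed to the TaskScheduler.

```scala
// reduceByKey forces a shuffle, which becomes a stage boundary in the DAG.
val counts = sc.textFile("hdfs:///demo/words")   // placeholder input path
  .flatMap(_.split("\\s+"))                      // narrow transformations stay in the first stage
  .map((_, 1))
  .reduceByKey(_ + _)                            // wide transformation: starts the second stage

println(counts.toDebugString)                    // prints the lineage, with the shuffle boundary visible
counts.collect().foreach(println)                // both stages also show up in the web UI on port 4040
```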

Tip: Spark currently ships Cluster Manager integrations for YARN, Standalone, Mesos, Kubernetes and others. YARN and Standalone are the ones most commonly used in production.
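As a small illustration (not tied to any particular setup below), the cluster manager is selected purely by the master URL passed to SparkConf or to spark-submit's --master flag:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("master-url-demo")
// conf.setMaster("local[*]")             // no cluster manager: run locally on all cores
// conf.setMaster("spark://CentOS:7077")  // Standalone cluster manager
// conf.setMaster("yarn")                 // YARN (requires HADOOP_CONF_DIR / YARN_CONF_DIR)
conf.setMaster("local[*]")
val sc = new SparkContext(conf)
println(sc.master)
sc.stop()
```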

Environment Setup

1. Spark on YARN

Hadoop environment
  • Set CentOS process and open-file limits (optional)
 [root@CentOS ~]# vi /etc/security/limits.conf
    * soft nofile 204800
    * hard nofile 204800
    * soft nproc 204800
    * hard nproc 204800

This tunes Linux by raising these limits; reboot CentOS for the changes to take effect.


  • Configure the hostname (takes effect after reboot)
    [root@CentOS ~]# vi /etc/hostname
    CentOS
    [root@CentOS ~]# reboot
  • Set up the IP mapping
    [root@CentOS ~]# vi /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.52.134 CentOS
  • Firewall service
  # Temporarily stop the service
  [root@CentOS ~]# systemctl stop firewalld
  [root@CentOS ~]# firewall-cmd --state
  not running

  # Disable autostart at boot
  [root@CentOS ~]# systemctl disable firewalld
  • Install JDK 1.8+
[root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm
  [root@CentOS ~]# ls -l /usr/java/
  total 4
  lrwxrwxrwx. 1 root root 16 Mar 26 00:56 default -> /usr/java/latest
  drwxr-xr-x. 9 root root 4096 Mar 26 00:56 jdk1.8.0_171-amd64
  lrwxrwxrwx. 1 root root 28 Mar 26 00:56 latest -> /usr/java/jdk1.8.0_171-amd64
  [root@CentOS ~]# vi .bashrc
    JAVA_HOME=/usr/java/latest
    PATH=$PATH:$JAVA_HOME/bin
    CLASSPATH=.
    export JAVA_HOME
    export PATH
    export CLASSPATH
   [root@CentOS ~]# source ~/.bashrc
  • Configure passwordless SSH
 [root@CentOS ~]# ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    4b:29:93:1c:7f:06:93:67:fc:c5:ed:27:9b:83:26:c0 root@CentOS
    The key's randomart image is:
    +--[ RSA 2048]----+
    | |
    | o . . |
    | . + + o .|
    | . = * . . . |
    | = E o . . o|
    | + = . +.|
    | . . o + |
    | o . |
    | |
    +-----------------+
    [root@CentOS ~]# ssh-copy-id CentOS
    The authenticity of host 'centos (192.168.40.128)' can't be established.
    RSA key fingerprint is 3f:86:41:46:f2:05:33:31:5d:b6:11:45:9c:64:12:8e.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'centos,192.168.40.128' (RSA) to the list of known hosts.
    root@centos's password:
    Now try logging into the machine, with "ssh 'CentOS'", and check in:
    .ssh/authorized_keys
    to make sure we haven't added extra keys that you weren't expecting.
    [root@CentOS ~]# ssh root@CentOS
    Last login: Tue Mar 26 01:03:52 2019 from 192.168.40.1
    [root@CentOS ~]# exit
    logout
    Connection to CentOS closed.
  • Configure HDFS | YARN

  • Extract hadoop-2.9.2.tar.gz into the /usr directory, then edit the [core|hdfs|yarn|mapred]-site.xml configuration files.

    [root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml

<!-- NameNode access endpoint -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://CentOS:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml

 <!-- block replication factor -->
    <property>
    <name>dfs.replication</name>
    <value>1</value>
    </property>
    <!-- host that runs the Secondary NameNode -->
    <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
    </property>
    <!-- maximum number of concurrent file operations on a DataNode -->
    <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
    </property>
    <!-- DataNode handler (parallelism) count -->
    <property>
    <name>dfs.datanode.handler.count</name>
    <value>6</value>
    </property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml

<!-- Shuffle service required by the MapReduce framework -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- Host that runs the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>CentOS</value>
</property>
<!-- Disable the physical memory check -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<!-- Disable the virtual memory check -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml

 <!-- Resource manager implementation for the MapReduce framework -->
    <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    </property>
  • Configure the Hadoop environment variables
 [root@CentOS ~]# vi .bashrc
    JAVA_HOME=/usr/java/latest
    HADOOP_HOME=/usr/hadoop-2.9.2
    PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    CLASSPATH=.
    export JAVA_HOME
    export CLASSPATH
    export PATH
    export HADOOP_HOME
    [root@CentOS ~]# source .bashrc
  • Start the Hadoop services
 [root@CentOS ~]# hdfs namenode -format # create the fsimage file required for initialization
    [root@CentOS ~]# start-dfs.sh
    [root@CentOS ~]# start-yarn.sh
    [root@CentOS ~]# jps
    122690 NodeManager
    122374 SecondaryNameNode
    122201 DataNode
    122539 ResourceManager
    122058 NameNode
    123036 Jps

Visit http://CentOS:8088 (YARN ResourceManager) and http://CentOS:50070/ (HDFS NameNode).

Spark environment

Download spark-2.4.5-bin-without-hadoop.tgz, extract it into the /usr directory, rename the Spark directory to spark-2.4.5, then edit the spark-env.sh and spark-defaults.conf files.

  • Extract and install Spark
[root@CentOS ~]# tar -zxf spark-2.4.5-bin-without-hadoop.tgz -C /usr/
  [root@CentOS ~]# mv /usr/spark-2.4.5-bin-without-hadoop/ /usr/spark-2.4.5
  [root@CentOS ~]# tree -L 1 /usr/spark-2.4.5/
  !"" bin # Spark系统执行行脚本
  !"" conf # Spar配置⽬目录
  !"" data
  !"" examples # Spark提供的官⽅方案例例
  !"" jars
  !"" kubernetes
  !"" LICENSE
  !"" licenses
  !"" NOTICE
  !"" python
  !"" R
  !"" README.md
  !"" RELEASE
  !"" sbin # Spark⽤用户执⾏行行脚本
  #"" yarn
  • Configure the Spark service
   [root@CentOS ~]# cd /usr/spark-2.4.5/
    [root@CentOS spark-2.4.5]# mv conf/spark-env.sh.template conf/spark-env.sh
    [root@CentOS spark-2.4.5]# vi conf/spark-env.sh
    
    # Options read in YARN client/cluster mode
    # - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
    # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
    # - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
    # - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
    # - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
    # - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
    
    HADOOP_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
    YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
    SPARK_EXECUTOR_CORES=2
    SPARK_EXECUTOR_MEMORY=1G
    SPARK_DRIVER_MEMORY=1G
    
    LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
    export HADOOP_CONF_DIR
    export YARN_CONF_DIR
    export SPARK_EXECUTOR_CORES
    export SPARK_DRIVER_MEMORY
    export SPARK_EXECUTOR_MEMORY
    export LD_LIBRARY_PATH
    export SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
    export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"



    [root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
    [root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf
    
    # append at the end of the file
    spark.eventLog.enabled=true
    spark.eventLog.dir=hdfs:///spark-logs

You first need to create the spark-logs directory on HDFS; it is where the Spark history server stores the data of completed applications.

[root@CentOS ~]# hdfs dfs -mkdir /spark-logs
  • Start the Spark history server

    [root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
    [root@CentOS spark-2.4.5]# jps
    124528 HistoryServer
    122690 NodeManager
    122374 SecondaryNameNode
    122201 DataNode
    122539 ResourceManager
    122058 NameNode
    124574 Jps

  • Visit http://<host-ip>:18080 to open the Spark History Server UI.

Testing the environment

spark-submit

    [root@CentOS spark-2.4.5]# ./bin/spark-submit \
    --master yarn \
    --deploy-mode client \
    --class org.apache.spark.examples.SparkPi \
    --num-executors 2 \
    --executor-cores 3 \
    /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

Result of the Pi computation:

19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID
0) in 6609 ms on CentOS (executor 1) (1/2)
19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID
1) in 6403 ms on CentOS (executor 1) (2/2)
19/04/21 03:30:39 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have
all completed, from pool
19/04/21 03:30:39 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at
SparkPi.scala:38) finished in 29.116 s
19/04/21 03:30:40 INFO scheduler.DAGScheduler: Job 0 finished: reduce at
SparkPi.scala:38, took 30.317103 s
`Pi is roughly 3.141915709578548`
19/04/21 03:30:40 INFO server.AbstractConnector: Stopped Spark@41035930{HTTP/1.1,
[http/1.1]}{0.0.0.0:4040}
19/04/21 03:30:40 INFO ui.SparkUI: Stopped Spark web UI at http://CentOS:4040
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
Parameter notes:
--master: the cluster manager to connect to, here yarn
--deploy-mode: deployment mode, client or cluster; determines whether the Driver runs remotely on the cluster
--num-executors: number of Executor processes requested for the computation
--executor-cores: maximum number of CPU cores each Executor may use

Spark shell
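A hedged example of using the shell against YARN: start it with ./bin/spark-shell --master yarn --deploy-mode client --num-executors 2 (the flag values here are illustrative), then run a word count over an HDFS directory that already contains text files, for example:

```scala
// Runs inside spark-shell attached to YARN; hdfs:///demo/words is a placeholder
// directory that must already contain text files.
sc.textFile("hdfs:///demo/words")
  .flatMap(_.split("\\s+"))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)   // most frequent words first
  .take(10)
  .foreach(println)
```

While the shell is open, the application is also visible in the YARN ResourceManager UI at http://CentOS:8088.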

2. Spark Standalone

Hadoop environment

  • Set CentOS process and open-file limits (optional)

    [root@CentOS ~]# vi /etc/security/limits.conf

    * soft nofile 204800
    * hard nofile 204800
    * soft nproc 204800
    * hard nproc 204800

This tunes Linux by raising these limits; reboot CentOS for the changes to take effect.

  • Configure the hostname (takes effect after reboot)
    [root@CentOS ~]# vi /etc/hostname
    CentOS
    [root@CentOS ~]# reboot

  • Set up the IP mapping

    [root@CentOS ~]# vi /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.52.134 CentOS

  • Firewall service
    # Temporarily stop the service
    [root@CentOS ~]# systemctl stop firewalld
    [root@CentOS ~]# firewall-cmd --state
    not running
    # Disable autostart at boot
    [root@CentOS ~]# systemctl disable firewalld

  • Install JDK 1.8+
    [root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm
    [root@CentOS ~]# ls -l /usr/java/
    total 4
    lrwxrwxrwx. 1 root root 16 Mar 26 00:56 default -> /usr/java/latest
    drwxr-xr-x. 9 root root 4096 Mar 26 00:56 jdk1.8.0_171-amd64
    lrwxrwxrwx. 1 root root 28 Mar 26 00:56 latest -> /usr/java/jdk1.8.0_171-amd64
    [root@CentOS ~]# vi .bashrc
    JAVA_HOME=/usr/java/latest
    PATH=$PATH:$JAVA_HOME/bin
    CLASSPATH=.
    export JAVA_HOME
    export PATH
    export CLASSPATH
    [root@CentOS ~]# source ~/.bashrc

  • Configure passwordless SSH

    [root@CentOS ~]# ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    4b:29:93:1c:7f:06:93:67:fc:c5:ed:27:9b:83:26:c0 root@CentOS
    The key's randomart image is:
    +--[ RSA 2048]----+
    | |
    | o . . |
    | . + + o .|
    | . = * . . . |
    | = E o . . o|
    | + = . +.|
    | . . o + |
    | o . |
    | |
    +-----------------+
    [root@CentOS ~]# ssh-copy-id CentOS
    The authenticity of host 'centos (192.168.40.128)' can't be established.
    RSA key fingerprint is 3f:86:41:46:f2:05:33:31:5d:b6:11:45:9c:64:12:8e.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'centos,192.168.40.128' (RSA) to the list of known hosts.
    root@centos's password:
    Now try logging into the machine, with "ssh 'CentOS'", and check in:
    .ssh/authorized_keys
    to make sure we haven't added extra keys that you weren't expecting.
    [root@CentOS ~]# ssh root@CentOS
    Last login: Tue Mar 26 01:03:52 2019 from 192.168.40.1
    [root@CentOS ~]# exit
    logout
    Connection to CentOS closed.

  • Configure HDFS
    Extract hadoop-2.9.2.tar.gz into the /usr directory, then edit the [core|hdfs|yarn|mapred]-site.xml configuration files.

    [root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml

<!-- NameNode access endpoint -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://CentOS:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml

<!-- block replication factor -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- host that runs the Secondary NameNode -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>CentOS:50090</value>
</property>
<!-- maximum number of concurrent file operations on a DataNode -->
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
<!-- DataNode handler (parallelism) count -->
<property>
<name>dfs.datanode.handler.count</name>
<value>6</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml

<!-- Shuffle service required by the MapReduce framework -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Host that runs the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>CentOS</value>
</property>
<!-- Disable the physical memory check -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<!-- Disable the virtual memory check -->
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml

<!-- Resource manager implementation for the MapReduce framework -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
  • Configure the Hadoop environment variables

    [root@CentOS ~]# vi .bashrc
    JAVA_HOME=/usr/java/latest
    HADOOP_HOME=/usr/hadoop-2.9.2
    PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    CLASSPATH=.
    export JAVA_HOME
    export CLASSPATH
    export PATH
    export HADOOP_HOME
    [root@CentOS ~]# source .bashrc

  • Start the Hadoop services

    [root@CentOS ~]# hdfs namenode -format # create the fsimage file required for initialization
    [root@CentOS ~]# start-dfs.sh
    [root@CentOS ~]# jps
    122374 SecondaryNameNode
    122201 DataNode
    122058 NameNode
    123036 Jps

Visit http://CentOS:50070/ (HDFS NameNode).

Spark environment

Download spark-2.4.5-bin-without-hadoop.tgz, extract it into the /usr directory, rename the Spark directory to spark-2.4.5, then edit the spark-env.sh and spark-defaults.conf files.

  • Extract and install Spark
    [root@CentOS ~]# tar -zxf spark-2.4.5-bin-without-hadoop.tgz -C /usr/
    [root@CentOS ~]# mv /usr/spark-2.4.5-bin-without-hadoop/ /usr/spark-2.4.5
    [root@CentOS ~]# tree -L 1 /usr/spark-2.4.5/
    !"" bin # Spark系统执行行脚本
    !"" conf # Spar配置⽬目录
    !"" data
    !"" examples # Spark提供的官⽅方案例例
    !"" jars
    !"" kubernetes
    !"" LICENSE
    !"" licenses
    !"" NOTICE
    !"" python
    !"" R
    !"" README.md
    !"" RELEASE
    !"" sbin # Spark⽤用户执⾏行行脚本
    #"" yarn

  • Configure the Spark service

 [root@CentOS ~]# cd /usr/spark-2.4.5/
    [root@CentOS spark-2.4.5]# mv conf/spark-env.sh.template conf/spark-env.sh
    [root@CentOS spark-2.4.5]# vi conf/spark-env.sh
    
    # Options read in YARN client/cluster mode
    # - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
    # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
    # - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
    # - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
    # - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
    # - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
    # >>>>> The following is the YARN configuration (commented out for Standalone)
    #YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
    #SPARK_EXECUTOR_CORES= 2
    #SPARK_EXECUTOR_MEMORY=1G
    #SPARK_DRIVER_MEMORY=1G
    #LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
    #export HADOOP_CONF_DIR
    #export YARN_CONF_DIR
    #export SPARK_EXECUTOR_CORES
    #export SPARK_DRIVER_MEMORY
    #export SPARK_EXECUTOR_MEMORY
    #export LD_LIBRARY_PATH
    #export SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
    #export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
    
    
    # Options for the daemons used in the standalone deploy mode
    # - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
    # - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
    # - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
    # - SPARK_WORKER_CORES, to set the number of cores to use on this machine
    # - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
    # - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
    # - SPARK_WORKER_DIR, to set the working directory of worker processes
    # - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
    # - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
    # - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
    # - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service(e.g. "-Dx=y")
    # - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
    # - SPARK_DAEMON_CLASSPATH, to set the classpath for all daemons
    # - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
    SPARK_MASTER_HOST=CentOS
    SPARK_MASTER_PORT=7077
    SPARK_WORKER_CORES=4
    SPARK_WORKER_INSTANCES=2
    SPARK_WORKER_MEMORY=2g
    
    export SPARK_MASTER_HOST
    export SPARK_MASTER_PORT
    export SPARK_WORKER_CORES
    export SPARK_WORKER_MEMORY
    export SPARK_WORKER_INSTANCES
    
    export LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
    export SPARK_DIST_CLASSPATH=$(hadoop classpath)
    export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"



    [root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
    [root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf
    
    # append at the end of the file
    spark.eventLog.enabled=true
    spark.eventLog.dir=hdfs:///spark-logs

You first need to create the spark-logs directory on HDFS; it is where the Spark history server stores the data of completed applications.

[root@CentOS ~]# hdfs dfs -mkdir /spark-logs
  • Start the Spark history server

If a history server is still running from the YARN setup above, stop it first with ./sbin/stop-history-server.sh

[root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
[root@CentOS spark-2.4.5]# jps
124528 HistoryServer
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
124574 Jps
  • Visit http://<host-ip>:18080 to open the Spark History Server UI.


  • Start Spark's own compute services (master and workers)

  • Verify that hadoop classpath resolves (SPARK_DIST_CLASSPATH depends on it):
    [root@CentOS ~]# echo $(hadoop classpath)
    /usr/hadoop-2.9.2/etc/hadoop:/usr/hadoop-2.9.2/share/hadoop/common/lib/*:/usr/hadoop-2.9.2/share/hadoop/common/*:/usr/hadoop-2.9.2/share/hadoop/hdfs:/usr/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/usr/hadoop-2.9.2/share/hadoop/hdfs/*:/usr/hadoop-2.9.2/share/hadoop/yarn:/usr/hadoop-2.9.2/share/hadoop/yarn/lib/*:/usr/hadoop-2.9.2/share/hadoop/yarn/*:/usr/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/usr/hadoop-2.9.2/share/hadoop/mapreduce/*:/usr/hadoop-2.9.2/contrib/capacity-scheduler/*.jar

    [root@CentOS spark-2.4.5]# ./sbin/start-all.sh
    starting org.apache.spark.deploy.master.Master, logging to /usr/spark-
    2.4.5/logs/spark-root-org.apache.spark.deploy.master.Master-1-CentOS.out
    localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-
    2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-CentOS.out
    localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-
    2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-2-CentOS.out
    [root@CentOS spark-2.4.5]# jps
    7908 Worker
    7525 HistoryServer
    8165 Jps
    122374 SecondaryNameNode
    7751 Master
    122201 DataNode
    122058 NameNode
    7854 Worker

You can now visit http://CentOS:8080 (the Spark Standalone master web UI).

Testing the environment

spark-submit

    [root@CentOS spark-2.4.5]# ./bin/spark-submit \
    --master spark://CentOS:7077 \
    --deploy-mode client \
    --class org.apache.spark.examples.SparkPi \
    --total-executor-cores 6 \
    /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

Result of the Pi computation:

19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID
0) in 6609 ms on CentOS (executor 1) (1/2)
19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID
1) in 6403 ms on CentOS (executor 1) (2/2)
19/04/21 03:30:39 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have
all completed, from pool
19/04/21 03:30:39 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at
SparkPi.scala:38) finished in 29.116 s
19/04/21 03:30:40 INFO scheduler.DAGScheduler: Job 0 finished: reduce at
SparkPi.scala:38, took 30.317103 s
`Pi is roughly 3.141915709578548`
19/04/21 03:30:40 INFO server.AbstractConnector: Stopped Spark@41035930{HTTP/1.1,
[http/1.1]}{0.0.0.0:4040}
19/04/21 03:30:40 INFO ui.SparkUI: Stopped Spark web UI at http://CentOS:4040
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
Parameter notes:
--master: the cluster manager to connect to, here spark://CentOS:7077
--deploy-mode: deployment mode, client or cluster; determines whether the Driver runs remotely on the cluster
--class: the main class to run
--total-executor-cores: total number of cores (threads) to use for the computation
Spark shell
[root@CentOS spark-2.4.5]# ./bin/spark-shell --master spark://CentOS:7077 --total-executor-cores 6
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = spark://CentOS:7077, app id = app-
20200207140419-0003).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.5
/_/
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_231)
Type in expressions to have them evaluated.
Type :help for more information.
scala> sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2,true).saveAsTextFile("hdfs:///demo/results")

This word count assumes that text files have already been uploaded to hdfs:///demo/words; the sorted counts are written to hdfs:///demo/results.