- Prepare the environment
- Install the JDK. There is no need to install Scala separately, since Spark ships with its own Scala. JDK 8 is the best match; in my experience, versions above 8 can cause problems.
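A quick sanity check before going further (a minimal sketch; the path /root/apps/jdk1.8.0_65 matches the JAVA_HOME used below):
export JAVA_HOME=/root/apps/jdk1.8.0_65
export PATH=$JAVA_HOME/bin:$PATH
java -version   # should report version 1.8.x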
- Upload the Spark installation package
- Extract Spark and edit the configuration files (two files; the first one gets three properties added)
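A sketch of the upload and unpack steps (the tarball name matches the spark-2.1.1-bin-hadoop2.7 build used throughout this doc; the local source path is hypothetical):
# from your local machine, upload the tarball to the master node
scp spark-2.1.1-bin-hadoop2.7.tgz root@hadoop-01:/root/apps/
# on hadoop-01: unpack, then create both config files from their templates
tar -zxvf /root/apps/spark-2.1.1-bin-hadoop2.7.tgz -C /root/apps/
cd /root/apps/spark-2.1.1-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves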
In the spark-env.sh file, add the following three properties to specify the JDK, the Spark master, and the Spark cluster's RPC port:
export JAVA_HOME=/root/apps/jdk1.8.0_65
export SPARK_MASTER_HOST=hadoop-01
export SPARK_MASTER_PORT=7077
To make the master highly available, we need a ZooKeeper cluster. And since we are using ZooKeeper, our workers no longer need to find the master through the config file; they find it through ZooKeeper directly. So I commented out the export SPARK_MASTER_HOST variable and its port. The config file should be changed to the following:
#export SPARK_MASTER_HOST=hadoop-01
#export SPARK_MASTER_PORT=7077
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop-01:2181,hadoop-02:2181,hadoop-03:2181 -Dspark.deploy.zookeeper.dir=/spark"
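To confirm that recovery state is actually landing in ZooKeeper, you can inspect the /spark znode once the cluster is up (a hedged check; the child znode names below are what Spark's ZooKeeper recovery mode creates in my experience):
zkCli.sh -server hadoop-01:2181
ls /spark
# expect children such as leader_election and master_status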
In the spark-env.sh file we can also set each worker's resources:
export SPARK_WORKER_CORES=4
export SPARK_WORKER_MEMORY=2g
In the slaves file, list the workers:
# A Spark Worker will be started on each of the machines listed below.
hadoop-03
hadoop-04
hadoop-05
- Copy the configured Spark installation to the other machines; a simple shell loop will do:
for i in {3..5};
do
scp -r /root/apps/spark-2.1.1-bin-hadoop2.7/ root@hadoop-0$i:/root/apps/;
done
- Start Spark (sbin/start-all.sh). Workers learn where the master is by reading the spark-env.sh file (in a highly available cluster, they learn it through ZooKeeper).
- Access the Spark admin page through a web browser (the master machine's address, port 8080). Note: this 8080 is not Tomcat or anything of the sort; it is Spark's own embedded web UI (served by Jetty), while cluster RPC runs over Netty on 7077.
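A quick way to verify the cluster came up (a sketch; jps ships with the JDK, and the URL assumes hadoop-01 hosts the active master):
sbin/start-all.sh
jps   # on hadoop-01 expect a Master process; on hadoop-03..05 expect Worker
# then open http://hadoop-01:8080 in a browser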
- Start the cluster. Note that sbin/start-all.sh is not that smart: we still need to use sbin/start-master.sh to manually start the second, standby master.
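A sketch of bringing up the standby (assuming hadoop-02 is the second master and its spark-env.sh also carries the SPARK_DAEMON_JAVA_OPTS shown above):
ssh root@hadoop-02
cd /root/apps/spark-2.1.1-bin-hadoop2.7
sbin/start-master.sh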
- Submit and run the first Spark example program, SparkPi (which estimates pi):
The first argument specifies the master, the second the main class, and the third the jar. The fourth and fifth arguments are optional and tailor the resources (cores, RAM) for this job. The last argument is passed to the example class itself: the number of iterations (the command below passes 10; more iterations give a more accurate estimate).
--executor-memory: memory used by each executor
--total-executor-cores: total cores used by the whole app
bin/spark-submit --master spark://hadoop-01:7077 --class org.apache.spark.examples.SparkPi --executor-memory 2048mb --total-executor-cores 12 examples/jars/spark-examples_2.11-2.1.1.jar 10
When submitting a job you can specify multiple master addresses, so that job submission itself stays highly available:
bin/spark-submit --master spark://hadoop-01:7077,hadoop-02:7077 --class org.apache.spark.examples.SparkPi --executor-memory 2048mb --total-executor-cores 12 examples/jars/spark-examples_2.11-2.1.1.jar 10
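The same multi-master URL works for an interactive session (a usage sketch with the same hosts):
bin/spark-shell --master spark://hadoop-01:7077,hadoop-02:7077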
- What processes appear when you submit a Spark program to the cluster?
SparkSubmit (the Driver), which submits the job
Executor, which performs the actual computation; analogous to a YARN container
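While a job is running, jps shows the picture (process names as they appear in a standalone deployment; CoarseGrainedExecutorBackend is the JVM that hosts the executors):
# on the machine where you ran spark-submit
jps   # SparkSubmit, plus Master if co-located
# on a worker node
jps   # Worker, CoarseGrainedExecutorBackend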