Installing a Spark 2.3.1 cluster (with Hadoop 2.7.6 and Scala 2.11.6)

Download the Spark package: spark-2.3.1-bin-hadoop2.7.tgz

http://spark.apache.org/downloads.html

Download the Scala package: scala-2.11.6.tgz

https://www.scala-lang.org/download/all.html
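Both packages can also be fetched directly from the command line; a minimal sketch, assuming the Apache and Lightbend archive URLs below still host these versions:

wget https://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
wget https://downloads.lightbend.com/scala/2.11.6/scala-2.11.6.tgz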

Cluster layout

Node     IP               ZooKeeper   Master           Worker
master   192.168.10.200   ZooKeeper   primary Master   Worker
slave1   192.168.10.201   ZooKeeper   standby Master   Worker
slave2   192.168.10.202   ZooKeeper   -                Worker
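The hostnames in this table must resolve to the right addresses on every node. A minimal /etc/hosts sketch, assuming the IP assignment above (adjust to your own network):

192.168.10.200   master
192.168.10.201   slave1
192.168.10.202   slave2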

Install Scala

tar zxvf scala-2.11.6.tgz
mv scala-2.11.6 scala

Install Spark

tar zxvf spark-2.3.1-bin-hadoop2.7.tgz
mv spark-2.3.1-bin-hadoop2.7 spark
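Putting the two steps together, a sketch of unpacking both tarballs into the install root used throughout this guide (/home/hadoop3/app is taken from the paths that appear below; the extracted directory names are the ones these tarballs produce):

tar zxvf scala-2.11.6.tgz -C /home/hadoop3/app
tar zxvf spark-2.3.1-bin-hadoop2.7.tgz -C /home/hadoop3/app
cd /home/hadoop3/app
mv scala-2.11.6 scala
mv spark-2.3.1-bin-hadoop2.7 spark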

Go to the spark/conf/ directory.

Copy spark-env.sh.template and rename it spark-env.sh:

cp spark-env.sh.template spark-env.sh
vi spark-env.sh

#=====================================
# Hostname (or IP) of the default master
export SPARK_MASTER_HOST=master
#export SPARK_MASTER_IP=slave1
# Port the master listens on for job submission (default 7077)
export SPARK_MASTER_PORT=7077
# Port for the master's web UI
export SPARK_MASTER_WEBUI_PORT=18080
# Memory each worker node can allocate to executors
export SPARK_WORKER_MEMORY=1g
# Total number of cores Spark applications may use on this machine (default: all available cores)
export SPARK_WORKER_CORES=1
# Number of worker instances per node (optional)
export SPARK_WORKER_INSTANCES=1
# Directory containing the Hadoop (client) configuration files; needed when running on YARN
export HADOOP_CONF_DIR=/home/hadoop3/app/hadoop/etc/hadoop
# Keep cluster state (including recovery) in ZooKeeper
export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=master:2181,slave1:2181,slave2:2181
-Dspark.deploy.zookeeper.dir=/spark"
#=====================================

On slave1, edit conf/spark-env.sh and change the master setting to SPARK_MASTER_IP=slave1:

#=====================================
# Hostname (or IP) of the default master
#export SPARK_MASTER_HOST=master
export SPARK_MASTER_IP=slave1
# Port the master listens on for job submission (default 7077)
export SPARK_MASTER_PORT=7077
# Port for the master's web UI
export SPARK_MASTER_WEBUI_PORT=18080
# Memory each worker node can allocate to executors
export SPARK_WORKER_MEMORY=1g
# Total number of cores Spark applications may use on this machine (default: all available cores)
export SPARK_WORKER_CORES=1
# Number of worker instances per node (optional)
export SPARK_WORKER_INSTANCES=1
# Directory containing the Hadoop (client) configuration files; needed when running on YARN
export HADOOP_CONF_DIR=/home/hadoop3/app/hadoop/etc/hadoop
# Keep cluster state (including recovery) in ZooKeeper
export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=master:2181,slave1:2181,slave2:2181
-Dspark.deploy.zookeeper.dir=/spark"
#=====================================

Copy slaves.template to slaves and edit it:

cp slaves.template slaves
vi slaves

The slaves file should contain:

# A Spark Worker will be started on each of the machines listed below.
master
slave1
slave2
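The configured scala and spark directories must exist on every node. A minimal distribution sketch, assuming passwordless SSH for the hadoop3 user and the same install path on all nodes (after copying, switch slave1's spark-env.sh to SPARK_MASTER_IP=slave1 as shown above):

scp -r /home/hadoop3/app/scala /home/hadoop3/app/spark slave1:/home/hadoop3/app/
scp -r /home/hadoop3/app/scala /home/hadoop3/app/spark slave2:/home/hadoop3/app/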

Configure environment variables

This must be done on every node.

vi ~/.bashrc

export SCALA_HOME=/home/hadoop3/app/scala
export PATH=${SCALA_HOME}/bin:$PATH

export SPARK_HOME=/home/hadoop3/app/spark
export PATH=${SPARK_HOME}/bin:${SPARK_HOME}/sbin:$PATH

The resulting ~/.bashrc looks like this:

[hadoop3@master conf]$ cat ~/.bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=



# User specific aliases and functions

# add to path for /home/hadoop3/app/jdk   and   /home/hadoop3/tools
JAVA_HOME=/home/hadoop3/app/jdk
HADOOP_HOME=/home/hadoop3/app/hadoop
ZOOKEEPER_HOME=/home/hadoop3/app/zookeeper
HBASE_HOME=/home/hadoop3/app/hbase
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$HBASE_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:/home/hadoop3/tools:$PATH
export JAVA_HOME CLASSPATH  HADOOP_HOME ZOOKEEPER_HOME HBASE_HOME  PATH

export HBASE_CLASSPATH=/home/hadoop3/app/hbase/conf
export HBASE_MANAGES_ZK=false   # do not use HBase's bundled ZooKeeper


export SCALA_HOME=/home/hadoop3/app/scala
export PATH=${SCALA_HOME}/bin:$PATH

export SPARK_HOME=/home/hadoop3/app/spark
export PATH=${SPARK_HOME}/bin:${SPARK_HOME}/sbin:$PATH
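After editing ~/.bashrc on each node, reload it and make sure both tools are found on the PATH; a quick sanity check:

source ~/.bashrc
scala -version          # should report Scala 2.11.6
spark-submit --version  # should report Spark 2.3.1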

Start the cluster

Start the ZooKeeper ensemble

Run on every ZooKeeper node (or use the helper script to run it on all nodes at once):
zkServer.sh start
runRemoteCmd.sh "zkServer.sh start" all
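Once started, each node should report its quorum role; a quick check using the same helper script (one node should show Mode: leader, the others Mode: follower):

runRemoteCmd.sh "zkServer.sh status" all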

Start the Hadoop cluster

start-dfs.sh
start-yarn.sh
yarn-daemon.sh start resourcemanager
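start-dfs.sh and start-yarn.sh are run on master; the third command is presumably run on slave1 to bring up the standby ResourceManager, which matches the jps output below. A quick health check with standard Hadoop commands before starting Spark:

hdfs dfsadmin -report   # all three DataNodes should be listed as live
yarn node -list         # all three NodeManagers should be RUNNING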

Start the Spark cluster

Start Spark:
Start the master:  sbin/start-master.sh
Start the workers: sbin/start-slaves.sh

Or, in one step: sbin/start-all.sh

The standby master must be started manually on slave1:
sbin/start-master.sh   (run on slave1)
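To confirm the cluster works end to end, the SparkPi example bundled with the distribution can be submitted; a sketch that lists both masters in the --master URL so the client finds whichever one is currently ALIVE:

cd /home/hadoop3/app/spark
bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master:7077,slave1:7077 \
  examples/jars/spark-examples_2.11-2.3.1.jar 100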

Check the processes

[hadoop3@master sbin]$ runRemoteCmd.sh "jps" all
*******************master***************************
8800 Master
8593 Worker
7779 ResourceManager
8916 Jps
6311 NameNode
2360 QuorumPeerMain
7881 NodeManager
6589 JournalNode
6717 DFSZKFailoverController
6415 DataNode
*******************slave1***************************
8496 Jps
4289 DataNode
2339 QuorumPeerMain
7110 ResourceManager
4217 NameNode
4366 JournalNode
6926 NodeManager
7790 Worker
4431 DFSZKFailoverController
8047 Master
*******************slave2***************************
2673 QuorumPeerMain
4482 Jps
3443 DataNode
3940 NodeManager
4310 Worker
[hadoop3@master sbin]$ 

Verify cluster HA

Check the Master status in the web UI

slave1 is ALIVE and master is STANDBY; check the web UIs at:
http://192.168.10.200:18080/

http://192.168.10.201:18080/

Spark Master at spark://master:7077
URL: spark://master:7077
REST URL: spark://master:6066 (cluster mode)
Alive Workers: 0
Cores in use: 0 Total, 0 Used
Memory in use: 0.0 B Total, 0.0 B Used
Applications: 0 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: STANDBY
Spark Master at spark://slave1:7077
URL: spark://slave1:7077
REST URL: spark://slave1:6066 (cluster mode)
Alive Workers: 3
Cores in use: 3 Total, 0 Used
Memory in use: 3.0 GB Total, 0.0 B Used
Applications: 0 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE

Worker web UIs:

http://192.168.10.200:8081/

http://192.168.10.201:8081/

http://192.168.10.202:8081/

Spark Worker at 192.168.10.200:43208
ID: worker-20180810202818-192.168.10.200-43208
Master URL: spark://slave1:7077
Cores: 1 (0 Used)
Memory: 1024.0 MB (0.0 B Used)
Back to Master
Spark Worker at 192.168.10.201:35779
ID: worker-20180810202817-192.168.10.201-35779
Master URL: spark://slave1:7077
Cores: 1 (0 Used)
Memory: 1024.0 MB (0.0 B Used)
Back to Master
Spark Worker at 192.168.10.202:39623
ID: worker-20180810202817-192.168.10.202-39623
Master URL: spark://slave1:7077
Cores: 1 (0 Used)
Memory: 1024.0 MB (0.0 B Used)
Back to Master

Verify HA failover

Manually kill the Master process on slave1. slave1:18080 becomes unreachable, and master:18080 now reports ALIVE: the Master role has switched over automatically.

New applications cannot be submitted while the master failover is in progress.
Applications already running on the cluster are not affected by the failover, because Spark standalone uses coarse-grained resource scheduling: executors and their resources are allocated up front, so running jobs do not depend on the master during execution.
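A sketch of the failover test itself, assuming the ALIVE Master is currently on slave1 (the PID is a placeholder):

# on slave1: find and kill the active Master
jps | grep Master
kill -9 <Master-PID>

# wait until master:18080 switches from STANDBY to ALIVE,
# then bring slave1 back as the new standby
sbin/start-master.sh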
--- the end ---
