Building a Spark Standalone High-Availability Cluster
This guide uses Spark 3.1.2 as its example.
Unless noted otherwise, run the commands below on all nodes.
I. System Resources and Component Plan

Node | Hostname | CPU/RAM | NIC | Disk | IP Address | OS | Role
---|---|---|---|---|---|---|---
Master1 | master1 | 2C/4G | ens33 | 128G | 192.168.0.11 | CentOS 7 | Master
Master2 | master2 | 2C/4G | ens33 | 128G | 192.168.0.12 | CentOS 7 | Master
Worker1 | worker1 | 2C/4G | ens33 | 128G | 192.168.0.21 | CentOS 7 | Worker, ZooKeeper
Worker2 | worker2 | 2C/4G | ens33 | 128G | 192.168.0.22 | CentOS 7 | Worker, ZooKeeper
Worker3 | worker3 | 2C/4G | ens33 | 128G | 192.168.0.23 | CentOS 7 | Worker, ZooKeeper
II. System Software Installation and Settings
1. Install basic tools:
yum -y install vim lrzsz bash-completion
2. Configure name resolution:
echo 192.168.0.11 master1 >> /etc/hosts
echo 192.168.0.12 master2 >> /etc/hosts
echo 192.168.0.21 worker1 >> /etc/hosts
echo 192.168.0.22 worker2 >> /etc/hosts
echo 192.168.0.23 worker3 >> /etc/hosts
3. Configure NTP time synchronization:
yum -y install chrony
systemctl start chronyd
systemctl enable chronyd
systemctl status chronyd
chronyc sources
4. Disable SELinux and the firewall:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
(setenforce 0 takes effect immediately but only switches SELinux to permissive mode; the sed edit disables it permanently from the next reboot.)
III. Building the Spark Standalone HA Cluster
1. Set up passwordless SSH
On Master1 and Master2, configure passwordless SSH to every node:
ssh-keygen -t rsa
for host in master1 master2 worker1 worker2 worker3; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done
2. Install the JDK
Download the JDK:
Reference: https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
Extract the JDK archive:
tar -xf /root/jdk-8u291-linux-x64.tar.gz -C /usr/local/
Set the environment variables for the current shell:
export JAVA_HOME=/usr/local/jdk1.8.0_291/
export PATH=$PATH:/usr/local/jdk1.8.0_291/bin/
Append the same variables to /etc/profile so they persist across logins:
export JAVA_HOME=/usr/local/jdk1.8.0_291/
PATH=$PATH:/usr/local/jdk1.8.0_291/bin/
Check the Java version:
java -version
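Since every step in this guide runs on all nodes, it can save typing to drive the installs from master1 once passwordless SSH is in place. A hedged dry-run sketch (the /tmp script path is an arbitrary choice): it only prints the copy/extract commands, so you can review the file before executing it with sh.

```shell
# Dry-run generator: print, do not execute, the JDK copy/extract commands
# for the remaining nodes; review the file, then run it with "sh".
for h in master2 worker1 worker2 worker3; do
  echo "scp /root/jdk-8u291-linux-x64.tar.gz $h:/root/"
  echo "ssh $h tar -xf /root/jdk-8u291-linux-x64.tar.gz -C /usr/local/"
done > /tmp/install-jdk.sh
```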
3. Install ZooKeeper
Download ZooKeeper:
Reference: https://downloads.apache.org/zookeeper/stable/
On the Worker (ZooKeeper) nodes, extract the ZooKeeper archive:
tar -xf /root/apache-zookeeper-3.6.3-bin.tar.gz -C /usr/local/
On the Worker (ZooKeeper) nodes, set the environment variable for the current shell:
export PATH=$PATH:/usr/local/apache-zookeeper-3.6.3-bin/bin/
On the Worker (ZooKeeper) nodes, append the variable to /etc/profile:
PATH=$PATH:/usr/local/apache-zookeeper-3.6.3-bin/bin/
On the Worker (ZooKeeper) nodes, create the ZooKeeper data directory:
mkdir /usr/local/apache-zookeeper-3.6.3-bin/data/
On the Worker (ZooKeeper) nodes, create the ZooKeeper configuration file from the shipped sample:
mv /usr/local/apache-zookeeper-3.6.3-bin/conf/zoo_sample.cfg /usr/local/apache-zookeeper-3.6.3-bin/conf/zoo.cfg
On the Worker (ZooKeeper) nodes, edit the ZooKeeper configuration file:
vim /usr/local/apache-zookeeper-3.6.3-bin/conf/zoo.cfg
Point dataDir at the directory created above:
dataDir=/usr/local/apache-zookeeper-3.6.3-bin/data/
Add the ZooKeeper quorum members:
server.1=worker1:2888:3888
server.2=worker2:2888:3888
server.3=worker3:2888:3888
ZooKeeper configuration parameters explained
The entries have the form server.A=B:C:D, where:
A is a number identifying the server within the quorum;
B is the server's hostname or IP address;
C is the port this server uses to exchange data with the quorum's Leader;
D is the port the servers use to communicate with each other during Leader election, which takes place when the current Leader fails and a new one must be chosen.
In cluster mode each server also needs a myid file in its dataDir containing just its A value. On startup, ZooKeeper reads this file and matches the value against the server.A entries in zoo.cfg to determine which server it is.
On Worker1 (ZooKeeper node), create the myid file and write its A value:
touch /usr/local/apache-zookeeper-3.6.3-bin/data/myid
echo 1 > /usr/local/apache-zookeeper-3.6.3-bin/data/myid
On Worker2 (ZooKeeper node), create the myid file and write its A value:
touch /usr/local/apache-zookeeper-3.6.3-bin/data/myid
echo 2 > /usr/local/apache-zookeeper-3.6.3-bin/data/myid
On Worker3 (ZooKeeper node), create the myid file and write its A value:
touch /usr/local/apache-zookeeper-3.6.3-bin/data/myid
echo 3 > /usr/local/apache-zookeeper-3.6.3-bin/data/myid
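The three per-node steps above can also be collapsed into one derivation: each node can look up its own A value from the server.N lines in zoo.cfg. A hedged sketch, run here against a throwaway copy; on a real worker you would use $(hostname) and the real zoo.cfg path:

```shell
# Derive this node's myid from its server.N entry in zoo.cfg.
zoo_cfg=$(mktemp)            # stand-in for the real zoo.cfg path
printf 'server.1=worker1:2888:3888\nserver.2=worker2:2888:3888\nserver.3=worker3:2888:3888\n' > "$zoo_cfg"
host=worker2                 # on a real node: host=$(hostname)
sed -n "s/^server\.\([0-9][0-9]*\)=$host:.*/\1/p" "$zoo_cfg" > /tmp/derived-myid
cat /tmp/derived-myid        # on worker2 this is 2; it goes into data/myid
```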
On the Worker (ZooKeeper) nodes, start ZooKeeper:
zkServer.sh start
On the Worker (ZooKeeper) nodes, check the ZooKeeper status:
zkServer.sh status
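Exactly one quorum member should report leader and the other two follower. A hedged convenience sketch to check all three from one shell (assumes passwordless SSH from a master; dry-run, printing the commands rather than executing them):

```shell
# Dry-run generator: print the status command for each quorum node.
for h in worker1 worker2 worker3; do
  echo "ssh $h /usr/local/apache-zookeeper-3.6.3-bin/bin/zkServer.sh status"
done > /tmp/zk-status.sh
```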
4. Install Spark
Download Spark:
Reference: http://spark.apache.org/downloads.html
Extract the Spark archive:
tar -zxf /root/spark-3.1.2-bin-hadoop3.2.tgz -C /usr/local/
Set the environment variable for the current shell:
export PATH=$PATH:/usr/local/spark-3.1.2-bin-hadoop3.2/bin/:/usr/local/spark-3.1.2-bin-hadoop3.2/sbin/
Append the variable to /etc/profile:
PATH=$PATH:/usr/local/spark-3.1.2-bin-hadoop3.2/bin/:/usr/local/spark-3.1.2-bin-hadoop3.2/sbin/
Afterwards, verify that /etc/profile on each node contains all the entries added so far (JDK and Spark on the Masters; JDK, ZooKeeper, and Spark on the Workers).
5. Configure the Spark Standalone HA cluster
Create the spark-env.sh file:
cat > /usr/local/spark-3.1.2-bin-hadoop3.2/conf/spark-env.sh << EOF
export JAVA_HOME=/usr/local/jdk1.8.0_291/
SPARK_MASTER_PORT=7077
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=worker1:2181,worker2:2181,worker3:2181 -Dspark.deploy.zookeeper.dir=/spark"
EOF
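For context on the -D options above: spark.deploy.recoveryMode=ZOOKEEPER makes the masters persist recovery state in the quorum listed by spark.deploy.zookeeper.url, under the znode path given by spark.deploy.zookeeper.dir. For contrast only, Spark also ships a simpler FILESYSTEM recovery mode (the directory path below is illustrative); it restores a single master's state from local disk but provides no standby, which is why this guide uses ZooKeeper:

```shell
# Not used in this HA setup -- single-master recovery via local disk:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/var/spark/recovery"
```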
Create the workers file listing the Worker nodes:
cat > /usr/local/spark-3.1.2-bin-hadoop3.2/conf/workers << EOF
worker1
worker2
worker3
EOF
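Both files must be identical on every node, since all masters and workers read the same conf directory. A hedged dry-run sketch to push them from master1 (assumes Spark is unpacked at the same path everywhere and passwordless SSH is set up; review the generated file before running it with sh):

```shell
# Dry-run generator: print the scp commands that sync the two conf files.
conf=/usr/local/spark-3.1.2-bin-hadoop3.2/conf
for h in master2 worker1 worker2 worker3; do
  echo "scp $conf/spark-env.sh $conf/workers $h:$conf/"
done > /tmp/sync-spark-conf.sh
```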
6. Start the Spark Standalone HA cluster
On Master1 (the primary Master), start the Spark cluster:
start-all.sh
On Master2 (the standby Master), start only a Spark Master process:
start-master.sh
Check the Spark processes on every node:
jps
(The Master nodes should show a Master process; the Worker nodes should show a Worker process plus QuorumPeerMain for ZooKeeper.)
7. Access the Spark Standalone HA cluster web UIs
Primary Master web UI:
http://192.168.0.11:8080
Standby Master web UI:
http://192.168.0.12:8080
Worker web UI (here Worker1):
http://192.168.0.21:8081
8. Spark Standalone cluster failover demonstration
Shut down the Master process on Master1.
Check the cluster status in Master2's web UI: its state changes from STANDBY to ALIVE, and the workers re-register with it.
Bring Master1 back and start its Spark Master process again (start-master.sh).
Check the cluster status in both Masters' web UIs:
Master1 rejoins as the standby; the roles do not switch back automatically.
9. Shut down the Spark Standalone HA cluster
On Master1 (now the standby Master), stop its Spark Master process:
stop-master.sh
On Master2 (now the primary Master), stop the Spark cluster:
stop-all.sh
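The steps above leave ZooKeeper running. If the whole stack should come down, stop the quorum afterwards; a hedged dry-run sketch from a master (assumes passwordless SSH; review the generated file, then run it with sh):

```shell
# Dry-run generator: print the ZooKeeper stop command for each quorum node.
for h in worker1 worker2 worker3; do
  echo "ssh $h /usr/local/apache-zookeeper-3.6.3-bin/bin/zkServer.sh stop"
done > /tmp/zk-stop.sh
```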