1. Upload spark-3.2.1-bin-hadoop2.7.tgz to the /opt directory, then extract it to /usr/local:
tar -zxf /opt/spark-3.2.1-bin-hadoop2.7.tgz -C /usr/local/
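To confirm the extraction, list the new directory; it should contain bin, conf, sbin and the rest of the distribution:
ls /usr/local/spark-3.2.1-bin-hadoop2.7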
2. Configure the Spark environment variables on every node:
vi /etc/profile
Append at the end of the file:
export SPARK_HOME=/usr/local/spark-3.2.1-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
Run source /etc/profile to make the changes take effect.
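To verify on a node, the following should print the install path and the Spark 3.2.1 version banner (assuming a JDK is already available from the earlier Hadoop installation):
echo $SPARK_HOME
spark-submit --version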
The following steps are performed on the master node.
3. Go to /usr/local/spark-3.2.1-bin-hadoop2.7/conf
Copy workers.template to workers:
cp workers.template workers
vi workers
Replace the contents of workers with:
slave1
slave2
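The file lists one worker hostname per line, and each name must resolve on every node, e.g. via /etc/hosts entries along these lines (the IP addresses here are placeholders for your own):
192.168.10.10 master
192.168.10.11 slave1
192.168.10.12 slave2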
4. Modify spark-defaults.conf:
cp spark-defaults.conf.template spark-defaults.conf
vi spark-defaults.conf
Add:
spark.master spark://master:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://master:8020/spark-logs
spark.history.fs.logDirectory hdfs://master:8020/spark-logs
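The last two properties point at the same HDFS path: spark.eventLog.dir is where applications write their event logs and spark.history.fs.logDirectory is where the history server reads them (8020 should match the NameNode RPC port set in core-site.xml). Note that the history server is not started by start-all.sh; once the cluster is up it can be launched separately and reached on its default port 18080:
$SPARK_HOME/sbin/start-history-server.sh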
5. Modify spark-env.sh:
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
The Spark master web UI defaults to port 8080, which conflicts with ZooKeeper's AdminServer, so it is changed to 8085 here.
Add (SPARK_MASTER_IP is the legacy pre-2.0 name for the bind address; Spark 3.x documents SPARK_MASTER_HOST, used below):
JAVA_HOME=/usr/java/jdk1.8.0_281-amd64
HADOOP_CONF_DIR=/usr/local/hadoop-3.1.4/etc/hadoop
SPARK_MASTER_HOST=master
SPARK_MASTER_WEBUI_PORT=8085
SPARK_MASTER_PORT=7077
SPARK_WORKER_MEMORY=512m
SPARK_WORKER_CORES=1
SPARK_EXECUTOR_MEMORY=512m
SPARK_EXECUTOR_CORES=1
SPARK_WORKER_INSTANCES=1
6. Create the event log directory in HDFS:
hdfs dfs -mkdir /spark-logs
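Listing the HDFS root should now show the new directory:
hdfs dfs -ls /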
7. Distribute the Spark installation to the other nodes:
scp -qr /usr/local/spark-3.2.1-bin-hadoop2.7/ slave1:/usr/local/
scp -qr /usr/local/spark-3.2.1-bin-hadoop2.7/ slave2:/usr/local/
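To spot-check that the copies are complete, listing a key file over ssh should succeed on both workers:
ssh slave1 ls /usr/local/spark-3.2.1-bin-hadoop2.7/bin/spark-submit
ssh slave2 ls /usr/local/spark-3.2.1-bin-hadoop2.7/bin/spark-submit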
8. Start Spark.
Go to /usr/local/spark-3.2.1-bin-hadoop2.7/sbin and run (the ./ prefix ensures Spark's script is used rather than Hadoop's identically named start-all.sh):
./start-all.sh
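If everything started, jps on this node should show a Master process, and jps on slave1 and slave2 should each show a Worker process:
jps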
9. View the master web UI in a browser (slave1 and slave2 should be listed as alive workers):
http://master:8085
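With spark.master already set in spark-defaults.conf, the bundled SparkPi example makes a quick end-to-end smoke test (the jar name below assumes the default Scala 2.12 build of Spark 3.2.1):
spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.2.1.jar 100
If the history server from step 4 is running, the finished application should also appear at http://master:18080.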
10. Stop the Spark cluster (from the same sbin directory):
./stop-all.sh
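If the history server from step 4 was started, stop it separately:
./stop-history-server.sh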