Spark Standalone Cluster Deployment

Server information

IP address   Service          User       Host
12.0.2.29    spark (master)   mppadmin   qfs010
12.0.2.30    spark (slave)    mppadmin   qfs011
12.0.2.31    spark (slave)    mppadmin   qfs012
12.0.2.32    spark (slave)    mppadmin   qfs013
12.0.2.33    spark (slave)    mppadmin   qfs014
12.0.2.34    spark (slave)    mppadmin   qfs015
12.0.2.35    spark (slave)    mppadmin   qfs016
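These hostnames must resolve from every node. Assuming resolution is handled through /etc/hosts rather than DNS (the original post does not say which), each node would carry entries like:

# /etc/hosts entries matching the table above
12.0.2.29 qfs010
12.0.2.30 qfs011
12.0.2.31 qfs012
12.0.2.32 qfs013
12.0.2.33 qfs014
12.0.2.34 qfs015
12.0.2.35 qfs016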

1 Configure the JDK environment

Download the jdk-8u271-linux-x64.tar.gz package and extract it. Perform the following steps on every node:

tar -zxvf jdk-8u271-linux-x64.tar.gz -C /usr/local/src/

vi /etc/profile
# JDK_HOME author:BIGDATA_N1
export JAVA_HOME=/usr/local/src/jdk1.8.0_271
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

source /etc/profile
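To confirm the JDK is now visible on each node:

java -version
# should report: java version "1.8.0_271"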

2 Master node configuration

tar -zxvf spark-2.4.7-bin-hadoop2.7.tgz -C /home/mppadmin/
# Configure environment variables
vi /etc/profile
# SPARK_HOME author:BIGDATA_N1
export SPARK_HOME=/home/mppadmin/spark-2.4.7-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source /etc/profile
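A quick sanity check that the Spark binaries are now on the PATH:

spark-submit --version
# the version banner should report 2.4.7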
# Configure the cluster start-up environment (worker list)
[mppadmin@qfs010 conf]$ cp slaves.template slaves
[mppadmin@qfs010 conf]$ vi slaves
qfs011
qfs012
qfs013
qfs014
qfs015
qfs016

[mppadmin@qfs010 sbin]$ vi spark-config.sh
export JAVA_HOME=/usr/local/src/jdk1.8.0_271
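Only JAVA_HOME is set here. If per-worker resources need to be capped, that can be done in conf/spark-env.sh; the resource values below are illustrative assumptions, not part of the original deployment:

[mppadmin@qfs010 conf]$ cp spark-env.sh.template spark-env.sh
[mppadmin@qfs010 conf]$ vi spark-env.sh
# bind the master to its hostname
export SPARK_MASTER_HOST=qfs010
# assumed per-worker caps; adjust to the actual hardware
export SPARK_WORKER_CORES=6
export SPARK_WORKER_MEMORY=20g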

3 Slave node configuration (hosts qfs011-qfs016)

First, configure the JDK as described in section 1.
Then copy the spark-2.4.7-bin-hadoop2.7 installation directory from qfs010 (the master) to each slave node; no further configuration is needed there. A copy loop is sketched below.
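Both the copy and the later start-all.sh assume passwordless SSH from the master to every slave. A minimal sketch, run as mppadmin on qfs010:

# generate a key once, then push it and the Spark directory to every slave
ssh-keygen -t rsa
for host in qfs011 qfs012 qfs013 qfs014 qfs015 qfs016; do
  ssh-copy-id mppadmin@${host}
  scp -r /home/mppadmin/spark-2.4.7-bin-hadoop2.7 mppadmin@${host}:/home/mppadmin/
done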

4 Start the cluster from the master

[mppadmin@qfs010 spark-2.4.7-bin-hadoop2.7]$ start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.master.Master-1-qfs010.out
qfs011: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs011.out
qfs014: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs014.out
qfs015: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs015.out
qfs012: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs012.out
qfs013: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs013.out
qfs016: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs016.out
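To verify start-up, check the Java daemons on each node and the master web UI (8080 is the standalone master's default UI port):

[mppadmin@qfs010 ~]$ jps
# qfs010 should list a Master process; each of qfs011-qfs016 should list a Worker
# the web UI at http://qfs010:8080 should show 6 registered workers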

5 Test the environment with spark-submit

[mppadmin@qfs010 jars]$ spark-submit --master spark://qfs010:7077 --executor-memory 20G --executor-cores 6 /home/mppadmin/spark-2.4.7-bin-hadoop2.7/examples/src/main/python/pi.py
21/02/04 14:49:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/02/04 14:49:29 WARN TaskSetManager: Stage 0 contains a task of very large size (371 KB). The maximum recommended task size is 100 KB.
Pi is roughly 3.140360
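The bundled Scala example gives the same check; the jar name below follows the Scala 2.11 naming convention of the 2.4.7 release and should be confirmed against $SPARK_HOME/examples/jars:

spark-submit --class org.apache.spark.examples.SparkPi \
  --master spark://qfs010:7077 \
  /home/mppadmin/spark-2.4.7-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.4.7.jar 100
# should end with a line like: Pi is roughly 3.14...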