Installing a Spark Cluster on Linux


1. Install Scala

1.1 Download and extract the package
cd /usr/local
wget https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.tgz
tar -zxvf scala-2.12.2.tgz
mv scala-2.12.2 scala
1.2 Configure environment variables
vi /etc/profile
# Add the following lines
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
# Apply the changes
source /etc/profile
# Verify (success if a version number is printed)
scala -version
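For reference, a successful check prints something like the following (the exact copyright line varies by Scala build):

$ scala -version
Scala code runner version 2.12.2 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.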

2. Download and extract the Spark package

cd /usr/local
wget http://mirror.bit.edu.cn/apache/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
tar -zxvf spark-2.4.4-bin-hadoop2.7.tgz
mv spark-2.4.4-bin-hadoop2.7 spark
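Note: Apache mirrors such as mirror.bit.edu.cn usually carry only current releases, so the link above may 404 by the time you read this. Every historical release, including 2.4.4, stays available on the Apache archive:

wget https://archive.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz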

3. Configure environment variables

sudo vi /etc/profile
# Add the following lines
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
# Apply the changes
source /etc/profile
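A quick sanity check that the PATH change took effect (this assumes a JDK is already installed and java is on the PATH, since Spark needs it to run):

spark-submit --version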

4. Configure the Spark system files

cd /usr/local/spark/conf
# Rename the template
mv spark-env.sh.template spark-env.sh
# Edit spark-env.sh
vi spark-env.sh
# Add the following configuration
export JAVA_HOME=/usr/local/jdk1.8.0_231
export HADOOP_HOME=/usr/local/hadoop-2.7.1
export HADOOP_CONF_DIR=/usr/local/hadoop-2.7.1/etc/hadoop
export SPARK_MASTER_IP=xt1          # hostname of the master node
export SPARK_WORKER_MEMORY=1g       # memory available to each worker
export SPARK_WORKER_CORES=1         # CPU cores available to each worker
export SPARK_WORKER_INSTANCES=1     # worker processes per node
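Side note: since Spark 2.0, SPARK_MASTER_IP has been deprecated in favor of SPARK_MASTER_HOST, so on 2.4.4 the more future-proof equivalent would be:

export SPARK_MASTER_HOST=xt1        # replaces SPARK_MASTER_IP
export SPARK_MASTER_PORT=7077       # 7077 is the default master port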
# Rename the template
mv slaves.template slaves
# Edit slaves (add the IPs or hostnames of the worker nodes)
xt2
xt3
# Copy the profile, Scala, and Spark directories to the other two nodes
scp /etc/profile hadoop@xt2:/etc
scp /etc/profile hadoop@xt3:/etc
scp -r scala hadoop@xt2:/usr/local
scp -r scala hadoop@xt3:/usr/local
scp -r spark hadoop@xt2:/usr/local
scp -r spark hadoop@xt3:/usr/local
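Both the scp commands above and start-all.sh in the next step assume the master can log in to the workers over SSH without a password. If that is not set up yet, a minimal sketch (assuming the hadoop user from the commands above):

# run on the master as the hadoop user (assumed from the scp commands above)
ssh-keygen -t rsa
ssh-copy-id hadoop@xt2
ssh-copy-id hadoop@xt3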

5. Start the cluster

cd /usr/local/spark/sbin
./start-all.sh
# Test: the cluster is up if jps shows a Master process on the master node and a Worker process on each worker node
jps
# Visit the web UI (replace the IP with your master node's address)
http://192.168.123.100:8080/
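Beyond checking jps, you can submit the bundled SparkPi example as a smoke test; a minimal sketch, assuming xt1 is the master hostname (as in the logs in section 6):

# xt1:7077 is assumed from the logs below; the jar name matches the stock 2.4.4 / Hadoop 2.7 build
/usr/local/spark/bin/spark-submit \
  --master spark://xt1:7077 \
  --class org.apache.spark.examples.SparkPi \
  /usr/local/spark/examples/jars/spark-examples_2.11-2.4.4.jar 100

If the driver prints a line like "Pi is roughly 3.14...", the cluster is executing jobs.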

6. Common errors

  1. If you get a "Permission denied" error
     This just means the current user lacks write permission on the target directory, so grant it write permission:
sudo chmod 777 /usr/local
  2. If start-all.sh reports the following:

chown: changing ownership of ‘/usr/local/spark-2.4.4/logs’: Operation not permitted
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-2.4.4/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-xt1.out
/usr/local/spark-2.4.4/sbin/spark-daemon.sh: line 128: /usr/local/spark-2.4.4/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-xt1.out: Permission denied
failed to launch: nice -n 0 /usr/local/spark-2.4.4/bin/spark-class org.apache.spark.deploy.master.Master --host xt1 --port 7077 --webui-port 8080
tail: cannot open ‘/usr/local/spark-2.4.4/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-xt1.out’ for reading: No such file or directory
full log in /usr/local/spark-2.4.4/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-xt1.out
xt3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-2.4.4/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-xt3.out
xt2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-2.4.4/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-xt2.out

Running this single command solved it for me:

sudo chown -R $(whoami) /usr/local/* 
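If you would rather not take ownership of everything under /usr/local, a narrower fix is to chown only the directories Spark actually writes to (assuming the hadoop user and group from the earlier steps):

sudo chown -R hadoop:hadoop /usr/local/spark /usr/local/scala  # hadoop user/group assumed from the scp commands above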

Feel free to leave a comment if you run into any problems!
