Spark standalone cluster: Worker processes do not come up after setup

Failed to launch: nice -n 0 /home/hadoop/soft/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077

It had been a while since I last set up a Spark cluster. Today, after setting one up in standalone mode, I found that when I started the Spark processes with

[hadoop@master sbin]$ ./start-all.sh

the following error suddenly appeared:

starting org.apache.spark.deploy.master.Master, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
slave3: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.out
slave3: failed to launch: nice -n 0 /home/hadoop/soft/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077
slave3: full log in /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.out
slave2: failed to launch: nice -n 0 /home/hadoop/soft/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077
slave2: full log in /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.out
slave1: failed to launch: nice -n 0 /home/hadoop/soft/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077
slave1: full log in /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.out
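
The output also points at the full Worker log on each node; before changing anything it can help to pull that up, for example from the master (the log path is taken verbatim from the message above):

ssh slave3 'tail -n 50 /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.out'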

Analyzing the printed log output, the key line is:
failed to launch: nice -n 0 /home/hadoop/soft/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077
It says the Worker process could not be launched, so I first checked the Spark worker list in

$SPARK_HOME/conf/slaves:
slave1    # configured worker nodes
slave2
slave3
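
A quick way to double-check the same file and the reachability of every worker from the master (assuming passwordless SSH is already set up, as start-all.sh requires):

cat $SPARK_HOME/conf/slaves
for node in slave1 slave2 slave3; do ssh $node hostname; done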

The file was fine, and the same check on the other nodes looked good too, so the problem had to be in the environment variables.

[hadoop@slave1 ~]$ vim ~/.bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific aliases and functions

export JAVA_HOME=/home/hadoop/soft/jdk

export HADOOP_HOME=/home/hadoop/soft/hadoop

export ZOOKEEPER_HOME=/home/hadoop/soft/zoo

export HBASE_HOME=/home/hadoop/soft/hbase

export HIVE_HOME=/home/hadoop/soft/hive

export SPARK_HOME=/home/hadoop/soft/spark

export FLUME_HOME=/home/hadoop/soft/flume

export KAFKA_HOME=/home/hadoop/soft/kafka

PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$PATH:$HBASE_HOME/bin:$HIVE_HOME/bin:$SPARK_HOME/bin:$FLUME_HOME/bin:$KAFKA_HOME/bin:$HOME/bin
export PATH
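
Since start-all.sh launches the Workers over SSH from the master, it can also be worth confirming that these variables are actually visible in that non-interactive context on each worker. A minimal check from the master (node names as above; if the values come back empty there even though ~/.bashrc is correct, setting JAVA_HOME in conf/spark-env.sh is the usual alternative):

for node in slave1 slave2 slave3; do
    echo "== $node =="
    ssh $node 'echo "JAVA_HOME=$JAVA_HOME  SPARK_HOME=$SPARK_HOME"'
done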

I went over the environment variables once more from top to bottom; nothing looked wrong, so I re-sourced the file:

source ~/.bashrc

Once that was done, go to

$SPARK_HOME/sbin/

and run

[hadoop@master sbin]$ ./stop-all.sh
slave2: stopping org.apache.spark.deploy.worker.Worker
slave1: stopping org.apache.spark.deploy.worker.Worker
slave3: stopping org.apache.spark.deploy.worker.Worker
stopping org.apache.spark.deploy.master.Master

After everything has stopped, start the Spark processes again:


[hadoop@master sbin]$ ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
slave3: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave3.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/soft/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.out

Then ssh to each worker node and check with jps:

[hadoop@slave1 ~]$ jps
2635 Jps
2510 Worker
[hadoop@slave1 ~]$ exit
logout
Connection to slave1 closed.
[hadoop@master sbin]$ ssh slave2
Last login: Fri Sep 14 09:43:00 2018 from master
[hadoop@slave2 ~]$ jps
2667 Jps
2511 Worker
[hadoop@slave2 ~]$ exit
logout
Connection to slave2 closed.
[hadoop@master sbin]$ ssh slave3
Last login: Fri Sep 14 09:43:12 2018 from master
[hadoop@slave3 ~]$ jps
2246 Worker
2665 Jps
[hadoop@slave3 ~]$
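
For future checks, the same verification can be done from the master in one go:

for node in slave1 slave2 slave3; do echo "== $node =="; ssh $node jps; done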

The Spark Worker processes are now running on all the worker nodes!
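
Besides jps, the standalone Master web UI (http://master:8080 by default) should now list all three Workers as ALIVE, with each Worker's own UI on port 8081 as passed via --webui-port. A rough command-line check, if you prefer not to open a browser (assumes curl is available):

curl -s http://master:8080 | grep -i alive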


Takeaways

1. After changing environment variables, always run source ~/.bashrc (I keep mine in ~/.bashrc; adjust this to wherever you configure yours).
2. If [hadoop@master sbin]$ ./start-all.sh does not come up cleanly, stop the processes first with ./stop-all.sh before troubleshooting further.
3. After source ~/.bashrc, go back to $SPARK_HOME/sbin and run ./start-all.sh again (the full sequence is recapped below).
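
As a single sequence, the fix boils down to the following, run on the master (same layout as above):

source ~/.bashrc
cd $SPARK_HOME/sbin
./stop-all.sh
./start-all.sh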

