Spark WARN cluster.ClusterScheduler: Initial job has not accepted any resources; check your cluster


When I ran the following command against the Spark cluster:

root@debian-master:/home/hadoop/spark-0.8.0-incubating-bin-hadoop1# ./run-example org.apache.spark.examples.SparkPi spark://master:7077

I got this error: WARN cluster.ClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

The likely cause is a wrong host name in the master URL. The prompt "root@debian-master:" shows that the machine's hostname is debian-master, so the fix is to run the program with the correct host name (or, failing that, the IP address):

root@debian-master:/home/hadoop/spark-0.8.0-incubating-bin-hadoop1# ./run-example org.apache.spark.examples.SparkPi spark://debian-master:7077
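A quick way to confirm the exact master URL is the standalone Master web UI (port 8080 by default), which displays the spark:// URL that workers registered against; reuse that URL verbatim. A minimal sketch, assuming the master runs on debian-master and that the UI page embeds the URL in its HTML:

root@debian-master:/home/hadoop/spark-0.8.0-incubating-bin-hadoop1# curl -s http://debian-master:8080 | grep -o 'spark://[^"<]*'
spark://debian-master:7077    # expected output; submit against this exact URL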




Other solutions:


1. Incorrect hosts/IP configuration

If the exception you receive is:


WARN YarnClientClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

This error occurs because the submitting node cannot communicate with the Spark worker nodes. After a job is submitted, a driver process starts on the submitting node to serve the job-progress UI, usually on port 4040, and the workers need to report progress back to it. So if the host names or IPs in /etc/hosts are configured incorrectly, you will get the WARN YarnClientClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory error.
Check that the host names and IPs are configured correctly.





Here is my own hosts configuration:

_____________________________________________________________________

root@debian-master:/home/hadoop/spark-0.8.0-incubating-bin-hadoop1# cat /etc/hosts

127.0.0.1    localhost
192.168.137.5    debian-master 
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

#hadoop
192.168.137.5 master   
192.168.137.6 slave1
192.168.137.7 slave2
192.168.137.6 debian-slave
192.168.137.7 hadoop-slave

root@debian-master:/home/hadoop/spark-0.8.0-incubating-bin-hadoop1# cat /etc/hostname
debian-master
root@debian-master:/home/hadoop/spark-0.8.0-incubating-bin-hadoop1#
_____________________________________________________________________
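To verify that name resolution is consistent, it helps to check each entry from every node; a minimal sketch (run on each machine, names taken from the file above):

getent hosts debian-master    # should print 192.168.137.5
getent hosts slave1           # should print 192.168.137.6
hostname                      # should match the name used in the spark:// URL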



2. Insufficient memory

You get this when either the number of cores or the amount of RAM (per node) you request via setting spark.cores.max and spark.executor.memory respectively exceeds what is available. Therefore, even if no one else is using the cluster and you specify you want to use, say, 100GB of RAM per node but your nodes can only support 90GB, then you will get this error message.

To be fair, the message is vague in this situation; it would be more helpful if it said you're exceeding the maximum.
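In Spark 0.8, spark.* settings are plain Java system properties; for the bundled examples and spark-shell they are typically passed through SPARK_JAVA_OPTS in conf/spark-env.sh. A minimal sketch that keeps the request below what the workers actually offer (the values here are placeholders; size them to your nodes):

# conf/spark-env.sh
export SPARK_JAVA_OPTS="-Dspark.cores.max=4 -Dspark.executor.memory=512m"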



3. The port is already in use because a previous program is still running.

If you have already started spark-shell in another terminal and then run an example, you will get FAILED SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in use, and finally the problem the title describes.

So just exit the spark-shell and run the example again.
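If you are not sure which process is still holding the UI port, a quick check (assuming lsof is installed and the default UI port 4040):

lsof -i :4040    # lists the PID of the process bound to port 4040
# then type exit in that spark-shell, or kill <PID> (placeholder) if it is orphaned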

