WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources

When submitting a Spark job, the shell reported the following error:

scala> sc.textFile("hdfs://master.hadoop:9000/spark/a.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2).collect
[Stage 0:>                                                          (0 + 0) / 2]
18/08/24 20:44:35 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/08/24 20:44:50 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/08/24 20:45:05 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
[Stage 0:>                                                          (0 + 0) / 2]
18/08/24 20:45:20 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/08/24 20:45:35 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/08/24 20:45:50 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

When the application starts correctly, each worker machine should be running a CoarseGrainedExecutorBackend process, visible with jps:

[root@slave4 conf]# jps
3472 DataNode
3540 SecondaryNameNode
3798 Worker
3862 CoarseGrainedExecutorBackend
3958 Jps

If that process never shows up on the worker machines, the executors failed to start.
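
Besides running jps on each worker, you can do a quick check from the driver side. The line below is a minimal sketch typed into the same spark-shell session (where sc already exists); sc.getExecutorMemoryStatus reports one entry per registered block manager, so seeing only the driver's own entry means no executor has joined the application.

// inside spark-shell, where sc is already defined
sc.getExecutorMemoryStatus.foreach(println)   // one (host:port, (maxMem, remainingMem)) entry per block manager
// if only the driver's own entry is printed, no executor has registered
// and the "Initial job has not accepted any resources" warning will keep repeating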

Solution:

First check whether your Spark configuration file (spark-env.sh) sets the SPARK_EXECUTOR_MEMORY property (my machines use 512m). If it is not set, the default is 1g, so pick a value that fits your own nodes. My spark-env.sh:

export JAVA_HOME=/apps/jdk1.8.0_171
export SCALA_HOME=/apps/scala-2.11.7
export HADOOP_HOME=/apps/hadoop-2.8.0/
export HADOOP_CONF_DIR=/apps/hadoop-2.8.0/etc/hadoop
export SPARK_MASTER_IP=master.hadoop
export SPARK_WORKER_MEMORY=512m
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
export SPARK_EXECUTOR_MEMORY=512m
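
For reference, SPARK_EXECUTOR_MEMORY corresponds to the spark.executor.memory property, so the same limit can also be set per application instead of cluster-wide in spark-env.sh. Below is a minimal sketch of a standalone word-count app with that property set; the master URL and HDFS path are simply the ones used in this post, so adjust them to your own cluster.

import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WordCount")
      .master("spark://master.hadoop:7077")       // standalone master from this post
      .config("spark.executor.memory", "512m")    // same effect as SPARK_EXECUTOR_MEMORY=512m
      .config("spark.cores.max", "2")             // cap total cores so the app fits the small cluster
      .getOrCreate()

    val sc = spark.sparkContext
    val counts = sc.textFile("hdfs://master.hadoop:9000/spark/a.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2)

    counts.collect().foreach(println)
    spark.stop()
  }
}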

The problem on my machines was precisely that SPARK_EXECUTOR_MEMORY was not configured: without it, each executor asks for the default 1g, which is more than the 512m that SPARK_WORKER_MEMORY lets each worker offer, so no worker could ever accept the job.

I had assumed that setting SPARK_WORKER_MEMORY alone was enough, which is why the error kept coming back. It took me several days to sort out!

After updating the configuration, run top to confirm that the worker nodes have enough free memory. Mine still had about 765 MB free, so an executor size of 512m is reasonable.

[root@slave4 hdptmp]# top
top - 21:59:25 up  3:37,  1 user,  load average: 0.24, 0.12, 0.08
Tasks: 111 total,   1 running, 110 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.1 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  1867048 total,   765924 free,   630728 used,   470396 buff/cache
KiB Swap:  1048572 total,  1048572 free,        0 used.  1030032 avail Mem 

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                
  3472 root      20   0 2830276 154668  19032 S   1.0  8.3   0:20.83 java                                   

Starting again, the CoarseGrainedExecutorBackend process appeared on the worker nodes and the Spark job ran successfully:

18/08/30 14:12:40 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.1.5:4040
Spark context available as 'sc' (master = spark://slave3.hadoop:7077, app id = app-20180830141225-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.1
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sc
res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@18356951

scala> sc.textFile("hdfs://slave3.hadoop:9000/a.txt").flatMap(_.split(" ")).map((_, 1)).reduceByKey(_+_).sortBy(_._2).collect
res1: Array[(String, Int)] = Array((jim,1), (jarry,1), (wo,1), (ni,1), (hello,4))

scala>
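
One small note on the job itself: sortBy is ascending by default, so the result above lists the least frequent words first; passing ascending = false flips the order. For example, against the same a.txt:

scala> sc.textFile("hdfs://slave3.hadoop:9000/a.txt").flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).sortBy(_._2, ascending = false).collect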

 
