Spark job submitted in YARN or client mode fails to run (application state: ACCEPTED)


Without further ado, straight to the useful stuff!


Problem details

My machine has 8G of RAM, and I currently run a 3-node Spark cluster in YARN mode.

master is allocated 2G, slave1 1G, and slave2 1G (set when the virtual machines were created).

export SPARK_WORKER_MEMORY=1g  (in spark-env.sh)

export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60          (required)
export SCALA_HOME=/usr/local/scala/scala-2.10.5      (required)
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0    (required)
export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop-2.6.0/etc/hadoop   (required)
export SPARK_MASTER_IP=192.168.80.10
export SPARK_WORKER_MEMORY=1G     (the official docs say at least 1g)


The simplest fix is just to give the three nodes more memory: for example, 4G for master and 2G each for slave1 and slave2 (as much as you can spare).
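
Note that giving a VM more RAM is not enough by itself: a NodeManager advertises only as much memory as yarn.nodemanager.resource.memory-mb says it has (8192 MB by default, regardless of actual RAM). A minimal yarn-site.xml sketch, assuming a slave VM resized to 2G (the property name is standard YARN; the value here is just for illustration):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- memory this NodeManager offers to YARN; keep it at or below the VM's real RAM -->
  <value>2048</value>
</property>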

Of course, many readers are in the same situation as I am: as a student, 8G is already the most memory my machine can take.


This is generally caused by multiple users submitting jobs to the cluster at once, or by one user submitting several jobs concurrently, leaving YARN unable to allocate resources to the new application. To fix it, edit the Hadoop configuration file /etc/hadoop/conf/capacity-scheduler.xml and change yarn.scheduler.capacity.maximum-am-resource-percent from 0.1 to 0.5. This property sets the maximum share of cluster resources that may be used to run ApplicationMasters, so raising it lets YARN start more applications concurrently; you can raise it further if your situation requires. It also explains the symptom: by default, YARN sets aside only a small fraction of its resources for ApplicationMasters.
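
For reference, the change amounts to a single property in capacity-scheduler.xml (shown as a sketch; under a tarball install the file usually lives in $HADOOP_HOME/etc/hadoop/ rather than /etc/hadoop/conf/):

<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <!-- default 0.1: only 10% of cluster resources may run ApplicationMasters,
       so on a small cluster a second AM can wait in ACCEPTED indefinitely -->
  <value>0.5</value>
</property>

After saving the change, apply it with yarn rmadmin -refreshQueues, or restart the ResourceManager.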


For details, see:

Installing Spark on YARN (spark-1.6.1-bin-hadoop2.6.tgz + hadoop-2.6.0.tar.gz) (master, slave1, and slave2) (recommended by the blogger)


[spark@master logs]$  $SPARK_HOME/bin/spark-submit  \
> --class org.apache.spark.examples.JavaSparkPi \
> --master yarn-cluster \
> --num-executors 1 \
> --driver-memory 512m \
> --executor-memory 512m \
> --executor-cores 1 \
>  /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar

Note:
If --driver-memory is not specified, the default of 512M is used.
If --executor-memory is not specified, the default is 1G.
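
These limits can also be set once in conf/spark-defaults.conf instead of being repeated on every submit; a sketch mirroring the command above:

spark.driver.memory     512m
spark.executor.memory   512m
spark.executor.cores    1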

17/04/09 17:03:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/09 17:03:55 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.80.10:8032
17/04/09 17:03:56 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
17/04/09 17:03:56 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
17/04/09 17:03:56 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/04/09 17:03:56 INFO yarn.Client: Setting up container launch context for our AM
17/04/09 17:03:56 INFO yarn.Client: Setting up the launch environment for our AM container
17/04/09 17:03:56 INFO yarn.Client: Preparing resources for our AM container
17/04/09 17:03:59 INFO yarn.Client: Uploading resource file:/usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar -> hdfs://master:9000/user/spark/.sparkStaging/application_1491728358337_0001/spark-assembly-1.6.1-hadoop2.6.0.jar
17/04/09 17:04:19 INFO yarn.Client: Uploading resource file:/usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar -> hdfs://master:9000/user/spark/.sparkStaging/application_1491728358337_0001/spark-examples-1.6.1-hadoop2.6.0.jar
17/04/09 17:04:49 INFO yarn.Client: Uploading resource file:/tmp/spark-d152ed1b-09ca-47c8-8457-58f7e52ff419/__spark_conf__6499474209714260387.zip -> hdfs://master:9000/user/spark/.sparkStaging/application_1491728358337_0001/__spark_conf__6499474209714260387.zip
17/04/09 17:04:50 INFO spark.SecurityManager: Changing view acls to: spark
17/04/09 17:04:50 INFO spark.SecurityManager: Changing modify acls to: spark
17/04/09 17:04:50 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
17/04/09 17:04:50 INFO yarn.Client: Submitting application 1 to ResourceManager
17/04/09 17:04:51 INFO impl.YarnClientImpl: Submitted application application_1491728358337_0001
17/04/09 17:04:
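
If the application stays in ACCEPTED after this point, the standard YARN CLI (or the ResourceManager web UI on port 8088) shows what is queued and what each node is running, for example:

yarn application -list -appStates ACCEPTED    # applications still waiting for an AM container
yarn node -list                               # NodeManager states and running-container counts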