spark-submit suddenly started failing with this error:
YarnClusterScheduler: Initial job has not accepted any resources;
It looks like Spark cannot get enough memory from the cluster.
My spark-env.sh looks like this:
JAVA_HOME=/usr/local/jdk
SCALA_HOME=/usr/local/scala
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
YARN_CONF_DIR=/usr/local/hadoop/etc/hadoop
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=1G
SPARK_CONF_DIR=/usr/local/spark/conf
SPARK_MASTER_HOST=qianfeng01
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=1
SPARK_WORKER_MEMORY=1G
The key setting is this one: SPARK_WORKER_MEMORY=1G
So the fix is to lower the memory that spark-submit requests: appending --executor-memory 512m to the original command is enough. With only 1G of memory available per node, a 1G executor plus its memory overhead cannot be scheduled, but a 512m executor fits.
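As a sketch, the adjusted command might look like this (the application jar, main class, and deploy mode are placeholders standing in for whatever the original job used):

```shell
# Placeholder jar and class: substitute your own application.
# --executor-memory 512m keeps each executor (plus YARN's
# memory overhead) within the ~1G available per node.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 512m \
  --executor-cores 1 \
  --class com.example.MyApp \
  myapp.jar
```

If the error persists, it is worth checking that YARN's NodeManagers actually advertise enough memory for the requested containers.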