Problem description:
When running Spark on a YARN cluster, the following error was reported.
The key part of the error:
Exception in thread "main" java.lang.IllegalArgumentException: Required executor memory (1024 MB), offHeap memory (0) MB, overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
In short, it is a memory problem: the total request per executor (1024 MB executor memory + 384 MB overhead + 0 MB off-heap + 0 MB PySpark memory = 1408 MB) exceeds the cluster's maximum container allocation of 1024 MB.
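The arithmetic behind the error can be checked by hand. Spark's default executor memory overhead is max(10% of executor memory, 384 MB), so a 1 GB executor actually asks YARN for 1408 MB, which is above the 1024 MB cap. A minimal shell sketch of that check:

```shell
# Recompute the request from the error message.
# Spark's default overhead is max(10% of executor memory, 384 MB).
EXECUTOR_MB=1024
TENTH=$(( EXECUTOR_MB / 10 ))
OVERHEAD_MB=$(( TENTH > 384 ? TENTH : 384 ))
REQUESTED_MB=$(( EXECUTOR_MB + OVERHEAD_MB ))   # 1024 + 384 = 1408
MAX_MB=1024   # current yarn.scheduler.maximum-allocation-mb
echo "requested=${REQUESTED_MB} MB, threshold=${MAX_MB} MB"
if [ "$REQUESTED_MB" -gt "$MAX_MB" ]; then
  echo "request exceeds the threshold, so YARN rejects the container"
fi
```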
Solution:
Find the yarn-site.xml file
and add the following properties to resolve it:
<!-- RM memory resource configuration: two parameters -->
<property>
<description>The minimum allocation for every container request at the RM,
in MBs. Memory requests lower than this won't take effect,
and the specified value will get allocated at minimum.</description>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<description>The maximum allocation for every container request at the RM,
in MBs. Memory requests higher than this won't take effect,
and will get capped to this value.</description>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>4096</value>
</property>
<property>
<description>Amount of physical memory, in MB, that can be allocated
for containers.</description>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
</property>
<property>
<description>Ratio between virtual memory to physical memory when
setting memory limits for containers. Container allocations are
expressed in terms of physical memory, and virtual memory usage
is allowed to exceed this allocation by this ratio.
</description>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<description>The amount of memory the MR AppMaster needs, in MB.</description>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>4096</value>
</property>
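The three sizes above should be mutually consistent: the scheduler minimum must not exceed the scheduler maximum, and a single container's maximum allocation should not exceed what one NodeManager offers (yarn.nodemanager.resource.memory-mb). A quick sanity check over the values used here:

```shell
# Values from the yarn-site.xml snippet above.
MIN_ALLOC_MB=1024    # yarn.scheduler.minimum-allocation-mb
MAX_ALLOC_MB=4096    # yarn.scheduler.maximum-allocation-mb
NODE_MEM_MB=8192     # yarn.nodemanager.resource.memory-mb

if [ "$MIN_ALLOC_MB" -le "$MAX_ALLOC_MB" ] && [ "$MAX_ALLOC_MB" -le "$NODE_MEM_MB" ]; then
  echo "consistent"
else
  echo "inconsistent"
fi
```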
When saving and quitting, you need to add an exclamation mark (:wq! in vi): this is an important file, so the system guards it against accidental changes and requires the "!" to force the write.
Then reload the shell environment (this only refreshes environment variables; the yarn-site.xml change itself takes effect when YARN restarts):
source /etc/profile
Stop and restart YARN:
stop-yarn.sh
start-yarn.sh
Note: this step is mandatory and very important; without a restart, the new configuration will not take effect.
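After the restart, the original 1408 MB request (1024 MB executor + 384 MB overhead) fits under the new 4096 MB cap, which is why the job now goes through:

```shell
REQUESTED_MB=1408   # 1024 executor + 384 overhead, from the error message
NEW_MAX_MB=4096     # yarn.scheduler.maximum-allocation-mb after the edit
if [ "$REQUESTED_MB" -le "$NEW_MAX_MB" ]; then
  echo "request fits; the container can be allocated"
fi
```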
Personally tested and it works; problem solved~