Problem description:
When running a Spark job in Spark on YARN mode, setting a single executor's memory above 8 GB fails with the error below. The cause is YARN's out-of-the-box configuration: the scheduler caps each container request at 8192 MB, and each NodeManager advertises only 8 GB of schedulable resources. Those defaults are too small for this workload, so I adjusted them on the cluster.
Key log output:
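(The exact message varies with the Spark version and the requested sizes; a typical failure from Spark's YARN client looks like this:)

java.lang.IllegalArgumentException: Required executor memory (9216+921 MB) is above
the max threshold (8192 MB) of this cluster! Please check the values of
'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.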
Solution:
Modify yarn-site.xml:
<property>
  <description>The minimum allocation for every container request at the RM,
  in MBs. Memory requests lower than this won't take effect,
  and the specified value will get allocated at minimum.</description>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <description>The maximum allocation for every container request at the RM,
  in MBs. Memory requests higher than this won't take effect,
  and will get capped to this value.</description>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>
<property>
  <description>The maximum allocation for every container request at the RM,
  in terms of virtual CPU cores. Requests higher than this won't take effect,
  and will get capped to this value.</description>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>16</value>
</property>