Spark on YARN fails with "running beyond virtual memory limits"

Error message:

Diagnostics: Container [pid=5677,containerID=container_e01_1594549493537_0002_02_000001] is running beyond virtual memory limits. Current usage: 269.4 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used. Killing container.

20/07/12 20:47:25 ERROR cluster.YarnClientSchedulerBackend: YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
20/07/12 20:47:25 ERROR cluster.YarnClientSchedulerBackend: Diagnostics message: Application application_1594552606982_0004 failed 2 times due to AM Container for appattempt_1594552606982_0004_000002 exited with  exitCode: -103
For more detailed output, check application tracking page:http://hrbu31:8088/cluster/app/application_1594552606982_0004Then, click on links to logs of each attempt.
Diagnostics: Container [pid=2534,containerID=container_e03_1594552606982_0004_02_000001] is running beyond virtual memory limits. Current usage: 170.6 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_e03_1594552606982_0004_02_000001 :
	|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
	|- 2534 2532 2534 2534 (bash) 0 1 108609536 333 /bin/bash -c /opt/wdp/jdk//bin/java -server -Xmx512m -Djava.io.tmpdir=/opt/ha/hadoop/data/tmp/nm-local-dir/usercache/hadoop/appcache/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/ha/hadoop/logs/userlogs/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'hrbu30:50076' --properties-file /opt/ha/hadoop/data/tmp/nm-local-dir/usercache/hadoop/appcache/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /opt/ha/hadoop/logs/userlogs/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001/stdout 2> /opt/ha/hadoop/logs/userlogs/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001/stderr 
	|- 2538 2534 2534 2534 (java) 498 131 2304425984 43328 /opt/wdp/jdk//bin/java -server -Xmx512m -Djava.io.tmpdir=/opt/ha/hadoop/data/tmp/nm-local-dir/usercache/hadoop/appcache/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/ha/hadoop/logs/userlogs/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg hrbu30:50076 --properties-file /opt/ha/hadoop/data/tmp/nm-local-dir/usercache/hadoop/appcache/application_1594552606982_0004/container_e03_1594552606982_0004_02_000001/__spark_conf__/__spark_conf__.properties

Cause:

By default, the YARN NodeManager runs a monitor thread that checks each container's memory usage and kills any container that exceeds its allocation. Two independent checks are involved: a physical memory check (yarn.nodemanager.pmem-check-enabled) and a virtual memory check (yarn.nodemanager.vmem-check-enabled); if you have not set them explicitly, both default to true. The virtual memory limit is the container's physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio (default 2.1), which is exactly where the "2.1 GB" in the log comes from: the 1 GB ApplicationMaster container was allowed 1 GB × 2.1 = 2.1 GB of virtual memory, its JVM reserved 2.2-2.3 GB of virtual address space, and the vmem check killed it.
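
Note that the killed container here is Spark's ApplicationMaster (the ExecutorLauncher running with -Xmx512m in the process tree above). Independently of the YARN-side fix below, requesting more AM memory also keeps the job under the limit; a minimal sketch with illustrative values (your-app.jar is a placeholder; the two properties are standard Spark-on-YARN settings, and spark.yarn.am.memory defaults to 512m, matching the -Xmx512m in the log):

# yarn-client mode: give the ApplicationMaster a larger heap and extra off-heap headroom
spark-submit \
  --master yarn \
  --conf spark.yarn.am.memory=1g \
  --conf spark.yarn.am.memoryOverhead=512m \
  your-app.jar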

Solution:

1. Edit Hadoop's yarn-site.xml configuration file and add the following properties:

<!-- Whether to run a thread that checks the physical memory used by each container and kills any container that exceeds its allocation; default is true -->
        <property>
                <name>yarn.nodemanager.pmem-check-enabled</name>
                <value>false</value>
        </property>
<!-- Whether to run a thread that checks the virtual memory used by each container and kills any container that exceeds its allocation; default is true -->
        <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>false</value>
        </property>
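
Disabling both checks removes YARN's memory safety net entirely. A softer alternative, sketched here (the value 4 is an illustrative choice, not part of the original fix), is to leave the checks enabled and raise the virtual-to-physical ratio so the JVM's virtual footprint fits under the limit:

<!-- Allow each container more virtual memory per MB of physical memory; the default ratio is 2.1 -->
        <property>
                <name>yarn.nodemanager.vmem-pmem-ratio</name>
                <value>4</value>
        </property>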

2. Restart YARN (see the commands below), then rerun the program; the error should no longer appear.
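
Assuming a standard Hadoop layout (the log paths above suggest an install under /opt/ha/hadoop), the restart uses the stock sbin scripts; remember to copy the updated yarn-site.xml to every NodeManager host first:

# on the ResourceManager host, after distributing yarn-site.xml to all nodes
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh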
