Common error when running MapReduce on YARN: running beyond the 'VIRTUAL' memory limit

While learning MapReduce, I set up a test cluster of four servers on virtual machines. Running hadoop jar wordcount.jar com.csnt.vordcountsubmit then failed with an error and produced no result; it took trying quite a few approaches before I found a fix.

The error output is pasted below:

hadoop jar mapreduce20-0.0.1-SNAPSHOT.jar com.hadoop.mapreduce20.WordCountJobSubmit
2019-03-14 10:05:50,409 INFO client.RMProxy: Connecting to ResourceManager at hdp-01/192.168.233.11:8032
2019-03-14 10:05:53,648 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-03-14 10:05:53,793 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1552572302328_0001
2019-03-14 10:05:54,941 INFO input.FileInputFormat: Total input files to process : 5
2019-03-14 10:05:55,442 INFO mapreduce.JobSubmitter: number of splits:5
2019-03-14 10:05:55,553 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2019-03-14 10:05:56,348 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552572302328_0001
2019-03-14 10:05:56,352 INFO mapreduce.JobSubmitter: Executing with tokens: []
2019-03-14 10:05:57,483 INFO conf.Configuration: resource-types.xml not found
2019-03-14 10:05:57,484 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2019-03-14 10:05:58,711 INFO impl.YarnClientImpl: Submitted application application_1552572302328_0001
2019-03-14 10:05:58,926 INFO mapreduce.Job: The url to track the job: http://hdp-01:8088/proxy/application_1552572302328_0001/
2019-03-14 10:05:58,928 INFO mapreduce.Job: Running job: job_1552572302328_0001
2019-03-14 10:06:40,652 INFO mapreduce.Job: Job job_1552572302328_0001 running in uber mode : false
2019-03-14 10:06:40,655 INFO mapreduce.Job:  map 0% reduce 0%

[2019-03-13 11:39:19.148]Container [pid=8794,containerID=container_1552484587522_0002_01_000005] is running 431118848B beyond the 'VIRTUAL' memory limit. Current usage: 51.4 MB of 1 GB physical memory used; 2.5 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1552484587522_0002_01_000005 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 8794 8791 8794 8794 (bash) 2 8 115896320 305 /bin/bash -c /home/cdl/apps/java/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN   -Xmx820m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1552484587522_0002/container_1552484587522_0002_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/cdl/apps/hadoop-3.1.1/logs/userlogs/application_1552484587522_0002/container_1552484587522_0002_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.233.12 38908 attempt_1552484587522_0002_m_000004_0 5 1>/home/cdl/apps/hadoop-3.1.1/logs/userlogs/application_1552484587522_0002/container_1552484587522_0002_01_000005/stdout 2>/home/cdl/apps/hadoop-3.1.1/logs/userlogs/application_1552484587522_0002/container_1552484587522_0002_01_000005/stderr  
        |- 8810 8794 8794 8794 (java) 299 129 2570080256 12842 /home/cdl/apps/java/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1552484587522_0002/container_1552484587522_0002_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/cdl/apps/hadoop-3.1.1/logs/userlogs/application_1552484587522_0002/container_1552484587522_0002_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.233.12 38908 attempt_1552484587522_0002_m_000004_0 5 

[2019-03-13 11:39:20.034]Container killed on request. Exit code is 143
[2019-03-13 11:39:20.084]Container exited with a non-zero exit code 143. 

Solution: the container was allowed 1 GB of physical memory; with the default yarn.nodemanager.vmem-pmem-ratio of 2.1, that caps its virtual memory at 2.1 GB, but the JVM had reserved about 2.5 GB of virtual memory, so the NodeManager killed the container. I found two ways to fix this:

1. Set yarn.nodemanager.vmem-check-enabled to false, i.e., stop checking containers' virtual-memory usage;

2. Raise yarn.scheduler.minimum-allocation-mb above its default of 1024 MB, or increase yarn.nodemanager.vmem-pmem-ratio above its default of 2.1. A yarn-site.xml sketch covering both options follows this list.
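For reference, here is a minimal yarn-site.xml sketch of both options. The values 2048 and 4 are only illustrative; these properties are read by the ResourceManager and NodeManagers, so they need to go into yarn-site.xml on the cluster nodes and only take effect after YARN is restarted:

<configuration>
  <!-- Option 1: disable the virtual-memory check so containers are not killed for vmem overuse -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

  <!-- Option 2a: raise the minimum container allocation (default is 1024 MB) -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
  </property>

  <!-- Option 2b: allow more virtual memory per MB of physical memory (default is 2.1) -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
</configuration>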

I went with the first option: after restarting the YARN service and resubmitting the job, it ran successfully.
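For completeness, a rough sketch of the restart-and-resubmit step; the Hadoop install path here is taken from the logs above and may differ in your environment:

# after copying the updated yarn-site.xml to every node, restart YARN (run on the ResourceManager node)
/home/cdl/apps/hadoop-3.1.1/sbin/stop-yarn.sh
/home/cdl/apps/hadoop-3.1.1/sbin/start-yarn.sh

# resubmit the job
hadoop jar mapreduce20-0.0.1-SNAPSHOT.jar com.hadoop.mapreduce20.WordCountJobSubmit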
 
