Hadoop YARN OutOfMemoryError: unable to create new native thread

Bug

 2015-08-23 18:00:12,084 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:713)
    at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
    at java.lang.UNIXProcess.initStreams(UNIXProcess.java:172)
    at java.lang.UNIXProcess$2.run(UNIXProcess.java:145)
    at java.lang.UNIXProcess$2.run(UNIXProcess.java:143)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:143)
    at java.lang.ProcessImpl.start(ProcessImpl.java:130)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:485)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.containerIsAlive(DefaultContainerExecutor.java:430)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.signalContainer(DefaultContainerExecutor.java:401)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java:419)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:139)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:55)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
    at java.lang.Thread.run(Thread.java:744)
2015-08-23 18:00:12,086 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
2015-08-23 18:13:35,544 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: STARTUP_MSG: 

Solution

At first, I thought my MapReduce program might need more memory and that there might be a limit on nproc, so I changed the configuration in both Linux and Hadoop:

/etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536

/etc/security/limits.d/90-nproc.conf

* soft nproc unlimited
* hard nproc unlimited
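Whether the new limits are actually in effect can be verified from a shell before blaming the job configuration (values vary per system; note that PAM applies limits.conf at login, so a fresh login session is needed, and the NodeManager must be restarted from that session to inherit the new limits):

```shell
# Max user processes -- native threads count against this limit
ulimit -u

# Max open file descriptors (the nofile setting above)
ulimit -n

# Rough count of threads currently alive across all processes,
# to compare against the limit
ps -eLf | wc -l
```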

mapred-site.xml

mapreduce.map.memory.mb  4096
mapreduce.reduce.memory.mb 8192
mapreduce.map.java.opts  -Xmx3072m
mapreduce.reduce.java.opts  -Xmx7168m
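In mapred-site.xml these key-value pairs are written as property elements; a sketch of how this first attempt would look (same values as above, remaining properties follow the same pattern):

```xml
<!-- First attempt: generous per-task memory (later found to be the problem) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
```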

But it didn't work. The real problem was that I did not have enough memory to allocate to all the map and reduce tasks. In fact, they don't need much memory at all; I had over-allocated. The solution is:

mapred-site.xml

mapreduce.map.memory.mb  1024
mapreduce.reduce.memory.mb 2048
mapreduce.map.java.opts  -Xmx800m
mapreduce.reduce.java.opts  -Xmx1600m
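The proportions behind the fix can be sanity-checked with a small script: the JVM heap (`-Xmx`) must stay well below the container size, because the difference is the native memory the JVM uses for thread stacks, and exhausting it is exactly what produces "unable to create new native thread". This is a sketch using the values from the fixed configuration above:

```python
# Check that each -Xmx leaves native headroom inside its YARN container.
settings = {
    "map":    {"memory_mb": 1024, "xmx_mb": 800},
    "reduce": {"memory_mb": 2048, "xmx_mb": 1600},
}

for task, s in settings.items():
    headroom = s["memory_mb"] - s["xmx_mb"]   # left for stacks, metaspace, etc.
    ratio = s["xmx_mb"] / s["memory_mb"]
    print(f"{task}: heap {s['xmx_mb']} MB of {s['memory_mb']} MB "
          f"container ({ratio:.0%}), {headroom} MB native headroom")
```

Both task types keep the heap at roughly 80% of the container, a common rule of thumb for these two settings.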

Environment

CentOS 6.4 (kernel 3.10.80)
Hadoop 2.6