Linux@ulimit

Error:

2020-01-05 12:58:48,019 INFO  [ProcedureExecutor-2] master.AssignmentManager: Unable to communicate with hadoop,16020,1578200311421 in order to assign regions,
java.io.IOException: java.io.IOException: unable to create new native thread
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2457)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367)
        at org.apache.hadoop.hbase.executor.ExecutorService$Executor.submit(ExecutorService.java:230)
        at org.apache.hadoop.hbase.executor.ExecutorService.submit(ExecutorService.java:154)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1843)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22737)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
        ... 3 more

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:871)
        at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1846)
        at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:3044)
        at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2991)
        at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.assign(ServerCrashProcedure.java:568)
        at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:270)
        at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:75)
        at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:139)
        at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:506)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1167)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:955)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:908)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:77)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:482)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: unable to create new native thread
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2457)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367)
        at org.apache.hadoop.hbase.executor.ExecutorService$Executor.submit(ExecutorService.java:230)
        at org.apache.hadoop.hbase.executor.ExecutorService.submit(ExecutorService.java:154)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1843)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22737)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
        ... 3 more

        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:386)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
        at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
        at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
        at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:600)
        at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.run(BlockingRpcConnection.java:334)
        at java.lang.Thread.run(Thread.java:748)

Analysis:

This problem is actually caused by the Linux system configuration. The key error is `java.lang.OutOfMemoryError: unable to create new native thread`: the JVM cannot create a new OS-level thread, which usually means the per-user process limit (nproc, which also counts threads) is too low. The fix is to raise the limit at the OS level, as follows:

Edit the file /etc/security/limits.d/90-nproc.conf (on RHEL/CentOS 6, this file sets the default nproc limit) as follows:

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

hadoop		soft	nofile		4096
hadoop		soft	nproc		65535
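The entries in limits.d are applied by pam_limits and only take effect in new login sessions, so you must log in again (or restart the service) after editing. A minimal sanity check, assuming you are logged in as the `hadoop` user, looks like this:

```shell
# -u: max user processes (nproc) -- this also caps the number of threads
# -n: max open file descriptors (nofile)
ulimit -u
ulimit -n

# Note: these report the limits of the *current* shell session; an
# already-running daemon keeps the limits it was started with until
# it is restarted.
```

If the values still show the old defaults (1024 nproc is common on CentOS 6), the session predates the edit, or another file in limits.d overrides the setting.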

Then run `ulimit -a` (as the hadoop user, in a fresh login session) to check; the "open files" and "max user processes" entries should now show the raised values.

After that, restarting the affected Java processes (HBase, in this case) no longer triggers the error.
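If the error ever recurs, a quick way to see how close a user is to the nproc ceiling is to count that user's threads and compare against the limit. A hedged sketch, assuming a standard Linux `ps` with the `-L` (show threads) option; `hadoop` is the service account from the log above, substitute your own:

```shell
# Count all threads owned by a given user; each thread counts against
# that user's nproc limit.
USER_TO_CHECK=${USER_TO_CHECK:-hadoop}
threads=$(ps -u "$USER_TO_CHECK" -L --no-headers 2>/dev/null | wc -l)
echo "threads in use by $USER_TO_CHECK: $threads"
echo "per-user limit (ulimit -u):      $(ulimit -u)"
```

When the thread count approaches `ulimit -u`, new `Thread.start()` calls in the JVM begin failing with exactly the OutOfMemoryError shown in the log, even though heap memory is plentiful.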
