RpcServer.default.FPBQ.Fifo.handler hdfs.DFSClient: Connection failure: Failed to connect to /IP:50010

2022-03-31 17:33:43,812 WARN  [RpcServer.default.FPBQ.Fifo.handler=114,queue=18,port=16020] hdfs.DFSClient: Connection failure: Failed to connect to /IP:50010 for file /apps/hbase/data/data…0.82.3.153-1619078021188:blk_1095926490_22186461:java.net.ConnectException: Connection timed out
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2934)
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:821)
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:746)
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
        at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:641)
        at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1040)
        at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:992)
        at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1348)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1312)
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:808)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1568)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1772)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1597)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
        at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:340)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:852)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:833)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:347)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:256)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:469)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:369)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:311)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:275)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:978)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:969)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.seekOrSkipToNextColumn(StoreScanner.java:738)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:669)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6490)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6654)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6427)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6413)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:437)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:183)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:225)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:273)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3136)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3385)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Cause: too many concurrent connections

– dfs.namenode.handler.count (rule of thumb: set it to 20 times the natural logarithm of the cluster size, i.e. 20 × ln(N), where N is the number of nodes; see the hdfs-site.xml snippet after this list)

– The number of threads the NameNode uses to handle RPC requests from DataNodes

– Recommended: about 10% of the number of DataNodes, typically between 10 and 200

– If it is set too low, DataNodes will log "connection refused" errors while transferring data

– Set on the NameNode

– Default: 10
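
A minimal hdfs-site.xml sketch for the NameNode side. The 100-node cluster is an assumption used only to illustrate the rule above: 20 × ln(100) ≈ 20 × 4.6 ≈ 92.

<property>
  <!-- NameNode RPC handler threads; 20 * ln(100) ≈ 92 for an assumed 100-node cluster -->
  <name>dfs.namenode.handler.count</name>
  <value>92</value>
</property>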

– dfs.datanode.handler.count (increase this value to raise the concurrency of the DataNode's RPC service; see the snippet after this list)

– The number of threads the DataNode uses to serve RPC requests

– The right value depends on how busy the system is

– Setting it too low degrades performance and can even cause errors

– Set on the DataNode

– Default: 3
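
A matching hdfs-site.xml sketch for the DataNode side; the value 30 is an illustrative assumption, not a recommendation from this post.

<property>
  <!-- DataNode RPC server threads (default: 3); 30 is an assumed example value -->
  <name>dfs.datanode.handler.count</name>
  <value>30</value>
</property>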

– dfs.datanode.max.xcievers (the number of threads on a DataNode that perform file I/O; too many of them will exhaust system memory. Each thread takes roughly 1 MB, so a DataNode with 60 GB of RAM can hold at most about 60,000 threads, and even that is an idealized upper bound. See the snippet after this list.)

– Reserve (20%) means leaving an extra 20% of headroom to allow for growth in the number of files and similar factors

– The number of data-transfer connections a DataNode can handle concurrently

– Default: 256

– Recommended: 4096
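
A hedged hdfs-site.xml sketch applying the recommended value of 4096. Note that recent Hadoop releases spell this property dfs.datanode.max.transfer.threads; only older releases still read the legacy dfs.datanode.max.xcievers name.

<property>
  <!-- concurrent data-transfer threads per DataNode (default: 256, recommended: 4096) -->
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>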
