HBase error: java.io.IOException: Got error for OP_READ_BLOCK

2020-01-16 14:57:32,689 WARN [RpcServer.FifoWFPBQ.priority.handler=17,queue=1,port=6201] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR

While bulk-loading data into HBase, the read from HDFS blocked and the error below was logged; after waiting a short while, the load went through on its own. The likely cause of the blocking is that HBase was performing a split or compaction on the region at the time.
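Since the error clears on its own once the split/compaction finishes, one practical workaround is to wrap the bulk-load call in a retry loop with a pause between attempts. The helper below is a minimal sketch of that idea; the class name, attempt limit, and backoff are illustrative choices, not part of the HBase API.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical helper: retry an action that may transiently fail with an
// IOException such as "Got error for OP_READ_BLOCK", pausing between attempts
// so an in-flight split/compaction has time to finish.
public class BulkLoadRetry {
    public static <T> T withRetry(Callable<T> action, int maxAttempts, long waitMillis)
            throws Exception {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (IOException e) {            // e.g. OP_READ_BLOCK read failure
                last = e;
                Thread.sleep(waitMillis * attempt); // linear backoff before retrying
            }
        }
        throw last;                               // give up after maxAttempts
    }
}
```

In practice the `action` would be the bulk-load call itself (for example, `LoadIncrementalHFiles.doBulkLoad(...)` on HBase 1.x), so a transient read error during a split or compaction is retried instead of failing the whole job.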

2020-01-16 14:57:32,687 INFO  [RpcServer.FifoWFPBQ.priority.handler=17,queue=1,port=6201] regionserver.HStore: Validating hfile at hdfs://migumaster/pub_stat_migu/hbasetmp/m/.tmp/369d4d2a859c475491118a50d5e6ae02.top for inclusion in store m region migu:download_log20200116,66,1579103995918.8738786b95593adeafa7fa20bc92cc8e.
2020-01-16 14:57:32,689 WARN  [RpcServer.FifoWFPBQ.priority.handler=17,queue=1,port=6201] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/10.186.59.94:45870, remote=/10.186.59.90:50010, for file /pub_stat_migu/hbasetmp/m/.tmp/369d4d2a859c475491118a50d5e6ae02.top, for pool BP-266398130-10.186.59.129-1574389974472 block 1087859234_14161728
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:652)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:879)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:932)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:391)
    at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:482)
    at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:540)
    at org.apache.hadoop.hbase.regionserver.HStore.assertBulkLoadHFileOk(HStore.java:734)
    at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5350)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:1950)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33650)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2171)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
2020-01-16 14:57:32,689 WARN  [RpcServer.FifoWFPBQ.priority.handler=17,queue=1,port=6201] hdfs.DFSClient: Failed to connect to /10.186.59.90:50010 for block, add to deadNodes and continue. java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/10.186.59.94:45870, remote=/10.186.59.90:50010, for file /pub_stat_migu/hbasetmp/m/.tmp/369d4d2a859c475491118a50d5e6ae02.top, for pool BP-266398130-10.186.59.129-1574389974472 block 1087859234_14161728
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/10.186.59.94:45870, remote=/10.186.59.90:50010, for file /pub_stat_migu/hbasetmp/m/.tmp/369d4d2a859c475491118a50d5e6ae02.top, for pool BP-266398130-10.186.59.129-1574389974472 block 1087859234_14161728
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:652)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:879)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:932)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:391)
    at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:482)
    at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:540)
    at org.apache.hadoop.hbase.regionserver.HStore.assertBulkLoadHFileOk(HStore.java:734)
    at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5350)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:1950)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33650)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2171)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
2020-01-16 14:57:32,726 INFO  [RpcServer.FifoWFPBQ.priority.handler=17,queue=1,port=6201] regionserver.HStore: Loaded HFile hdfs://migumaster/pub_stat_migu/hbasetmp/m/.tmp/369d4d2a859c475491118a50d5e6ae02.top into store 'm' as hdfs://migumaster/hbase/data/migu/download_log20200116/8738786b95593adeafa7fa20bc92cc8e/m/6738bc2bfdde45498adcddde4c149d66_SeqId_549_ - updating store file list.
2020-01-16 14:57:32,734 INFO  [RpcServer.FifoWFPBQ.priority.handler=17,queue=1,port=6201] regionserver.HStore: Successfully loaded store file hdfs://migumaster/pub_stat_migu/hbasetmp/m/.tmp/369d4d2a859c475491118a50d5e6ae02.top into store m (new location: hdfs://migumaster/hbase/data/migu/download_log20200116/8738786b95593adeafa7fa20bc92cc8e/m/6738bc2bfdde45498adcddde4c149d66_SeqId_549_)

As the last two log lines show, the DFSClient added the failing DataNode (10.186.59.90:50010) to its dead-node list, read the block from another replica, and the bulk load completed successfully about 40 ms later, so no manual intervention was needed.