Common HBase Errors

Problems when starting HBase

Error 1
1.1 Error message
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/MasterProcWALs/state-00000000000000000011.log could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1559)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3245)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:663)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476)
	at org.apache.hadoop.ipc.Client.call(Client.java:1413)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)	
1.2 Cause

HDFS could not write the file to any DataNode: the block could be replicated to 0 nodes because, although 3 DataNodes were running, all 3 were excluded from the operation.

1.3 Solution

Keep reading; the exception messages below reveal the actual root cause.
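Before chasing the follow-on errors, it can help to confirm that the DataNodes are healthy from HDFS's point of view. A quick check (my suggestion, not part of the original session):

# How many DataNodes are live, and what capacity do they report?
hdfs dfsadmin -report
# If all DataNodes show up here yet are excluded, check their logs for disk-full or network errors.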

Error 2
2.1 Error message
2018-07-08 10:06:07,091 WARN  [master/littlelawson/192.168.211.3:60000-SendThread(server5:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.NoRouteToHostException: No route to host
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
2.2 Cause

No route to host: reply packets cannot be routed back to the requesting host, so the client cannot reach ZooKeeper on server5:2181.
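To confirm it is a network or firewall problem rather than a ZooKeeper problem, test the route and the client port from the failing host (hostnames follow the log above; adjust to your cluster):

# Is server5 reachable at all?
ping -c 3 server5
# Does anything answer on the ZooKeeper client port?
telnet server5 2181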

2.3 Solution

The firewall had not been disabled. Disable it on every node (see https://blog.csdn.net/liu16659/article/details/80957695 for one walkthrough).
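On CentOS 7, for example, this amounts to stopping firewalld on every node (a sketch; on CentOS 6 use "service iptables stop" and "chkconfig iptables off" instead):

# Stop the firewall now and keep it from starting at boot
systemctl stop firewalld
systemctl disable firewalld
# Verify it is inactive
systemctl status firewalld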

Error 3
3.1 Error message
exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on server6,16020,1531015564292
	at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3086)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1244)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1529)
	at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22731)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
3.2 Cause

The RegionServer on server6 is not running, so the hbase:meta region is not online there.

3.3 Solution

The underlying cause is shown by the next exception; see Error 4 below.
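Before moving on, you can verify which RegionServers the master currently considers live (a quick check, not from the original session):

# One summary line per live/dead server, straight from the HBase shell
echo "status 'simple'" | hbase shell
# Or check directly whether an HRegionServer process exists on server6
ssh server6 jps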

Error 4
4.1 Error message
2018-07-08 10:06:17,078 WARN  [ProcedureExecutor-3] master.SplitLogManager: Returning success without actually splitting and deleting all the log files in path hdfs://192.168.211.3:9000/hbase/WALs/server6,16020,1530955607284-splitting: [FileStatus{path=hdfs://192.168.211.3:9000/hbase/WALs/server6,16020,1530955607284-splitting/server6%2C16020%2C1530955607284.meta.1530970444067.meta; isDirectory=false; length=0; replication=3; blocksize=134217728; modification_time=1530977645764; access_time=1530970443901; owner=root; group=supergroup; permission=rw-r--r--; isSymlink=false}, FileStatus{path=hdfs://192.168.211.3:9000/hbase/WALs/server6,16020,1530955607284-splitting/server6%2C16020%2C1530955607284.meta.1530974044356.meta; isDirectory=false; length=0; replication=3; blocksize=134217728; modification_time=1530977645764; access_time=1530974044160; owner=root; group=supergroup; permission=rw-r--r--; isSymlink=false}]
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException): `/hbase/WALs/server6,16020,1530955607284-splitting is non empty': Directory is not empty
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4012)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3968)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3952)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:825)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)	
4.2 Cause

The RegionServer on server6 cannot start correctly because duplicate WAL directories for server6 exist under /hbase/WALs: a leftover -splitting directory that the master cannot delete because it is not empty.

4.3 Solution

Delete the WAL data for the affected node under HBase's directory on HDFS, then restart HBase. There is an HBase JIRA for this; it is a known bug, reported back in 2015 and still unresolved [https://issues.apache.org/jira/browse/HBASE-14729]. Listing HBase's WAL directory on HDFS confirms the problem: server6 indeed has stale entries. Deleting them is the only workaround I know of:

[root@littlelawson ~]# hdfs dfs -ls /hbase/WALs
Found 4 items
drwxr-xr-x   - root supergroup          0 2018-07-08 10:07 /hbase/WALs/server4,16020,1531015564301
drwxr-xr-x   - root supergroup          0 2018-07-08 10:06 /hbase/WALs/server5,16020,1531015564319
drwxr-xr-x   - root supergroup          0 2018-07-08 10:06 /hbase/WALs/server6,16020,1530955607284-splitting
drwxr-xr-x   - root supergroup          0 2018-07-08 10:08 /hbase/WALs/server6,16020,1531015564292
[root@littlelawson ~]# hdfs dfs -rm -rf /hbase/WALs/server6*
-rm: Illegal option -rf
Usage: hadoop fs [generic options] -rm [-f] [-r|-R] [-skipTrash] <src> ...
  • Run the delete again with the correct flags (-f -R rather than -rf):
[root@littlelawson ~]# hdfs dfs -rm -f -R /hbase/WALs/server6*
18/07/08 10:19:59 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /hbase/WALs/server6,16020,1530955607284-splitting
18/07/08 10:19:59 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /hbase/WALs/server6,16020,1531015564292
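With the stale directories gone, restart HBase so the master can finish initialization (the standard scripts under $HBASE_HOME):

bin/stop-hbase.sh
bin/start-hbase.sh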
Error 5
5.1 Cause

Running only bin/zkServer.sh start and then opening hbase shell can fail with "ERROR: Can't get master address from ZooKeeper; znode data == null". The root of the problem is that HBase cannot communicate correctly with ZooKeeper.

5.2 Solution

Start ZooKeeper first, pointing it at its configuration file explicitly, and only then start the hbase shell:

bin/zkServer.sh start conf/zoo.cfg

This command runs the bin/zkServer.sh script with conf/zoo.cfg as its configuration file; the script starts the ZooKeeper server.
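To make sure ZooKeeper actually came up before launching the hbase shell, two optional checks (2181 is ZooKeeper's default client port):

# Reports standalone/leader/follower if the server is up
bin/zkServer.sh status conf/zoo.cfg
# Four-letter-word health check; a healthy server answers "imok"
echo ruok | nc localhost 2181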

Error 6
6.1 Error message
2018-10-08 16:21:40,984 FATAL [server4:60000.activeMasterManager] master.HMaster: Failed to become active master
java.net.ConnectException: Call From server4/192.168.211.4 to server4:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1413)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:671)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy17.setSafeMode(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:307)
	at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2601)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1207)
	at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:559)
	at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:1005)
	at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:459)
	at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:166)
	at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:141)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:741)
	at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:205)
	at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:2023)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1452)
	... 29 more
6.2 Cause

HBase cannot talk to Hadoop (HDFS): the connection to the NameNode at server4:9000 is refused, meaning the NameNode is down or not listening.

6.3 Solution

Restart Hadoop (or fix whatever its own logs report) so that HDFS is up before starting HBase.
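A minimal check-and-restart sequence (assuming the standard Hadoop scripts are on the PATH and that server4:9000 is fs.defaultFS, as the log above suggests):

# Is a NameNode process running on server4?
jps | grep -i namenode
# If not, bring HDFS back up
start-dfs.sh
# HBase also needs the NameNode out of safe mode
hdfs dfsadmin -safemode get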

===============update on 2018/12/18===============
Error 7
7.1 Error message
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class CalleeWriteObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
	at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1974)
	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1827)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1729)
	at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:479)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:63400)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
7.2 Context

The error occurred when creating a table after adding a coprocessor to it.

7.3 Cause

Unknown, though the message itself says the master could not load the coprocessor class CalleeWriteObserver, which suggests its jar is missing from the server classpath.
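If you only need to get past the check while investigating, the message names the knob: hbase.table.sanity.checks. A sketch of disabling it per table from the HBase shell (table and column-family names here are placeholders; bypassing the check hides the underlying classpath problem, so treat this as a workaround, not a fix):

hbase> create 'mytable', 'cf', CONFIGURATION => {'hbase.table.sanity.checks' => 'false'}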
