HBase deployment errors

Clocks on the nodes are out of sync

org.apache.hadoop.hbase.ClockOutOfSyncException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server hadoopslave2,60020,1372320861420 has been rejected; Reported time is too far out of sync with master. Time difference of 143732ms > max allowed of 30000ms
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
         at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2093)
         at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:744)
         at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server hadoopslave2,60020,1372320861420 has been rejected; Reported time is too far out of sync with master. Time difference of 143732ms > max allowed of 30000ms

Add the following property to hbase-site.xml on each node; it raises the maximum clock skew the master will tolerate:

   <property>
     <name>hbase.master.maxclockskew</name>
     <value>200000</value>
   </property>
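Raising hbase.master.maxclockskew only widens the tolerance; the underlying problem is that the nodes' clocks have drifted apart. The usual fix is to sync every node against a common time source, for example with ntpdate. A minimal sketch, assuming the master's hostname is "master" and ntpdate is installed (adjust both for your cluster):

```shell
# Run on every region server node. "master" is a placeholder hostname --
# point it at your NTP server or an NTP pool instead.
ntpdate master

# To keep clocks in sync permanently, add a cron entry (every 10 minutes):
# */10 * * * * /usr/sbin/ntpdate master >/dev/null 2>&1
```

On systems with systemd, enabling chrony or systemd-timesyncd achieves the same thing without cron.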

Directory is not empty

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException): `/hbase/WALs/slave1,16000,1446046595488-splitting is non empty': Directory is not empty
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3524)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3479)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3463)
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:751)
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:562)
     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
     at org.apache.hadoop.ipc.Client.call(Client.java:1411)
     at org.apache.hadoop.ipc.Client.call(Client.java:1364)
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
     at com.sun.proxy.$Proxy15.delete(Unknown Source)
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:490)
     at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
     at com.sun.proxy.$Proxy16.delete(Unknown Source)
     at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
     at com.sun.proxy.$Proxy17.delete(Unknown Source)
     at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:606)
     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
     at com.sun.proxy.$Proxy17.delete(Unknown Source)
     at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1726)
     at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:588)
     at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:584)
     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
     at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:584)
     at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:297)
     at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:400)
     at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:373)
     at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:295)
     at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.splitLogs(ServerCrashProcedure.java:388)
     at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:228)
     at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:72)
     at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
     at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:452)
     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1050)
     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:841)
     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)

Per https://issues.apache.org/jira/browse/HBASE-14729, go into the Hadoop filesystem and delete the directory named in the error, or the entire WALs directory.
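A minimal sketch of the cleanup, assuming the -splitting directory name from the error above (substitute the server name and timestamp from your own log; the path contains commas, so quote it):

```shell
# Remove just the stale -splitting directory named in the error:
hdfs dfs -rm -r '/hbase/WALs/slave1,16000,1446046595488-splitting'

# Or, more drastically, remove the whole WALs directory. Note this
# discards any write-ahead logs that were not yet flushed to HFiles:
# hdfs dfs -rm -r /hbase/WALs
```

Restart HBase afterwards so the master's server-crash procedure can complete.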

TableExistsException: hbase:namespace

zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=slave1,16020,1428456823337, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on worker05,16020,1428461295266
         at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2740)
         at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:859)
         at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1137)
         at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20862)
         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
         at java.lang.Thread.run(Thread.java:745)

The HMaster dies shortly after starting (or restarts abnormally), and the master log contains "TableExistsException: hbase:namespace".
This is most likely because, after an HBase version change, ZooKeeper still holds state from the previous HBase installation, which conflicts with the new one.
Delete the stale HBase data in ZooKeeper and restart, and the problem goes away:

# sh zkCli.sh -server slave1:2181
[zk: slave1:2181(CONNECTED) 0] ls /
[zk: slave1:2181(CONNECTED) 1] rmr /hbase
[zk: slave1:2181(CONNECTED) 2] quit

Reprinted from: https://www.cnblogs.com/grow1016/p/11288007.html
