A summary of HBase configuration and runtime errors

1. Running the $ hbase hbck command produces the following message:

Invalid maximum heap size: -Xmx4096m
The specified size exceeds the maximum representable size.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Cause: the JVM heap size is set too large. Lower the setting in hbase-env.sh, for example:

export HBASE_HEAPSIZE=1024

2. HBase fails to start; the region server log contains an error like the one below, and ZooKeeper also reports initialization problems:

FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 10.210.70.57,60020,1340088145399: Initialization of RS failed. Hence aborting RS.

The installation had worked fine before, the daemons had been force-killed at some point in between, and the error complained about initialization, so I suspected stale cached data. I cleared the data under tmp, but HRegionServer still would not start (ZooKeeper at least came up). Out of frustration I then deleted the HBase data in HDFS as well, cleared tmp again, checked every node for leftover HBase processes and killed them, and restarted HBase, after which everything was back to normal. I do not know exactly which part caused the problem, so I do not recommend this brute-force approach; if anyone knows the real cause, please let me know.

3. The region server daemon fails to start, reporting the following error:

Exception in thread "main" java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
...
Caused by: java.net.BindException: Problem binding to /10.210.70.57:60020 : Cannot assign requested address

Following the error message, first check that the IP address really belongs to the machine in question; if the IP is correct, check whether port 60020 is already occupied by another process. A small diagnostic sketch follows below.
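
As a quick way to tell the two cases apart, you can try binding the same address and port yourself. This is only a diagnostic sketch, not part of HBase; the IP and port are the ones from the error message above.

import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindCheck {
    public static void main(String[] args) throws Exception {
        ServerSocket socket = new ServerSocket();
        try {
            // Same address/port as reported in the region server error.
            socket.bind(new InetSocketAddress("10.210.70.57", 60020));
            System.out.println("Bind succeeded: the address is local and the port is free.");
        } catch (java.net.BindException e) {
            // "Cannot assign requested address" -> the IP does not belong to this host.
            // "Address already in use"          -> another process already holds port 60020.
            System.out.println("Bind failed: " + e.getMessage());
        } finally {
            socket.close();
        }
    }
}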

4. Running an HBase program or a shell command produces the following message:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

This happens because both HBase and Hadoop ship this jar in their lib directories; removing either one of them fixes it.

5. When running HBase MapReduce jobs, some nodes run fine with no errors at all, while others keep failing with messages like Status : FAILED java.lang.NullPointerException. The tasktracker log on the failing nodes shows the following error:

WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
...
caused by java.net.ConnectException: Connection refused

The official documentation gives an explanation for this error:

Errors like this... are either due to ZooKeeper being down, or unreachable due to network issues.

When I originally set up ZooKeeper, the only advice I followed was to use an odd number of nodes so that losing one would not block leader election. Judging from this problem, it seems every node that is expected to run these tasks must also have ZooKeeper properly configured; a sketch of what that means on the client side follows below.
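
One thing worth checking on the failing nodes (my reading of the symptom, not spelled out above) is that the HBase configuration visible to the tasks actually names the ZooKeeper quorum; otherwise the client falls back to localhost:2181 and gets Connection refused. A minimal Java sketch, with hypothetical quorum hosts zk1,zk2,zk3 and a hypothetical table name mytable:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class ZkQuorumCheck {
    public static void main(String[] args) throws Exception {
        // Loads hbase-site.xml from the classpath; if a task node cannot see it,
        // the client tries localhost:2181 and fails with "Connection refused".
        Configuration conf = HBaseConfiguration.create();

        // Hypothetical hosts; normally these values come from hbase-site.xml
        // rather than being hard-coded.
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        // If this succeeds, the node can reach ZooKeeper and the HBase cluster.
        HTable table = new HTable(conf, "mytable");
        System.out.println("Connected via quorum: " + conf.get("hbase.zookeeper.quorum"));
        table.close();
    }
}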

6. A NoSuchMethodException is thrown for a method that you never defined and never called; the message looks like this:

java.lang.RuntimeException: java.lang.NoSuchMethodException: com.google.hadoop.examples.Simple$MyMapper.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:45)
at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:32)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:53)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:209)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1210)
Caused by: java.lang.NoSuchMethodException: com.google.hadoop.examples.Simple$MyMapper.<init>()
at java.lang.Class.getConstructor0(Class.java:2705)
at java.lang.Class.getDeclaredConstructor(Class.java:1984)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:41)
... 4 more

The following solution was found online:

This is actually the <init>() function. The display on the web page doesn't translate into html, but dumps plain text, so <init> is treated as a (nonexistent) tag by your browser. This function is created as a default initializer for non-static classes. This is most likely caused by having a non-static Mapper or Reducer class. Try adding the static keyword to your class declaration, ie:

In other words, the static keyword is missing; adding it fixes the problem, like this:

public static class MyMapper extends MapReduceBase implements Mapper {...}
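
For context, here is a sketch of the corrected layout using the old mapred API (the type parameters, class names and the word-count-style logic are illustrative only):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class Simple {
    // The nested class must be static: Hadoop instantiates it via reflection
    // through a no-argument constructor. A non-static inner class has a hidden
    // constructor parameter (the enclosing instance), so reflection fails with
    // NoSuchMethodException: ...$MyMapper.<init>().
    public static class MyMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            output.collect(new Text(value.toString()), new IntWritable(1));
        }
    }
}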

7. When a MapReduce program writes HFiles for HBase, you may run into an error like this:

java.lang.IllegalArgumentException: Can't read partitions file
...
Caused by: java.io.IOException: wrong key class: org.apache.hadoop.io.*** is not class org.apache.hadoop.hbase.io.ImmutableBytesWritable

Note that whichever phase (map or reduce) produces the final output, its output key and value types must be <ImmutableBytesWritable, KeyValue> or <ImmutableBytesWritable, Put>; changing the output types accordingly fixes the error. A sketch is shown below.
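
As an illustration only (the input line format, column family cf and qualifier q are made up), a mapper that feeds an HFile bulk load could look like this:

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// The map output key must be ImmutableBytesWritable and the value Put (or
// KeyValue); with other key types the partitioning step fails with the
// "wrong key class ... is not class
// org.apache.hadoop.hbase.io.ImmutableBytesWritable" error shown above.
public class HFileMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assume each input line is "rowkey<TAB>value"; purely illustrative.
        String[] fields = value.toString().split("\t", 2);
        byte[] row = Bytes.toBytes(fields[0]);

        Put put = new Put(row);
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(fields[1]));

        context.write(new ImmutableBytesWritable(row), put);
    }
}

In the driver, HFileOutputFormat.configureIncrementalLoad(job, table) then wires up the partitioner and reducer that match these output types.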

8. If a region server fails to start when bringing up the HBase cluster and the log reports errors like the ones below, the cluster clocks are out of sync; synchronizing the time fixes it.

FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 10.210.78.22,60020,1344329095415: Unhandled exception: org.apache.hadoop.hbase.ClockOutOfSyncException: Server 10.210.78.22,60020,1344329095415 has been rejected; Reported time is too far out of sync with master. Time difference of 90358ms > max allowed of 30000ms
org.apache.hadoop.hbase.ClockOutOfSyncException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server 10.210.78.22,60020,1344329095415 has been rejected; Reported time is too far out of sync with master. Time difference of 90358ms > max allowed of 30000ms
......
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server 10.210.78.22,60020,1344329095415 has been rejected; Reported time is too far out of sync with master. Time difference of 90358ms > max allowed of 30000ms

Running the following command on each node synchronizes the clock against public NTP servers:

/usr/sbin/ntpdate tick.ucla.edu tock.gpsclock.com ntp.nasa.gov timekeeper.isi.edu usno.pa-x.dec.com;/sbin/hwclock --systohc > /dev/null
