HBase RegionServer shuts itself down after a long GC pause makes ZooKeeper declare it dead

A record of why our HBase regionservers kept dying.

Logs

The log contents are as follows:

2018-05-29 10:04:20,809 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 4 attempts
2018-05-29 10:04:20,809 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
2018-05-29 10:12:51,889 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
2018-05-29 11:14:07,761 ERROR [regionserver60020] wal.ProtobufLogWriter: Got IOException while writing trailer
2018-05-29 11:14:07,762 ERROR [regionserver60020] regionserver.HRegionServer: Close and delete failed
2018-05-29 11:14:23,608 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper getChildren failed after 4 attempts
2018-05-29 11:14:23,639 ERROR [regionserver60020] zookeeper.ZooKeeperWatcher: regionserver:60020-0x163a9b30c340106, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 11:14:43,443 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 4 attempts
2018-05-29 11:14:43,444 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
2018-05-29 11:40:59,161 ERROR [RS_OPEN_REGION-dev-hadoop6:60020-2] zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2018-05-29 11:40:59,161 ERROR [RS_OPEN_REGION-dev-hadoop6:60020-2] zookeeper.ZooKeeperWatcher: regionserver:60020-0x263a980386c01cf, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 11:40:59,161 ERROR [RS_OPEN_REGION-dev-hadoop6:60020-2] handler.OpenRegionHandler: Failed transitioning node ubas:log_ehire_resume_view,,1518176163274.3bebe8bfacdee0fab80667bde2beef9a. from OPENING to OPENED -- closing region
2018-05-29 11:41:32,227 ERROR [PriorityRpcServer.handler=2,queue=0,port=60020] zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2018-05-29 11:41:32,227 ERROR [PriorityRpcServer.handler=2,queue=0,port=60020] zookeeper.ZooKeeperWatcher: regionserver:60020-0x263a980386c01cf, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 11:41:32,227 ERROR [PriorityRpcServer.handler=2,queue=0,port=60020] regionserver.HRegionServer: Can't retrieve recovering state from zookeeper
2018-05-29 11:41:32,227 ERROR [PriorityRpcServer.handler=2,queue=0,port=60020] ipc.RpcServer: Unexpected throwable object
2018-05-29 11:41:32,314 ERROR [RS_OPEN_REGION-dev-hadoop6:60020-0] zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2018-05-29 11:41:32,314 ERROR [RS_OPEN_REGION-dev-hadoop6:60020-0] zookeeper.ZooKeeperWatcher: regionserver:60020-0x263a980386c01cf, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 11:41:55,069 ERROR [PriorityRpcServer.handler=8,queue=0,port=60020] zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2018-05-29 11:41:55,069 ERROR [PriorityRpcServer.handler=8,queue=0,port=60020] zookeeper.ZooKeeperWatcher: regionserver:60020-0x263a980386c01cf, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 11:41:55,069 ERROR [PriorityRpcServer.handler=8,queue=0,port=60020] regionserver.HRegionServer: Can't retrieve recovering state from zookeeper
2018-05-29 11:41:55,069 ERROR [PriorityRpcServer.handler=8,queue=0,port=60020] ipc.RpcServer: Unexpected throwable object
2018-05-29 11:41:55,070 ERROR [RS_OPEN_REGION-dev-hadoop6:60020-1] zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2018-05-29 11:41:55,070 ERROR [RS_OPEN_REGION-dev-hadoop6:60020-1] zookeeper.ZooKeeperWatcher: regionserver:60020-0x263a980386c01cf, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 11:41:57,020 ERROR [regionserver60020] wal.ProtobufLogWriter: Got IOException while writing trailer
2018-05-29 11:41:57,021 ERROR [regionserver60020] regionserver.HRegionServer: Close and delete failed
2018-05-29 11:42:12,145 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper getChildren failed after 4 attempts
2018-05-29 11:42:12,145 ERROR [regionserver60020] zookeeper.ZooKeeperWatcher: regionserver:60020-0x263a980386c01cf, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 11:42:27,410 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 4 attempts
2018-05-29 11:42:27,422 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
2018-05-29 14:19:22,210 ERROR [B.DefaultRpcServer.handler=56,queue=2,port=60020] observer.AggrRegionObserver: tracker Coprocessor Error
2018-05-29 14:19:22,350 ERROR [RS_CLOSE_REGION-dev-hadoop6:60020-0] regionserver.HRegion: Memstore size is 1269536
2018-05-29 14:19:22,543 ERROR [regionserver60020] wal.ProtobufLogWriter: Got IOException while writing trailer
2018-05-29 14:19:22,544 ERROR [regionserver60020] regionserver.HRegionServer: Close and delete failed
2018-05-29 14:19:47,151 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper getChildren failed after 4 attempts
2018-05-29 14:19:47,151 ERROR [regionserver60020] zookeeper.ZooKeeperWatcher: regionserver:60020-0x163a9b30c3401a1, quorum=dev-hadoop4:2181,dev-hadoop5:2181,dev-hadoop6:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
2018-05-29 14:20:02,153 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper delete failed after 4 attempts
2018-05-29 14:20:02,154 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting

Official Explanation

http://hbase.apache.org/0.94/book/important_configurations.html
2.5.2.1.1. zookeeper.session.timeout

The default timeout is three minutes (specified in milliseconds). This means that if a server crashes, it will be three minutes before the Master notices the crash and starts recovery. You might like to tune the timeout down to a minute or even less so the Master notices failures sooner. Before changing this value, be sure you have your JVM garbage collection configuration under control; otherwise, a long garbage collection that lasts beyond the ZooKeeper session timeout will take out your RegionServer. (You might be fine with this – you probably want recovery to start on the server if a RegionServer has been in GC for a long period of time.)

In short, a long GC pause gets the regionserver killed. Farther down the same page the docs even poke fun at this as a noob question, since the default timeout is already a generous three minutes. In practice, though, three minutes was not enough to keep our regionserver alive: a single GC pause could run to roughly 150 s.
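
To confirm that GC really is the trigger, it helps to turn on GC logging for the regionserver JVM first. A minimal sketch, assuming a JDK 7/8-era JVM (these flags predate JDK 9's unified logging) and a placeholder log path:

    # hbase-env.sh -- log every GC event with wall-clock timestamps, plus the
    # total time application threads were stopped at each safepoint
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
      -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
      -XX:+PrintGCApplicationStoppedTime \
      -Xloggc:/var/log/hbase/gc-regionserver.log"

A pause in the 150 s range will then show up as a "Total time for which application threads were stopped" entry whose timestamp can be matched against the session-expiry errors above.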

Adding the Configuration

hbase-site.xml:

    <property>
      <name>zookeeper.session.timeout</name>
      <value>300000</value>
    </property>
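
300000 ms is 5 minutes, comfortably above the ~150 s pauses seen here. One caveat worth checking: ZooKeeper servers negotiate each session's timeout down to their maxSessionTimeout, which defaults to 20 * tickTime (40 s with the stock tickTime of 2000 ms), so the 300000 ms that HBase requests can be silently capped. A sketch of the matching server-side change, assuming an external ZooKeeper quorum with default settings:

    # zoo.cfg on each quorum member (dev-hadoop4/5/6)
    tickTime=2000
    # The default maxSessionTimeout of 20 * tickTime = 40000 ms would
    # silently cap the 300000 ms requested via zookeeper.session.timeout.
    maxSessionTimeout=300000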

Postscript

This fix only covers HBase node deaths caused by long GC pauses: if your logs look like the ones above and are accompanied by long GCs, this setting will keep the node from shutting itself down. In production, though, you should still avoid such long GC pauses in the first place (a starting point is sketched below). The logs above came from a development environment where the whole team's es, redis, hadoop, and storm all ran on the same machines, so overly long GCs were a regular occurrence.
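
If you want to shorten the pauses themselves rather than just tolerate them, the usual starting point on a JDK 7/8-era HBase is CMS with an early initiating threshold, so the heap never fills up and falls back to a long stop-the-world full GC. A minimal sketch; the heap size is a placeholder that must be sized for your actual workload:

    # hbase-env.sh -- use CMS and start concurrent cycles at 70% old-gen
    # occupancy instead of letting the JVM guess
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
      -Xms8g -Xmx8g \
      -XX:+UseConcMarkSweepGC \
      -XX:CMSInitiatingOccupancyFraction=70 \
      -XX:+UseCMSInitiatingOccupancyOnly"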

PS: in the scramble for resources, es always hits the hardest.....
