org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException

1. Recently one of our HBase region servers has been going down repeatedly. The log on that node shows the following error:

2014-02-22 01:52:02,194 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Close and delete failed

org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)

I spent a long time looking for a problem on the HBase side without finding one. Following some online references, I then checked the Hadoop (NameNode) log, which contained:

2014-02-22 01:52:00,935 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

2014-02-22 01:52:00,936 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411, DFSClient_hb_rs_testhd3,60020,1392948100268, null) from 172.72.101.213:59979: error: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)

The two logs have nearly identical entries at the same timestamps, which confirms that the HBase failure was caused by Hadoop. The fix is as follows:

Solution: raise the xcievers parameter. The default is 4096; change it to 8192:

vi /home/dwhftp/opt/hadoop/conf/hdfs-site.xml

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>
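To sanity-check that the edit actually stuck before restarting the DataNode, a small sketch (my own helper, not part of Hadoop; assumes Python is available on the node and uses the config path from above) parses hdfs-site.xml and prints the configured value:

```python
import xml.etree.ElementTree as ET

def get_hdfs_property(conf_path, prop_name):
    """Return the value of a named <property> from an hdfs-site.xml file, or None."""
    root = ET.parse(conf_path).getroot()
    for prop in root.iter("property"):
        if prop.findtext("name") == prop_name:
            return prop.findtext("value")
    return None

# Path taken from this cluster's setup; adjust for yours.
# print(get_hdfs_property("/home/dwhftp/opt/hadoop/conf/hdfs-site.xml",
#                         "dfs.datanode.max.xcievers"))
```

Note that the new limit only takes effect after the DataNode process is restarted.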

Notes on the dfs.datanode.max.xcievers parameter

An HDFS DataNode has an upper bound on the number of files it can serve at the same time. The parameter that controls it is called xcievers (yes, the Hadoop authors misspelled the word). Before loading any data, make sure you have set the xceivers parameter in conf/hdfs-site.xml to at least 4096:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
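Since every open HDFS file consumes one of these DataNode threads, a rough back-of-envelope estimate (my own sketch, not an official formula) of how many a write-heavy HBase cluster needs can be derived from the region count. All parameter names and numbers below are illustrative assumptions:

```python
def estimated_xceivers(regions, column_families, storefiles_per_family,
                       datanodes, replication=3):
    """Rough per-DataNode estimate of concurrent open-file threads for HBase.

    Each region keeps its store files open; replicas spread the load
    across the DataNodes in the cluster.
    """
    open_files = regions * column_families * storefiles_per_family
    return open_files * replication // datanodes

# Hypothetical cluster: 100 regions, 3 column families, ~4 store files
# per family, 5 DataNodes.
print(estimated_xceivers(regions=100, column_families=3,
                         storefiles_per_family=4, datanodes=5))
```

If the estimate approaches the configured limit, raise dfs.datanode.max.xcievers before the "No lease" errors above start appearing.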
