HBase Configuration Optimization (reposted)

We have been implementing our product to support real-time queries on HBase (version 0.94.0 with hadoop-1.0.0), and to improve the performance of read and write operations I have tuned the Hadoop/HBase configuration.

I will try to summarize all the exceptions encountered during CRUD operations, along with the corresponding configuration changes that resolve them.

  • LeaseExpiredException/UnknownScannerException/ScannerTimeoutException:
These occur mostly when the client is making very slow calls to the server: by the time the next() batch call of the scan reaches the region server, the scanner's lease has already expired and the exception is thrown (see the sketch after the configuration below). To fix this, increase the lease timeout.
More information can be found here.
Exception: org.apache.hadoop.hbase.regionserver.LeaseException: lease '*************' does not exist
Changes in conf/hbase-site.xml:
<property>
   <name>hbase.regionserver.lease.period</name>
   <value>1200000</value>
</property>

<property>
   <name>hbase.rpc.timeout</name>
   <value>1200000</value>
</property>
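On the client side, the usual pattern that triggers these exceptions is slow per-row processing between next() calls. A minimal sketch (the table name and the processing step are hypothetical) showing where the lease comes into play:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class SlowScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Keep the client-side RPC timeout in line with the server-side setting.
    conf.setLong("hbase.rpc.timeout", 1200000L);
    HTable table = new HTable(conf, "my_table"); // hypothetical table name
    ResultScanner scanner = table.getScanner(new Scan());
    try {
      for (Result row : scanner) {
        // If the work here regularly takes longer than
        // hbase.regionserver.lease.period, the following next() call can
        // fail with a LeaseException / ScannerTimeoutException.
        process(row);
      }
    } finally {
      scanner.close();
      table.close();
    }
  }

  private static void process(Result row) {
    // placeholder for slow per-row work
  }
}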


  • ZooKeeper Session Timeout:
A session timeout occurs when the server does not hear any heartbeat from the client within the timeout interval. It should be kept low so that failed servers are detected quickly, and the JVM GC should be configured accordingly so that pauses do not exceed it. By default it is 60 seconds; I configured it to 20 seconds. More information can be found here:
Changes in conf/hbase-site.xml:
<property>
    <name>zookeeper.session.timeout</name>
    <value>20000</value>
</property>


  • RegionServer Handler Count:
The number of RPC listener threads that answer incoming requests at each region server. It should be set according to the memory allocated to the region servers, otherwise out-of-memory exceptions can occur. The default of 10 is deliberately low, to prevent users from killing their region servers when using large write buffers with a high number of concurrent clients. If you are using coprocessors, it should be increased. I configured it to 50 (see the sizing note after the configuration below). More information can be found here:
Changes in conf/hbase-site.xml:
 <property>
    <name>hbase.regionserver.handler.count</name>
    <value>50</value>
  </property>
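As a rough sizing check (assuming the default hbase.client.write.buffer of 2 MB), each of the 50 handlers can be holding a full write buffer at once, i.e. roughly 50 × 2 MB = 100 MB of in-flight request payload per region server, which has to fit comfortably inside the heap configured further below.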


  • ZooKeeper Max Client Connections:
The number of concurrent connections a single client may make to a single member of the ZooKeeper ensemble. It should be set high to avoid ZooKeeper connection-loss issues. By default it is set to 300; I configured it to 1000. A common client-side cause of this error is sketched after the configuration below.
Exception: 
org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.zookeeper.KeeperException$ConnectionLossException:KeeperErrorCode = ConnectionLoss for /hbase
Changes in conf/hbase-site.xml:
<property>
   <name>hbase.zookeeper.property.maxClientCnxns</name>
   <value>1000</value>
 </property>
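Beyond raising the limit, a frequent cause of this exception is creating a fresh HBaseConfiguration for every request, which opens a new ZooKeeper connection each time. A minimal sketch, assuming a hypothetical helper class, of sharing one Configuration so that HTable instances reuse the same underlying connection:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class TableFactory {
  // One shared Configuration: in 0.94, HTables built from the same
  // Configuration reuse the cached connection (and its ZooKeeper session)
  // instead of opening a new one per request.
  private static final Configuration CONF = HBaseConfiguration.create();

  public static HTable open(String tableName) throws Exception {
    return new HTable(CONF, tableName);
  }
}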


  • Scanner Caching Size:
This tells the scanner how many rows to fetch at a time from the server. A higher caching value makes the scanner faster but eats up more memory, so it should be configured based on the allocated memory, and it should not be set so high that a single next() call takes longer than the lease timeout. By default it is set to 1, which gives very poor results. We configured it to 100; it can also be overridden per scan, as sketched after the configuration below.
Changes in conf/hbase-site.xml:
<property>
    <name>hbase.client.scanner.caching</name>
    <value>100</value>
 </property>
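The same value can be set per scan from the client, which overrides the site-wide default. A minimal sketch (the table name is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class CachedScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "my_table"); // hypothetical table name
    Scan scan = new Scan();
    scan.setCaching(100); // fetch 100 rows per RPC instead of the default of 1
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result row : scanner) {
        System.out.println(Bytes.toString(row.getRow()));
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}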


  • Maximum HStore file Size:
If any one of a column family's HStoreFiles has grown to exceed this value, the hosting HRegion is split in two. In older versions the default was 256 MB, but from 0.94.0 onwards it is 10 GB. It is better to keep it high so that no split occurs in the middle of a CRUD operation. If performance with such large regions is poor, split them manually according to your needs (see the sketch after the configuration below). More information can be found here:
Changes in conf/hbase-site.xml:
<property>
    <name>hbase.hregion.max.filesize</name>
    <value>10737418240</value>
 </property>
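A manual split can be requested from the hbase shell or programmatically. A minimal sketch using HBaseAdmin, with a hypothetical table name and split key:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ManualSplit {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Ask the master to split 'my_table' (hypothetical) at row key "row-5000".
    // The request is asynchronous; the split happens in the background.
    admin.split("my_table", "row-5000");
    admin.close();
  }
}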


  • Major Compaction:
There are two types of compactions: minor and major. Minor compactions usually pick up a couple of the smaller adjacent StoreFiles and rewrite them as one; they do not drop deletes or expired cells, only major compactions do. After a major compaction runs there will be a single StoreFile per Store, which usually helps performance, but it rewrites all of the Store's data, so it is better to run major compactions manually. By default the period is 1 day. I set this property to 0 to disable automatic major compaction and run it manually from the hbase shell with "major_compact 'tableName'" (or from code, as sketched after the configuration below).
Changes in conf/hbase-site.xml:
<property> 
    <name>hbase.hregion.majorcompaction</name>
   <value>0</value>
 </property>
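The same manual major compaction can be triggered from code, for example from a nightly job, instead of the shell. A minimal sketch with a hypothetical table name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class NightlyMajorCompaction {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Equivalent to "major_compact 'my_table'" in the hbase shell;
    // the request is queued asynchronously on the region servers.
    admin.majorCompact("my_table");
    admin.close();
  }
}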


  • Memstore Flush Size:
All writes/updates go to the memstore first; when the memstore hits the configured flush size, it is flushed to disk. Mutate operations are blocked when either blockMultiplier × flushSize is reached for a region or the blockingStoreFiles limit is reached in a store.
To stop the blocking of update operations, we increased blockMultiplier (default 2) to 4 and blockingStoreFiles (the number of StoreFiles allowed in a store, default 7) to 30; see the note after the configuration below.
More information can be found here:
Changes in conf/hbase-site.xml: 
<property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>4</value>
  </property>
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>30</value>
  </property>
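With these values, a region's updates are blocked only when its memstores reach 4 × 128 MB = 512 MB (instead of 2 × 128 MB = 256 MB by default), or when a single store accumulates 30 StoreFiles, which gives compactions more headroom to catch up before writers stall.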

  • HeapSize of HBase:
The maximum amount of heap that can be used by HBase is 1000 MB by default, which is very low. We configured it to 4000 MB, and 8000 MB for the region servers.
Exception: OutOfMemoryExceptions
Changes in conf/hbase-env.sh:
export HBASE_HEAPSIZE=4000
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms8000m -Xmx8000m"

  • GC Configuration: 
A bigger heap means longer GC pauses. We configured the heap size to 8 GB, so if a full GC takes 10 sec/GB it can pause for 80 seconds, which can exceed the ZooKeeper session timeout. During a long GC pause the server cannot send heartbeats, so the others assume it is dead. More information can be found here.
Exception: java.lang.OutOfMemoryError: GC overhead limit exceeded 
To avoid long GC pauses, we configured the GC as follows:
Changes in conf/hbase-env.sh: 
export HBASE_OPTS="$HBASE_OPTS -server -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:NewSize=64m -XX:MaxNewSize=64m -XX:+CMSIncrementalMode -Djava.net.preferIPv4Stack=true"

  • Datanode Xcievers:
An upper bound on the number of files that a DataNode will serve at any one time. If it is not configured properly, we can get exceptions about missing blocks because the xciever limit was exceeded. More information can be found here:
Exception: Could not obtain block blk_***_**** from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
Changes in conf/hdfs-site.xml:
 <property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2048</value>
</property>