Of the two pre-splitting approaches introduced below, I have used the second one in production; it works well and solved our write-hotspot problem.
In HBase, a table is divided into 1..n regions, each hosted by a RegionServer. A region has two important attributes, StartKey and EndKey, which define the range of rowkeys it serves. When we read or write data, the rowkey is matched against these start-end ranges to locate the target region, and the request is routed there. Roughly speaking, it is a bit like grouping people by age: 1-15 are children, 16-39 are young adults, 40-64 are middle-aged, 65 and up are seniors (made-up numbers, just for illustration); a person joins whichever queue their age falls into.
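To make that range lookup concrete, here is a toy sketch (plain Java, not HBase's actual client code) of how a rowkey is matched to the region whose range contains it, using a map from start key to region:

    import java.util.TreeMap;

    // Toy model: map each region's start key to its name and pick the region whose
    // range contains the rowkey (floorEntry = greatest start key <= rowkey).
    public class RegionLookupSketch {
        public static void main(String[] args) {
            TreeMap<String, String> regionsByStartKey = new TreeMap<String, String>();
            regionsByStartKey.put("", "region-1");   // first region has no lower bound
            regionsByStartKey.put("g", "region-2");
            regionsByStartKey.put("p", "region-3");  // last region has no upper bound

            String rowKey = "k100";
            String region = regionsByStartKey.floorEntry(rowKey).getValue();
            System.out.println(rowKey + " -> " + region); // prints: k100 -> region-2
        }
    }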
By default, when we create a table through HBaseAdmin with just a TableDescriptor, the table has a single region. That region is in a "primordial" state: its start and end keys are unbounded, so it accepts every rowkey and everything gets stuffed into it. As data keeps arriving and the region's size grows past a threshold, HBase decides the region should not take any more data, finds a midKey, and splits it in two. This process is called a region split. The midKey becomes the boundary between the two regions: the left region N has no lower bound, the right region M has no upper bound; rowkeys below midKey go to N, rowkeys at or above midKey go to M. How is the midKey found? That involves quite a few details we will not go into here; the simplest mental model is "the rowkey of the row at roughly totalRows / 2", although in practice it is somewhat more involved.
If we just go with this default, create the table, and keep putting data into it, and worse, our rowkeys are monotonically increasing, the situation gets ugly. The drawbacks are obvious.
First, write hotspotting: we always write to the region with the largest start key, because each new rowkey is larger than all previous ones and HBase sorts rowkeys in ascending order, so every write is routed to the region with no upper bound.
Second, because of that hotspot, only the region with the largest start key ever receives writes; the regions produced by earlier splits never get new data again, as if banished to the cold palace, and sit there half full. That distribution is also undesirable.
Third, in write-heavy scenarios the data grows quickly and splits become frequent. Since a split is fairly expensive in time and resources, we would rather it did not happen often. And so on.
Seeing these drawbacks, we know that in a clustered environment, to get good parallelism, we want good load balance so that every node handles an equal share of requests. We also want regions not to split often, because a split pauses the server for a while. How can we achieve that?
Random hashing plus pre-splitting. The two combined work quite well: pre-splitting creates a number of regions up front, each maintaining its own start-end keys, and random hashing makes writes hit those pre-built regions evenly. That removes the drawbacks above and improves performance considerably.
Two approaches are presented here: hash and partition.
1. Hash: the rowkey is prefixed with a pseudo-random string, generated with SHA, MD5, or similar. As long as the start-end key ranges managed by the regions are reasonably random, the write hotspot problem is solved. For example:
    long currentId = 1L;
    byte[] rowkey = Bytes.add(
            MD5Hash.getMD5AsHex(Bytes.toBytes(currentId)).substring(0, 8).getBytes(),
            Bytes.toBytes(currentId));
Assuming the rowkey was originally an auto-incrementing long, we hash the id, take part of the hash as a prefix, and append the id itself as bytes to form the rowkey; this produces effectively random rowkeys. Given this rowkey design, how do we pre-split the table?
1. Sampling: randomly generate a number of rowkeys and collect the samples in a sorted, ascending collection.
2. Divide that collection evenly according to the number of pre-split regions; the boundaries are the splitKeys.
3. HBaseAdmin.createTable(HTableDescriptor tableDescriptor, byte[][] splitkeys) accepts these splitKeys, i.e. the rowkey boundaries between regions.
First, create a split calculator that derives a suitable set of splitKeys from the sampled data:
public class HashChoreWoker implements SplitKeysCalculator {
    // number of records to sample
    private int baseRecord;
    // rowkey generator
    private RowKeyGenerator rkGen;
    // sample count divided by the region count: how many samples fall into each region slot
    private int splitKeysBase;
    // number of split keys
    private int splitKeysNumber;
    // split keys computed from the sample
    private byte[][] splitKeys;

    public HashChoreWoker(int baseRecord, int prepareRegions) {
        this.baseRecord = baseRecord;
        // instantiate the rowkey generator
        rkGen = new HashRowKeyGenerator();
        splitKeysNumber = prepareRegions - 1;
        splitKeysBase = baseRecord / prepareRegions;
    }

    public byte[][] calcSplitKeys() {
        splitKeys = new byte[splitKeysNumber][];
        // keep the samples in a TreeSet so they are already sorted
        TreeSet<byte[]> rows = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
        for (int i = 0; i < baseRecord; i++) {
            rows.add(rkGen.nextId());
        }
        int pointer = 0;
        Iterator<byte[]> rowKeyIter = rows.iterator();
        int index = 0;
        while (rowKeyIter.hasNext()) {
            byte[] tempRow = rowKeyIter.next();
            rowKeyIter.remove();
            // every splitKeysBase-th sample becomes a split key
            if ((pointer != 0) && (pointer % splitKeysBase == 0)) {
                if (index < splitKeysNumber) {
                    splitKeys[index] = tempRow;
                    index++;
                }
            }
            pointer++;
        }
        rows.clear();
        rows = null;
        return splitKeys;
    }
}
The RowKeyGenerator interface and its implementation:
// the interface
public interface RowKeyGenerator {
    byte[] nextId();
}

// an implementation
public class HashRowKeyGenerator implements RowKeyGenerator {
    private long currentId = 1;
    private long currentTime = System.currentTimeMillis();
    private Random random = new Random();

    public byte[] nextId() {
        try {
            currentTime += random.nextInt(1000);
            byte[] lowT = Bytes.copy(Bytes.toBytes(currentTime), 4, 4);
            byte[] lowU = Bytes.copy(Bytes.toBytes(currentId), 4, 4);
            // 8-character MD5 prefix followed by the original id
            return Bytes.add(MD5Hash.getMD5AsHex(Bytes.add(lowU, lowT)).substring(0, 8).getBytes(),
                    Bytes.toBytes(currentId));
        } finally {
            currentId++;
        }
    }
}
@Test
public void testHashAndCreateTable() throws Exception {
    HashChoreWoker worker = new HashChoreWoker(1000000, 10);
    byte[][] splitKeys = worker.calcSplitKeys();
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    TableName tableName = TableName.valueOf("hash_split_table");
    if (admin.tableExists(tableName)) {
        try {
            admin.disableTable(tableName);
        } catch (Exception e) {
            // ignore: the table may already be disabled
        }
        admin.deleteTable(tableName);
    }
    HTableDescriptor tableDesc = new HTableDescriptor(tableName);
    HColumnDescriptor columnDesc = new HColumnDescriptor(Bytes.toBytes("info"));
    columnDesc.setMaxVersions(1);
    tableDesc.addFamily(columnDesc);
    admin.createTable(tableDesc, splitKeys);
    admin.close();
}
With that, the regions have been pre-created using the hash approach. From now on, data must be inserted with rowkeys produced by the same rowkey generator. If you are interested, run a few experiments: insert some data and look at how it is distributed.
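For instance, a minimal sketch of such an experiment, assuming the hash_split_table created above with column family info (the qualifier c1 and the values are just placeholders):

    HashRowKeyGenerator rkGen = new HashRowKeyGenerator();
    HTable table = new HTable(HBaseConfiguration.create(), "hash_split_table");
    for (int i = 0; i < 1000; i++) {
        byte[] rowkey = rkGen.nextId();   // same generator that produced the sampling for the pre-split
        Put put = new Put(rowkey);
        put.add(Bytes.toBytes("info"), Bytes.toBytes("c1"), Bytes.toBytes("value-" + i));
        table.put(put);
    }
    table.close();

Because the rowkeys come from the same generator used for sampling, the writes should spread roughly evenly across the pre-built regions.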
2. Partition: as the name suggests, this is a partitioning scheme, somewhat like a partitioner in MapReduce. A long integer is used as the partition number, and each region manages the data of one partition. When generating a rowkey, take the id modulo the partition count and prepend that to the id itself. This is simpler: no sampling is needed, and the splitKeys are simply the partition numbers. Straight to the code:
public class PartitionRowKeyManager implements RowKeyGenerator, SplitKeysCalculator {
    public static final int DEFAULT_PARTITION_AMOUNT = 20;
    private long currentId = 1;
    private int partition = DEFAULT_PARTITION_AMOUNT;

    public void setPartition(int partition) {
        this.partition = partition;
    }

    public byte[] nextId() {
        try {
            long partitionId = currentId % partition;
            return Bytes.add(Bytes.toBytes(partitionId),
                    Bytes.toBytes(currentId));
        } finally {
            currentId++;
        }
    }

    public byte[][] calcSplitKeys() {
        byte[][] splitKeys = new byte[partition - 1][];
        for (int i = 1; i < partition; i++) {
            splitKeys[i - 1] = Bytes.toBytes((long) i);
        }
        return splitKeys;
    }
}
The calcSplitKeys method here is trivial: each splitKey is simply a partition number. Now the test:
@Test
public void testPartitionAndCreateTable() throws Exception {
    PartitionRowKeyManager rkManager = new PartitionRowKeyManager();
    // pre-create only 10 partitions
    rkManager.setPartition(10);
    byte[][] splitKeys = rkManager.calcSplitKeys();
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    TableName tableName = TableName.valueOf("partition_split_table");
    if (admin.tableExists(tableName)) {
        try {
            admin.disableTable(tableName);
        } catch (Exception e) {
            // ignore: the table may already be disabled
        }
        admin.deleteTable(tableName);
    }
    HTableDescriptor tableDesc = new HTableDescriptor(tableName);
    HColumnDescriptor columnDesc = new HColumnDescriptor(Bytes.toBytes("info"));
    columnDesc.setMaxVersions(1);
    tableDesc.addFamily(columnDesc);
    admin.createTable(tableDesc, splitKeys);
    admin.close();
}
When writes are load-balanced through partitioning, the rowkey must of course also be generated by taking the id modulo the current number of regions. Again, feel free to experiment and check how the data is distributed after inserting.
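Because both the write and the read side compute the same id % partition prefix, a record can be located again from its id alone. A minimal sketch, assuming the partition_split_table created above with 10 partitions and family info (the id and column name are placeholders):

    Configuration config = HBaseConfiguration.create();
    HTable table = new HTable(config, "partition_split_table");
    long id = 12345L;
    int partition = 10;  // must match the partition count used when pre-splitting the table
    byte[] rowkey = Bytes.add(Bytes.toBytes(id % partition), Bytes.toBytes(id));

    Put put = new Put(rowkey);
    put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("user-" + id));
    table.put(put);

    // reading it back later only needs the id: the prefix is recomputed the same way
    Result r = table.get(new Get(rowkey));
    System.out.println(Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
    table.close();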
A side note: if the original id is a sequentially growing number, you can keep it in a store (a traditional database works, so does Redis) and, each time you fetch it, advance the stored value by roughly 1000. Ids are then handed out from memory, and once the in-memory block of 1000 is exhausted, the next block is loaded. This is somewhat like a sequence in Oracle.
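A minimal sketch of that block-allocation idea; the AtomicLong here is only a stand-in for the real durable counter, which in practice would be a database row or a Redis counter advanced atomically by the block size:

    import java.util.concurrent.atomic.AtomicLong;

    // Reserve ids in chunks of BLOCK_SIZE from a durable counter, then hand them
    // out from memory (similar to an Oracle sequence cache).
    public class BlockIdAllocator {
        private static final long BLOCK_SIZE = 1000;

        // stand-in for the persistent counter (database row, Redis counter, ...)
        private final AtomicLong persistedCounter = new AtomicLong(0);

        private long next;      // next id to hand out
        private long blockEnd;  // first id beyond the currently reserved block

        public synchronized long nextId() {
            if (next >= blockEnd) {                      // current block used up: reserve a new one
                long end = persistedCounter.addAndGet(BLOCK_SIZE);
                next = end - BLOCK_SIZE + 1;             // this process owns ids (end-BLOCK_SIZE, end]
                blockEnd = end + 1;
            }
            return next++;
        }

        public static void main(String[] args) {
            BlockIdAllocator ids = new BlockIdAllocator();
            System.out.println(ids.nextId()); // 1
            System.out.println(ids.nextId()); // 2
        }
    }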
Random distribution plus pre-splitting is not a set-and-forget solution either. Data keeps growing, and as time passes the pre-built regions may no longer hold it all; further splits will happen and the same performance cost reappears. So we still need to plan for the data growth rate, monitor the data and do periodic maintenance, analyze whether partitions should be split further by hand, or, in more serious cases, create a new table with a larger pre-split and migrate the data into it.
The code below is adapted from another blog post. My own work environment is an intranet, completely air-gapped from the internet, so I cannot paste my own code here. I use the second approach; the following is for reference:
API usage:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.kktest.hbase.HashChoreWoker;
import com.kktest.hbase.HashRowKeyGenerator;
import com.kktest.hbase.RowKeyGenerator;
import com.kktest.hbase.BitUtils;
/**
 * HBase client
 *
 * @author kuang hj
 */
@SuppressWarnings("all")
public class HBaseClient {
    private static Logger logger = LoggerFactory.getLogger(HBaseClient.class);
    private static Configuration config;
    static {
        config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum",
                "192.168.1.100:2181,192.168.1.101:2181,192.168.1.103:2181");
    }
    /**
     * Create a pre-split table (hash_split_table) based on random hashing.
     *
     * @throws Exception
     */
    public static void testHashAndCreateTable(String tableNameTmp,
            String columnFamily) throws Exception {
        // sample-based hashing; 10 means 10 pre-split regions
        HashChoreWoker worker = new HashChoreWoker(1000000, 10);
        byte[][] splitKeys = worker.calcSplitKeys();
        HBaseAdmin admin = new HBaseAdmin(config);
        TableName tableName = TableName.valueOf(tableNameTmp);
        if (admin.tableExists(tableName)) {
            try {
                admin.disableTable(tableName);
            } catch (Exception e) {
                // ignore: the table may already be disabled
            }
            admin.deleteTable(tableName);
        }
        HTableDescriptor tableDesc = new HTableDescriptor(tableName);
        HColumnDescriptor columnDesc = new HColumnDescriptor(
                Bytes.toBytes(columnFamily));
        columnDesc.setMaxVersions(1);
        tableDesc.addFamily(columnDesc);
        admin.createTable(tableDesc, splitKeys);
        admin.close();
    }
    /**
     * @Title: queryData
     * @Description: query a row from HBase
     * @author kuang hj
     * @param tableName
     *            table name
     * @param rowkey
     *            rowkey
     * @return list of "key:value" strings for the matched row
     * @throws Exception
     */
    @SuppressWarnings("all")
    public static ArrayList<String> queryData(String tableName, String rowkey)
            throws Exception {
        ArrayList<String> list = new ArrayList<String>();
        logger.info("query start");
        HTable table = new HTable(config, tableName);
        Get get = new Get(rowkey.getBytes()); // look up by rowkey
        Result r = table.get(get);
        logger.info("query end");
        KeyValue[] kv = r.raw();
        for (int i = 0; i < kv.length; i++) {
            // one iteration per cell
            String key = kv[i].getKeyString();
            String value = Bytes.toString(kv[i].getValue());
            // collect the result
            list.add(key + ":" + value);
        }
        table.close();
        return list;
    }
    /**
     * Insert rows into a table.
     *
     * @param tableName
     * @param rowkey
     */
    public static void insertData(String tableName, String rowkey) {
        HTable table = null;
        try {
            table = new HTable(config, tableName);
            // One Put represents one row; a second Put would be another row. Each row
            // has a unique rowkey, which is the value passed to the Put constructor.
            for (int i = 1; i < 100; i++) {
                byte[] result = getNumRowkey(rowkey, i);
                Put put = new Put(result);
                // first column of this row; the family ("info") must match the one
                // the table was created with
                put.add("info".getBytes(), "name".getBytes(),
                        ("aaa" + i).getBytes());
                // second column of this row
                put.add("info".getBytes(), "age".getBytes(),
                        ("bbb" + i).getBytes());
                // third column of this row
                put.add("info".getBytes(), "address".getBytes(),
                        ("ccc" + i).getBytes());
                table.put(put);
            }
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    }
    private static byte[] getNewRowkey(String rowkey) {
        byte[] result = null;
        RowKeyGenerator rkGen = new HashRowKeyGenerator();
        byte[] splitKeys = rkGen.nextId();
        byte[] rowkeytmp = rowkey.getBytes();
        result = new byte[splitKeys.length + rowkeytmp.length];
        System.arraycopy(splitKeys, 0, result, 0, splitKeys.length);
        System.arraycopy(rowkeytmp, 0, result, splitKeys.length,
                rowkeytmp.length);
        return result;
    }

    public static void main(String[] args) {
        RowKeyGenerator rkGen = new HashRowKeyGenerator();
        byte[] splitKeys = rkGen.nextId();
        // print the generated rowkey in a readable form
        System.out.println(Bytes.toStringBinary(splitKeys));
    }

    private static byte[] getNumRowkey(String rowkey, int i) {
        byte[] result = null;
        RowKeyGenerator rkGen = new HashRowKeyGenerator();
        byte[] splitKeys = rkGen.nextId();
        byte[] rowkeytmp = rowkey.getBytes();
        byte[] intVal = BitUtils.getByteByInt(i);
        result = new byte[splitKeys.length + rowkeytmp.length + intVal.length];
        System.arraycopy(splitKeys, 0, result, 0, splitKeys.length);
        System.arraycopy(rowkeytmp, 0, result, splitKeys.length,
                rowkeytmp.length);
        System.arraycopy(intVal, 0, result, splitKeys.length + rowkeytmp.length,
                intVal.length);
        return result;
    }
    /**
     * Drop a table.
     *
     * @param tableName
     */
    public static void dropTable(String tableName) {
        try {
            HBaseAdmin admin = new HBaseAdmin(config);
            admin.disableTable(tableName);
            admin.deleteTable(tableName);
            admin.close();
        } catch (MasterNotRunningException e) {
            e.printStackTrace();
        } catch (ZooKeeperConnectionException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    /**
     * Scan the whole table.
     *
     * @param tableName
     */
    public static void QueryAll(String tableName) {
        HTable table = null;
        try {
            table = new HTable(config, tableName);
            ResultScanner rs = table.getScanner(new Scan());
            for (Result r : rs) {
                System.out.println("rowkey: " + new String(r.getRow()));
                for (KeyValue keyValue : r.raw()) {
                    System.out.println("column: " + new String(keyValue.getFamily())
                            + " ==== value: " + new String(keyValue.getValue()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    /**
     * Query a single row by rowkey.
     *
     * @param tableName
     */
    public static void QueryByCondition1(String tableName) {
        HTable table = null;
        try {
            table = new HTable(config, tableName);
            Get get = new Get("abcdef".getBytes()); // look up by rowkey
            Result r = table.get(get);
            System.out.println("rowkey: " + new String(r.getRow()));
            for (KeyValue keyValue : r.raw()) {
                System.out.println("column: " + new String(keyValue.getFamily())
                        + " ==== value: " + new String(keyValue.getValue()));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    /**
     * Query by rowkey prefix.
     *
     * @param tableName
     * @param rowkey
     */
    public static void queryByRowKey(String tableName, String rowkey) {
        try {
            HTable table = new HTable(config, tableName);
            Scan scan = new Scan();
            scan.setFilter(new PrefixFilter(rowkey.getBytes()));
            ResultScanner rs = table.getScanner(scan);
            KeyValue[] kvs = null;
            for (Result tmp : rs) {
                kvs = tmp.raw();
                for (KeyValue kv : kvs) {
                    System.out.print(Bytes.toStringBinary(kv.getRow()) + " ");
                    System.out.print(Bytes.toString(kv.getFamily()) + " :");
                    System.out.print(Bytes.toString(kv.getQualifier()) + " ");
                    System.out.print(kv.getTimestamp() + " ");
                    System.out.println(Bytes.toStringBinary(kv.getValue()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    /**
     * Query with a single-column value filter.
     *
     * @param tableName
     */
    public static void QueryByCondition2(String tableName) {
        try {
            HTable table = new HTable(config, tableName);
            // select rows whose column info:column1 equals "aaa"
            Filter filter = new SingleColumnValueFilter(
                    Bytes.toBytes("info"), Bytes.toBytes("column1"), CompareOp.EQUAL,
                    Bytes.toBytes("aaa"));
            Scan s = new Scan();
            s.setFilter(filter);
            ResultScanner rs = table.getScanner(s);
            for (Result r : rs) {
                System.out.println("rowkey: " + new String(r.getRow()));
                for (KeyValue keyValue : r.raw()) {
                    System.out.println("column: " + new String(keyValue.getFamily())
                            + " ==== value: " + new String(keyValue.getValue()));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    /**
     * Query with several single-column value filters ANDed together.
     *
     * @param tableName
     */
    public static void QueryByCondition3(String tableName) {
        try {
            HTable table = new HTable(config, tableName);
            List<Filter> filters = new ArrayList<Filter>();
            // all conditions must hold: info:column1 = aaa, info:column2 = bbb, info:column3 = ccc
            Filter filter1 = new SingleColumnValueFilter(
                    Bytes.toBytes("info"), Bytes.toBytes("column1"), CompareOp.EQUAL,
                    Bytes.toBytes("aaa"));
            filters.add(filter1);
            Filter filter2 = new SingleColumnValueFilter(
                    Bytes.toBytes("info"), Bytes.toBytes("column2"), CompareOp.EQUAL,
                    Bytes.toBytes("bbb"));
            filters.add(filter2);
            Filter filter3 = new SingleColumnValueFilter(
                    Bytes.toBytes("info"), Bytes.toBytes("column3"), CompareOp.EQUAL,
                    Bytes.toBytes("ccc"));
            filters.add(filter3);
            FilterList filterList1 = new FilterList(filters);
            Scan scan = new Scan();
            scan.setFilter(filterList1);
            ResultScanner rs = table.getScanner(scan);
            for (Result r : rs) {
                System.out.println("rowkey: " + new String(r.getRow()));
                for (KeyValue keyValue : r.raw()) {
                    System.out.println("column: " + new String(keyValue.getFamily())
                            + " ==== value: " + new String(keyValue.getValue()));
                }
            }
            rs.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
HashChoreWoker:
import java.util.Iterator;
import java.util.TreeSet;

import org.apache.hadoop.hbase.util.Bytes;

/**
 *
 * @author kuang hj
 *
 */
public class HashChoreWoker {
    // number of records to sample
    private int baseRecord;
    // rowkey generator
    private RowKeyGenerator rkGen;
    // sample count divided by the region count: how many samples fall into each region slot
    private int splitKeysBase;
    // number of split keys
    private int splitKeysNumber;
    // split keys computed from the sample
    private byte[][] splitKeys;

    public HashChoreWoker(int baseRecord, int prepareRegions) {
        this.baseRecord = baseRecord;
        // instantiate the rowkey generator
        rkGen = new HashRowKeyGenerator();
        splitKeysNumber = prepareRegions - 1;
        splitKeysBase = baseRecord / prepareRegions;
    }

    public byte[][] calcSplitKeys() {
        splitKeys = new byte[splitKeysNumber][];
        // keep the samples in a TreeSet so they are already sorted
        TreeSet<byte[]> rows = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
        for (int i = 0; i < baseRecord; i++) {
            rows.add(rkGen.nextId());
        }
        int pointer = 0;
        Iterator<byte[]> rowKeyIter = rows.iterator();
        int index = 0;
        while (rowKeyIter.hasNext()) {
            byte[] tempRow = rowKeyIter.next();
            rowKeyIter.remove();
            if ((pointer != 0) && (pointer % splitKeysBase == 0)) {
                if (index < splitKeysNumber) {
                    splitKeys[index] = tempRow;
                    index++;
                }
            }
            pointer++;
        }
        rows.clear();
        rows = null;
        return splitKeys;
    }
}
HashRowKeyGenerator:
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.MD5Hash;

/**
 *
 *
 **/
public class HashRowKeyGenerator implements RowKeyGenerator {
    private static long currentId = 1;
    private static long currentTime = System.currentTimeMillis();

    public byte[] nextId() {
        try {
            currentTime = getRowKeyResult(Long.MAX_VALUE - currentTime);
            byte[] lowT = Bytes.copy(Bytes.toBytes(currentTime), 4, 4);
            byte[] lowU = Bytes.copy(Bytes.toBytes(currentId), 4, 4);
            // 8-character MD5 prefix followed by the original id
            byte[] result = Bytes.add(MD5Hash.getMD5AsHex(Bytes.add(lowT, lowU))
                    .substring(0, 8).getBytes(), Bytes.toBytes(currentId));
            return result;
        } finally {
            currentId++;
        }
    }

    /**
     * Reverse the decimal digits of tmpData to scramble the timestamp.
     * The leading digit (index 0) is intentionally skipped so the reversed
     * value always fits in a long.
     *
     * @param tmpData
     * @return
     */
    public static long getRowKeyResult(long tmpData) {
        String str = String.valueOf(tmpData);
        StringBuffer sb = new StringBuffer();
        char[] charStr = str.toCharArray();
        for (int i = charStr.length - 1; i > 0; i--) {
            sb.append(charStr[i]);
        }
        return Long.parseLong(sb.toString());
    }
}