HBase BlockCache Code Analysis

1. Cache reads and writes
Call chain:
HMaster.handleCreateTable -> HRegion.createHRegion -> HRegion.initialize -> initializeRegionInternals -> instantiateHStore
-> Store.Store -> new CacheConfig(conf, family) -> CacheConfig.instantiateBlockCache -> new LruBlockCache
Constructor parameters:

  /**
   * Configurable constructor. Use this constructor if not using defaults.
   * @param maxSize maximum size of this cache, in bytes
   * @param blockSize expected average size of blocks, in bytes
   * @param evictionThread whether to run evictions in a bg thread or not
   * @param mapInitialSize initial size of backing ConcurrentHashMap
   * @param mapLoadFactor initial load factor of backing ConcurrentHashMap
   * @param mapConcurrencyLevel initial concurrency factor for backing CHM
   * @param minFactor percentage of total size that eviction will evict until
   * @param acceptableFactor percentage of total size that triggers eviction
   * @param singleFactor percentage of total size for single-access blocks
   * @param multiFactor percentage of total size for multiple-access blocks
   * @param memoryFactor percentage of total size for in-memory blocks
   */
  public LruBlockCache(long maxSize, long blockSize, boolean evictionThread,
      int mapInitialSize, float mapLoadFactor, int mapConcurrencyLevel,
      float minFactor, float acceptableFactor,
      float singleFactor, float multiFactor, float memoryFactor)
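
Putting the call chain together, the following is a minimal sketch of how a block cache of this kind gets sized and constructed from configuration. It is illustrative only, not the actual CacheConfig.instantiateBlockCache code: "hfile.block.cache.size" is HBase's standard heap-fraction setting for the block cache, but the class name, the use of Runtime.maxMemory(), and the 64 KB block size are assumptions made for this example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.LruBlockCache;

public class BlockCacheSetupSketch {
  // Hypothetical helper: derive the cache size from the configured heap fraction
  // and call the simple constructor, which falls back to the default values for
  // the factor parameters documented above.
  public static LruBlockCache createCache() {
    Configuration conf = HBaseConfiguration.create();
    float cacheFraction = conf.getFloat("hfile.block.cache.size", 0.25f);
    long maxHeap = Runtime.getRuntime().maxMemory();
    long cacheSize = (long) (maxHeap * cacheFraction);
    return new LruBlockCache(cacheSize, 64 * 1024);  // 64 KB ~ default HFile block size
  }
}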

Besides applying the default values for these parameters, constructing a new LruBlockCache also creates an EvictionThread (which blocks in wait() until it is woken up) and a StatisticsThread that periodically logs cache statistics.
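
The eviction thread itself follows a plain wait/notify pattern: it parks until runEviction() wakes it, performs one eviction pass, and goes back to waiting. Below is a minimal sketch of that pattern with assumed names (evictNow, shutdown, the Runnable hand-off); the real inner EvictionThread class is more involved.

  // Simplified sketch of the eviction thread's wait/notify loop (names assumed).
  static class EvictionThread extends Thread {
    private final Runnable evictionPass;   // e.g. the cache's evict() routine
    private volatile boolean go = true;

    EvictionThread(Runnable evictionPass) {
      super("LruBlockCache.EvictionThread");
      setDaemon(true);
      this.evictionPass = evictionPass;
    }

    @Override
    public void run() {
      while (go) {
        synchronized (this) {
          try {
            wait();                        // parked until evictNow() notifies us
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
          }
        }
        evictionPass.run();                // one eviction pass
      }
    }

    synchronized void evictNow() {         // what runEviction() would trigger
      notify();
    }

    synchronized void shutdown() {
      go = false;
      notify();
    }
  }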


When HFileReaderV2.readBlock is executed, it first checks whether the block cache is enabled; if it is, the block is looked up in the cache:

    // Check cache for block. If found return.
    if (cacheConf.isBlockCacheEnabled()) {
      // Try and get the block from the block cache. If the useLock variable is true then this
      // is the second time through the loop and it should not be counted as a block cache miss.
      HFileBlock cachedBlock = (HFileBlock)
          cacheConf.getBlockCache().getBlock(cacheKey, cacheBlock, useLock);
      if (cachedBlock != null) {
        BlockCategory blockCategory =
            cachedBlock.getBlockType().getCategory();

        getSchemaMetrics().updateOnCacheHit(blockCategory, isCompaction);

        if (cachedBlock.getBlockType() == BlockType.DATA) {
          HFile.dataBlockReadCnt.incrementAndGet();
        }

        validateBlockType(cachedBlock, expectedBlockType);

        // Validate encoding type for encoded blocks. We include encoding
        // type in the cache key, and we expect it to match on a cache hit.
        if (cachedBlock.getBlockType() == BlockType.ENCODED_DATA &&
            cachedBlock.getDataBlockEncoding() !=
            dataBlockEncoder.getEncodingInCache()) {
          throw new IOException("Cached block under key " + cacheKey + " " +
              "has wrong encoding: " + cachedBlock.getDataBlockEncoding() +
              " (expected: " + dataBlockEncoder.getEncodingInCache() + ")");
        }
        return cachedBlock;
      }
      // Carry on, please load.
    }
In getBlock, a few statistics are updated; most importantly, cb.access(...) promotes the block's priority from BlockPriority.SINGLE to BlockPriority.MULTI.
  public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat) {
    CachedBlock cb = map.get(cacheKey);
    if (cb == null) {
      if (!repeat) stats.miss(caching);
      return null;
    }
    stats.hit(caching);
    cb.access(count.incrementAndGet());
    return cb.getBuffer();
  }
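
The promotion happens inside cb.access(...). A minimal sketch of what CachedBlock.access does, with simplified fields and assuming a priority field of type BlockPriority:

  // Sketch of CachedBlock.access: record the new access sequence number and
  // upgrade a SINGLE block to MULTI on its second hit (simplified).
  public void access(long accessTime) {
    this.accessTime = accessTime;
    if (this.priority == BlockPriority.SINGLE) {
      this.priority = BlockPriority.MULTI;
    }
  }
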
If this is the first read (a cache miss), the block is then added to the cache:

    // Cache the block if necessary
    if (cacheBlock && cacheConf.shouldCacheBlockOnRead(
        hfileBlock.getBlockType().getCategory())) {
      cacheConf.getBlockCache().cacheBlock(cacheKey, hfileBlock,
          cacheConf.isInMemory());
    }

2. LRU eviction

Writing to the cache simply puts the block into a ConcurrentHashMap and updates the metrics. Afterwards the cache checks if (newSize > acceptableSize() && !evictionInProgress). acceptableSize() is computed at initialization time as (long) Math.floor(this.maxSize * this.acceptableFactor), where acceptableFactor is a configurable percentage, "hbase.lru.blockcache.acceptable.factor" (default 0.85f). In other words, if the total cached size has grown past this threshold and no eviction is already in progress, an eviction run is started.

  /**
   * Cache the block with the specified name and buffer.
   * <p>
   * It is assumed this will NEVER be called on an already cached block. If
   * that is done, an exception will be thrown.
   * @param cacheKey block's cache key
   * @param buf block buffer
   * @param inMemory if block is in-memory
   */
  public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) {
    CachedBlock cb = map.get(cacheKey);
    if (cb != null) {
      throw new RuntimeException("Cached an already cached block");
    }
    cb = new CachedBlock(cacheKey, buf, count.incrementAndGet(), inMemory);
    long newSize = updateSizeMetrics(cb, false);
    map.put(cacheKey, cb);
    elements.incrementAndGet();
    if (newSize > acceptableSize() && !evictionInProgress) {
      runEviction();
    }
  }
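
As a worked illustration of the trigger and target thresholds (the 1 GiB figure is assumed purely for the arithmetic):

  long maxSize = 1024L * 1024 * 1024;                    // assume a 1 GiB cache
  long acceptable = (long) Math.floor(maxSize * 0.85f);  // ~870 MiB: cacheBlock triggers eviction above this
  long min = (long) Math.floor(maxSize * 0.75f);         // 768 MiB: eviction frees blocks back down to this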

In the evict method:
1. Compute the current total size and the number of bytes to free, where minSize = (long) Math.floor(this.maxSize * this.minFactor) and minFactor is configurable via "hbase.lru.blockcache.min.factor" (default 0.75f):

      long currentSize = this.size.get();
      long bytesToFree = currentSize - minSize();

2. Instantiate the three BlockBuckets (bucketSingle, bucketMulti, bucketMemory), then iterate over the map and add each cached block to the queue of the bucket matching its priority (a MinMaxPriorityQueue created with MinMaxPriorityQueue.expectedSize(initialSize).create()), ordered by access sequence in descending order.
The three priorities differ as follows:
SINGLE: blocks that have been read only once
MULTI: blocks that have been read more than once
MEMORY: blocks from column families whose IN_MEMORY attribute is set to true

      // Instantiate priority buckets
      BlockBucket bucketSingle = new BlockBucket(bytesToFree, blockSize,
          singleSize());
      BlockBucket bucketMulti = new BlockBucket(bytesToFree, blockSize,
          multiSize());
      BlockBucket bucketMemory = new BlockBucket(bytesToFree, blockSize,
          memorySize());


By default, the three BlockBuckets split the cache in the following proportions:
static final float DEFAULT_SINGLE_FACTOR = 0.25f;
static final float DEFAULT_MULTI_FACTOR = 0.50f;
static final float DEFAULT_MEMORY_FACTOR = 0.25f;

  private long singleSize() {
    return (long) Math.floor(this.maxSize * this.singleFactor * this.minFactor);
  }
  private long multiSize() {
    return (long) Math.floor(this.maxSize * this.multiFactor * this.minFactor);
  }
  private long memorySize() {
    return (long) Math.floor(this.maxSize * this.memoryFactor * this.minFactor);
  }
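
Continuing the 1 GiB illustration with the default factors, eviction tries to keep each priority within the following shares (again, just arithmetic for illustration):

  long maxSize = 1024L * 1024 * 1024;                        // 1 GiB cache, minFactor = 0.75
  long single = (long) Math.floor(maxSize * 0.25f * 0.75f);  // 192 MiB for single-access blocks
  long multi  = (long) Math.floor(maxSize * 0.50f * 0.75f);  // 384 MiB for multi-access blocks
  long memory = (long) Math.floor(maxSize * 0.25f * 0.75f);  // 192 MiB for in-memory blocks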


The three BlockBuckets are then added to a PriorityQueue ordered by their overflow (totalSize - bucketSize); the loop then computes how much each bucket needs to free and calls free:
      PriorityQueue<BlockBucket> bucketQueue =
          new PriorityQueue<BlockBucket>(3);

      bucketQueue.add(bucketSingle);
      bucketQueue.add(bucketMulti);
      bucketQueue.add(bucketMemory);

      int remainingBuckets = 3;
      long bytesFreed = 0;

      BlockBucket bucket;
      while ((bucket = bucketQueue.poll()) != null) {
        long overflow = bucket.overflow();
        if (overflow > 0) {
          long bucketBytesToFree = Math.min(overflow,
              (bytesToFree - bytesFreed) / remainingBuckets);
          bytesFreed += bucket.free(bucketBytesToFree);
        }
        remainingBuckets--;
      }
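
The BlockBucket class itself is not quoted in this post. The sketch below reconstructs the pieces the loop above relies on (add, overflow, compareTo), with simplified fields and Guava's MinMaxPriorityQueue; it is an approximation based on the calls visible here, not the exact HBase source.

  // Reconstruction of BlockBucket for illustration (fields simplified).
  private class BlockBucket implements Comparable<BlockBucket> {
    private final MinMaxPriorityQueue<CachedBlock> queue;
    private long totalSize = 0;        // bytes this priority currently holds
    private final long bucketSize;     // bytes this priority is allowed to keep

    BlockBucket(long bytesToFree, long blockSize, long bucketSize) {
      this.bucketSize = bucketSize;
      int initialSize = (int) (bytesToFree / blockSize) + 1;
      this.queue = MinMaxPriorityQueue.expectedSize(initialSize).create();
    }

    public void add(CachedBlock block) {
      totalSize += block.heapSize();
      queue.add(block);
    }

    public long overflow() {
      return totalSize - bucketSize;   // how far this bucket is over its share
    }

    @Override
    public int compareTo(BlockBucket that) {
      // Buckets with the smallest overflow are polled (and freed) first.
      return Long.compare(this.overflow(), that.overflow());
    }

    // free(long toFree) is shown below.
  }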

In free, blocks are polled from the bucket's queue one at a time. Because the queue is ordered by access sequence in descending order, pollLast() returns the least recently accessed blocks first; each evicted block is removed from the map and the metrics are updated.

    public long free(long toFree) {
      CachedBlock cb;
      long freedBytes = 0;
      while ((cb = queue.pollLast()) != null) {
        freedBytes += evictBlock(cb);
        if (freedBytes >= toFree) {
          return freedBytes;
        }
      }
      return freedBytes;
    }


  protected long evictBlock(CachedBlock block) {
    map.remove(block.getCacheKey());
    updateSizeMetrics(block, true);
    elements.decrementAndGet();
    stats.evicted();
    return block.heapSize();
  }

3. The distinguishing feature of HBase's LruBlockCache is that it applies different policies to blocks with different access counts, which keeps one-off access patterns (for example, a Scan) from churning the whole cache and thereby helps read performance.
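
From the client side, the two special cases this design calls out can be controlled directly: a column family can be flagged in-memory so its blocks land in the MEMORY bucket, and a one-off scan can be told not to populate the cache at all. These are standard HBase client calls (HColumnDescriptor.setInMemory and Scan.setCacheBlocks); the family name below is made up for the example.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.Scan;

public class BlockCachePriorityExample {
  public static void main(String[] args) {
    // Pin a column family's blocks into the MEMORY bucket of the LruBlockCache.
    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setInMemory(true);

    // Keep a one-off full scan from flooding the SINGLE bucket.
    Scan scan = new Scan();
    scan.setCacheBlocks(false);
  }
}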