In Section 7's put process, data is first written to the MemStore; at the end of that operation a check runs to decide whether the data should be flushed to an HFile.
The MemStore flush-to-HFile flow is as follows:
1) Check whether a flush to HFile is needed. The threshold is 1024 × 1024 × 128 = 128 MB and can be changed via the hbase.hregion.memstore.flush.size setting.
2) During the flush, the MemStore data is wrapped in a new FlushRegionEntry, which is placed on a queue.
3) The run loop started inside MemStoreFlusher keeps checking whether there is data to write.
4) Before the write, a check runs for whether a split or compaction is needed; the same check runs again after the flush finishes.
5) During the write, a snapshot is taken first, the data is written to a tmp file, and the file is then moved to become the official store file.
6) This involves WAL and MVCC, as well as region- and row-level lock operations.
7) Writing the tmp file mainly means creating a Writer, calling append, and writing out the HFile.
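The threshold check in step 1 can be sketched as a tiny stand-alone class. This is illustrative only, not HBase's actual code; the real check lives in HRegion and reads hbase.hregion.memstore.flush.size:

```java
// Illustrative sketch of the flush-size check in step 1 (not HBase's code).
// The default threshold matches hbase.hregion.memstore.flush.size.
public class FlushSizeCheck {
    // 1024 * 1024 * 128 = 134217728 bytes = 128 MB
    static final long DEFAULT_FLUSH_SIZE = 1024L * 1024L * 128L;

    // true when the region's memstore has grown past the threshold
    static boolean isFlushSize(long memstoreSizeBytes) {
        return memstoreSizeBytes > DEFAULT_FLUSH_SIZE;
    }
}
```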
1) The flush entry point is the put path, in the HRegion class:
@Override
public void processRowsWithLocks(RowProcessor<?,?> processor, long timeout,
    long nonceGroup, long nonce) throws IOException {
  // write into the memstore and track how much memory was added
  try {
    // 8. Apply to memstore
    Store store = getStore(cell);
    addedSize += store.add(cell);
  } finally {
    closeRegionOperation();
    // if this region's memstore is over the flush size (128 MB by default),
    // request a flush
    if (!mutations.isEmpty() &&
        isFlushSize(this.addAndGetGlobalMemstoreSize(addedSize))) {
      requestFlush();
    }
  }
}
This calls the requestFlush method in MemStoreFlusher.java, which creates a FlushRegionEntry and puts it on a queue. The MemStoreFlusher itself is started when the RegionServer starts (see my previous chapter on the HMaster and RegionServer startup process). Inside MemStoreFlusher a FlushHandler is created, and that handler keeps processing the FlushRegionEntry objects placed on the queue by the method above.
The enqueue method, in MemStoreFlusher:
@Override
public void requestFlush(Region r, boolean forceFlushAllStores) {
synchronized (regionsInQueue) {
if (!regionsInQueue.containsKey(r)) {
// This entry has no delay so it will be added at the top of the flush
// queue. It'll come out near immediately.
FlushRegionEntry fqe = new FlushRegionEntry(r, forceFlushAllStores);
this.regionsInQueue.put(r, fqe);
this.flushQueue.add(fqe);
}
}
}
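The pattern above, a map guarding the queue so the same region is never enqueued twice, can be sketched stand-alone (the names here are illustrative, not HBase's):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Stand-alone sketch of requestFlush's dedup-enqueue pattern:
// regionsInQueue guards flushQueue, so an already-queued region is not re-added.
public class FlushQueueSketch {
    final Map<String, String> regionsInQueue = new HashMap<>();
    final Queue<String> flushQueue = new ArrayDeque<>();

    synchronized void requestFlush(String region) {
        if (!regionsInQueue.containsKey(region)) {
            regionsInQueue.put(region, region);
            flushQueue.add(region);
        }
    }
}
```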
Reading from the queue, also in MemStoreFlusher (a lot has been trimmed here; only the essential parts remain):
private class FlushHandler extends HasThread {
  @Override
  public void run() {
    // loop until the server is stopped
    while (!server.isStopped()) {
      FlushQueueEntry fqe = null;
      try {
        fqe = flushQueue.poll(threadWakeFrequency, TimeUnit.MILLISECONDS);
        ... // much omitted here
        // trigger the flush
        if (!flushRegion((FlushRegionEntry) fqe)) {
          break;
        }
      }
    }
  }
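The consumer side can be sketched as a poll loop over a BlockingQueue. This is a simplified stand-in for FlushHandler, where each dequeued entry would be handed to flushRegion:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Simplified stand-in for FlushHandler's run loop: poll with a timeout and
// process each entry (the real handler calls flushRegion here).
public class FlushHandlerSketch {
    static int drain(BlockingQueue<String> queue) {
        int flushed = 0;
        try {
            while (true) {
                String entry = queue.poll(10, TimeUnit.MILLISECONDS);
                if (entry == null) {
                    break; // queue empty; the real loop keeps waiting until the server stops
                }
                flushed++; // stands in for flushRegion(entry)
            }
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
        return flushed;
    }
}
```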
This method also checks whether a split is needed; the split process is analyzed in the next chapter.
private boolean flushRegion(final FlushRegionEntry fqe) {
  Region region = fqe.region;
  // first check whether the region has too many store files;
  // if so it may need a split or a compaction
  if (!region.getRegionInfo().isMetaRegion() &&
      isTooManyStoreFiles(region)) {
    if (!this.server.compactSplitThread.requestSplit(region)) {
      try {
        this.server.compactSplitThread.requestSystemCompaction(
            region, Thread.currentThread().getName());
      } catch (IOException e) {
        // (logging omitted)
      }
    }
  }
  // once the checks above are done, perform the flush
  return flushRegion(region, false, fqe.isForceFlushAllStores());
}
MemStoreFlusher's flushRegion(Region, ...) overload first performs the flush, then checks whether a split is needed:
private boolean flushRegion(final Region region, final boolean emergencyFlush,
    boolean forceFlushAllStores) {
  // (a check that the region is not already being flushed is omitted)
  lock.readLock().lock();
  try {
    notifyFlushRequest(region, emergencyFlush);
    // perform the flush
    FlushResult flushResult = region.flush(forceFlushAllStores);
    boolean shouldCompact = flushResult.isCompactionNeeded();
    // after the HFile is written, check whether a split or a compaction is needed
    boolean shouldSplit = ((HRegion) region).checkSplit() != null;
    if (shouldSplit) {
      this.server.compactSplitThread.requestSplit(region);
    } else if (shouldCompact) {
      server.compactSplitThread.requestSystemCompaction(
          region, Thread.currentThread().getName());
    }
  }
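The post-flush decision above reduces to: a needed split takes priority over a needed compaction. A minimal illustrative reduction of that branch (names are mine, not HBase's):

```java
// Illustrative reduction of the post-flush branch:
// a needed split wins over a needed compaction; otherwise do nothing.
public class PostFlushDecision {
    static String decide(boolean shouldSplit, boolean shouldCompact) {
        if (shouldSplit) {
            return "split";   // compactSplitThread.requestSplit(region)
        } else if (shouldCompact) {
            return "compact"; // requestSystemCompaction(region, ...)
        }
        return "none";
    }
}
```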
After that, HRegion's flushcache method is called:
public FlushResult flushcache(boolean forceFlushAllStores, boolean writeFlushRequestWalMarker)
throws IOException {
lock.readLock().lock();
try {
try {
Collection<Store> specificStoresToFlush =
forceFlushAllStores ? stores.values() : flushPolicy.selectStoresToFlush();
FlushResult fs = internalFlushcache(specificStoresToFlush,
status, writeFlushRequestWalMarker);
}
flushcache then calls internalFlushcache:
protected FlushResult internalFlushcache(final WAL wal, final long myseqid,
final Collection<Store> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker)
throws IOException {
//prepare phase: mainly writes the snapshot
PrepareFlushResult result
= internalPrepareFlushCache(wal, myseqid, storesToFlush, status, writeFlushWalMarker);
if (result.result == null) {
//move the files into the official store directory
return internalFlushCacheAndCommit(wal, status, result, storesToFlush);
} else {
return result.result; // early exit due to failure from prepare stage
}
}
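The prepare/commit split above can be sketched with a toy memstore: prepare moves the current edits into an immutable snapshot (so new writes go to a fresh memstore), and commit persists the snapshot. Names and structure here are illustrative, not HBase's:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the two-phase flush in internalFlushcache.
public class TwoPhaseFlushSketch {
    private List<String> memstore = new ArrayList<>();
    private List<String> snapshot = new ArrayList<>();
    final List<String> hfiles = new ArrayList<>(); // stands in for store files

    void put(String cell) { memstore.add(cell); }

    // Phase 1 (internalPrepareFlushCache): snapshot the current edits;
    // writes arriving after this land in a fresh memstore.
    void prepare() {
        snapshot = memstore;
        memstore = new ArrayList<>();
    }

    // Phase 2 (internalFlushCacheAndCommit): persist the snapshot, then drop it.
    void commit() {
        hfiles.add("hfile(" + snapshot.size() + " cells)");
        snapshot = new ArrayList<>();
    }

    int memstoreSize() { return memstore.size(); }
}
```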
//the snapshot-preparation method
protected PrepareFlushResult internalPrepareFlushCache(final WAL wal, final long myseqid,
    final Collection<Store> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker)
    throws IOException {
  this.updatesLock.writeLock().lock();
  Set<byte[]> flushedFamilyNames = new HashSet<byte[]>();
  for (Store store : storesToFlush) {
    flushedFamilyNames.add(store.getFamily().getName());
  }
  // MVCC: open a write entry for this flush
  MultiVersionConcurrencyControl.WriteEntry writeEntry = mvcc.begin();
  // and wait for it to complete
  mvcc.completeAndWait(writeEntry);
  writeEntry = null;
  try {
    try {
      for (Store s : storesToFlush) {
        totalFlushableSizeOfFlushableStores += s.getFlushableSize();
      }
      // write the flush marker into the WAL
      if (wal != null && !writestate.readOnly) {
        FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.START_FLUSH,
            getRegionInfo(), flushOpSeqId, committedFiles);
        // no sync. Sync is below where we do not hold the updates lock
        trxId = WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(),
            desc, false, mvcc);
      }
      // take the snapshot
      for (StoreFlushContext flush : storeFlushCtxs.values()) {
        flush.prepare();
      }
    } finally {
      this.updatesLock.writeLock().unlock();
    }
    // wait for the WAL entries to be written to the HLog
    wal.sync(); // ensure that flush marker is sync'ed
    mvcc.complete(writeEntry);
  }
The flush to HFile (heavily trimmed here, keeping only the key spots): the data is first written under the tmp directory, then a rename makes the file official; at the end the global memstore size is decremented.
protected FlushResult internalFlushCacheAndCommit(
final WAL wal, MonitoredTask status, final PrepareFlushResult prepareResult,
final Collection<Store> storesToFlush)
throws IOException {
//write the snapshot out to temp files
for (StoreFlushContext flush : storeFlushCtxs.values()) {
flush.flushCache(status);
}
// Switch snapshot (in memstore) -> new hfile (thus causing
// all the store scanners to reset/reseek).
Iterator<Store> it = storesToFlush.iterator();
// stores.values() and storeFlushCtxs have same order
for (StoreFlushContext flush : storeFlushCtxs.values()) {
boolean needsCompaction = flush.commit(status);
if (needsCompaction) {
compactionRequested = true;
}
this.addAndGetGlobalMemstoreSize(-totalFlushableSizeOfFlushableStores);
}
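The tmp-then-rename step that flushCache and commit perform can be sketched with java.nio.file: write the complete file under a .tmp directory first, then move it into the store directory, so readers never observe a half-written file. Paths and names here are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of write-to-tmp-then-rename: the file only appears in the store
// directory once it has been fully written.
public class TmpThenRename {
    static Path flushToStore(Path storeDir, byte[] data, String name) throws IOException {
        Path tmpDir = storeDir.resolve(".tmp");
        Files.createDirectories(tmpDir);
        Path tmpFile = tmpDir.resolve(name);
        Files.write(tmpFile, data);            // write the whole file first
        Path finalFile = storeDir.resolve(name);
        // the move is what makes the file visible to readers
        return Files.move(tmpFile, finalFile, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

ATOMIC_MOVE works here because the tmp directory sits on the same filesystem as the store directory, which is also why the real flusher stages under the region's own .tmp path.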
flushCache mainly creates a Writer and writes the snapshot into a tmp file. flush.commit(status) then takes the result directory produced by flushCache and moves the files into the official store directory. At this point the flush to file is complete.
If you still have questions about the write path, feel free to leave a comment.
http://blog.csdn.net/chenfenggang/article/details/75195041