Continued from the previous post.
Source Code Analysis
1. Understanding the Buffer
kvbuffer = new byte[maxMemUsage]; // stores both the raw and meta data
bufvoid = kvbuffer.length;
kvmeta = ByteBuffer.wrap(kvbuffer)
         .order(ByteOrder.nativeOrder())
         .asIntBuffer(); // an int view over the byte buffer; no new array is created
ByteBuffer is an abstract class that wraps a byte array with various get and put methods. The wrap method used here binds the byte array to the returned buffer; order sets the byte order (endianness); and asIntBuffer is an abstract method, left to subclasses to implement, that returns an int-typed view over the same bytes.
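That the int view shares storage with the byte array can be checked with a small standalone sketch (class and variable names here are illustrative, not Hadoop's):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

public class IntViewDemo {
    public static void main(String[] args) {
        byte[] kvbuffer = new byte[16];              // stands in for the real kvbuffer
        IntBuffer kvmeta = ByteBuffer.wrap(kvbuffer)
                .order(ByteOrder.nativeOrder())
                .asIntBuffer();                      // a view, not a copy
        kvmeta.put(0, 0x01020304);                   // write through the int view
        // The underlying byte array changed, proving the view shares storage.
        int nonZero = 0;
        for (byte b : kvbuffer) if (b != 0) nonZero++;
        System.out.println("non-zero bytes: " + nonZero); // prints "non-zero bytes: 4"
    }
}
```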
2. Wrapping Serialization
Any data transferred over the network must be serializable, so the MapReduce framework performs serialization through Serializer objects.
private Serializer<K> keySerializer;
private Serializer<V> valSerializer;
final BlockingBuffer bb = new BlockingBuffer();
keySerializer = serializationFactory.getSerializer(keyClass);
keySerializer.open(bb);
valSerializer = serializationFactory.getSerializer(valClass);
valSerializer.open(bb);
keySerializer.serialize(key);
valSerializer.serialize(value);
This code binds each serializer to the designated output stream before serialization; the serialize method ultimately writes through the bound OutputStream's write method. BlockingBuffer is one implementation of OutputStream; the other implementation is Buffer. Next, let's look at how kvbuffer actually stores data.
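The open/serialize binding pattern can be sketched without Hadoop, assuming stand-in Serializer and IntSerializer types (hypothetical names, not the real org.apache.hadoop.io.serializer classes):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SerializerSketch {
    // Hypothetical stand-in for Hadoop's Serializer interface.
    interface Serializer<T> {
        void open(OutputStream out) throws IOException; // bind to a stream
        void serialize(T t) throws IOException;         // write via the bound stream
    }

    static class IntSerializer implements Serializer<Integer> {
        private DataOutputStream out;
        public void open(OutputStream out) { this.out = new DataOutputStream(out); }
        public void serialize(Integer t) throws IOException { out.writeInt(t); }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bb = new ByteArrayOutputStream(); // plays the BlockingBuffer role
        Serializer<Integer> keySerializer = new IntSerializer();
        keySerializer.open(bb);        // bind before use, as the collector does
        keySerializer.serialize(42);   // the bytes land in the bound buffer
        System.out.println(bb.size()); // prints 4 (one int)
    }
}
```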
In the ideal case, where kvbuffer has enough free memory, the raw and meta data can be written into the buffer directly. What happens when memory is insufficient?
if (bufindex + len > bufvoid) {
  final int gaplen = bufvoid - bufindex; // bytes that still fit before the end
  System.arraycopy(b, off, kvbuffer, bufindex, gaplen);
  len -= gaplen;
  off += gaplen;
  bufindex = 0; // wrap around to the head of the buffer
}
System.arraycopy(b, off, kvbuffer, bufindex, len);
bufindex += len;
Here len is the length of the data being inserted, bufindex is the position before the insert, (bufindex + len) is the position after the insert, and bufvoid is the total length of the buffer. When the condition "position after insert > total buffer length" holds, the part that does not fit wraps around to the beginning of the buffer.
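This wrap-around write can be exercised in a minimal, self-contained sketch, assuming a toy CircularWrite class in place of the real Buffer:

```java
// A toy circular buffer reproducing the wrap-around write logic above.
public class CircularWrite {
    byte[] kvbuffer = new byte[8];
    int bufvoid = kvbuffer.length;
    int bufindex = 0;

    void write(byte[] b, int off, int len) {
        if (bufindex + len > bufvoid) {
            int gaplen = bufvoid - bufindex;  // bytes that fit before the end
            System.arraycopy(b, off, kvbuffer, bufindex, gaplen);
            len -= gaplen;
            off += gaplen;
            bufindex = 0;                     // wrap to the head
        }
        System.arraycopy(b, off, kvbuffer, bufindex, len);
        bufindex += len;
    }
}
```

Writing 6 bytes and then 4 more into this 8-byte buffer puts the last 2 bytes of the second write at offsets 0 and 1, leaving bufindex at 2.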
The spill process sorts the raw data by key. Each key fed into RawComparator must be a contiguous region of memory, so non-contiguous (wrapped) keys have to be avoided. BlockingBuffer therefore wraps Buffer with an operation that repositions a wrapped key, moving the whole key to the start of the buffer.
protected void shiftBufferedKey() throws IOException {
  // spillLock unnecessary; both kvend and kvindex are current
  int headbytelen = bufvoid - bufmark; // length of the key's tail fragment
  bufvoid = bufmark; // reset bufvoid to exclude that fragment
  final int kvbidx = 4 * kvindex;
  final int kvbend = 4 * kvend;
  final int avail = Math.min(distanceTo(0, kvbidx), distanceTo(0, kvbend)); // space from the head up to the meta data
  if (bufindex + headbytelen < avail) { // the free space at the head can hold the whole key
    System.arraycopy(kvbuffer, 0, kvbuffer, headbytelen, bufindex); // shift the key's head fragment to start at headbytelen
    System.arraycopy(kvbuffer, bufvoid, kvbuffer, 0, headbytelen); // move the tail fragment to offset 0
    bufindex += headbytelen;
    bufferRemaining -= kvbuffer.length - bufvoid;
  } else { // not enough free space at the head
    byte[] keytmp = new byte[bufindex];
    System.arraycopy(kvbuffer, 0, keytmp, 0, bufindex); // stash the key's head fragment
    bufindex = 0;
    // call Buffer's write method, which handles the out-of-space case
    out.write(kvbuffer, bufmark, headbytelen); // first copy the tail fragment starting at offset 0
    out.write(keytmp); // then append the stashed head fragment
  }
}
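The "enough room at the head" branch can be modeled as a standalone sketch, assuming a toy makeKeyContiguous helper (hypothetical and simplified: no meta area and no bufferRemaining accounting):

```java
// Simplified model of shiftBufferedKey's first branch: a key whose tail
// fragment sits at [bufmark, length) and whose head fragment sits at
// [0, bufindex) is made contiguous at offset 0.
public class KeyShift {
    static int makeKeyContiguous(byte[] kvbuffer, int bufmark, int bufindex) {
        int bufvoid = kvbuffer.length;
        int headbytelen = bufvoid - bufmark;              // tail fragment length
        // shift the head fragment out of the way, to start at headbytelen
        System.arraycopy(kvbuffer, 0, kvbuffer, headbytelen, bufindex);
        // move the tail fragment to the very start of the buffer
        System.arraycopy(kvbuffer, bufmark, kvbuffer, 0, headbytelen);
        return bufindex + headbytelen;                    // the new bufindex
    }
}
```

For example, a key "ABCD" wrapped as "AB" at offsets 6-7 and "CD" at offsets 0-1 of an 8-byte buffer ends up contiguous as "ABCD" at offset 0.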
3. Spill Details
Spilling is performed by SpillThread, a daemon thread. At initialization it is controlled by the following fields:
volatile boolean spillInProgress;
final ReentrantLock spillLock = new ReentrantLock();
final Condition spillDone = spillLock.newCondition();
final Condition spillReady = spillLock.newCondition();
volatile boolean spillThreadRunning = false;
final SpillThread spillThread = new SpillThread();
spillDone is the Condition that marks the completion of one spill. spillDone.await blocks the current thread and temporarily releases the lock, returning only after the spill finishes and the lock is reacquired; spillDone.signal marks that a spill has completed. spillDone.await appears three times in the code: first in the init method, when starting spillThread, to wait for spillThread to enter its loop; second in Buffer.write, when space is so scarce that writing must wait for the spill to finish entirely; third in the flush method, which is called from close to do the cleanup work and so naturally must wait for the spill to complete. spillDone.signal appears only once, in SpillThread's run method: it is called at the start of each while iteration to mark that the previous spill has finished.
spillReady is the Condition marking that the preconditions for a spill have been met. spillReady.await waits for those conditions, blocking the current thread and temporarily releasing the lock; spillReady.signal indicates the conditions are satisfied and spillThread should start working. spillReady.await appears once, at the start of the SpillThread.run loop, waiting for the conditions to be met. spillReady.signal appears once, in startSpill, where it triggers spillThread to begin; startSpill is called when buffer space runs low.
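The spillReady/spillDone handshake can be sketched with a simplified worker loop in place of the real SpillThread (no actual sorting or I/O here; the class and field names are stand-ins):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class SpillHandshake {
    final ReentrantLock spillLock = new ReentrantLock();
    final Condition spillDone = spillLock.newCondition();
    final Condition spillReady = spillLock.newCondition();
    boolean spillInProgress = false;
    volatile int spills = 0; // how many spills have completed

    final Thread spillThread = new Thread(() -> {
        spillLock.lock();
        try {
            while (true) {
                spillDone.signal();        // the previous spill (if any) is finished
                while (!spillInProgress) {
                    spillReady.await();    // releases the lock until startSpill signals
                }
                spills++;                  // sortAndSpill() would run here
                spillInProgress = false;
            }
        } catch (InterruptedException e) {
            // daemon thread: exit quietly on interrupt
        } finally {
            spillLock.unlock();
        }
    });

    void startSpill() {
        spillLock.lock();
        try {
            spillInProgress = true;
            spillReady.signal();           // wake the spill thread
        } finally {
            spillLock.unlock();
        }
    }

    void awaitSpillDone() throws InterruptedException {
        spillLock.lock();
        try {
            while (spillInProgress) {
                spillDone.await();         // lock released until signalled
            }
        } finally {
            spillLock.unlock();
        }
    }
}
```

Both await calls sit inside while loops that re-check the condition, so a signal delivered before the waiter arrives is never lost.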
Controlling the sort order when spilling data:
private void sortAndSpill() throws IOException, ClassNotFoundException,
InterruptedException {
//approximate the length of the output file to be the length of the
//buffer + header lengths for the partitions
final long size = (bufend >= bufstart
    ? bufend - bufstart
    : (bufvoid - bufend) + bufstart) +
    partitions * APPROX_HEADER_LENGTH;
FSDataOutputStream out = null;
try {
// create spill file
final SpillRecord spillRec = new SpillRecord(partitions);
final Path filename =
mapOutputFile.getSpillFileForWrite(numSpills, size);
out = rfs.create(filename);
final int mstart = kvend / NMETA;
final int mend = 1 + // kvend is a valid record
    (kvstart >= kvend
        ? kvstart
        : kvmeta.capacity() + kvstart) / NMETA;
// sort the meta records: by partition first, then within each partition by the job's OutputKeyComparator
sorter.sort(MapOutputBuffer.this, mstart, mend, reporter);
int spindex = mstart;
final IndexRecord rec = new IndexRecord();
final InMemValBytes value = new InMemValBytes();
// iterate over each partition
for (int i = 0; i < partitions; ++i) {
IFile.Writer<K, V> writer = null;
try {
long segmentStart = out.getPos();
// the writer for this iteration's partition segment
writer = new Writer<K, V>(job, out, keyClass, valClass, codec,
spilledRecordsCounter);
if (combinerRunner == null) {
// spill directly
DataInputBuffer key = new DataInputBuffer();
while (spindex < mend &&
kvmeta.get(offsetFor(spindex % maxRec) + PARTITION) == i) {
final int kvoff = offsetFor(spindex % maxRec);
int keystart = kvmeta.get(kvoff + KEYSTART);
int valstart = kvmeta.get(kvoff + VALSTART);
key.reset(kvbuffer, keystart, valstart - keystart);
getVBytesForOffset(kvoff, value);
writer.append(key, value);
++spindex;
}
} else {
int spstart = spindex;
while (spindex < mend &&
kvmeta.get(offsetFor(spindex % maxRec)
+ PARTITION) == i) {
++spindex;
}
// Note: we would like to avoid the combiner if we've fewer
// than some threshold of records for a partition
if (spstart != spindex) {
combineCollector.setWriter(writer);
RawKeyValueIterator kvIter =
new MRResultIterator(spstart, spindex);
combinerRunner.combine(kvIter, combineCollector);
}
}
// close the writer
writer.close();
// record offsets
rec.startOffset = segmentStart;
rec.rawLength = writer.getRawLength();
rec.partLength = writer.getCompressedLength();
spillRec.putIndex(rec, i);
writer = null;
} finally {
if (null != writer) writer.close();
}
}
if (totalIndexCacheMemory >= indexCacheMemoryLimit) {
// create spill index file
Path indexFilename =
mapOutputFile.getSpillIndexFileForWrite(numSpills,partitions
* MAP_OUTPUT_INDEX_RECORD_LENGTH);
spillRec.writeToFile(indexFilename, job);
} else { // cache the index record in memory instead of writing a file
indexCacheList.add(spillRec);
totalIndexCacheMemory +=
spillRec.size() * MAP_OUTPUT_INDEX_RECORD_LENGTH;
}
LOG.info("Finished spill " + numSpills);
++numSpills;
} finally {
if (out != null) out.close();
}
}
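The partition loop above can be reduced to a toy sketch: records sorted by (partition, key) are consumed in order, one partition at a time. The Meta class and spill method below are hypothetical stand-ins, not Hadoop types:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionLoop {
    // A stand-in for one meta record: just its partition and key.
    static class Meta {
        final int partition;
        final String key;
        Meta(int partition, String key) { this.partition = partition; this.key = key; }
    }

    // Sort like sorter.sort (partition first, then key), then consume the
    // sorted records one partition at a time, as sortAndSpill's loop does.
    static Map<Integer, List<String>> spill(List<Meta> metas, int partitions) {
        metas.sort(Comparator.<Meta>comparingInt(m -> m.partition)
                             .thenComparing(m -> m.key));
        Map<Integer, List<String>> segments = new LinkedHashMap<>();
        int spindex = 0;
        for (int i = 0; i < partitions; i++) {
            List<String> segment = new ArrayList<>();
            while (spindex < metas.size() && metas.get(spindex).partition == i) {
                segment.add(metas.get(spindex).key); // stands in for writer.append
                spindex++;
            }
            segments.put(i, segment);
        }
        return segments;
    }
}
```

Because the records are fully sorted first, each partition's while loop stops exactly at the boundary of the next partition, so a single pass over spindex covers all segments.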