The allocation flow breaks down into the following steps:
1. Reuse a PooledByteBuf from the object pool.
2. Allocate the memory from the thread-local cache.
3. Allocate the memory from the arena (when the cache allocation fails, allocation falls back to the arena).
Back to the code that opened the previous section:
@Override
protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
    PoolThreadCache cache = threadCache.get();
    PoolArena<ByteBuffer> directArena = cache.directArena;
    ByteBuf buf;
    if (directArena != null) {
        buf = directArena.allocate(cache, initialCapacity, maxCapacity);
    } else {
        if (PlatformDependent.hasUnsafe()) {
            buf = UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity);
        } else {
            buf = new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
        }
    }
    return toLeakAwareBuffer(buf);
}
We walked through the first two lines earlier; now we turn to the call buf = directArena.allocate(cache, initialCapacity, maxCapacity):
PooledByteBuf<T> allocate(PoolThreadCache cache, int reqCapacity, int maxCapacity) {
    PooledByteBuf<T> buf = newByteBuf(maxCapacity);
    allocate(cache, buf, reqCapacity);
    return buf;
}
Part 1: Reusing a PooledByteBuf from the object pool
PooledByteBuf<T> buf = newByteBuf(maxCapacity); is the call that obtains the PooledByteBuf. Since we are analyzing DirectArena, we look at its implementation:
@Override
protected PooledByteBuf<ByteBuffer> newByteBuf(int maxCapacity) {
    if (HAS_UNSAFE) {
        return PooledUnsafeDirectByteBuf.newInstance(maxCapacity);
    } else {
        return PooledDirectByteBuf.newInstance(maxCapacity);
    }
}
Unsafe is available on most platforms, so execution usually enters PooledUnsafeDirectByteBuf.newInstance(maxCapacity):
static PooledUnsafeDirectByteBuf newInstance(int maxCapacity) {
    PooledUnsafeDirectByteBuf buf = RECYCLER.get();
    buf.reuse(maxCapacity);
    return buf;
}
These buffers are recyclable, which is why there is a RECYCLER. Here is its definition:
private static final Recycler<PooledUnsafeDirectByteBuf> RECYCLER = new Recycler<PooledUnsafeDirectByteBuf>() {
    @Override
    protected PooledUnsafeDirectByteBuf newObject(Handle<PooledUnsafeDirectByteBuf> handle) {
        return new PooledUnsafeDirectByteBuf(handle, 0);
    }
};
A handle is passed in here; it exposes a recycle method whose implementation is left to concrete subclasses:
public interface Handle<T> {
    void recycle(T object);
}
When the pool has no spare instance, newObject creates a fresh one. We won't walk through the Recycler internals here; they are straightforward enough to read on your own.
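To make the idea concrete, here is a minimal sketch of a Recycler-style thread-local object pool. The names are illustrative, not Netty's; the real Recycler also handles cross-thread recycling and per-thread capacity limits:

```java
import java.util.ArrayDeque;

// Minimal sketch of a thread-local object pool in the style of
// Netty's Recycler. Hypothetical names; the real implementation
// adds cross-thread recovery and capacity limits.
final class SimpleRecycler<T> {
    interface Factory<T> {
        T newObject(SimpleRecycler<T> recycler);
    }

    private final ThreadLocal<ArrayDeque<T>> pool =
            ThreadLocal.withInitial(ArrayDeque::new);
    private final Factory<T> factory;

    SimpleRecycler(Factory<T> factory) {
        this.factory = factory;
    }

    T get() {
        T obj = pool.get().poll();           // try to reuse a pooled instance
        return obj != null ? obj : factory.newObject(this);
    }

    void recycle(T obj) {
        pool.get().push(obj);                // return the instance for reuse
    }
}
```

Recycling an instance and calling get() again on the same thread hands back the very same object, which is exactly the behavior RECYCLER.get() relies on.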
The PooledUnsafeDirectByteBuf is then prepared for reuse by this line:
buf.reuse(maxCapacity);
Its source:
final void reuse(int maxCapacity) {
    maxCapacity(maxCapacity);
    setRefCnt(1);
    setIndex0(0, 0);
    discardMarks();
}
First the indices and mark flags are reset to their initial values, for example setIndex0(0, 0):
final void setIndex0(int readerIndex, int writerIndex) {
    this.readerIndex = readerIndex;
    this.writerIndex = writerIndex;
}
and discardMarks():
final void discardMarks() {
    markedReaderIndex = markedWriterIndex = 0;
}
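Taken together, reuse() is just the reset half of the pooling contract: every piece of per-use state goes back to its defaults before the buffer is handed out again. A condensed sketch (field names mirror the real ones, but this is a simplified stand-in, not Netty's class):

```java
// Sketch of the reset-on-reuse step: all per-use state returns to
// its defaults before a pooled buffer is handed out again.
// Field names mirror Netty's, but this is a simplified stand-in.
final class BufState {
    int readerIndex, writerIndex;
    int markedReaderIndex, markedWriterIndex;
    int refCnt;
    int maxCapacity;

    void reuse(int maxCapacity) {
        this.maxCapacity = maxCapacity;
        refCnt = 1;                                // setRefCnt(1)
        readerIndex = writerIndex = 0;             // setIndex0(0, 0)
        markedReaderIndex = markedWriterIndex = 0; // discardMarks()
    }
}
```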
Part 2: Allocating from the cache and from the arena
Back to the source we started with:
PooledByteBuf<T> allocate(PoolThreadCache cache, int reqCapacity, int maxCapacity) {
    PooledByteBuf<T> buf = newByteBuf(maxCapacity);
    allocate(cache, buf, reqCapacity);
    return buf;
}
Part 1 covered PooledByteBuf<T> buf = newByteBuf(maxCapacity);. Next, let's see how the memory itself is allocated, i.e. allocate(cache, buf, reqCapacity):
private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity) {
    final int normCapacity = normalizeCapacity(reqCapacity);
    if (isTinyOrSmall(normCapacity)) { // capacity < pageSize
        int tableIdx;
        PoolSubpage<T>[] table;
        boolean tiny = isTiny(normCapacity);
        if (tiny) { // < 512
            if (cache.allocateTiny(this, buf, reqCapacity, normCapacity)) {
                // was able to allocate out of the cache so move on
                return;
            }
            tableIdx = tinyIdx(normCapacity);
            table = tinySubpagePools;
        } else {
            if (cache.allocateSmall(this, buf, reqCapacity, normCapacity)) {
                // was able to allocate out of the cache so move on
                return;
            }
            tableIdx = smallIdx(normCapacity);
            table = smallSubpagePools;
        }

        final PoolSubpage<T> head = table[tableIdx];

        /**
         * Synchronize on the head. This is needed as {@link PoolChunk#allocateSubpage(int)} and
         * {@link PoolChunk#free(long)} may modify the doubly linked list as well.
         */
        synchronized (head) {
            final PoolSubpage<T> s = head.next;
            if (s != head) {
                assert s.doNotDestroy && s.elemSize == normCapacity;
                long handle = s.allocate();
                assert handle >= 0;
                s.chunk.initBufWithSubpage(buf, handle, reqCapacity);
                if (tiny) {
                    allocationsTiny.increment();
                } else {
                    allocationsSmall.increment();
                }
                return;
            }
        }
        allocateNormal(buf, reqCapacity, normCapacity);
        return;
    }
    if (normCapacity <= chunkSize) {
        if (cache.allocateNormal(this, buf, reqCapacity, normCapacity)) {
            // was able to allocate out of the cache so move on
            return;
        }
        allocateNormal(buf, reqCapacity, normCapacity);
    } else {
        // Huge allocations are never served via the cache so just call allocateHuge
        allocateHuge(buf, reqCapacity);
    }
}
At first glance the method looks daunting, so let's start from one small piece that is actually very simple:
if (normCapacity <= chunkSize) {
    if (cache.allocateNormal(this, buf, reqCapacity, normCapacity)) {
        // was able to allocate out of the cache so move on
        return;
    }
    allocateNormal(buf, reqCapacity, normCapacity);
}
For now, just note that a block of normCapacity bytes is being allocated. The logic: if the allocation succeeds from the cache, return immediately; otherwise fall back to allocateNormal(buf, reqCapacity, normCapacity), which allocates from the arena.
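This cache-first, arena-fallback pattern repeats throughout the method, so it is worth isolating. A sketch with hypothetical Cache and Arena interfaces (these names are illustrative, not Netty's types):

```java
// The recurring pattern in allocate(): try the thread-local cache
// first, and fall back to the shared arena on a miss.
// Cache and Arena here are hypothetical stand-ins for
// PoolThreadCache and PoolArena.
interface Cache { boolean allocate(int capacity); }   // true on a cache hit
interface Arena { void allocateNormal(int capacity); }

final class TieredAlloc {
    static String allocate(Cache cache, Arena arena, int capacity) {
        if (cache.allocate(capacity)) {
            return "cache";                 // served from the thread-local cache
        }
        arena.allocateNormal(capacity);     // miss: fall back to the arena
        return "arena";
    }
}
```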
With that pattern understood, we can divide and conquer. Requests fall into four size classes: tiny, small, normal, and huge, and the method branches in exactly that order:
private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity) {
    final int normCapacity = normalizeCapacity(reqCapacity);
    if (isTinyOrSmall(normCapacity)) { // capacity < pageSize
        int tableIdx;
        PoolSubpage<T>[] table;
        boolean tiny = isTiny(normCapacity);
        if (tiny) { // < 512
            ...
        } else {
            ...
        }
        return;
    }
    if (normCapacity <= chunkSize) {
        ...
    } else {
        // Huge allocations are never served via the cache so just call allocateHuge
        allocateHuge(buf, reqCapacity);
    }
}
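Before any branching, normalizeCapacity(reqCapacity) rounds the requested size up to an allocatable one. Roughly (glossing over the huge-size and overflow checks in the real method), tiny requests round up to a multiple of 16 and everything else rounds up to the next power of two:

```java
// Sketch of normalizeCapacity's rounding rules (Netty 4.1, before
// the SizeClasses rework): tiny requests (< 512) round up to a
// multiple of 16, larger requests round up to the next power of two.
final class Normalize {
    static int normalize(int reqCapacity) {
        if (reqCapacity >= 512) {
            int n = reqCapacity - 1;       // next power of two >= reqCapacity
            n |= n >>> 1;
            n |= n >>> 2;
            n |= n >>> 4;
            n |= n >>> 8;
            n |= n >>> 16;
            return n + 1;
        }
        return (reqCapacity & 15) == 0
                ? reqCapacity
                : (reqCapacity & ~15) + 16; // round up to a multiple of 16
    }
}
```

So a request for 1000 bytes becomes a 1024-byte allocation, and a request for 100 bytes becomes 112.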
The method first checks whether the request is tiny or small and runs the corresponding branch. If it is neither, it checks whether the request fits a normal allocation; anything larger is huge. Huge allocations never go through the cache; they are allocated directly.
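The size-class decision can be summarized in a few lines. A sketch using Netty 4.1's defaults (pageSize = 8192, chunkSize = 16 MiB; both are configurable, and the real checks live in PoolArena.isTiny/isTinyOrSmall):

```java
// Size-class decision applied to a normalized capacity, sketched
// with Netty 4.1 default sizes (pageSize = 8192, chunkSize = 16 MiB).
final class SizeClass {
    static final int PAGE_SIZE = 8192;
    static final int CHUNK_SIZE = 16 * 1024 * 1024;

    static String classify(int normCapacity) {
        if (normCapacity < PAGE_SIZE) {                       // isTinyOrSmall
            return normCapacity < 512 ? "tiny" : "small";     // isTiny
        }
        return normCapacity <= CHUNK_SIZE ? "normal" : "huge";
    }
}
```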
Let's dig into the isTinyOrSmall branch. The first part selects the subpage pool table and the index into it:
int tableIdx;
PoolSubpage<T>[] table;
boolean tiny = isTiny(normCapacity);
if (tiny) { // < 512
    if (cache.allocateTiny(this, buf, reqCapacity, normCapacity)) {
        // was able to allocate out of the cache so move on
        return;
    }
    tableIdx = tinyIdx(normCapacity);
    table = tinySubpagePools;
} else {
    if (cache.allocateSmall(this, buf, reqCapacity, normCapacity)) {
        // was able to allocate out of the cache so move on
        return;
    }
    tableIdx = smallIdx(normCapacity);
    table = smallSubpagePools;
}
final PoolSubpage<T> head = table[tableIdx];
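For reference, the index computations map a normalized capacity to a pool slot: tiny pools are spaced 16 bytes apart, and small pools double from 512 bytes up to half the page size. A sketch matching Netty 4.1's PoolArena.tinyIdx/smallIdx (check your version's source, as these moved in later releases):

```java
// Index computations for the subpage pool tables, as in Netty 4.1's
// PoolArena. Tiny pools step by 16 bytes; small pools double from 512.
final class PoolIdx {
    static int tinyIdx(int normCapacity) {
        return normCapacity >>> 4;       // one slot per 16-byte step
    }

    static int smallIdx(int normCapacity) {
        int tableIdx = 0;
        int i = normCapacity >>> 10;     // 512 -> 0, 1024 -> 1, 2048 -> 2, ...
        while (i != 0) {
            i >>>= 1;
            tableIdx++;
        }
        return tableIdx;
    }
}
```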
Next it tries to allocate from the subpage pool just selected, table[tableIdx]. If the pool's list is non-empty, a subpage allocation succeeds and the method returns right away; note that it still distinguishes tiny from small when updating the counters:
/**
 * Synchronize on the head. This is needed as {@link PoolChunk#allocateSubpage(int)} and
 * {@link PoolChunk#free(long)} may modify the doubly linked list as well.
 */
synchronized (head) {
    final PoolSubpage<T> s = head.next;
    if (s != head) {
        assert s.doNotDestroy && s.elemSize == normCapacity;
        long handle = s.allocate();
        assert handle >= 0;
        s.chunk.initBufWithSubpage(buf, handle, reqCapacity);
        if (tiny) {
            allocationsTiny.increment();
        } else {
            allocationsSmall.increment();
        }
        return;
    }
}
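The s != head check works because each pool uses a sentinel head node in a circular doubly linked list: head.next == head means the pool is empty. A minimal sketch of that pattern (illustrative, not Netty's PoolSubpage):

```java
// Sketch of the sentinel-head pattern behind the subpage pools:
// head is a dummy node, so head.next != head means the pool is
// non-empty. Illustrative only, not Netty's PoolSubpage.
final class SubpageNode {
    SubpageNode prev = this, next = this;   // circular, self-linked sentinel

    void addAfterHead(SubpageNode head) {
        prev = head;
        next = head.next;
        head.next.prev = this;
        head.next = this;
    }

    void remove() {
        prev.next = next;
        next.prev = prev;
        prev = next = this;
    }
}
```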
If the pool is empty, it falls back to a normal allocation and returns:
allocateNormal(buf, reqCapacity, normCapacity);
return;
Simple as that.