An Analysis of the dlmalloc Implementation in uClibc

Doug Lea's malloc is a very popular memory allocator written in C. It was first written in 1987 by Doug Lea, a professor of computer science at the State University of New York at Oswego; many people call it Doug Lea's malloc, or simply dlmalloc.

Because it is efficient and has a small footprint, dlmalloc is widely used. In Doug Lea's own words: "it is used as the default malloc in some versions of Linux, is compiled into several widely used software packages, and has been used in various PC environments and embedded systems, as well as in many places I don't even know about."

uClibc uses dlmalloc by default, while GNU libc defaults to ptmalloc2, which was written by Wolfram Gloger and is based on dlmalloc.

Although dlmalloc has gone through many versions, two core elements of the malloc algorithm have never changed: boundary tags and binning.
Boundary Tags
Chunks of memory carry around with them size information fields both before and after the chunk. This allows for two important capabilities:
  • Two bordering unused chunks can be coalesced into one larger chunk. This minimizes the number of unusable small chunks.
  • All chunks can be traversed starting from any known chunk in either a forward or backward direction.

The original versions implemented boundary tags exactly in this fashion. More recent versions omit trailer fields on chunks that are in use by the program. This is itself a minor trade-off: The fields are not ever used while chunks are active so need not be present. Eliminating them decreases overhead and wastage. However, lack of these fields weakens error detection a bit by making it impossible to check if users mistakenly overwrite fields that should have known values.
Binning
Available chunks are maintained in bins, grouped by size. There are a surprisingly large number (128) of fixed-width bins, approximately logarithmically spaced in size. Bins for sizes less than 512 bytes each hold only exactly one size (spaced 8 bytes apart, simplifying enforcement of 8-byte alignment). Searches for available chunks are processed in smallest-first,  best-fit order. As shown by Wilson et al, best-fit schemes (of various kinds and approximations) tend to produce the least fragmentation on real loads compared to other general approaches such as first-fit.

Until the versions released in 1995, chunks were left unsorted within bins, so that the best-fit strategy was only approximate. More recent versions instead sort chunks by size within bins, with ties broken by an oldest-first rule. (This was done after finding that the minor time investment was worth it to avoid observed bad cases.)
Thus, the general categorization of this algorithm is  best-first with coalescing: Freed chunks are coalesced with neighboring ones, and held in bins that are searched in size order.
Memory, once it has been free()'d, is stored in linked lists called bins. They are sorted by size to allow the quickest way of finding a given chunk for retrieval. That is to say, when you free() memory, it doesn't actually get returned to the operating system; rather, it potentially gets defragmented, coalesced, and stored on a linked list in a bin, to be retrieved for a later allocation.
The bins, if you recall, are arrays of pointers to linked lists. There are essentially two types of bin: fastbins and 'normal' bins. Chunks of memory considered for use in fastbins are small (the default maximum size is sixty bytes, with a configurable maximum of eighty); they are not coalesced with surrounding chunks on free(), they are not sorted, and they use singly linked lists instead of doubly linked ones. The data structure of the block is still the same as with 'normal' blocks; only their representation and use differ.
Because the chunks are not consolidated, accessing them is dramatically quicker than accessing a normal chunk; essentially, fastbins trade increased fragmentation for speed.

The publicly released sources are at http://gee.cs.oswego.edu/pub/misc/; the design notes in the malloc-2.5.1.c file are well worth reading.

The version analyzed in this article is 2.7.2; the code lives in libc\stdlib\malloc-standard. Let's begin with the malloc.c file:

struct malloc_chunk {
  size_t      prev_size;  /* Size of previous chunk (if free).  */     // size of the previous chunk; for mmapped chunks, the leftover space in front after alignment; offset 0
  size_t      size;       /* Size in bytes, including overhead. */     // size of the current chunk plus some flag bits, header included; offset 4
  struct malloc_chunk* fd;         /* double links -- used only if free. */   // forward link: the next free chunk on the list; offset 8
  struct malloc_chunk* bk;                                                    // backward link: the previous free chunk on the list; offset 12
};
The chunk header: exactly 16 bytes (the offsets above assume a 32-bit target).

/*
  Fastbins
    An array of lists holding recently freed small chunks.  Fastbins
    are not doubly linked.  It is faster to single-link them, and
    since chunks are never removed from the middles of these lists,
    double linking is not necessary. Also, unlike regular bins, they
    are not even processed in FIFO order (they use faster LIFO) since
    ordering doesn't much matter in the transient contexts in which
    fastbins are normally used.
    Chunks in fastbins keep their inuse bit set, so they cannot
    be consolidated with other free chunks. __malloc_consolidate
    releases all chunks in fastbins and consolidates them with
    other free chunks.
*/
typedef struct malloc_chunk* mfastbinptr;

#define get_malloc_state() (&(__malloc_state))

struct malloc_state __malloc_state;  /* never directly referenced */         // a global static uninitialized variable: it lives in the BSS segment and is zero-filled at load time

struct malloc_state {
  /* The maximum chunk size to be eligible for fastbin */
  size_t   max_fast;   /* low 2 bits used as flags */               // largest chunk size served from the fastbins
  /* Fastbins */
  mfastbinptr       fastbins[NFASTBINS];                 // the fast bins
  /* Base of the topmost chunk -- not otherwise kept in a bin */
  mchunkptr         top;                      // always points to the top chunk obtained via MORECORE
  /* The remainder from the most recent split of a small request */
  mchunkptr         last_remainder;       // the remainder chunk left over from the most recent small-bin split
  /* Normal bins packed as described above */
  mchunkptr         bins[NBINS * 2];                       // the normal bins
  /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
  unsigned int     binmap[BINMAPSIZE+1];
  /* Tunable parameters */
  unsigned long     trim_threshold;           // trim threshold: above it, memory is released back to the system
  size_t  top_pad;                           // extra pad added to each brk request; the result is always page-aligned
  size_t  mmap_threshold;             // requests larger than this use mmap
  /* Memory map support */
  int              n_mmaps;                // current number of mmapped regions
  int              n_mmaps_max;         // maximum number of simultaneous mmapped regions allowed
  int              max_n_mmaps;        // high-water mark of n_mmaps
  /* Cache malloc_getpagesize */
  unsigned int     pagesize;            // the page size currently used by the kernel
  /* Track properties of MORECORE */
  unsigned int     morecore_properties;
  /* Statistics */
  size_t  mmapped_mem;
  size_t  sbrked_mem;
  size_t  max_sbrked_mem;
  size_t  max_mmapped_mem;
  size_t  max_total_mem;
};

void* malloc(size_t bytes)
{
    mstate av;
    size_t nb;               /* normalized request size */
    unsigned int    idx;              /* associated bin index */
    mbinptr         bin;              /* associated bin */
    mfastbinptr*    fb;               /* associated fastbin */
    mchunkptr       victim;           /* inspected/selected chunk */
    size_t size;             /* its size */
    int             victim_index;     /* its bin index */
    mchunkptr       remainder;        /* remainder from a split */
    unsigned long    remainder_size;   /* its size */
    unsigned int    block;            /* bit map traverser */
    unsigned int    bit;              /* bit map traverser */
    unsigned int    map;              /* current word of binmap */
    mchunkptr       fwd;              /* misc temp for linking */
    mchunkptr       bck;              /* misc temp for linking */
    void *          sysmem;
    void *          retval;
#if !defined(__MALLOC_GLIBC_COMPAT__)
    if (!bytes) {
        __set_errno(ENOMEM);
        return NULL;
    }
#endif
    __MALLOC_LOCK;
    av = get_malloc_state();
    /*
       Convert request size to internal form by adding (sizeof(size_t)) bytes
       overhead plus possibly more to obtain necessary alignment and/or
       to obtain a size of at least MINSIZE, the smallest allocatable
       size. Also, checked_request2size traps (returning 0) request sizes
       that are so large that they wrap around zero when padded and
       aligned.
       */
    checked_request2size(bytes, nb);   // (bytes + 4 + 7) & ~7: the result is 8-byte aligned, with a minimum of 16
    /*
       Bypass search if no frees yet
       */
    if (!have_anychunks(av)) {
    if (av->max_fast == 0) /* initialization check */   // not yet initialized: this is the first call
        __malloc_consolidate(av);
    goto use_top;
    }
    /*
        If the size qualifies as a fastbin, first check corresponding bin .
       */
    if ((unsigned long)(nb) <= (unsigned long)(av->max_fast)) {      // a request of at most max_fast (72 bytes here) is served from the fastbins
    fb = &(av->fastbins[(fastbin_index(nb))]);                     // #define fastbin_index(sz) ((((unsigned int)(sz)) >> 3) - 2). Why shift right by 3? Chunks are 8-byte aligned. Why subtract 2? The minimum nb is 16, so without it bin 0 would never be reached. This picks the bin matching the size.
    if ( (victim = *fb) != 0) {   // the bin is not empty
        *fb = victim->fd;    // make the bin head point to the next chunk on the list
        check_remalloced_chunk(victim, nb);
        retval = chunk2mem(victim);     // return the first chunk in the bin
        goto DONE;
    }
    }
    /*
       If a small request, check regular bin.  Since these "smallbins"
       hold one size each, no searching within bins is necessary.
       (For a large request, we need to wait until unsorted chunks are
       processed to find best fit. But for small ones, fits are exact
       anyway, so we can check now, which is faster.)
       */
    if (in_smallbin_range(nb)) {             // small bins: effectively 80 <= nb < 256 bytes here, since the fastbins already handled nb <= 72
    idx = smallbin_index(nb);                 // #define smallbin_index(sz) (((unsigned)(sz)) >> 3): shift right by 3 because of the 8-byte alignment; the smallest idx here is 10
    bin = bin_at(av,idx);       // #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - ((sizeof(size_t))<<1))). Why shift i left by 1? Normal bins are doubly linked, so each bin needs two slots, fd (forward) and bk (back). The subtraction makes the bin overlay a malloc_chunk whose fd and bk fields sit at offsets 8 and 12, which is also why bins[0] itself is never indexed.

    if ( (victim = last(bin)) != bin) {    // #define last(b) ((b)->bk): the bin is not empty
        bck = victim->bk;
        set_inuse_bit_at_offset(victim, nb);
        bin->bk = bck;
        bck->fd = bin;
        check_malloced_chunk(victim, nb);
        retval = chunk2mem(victim);   // return the first chunk in this bin
        goto DONE;
    }
    }
    /* If this is a large request, consolidate fastbins before continuing.
       While it might look excessive to kill all fastbins before
       even seeing if there is space available, this avoids
       fragmentation problems normally associated with fastbins.
       Also, in practice, programs tend to have runs of either small or
       large requests, but less often mixtures, so consolidation is not
       invoked all that often in most programs. And the programs that
       it is called frequently in otherwise tend to fragment.
       */
    else {
    idx = __malloc_largebin_index(nb);
    if (have_fastchunks(av))         // if there are fastbin chunks, consolidate them first
        __malloc_consolidate(av);   // free() never consolidates fastbins, so it happens here: merged chunks are placed into the normal bins, where they may satisfy this request
    }
    /*
       Process recently freed or remaindered chunks, taking one only if
       it is exact fit, or, if this a small request, the chunk is remainder from
       the most recent non-exact fit.  Place other traversed chunks in
       bins.  Note that this step is the only place in any routine where
       chunks are placed in bins.
       */
// Here the unsorted bin is tidied up: each chunk in it is moved into the bin matching its size
    while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {                // #define unsorted_chunks(M) (bin_at(M, 1)). As noted above, the smallest normal-bin index reached via smallbin_index is 10; the remaining indices serve other purposes, and index 1 holds the unsorted chunks
    bck = victim->bk;
    size = chunksize(victim);    // size of the first chunk on the unsorted list
    /* If a small request, try to use last remainder if it is the
       only chunk in unsorted bin.  This helps promote locality for
       runs of consecutive small requests. This is the only
       exception to best-fit, and applies only when there is
       no exact fit for a small chunk.
       */
    if (in_smallbin_range(nb) &&
        bck == unsorted_chunks(av) &&        // this is the only chunk in the unsorted bin
        victim == av->last_remainder &&
        (unsigned long)(size) > (unsigned long)(nb + MINSIZE)) {
        /* split and reattach remainder */
        remainder_size = size - nb;
        remainder = chunk_at_offset(victim, nb);
        unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
        av->last_remainder = remainder;
        remainder->bk = remainder->fd = unsorted_chunks(av);
        set_head(victim, nb | PREV_INUSE);
        set_head(remainder, remainder_size | PREV_INUSE);
        set_foot(remainder, remainder_size);      // record the remainder's size in the prev_size field of the following chunk, marking the remainder as free
        check_malloced_chunk(victim, nb);
        retval = chunk2mem(victim);
        goto DONE;
    }
    /* remove from unsorted list */
    unsorted_chunks(av)->bk = bck;                // remove this chunk from the unsorted doubly linked list
    bck->fd = unsorted_chunks(av);
    /* Take now instead of binning if exact fit */
    if (size == nb) {           // an exact fit: return this chunk directly
        set_inuse_bit_at_offset(victim, size);             // #define set_inuse_bit_at_offset(p, s) (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE): sets PREV_INUSE in the adjacent next chunk, marking this chunk as in use
        check_malloced_chunk(victim, nb);
        retval = chunk2mem(victim);
        goto DONE;
    }
    /* place chunk in bin */             // sort this unsorted chunk into the bin of the matching size
    if (in_smallbin_range(size)) {    // into a small bin
        victim_index = smallbin_index(size);
        bck = bin_at(av, victim_index);
        fwd = bck->fd;
    }
    else {
        victim_index = __malloc_largebin_index(size);
        bck = bin_at(av, victim_index);
        fwd = bck->fd;
        if (fwd != bck) {            // the bin is not empty
        /* if smaller than smallest, place first */
        if ((unsigned long)(size) < (unsigned long)(bck->bk->size)) {
            fwd = bck;
            bck = bck->bk;
        }
        else if ((unsigned long)(size) >=
            (unsigned long)(FIRST_SORTED_BIN_SIZE)) {
            /* maintain large bins in sorted order */
            size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
            while ((unsigned long)(size) < (unsigned long)(fwd->size))
            fwd = fwd->fd;
            bck = fwd->bk;
        }
        }
    }
    mark_bin(av, victim_index);
    victim->bk = bck;            // link the chunk into its bin
    victim->fd = fwd;
    fwd->bk = victim;
    bck->fd = victim;
    }


// Now allocate from the freshly sorted bins
    /*
       If a large request, scan through the chunks of current bin to
       find one that fits.  (This will be the smallest that fits unless
       FIRST_SORTED_BIN_SIZE has been changed from default.)  This is
       the only step where an unbounded number of chunks might be
       scanned without doing anything useful with them. However the
       lists tend to be short.
       */
    if (!in_smallbin_range(nb)) {       // a large request
    bin = bin_at(av, idx);
    for (victim = last(bin); victim != bin; victim = victim->bk) {
        size = chunksize(victim);
        if ((unsigned long)(size) >= (unsigned long)(nb)) {
        remainder_size = size - nb;
        unlink(victim, bck, fwd);
        /* Exhaust */
        if (remainder_size < MINSIZE)  {    // less than 16 bytes would remain
            set_inuse_bit_at_offset(victim, size);    // don't split; return the whole chunk
            check_malloced_chunk(victim, nb);
            retval = chunk2mem(victim);
            goto DONE;
        }
        /* Split */
        else {
            remainder = chunk_at_offset(victim, nb);
            unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;     // the split-off remainder goes into the unsorted bin
            remainder->bk = remainder->fd = unsorted_chunks(av);
            set_head(victim, nb | PREV_INUSE);
            set_head(remainder, remainder_size | PREV_INUSE);
            set_foot(remainder, remainder_size);
            check_malloced_chunk(victim, nb);
            retval = chunk2mem(victim);
            goto DONE;
        }
        }
    }
    }


// No free chunk was found in the bin of the matching size, so search the bins of larger sizes
    /*
       Search for a chunk by scanning bins, starting with next largest
       bin. This search is strictly by best-fit; i.e., the smallest
       (with ties going to approximately the least recently used) chunk
       that fits is selected.
       The bitmap avoids needing to check that most blocks are nonempty.
       */
    ++idx;    // move up one bin size
    bin = bin_at(av,idx);
    block = idx2block(idx);
    map = av->binmap[block];
    bit = idx2bit(idx);
    for (;;) {
    /* Skip rest of block if there are no more set bits in this block.  */
    if (bit > map || bit == 0) {
        do {
        if (++block >= BINMAPSIZE)  /* out of bins */
            goto use_top;
        } while ( (map = av->binmap[block]) == 0);
        bin = bin_at(av, (block << BINMAPSHIFT));
        bit = 1;
    }
    /* Advance to bin with set bit. There must be one. */
    while ((bit & map) == 0) {
        bin = next_bin(bin);
        bit <<= 1;
        assert(bit != 0);
    }
    /* Inspect the bin. It is likely to be non-empty */
    victim = last(bin);
    /*  If a false alarm (empty bin), clear the bit. */
    if (victim == bin) {
        av->binmap[block] = map &= ~bit; /* Write through */
        bin = next_bin(bin);
        bit <<= 1;
    }
    else {
        size = chunksize(victim);
        /*  We know the first chunk in this bin is big enough to use. */
        assert((unsigned long)(size) >= (unsigned long)(nb));
        remainder_size = size - nb;
        /* unlink */
        bck = victim->bk;
        bin->bk = bck;
        bck->fd = bin;
        /* Exhaust */
        if (remainder_size < MINSIZE) {
        set_inuse_bit_at_offset(victim, size);
        check_malloced_chunk(victim, nb);
        retval = chunk2mem(victim);
        goto DONE;
        }
        /* Split */
        else {
        remainder = chunk_at_offset(victim, nb);
        unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
        remainder->bk = remainder->fd = unsorted_chunks(av);
        /* advertise as last remainder */
        if (in_smallbin_range(nb))
            av->last_remainder = remainder;
        set_head(victim, nb | PREV_INUSE);
        set_head(remainder, remainder_size | PREV_INUSE);
        set_foot(remainder, remainder_size);
        check_malloced_chunk(victim, nb);
        retval = chunk2mem(victim);
        goto DONE;
        }
    }
    }
use_top:     // use the top chunk: 1. the first malloc call; 2. all bins are empty; 3. the request is too large
    /*
       If large enough, split off the chunk bordering the end of memory
       (held in av->top). Note that this is in accord with the best-fit
       search rule.  In effect, av->top is treated as larger (and thus
       less well fitting) than any other available chunk since it can
       be extended to be as large as necessary (up to system
       limitations).
       We require that av->top always exists (i.e., has size >=
       MINSIZE) after initialization, so if it would otherwise be
       exhausted by current request, it is replenished. (The main
       reason for ensuring it exists is that we may need MINSIZE space
       to put in fenceposts in sysmalloc.)
       */
    victim = av->top;    // the top chunk
    size = chunksize(victim);
    if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {    // big enough to split
    remainder_size = size - nb;      // what remains after the split
    remainder = chunk_at_offset(victim, nb);
    av->top = remainder;     // point top at the remainder
    set_head(victim, nb | PREV_INUSE);       // update the header
    set_head(remainder, remainder_size | PREV_INUSE);
    check_malloced_chunk(victim, nb);
    retval = chunk2mem(victim);     // return the address just past the header, an 8-byte offset
    goto DONE;
    }
    /* If no space in top, relay to handle system-dependent cases */
    sysmem = __malloc_alloc(nb, av);       // not enough memory left: request more from the system
    retval = sysmem;
DONE:
    __MALLOC_UNLOCK;
    return retval;
}
/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
#define PREV_INUSE 0x1
/* extract inuse bit of previous chunk */
#define prev_inuse(p)       ((p)->size & PREV_INUSE)
/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
#define IS_MMAPPED 0x2
/* check for mmap()'ed chunk */
#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
/* Get size, ignoring use bits */
#define chunksize(p)         ((p)->size & ~(SIZE_BITS))
Bit 0 of the size field (PREV_INUSE) records whether the previous adjacent chunk is in use; bit 1 (IS_MMAPPED) records whether this chunk was obtained via mmap rather than sbrk.

#define checked_request2size(req, sz)                             \
  if (REQUEST_OUT_OF_RANGE(req)) {                                \
    errno = ENOMEM;                                               \
    return 0;                                                     \
  }                                                               \
  (sz) =   request2size (req);

#define request2size(req)                                         \
  (((req) + (sizeof(size_t)) + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
   MINSIZE :                                                      \
   ((req) + (sizeof(size_t)) + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

#ifndef MALLOC_ALIGNMENT
#define   MALLOC_ALIGNMENT        (2 * (sizeof(size_t)))                           // 8 on 32-bit targets
#endif
/* The corresponding bit mask value */
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)

static void*   __malloc_alloc (size_t nb, mstate av)
{
    mchunkptr       old_top;        /* incoming value of av->top */
    size_t old_size;       /* its size */
    char*           old_end;        /* its end address */
    long            size;           /* arg to first MORECORE or mmap call */
    char*           fst_brk;        /* return value from MORECORE */
    long            correction;     /* arg to 2nd MORECORE call */
    char*           snd_brk;        /* 2nd return val */
    size_t front_misalign; /* unusable bytes at front of new space */
    size_t end_misalign;   /* partial page left at end of new space */
    char*           aligned_brk;    /* aligned offset into brk */
    mchunkptr       p;              /* the allocated/returned chunk */
    mchunkptr       remainder;      /* remainder from allocation */
    unsigned long    remainder_size; /* its size */
    unsigned long    sum;            /* for updating stats */
    size_t          pagemask  = av->pagesize - 1;
    /*
       If there is space available in fastbins, consolidate and retry
       malloc from scratch rather than getting memory from system.  This
       can occur only if nb is in smallbin range so we didn't consolidate
       upon entry to malloc. It is much easier to handle this case here
       than in malloc proper.
       */
    if (have_fastchunks(av)) {          // check whether any fastbin still holds chunks
    assert(in_smallbin_range(nb));
    __malloc_consolidate(av);     // if so, run the consolidation pass
    return malloc(nb - MALLOC_ALIGN_MASK);  // and retry the allocation after merging
    }

// Otherwise memory must be requested from the system
    /*
       If have mmap, and the request size meets the mmap threshold, and
       the system supports mmap, and there are few enough currently
       allocated mmapped regions, try to directly map this request
       rather than expanding top.
       */
    if ((unsigned long)(nb) >= (unsigned long)(av->mmap_threshold) &&
        (av->n_mmaps < av->n_mmaps_max)) {          // allocate via mmap; the default threshold is 256 * 1024, and n_mmaps counts the current mmapped regions
    char* mm;             /* return value from mmap call*/
    /*
       Round up size to nearest page.  For mmapped chunks, the overhead
       is one (sizeof(size_t)) unit larger than for normal chunks, because there
       is no following chunk whose prev_size field could be used.
       */
    size = (nb + (sizeof(size_t)) + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
    /* Don't try if size wraps around 0 */
    if ((unsigned long)(size) > (unsigned long)(nb)) {
        mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE));
        if (mm != (char*)(MORECORE_FAILURE)) {
        /*
           The offset to the start of the mmapped region is stored
           in the prev_size field of the chunk. This allows us to adjust
           returned start address to meet alignment requirements here
           and in memalign(), and still be able to compute proper
           address argument for later munmap in free() and realloc().
           */
        front_misalign = (size_t)chunk2mem(mm) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {       // the start of the mmapped region is not 8-byte aligned
            correction = MALLOC_ALIGNMENT - front_misalign;
            p = (mchunkptr)(mm + correction);
            p->prev_size = correction;      // record the leftover space in front after alignment
            set_head(p, (size - correction) |IS_MMAPPED);      // mark the chunk as mmapped
        }
        else {
            p = (mchunkptr)mm;
            p->prev_size = 0;
            set_head(p, size|IS_MMAPPED);
        }
        /* update statistics */
        if (++av->n_mmaps > av->max_n_mmaps)
            av->max_n_mmaps = av->n_mmaps;
        sum = av->mmapped_mem += size;
        if (sum > (unsigned long)(av->max_mmapped_mem))
            av->max_mmapped_mem = sum;
        sum += av->sbrked_mem;
        if (sum > (unsigned long)(av->max_total_mem))
            av->max_total_mem = sum;
        check_chunk(p);
        return chunk2mem(p);         // skip past the 8-byte header
        }
    }
    }
    /* Record incoming configuration of top */
    old_top  = av->top;
    old_size = chunksize(old_top);
    old_end  = (char*)(chunk_at_offset(old_top, old_size));
    fst_brk = snd_brk = (char*)(MORECORE_FAILURE);
    /* If not the first time through, we require old_size to
     * be at least MINSIZE and to have prev_inuse set.  */
    assert((old_top == initial_top(av) && old_size == 0) ||
        ((unsigned long) (old_size) >= MINSIZE &&
         prev_inuse(old_top)));
    /* Precondition: not enough current space to satisfy nb request */
    assert((unsigned long)(old_size) < (unsigned long)(nb + MINSIZE));
    /* Precondition: all fastbins are consolidated */
    assert(!have_fastchunks(av));
    /* Request enough space for nb + pad + overhead */
    size = nb + av->top_pad + MINSIZE;             // av->top_pad is extra memory requested each time, mainly for systems where MORECORE is very slow; Linux defaults it to 0
    /*
       If contiguous, we can subtract out existing space that we hope to
       combine with new space. We add it back later only if
       we don't actually get contiguous space.
       */
    if (contiguous(av))  // if the new space will be contiguous, subtract the top chunk's size: the two can be merged, and the merged block will satisfy the request
    size -= old_size;
    /*
       Round to a multiple of page size.
       If MORECORE is not contiguous, this ensures that we only call it
       with whole-page arguments.  And if MORECORE is contiguous and
       this is not first time through, this preserves page-alignment of
       previous calls. Otherwise, we correct to page-align below.
       */
    size = (size + pagemask) & ~pagemask;           // the break address must be page-aligned
    /*
       Don't try to call MORECORE if argument is so big as to appear
       negative. Note that since mmap takes size_t arg, it may succeed
       below even if we cannot call MORECORE.
       */
    if (size > 0)
    fst_brk = (char*)(MORECORE(size));          // the first break address, i.e. the old break
    /*
       If have mmap, try using it as a backup when MORECORE fails or
       cannot be used. This is worth doing on systems that have "holes" in
       address space, so sbrk cannot extend to give contiguous space, but
       space is available elsewhere.  Note that we ignore mmap max count
       and threshold limits, since the space will not be used as a
       segregated mmap region.
       */
    if (fst_brk == (char*)(MORECORE_FAILURE)) {    // MORECORE failed
    /* Cannot merge with old top, so add its size back in */
    if (contiguous(av))      // we will fall back to mmap, so the new space cannot merge with the top chunk: add old_size back in
        size = (size + old_size + pagemask) & ~pagemask;
    /* If we are relying on mmap as backup, then use larger units */
    if ((unsigned long)(size) < (unsigned long)(MMAP_AS_MORECORE_SIZE))  // #define MMAP_AS_MORECORE_SIZE (1024 * 1024): use at least 1 MB when mmap backs up MORECORE
        size = MMAP_AS_MORECORE_SIZE;
    /* Don't try if size wraps around 0 */
    if ((unsigned long)(size) > (unsigned long)(nb)) {
        fst_brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE));   // fall back to mmap
        if (fst_brk != (char*)(MORECORE_FAILURE)) {    // the mmap succeeded
        /* We do not need, and cannot use, another sbrk call to find end */
        snd_brk = fst_brk + size;    // record the start and end of the new region
        /* Record that we no longer have a contiguous sbrk region.
           After the first time mmap is used as backup, we do not
           ever rely on contiguous space since this could incorrectly
           bridge regions.
           */
        set_noncontiguous(av);     // set the non-contiguous flag
        }
    }
    }
    if (fst_brk != (char*)(MORECORE_FAILURE)) {     // the allocation succeeded
    av->sbrked_mem += size;
    /*
       If MORECORE extends previous space, we can likewise extend top size.
       */
    if (fst_brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {      // if the new space is adjacent to the top chunk, merge it straight into top. Why might the addresses differ? If another thread, or foreign code, also called MORECORE(), the new space can end up non-contiguous
        set_head(old_top, (size + old_size) | PREV_INUSE);
    }
    /*
       Otherwise, make adjustments:
     * If the first time through or noncontiguous, we need to call sbrk
     just to find out where the end of memory lies.
     * We need to ensure that all returned chunks from malloc will meet
     MALLOC_ALIGNMENT
     * If there was an intervening foreign sbrk, we need to adjust sbrk
     request size to account for fact that we will not be able to
     combine new space with existing space in old_top.
     * Almost all systems internally allocate whole pages at a time, in
     which case we might as well use the whole last page of request.
     So we allocate enough more memory to hit a page boundary now,
     which in turn causes future contiguous calls to page-align.
     */
    else {
        front_misalign = 0;
        end_misalign = 0;
        correction = 0;
        aligned_brk = fst_brk;
        /*
           If MORECORE returns an address lower than we have seen before,
           we know it isn't really contiguous.  This and some subsequent
           checks help cope with non-conforming MORECORE functions and
           the presence of "foreign" calls to MORECORE from outside of
           malloc or by other threads.  We cannot guarantee to detect
           these in all cases, but cope with the ones we do detect.
           */
        if (contiguous(av) && old_size != 0 && fst_brk < old_end) {   // an address lower than before, like the mmap fallback, counts as non-contiguous
        set_noncontiguous(av);
        }
        /* handle contiguous cases */
        if (contiguous(av)) {    // a forward hole still counts as contiguous
        /* We can tolerate forward non-contiguities here (usually due
           to foreign calls) but treat them as part of our space for
           stats reporting.  */
        if (old_size != 0)
            av->sbrked_mem += fst_brk - old_end;    // add in the size of the hole
        /* Guarantee alignment of first new chunk made from this space */
        front_misalign = (size_t)chunk2mem(fst_brk) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {    // not 8-byte aligned
            /*
               Skip over some bytes to arrive at an aligned position.
               We don't need to specially mark these wasted front bytes.
               They will never be accessed anyway because
               prev_inuse of av->top (and any chunk created from its start)
               is always true after initialization.
               */
            correction = MALLOC_ALIGNMENT - front_misalign;
            aligned_brk += correction;
        }
        /*
           If this isn't adjacent to existing space, then we will not
           be able to merge with old_top space, so must add to 2nd request.
           */
        correction += old_size;
        /* Extend the end address to hit a page boundary */
        end_misalign = (size_t)(fst_brk + size + correction);
        correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
        assert(correction >= 0);
        snd_brk = (char*)(MORECORE(correction));
        if (snd_brk == (char*)(MORECORE_FAILURE)) {
            /*
               If can't allocate correction, try to at least find out current
               brk.  It might be enough to proceed without failing.
               */
            correction = 0;
            snd_brk = (char*)(MORECORE(0));
        }
        else if (snd_brk < fst_brk) {
            /*
               If the second call gives noncontiguous space even though
               it says it won't, the only course of action is to ignore
               results of second call, and conservatively estimate where
               the first call left us. Also set noncontiguous, so this
               won't happen again, leaving at most one hole.
               Note that this check is intrinsically incomplete.  Because
               MORECORE is allowed to give more space than we ask for,
               there is no reliable way to detect a noncontiguity
               producing a forward gap for the second call.
               */
            snd_brk = fst_brk + size;
            correction = 0;
            set_noncontiguous(av);
        }
        }
        /* handle non-contiguous cases */
        else {
        /* MORECORE/mmap must correctly align */
        assert(aligned_OK(chunk2mem(fst_brk)));
        /* Find out current end of memory */
        if (snd_brk == (char*)(MORECORE_FAILURE)) {
            snd_brk = (char*)(MORECORE(0));
            av->sbrked_mem += snd_brk - fst_brk - size;
        }
        }
        /* Adjust top based on results of second sbrk */
        if (snd_brk != (char*)(MORECORE_FAILURE)) {
        av->top = (mchunkptr)aligned_brk;       // the newly obtained region becomes the top chunk
        set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
        av->sbrked_mem += correction;
        /*
           If not the first time through, we either have a
           gap due to foreign sbrk or a non-contiguous region.  Insert a
           double fencepost at old_top to prevent consolidation with space
           we don't own. These fenceposts are artificial chunks that are
           marked as inuse and are in any case too small to use.  We need
           two to make sizes and alignments work out.
           */
// How holes are handled:
// A chunk's PREV_INUSE bit tells us whether the chunk before it is in use, so a
// hole in front is easy to handle: simply leave PREV_INUSE set for it. The chunk
// behind is the tricky one, because a chunk's own in-use status can only be read
// from the header of the chunk after it; enough trailing bytes must therefore be
// reserved to carry those markers (the fenceposts below).
        if (old_size != 0) {           // the old top chunk is not empty
            /* Shrink old_top to insert fenceposts, keeping size a
               multiple of MALLOC_ALIGNMENT. We know there is at least
               enough space in old_top to do this.
               */
            old_size = (old_size - 3*(sizeof(size_t))) & ~MALLOC_ALIGN_MASK;
            set_head(old_top, old_size | PREV_INUSE);  // shrink the old top chunk and mark the hole that follows
            /*
               Note that the following assignments completely overwrite
               old_top when old_size was previously MINSIZE.  This is
               intentional. We need the fencepost, even if old_top otherwise gets
               lost.
               */
            chunk_at_offset(old_top, old_size          )->size =    // first fencepost: mark the next "chunk" PREV_INUSE
            (sizeof(size_t))|PREV_INUSE;
            chunk_at_offset(old_top, old_size + (sizeof(size_t)))->size =   // second fencepost: also PREV_INUSE, so the first fencepost always reads as in use and is never coalesced
            (sizeof(size_t))|PREV_INUSE;
            /* If possible, release the rest, suppressing trimming.  */
            if (old_size >= MINSIZE) {
            size_t tt = av->trim_threshold;
            av->trim_threshold = (size_t)(-1);
            free(chunk2mem(old_top));                  // release what remains of the old top chunk
            av->trim_threshold = tt;
            }
        }
        }
    }
    /* Update statistics */
    sum = av->sbrked_mem;
    if (sum > (unsigned long)(av->max_sbrked_mem))
        av->max_sbrked_mem = sum;
    sum += av->mmapped_mem;
    if (sum > (unsigned long)(av->max_total_mem))
        av->max_total_mem = sum;
    check_malloc_state();
    /* finally, do the allocation */
    p = av->top;       // the chunk to hand back comes from top
    size = chunksize(p);
    /* check that one of the above allocation paths succeeded */
    if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {       // large enough: split the top chunk
        remainder_size = size - nb;
        remainder = chunk_at_offset(p, nb);
        av->top = remainder;
        set_head(p, nb | PREV_INUSE);
        set_head(remainder, remainder_size | PREV_INUSE);
        check_malloced_chunk(p, nb);
        return chunk2mem(p);
    }
    }
    /* catch all failure paths */
    errno = ENOMEM;
    return 0;
}
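The chunk layout that set_head, chunk2mem, and the fencepost writes above rely on can be sketched as follows. This is a simplified model for illustration, not uclibc's actual definitions: the name toy_chunk and the helper names are invented, but the arithmetic (a 2*sizeof(size_t) header offset, PREV_INUSE stored in the low bit of size) matches what the code depends on.

```c
#include <assert.h>
#include <stddef.h>

#define TOY_PREV_INUSE 0x1UL

/* A minimal boundary-tag header: prev_size is only meaningful when the
   previous chunk is free; the low bit of size carries PREV_INUSE. */
struct toy_chunk {
    size_t prev_size;
    size_t size;
};

/* User memory starts 2*sizeof(size_t) past the chunk base, and vice versa. */
static void *toy_chunk2mem(struct toy_chunk *p) {
    return (char *)p + 2 * sizeof(size_t);
}
static struct toy_chunk *toy_mem2chunk(void *mem) {
    return (struct toy_chunk *)((char *)mem - 2 * sizeof(size_t));
}

/* The size field with the flag bit masked off. */
static size_t toy_chunksize(struct toy_chunk *p) {
    return p->size & ~TOY_PREV_INUSE;
}
static int toy_prev_inuse(struct toy_chunk *p) {
    return (int)(p->size & TOY_PREV_INUSE);
}

/* The next physical chunk sits chunksize bytes further on. */
static struct toy_chunk *toy_next_chunk(struct toy_chunk *p) {
    return (struct toy_chunk *)((char *)p + toy_chunksize(p));
}
```

With this model, the double fencepost is just two consecutive headers whose size fields are (sizeof(size_t))|PREV_INUSE, so any traversal from old_top reads "in use" and coalescing stops there.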

/*
  MORECORE is the name of the routine to call to obtain more memory
  from the system.  See below for general guidance on writing
  alternative MORECORE functions, as well as a version for WIN32 and a
  sample version for pre-OSX macos.
*/
#ifndef MORECORE
#define MORECORE sbrk
#endif

/* Defined in brk.c.  */
extern void *__curbrk attribute_hidden;
/* Extend the process's data space by INCREMENT.
   If INCREMENT is negative, shrink data space by - INCREMENT.
   Return start of new space allocated, or -1 for errors.  */
void * sbrk (intptr_t increment)
{
    void *oldbrk;
    if (__curbrk == NULL)   // break position not known yet
    if (brk (NULL) < 0)    /* Initialize the break.  */
        return (void *) -1;
    if (increment == 0)      // an increment of 0 just returns the current break
    return __curbrk;
    oldbrk = __curbrk;
    if (brk (oldbrk + increment) < 0)        // a negative increment shrinks the data segment
    return (void *) -1;
    return oldbrk;        // return the previous break position
}
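The sbrk contract that MORECORE depends on (return the old break on success, (void *)-1 on failure, increment 0 as a pure query) can be mimicked over a static arena. This is a toy sketch with invented names (toy_arena, toy_morecore), not uclibc code, but it is the kind of replacement MORECORE the allocator is written to tolerate.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_ARENA_SIZE 4096
static char toy_arena[TOY_ARENA_SIZE];
static size_t toy_brk = 0;   /* current break, as an offset into the arena */

/* Mimics sbrk(): returns the old break on success, (void *)-1 on failure.
   An increment of 0 queries the current break without moving it. */
static void *toy_morecore(intptr_t increment) {
    if (increment < 0 && (size_t)(-increment) > toy_brk)
        return (void *)-1;                      /* cannot shrink below start */
    if (increment > 0 && toy_brk + (size_t)increment > TOY_ARENA_SIZE)
        return (void *)-1;                      /* arena exhausted */
    void *old = &toy_arena[toy_brk];
    toy_brk += increment;
    return old;
}
```

Note the asymmetry the real allocator also copes with: the caller learns the *old* break from a grow call and must call MORECORE(0) again to find out where the break actually ended up.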

void *__curbrk attribute_hidden = 0;
int brk (void *addr)
{
  void *newbrk;
  {
    register long int res __asm__ ("$2");
    __asm__ ("move\t$4,%2\n\t"
     "li\t%0,%1\n\t"
     "syscall"        /* Perform the system call.  */
     : "=r" (res)
     : "I" (__NR_brk), "r" (addr)
     : "$4", "$7", __SYSCALL_CLOBBERS);
    newbrk = (void *) res;                 // the address returned by the kernel is always page aligned
  }
  __curbrk = newbrk;               // record the current break position
  if (newbrk < addr)               // the break is rounded up, so on success newbrk >= addr
    {
      __set_errno (ENOMEM);
      return -1;
    }
  return 0;
}
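Both the end-misalignment correction in sbrk_alloc above (`(end_misalign + pagemask) & ~pagemask`) and the kernel's rounding of the break use the same round-up-to-a-page idiom. A minimal sketch, with the invented name round_up_page and the usual assumption that the page size is a power of two:

```c
#include <assert.h>
#include <stddef.h>

/* Round x up to the next multiple of pagesize (a power of two), using the
   same mask trick as sbrk_alloc: (x + pagemask) & ~pagemask. */
static size_t round_up_page(size_t x, size_t pagesize) {
    size_t pagemask = pagesize - 1;
    return (x + pagemask) & ~pagemask;
}
```

Adding pagemask pushes any value past the next boundary, and masking the low bits then snaps it back down to that boundary; values already aligned are unchanged.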


free.c

void attribute_hidden   __malloc_consolidate (mstate av)          // consolidate and release the chunks held in the fastbins
{
    mfastbinptr*    fb;                 /* current fastbin being consolidated */
    mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
    mchunkptr       p;                  /* current chunk being consolidated */
    mchunkptr       nextp;              /* next chunk to consolidate */
    mchunkptr       unsorted_bin;       /* bin header */
    mchunkptr       first_unsorted;     /* chunk to link to */
    /* These have same use as in free() */
    mchunkptr       nextchunk;
    size_t size;
    size_t nextsize;
    size_t prevsize;
    int             nextinuse;
    mchunkptr       bck;
    mchunkptr       fwd;
    /*
       If max_fast is 0, we know that av hasn't
       yet been initialized, in which case do so below
       */
    if (av->max_fast != 0) {
    clear_fastchunks(av);
    unsorted_bin = unsorted_chunks(av);
    /*
       Remove each chunk from fast bin and consolidate it, placing it
       then in unsorted bin. Among other reasons for doing this,
       placing in unsorted bin avoids needing to calculate actual bins
       until malloc is sure that chunks aren't immediately going to be
       reused anyway.
       */
    maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
    fb = &(av->fastbins[0]);
    do {
        if ( (p = *fb) != 0) {
        *fb = 0;      // detach the whole fastbin list at once
        do {
            check_inuse_chunk(p);
            nextp = p->fd;
            /* Slightly streamlined version of consolidation code in free() */
            size = p->size & ~PREV_INUSE;
            nextchunk = chunk_at_offset(p, size);
            nextsize = chunksize(nextchunk);
            if (!prev_inuse(p)) {   // merge with the previous chunk
            prevsize = p->prev_size;
            size += prevsize;
            p = chunk_at_offset(p, -((long) prevsize));
            unlink(p, bck, fwd);        // unlink the previous chunk from its bin: after merging it no longer belongs in the bin for its old size. Note a free chunk's header spans 16 bytes, including the fd and bk pointers
            }
            if (nextchunk != av->top) {     // the adjacent next chunk is not top
            nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
            set_head(nextchunk, nextsize);
            if (!nextinuse) {    // merge with the next chunk
                size += nextsize;
                unlink(nextchunk, bck, fwd);   // unlink the next chunk from its bin: after merging it no longer belongs in the bin for its old size
            }
            first_unsorted = unsorted_bin->fd;
            unsorted_bin->fd = p;              // put the merged chunk into the unsorted bin
            first_unsorted->bk = p;
            set_head(p, size | PREV_INUSE);
            p->bk = unsorted_bin;
            p->fd = first_unsorted;
            set_foot(p, size);
            }
            else {    // merge the chunk into the top chunk
            size += nextsize;
            set_head(p, size | PREV_INUSE);
            av->top = p;
            }
        } while ( (p = nextp) != 0);
        }
    } while (fb++ != maxfb);
    }
    else {
     malloc_init_state (av);               // first call: initialize the malloc state
    check_malloc_state();
    }
}
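The fastbin handling above boils down to a LIFO singly linked list: free() pushes at the front with no coalescing, and __malloc_consolidate detaches the whole list (`*fb = 0`) and then walks it, saving each fd pointer before the chunk is disturbed. A minimal sketch of just that list discipline, with invented names (toy_fast, toy_fastbin_push, toy_fastbin_drain):

```c
#include <assert.h>
#include <stddef.h>

/* A free chunk in a fastbin only needs a forward pointer. */
struct toy_fast {
    struct toy_fast *fd;
};

/* free(): push onto the front of the bin, no coalescing. */
static void toy_fastbin_push(struct toy_fast **bin, struct toy_fast *p) {
    p->fd = *bin;
    *bin = p;
}

/* __malloc_consolidate(): grab the whole list at once, then walk it,
   saving fd before "consolidating" each chunk. Returns the count drained. */
static int toy_fastbin_drain(struct toy_fast **bin) {
    struct toy_fast *p = *bin;
    int n = 0;
    *bin = NULL;
    while (p != NULL) {
        struct toy_fast *nextp = p->fd;  /* save before p is reused */
        n++;
        p = nextp;
    }
    return n;
}
```

Saving `nextp = p->fd` first mirrors the real code: once a chunk is merged and relinked into the unsorted bin, its fd field is overwritten, so the traversal pointer must be captured beforehand.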

static void malloc_init_state(mstate av)
{
    int     i;
    mbinptr bin;
    /* Establish circular links for normal bins */
    for (i = 1; i < NBINS; ++i) {
    bin =   bin_at (av,i);                     // #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - ((sizeof(size_t))<<1)))
    bin->fd = bin->bk = bin;
    }
    av->top_pad        = DEFAULT_TOP_PAD;
    av->n_mmaps_max    = DEFAULT_MMAP_MAX;
    av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    set_contiguous(av);
#else
    set_noncontiguous(av);
#endif
    set_max_fast(av,   DEFAULT_MXFAST );
    av->top            = initial_top(av);
    av->pagesize       = malloc_getpagesize;
}
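The normal bins are circular doubly linked lists whose empty state is a header pointing at itself, exactly what the loop in malloc_init_state establishes. A sketch of that invariant plus the two list operations the allocator uses (front insertion as in free(), and the unlink() splice); toy_bin and the function names are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

struct toy_bin {
    struct toy_bin *fd;
    struct toy_bin *bk;
};

/* malloc_init_state(): an empty bin points at itself. */
static void toy_bin_init(struct toy_bin *bin) {
    bin->fd = bin->bk = bin;
}

/* The insertion pattern from free(): put p at the front of the bin. */
static void toy_bin_push_front(struct toy_bin *bin, struct toy_bin *p) {
    struct toy_bin *bck = bin;
    struct toy_bin *fwd = bin->fd;
    p->bk = bck;
    p->fd = fwd;
    bck->fd = p;
    fwd->bk = p;
}

/* The unlink() macro: splice p out of whatever list it is on. */
static void toy_bin_unlink(struct toy_bin *p) {
    p->fd->bk = p->bk;
    p->bk->fd = p->fd;
}
```

Because the list is circular, neither insertion nor unlinking needs an empty-list special case; the self-pointing header absorbs both.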

void free(void* mem)
{
    mstate av;
    mchunkptr       p;           /* chunk corresponding to mem */
    size_t size;        /* its size */
    mfastbinptr*    fb;          /* associated fastbin */
    mchunkptr       nextchunk;   /* next contiguous chunk */
    size_t nextsize;    /* its size */
    int             nextinuse;   /* true if nextchunk is used */
    size_t prevsize;    /* size of previous contiguous chunk */
    mchunkptr       bck;         /* misc temp for linking */
    mchunkptr       fwd;         /* misc temp for linking */
    /* free(0) has no effect */
    if (mem == NULL)
    return;
    __MALLOC_LOCK;
    av = get_malloc_state();
    p = mem2chunk(mem);          // #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*(sizeof(size_t))))   step back 8 bytes to reach the header; an in-use chunk carries only this 8-byte header, unlike a free chunk in a bin
    size = chunksize(p);          //得到当前chunk的size
    check_inuse_chunk(p);
    /*
       If eligible, place chunk on a fastbin so it can be found
       and used quickly in malloc.
       */
    if ((unsigned long)(size) <= (unsigned long)(av->max_fast)        // small: drop into a fastbin without coalescing with neighbours
#if TRIM_FASTBINS
        /* If TRIM_FASTBINS set, don't place chunks
           bordering top into fastbins */
        && (chunk_at_offset(p, size) != av->top)
#endif
       ) {
    set_fastchunks(av);
    fb = &(av->fastbins[fastbin_index(size)]);
    p->fd = *fb;
    *fb = p;    // push onto the front of the fastbin (LIFO)
    }
    /*
       Consolidate other non-mmapped chunks as they arrive.
       */
    else if (!chunk_is_mmapped(p)) {
    set_anychunks(av);
    nextchunk = chunk_at_offset(p, size);
    nextsize = chunksize(nextchunk);
    /* consolidate backward */
    if (!prev_inuse(p)) {   // the adjacent previous chunk is free
        prevsize = p->prev_size;         // size of that previous chunk
        size += prevsize;
        p = chunk_at_offset(p, -((long) prevsize));  // start address of the previous chunk
        unlink(p, bck, fwd);   // unlink the previous chunk from its bin: after merging it no longer belongs in the bin for its old size. Note a free chunk's header spans 16 bytes, including the fd and bk pointers
    }
    if (nextchunk != av->top) {    // the adjacent next chunk is not top
        /* get and clear inuse bit */
        nextinuse = inuse_bit_at_offset(nextchunk, nextsize);  // the next chunk's in-use status lives in the header of the chunk after it
        set_head(nextchunk, nextsize);    // clear the next chunk's PREV_INUSE flag
        /* consolidate forward */
        if (!nextinuse) {    // the adjacent next chunk is free
        unlink(nextchunk, bck, fwd);    // unlink the next chunk from its bin
        size += nextsize;
        }
        /*
           Place the chunk in unsorted chunk list. Chunks are
           not placed into regular bins until after they have
           been given one chance to be used in malloc.
           */
     // put into the unsorted bin; malloc later sorts unsorted chunks into the correctly sized bins
        bck = unsorted_chunks(av);
        fwd = bck->fd;
        p->bk = bck;
        p->fd = fwd;
        bck->fd = p;
        fwd->bk = p;
        set_head(p, size | PREV_INUSE);
        set_foot(p, size);
        check_free_chunk(p);
    }
    /*
       If the chunk borders the current high end of memory,
       consolidate into top
       */
    else {         // the next chunk is top: merge into the top chunk
        size += nextsize;
        set_head(p, size | PREV_INUSE);
        av->top = p;
        check_chunk(p);
    }
    /*
       If freeing a large space, consolidate possibly-surrounding
       chunks. Then, if the total unused topmost memory exceeds trim
       threshold, ask malloc_trim to reduce top.
       Unless max_fast is 0, we don't know if there are fastbins
       bordering top, so we cannot tell for sure whether threshold
       has been reached unless fastbins are consolidated.  But we
       don't want to consolidate on each free.  As a compromise,
       consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
       is reached.
       */
    if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
        if (have_fastchunks(av))
        __malloc_consolidate(av);
        if ((unsigned long)(chunksize(av->top)) >=
            (unsigned long)(av->trim_threshold))
        __malloc_trim(av->top_pad, av);
    }
    }
    /*
       If the chunk was allocated via mmap, release via munmap()
       Note that if HAVE_MMAP is false but chunk_is_mmapped is
       true, then user must have overwritten memory. There's nothing
       we can do to catch this error unless DEBUG is set, in which case
       check_inuse_chunk (above) will have triggered error.
       */
    else {     // the chunk was obtained via mmap
    size_t offset = p->prev_size;      // bytes skipped in front of the chunk for alignment
    av->n_mmaps--;
    av->mmapped_mem -= (size + offset);
    munmap((char*)p - offset, size + offset);    // call munmap(void *addr, size_t length) directly; note size does not include offset
    }
    __MALLOC_UNLOCK;
}
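The backward-coalescing step in free() is pure boundary-tag arithmetic: read prev_size, step back that many bytes, and add the sizes. A self-contained sketch with invented names (demo_chunk, demo_coalesce_backward); the real code additionally unlinks the previous chunk from its bin, which is omitted here:

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_PREV_INUSE 0x1UL

struct demo_chunk {
    size_t prev_size;   /* valid only when the previous chunk is free */
    size_t size;        /* low bit: PREV_INUSE */
};

static size_t demo_chunksize(struct demo_chunk *p) {
    return p->size & ~DEMO_PREV_INUSE;
}

/* The backward-coalescing step from free(): if PREV_INUSE is clear, fold
   this chunk into the one before it and return the merged chunk. */
static struct demo_chunk *demo_coalesce_backward(struct demo_chunk *p) {
    if (p->size & DEMO_PREV_INUSE)
        return p;                                     /* nothing to merge */
    size_t prevsize = p->prev_size;
    size_t merged = demo_chunksize(p) + prevsize;
    p = (struct demo_chunk *)((char *)p - prevsize);  /* step back */
    p->size = merged | (p->size & DEMO_PREV_INUSE);   /* keep prev's flag */
    return p;
}
```

This is why a free chunk must carry its size both in its own header and as prev_size in the header of the chunk after it: the later chunk can find the start of its free predecessor in O(1).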

int malloc_trim(size_t pad)
{
  mstate av = get_malloc_state();
  __malloc_consolidate(av);
  return __malloc_trim(pad, av);
}

static int __malloc_trim(size_t pad, mstate av)        // shrink memory, i.e. give it back to the system; pad is the amount to keep
{
    long  top_size;        /* Amount of top-most memory */
    long  extra;           /* Amount to release */
    long  released;        /* Amount actually released */
    char* current_brk;     /* address returned by pre-check sbrk call */
    char* new_brk;         /* address returned by post-check sbrk call */
    size_t pagesz;
    pagesz = av->pagesize;
    top_size = chunksize(av->top);
    /* Release in pagesize units, keeping at least one page */
    extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;    // whole pages releasable after keeping pad + MINSIZE, less one page
    if (extra > 0) {
    /*
       Only proceed if end of memory is where we last set it.
       This avoids problems if there were foreign sbrk calls.
       */
    current_brk = (char*)(MORECORE(0));
    if (current_brk == (char*)(av->top) + top_size) {
        /*
           Attempt to release memory. We ignore MORECORE return value,
           and instead call again to find out where new end of memory is.
           This avoids problems if first call releases less than we asked,
           or if failure somehow altered brk value. (We could still
           encounter problems if it altered brk in some very bad way,
           but the only thing we can do is adjust anyway, which will cause
           some downstream failure.)
           */
        MORECORE(-extra);      // give the memory back
        new_brk = (char*)(MORECORE(0));
        if (new_brk != (char*)MORECORE_FAILURE) {
        released = (long)(current_brk - new_brk);
        if (released != 0) {
            /* Success. Adjust top. */
            av->sbrked_mem -= released;
            set_head(av->top, (top_size - released) | PREV_INUSE);
            check_malloc_state();
            return 1;
        }
        }
    }
    }
    return 0;
}
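The `extra` computation above is easy to sanity-check in isolation: reserve pad + MINSIZE bytes, round what remains up to pages, then keep one page in hand. A sketch with the invented name toy_trim_extra; MINSIZE is taken as 16 here purely for illustration (the real value depends on the platform):

```c
#include <assert.h>

#define TOY_MINSIZE 16   /* illustrative only; the real MINSIZE is platform-dependent */

/* The release computation from __malloc_trim: pages releasable from a top
   chunk of top_size bytes, keeping pad + TOY_MINSIZE plus one whole page. */
static long toy_trim_extra(long top_size, long pad, long pagesz) {
    return ((top_size - pad - TOY_MINSIZE + (pagesz - 1)) / pagesz - 1) * pagesz;
}
```

So a 12 KiB top chunk with pad 0 yields two releasable 4 KiB pages, while a top chunk of exactly one page yields nothing, which is why the code only proceeds when extra > 0.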

The code above should convey the overall logic of dlmalloc. Boundary tags let a chunk find its physical neighbours quickly and make coalescing cheap; binning lets malloc quickly find a chunk of the most suitable size, with free chunks filed into bins by size.
On free, very small chunks go straight into a fastbin without coalescing; larger chunks are merged with any adjacent free chunks. If the result borders top it is absorbed into the top chunk; otherwise it goes into the unsorted bin, which malloc later sorts into the correctly sized bins.