Understanding the Glibc Heap Manager (VIII — malloc from the Source Code)

This article is a personal study note and may contain mistakes; if you spot any, corrections are greatly appreciated.

Sources for any quoted images or text are credited at the end of the article (occasionally at the beginning).

The practical discussion of the glibc heap manager ptmalloc2 largely wrapped up in the previous parts, but I still felt my picture of its allocation mechanism was incomplete, so the final two parts analyze the allocation and deallocation rules directly from the source code.

Although my Ubuntu 18.04 ships glibc 2.27, after comparing against the sources I found the actual behavior closer to the official glibc 2.29 release, so the analysis below quotes the 2.29 source.

For readability, block quotes are used to mark the individual branches; unless a source is explicitly credited, everything inside them is my own writing.

Contents

Source code:

__libc_malloc:

Branch: if Tcache is used

_int_malloc:

Branch 1:

Branch 2: within the Fast Bin range

Branch 2.1: using Tcache

Branch 2.2: otherwise

Branch 3: within the Small Bin range

Branch 3.1: the list is non-empty

Branch 3.2: using Tcache

Branch 3.3: otherwise

Branch 4: within the Large Bin range

Branch 5: using Tcache

Branch 6: otherwise

Branch 6.1: the Unsorted Bin contains free chunks

Branch 6.1.1: nb in the Small Bin range & victim is the only chunk in the Unsorted Bin and is the last_remainder & its size is large enough to split (a remainder is left)

Branch 6.1.2: exact fit (no remainder)

Branch 6.1.3: otherwise

Branch 6.1.4: a chunk was stashed into the Tcache and the scan has hit its limit or finished

Branch 7: if nb falls in the Large Bin range

Branch 8: other bins

Branch 9: otherwise

Branch 9.1: if the Top Chunk is large enough

Branch 9.2: otherwise

References:


Source code:

void *
__libc_malloc (size_t bytes)
{
  mstate ar_ptr;
  void *victim;

  void *(*hook) (size_t, const void *)
    = atomic_forced_read (__malloc_hook);
  if (__builtin_expect (hook != NULL, 0))
    return (*hook)(bytes, RETURN_ADDRESS (0));
#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes;
  checked_request2size (bytes, tbytes);
  size_t tc_idx = csize2tidx (tbytes);

  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  if (tc_idx < mp_.tcache_bins
      /*&& tc_idx < TCACHE_MAX_BINS*/ /* to appease gcc */
      && tcache
      && tcache->entries[tc_idx] != NULL)
    {
      return tcache_get (tc_idx);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif

  if (SINGLE_THREAD_P)
    {
      victim = _int_malloc (&main_arena, bytes);
      assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
	      &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }

  arena_get (ar_ptr, bytes);

  victim = _int_malloc (ar_ptr, bytes);
  /* Retry with another arena only if we were able to find a usable arena
     before.  */
  if (!victim && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_malloc_retry, 1, bytes);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          ar_ptr == arena_for_chunk (mem2chunk (victim)));
  return victim;
}
static void *
_int_malloc (mstate av, size_t bytes)
{
  INTERNAL_SIZE_T nb;               /* normalized request size */
  unsigned int idx;                 /* associated bin index */
  mbinptr bin;                      /* associated bin */

  mchunkptr victim;                 /* inspected/selected chunk */
  INTERNAL_SIZE_T size;             /* its size */
  int victim_index;                 /* its bin index */

  mchunkptr remainder;              /* remainder from a split */
  unsigned long remainder_size;     /* its size */

  unsigned int block;               /* bit map traverser */
  unsigned int bit;                 /* bit map traverser */
  unsigned int map;                 /* current word of binmap */

  mchunkptr fwd;                    /* misc temp for linking */
  mchunkptr bck;                    /* misc temp for linking */

#if USE_TCACHE
  size_t tcache_unsorted_count;	    /* count of unsorted chunks processed */
#endif

  /*
     Convert request size to internal form by adding SIZE_SZ bytes
     overhead plus possibly more to obtain necessary alignment and/or
     to obtain a size of at least MINSIZE, the smallest allocatable
     size. Also, checked_request2size traps (returning 0) request sizes
     that are so large that they wrap around zero when padded and
     aligned.
   */

  checked_request2size (bytes, nb);

  /* There are no usable arenas.  Fall back to sysmalloc to get a chunk from
     mmap.  */
  if (__glibc_unlikely (av == NULL))
    {
      void *p = sysmalloc (nb, av);
      if (p != NULL)
	alloc_perturb (p, bytes);
      return p;
    }

  /*
     If the size qualifies as a fastbin, first check corresponding bin.
     This code is safe to execute even if av is not yet initialized, so we
     can try it without checking, which saves some time on this fast path.
   */

#define REMOVE_FB(fb, victim, pp)			\
  do							\
    {							\
      victim = pp;					\
      if (victim == NULL)				\
	break;						\
    }							\
  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
	 != victim);					\

  if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
    {
      idx = fastbin_index (nb);
      mfastbinptr *fb = &fastbin (av, idx);
      mchunkptr pp;
      victim = *fb;

      if (victim != NULL)
	{
	  if (SINGLE_THREAD_P)
	    *fb = victim->fd;
	  else
	    REMOVE_FB (fb, pp, victim);
	  if (__glibc_likely (victim != NULL))
	    {
	      size_t victim_idx = fastbin_index (chunksize (victim));
	      if (__builtin_expect (victim_idx != idx, 0))
		malloc_printerr ("malloc(): memory corruption (fast)");
	      check_remalloced_chunk (av, victim, nb);
#if USE_TCACHE
	      /* While we're here, if we see other chunks of the same size,
		 stash them in the tcache.  */
	      size_t tc_idx = csize2tidx (nb);
	      if (tcache && tc_idx < mp_.tcache_bins)
		{
		  mchunkptr tc_victim;

		  /* While bin not empty and tcache not full, copy chunks.  */
		  while (tcache->counts[tc_idx] < mp_.tcache_count
			 && (tc_victim = *fb) != NULL)
		    {
		      if (SINGLE_THREAD_P)
			*fb = tc_victim->fd;
		      else
			{
			  REMOVE_FB (fb, pp, tc_victim);
			  if (__glibc_unlikely (tc_victim == NULL))
			    break;
			}
		      tcache_put (tc_victim, tc_idx);
		    }
		}
#endif
	      void *p = chunk2mem (victim);
	      alloc_perturb (p, bytes);
	      return p;
	    }
	}
    }

  /*
     If a small request, check regular bin.  Since these "smallbins"
     hold one size each, no searching within bins is necessary.
     (For a large request, we need to wait until unsorted chunks are
     processed to find best fit. But for small ones, fits are exact
     anyway, so we can check now, which is faster.)
   */

  if (in_smallbin_range (nb))
    {
      idx = smallbin_index (nb);
      bin = bin_at (av, idx);

      if ((victim = last (bin)) != bin)
        {
          bck = victim->bk;
	  if (__glibc_unlikely (bck->fd != victim))
	    malloc_printerr ("malloc(): smallbin double linked list corrupted");
          set_inuse_bit_at_offset (victim, nb);
          bin->bk = bck;
          bck->fd = bin;

          if (av != &main_arena)
	    set_non_main_arena (victim);
          check_malloced_chunk (av, victim, nb);
#if USE_TCACHE
	  /* While we're here, if we see other chunks of the same size,
	     stash them in the tcache.  */
	  size_t tc_idx = csize2tidx (nb);
	  if (tcache && tc_idx < mp_.tcache_bins)
	    {
	      mchunkptr tc_victim;

	      /* While bin not empty and tcache not full, copy chunks over.  */
	      while (tcache->counts[tc_idx] < mp_.tcache_count
		     && (tc_victim = last (bin)) != bin)
		{
		  if (tc_victim != 0)
		    {
		      bck = tc_victim->bk;
		      set_inuse_bit_at_offset (tc_victim, nb);
		      if (av != &main_arena)
			set_non_main_arena (tc_victim);
		      bin->bk = bck;
		      bck->fd = bin;

		      tcache_put (tc_victim, tc_idx);
	            }
		}
	    }
#endif
          void *p = chunk2mem (victim);
          alloc_perturb (p, bytes);
          return p;
        }
    }

  /*
     If this is a large request, consolidate fastbins before continuing.
     While it might look excessive to kill all fastbins before
     even seeing if there is space available, this avoids
     fragmentation problems normally associated with fastbins.
     Also, in practice, programs tend to have runs of either small or
     large requests, but less often mixtures, so consolidation is not
     invoked all that often in most programs. And the programs that
     it is called frequently in otherwise tend to fragment.
   */

  else
    {
      idx = largebin_index (nb);
      if (atomic_load_relaxed (&av->have_fastchunks))
        malloc_consolidate (av);
    }

  /*
     Process recently freed or remaindered chunks, taking one only if
     it is exact fit, or, if this a small request, the chunk is remainder from
     the most recent non-exact fit.  Place other traversed chunks in
     bins.  Note that this step is the only place in any routine where
     chunks are placed in bins.

     The outer loop here is needed because we might not realize until
     near the end of malloc that we should have consolidated, so must
     do so and retry. This happens at most once, and only when we would
     otherwise need to expand memory to service a "small" request.
   */

#if USE_TCACHE
  INTERNAL_SIZE_T tcache_nb = 0;
  size_t tc_idx = csize2tidx (nb);
  if (tcache && tc_idx < mp_.tcache_bins)
    tcache_nb = nb;
  int return_cached = 0;

  tcache_unsorted_count = 0;
#endif

  for (;; )
    {
      int iters = 0;
      while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))
        {
          bck = victim->bk;
          size = chunksize (victim);
          mchunkptr next = chunk_at_offset (victim, size);

          if (__glibc_unlikely (size <= 2 * SIZE_SZ)
              || __glibc_unlikely (size > av->system_mem))
            malloc_printerr ("malloc(): invalid size (unsorted)");
          if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
              || __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
            malloc_printerr ("malloc(): invalid next size (unsorted)");
          if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
            malloc_printerr ("malloc(): mismatching next->prev_size (unsorted)");
          if (__glibc_unlikely (bck->fd != victim)
              || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
            malloc_printerr ("malloc(): unsorted double linked list corrupted");
          if (__glibc_unlikely (prev_inuse (next)))
            malloc_printerr ("malloc(): invalid next->prev_inuse (unsorted)");

          /*
             If a small request, try to use last remainder if it is the
             only chunk in unsorted bin.  This helps promote locality for
             runs of consecutive small requests. This is the only
             exception to best-fit, and applies only when there is
             no exact fit for a small chunk.
           */

          if (in_smallbin_range (nb) &&
              bck == unsorted_chunks (av) &&
              victim == av->last_remainder &&
              (unsigned long) (size) > (unsigned long) (nb + MINSIZE))
            {
              /* split and reattach remainder */
              remainder_size = size - nb;
              remainder = chunk_at_offset (victim, nb);
              unsorted_chunks (av)->bk = unsorted_chunks (av)->fd = remainder;
              av->last_remainder = remainder;
              remainder->bk = remainder->fd = unsorted_chunks (av);
              if (!in_smallbin_range (remainder_size))
                {
                  remainder->fd_nextsize = NULL;
                  remainder->bk_nextsize = NULL;
                }

              set_head (victim, nb | PREV_INUSE |
                        (av != &main_arena ? NON_MAIN_ARENA : 0));
              set_head (remainder, remainder_size | PREV_INUSE);
              set_foot (remainder, remainder_size);

              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }

          /* remove from unsorted list */
          if (__glibc_unlikely (bck->fd != victim))
            malloc_printerr ("malloc(): corrupted unsorted chunks 3");
          unsorted_chunks (av)->bk = bck;
          bck->fd = unsorted_chunks (av);

          /* Take now instead of binning if exact fit */

          if (size == nb)
            {
              set_inuse_bit_at_offset (victim, size);
              if (av != &main_arena)
		set_non_main_arena (victim);
#if USE_TCACHE
	      /* Fill cache first, return to user only if cache fills.
		 We may return one of these chunks later.  */
	      if (tcache_nb
		  && tcache->counts[tc_idx] < mp_.tcache_count)
		{
		  tcache_put (victim, tc_idx);
		  return_cached = 1;
		  continue;
		}
	      else
		{
#endif
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
#if USE_TCACHE
		}
#endif
            }

          /* place chunk in bin */

          if (in_smallbin_range (size))
            {
              victim_index = smallbin_index (size);
              bck = bin_at (av, victim_index);
              fwd = bck->fd;
            }
          else
            {
              victim_index = largebin_index (size);
              bck = bin_at (av, victim_index);
              fwd = bck->fd;

              /* maintain large bins in sorted order */
              if (fwd != bck)
                {
                  /* Or with inuse bit to speed comparisons */
                  size |= PREV_INUSE;
                  /* if smaller than smallest, bypass loop below */
                  assert (chunk_main_arena (bck->bk));
                  if ((unsigned long) (size)
		      < (unsigned long) chunksize_nomask (bck->bk))
                    {
                      fwd = bck;
                      bck = bck->bk;

                      victim->fd_nextsize = fwd->fd;
                      victim->bk_nextsize = fwd->fd->bk_nextsize;
                      fwd->fd->bk_nextsize = victim->bk_nextsize->fd_nextsize = victim;
                    }
                  else
                    {
                      assert (chunk_main_arena (fwd));
                      while ((unsigned long) size < chunksize_nomask (fwd))
                        {
                          fwd = fwd->fd_nextsize;
			  assert (chunk_main_arena (fwd));
                        }

                      if ((unsigned long) size
			  == (unsigned long) chunksize_nomask (fwd))
                        /* Always insert in the second position.  */
                        fwd = fwd->fd;
                      else
                        {
                          victim->fd_nextsize = fwd;
                          victim->bk_nextsize = fwd->bk_nextsize;
                          fwd->bk_nextsize = victim;
                          victim->bk_nextsize->fd_nextsize = victim;
                        }
                      bck = fwd->bk;
                    }
                }
              else
                victim->fd_nextsize = victim->bk_nextsize = victim;
            }

          mark_bin (av, victim_index);
          victim->bk = bck;
          victim->fd = fwd;
          fwd->bk = victim;
          bck->fd = victim;

#if USE_TCACHE
      /* If we've processed as many chunks as we're allowed while
	 filling the cache, return one of the cached ones.  */
      ++tcache_unsorted_count;
      if (return_cached
	  && mp_.tcache_unsorted_limit > 0
	  && tcache_unsorted_count > mp_.tcache_unsorted_limit)
	{
	  return tcache_get (tc_idx);
	}
#endif

#define MAX_ITERS       10000
          if (++iters >= MAX_ITERS)
            break;
        }

#if USE_TCACHE
      /* If all the small chunks we found ended up cached, return one now.  */
      if (return_cached)
	{
	  return tcache_get (tc_idx);
	}
#endif

      /*
         If a large request, scan through the chunks of current bin in
         sorted order to find smallest that fits.  Use the skip list for this.
       */

      if (!in_smallbin_range (nb))
        {
          bin = bin_at (av, idx);

          /* skip scan if empty or largest chunk is too small */
          if ((victim = first (bin)) != bin
	      && (unsigned long) chunksize_nomask (victim)
	        >= (unsigned long) (nb))
            {
              victim = victim->bk_nextsize;
              while (((unsigned long) (size = chunksize (victim)) <
                      (unsigned long) (nb)))
                victim = victim->bk_nextsize;

              /* Avoid removing the first entry for a size so that the skip
                 list does not have to be rerouted.  */
              if (victim != last (bin)
		  && chunksize_nomask (victim)
		    == chunksize_nomask (victim->fd))
                victim = victim->fd;

              remainder_size = size - nb;
              unlink_chunk (av, victim);

              /* Exhaust */
              if (remainder_size < MINSIZE)
                {
                  set_inuse_bit_at_offset (victim, size);
                  if (av != &main_arena)
		    set_non_main_arena (victim);
                }
              /* Split */
              else
                {
                  remainder = chunk_at_offset (victim, nb);
                  /* We cannot assume the unsorted list is empty and therefore
                     have to perform a complete insert here.  */
                  bck = unsorted_chunks (av);
                  fwd = bck->fd;
		  if (__glibc_unlikely (fwd->bk != bck))
		    malloc_printerr ("malloc(): corrupted unsorted chunks");
                  remainder->bk = bck;
                  remainder->fd = fwd;
                  bck->fd = remainder;
                  fwd->bk = remainder;
                  if (!in_smallbin_range (remainder_size))
                    {
                      remainder->fd_nextsize = NULL;
                      remainder->bk_nextsize = NULL;
                    }
                  set_head (victim, nb | PREV_INUSE |
                            (av != &main_arena ? NON_MAIN_ARENA : 0));
                  set_head (remainder, remainder_size | PREV_INUSE);
                  set_foot (remainder, remainder_size);
                }
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }

      /*
         Search for a chunk by scanning bins, starting with next largest
         bin. This search is strictly by best-fit; i.e., the smallest
         (with ties going to approximately the least recently used) chunk
         that fits is selected.

         The bitmap avoids needing to check that most blocks are nonempty.
         The particular case of skipping all bins during warm-up phases
         when no chunks have been returned yet is faster than it might look.
       */

      ++idx;
      bin = bin_at (av, idx);
      block = idx2block (idx);
      map = av->binmap[block];
      bit = idx2bit (idx);

      for (;; )
        {
          /* Skip rest of block if there are no more set bits in this block.  */
          if (bit > map || bit == 0)
            {
              do
                {
                  if (++block >= BINMAPSIZE) /* out of bins */
                    goto use_top;
                }
              while ((map = av->binmap[block]) == 0);

              bin = bin_at (av, (block << BINMAPSHIFT));
              bit = 1;
            }

          /* Advance to bin with set bit. There must be one. */
          while ((bit & map) == 0)
            {
              bin = next_bin (bin);
              bit <<= 1;
              assert (bit != 0);
            }

          /* Inspect the bin. It is likely to be non-empty */
          victim = last (bin);

          /*  If a false alarm (empty bin), clear the bit. */
          if (victim == bin)
            {
              av->binmap[block] = map &= ~bit; /* Write through */
              bin = next_bin (bin);
              bit <<= 1;
            }

          else
            {
              size = chunksize (victim);

              /*  We know the first chunk in this bin is big enough to use. */
              assert ((unsigned long) (size) >= (unsigned long) (nb));

              remainder_size = size - nb;

              /* unlink */
              unlink_chunk (av, victim);

              /* Exhaust */
              if (remainder_size < MINSIZE)
                {
                  set_inuse_bit_at_offset (victim, size);
                  if (av != &main_arena)
		    set_non_main_arena (victim);
                }

              /* Split */
              else
                {
                  remainder = chunk_at_offset (victim, nb);

                  /* We cannot assume the unsorted list is empty and therefore
                     have to perform a complete insert here.  */
                  bck = unsorted_chunks (av);
                  fwd = bck->fd;
		  if (__glibc_unlikely (fwd->bk != bck))
		    malloc_printerr ("malloc(): corrupted unsorted chunks 2");
                  remainder->bk = bck;
                  remainder->fd = fwd;
                  bck->fd = remainder;
                  fwd->bk = remainder;

                  /* advertise as last remainder */
                  if (in_smallbin_range (nb))
                    av->last_remainder = remainder;
                  if (!in_smallbin_range (remainder_size))
                    {
                      remainder->fd_nextsize = NULL;
                      remainder->bk_nextsize = NULL;
                    }
                  set_head (victim, nb | PREV_INUSE |
                            (av != &main_arena ? NON_MAIN_ARENA : 0));
                  set_head (remainder, remainder_size | PREV_INUSE);
                  set_foot (remainder, remainder_size);
                }
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }

    use_top:
      /*
         If large enough, split off the chunk bordering the end of memory
         (held in av->top). Note that this is in accord with the best-fit
         search rule.  In effect, av->top is treated as larger (and thus
         less well fitting) than any other available chunk since it can
         be extended to be as large as necessary (up to system
         limitations).

         We require that av->top always exists (i.e., has size >=
         MINSIZE) after initialization, so if it would otherwise be
         exhausted by current request, it is replenished. (The main
         reason for ensuring it exists is that we may need MINSIZE space
         to put in fenceposts in sysmalloc.)
       */

      victim = av->top;
      size = chunksize (victim);

      if (__glibc_unlikely (size > av->system_mem))
        malloc_printerr ("malloc(): corrupted top size");

      if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
        {
          remainder_size = size - nb;
          remainder = chunk_at_offset (victim, nb);
          av->top = remainder;
          set_head (victim, nb | PREV_INUSE |
                    (av != &main_arena ? NON_MAIN_ARENA : 0));
          set_head (remainder, remainder_size | PREV_INUSE);

          check_malloced_chunk (av, victim, nb);
          void *p = chunk2mem (victim);
          alloc_perturb (p, bytes);
          return p;
        }

      /* When we are using atomic ops to free fast chunks we can get
         here for all block sizes.  */
      else if (atomic_load_relaxed (&av->have_fastchunks))
        {
          malloc_consolidate (av);
          /* restore original bin index */
          if (in_smallbin_range (nb))
            idx = smallbin_index (nb);
          else
            idx = largebin_index (nb);
        }

      /*
         Otherwise, relay to handle system-dependent cases
       */
      else
        {
          void *p = sysmalloc (nb, av);
          if (p != NULL)
            alloc_perturb (p, bytes);
          return p;
        }
    }
}

The listing is long, and most of what follows is prose that may be hard to follow without the code at hand, so readers are encouraged to download the sources or keep a copy of the listing above for reference.

__libc_malloc:

When malloc is called, __libc_malloc first performs some setup work and then delegates the actual allocation to _int_malloc.

On the very first call, the manager reaches malloc_hook_ini through __malloc_hook to initialize the heap and then re-enters __libc_malloc; that path is not our focus here, so it is not covered further.
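To illustrate the hook check at the top of __libc_malloc, here is a minimal sketch for older glibc (before 2.34, when __malloc_hook was removed from the public API); the hook must uninstall itself before calling malloc, or it would recurse forever. Recent compilers will also warn that the hook variables are deprecated.

#include <stdio.h>
#include <malloc.h>

static void *(*old_hook)(size_t, const void *);

static void *log_hook(size_t size, const void *caller)
{
  __malloc_hook = old_hook;   /* uninstall first: the malloc below would recurse */
  void *p = malloc(size);
  fprintf(stderr, "malloc(%zu) from %p -> %p\n", size, caller, p);
  old_hook = __malloc_hook;   /* save whatever malloc may have installed */
  __malloc_hook = log_hook;   /* reinstall ourselves */
  return p;
}

int main(void)
{
  old_hook = __malloc_hook;
  __malloc_hook = log_hook;
  void *p = malloc(0x20);     /* routed through log_hook */
  __malloc_hook = old_hook;
  free(p);
  return 0;
}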

Branch: if Tcache is used

The requested size bytes is converted to the chunk size tbytes by the checked_request2size macro, and csize2tidx then maps tbytes to the matching tcache bin index tc_idx.
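As a sketch of the arithmetic (assuming a 64-bit build: SIZE_SZ = 8, MALLOC_ALIGNMENT = 16, MINSIZE = 0x20; the function bodies mirror the 2.29 macros):

#include <stdio.h>
#include <stddef.h>

#define SIZE_SZ           8
#define MALLOC_ALIGNMENT  16
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
#define MINSIZE           0x20

/* request2size: add SIZE_SZ of header overhead, round up to the
   alignment, and never go below MINSIZE */
static size_t request2size(size_t req)
{
  size_t nb = req + SIZE_SZ + MALLOC_ALIGN_MASK;
  return nb < MINSIZE ? MINSIZE : (nb & ~MALLOC_ALIGN_MASK);
}

/* csize2tidx: chunk size -> tcache bin index (0x20 -> 0, 0x30 -> 1, ...) */
static size_t csize2tidx(size_t csize)
{
  return (csize - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT;
}

int main(void)
{
  for (size_t req = 0; req <= 0x28; req += 8)
    printf("request %#zx -> chunk %#zx -> tcache idx %zu\n",
           req, request2size(req), csize2tidx(request2size(req)));
  return 0;
}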

If the tcache has not been initialized yet, MAYBE_INIT_TCACHE initializes it; otherwise it does nothing.

tc_idx is then checked against mp_.tcache_bins, and if the corresponding tcache bin holds a free chunk, it is popped with tcache_get and returned to the user directly.
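For reference, the per-thread structures behind tcache->entries, together with a paraphrase of tcache_get from the 2.29 sources (the key field is new in 2.29 and backs the double-free check in tcache_put):

#define TCACHE_MAX_BINS 64

typedef struct tcache_entry
{
  struct tcache_entry *next;            /* singly linked free list */
  struct tcache_perthread_struct *key;  /* 2.29+: marks chunks already in tcache */
} tcache_entry;

typedef struct tcache_perthread_struct
{
  char counts[TCACHE_MAX_BINS];         /* chunks currently cached per bin */
  tcache_entry *entries[TCACHE_MAX_BINS];
} tcache_perthread_struct;

/* 'tcache' is the per-thread pointer to this struct.  tcache_get simply
   pops the list head; note there is no size or integrity check here. */
static void *
tcache_get (size_t tc_idx)
{
  tcache_entry *e = tcache->entries[tc_idx];
  tcache->entries[tc_idx] = e->next;
  --(tcache->counts[tc_idx]);
  e->key = NULL;
  return (void *) e;
}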

Otherwise, if the process is single-threaded (SINGLE_THREAD_P), _int_malloc is called on main_arena and the result is returned to the user.

Otherwise arena_get acquires (and locks) an arena for the current thread into ar_ptr, and _int_malloc is called on it.

If that call fails even though a usable arena had been found (victim == NULL && ar_ptr != NULL), arena_get_retry selects another arena and _int_malloc is tried once more; finally the arena lock is released and the result is returned to the user.

        

_int_malloc:

First, checked_request2size converts the requested bytes into the chunk size nb.

Branch 1:

If there is no usable arena (av == NULL), sysmalloc obtains a chunk directly via mmap; on success it is returned to the user immediately.

Otherwise execution reaches the REMOVE_FB macro definition. In the multithreaded path it atomically pops victim from the head of the fast bin fb using a compare-and-exchange loop; when the bin is empty (pp == NULL), it breaks out without removing anything.

Branch 2: within the Fast Bin range

If the request can be served by the fast bins, i.e. nb <= get_max_fast() (DEFAULT_MXFAST is 64 * SIZE_SZ / 4, so 0x80 bytes on 64-bit), fastbin_index maps nb to the bin index idx.

The fastbin() macro then yields fb, a pointer to the head of that bin's singly linked list (chained through fd); both helpers are sketched below.
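Assuming a 64-bit build, the two helpers are defined in the 2.29 sources as:

/* fast bins hold chunk sizes 0x20, 0x30, ..., 0x80 on 64-bit */
#define fastbin(ar_ptr, idx) ((ar_ptr)->fastbinsY[idx])

/* offset 2 to skip the two sizes below MINSIZE: 0x20 >> 4 == 2 -> idx 0 */
#define fastbin_index(sz) \
  ((((unsigned int) (sz)) >> (SIZE_SZ == 8 ? 4 : 3)) - 2)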

The head chunk becomes victim. If it is non-NULL, the single-threaded path simply advances the head to victim->fd, while the multithreaded path pops it atomically with REMOVE_FB.

If victim was successfully taken, its own bin index victim_idx = fastbin_index(chunksize(victim)) is recomputed and must match idx, otherwise the process aborts with "malloc(): memory corruption (fast)".

The function does not return yet; it continues into branch 2.1.

Branch 2.1: using Tcache

csize2tidx(nb) gives the matching tcache index tc_idx. If it is within the valid range, the loop takes the current fast-bin head as tc_victim and moves it into the tcache bin with tcache_put, repeating until the tcache bin is full (mp_.tcache_count, 7 by default) or the fast bin is empty.

Finally victim (the original first node of the fast bin) is returned to the user.

The function returns here.
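A small runnable demo of this stash behavior (a sketch assuming glibc 2.26+ with default tcache settings on 64-bit; the printed addresses will vary):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  void *p[9];

  /* nine chunks of fast-bin size (0x20 including the header) */
  for (int i = 0; i < 9; i++)
    p[i] = malloc(0x18);

  /* the first 7 frees fill the 0x20 tcache bin; p[7] and p[8] land in the fast bin */
  for (int i = 0; i < 9; i++)
    free(p[i]);

  /* drain the tcache: these seven come back LIFO (p[6] .. p[0]) */
  for (int i = 0; i < 7; i++)
    (void) malloc(0x18);

  /* served from the fast bin (q == p[8]); while there, branch 2.1
     stashes the remaining fast-bin chunk p[7] into the tcache */
  void *q = malloc(0x18);

  /* therefore this one comes from the tcache again (r == p[7]) */
  void *r = malloc(0x18);

  printf("q=%p r=%p (expected p[8] and p[7])\n", q, r);
  return 0;
}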

Branch 2.2: otherwise

victim is returned to the user directly.

The function returns here.

Branch 3: within the Small Bin range

If the fast bins could not supply a chunk (for instance, the matching bin was empty) and nb falls in the small-bin range, this branch is taken.

smallbin_index maps nb to the bin index idx, and bin_at yields the list head bin; both are sketched below.
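Under the usual 64-bit assumptions (SMALLBIN_WIDTH == MALLOC_ALIGNMENT == 16, SMALLBIN_CORRECTION == 0), the 2.29 definitions read:

/* small bins hold one exact size each: 0x20, 0x30, ..., 0x3f0 */
#define smallbin_index(sz) \
  ((SMALLBIN_WIDTH == 16 ? (((unsigned) (sz)) >> 4) : (((unsigned) (sz)) >> 3)) \
   + SMALLBIN_CORRECTION)

/* bin_at produces a fake chunk header whose fd/bk fields overlay
   av->bins[(i-1)*2] and av->bins[(i-1)*2 + 1] */
#define bin_at(m, i) \
  (mbinptr) (((char *) &((m)->bins[((i) - 1) * 2])) \
             - offsetof (struct malloc_chunk, fd))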

Branch 3.1: the list is non-empty

Here victim becomes the last chunk of the list; the last macro is simply:

#define last(b)      ((b)->bk)

bck is victim->bk, i.e. the second-to-last chunk, and bck->fd is checked to equal victim (a hardening check; failure aborts with "malloc(): smallbin double linked list corrupted").

If it passes, the PREV_INUSE bit of the chunk after victim is set and victim is taken off the list: bin->bk = bck and bck->fd = bin.

The function does not return yet; it continues into branch 3.2.

Branch 3.2: using Tcache

tc_idx is obtained and validated; if the tcache bin can still take chunks and the small bin is non-empty, a loop begins.

tc_victim is the small bin's current last chunk (i.e. the bck of branch 3.1):

		      bin->bk = bck;
		      bck->fd = bin;

With this unlink-style splice, tc_victim is taken off the small bin and handed to tcache_put, which places it into the tcache bin; the loop runs until the tcache bin is full or the small bin is empty. Note that, unlike branch 3.1, this loop performs no bck->fd integrity check.

The function does not return yet; it continues into branch 3.3.

Branch 3.3: otherwise

The victim taken from the small bin is returned to the user.

The function returns here.

Branch 4: within the Large Bin range

largebin_index maps nb to the bin index idx.

If the arena still holds fast chunks (av->have_fastchunks), malloc_consolidate coalesces them with their neighbors and places the results into the Unsorted Bin.

The function continues into branch 5.

Branch 5: using Tcache

If branch 3 did not return, execution necessarily reaches this point. csize2tidx(nb) gives the tcache index tc_idx; if it is in range, tcache_nb is set to nb, which later enables the stash-then-return logic; return_cached and tcache_unsorted_count start at 0.

The function continues.

Branch 6: otherwise

Branch 6.1: the Unsorted Bin contains free chunks

victim becomes the last free chunk of the Unsorted Bin, and bck the one before it.

size is victim's chunk size, and chunk_at_offset yields the address of the physically adjacent next chunk:

#define chunk_at_offset(p, s)  ((mchunkptr) (((char *) (p)) + (s)))

A series of hardening checks follows: size and the next chunk's size must lie within (2 * SIZE_SZ, av->system_mem], next's prev_size must equal size, bck->fd and victim->fd must keep the doubly linked list intact, and next's PREV_INUSE bit must be clear.

Branch 6.1.1: nb in the Small Bin range & victim is the only chunk in the Unsorted Bin and is the last_remainder & its size is large enough to split (a remainder is left)

victim is split: the front nb bytes are carved off, and remainder is what is left.

The Unsorted Bin's fd and bk are both pointed at remainder, remainder's fd and bk point back at the bin head, and remainder becomes the new av->last_remainder.

If remainder_size falls outside the Small Bin range, remainder's fd_nextsize and bk_nextsize are set to NULL.

set_head writes the new size fields of victim and remainder (note: not the Top Chunk), and set_foot stores remainder_size into the prev_size field of the chunk that physically follows remainder; the carved-off victim is then returned to the user.

(If this branch is not taken, victim is instead removed from the unsorted list, bck becoming the new list tail, before the branches below are considered.)
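The header-maintenance macros used here are, in the 2.26+ sources:

/* set_head: store size plus flag bits into the chunk's own size field */
#define set_head(p, s)  ((p)->mchunk_size = (s))

/* set_foot: store the size into the prev_size field of the chunk that
   physically follows p, so backward coalescing can find p's start */
#define set_foot(p, s) \
  (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size = (s))

/* set_inuse_bit_at_offset: set PREV_INUSE in the chunk s bytes after p */
#define set_inuse_bit_at_offset(p, s) \
  (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size |= PREV_INUSE)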

Branch 6.1.2: exact fit (no remainder)

Branch 6.1.2.1: using Tcache

If victim fits a tcache bin that is not yet full, it is stashed there with tcache_put, return_cached is set to 1, and the loop of branch 6 continues with the next unsorted chunk.

Branch 6.1.2.2: otherwise

If it cannot be cached, victim is returned to the user.

The function returns here.

Branch 6.1.3: otherwise

victim's size decides whether it belongs to a Small Bin or a Large Bin.

victim is then inserted at the head of the corresponding bin; large bins are additionally kept sorted by size through the fd_nextsize/bk_nextsize skip list.

Branch 6.1.4: a chunk was stashed into the Tcache and the scan has hit its limit or finished

tcache_get returns one of the previously stashed chunks to the user.

Branch 7: if nb falls in the Large Bin range

If the bin is non-empty and its largest chunk can satisfy nb, the loop walks bk_nextsize from the smallest size upward until it reaches the first chunk whose size is >= nb.

If that chunk has a successor of the same size (chunksize_nomask(victim) == chunksize_nomask(victim->fd)), victim becomes that second chunk, so the fd_nextsize/bk_nextsize skip list does not need rerouting.

unlink_chunk then detaches victim from the bin (see the sketch below).
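unlink_chunk, which replaced the old unlink macro in 2.29, looks roughly like this (paraphrased from the 2.29 sources, with the skip-list rerouting abbreviated):

static void
unlink_chunk (mstate av, mchunkptr p)
{
  if (chunksize (p) != prev_size (next_chunk (p)))
    malloc_printerr ("corrupted size vs. prev_size");

  mchunkptr fd = p->fd;
  mchunkptr bk = p->bk;

  if (__builtin_expect (fd->bk != p || bk->fd != p, 0))
    malloc_printerr ("corrupted double-linked list");

  fd->bk = bk;
  bk->fd = fd;

  if (!in_smallbin_range (chunksize_nomask (p)) && p->fd_nextsize != NULL)
    {
      if (p->fd_nextsize->bk_nextsize != p
          || p->bk_nextsize->fd_nextsize != p)
        malloc_printerr ("corrupted double-linked list (not small)");
      /* ... reroute p out of the fd_nextsize/bk_nextsize skip list ... */
    }
}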

If the leftover after carving off nb is at least MINSIZE, victim is split and the remainder is inserted into the Unsorted Bin; otherwise the whole chunk is used. The allocated part is returned to the user.

The function returns here.

Branch 8: other bins

Starting from the next bin index, the binmap is scanned in a loop for the first non-empty bin that can hold a large enough chunk; empty map words are skipped wholesale, and if the index runs past the last bin, control jumps to use_top (branch 9). The bookkeeping macros are sketched below.

If such a chunk is found, it is split: the remainder, if at least MINSIZE, is inserted into the Unsorted Bin (and recorded as last_remainder for small requests), and the rest is returned to the user.
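The binmap bookkeeping relies on these constants and macros (glibc 2.29; one bit per bin, 32 bits per map word):

#define BINMAPSHIFT  5
#define BITSPERMAP   (1U << BINMAPSHIFT)
#define BINMAPSIZE   (NBINS / BITSPERMAP)     /* NBINS == 128 -> 4 words */

#define idx2block(i) ((i) >> BINMAPSHIFT)                        /* which word */
#define idx2bit(i)   ((1U << ((i) & ((1U << BINMAPSHIFT) - 1)))) /* which bit */

#define mark_bin(m, i)   ((m)->binmap[idx2block (i)] |= idx2bit (i))
#define unmark_bin(m, i) ((m)->binmap[idx2block (i)] &= ~(idx2bit (i)))
#define get_binmap(m, i) ((m)->binmap[idx2block (i)] & idx2bit (i))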

Branch 9: otherwise

Branch 9.1: if the Top Chunk is large enough

If chunksize(av->top) >= nb + MINSIZE, the Top Chunk is split: the lower nb bytes become the chunk handed to the user, and av->top is advanced to the remainder, with both size fields updated.
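A quick worked example under the usual 64-bit assumptions (the addresses are hypothetical):

/* Suppose av->top sits at 0x602000 with size 0x21000 (PREV_INUSE set) and
   the request is malloc(0x100), i.e. nb = 0x110:

     remainder_size = 0x21000 - 0x110  = 0x20ef0
     victim         = 0x602000            -> user gets 0x602010 (chunk2mem)
     av->top        = 0x602000 + 0x110 = 0x602110
     victim->size   = 0x110  | PREV_INUSE = 0x111
     top->size      = 0x20ef0 | PREV_INUSE = 0x20ef1
*/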

Branch 9.2: otherwise

If the arena still holds fast chunks, malloc_consolidate merges them into the Unsorted Bin, the bin index idx is recomputed, and the outer loop retries.

Otherwise sysmalloc obtains fresh memory from the system via brk (growing the heap) or mmap, and the result is returned to the user.
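As a final hedged demo (64-bit, default M_MMAP_THRESHOLD of 128 KiB): a large enough request is served by mmap inside sysmalloc, which can be observed through the IS_MMAPPED bit of the chunk header:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  void *small = malloc(0x100);    /* carved from the heap (brk) */
  void *big   = malloc(0x40000);  /* 256 KiB > default mmap threshold */

  /* the chunk size field sits one word before the returned pointer;
     bit 1 (0x2) is IS_MMAPPED */
  printf("small IS_MMAPPED = %zu\n", (((size_t *) small)[-1] >> 1) & 1);
  printf("big   IS_MMAPPED = %zu\n", (((size_t *) big)[-1] >> 1) & 1);

  free(big);
  free(small);
  return 0;
}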

 

References:

https://www.zzl14.xyz/2020/04/13/malloc%E6%B5%81%E7%A8%8B/#int-malloc

That author walks through the 2.27 sources in comparable detail; their blog is well worth reading alongside this one.
