JVM G1 Source Code Analysis (Part 1): Heap Regions

1. Introduction
G1 (the Garbage-First collector) first appeared as an experimental feature in JDK 6 Update 14, became officially supported in JDK 7 Update 4, was made the default collector in JDK 9, and had its Full GC performance improved (parallelized) in JDK 10.

G1 is a server-oriented collector aimed at multi-core processors and large heaps. It delivers short pauses while sustaining high throughput.

This article and the ones that follow analyze G1's source code and core mechanisms.

2. Heap Regions: an introduction
G1 manages heap space very differently from the serial, parallel, and CMS collectors that the JVM used before it. In those collectors the heap is contiguous, or at most coarsely divided into a young generation (eden / survivor), an old generation, and a permanent generation.

(Figure: CMS heap layout)

As hardware advanced and server heaps grew, those older collectors handled large heaps poorly: performance suffered and stop-the-world (STW) pause times became unpredictable.

G1 splits the heap into a series of heap regions (HR), so in most scenarios the collector only needs to process a subset of regions instead of, say, the whole old generation. This greatly improves collection efficiency and lets the JVM control STW time precisely.
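One consequence of a power-of-two region size is that mapping an address to its region is just a subtraction and a shift. A minimal illustrative model (the constants and names here are ours, not the actual G1 region-manager code):

#include <cstdint>
#include <cstdio>

// Illustrative model only: with power-of-two regions, finding the region
// that contains an address is subtract-then-shift. HotSpot's G1 does the
// equivalent inside its region manager.
static const int       kLogRegionBytes = 21;           // assume 2 MB regions
static const uintptr_t kHeapBase       = 0x10000000;   // assumed heap base

static inline size_t region_index(uintptr_t addr) {
  return (addr - kHeapBase) >> kLogRegionBytes;
}

int main() {
  uintptr_t obj = kHeapBase + 5 * (uintptr_t(1) << kLogRegionBytes) + 1234;
  printf("object lives in region %zu\n", region_index(obj));  // prints 5
  return 0;
}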


3. Heap Region source code analysis
The heap region (HR) is the smallest unit of heap management in G1, so we start the source walkthrough there.

3.1 heapRegionType.hpp

  typedef enum {
    FreeTag               = 0,
    EdenTag               = 2,
    SurvTag               = 3,
    StartsHumongousTag    = 12,
    ContinuesHumongousTag = 13,
    OldTag                = 16,
    OpenArchiveTag        = 56,
    ClosedArchiveTag      = 57
  } Tag;
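The values are not arbitrary: in the full heapRegionType.hpp each tag is composed from bit masks (YoungMask = 2, HumongousMask = 4, PinnedMask = 8, OldMask = 16, ArchiveMask = 32), so type queries reduce to a single AND. A simplified, self-contained sketch of that encoding:

#include <cassert>

// Simplified from HotSpot's heapRegionType.hpp: tags are built from bit
// masks, so queries like is_young()/is_humongous() are one AND each.
enum Tag {
  FreeTag               = 0,
  YoungMask             = 2,
  EdenTag               = YoungMask,          // 2
  SurvTag               = YoungMask + 1,      // 3
  HumongousMask         = 4,
  PinnedMask            = 8,
  StartsHumongousTag    = HumongousMask | PinnedMask,        // 12
  ContinuesHumongousTag = (HumongousMask | PinnedMask) + 1,  // 13
  OldMask               = 16,
  OldTag                = OldMask,            // 16
  ArchiveMask           = 32,
  OpenArchiveTag        = ArchiveMask | PinnedMask | OldMask,       // 56
  ClosedArchiveTag      = (ArchiveMask | PinnedMask | OldMask) + 1  // 57
};

static bool is_young(Tag t)     { return (t & YoungMask) != 0; }
static bool is_humongous(Tag t) { return (t & HumongousMask) != 0; }
static bool is_old(Tag t)       { return (t & OldMask) != 0; }

int main() {
  assert(is_young(EdenTag) && is_young(SurvTag));
  assert(is_humongous(StartsHumongousTag) && is_humongous(ContinuesHumongousTag));
  assert(is_old(OldTag) && is_old(OpenArchiveTag));  // archive regions count as old
  return 0;
}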


G1's region types fall into roughly five groups:

  • Free regions (FHR)
  • Young regions (YHR), subdivided into eden regions and survivor regions
  • Humongous regions (HHR), subdivided into "starts humongous" and "continues humongous" regions; an object whose size exceeds half of region_size is treated as humongous
  • Old regions (OHR)
  • Archive regions (AHR), subdivided into open and closed archive regions; the difference is whether references to objects outside the archived heap are allowed

Humongous heap regions are dedicated to large objects: any object bigger than half a region's capacity qualifies as humongous. The region size is set with -XX:G1HeapRegionSize; it must be a power of two and historically ranged from 1 MB to 32 MB (JDK 18 raised the cap to 512 MB, as the code below shows). The flag defaults to 0, which means "choose ergonomically".

An object that exceeds the capacity of a whole region is stored in N contiguous humongous regions, and most of G1's logic treats humongous regions as part of the old generation.
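Both rules are easy to state in code. A minimal sketch, assuming 2 MB regions (the helper names are ours, not HotSpot's):

#include <cstddef>
#include <cstdio>

// Sketch of the two humongous rules described above (helper names are ours).
static const size_t kRegionBytes = 2 * 1024 * 1024;  // assume 2 MB regions

static bool is_humongous(size_t obj_bytes) {
  return obj_bytes > kRegionBytes / 2;   // more than half a region
}

static size_t humongous_regions_needed(size_t obj_bytes) {
  return (obj_bytes + kRegionBytes - 1) / kRegionBytes;  // ceiling division
}

int main() {
  printf("%d\n", is_humongous(1 * 1024 * 1024 + 1));        // 1: just over half
  printf("%zu\n", humongous_regions_needed(5 * 1024 * 1024));
  // 3 regions: one StartsHumongous plus two ContinuesHumongous
  return 0;
}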

3.2 heapRegionBounds.hpp

static const size_t MIN_REGION_SIZE      = 1024 * 1024;
static const size_t MAX_ERGONOMICS_SIZE  = 32 * 1024 * 1024;
static const size_t MAX_REGION_SIZE      = 512 * 1024 * 1024;
static const size_t TARGET_REGION_NUMBER = 2048;

Region size affects both allocation and collection efficiency: if regions are too large, collecting one takes longer and (as the HotSpot comment below notes) cleanup finds fewer completely empty regions after marking; if they are too small, object allocation slows down and memory utilization drops.
The bounds are defined in HeapRegionBounds: minimum 1 MB, ergonomic maximum 32 MB, hard maximum 512 MB (since JDK 18). The region size must be a power of two, so the possible values are 1 MB, 2 MB, 4 MB, and so on up to 512 MB.

The JVM flag -XX:G1HeapRegionSize sets the region size explicitly; if it is not set, the JVM decides ergonomically based on the actual heap configuration.

The target number of regions defaults to 2048.

The region size is determined in one of two ways:

1. Explicitly, via the G1HeapRegionSize flag, whose default value is 0.

[root@jeespring ~]# java -XX:+PrintFlagsFinal -version | grep G1HeapRegionSize
   size_t G1HeapRegionSize          = 0             {product} {default}
java version "11.0.8" 2020-07-14 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.8+10-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.8+10-LTS, mixed mode)

2. Ergonomic inference: when no region size is specified, G1 infers one heuristically.

The inference is based on the heap-size bounds and the target region count. Setting InitialHeapSize (flag default 0) is equivalent to setting -Xms, and setting MaxHeapSize (flag default 96 MB) is equivalent to setting -Xmx.

[root@jeespring ~]# java -XX:+PrintCommandLineFlags -version
-XX:G1ConcRefinementThreads=4 -XX:GCDrainStackTargetSize=64 -XX:InitialHeapSize=132500800 
-XX:MaxHeapSize=2120012800 -XX:+PrintCommandLineFlags -XX:ReservedCodeCacheSize=251658240 
-XX:+SegmentedCodeCache -XX:+UseCompressedClassPointers -XX:+UseCompressedOops 
-XX:+UseG1GC -XX:-UseLargePagesIndividualAllocation
java version "11.0.8" 2020-07-14 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.8+10-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.8+10-LTS, mixed mode)

The minimum region_size is 1 MB. JDK 18 lifted G1's 32 MB cap on region size, allowing values up to 512 MB, while the target region count stays at 2048. Ergonomics is the process by which the JVM and the garbage collector tune themselves (e.g., behavior-based tuning) to improve application performance; the ergonomic maximum region size remains 32 MB, which matches the pre-JDK 18 MAX_REGION_SIZE.
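As a cross-check, here is the ergonomic formula applied by hand to the MaxHeapSize printed above (a sketch; the real computation is HeapRegion::setup_heap_region_size, shown further down):

#include <cstddef>
#include <cstdio>
#include <algorithm>

// Re-running the ergonomic sizing for -XX:MaxHeapSize=2120012800 from the log.
int main() {
  size_t max_heap = 2120012800;                 // MaxHeapSize from the printout
  size_t raw      = max_heap / 2048;            // = 1035162 bytes, under 1 MB
  size_t region   = std::max(raw, size_t(1024 * 1024));  // clamp to the 1 MB floor
  // 1 MB is already a power of two, so round_up_power_of_2 leaves it unchanged.
  printf("region size = %zu bytes\n", region);            // 1048576
  printf("=> about %zu regions\n", max_heap / region);    // ~2021
  return 0;
}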

class HeapRegionBounds : public AllStatic {
private:
  // Minimum region size; we won't go lower than that.
  // We might want to decrease this in the future, to deal with small
  // heaps a bit more efficiently.
  static const size_t MIN_REGION_SIZE = 1024 * 1024;

  // Maximum region size determined ergonomically.
  static const size_t MAX_ERGONOMICS_SIZE = 32 * 1024 * 1024;
  // Maximum region size; we don't go higher than that. There's a good
  // reason for having an upper bound. We don't want regions to get too
  // large, otherwise cleanup's effectiveness would decrease as there
  // will be fewer opportunities to find totally empty regions after
  // marking.
  static const size_t MAX_REGION_SIZE = 512 * 1024 * 1024;

  // The automatic region size calculation will try to have around this
  // many regions in the heap.
  static const size_t TARGET_REGION_NUMBER = 2048;

public:
  static inline size_t min_size();
  static inline size_t max_ergonomics_size();
  static inline size_t max_size();
  static inline size_t target_number();
};

size_t HeapRegionBounds::min_size() {
  return MIN_REGION_SIZE;
}

size_t HeapRegionBounds::max_ergonomics_size() {
  return MAX_ERGONOMICS_SIZE;
}

size_t HeapRegionBounds::max_size() {
  return MAX_REGION_SIZE;
}

size_t HeapRegionBounds::target_number() {
  return TARGET_REGION_NUMBER;
}

The region-size computation lives in setup_heap_region_size: it takes max_heap_size / 2048, clamps the result between the lower bound (1 MB) and the ergonomic upper bound, assigns it to region_size, and then rounds region_size up to a power of two.

void HeapRegion::setup_heap_region_size(size_t max_heap_size) {
  size_t region_size = G1HeapRegionSize;
  // G1HeapRegionSize = 0 means decide ergonomically.
  if (region_size == 0) { // The flag was not set by the user.
    // Derive the region size from max_heap_size / HeapRegionBounds::target_number().
    // region_size never goes below HeapRegionBounds::min_size() nor above
    // HeapRegionBounds::max_ergonomics_size().
    region_size = clamp(max_heap_size / HeapRegionBounds::target_number(),
                        HeapRegionBounds::min_size(),
                        HeapRegionBounds::max_ergonomics_size());
  }

  // Make sure region size is a power of 2. Rounding up since this
  // is beneficial in most cases.
  // e.g. region_size = 3MB becomes 4MB after round_up_power_of_2().
  region_size = round_up_power_of_2(region_size);

  // Now make sure that we don't go over or under our limits.
  // Keep region_size within [1MB, 512MB] ([1MB, 32MB] before JDK 18).
  region_size = clamp(region_size, HeapRegionBounds::min_size(), HeapRegionBounds::max_size());

  // Calculate the log for the region size.
  // region_size is a power of two, so the log is exact;
  // e.g. for region_size = 4*1024*1024, region_size_log = 22.
  int region_size_log = log2i_exact(region_size);

  // Now, set up the globals.
  guarantee(LogOfHRGrainBytes == 0, "we should only set it once");
  LogOfHRGrainBytes = region_size_log;

  guarantee(GrainBytes == 0, "we should only set it once");
  GrainBytes = region_size;  // Region size in bytes.

  guarantee(GrainWords == 0, "we should only set it once");
  GrainWords = GrainBytes >> LogHeapWordSize;  // Region size in heap words.

  // Derive the number of card-table cards per region from region_size.
  guarantee(CardsPerRegion == 0, "we should only set it once");
  CardsPerRegion = GrainBytes >> G1CardTable::card_shift();

  LogCardsPerRegion = log2i(CardsPerRegion);

  if (G1HeapRegionSize != GrainBytes) {
    FLAG_SET_ERGO(G1HeapRegionSize, GrainBytes);
  }
}
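To make the control flow easy to experiment with, here is a standalone condensation of the logic above (our own sketch, not HotSpot code; it skips the GrainBytes/CardsPerRegion bookkeeping). It reproduces the two worked examples that follow:

#include <cstddef>
#include <cstdio>
#include <algorithm>

// Standalone re-implementation of the sizing logic, for experimenting.
static size_t round_up_power_of_2(size_t v) {
  size_t p = 1;
  while (p < v) p <<= 1;
  return p;
}

static size_t compute_region_size(size_t max_heap, size_t flag /*G1HeapRegionSize*/) {
  const size_t MIN = 1   * 1024 * 1024;  // HeapRegionBounds::min_size()
  const size_t ERG = 32  * 1024 * 1024;  // max_ergonomics_size()
  const size_t MAX = 512 * 1024 * 1024;  // max_size() (JDK 18+)
  size_t rs = flag;
  if (rs == 0)                           // 0 means "decide ergonomically"
    rs = std::min(std::max(max_heap / 2048, MIN), ERG);
  rs = round_up_power_of_2(rs);
  return std::min(std::max(rs, MIN), MAX);
}

int main() {
  size_t heap = size_t(4096) * 1024 * 1024;                     // -Xmx4096m
  printf("%zu MB\n", compute_region_size(heap, 0) >> 20);       // 2 MB (case 1)
  printf("%zu MB\n", compute_region_size(heap, size_t(8) << 20) >> 20);  // 8 MB (case 2)
  return 0;
}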

Case 1: G1HeapRegionSize not configured
Our production flags are -Xmx4096m and -Xms4096m, so max_heap_size = 4096m and the computation runs as follows:

region_size = clamp(4096m / 2048, 1m, 32m) = 2m

region_size = round_up_power_of_2(2m) = 2m (already a power of two)

region_size = clamp(2m, 1m, 512m) = 2m

region_size_log = log2i_exact(2m) = 21 (2^21 = 2*1024*1024 = 2m)

Case 2: G1HeapRegionSize configured
JVM flags: -Xmx4096m, -Xms4096m, -XX:G1HeapRegionSize=8m. Because G1HeapRegionSize is non-zero, the ergonomic branch is skipped:

region_size = 8m

region_size = round_up_power_of_2(8m) = 8m

region_size = clamp(8m, 1m, 512m) = 8m

region_size_log = log2i_exact(8m) = 23 (2^23 = 8*1024*1024 = 8m)

3.3 heapRegion.hpp

The heap region is the central model of G1's heap management: marking, collection, promotion, evacuation, the card table, and the remembered set (RSet) all revolve around it. HeapRegion is defined as follows:

// Each heap region is self contained. top() and end() can never
// be set beyond the end of the region. For humongous objects,
// the first region is a StartsHumongous region. If the humongous
// object is larger than a heap region, the following regions will
// be of type ContinuesHumongous. In this case the top() of the
// StartsHumongous region and all ContinuesHumongous regions except
// the last will point to their own end. The last ContinuesHumongous
// region may have top() equal the end of object if there isn't
// room for filler objects to pad out to the end of the region.
class HeapRegion : public CHeapObj<mtGC> {
  friend class VMStructs;

  HeapWord* const _bottom;
  HeapWord* const _end;

  HeapWord* volatile _top;

  G1BlockOffsetTablePart _bot_part;

  // When we need to retire an allocation region, while other threads
  // are also concurrently trying to allocate into it, we typically
  // allocate a dummy object at the end of the region to ensure that
  // no more allocations can take place in it. However, sometimes we
  // want to know where the end of the last "real" object we allocated
  // into the region was and this is what this keeps track.
  HeapWord* _pre_dummy_top;

public:
  HeapWord* bottom() const         { return _bottom; }
  HeapWord* end() const            { return _end;    }

  void set_top(HeapWord* value) { _top = value; }
  HeapWord* top() const { return _top; }

  // See the comment above in the declaration of _pre_dummy_top for an
  // explanation of what it is.
  void set_pre_dummy_top(HeapWord* pre_dummy_top) {
    assert(is_in(pre_dummy_top) && pre_dummy_top <= top(), "pre-condition");
    _pre_dummy_top = pre_dummy_top;
  }
  HeapWord* pre_dummy_top() const { return (_pre_dummy_top == nullptr) ? top() : _pre_dummy_top; }
  void reset_pre_dummy_top() { _pre_dummy_top = nullptr; }

  // Returns true iff the given heap region contains the
  // given address as part of an allocated object. This may
  // be potentially expensive, so we restrict its use to assertion checks only.
  bool is_in(const void* p) const {
    return is_in_reserved(p);
  }
  bool is_in(oop obj) const {
    return is_in((void*)obj);
  }
  // Returns true iff the given reserved memory of the space contains the
  // given address.
  bool is_in_reserved(const void* p) const { return _bottom <= p && p < _end; }

  size_t capacity() const { return byte_size(bottom(), end()); }
  size_t used() const { return byte_size(bottom(), top()); }
  size_t free() const { return byte_size(top(), end()); }

  bool is_empty() const { return used() == 0; }

private:

  void reset_after_full_gc_common();

  void clear(bool mangle_space);

  void mangle_unused_area() PRODUCT_RETURN;

  // Try to allocate at least min_word_size and up to desired_size from this region.
  // Returns null if not possible, otherwise sets actual_word_size to the amount of
  // space allocated.
  // This version assumes that all allocation requests to this HeapRegion are properly
  // synchronized.
  inline HeapWord* allocate_impl(size_t min_word_size, size_t desired_word_size, size_t* actual_word_size);
  // Try to allocate at least min_word_size and up to desired_size from this HeapRegion.
  // Returns null if not possible, otherwise sets actual_word_size to the amount of
  // space allocated.
  // This version synchronizes with other calls to par_allocate_impl().
  inline HeapWord* par_allocate_impl(size_t min_word_size, size_t desired_word_size, size_t* actual_word_size);

  inline HeapWord* advance_to_block_containing_addr(const void* addr,
                                                    HeapWord* const pb,
                                                    HeapWord* first_block) const;

public:

  // Returns the address of the block reaching into or starting at addr.
  HeapWord* block_start(const void* addr) const;
  HeapWord* block_start(const void* addr, HeapWord* const pb) const;

  void object_iterate(ObjectClosure* blk);

  // At the given address create an object with the given size. If the region
  // is old the BOT will be updated if the object spans a threshold.
  void fill_with_dummy_object(HeapWord* address, size_t word_size, bool zap = true);

  // Create objects in the given range. The BOT will be updated if needed and
  // the created objects will have their header marked to show that they are
  // dead.
  void fill_range_with_dead_objects(HeapWord* start, HeapWord* end);

  // All allocations are done without updating the BOT. The BOT
  // needs to be kept in sync for old generation regions and
  // this is done by explicit updates when crossing thresholds.
  inline HeapWord* par_allocate(size_t min_word_size, size_t desired_word_size, size_t* word_size);
  inline HeapWord* allocate(size_t word_size);
  inline HeapWord* allocate(size_t min_word_size, size_t desired_word_size, size_t* actual_size);

  // Update BOT if this obj is the first entering a new card (i.e. crossing the card boundary).
  inline void update_bot_for_obj(HeapWord* obj_start, size_t obj_size);

  // Full GC support methods.

  void update_bot_for_block(HeapWord* start, HeapWord* end);

  // Update heap region that has been compacted to be consistent after Full GC.
  void reset_compacted_after_full_gc(HeapWord* new_top);
  // Update skip-compacting heap region to be consistent after Full GC.
  void reset_skip_compacting_after_full_gc();

  // All allocated blocks are occupied by objects in a HeapRegion.
  bool block_is_obj(const HeapWord* p, HeapWord* pb) const;

  // Returns the object size for all valid block starts. If parsable_bottom (pb)
  // is given, calculates the block size based on that parsable_bottom, not the
  // current value of this HeapRegion.
  size_t block_size(const HeapWord* p) const;
  size_t block_size(const HeapWord* p, HeapWord* pb) const;

  // Scans through the region using the bitmap to determine what
  // objects to call size_t ApplyToMarkedClosure::apply(oop) for.
  template<typename ApplyToMarkedClosure>
  inline void apply_to_marked_objects(G1CMBitMap* bitmap, ApplyToMarkedClosure* closure);

  // Update the BOT for the entire region - assumes that all objects are parsable
  // and contiguous for this region.
  void update_bot();

private:
  // The remembered set for this region.
  HeapRegionRemSet* _rem_set;

  // Cached index of this region in the heap region sequence.
  const uint _hrm_index;

  HeapRegionType _type;

  // For a humongous region, region in which it starts.
  HeapRegion* _humongous_start_region;

  static const uint InvalidCSetIndex = UINT_MAX;

  // The index in the optional regions array, if this region
  // is considered optional during a mixed collections.
  uint _index_in_opt_cset;

  // Fields used by the HeapRegionSetBase class and subclasses.
  HeapRegion* _next;
  HeapRegion* _prev;
#ifdef ASSERT
  HeapRegionSetBase* _containing_set;
#endif // ASSERT

  // The start of the unmarked area. The unmarked area extends from this
  // word until the top and/or end of the region, and is the part
  // of the region for which no marking was done, i.e. objects may
  // have been allocated in this part since the last mark phase.
  HeapWord* volatile _top_at_mark_start;

  // The area above this limit is fully parsable. This limit
  // is equal to bottom except from Remark and until the region has been
  // scrubbed concurrently. The scrubbing ensures that all dead objects (with
  // possibly unloaded classes) have been replaced with filler objects that
  // are parsable. Below this limit the marking bitmap must be used to
  // determine size and liveness.
  HeapWord* volatile _parsable_bottom;

  // Amount of dead data in the region.
  size_t _garbage_bytes;

  inline void init_top_at_mark_start();

  // Data for young region survivor prediction.
  uint  _young_index_in_cset;
  G1SurvRateGroup* _surv_rate_group;
  int  _age_index;

  // NUMA node.
  uint _node_index;

  void report_region_type_change(G1HeapRegionTraceType::Type to);

  template <class Closure, bool in_gc_pause>
  inline HeapWord* oops_on_memregion_iterate(MemRegion mr, Closure* cl);

  template <class Closure>
  inline HeapWord* oops_on_memregion_iterate_in_unparsable(MemRegion mr, HeapWord* block_start, Closure* cl);

  // Iterate over the references covered by the given MemRegion in a humongous
  // object and apply the given closure to them.
  // Humongous objects are allocated directly in the old-gen. So we need special
  // handling for concurrent processing encountering an in-progress allocation.
  // Returns the address after the last actually scanned or null if the area could
  // not be scanned (That should only happen when invoked concurrently with the
  // mutator).
  template <class Closure, bool in_gc_pause>
  inline HeapWord* do_oops_on_memregion_in_humongous(MemRegion mr,
                                                     Closure* cl);

  inline bool is_marked_in_bitmap(oop obj) const;

  inline HeapWord* next_live_in_unparsable(G1CMBitMap* bitmap, const HeapWord* p, HeapWord* limit) const;
  inline HeapWord* next_live_in_unparsable(const HeapWord* p, HeapWord* limit) const;

public:
  HeapRegion(uint hrm_index,
             G1BlockOffsetTable* bot,
             MemRegion mr,
             G1CardSetConfiguration* config);

  // If this region is a member of a HeapRegionManager, the index in that
  // sequence, otherwise -1.
  uint hrm_index() const { return _hrm_index; }

  // Initializing the HeapRegion not only resets the data structure, but also
  // resets the BOT for that heap region.
  // The default values for clear_space means that we will do the clearing if
  // there's clearing to be done ourselves. We also always mangle the space.
  void initialize(bool clear_space = false, bool mangle_space = SpaceDecorator::Mangle);

  static int    LogOfHRGrainBytes;
  static int    LogCardsPerRegion;

  static size_t GrainBytes;
  static size_t GrainWords;
  static size_t CardsPerRegion;

  static size_t align_up_to_region_byte_size(size_t sz) {
    return (sz + (size_t) GrainBytes - 1) &
                                      ~((1 << (size_t) LogOfHRGrainBytes) - 1);
  }

  // Returns whether a field is in the same region as the obj it points to.
  template <typename T>
  static bool is_in_same_region(T* p, oop obj) {
    assert(p != nullptr, "p can't be null");
    assert(obj != nullptr, "obj can't be null");
    return (((uintptr_t) p ^ cast_from_oop<uintptr_t>(obj)) >> LogOfHRGrainBytes) == 0;
  }

  static size_t max_region_size();
  static size_t min_region_size_in_words();

  // It sets up the heap region size (GrainBytes / GrainWords), as well as
  // other related fields that are based on the heap region size
  // (LogOfHRGrainBytes / CardsPerRegion). All those fields are considered
  // constant throughout the JVM's execution, therefore they should only be set
  // up once during initialization time.
  static void setup_heap_region_size(size_t max_heap_size);

  // An upper bound on the number of live bytes in the region.
  size_t live_bytes() const {
    return used() - garbage_bytes();
  }

  // A lower bound on the amount of garbage bytes in the region.
  size_t garbage_bytes() const { return _garbage_bytes; }

  // Return the amount of bytes we'll reclaim if we collect this
  // region. This includes not only the known garbage bytes in the
  // region but also any unallocated space in it, i.e., [top, end),
  // since it will also be reclaimed if we collect the region.
  size_t reclaimable_bytes() {
    size_t known_live_bytes = live_bytes();
    assert(known_live_bytes <= capacity(), "sanity %u %zu %zu %zu", hrm_index(), known_live_bytes, used(), garbage_bytes());
    return capacity() - known_live_bytes;
  }

  inline bool is_collection_set_candidate() const;

  // Get the start of the unmarked area in this region.
  HeapWord* top_at_mark_start() const;
  void set_top_at_mark_start(HeapWord* value);

  // Retrieve parsable bottom; since it may be modified concurrently, outside a
  // safepoint the _acquire method must be used.
  HeapWord* parsable_bottom() const;
  HeapWord* parsable_bottom_acquire() const;
  void reset_parsable_bottom();

  // Note the start or end of marking. This tells the heap region
  // that the collector is about to start or has finished (concurrently)
  // marking the heap.

  // Notify the region that concurrent marking is starting. Initialize
  // all fields related to the next marking info.
  inline void note_start_of_marking();

  // Notify the region that concurrent marking has finished. Passes the number of
  // bytes between bottom and TAMS.
  inline void note_end_of_marking(size_t marked_bytes);

  // Notify the region that scrubbing has completed.
  inline void note_end_of_scrubbing();

  // Notify the region that the (corresponding) bitmap has been cleared.
  inline void reset_top_at_mark_start();

  // During the concurrent scrubbing phase, can there be any areas with unloaded
  // classes or dead objects in this region?
  // This set only includes old regions - humongous regions only
  // contain a single object which is either dead or live, and young regions are never even
  // considered during concurrent scrub.
  bool needs_scrubbing() const;
  // Same question as above, during full gc. Full gc needs to scrub any region that
  // might be skipped for compaction. This includes young generation regions as the
  // region relabeling to old happens later than scrubbing.
  bool needs_scrubbing_during_full_gc() const { return is_young() || needs_scrubbing(); }

  const char* get_type_str() const { return _type.get_str(); }
  const char* get_short_type_str() const { return _type.get_short_str(); }
  G1HeapRegionTraceType::Type get_trace_type() { return _type.get_trace_type(); }

  bool is_free() const { return _type.is_free(); }

  bool is_young()    const { return _type.is_young();    }
  bool is_eden()     const { return _type.is_eden();     }
  bool is_survivor() const { return _type.is_survivor(); }

  bool is_humongous() const { return _type.is_humongous(); }
  bool is_starts_humongous() const { return _type.is_starts_humongous(); }
  bool is_continues_humongous() const { return _type.is_continues_humongous();   }

  bool is_old() const { return _type.is_old(); }

  bool is_old_or_humongous() const { return _type.is_old_or_humongous(); }

  void set_free();

  void set_eden();
  void set_eden_pre_gc();
  void set_survivor();

  void move_to_old();
  void set_old();

  // For a humongous region, region in which it starts.
  HeapRegion* humongous_start_region() const {
    return _humongous_start_region;
  }

  // Makes the current region be a "starts humongous" region, i.e.,
  // the first region in a series of one or more contiguous regions
  // that will contain a single "humongous" object.
  //
  // obj_top : points to the top of the humongous object.
  // fill_size : size of the filler object at the end of the region series.
  void set_starts_humongous(HeapWord* obj_top, size_t fill_size);

  // Makes the current region be a "continues humongous"
  // region. first_hr is the "start humongous" region of the series
  // which this region will be part of.
  void set_continues_humongous(HeapRegion* first_hr);

  // Unsets the humongous-related fields on the region.
  void clear_humongous();

  void set_rem_set(HeapRegionRemSet* rem_set) { _rem_set = rem_set; }
  // If the region has a remembered set, return a pointer to it.
  HeapRegionRemSet* rem_set() const {
    return _rem_set;
  }

  inline bool in_collection_set() const;

  inline const char* collection_set_candidate_short_type_str() const;

  void prepare_remset_for_scan();

  // Methods used by the HeapRegionSetBase class and subclasses.

  // Getter and setter for the next and prev fields used to link regions into
  // linked lists.
  void set_next(HeapRegion* next) { _next = next; }
  HeapRegion* next()              { return _next; }

  void set_prev(HeapRegion* prev) { _prev = prev; }
  HeapRegion* prev()              { return _prev; }

  void unlink_from_list();

  // Every region added to a set is tagged with a reference to that
  // set. This is used for doing consistency checking to make sure that
  // the contents of a set are as they should be and it's only
  // available in non-product builds.
#ifdef ASSERT
  void set_containing_set(HeapRegionSetBase* containing_set) {
    assert((containing_set != nullptr && _containing_set == nullptr) ||
            containing_set == nullptr,
           "containing_set: " PTR_FORMAT " "
           "_containing_set: " PTR_FORMAT,
           p2i(containing_set), p2i(_containing_set));

    _containing_set = containing_set;
  }

  HeapRegionSetBase* containing_set() { return _containing_set; }
#else // ASSERT
  void set_containing_set(HeapRegionSetBase* containing_set) { }

  // containing_set() is only used in asserts so there's no reason
  // to provide a dummy version of it.
#endif // ASSERT


  // Reset the HeapRegion to default values and clear its remembered set.
  // If clear_space is true, clear the HeapRegion's memory.
  // Callers must ensure this is not called by multiple threads at the same time.
  void hr_clear(bool clear_space);
  // Clear the card table corresponding to this region.
  void clear_cardtable();

  // Notify the region that an evacuation failure occurred for an object within this
  // region.
  void note_evacuation_failure(bool during_concurrent_start);

  // Notify the region that we have partially finished processing self-forwarded
  // objects during evacuation failure handling.
  void note_self_forward_chunk_done(size_t garbage_bytes);

  uint index_in_opt_cset() const {
    assert(has_index_in_opt_cset(), "Opt cset index not set.");
    return _index_in_opt_cset;
  }
  bool has_index_in_opt_cset() const { return _index_in_opt_cset != InvalidCSetIndex; }
  void set_index_in_opt_cset(uint index) { _index_in_opt_cset = index; }
  void clear_index_in_opt_cset() { _index_in_opt_cset = InvalidCSetIndex; }

  double calc_gc_efficiency();

  uint  young_index_in_cset() const { return _young_index_in_cset; }
  void clear_young_index_in_cset() { _young_index_in_cset = 0; }
  void set_young_index_in_cset(uint index) {
    assert(index != UINT_MAX, "just checking");
    assert(index != 0, "just checking");
    assert(is_young(), "pre-condition");
    _young_index_in_cset = index;
  }

  int age_in_surv_rate_group() const;
  bool has_valid_age_in_surv_rate() const;

  bool has_surv_rate_group() const;

  double surv_rate_prediction(G1Predictions const& predictor) const;

  void install_surv_rate_group(G1SurvRateGroup* surv_rate_group);
  void uninstall_surv_rate_group();

  void record_surv_words_in_group(size_t words_survived);

  // Determine if an address is in the parsable or the to-be-scrubbed area.
  inline        bool is_in_parsable_area(const void* const addr) const;
  inline static bool is_in_parsable_area(const void* const addr, const void* const pb);

  bool obj_allocated_since_marking_start(oop obj) const {
    return cast_from_oop<HeapWord*>(obj) >= top_at_mark_start();
  }

  // Update the region state after a failed evacuation.
  void handle_evacuation_failure();

  // Iterate over the objects overlapping the given memory region, applying cl
  // to all references in the region.  This is a helper for
  // G1RemSet::refine_card*, and is tightly coupled with them.
  // mr must not be empty. Must be trimmed to the allocated/parseable space in this region.
  // This region must be old or humongous.
  // Returns the next unscanned address if the designated objects were successfully
  // processed, null if an unparseable part of the heap was encountered (That should
  // only happen when invoked concurrently with the mutator).
  template <bool in_gc_pause, class Closure>
  inline HeapWord* oops_on_memregion_seq_iterate_careful(MemRegion mr, Closure* cl);

  // Routines for managing a list of code roots (attached to the
  // this region's RSet) that point into this heap region.
  void add_code_root(nmethod* nm);
  void add_code_root_locked(nmethod* nm);
  void remove_code_root(nmethod* nm);

  // Applies blk->do_code_blob() to each of the entries in
  // the code roots list for this region
  void code_roots_do(CodeBlobClosure* blk) const;

  uint node_index() const { return _node_index; }
  void set_node_index(uint node_index) { _node_index = node_index; }
};
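Two helpers in this listing reward a second look: is_in_same_region() relies on the fact that two addresses in one region agree on every bit above the low LogOfHRGrainBytes bits, so XOR-then-shift yields zero exactly when they share a region; and reclaimable_bytes() is simply capacity() - live_bytes(). A standalone sketch with invented constants and values:

#include <cassert>
#include <cstdint>

static const int kLogRegionBytes = 21;  // invented: assume 2 MB regions

// Same trick as HeapRegion::is_in_same_region: XOR clears the shared high
// bits and the shift discards the in-region offset bits.
static bool same_region(uintptr_t a, uintptr_t b) {
  return ((a ^ b) >> kLogRegionBytes) == 0;
}

int main() {
  uintptr_t base = 0x40000000;  // assumed region-aligned address
  assert( same_region(base + 16, base + 100000));
  assert(!same_region(base + 16, base + (uintptr_t(1) << kLogRegionBytes)));

  // reclaimable_bytes() = capacity - live, where live = used - garbage:
  size_t capacity = 2 * 1024 * 1024, used = 1536 * 1024, garbage = 512 * 1024;
  size_t live        = used - garbage;    // 1 MB of live data
  size_t reclaimable = capacity - live;   // 1 MB: garbage plus free [top, end)
  assert(reclaimable == 1024 * 1024);
  return 0;
}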


3.4 g1YoungGenSizer.cpp

uint G1YoungGenSizer::calculate_default_min_length(uint new_number_of_heap_regions) {
  uint default_value = (new_number_of_heap_regions * G1NewSizePercent) / 100;
  return MAX2(1U, default_value);
}

uint G1YoungGenSizer::calculate_default_max_length(uint new_number_of_heap_regions) {
  uint default_value = (new_number_of_heap_regions * G1MaxNewSizePercent) / 100;
  return MAX2(1U, default_value);
}



The young generation sizing logic lives mainly in G1YoungGenSizer.

The minimum and maximum numbers of young regions are computed from the JVM flags -XX:G1NewSizePercent and -XX:G1MaxNewSizePercent, as the code above shows.
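Assuming the common defaults G1NewSizePercent=5 and G1MaxNewSizePercent=60 (both are experimental flags, so treat these values as an assumption) and the 2048-region heap from the sizing example above, the young generation is bounded at roughly 102 to 1228 regions:

#include <cstdio>
#include <algorithm>

// Mirrors calculate_default_min/max_length above; the 5/60 percent
// defaults are assumed, not read from a live VM.
int main() {
  unsigned regions = 2048;
  unsigned min_len = std::max(1u, regions * 5  / 100);   // 102 regions
  unsigned max_len = std::max(1u, regions * 60 / 100);   // 1228 regions
  printf("young gen: [%u, %u] regions => [%u MB, %u MB] with 2 MB regions\n",
         min_len, max_len, min_len * 2, max_len * 2);
  return 0;
}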
