Runtime.getRuntime().availableProcessors() in Docker containers: the problem and the fix

The Problem

We recently ran into a strange issue in production: a load test at a mere 20 QPS produced a large number of latency spikes. Our initial read was that the spikes were caused by young GC (YGC) pauses.

The production environment is a Docker container with 4 cores and 8 GB of memory, running OpenJDK 8.

Investigation

Logging into the machine and checking the GC log, we found GC Workers: 63. The load-test server has only 4 cores, so under normal conditions there is no way it should have 63 GC threads:

[GC pause (G1 Evacuation Pause) (young), 0.0054131 secs]
   [Parallel Time: 3.6 ms, GC Workers: 63]
      [GC Worker Start (ms): Min: 1315.3, Avg: 1315.4, Max: 1315.4, Diff: 0.1]
      [Ext Root Scanning (ms): Min: 0.3, Avg: 0.5, Max: 0.9, Diff: 0.6, Sum: 1.9]
      [Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Processed Buffers: Min: 0, Avg: 0.0, Max: 0, Diff: 0, Sum: 0]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.2, Max: 0.6, Diff: 0.6, Sum: 0.9]
      [Object Copy (ms): Min: 2.5, Avg: 2.7, Max: 3.0, Diff: 0.5, Sum: 11.0]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Termination Attempts: Min: 1, Avg: 2.2, Max: 4, Diff: 3, Sum: 9]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.1]
      [GC Worker Total (ms): Min: 3.4, Avg: 3.5, Max: 3.6, Diff: 0.1, Sum: 14.0]
      [GC Worker End (ms): Min: 1318.9, Avg: 1318.9, Max: 1318.9, Diff: 0.1]
   [Code Root Fixup: 0.0 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.1 ms]
   [Other: 1.6 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.9 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.1 ms]
      [Humongous Register: 0.1 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.2 ms]
   [Eden: 204.0M(204.0M)->0.0B(200.0M) Survivors: 0.0B->4096.0K Heap: 204.0M(4096.0M)->3728.6K(4096.0M)]

A jstack thread dump likewise showed dozens of C1 and C2 compiler threads.

ParallelGCThreads is derived from the core count N by the following formula (this is the branch for N > 8; with 8 or fewer cores it is simply N):

ParallelGCThreads = 8 + ((N - 8) * 5/8)

Plugging 63 threads into the formula and solving for N gives 96, which is exactly the host machine's core count.
So the JVM was getting the available core count wrong: it saw the host's cores rather than the cores allotted to the container.
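
As a quick sanity check, the heuristic is easy to replay in Java. This is a minimal sketch of HotSpot's default sizing rule, not the VM code itself (which lives in the C++ sources):

// GcThreadEstimate.java
// Sketch of HotSpot's default ParallelGCThreads heuristic.
public class GcThreadEstimate {
    // With 8 or fewer cores HotSpot uses N directly; above 8 it adds 5/8 of the rest.
    static int parallelGcThreads(int n) {
        return n <= 8 ? n : 8 + (n - 8) * 5 / 8;
    }

    public static void main(String[] args) {
        System.out.println(parallelGcThreads(4));  // 4  -> what a 4-core container should get
        System.out.println(parallelGcThreads(96)); // 63 -> matches "GC Workers: 63" in the log
    }
}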

Reading the source of availableProcessors()

availableProcessors() is a native method on java.lang.Runtime, so we have to follow it into the HotSpot sources.

// Runtime.java
// A native method; returns the number of processors available to the Java process.
public native int availableProcessors();

The code before JDK 8u191

// os_linux.cpp
int os::active_processor_count() {
  // Linux doesn't yet have a (official) notion of processor sets,
  // so just return the number of online processors.
  int online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
  assert(online_cpus > 0 && online_cpus <= processor_count(), "sanity check");
  return online_cpus;
}

It simply reads _SC_NPROCESSORS_ONLN via sysconf, which returns the number of processors online on the host, with no awareness of container limits.
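
This is easy to demonstrate with a tiny probe. Run it inside a CPU-limited container (the --cpus value below is just illustrative); on a pre-8u191 JDK it still reports the host's core count:

// CpuProbe.java
// Prints the core count the JVM believes it has. Run it in a container,
// e.g. `docker run --cpus=4 ...`: JDKs before 8u191 print the host's
// online CPU count, while 8u191+ prints the container limit.
public class CpuProbe {
    public static void main(String[] args) {
        System.out.println("availableProcessors = "
                + Runtime.getRuntime().availableProcessors());
    }
}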

The code in JDK 15

JDK 8u191 shipped the "Java Improvements for Docker Containers" work, making the JVM container-aware and adding two JVM flags (usage shown below):
-XX:-UseContainerSupport disables container support
-XX:ActiveProcessorCount=<n> manually sets the number of available CPUs
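
Both are ordinary command-line switches; a usage sketch (app.jar is a placeholder):

# Pin the JVM to 4 CPUs no matter what it detects
java -XX:ActiveProcessorCount=4 -jar app.jar
# Or disable container detection and fall back to the host's core count
java -XX:-UseContainerSupport -jar app.jar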

The 8u191 sources are awkward to dig up, so let's read the JDK 15 code instead.

// os_linux.cpp
// If the JVM flag -XX:ActiveProcessorCount is set, return its value directly.
// If running inside a container, call OSContainer::active_processor_count().
// Otherwise fall back to Linux::active_processor_count().
int os::active_processor_count() {
  // User has overridden the number of active processors
  if (ActiveProcessorCount > 0) {
    log_trace(os)("active_processor_count: "
                  "active processor count set by user : %d",
                  ActiveProcessorCount);
    return ActiveProcessorCount;
  }

  int active_cpus;
  if (OSContainer::is_containerized()) {
    active_cpus = OSContainer::active_processor_count();
    log_trace(os)("active_processor_count: determined by OSContainer: %d",
                   active_cpus);
  } else {
    // Cores available to the current process; unlike older versions this
    // also accounts for the process's CPU affinity mask.
    active_cpus = os::Linux::active_processor_count();
  }

  return active_cpus;
}
// osContainer_linux.cpp
int OSContainer::active_processor_count() {
  assert(cgroup_subsystem != NULL, "cgroup subsystem not available");
  // Delegate to the cgroup subsystem's active_processor_count().
  // cgroups are the kernel's resource-isolation mechanism that containers are built on.
  return cgroup_subsystem->active_processor_count();
}
// cgroupSubsystem_linux.cpp
// If the container sets cpu.cfs_quota_us and cpu.cfs_period_us, the count is quota / period.
// If the container sets cpu.shares, the count is derived from shares (a relative weight).
int CgroupSubsystem::active_processor_count() {
  int quota_count = 0, share_count = 0;
  int cpu_count, limit_count;
  int result;

  CachingCgroupController* contrl = cpu_controller();
  CachedMetric* cpu_limit = contrl->metrics_cache();
  if (!cpu_limit->should_check_metric()) {
    int val = (int)cpu_limit->value();
    log_trace(os, container)("CgroupSubsystem::active_processor_count (cached): %d", val);
    return val;
  }

  cpu_count = limit_count = os::Linux::active_processor_count();
  int quota  = cpu_quota();
  int period = cpu_period();
  int share  = cpu_shares();

  if (quota > -1 && period > 0) {
    quota_count = ceilf((float)quota / (float)period);
    log_trace(os, container)("CPU Quota count based on quota/period: %d", quota_count);
  }
  if (share > -1) {
    share_count = ceilf((float)share / (float)PER_CPU_SHARES);
    log_trace(os, container)("CPU Share count based on shares: %d", share_count);
  }

  if (quota_count != 0 && share_count != 0) {
    // If the JVM flag PreferContainerQuotaForCPUCount is true (the default),
    // take quota_count; otherwise take the smaller of quota_count and share_count.
    if (PreferContainerQuotaForCPUCount) {
      limit_count = quota_count;
    } else {
      limit_count = MIN2(quota_count, share_count);
    }
  } else if (quota_count != 0) {
    limit_count = quota_count;
  } else if (share_count != 0) {
    limit_count = share_count;
  }

  // cpu_count is the core count reported by the kernel for this process;
  // the final result is the smaller of cpu_count and limit_count.
  result = MIN2(cpu_count, limit_count);
  log_trace(os, container)("OSContainer::active_processor_count: %d", result);

  // Update cached metric to avoid re-reading container settings too often
  cpu_limit->set_value(result, OSCONTAINER_CACHE_TIMEOUT);

  return result;
}
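
To make the arithmetic concrete: suppose a container is started with --cpus=4 (Docker sets cpu.cfs_quota_us=400000 against the default cpu.cfs_period_us=100000) and --cpu-shares=2048. The sketch below replays the same calculation in Java; the values are hypothetical, and PER_CPU_SHARES mirrors the constant (1024) in the HotSpot source:

// CgroupCpuMath.java
// A minimal Java replay of CgroupSubsystem::active_processor_count's
// quota/shares arithmetic, using hypothetical cgroup values.
public class CgroupCpuMath {
    static final int PER_CPU_SHARES = 1024; // same constant HotSpot uses

    public static void main(String[] args) {
        int quota = 400000, period = 100000, shares = 2048;
        int quotaCount = (int) Math.ceil((double) quota / period);          // 4
        int shareCount = (int) Math.ceil((double) shares / PER_CPU_SHARES); // 2
        // PreferContainerQuotaForCPUCount defaults to true, so the JVM takes
        // quotaCount (4); with the flag off it would take min(4, 2) = 2.
        System.out.println("quotaCount=" + quotaCount + ", shareCount=" + shareCount);
    }
}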

Solution

Upgrade to JDK 8u191 or later on the JDK 8 line, or to JDK 10+ (where full container support first landed before being backported to 8u191). After upgrading, the GC thread count, YGC frequency, and GC times all returned to normal.
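
To verify the detection after upgrading, the trace statements seen in the source above (log_trace(os, container)) can be enabled via unified logging on JDK 10 and later, e.g.:

java -Xlog:os+container=trace -version

which should print the detected quota, period, and shares, along with the final active_processor_count the JVM settles on.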
