JMH Study Notes - Forking

Articles in This Series

JMH Study Notes - Building from Source and Benchmark Modes
JMH Study Notes - State Shared Objects
JMH Study Notes - State Setup and TearDown Methods
JMH Study Notes - Dead Code Elimination
JMH Study Notes - Constant Folding
JMH Study Notes - Forking
JMH Study Notes - Environment Setup
JMH Study Notes - Cache Line Handling
JMH Study Notes - Adding JMH to Your Own Project



Preface

As is well known, JVMs are notoriously good at profile-guided optimizations. This is bad for benchmarks, because different tests can mix their profiles together and then produce "uniformly bad" code for every test. Forking (running each test in a separate process) helps to evade this issue. JMH forks the tests by default.



1. Loop Optimization

It is tempting to put a loop inside the benchmark method. Suppose we want to measure how long it takes to add two integers:

/*
 * Suppose we want to measure how much it takes to sum two integers:
 */
int x = 1;
int y = 2;

/*
 * This is what you do with JMH.
 */

@Benchmark
public int measureRight() {
    return (x + y);
}

However, it is easy to fall into measuring it the wrong way, like this:

/*
 * The following tests emulate the naive looping.
 * This is the Caliper-style benchmark.
 */
private int reps(int reps) {
    int s = 0;
    for (int i = 0; i < reps; i++) {
        s += (x + y);
    }
    return s;
}

/*
 * We would like to measure this with different repetitions count.
 * Special annotation is used to get the individual operation cost.
 */

@Benchmark
@OperationsPerInvocation(1)
public int measureWrong_1() {
    return reps(1);
}

@Benchmark
@OperationsPerInvocation(10)
public int measureWrong_10() {
    return reps(10);
}

@Benchmark
@OperationsPerInvocation(100)
public int measureWrong_100() {
    return reps(100);
}

@Benchmark
@OperationsPerInvocation(1_000)
public int measureWrong_1000() {
    return reps(1_000);
}

@Benchmark
@OperationsPerInvocation(10_000)
public int measureWrong_10000() {
    return reps(10_000);
}

@Benchmark
@OperationsPerInvocation(100_000)
public int measureWrong_100000() {
    return reps(100_000);
}

The @OperationsPerInvocation annotation tells JMH how many operations a single benchmark invocation performs; JMH divides the measured invocation time by that count to report the cost of an individual operation (i.e. one x + y).
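As a rough sketch of the arithmetic behind the annotation (hypothetical numbers, not JMH internals): if one call to reps(100) were measured at about 5,100 ns, JMH would report 5,100 / 100 = 51 ns/op, even though most of that time was not spent on a single addition.

// Sketch of the per-operation arithmetic (hypothetical numbers, not JMH internals).
long measuredInvocationNanos = 5_100;  // assumed measured time for one reps(100) call
int opsPerInvocation = 100;            // value declared in @OperationsPerInvocation(100)
double reportedNsPerOp = (double) measuredInvocationNanos / opsPerInvocation; // 51.0 ns/op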

The results are as follows:

Benchmark                               Mode  Cnt  Score   Error  Units
JMHSample_11_Loops.measureRight         avgt    5  3.859 ± 0.078  ns/op
JMHSample_11_Loops.measureWrong_1       avgt    5  4.027 ± 0.169  ns/op
JMHSample_11_Loops.measureWrong_10      avgt    5  0.447 ± 0.015  ns/op
JMHSample_11_Loops.measureWrong_100     avgt    5  0.051 ± 0.002  ns/op
JMHSample_11_Loops.measureWrong_1000    avgt    5  0.042 ± 0.001  ns/op
JMHSample_11_Loops.measureWrong_10000   avgt    5  0.026 ± 0.001  ns/op
JMHSample_11_Loops.measureWrong_100000  avgt    5  0.032 ± 0.004  ns/op

You may notice that the more repetitions there are, the lower the "perceived" cost of the measured operation becomes, down to roughly 1/30 ns per addition, which is far beyond what the hardware can actually do. This happens because the loop is heavily unrolled/pipelined, and the operation being measured is hoisted out of the loop. The moral: don't overuse loops; rely on JMH to get the measurement right.
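Here is a rough, conceptual sketch of why the looped measurement collapses. It illustrates the hoisting only; the actual JIT output is unrolled and pipelined machine code, not this Java. It assumes the same x and y fields as above.

private int repsHoisted(int reps) {
    int invariant = x + y;   // the loop-invariant (x + y) is computed once, outside the loop
    int s = 0;
    for (int i = 0; i < reps; i++) {
        s += invariant;      // what remains per iteration is a trivial add, which
    }                        // unrolling/pipelining makes almost free
    return s;
}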

2. Forking

In a JMH benchmark, annotating a @Benchmark method with @Fork(0) tells JMH not to fork it, i.e. to run the test in the host JVM.

package org.openjdk.jmh.samples;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class JMHSample_12_Forking {

    /**
     * JVMs are notoriously good at profile-guided optimizations. This is bad
     * for benchmarks, because different tests can mix their profiles together,
     * and then render the "uniformly bad" code for every test. Forking (running
     * in a separate process) each test can help to evade this issue.
     *
     * JMH will fork the tests by default.
     */

    /*
     * Suppose we have this simple counter interface, and two implementations.
     * Even though those are semantically the same, from the JVM standpoint,
     * those are distinct classes.
     */

    public interface Counter {
        int inc();
    }

    public static class Counter1 implements Counter {
        private int x;

        @Override
        public int inc() {
            return x++;
        }
    }

    public static class Counter2 implements Counter {
        private int x;

        @Override
        public int inc() {
            return x++;
        }
    }

    /**
     * And this is how we measure it.
     * Note this is susceptible to the same loop issue mentioned in the previous examples.
     */

    public int measure(Counter c) {
        int s = 0;
        for (int i = 0; i < 10; i++) {
            s += c.inc();
        }
        return s;
    }

    /*
     * These are two counters.
     */
    Counter c1 = new Counter1();
    Counter c2 = new Counter2();

    /*
     * We first measure the Counter1 alone...
     * Fork(0) helps to run in the same JVM.
     */

    @Benchmark
    @Fork(0)
    public int measure_1_c1() {
        return measure(c1);
    }

    /*
     * Then Counter2...
     */

    @Benchmark
    @Fork(0)
    public int measure_2_c2() {
        return measure(c2);
    }

    /*
     * Then Counter1 again...
     */

    @Benchmark
    @Fork(0)
    public int measure_3_c1_again() {
        return measure(c1);
    }

    /*
     * These two tests have explicit @Fork annotation.
     * JMH takes this annotation as the request to run the test in the forked JVM.
     * It's even simpler to force this behavior for all the tests via the command
     * line option "-f". The forking is default, but we still use the annotation
     * for the consistency.
     *
     * This is the test for Counter1.
     */

    @Benchmark
    @Fork(1)
    public int measure_4_forked_c1() {
        return measure(c1);
    }

    /*
     * ...and this is the test for Counter2.
     */

    @Benchmark
    @Fork(1)
    public int measure_5_forked_c2() {
        return measure(c2);
    }

    /**
     * ============================== HOW TO RUN THIS TEST: ====================================
     *
     * Note that C1 is faster, C2 is slower, but the C1 is slow again! This is because
     * the profiles for C1 and C2 had merged together. Notice how flawless the measurement
     * is for forked runs.
     *
     * You can run this test:
     *
     * a) Via the command line:
     *    $ mvn clean install
     *    $ java -jar target/benchmarks.jar JMHSample_12
     *
     * b) Via the Java API:
     *    (see the JMH homepage for possible caveats when running from IDE:
     *      http://openjdk.java.net/projects/code-tools/jmh/)
     */

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(JMHSample_12_Forking.class.getSimpleName())
                .build();

        new Runner(opt).run();
    }

}

measure_4_forked_c1 and measure_5_forked_c2 carry an explicit @Fork annotation; JMH takes this annotation as a request to run the test in a forked JVM. It is even simpler to force this behavior for all tests via the command line option "-f" (a sketch of doing the same through the Java API follows the results below). Forking is the default, but we still use the annotation for consistency. measure_1_c1, measure_2_c2, and measure_3_c1_again use @Fork(0), which disables forking, so they run in the same (host) JVM. Note also that what we measure here is exactly the kind of loop the JVM loves to optimize, as shown in the previous example. The results are as follows:

# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=50844:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_12_Forking.measure_4_forked_c1

# Run progress: 0.00% complete, ETA 00:08:20
# Fork: 1 of 1
# Warmup Iteration   1: 8.488 ns/op
# Warmup Iteration   2: 8.218 ns/op
# Warmup Iteration   3: 6.671 ns/op
# Warmup Iteration   4: 6.663 ns/op
# Warmup Iteration   5: 6.899 ns/op
Iteration   1: 6.768 ns/op
Iteration   2: 6.700 ns/op
Iteration   3: 6.708 ns/op
Iteration   4: 6.763 ns/op
Iteration   5: 7.307 ns/op


Result "org.openjdk.jmh.samples.JMHSample_12_Forking.measure_4_forked_c1":
  6.849 ±(99.9%) 0.993 ns/op [Average]
  (min, avg, max) = (6.700, 6.849, 7.307), stdev = 0.258
  CI (99.9%): [5.856, 7.842] (assumes normal distribution)


# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=50844:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_12_Forking.measure_5_forked_c2

# Run progress: 20.00% complete, ETA 00:06:45
# Fork: 1 of 1
# Warmup Iteration   1: 8.300 ns/op
# Warmup Iteration   2: 9.186 ns/op
# Warmup Iteration   3: 7.463 ns/op
# Warmup Iteration   4: 6.907 ns/op
# Warmup Iteration   5: 7.318 ns/op
Iteration   1: 6.927 ns/op
Iteration   2: 7.158 ns/op
Iteration   3: 6.943 ns/op
Iteration   4: 7.087 ns/op
Iteration   5: 6.963 ns/op


Result "org.openjdk.jmh.samples.JMHSample_12_Forking.measure_5_forked_c2":
  7.015 ±(99.9%) 0.391 ns/op [Average]
  (min, avg, max) = (6.927, 7.015, 7.158), stdev = 0.102
  CI (99.9%): [6.625, 7.406] (assumes normal distribution)


# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=50844:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_12_Forking.measure_1_c1

# Run progress: 40.00% complete, ETA 00:05:03
# Fork: N/A, test runs in the host VM
# *** WARNING: Non-forked runs may silently omit JVM options, mess up profilers, disable compiler hints, etc. ***
# *** WARNING: Use non-forked runs only for debugging purposes, not for actual performance runs. ***
# Warmup Iteration   1: 4.305 ns/op
# Warmup Iteration   2: 4.546 ns/op
# Warmup Iteration   3: 4.470 ns/op
# Warmup Iteration   4: 4.399 ns/op
# Warmup Iteration   5: 4.369 ns/op
Iteration   1: 4.569 ns/op
Iteration   2: 4.515 ns/op
Iteration   3: 4.721 ns/op
Iteration   4: 4.461 ns/op
Iteration   5: 4.431 ns/op

Result "org.openjdk.jmh.samples.JMHSample_12_Forking.measure_1_c1":
  4.539 ±(99.9%) 0.440 ns/op [Average]
  (min, avg, max) = (4.431, 4.539, 4.721), stdev = 0.114
  CI (99.9%): [4.100, 4.979] (assumes normal distribution)


# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=50844:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_12_Forking.measure_2_c2

# Run progress: 60.00% complete, ETA 00:03:21
# Fork: N/A, test runs in the host VM
# *** WARNING: Non-forked runs may silently omit JVM options, mess up profilers, disable compiler hints, etc. ***
# *** WARNING: Use non-forked runs only for debugging purposes, not for actual performance runs. ***
# Warmup Iteration   1: 30.568 ns/op
# Warmup Iteration   2: 30.998 ns/op
# Warmup Iteration   3: 31.023 ns/op
# Warmup Iteration   4: 29.811 ns/op
# Warmup Iteration   5: 30.510 ns/op
Iteration   1: 29.734 ns/op
Iteration   2: 29.887 ns/op
Iteration   3: 29.195 ns/op
Iteration   4: 29.704 ns/op
Iteration   5: 29.972 ns/op

Result "org.openjdk.jmh.samples.JMHSample_12_Forking.measure_2_c2":
  29.699 ±(99.9%) 1.163 ns/op [Average]
  (min, avg, max) = (29.195, 29.699, 29.972), stdev = 0.302
  CI (99.9%): [28.535, 30.862] (assumes normal distribution)


# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=50844:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_12_Forking.measure_3_c1_again

# Run progress: 80.00% complete, ETA 00:01:40
# Fork: N/A, test runs in the host VM
# *** WARNING: Non-forked runs may silently omit JVM options, mess up profilers, disable compiler hints, etc. ***
# *** WARNING: Use non-forked runs only for debugging purposes, not for actual performance runs. ***
# Warmup Iteration   1: 30.810 ns/op
# Warmup Iteration   2: 30.696 ns/op
# Warmup Iteration   3: 30.316 ns/op
# Warmup Iteration   4: 30.294 ns/op
# Warmup Iteration   5: 31.576 ns/op
Iteration   1: 30.348 ns/op
Iteration   2: 30.903 ns/op
Iteration   3: 30.388 ns/op
Iteration   4: 30.568 ns/op
Iteration   5: 30.562 ns/op

Result "org.openjdk.jmh.samples.JMHSample_12_Forking.measure_3_c1_again":
  30.554 ±(99.9%) 0.843 ns/op [Average]
  (min, avg, max) = (30.348, 30.554, 30.903), stdev = 0.219
  CI (99.9%): [29.710, 31.397] (assumes normal distribution)


# Run complete. Total time: 00:08:22

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                                 Mode  Cnt   Score   Error  Units
JMHSample_12_Forking.measure_1_c1         avgt    5   4.539 ± 0.440  ns/op
JMHSample_12_Forking.measure_2_c2         avgt    5  29.699 ± 1.163  ns/op
JMHSample_12_Forking.measure_3_c1_again   avgt    5  30.554 ± 0.843  ns/op
JMHSample_12_Forking.measure_4_forked_c1  avgt    5   6.849 ± 0.993  ns/op
JMHSample_12_Forking.measure_5_forked_c2  avgt    5   7.015 ± 0.391  ns/op

Note that in the non-forked runs, measure_1_c1 is fast, measure_2_c2 is slower, and measure_3_c1_again is slow again! This is because the profiles for Counter1 and Counter2 have merged together. Notice how clean the measurements are for the forked runs.
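As mentioned above, besides the annotation, forking can be forced for all included benchmarks via the "-f" command line option or through the Java API. Below is a minimal sketch of the API route, assuming the same JMHSample_12_Forking class (the runner class name ForkAllRunner is made up for this example):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ForkAllRunner {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(JMHSample_12_Forking.class.getSimpleName())
                .forks(1)   // request one forked JVM per benchmark,
                            // like passing "-f 1" on the command line
                .build();

        new Runner(opt).run();
    }
}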

3. Estimating Run-to-Run Variance

Forking also allows us to estimate run-to-run variance. JVMs are complex systems, and non-determinism is inherent to them, so we always have to account for run-to-run variance as one of the effects in our experiments. Luckily, forking aggregates the results across several JVM launches.

@State(Scope.Thread)
public static class SleepyState {
    public long sleepTime;

    @Setup
    public void setup() {
        sleepTime = (long) (Math.random() * 1000);
    }
}

In order to introduce readily measurable run-to-run variance, we build a workload whose performance differs from run to run. Many real workloads behave similarly; here we do it artificially to make the point. For the state object above, the sleep time is chosen at random in setup(), so each run may behave differently. We then run the benchmark with different fork counts (1, 5, and 20):

@Benchmark
@Fork(1)
public void baseline(SleepyState s) throws InterruptedException {
    TimeUnit.MILLISECONDS.sleep(s.sleepTime);
}

@Benchmark
@Fork(5)
public void fork_1(SleepyState s) throws InterruptedException {
    TimeUnit.MILLISECONDS.sleep(s.sleepTime);
}

@Benchmark
@Fork(20)
public void fork_2(SleepyState s) throws InterruptedException {
    TimeUnit.MILLISECONDS.sleep(s.sleepTime);
}

The complete benchmark class is as follows:

package org.openjdk.jmh.samples;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class JMHSample_13_RunToRun {

    /**
     * Forking also allows to estimate run-to-run variance.
     *
     * JVMs are complex systems, and the non-determinism is inherent for them.
     * This requires us to always account the run-to-run variance as the one
     * of the effects in our experiments.
     *
     * Luckily, forking aggregates the results across several JVM launches.
     */

    /**
     * In order to introduce readily measurable run-to-run variance, we build
     * the workload which performance differs from run to run. Note that many workloads
     * will have the similar behavior, but we do that artificially to make a point.
     */

    @State(Scope.Thread)
    public static class SleepyState {
        public long sleepTime;

        @Setup
        public void setup() {
            sleepTime = (long) (Math.random() * 1000);
        }
    }

    /*
     * Now, we will run this different number of times.
     */

    @Benchmark
    @Fork(1)
    public void baseline(SleepyState s) throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(s.sleepTime);
    }

    @Benchmark
    @Fork(5)
    public void fork_1(SleepyState s) throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(s.sleepTime);
    }

    @Benchmark
    @Fork(20)
    public void fork_2(SleepyState s) throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(s.sleepTime);
    }

    /**
     *
     * Note the baseline is random within [0..1000] msec; and both forked runs
     * are estimating the average 500 msec with some confidence.
     */

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(JMHSample_13_RunToRun.class.getSimpleName())
                .warmupIterations(0)
                .measurementIterations(3)
                .build();

        new Runner(opt).run();
    }

}

We set measurementIterations to 3 and vary the fork count, so the total number of measured iterations is fork count × measurementIterations (baseline: 1 × 3 = 3, fork_1: 5 × 3 = 15, fork_2: 20 × 3 = 60, matching the Cnt column in the summary). The results are as follows:

# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=51991:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: <none>
# Measurement: 3 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_13_RunToRun.baseline

# Run progress: 0.00% complete, ETA 00:13:00
# Fork: 1 of 1
Iteration   1: 709.435 ms/op
Iteration   2: 709.065 ms/op
Iteration   3: 708.975 ms/op


Result "org.openjdk.jmh.samples.JMHSample_13_RunToRun.baseline":
  709.158 ±(99.9%) 4.447 ms/op [Average]
  (min, avg, max) = (708.975, 709.158, 709.435), stdev = 0.244
  CI (99.9%): [704.711, 713.605] (assumes normal distribution)


# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=51991:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: <none>
# Measurement: 3 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_13_RunToRun.fork_1

# Run progress: 3.85% complete, ETA 00:13:59
# Fork: 1 of 5
Iteration   1: 302.316 ms/op
Iteration   2: 301.994 ms/op
Iteration   3: 302.005 ms/op

# Run progress: 7.69% complete, ETA 00:13:03
# Fork: 2 of 5
Iteration   1: 850.282 ms/op
Iteration   2: 850.281 ms/op
Iteration   3: 849.989 ms/op

# Run progress: 11.54% complete, ETA 00:12:21
# Fork: 3 of 5
Iteration   1: 927.105 ms/op
Iteration   2: 926.988 ms/op
Iteration   3: 927.013 ms/op

# Run progress: 15.38% complete, ETA 00:11:44
# Fork: 4 of 5
Iteration   1: 359.042 ms/op
Iteration   2: 359.052 ms/op
Iteration   3: 359.068 ms/op

# Run progress: 19.23% complete, ETA 00:11:07
# Fork: 5 of 5
Iteration   1: 803.041 ms/op
Iteration   2: 803.117 ms/op
Iteration   3: 803.147 ms/op


Result "org.openjdk.jmh.samples.JMHSample_13_RunToRun.fork_1":
  648.296 ±(99.9%) 291.066 ms/op [Average]
  (min, avg, max) = (301.994, 648.296, 927.105), stdev = 272.264
  CI (99.9%): [357.230, 939.362] (assumes normal distribution)


# JMH version: 1.26
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\Program Files\Java\jdk1.8.0_121\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\lib\idea_rt.jar=51991:D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.1\bin -Dfile.encoding=UTF-8
# Warmup: <none>
# Measurement: 3 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.openjdk.jmh.samples.JMHSample_13_RunToRun.fork_2

# Run progress: 23.08% complete, ETA 00:10:37
# Fork: 1 of 20
Iteration   1: 245.112 ms/op
Iteration   2: 245.007 ms/op
Iteration   3: 245.054 ms/op

# Run progress: 26.92% complete, ETA 00:10:03
# Fork: 2 of 20
Iteration   1: 658.986 ms/op
Iteration   2: 659.065 ms/op
Iteration   3: 659.956 ms/op

# Run progress: 30.77% complete, ETA 00:09:33
# Fork: 3 of 20
Iteration   1: 454.038 ms/op
Iteration   2: 454.198 ms/op
Iteration   3: 454.169 ms/op

# Run progress: 34.62% complete, ETA 00:09:01
# Fork: 4 of 20
Iteration   1: 679.990 ms/op
Iteration   2: 679.997 ms/op
Iteration   3: 679.995 ms/op

# Run progress: 38.46% complete, ETA 00:08:29
# Fork: 5 of 20
Iteration   1: 775.311 ms/op
Iteration   2: 775.211 ms/op
Iteration   3: 775.045 ms/op

# Run progress: 42.31% complete, ETA 00:07:56
# Fork: 6 of 20
Iteration   1: 357.022 ms/op
Iteration   2: 356.992 ms/op
Iteration   3: 357.035 ms/op

# Run progress: 46.15% complete, ETA 00:07:24
# Fork: 7 of 20
Iteration   1: 904.990 ms/op
Iteration   2: 904.985 ms/op
Iteration   3: 905.043 ms/op

# Run progress: 50.00% complete, ETA 00:06:54
# Fork: 8 of 20
Iteration   1: 199.008 ms/op
Iteration   2: 199.063 ms/op
Iteration   3: 199.000 ms/op

# Run progress: 53.85% complete, ETA 00:06:22
# Fork: 9 of 20
Iteration   1: 494.102 ms/op
Iteration   2: 494.144 ms/op
Iteration   3: 494.081 ms/op

# Run progress: 57.69% complete, ETA 00:05:50
# Fork: 10 of 20
Iteration   1: 146.106 ms/op
Iteration   2: 146.023 ms/op
Iteration   3: 146.064 ms/op

# Run progress: 61.54% complete, ETA 00:05:17
# Fork: 11 of 20
Iteration   1: 687.038 ms/op
Iteration   2: 687.017 ms/op
Iteration   3: 687.216 ms/op

# Run progress: 65.38% complete, ETA 00:04:46
# Fork: 12 of 20
Iteration   1: 165.017 ms/op
Iteration   2: 164.995 ms/op
Iteration   3: 165.022 ms/op

# Run progress: 69.23% complete, ETA 00:04:13
# Fork: 13 of 20
Iteration   1: 561.048 ms/op
Iteration   2: 561.003 ms/op
Iteration   3: 560.984 ms/op

# Run progress: 73.08% complete, ETA 00:03:41
# Fork: 14 of 20
Iteration   1: 808.065 ms/op
Iteration   2: 808.048 ms/op
Iteration   3: 808.129 ms/op

# Run progress: 76.92% complete, ETA 00:03:10
# Fork: 15 of 20
Iteration   1: 687.036 ms/op
Iteration   2: 687.007 ms/op
Iteration   3: 687.024 ms/op

# Run progress: 80.77% complete, ETA 00:02:38
# Fork: 16 of 20
Iteration   1: 645.038 ms/op
Iteration   2: 645.001 ms/op
Iteration   3: 644.997 ms/op

# Run progress: 84.62% complete, ETA 00:02:06
# Fork: 17 of 20
Iteration   1: 577.016 ms/op
Iteration   2: 577.032 ms/op
Iteration   3: 577.030 ms/op

# Run progress: 88.46% complete, ETA 00:01:35
# Fork: 18 of 20
Iteration   1: 669.045 ms/op
Iteration   2: 669.024 ms/op
Iteration   3: 669.019 ms/op

# Run progress: 92.31% complete, ETA 00:01:03
# Fork: 19 of 20
Iteration   1: 115.005 ms/op
Iteration   2: 115.006 ms/op
Iteration   3: 114.999 ms/op

# Run progress: 96.15% complete, ETA 00:00:31
# Fork: 20 of 20
Iteration   1: 155.052 ms/op
Iteration   2: 155.059 ms/op
Iteration   3: 155.019 ms/op


Result "org.openjdk.jmh.samples.JMHSample_13_RunToRun.fork_2":
  499.213 ±(99.9%) 110.650 ms/op [Average]
  (min, avg, max) = (114.999, 499.213, 905.043), stdev = 247.484
  CI (99.9%): [388.563, 609.863] (assumes normal distribution)


# Run complete. Total time: 00:13:42

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                       Mode  Cnt    Score     Error  Units
JMHSample_13_RunToRun.baseline  avgt    3  709.158 ±   4.447  ms/op
JMHSample_13_RunToRun.fork_1    avgt   15  648.296 ± 291.066  ms/op
JMHSample_13_RunToRun.fork_2    avgt   60  499.213 ± 110.650  ms/op

Note the baseline is random within [0…1000] msec; and both forked runs are estimating the average 500 msec with some confidence.
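To see why the forked averages hover around 500 ms, here is a minimal plain-Java sketch (not part of JMH; the class name is made up) that imitates the aggregation: each fork draws one sleepTime uniformly from [0, 1000) ms, just like SleepyState.setup(), and more forks pull the average closer to the true mean of 500 ms.

import java.util.Random;

public class ForkVarianceSketch {
    public static void main(String[] args) {
        Random rnd = new Random();
        for (int forks : new int[]{1, 5, 20}) {
            double sum = 0;
            for (int f = 0; f < forks; f++) {
                // each simulated fork picks a fresh sleepTime in [0, 1000) ms,
                // mirroring SleepyState.setup()
                sum += rnd.nextDouble() * 1000.0;
            }
            System.out.printf("forks=%2d  average ~ %.1f ms (true mean is 500 ms)%n",
                    forks, sum / forks);
        }
    }
}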


Summary

JMH will fork the tests by default.
