Java Benchmarking with JMH (Java Microbenchmark Harness)

When it comes to measuring the performance of small code snippets in Java, hand-rolled timing with System.currentTimeMillis() or System.nanoTime() is unreliable: the results are distorted by JVM optimizations such as Just-In-Time (JIT) compilation, dead code elimination, and warm-up effects.
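
For contrast, a naive hand-rolled measurement often looks like the sketch below (the class and workload are purely illustrative). A single wall-clock number like this mixes interpreted and JIT-compiled execution, and the JVM may quietly optimize away work whose result it can prove is never needed.

public class NaiveTiming {
    public static void main(String[] args) {
        int[] numbers = new int[1_000];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i;
        }

        long start = System.nanoTime();
        long sum = 0;
        for (int run = 0; run < 100_000; run++) {
            for (int n : numbers) {
                sum += n; // early runs execute interpreted, later runs JIT-compiled
            }
        }
        long elapsedNanos = System.nanoTime() - start;
        // Hides warm-up, GC pauses, and run-to-run variance behind one number.
        System.out.println("sum=" + sum + ", elapsed=" + elapsedNanos + " ns");
    }
}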

This is where JMH (Java Microbenchmark Harness) comes into play: a benchmarking framework developed within the OpenJDK project by the same engineers who work on the JVM itself. JMH is designed specifically for writing and running benchmarks correctly on the JVM.





🧠 What is JMH?

JMH (Java Microbenchmark Harness) is a Java library for writing benchmarks that measure the performance of individual methods and small snippets of Java code with high accuracy.

It helps mitigate the inaccuracies caused by:

  • JVM warm-up time

  • JIT optimizations

  • Dead code elimination

  • GC interruptions

  • CPU cache effects


🔧 Why You Should Use JMH

Using JMH offers the following benefits:

  • Accurate and reproducible benchmarks

  • Built-in support for warm-up iterations

  • Isolation of benchmarking logic from setup logic

  • Multithreaded benchmarking support

  • Fine-grained control over benchmark execution


🚀 Getting Started with JMH

📦 1. Add JMH to Your Project

If you're using Maven, add the following to your pom.xml:

<properties>
    <jmh.version>1.37</jmh.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-core</artifactId>
        <version>${jmh.version}</version>
    </dependency>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-generator-annprocess</artifactId>
        <version>${jmh.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

For Gradle, use:

dependencies {
    implementation 'org.openjdk.jmh:jmh-core:1.37'
    annotationProcessor 'org.openjdk.jmh:jmh-generator-annprocess:1.37'
}

🧪 2. Writing a Simple Benchmark

import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
public class MyBenchmark {

    private int[] numbers;

    @Setup
    public void setup() {
        numbers = new int[1000];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i;
        }
    }

    @Benchmark
    public int sumLoop() {
        int sum = 0;
        for (int n : numbers) {
            sum += n;
        }
        return sum;
    }

    @Benchmark
    public int sumStream() {
        return java.util.Arrays.stream(numbers).sum();
    }
}

๐Ÿƒ 3. Running the Benchmark

You can run it using the JMH main class:

mvn clean install
java -jar target/benchmarks.jar
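
Alternatively, benchmarks can be launched programmatically through JMH's Runner and OptionsBuilder API. A minimal sketch (the BenchmarkRunner class name is arbitrary; it assumes the MyBenchmark class above is on the classpath):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(MyBenchmark.class.getSimpleName()) // regex matched against benchmark class names
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}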

⚙️ Key JMH Annotations Explained

  • @Benchmark: Marks a method as a benchmark target

  • @BenchmarkMode: Sets the benchmark mode (Throughput, AverageTime, SampleTime, SingleShotTime, or All)

  • @OutputTimeUnit: Specifies the time unit of the results (e.g., milliseconds, nanoseconds)

  • @State: Declares a state-holding class and its scope (Thread, Group, or Benchmark)

  • @Setup / @TearDown: Initialization and cleanup hooks, run per trial, per iteration, or per invocation depending on the chosen Level (Level.Trial is the default)

  • @Param: Runs the benchmark once for each listed parameter value (illustrated below)
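
To illustrate @Param, here is a small hypothetical benchmark; JMH would run it once for each listed size (the class name and values are purely illustrative):

import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class ParamBenchmark {

    @Param({"100", "10000", "1000000"}) // one full benchmark run per value
    private int size;

    private int[] data;

    @Setup
    public void setup() {
        data = new int[size];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
    }

    @Benchmark
    public long sum() {
        long total = 0;
        for (int n : data) {
            total += n;
        }
        return total;
    }
}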

📊 Benchmark Modes

JMH provides different modes to suit different benchmarking goals:

  • Throughput: Measures how many operations are completed per time unit.

  • AverageTime: Measures average time per operation.

  • SampleTime: Samples execution time of operations randomly.

  • SingleShotTime: Measures time for a single method invocation.

  • All: Runs all of the above modes in a single run (a combined-mode sketch follows this list).
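
For example, a single method can be measured in more than one mode at once. A minimal sketch (the class name and workload are illustrative):

import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
public class MultiModeBenchmark {

    private double value = 12345.6789;

    @Benchmark
    @BenchmarkMode({Mode.Throughput, Mode.AverageTime}) // reports both ops/us and us/op
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public double compute() {
        return Math.sqrt(value); // returning the result keeps it from being eliminated
    }
}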


🔄 Warmup and Iterations

The JVM optimizes code dynamically at runtime, so it's important to allow the JVM to warm up before measuring performance.

@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 10, time = 1)
@Fork(1)

These annotations control the following (a complete class-level sketch follows the list):

  • Warmup: Number of warm-up iterations before real measurements

  • Measurement: Actual iterations used for benchmarking

  • Fork: Number of JVM forks; each fork runs the full warm-up and measurement cycle in a fresh JVM instance for better isolation
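
Put together, these settings are usually placed on the benchmark class. A minimal sketch assuming one-second warm-up and measurement iterations (the xorshift workload is just an illustrative stand-in):

import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)       // 5 one-second warm-up iterations
@Measurement(iterations = 10, time = 1, timeUnit = TimeUnit.SECONDS) // 10 measured iterations
@Fork(1)                                                             // a single forked JVM
@State(Scope.Thread)
public class WarmupConfigBenchmark {

    private long seed = 42L;

    @Benchmark
    public long nextRandom() {
        // xorshift step: cheap enough to show per-operation timing in nanoseconds
        seed ^= seed << 13;
        seed ^= seed >>> 7;
        seed ^= seed << 17;
        return seed;
    }
}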


🛡️ Best Practices

  • Benchmark numbers are most meaningful relative to a baseline, so always compare two or more versions of the same logic.

  • Avoid allocating new objects inside the benchmark method unless allocation is exactly what you want to measure (see the sketch after this list).

  • Use Blackhole to prevent dead code elimination:

    // requires: import org.openjdk.jmh.infra.Blackhole;
    @Benchmark
    public void testBlackhole(Blackhole blackhole) {
        int result = someMethod(); // someMethod() stands in for the code under test
        blackhole.consume(result); // consuming the value keeps the JIT from discarding it
    }
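
A slightly fuller sketch combining both points: the input data is allocated once in @Setup rather than inside the measured method, and the result is handed to the Blackhole (the hashing loop is only a placeholder workload):

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class BestPracticeBenchmark {

    private byte[] payload;

    @Setup
    public void setup() {
        payload = new byte[1024]; // allocate once, outside the measured method
        for (int i = 0; i < payload.length; i++) {
            payload[i] = (byte) i;
        }
    }

    @Benchmark
    public void hashPayload(Blackhole blackhole) {
        int hash = 1;
        for (byte b : payload) {
            hash = 31 * hash + b;
        }
        blackhole.consume(hash); // keeps the JIT from discarding the computation
    }
}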
    

📌 Real-world Use Case

Suppose you're optimizing a sorting algorithm and want to compare the JDK's built-in Arrays.sort against your custom implementation; a sketch of such a comparison follows the list below.

JMH allows you to:

  • Benchmark both under the same controlled environment

  • Compare throughput and latency

  • Understand performance tradeoffs under different data volumes
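
Here is that sketch, with a simple insertion sort standing in for the custom implementation and @Param controlling the data volume (class name, sizes, and the seed are illustrative):

import org.openjdk.jmh.annotations.*;

import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Thread)
public class SortComparisonBenchmark {

    @Param({"1000", "10000"})
    private int size;

    private int[] source;

    @Setup(Level.Iteration)
    public void setup() {
        source = new Random(42).ints(size).toArray();
    }

    @Benchmark
    public int[] jdkSort() {
        int[] copy = Arrays.copyOf(source, source.length); // copy so every invocation sorts unsorted data
        Arrays.sort(copy);
        return copy;
    }

    @Benchmark
    public int[] insertionSort() {
        int[] copy = Arrays.copyOf(source, source.length);
        for (int i = 1; i < copy.length; i++) {
            int key = copy[i];
            int j = i - 1;
            while (j >= 0 && copy[j] > key) {
                copy[j + 1] = copy[j];
                j--;
            }
            copy[j + 1] = key;
        }
        return copy;
    }
}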


🧾 Summary

JMH is a powerful and reliable tool to benchmark Java code precisely. If you're writing performance-critical applications or simply curious about which implementation performs better, JMH is a must-have in your toolbox.

📣 Final Thoughts

Accurate benchmarking is not optional — it's essential when optimizing performance. JMH gives you the right tools to do it correctly. Mastering JMH helps you make data-driven decisions rather than relying on guesswork.

If you found this article helpful, feel free to share it or leave a comment below. Happy benchmarking! 🚀

