Benchmarking with Hyperfine (with examples)

Hyperfine is a command-line benchmarking tool that measures how long your commands and programs take to run. By executing each command multiple times and reporting detailed statistics, Hyperfine helps you identify bottlenecks and verify optimizations. In this article, we will explore different use cases of Hyperfine through six examples.

1. Basic benchmark

The basic benchmark command hyperfine 'make' allows you to measure the time it takes to execute the make command. This is useful to get a baseline measurement of the execution time for a specific command.

Code:

hyperfine 'make'

Motivation: By benchmarking the execution time of a command, you can determine whether changes to the system or code have a significant impact on its performance. This helps you pinpoint performance regressions or measure the effectiveness of optimization efforts.

Explanation:

  • hyperfine: The Hyperfine command itself.
  • 'make': The command being benchmarked.

Example Output:

Benchmark #1: make
  Time (mean ± σ):     2.045 s ±  0.021 s    [User: 1.972 s, System: 0.720 s]
  Range (min … max):   2.023 s …  2.105 s    10 runs

The output shows the mean execution time, standard deviation (σ), and the breakdown into user and system time. It also displays the range of execution times for the 10 benchmark runs.
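The statistics in this output are straightforward to reproduce. A minimal Python sketch (using made-up run times, not hyperfine's actual data) showing how the mean, σ, and range are derived from individual run times:

```python
import statistics

# Hypothetical wall-clock times (in seconds) for 10 benchmark runs
times = [2.023, 2.031, 2.040, 2.044, 2.048, 2.050, 2.052, 2.055, 2.057, 2.105]

mean = statistics.mean(times)    # the "Time (mean ...)" value
sigma = statistics.stdev(times)  # the "± σ" value (sample standard deviation)
lo, hi = min(times), max(times)  # the "Range (min ... max)" values

print(f"Time (mean ± σ): {mean:.3f} s ± {sigma:.3f} s")
print(f"Range (min … max): {lo:.3f} s … {hi:.3f} s    {len(times)} runs")
```

A small σ relative to the mean indicates stable, repeatable measurements; a large one suggests interference (background load, caching effects) worth investigating before trusting the numbers.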

2. Comparative benchmark

With the comparative benchmark command hyperfine 'make target1' 'make target2', you can compare the execution times of two different commands or targets.

Code:

hyperfine 'make target1' 'make target2'

Motivation: Comparative benchmarking is useful for assessing the performance differences between different options, configurations, or target builds. It helps you choose the most efficient option based on time taken to execute.

Explanation:

  • hyperfine: The Hyperfine command itself.
  • 'make target1': The first command being benchmarked.
  • 'make target2': The second command being benchmarked.

Example Output:

Benchmark #1: make target1
  Time (mean ± σ):      3.250 s ±  0.092 s    [User: 2.780 s, System: 0.930 s]
  Range (min … max):    3.140 s …  3.409 s    10 runs

Benchmark #2: make target2
  Time (mean ± σ):      4.320 s ±  0.128 s    [User: 3.810 s, System: 1.330 s]
  Range (min … max):    4.155 s …  4.503 s    10 runs

Summary
  'make target1' ran
    1.33 ± 0.05 times faster than 'make target2'

The output provides the mean execution time, standard deviation (σ), and the breakdown into user and system time for each benchmarked command. It also includes a summary that compares the speeds of the two commands.
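The "times faster" figure in the summary is simply the ratio of the two mean times, and its uncertainty follows from standard error propagation for a quotient. A quick sketch using the illustrative numbers from the output above:

```python
import math

# Illustrative means and standard deviations from the two benchmarks above
mean1, sigma1 = 3.250, 0.092  # 'make target1'
mean2, sigma2 = 4.320, 0.128  # 'make target2'

# "X times faster" is the slower mean divided by the faster mean
ratio = mean2 / mean1

# Uncertainty of the ratio via error propagation for a quotient
ratio_sigma = ratio * math.sqrt((sigma1 / mean1) ** 2 + (sigma2 / mean2) ** 2)

print(f"'make target1' ran {ratio:.2f} ± {ratio_sigma:.2f} times faster than 'make target2'")
```

For these numbers the ratio works out to about 1.33 ± 0.05; the uncertainty grows when either benchmark has a large σ relative to its mean.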

3. Changing the minimum number of benchmarking runs

The --min-runs option of Hyperfine allows you to specify the minimum number of benchmarking runs. By default, Hyperfine performs at least 10 runs, but you can raise or lower this floor based on your requirements.

Code:

hyperfine --min-runs 7 'make'

Motivation: Raising the minimum number of runs gives a more reliable measure of a command's execution time when it has high variability; lowering it (as here, to 7) keeps benchmarks of long-running commands manageable.

Explanation:

  • hyperfine: The Hyperfine command itself.
  • --min-runs 7: The minimum number of benchmarking runs to perform.
  • 'make': The command being benchmarked.

Example Output:

Benchmark #1: make
  Time (mean ± σ):     2.067 s ±  0.018 s    [User: 1.991 s, System: 0.723 s]
  Range (min … max):   2.044 s …  2.102 s    7 runs

The output shows the mean execution time, standard deviation (σ), and the breakdown into user and system time. It also displays the range of execution times for the 7 benchmark runs.

4. Performing benchmark with warmup

By specifying the --warmup option followed by the number of warmup runs, Hyperfine allows you to perform a warmup before the actual benchmarking. Warmup runs help to reduce the impact of one-off costs, such as cache misses.

Code:

hyperfine --warmup 5 'make'

Motivation: Warmup runs are beneficial for more accurate benchmarking as they allow the system to adjust and load necessary resources before the actual benchmarking begins. This ensures that the subsequent benchmark runs are not influenced by initial setup costs.

Explanation:

  • hyperfine: The Hyperfine command itself.
  • --warmup 5: The number of warmup runs to perform.
  • 'make': The command being benchmarked.

Example Output:

Benchmark #1: make
  Time (mean ± σ):     2.063 s ±  0.033 s    [User: 1.989 s, System: 0.720 s]
  Range (min … max):   2.019 s …  2.133 s    10 runs (5 warmup)

The output shows the mean execution time, standard deviation (σ), and the breakdown into user and system time. The 5 warmup runs are executed before the 10 measured runs and are excluded from the statistics.

5. Running a command before each benchmark run

The --prepare option allows you to specify a command that will be run before each benchmark run. This is useful for performing any necessary setup or cleanup tasks before the actual command is measured.

Code:

hyperfine --prepare 'make clean' 'make'

Motivation: Running a preparation command before each benchmark run helps ensure a consistent environment for each iteration. It can be useful to clean up any artifacts from previous runs or to set up the system for accurate benchmarking.

Explanation:

  • hyperfine: The Hyperfine command itself.
  • --prepare 'make clean': The preparation command to run before each benchmark run.
  • 'make': The command being benchmarked.

Example Output:

Benchmark #1: make
  Time (mean ± σ):     2.052 s ±  0.019 s    [User: 1.980 s, System: 0.723 s]
  Range (min … max):   2.029 s …  2.084 s    10 runs

The output shows the mean execution time, standard deviation (σ), and the breakdown into user and system time for the benchmarked command. Note that the time spent in the preparation command ('make clean') is not included in the measurements.

6. Benchmark with changing parameter

By using the --parameter-scan option, you can perform a benchmark where a single parameter changes for each run. This is useful for determining how different parameter values affect the execution time of a command.

Code:

hyperfine --prepare 'make clean' --parameter-scan num_threads 1 10 'make -j {num_threads}'

Motivation: Benchmarking with changing parameters allows you to understand how different configurations or settings affect the performance of a command. This can help you optimize your code based on different thread or parameter values.

Explanation:

  • hyperfine: The Hyperfine command itself.
  • --prepare 'make clean': The preparation command to run before each benchmark run.
  • --parameter-scan num_threads 1 10: The parameter num_threads to scan with values ranging from 1 to 10.
  • 'make -j {num_threads}': The command being benchmarked, where {num_threads} is replaced with the current parameter value.

Example Output:

Benchmark #1: make -j 1
  Time (mean ± σ):     2.000 s ±  0.018 s    [User: 1.926 s, System: 0.710 s]
  Range (min … max):   1.978 s …  2.047 s    10 runs

Benchmark #2: make -j 2
  Time (mean ± σ):     1.688 s ±  0.029 s    [User: 2.482 s, System: 1.303 s]
  Range (min … max):   1.619 s …  1.733 s    10 runs

...
...

Benchmark #9: make -j 9
  Time (mean ± σ):     1.327 s ±  0.035 s    [User: 3.020 s, System: 1.436 s]
  Range (min … max):   1.273 s …  1.397 s    10 runs

Benchmark #10: make -j 10
  Time (mean ± σ):     1.335 s ±  0.024 s    [User: 3.061 s, System: 1.431 s]
  Range (min … max):   1.312 s …  1.380 s    10 runs

Summary
  'make -j 6' ran
    1.01 ± 0.03 times faster than 'make -j 9'
      ...
    1.28 ± 0.03 times faster than 'make -j 2'
    1.52 ± 0.03 times faster than 'make -j 1'

The output shows the mean execution time, standard deviation (σ), and the breakdown into user and system time for each benchmarked command with varying parameter values. It also includes a summary that compares the speeds of each command.
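For deeper analysis of a parameter scan, hyperfine can also write results to a file with its --export-json option. A minimal sketch of ranking such results, assuming the standard export fields (command, mean, stddev); the inlined JSON here is a stand-in for a real export file:

```python
import json

# Inlined stand-in for a file produced by: hyperfine --export-json results.json ...
raw = """
{"results": [
  {"command": "make -j 1", "mean": 2.000, "stddev": 0.018},
  {"command": "make -j 2", "mean": 1.688, "stddev": 0.029}
]}
"""

data = json.loads(raw)

# Sort benchmarks by mean time, fastest first
ranked = sorted(data["results"], key=lambda r: r["mean"])
fastest = ranked[0]
for r in ranked:
    print(f"{r['command']}: {r['mean']:.3f} s ({r['mean'] / fastest['mean']:.2f}x)")
```

This makes it easy to feed scan results into plotting or regression tooling instead of reading the summary by eye.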

With these six examples, you can now harness the power of Hyperfine to benchmark your commands and optimize them based on accurate performance measurements. Whether you want to measure a single command or compare different options, Hyperfine provides the tools you need to make data-driven decisions.
