Comprehensive Guide to Using the 'hyperfine' Command-Line Tool (with examples)
Hyperfine is a command-line benchmarking tool designed for simplicity and ease of use while providing accurate results. It allows users to measure and compare the execution time of commands and programs. This tool is particularly beneficial when you need to evaluate performance metrics, check for consistency in execution times, or compare the efficiency of different implementation strategies.
Use Case 1: Run a Basic Benchmark with at Least 10 Runs
Code:
hyperfine 'make'
Motivation:
Running a basic benchmark is often the first step when you want to evaluate the performance of a command such as make, which is used to compile programs. By default, hyperfine performs at least 10 runs, giving the results statistical weight and providing a more reliable measure of how the command performs under consistent conditions.
Explanation:
hyperfine is the command-line tool being invoked to start the benchmarking process.
'make' is the command being tested. The single quotes ensure the shell passes the command to hyperfine as a single argument, which hyperfine then executes multiple times.
Example Output:
Benchmark #1: make
Time (mean ± σ): 30.4 ms ± 2.1 ms [User: 28.7 ms, System: 1.6 ms]
Range (min … max): 28.1 ms … 35.6 ms
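hyperfine chooses the number of runs adaptively, with a minimum of 10 by default. If you want an exact count instead, the --runs (-r) option requests it; the sketch below uses 'sleep 0.1' as a stand-in for whatever command you want to benchmark:

```shell
# Request exactly 20 timed runs rather than letting hyperfine decide;
# 'sleep 0.1' is a placeholder for the command under test.
hyperfine --runs 20 'sleep 0.1'
```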
Use Case 2: Run a Comparative Benchmark
Code:
hyperfine 'make target1' 'make target2'
Motivation:
Comparative benchmarking is useful when you want to measure the relative efficiency of two commands, such as building different targets within a Makefile. This can help identify which build targets are more optimized and where performance improvements might be made.
Explanation:
hyperfine initiates the benchmarking process for each command provided.
'make target1' is the first command to be benchmarked. It represents building a specific target in the Makefile.
'make target2' is the second command for comparison, allowing for side-by-side performance evaluation.
Example Output:
Benchmark #1: make target1
Time (mean ± σ): 40.2 ms ± 3.5 ms [User: 38.4 ms, System: 1.8 ms]
Range (min … max): 36.7 ms … 45.6 ms
Benchmark #2: make target2
Time (mean ± σ): 35.8 ms ± 2.3 ms [User: 34.0 ms, System: 1.8 ms]
Range (min … max): 33.5 ms … 40.1 ms
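When given multiple commands, hyperfine also ends the run with a relative-speed summary. To keep the comparison around, you can export it to a file; the sketch below assumes the arbitrary file name results.md and uses sleep commands as stand-ins for the two make targets:

```shell
# Export the comparison as a Markdown table (results.md is an arbitrary name);
# the two sleep commands stand in for 'make target1' and 'make target2'.
hyperfine --export-markdown results.md 'sleep 0.2' 'sleep 0.1'
cat results.md
```

hyperfine provides similar exporters for other formats, such as --export-json and --export-csv.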
Use Case 3: Change Minimum Number of Benchmarking Runs
Code:
hyperfine --min-runs 7 'make'
Motivation:
Adjusting the minimum number of runs is helpful when you wish to balance between statistical accuracy and time spent collecting data. Reducing the number of runs can save time in cases where computation or execution is resource-intensive or when you’re doing preliminary checks.
Explanation:
--min-runs 7 specifies that hyperfine should perform at least 7 runs of the given command, ensuring a minimum dataset before statistical analysis.
'make' remains the command being benchmarked.
Example Output:
Benchmark #1: make
Time (mean ± σ): 28.3 ms ± 1.5 ms [User: 27.0 ms, System: 1.3 ms]
Range (min … max): 25.9 ms … 30.4 ms
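--min-runs has a natural counterpart, --max-runs, so the adaptive run count can be bounded on both sides. A minimal sketch, again with 'sleep 0.1' standing in for the real command:

```shell
# Perform at least 5 and at most 20 timed runs
hyperfine --min-runs 5 --max-runs 20 'sleep 0.1'
```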
Use Case 4: Perform Benchmark with Warmup
Code:
hyperfine --warmup 5 'make'
Motivation:
Performing warmup runs can be particularly useful for commands that might experience initial latency due to disk caching or other startup processes. By executing a number of warmup runs, you ensure that the performance measurements are taken once these initial conditions have stabilized.
Explanation:
--warmup 5 tells hyperfine to perform 5 preliminary executions of the command. These runs are not measured but allow the system to stabilize, for example by warming up caches.
'make' is the target command for the benchmark.
Example Output:
Benchmark #1: make
Time (mean ± σ): 29.6 ms ± 2.0 ms [User: 28.3 ms, System: 1.3 ms]
Range (min … max): 26.7 ms … 33.2 ms
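Warmup is most useful for I/O-bound commands whose first run pays disk-cache costs. A small sketch, using 'du -sh .' as a hypothetical cache-sensitive command:

```shell
# Run the command 3 times untimed so the filesystem cache is warm,
# then take the actual measurements
hyperfine --warmup 3 'du -sh .'
```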
Use Case 5: Run a Command Before Each Benchmark Run
Code:
hyperfine --prepare 'make clean' 'make'
Motivation:
Running a preparation command ensures that the environment is in a consistent state before each measurement. This is particularly vital when actions like clearing cache or ensuring dependencies are up-to-date are needed to prevent skewed results from affecting the command.
Explanation:
--prepare 'make clean' runs make clean before each timed run; make clean typically removes any files generated during previous builds, ensuring each build starts from the same baseline.
'make' is the command being benchmarked.
Example Output:
Benchmark #1: make
Time (mean ± σ): 30.7 ms ± 1.8 ms [User: 29.0 ms, System: 1.7 ms]
Range (min … max): 28.4 ms … 34.5 ms
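Recent hyperfine versions also offer --setup and --cleanup, which run once before the first and once after the last run of a benchmark, in contrast to --prepare, which runs before every timed run. A sketch, where input.dat is a hypothetical input file the benchmarked command reads:

```shell
# --setup runs once before all runs, --cleanup once after the last;
# input.dat is a made-up input file for illustration.
hyperfine --setup 'touch input.dat' --cleanup 'rm -f input.dat' 'wc -c input.dat'
```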
Use Case 6: Run a Benchmark with a Changing Parameter
Code:
hyperfine --prepare 'make clean' --parameter-scan num_threads 1 10 'make -j {num_threads}'
Motivation:
Scanning a parameter lets you evaluate how performance scales when altering a particular variable, such as the number of threads or processes. In this case, you are seeing how compilation time changes with different levels of parallelism (the -j option of make). This is invaluable for tuning a build system to the hardware capabilities at hand.
Explanation:
--prepare 'make clean' ensures each benchmark run starts from a clean state.
--parameter-scan num_threads 1 10 defines a variable, num_threads, whose value iterates from 1 to 10 across benchmarks.
'make -j {num_threads}' is the command in which the placeholder {num_threads} is replaced with the current parameter value for each benchmark.
Example Output:
Benchmark #1: make -j 1
Time (mean ± σ): 45.3 ms ± 2.6 ms
Benchmark #2: make -j 2
Time (mean ± σ): 36.7 ms ± 1.9 ms
...
Benchmark #10: make -j 10
Time (mean ± σ): 25.8 ms ± 1.7 ms
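--parameter-scan iterates over a numeric range; when the values are not numeric (compiler names, flags, file paths), --parameter-list takes an explicit comma-separated list instead. A sketch using sleep durations as placeholder values:

```shell
# Substitute each listed value for {delay} and benchmark the result;
# the durations stand in for e.g. a list of compilers or input files.
hyperfine --parameter-list delay 0.1,0.2 'sleep {delay}'
```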
Conclusion:
The hyperfine command-line tool provides an efficient and versatile way to measure and compare the performance of commands and programs. By allowing adjustments such as the number of runs, use of warmup periods, and pre-benchmark setup, hyperfine ensures that your benchmarks are not only accurate but representative of the command’s typical execution conditions. These use cases demonstrate how hyperfine can assist in fine-tuning software performance and guiding optimization efforts.