How to Use the Command 'cargo bench' (with Examples)
The command cargo bench is an essential tool in the Rust ecosystem: it compiles and executes benchmarks for Rust projects. Benchmarks help developers evaluate performance and optimize their code by providing insight into execution times and performance characteristics. The command accepts various options to tailor the benchmarking process to different needs.
Execute all Benchmarks of a Package
Code:
cargo bench
Motivation:
Running all benchmarks for a package is a fundamental step when you’re interested in getting an overall performance snapshot of your Rust project. By executing all the defined benchmarks, you can identify bottlenecks and areas that require optimization, aiding in performance tuning across your entire codebase.
Explanation:
cargo bench: This basic command compiles and runs all benchmarks in the current Rust package.
Example Output:
Running target/release/deps/example-benchmark
test bench_function_1 ... bench: 100,000 ns/iter (+/- 5,000)
test bench_function_2 ... bench: 5,000 ns/iter (+/- 200)
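The output above comes from Cargo's benchmark harness. As a rough illustration of what such a benchmark measures, here is a minimal hand-rolled sketch of the kind a bench target with harness = false in Cargo.toml could use; the fibonacci function and the iteration count are hypothetical:

```rust
use std::hint::black_box;
use std::time::Instant;

// A hypothetical function under measurement.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let iters: u128 = 1_000;
    let start = Instant::now();
    for _ in 0..iters {
        // black_box keeps the optimizer from eliding the call.
        black_box(fibonacci(black_box(20)));
    }
    let per_iter = start.elapsed().as_nanos() / iters;
    println!("fibonacci(20): {} ns/iter", per_iter);
}
```

Real harnesses additionally run warm-up iterations and report statistical variance, which is where the "(+/- …)" figures in the output come from.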
Don’t Stop When a Benchmark Fails
Code:
cargo bench --no-fail-fast
Motivation:
In a complex Rust project, you may have many benchmarks running consecutively. The --no-fail-fast option ensures that the benchmark suite continues to execute even if one benchmark encounters an error or failure. This is particularly useful during development, when identifying all potential issues in one run is more efficient than addressing them one at a time.
Explanation:
--no-fail-fast: Instructs Cargo to continue running benchmarks regardless of any failures, allowing you to gather more comprehensive benchmarking data in a single run.
Example Output:
Running target/release/deps/example-benchmark
test bench_function_1 ... FAILED
test bench_function_2 ... bench: 5,000 ns/iter (+/- 200)
test bench_function_3 ... bench: 20,000 ns/iter (+/- 1,500)
Compile, but Don’t Run Benchmarks
Code:
cargo bench --no-run
Motivation:
Sometimes you may wish to compile all the benchmarks without actually executing them, for example to verify that they are written correctly and free of compile errors. This is useful in development environments where compile times are long and you want to address compilation issues before actually running the performance tests.
Explanation:
--no-run: Stops Cargo after the benchmarks have been compiled, skipping their execution.
Example Output:
Compiling example v0.1.0 (/path/to/package)
Finished bench [optimized] target(s) in 0.50s
Benchmark the Specified Benchmark
Code:
cargo bench --bench benchmark
Motivation:
Focusing on a specific benchmark is necessary when you are zeroing in on optimizing or verifying the performance of a particular function or module within your project. Isolating this benchmark allows you to iterate quickly and observe the impacts of any code changes without running unnecessary tests.
Explanation:
--bench benchmark: Runs only the bench target named benchmark, allowing tighter control over what you wish to evaluate.
Example Output:
Running target/release/deps/benchmark
test bench_specific_function ... bench: 4,800 ns/iter (+/- 300)
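The name passed to --bench matches a bench target. Cargo discovers files under the benches directory automatically, and a target can also be declared explicitly in Cargo.toml; the name and path below are hypothetical:

```toml
[[bench]]
name = "benchmark"
path = "benches/benchmark.rs"
```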
Benchmark with the Given Profile
Code:
cargo bench --profile profile_name
Motivation:
Different profiles allow developers to customize how the benchmarks are compiled and run, depending on whether you want to simulate release conditions, profiling, or another setup. This is helpful when your benchmarks might rely on different compilation settings or flags which affect performance characteristics.
Explanation:
--profile profile_name: Specifies a non-default compile profile for running benchmarks, which can be defined in your Cargo.toml configuration file.
Example Output:
Running target/profile_name/deps/example-benchmark
test bench_function_custom_profile ... bench: 10,000 ns/iter (+/- 1,000)
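Custom profiles are declared in Cargo.toml and must inherit from a built-in profile. A sketch matching the hypothetical profile_name above, assuming you want release optimizations plus debug info:

```toml
[profile.profile_name]
inherits = "release"
debug = true    # keep debug info, e.g. for use with profilers
```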
Benchmark All Example Targets
Code:
cargo bench --examples
Motivation:
In Rust, example targets demonstrate how a library is used and can also exercise performance in more realistic, end-to-end scenarios. By benchmarking all examples, you gain insight into the real-world performance implications of your code, as opposed to purely synthetic benchmarks.
Explanation:
--examples: Runs benchmarks on all example targets in the package, which are typically located in the examples directory.
Example Output:
Running target/release/examples/example1
test example_bench_1 ... bench: 25,000 ns/iter (+/- 2,500)
Running target/release/examples/example2
test example_bench_2 ... bench: 15,000 ns/iter (+/- 1,000)
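Example targets are auto-discovered from the examples directory, or declared explicitly in Cargo.toml; the name and path below are hypothetical:

```toml
[[example]]
name = "example1"
path = "examples/example1.rs"
```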
Benchmark All Binary Targets
Code:
cargo bench --bins
Motivation:
Binary targets in a Rust package often represent standalone applications or tools within the workspace. By benchmarking these binaries, you can assess the performance impact of the application-level functionality and not just the library code.
Explanation:
--bins: Executes benchmarks on all binary targets defined in your package, typically auto-discovered from src/bin or specified under [[bin]] sections in Cargo.toml.
Example Output:
Running target/release/deps/bin1
test bin_bench_function_1 ... bench: 40,000 ns/iter (+/- 3,000)
Running target/release/deps/bin2
test bin_bench_function_2 ... bench: 30,000 ns/iter (+/- 2,000)
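Binary targets are auto-discovered from src/main.rs and files under src/bin, or declared explicitly with a [[bin]] table; the name and path below are hypothetical:

```toml
[[bin]]
name = "bin1"
path = "src/bin/bin1.rs"
```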
Benchmark the Package’s Library
Code:
cargo bench --lib
Motivation:
Benchmarking the library part of a package is crucial when your library is designed to be reused across multiple projects, ensuring that your core algorithms and functions meet performance requirements across different workloads and uses.
Explanation:
--lib: Targets the package's library, evaluating its performance separately from binaries or examples.
Example Output:
Running target/release/deps/library-benchmark
test library_bench_1 ... bench: 12,000 ns/iter (+/- 800)
Conclusion:
Utilizing cargo bench in these various configurations equips Rust developers with detailed insight into their code's performance, enabling them to craft efficient, fast, and reliable software. Each use case, from running comprehensive benchmarks to targeting specific profiles or examples, is valuable during development and beyond, helping projects meet stringent performance requirements.