Initial performance test infrastructure #3110
Merged: LarryOsterman merged 18 commits into Azure:main from LarryOsterman:larryo/add_perf_tests on Oct 10, 2025.
Commits (18):
* be02767 Checkpoint (LarryOsterman)
* fde683f Test structure works; parallel doesn't (LarryOsterman)
* 8d58f99 Added progress tracker (LarryOsterman)
* b168643 Added KeyVault test and aligned tracker output with that of C++ (LarryOsterman)
* 9e603ac Cleaned up test creation logic (LarryOsterman)
* fd7c7cc Renamed perf structures; cleaned up perf traces; added initial storag… (LarryOsterman)
* 90fb660 Cleaned up warnings (LarryOsterman)
* 51bbcf1 Don't fail tests if no test is selected (LarryOsterman)
* 84a6500 Start hooking test context into perf logic (LarryOsterman)
* 113b883 Generate output json file with perf output (LarryOsterman)
* e5b24c0 Updated tests (LarryOsterman)
* 13e5e2d Updates to get perf automation to work (LarryOsterman)
* 830cabc Removed specific versioned packages (LarryOsterman)
* cf5e9fe Cleaned up some test declaration logic; added start to perf autoring … (LarryOsterman)
* 008e340 Removed commented out bicep logic (LarryOsterman)
* 9266de9 Removed commented out test logic (LarryOsterman)
* 1d2dd32 Test fixes (LarryOsterman)
* 26b347e PR feedback (LarryOsterman)
# Requirements for performance tests

Each performance test consists of three phases (see the sketch after this list):

1) Warmup
1) Test operation
1) Cleanup
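A minimal sketch of how a harness might drive these phases around a single measured operation; every name below is illustrative, not the actual harness API:

```rust
use std::time::{Duration, Instant};

/// Illustrative only: run one async operation through warmup and the timed
/// test phase, returning the number of completed operations. Cleanup is left
/// to the test itself once the measured loop finishes.
async fn drive_test<F, Fut>(mut operation: F, warmup: Duration, duration: Duration) -> u64
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = ()>,
{
    // Warmup: exercise the operation without recording results.
    let warmup_end = Instant::now() + warmup;
    while Instant::now() < warmup_end {
        operation().await;
    }

    // Test operation: run for the configured duration, counting completions.
    let mut completed = 0u64;
    let test_end = Instant::now() + duration;
    while Instant::now() < test_end {
        operation().await;
        completed += 1;
    }

    completed
}
```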
## Common test inputs

The following inputs are common to every performance test (a sketch of how they might be grouped follows the list):

* Duration of the test in seconds
* Number of iterations of the main test loop
* Parallel - number of operations to execute in parallel
* Disable test cleanup
* Test Proxy servers
* Results file - location to write test outputs
* Warmup - duration of the warmup in seconds
* TLS
  * Allow untrusted TLS certificates
* Advanced options
  * Print job statistics (?)
  * Track latency and print per-operation latency statistics
  * Target throughput (operations/second) (?)
* Language specific options
  * Max I/O completion threads
  * Minimum number of asynchronous I/O threads in the thread pool
  * Minimum number of worker threads the thread pool creates on demand
  * Sync - run a synchronous version of the test
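As an illustration only, these common inputs could be gathered into a single options struct similar to the sketch below; the struct and field names are assumptions, not the harness's actual types:

```rust
/// Illustrative grouping of the common test inputs listed above.
pub struct CommonTestInputs {
    /// Duration of the test in seconds.
    pub duration_seconds: u64,
    /// Number of iterations of the main test loop.
    pub iterations: u32,
    /// Number of operations to execute in parallel.
    pub parallel: u32,
    /// Disable test cleanup.
    pub no_cleanup: bool,
    /// Test Proxy servers to route requests through.
    pub test_proxies: Vec<String>,
    /// Location to write test outputs.
    pub results_file: std::path::PathBuf,
    /// Duration of the warmup in seconds.
    pub warmup_seconds: u64,
    /// Allow untrusted TLS certificates.
    pub allow_untrusted_tls: bool,
    /// Track latency and print per-operation latency statistics.
    pub latency: bool,
    /// Target throughput (operations/second).
    pub target_throughput: Option<f64>,
    /// Run a synchronous version of the test.
    pub sync: bool,
}
```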
## Expected test outputs

Each test is expected to generate the following elements (see the sketch after this list):

* Package versions - a set of packages tested and their versions
* Operations per second - double precision float
* Standard output of the test
* Standard error of the test
* Exception - text of any exceptions thrown during the test
* Average CPU use during the test - double precision float
* Average memory use during the test - double precision float
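One way to picture these outputs is as a per-test results record like the sketch below; the names and types are assumptions, and the harness serializes its own JSON format:

```rust
use std::collections::HashMap;

/// Illustrative shape for the expected test outputs listed above.
pub struct TestResults {
    /// Packages tested, mapped to their versions.
    pub package_versions: HashMap<String, String>,
    /// Operations per second.
    pub operations_per_second: f64,
    /// Standard output captured from the test.
    pub standard_output: String,
    /// Standard error captured from the test.
    pub standard_error: String,
    /// Text of any exceptions thrown during the test.
    pub exception: Option<String>,
    /// Average CPU use during the test.
    pub average_cpu_use: f64,
    /// Average memory use during the test.
    pub average_memory_use: f64,
}
```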
## Perf Test Harness

Each performance test defines a `get_metadata()` function which returns a `TestMetadata` structure.

A `TestMetadata` structure contains the following fields:

```rust
pub struct TestMetadata {
    /// The name of the test.
    name: &'static str,
    /// A short description of the test.
    description: &'static str,
    /// The options supported by the test.
    options: &'static [&'static TestOption],
}
```

A `TestOption` defines an option for the test; a test's options are merged with the common test inputs to define the command line for the performance test.

```rust
pub struct TestOption {
    /// The name of the test option. This is used as the key in the `TestArguments` map.
    name: &'static str,

    /// Long command line activator for the option (for example, `--vault-url`).
    long_activator: &'static str,

    /// Short command line activator for the option (for example, `-u`).
    short_activator: &'static str,

    /// Display message - displayed in the --help message.
    display_message: &'static str,

    /// Expected argument count.
    expected_args_len: u16,

    /// Required.
    mandatory: bool,

    /// Argument value is sensitive and should be sanitized.
    sensitive: bool,
}
```
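For illustration, a test's `get_metadata()` might return a value like the following; the `get_secret` name and `vault_url` option are example values only, not part of the harness:

```rust
// Illustrative only: metadata for a hypothetical "get_secret" test.
const GET_SECRET_OPTIONS: &[&TestOption] = &[&TestOption {
    name: "vault_url",
    long_activator: "vault-url",
    short_activator: "u",
    display_message: "The URL of the Key Vault to use in the test",
    expected_args_len: 1,
    mandatory: true,
    sensitive: false,
}];

fn get_metadata() -> TestMetadata {
    TestMetadata {
        name: "get_secret",
        description: "Get a secret from Key Vault",
        options: GET_SECRET_OPTIONS,
    }
}
```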
# Performance Tests

The Azure SDK defines a standardized set of performance tests which use a test framework defined by the [PerfAutomation tool](https://github.com/Azure/azure-sdk-tools/tree/main/tools/perf-automation).

Performance tests are defined in a "perf" directory under the package root.

By convention, all performance tests are named "perf" and are invoked via:

```bash
cargo test --package <package name> --test perf -- <perf test name> <perf test arguments>
```

where `package name` is the name of the Rust package, `perf test name` is the name of the test you want to run, and `perf test arguments` are the arguments to that test.
Each performance test has the following standardized parameters:

* `--iterations <count>` - the number of iterations to run the test for. Default: 1
* `--sync` - run only synchronous tests (ignored).
* `--parallel <count>` - the number of concurrent tasks to use when running each test. Default: 1
* `--no-progress` - disable the once-per-second progress report.
* `--duration <seconds>` - the duration of each test in seconds. Default: 30
* `--warmup <seconds>` - the duration of the warmup period in seconds. Default: 5
* `--test-results <file>` - the file to write test results to. Default: tests/results.json
* `--help` - show help.

Each test also has its own set of parameters which are specific to that test.

## Test authoring

Performance tests have three phases:

1) Setup - establish any resources needed to run the test.
1) Run - actually perform the test.
1) Cleanup - clean up any resources used by the test.

Each phase is defined by a function on the `PerfTest` trait.
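The exact trait definition lives in the perf framework; for orientation only, its shape is roughly the sketch below (the method signatures here are assumptions and may differ from the real definition):

```rust
use async_trait::async_trait;

/// Sketch of the three-phase test trait; the actual `PerfTest` trait in the
/// framework may carry additional context parameters.
#[async_trait]
pub trait PerfTest: Send + Sync {
    /// Setup: establish any resources needed to run the test.
    async fn setup(&self) -> azure_core::Result<()>;
    /// Run: perform one iteration of the operation being measured.
    async fn run(&self) -> azure_core::Result<()>;
    /// Cleanup: remove any resources the test created.
    async fn cleanup(&self) -> azure_core::Result<()>;
}
```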
### Test Metadata

Tests are defined by an instance of a `PerfTestMetadata` structure, which defines the name of the test and other information about the test.

A perf test has a name (`get_secret`, `list_blobs`, `upload_blob`, etc.), a short description, a set of test options, and a pointer to a function which returns an instance of the test.

Each perf test also has a set of command line options that are specific to the individual test; these are defined by a `PerfTestOption` structure, which contains fields such as the help text for the option and its command line activators.

Here is an example of test metadata for a performance test:

```rust
PerfTestMetadata {
    name: "get_secret",
    description: "Get a secret from Key Vault",
    options: vec![PerfTestOption {
        name: "vault_url",
        display_message: "The URL of the Key Vault to use in the test",
        mandatory: true,
        short_activator: 'u',
        long_activator: "vault-url",
        expected_args_len: 1,
        ..Default::default()
    }],
    create_test: Self::create_new_test,
}
```

This defines a test named `get_secret` with a single required `vault_url` option.

For this test, the `create_new_test` function looks like:

```rust
fn create_new_test(runner: PerfRunner) -> CreatePerfTestReturn {
    async move {
        let vault_url_ref: Option<&String> = runner.try_get_test_arg("vault_url")?;
        let vault_url = vault_url_ref
            .expect("vault_url argument is mandatory")
            .clone();
        Ok(Box::new(GetSecrets {
            vault_url,
            random_key_name: OnceLock::new(),
            client: OnceLock::new(),
        }) as Box<dyn PerfTest>)
    }
    .boxed()
}
```
### Declaring Tests

The process of authoring tests starts with the `Cargo.toml` file for your package.

Add the following to the `Cargo.toml` file:

```toml
[[test]]
name = "perf"
path = "perf/get_secret.rs"
harness = false
```

This declares a test named `perf` (which is required for the perf automation tests) located in the file `get_secret.rs` under the package's `perf` directory. It also declares the test as *not* requiring the standard test harness - that's because the test defines its own test harness.

The test file should contain the following:

```rust
#[tokio::main]
async fn main() -> azure_core::Result<()> {
    let runner = PerfRunner::new(
        env!("CARGO_MANIFEST_DIR"),
        file!(),
        vec![GetSecrets::test_metadata()],
    )?;

    runner.run().await?;

    Ok(())
}
```

This declares a perf test runner with the defined test metadata and runs the performance test. If your package defines more than one performance test, add each test's metadata to the final parameter of `PerfRunner::new()`, as shown below.
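For example, if the same perf binary also defined a hypothetical `ListSecrets` test, both metadata entries would be passed to the runner:

```rust
let runner = PerfRunner::new(
    env!("CARGO_MANIFEST_DIR"),
    file!(),
    vec![
        GetSecrets::test_metadata(),
        ListSecrets::test_metadata(), // hypothetical second test in the same binary
    ],
)?;
```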