Mobile benchmarking SDK for Rust. Build and run Rust benchmarks on Android and iOS, locally or on BrowserStack, with a library-first workflow.
mobench provides a Rust API and a CLI for running benchmarks on real mobile devices. You define benchmarks in Rust, generate mobile bindings automatically, and drive execution from the CLI with consistent output formats (JSON, Markdown, CSV).
For programmatic CI integrations, mobench exposes typed request/result types (RunRequest, RunResult, DeviceSelection, Report) via the crate API.
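To make the shape of such a typed API concrete, here is a minimal stand-in sketch. The type names (`RunRequest`, `RunResult`, `DeviceSelection`) come from the text above, but every field name and shape below is an illustrative assumption, not the crate's actual definition:

```rust
// Hypothetical stand-ins for the typed request/result API described above.
// Field names and shapes are illustrative assumptions, not mobench's real types.

#[derive(Debug, Clone)]
struct DeviceSelection {
    platform: String,        // e.g. "android"
    device: Option<String>,  // e.g. Some("Google Pixel 7-13.0"); None = local
}

#[derive(Debug, Clone)]
struct RunRequest {
    function: String, // fully qualified, e.g. "sample_fns::fibonacci"
    iterations: u32,
    warmup: u32,
    devices: Vec<DeviceSelection>,
}

#[derive(Debug, Clone)]
struct RunResult {
    function: String,
    samples_ns: Vec<u64>, // raw timing samples in nanoseconds
}

fn main() {
    let req = RunRequest {
        function: "sample_fns::fibonacci".into(),
        iterations: 100,
        warmup: 10,
        devices: vec![DeviceSelection { platform: "android".into(), device: None }],
    };
    println!("{req:?}");
}
```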
- `#[benchmark]` marks functions and registers them via `inventory`
- `mobench-sdk` builds mobile artifacts, provides the timing harness, and generates app templates from embedded assets
- UniFFI proc macros generate Kotlin and Swift bindings directly from Rust types
- The CLI writes a benchmark spec (function, iterations, warmup) and packages it into the app
- Mobile apps call `run_benchmark` via the generated bindings and return timing samples
- The CLI collects results locally or from BrowserStack and writes summaries
- `crates/mobench` (`mobench`): CLI tool that builds, runs, and fetches benchmarks
- `crates/mobench-sdk` (`mobench-sdk`): core SDK with timing harness, builders, registry, and codegen
- `crates/mobench-macros` (`mobench-macros`): `#[benchmark]` proc macro
- `crates/sample-fns`: sample benchmarks and UniFFI bindings
- `examples/basic-benchmark`: minimal SDK integration example
- `examples/ffi-benchmark`: full UniFFI/FFI surface example
```bash
# Install the CLI (fast)
cargo binstall mobench

# Or build from source
cargo install mobench

# Add the SDK to your project
cargo add mobench-sdk inventory
```
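Once the SDK is added, a benchmark is a plain Rust function with the shape the macro expects (no parameters, unit return). The sketch below keeps the `#[benchmark]` attribute in a comment, since compiling it requires the `mobench-macros` crate; the workload itself is illustrative:

```rust
// With mobench-sdk and inventory added, a benchmark is a plain function:
//
//     #[benchmark]          // requires the mobench proc macro
//     fn fibonacci() { let _ = fib(20); }
//
// The recursive workload below is purely illustrative.
fn fib(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fib(n - 1) + fib(n - 2),
    }
}

fn main() {
    // The harness would call the benchmark repeatedly and time each call.
    let result = fib(20);
    println!("fib(20) = {result}");
}
```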
```bash
# Check prerequisites before building
cargo mobench doctor --target both
cargo mobench config validate --config bench-config.toml
cargo mobench check --target android
cargo mobench check --target ios
```
```bash
# Build artifacts (outputs to target/mobench/ by default)
cargo mobench build --target android
cargo mobench build --target ios

# Build with progress output for clearer feedback
cargo mobench build --target android --progress
```
```bash
# Run a benchmark locally
cargo mobench run --target android --function sample_fns::fibonacci

# Run on BrowserStack (use --release for smaller APK uploads)
cargo mobench run --target android --function sample_fns::fibonacci \
  --devices "Google Pixel 7-13.0" --release
```
```bash
# List available BrowserStack devices
cargo mobench devices --platform android

# Resolve matrix devices deterministically for CI
cargo mobench devices resolve --platform android --profile default --device-matrix device-matrix.yaml

# Fixture lifecycle helpers
cargo mobench fixture init
cargo mobench fixture verify
cargo mobench fixture cache-key
```
```bash
# View benchmark results summary
cargo mobench summary target/mobench/results.json
```
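The summary statistics reported by the CLI (avg/min/max/median) can be computed from raw timing samples as sketched below. This illustrates the arithmetic only; it is not mobench's implementation:

```rust
// Illustrative summary over nanosecond samples: (avg, min, max, median).
fn summarize(samples: &mut Vec<u64>) -> (f64, u64, u64, f64) {
    assert!(!samples.is_empty());
    samples.sort_unstable();
    let min = samples[0];
    let max = *samples.last().unwrap();
    let avg = samples.iter().sum::<u64>() as f64 / samples.len() as f64;
    let mid = samples.len() / 2;
    let median = if samples.len() % 2 == 0 {
        // Even count: midpoint of the two middle samples.
        (samples[mid - 1] + samples[mid]) as f64 / 2.0
    } else {
        samples[mid] as f64
    };
    (avg, min, max, median)
}

fn main() {
    let mut samples = vec![120, 100, 110, 130];
    let (avg, min, max, median) = summarize(&mut samples);
    println!("avg={avg} min={min} max={max} median={median}");
}
```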
```bash
# CI one-command orchestration with stable outputs
cargo mobench ci run --target android --function sample_fns::fibonacci --local-only
```
```bash
# Reporting helpers from standardized outputs
cargo mobench report summarize --summary target/mobench/ci/summary.json
cargo mobench report github --pr 123 --summary target/mobench/ci/summary.json
```

CI contract outputs are written to `target/mobench/ci/`:

- `summary.json`
- `summary.md`
- `results.csv`
mobench supports a `mobench.toml` configuration file for project settings:

```toml
[project]
crate = "bench-mobile"
library_name = "bench_mobile"

[android]
package = "com.example.bench"
min_sdk = 24

[ios]
bundle_id = "com.example.bench"
deployment_target = "15.0"

[benchmarks]
default_function = "my_crate::my_benchmark"
default_iterations = 100
default_warmup = 10
```

CLI flags override config file values when provided.
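The precedence rule (CLI flag over config value over built-in default) amounts to an `Option` merge; the sketch below illustrates the idea with assumed field names:

```rust
// Illustrative precedence merge: a CLI flag, when present, wins over the
// config file value, which in turn wins over a built-in default.
#[derive(Debug, Clone, Copy)]
struct BenchSettings {
    iterations: u32,
    warmup: u32,
}

fn resolve(cli: Option<u32>, config: Option<u32>, default: u32) -> u32 {
    cli.or(config).unwrap_or(default)
}

fn main() {
    let settings = BenchSettings {
        iterations: resolve(Some(500), Some(100), 100), // CLI flag wins
        warmup: resolve(None, Some(10), 5),             // config value wins
    };
    println!("{settings:?}");
}
```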
- In `cargo mobench run --config <FILE>` mode, `--device-matrix <FILE>` overrides `device_matrix` from the config file.
- For regression comparisons, `--baseline` should point to a previous run summary; if it resolves to the same output path, mobench snapshots the prior file before writing the candidate summary.
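The snapshot-before-overwrite behavior described above can be sketched with `std::fs`: if the baseline resolves to the output path, the prior file is copied aside before the new summary is written. The `.prev` suffix and file names here are assumptions for illustration:

```rust
use std::fs;
use std::path::Path;

// Illustrative sketch: snapshot a prior summary before overwriting it when
// the baseline path and the output path are the same file.
fn write_summary_with_snapshot(
    baseline: &Path,
    output: &Path,
    new_contents: &str,
) -> std::io::Result<()> {
    if baseline == output && output.exists() {
        // Assumed ".prev" naming; mobench's actual scheme may differ.
        let snapshot = output.with_extension("json.prev");
        fs::copy(output, &snapshot)?;
    }
    fs::write(output, new_contents)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("mobench_demo_summary.json");
    fs::write(&path, "{\"run\":1}")?;
    write_summary_with_snapshot(&path, &path, "{\"run\":2}")?;
    println!("snapshot kept: {}", path.with_extension("json.prev").exists());
    Ok(())
}
```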
- `BENCH_SDK_INTEGRATION.md`: SDK integration guide
- `BUILD.md`: build prerequisites and troubleshooting
- `TESTING.md`: testing guide and device workflows
- `BROWSERSTACK_CI_INTEGRATION.md`: BrowserStack CI setup
- `docs/CONTRACT_CI_V1.md`: frozen v1 CI input/output/error contract
- `docs/adr/0001-mobench-ci-contract-v1.md`: CI contract ADR and compatibility policy
- `docs/schemas/`: machine-readable CI/summary schema artifacts
- `docs/MIGRATION_GUIDE.md`: migration guide (placeholder, linked from ADR)
- `FETCH_RESULTS_GUIDE.md`: fetching and summarizing results
- `PROJECT_PLAN.md`: goals and backlog
- `CLAUDE.md`: developer guide
For benchmarks that require expensive setup (such as generating test data or initializing connections), you can exclude setup time from measurements using the `setup` attribute.
Without setup/teardown, expensive initialization is measured as part of your benchmark:
```rust
#[benchmark]
fn verify_proof() {
    let proof = generate_complex_proof(); // This is measured (bad!)
    verify(&proof); // This is what we want to measure
}
```

Use the `setup` attribute to run initialization once before timing begins:
```rust
// Setup function runs once before all iterations (not timed)
fn setup_proof() -> ProofInput {
    generate_complex_proof() // Takes 5 seconds, but not measured
}

#[benchmark(setup = setup_proof)]
fn verify_proof(input: &ProofInput) {
    verify(&input.proof); // Only this is measured
}
```

For benchmarks that mutate their input, use `per_iteration` to get fresh data each iteration:
```rust
fn generate_random_vec() -> Vec<i32> {
    (0..1000).map(|_| rand::random()).collect()
}

#[benchmark(setup = generate_random_vec, per_iteration)]
fn sort_benchmark(data: Vec<i32>) {
    let mut data = data;
    data.sort(); // Each iteration gets a fresh unsorted vec
}
```

For resources that need cleanup (database connections, temp files, etc.):
```rust
fn setup_db() -> Database { Database::connect("test.db") }
fn cleanup_db(db: Database) { db.close(); std::fs::remove_file("test.db").ok(); }

#[benchmark(setup = setup_db, teardown = cleanup_db)]
fn db_query(db: &Database) {
    db.query("SELECT * FROM users");
}
```

| Pattern | Use Case |
|---|---|
| `#[benchmark]` | Simple benchmarks with no setup or fast inline setup |
| `#[benchmark(setup = fn)]` | Expensive one-time setup, reused across iterations |
| `#[benchmark(setup = fn, per_iteration)]` | Benchmarks that mutate input, need fresh data each time |
| `#[benchmark(setup = fn, teardown = fn)]` | Resources requiring cleanup (connections, files, etc.) |
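The common thread in the table above is that setup runs outside the timed region and only the benchmark body is sampled. A self-contained sketch of that principle, illustrative only and not the SDK's `timing` module:

```rust
use std::time::Instant;

// Illustrative harness: setup runs once (untimed); only the benchmark body
// is measured, once per iteration.
fn run_with_setup<S, I, B>(setup: S, bench: B, iterations: usize) -> Vec<u128>
where
    S: FnOnce() -> I,
    B: Fn(&I),
{
    let input = setup(); // expensive setup, excluded from timing
    let mut samples_ns = Vec::with_capacity(iterations);
    for _ in 0..iterations {
        let start = Instant::now();
        bench(&input);
        samples_ns.push(start.elapsed().as_nanos());
    }
    samples_ns
}

fn main() {
    let samples = run_with_setup(
        || (0..1000u64).collect::<Vec<_>>(),          // setup: build input once
        |data| { let _ = data.iter().sum::<u64>(); }, // measured work
        10,
    );
    println!("collected {} samples", samples.len());
}
```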
- Added CI contract-oriented commands and workflows: `cargo mobench ci run`, `cargo mobench config validate`, `cargo mobench devices resolve`, `cargo mobench fixture init|build|verify|cache-key`, `cargo mobench report summarize|github`
- Standardized CI outputs under `target/mobench/ci/` with schema-backed metadata.
- Added baseline comparison source support (`path|url|artifact:<path>`) and regression labels.
- Improved local action safety for workflow input handling and sticky PR comment publishing.
- Fixed iOS CI target setup (`x86_64-apple-ios`) and preserved CI outputs on regression exit.
- Setup and teardown support: the `#[benchmark]` macro now supports `setup`, `teardown`, and `per_iteration` attributes for excluding expensive initialization from timing measurements

  ```rust
  fn setup_data() -> Vec<u8> { vec![0u8; 10_000_000] }

  #[benchmark(setup = setup_data)]
  fn process_data(data: &Vec<u8>) {
      // Only this is measured, not the setup
  }
  ```
- New `check` command: validates prerequisites (NDK, Xcode, Rust targets, etc.) before building

  ```bash
  cargo mobench check --target android
  cargo mobench check --target ios
  ```

- New `verify` command: validates registry, spec, and artifacts
- New `summary` command: displays benchmark result statistics (avg/min/max/median)
- New `devices` command: lists available BrowserStack devices with validation
- `--progress` flag: simplified step-by-step output for `build` and `run` commands
- Consolidated `mobench-runner` into `mobench-sdk`: the timing harness is now part of `mobench-sdk` as the `timing` module, simplifying the dependency graph
- SDK improvements:
  - `#[benchmark]` macro now validates the function signature at compile time (no params, returns `()`)
  - New `debug_benchmarks!()` macro for verifying benchmark registration
  - Better error messages with a list of available benchmarks
- BrowserStack improvements:
  - Better credential error messages with setup instructions
  - Artifact pre-flight validation before uploads
  - Upload progress indication with file sizes
  - Dashboard link printed immediately when a build starts
  - Improved device fuzzy matching with suggestions
- Fix iOS XCUITest test name mismatch: changed the BrowserStack `only-testing` filter to use `testLaunchAndCaptureBenchmarkReport`
- Fix iOS XCUITest BrowserStack detection: Added Info.plist to the UITests target template, resolving issues where BrowserStack could not properly detect and run XCUITest bundles
- Improved video capture for BrowserStack: Increased post-benchmark delay from 0.5s to 5.0s to ensure benchmark results are captured in BrowserStack video recordings
- Better UX during benchmark runs: iOS app now shows "Running benchmarks..." text before results appear, providing visual feedback during execution
- Template sync: Synchronized top-level iOS/Android templates with SDK-embedded templates for consistency
- Initial public release with `--release` flag support
- `package-xcuitest` command for iOS BrowserStack testing
- Updated mobile timing display and documentation
MIT licensed — World Foundation 2026.