Qutility Frontier is an open-source Python package for implementing scalable and hardware-agnostic quantum benchmarking protocols. The package provides implementations of recently proposed scalable benchmarks and offers tools to generate benchmark instances in a reproducible form. The Clifford Volume Benchmark implemented in this package is part of the EU Quantum Flagship KPIs for quantum computer benchmarking (see: https://arxiv.org/pdf/2512.19653).
In contrast to component-level tests, this benchmark suite targets system-level characterization: its protocols are designed to capture the end-to-end computational performance of the full quantum processor, rather than to benchmark isolated components.
Benchmarking quantum devices at scale is challenging, particularly because many benchmarking protocols rely on full-fledged quantum algorithms whose outputs must be validated by classical calculations. Since this classical verification step does not scale efficiently with system size, validating such quantum algorithms becomes infeasible for large-scale devices.
In addition, the lack of standardization across quantum SDKs and provider workflows creates a significant incompatibility gap: applications and algorithms are often difficult to realize across different platforms and may require multiple independent implementations. This makes cross-platform comparison both difficult and tedious. This project addresses both issues by:
- providing scalable, platform-independent benchmarks, and
- representing benchmark circuits in a simple intermediate format based on OpenQASM, so that the same benchmark instance can be exported and executed across multiple platforms.
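To make the idea of an SDK-neutral intermediate format concrete, the snippet below shows a hypothetical OpenQASM 3 program of the kind such an export might produce (the exact programs emitted by the package may differ; this is only an illustration of the format itself):

```python
# Hypothetical example of an SDK-neutral circuit text in OpenQASM 3.
# This is illustrative only, not the package's actual output.
bell_qasm = """OPENQASM 3.0;
include "stdgates.inc";
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c = measure q;"""

# Because the text is plain OpenQASM, each SDK can ingest it with its own
# importer (e.g. qiskit.qasm3.loads(bell_qasm) in Qiskit; call names vary per SDK).
print(bell_qasm.splitlines()[0])  # first line declares the QASM version
```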
This package is not intended to serve as a tool for executing circuits directly on hardware providers. Instead, its purpose is to provide a convenient solution for generating benchmark circuits in OpenQASM format, which can then be executed using each provider’s recommended workflow. In addition, we collect and include real-world use cases demonstrating how these circuits can be imported into different SDKs and run on both simulators and real quantum hardware.
Development status: This project is under active development.
- Open-source Python package designed to simplify the implementation of platform-independent quantum benchmark protocols.
- A benchmark base class with a well-defined workflow for benchmark instance creation, circuit generation, serialization and saving, loading, and re-evaluation, with customizable methods for each step.
- A lightweight Python-based representation of quantum circuits, enabling intuitive and flexible implementation of benchmark logic while remaining independent of any specific SDK.
- Hardware-agnostic circuit export via OpenQASM (QASM 2 / QASM 3), with optional SDK-specific adaptations (e.g., gate aliasing).
- A JSON schema to store complete benchmark instances, including benchmark metadata and generated circuits, experimental results (shot counts), and evaluation results (scores, pass/fail conditions, and derived metrics), together with utilities for saving and reloading benchmark instances reproducibly.
This package currently includes two implementations of scalable benchmarks introduced in the accompanying paper (https://arxiv.org/abs/2512.19413):
- Clifford Volume Benchmark - efficiently verifiable using stabilizer techniques, measuring stabilizer and destabilizer observables to quantify device performance.
- Free-Fermion Volume Benchmark - based on Gaussian / free-fermionic circuits, evaluating device performance through Majorana-mode-based observables.
- Notebooks, including tutorials and demos, demonstrating the usage of the benchmarks and the provided utilities are available in the `notebooks/` folder.
- We provide tutorials demonstrating how the implemented benchmarks (Clifford Volume and Free-Fermion Volume) can be used in practice, how the supporting utilities can be applied, and how these benchmarks integrate with SDKs such as Qiskit, tket, and Braket. Additional demonstrations with other frameworks (e.g., Cirq and Bloqade) are planned for future releases, and community contributions are very welcome.
- Python >= 3.10, < 3.13
- Required dependencies:
- numpy >= 1.21
- scipy >= 1.8
- matplotlib >= 3.5
- stim >= 1.12
- jsonschema >= 4.25.1 (only needed if you want to validate benchmark JSON files against the schema)
Recommended if you want to run the tutorial notebooks and use external SDKs:
- jupyterlab >= 4.0
- notebook >= 7.0
- ipykernel >= 6.0
- ipython >= 8.0
- jsonschema >= 4.25.1
- openqasm3 >= 1.0.1
- qiskit >= 1.4.5
- qiskit-aer >= 0.17.2
- qiskit-qasm3-import >= 0.6.0
- amazon-braket-sdk >= 1.104.1
- boto3 >= 1.40.66
- pytket >= 2.11.0
- pytket-qiskit >= 0.74.0
- rustworkx >= 0.17.1
Useful if you're contributing or developing locally:
- pytest >= 8.4.2 (testing)
- ruff >= 0.14.3 (linting/formatting)
- jupyterlab >= 4.0
- notebook >= 7.0
- ipykernel >= 6.0
- ipython >= 8.0
- jsonschema >= 4.25.1
You can install directly from GitHub using:
```shell
# Install the package
pip install --upgrade pip
pip install "git+https://github.com/faulhornlabs/qutility-frontier.git"
```
Note that installing via `pip install git+...` installs only the package itself; tutorial notebooks and other extra files are not included. To get the full repository (including `notebooks/`), clone and install it manually:
```shell
git clone https://github.com/faulhornlabs/qutility-frontier.git
cd qutility-frontier

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install the package
pip install --upgrade pip
pip install .
```

Alternatively, with conda:

```shell
git clone https://github.com/faulhornlabs/qutility-frontier.git
cd qutility-frontier

# Create and activate a conda environment
conda create -n qutility-frontier python=3.11
conda activate qutility-frontier

# Install the package
pip install --upgrade pip
pip install .
```

Editable mode (`pip install -e .`) is recommended only during development, since changes in the source code are applied immediately without reinstalling.
Development tools (dev): includes tools for testing and code quality checks (e.g. pytest, ruff), and is recommended if you plan to contribute to the project or develop new features.

```shell
pip install ".[dev]"
```

Tutorial + SDK extras (tutorials): installs the extra packages needed to run the tutorial notebooks and use external quantum SDKs (e.g. Qiskit, Braket, PyTKET).

```shell
pip install ".[tutorials]"
```

Note: This package has been tested on Windows using Conda, with Python 3.10 and 3.12.
Example: Clifford Volume Benchmark.

```python
from frontier import CliffordVolumeBenchmark
from frontier import QasmEmitterOptions

emitter = QasmEmitterOptions(format="qasm3", target_sdk="qiskit")  # or "braket", "tket", or None

benchmark = CliffordVolumeBenchmark(
    number_of_qubits=5,
    sample_size=10,
    emitter_options=emitter,
    shots=512,
)

benchmark.create_benchmark()  # generates samples and (by default) auto-saves JSON under .benchmarks/
```

Access the generated circuits:
```python
# Flat list of OpenQASM programs (one per circuit)
qasm_programs = benchmark.get_all_circuits()

# Flat list of circuit IDs in the same order
circuit_ids = benchmark.get_all_circuit_ids()

# Access the full structure (including observable strings)
samples = benchmark.samples
first_circuit = samples[0]["circuits"][0]  # <- sample index and circuit index
print(first_circuit["circuit_id"])
print(first_circuit["observable"])
print(first_circuit["qasm"])
```

- Convert/import each OpenQASM program to your platform.
- Execute and collect counts: a mapping from bitstring → integer count.
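The expected counts format, a mapping from bitstring to integer count, is what most SDKs return directly; if your platform instead yields a raw list of per-shot bitstrings, it can be aggregated like this (variable names here are illustrative):

```python
from collections import Counter

# Hypothetical raw shot record, one measured bitstring per shot.
raw_shots = ["00000", "00001", "00000", "00001", "00000"]

# Aggregate into the bitstring -> integer count mapping the benchmark expects.
counts = dict(Counter(raw_shots))
print(counts)  # {'00000': 3, '00001': 2}
```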
Provide counts as a dictionary keyed by `circuit_id`:

```python
counts_by_circuit_id = {
    "0_stab_0": {"00000": 260, "00001": 252},
    "0_destab_0": {"00000": 255, "11111": 257},
    # ...
}

# or as a list of counts ordered in the same way as the circuits
list_of_counts = [
    {"00000": 260, "00001": 252},
    {"00000": 255, "11111": 257},
    # ...
]

benchmark.add_experimental_results(
    counts_by_circuit_id,  # or list_of_counts
    platform="my_provider",
    experiment_id="run_001",
)

evaluation = benchmark.evaluate_benchmark()
print(evaluation)
```

Some benchmarks also provide built-in plotting helpers; for details, see the documentation of the benchmarks.
The Clifford Volume Benchmark samples random n-qubit Clifford unitaries, then probes the output state using a set of measured stabilizers (ideal expectation value 1) and destabilizers (ideal expectation value 0). The benchmark passes for width n when stabilizers stay above a threshold and destabilizers stay below a threshold in magnitude.
See: readme_Clifford_benchmark.md for the full protocol and interpretation.
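As a rough illustration of estimating such an expectation value from counts, the sketch below handles a Z-type Pauli observable; it assumes bitstring index i corresponds to qubit i, and the package's own evaluation routine may use different conventions:

```python
def z_expectation(counts: dict[str, int], support: list[int]) -> float:
    """Estimate <P> for a Z-type Pauli P supported on the qubits in `support`.
    Assumes bitstring position i corresponds to qubit i (an assumption here)."""
    total = sum(counts.values())
    acc = 0
    for bits, n in counts.items():
        # Eigenvalue of a Z-string is (-1)^(parity of the measured bits it acts on).
        parity = sum(int(bits[i]) for i in support) % 2
        acc += n if parity == 0 else -n
    return acc / total

# An ideal stabilizer averages to +1; a destabilizer ideally averages to 0.
print(z_expectation({"00": 512}, [0, 1]))          # 1.0
print(z_expectation({"00": 256, "01": 256}, [1]))  # 0.0
```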
The Free-Fermion Volume (FFV) Benchmark samples random SO(2n) transformations (Gaussian/free-fermionic unitaries), constructs circuits from a decomposition into elementary rotations, and evaluates the device by measuring Majorana-mode observables (mapped to Pauli strings). It checks “parallel” and “orthogonal” projection values against recommended thresholds.
See: readme_FreeFermion_benchmark.md for the full protocol and interpretation.
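Under the standard Jordan-Wigner convention (an assumption here; the benchmark may use a different fermion-to-qubit mapping), Majorana mode 2k maps to Z...Z X on qubit k and mode 2k+1 to Z...Z Y, which is one way such Majorana observables become Pauli strings:

```python
def majorana_pauli_string(m: int, n_qubits: int) -> str:
    """Pauli string for Majorana mode m on n_qubits qubits under the standard
    Jordan-Wigner convention (the convention is an assumption, not the package's)."""
    k, odd = divmod(m, 2)
    # Z string on qubits 0..k-1, then X (even mode) or Y (odd mode) on qubit k.
    letters = ["Z"] * k + ["Y" if odd else "X"] + ["I"] * (n_qubits - k - 1)
    return "".join(letters)

print(majorana_pauli_string(0, 3))  # XII
print(majorana_pauli_string(3, 3))  # ZYI
```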
A benchmark instance is stored as a single JSON document containing:
- benchmark metadata (name, id, number of qubits, sample size, target format/SDK, shots),
- a list of samples, each with its circuit list (`circuit_id`, `qasm`, `observable`, and metadata),
- optional experimental results (counts),
- optional evaluation results.
This enables reproducible generation, execution, and scoring while remaining platform-agnostic.
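As an illustration, a stored instance might look roughly like the following; the field names here are a guess based on the description above, not the authoritative schema, so consult the package's JSON schema for the real structure:

```python
import json

# Illustrative structure only; field names are hypothetical.
instance = {
    "metadata": {
        "name": "CliffordVolumeBenchmark",
        "number_of_qubits": 5,
        "sample_size": 10,
        "target_format": "qasm3",
        "shots": 512,
    },
    "samples": [
        {"circuits": [
            {"circuit_id": "0_stab_0", "qasm": "OPENQASM 3.0; ...", "observable": "ZZZZZ"},
        ]},
    ],
    "experimental_results": None,  # optional: counts keyed by circuit_id
    "evaluation_results": None,    # optional: scores and pass/fail flags
}

# The whole instance serializes to a single JSON document and reloads losslessly.
roundtrip = json.loads(json.dumps(instance))
assert roundtrip == instance
```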
- Generate a benchmark instance and export circuits (OpenQASM).
- Execute the circuits using the provider’s preferred workflow. (For examples, see the tutorials in the `notebooks/` folder.)
- Attach counts back to the benchmark instance.
- Evaluate and store results (score + derived metrics).
See the package documentation here.
Contributions, feature proposals, and benchmark extensions are very welcome. Please see the Contributing Guide for details on how to get started.
One of the benchmarks implemented in this package (the Clifford Volume Benchmark) is included in the set of Key Performance Indicators (KPIs) defined within the EU Quantum Flagship initiative for quantum computer benchmarking. The implementation provided here has also been collected as part of this initiative.
For details, see: https://arxiv.org/pdf/2512.19653