# Performance Benchmarks

> **Last updated:** March 2026 · **VADP version:** 0.3.x · **Python:** 3.13 · **OS:** Windows 11 (AMD64)
>
> All benchmarks use `time.perf_counter()` with 10,000 iterations (unless noted).
> Numbers are from a development workstation — CI runs on `ubuntu-latest` GitHub-hosted runners.

## TL;DR

| What you care about | Number |
|---|---|
| **Policy evaluation (single rule)** | **0.012 ms** (p50) — 72K ops/sec |
| **Policy evaluation (100 rules)** | **0.029 ms** (p50) — 31K ops/sec |
| **Kernel enforcement (allow path)** | **0.091 ms** (p50) — 9.3K ops/sec |
| **Adapter governance overhead** | **0.004–0.006 ms** (p50) — 130K–230K ops/sec |
| **Circuit breaker check** | **0.0005 ms** (p50) — 1.66M ops/sec |
| **Concurrent throughput (50 agents)** | **35,481 ops/sec** |

**Bottom line:** Policy enforcement adds **< 0.1 ms** per action. At 1,000 concurrent agents, the governance layer is not the bottleneck — your LLM API call is three to four orders of magnitude slower.

---
## 1. Policy Evaluation

Measures `PolicyEvaluator.evaluate()` — the core enforcement path every agent action passes through.

| Benchmark | ops/sec | p50 (ms) | p95 (ms) | p99 (ms) |
|---|---:|---:|---:|---:|
| Single rule evaluation | 72,386 | 0.012 | 0.019 | 0.081 |
| 10-rule policy | 67,044 | 0.014 | 0.018 | 0.074 |
| 100-rule policy | 31,016 | 0.029 | 0.047 | 0.116 |
| SharedPolicy cross-project eval | 120,500 | 0.008 | 0.010 | 0.026 |
| YAML policy load (cold, 10 rules) | 111 | 8.403 | 12.571 | 21.835 |

**Key takeaway:** Evaluation cost grows sub-linearly with rule count — going from 1 rule to 100 costs only ~2.4× more at p50. Even with 100 rules, p99 is under 0.12 ms. YAML loading is a cold-start cost (once per deployment, not per action).

Source: [`packages/agent-os/benchmarks/bench_policy.py`](packages/agent-os/benchmarks/bench_policy.py)

## 2. Kernel Enforcement

Measures `StatelessKernel.execute()` — the full enforcement path including policy evaluation, audit logging, and execution context management.

| Benchmark | ops/sec | p50 (ms) | p95 (ms) | p99 (ms) |
|---|---:|---:|---:|---:|
| Kernel execute (allow) | 9,285 | 0.091 | 0.224 | 0.398 |
| Kernel execute (deny) | 11,731 | 0.071 | 0.199 | 0.422 |
| Circuit breaker state check | 1,662,638 | 0.001 | 0.001 | 0.001 |

### Concurrent Throughput

| Concurrency | Total ops | Wall time (s) | ops/sec |
|---:|---:|---:|---:|
| 50 agents × 200 ops each | 10,000 | 0.282 | 35,481 |

**Key takeaway:** The deny path is slightly faster than allow (no downstream execution). Circuit breaker overhead is negligible (sub-microsecond). At 50 concurrent agents, throughput exceeds 35K ops/sec.

Source: [`packages/agent-os/benchmarks/bench_kernel.py`](packages/agent-os/benchmarks/bench_kernel.py)

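The concurrent-throughput row above can be reproduced with a small harness like the one below. This is a sketch, not the toolkit's actual benchmark code: `check` is a stand-in for the real `StatelessKernel.execute()` call, and only the measurement shape (N workers × M ops each, aggregate ops/sec) is taken from the table.

```python
# Sketch of the concurrent-throughput measurement: `workers` threads each
# push `ops_per_worker` calls through a shared enforcement function, and we
# report aggregate ops/sec. `check` stands in for the real kernel call.
import time
from concurrent.futures import ThreadPoolExecutor


def measure_throughput(check, workers=50, ops_per_worker=200):
    def worker():
        for _ in range(ops_per_worker):
            check()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(workers):
            pool.submit(worker)
        # leaving the `with` block waits for all submitted work to finish
    wall = time.perf_counter() - start
    total = workers * ops_per_worker
    return total, wall, total / wall


if __name__ == "__main__":
    total, wall, ops = measure_throughput(lambda: None)
    print(f"{total} ops in {wall:.3f}s -> {ops:,.0f} ops/sec")
```

Note that with a no-op `check` this measures mostly thread-pool overhead; pointing it at a real kernel instance gives numbers comparable to the table.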
## 3. Audit System

Measures audit entry creation, querying, and serialization — the observability overhead.

| Benchmark | ops/sec | p50 (ms) | p95 (ms) | p99 (ms) |
|---|---:|---:|---:|---:|
| Audit entry write | 212,565 | 0.003 | 0.007 | 0.015 |
| Audit entry serialization | 247,175 | 0.004 | 0.006 | 0.008 |
| Execution time tracking | 510,071 | 0.002 | 0.003 | 0.003 |
| Audit log query (10K entries) | 1,119 | 0.810 | 1.537 | 1.935 |

**Key takeaway:** Audit writes add ~3 µs per action. Querying 10K entries takes ~1 ms (in-memory scan). For production deployments, external append-only stores (e.g., OpenTelemetry export) are recommended for large-scale query workloads.

Source: [`packages/agent-os/benchmarks/bench_audit.py`](packages/agent-os/benchmarks/bench_audit.py)

## 4. Framework Adapter Overhead

Measures the governance check overhead per framework adapter — the cost added to each tool call or agent step.

| Adapter | ops/sec | p50 (ms) | p95 (ms) | p99 (ms) |
|---|---:|---:|---:|---:|
| GovernancePolicy init (startup) | 189,403 | 0.005 | 0.007 | 0.013 |
| Tool allowed check | 7,506,344 | 0.000 | 0.000 | 0.000 |
| Pattern match (per call) | 130,817 | 0.006 | 0.013 | 0.029 |
| **OpenAI** adapter | 132,340 | 0.006 | 0.013 | 0.031 |
| **LangChain** adapter | 225,128 | 0.004 | 0.007 | 0.010 |
| **Anthropic** adapter | 213,598 | 0.004 | 0.007 | 0.011 |
| **LlamaIndex** adapter | 215,934 | 0.004 | 0.006 | 0.011 |
| **CrewAI** adapter | 230,223 | 0.004 | 0.006 | 0.010 |
| **AutoGen** adapter | 191,390 | 0.005 | 0.007 | 0.010 |
| **Google Gemini** adapter | 139,730 | 0.005 | 0.011 | 0.027 |
| **Mistral** adapter | 148,880 | 0.006 | 0.009 | 0.020 |
| **Semantic Kernel** adapter | 138,810 | 0.006 | 0.012 | 0.015 |

**Key takeaway:** All adapters add at most **~0.03 ms** (p99) per tool call. This is 3–4 orders of magnitude below a typical LLM API round-trip (200–2000 ms). The governance layer is invisible to end users.

Source: [`packages/agent-os/benchmarks/bench_adapters.py`](packages/agent-os/benchmarks/bench_adapters.py)

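The gap between the "tool allowed check" and "pattern match" rows above is what you would expect from a set lookup versus a glob walk. A minimal sketch of the distinction, with hypothetical allow-list contents and function names (the real adapters' internals may differ):

```python
# Two flavors of per-call governance check: an O(1) exact allow-list
# membership test vs. an O(patterns) glob match. The ~50x ops/sec gap in
# the table is consistent with this difference in work per call.
from fnmatch import fnmatch

ALLOWED = {"search", "calculator", "read_file"}      # hypothetical allow-list
PATTERNS = ["db.read_*", "http.get_*", "search"]     # hypothetical glob rules


def tool_allowed(name: str) -> bool:
    # exact membership: a single hash lookup
    return name in ALLOWED


def pattern_allowed(name: str) -> bool:
    # glob walk: one fnmatch per pattern until a hit
    return any(fnmatch(name, p) for p in PATTERNS)
```

For example, `tool_allowed("calculator")` and `pattern_allowed("db.read_users")` both pass, while `pattern_allowed("db.drop_table")` is rejected.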
## 5. Agent SRE (Reliability Engineering)

Measures chaos engineering, SLO enforcement, and observability primitives.

| Benchmark | ops/sec | p50 (µs) | p99 (µs) |
|---|---:|---:|---:|
| Fault injection | 1,060,108 | 0.60 | 1.90 |
| Chaos template init | 221,270 | 3.20 | 11.80 |
| Chaos schedule eval | 360,531 | 2.20 | 4.40 |
| SLO evaluation | 48,747 | 18.70 | 49.20 |
| Error budget calculation | 58,229 | 15.70 | 42.50 |
| Burn rate alert | 49,593 | 16.30 | 50.10 |
| SLI recording | 618,961 | 1.10 | 4.10 |

**Key takeaway:** SRE operations complete in roughly 50 µs or less at p99. SLI recording (the hot path for every action) is ~1 µs. These can run alongside every agent action without measurable impact.

Source: [`packages/agent-sre/benchmarks/`](packages/agent-sre/benchmarks/)

## 6. Memory Footprint

Measured with `tracemalloc` — PolicyEvaluator with 100 rules, 1,000 evaluations:

| Metric | Value |
|---|---|
| Evaluator instance (100 rules) | ~2 KB |
| Per-evaluation context overhead | ~0.5 KB |
| Peak process memory (Python runtime + evaluator + 1K evals) | ~126 MB |

> **Note:** The 126 MB peak includes the entire Python runtime, standard library, and imported modules. The evaluator itself is a small fraction. For comparison, a bare `python -c "pass"` process uses ~15 MB.

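The `tracemalloc` measurement pattern can be sketched as follows. `Evaluator` here is a deliberately simplified stand-in for `PolicyEvaluator` (the real class and its rule format are not shown in this document); only the snapshot-around-allocation technique is the point.

```python
# Sketch of the tracemalloc measurement: snapshot before/after constructing
# the evaluator to size the instance, then track peak traced memory over a
# workload. `Evaluator` is a hypothetical stand-in for PolicyEvaluator.
import tracemalloc


class Evaluator:
    def __init__(self, rules):
        self.rules = list(rules)

    def evaluate(self, action):
        return all(rule(action) for rule in self.rules)


tracemalloc.start()
before = tracemalloc.take_snapshot()
ev = Evaluator([(lambda a: True)] * 100)          # 100 trivial rules
after = tracemalloc.take_snapshot()
instance_kb = sum(s.size_diff for s in after.compare_to(before, "lineno")) / 1024

for _ in range(1000):
    ev.evaluate("tool_call")
current, peak = tracemalloc.get_traced_memory()   # (current, peak) in bytes
tracemalloc.stop()

print(f"instance ~{instance_kb:.1f} KB, peak traced {peak / 1024:.0f} KB")
```

Note that `tracemalloc` only traces Python-level allocations; the ~126 MB process peak in the table above is whole-process memory, which would come from an OS-level tool rather than `tracemalloc`.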
## Methodology

### Hardware

These benchmarks were run on a development workstation. CI runs on GitHub-hosted `ubuntu-latest` runners (2-core, 7 GB RAM). Expect ±20% variance between runs due to shared infrastructure.

### Measurement

- **Timer:** `time.perf_counter()` (the highest-resolution monotonic clock available)
- **Iterations:** 10,000 per benchmark (100,000 for circuit breaker, 1,000 for YAML load)
- **Percentiles:** Sorted latency array, index-based selection
- **Warm-up:** None (benchmarks measure cold-start-inclusive performance)

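The measurement loop described above can be sketched in a few lines. This is a minimal illustration of the stated methodology (per-call `perf_counter` timing, sorted-array percentile selection, no warm-up), not the toolkit's actual benchmark harness:

```python
# Minimal benchmark loop: time each call with perf_counter, then take
# percentiles by index into the sorted latency array.
import time


def bench(fn, iterations=10_000):
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()

    def pct(p):
        # index-based percentile selection on the sorted array
        return samples[min(int(len(samples) * p / 100), len(samples) - 1)]

    return {
        "ops/sec": iterations / sum(samples),
        "p50_ms": pct(50) * 1e3,
        "p95_ms": pct(95) * 1e3,
        "p99_ms": pct(99) * 1e3,
    }


if __name__ == "__main__":
    stats = bench(lambda: sum(range(100)))
    print({k: round(v, 4) for k, v in stats.items()})
```

Because there is no warm-up, the first iterations (imports, JIT-free but cache-cold code paths) are included in the distribution, which mainly shows up in the p99 column.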
### Reproducing

```bash
# Clone and install
git clone https://github.com/microsoft/agent-governance-toolkit.git
cd agent-governance-toolkit

# Policy, kernel, audit, adapter benchmarks
cd packages/agent-os
pip install -e ".[dev]"
python benchmarks/bench_policy.py
python benchmarks/bench_kernel.py
python benchmarks/bench_audit.py
python benchmarks/bench_adapters.py

# SRE benchmarks
cd ../agent-sre
pip install -e ".[dev]"
python benchmarks/bench_chaos.py
python benchmarks/bench_slo.py
```

### CI Integration

Benchmarks run automatically on every release via the [`benchmarks.yml`](.github/workflows/benchmarks.yml) workflow. Results are uploaded as workflow artifacts for comparison across releases.

## Comparison Context

For context, here's where the governance overhead sits relative to typical agent operations:

| Operation | Typical latency |
|---|---|
| **Policy evaluation (this toolkit)** | **0.01–0.03 ms** |
| **Full kernel enforcement** | **0.07–0.10 ms** |
| **Adapter overhead** | **0.004–0.006 ms** |
| Python function call | 0.001 ms |
| Redis read (local) | 0.1–0.5 ms |
| Database query (simple) | 1–10 ms |
| LLM API call (GPT-4) | 200–2,000 ms |
| LLM API call (Claude Sonnet) | 300–3,000 ms |

The governance layer adds no more overhead than a single local Redis read and is three to four orders of magnitude faster than an LLM call.