Commit 529d5e4

Add framework to measure code base performance ✨ 💎 (#558)
- Add a framework to measure code base performance. This makes it more comfortable to refactor Cython code and core features while keeping an eye on performance.
- Download the `benchmark-reports` artifact from the GitHub Workflow to see it in action: https://github.com/Neoteroi/BlackSheep/actions/runs/14950087869
1 parent 1237b1e · commit 529d5e4

File tree

13 files changed: +1398 −0 lines changed


.github/workflows/perf.yml

Lines changed: 110 additions & 0 deletions
```yaml
####################################################################################
# Runs benchmarks for BlackSheep source code for various versions of Python
# and Operating System and publishes the results.
# See the perf folder for more information.
####################################################################################
name: Benchmark

on:
  push:
    paths:
      - 'perf/**'
  workflow_dispatch:

jobs:
  perf-tests:
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.11", "3.12", "3.13"]
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 9
          submodules: false

      - name: Use Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install flake8

      - name: Compile Cython extensions
        run: |
          cython blacksheep/url.pyx
          cython blacksheep/exceptions.pyx
          cython blacksheep/headers.pyx
          cython blacksheep/cookies.pyx
          cython blacksheep/contents.pyx
          cython blacksheep/messages.pyx
          cython blacksheep/scribe.pyx
          cython blacksheep/baseapp.pyx
          python setup.py build_ext --inplace

      - name: Install dependencies for benchmark
        run: |
          cd perf
          pip install -r req.txt

      - name: Run benchmark
        shell: bash
        run: |
          export PYTHONPATH="."
          python perf/main.py --times 5

      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-results-${{ matrix.os }}-${{ matrix.python-version }}
          path: benchmark_results

  genreport:
    runs-on: ubuntu-latest
    needs: [perf-tests]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 9
          submodules: false

      - name: Download a distribution artifact
        uses: actions/download-artifact@v4
        with:
          pattern: benchmark-results-*
          merge-multiple: true
          path: benchmark_results

      - name: Use Python 3.13
        uses: actions/setup-python@v5
        with:
          python-version: '3.13'

      - name: Install dependencies
        run: |
          cd perf
          pip install -r req.txt

      - name: Generate report
        shell: bash
        run: |
          ls -R benchmark_results
          chmod -R 755 benchmark_results

          export PYTHONPATH="."
          python perf/genreport.py
          python perf/genreport.py --output windows-results.xlsx --platform Windows
          python perf/genreport.py --output linux-results.xlsx --platform Linux
          python perf/genreport.py --output macos-results.xlsx --platform macOS

      - name: Upload reports
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-reports
          path: "**/*.xlsx" # Upload all .xlsx files
```

.gitignore

Lines changed: 3 additions & 0 deletions
```diff
@@ -128,3 +128,6 @@ nice-cat.jpg
 venv*

 .local/*
+benchmark_results/
+*.xlsx
+.~lock*
```

perf/README.md

Lines changed: 87 additions & 0 deletions
# Benchmark

This folder contains scripts to benchmark the performance of the library. The
purpose of these benchmarks is to measure how changes in the code affect
performance across Git commits, Python versions, and operating systems.

Benchmarks measure execution time and memory utilization.

> [!TIP]
>
> Download the results from the GitHub Workflow.
> The `benchmark-reports` artifacts include Excel files with tables and charts.
>
> [![Build](https://github.com/Neoteroi/BlackSheep/workflows/Benchmark/badge.svg)](https://github.com/Neoteroi/BlackSheep/actions/workflows/perf.yml)

The scripts can both collect results and compare them, keyed by the Git
commit SHA.

Install the benchmark dependencies:

```bash
pip install -r req.txt
```

From the root folder:

```bash
# Run the benchmark suite
export PYTHONPATH="."

python perf/main.py

# To run the suite more than once:
python perf/main.py --times 3

# Generate the XLSX report
python perf/genreport.py
```

Run the following to generate results from different points in history:

```bash
python perf/historyrun.py --commits 82ed065 1237b1e
```

## Code organization

Benchmarks are organized so that each file can be run interactively using
**iPython**, but they are also imported automatically by `main.py`, following the
convention that benchmark functions have names starting with `benchmark_`.
A hypothetical sketch of such a module is included after the commands below.

To run a single benchmark using **iPython**, or with [`cProfile`](https://docs.python.org/3.13/library/profile.html#profile-cli):

```bash
export PYTHONPATH="."

ipython perf/benchmarks/writeresponse.py timeit

python -m cProfile -s tottime perf/benchmarks/writeresponse.py | head -n 50
```
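For illustration, a minimal benchmark module following this convention could look like the sketch below; the file name, the measured operation, and the import path are assumptions, while `async_benchmark` and `main_run` come from `perf/benchmarks/__init__.py` (part of this commit):

```python
# perf/benchmarks/example.py -- hypothetical module name, illustrative only.
from blacksheep.contents import TextContent

# Import path assumed; it depends on how the perf folder is set up as a package.
from perf.benchmarks import async_benchmark, main_run


async def benchmark_text_content(times: int = 1):
    """Measure creating a simple TextContent; the operation is just an example."""

    async def operation():
        TextContent("Hello, World!")

    return await async_benchmark(operation, iterations=10_000 * times)


if __name__ == "__main__":
    # Allows: PYTHONPATH="." ipython perf/benchmarks/example.py timeit
    main_run(benchmark_text_content)
```

`main_run` makes the module runnable both as a plain script and through iPython's `timeit` magic, as described above.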
## Debugging with Visual Studio Code

To debug specific files with VS Code, use a `.vscode/launch.json` file like:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "env": {
                "PYTHONPATH": "${workspaceFolder}"
            }
        }
    ]
}
```

## When modifying benchmark code

Clear previous results, re-run the whole suite, and regenerate the report:

```bash
export PYTHONPATH="."
rm -rf benchmark_results && python perf/main.py && python perf/genreport.py
```
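The README above states that benchmarks measure memory utilization as well as execution time; the memory-measurement code belongs to files in this commit that are not shown on this page. Purely as a hedged sketch, a memory probe mirroring the `timer()` helper from `perf/benchmarks/__init__.py` (shown below) could be built on the standard library's `tracemalloc`:

```python
# Hypothetical memory probe, NOT the suite's actual implementation.
import tracemalloc
from contextlib import contextmanager
from dataclasses import dataclass


@dataclass
class MemoryResult:
    current_bytes: int = -1
    peak_bytes: int = -1


@contextmanager
def memory_probe():
    """Track Python-level allocations made inside the wrapped block."""
    result = MemoryResult()
    tracemalloc.start()
    try:
        yield result
    finally:
        result.current_bytes, result.peak_bytes = tracemalloc.get_traced_memory()
        tracemalloc.stop()


# Example: measure allocations of an arbitrary operation.
with memory_probe() as mem:
    data = [bytes(1024) for _ in range(1_000)]
print(f"peak: {mem.peak_bytes / 1024:.1f} KiB")
```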

perf/benchmarks/__init__.py

Lines changed: 93 additions & 0 deletions
```python
import gc
import time
from contextlib import contextmanager
from dataclasses import dataclass
from typing import TypedDict


class BenchmarkResult(TypedDict):
    total_time: float
    avg_time: float
    iterations: int


@dataclass
class TimerResult:
    elapsed_time: float


@contextmanager
def timer():
    result = TimerResult(-1)
    start_time = time.perf_counter()  # Use perf_counter for high-resolution timing
    yield result
    end_time = time.perf_counter()
    result.elapsed_time = end_time - start_time


async def async_benchmark(func, iterations: int) -> BenchmarkResult:
    # warmup
    warmup_iterations = max(1, min(100, iterations // 10))
    for _ in range(warmup_iterations):
        await func()

    # Collect garbage to ensure fair comparison
    gc.collect()

    # actual timing
    with timer() as result:
        for _ in range(iterations):
            await func()

    return {
        "total_time": result.elapsed_time,
        "avg_time": result.elapsed_time / iterations,
        "iterations": iterations,
    }


def sync_benchmark(func, iterations: int) -> BenchmarkResult:
    # warmup
    warmup_iterations = max(1, min(100, iterations // 10))
    for _ in range(warmup_iterations):
        func()

    # Collect garbage to ensure fair comparison
    gc.collect()

    # actual timing
    with timer() as result:
        for _ in range(iterations):
            func()

    return {
        "total_time": result.elapsed_time,
        "avg_time": result.elapsed_time / iterations,
        "iterations": iterations,
    }


def main_run(func):
    """
    Run the benchmark function and print the results.

    To use with iPython:
    PYTHONPATH="." ipython perf/benchmarks/filename.py timeit

    To use with asyncio:
    PYTHONPATH="." ipython perf/benchmarks/filename.py
    """
    import asyncio
    import sys

    if len(sys.argv) > 1 and sys.argv[1] == "timeit":
        from IPython import get_ipython

        ipython = get_ipython()
        if ipython:
            ipython.run_line_magic("timeit", f"asyncio.run({func.__name__}(1))")
        else:
            print("ERROR: Use iPython to run the benchmark with timeit.")
            sys.exit(1)
    else:
        asyncio.run(func())
```
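As a quick illustration (not part of the commit) of how the helpers above can be used directly, the sketch below benchmarks an arbitrary synchronous callable with `sync_benchmark` and times a block with `timer()`; the import path and the measured operation are assumptions:

```python
# Illustrative usage of the helpers above; the measured callable is arbitrary.
from perf.benchmarks import sync_benchmark, timer  # import path assumed


def build_headers():
    # Any cheap, deterministic operation works for demonstration purposes.
    return {f"x-header-{i}": str(i) for i in range(20)}


result = sync_benchmark(build_headers, iterations=10_000)
print(f"avg: {result['avg_time'] * 1e6:.2f} µs over {result['iterations']} iterations")

# The timer() context manager can also wrap an arbitrary block on its own.
with timer() as t:
    for _ in range(1_000):
        build_headers()
print(f"elapsed: {t.elapsed_time:.4f} s")
```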
