perf: add benchmark suite with CI comparison #10

Merged

RostiMelk merged 21 commits into main from perf/add-benchmarks on Feb 6, 2026
Conversation

@RostiMelk (Member)

Adds per-feature and worst-case benchmarks using mitata, with Welch's t-test for statistical significance (via jstat). On PRs, CI runs benchmarks on both the PR and main, then compares and flags regressions.

Also consolidates static assets: logos to static/logos/, doc images to static/docs/.

- mitata benchmarks for per-feature and worst-case scenarios
- Welch's t-test (via jstat) for statistical significance
- CI compares PR benchmarks against main, flags regressions
- Move logos to static/logos/, docs images to static/docs/
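The significance test mentioned above (Welch's t-test, which jstat provides at this stage of the PR) compares two sample sets with possibly unequal variances. A minimal self-contained sketch of the t statistic, the Welch-Satterthwaite degrees of freedom, and a normal-approximation p-value; the names and shape here are illustrative, not the repo's actual code:

```typescript
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function sampleVariance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
}

// Abramowitz & Stegun 7.1.26 approximation of erf (|error| < 1.5e-7).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const t = 1 / (1 + 0.3275911 * Math.abs(x));
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) *
      t +
      0.254829592) *
    t;
  return sign * (1 - poly * Math.exp(-x * x));
}

interface WelchResult {
  t: number;  // Welch's t statistic
  df: number; // Welch-Satterthwaite degrees of freedom
  p: number;  // two-sided p-value (normal approximation)
}

function welchTTest(a: number[], b: number[]): WelchResult {
  const va = sampleVariance(a) / a.length;
  const vb = sampleVariance(b) / b.length;
  const t = (mean(a) - mean(b)) / Math.sqrt(va + vb);
  const df =
    (va + vb) ** 2 / (va ** 2 / (a.length - 1) + vb ** 2 / (b.length - 1));
  // With the 30-2000 samples used per bench, the t distribution is close to
  // normal, so approximate the two-sided p-value via the normal CDF.
  const p = 2 * (1 - 0.5 * (1 + erf(Math.abs(t) / Math.SQRT2)));
  return { t, df, p };
}
```

Unequal variances are the norm here, since the two branches may differ in timing noise as well as in mean, which is why Welch's variant is the right fit over Student's pooled-variance test.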
vercel bot commented Feb 6, 2026

Preview deployment for react-logo-soup: Ready (updated Feb 6, 2026 9:24pm UTC).

github-actions bot commented Feb 6, 2026

Benchmark Comparison: main vs perf/add-benchmarks

Regression threshold: at least 5% change, >100 us absolute delta, and statistically significant (p < 0.05).

| Benchmark | main | perf/add-benchmarks | Change | p-value | Verdict |
| --- | --- | --- | --- | --- | --- |
| content detection (1 logo) | 1.39 ms | 1.40 ms | +0.3% | 0.697 | unchanged |
| render pass (20 logos) | 1.84 us | 2.04 us | +11.0% | <0.001 *** | unchanged |
| mount 20 logos (no detection) | 3.22 us | 3.52 us | +9.2% | 0.360 | unchanged |
| mount 20 logos (defaults) | 29.05 ms | 29.00 ms | -0.2% | 0.818 | unchanged |

No regressions detected.
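The verdict column above gates on all three criteria at once, which could be sketched roughly like this (field names are illustrative, not the actual CI code):

```typescript
interface Comparison {
  baselineUs: number;  // mean on main, in microseconds
  candidateUs: number; // mean on the PR branch, in microseconds
  pValue: number;      // from Welch's t-test
}

// A benchmark is flagged only when the change is relatively large (>=5%),
// absolutely large (>100 us), and statistically significant (p < 0.05).
function isRegression(c: Comparison): boolean {
  const deltaUs = c.candidateUs - c.baselineUs;
  const pctChange = (deltaUs / c.baselineUs) * 100;
  return pctChange >= 5 && deltaUs > 100 && c.pValue < 0.05;
}
```

This is why "render pass (20 logos)" reads "unchanged" despite an +11.0% change at p < 0.001: the absolute delta is roughly 0.2 us, far below the 100 us floor.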

react-logo-soup Benchmark Report

Test fixtures: 63 real SVGs from static/logos/. 2000 ms budget per bench, 30-2000 samples.

Feature Comparisons (Welch's t-test)

| Test | A | B | Delta | Sig |
| --- | --- | --- | --- | --- |
| densityAware: true vs false | 1.42 ms | 935.93 us | +52.2% | <0.001 YES *** |
| alignBy: visual-center-y vs bounds | 1.80 us | 66 ns | +2639.1% | <0.001 YES *** |
| cropToContent: true vs false | 2.20 ms | 41 ns | +5326832.4% | <0.001 YES *** |

A/B columns match the order in the test name. Sig: * p<0.05, ** p<0.01, *** p<0.001.

Full benchmark output in the CI job logs.

…ut dir

- Drop mitata, jstat, @resvg/resvg-js (3 deps removed)
- Add @napi-rs/canvas for real Skia-backed pixel rendering
- Load all 63 SVGs from static/logos/ — real measurements, no synthetics
- Inline Welch's t-test (no jstat) — self-contained welch.ts
- Single collectSamples timing loop (was duplicated with mitata)
- All bench output goes to tmp/ (single gitignore entry)
- CI: only yaml writes GITHUB_STEP_SUMMARY, simplified job URL
- Seed PRNG removed (no longer needed — fixtures are deterministic from real SVGs)
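The single timing loop mentioned above could look roughly like this. The budget and sample bounds match the report's "2000ms budget per bench, 30-2000 samples"; the function signature and defaults are assumptions, not the repo's actual code:

```typescript
// Run fn repeatedly: always take at least minSamples, then keep sampling
// until the time budget is spent or maxSamples is reached.
function collectSamples(
  fn: () => void,
  budgetMs = 2000,
  minSamples = 30,
  maxSamples = 2000,
): number[] {
  const samples: number[] = [];
  const deadline = performance.now() + budgetMs;
  while (
    samples.length < maxSamples &&
    (samples.length < minSamples || performance.now() < deadline)
  ) {
    const start = performance.now();
    fn();
    samples.push(performance.now() - start); // duration in ms
  }
  return samples;
}
```

Owning this loop directly (rather than going through mitata) means the same raw samples feed both the report and the Welch's t-test, with no second timing harness to keep in sync.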
Key benchmarks:
- measure: single logo pipeline with median-sized real logo (not thin outlier)
- getVCT x 20: per-render cost with real normalized logos
- mount 20 logos: real-world scenario — 20 different logos, default settings

A/B comparisons:
- density ON vs OFF: quantifies densityAware cost
- visual-center-y vs bounds: render-path alignment cost
- bbox scaling: large vs small resolution impact

Dropped: sub-100ns math benchmarks (calcDims, createNormalizedLogo),
worstCase 1 (≈ fullPipeline), linear 100-vs-20 comparison,
calcDims density A/B (always 'not significant'), normalized100 fixture.
A/B comparisons are now:
- densityAware: true vs false
- alignBy: visual-center-y vs bounds
- cropToContent: true vs false

Replaced bbox scaling test (internal detail, unrealistic sizes)
with cropToContent (real user option). Removed unused bboxSmall/
bboxLarge fixtures and detectContentBoundingBox import.
RostiMelk merged commit 89b0f43 into main on Feb 6, 2026
4 checks passed