
Conversation

martincostello
Member

Relates to #6567.

Changes

Reduce exporter benchmarks' total runtime by reducing the maximum NumberOfBatches value from 100 to 20.

While running the benchmarks for #6567, I noticed that it took 75 minutes just to get to the warm-up phase of running the OTLP exporter benchmarks for gRPC.

Reducing the value to 20 completes all three benchmarks in ~50 minutes, so ~100 minutes to do a before-and-after comparison on the same machine.

The results in #6567 show that time and memory increase linearly with the number of batches, so there's no need to run 100 batches: that implies a runtime of 400s per iteration with no real improvement in the accuracy of the benchmarks.
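The change itself is a one-line edit to the `[Params]` attribute in each of the three benchmark classes. A minimal sketch of what that looks like in BenchmarkDotNet terms (the lower parameter values and the benchmark body here are assumptions for illustration; only the class name and the 100 → 20 maximum come from this PR):

```csharp
using BenchmarkDotNet.Attributes;

public class OtlpGrpcExporterBenchmarks
{
    // Illustrative value set: the actual repository may use different
    // lower values; the PR only reduces the maximum from 100 to 20.
    [Params(1, 10, 20)]
    public int NumberOfBatches { get; set; }

    [Benchmark]
    public void Export()
    {
        // Exports NumberOfBatches batches of spans through the gRPC
        // exporter; per-batch cost is roughly constant, so iteration
        // time scales linearly with this parameter.
    }
}
```

Because BenchmarkDotNet runs a full warm-up and measurement cycle per parameter value, capping the largest value directly bounds the slowest iteration's runtime without removing the data points needed to confirm linear scaling.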

Merge requirement checklist

  • CONTRIBUTING guidelines followed (license requirements, nullable enabled, static analysis, etc.)
  • Unit tests added/updated
  • Appropriate CHANGELOG.md files updated for non-trivial changes
  • Changes in public API reviewed (if applicable)

Reduce benchmarks' total runtime by reducing the maximum `NumberOfBatches` value from 100 to 20.
@martincostello martincostello requested a review from a team as a code owner October 6, 2025 13:25
@Copilot Copilot AI review requested due to automatic review settings October 6, 2025 13:25

@Copilot Copilot AI left a comment


Pull Request Overview

This PR reduces the runtime of the exporter benchmarks by decreasing the maximum NumberOfBatches parameter from 100 to 20 across three benchmark classes. This addresses a performance concern where the benchmarks took excessively long to complete (75 minutes just to reach the warm-up phase), reducing total runtime to approximately 50 minutes while maintaining benchmark accuracy, since the results scale linearly with the number of batches.

Key changes:

  • Updated benchmark parameter values to reduce execution time
  • Maintained benchmark coverage with sufficient data points for accuracy

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

Files changed:

  • test/Benchmarks/Exporter/ZipkinExporterBenchmarks.cs: Updated NumberOfBatches parameter from 100 to 20
  • test/Benchmarks/Exporter/OtlpHttpExporterBenchmarks.cs: Updated NumberOfBatches parameter from 100 to 20
  • test/Benchmarks/Exporter/OtlpGrpcExporterBenchmarks.cs: Updated NumberOfBatches parameter from 100 to 20


@github-actions github-actions bot added the perf Performance related label Oct 6, 2025

codecov bot commented Oct 6, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 86.62%. Comparing base (6a70665) to head (f46ffc2).
⚠️ Report is 1 commit behind head on main.
✅ All tests successful. No failed tests found.

Additional details and impacted files


@@            Coverage Diff             @@
##             main    #6571      +/-   ##
==========================================
- Coverage   86.70%   86.62%   -0.08%     
==========================================
  Files         258      258              
  Lines       11895    11895              
==========================================
- Hits        10313    10304       -9     
- Misses       1582     1591       +9     
Flag Coverage Δ
unittests-Project-Experimental 86.56% <ø> (+0.18%) ⬆️
unittests-Project-Stable 86.52% <ø> (-0.10%) ⬇️

Flags with carried forward coverage won't be shown.
see 2 files with indirect coverage changes

@rajkumar-rangaraj rajkumar-rangaraj merged commit 7f8d57a into open-telemetry:main Oct 6, 2025
38 checks passed
@martincostello martincostello deleted the reduce-benchmark-runtime branch October 6, 2025 16:03
