feat(ml): add stateless bundle-local size-aware batching and benchmark #37532

Eliaaazzz wants to merge 1 commit into apache:master
Assigning reviewers: R: @jrmccluskey for label python.
Codecov Report

❌ Patch coverage is

@@             Coverage Diff              @@
##             master    #37532     +/-   ##
============================================
- Coverage     40.06%    39.99%    -0.07%
  Complexity     3404      3404
============================================
  Files          1177      1178        +1
  Lines        187083    187526      +443
  Branches       3581      3581
============================================
+ Hits          74947     75002       +55
- Misses       108744    109132      +388
  Partials       3392      3392
Updates #37531
Summary
This PR adds an opt-in, stateless, bundle-local size-aware batching path for variable-length inference workloads in RunInference. It introduces SortAndBatchElements in apache_beam/transforms/util.py, which:

- buffers elements within a bundle (StartBundle → FinishBundle)
- sorts them by element size (default len(x), overridable via element_size_fn)
- emits batches under both a count cap and a weight cap (max_batch_size, max_batch_weight)

Default behavior remains unchanged unless this path is enabled.
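A minimal, framework-free sketch of the buffer → sort → emit logic described above (the function name and signature are illustrative, not the actual SortAndBatchElements API):

```python
def size_aware_batches(elements, max_batch_size, max_batch_weight,
                       element_size_fn=len):
    """Yield size-sorted batches, respecting both a count cap and a weight cap.

    Hypothetical helper mirroring the bundle-local behavior: a DoFn would
    buffer in process() and run this logic once at finish_bundle().
    """
    buffered = sorted(elements, key=element_size_fn)  # FinishBundle-time sort
    batch, weight = [], 0
    for elem in buffered:
        size = element_size_fn(elem)
        # Start a new batch if adding this element would exceed either cap.
        if batch and (len(batch) >= max_batch_size
                      or weight + size > max_batch_weight):
            yield batch
            batch, weight = [], 0
        batch.append(elem)
        weight += size
    if batch:
        yield batch

batches = list(size_aware_batches(["aaaaa", "b", "ccc", "dddd"],
                                  max_batch_size=2, max_batch_weight=6))
print(batches)  # [['b', 'ccc'], ['dddd'], ['aaaaa']]
```

Because the buffer is dropped at each bundle boundary, batching never crosses bundles and no state or shuffle is needed, which is what "stateless" refers to here.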
Motivation
BatchElements is count-based. For heavy-tail length distributions, long outliers can inflate padding cost for many short elements in the same batch, increasing tail latency and reducing effective throughput. This PR provides a stateless (bundle-local, no shuffle) way to improve batch composition under variable-length inputs.
Mechanism clarification
A strict-control ablation is included to isolate effects: enabling max_batch_weight yields a significant gain. In this workload, gains are primarily consistent with boundary changes under weight constraints after size-aware ordering, rather than with intra-batch reordering alone.
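This boundary-vs-reordering distinction can be illustrated with toy arithmetic (the numbers below are illustrative, not taken from the benchmark). Each batch is padded to its longest element, so one outlier inflates the cost of every short element batched with it; sorting alone only moves the outlier into a different full batch, while a weight cap isolates it:

```python
def padded_tokens(batches):
    # Total tokens once each batch is padded to its longest element.
    return sum(len(b) * max(b) for b in batches)

lengths = [8, 9, 7, 500, 10, 6, 9, 8]  # one long outlier among short inputs

# 1) Count-based batching in arrival order (BatchElements-style, batches of 4).
count_based = [lengths[i:i + 4] for i in range(0, len(lengths), 4)]

# 2) Size-sorted but count-capped only: the outlier still poisons one batch.
s = sorted(lengths)
sorted_only = [s[i:i + 4] for i in range(0, len(s), 4)]

# 3) Size-sorted plus a weight cap: batch boundaries shift, isolating the outlier.
def weight_capped(sorted_lengths, max_size, max_weight):
    batches, batch = [], []
    for n in sorted_lengths:
        if batch and (len(batch) >= max_size or sum(batch) + n > max_weight):
            batches.append(batch)
            batch = []
        batch.append(n)
    if batch:
        batches.append(batch)
    return batches

capped = weight_capped(s, max_size=4, max_weight=60)

print(padded_tokens(count_based))  # 2040
print(padded_tokens(sorted_only))  # 2032 -- sorting alone barely helps here
print(padded_tokens(capped))       # 562  -- the weight cap isolates the outlier
```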
Benchmark methodology
Script:
apache_beam/transforms/sort_and_batch_benchmark.pyPareto (heavy-tail) results
Configuration:
max_batch_size=32,max_batch_weight=2000Baseline → Stateless:
13.00x → 3.19x(↓75.5%)2.2 → 7.3 Ktok/s(↑230.4%)18924.3 → 5728.6 ms(↓69.7%)283.3 → 64.4 ms(↓77.3%)1011.9 → 632.5 ms(↓37.5%)313 → 321(+3%)Scope
Included in this PR:
- the new transform (SortAndBatchElements)

Not included in this PR:

- stateful/global batching strategies and broader-distribution benchmarking (follow-up work noted below)
Files changed
- apache_beam/transforms/util.py
- apache_beam/transforms/util_test.py
- apache_beam/transforms/sort_and_batch_benchmark.py

Notes
Claims in this PR are scoped to the Pareto heavy-tail setup used above. Broader-distribution conclusions and stateful/global strategy are follow-up work.
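As a rough sketch of the heavy-tail regime these claims target, Pareto-distributed lengths can be drawn with Python's standard library (the alpha and scale below are illustrative, not the benchmark script's settings):

```python
import random

# Illustrative heavy-tail workload: Pareto-distributed sequence lengths.
random.seed(0)
lengths = [max(1, int(random.paretovariate(1.2) * 8)) for _ in range(1000)]

# A count-based baseline pads each batch of 32 to its longest element,
# so occasional long outliers dominate the padded cost.
batches = [lengths[i:i + 32] for i in range(0, len(lengths), 32)]
padded = sum(len(b) * max(b) for b in batches)
useful = sum(lengths)
print(f"padding overhead: {padded / useful:.2f}x")
```

The padding-overhead ratio printed here is one way to quantify the waste that size-aware ordering plus a weight cap aims to reduce.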
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.