feat: speedup report generation by using multiple cores for tokenizer #207
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Summary of Changes (Gemini Code Assist): This pull request optimizes report generation by introducing parallel tokenization, which uses multiple CPU cores to process text data more efficiently. It also refines output-data handling by supporting a new structured format for text model outputs, and adds an early-stopping mechanism for test sessions, improving control and responsiveness during long-running tests.
Force-pushed from 469355c to 1143a57.
Code Review
This pull request introduces parallel batch tokenization using ThreadPoolExecutor to enhance the efficiency of calculating output sequence lengths and Time Per Output Token (TPOT) metrics, leveraging the GIL-releasing nature of HuggingFace tokenizers. It also updates the output_sequence_from_data function to support a new TextModelOutput list format while maintaining backward compatibility, and adds early stopping logic to the load generator. Review feedback suggests moving an import statement for PEP 8 compliance, replacing hardcoded indices with named constants for improved maintainability, and using strict=True in zip calls to prevent silent data truncation.
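The review summary above describes threading the tokenizer because HuggingFace fast tokenizers release the GIL. The following is a minimal sketch of that pattern, not the PR's actual `_parallel_batch_tokenize`: the function name, chunking strategy, and the whitespace-split stand-in tokenizer are all illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_batch_tokenize(tokenize, texts, n_workers=4):
    """Tokenize `texts` in parallel chunks; result order matches input order."""
    if n_workers <= 1 or len(texts) < 2:
        # Threading buys nothing on one core or for a single text.
        return [tokenize(t) for t in texts]
    chunk_size = -(-len(texts) // n_workers)  # ceiling division
    chunks = [texts[i:i + chunk_size] for i in range(0, len(texts), chunk_size)]
    with ThreadPoolExecutor(max_workers=min(n_workers, len(chunks))) as pool:
        chunk_results = list(pool.map(lambda c: [tokenize(t) for t in c], chunks))
    # Flatten per-chunk results back into one list, preserving input order.
    return [tokens for result in chunk_results for tokens in result]

# Stand-in for a GIL-releasing HF fast tokenizer: plain whitespace split.
lengths = [len(t) for t in parallel_batch_tokenize(str.split, ["a b c", "d e", "f"])]
```

With a real fast tokenizer, each `tokenize` call spends most of its time in Rust code outside the GIL, so the chunks genuinely run in parallel; with a pure-Python tokenizer the threads would serialize.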
src/inference_endpoint/metrics/reporter.py (52)
To improve code clarity and adhere to PEP 8 guidelines, it's recommended to move this import statement to the top of the file with the other imports. This makes dependencies clear and avoids repeated import lookups if the function is called multiple times.
src/inference_endpoint/metrics/reporter.py (627-628)
Using hardcoded indices like 1 and 2 to access elements from decoded_data makes the code less readable and brittle. If the array_like representation of TextModelOutput ever changes, this code will break in a non-obvious way.
To improve maintainability, consider defining named constants for these indices at the module level. For example:
_TEXT_MODEL_OUTPUT_OUTPUT_IDX = 1
_TEXT_MODEL_OUTPUT_REASONING_IDX = 2
src/inference_endpoint/metrics/reporter.py (1094)
The uuids and token_counts lists are expected to have the same length. Using strict=True in the zip call will enforce this invariant and raise a ValueError if the lengths differ. This is safer than strict=False (the default before Python 3.10), which silently truncates to the shorter list and could hide bugs.
rows = list(zip(uuids, token_counts, strict=True))
src/inference_endpoint/metrics/reporter.py (1211-1213)
The batch_uuids and token_counts lists should have the same length. Using strict=True in zip will ensure this and raise an error on a mismatch, preventing potential silent errors or incorrect metric calculations. The current strict=False could hide bugs by truncating to the shorter list.
for sample_uuid, n_non_first_tokens in zip(
batch_uuids, token_counts, strict=True
):
Pull request overview
This PR aims to speed up metrics report generation by parallelizing token counting across CPU cores, and updates the event-output decoding/tests to support TextModelOutput’s msgspec array_like encoding.
Changes:
- Add threaded batch tokenization to compute output token counts in parallel for `get_output_sequence_lengths` and `derive_TPOT`.
- Extend `output_sequence_from_data` to parse msgspec-tagged `TextModelOutput` encoded as a JSON list, and update/add unit tests to cover supported formats.
- Add early-stop checks during sample issuance in the benchmark session loop.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/inference_endpoint/metrics/reporter.py | Implements parallel batch tokenization and updates COMPLETE-event output decoding to support TextModelOutput list encoding. |
| src/inference_endpoint/load_generator/session.py | Adds early-stop handling during performance/accuracy sample issuance loops. |
| tests/unit/metrics/test_reporter.py | Updates existing tests to encode TextModelOutput and adds coverage for output_sequence_from_data. |
| tests/conftest.py | Updates test event payloads to TextModelOutput and makes the test tokenizer compatible with batch `__call__`. |
Comments suppressed due to low confidence (5)
src/inference_endpoint/metrics/reporter.py:1203
- In `derive_TPOT`, `token_counts` are computed from `non_first_chunk` text assembled earlier from `output_sequence` and `reasoning_sequence`. For `TextModelOutput` produced by `OpenAISSEAccumulator`, the first chunk can be `reasoning[0]`, and the non-first text should be `reasoning[1:] + output`; the current chunk assembly appends reasoning after output, which makes the token denominator incorrect for reasoning-enabled outputs. Consider decoding to `TextModelOutput` and using `TextModelOutput.text_after_first_chunk()` (or otherwise preserving the real stream order) before batch tokenization.
if ttft is None:
# Non-streaming mode for this sample - error
raise RuntimeError(
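To make the stream-order point above concrete, here is a hedged sketch of what a `text_after_first_chunk()` helper could look like. The field layout of this `TextModelOutput` stand-in is an assumption for illustration, not the project's real struct.

```python
from dataclasses import dataclass, field

@dataclass
class TextModelOutput:
    # Assumed shape: chunk lists, with reasoning streamed before output.
    output: list[str] = field(default_factory=list)
    reasoning: list[str] = field(default_factory=list)

    def text_after_first_chunk(self) -> str:
        # Reconstruct the true stream order (reasoning chunks, then output
        # chunks) and drop whichever chunk actually arrived first, so the
        # TPOT token denominator counts only non-first tokens.
        chunks = self.reasoning + self.output
        return "".join(chunks[1:])

out = TextModelOutput(output=["ans"], reasoning=["think1", "think2"])
# First streamed chunk is reasoning[0]; the remainder is reasoning[1:] + output.
assert out.text_after_first_chunk() == "think2ans"
```

Appending reasoning after output instead would drop `output[0]` rather than `reasoning[0]`, which is exactly the denominator bug the finding describes.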
src/inference_endpoint/metrics/reporter.py:1213
- `zip(batch_uuids, token_counts, strict=False)` can silently truncate if the tokenizer returns an unexpected number of rows. Using `strict=True` (or validating lengths) would prevent silently dropping samples from TPOT computation.
if reporting_mode == TPOTReportingMode.TOKEN_WEIGHTED:
repeats.append(n_non_first_tokens)
else:
src/inference_endpoint/metrics/reporter.py:1213
- `n_non_first_tokens` is later used as a divisor when computing TPOT. Add a guard for `n_non_first_tokens <= 0` (skip/flag malformed samples) to avoid a potential `ZeroDivisionError` if tokenization yields zero tokens.
if reporting_mode == TPOTReportingMode.TOKEN_WEIGHTED:
repeats.append(n_non_first_tokens)
else:
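A guard of the kind the finding above asks for could look like this. The helper name and millisecond units are illustrative assumptions, not the PR's actual code.

```python
def tpot_for_sample(ttft_ms, total_ms, n_non_first_tokens):
    """Time per output token for one sample; None flags a malformed sample."""
    if n_non_first_tokens <= 0:
        # Tokenization yielded no non-first tokens: skip the sample rather
        # than raise ZeroDivisionError deep inside report generation.
        return None
    return (total_ms - ttft_ms) / n_non_first_tokens

assert tpot_for_sample(100.0, 400.0, 3) == 100.0
assert tpot_for_sample(100.0, 400.0, 0) is None
```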
src/inference_endpoint/metrics/reporter.py:628
- The list-handling branch assumes any top-level JSON list is a tagged `TextModelOutput` array and reads output from index 1. This will mis-parse other list payloads (e.g., a plain chunk list) and can silently drop the first element. Validate `decoded_data[0] == "TextModelOutput"` (and expected length) before interpreting the list as a struct, otherwise fall back to `(None, None)` or legacy handling.
if "output" not in decoded_data:
logging.warning("Dictionary data missing required 'output' key")
return None, None
# Extract output - can be string or list of strings
output = (
_output_sequence_to_str(decoded_data["output"])
if join_chunks
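The validation suggested above could be sketched as follows. The index constants and expected array length are assumptions about the msgspec `array_like` encoding, not verified against the project's actual struct definition.

```python
_TAG = "TextModelOutput"
_OUTPUT_IDX = 1       # assumed position of `output` in the array_like encoding
_REASONING_IDX = 2    # assumed position of `reasoning`
_EXPECTED_LEN = 3     # tag + two fields (assumed)

def output_from_list(decoded_data):
    """Return (output, reasoning) only for a tagged TextModelOutput list."""
    if len(decoded_data) == _EXPECTED_LEN and decoded_data[0] == _TAG:
        return decoded_data[_OUTPUT_IDX], decoded_data[_REASONING_IDX]
    # Not a tagged struct (e.g., a plain chunk list): signal legacy handling.
    return None, None

assert output_from_list(["TextModelOutput", ["hi"], []]) == (["hi"], [])
assert output_from_list(["chunk a", "chunk b"]) == (None, None)
```

Checking the tag (and length) first means a plain chunk list falls through to the legacy path instead of silently losing its first element.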
src/inference_endpoint/metrics/reporter.py:1095
- `zip(..., strict=False)` will silently truncate if `_parallel_batch_tokenize` ever returns a mismatched length (e.g., unexpected tokenizer behavior). Using `strict=True` (or an explicit length assertion) would fail fast and prevent silently dropping samples from the metric output.
then `X` will contribute `len(tokenize(S)) - 1` entries in the table, each with the value:
`(b - a) / (len(tokenize(S)) - 1)`
If the sample was completed in non-streaming mode however, then `a` is assumed to be 0, and `X` will
instead contribute `len(tokenize(S))` entries, each with the value: `b / len(tokenize(S))`
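The contribution rule above can be made concrete with a small worked example. The function name and the whitespace-split tokenizer are illustrative assumptions; `a` is the sample's TTFT and `b` its total latency, as in the docstring.

```python
def tpot_entries(tokenize, text, b, a=None):
    """Rows one sample contributes to the token-weighted TPOT table."""
    n = len(tokenize(text))
    if a is not None:
        # Streaming: the first token is covered by TTFT `a`, so the
        # remaining latency is spread over the n - 1 non-first tokens.
        count, value = n - 1, (b - a) / (n - 1)
    else:
        # Non-streaming: TTFT is assumed 0, so all n tokens share `b`.
        count, value = n, b / n
    return [value] * count

# Streaming: 4 tokens, b=400ms, a=100ms -> 3 entries of (400-100)/3 = 100ms.
assert tpot_entries(str.split, "w x y z", b=400.0, a=100.0) == [100.0] * 3
# Non-streaming: 4 entries of 400/4 = 100ms.
assert tpot_entries(str.split, "w x y z", b=400.0) == [100.0] * 4
```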
Force-pushed from 695156b to d6e5d7b.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
viraatc
left a comment
Review Council — Multi-AI Code Review
Reviewed by: Codex + Claude | Depth: standard
Found 2 issues across 2 files:
- 0 critical/high
- 2 medium
- 0 low
Note: 3 additional findings from Codex (stop-wait during busy-wait scheduling, ThreadPoolExecutor sizing, …)
viraatc
left a comment
(supplemental — from Claude agent, completed after initial review was posted)
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.
- Move ThreadPoolExecutor import to module level (PEP 8)
- Remove unused logger variable
- Use strict=True in zip() calls to catch length mismatches
- Add comment explaining early-stop timing in session loop

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Use os.sched_getaffinity() for cgroup-aware CPU count with fallback
- Cap ThreadPoolExecutor max_workers to min(n_workers, len(chunks))
- Add test for threaded path of _parallel_batch_tokenize

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
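The cgroup-aware worker count with fallback described in the commit above could be sketched like this; the helper name and the cap against task count are illustrative.

```python
import os

def worker_count(n_tasks):
    """Cgroup-aware CPU count with a portable fallback, capped at n_tasks."""
    try:
        # Respects CPU pinning and container cgroup limits (Linux only).
        n_cpus = len(os.sched_getaffinity(0))
    except AttributeError:
        # macOS/Windows lack sched_getaffinity; fall back to raw CPU count.
        n_cpus = os.cpu_count() or 1
    # Never spawn more workers than there are tasks, and at least one.
    return max(1, min(n_cpus, n_tasks))

assert worker_count(1) == 1
```

`os.cpu_count()` can over-report inside containers that restrict the CPU set, which is why the affinity-based count is preferred where available.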
- Add outer loop break in accuracy generator early-stop to prevent leaking one sample per remaining generator after stop_requested
- Skip ThreadPoolExecutor on single-core machines (n_workers <= 1)
- Add __call__ to perf test CharTokenizer to match callable protocol

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix misleading log messages after early stop (now distinguishes "aborted early" from "all samples issued")
- Fix monkeypatch raising=False for cross-platform sched_getaffinity
- Fix docstring: 2 CPUs → 4 CPUs to match actual test setup
- Add thread-safety note to _parallel_batch_tokenize docstring

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from 2b0ce76 to a8c8b2e.
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.
- Skip threading for non-Fast tokenizers (is_fast check) to avoid thread-safety issues with Python-only tokenizer backends
- Only log accuracy issuance messages when accuracy generators exist

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
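The `is_fast` gate from the commit above could be sketched as follows; the helper name and the toy `CharTokenizer` are illustrative, but the `is_fast` attribute itself is the standard HF flag distinguishing Rust-backed "fast" tokenizers from pure-Python ones.

```python
from concurrent.futures import ThreadPoolExecutor

def batch_tokenize(tokenizer, texts):
    """Thread only when the tokenizer backend is safe to call concurrently."""
    if not getattr(tokenizer, "is_fast", False):
        # Pure-Python tokenizers hold the GIL and may not be thread-safe:
        # fall back to sequential tokenization.
        return [tokenizer(t) for t in texts]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(tokenizer, texts))

class CharTokenizer:
    """Toy tokenizer matching the callable protocol used above."""
    is_fast = True
    def __call__(self, text):
        return list(text)

assert batch_tokenize(CharTokenizer(), ["ab", "c"]) == [["a", "b"], ["c"]]
```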
What does this PR do?
- Decodes `TextModelOutput` from its msgspec array-like wire format

closes #208
Type of change
Testing
Checklist