feat: speedup report generation by using multiple cores for tokenizer #207

Open
viraatc wants to merge 6 commits into main from feat/viraatc-parallel-tokenizer

Conversation

@viraatc
Collaborator

@viraatc viraatc commented Mar 25, 2026

What does this PR do?

  • Speed up report generation by parallelizing tokenization across available CPU cores
  • Support decoding TextModelOutput from its msgspec array-like wire format
  • Allow benchmark sessions to stop early mid-issuance

closes #208

Type of change

  • Bug fix
  • New feature
  • Documentation update
  • Refactor/cleanup

Testing

  • Tests added/updated
  • All tests pass locally
  • Manual testing completed

Checklist

  • Code follows project style
  • Pre-commit hooks pass
  • Documentation updated (if needed)

@viraatc viraatc requested a review from a team as a code owner March 25, 2026 00:11
Copilot AI review requested due to automatic review settings March 25, 2026 00:11
@github-actions

github-actions bot commented Mar 25, 2026

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅

@github-actions github-actions bot requested review from arekay-nv and nvzhihanj March 25, 2026 00:11
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on optimizing the report generation process by introducing parallel tokenization, which utilizes multiple CPU cores to process text data more efficiently. It also refines the handling of output data by supporting a new structured format for text model outputs and incorporates an early stopping mechanism for test sessions, enhancing control and responsiveness during long-running tests.

Highlights

  • Parallel Tokenization for Report Generation: Implemented a new _parallel_batch_tokenize function that leverages ThreadPoolExecutor to tokenize texts across multiple CPU cores, significantly speeding up report generation, especially for HuggingFace tokenizers which release the GIL.
  • Enhanced Output Data Handling: The output_sequence_from_data function now supports a new list-based format for TextModelOutput (tagged msgspec array-like structs), alongside existing string and dictionary formats, improving flexibility and data integrity.
  • Early Stop Mechanism in Test Sessions: Added checks for self.stop_requested within performance and accuracy test generators, allowing for graceful early termination of sample issuance during testing sessions.
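The PR's `_parallel_batch_tokenize` itself is not shown in this thread. As a rough sketch of the chunk-and-thread approach the highlights describe (the function signature, chunking strategy, and worker-count logic here are assumptions, not the PR's actual code):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_batch_tokenize(tokenizer, texts, n_workers=None):
    """Tokenize `texts` in parallel chunks. Threads only pay off when the
    tokenizer releases the GIL (e.g. HuggingFace "fast" tokenizers)."""
    if n_workers is None:
        try:
            # cgroup/affinity-aware count on Linux; AttributeError elsewhere
            n_workers = len(os.sched_getaffinity(0))
        except AttributeError:
            n_workers = os.cpu_count() or 1
    if n_workers <= 1 or len(texts) < 2:
        # Serial fallback: no benefit (or no safety) in threading here
        return [tokenizer(t) for t in texts]
    chunk_size = -(-len(texts) // n_workers)  # ceiling division
    chunks = [texts[i:i + chunk_size] for i in range(0, len(texts), chunk_size)]
    with ThreadPoolExecutor(max_workers=min(n_workers, len(chunks))) as pool:
        # Each worker tokenizes one chunk; results come back in chunk order
        parts = list(pool.map(lambda chunk: [tokenizer(t) for t in chunk], chunks))
    return [tokens for part in parts for tokens in part]
```

For a pure-Python tokenizer the GIL serializes the work anyway, which is why the later commits in this PR gate the threaded path on the tokenizer type.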



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces parallel batch tokenization using ThreadPoolExecutor to enhance the efficiency of calculating output sequence lengths and Time Per Output Token (TPOT) metrics, leveraging the GIL-releasing nature of HuggingFace tokenizers. It also updates the output_sequence_from_data function to support a new TextModelOutput list format while maintaining backward compatibility, and adds early stopping logic to the load generator. Review feedback suggests moving an import statement for PEP 8 compliance, replacing hardcoded indices with named constants for improved maintainability, and using strict=True in zip calls to prevent silent data truncation.

I was unable to create individual review comments, so my feedback is listed below.

src/inference_endpoint/metrics/reporter.py (52)

medium

To improve code clarity and adhere to PEP 8 guidelines, it's recommended to move this import statement to the top of the file with the other imports. This makes dependencies clear and avoids repeated import lookups if the function is called multiple times.

src/inference_endpoint/metrics/reporter.py (627-628)

medium

Using hardcoded indices like 1 and 2 to access elements from decoded_data makes the code less readable and brittle. If the array_like representation of TextModelOutput ever changes, this code will break in a non-obvious way.

To improve maintainability, consider defining named constants for these indices at the module level. For example:

_TEXT_MODEL_OUTPUT_OUTPUT_IDX = 1
_TEXT_MODEL_OUTPUT_REASONING_IDX = 2

src/inference_endpoint/metrics/reporter.py (1094)

medium

The uuids and token_counts lists are expected to have the same length. Using strict=True in the zip call will enforce this invariant and raise a ValueError if the lengths differ. This is safer than the default strict=False (the strict parameter was added in Python 3.10), which silently truncates to the shorter list and could hide bugs.

        rows = list(zip(uuids, token_counts, strict=True))

src/inference_endpoint/metrics/reporter.py (1211-1213)

medium

The batch_uuids and token_counts lists should have the same length. Using strict=True in zip will ensure this and raise an error on a mismatch, preventing potential silent errors or incorrect metric calculations. The current strict=False could hide bugs by truncating to the shorter list.

        for sample_uuid, n_non_first_tokens in zip(
            batch_uuids, token_counts, strict=True
        ):


Copilot AI left a comment


Pull request overview

This PR aims to speed up metrics report generation by parallelizing token counting across CPU cores, and updates the event-output decoding/tests to support TextModelOutput’s msgspec array_like encoding.

Changes:

  • Add threaded batch tokenization to compute output token counts in parallel for get_output_sequence_lengths and derive_TPOT.
  • Extend output_sequence_from_data to parse msgspec-tagged TextModelOutput encoded as a JSON list, and update/add unit tests to cover supported formats.
  • Add early-stop checks during sample issuance in the benchmark session loop.
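The "JSON list" format mentioned above comes from msgspec's tagged `array_like` encoding, which serializes a struct as `[tag, field1, field2, ...]` with the tag first. A minimal stdlib-only sketch of the decoding branch (the constant names, field order, and function name are assumptions based on this thread, not the PR's actual code):

```python
import json

# Assumed index layout of the tagged array_like wire format:
# ["TextModelOutput", <output>, <reasoning>]
_TAG_IDX, _OUTPUT_IDX, _REASONING_IDX = 0, 1, 2

def decode_text_model_output(raw: str):
    """Return (output, reasoning) from a wire payload, or (None, None)
    when the payload is not a recognizable TextModelOutput array."""
    decoded = json.loads(raw)
    if (
        isinstance(decoded, list)
        and len(decoded) >= 3
        and decoded[_TAG_IDX] == "TextModelOutput"
    ):
        return decoded[_OUTPUT_IDX], decoded[_REASONING_IDX]
    # Fall through for plain strings, dicts, or unrelated lists
    return None, None
```

Checking the tag before indexing is exactly what one of the review comments below asks for: without it, any top-level JSON list would be mis-parsed as a struct.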

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| src/inference_endpoint/metrics/reporter.py | Implements parallel batch tokenization and updates COMPLETE-event output decoding to support the TextModelOutput list encoding. |
| src/inference_endpoint/load_generator/session.py | Adds early-stop handling during performance/accuracy sample issuance loops. |
| tests/unit/metrics/test_reporter.py | Updates existing tests to encode TextModelOutput and adds coverage for output_sequence_from_data. |
| tests/conftest.py | Updates test event payloads to TextModelOutput and makes the test tokenizer compatible with batch `__call__`. |
Comments suppressed due to low confidence (5)

src/inference_endpoint/metrics/reporter.py:1203

  • In derive_TPOT, token_counts are computed from non_first_chunk text assembled earlier from output_sequence and reasoning_sequence. For TextModelOutput produced by OpenAISSEAccumulator, the first chunk can be reasoning[0], and the non-first text should be reasoning[1:] + output; the current chunk assembly appends reasoning after output, which makes the token denominator incorrect for reasoning-enabled outputs. Consider decoding to TextModelOutput and using TextModelOutput.text_after_first_chunk() (or otherwise preserving the real stream order) before batch tokenization.
            if ttft is None:
                # Non-streaming mode for this sample - error
                raise RuntimeError(

src/inference_endpoint/metrics/reporter.py:1213

  • zip(batch_uuids, token_counts, strict=False) can silently truncate if the tokenizer returns an unexpected number of rows. Using strict=True (or validating lengths) would prevent silently dropping samples from TPOT computation.
                if reporting_mode == TPOTReportingMode.TOKEN_WEIGHTED:
                    repeats.append(n_non_first_tokens)
            else:

src/inference_endpoint/metrics/reporter.py:1213

  • n_non_first_tokens is later used as a divisor when computing TPOT. Add a guard for n_non_first_tokens <= 0 (skip/flag malformed samples) to avoid a potential ZeroDivisionError if tokenization yields zero tokens.
                if reporting_mode == TPOTReportingMode.TOKEN_WEIGHTED:
                    repeats.append(n_non_first_tokens)
            else:
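The divisor guard suggested above can be isolated into a small helper. A hedged sketch (the function name and return convention are illustrative, not the PR's code):

```python
def tpot_for_sample(latency_after_first_token: float, n_non_first_tokens: int):
    """Time-per-output-token for one sample. Returns None for malformed
    samples (zero or negative token counts) instead of raising
    ZeroDivisionError when tokenization yields no non-first tokens."""
    if n_non_first_tokens <= 0:
        return None
    return latency_after_first_token / n_non_first_tokens
```

Callers can then skip or flag `None` results rather than crashing the whole report.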

src/inference_endpoint/metrics/reporter.py:628

  • The list-handling branch assumes any top-level JSON list is a tagged TextModelOutput array and reads output from index 1. This will mis-parse other list payloads (e.g., a plain chunk list) and can silently drop the first element. Validate decoded_data[0] == "TextModelOutput" (and expected length) before interpreting the list as a struct, otherwise fall back to (None, None) or legacy handling.
        if "output" not in decoded_data:
            logging.warning("Dictionary data missing required 'output' key")
            return None, None

        # Extract output - can be string or list of strings
        output = (
            _output_sequence_to_str(decoded_data["output"])
            if join_chunks

src/inference_endpoint/metrics/reporter.py:1095

  • zip(..., strict=False) will silently truncate if _parallel_batch_tokenize ever returns a mismatched length (e.g., unexpected tokenizer behavior). Using strict=True (or an explicit length assertion) would fail fast and prevent silently dropping samples from the metric output.
        then `X` will contribute `len(tokenize(S)) - 1` entries in the table, each with the value:
             `(b - a) / (len(tokenize(S)) - 1)`
        If the sample was completed in non-streaming mode however, then `a` is assumed to be 0, and `X` will
        instead contribute `len(tokenize(S))` entries, each with the value: `b / len(tokenize(S))`


Copilot AI review requested due to automatic review settings March 25, 2026 00:52
@viraatc viraatc force-pushed the feat/viraatc-parallel-tokenizer branch from 695156b to d6e5d7b on March 25, 2026 00:52

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.



Collaborator Author

@viraatc viraatc left a comment


Review Council — Multi-AI Code Review

Reviewed by: Codex + Claude | Depth: standard

Found 2 issues across 2 files:

  • 0 critical/high
  • 2 medium
  • 0 low

@viraatc
Collaborator Author

viraatc commented Mar 31, 2026

Review Council — Multi-AI Code Review

Reviewed by: Codex + Claude | Depth: standard

Found 2 issues across 2 files:

  • 0 critical/high
  • 2 medium
  • 0 low
| # | File | Line | Severity | Category | Reviewer(s) | Summary |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | src/inference_endpoint/load_generator/session.py | 113 | medium | bug | Both | Inner `break` only exits the inner accuracy generator loop; the outer loop continues, issuing one extra sample per remaining generator after stop |
| 2 | src/inference_endpoint/metrics/reporter.py | 60 | medium | testing | Codex | `_parallel_batch_tokenize` uses the callable protocol (`tokenizer(...)`) but `tests/performance/test_reporter.py::CharTokenizer` only has `.tokenize()`; perf tests break with TypeError |

Note: 3 additional findings from Codex (stop-wait during busy-wait scheduling, ThreadPoolExecutor sizing, os.cpu_count() overestimate) were filtered as they overlap with issues already raised and resolved in prior review rounds.
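Issue 1 (the leaked sample per generator) can be sketched in isolation. The names here are hypothetical; the point is that the stop flag must be checked in the outer loop as well as the inner one:

```python
import threading

def issue_samples(generators, stop_event, issue):
    """Issue samples from each generator until a stop is requested.
    A bare `break` in the inner loop only exits the current generator's
    loop; without the outer-loop check, each remaining generator would
    still issue one sample after the stop request."""
    for gen in generators:
        if stop_event.is_set():  # outer check: stop before the next generator
            break
        for sample in gen:
            if stop_event.is_set():  # inner check: stop mid-generator
                break
            issue(sample)
```

This mirrors the fix described in the commit "Add outer loop break in accuracy generator early-stop" later in this thread.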

Collaborator Author

@viraatc viraatc left a comment


(supplemental — from Claude agent, completed after initial review was posted)

Copilot AI review requested due to automatic review settings March 31, 2026 05:21

Copilot AI left a comment


Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.



viraatc and others added 5 commits March 30, 2026 23:01
- Move ThreadPoolExecutor import to module level (PEP 8)
- Remove unused logger variable
- Use strict=True in zip() calls to catch length mismatches
- Add comment explaining early-stop timing in session loop

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Use os.sched_getaffinity() for cgroup-aware CPU count with fallback
- Cap ThreadPoolExecutor max_workers to min(n_workers, len(chunks))
- Add test for threaded path of _parallel_batch_tokenize

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add outer loop break in accuracy generator early-stop to prevent
  leaking one sample per remaining generator after stop_requested
- Skip ThreadPoolExecutor on single-core machines (n_workers <= 1)
- Add __call__ to perf test CharTokenizer to match callable protocol

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix misleading log messages after early stop (now distinguishes
  "aborted early" from "all samples issued")
- Fix monkeypatch raising=False for cross-platform sched_getaffinity
- Fix docstring: 2 CPUs → 4 CPUs to match actual test setup
- Add thread-safety note to _parallel_batch_tokenize docstring

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings March 31, 2026 06:01
@viraatc viraatc force-pushed the feat/viraatc-parallel-tokenizer branch from 2b0ce76 to a8c8b2e on March 31, 2026 06:01

Copilot AI left a comment


Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.



- Skip threading for non-Fast tokenizers (is_fast check) to avoid
  thread-safety issues with Python-only tokenizer backends
- Only log accuracy issuance messages when accuracy generators exist

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
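The `is_fast` gate from the commit above can be sketched as a tiny predicate. `is_fast` is a real attribute on HuggingFace tokenizers (True for the Rust-backed "Fast" variants); the helper name is hypothetical:

```python
def should_use_threads(tokenizer) -> bool:
    # Rust-backed HuggingFace "Fast" tokenizers release the GIL during
    # encoding, so threads give real parallelism. Pure-Python tokenizers
    # do not (and may not be thread-safe), so serial is the safe default.
    return bool(getattr(tokenizer, "is_fast", False))
```

Using `getattr` with a default keeps the check safe for third-party tokenizers that lack the attribute entirely.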
@viraatc viraatc requested a review from nv-alicheng March 31, 2026 06:10

Development

Successfully merging this pull request may close these issues.

feat: optimize report generation time

2 participants