
Conversation

@venkywonka
Collaborator

@venkywonka venkywonka commented Dec 18, 2025

Description

1. Make draft_vocab_size optional for Eagle3, and default to target_vocab_size

  • Currently, to launch Eagle3 in trtllm-serve, the user has to explicitly supply draft_vocab_size as part of the nested speculative_decoding_config in the config.yaml.
  • vLLM assumes draft_vocab_size == target_vocab_size when it is left unspecified, without throwing an error. Following the same behavior would improve usability and keep our API aligned with vLLM for interop; see the sketch just below.
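
A minimal Python sketch of the intended fallback, modeled on the handling added in tensorrt_llm/_torch/models/modeling_speculative.py; the standalone helper name here is illustrative and not part of the codebase:

```python
import warnings

def resolve_draft_vocab_size(pretrained_config):
    # If the checkpoint does not define draft_vocab_size, fall back to the
    # target vocab_size and warn so users can still override it explicitly.
    if not hasattr(pretrained_config, "draft_vocab_size"):
        warnings.warn(
            "Pretrained config does not define 'draft_vocab_size'; assuming it "
            "matches 'vocab_size'. Set 'draft_vocab_size' explicitly if the "
            "draft head uses a different vocabulary.",
            stacklevel=2)
        pretrained_config.draft_vocab_size = pretrained_config.vocab_size
    return pretrained_config.draft_vocab_size
```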

2. Deprecate decoding_type: Eagle for the PyTorch backend, and introduce decoding_type: Eagle3.

  • Eagle (v1, v2) is fundamentally different from Eagle3, and the two should not be conflated.
  • Related discussion: https://nvidia.slack.com/archives/C059LSY62BT/p1766398949447109
  • Users can easily get confused here, since Eagle appears to imply Eagle3.
  • It is nice that a small clarification, "Eagle (for Eagle3)", is mentioned in the docs, but it would be better if the option were clearly named decoding_type: Eagle3 to start with:
(screenshot: documentation table listing "Eagle (for Eagle3)" as the decoding type)
  • Currently, the TRT-LLM PyTorch backend does not support Eagle and only supports Eagle3. The legacy (TRT) backend is the exact opposite: it supports Eagle and does not support Eagle3.
  • This PR makes this clear by:
    • For the default PyTorch backend: printing a warning asking users who provide decoding_type: Eagle to switch over (so the change is non-breaking), and explicitly informing them that the TRT-LLM PyTorch backend treats decoding_type: Eagle as Eagle3 and therefore expects an Eagle3 draft checkpoint. Users must be told explicitly that providing an Eagle draft checkpoint will result in errors; currently this is implicit and not documented clearly.
    • For the legacy TRT backend: specifying decoding_type: Eagle3 will raise an error; decoding_type: Eagle will continue to work as usual. A hedged sketch of this backend handling appears after this list.
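
To illustrate the intent, here is a rough Python sketch of the backend-specific behavior described above; the function name and backend strings are illustrative and do not mirror the exact implementation in tensorrt_llm/llmapi/llm_args.py:

```python
import warnings

def check_eagle_decoding_type(decoding_type: str, backend: str) -> None:
    # PyTorch backend: accept "Eagle" as a non-breaking alias for Eagle3,
    # but warn that an Eagle3 draft checkpoint is expected.
    if backend == "pytorch" and decoding_type == "Eagle":
        warnings.warn(
            "decoding_type 'Eagle' is treated as Eagle3 on the PyTorch backend "
            "and requires an Eagle3 draft checkpoint; please switch to "
            "decoding_type: Eagle3.",
            stacklevel=2)
    # Legacy TRT backend: Eagle3 is not supported, so fail fast.
    if backend == "tensorrt" and decoding_type == "Eagle3":
        raise ValueError(
            "decoding_type 'Eagle3' is not supported on the TensorRT backend; "
            "use decoding_type: Eagle with an Eagle draft checkpoint instead.")
```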

Summary by CodeRabbit

  • New Features

    • Eagle3 decoding support now available with PyTorch backend
    • Added Eagle3 configuration class for speculative decoding
  • Documentation

    • Updated documentation to reflect Eagle3 as the preferred decoding option
  • Bug Fixes

    • Improved draft vocabulary size handling with automatic defaults and warnings
  • Tests

    • Added Eagle3 configuration and backend validation tests


Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@venkywonka
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #28985 [ run ] triggered by Bot. Commit: 10a4571

Collaborator Author


Calling this out: previously, removing this would fail.
But since the default is vocab_size anyway, the behavior after the current changes should be identical (plus a warning noting that vocab_size was chosen as the default).

@venkywonka venkywonka changed the title Fix eagle3 draft vocab fallback [TRTC-122][bug] Fix eagle3 draft vocab fallback Dec 18, 2025
@tensorrt-cicd
Collaborator

PR_Github #28985 [ run ] completed with state SUCCESS. Commit: 10a4571
/LLM/main/L0_MergeRequest_PR pipeline #22209 completed with status: 'SUCCESS'

@venkywonka venkywonka changed the title [TRTC-122][bug] Fix eagle3 draft vocab fallback [TRTC-122][bug] Eagle Specdec UX improvements Dec 22, 2025
@venkywonka venkywonka self-assigned this Dec 23, 2025
@venkywonka venkywonka marked this pull request as ready for review December 23, 2025 10:33
@venkywonka venkywonka requested review from a team as code owners December 23, 2025 10:33
@coderabbitai
Contributor

coderabbitai bot commented Dec 23, 2025

📝 Walkthrough


This pull request introduces Eagle3 as a new speculative decoding type option. Documentation is updated to prefer decoding_type: Eagle3 over Eagle. The core implementation adds Eagle3 suffix detection, default draft_vocab_size handling with warnings, and a new Eagle3DecodingConfig class with TensorRT validation that blocks Eagle3 usage on the TRT backend while enabling PyTorch support.

Changes

  • Documentation Updates (docs/source/blogs/tech_blog/blog11_GPT_OSS_Eagle3.md, docs/source/blogs/tech_blog/blog6_Llama4_maverick_eagle_guide.md, docs/source/features/speculative-decoding.md, docs/source/legacy/advanced/speculative-decoding.md, examples/models/core/qwen/README.md): Updated decoding_type configuration examples and references from Eagle to Eagle3; added clarification notes about draft_vocab_size handling in Eagle3 sections
  • Model Architecture & Speculative Decoding (tensorrt_llm/_torch/models/modeling_auto.py): Added an Eagle3 suffix detection flag; extends the condition for prepending "Eagle3" to model_arch when the Eagle3 suffix was originally present or draft_vocab_size exists
  • Speculative Model Implementation (tensorrt_llm/_torch/models/modeling_speculative.py): Introduced default draft_vocab_size handling with a warning when it is missing from the config; replicated draft_vocab_size initialization in Eagle3DraftModel and Eagle3ForCausalLM; adjusted hidden_size derivation from the config
  • LLM Arguments Configuration (tensorrt_llm/llmapi/llm_args.py): Added an Eagle3DecodingConfig class extending EagleDecodingConfig; extended dispatcher logic to recognize "Eagle3" in DecodingBaseConfig.from_dict and top-level parsing; introduced TensorRT validation that raises ValueError for Eagle3 with the TRT backend
  • Test Configuration Updates (tests/unittest/_torch/speculative/test_eagle3.py): Removed the draft_vocab_size field from test_deepseek_eagle3 and test_multi_eagle3 configurations to rely on default handling
  • New Test Cases (tests/unittest/llmapi/test_llm_args.py): Added three new tests: Eagle3 config parsing validation, an Eagle warning for PyTorch backend usage, and an Eagle3 error assertion for TensorRT backend usage (a hedged sketch follows below)
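
As a rough illustration of what the new config-parsing check covers (the real tests live in tests/unittest/llmapi/test_llm_args.py; exact required fields, fixtures, and dispatch details may differ):

```python
from tensorrt_llm.llmapi.llm_args import DecodingBaseConfig, Eagle3DecodingConfig

def test_eagle3_decoding_type_parses_to_eagle3_config():
    # "Eagle3" should dispatch to the new Eagle3DecodingConfig subclass.
    cfg = DecodingBaseConfig.from_dict({"decoding_type": "Eagle3"})
    assert isinstance(cfg, Eagle3DecodingConfig)
```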

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description check ✅ Passed: The description comprehensively covers both main objectives (draft_vocab_size optionality and Eagle/Eagle3 deprecation), provides clear rationale, and includes test coverage notes. All major sections are present and well-articulated.
  • Title check ✅ Passed: The title '[TRTC-122][feat] Eagle3 Specdec UX improvements' accurately describes the pull request's main objective: introducing Eagle3 speculative decoding UX improvements, including draft_vocab_size optionality and decoding_type deprecation/introduction.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
docs/source/features/speculative-decoding.md (1)

136-142: Optional: Add language identifier to YAML code block.

The YAML code block at line 136 is missing a language identifier. While this doesn't affect functionality, adding yaml would improve syntax highlighting.

🔎 Suggested improvement
-```
+```yaml
 disable_overlap_scheduler: true
 speculative_config:
   decoding_type: Eagle3
tensorrt_llm/_torch/models/modeling_speculative.py (2)

180-186: Add stacklevel=2 to warnings.warn for correct caller attribution.

Without stacklevel, the warning will point to this line rather than the caller's location, making debugging harder.

🔎 Proposed fix
         if not hasattr(config, "draft_vocab_size"):
             warnings.warn(
                 "Pretrained config does not define 'draft_vocab_size'; assuming it matches 'vocab_size'. "
                 "If the draft head uses a different vocabulary, set 'draft_vocab_size' explicitly "
-                "before exporting to TensorRT-LLM.")
+                "before exporting to TensorRT-LLM.",
+                stacklevel=2)
             config.draft_vocab_size = config.vocab_size

312-326: Add stacklevel=2 to warnings.warn for correct caller attribution.

Same issue as in Eagle3DraftModel.__init__.

🔎 Proposed fix
         config = model_config.pretrained_config
         if not hasattr(config, "draft_vocab_size"):
             warnings.warn(
                 "Pretrained config does not define 'draft_vocab_size'; assuming it matches 'vocab_size'. "
                 "If the draft head uses a different vocabulary, set 'draft_vocab_size' explicitly "
-                "before exporting to TensorRT-LLM.")
+                "before exporting to TensorRT-LLM.",
+                stacklevel=2)
             config.draft_vocab_size = config.vocab_size
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b4f26e and 3888cd8.

📒 Files selected for processing (11)
  • docs/source/blogs/tech_blog/blog11_GPT_OSS_Eagle3.md
  • docs/source/blogs/tech_blog/blog6_Llama4_maverick_eagle_guide.md
  • docs/source/features/speculative-decoding.md
  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
  • docs/source/legacy/advanced/speculative-decoding.md
  • examples/models/core/qwen/README.md
  • tensorrt_llm/_torch/models/modeling_auto.py
  • tensorrt_llm/_torch/models/modeling_speculative.py
  • tensorrt_llm/llmapi/llm_args.py
  • tests/unittest/_torch/speculative/test_eagle3.py
  • tests/unittest/llmapi/test_llm_args.py
💤 Files with no reviewable changes (1)
  • tests/unittest/_torch/speculative/test_eagle3.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used
Python files should use snake_case naming: some_file.py
Python classes should use PascalCase naming: class SomeClass
Python functions and methods should use snake_case naming: def my_awesome_function():
Python local variables should use snake_case naming: my_variable = ...
Python variable names that start with a number should be prefixed with 'k': k_99th_percentile = ...
Python global variables should use upper snake_case with prefix 'G': G_MY_GLOBAL = ...
Python constants should use upper snake_case naming: MY_CONSTANT = ...
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings in Python for classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible, using the else block for logic

Files:

  • tests/unittest/llmapi/test_llm_args.py
  • tensorrt_llm/_torch/models/modeling_speculative.py
  • tensorrt_llm/_torch/models/modeling_auto.py
  • tensorrt_llm/llmapi/llm_args.py
**/*.{cpp,h,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification

Files:

  • tests/unittest/llmapi/test_llm_args.py
  • tensorrt_llm/_torch/models/modeling_speculative.py
  • tensorrt_llm/_torch/models/modeling_auto.py
  • tensorrt_llm/llmapi/llm_args.py
🧠 Learnings (25)
📚 Learning: 2025-08-18T08:42:02.640Z
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.

Applied to files:

  • docs/source/features/speculative-decoding.md
  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-09-04T07:33:10.618Z
Learnt from: MrGeva
Repo: NVIDIA/TensorRT-LLM PR: 7219
File: tensorrt_llm/_torch/auto_deploy/compile/backends/torch_cudagraph.py:162-168
Timestamp: 2025-09-04T07:33:10.618Z
Learning: When users explicitly provide cuda_graph_batch_sizes in TorchCudagraphCompiler, respect their choices and only sanitize the values (clamp, dedupe, sort) without forcing additional batch sizes like 1 or max_batch_size. Only add commonly-used batch sizes when falling back to the heuristic.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-08T04:10:19.038Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6728
File: cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp:966-966
Timestamp: 2025-08-08T04:10:19.038Z
Learning: TensorRT plugins currently don't support padding functionality, and TensorRT is not getting new features (in maintenance mode). This means that duplicating parameters like mExpertHiddenSize in function calls, even with TODO comments, can be acceptable as pragmatic solutions within these constraints.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-12-12T10:07:36.866Z
Learnt from: lirundong
Repo: NVIDIA/TensorRT-LLM PR: 9725
File: tensorrt_llm/_torch/custom_ops/cuda_tile_custom_ops.py:110-178
Timestamp: 2025-12-12T10:07:36.866Z
Learning: In PyTorch custom operators registered with torch.library.custom_op, mutable operators that return None and specify mutates_args do NOT require a register_fake decorator. The mutation tracking is handled automatically without needing a FakeTensor kernel, as documented in the PyTorch tutorial on mutable Python custom operators.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-15T06:46:53.813Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:53.813Z
Learning: In the TensorRT-LLM KV cache manager, SWA (Sliding Window Attention) combined with beam search is currently in a broken/non-functional state and is planned for future rework. During preparatory refactoring phases, code related to SWA+beam search may intentionally remain in a non-working state until the broader rework is completed.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-21T21:48:35.135Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-09-24T03:31:28.908Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
  • docs/source/legacy/advanced/speculative-decoding.md
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-17T15:07:01.420Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 6968
File: cpp/tensorrt_llm/thop/loraOp.cpp:133-141
Timestamp: 2025-08-17T15:07:01.420Z
Learning: In TensorRT-LLM's LoRA implementation, the LoraImpl::run() method handles setStream() internally in _runGemm() (line 51 in lora.cpp), along with setWorkspace(). The stream parameter flows from loraOp.cpp through LoraImpl::run() to _runGemm() where setStream() is called appropriately. Adding setStream() in loraOp.cpp would be redundant and goes against the intended architectural design.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-08-17T15:07:01.420Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 6968
File: cpp/tensorrt_llm/thop/loraOp.cpp:133-141
Timestamp: 2025-08-17T15:07:01.420Z
Learning: In TensorRT-LLM's LoRA implementation, the LoraImpl::run() method handles setStream() internally in _runGemm(), along with setWorkspace(). Both stream and workspace are passed as arguments to run(), so there's no need to call setStream() explicitly in loraOp.cpp - this avoids redundancy and follows the intended architectural separation.

Applied to files:

  • docs/source/features/torch_compile_and_piecewise_cuda_graph.md
📚 Learning: 2025-11-27T09:23:18.742Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 9511
File: tests/integration/defs/examples/serve/test_serve.py:136-186
Timestamp: 2025-11-27T09:23:18.742Z
Learning: In TensorRT-LLM testing, when adding test cases based on RCCA commands, the command format should be copied exactly as it appears in the RCCA case, even if it differs from existing tests. For example, some RCCA commands for trtllm-serve may omit the "serve" subcommand while others include it.

Applied to files:

  • docs/source/legacy/advanced/speculative-decoding.md
  • docs/source/blogs/tech_blog/blog6_Llama4_maverick_eagle_guide.md
🧬 Code graph analysis (2)
tests/unittest/llmapi/test_llm_args.py (1)
tensorrt_llm/llmapi/llm_args.py (17)
  • from_dict (198-216)
  • from_dict (262-263)
  • from_dict (292-293)
  • from_dict (443-444)
  • from_dict (471-472)
  • from_dict (488-489)
  • from_dict (586-595)
  • from_dict (725-746)
  • from_dict (802-803)
  • from_dict (877-878)
  • from_dict (953-954)
  • from_dict (991-992)
  • from_dict (1027-1028)
  • from_dict (1043-1044)
  • from_dict (1078-1082)
  • from_dict (1121-1122)
  • Eagle3DecodingConfig (931-932)
tensorrt_llm/_torch/models/modeling_speculative.py (4)
tensorrt_llm/_torch/models/modeling_llama.py (1)
  • config (1092-1093)
tests/unittest/_torch/executor/test_pytorch_model_engine.py (1)
  • config (61-62)
tensorrt_llm/_torch/models/modeling_utils.py (1)
  • config (525-526)
tensorrt_llm/_torch/models/checkpoints/base_weight_mapper.py (1)
  • config (166-169)
🪛 LanguageTool
docs/source/features/torch_compile_and_piecewise_cuda_graph.md

[grammar] ~130-~130: Ensure spelling is correct
Context: ...atus For hot models like deepseek/qwen/lllama, we’ve already wrapped some large modul...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)


[grammar] ~191-~191: Use a hyphen to join words.
Context: ...n for AllReduce & RMSNorm. 1. AllReduce related fusion: Fuse the following opera...

(QB_NEW_EN_HYPHEN)


[grammar] ~217-~217: Use a hyphen to join words.
Context: ...d by user config 4. Insert multi-stream related custom op: since the Fx graph ex...

(QB_NEW_EN_HYPHEN)

🪛 markdownlint-cli2 (0.18.1)
docs/source/features/speculative-decoding.md

136-136: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.10)
tests/unittest/llmapi/test_llm_args.py

143-143: DecodingBaseConfig may be undefined, or defined from star imports

(F405)


151-151: Eagle3DecodingConfig may be undefined, or defined from star imports

(F405)


159-159: Unused function argument: args

(ARG001)


159-159: Unused function argument: kwargs

(ARG001)


164-164: DecodingBaseConfig may be undefined, or defined from star imports

(F405)


173-173: TorchLlmArgs may be undefined, or defined from star imports

(F405)


180-180: DecodingBaseConfig may be undefined, or defined from star imports

(F405)


190-190: TrtLlmArgs may be undefined, or defined from star imports

(F405)

tensorrt_llm/_torch/models/modeling_speculative.py

1-1: The file is executable but no shebang is present

(EXE002)


181-181: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


314-314: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)

tensorrt_llm/llmapi/llm_args.py

2431-2434: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (19)
docs/source/blogs/tech_blog/blog11_GPT_OSS_Eagle3.md (1)

87-87: LGTM! Documentation correctly reflects Eagle3 decoding type.

The update to use Eagle3 as the decoding_type aligns with the PR's objective to introduce Eagle3 as a distinct configuration option and improve clarity between Eagle (v1/v2) and Eagle3.

docs/source/features/torch_compile_and_piecewise_cuda_graph.md (2)

53-53: LGTM! Clarification improves documentation accuracy.

The addition of "Specify max capture batch size" makes it clearer what this parameter controls for generation-only CUDA graphs.


93-94: LGTM! Documentation correctly updated to Eagle3.

Consistent with the PR's goal to distinguish Eagle3 from legacy Eagle configurations.

docs/source/legacy/advanced/speculative-decoding.md (1)

174-175: LGTM! Helpful clarification for users.

This note provides clear guidance on the draft_vocab_size defaulting behavior introduced in this PR, helping users understand when they need to explicitly set this parameter versus when they can rely on the default.

docs/source/features/speculative-decoding.md (1)

128-133: LGTM! Documentation clearly explains Eagle3 usage and backward compatibility.

The updates appropriately:

  • List Eagle3 as the preferred decoding type
  • Document that Eagle is accepted as a PyTorch-backend alias
  • Recommend using Eagle3 for clarity

This aligns well with the PR's goal to reduce confusion between Eagle (v1/v2) and Eagle3.

tests/unittest/llmapi/test_llm_args.py (3)

142-152: LGTM! Test appropriately verifies Eagle3 config parsing.

This test ensures that the new Eagle3 decoding type correctly parses to an Eagle3DecodingConfig instance, validating the from_dict dispatch mechanism introduced in the PR.


154-177: LGTM! Test correctly verifies backward compatibility warning.

The test appropriately verifies that:

  1. Using Eagle decoding type on PyTorch backend still works (backward compatibility)
  2. A warning is emitted to guide users toward using Eagle3

The monkeypatch approach to capture warnings is clean and appropriate.


179-191: LGTM! Test correctly verifies TensorRT backend restriction.

This test appropriately ensures that the TensorRT backend rejects Eagle3 decoding type, as the legacy TensorRT backend only supports the original Eagle implementation. The error message provides clear guidance to users.

tensorrt_llm/_torch/models/modeling_auto.py (2)

17-17: LGTM! Early detection of Eagle3 suffix is correctly placed.

Detecting the Eagle3 suffix before any string manipulations ensures the flag is set accurately, which is then used in the conditional logic below.


33-35: LGTM! Extended condition correctly handles Eagle3 checkpoints.

The updated condition now treats models as Eagle3 when either:

  • They have a draft_vocab_size attribute (original check), OR
  • They had an "Eagle3" suffix in the architecture name (new check)

This properly handles Eagle3 checkpoints that identify themselves via naming convention, aligning with the PR's objective to improve Eagle3 checkpoint detection.

docs/source/blogs/tech_blog/blog6_Llama4_maverick_eagle_guide.md (1)

71-71: LGTM! Docker run example correctly uses Eagle3.

The embedded configuration in the docker run command now correctly specifies Eagle3 as the decoding type, ensuring the guide provides accurate instructions for users.

examples/models/core/qwen/README.md (2)

840-841: LGTM! Documentation correctly references Eagle3.

The example configuration now uses Eagle3 as the decoding type, providing accurate guidance for users configuring Eagle3 speculative decoding with Qwen models.


858-858: LGTM! YAML example correctly uses Eagle3.

Consistent with other documentation updates in this PR, the configuration example now properly specifies Eagle3.

tensorrt_llm/_torch/models/modeling_speculative.py (1)

1-1: LGTM!

The warnings import is correctly added to support the new draft_vocab_size fallback warnings.

tensorrt_llm/llmapi/llm_args.py (5)

728-739: LGTM!

The Eagle3 dispatch entry is correctly added to the from_dict mapping, enabling proper deserialization of Eagle3 configs.


931-934: LGTM!

Clean subclass design that reuses EagleDecodingConfig behavior while explicitly identifying as Eagle3 decoding type.


2430-2434: LGTM!

The validation correctly rejects Eagle3 on the TensorRT backend with a clear, actionable error message. The check is properly ordered before the EagleDecodingConfig check since Eagle3DecodingConfig is a subclass.


2436-2449: LGTM!

The EagleDecodingConfig validation for TensorRT backend correctly handles legacy Eagle with appropriate assertion messaging.


2953-2959: LGTM!

Good use of type(x) is EagleDecodingConfig to distinguish the base class from Eagle3DecodingConfig subclass. The warning appropriately guides users to use the explicit Eagle3 type, and the assertion message correctly references "EAGLE3 weights" for the PyTorch context.

@venkywonka venkywonka force-pushed the fix-eagle3-draft-vocab-fallback branch from 3888cd8 to 2218e4d Compare December 23, 2025 10:46
@venkywonka venkywonka changed the title [TRTC-122][bug] Eagle Specdec UX improvements [TRTC-122][feat] Eagle Specdec UX improvements Dec 23, 2025
@venkywonka venkywonka changed the title [TRTC-122][feat] Eagle Specdec UX improvements [TRTC-122][feat] Eagle3 Specdec UX improvements Dec 23, 2025
@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #29603 [ run ] triggered by Bot. Commit: 2218e4d

Collaborator

@QiJune QiJune left a comment


LGTM

@tensorrt-cicd
Collaborator

PR_Github #29603 [ run ] completed with state FAILURE. Commit: 2218e4d
/LLM/main/L0_MergeRequest_PR pipeline #22770 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@venkywonka venkywonka force-pushed the fix-eagle3-draft-vocab-fallback branch from 2218e4d to 21b499a Compare January 5, 2026 04:05
@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #30543 [ run ] triggered by Bot. Commit: 21b499a

Introduce speculative_config.decoding_type: Eagle3 for the PyTorch backend, warn when using Eagle as an alias, and reject Eagle3 on the TensorRT backend. Update docs/examples and add unit tests.

Signed-off-by: Venky Ganesh <[email protected]>
De-duplicate draft_vocab_size fallback warning text and clarify that decoding_type: Eagle is a PyTorch-backend alias for Eagle3 (EAGLE v1/v2 draft checkpoints are incompatible).

Signed-off-by: Venky Ganesh <[email protected]>
Add test_eagle3_defaults_draft_vocab_size_when_missing to explicitly
test the fallback behavior when draft_vocab_size is missing from
pretrained config.

Restore draft_vocab_size in existing test configs (test_deepseek_eagle3,
test_multi_eagle3) rather than relying on the fallback path.

Update test imports to use Eagle3DecodingConfig.

Signed-off-by: Venky Ganesh <[email protected]>
- Export Eagle3DecodingConfig from llmapi
- Add _decoding_type_alias tracking for Eagle→Eagle3 mapping on PyTorch
- Update from_dict to map 'Eagle' to Eagle3DecodingConfig on PyTorch backend
- Show deprecation warning when 'Eagle' is used on PyTorch backend
- Reject 'Eagle3' on TensorRT backend with clear error message
- Update docs and examples to use Eagle3DecodingConfig
- Update test imports to Eagle3DecodingConfig

Signed-off-by: Venky Ganesh <[email protected]>
Signed-off-by: Venky Ganesh <[email protected]>
@venkywonka venkywonka force-pushed the fix-eagle3-draft-vocab-fallback branch from 6fcc572 to 429b650 Compare January 7, 2026 02:30
@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #30817 [ run ] triggered by Bot. Commit: 429b650

@venkywonka
Collaborator Author

/bot run

@venkywonka venkywonka enabled auto-merge (squash) January 7, 2026 03:24
@tensorrt-cicd
Collaborator

PR_Github #30824 [ run ] triggered by Bot. Commit: 364ea9a

@tensorrt-cicd
Collaborator

PR_Github #30824 [ run ] completed with state SUCCESS. Commit: 364ea9a
/LLM/main/L0_MergeRequest_PR pipeline #23805 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #30876 [ run ] triggered by Bot. Commit: 364ea9a

@tensorrt-cicd
Collaborator

PR_Github #30876 [ run ] completed with state SUCCESS. Commit: 364ea9a
/LLM/main/L0_MergeRequest_PR pipeline #23838 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #30917 [ run ] triggered by Bot. Commit: 7f9e013

@tensorrt-cicd
Collaborator

PR_Github #30917 [ run ] completed with state SUCCESS. Commit: 7f9e013
/LLM/main/L0_MergeRequest_PR pipeline #23878 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #30966 [ run ] triggered by Bot. Commit: 7f9e013

@tensorrt-cicd
Collaborator

PR_Github #30966 [ run ] completed with state SUCCESS. Commit: 7f9e013
/LLM/main/L0_MergeRequest_PR pipeline #23925 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31032 [ run ] triggered by Bot. Commit: 7f9e013

@tensorrt-cicd
Collaborator

PR_Github #31032 [ run ] completed with state SUCCESS. Commit: 7f9e013
/LLM/main/L0_MergeRequest_PR pipeline #23977 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again
