
Conversation

@DylanChen-NV
Collaborator

@DylanChen-NV DylanChen-NV commented Sep 17, 2025

Description

Fix Eagle3 with an FP8 KV-cache target model, a BF16 draft model, and chunked prefill by mandating the use of FP8 FMHA in the draft layers, because the context-phase attention of the second (and later) chunks needs to load the FP8 KV cache.
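As a minimal Python sketch of the idea (function and parameter names are illustrative, not the actual TensorRT-LLM code): when a BF16/FP16 draft layer shares an FP8 KV cache with the target model and chunked prefill is enabled, the context attention from the second chunk onward must read FP8 KV blocks, so an FP8 FMHA kernel has to be selected even though the layer itself has no FP8 activation scales.

# Illustrative sketch only; the real logic lives in the attention backend and attentionOp.
def select_context_fmha_dtype(kv_cache_dtype: str,
                              layer_dtype: str,
                              has_fp8_qdq: bool,
                              chunked_prefill: bool) -> str:
    # Eagle3 draft layers may run in BF16/FP16 while the target model's KV cache
    # is FP8. From the 2nd chunk onward, context attention must read that FP8 KV
    # cache, so an FP8 FMHA kernel is required even though the layer itself has
    # no FP8 activation scales.
    if kv_cache_dtype == "fp8" and not has_fp8_qdq and chunked_prefill:
        return "fp8"      # force FP8 FMHA in the draft layers
    return layer_dtype    # otherwise keep the layer's own precision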

Test Coverage

A test for Eagle3 + FP8 KV target model + BF16 draft model + chunked prefill is added:
tests/unittest/_torch/speculative/test_eagle3.py::test_llama_eagle3[True-TRTLLM-False-True-True-True-True-True-True]

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
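For example, a typical invocation using the flags documented above might look like:

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"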

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@DylanChen-NV DylanChen-NV requested review from a team as code owners September 17, 2025 11:34
@DylanChen-NV DylanChen-NV force-pushed the fix/eagle3_fp8_target_model_and_chunked branch from 4a85e76 to 87ec43c on September 17, 2025 11:40
@coderabbitai
Contributor

coderabbitai bot commented Sep 17, 2025

📝 Walkthrough

A new boolean flag is_eagle3 is added and propagated through Python and C++ attention paths. It gates FP8-related scaling/paths in dispatch and initialization, adjusts parameter conversion, extends spec-decoding boolean params to four, updates planning/forward APIs, and augments tests to toggle FP8 target behavior.
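As a rough Python sketch of that propagation (signatures and bodies are simplified; only the names is_eagle3, plan, forward, spec_decoding_bool_params, and mIsEagle3 come from the walkthrough above):

# Sketch only; signatures and surrounding arguments are simplified.
class TrtllmAttentionWrapperSketch:
    def plan(self, is_eagle3: bool = False, **plan_args):
        # New keyword described above; stored for use during forward().
        self.is_eagle3 = is_eagle3

    def forward(self, **kwargs):
        # forward() reads the flag from kwargs and feeds it into planning.
        self.plan(is_eagle3=kwargs.get("is_eagle3", False))
        # The spec-decoding boolean pack grows from 3 to 4 entries; the 4th entry
        # is mapped to AttentionOp::mIsEagle3 by the THOP binding.
        existing_bools = [False, False, False]  # placeholders for the 3 pre-existing flags
        return existing_bools + [self.is_eagle3]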

Changes

Cohort / File(s): summary of changes

C++ attention dispatch gating (cpp/tensorrt_llm/common/attentionOp.cpp)
Added FusedQKVMaskedAttentionDispatchParams::is_eagle3 and used it to gate static activation scaling, FP8 context FMHA scaling, the FP8 paged KV cache path, and E4M3 selection. Propagated the flag in enqueueGeneration and convertMMHAParamsToXQAParams to skip setting related scales under Eagle3.

C++ attention op interface (cpp/tensorrt_llm/common/attentionOp.h)
Added AttentionOp::mIsEagle3 (default false). Updated the data() tuple to include mIsEagle3 after mHasFullAttentionMask, shifting subsequent elements.

THOP bridge updates (cpp/tensorrt_llm/thop/attentionOp.cpp)
Expanded spec_decoding_bool_params from 3 to 4 booleans; maps the 4th to op->mIsEagle3. Updated the validation/error message accordingly.

PyTorch TRT-LLM wrapper (tensorrt_llm/_torch/attention_backend/trtllm.py)
Added plan(..., is_eagle3: bool = False) and stored self.is_eagle3. forward reads is_eagle3 from kwargs and passes it into planning. Extended spec_decoding_bool_params to include is_eagle3.

Speculative model shim (tensorrt_llm/_torch/models/modeling_speculative.py)
Initializes self.is_eagle3 = True in Eagle3Attention.__init__.

Module attention forward (tensorrt_llm/_torch/modules/attention.py)
_attn_impl passes is_eagle3=getattr(self, "is_eagle3", False) to self.attn.forward.

Tests and FP8 target toggling (tests/unittest/_torch/speculative/test_eagle3.py)
Added an fp8_target parameterization and function argument. Selects the FP8 model variant and sets kv_cache_dtype to 'fp8' when enabled. Passes the dtype to KvCacheConfig. (A test-side sketch follows this table.)
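A hedged sketch of the test-side toggle described in the last row (model names and the harness are placeholders, not the actual test code):

import pytest

@pytest.mark.parametrize("fp8_target", [True, False])
def test_llama_eagle3_sketch(fp8_target):
    # Placeholder model names; the real test selects the FP8 checkpoint variant.
    target_model = "llama-3.1-8b-fp8" if fp8_target else "llama-3.1-8b"
    kv_cache_dtype = "fp8" if fp8_target else "auto"
    # The real test passes this dtype into KvCacheConfig.
    config = {"model": target_model, "kv_cache_dtype": kv_cache_dtype}
    assert (config["kv_cache_dtype"] == "fp8") == fp8_target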

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant Module as Attention Module
  participant Wrapper as TrtllmAttentionWrapper
  participant THOP as THOP Binding
  participant CppOp as AttentionOp (C++)
  participant Dispatch as Fused QKV Dispatch

  User->>Module: forward(..., possibly is_eagle3)
  Module->>Wrapper: forward(..., is_eagle3=getattr(self,"is_eagle3", False))
  Wrapper->>Wrapper: plan(..., is_eagle3)
  Wrapper->>THOP: spec_decoding_bool_params[..., is_eagle3]
  THOP->>CppOp: mIsEagle3 = bools[3]
  CppOp->>Dispatch: enqueueGeneration(..., is_eagle3)
  Dispatch->>Dispatch: Gate FP8 scaling/paths if is_eagle3
  Dispatch-->>CppOp: results
  CppOp-->>Wrapper: attention output
  Wrapper-->>Module: output
  Module-->>User: output
  note over Dispatch,CppOp: When is_eagle3=true, skip static activation scaling,<br/>FP8 context FMHA scaling, and related scale settings.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 25.00%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

Title Check: ✅ Passed. The title follows the required format with an NVBugs ID and [fix] type, and it succinctly summarizes the core issue addressed by the PR (the interaction between Eagle3's FP8 KV target model, BF16 draft model, and chunked prefill), making it clear to reviewers.

Description Check: ✅ Passed. The PR description follows the repository template by providing a clear "## Description" section that explains the issue and its resolution, a "## Test Coverage" section listing the new unit test for Eagle3 with FP8 KV target and BF16 draft in chunked prefill, and a "## PR Checklist" confirming review items.


@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #18997 [ run ] triggered by Bot

@PerkzZheng
Collaborator

@DylanChen-NV I am not quite sure I understand the fix here. If we disable FP8 context FMHA directly, does it mean chunked prefill won't work? I think the problem is that we should add clearer debug messages to make sure users disable chunked prefill -> paged context fmha disabled -> fp8 context fmha disabled. See https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/tensorrt_llm/thop/attentionOp.cpp#L606

@PerkzZheng PerkzZheng requested a review from yuxianq September 17, 2025 12:44
@tensorrt-cicd
Collaborator

PR_Github #18997 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14245 completed with status: 'FAILURE'

@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #19010 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19010 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14255 completed with status: 'FAILURE'

# Context MLA uses separate qkv instead of paged_context_fmha
use_paged_context_fmha = False

is_eagle3 = kwargs.get("is_eagle3", False)
Collaborator

Please add an explicit is_eagle3: bool = False parameter instead of reading it from kwargs.
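A minimal sketch of the suggested change (function names here are illustrative, not the real wrapper methods):

# Before (sketch): the flag is fished out of **kwargs.
def plan_before(**kwargs):
    return kwargs.get("is_eagle3", False)

# After (sketch): an explicit, defaulted keyword argument, visible in the
# signature and easier to type-check.
def plan_after(*, is_eagle3: bool = False, **kwargs):
    return is_eagle3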


// FP8 FMHA should be used with fp8 workflow together.
if (mFP8ContextFMHA || mFP8ContextMLA)
if ((mFP8ContextFMHA || mFP8ContextMLA) && !mIsEagle3)
Collaborator

Do we really need mIsEagle3? Can we set mFP8ContextMLA/mPagedContextFMHA to false in cpp/tensorrt_llm/thop/attentionOp.cpp instead, so that the common attention op can stay unchanged?

@DylanChen-NV DylanChen-NV force-pushed the fix/eagle3_fp8_target_model_and_chunked branch from 87ec43c to 224a486 on September 30, 2025 07:25
@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20348 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20348 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15353 completed with status: 'FAILURE'

@DylanChen-NV DylanChen-NV force-pushed the fix/eagle3_fp8_target_model_and_chunked branch from 224a486 to a9d1c08 on October 7, 2025 03:55
@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20706 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20706 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15642 completed with status: 'FAILURE'

@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20719 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20719 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15655 completed with status: 'FAILURE'

@DylanChen-NV DylanChen-NV force-pushed the fix/eagle3_fp8_target_model_and_chunked branch from a9d1c08 to ba293b3 on October 9, 2025 03:40
@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20845 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20953 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20953 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15849 completed with status: 'FAILURE'

@DylanChen-NV DylanChen-NV force-pushed the fix/eagle3_fp8_target_model_and_chunked branch from 5570d3b to a7276c8 on October 10, 2025 06:10
@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20979 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20979 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15866 completed with status: 'FAILURE'

@DylanChen-NV DylanChen-NV force-pushed the fix/eagle3_fp8_target_model_and_chunked branch from a7276c8 to c02bcc4 on October 10, 2025 09:09
@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21017 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21017 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15891 completed with status: 'FAILURE'

Signed-off-by: Dylan Chen <[email protected]>
Signed-off-by: Dylan Chen <[email protected]>
Signed-off-by: Dylan Chen <[email protected]>
Signed-off-by: Dylan Chen <[email protected]>
Signed-off-by: Dylan Chen <[email protected]>
@DylanChen-NV DylanChen-NV force-pushed the fix/eagle3_fp8_target_model_and_chunked branch from c02bcc4 to b81a11a on October 13, 2025 03:40
Signed-off-by: Dylan Chen <[email protected]>
@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21142 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21142 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15969 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

# elif fp8_fmha_for_eagle3:
elif self.has_fp8_kv_cache and not self.has_fp8_qdq and out_scale is not None:
    # Force to use FP8 FMHA for (eagle3 + FP8 target model + BF16/FP16 draft model) in draft layers
    out_dtype = torch.float8_e4m3fn
Collaborator

That said, this is not true for all cases. On Blackwell, the FP8 FMHA kernels can output BF16 directly, in which case we want to avoid explicitly doing the conversion after the attention op.

We had better add a flag or something (it is not clear to me yet exactly what), false by default, so that it won't break other workflows.

mrope_config["mrope_position_deltas"] = mrope_position_deltas

# Be forced to use FP8 FMHA for BF16/FP16 model with FP8 KV cache (e.g. eagle3 + FP8 target model + BF16/FP16 draft model)
forced_to_fp8_fmha = not self.has_quant_scale and self.quant_config is not None and self.quant_config.layer_quant_mode.has_fp8_kv_cache(
Collaborator

Same as above. We can add the conversion kernel inside the attention op (https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/tensorrt_llm/common/attentionOp.cpp), so that if the output dtype is not supported on Hopper/Ampere (which use fmha_v2), we can invoke the conversion kernel there. Exposing the logic outside the attention op would complicate the design, as this is only needed by fmha_v2.

Collaborator Author

Hi @PerkzZheng, I have moved the logic to attentionOp and distinguished the Blackwell and pre-Blackwell behaviors. The CI failure has been fixed locally. Could you please review it again? Thanks.

Signed-off-by: Dylan Chen <[email protected]>
@DylanChen-NV
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21351 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21351 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16118 completed with status: 'FAILURE'

Signed-off-by: Dylan Chen <[email protected]>
// Run the fmha kernel.
mFmhaDispatcher->run(fmhaParams);
if (mFP8FmhaForEagle3 && !mFmhaDispatcher->useTllmGen() && !mFP8AttenOutput)
{
Collaborator

We had better add some comments here to describe the logic.
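For reference, the intent of that condition restated as a small Python sketch (parameter names mirror the member names in the snippet above; this is not the committed code):

# Sketch of the condition guarded above (not the real implementation).
def needs_post_attention_conversion(fp8_fmha_for_eagle3: bool,
                                    uses_tllm_gen_kernels: bool,
                                    fp8_atten_output: bool) -> bool:
    # On Blackwell (TRTLLM-Gen kernels), FP8 FMHA can emit BF16 directly, so no
    # extra conversion is needed. On Hopper/Ampere (fmha_v2), the kernel emits
    # FP8, so a conversion back to BF16/FP16 must follow whenever the module
    # does not expect an FP8 attention output.
    return fp8_fmha_for_eagle3 and not uses_tllm_gen_kernels and not fp8_atten_output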

if mrope_position_deltas is not None:
    mrope_config["mrope_position_deltas"] = mrope_position_deltas

# Be forced to use FP8 FMHA for BF16/FP16 model with FP8 KV cache (e.g. eagle3 + FP8 target model + BF16/FP16 draft model)
Collaborator

This seems too specific (more like a WAR). @yuxianq, do you have any insights about this? Thanks.

Collaborator

I agree that it is too specific. The purpose of this PR is to add a way to explicitly control, from outside the attention op, whether we use FP8 FMHA. How about adding a force_fp8_fmha argument to attention (false by default) and only enabling it in the Eagle3 case? Then we don't need to add new fields to the common AttentionOp.
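A minimal sketch of that suggestion, assuming a force_fp8_fmha knob at the attention-module level (everything except the names mentioned in the comment above is illustrative):

# Illustrative only: an attention-level knob instead of a new AttentionOp field.
def choose_fmha_precision(layer_dtype: str,
                          has_fp8_qdq: bool,
                          has_fp8_kv_cache: bool,
                          force_fp8_fmha: bool = False) -> str:
    # Default path: the layer's own quantization decides.
    if has_fp8_qdq:
        return "fp8"
    # Eagle3 draft layers (BF16/FP16 layer sharing an FP8 KV cache) opt in explicitly.
    if force_fp8_fmha and has_fp8_kv_cache:
        return "fp8"
    return layer_dtype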

Collaborator

That makes sense to me. Thanks!

@DylanChen-NV
Collaborator Author

Closing this PR since it's too specialized. The fix has been moved to #8910.

