
Remove getEnvEnablePDL in favor of enable_pdl parameter #1446


Draft · yongwww wants to merge 3 commits into main

Conversation

@yongwww (Collaborator) commented Aug 10, 2025

📌 Description

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

@gemini-code-assist (bot, Contributor) left a comment

Summary of Changes

Hello @yongwww, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've refactored the mechanism for enabling Programmatic Dependent Launch (PDL) within the codebase. Previously, this feature was controlled by checking a global environment variable. Now, I've removed that environment variable check and instead introduced an explicit enable_pdl boolean parameter. This parameter is passed directly to all relevant CUDA kernel launch functions and associated utility calls across various components, including MoE, quantization, and GEMM operations. This change makes the control over PDL more explicit, localized, and easier to manage programmatically, improving clarity and reducing reliance on global state.
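
For context, PDL (Programmatic Dependent Launch) is applied at kernel-launch time through a CUDA launch attribute. The sketch below shows one way an explicit enable_pdl flag typically reaches the launch; the helper launchWithOptionalPdl is hypothetical and not FlashInfer's actual code.

// Illustrative sketch only, not FlashInfer code: apply a caller-supplied
// enable_pdl flag via CUDA's Programmatic Dependent Launch attribute.
#include <cuda_runtime.h>

template <typename... Args>
cudaError_t launchWithOptionalPdl(void (*kernel)(Args...), dim3 grid, dim3 block,
                                  cudaStream_t stream, bool enable_pdl, Args... args) {
  cudaLaunchConfig_t config = {};
  config.gridDim = grid;
  config.blockDim = block;
  config.stream = stream;

  cudaLaunchAttribute attr = {};
  attr.id = cudaLaunchAttributeProgrammaticStreamSerialization;
  attr.val.programmaticStreamSerializationAllowed = enable_pdl ? 1 : 0;
  config.attrs = &attr;
  config.numAttrs = 1;

  // No getEnvEnablePDL() lookup: the caller decides, per call.
  return cudaLaunchKernelEx(&config, kernel, args...);
}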

Highlights

  • Removal of Global Environment Variable Check: The global getEnvEnablePDL function and its declarations have been completely removed from the codebase.
  • Introduction of Explicit enable_pdl Parameter: A new enable_pdl boolean parameter has been added to numerous CUDA kernel launch functions and related helper functions across various modules, including Mixture-of-Experts (MoE) operations, quantization routines, and GEMM (General Matrix Multiply) implementations.
  • Struct Updates for PDL Control: The TmaWarpSpecializedGroupedGemmInput and TllmGenFmhaRunnerParams structs have been updated to include the enable_pdl flag, allowing this setting to be carried through data structures.
  • Propagation of PDL Control Through Call Stack: The enable_pdl parameter is now explicitly propagated through the entire call stack, from Python wrappers down to the underlying CUDA kernel launches, ensuring direct control over the Programmatic Dependent Launch feature.

@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request refactors the codebase to remove the dependency on the getEnvEnablePDL function, replacing it with an explicit enable_pdl parameter. This is a positive change that improves code clarity and testability by making dependencies explicit. The changes are extensive and mostly mechanical. I've identified a couple of critical issues where the new parameter was not propagated correctly through the call stack, which would result in compilation errors. Please see the detailed comments for suggestions on how to fix these issues.

@@ -3005,7 +3007,7 @@ void CutlassMoeFCRunner<T, WeightType, OutputType, InputType, BackBoneType, Enab
     static_cast<UnfusedGemmOutputType const*>(gemm_output), fc2_fp8_quant, fc1_expert_biases,
     bias_is_broadcast, expert_first_token_offset, num_experts_per_node, inter_size,
     expanded_num_rows, fc1_activation_type, quant_params, use_per_expert_act_scale,
-    fc2_fp4_act_flat, stream);
+    fc2_fp4_act_flat, stream, enable_pdl);
Review comment (Contributor):

critical

The enable_pdl variable is used here, but it's not defined within the scope of gemm1. This will cause a compilation error. This issue also exists for gemm2 and its callees. The enable_pdl parameter needs to be plumbed through several functions.

Specifically, you need to (see the sketch after this list):

  1. Add bool enable_pdl to the function signatures of CutlassMoeFCRunner::gemm1 and CutlassMoeFCRunner::gemm2 in csrc/nv_internal/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h and their implementations in this file.
  2. Pass enable_pdl from runMoe to gemm1 and gemm2.
  3. If BlockScaleFC1 and BlockScaleFC2 are called from gemm1 and gemm2 respectively, you'll need to add bool enable_pdl to their signatures and pass it down as well.
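
A minimal sketch of that plumbing, assuming heavily reduced signatures (the real CutlassMoeFCRunner methods take many more parameters):

// Hypothetical, reduced runner used only to illustrate the call chain;
// not the real CutlassMoeFCRunner from moe_kernels.h.
#include <cuda_runtime.h>

struct MoeRunnerSketch {
  // gemm1/gemm2 take an explicit flag instead of consulting an env var.
  void gemm1(cudaStream_t stream, bool enable_pdl);
  void gemm2(cudaStream_t stream, bool enable_pdl);

  // runMoe forwards the caller's choice unchanged, and gemm1/gemm2 would do
  // the same for their callees (e.g. the BlockScaleFC1/FC2 helpers).
  void runMoe(cudaStream_t stream, bool enable_pdl) {
    gemm1(stream, enable_pdl);
    gemm2(stream, enable_pdl);
  }
};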

@@ -4593,7 +4601,7 @@

     prepareRouting(num_tokens, workspace_ptr_char, stream);
     prepareQuantParams(num_tokens, workspace_ptr_char, stream);
-    prepareTmaWsInputs(num_tokens, workspace_ptr_char, expert_weights, stream);
+    prepareTmaWsInputs(num_tokens, workspace_ptr_char, expert_weights, stream, false);
Review comment (Contributor):

critical

The call to prepareRouting on line 4602 is missing the new enable_pdl argument. The function signature for prepareRouting was updated to include bool enable_pdl, but this call site was not updated, which will cause a compilation error. You should probably pass false here, similar to this call to prepareTmaWsInputs.
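
In other words, something along these lines; the signature below is a shortened, hypothetical stand-in for the real prepareRouting:

// Hypothetical sketch of the fix: the call site must now supply the flag.
#include <cuda_runtime.h>
#include <cstdint>

void prepareRoutingSketch(int64_t num_tokens, char* workspace_ptr_char,
                          cudaStream_t stream, bool enable_pdl);

void callerSketch(int64_t num_tokens, char* workspace_ptr_char, cudaStream_t stream) {
  // Before: prepareRoutingSketch(num_tokens, workspace_ptr_char, stream);  // no longer compiles
  prepareRoutingSketch(num_tokens, workspace_ptr_char, stream, /*enable_pdl=*/false);
}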

@@ -259,7 +259,8 @@ void buildMinLatencyActiveExpertMaps(int* num_active_experts_per_node,
     int const experts_per_token, int const start_expert,
     int const end_expert, int const num_experts_per_node,
     int const cluster_rank, int const cluster_size,
-    int const num_experts_smem, cudaStream_t const stream) {
+    int const num_experts_smem, cudaStream_t const stream,
+    bool enable_pdl = false) {
Review comment (Collaborator):

stream should be the last argument, and please do not set a default value for enable_pdl
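
Illustration of the requested convention, with a shortened hypothetical signature (the real parameter list is much longer):

#include <cuda_runtime.h>

// Requested: enable_pdl is required (no default) and stream stays last.
void buildMinLatencyActiveExpertMapsSketch(int* num_active_experts_per_node,
                                           bool enable_pdl, cudaStream_t stream);

// Discouraged: a trailing defaulted flag placed after the stream.
// void buildMinLatencyActiveExpertMapsSketch(int* num_active_experts_per_node,
//                                            cudaStream_t stream, bool enable_pdl = false);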

@@ -589,7 +590,7 @@ void blockExpertPrefixSum(int const* token_selected_experts, int* blocked_expert
     int* blocked_row_to_unpermuted_row, int64_t const num_tokens,
     int64_t const num_experts_per_node, int64_t const num_experts_per_token,
     int64_t const num_tokens_per_block, int64_t const num_blocks_per_seq,
-    int const start_expert_id, cudaStream_t stream) {
+    int const start_expert_id, cudaStream_t stream, bool enable_pdl = false) {
Review comment (Collaborator):

ditto

@@ -705,7 +706,7 @@ __global__ void globalExpertPrefixSumKernel(int const* blocked_expert_counts,
 void globalExpertPrefixSum(int const* blocked_expert_counts, int* blocked_expert_counts_cumsum,
     int64_t* expert_first_token_offset, int64_t const num_experts_per_node,
     int64_t const num_tokens_per_block, int64_t const num_blocks_per_seq,
-    cudaStream_t stream) {
+    cudaStream_t stream, bool enable_pdl = false) {
Review comment (Collaborator):

ditto

@@ -785,7 +786,7 @@ void mergeExpertPrefixSum(int const* blocked_expert_counts, int const* blocked_e
     int* permuted_token_selected_experts, int* permuted_row_to_unpermuted_row,
     int* unpermuted_row_to_permuted_row, int64_t const num_tokens,
     int64_t const num_experts_per_node, int64_t const num_tokens_per_block,
-    int64_t const num_blocks_per_seq, cudaStream_t stream) {
+    int64_t const num_blocks_per_seq, cudaStream_t stream, bool enable_pdl = false) {
Review comment (Collaborator):

ditto

@@ -842,25 +843,25 @@ void threeStepBuildExpertMapsSortFirstToken(
     int64_t* expert_first_token_offset, int* blocked_expert_counts,
     int* blocked_expert_counts_cumsum, int* blocked_row_to_unpermuted_row, int64_t const num_tokens,
     int64_t const num_experts_per_node, int64_t const num_experts_per_token,
-    int const start_expert_id, cudaStream_t stream) {
+    int const start_expert_id, cudaStream_t stream, bool enable_pdl = false) {
Review comment (Collaborator):

ditto

  sync_check_cuda_error(stream);

  globalExpertPrefixSum(blocked_expert_counts, blocked_expert_counts_cumsum,
      expert_first_token_offset, num_experts_per_node, num_tokens_per_block,
-     num_blocks_per_seq, stream);
+     num_blocks_per_seq, stream, enable_pdl);
Review comment (Collaborator):

ditto

  sync_check_cuda_error(stream);

  mergeExpertPrefixSum(blocked_expert_counts, blocked_expert_counts_cumsum,
      blocked_row_to_unpermuted_row, permuted_token_selected_experts,
      permuted_row_to_unpermuted_row, unpermuted_row_to_permuted_row, num_tokens,
-     num_experts_per_node, num_tokens_per_block, num_blocks_per_seq, stream);
+     num_experts_per_node, num_tokens_per_block, num_blocks_per_seq, stream, enable_pdl);
Review comment (Collaborator):

ditto

@@ -1624,7 +1625,7 @@ void expandInputRowsKernelLauncher(InputActivationsType const* unpermuted_input,
     int64_t* expert_first_token_offset,
     TmaWarpSpecializedGroupedGemmInput::ElementSF* fc1_act_sf_flat,
     TmaWarpSpecializedGroupedGemmInput::ElementSF const* input_sf,
-    void const* prequant_scales, cudaStream_t stream) {
+    void const* prequant_scales, cudaStream_t stream, bool enable_pdl = false) {
Review comment (Collaborator):

ditto

@@ -1910,7 +1911,8 @@ void finalizeMoeRoutingKernelLauncher(
     int const* permuted_row_to_unpermuted_row, int const* token_selected_experts,
     int64_t const* expert_first_token_offset, int64_t const num_rows, int64_t const cols,
     int64_t const experts_per_token, int64_t const num_experts_per_node,
-    MOEParallelismConfig parallelism_config, bool const enable_alltoall, cudaStream_t stream) {
+    MOEParallelismConfig parallelism_config, bool const enable_alltoall, cudaStream_t stream,
+    bool enable_pdl = false) {
Review comment (Collaborator):

ditto

@@ -2231,7 +2233,7 @@ void doActivation(T* output, GemmOutputType const* gemm_result, float const* fp8
     int64_t inter_size, int64_t expanded_num_tokens, ActivationType activation_type,
     QuantParams const& quant_params, bool use_per_expert_act_scale,
     TmaWarpSpecializedGroupedGemmInput::ElementSF* fc2_act_sf_flat,
-    cudaStream_t stream) {
+    cudaStream_t stream, bool enable_pdl = false) {
Review comment (Collaborator):

ditto

@zhyncs (Member) commented Aug 12, 2025

@yongwww quick question: is this PR ready?


4 participants