fix: use float base in pow to avoid Inductor dtype mismatch#627

Merged
kylesayrs merged 3 commits into vllm-project:main from Bias92:fix/inductor-pow-dtype-mismatch on Mar 12, 2026
Conversation

@Bias92 (Contributor) commented Mar 11, 2026

When `torch.compile` with `capture_scalar_outputs=True` traces through `calculate_range`, the integer base in `2**num_bits` causes `num_bits` to enter the Inductor graph as a symbolic int64. Inductor then emits `libdevice.pow(float32, int64)` in the generated Triton code, which fails on the type mismatch.

Changing the base to `2.0**num_bits` makes Python produce a float result directly, so the Inductor graph stays type-consistent and Triton compiles cleanly.

related pytorch issue: pytorch/pytorch#177131
needed by: vllm-project/llm-compressor#2384
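For illustration, a minimal sketch of the change (the real `calculate_range` in compressed-tensors computes the quantization range; the exact signature and body here are assumed, not copied from the repo):

```python
# Hypothetical simplification of calculate_range to show the fix.
def calculate_range(num_bits: int):
    # Before: 2 ** (num_bits - 1) uses an int base, so under
    # torch.compile the exponent enters the Inductor graph as a
    # symbolic int64 and Triton codegen emits
    # libdevice.pow(float32, int64), which fails to compile.
    #
    # After: a float base (2.0) makes Python produce a float result
    # directly, keeping the graph type-consistent.
    q_max = 2.0 ** (num_bits - 1) - 1
    q_min = -(2.0 ** (num_bits - 1))
    return q_min, q_max
```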

Bias92 added a commit to Bias92/llm-compressor that referenced this pull request Mar 11, 2026
compile inner _compute_candidate_error via torch.compile(dynamic=True).
early stopping preserved in outer loop. compile flag added as oneshot arg.

requires: vllm-project/compressed-tensors#627
related: pytorch/pytorch#177131
Signed-off-by: Jaewoo Kim <pewpewplay315@gmail.com>
Bias92 and others added 2 commits March 12, 2026 02:24
Signed-off-by: Bias92 <gongnobi@gmail.com>
Signed-off-by: Bias92 <gongnobi@gmail.com>
@Bias92 force-pushed the fix/inductor-pow-dtype-mismatch branch from 57a0dc4 to f6e0dfb on March 11, 2026 17:24
@kylesayrs (Collaborator) left a comment

Thanks!

@kylesayrs kylesayrs enabled auto-merge (squash) March 12, 2026 22:01
@kylesayrs kylesayrs merged commit 239391f into vllm-project:main Mar 12, 2026
4 checks passed
