Conversation

@huntcsg commented Jan 7, 2026

Problem

When using torch.compile with bfloat16 inference, Qwen image models fail with:

No backend can handle 'apply_rope1': eager: freqs_cis: dtype torch.bfloat16 not in {torch.float32}

Root Cause

  1. Commit 4cd88186 ('Use single apply_rope function across models') refactored all models to use a shared apply_rope1 function and correctly cast Qwen's image_rotary_emb with .to(torch.float32).

  2. Commit c4a6b389 ('Lower ltxv mem usage') inadvertently reverted Qwen's cast to .to(x.dtype) (bfloat16), breaking compatibility with the shared apply_rope1 function.

  3. apply_rope1 uses in-place addcmul_ operations that fail under torch.compile when freqs_cis is bfloat16; see the sketch after this list.
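To make the failure mode concrete, here is a minimal sketch of this rotary-embedding pattern. It is illustrative only, not the actual apply_rope1 source; the function name, shapes, and the dtype assertion are assumptions:

```python
import torch

def apply_rope_sketch(x: torch.Tensor, freqs_cis: torch.Tensor) -> torch.Tensor:
    # Illustrative stand-in for apply_rope1, not the real source. It mirrors the
    # common rotary-embedding pattern: compute the rotation in float32 and fuse
    # the second term with an in-place addcmul_, assuming float32 freqs_cis.
    assert freqs_cis.dtype == torch.float32, (
        f"freqs_cis: dtype {freqs_cis.dtype} not in {{torch.float32}}"
    )
    # Split the last dim into (pairs, 1, 2) so it broadcasts against the
    # (pairs, 2, 2) rotation matrices in freqs_cis.
    x_ = x.to(torch.float32).reshape(*x.shape[:-1], -1, 1, 2)
    out = freqs_cis[..., 0] * x_[..., 0]          # first rotation term
    out.addcmul_(freqs_cis[..., 1], x_[..., 1])   # += second term, in place
    return out.reshape(*x.shape).type_as(x)

# bfloat16 activations are fine as long as freqs_cis itself stays float32;
# passing a bfloat16 freqs_cis would trip the dtype check above, which is
# the shape of the error this PR fixes.
compiled = torch.compile(apply_rope_sketch)
x = torch.randn(2, 16, 64, dtype=torch.bfloat16)
freqs = torch.randn(2, 16, 32, 2, 2, dtype=torch.float32)
print(compiled(x, freqs).dtype)  # torch.bfloat16
```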

Fix

This restores the correct behavior by ensuring image_rotary_emb is always float32 for the Qwen model, matching the expected dtype for apply_rope1.
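Sketched as a before/after (hedged: pe_embedder, ids, and the surrounding expression are placeholders for wherever the Qwen model builds its rotary embedding, not the verbatim source):

```python
# Before (after commit c4a6b389): the embedding follows the activation dtype,
# so it becomes bfloat16 under bf16 inference.
image_rotary_emb = self.pe_embedder(ids).to(x.dtype)

# After (this PR, restoring commit 4cd88186): always float32, matching the
# dtype apply_rope1 expects for freqs_cis.
image_rotary_emb = self.pe_embedder(ids).to(torch.float32)
```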

Testing

Tested with Qwen Image Edit workflows on cloud deployment.

@comfyanonymous (Member) commented

This should be fixed if you update your ComfyUI along with its requirements.txt.
