First of all, thanks for developing and open-sourcing this amazing project! I ran into an issue while integrating it into my workflow and was hoping you could help.
Environment
- Python: 3.10.9
- PyTorch: 2.4.0
- CUDA: 12.4
- GPU: NVIDIA A800 80GB
Description
I'm integrating spas_sage2_attn_meansim_topk_cuda into a text-to-image model (Qwen-Image) by replacing the original attention computation:
if self.use_sparse_attention:
    # Transpose for attention: [B, S, H, D] -> [B, H, S, D]
    joint_query = joint_query.transpose(1, 2)
    joint_key = joint_key.transpose(1, 2)
    joint_value = joint_value.transpose(1, 2)
    joint_hidden_states = spas_sage2_attn_meansim_topk_cuda(
        joint_query,
        joint_key,
        joint_value,
        topk=0.5,
        is_causal=False
    )
    # Transpose for output: [B, H, S, D] -> [B, S, H, D]
    joint_hidden_states = joint_hidden_states.transpose(1, 2)
else:
    # Compute joint attention
    joint_hidden_states = dispatch_attention_fn(
        joint_query,
        joint_key,
        joint_value,
        attn_mask=attention_mask,
        dropout_p=0.0,
        is_causal=False,
        backend=self._attention_backend,
    )

Everything else remains unchanged. The model works perfectly with dispatch_attention_fn, but when using spas_sage2_attn_meansim_topk_cuda:
- NaN values first appear at an unpredictable diffusion step
- Once a NaN appears, it propagates through all subsequent steps
- The final output image is completely black
Any ideas on what might be causing this and how to fix it? Thanks in advance!
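In case it helps with reproducing or localizing the problem, below is a minimal debugging sketch of the kind of guard I could wrap around the call site. Note that attn_with_nan_guard is a hypothetical helper of mine (not part of SpargeAttn or my current code), the dense fallback is plain scaled-dot-product attention rather than dispatch_attention_fn, and the tensors are assumed to already be in the [B, H, S, D] layout used in the snippet above:

import torch
import torch.nn.functional as F

def attn_with_nan_guard(q, k, v, sparse_attn_fn, topk=0.5):
    # Hypothetical debugging wrapper: run the sparse kernel, and if its
    # output contains NaN/Inf, report it and fall back to dense
    # scaled-dot-product attention so the remaining diffusion steps are
    # not poisoned by the bad values.
    out = sparse_attn_fn(q, k, v, topk=topk, is_causal=False)
    if not torch.isfinite(out).all():
        print(f"non-finite attention output: shape={tuple(out.shape)}, dtype={out.dtype}")
        out = F.scaled_dot_product_attention(q, k, v, is_causal=False)
    return out

# Possible usage at the call site shown above:
# joint_hidden_states = attn_with_nan_guard(
#     joint_query, joint_key, joint_value,
#     sparse_attn_fn=spas_sage2_attn_meansim_topk_cuda,
# )

This at least tells me which diffusion step and which attention call produces the first non-finite value, though it obviously does not address the underlying cause.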