CUDA: attention sinks for mma FlashAttention #15157
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Changes from all commits
File filter
Filter by extension
Conversations
Jump to
Diff view
Diff view
There are no files selected for viewing
Changes from all commits:

@@ -3532,7 +3532,8 @@ static bool ggml_backend_cuda_device_supports_op(ggml_backend_dev_t dev, const g
         return op->src[1]->ne[0] == 576 && op->src[2]->ne[0] == 512 && op->src[3] && gqa_ratio % 16 == 0;
     }
     // TODO: more general-purpose attention sink support [TAG_ATTN_SINKS]
-    if (op->src[4] && op->src[0]->ne[0] != 64 && op->src[0]->ne[0] != 128) { // currently only sinks for head_size 64 and 128 are supported
+    if (op->src[4] && !fp16_mma_available(ggml_cuda_info().devices[dev_ctx->device].cc)
+        && op->src[0]->ne[0] != 64 && op->src[0]->ne[0] != 128) {
         return false;
     }
     if (op->src[0]->ne[0] == 192) {

Inline review thread on the new fp16_mma_available check:

Member: Is the

Collaborator (Author): Not for model support because the vector kernels I think cover all currently available models with attention sinks. But this enables running tests with attention sinks and head sizes != 64/128, so I thought it would be better to adjust.
|
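To make the effect of the new check concrete: in ggml's FLASH_ATTN_EXT operator, src[0] is Q (so src[0]->ne[0] is the head size) and src[4] is the optional attention-sinks tensor. The sketch below restates the gating logic in isolation; the helper name and its boolean parameters are hypothetical, not part of ggml-cuda.

```cpp
#include <cstdint>

// Illustrative sketch only (hypothetical helper, not ggml-cuda API).
// With this PR, an op that carries an attention-sinks tensor is accepted for any
// head size as long as the fp16 MMA FlashAttention path is available; without it,
// only the previously supported head sizes 64 and 128 pass.
static bool flash_attn_sinks_supported(bool has_sinks, bool fp16_mma, int64_t head_size) {
    if (!has_sinks) {
        return true;                             // no sink tensor: nothing extra to restrict
    }
    if (fp16_mma) {
        return true;                             // mma FlashAttention kernels handle sinks
    }
    return head_size == 64 || head_size == 128;  // otherwise keep the old restriction
}
```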
|
Follow-up review comment:

@JohannesGaessler I think this breaks Volta, since fp16_mma_available is true but the wmma kernel doesn't yet support attention sinks.

Reply: You're right, this should be turing_mma_available.
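If that suggestion were applied, only the availability check in the hunk above would change; a sketch, not an actual commit from this PR:

```diff
-    if (op->src[4] && !fp16_mma_available(ggml_cuda_info().devices[dev_ctx->device].cc)
+    if (op->src[4] && !turing_mma_available(ggml_cuda_info().devices[dev_ctx->device].cc)
         && op->src[0]->ne[0] != 64 && op->src[0]->ne[0] != 128) {
         return false;
     }
```

The distinction matters because fp16_mma_available includes Volta, which falls back to the wmma FlashAttention kernel without sink support, whereas turing_mma_available restricts the relaxed check to the newer mma kernels this PR extends.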