Are there plans to support sparse attention inference on 4090 and 5090? #184

@helloyongyang

Description

I found that flex_flash_attn only supports sm90 (Hopper). Are there plans to support sparse-attention inference on the RTX 4090 (sm89) and RTX 5090 (sm120)?
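For reference, here is a minimal sketch of how one can verify this constraint on a given machine by querying the GPU's compute capability through PyTorch. The function name `supports_flex_flash_attn` is hypothetical, not part of the flex_flash_attn API; the sm90 threshold reflects the limitation described in this issue.

```python
import torch

def supports_flex_flash_attn() -> bool:
    """Return True if the current GPU is sm90 (Hopper), the only
    architecture flex_flash_attn supports per this issue.

    Hypothetical helper for illustration only, not part of the
    flex_flash_attn API.
    """
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    # sm90 corresponds to compute capability (9, 0). An RTX 4090
    # reports (8, 9) and an RTX 5090 reports (12, 0), so both fail
    # this check.
    return (major, minor) == (9, 0)

if __name__ == "__main__":
    print("flex_flash_attn supported:", supports_flex_flash_attn())
```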
