Actions: flash-algo/flash-sparse-attention

Workflow: Auto assign reviewers and assignees

41 workflow runs


Refactor attention mask and bias handling for efficiency
  Run #16, Pull request #177 opened by LoserCheems, 15h 18m 31s

Bump version to 1.1.8
  Run #15, Pull request #176 opened by LoserCheems, 51s

Increase GitHub Actions build timeout to 6 hours
  Run #14, Pull request #175 opened by LoserCheems, 2m 37s

Remove CUDA architecture '120' for compatibility
  Run #13, Pull request #174 opened by LoserCheems, 10s

Expand build matrix for ARM64 and additional CUDA architectures
  Run #12, Pull request #173 opened by LoserCheems, 1m 12s

Refine build matrix and CUDA architecture specifications
  Run #11, Pull request #172 opened by LoserCheems, 13s

Add support for targeted GPU architecture builds
  Run #10, Pull request #171 opened by LoserCheems, 4m 51s

[FEATURE SUPPORT] Optional mask/bias (3D & 4D)
  Run #9, Pull request #170 opened by LoserCheems, 13s

Implement Unified Sparse Mask Strategy with Block-Level Skipping
  Run #4, Pull request #164 opened by Copilot AI, Action required

Add optional mask & bias inputs with adaptive computation skipping
  Run #3, Pull request #162 opened by Copilot AI, Action required

Update installation requirements and streamline process
  Run #2, Pull request #160 opened by LoserCheems, 10s

Add manual PyPI publishing workflow
  Run #1, Pull request #159 opened by LoserCheems, 11s