Actions: HKUSTDial/flash-sparse-attention

Workflow: Auto assign reviewers and assignees
86 workflow runs

#89: Update CuTe namespace and enhance dependencies (pull request #262 opened by LoserCheems, 7s)
#88: Init cute version (pull request #261 opened by LoserCheems, 8s)
#87: Add return type annotations for attention functions (pull request #260 opened by LoserCheems, 9s)
#86: Update docstrings in attention functions for consistency (pull request #259 opened by LoserCheems, 7s)
#84: Add documentation for MkDocs setup and API reference (pull request #257 opened by LoserCheems, 7s)
#83: Refactor GitHub Actions workflows for package building and publishing (pull request #256 opened by LoserCheems, 1m 45s)
#82: Refactor masking logic in backward kernel functions (pull request #255 opened by LoserCheems, 9s)
#79: Update repository URLs and improve documentation (pull request #252 opened by LoserCheems, 9s)
#77: Update project structure and dependencies (pull request #250 opened by LoserCheems, 11s)
#75: Enhance sparse attention implementation and documentation (pull request #248 opened by LoserCheems, 9s)
#74: [FEATURE] Implement dense attention with masking support (pull request #247 opened by LoserCheems, 8s)
#73: Add softmax threshold parameter for enhanced flexibility (pull request #246 opened by LoserCheems, 9s)
#72: [BUG FIX] Update stride parameters for consistency (pull request #245 opened by LoserCheems, 12s)
#69: Add benchmark functions for Triton attention operations (pull request #242 opened by LoserCheems, 8s)
#68: [BUG FIX] Update launch configuration for RTX Pro 6000 (pull request #241 opened by LoserCheems, 13s)
#67: Refactor backward kernels for clarity and optimization (pull request #240 opened by LoserCheems, 9s)
#66: Enhance forward kernel for block range and masking logic (pull request #239 opened by LoserCheems, 7s)