
[core] use kernels to support _flash_3_hub attention backend #33176
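For context on the PR title: the `kernels` library lets a model pull pre-built attention kernels (such as FlashAttention-3) straight from the Hugging Face Hub instead of requiring a local source build. A minimal sketch of that pattern, assuming the `kernels-community/flash-attn3` Hub repo and its `flash_attn_func` entry point (both are assumptions, not confirmed by this page):

```python
# Minimal sketch: fetching a FlashAttention-3 kernel from the Hub with the
# `kernels` library. Repo id and function name are assumptions; the PR wires
# a kernel like this behind diffusers' "_flash_3_hub" attention backend.
import torch
from kernels import get_kernel

# Downloads (and caches) a binary kernel matching the local torch/CUDA setup.
flash_attn3 = get_kernel("kernels-community/flash-attn3")

# FA3 expects (batch, seq_len, num_heads, head_dim) tensors on a CUDA device.
q = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.bfloat16)
k = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.bfloat16)
v = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.bfloat16)

out = flash_attn3.flash_attn_func(q, k, v, causal=False)
# Some FA3 builds return (out, softmax_lse); unwrap in that case.
if isinstance(out, tuple):
    out = out[0]
```

End users would presumably select this through the attention dispatcher, e.g. `model.set_attention_backend("_flash_3_hub")`; the backend string comes from the PR title, while the method name is an assumption.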

Triggered via: pull request, September 1, 2025 14:08
Status: Cancelled
Total duration: 1m 35s
Artifacts: none

pr_tests.yml

on: pull_request

Jobs:
- check_code_quality (36s)
- check_repository_consistency (32s)
- LoRA tests with PEFT main (9s)
- Matrix: run_fast_tests
- Matrix: run_staging_tests

Annotations (11 errors)
- LoRA tests with PEFT main: Canceling since a higher priority waiting request for Fast tests for PRs-fa3-from-kernels exists
- Fast PyTorch Models & Schedulers CPU tests: Canceling since a higher priority waiting request for Fast tests for PRs-fa3-from-kernels exists
- PyTorch Example CPU tests: Value cannot be null. (Parameter 'ContainerId')
- PyTorch Example CPU tests: The operation was canceled.
- PyTorch Example CPU tests: Canceling since a higher priority waiting request for Fast tests for PRs-fa3-from-kernels exists
- Hub tests for models, schedulers, and pipelines: Value cannot be null. (Parameter 'ContainerId')
- Hub tests for models, schedulers, and pipelines: The operation was canceled.
- Hub tests for models, schedulers, and pipelines: Canceling since a higher priority waiting request for Fast tests for PRs-fa3-from-kernels exists
- Fast tests for PRs: Canceling since a higher priority waiting request for Fast tests for PRs-fa3-from-kernels exists
- Fast PyTorch Pipeline CPU tests: A task was canceled.
- Fast PyTorch Pipeline CPU tests: Canceling since a higher priority waiting request for Fast tests for PRs-fa3-from-kernels exists
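The repeated "Canceling since a higher priority waiting request for Fast tests for PRs-fa3-from-kernels exists" messages are GitHub Actions' concurrency control at work: a newer push to the fa3-from-kernels branch queued a fresh run in the same concurrency group, cancelling this one (the stray "ContainerId" and "operation was canceled" errors look like fallout from that teardown). A sketch of a concurrency stanza that would yield exactly this group name, assuming pr_tests.yml keys the group on workflow name plus branch (the real expression may differ):

```yaml
# Hypothetical stanza for pr_tests.yml; the actual group expression is an
# assumption inferred from the cancellation messages above.
name: Fast tests for PRs

on: pull_request

concurrency:
  # For this PR branch this evaluates to "Fast tests for PRs-fa3-from-kernels",
  # matching the group named in the cancellation annotations.
  group: ${{ github.workflow }}-${{ github.head_ref }}
  # Cancel any in-flight run of the same group when a new one is queued.
  cancel-in-progress: true
```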