
[core] use kernels to support _flash_3_hub attention backend #33177

Triggered via pull request on September 1, 2025 at 14:10
Status: Success
Total duration: 14m 23s
Artifacts: 5
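For context, the PR title refers to the Hugging Face `kernels` library, which can fetch a pre-built FlashAttention-3 kernel from the Hub instead of requiring a local compile. Below is a minimal sketch of that usage pattern; the repo id `kernels-community/vllm-flash-attn3` and the `flash_attn_func` call are assumptions for illustration and are not taken from this workflow run.

```python
# Minimal sketch (assumptions noted above): load a FlashAttention-3 kernel
# from the Hugging Face Hub via the `kernels` library and run one forward pass.
# Requires a CUDA-capable GPU.
import torch
from kernels import get_kernel

# Downloads and imports a pre-built kernel; the repo id is assumed for illustration.
flash_attn3 = get_kernel("kernels-community/vllm-flash-attn3")

# Query/key/value in (batch, seq_len, num_heads, head_dim) layout.
q = torch.randn(1, 128, 8, 64, dtype=torch.bfloat16, device="cuda")
k = torch.randn(1, 128, 8, 64, dtype=torch.bfloat16, device="cuda")
v = torch.randn(1, 128, 8, 64, dtype=torch.bfloat16, device="cuda")

# Function name assumed to mirror the flash-attn API; some variants also
# return the log-sum-exp tensor, so unpack defensively.
result = flash_attn3.flash_attn_func(q, k, v, causal=True)
out = result[0] if isinstance(result, tuple) else result
print(out.shape)
```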

pr_tests.yml (on: pull_request)

Jobs:
- check_code_quality (35s)
- check_repository_consistency (29s)
- LoRA tests with PEFT main (9m 44s)
- Matrix: run_fast_tests
- Matrix: run_staging_tests

Artifacts (produced during runtime)

- pr_main_test_reports: 18.6 KB, sha256:c73dd89840f3db85c1982ee284a259502645a53acb041d2535e06651933e3e91
- pr_pytorch_examples_torch_example_cpu_test_reports: 6.37 KB, sha256:ab2902a9548d262955c5342f6e84478634c52b775bbbe547a93fc2fe77845b4c
- pr_pytorch_models_torch_cpu_models_schedulers_test_reports: 28.1 KB, sha256:6dd208ad8666cf20307039d4f7ef732ae7c7e976e0230700caee46166da969d4
- pr_pytorch_pipelines_torch_cpu_pipelines_test_reports: 84.8 KB, sha256:60b57f18cc46158212ac48e0c53cde793e3c1bfee69f429b58512043c9452dd9
- pr_torch_hub_test_reports: 3.94 KB, sha256:1d03873b123ea6cde5f34447f2496a0b5b5eda6178c036cea945e4bddf615250