
[core] use kernels to support _flash_3_hub attention backend #33101
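For context on the change under test: the "_flash_3_hub" backend lets diffusers fetch a precompiled FlashAttention-3 kernel from the Hugging Face Hub through the `kernels` library, so no local flash-attn 3 build is required. Below is a minimal usage sketch; the checkpoint, prompt, and surrounding setup are illustrative assumptions, not code from this PR.

    # Hedged sketch (assumed usage, not this PR's test code): select the
    # "_flash_3_hub" attention backend, which resolves a precompiled
    # FlashAttention-3 kernel from the Hub via the `kernels` library.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")

    # Route the transformer's attention through the Hub-provided FA3 kernel.
    pipe.transformer.set_attention_backend("_flash_3_hub")

    image = pipe("a tiny astronaut hatching from an egg on the moon").images[0]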

Triggered via pull request on August 28, 2025, 09:32
Status: Failure
Total duration: 14m 6s
Artifacts: 5

Workflow: pr_tests.yml (on: pull_request)
Jobs

check_code_quality (32s)
check_repository_consistency (27s)
LoRA tests with PEFT main (6m 23s)
Matrix: run_fast_tests
Matrix: run_staging_tests

Annotations

2 errors

LoRA tests with PEFT main: Process completed with exit code 1.
LoRA tests with PEFT main: Process completed with exit code 1.

Artifacts

Produced during runtime
Name                                                         Size     Digest
pr_main_test_reports                                         15.1 KB  sha256:107f83d97f679335b18f0cb28210e4cb9d74905258126f068eef4c05b4096f0f
pr_pytorch_examples_torch_example_cpu_test_reports           6.43 KB  sha256:a7f201e393d793f107ac2898eb3c48b68ec17cbbb9d469d1e5fa0170606b3117
pr_pytorch_models_torch_cpu_models_schedulers_test_reports   27.9 KB  sha256:1b1575f486f26075e92731e7be0b6a81dfc03fe6aa678c08c30da9c78a4065b2
pr_pytorch_pipelines_torch_cpu_pipelines_test_reports        85.6 KB  sha256:6ba6974751c1c9fa0a39eff12bf7c8abf8212cd3c61c8fc7fb050f4217f58dd2
pr_torch_hub_test_reports                                    3.94 KB  sha256:60acb169895433470524aeb9db369eabd0728359f834641f5b35f198dacbaa1a