[LoRA] allow big CUDA tests to run properly for LoRA (and others) #25500
This run and associated checks have been archived and are scheduled for deletion.
pr_tests.yml
on: pull_request
Annotations
7 warnings
Artifacts
Produced during runtime
| Name | Size | Digest |
|---|---|---|
| pr_flax_flax_cpu_test_reports (Expired) | 3.32 KB | sha256:99e8ff9a3f00717e919cf64cd5b3311d80f0888abd76939e29128d47bf9c3bb8 |
| pr_pytorch_examples_torch_example_cpu_test_reports (Expired) | 5.54 KB | sha256:415c3d3738a7a774eea84c19c81c14ed5cedf5069dbdf5c25f707b57efffbe4b |
| pr_pytorch_models_torch_cpu_models_schedulers_test_reports (Expired) | 19.9 KB | sha256:71f3d23c06a43af6629918f88ee0f827c22110fdb0f3d169be4f493c0401d636 |
| pr_pytorch_pipelines_torch_cpu_pipelines_test_reports (Expired) | 65.2 KB | sha256:d43c1e6b5dd5b7e249d8b7f7b40a42f34f6166d48a82df762d7ba722b30426a8 |
| pr_torch_hub_test_reports (Expired) | 4.04 KB | sha256:8e59ce491cbe718a2e7b4d157c2c0025371b926eadadb7580098bb182466bd76 |
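Each artifact above ships with a sha256 digest, which can be used to confirm that a downloaded archive is intact. A minimal sketch of that check in Python (the filename `pr_torch_hub_test_reports.zip` is hypothetical; substitute the path of whatever artifact you actually downloaded):

```python
import hashlib


def sha256_digest(path: str, chunk_size: int = 8192) -> str:
    """Compute the sha256 digest of a file, streaming in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    # Format with the "sha256:" prefix used in the artifacts table above.
    return f"sha256:{h.hexdigest()}"


if __name__ == "__main__":
    # Hypothetical local path; compare against the digest listed in the table.
    expected = "sha256:8e59ce491cbe718a2e7b4d157c2c0025371b926eadadb7580098bb182466bd76"
    actual = sha256_digest("pr_torch_hub_test_reports.zip")
    print("match" if actual == expected else "mismatch")
```

A mismatch indicates a truncated or corrupted download rather than a test failure; re-download the artifact before inspecting the reports inside.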