[CI] Improvements to conditional GPU PR tests #10859
Conversation
Thanks for improving this!
    - "src/diffusers/pipeline_loading_utils.py"
    workflow_dispatch:
    env:
      DIFFUSERS_IS_CI: yes
Do we want to run any slow tests here? If so, we should set RUN_SLOW=1.
Since this runs on a PR, there is a high likelihood of multiple pushes to the branch, which would trigger the slow tests many times over. Also, having to wait for them to finish before merging could be a bit overkill. Fast GPU tests with tiny versions of the models should be able to catch serious breaking functionality.
    mixin_cls = ModelTesterMixin
    ...
    elif args.type == "lora":
"others" and "schedulers" aren't covered here.
"others" and "schedulers" are quite fast to run and don't involve downloading any large models. For those types the script prints an empty string, so the full test suite runs for those modules.
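To make the fallback behavior concrete, here is a minimal sketch of the pattern-selection logic described above. The function and mapping names are hypothetical (the real logic lives in the workflow's helper script); the point is that uncovered module types yield an empty string, so no `-k` filter is applied and the full suite runs:

```python
def build_k_expression(module_type, mixin_patterns):
    """Return a pytest -k fragment for a module type.

    `mixin_patterns` maps covered module types to pre-computed
    "or"-joined test names. Uncovered types such as "others" and
    "schedulers" fall through to an empty string, meaning "no filter":
    the full test suite for that module runs.
    """
    return mixin_patterns.get(module_type, "")


# Hypothetical pre-computed patterns for illustration.
patterns = {
    "pipeline": "test_save_load_local or test_to_device",
    "lora": "test_lora_save_load",
}

print(build_k_expression("lora", patterns))        # a filtered run
print(build_k_expression("schedulers", patterns))  # empty: run everything
```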
.github/workflows/pr_tests_gpu.yml
    CUBLAS_WORKSPACE_CONFIG: :16:8
    run: |
      python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
        -s -v -k "not Flax and not Onnx and $TEST_PATTERN" \
I am getting the following for the pipeline type:

test_StableDiffusionMixin_component or test_attention_slicing_forward_pass or test_callback_cfg or test_callback_inputs or test_cfg or test_components_function or test_cpu_offload_forward_pass_twice or test_dict_tuple_outputs_equivalent or test_encode_prompt_works_in_isolation or test_float16_inference or test_group_offloading_inference or test_inference_batch_consistent or test_inference_batch_single_identical or test_layerwise_casting_inference or test_loading_with_incorrect_variants_raises_error or test_loading_with_variants or test_model_cpu_offload_forward_pass or test_num_images_per_prompt or test_pipeline_call_signature or test_save_load_dduf or test_save_load_float16 or test_save_load_local or test_save_load_optional_components or test_sequential_cpu_offload_forward_pass or test_sequential_offload_forward_pass_twice or test_serialization_with_variants or test_to_device or test_to_dtype or test_xformers_attention_forwardGenerator_pass

Is the "or" joining prefix expected? Is it needed for -k to work?
Correct. We only want to run the tests in PipelineTesterMixin; however, since that mixin is inherited by the testing class, we need some way to extract just the tests defined on the mixin itself. This grabs those methods and passes them to -k so that only those test methods run. The "or" separator is needed so that pytest matches any of the listed methods.
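The extraction step can be sketched roughly like this. The classes below are minimal stand-ins, not the real diffusers classes, and the helper name is hypothetical; the idea is to list only the `test_*` methods defined directly on the mixin and join them with `" or "` for pytest's `-k` expression:

```python
class PipelineTesterMixin:
    # Minimal stand-in for diffusers' mixin; only the method names matter.
    def test_save_load_local(self): ...
    def test_to_device(self): ...
    def test_float16_inference(self): ...


class StableDiffusionPipelineFastTests(PipelineTesterMixin):
    # A testing class adds pipeline-specific tests on top of the mixin.
    def test_stable_diffusion_specific(self): ...


def mixin_test_pattern(mixin_cls):
    # vars() sees only attributes defined directly on the class (it skips
    # inherited members), so this picks up exactly the mixin's own tests.
    names = sorted(n for n in vars(mixin_cls) if n.startswith("test_"))
    return " or ".join(names)


print(mixin_test_pattern(PipelineTesterMixin))
# -> test_float16_inference or test_save_load_local or test_to_device
```

Passing the result via `-k` then selects those methods on every class that inherits the mixin, while subclass-specific tests like `test_stable_diffusion_specific` are filtered out.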
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
What does this PR do?
Conditional GPU tests on PRs are a good idea, but I think the current approach might be making the PR CI too noisy and slow. This PR:
Adds a dedicated workflow for conditional GPU tests that
Fixes # (issue)
Before submitting
documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.