
Conversation

@DN6 DN6 (Collaborator) commented Feb 21, 2025

What does this PR do?

Running conditional GPU tests on PRs is a good idea, but I think the current setup makes the PR CI too noisy and slow. This PR adds a dedicated workflow for conditional GPU tests that:

  1. Sets the pipeline fetcher cutoff to a very high number so that only the always-test pipelines run on PRs.
  2. Adds a utility script that fetches the methods of the core testing Mixins (e.g. PipelineTesterMixin, ModelTesterMixin) and injects those methods as filter keys so that only core functionality is tested.
  3. Removes single-file testing (it always downloads large models to run the tests).

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@DN6 DN6 requested review from hlky and sayakpaul February 21, 2025 11:14
@sayakpaul sayakpaul (Member) left a comment


Thanks for improving this!

```yaml
- "src/diffusers/pipeline_loading_utils.py"
workflow_dispatch:
env:
  DIFFUSERS_IS_CI: yes
```
Member

Do we want to run any slow tests here? If so, we should set RUN_SLOW=1.

Collaborator Author

Since it's running on a PR, there is a high likelihood of multiple pushes to the branch, which would end up triggering the slow tests many times. Having to wait for them to finish before merging would also be a bit overkill. Fast GPU tests with tiny versions of models should be able to detect serious breaking functionality.


```python
        mixin_cls = ModelTesterMixin

    elif args.type == "lora":
```
Member

"others" and "schedulers" aren't covered here.

Collaborator Author

"Others" and "schedulers" are quite fast to run and don't involve downloading any large models. For those, the script would print an empty string and the full test suite would run for those modules.

```yaml
  CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
  python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
    -s -v -k "not Flax and not Onnx and $TEST_PATTERN" \
```
Member

I am getting the following for pipeline type:

```text
test_StableDiffusionMixin_component or test_attention_slicing_forward_pass or test_callback_cfg or test_callback_inputs or test_cfg or test_components_function or test_cpu_offload_forward_pass_twice or test_dict_tuple_outputs_equivalent or test_encode_prompt_works_in_isolation or test_float16_inference or test_group_offloading_inference or test_inference_batch_consistent or test_inference_batch_single_identical or test_layerwise_casting_inference or test_loading_with_incorrect_variants_raises_error or test_loading_with_variants or test_model_cpu_offload_forward_pass or test_num_images_per_prompt or test_pipeline_call_signature or test_save_load_dduf or test_save_load_float16 or test_save_load_local or test_save_load_optional_components or test_sequential_cpu_offload_forward_pass or test_sequential_offload_forward_pass_twice or test_serialization_with_variants or test_to_device or test_to_dtype or test_xformers_attention_forwardGenerator_pass
```

Is the "or" joining expected? Is it needed for -k to work?

Collaborator Author

Correct. We only want to run the tests in PipelineTesterMixin, but since that mixin is inherited by the testing class, we need some way to extract just the tests defined on the Mixin itself. The script grabs those method names and passes them to -k so that only those test methods run. The "or" separator is needed so that pytest matches any of the listed methods.
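
For context, the generated pattern might be wired into the workflow roughly like this (the step name and script path here are assumptions for illustration; the actual workflow in the PR may differ):

```yaml
- name: Run core mixin tests only
  run: |
    TEST_PATTERN=$(python utils/fetch_mixin_test_methods.py --type pipeline)
    python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
      -s -v -k "not Flax and not Onnx and $TEST_PATTERN" \
      tests/pipelines
```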

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@DN6 DN6 merged commit cc7b5b8 into main Feb 25, 2025
11 checks passed
@sayakpaul sayakpaul deleted the gpu-test-pr branch February 25, 2025 04:23
