
Conversation

@DN6 (Collaborator) commented on Sep 6, 2024

What does this PR do?

This PR:

  1. Replaces all instances of runwayml/stable-diffusion-v1-5 with Jiali/stable-diffusion-1.5 until we find a permanent host for the SD1.5 checkpoint. Switching to a finetune would require updating all the expected slices, so this is a temporary solution (a minimal sketch of the swap follows this list).
  2. Replaces all instances of runwayml/stable-diffusion-v1-5-inpainting with botp/stable-diffusion-v1-5-inpainting.
  3. Removes a few redundant tests in SD2. We have inference tests that run the same pipeline and only change the scheduler, which doesn't provide much signal.
  4. Updates the LoRA tests so they no longer rely on runwayml/stable-diffusion-v1-5.
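
For reference, here is a minimal sketch of what the swap amounts to in a slow test, assuming the standard diffusers API (illustrative only, not code from this PR):

import torch
from diffusers import StableDiffusionPipeline

# Previously: StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", ...)
pipe = StableDiffusionPipeline.from_pretrained(
    "Jiali/stable-diffusion-1.5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]

Only the checkpoint ID changes; since the mirror is not a finetune, the expected output slices hardcoded in the tests do not need to be updated for this reason.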

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@DN6 DN6 requested review from sayakpaul and yiyixuxu September 7, 2024 08:41
@sayakpaul (Member) left a comment


Some minor comments.

Comment on lines +159 to +160
expected_slice = np.array([0.1199, 0.1171, 0.1229, 0.1188, 0.1210, 0.1147, 0.1260, 0.1346, 0.1152])
assert np.abs(image_slice - expected_slice).max() < 0.003

Probably because of the runner change?
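
For context, the slice checks being discussed compare a small corner patch of the generated image against hardcoded reference values. A hypothetical, self-contained sketch of the pattern (the random array stands in for real pipeline output):

import numpy as np

# Stand-in for a pipeline output of shape (1, 512, 512, 3); in the real test this
# comes from running the Stable Diffusion pipeline.
image = np.random.rand(1, 512, 512, 3).astype(np.float32)

# Take the bottom-right 3x3 patch of the last channel -> 9 values.
image_slice = image[0, -3:, -3:, -1].flatten()

expected_slice = np.array([0.1199, 0.1171, 0.1229, 0.1188, 0.1210, 0.1147, 0.1260, 0.1346, 0.1152])
max_diff = np.abs(image_slice - expected_slice).max()
print(max_diff)  # the test asserts this is below a small tolerance, e.g. 3e-3

Different GPU runners (hardware / CUDA kernels) can shift these values slightly, which is presumably why the reference slices had to be refreshed here.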

assert image.shape == (1, 512, 512, 3)
expected_slice = np.array([0.9983, 1.0, 1.0, 1.0, 1.0, 0.9989, 0.9994, 0.9976, 0.9977])
assert np.abs(image_slice - expected_slice).max() < 3e-3
expected_slice = np.array([0.1509, 0.1492, 0.1531, 0.1485, 0.1501, 0.1465, 0.1581, 0.1690, 0.1499])

Same (runner change effects)?

gc.collect()
torch.cuda.empty_cache()

def test_stable_diffusion_adapter_color(self):

I think testing the different variants isn't a bad idea in general. Instead of deleting them, how about we keep them all but comment out everything except one (perhaps the most downloaded one, depth)? We could do the same for the ControlNet tests.

WDYT?
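
For reference, a rough sketch of what one of these per-variant adapter tests might look like; the checkpoint names, prompt, and conditioning image below are assumptions for illustration, not necessarily what the test suite uses:

import torch
from PIL import Image
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

# Hypothetical depth-variant test body; other variants would swap in a different
# adapter checkpoint and conditioning image.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "Jiali/stable-diffusion-1.5", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

control_image = Image.new("RGB", (512, 512))  # placeholder conditioning (depth) image
image = pipe(
    "a room with wooden floors",
    image=control_image,
    num_inference_steps=25,
).images[0]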

@sayakpaul (Member)

I think the failing tests are unrelated.

@sayakpaul sayakpaul merged commit 1e8cf27 into main Sep 12, 2024
12 of 14 checks passed
@sayakpaul sayakpaul deleted the nightly-precision branch September 12, 2024 14:51
sayakpaul added a commit that referenced this pull request Dec 23, 2024
* update

* update

* update

* update

* update

---------

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>