[fix bug] PixArt inference_steps=1 #11079
Conversation
@bot /style

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@bot /style
```python
try:
    # For LCM one step sampling
    latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).denoised
except:
```
Suggested change:

```diff
-    except:
+    except Exception:
```
At minimum due to ruff. What's the exception type raised here? Is there anything else we can check instead of relying on try/except?
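One alternative to a bare try/except would be an explicit attribute check on the scheduler output. Below is a minimal, self-contained sketch of that idea; `LCMOut`, `DDIMOut`, and `one_step_sample` are mock names for illustration, not the real diffusers classes:

```python
from dataclasses import dataclass
from typing import Optional

# Mock scheduler outputs mirroring the field names discussed below
# (illustrative only; not the real diffusers output dataclasses).
@dataclass
class LCMOut:
    prev_sample: str
    denoised: Optional[str] = None

@dataclass
class DDIMOut:
    prev_sample: str
    pred_original_sample: Optional[str] = None

def one_step_sample(out):
    # Hypothetical attribute check instead of a bare try/except:
    # use `denoised` when the scheduler provides it, otherwise
    # fall back to `pred_original_sample`.
    denoised = getattr(out, "denoised", None)
    return denoised if denoised is not None else out.pred_original_sample

print(one_step_sample(LCMOut("prev", "x0_lcm")))    # x0_lcm
print(one_step_sample(DDIMOut("prev", "x0_ddim")))  # x0_ddim
```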
This one is due to the output difference between

`diffusers/src/diffusers/schedulers/scheduling_lcm.py` (lines 48 to 49 in 1001425):

```python
prev_sample: torch.Tensor
denoised: Optional[torch.Tensor] = None
```

and `diffusers/src/diffusers/schedulers/scheduling_ddim.py` (lines 46 to 47 in 1001425):

```python
prev_sample: torch.Tensor
pred_original_sample: Optional[torch.Tensor] = None
```
What do you think is better to write here?
gentle ping @hlky
With `return_dict=False` a tuple is returned; both `denoised` and `pred_original_sample` are at index 1, so we can do:

```python
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[1]
```
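Why index 1 works for both schedulers: with `return_dict=False`, diffusers schedulers return the output fields as a plain tuple, and the second field of each dataclass is the denoised/predicted-original sample. A self-contained sketch with mock dataclasses (not the real diffusers classes) illustrating this convention:

```python
from dataclasses import dataclass, astuple
from typing import Optional

# Mocks of the two scheduler output dataclasses quoted above
# (field names from the snippets; these are NOT the real classes).
@dataclass
class LCMSchedulerOutput:
    prev_sample: str
    denoised: Optional[str] = None

@dataclass
class DDIMSchedulerOutput:
    prev_sample: str
    pred_original_sample: Optional[str] = None

# Converting each output to a tuple puts the second field at index 1
# in both cases, so a single `[1]` covers LCM and DDIM alike.
lcm_tuple = astuple(LCMSchedulerOutput("prev", "denoised_x0"))
ddim_tuple = astuple(DDIMSchedulerOutput("prev", "pred_x0"))

print(lcm_tuple[1])   # denoised_x0
print(ddim_tuple[1])  # pred_x0
```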
That makes sense. Updated.

… which works for both LCM and DMD
Thanks @lawrence-cj
Failing tests are unrelated.
This PR fixes the bug when running PixArt inference with `num_inference_steps=1`.