
Conversation

@sayakpaul (Member)

What does this PR do?

Adds LoRA support to Mochi-1. Needed for fine-tuning.
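For context, a minimal sketch of the workflow this unlocks; the LoRA repo id below is a hypothetical placeholder (no trained checkpoint is referenced in this PR), and the entry points come from the `LoraBaseMixin` API:

```python
# Minimal usage sketch. "your-username/mochi-lora" is a hypothetical
# placeholder; no trained Mochi LoRA checkpoint is referenced in this PR.
import torch
from diffusers import MochiPipeline

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("your-username/mochi-lora", adapter_name="mochi-lora")
pipe.set_adapters(["mochi-lora"], [0.9])
```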

```diff
 class CogVideoXLoraLoaderMixin(LoraBaseMixin):
     r"""
-    Load LoRA layers into [`CogVideoXTransformer3DModel`]. Specific to [`CogVideoX`].
+    Load LoRA layers into [`CogVideoXTransformer3DModel`]. Specific to [`CogVideoXPipeline`].
```
@sayakpaul (Member, Author)

Unrelated change but doesn't hurt I guess.

Contributor:

Looks okay to fix here

```python
    super().unfuse_lora(components=components)


class Mochi1LoraLoaderMixin(LoraBaseMixin):
```
@sayakpaul (Member, Author)

A copy-paste of the Cog LoRA loader classes, indicated by the "Copied from ..." comments.
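For readers skimming the diff, a condensed sketch of the structure being added; method bodies are elided and the details are approximate, following the Cog classes:

```python
# Condensed, approximate sketch of the new mixin; bodies elided.
class Mochi1LoraLoaderMixin(LoraBaseMixin):
    r"""
    Load LoRA layers into [`MochiTransformer3DModel`]. Specific to [`MochiPipeline`].
    """

    _lora_loadable_modules = ["transformer"]
    transformer_name = "transformer"

    @classmethod
    # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.lora_state_dict
    def lora_state_dict(cls, pretrained_model_name_or_path_or_dict, **kwargs):
        ...

    # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.load_lora_weights
    def load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name=None, **kwargs):
        ...

    # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.unfuse_lora
    def unfuse_lora(self, components=["transformer"], **kwargs):
        super().unfuse_lora(components=components)
```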

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

```python
def test_simple_inference_with_text_denoiser_lora_unfused(self):
    super().test_simple_inference_with_text_denoiser_lora_unfused(expected_atol=9e-3)

@unittest.skip("Not supported in Mochi.")
```
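The pattern at play: the shared LoRA test suite exposes tolerance knobs that model-specific test classes can loosen, and unsupported features are skipped outright. A simplified, self-contained sketch (class and test names are illustrative, not the actual suite):

```python
import unittest

import torch


class LoraTestsBase:
    # Shared test; subclasses may pass a looser tolerance for noisier models.
    def test_simple_inference_with_text_denoiser_lora_unfused(self, expected_atol=1e-4):
        fused = torch.full((4,), 0.500)    # stand-in for fused-LoRA output
        unfused = torch.full((4,), 0.505)  # stand-in for unfused-LoRA output
        self.assertTrue(torch.allclose(fused, unfused, atol=expected_atol))


class MochiLoraTests(LoraTestsBase, unittest.TestCase):
    # Mochi needs a looser tolerance than the default.
    def test_simple_inference_with_text_denoiser_lora_unfused(self):
        super().test_simple_inference_with_text_denoiser_lora_unfused(expected_atol=9e-3)

    @unittest.skip("Not supported in Mochi.")  # hypothetical skipped test name
    def test_text_encoder_lora(self):
        pass
```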
Contributor:

Looks good, since we're not supporting T5 fine-tuning!

@a-r-r-o-w (Contributor) left a comment:

Looks great! I don't see any major differences in the LoRA mixin, so I think everything should be good to merge once the finetuning script is working. Going to take a look at it now, thanks!

@sayakpaul (Member, Author)

> so I think everything should be good to merge once the finetuning script is working

@a-r-r-o-w thanks! Do you think it could make sense to merge regardless, with the tests, etc.? This would also allow others to experiment from main. I am okay otherwise too.

@sayakpaul (Member, Author)

The failing tests are related to the fact that our CI is currently on PyTorch 2.4. #9961 should fix this. Additional info:
https://huggingface.slack.com/archives/C014N4749J9/p1732002990541779

@sayakpaul (Member, Author)

Seems like the NaN tests are failing with PyTorch 2.5 when using CPU as the test device; it doesn't happen on PyTorch 2.4, and all the tests pass when using a GPU even with PyTorch 2.5. Will investigate deeper.

@yiyixuxu can I merge this PR or should we first investigate the NaN issue? It's failing for other PRs too.

Cc: @BenjaminBossan as well. It's the safe_merge argument in the merge() method of peft. Does this sound familiar?
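For reference, a minimal sketch of what safe_merge guards in peft's LoRA merge, paraphrasing its behavior rather than the actual source: the merged weight is checked for non-finite values before being committed, so a NaN produced during fusion raises instead of silently corrupting the layer.

```python
import torch


def merge_lora_weight(base_weight: torch.Tensor, delta: torch.Tensor, safe_merge: bool = False) -> torch.Tensor:
    """Paraphrase of peft's safe_merge check, not the actual implementation."""
    merged = base_weight + delta
    # With safe_merge=True, non-finite values abort the merge with an error.
    if safe_merge and not torch.isfinite(merged).all():
        raise ValueError("NaNs detected in the merged weights; the adapter seems to be broken.")
    return merged
```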

@yiyixuxu (Collaborator)

> @yiyixuxu can I merge this PR or should we first investigate the NaN issue? It's failing for other PRs too.

@a-r-r-o-w wants to test it with the fine-tuning script first, no?

@sayakpaul (Member, Author)

> @a-r-r-o-w wants to test it with the fine-tuning script first, no?

Well, that is currently being done by me in huggingface/finetrainers#90, with Aryan's reviews. However, I don't think the LoRA implementation depends on the fine-tuning experiments, as it's unlikely to change (and it's similar to CogVideoX). But I'm okay to wait and will defer to @a-r-r-o-w.

@a-r-r-o-w (Contributor)

I think it should be okay to merge without waiting for the finetuning script to work or for an available checkpoint, since it will unblock others trying to work on finetuning scripts based on Diffusers (I don't know if there is anyone apart from us yet, though). Thanks!

@yiyixuxu merged commit 805aa93 into main on Nov 20, 2024 (16 of 18 checks passed) and deleted the mochi-1-lor branch on Nov 20, 2024 at 22:07.

@sayakpaul added a commit that referenced this pull request on Dec 23, 2024:

* feat: add lora support to Mochi-1.