Conversation

@kentdan3msu
Contributor

What does this PR do?

This PR changes the behavior of the `PeftAdapterMixin` interface class so that LoRA modules loaded with `load_lora_adapter()` set the `self._hf_peft_config_loaded` flag to `True` after successful injection. This change fixes issue #11148.

Previously, the `self._hf_peft_config_loaded` flag was set only in `add_adapter()` and `add_adapters()`, not in `load_lora_adapter()`. Some functions in the PEFT integration, such as `enable_adapters()` and `disable_adapters()`, require this flag to be set before they will work properly. This change should bring `load_lora_adapter()` to parity with `add_adapter()`/`add_adapters()`.
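
For illustration, this is roughly the failure mode being fixed (a hedged sketch: the LoRA file path is a placeholder, and `SD3Transformer2DModel` is simply the class where I hit the issue):

```python
from diffusers import SD3Transformer2DModel

transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", subfolder="transformer"
)
# Load a LoRA through the PeftAdapterMixin path (placeholder file path).
transformer.load_lora_adapter("path/to/lora.safetensors")

# Before this PR this raised, because load_lora_adapter() never set
# _hf_peft_config_loaded; after this PR it works, matching add_adapter().
transformer.disable_adapters()
```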

Since the `PeftAdapterMixin` class is used by many classes, such as `SD3Transformer2DModel` (which I was using when I encountered the original issue), this could potentially fix LoRA loading/enabling/disabling issues in many classes across the diffusers library.

A small note: in `add_adapter()`/`add_adapters()`, the flag is set before the module is actually injected. In this PR I set it afterwards, so that a failed injection does not leave the flag set. If there is a good reason to set the flag earlier, I can update the PR accordingly.
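
For concreteness, a minimal sketch of that ordering, using PEFT's `inject_adapter_in_model` helper (this is not the actual diffusers source; `ToyPeftMixin` and `ToyModel` are stand-ins):

```python
import torch
from peft import LoraConfig, inject_adapter_in_model


class ToyPeftMixin:
    """Stand-in for PeftAdapterMixin, reduced to the flag-ordering question."""

    _hf_peft_config_loaded = False

    def load_lora_adapter(self, lora_config: LoraConfig, adapter_name: str = "default"):
        # Inject first; flip the flag only once injection has succeeded, so a
        # failed injection leaves the mixin in its previous state.
        inject_adapter_in_model(lora_config, self, adapter_name)
        self._hf_peft_config_loaded = True


class ToyModel(torch.nn.Module, ToyPeftMixin):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)


model = ToyModel()
model.load_lora_adapter(LoraConfig(r=4, target_modules=["linear"]))
assert model._hf_peft_config_loaded  # enable/disable_adapters() can now rely on this
```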

Who can review?

@sayakpaul

Sets the `_hf_peft_config_loaded` flag if a LoRA is successfully loaded in `load_lora_adapter`. Fixes huggingface/diffusers#11148.
@sayakpaul
Member

Thanks very much!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul sayakpaul merged commit de6a88c into huggingface:main Mar 26, 2025