
Commit 93bff6c

Merge branch 'main' into tests-encode-prompt

2 parents 7d7599f + f10d3c6

File tree: 11 files changed, +2386 −11 lines

docs/source/en/api/loaders/lora.md

Lines changed: 5 additions & 0 deletions

@@ -23,6 +23,7 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
 - [`LTXVideoLoraLoaderMixin`] provides similar functions for [LTX-Video](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
 - [`SanaLoraLoaderMixin`] provides similar functions for [Sana](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana).
 - [`HunyuanVideoLoraLoaderMixin`] provides similar functions for [HunyuanVideo](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video).
+- [`Lumina2LoraLoaderMixin`] provides similar functions for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).
 - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
 - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, and unload LoRAs, and more.

@@ -68,6 +69,10 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse

 [[autodoc]] loaders.lora_pipeline.HunyuanVideoLoraLoaderMixin

+## Lumina2LoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.Lumina2LoraLoaderMixin
+
 ## AmusedLoraLoaderMixin

 [[autodoc]] loaders.lora_pipeline.AmusedLoraLoaderMixin
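As a usage sketch for the new mixin, assuming the Lumina2 pipeline class is `Lumina2Pipeline` (the exact class name and checkpoint id here are assumptions, and the methods shown are the fuse/unfuse/unload utilities documented above):

```python
import torch
from diffusers import Lumina2Pipeline  # assumed pipeline class exposing Lumina2LoraLoaderMixin

# load the base model (repo id taken from the training example below)
pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")

# load trained LoRA weights, optionally fuse them into the base weights
# for faster inference, then undo the fusion / remove the adapter again
pipe.load_lora_weights("trained-lumina2-lora")
pipe.fuse_lora()
pipe.unfuse_lora()
pipe.unload_lora_weights()
```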
Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@
# DreamBooth training example for Lumina2

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3~5) images of a subject.

The `train_dreambooth_lora_lumina2.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).

This will also allow us to push the trained model parameters to the Hugging Face Hub platform.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run

```bash
pip install -r requirements_sana.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or, for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or, if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

When running `accelerate config`, specifying torch compile mode as True can bring dramatic speedups.

Note also that we use the PEFT library as the backend for LoRA training, so make sure `peft>=0.14.0` is installed in your environment.
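You can quickly verify the installed version with:

```bash
python -c "import peft; print(peft.__version__)"
```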
### Dog toy example

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

Now, we can launch training using:
```bash
export MODEL_NAME="Alpha-VLLM/Lumina-Image-2.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-lumina2-lora"

accelerate launch train_dreambooth_lora_lumina2.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```

To use `push_to_hub`, make sure you're logged in to your Hugging Face account:

```bash
huggingface-cli login
```

To better track our training experiments, we're using the following flags in the command above:

* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
* `validation_prompt` and `validation_epochs` allow the script to do a few validation inference runs. This lets us qualitatively check whether the training is progressing as expected.
## Notes

Additionally, we welcome you to explore the following CLI arguments (a combined example appears after the memory options below):

* `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string; e.g., "to_k,to_q,to_v" will result in LoRA training of the attention layers only.
* `--system_prompt`: A custom system prompt to provide additional personality to the model.
* `--max_sequence_length`: Maximum sequence length to use for text embeddings.
We provide several options for memory optimization:

* `--offload`: When enabled, we will offload the text encoder and VAE to the CPU when they are not being used.
* `--cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.
* `--use_8bit_adam`: When enabled, we will use the 8-bit version of AdamW provided by the `bitsandbytes` library.
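For example, on a memory-constrained GPU you might combine these flags (together with the `--lora_layers` option above) in the training command; this is a sketch, and the values below are illustrative rather than recommended settings:

```bash
accelerate launch train_dreambooth_lora_lumina2.py \
  --pretrained_model_name_or_path="Alpha-VLLM/Lumina-Image-2.0" \
  --instance_data_dir="dog" \
  --output_dir="trained-lumina2-lora" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --max_train_steps=500 \
  --lora_layers="to_k,to_q,to_v" \
  --offload \
  --cache_latents \
  --use_8bit_adam
```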
Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2) of the Lumina2 pipeline to know more about the model.
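Once training is done, you can give the LoRA a try. A minimal inference sketch, assuming the pipeline class is `Lumina2Pipeline` (the exact class name is an assumption) and the adapter was saved or pushed as `trained-lumina2-lora`:

```python
import torch
from diffusers import Lumina2Pipeline  # pipeline class name is an assumption

# load the base model, then attach the trained LoRA adapter
pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("trained-lumina2-lora")

# generate with the validation prompt from the training command
image = pipe(prompt="A photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```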
Lines changed: 206 additions & 0 deletions
@@ -0,0 +1,206 @@
# coding=utf-8
# Copyright 2024 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import sys
import tempfile

import safetensors.torch


sys.path.append("..")
from test_examples_utils import ExamplesTestsAccelerate, run_command  # noqa: E402


logging.basicConfig(level=logging.DEBUG)

logger = logging.getLogger()
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)


class DreamBoothLoRALumina2(ExamplesTestsAccelerate):
    instance_data_dir = "docs/source/en/imgs"
    pretrained_model_name_or_path = "hf-internal-testing/tiny-lumina2-pipe"
    script_path = "examples/dreambooth/train_dreambooth_lora_lumina2.py"
    transformer_layer_type = "layers.0.attn.to_k"

    def test_dreambooth_lora_lumina2(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path {self.pretrained_model_name_or_path}
                --instance_data_dir {self.instance_data_dir}
                --resolution 32
                --train_batch_size 1
                --gradient_accumulation_steps 1
                --max_train_steps 2
                --learning_rate 5.0e-04
                --scale_lr
                --lr_scheduler constant
                --lr_warmup_steps 0
                --output_dir {tmpdir}
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)
            # save_pretrained smoke test
            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))

            # make sure the state_dict has the correct naming in the parameters.
            lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
            is_lora = all("lora" in k for k in lora_state_dict.keys())
            self.assertTrue(is_lora)

            # when not training the text encoder, all the parameters in the state dict should start
            # with `"transformer"` in their names.
            starts_with_transformer = all(key.startswith("transformer") for key in lora_state_dict.keys())
            self.assertTrue(starts_with_transformer)

    def test_dreambooth_lora_latent_caching(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path {self.pretrained_model_name_or_path}
                --instance_data_dir {self.instance_data_dir}
                --resolution 32
                --train_batch_size 1
                --gradient_accumulation_steps 1
                --max_train_steps 2
                --cache_latents
                --learning_rate 5.0e-04
                --scale_lr
                --lr_scheduler constant
                --lr_warmup_steps 0
                --output_dir {tmpdir}
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)
            # save_pretrained smoke test
            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))

            # make sure the state_dict has the correct naming in the parameters.
            lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
            is_lora = all("lora" in k for k in lora_state_dict.keys())
            self.assertTrue(is_lora)

            # when not training the text encoder, all the parameters in the state dict should start
            # with `"transformer"` in their names.
            starts_with_transformer = all(key.startswith("transformer") for key in lora_state_dict.keys())
            self.assertTrue(starts_with_transformer)

    def test_dreambooth_lora_layers(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path {self.pretrained_model_name_or_path}
                --instance_data_dir {self.instance_data_dir}
                --resolution 32
                --train_batch_size 1
                --gradient_accumulation_steps 1
                --max_train_steps 2
                --cache_latents
                --learning_rate 5.0e-04
                --scale_lr
                --lora_layers {self.transformer_layer_type}
                --lr_scheduler constant
                --lr_warmup_steps 0
                --output_dir {tmpdir}
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)
            # save_pretrained smoke test
            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))

            # make sure the state_dict has the correct naming in the parameters.
            lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
            is_lora = all("lora" in k for k in lora_state_dict.keys())
            self.assertTrue(is_lora)

            # when not training the text encoder, all the parameters in the state dict should start
            # with `"transformer"` in their names. In this test, only params of
            # `self.transformer_layer_type` should be in the state dict.
            starts_with_transformer = all(self.transformer_layer_type in key for key in lora_state_dict)
            self.assertTrue(starts_with_transformer)
    def test_dreambooth_lora_lumina2_checkpointing_checkpoints_total_limit(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path={self.pretrained_model_name_or_path}
                --instance_data_dir={self.instance_data_dir}
                --output_dir={tmpdir}
                --resolution=32
                --train_batch_size=1
                --gradient_accumulation_steps=1
                --max_train_steps=6
                --checkpoints_total_limit=2
                --checkpointing_steps=2
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)
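            # checkpointing every 2 steps for 6 steps yields checkpoints 2, 4, and 6;
            # with a total limit of 2, the oldest (checkpoint-2) should have been deleted.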
            self.assertEqual(
                {x for x in os.listdir(tmpdir) if "checkpoint" in x},
                {"checkpoint-4", "checkpoint-6"},
            )

    def test_dreambooth_lora_lumina2_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path={self.pretrained_model_name_or_path}
                --instance_data_dir={self.instance_data_dir}
                --output_dir={tmpdir}
                --resolution=32
                --train_batch_size=1
                --gradient_accumulation_steps=1
                --max_train_steps=4
                --checkpointing_steps=2
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)

            self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4"})

            resume_run_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path={self.pretrained_model_name_or_path}
                --instance_data_dir={self.instance_data_dir}
                --output_dir={tmpdir}
                --resolution=32
                --train_batch_size=1
                --gradient_accumulation_steps=1
                --max_train_steps=8
                --checkpointing_steps=2
                --resume_from_checkpoint=checkpoint-4
                --checkpoints_total_limit=2
                --max_sequence_length 16
                """.split()

            resume_run_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + resume_run_args)
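            # resuming from checkpoint-4 and training to step 8 with a total limit of 2
            # should remove the older checkpoints, leaving only the two most recent ones.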
            self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-6", "checkpoint-8"})
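These example tests are typically run with pytest from the repository root; a sketch, assuming the test module lives at `examples/dreambooth/test_dreambooth_lora_lumina2.py` (the file path is an assumption):

```bash
pytest examples/dreambooth/test_dreambooth_lora_lumina2.py -v
```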
