Commit 5130cc3

Merge branch 'main' into allow-device-placement-bnb

2 parents: 2ddcbf1 + 074e123

File tree: 5 files changed (+22, −7 lines)


docs/source/en/api/pipelines/flux.md (12 additions, 0 deletions)

@@ -333,3 +333,15 @@ image.save("flux-fp8-dev.png")
 [[autodoc]] FluxControlImg2ImgPipeline
 - all
 - __call__
+
+## FluxPriorReduxPipeline
+
+[[autodoc]] FluxPriorReduxPipeline
+- all
+- __call__
+
+## FluxFillPipeline
+
+[[autodoc]] FluxFillPipeline
+- all
+- __call__

examples/dreambooth/README_flux.md (1 addition, 1 deletion)

@@ -118,7 +118,7 @@ accelerate launch train_dreambooth_flux.py \
 
 To better track our training experiments, we're using the following flags in the command above:
 
-* `report_to="wandb"` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`.
+* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
 * `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.
 
 > [!NOTE]
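The `report_to` flag discussed in this hunk is a plain command-line argument to the training script. As a rough illustration only (this is a hypothetical, minimal sketch, not the actual argument setup in `train_dreambooth_flux.py`), such a flag might be wired up like this:

```python
import argparse

# Hypothetical, simplified version of the training script's CLI parsing.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--report_to",
    type=str,
    default="tensorboard",
    help='Logging integration; "wandb" requires `pip install wandb` and a prior `wandb login`.',
)

# Simulate invoking the script with `--report_to wandb`.
args = parser.parse_args(["--report_to", "wandb"])
print(args.report_to)  # wandb
```

Passing `wandb` here is what routes the run metrics to Weights and Biases instead of the default TensorBoard logger.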

examples/dreambooth/README_sd3.md (1 addition, 1 deletion)

@@ -105,7 +105,7 @@ accelerate launch train_dreambooth_sd3.py \
 
 To better track our training experiments, we're using the following flags in the command above:
 
-* `report_to="wandb"` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`.
+* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
 * `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.
 
 > [!NOTE]

examples/dreambooth/README_sdxl.md (1 addition, 1 deletion)

@@ -99,7 +99,7 @@ accelerate launch train_dreambooth_lora_sdxl.py \
 
 To better track our training experiments, we're using the following flags in the command above:
 
-* `report_to="wandb"` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`.
+* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
 * `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.
 
 Our experiments were conducted on a single 40GB A100 GPU.

examples/dreambooth/train_dreambooth_lora_sd3.py (7 additions, 4 deletions)

@@ -1294,10 +1294,13 @@ def save_model_hook(models, weights, output_dir):
         for model in models:
             if isinstance(model, type(unwrap_model(transformer))):
                 transformer_lora_layers_to_save = get_peft_model_state_dict(model)
-            elif isinstance(model, type(unwrap_model(text_encoder_one))):
-                text_encoder_one_lora_layers_to_save = get_peft_model_state_dict(model)
-            elif isinstance(model, type(unwrap_model(text_encoder_two))):
-                text_encoder_two_lora_layers_to_save = get_peft_model_state_dict(model)
+            elif isinstance(model, type(unwrap_model(text_encoder_one))):  # or text_encoder_two
+                # both text encoders are of the same class, so we check hidden size to distinguish between the two
+                hidden_size = unwrap_model(model).config.hidden_size
+                if hidden_size == 768:
+                    text_encoder_one_lora_layers_to_save = get_peft_model_state_dict(model)
+                elif hidden_size == 1280:
+                    text_encoder_two_lora_layers_to_save = get_peft_model_state_dict(model)
             else:
                 raise ValueError(f"unexpected save model: {model.__class__}")
 
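The fix in this hunk works around the fact that both SD3 text encoders share the same class, so `isinstance` alone cannot tell them apart; the code dispatches on `config.hidden_size` instead (768 vs. 1280). A minimal, self-contained sketch of that pattern, using stand-in classes rather than the real diffusers models and peft helpers:

```python
from types import SimpleNamespace

# Stand-in for the real text encoder models: both encoders share one class,
# so isinstance() cannot distinguish them -- only their config can.
class TextEncoder:
    def __init__(self, hidden_size):
        self.config = SimpleNamespace(hidden_size=hidden_size)
        self.state = {"lora.weight": f"weights-{hidden_size}"}

def get_lora_state_dict(model):
    # Placeholder for peft's get_peft_model_state_dict.
    return model.state

text_encoder_one = TextEncoder(hidden_size=768)   # CLIP-L-sized encoder
text_encoder_two = TextEncoder(hidden_size=1280)  # CLIP-G-sized encoder

saved = {}
for model in [text_encoder_two, text_encoder_one]:
    if isinstance(model, TextEncoder):
        # Same class for both encoders: dispatch on config.hidden_size.
        if model.config.hidden_size == 768:
            saved["text_encoder_one"] = get_lora_state_dict(model)
        elif model.config.hidden_size == 1280:
            saved["text_encoder_two"] = get_lora_state_dict(model)
    else:
        raise ValueError(f"unexpected save model: {model.__class__}")

print(sorted(saved))  # ['text_encoder_one', 'text_encoder_two']
```

Keying on a config attribute like this is fragile if both encoders ever share a hidden size, but for SD3's fixed pair of encoders the two values are distinct, which is what makes the patch above safe.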
