`examples/advanced_diffusion_training/README_flux.md` (+11 lines)
When running `accelerate config`, setting torch compile mode to True can give dramatic speedups.
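If you want to enable it, you can answer the torch compile/dynamo questions during the interactive prompts, or edit the generated config file afterwards. A minimal sketch (the exact prompt wording, config path, and key names vary across `accelerate` versions, so treat them as an illustration):

```bash
# Run the interactive configuration and answer "yes" to the torch dynamo/compile
# question, picking a backend such as "inductor".
accelerate config

# Inspect the generated config; the compile settings typically live under a
# dynamo section, e.g.:
#   dynamo_config:
#     dynamo_backend: INDUCTOR
cat ~/.cache/huggingface/accelerate/default_config.yaml
```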
Note also that we use the PEFT library as the backend for LoRA training; make sure you have `peft>=0.6.0` installed in your environment.
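You can install or upgrade it with:

```bash
pip install -U "peft>=0.6.0"
```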
Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
```bash
huggingface-cli login
```
This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
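If you prefer not to paste the token interactively (for example in a script or on a remote machine), recent versions of `huggingface_hub` also accept it directly on the command line; a small sketch, assuming your token is stored in the `HF_TOKEN` environment variable:

```bash
# Non-interactive login; HF_TOKEN is assumed to already hold your access token.
huggingface-cli login --token "$HF_TOKEN"
```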
> [!NOTE]
> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
> `pip install wandb`
> Alternatively, you can use other logging tools, or train without reporting, by modifying the `--report_to="wandb"` flag.
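If you do report runs to `wandb`, you will also need to authenticate once before launching training; a minimal sketch:

```bash
# One-time authentication; paste your API key from https://wandb.ai/authorize when prompted.
wandb login
```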
### Target Modules
When LoRA was first adapted from language models to diffusion models, it was applied to the cross-attention layers in the UNet that relate the image representations to the prompts that describe them.
More recently, SOTA text-to-image diffusion models replaced the UNet with a diffusion Transformer (DiT). With this change, we may also want to explore