
Commit b3191c6

add to readme about hf login and wandb installation to address #10142 (comment)
1 parent b572635

File tree

2 files changed: +22 −0 lines


examples/advanced_diffusion_training/README.md

Lines changed: 11 additions & 0 deletions
@@ -67,6 +67,17 @@ write_basic_config()
 When running `accelerate config`, if we set torch compile mode to True, there can be dramatic speedups.
 Note also that we use the PEFT library as the backend for LoRA training; make sure to have `peft>=0.6.0` installed in your environment.
 
+Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
+```bash
+huggingface-cli login
+```
+This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
+
+> [!NOTE]
+> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
+> `pip install wandb`
+> Alternatively, you can use other tools / train without reporting by modifying the flag `--report_to="wandb"`.
+
 ### Pivotal Tuning
 **Training with text encoder(s)**
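For completeness: the same login can also be done non-interactively, which is handy on headless machines or in CI. A minimal sketch, assuming the token from your settings page is exported in an environment variable named `HF_TOKEN` (the variable name here is just for illustration):

```bash
# Log in without the interactive prompt by passing the token directly.
# HF_TOKEN is an illustrative variable holding a token from settings/tokens.
huggingface-cli login --token "$HF_TOKEN"
```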

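On the torch compile remark in the context lines above: besides answering the `accelerate config` prompt, the same behavior can be requested per launch rather than relying on the saved config. A minimal sketch, assuming an `accelerate`-launched run (the script name is a placeholder, not from this commit):

```bash
# Request torch compile through the dynamo backend at launch time.
# train_text_to_image_lora.py is a placeholder for your training script.
accelerate launch --dynamo_backend="inductor" train_text_to_image_lora.py
```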
examples/advanced_diffusion_training/README_flux.md

Lines changed: 11 additions & 0 deletions
@@ -65,6 +65,17 @@ write_basic_config()
 When running `accelerate config`, if we set torch compile mode to True, there can be dramatic speedups.
 Note also that we use the PEFT library as the backend for LoRA training; make sure to have `peft>=0.6.0` installed in your environment.
 
+Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
+```bash
+huggingface-cli login
+```
+This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
+
+> [!NOTE]
+> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
+> `pip install wandb`
+> Alternatively, you can use other tools / train without reporting by modifying the flag `--report_to="wandb"`.
+
 ### Target Modules
 When LoRA was first adapted from language models to diffusion models, it was applied to the cross-attention layers in the Unet that relate the image representations with the prompts that describe them.
 More recently, SOTA text-to-image diffusion models replaced the Unet with a diffusion Transformer (DiT). With this change, we may also want to explore
