diff --git a/examples/advanced_diffusion_training/README.md b/examples/advanced_diffusion_training/README.md
index cd8c5feda9f0..504ae1471f44 100644
--- a/examples/advanced_diffusion_training/README.md
+++ b/examples/advanced_diffusion_training/README.md
@@ -67,6 +67,17 @@ write_basic_config()
 When running `accelerate config`, if we specify torch compile mode to True there can be dramatic speedups.
 Note also that we use PEFT library as backend for LoRA training, make sure to have `peft>=0.6.0` installed in your environment.
 
+Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
+```bash
+huggingface-cli login
+```
+This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
+
+> [!NOTE]
+> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
+> `pip install wandb`
+> Alternatively, you can use other tools or train without reporting by modifying the flag `--report_to="wandb"`.
+
 ### Pivotal Tuning
 **Training with text encoder(s)**
 
diff --git a/examples/advanced_diffusion_training/README_flux.md b/examples/advanced_diffusion_training/README_flux.md
index 8817431bede5..1f83235ad50a 100644
--- a/examples/advanced_diffusion_training/README_flux.md
+++ b/examples/advanced_diffusion_training/README_flux.md
@@ -65,6 +65,17 @@ write_basic_config()
 When running `accelerate config`, if we specify torch compile mode to True there can be dramatic speedups.
 Note also that we use PEFT library as backend for LoRA training, make sure to have `peft>=0.6.0` installed in your environment.
 
+Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
+```bash
+huggingface-cli login
+```
+This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
+
+> [!NOTE]
+> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
+> `pip install wandb`
+> Alternatively, you can use other tools or train without reporting by modifying the flag `--report_to="wandb"`.
+
 ### Target Modules
 When LoRA was first adapted from language models to diffusion models, it was applied to the cross-attention layers in the Unet that relate the image representations with the prompts that describe them. More recently, SOTA text-to-image diffusion models replaced the Unet with a diffusion Transformer(DiT). With this change, we may also want to explore
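As a reference for trying out the instructions this patch adds, here is a minimal, non-interactive setup sketch (not part of the diff). It assumes your access token is exported as an `HF_TOKEN` environment variable, your wandb key as `WANDB_API_KEY`, and that the training scripts' `--report_to` flag also accepts `tensorboard`, as in the other diffusers examples:

```bash
# Install the Hub client and wandb (the docs above only require wandb for --report_to="wandb")
pip install -U huggingface_hub wandb

# Log in to the Hugging Face Hub without the interactive prompt
# (assumes HF_TOKEN holds a token from https://huggingface.co/settings/tokens)
huggingface-cli login --token "$HF_TOKEN"

# Authenticate wandb so training runs are logged there
# (assumes WANDB_API_KEY holds your wandb API key)
wandb login "$WANDB_API_KEY"
```

To skip wandb entirely, omit the wandb steps and pass a different value to `--report_to`, e.g. `--report_to="tensorboard"`.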