docs/source/en/training/lora.md: 2 additions & 2 deletions
@@ -77,7 +77,7 @@ accelerate config default
Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
from accelerate.utils import write_basic_config

write_basic_config()
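# The call above writes a default Accelerate config file, so you can skip the
# interactive `accelerate config` prompt. As a hedged variant (the keyword below is
# believed to exist but is an assumption - verify it against your accelerate version),
# mixed precision can be requested directly when writing the config:
write_basic_config(mixed_precision="fp16")
```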
@@ -170,7 +170,7 @@ Aside from setting up the LoRA layers, the training script is more or less the s
Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀

-Let's train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate our yown Pokémon. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository:
+Let's train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate our own Pokémon. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository:

- saved model checkpoints
- `pytorch_lora_weights.safetensors` (the trained LoRA weights)
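For orientation, here is a minimal sketch of how those environment variables might be set and passed to a launch command from Python. The script name `train_text_to_image_lora.py`, the example values, and the flag names are assumptions based on typical diffusers training scripts rather than something taken from this diff; check the script's `--help` for the real options.

```python
import os
import subprocess

# Hypothetical values; substitute your own model, dataset, output path, and Hub repo id.
os.environ["MODEL_NAME"] = "runwayml/stable-diffusion-v1-5"
os.environ["DATASET_NAME"] = "lambdalabs/pokemon-blip-captions"
os.environ["OUTPUT_DIR"] = "./sd-pokemon-lora"
os.environ["HUB_MODEL_ID"] = "your-username/sd-pokemon-lora"

# Assumed flag names; verify against the training script before relying on them.
subprocess.run(
    [
        "accelerate", "launch", "train_text_to_image_lora.py",
        "--pretrained_model_name_or_path", os.environ["MODEL_NAME"],
        "--dataset_name", os.environ["DATASET_NAME"],
        "--output_dir", os.environ["OUTPUT_DIR"],
        "--hub_model_id", os.environ["HUB_MODEL_ID"],
        "--push_to_hub",
    ],
    check=True,
)
```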
GlueGen is a minimal adapter that allows alignment between any encoder (a text encoder for a different language, multilingual RoBERTa, AudioCLIP) and the CLIP text encoder used in the standard Stable Diffusion model. This method enables easy language adaptation of existing English Stable Diffusion checkpoints without the need for an image captioning dataset or long training hours.

Make sure you download `gluenet_French_clip_overnorm_over3_noln.ckpt` for French (there are also pre-trained weights for Chinese, Italian, Japanese, and Spanish, or you can train your own) from [GlueGen's official repo](https://github.com/salesforce/GlueGen/tree/main).
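A rough sketch of how such a pipeline might be loaded from Python follows. The `custom_pipeline="gluegen"` identifier, the choice of multilingual text encoder, and the way the downloaded adapter checkpoint gets wired in are all assumptions rather than something stated here; check the community pipeline's docstring and the GlueGen repo for the exact calls.

```python
from transformers import AutoModel, AutoTokenizer
from diffusers import DiffusionPipeline

# Assumed: a multilingual text encoder stands in for the original CLIP text encoder.
text_encoder = AutoModel.from_pretrained("xlm-roberta-large")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# Assumed community pipeline id; verify it against the diffusers community pipelines list.
pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    custom_pipeline="gluegen",
).to("cuda")

# The downloaded GlueGen adapter (e.g. gluenet_French_clip_overnorm_over3_noln.ckpt)
# still has to be loaded into the pipeline; the exact helper for that is pipeline-specific,
# so follow the instructions in the GlueGen repo and the pipeline docstring.

image = pipeline("une voiture sur la plage", num_inference_steps=50).images[0]
image.save("gluegen_french.png")
```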
@@ -1755,7 +1755,7 @@ with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
```

The following code compares the performance of the original Stable Diffusion XL pipeline with the IPEX-optimized pipeline. By using this optimized pipeline, we can get about a 1.4-2x performance boost with BFloat16 on fourth-generation Intel Xeon CPUs, code-named Sapphire Rapids.

```python
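# The README's original comparison snippet is truncated here, so this is a hedged
# sketch of such a timing comparison. The "stable_diffusion_xl_ipex" custom_pipeline
# id and the prepare_for_ipex() setup call are assumptions - check the community
# pipeline's docstring for the real API. Requires intel_extension_for_pytorch.
import time

import torch
from diffusers import DiffusionPipeline, StableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
prompt = "sailing ship in storm by Rembrandt"

def benchmark(pipe, steps=20):
    """Time a single generation under CPU BFloat16 autocast."""
    start = time.time()
    with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
        pipe(prompt, num_inference_steps=steps)
    return time.time() - start

# Baseline SDXL pipeline on CPU.
pipe = StableDiffusionXLPipeline.from_pretrained(model_id)
print(f"original: {benchmark(pipe):.1f}s")

# Assumed IPEX community pipeline and setup helper.
pipe_ipex = DiffusionPipeline.from_pretrained(model_id, custom_pipeline="stable_diffusion_xl_ipex")
pipe_ipex.prepare_for_ipex(torch.bfloat16, prompt=prompt)  # assumed helper; verify its signature
print(f"ipex: {benchmark(pipe_ipex):.1f}s")
```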
@@ -1826,7 +1826,7 @@ This approach is using (optional) CoCa model to avoid writing image description.
This SDXL pipeline supports unlimited-length prompts and negative prompts, compatible with the A1111 prompt-weighting style.

You can provide both `prompt` and `prompt_2`. If only one prompt is provided, `prompt_2` will be a copy of the provided `prompt`. Here is sample code for using this pipeline.

```python
from diffusers import DiffusionPipeline
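# The README's full example is truncated here; what follows is a hedged sketch.
# The "lpw_stable_diffusion_xl" custom_pipeline id is an assumption - check the
# community pipelines list for the exact name.
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    custom_pipeline="lpw_stable_diffusion_xl",
).to("cuda")

# A long, A1111-style weighted prompt, well past the usual 77-token CLIP limit.
prompt = "photo of a cute (white) cat running on the grass " * 20
negative_prompt = "lowres, bad anatomy, bad hands, cropped, worst quality"

# prompt_2 is optional; if omitted it defaults to a copy of prompt, as described above.
image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]
image.save("lpw_sdxl.png")
```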
@@ -3397,7 +3397,7 @@ invert_prompt = "A lying cat"
input_image = "siamese.jpg"
steps = 50

# Provide prompt used for generation. Same if reconstruction
prompt = "A lying cat"
# or different if editing.
prompt = "A lying dog"
@@ -3493,7 +3493,7 @@ output_frames = pipe(
    mask_end=0.8,
    mask_strength=0.5,
    negative_prompt='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'