@@ -25,7 +25,7 @@ You'll also need access to SDXL by accepting the model license at [diffusers/sdx

 ### Basic Training Example

-bash
+```bash
 export MODEL_NAME="diffusers/sdxl-instructpix2pix-768"
 export DATASET_ID="fusing/instructpix2pix-1000-samples"

@@ -51,7 +51,7 @@ python train_instruct_pix2pix_lora_sdxl.py \
 --report_to=wandb \
 --push_to_hub \
 --enable_xformers_memory_efficient_attention
-
+```


 ## LoRA Configuration
@@ -72,7 +72,7 @@ The script includes LoRA-specific hyperparameters:

 ### Multi-GPU Training

-bash
+```bash
 accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix_lora_sdxl.py \
 --pretrained_model_name_or_path=$MODEL_NAME \
 --dataset_name=$DATASET_ID \
@@ -95,23 +95,23 @@ accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix_lo
 --report_to=wandb \
 --push_to_hub \
 --enable_xformers_memory_efficient_attention
-
+```
 ### Resume from Checkpoint

-bash
+```bash
 python train_instruct_pix2pix_lora_sdxl.py \
 --pretrained_model_name_or_path=$MODEL_NAME \
 --dataset_name=$DATASET_ID \
 --resume_from_checkpoint="./output/checkpoint-5000" \
 --output_dir="./output" \
 --enable_xformers_memory_efficient_attention
-
+```

 ### Using a Custom VAE

 For improved quality and stability, use madebyollin's fp16-fix VAE:

-bash
+```bash
 python train_instruct_pix2pix_lora_sdxl.py \
 --pretrained_model_name_or_path=$MODEL_NAME \
 --pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
@@ -135,7 +135,7 @@ python train_instruct_pix2pix_lora_sdxl.py \
 --report_to=wandb \
 --push_to_hub \
 --enable_xformers_memory_efficient_attention
-
+```
 ## Key Arguments

 ### Model & Data
@@ -171,7 +171,7 @@ python train_instruct_pix2pix_lora_sdxl.py \

 After training, load and use your LoRA model:

-python
+```python
 import torch
 from diffusers import StableDiffusionXLInstructPix2PixPipeline
 from PIL import Image
@@ -207,7 +207,7 @@ guidance_scale=4.0,
 ).images[0]

 edited_image.save("edited_image.png")
-
+```

 ### Inference Parameters
