
Commit f1e0c7c

Refactor instructpix2pix lora to support peft (huggingface#10205)
* make base code changes adapted from the train_instructpix2pix script in examples
* change code to use PEFT as discussed in issue 10062
* update README training command
* update README training command
* refactor variable name and freezing of the unet
* Update examples/research_projects/instructpix2pix_lora/train_instruct_pix2pix_lora.py

Co-authored-by: Sayak Paul <[email protected]>

* update README installation instructions
* cleanup code using make style and quality

---------

Co-authored-by: Sayak Paul <[email protected]>
1 parent b94cfd7 commit f1e0c7c
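For context, the core of the refactor described above is to inject LoRA layers into the UNet through PEFT rather than hand-rolled attention processors, with the base UNet frozen. A minimal sketch of that pattern follows; it is not the commit's exact code, and the `target_modules` choice and hyperparameters are assumptions:

```python
# Sketch: PEFT-backed LoRA on the instruct-pix2pix UNet (assumed pattern, not the commit's exact code).
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "timbrooks/instruct-pix2pix", subfolder="unet"
)
unet.requires_grad_(False)  # freeze the base UNet; only the LoRA adapters will train

unet_lora_config = LoraConfig(
    r=4,                      # matches --rank=4 in the README command below
    lora_alpha=4,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # attention projections (assumed)
)
unet.add_adapter(unet_lora_config)  # diffusers' PeftAdapterMixin wires in the adapters

# Only the LoRA parameters remain trainable and get passed to the optimizer:
lora_layers = [p for p in unet.parameters() if p.requires_grad]
```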

File tree

2 files changed: +263 −125 lines changed

examples/research_projects/instructpix2pix_lora/README.md

Lines changed: 33 additions & 2 deletions
@@ -2,14 +2,42 @@
 This extended LoRA training script was authored by [Aiden-Frost](https://github.com/Aiden-Frost).
 This is an experimental LoRA extension of [this example](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py). This script adds further support for LoRA layers in the UNet model.
 
+## Running locally with PyTorch
+### Installing the dependencies
+
+Before running the scripts, make sure to install the library's training dependencies:
+
+**Important**
+
+To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
+```bash
+git clone https://github.com/huggingface/diffusers
+cd diffusers
+pip install .
+```
+
+Then cd into the example folder and run:
+```bash
+pip install -r requirements.txt
+```
+
+And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:
+
+```bash
+accelerate config
+```
+
+Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.
+
 ## Training script example
 
 ```bash
 export MODEL_ID="timbrooks/instruct-pix2pix"
 export DATASET_ID="instruction-tuning-sd/cartoonization"
 export OUTPUT_DIR="instructPix2Pix-cartoonization"
 
-accelerate launch finetune_instruct_pix2pix.py \
+accelerate launch train_instruct_pix2pix_lora.py \
   --pretrained_model_name_or_path=$MODEL_ID \
   --dataset_name=$DATASET_ID \
   --enable_xformers_memory_efficient_attention \
@@ -24,7 +52,10 @@ accelerate launch finetune_instruct_pix2pix.py \
   --rank=4 \
   --output_dir=$OUTPUT_DIR \
   --report_to=wandb \
-  --push_to_hub
+  --push_to_hub \
+  --original_image_column="original_image" \
+  --edited_image_column="cartoonized_image" \
+  --edit_prompt_column="edit_prompt"
 ```
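The three new `--*_column` flags map the trainer onto the dataset's actual column names. A quick way to confirm them is to inspect the dataset directly; a hedged sketch, with the printed names being what the cartoonization dataset is expected to contain:

```python
# Sketch: inspect the dataset's column names to confirm the --*_column flags.
from datasets import load_dataset

ds = load_dataset("instruction-tuning-sd/cartoonization", split="train")
print(ds.column_names)
# Expected (per the command above): ['original_image', 'cartoonized_image', 'edit_prompt']
```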
2960

3061
## Inference
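The body of the README's Inference section is unchanged by this commit and not shown in the diff. For orientation, here is a minimal sketch of how the trained LoRA might be loaded for inference; the input image URL and edit prompt are placeholders, and the checkpoint is assumed to follow diffusers' standard LoRA weight layout:

```python
# Sketch: load the fine-tuned LoRA into the instruct-pix2pix pipeline (assumed usage).
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("instructPix2Pix-cartoonization")  # OUTPUT_DIR from training

image = load_image("https://example.com/input.png")  # placeholder input image
edited = pipe(
    "Cartoonize the following image",  # placeholder edit prompt
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
    guidance_scale=7.0,
).images[0]
edited.save("cartoonized.png")
```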
