
Commit ef03777

Merge branch 'main' into feat/check-doc-listing

2 parents 80be186 + f10d3c6

34 files changed: +3451 −539 lines

.github/workflows/pr_style_bot.yml

Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@
```yaml
name: PR Style Bot

on:
  issue_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write

jobs:
  run-style-bot:
    if: >
      contains(github.event.comment.body, '@bot /style') &&
      github.event.issue.pull_request != null
    runs-on: ubuntu-latest

    steps:
      - name: Extract PR details
        id: pr_info
        uses: actions/github-script@v6
        with:
          script: |
            const prNumber = context.payload.issue.number;
            const { data: pr } = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: prNumber
            });

            // We capture both the branch ref and the "full_name" of the head repo
            // so that we can check out the correct repository & branch (including forks).
            core.setOutput("prNumber", prNumber);
            core.setOutput("headRef", pr.head.ref);
            core.setOutput("headRepoFullName", pr.head.repo.full_name);

      - name: Check out PR branch
        uses: actions/checkout@v3
        env:
          HEADREPOFULLNAME: ${{ steps.pr_info.outputs.headRepoFullName }}
          HEADREF: ${{ steps.pr_info.outputs.headRef }}
        with:
          # Instead of checking out the base repo, use the contributor's repo name
          repository: ${{ env.HEADREPOFULLNAME }}
          ref: ${{ env.HEADREF }}
          # You may need fetch-depth: 0 to be able to push
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Debug
        env:
          HEADREPOFULLNAME: ${{ steps.pr_info.outputs.headRepoFullName }}
          HEADREF: ${{ steps.pr_info.outputs.headRef }}
          PRNUMBER: ${{ steps.pr_info.outputs.prNumber }}
        run: |
          echo "PR number: ${{ env.PRNUMBER }}"
          echo "Head Ref: ${{ env.HEADREF }}"
          echo "Head Repo Full Name: ${{ env.HEADREPOFULLNAME }}"

      - name: Set up Python
        uses: actions/setup-python@v4

      - name: Install dependencies
        run: |
          pip install .[quality]

      - name: Download Makefile from main branch
        run: |
          curl -o main_Makefile https://raw.githubusercontent.com/huggingface/diffusers/main/Makefile

      - name: Compare Makefiles
        run: |
          if ! diff -q main_Makefile Makefile; then
            echo "Error: The Makefile has changed. Please ensure it matches the main branch."
            exit 1
          fi
          echo "No changes in Makefile. Proceeding..."
          rm -rf main_Makefile

      - name: Run make style and make quality
        run: |
          make style && make quality

      - name: Commit and push changes
        id: commit_and_push
        env:
          HEADREPOFULLNAME: ${{ steps.pr_info.outputs.headRepoFullName }}
          HEADREF: ${{ steps.pr_info.outputs.headRef }}
          PRNUMBER: ${{ steps.pr_info.outputs.prNumber }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          echo "HEADREPOFULLNAME: ${{ env.HEADREPOFULLNAME }}, HEADREF: ${{ env.HEADREF }}"
          # Configure git with the Actions bot user
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

          # Make sure the 'origin' remote is set to the contributor's fork
          git remote set-url origin "https://x-access-token:${GITHUB_TOKEN}@github.com/${{ env.HEADREPOFULLNAME }}.git"

          # If there are changes after running style/quality, commit them
          if [ -n "$(git status --porcelain)" ]; then
            git add .
            git commit -m "Apply style fixes"
            # Push to the original contributor's forked branch
            git push origin HEAD:${{ env.HEADREF }}
            echo "changes_pushed=true" >> $GITHUB_OUTPUT
          else
            echo "No changes to commit."
            echo "changes_pushed=false" >> $GITHUB_OUTPUT
          fi

      - name: Comment on PR with workflow run link
        if: steps.commit_and_push.outputs.changes_pushed == 'true'
        uses: actions/github-script@v6
        with:
          script: |
            const prNumber = parseInt(process.env.prNumber, 10);
            const runUrl = `${process.env.GITHUB_SERVER_URL}/${process.env.GITHUB_REPOSITORY}/actions/runs/${process.env.GITHUB_RUN_ID}`;

            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: prNumber,
              body: `Style fixes have been applied. [View the workflow run here](${runUrl}).`
            });
        env:
          prNumber: ${{ steps.pr_info.outputs.prNumber }}
```
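For contributors who want to reproduce the bot's work locally before commenting `@bot /style`, here is a hedged Python sketch of its core step; the `make` targets and the porcelain check mirror the workflow above, and everything assumes a diffusers checkout with the `quality` extras installed:

```python
# Local sketch of the bot's style step: run the same make targets,
# then commit only if the working tree is dirty.
import subprocess

def sh(cmd: str) -> str:
    """Run a shell command, fail loudly, and return its stdout."""
    return subprocess.run(
        cmd, shell=True, check=True, capture_output=True, text=True
    ).stdout

sh("make style && make quality")
if sh("git status --porcelain").strip():  # non-empty => files were reformatted
    sh("git add .")
    sh('git commit -m "Apply style fixes"')
    print("Style fixes committed.")
else:
    print("No changes to commit.")
```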

.github/workflows/pr_tests.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -2,8 +2,8 @@ name: Fast tests for PRs
 
 on:
   pull_request:
-    branches:
-      - main
+    branches: [main]
+    types: [synchronize]
   paths:
     - "src/diffusers/**.py"
     - "benchmarks/**.py"
```

docs/source/en/api/loaders/lora.md

Lines changed: 5 additions & 0 deletions
```diff
@@ -23,6 +23,7 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
 - [`LTXVideoLoraLoaderMixin`] provides similar functions for [LTX-Video](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
 - [`SanaLoraLoaderMixin`] provides similar functions for [Sana](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana).
 - [`HunyuanVideoLoraLoaderMixin`] provides similar functions for [HunyuanVideo](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video).
+- [`Lumina2LoraLoaderMixin`] provides similar functions for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).
 - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
 - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, unload, LoRAs and more.
 
@@ -68,6 +69,10 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse
 
 [[autodoc]] loaders.lora_pipeline.HunyuanVideoLoraLoaderMixin
 
+## Lumina2LoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.Lumina2LoraLoaderMixin
+
 ## AmusedLoraLoaderMixin
 
 [[autodoc]] loaders.lora_pipeline.AmusedLoraLoaderMixin
```
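Taken together with the training example added below, the new mixin means a trained Lumina2 LoRA can be loaded like any other. Here is a hedged inference sketch; the `Lumina2Pipeline` class name and the Hub repo id are assumptions (not taken from this diff), while `load_lora_weights` is the standard entry point these loader mixins provide:

```python
# Hedged sketch: loading a trained Lumina2 LoRA for inference.
import torch
from diffusers import Lumina2Pipeline  # assumed pipeline class

pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")

# Provided by Lumina2LoraLoaderMixin (documented via the [[autodoc]] entry above)
pipe.load_lora_weights("your-username/trained-lumina2-lora")  # hypothetical repo id

image = pipe("A photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```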
Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@
# DreamBooth training example for Lumina2

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3–5) images of a subject.

The `train_dreambooth_lora_lumina2.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).

This will also allow us to push the trained model parameters to the Hugging Face Hub platform.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run

```bash
pip install -r requirements_sana.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or, for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or, if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config
write_basic_config()
```

When running `accelerate config`, setting torch compile mode to True can give dramatic speedups.
Note also that we use the PEFT library as the backend for LoRA training, so make sure you have `peft>=0.14.0` installed in your environment.
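As a quick illustration of the compile note above, `torch.compile` is the PyTorch feature that accelerate's option toggles; the toy module below is of course a stand-in for the actual training model:

```python
import torch

# Toy stand-in for the transformer being trained; torch.compile (PyTorch >= 2.0)
# is the mechanism behind accelerate's "torch compile mode" option.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU())
compiled_model = torch.compile(model)

x = torch.randn(8, 64)
print(compiled_model(x).shape)  # torch.Size([8, 64])
```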
### Dog toy example

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir, repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

The `--push_to_hub` flag below will also let us push the trained LoRA parameters to the Hugging Face Hub platform.

Now, we can launch training using:

```bash
export MODEL_NAME="Alpha-VLLM/Lumina-Image-2.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-lumina2-lora"

accelerate launch train_dreambooth_lora_lumina2.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```

To use `--push_to_hub`, make sure you're logged into your Hugging Face account:

```bash
huggingface-cli login
```
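If you'd rather stay in Python (e.g., from a notebook), the same login can be done programmatically with the standard `huggingface_hub` API:

```python
from huggingface_hub import login

login()  # prompts for a token; equivalent to `huggingface-cli login`
```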
To better track our training experiments, we're using the following flags in the command above:

* `--report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it already.
* `--validation_prompt` and `--validation_epochs` allow the script to do a few validation inference runs. This lets us qualitatively check whether training is progressing as expected.

## Notes

Additionally, we welcome you to explore the following CLI arguments:

* `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string. E.g., "to_k,to_q,to_v" will result in LoRA training of attention layers only.
* `--system_prompt`: A custom system prompt to provide additional personality to the model.
* `--max_sequence_length`: Maximum sequence length to use for text embeddings.

We provide several options for memory optimization:

* `--offload`: When enabled, we will offload the text encoder and VAE to the CPU when they are not in use.
* `--cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done (see the sketch after this section).
* `--use_8bit_adam`: When enabled, we will use the 8-bit version of AdamW provided by the `bitsandbytes` library.

Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2) of the `Lumina2Pipeline` to learn more about the model.
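For intuition on what `--cache_latents` buys, here is a hedged sketch of the idea; the VAE checkpoint and the dummy batch are assumptions standing in for the script's internals, not pieces of this commit:

```python
import torch
from diffusers import AutoencoderKL

# Hedged illustration of --cache_latents: encode every image once up front,
# then discard the VAE so it no longer occupies memory during training.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # assumed checkpoint
pixel_values = torch.randn(2, 3, 256, 256)  # dummy batch standing in for the dataset

with torch.no_grad():
    cached_latents = vae.encode(pixel_values).latent_dist.sample()

del vae  # the VAE can now be removed from memory
print(cached_latents.shape)  # e.g. torch.Size([2, 4, 32, 32])
```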
