Commit b878b4c

lora

1 parent 0bac7ba

2 files changed: +36 -0 lines changed

docs/source/en/api/attnprocessor.md

Lines changed: 24 additions & 0 deletions
```diff
@@ -28,6 +28,10 @@ An attention processor is a class for applying different types of attention mechanisms.
 
 [[autodoc]] models.attention_processor.FusedAttnProcessor2_0
 
+## Allegro
+
+[[autodoc]] models.attention_processor.AllegroAttnProcessor2_0
+
 ## AuraFlow
 
 [[autodoc]] models.attention_processor.AuraFlowAttnProcessor2_0
@@ -106,6 +110,22 @@ An attention processor is a class for applying different types of attention mechanisms.
 
 [[autodoc]] models.attention_processor.LuminaAttnProcessor2_0
 
+## Mochi
+
+[[autodoc]] models.attention_processor.MochiAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.MochiVaeAttnProcessor2_0
+
+## Sana
+
+[[autodoc]] models.attention_processor.SanaLinearAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.SanaMultiscaleAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0
+
 ## Stable Audio
 
 [[autodoc]] models.attention_processor.StableAudioAttnProcessor2_0
@@ -121,3 +141,7 @@ An attention processor is a class for applying different types of attention mechanisms.
 [[autodoc]] models.attention_processor.XFormersAttnProcessor
 
 [[autodoc]] models.attention_processor.XFormersAttnAddedKVProcessor
+
+## XLAFlashAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.XLAFlashAttnProcessor2_0
```
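The entries above only register processor classes with the doc generator. For context, here is a minimal sketch of how a processor from this list is swapped onto a model at runtime in diffusers; the checkpoint id and the choice of `AttnProcessor2_0` are illustrative, not part of this commit:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

# Illustrative checkpoint; any pipeline whose UNet exposes
# set_attn_processor() works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Replace the processor on every attention module in one call.
pipe.unet.set_attn_processor(AttnProcessor2_0())

# attn_processors maps each attention layer name to its processor.
print({type(p).__name__ for p in pipe.unet.attn_processors.values()})
```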

src/diffusers/models/attention_processor.py

Lines changed: 12 additions & 0 deletions
```diff
@@ -5423,21 +5423,33 @@ def __call__(self, attn: SanaMultiscaleLinearAttention, hidden_states: torch.Tensor
 
 
 class LoRAAttnProcessor:
+    r"""
+    Processor for implementing attention with LoRA.
+    """
     def __init__(self):
         pass
 
 
 class LoRAAttnProcessor2_0:
+    r"""
+    Processor for implementing attention with LoRA (enabled by default if you're using PyTorch 2.0).
+    """
     def __init__(self):
         pass
 
 
 class LoRAXFormersAttnProcessor:
+    r"""
+    Processor for implementing attention with LoRA using xFormers.
+    """
     def __init__(self):
         pass
 
 
 class LoRAAttnAddedKVProcessor:
+    r"""
+    Processor for implementing attention with LoRA with extra learnable key and value matrices for the text encoder.
+    """
     def __init__(self):
         pass
```
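Note that all four classes touched here are empty stubs (each `__init__` is just `pass`); the commit only adds docstrings describing what they historically did. In current diffusers, LoRA weights are attached through the pipeline-level loader rather than by instantiating these processors directly. A minimal sketch, with a placeholder repository id:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# load_lora_weights() comes from the pipeline's LoRA loader mixin;
# the repo id below is a placeholder, not referenced by this commit.
pipe.load_lora_weights("some-user/some-lora-checkpoint")
```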
