
Commit 06ee4db

[Chore] add dummy lora attention processors to prevent failures in other libs (#8777)
1 parent 84bbd2f commit 06ee4db

File tree

1 file changed: +20 -0 lines changed


src/diffusers/models/attention_processor.py

Lines changed: 20 additions & 0 deletions
@@ -2775,6 +2775,26 @@ def __call__(
         return hidden_states
 
 
+class LoRAAttnProcessor:
+    def __init__(self):
+        pass
+
+
+class LoRAAttnProcessor2_0:
+    def __init__(self):
+        pass
+
+
+class LoRAXFormersAttnProcessor:
+    def __init__(self):
+        pass
+
+
+class LoRAAttnAddedKVProcessor:
+    def __init__(self):
+        pass
+
+
 ADDED_KV_ATTENTION_PROCESSORS = (
     AttnAddedKVProcessor,
     SlicedAttnAddedKVProcessor,
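
Why the empty classes help: other libraries still import these LoRA processor names from diffusers.models.attention_processor, so removing them entirely turns every such import into an ImportError at module load time. The sketch below illustrates the failure mode this commit guards against; it is an illustrative example only, not code from this commit or from any specific downstream library.

# Illustrative only: a downstream module that references the old LoRA
# processor names by import. With the dummy classes in place this import
# succeeds; without them it raises ImportError and the importing module
# fails to load, even if the names are never actually used.
from diffusers.models.attention_processor import (
    LoRAAttnProcessor,
    LoRAAttnProcessor2_0,
)

# The dummies carry no behavior; they exist only so imports and type
# checks written against the old names keep working.
processor = LoRAAttnProcessor2_0()
print(type(processor).__name__)  # -> "LoRAAttnProcessor2_0"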
