Merging LoRA and getting unexpected results (Is my math wrong?) #123
AI-Casanova asked this question in Q&A · Unanswered · 1 comment
Alright, I verified that loading the LoRAs via Automatic1111 and the Additional Networks extension produces the same results. Also, merging at 0.5·√2 (i.e. ≈0.7071) produces exactly the same results at weight 1 as merging at 0.5 does at strength 2. I'm not sure how to generalize that to dissimilar ratios (e.g. 0.4 and 0.6), though I will test 0.4·√2 and 0.6·√2. This solves one of my issues, normalizing to weight 1. However, I still get somewhat different results than what using both LoRAs at 0.5 weight produces.
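One way to make sense of the √2 factor (a hypothesis consistent with the numbers above, not something verified against merge_lora.py's source): if the merge ratio r ends up applied to both the down and up factor matrices, the LoRA's effective contribution scales as r², and the A1111 strength s then multiplies it once more, giving r²·s overall. A minimal sketch with a hypothetical `effective_weight` helper:

```python
import math

def effective_weight(merge_ratio: float, strength: float) -> float:
    # Hypothesis: ratio applied to both factor matrices -> contribution ~ ratio**2,
    # then scaled once more by the prompt strength.
    return merge_ratio ** 2 * strength

# The observation above: 0.5*sqrt(2) at strength 1 matches 0.5 at strength 2.
print(effective_weight(0.5 * math.sqrt(2), 1.0))  # 0.5
print(effective_weight(0.5, 2.0))                 # 0.5

# Under this hypothesis, the generalization to dissimilar target weights w
# would be sqrt(w) rather than w*sqrt(2); the two coincide only at w = 0.5.
for w in (0.4, 0.6):
    print(w, math.sqrt(w), w * math.sqrt(2))  # e.g. 0.4 -> 0.632 vs 0.566
```

If that hypothesis holds, merging at √0.4 and √0.6 (≈0.632 and ≈0.775) would be the weight-1-normalized equivalent of 0.4 and 0.6, rather than 0.4·√2 and 0.6·√2.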
-
I'm attempting to use Automatic1111 to test the strengths at which I want to merge two LoRAs.

In A1111, `<Lora:X:.5> <Lora:Y:.5>` yields result A.

Using `merge_lora.py --ratios .5 .5` gives similar but different results if I use `<Lora:Merge:2>` in A1111.

`merge_lora.py --ratios 1 1` gives the same results as the last merge if I use `<Lora:Merge:.5>`.

Is Automatic1111 calculating the LoRA addition differently, which results in the slight differences?

At what ratio should I merge to arrive at a LoRA normalized to alpha=1? (I tried 0.75 0.75, but that results in a third outcome.)
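For reference, a minimal sketch of the standard LoRA composition, assuming the usual ΔW = strength · (alpha/rank) · up @ down formulation (the tensors below are random stand-ins for illustration; this is not merge_lora.py's actual code). It shows that applying two LoRAs at 0.5 each in A1111 is a plain weighted sum of their effective deltas, and contrasts that with a merge that applies the ratio to the factor matrices instead, which would need strength 2 to match:

```python
import numpy as np

rng = np.random.default_rng(0)
rank, d = 4, 8

def delta(up, down, alpha, strength=1.0):
    # Standard LoRA update: dW = strength * (alpha / rank) * up @ down
    return strength * (alpha / down.shape[0]) * (up @ down)

# Two hypothetical LoRA modules X and Y with their own alphas.
up_x, down_x, alpha_x = rng.normal(size=(d, rank)), rng.normal(size=(rank, d)), 4.0
up_y, down_y, alpha_y = rng.normal(size=(d, rank)), rng.normal(size=(rank, d)), 8.0

# What <Lora:X:.5> <Lora:Y:.5> adds to the base weight in A1111 (result A).
dW_a1111 = delta(up_x, down_x, alpha_x, 0.5) + delta(up_y, down_y, alpha_y, 0.5)

# A merge that is a plain linear combination of the *effective* deltas
# would reproduce result A exactly at strength 1.
dW_linear_merge = 0.5 * delta(up_x, down_x, alpha_x) + 0.5 * delta(up_y, down_y, alpha_y)
print(np.allclose(dW_a1111, dW_linear_merge))  # True

# If the ratio were instead applied to both factor matrices, the effective
# contribution would scale as ratio**2, and strength 2 (or ratios of
# sqrt(0.5)) would be needed to recover the same magnitude.
dW_factor_merge = (delta(0.5 * up_x, 0.5 * down_x, alpha_x)
                   + delta(0.5 * up_y, 0.5 * down_y, alpha_y))
print(np.allclose(dW_a1111, 2.0 * dW_factor_merge))  # True
```

Either interpretation reproduces the overall magnitude at the right strength, so remaining small differences could also come from alpha re-scaling or precision (dtype) during the merge; that part is speculation, not confirmed from the script.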