Not logging this as a bug because I'm not sure if it's a bug or a configuration issue.
While training LoRAs with AI Toolkit (Qwen 2512 in this case, quantized with an "accuracy recovery adapter"), I find myself unable to replicate the training sample outputs in sdnext.
The LoRA seems to work perfectly in the AI Toolkit samples, but it performs very poorly in sdnext.
To narrow this down to the root cause, I compared outputs at zero training steps (pre-training baseline: core model, no LoRA yet):
AI Toolkit
Prompt: sacbf, candid documentary photo, a woman (sacbf will be the trigger token later, but this is pre-training)
The same prompt/seed/guidance scale/steps in sdnext, with on-the-fly SDNQ int6 quantization, shows zero resemblance besides the greenish background.
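To make "zero resemblance" quantitative rather than visual, one option is a simple pixel-level difference metric over the two baseline outputs. A minimal sketch (the function name is mine, and it assumes both images are already loaded as same-shape arrays with values scaled to [0, 1]):

```python
import numpy as np

def mean_abs_diff(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute pixel difference between two images scaled to [0, 1].

    0.0 means the images are identical; values approaching 1.0 mean
    they are maximally different. Crude, but enough to track whether a
    settings change moves the sdnext output toward the AI Toolkit one.
    """
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    if a.shape != b.shape:
        raise ValueError(f"shape mismatch: {a.shape} vs {b.shape}")
    return float(np.mean(np.abs(a - b)))
```

Note that even with identical prompt/seed/guidance/steps, different quantization schemes (the accuracy recovery adapter vs. SDNQ int6) will not produce bit-identical outputs, so the useful question is how large this difference is, not whether it is zero.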
Any idea how to approach this?