Replies: 3 comments
- Perhaps it is necessary to convert back from NF4 to FP8/FP16 before training? (A rough sketch of that idea follows below.)
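A minimal sketch of that "reverse conversion" idea, assuming the NF4 checkpoint has already been loaded on a CUDA device with bitsandbytes `Linear4bit` layers (e.g. via 4-bit loading in diffusers/transformers). The function name `dequantize_nf4_to_fp16` is illustrative, not part of any library; this is not a tested recipe.

```python
import torch
import bitsandbytes as bnb
import bitsandbytes.functional as bnbf

def dequantize_nf4_to_fp16(model: torch.nn.Module) -> torch.nn.Module:
    """Replace every bitsandbytes Linear4bit layer with a plain fp16 nn.Linear."""
    targets = [
        (name, module)
        for name, module in model.named_modules()
        if isinstance(module, bnb.nn.Linear4bit)
    ]
    for name, module in targets:
        # Unpack the packed 4-bit weight using its stored quantization state.
        w_fp16 = bnbf.dequantize_4bit(
            module.weight.data, module.weight.quant_state
        ).to(torch.float16)

        # Build a plain fp16 Linear holding the recovered weights.
        new_linear = torch.nn.Linear(
            module.in_features, module.out_features, bias=module.bias is not None
        )
        new_linear.weight = torch.nn.Parameter(w_fp16, requires_grad=False)
        if module.bias is not None:
            new_linear.bias = torch.nn.Parameter(
                module.bias.data.to(torch.float16), requires_grad=False
            )

        # Swap the quantized layer for the fp16 one in its parent module.
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name) if parent_name else model
        setattr(parent, child_name, new_linear)
    return model
```

The resulting fp16 state dict could then be saved and used as the training base, with the trained LoRA applied back to the NF4 model at inference time.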
- See also the section about LoRA precision.
- Nothing there covers flux1-dev-bnb-nf4-v2-based training; it only addresses inference and related topics.
- Is that possible? A LoRA trained on an fp16 UNet + fp16 T5XXL gives poor generation results on the bnb-nf4 model (weaker character recognition, weaker style adherence, etc.). The web GUI from bmaltais/kohya_ss throws an error when starting training.
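One plausible reason the fp16-trained LoRA degrades on the NF4 base is that the NF4 weights drift from the fp16 weights the LoRA's deltas were fit against. A small sketch, assuming a CUDA device with bitsandbytes installed, that measures this round-trip quantization error on a stand-in tensor (the shape is arbitrary, not a real Flux layer):

```python
import torch
import bitsandbytes.functional as bnbf

# Stand-in for one fp16 UNet weight matrix.
w_fp16 = torch.randn(3072, 3072, dtype=torch.float16, device="cuda")

packed, state = bnbf.quantize_4bit(w_fp16, quant_type="nf4")       # fp16 -> NF4
w_roundtrip = bnbf.dequantize_4bit(packed, state).to(torch.float16)  # NF4 -> fp16

# Mean relative error between the base the LoRA saw and the base it runs on.
rel_err = (w_roundtrip - w_fp16).float().abs().mean() / w_fp16.float().abs().mean()
print(f"mean relative NF4 round-trip error: {rel_err.item():.4f}")
```

Layers with large relative error are the ones where LoRA deltas computed against fp16 weights will match the NF4 base least well.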