Flux LoRA patching crashes - add CPU for LoRA patching possible? #1353
-
Using Flux NF4 with 4 GB VRAM and 16 GB RAM, and presumably running out of VRAM. 50 MB and 35 MB LoRAs work, but 18 MB LoRAs crash?
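For context on what the title is asking: LoRA patching merges the low-rank delta into the base weights, and that merge normally happens on the GPU, which is where the VRAM spike comes from. Below is a minimal PyTorch sketch of the "patch on CPU" idea; the function and argument names are hypothetical for illustration, not Forge's actual API:

```python
import torch

def patch_lora_on_cpu(base_weight: torch.Tensor,
                      lora_down: torch.Tensor,
                      lora_up: torch.Tensor,
                      scale: float = 1.0) -> torch.Tensor:
    # Compute the full-size low-rank delta (up @ down) on the CPU, so the
    # temporary matrix never allocates VRAM.
    delta = (lora_up.cpu().float() @ lora_down.cpu().float()) * scale
    # Merge into the base weight on CPU, then move the patched weight
    # back to the base weight's original device and dtype (e.g. cuda).
    patched = base_weight.cpu().float() + delta
    return patched.to(base_weight.device, dtype=base_weight.dtype)
```

The trade-off is speed: the merge and the two transfers run over PCIe, so patching would take longer but should not crash on a 4 GB card.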
-
I have 8 GB VRAM and 24 GB RAM, but when I load a LoRA in Forge it takes a massive amount of time. I'm using a GGUF model for generation.
-
Not sure if this will help, but in Forge you can set Diffusion in Low Bits to Automatic (LoRA in fp16) to skip patching LoRAs, and they will also have better quality. This takes more VRAM, but what is 18 MB against 4 GB?
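To illustrate what that setting changes conceptually: instead of merging the delta into the (possibly NF4-quantized) base weight once, the LoRA branch is applied at runtime in fp16, so the quantized weight is never rewritten. A rough PyTorch sketch under those assumptions (names are illustrative; Forge's internals differ):

```python
import torch

def forward_with_runtime_lora(x, base_linear, lora_down, lora_up, scale=1.0):
    # Base path uses the (possibly quantized) weight untouched.
    y = base_linear(x)
    # LoRA branch runs separately in fp16: x @ down^T @ up^T.
    # No merge step, so the base weights keep their original quality.
    lora_out = (x.half() @ lora_down.t().half()) @ lora_up.t().half()
    return y + scale * lora_out.to(y.dtype)
```

The cost is keeping the LoRA matrices resident and running the extra matmuls each step, which is why it uses a bit more VRAM than a one-time merge.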