Replies: 1 comment · 20 replies
Why don't you look at the other discussions? If SHARK works fine for you, don't use auto1111.
So I am new to this AI stuff. I tried nod-ai SHARK for Stable Diffusion first, but then also tried AUTOMATIC1111 because of the many features it has.
When I start generating, my VRAM climbs to 24/24 GB and stays there even after generation finishes. The same happens on a 5700 XT.
The problem is that when I want to use hires fix, or create an image bigger than roughly 700×700 px, I get a "not enough memory" crash, because no memory is left due to the leak.
I heard that it is a torch-directml bug; is that true?
Is there any way to fix this myself, or a workaround?
Thanks for the help.
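For reference, the commonly suggested mitigation on the DirectML build is to lower peak VRAM use with launch flags such as --medvram (or --lowvram) and --opt-sub-quad-attention in COMMANDLINE_ARGS, and to release Python-side references between generations; this pushes back the point where hires fix or larger resolutions run out of memory, but it does not remove the underlying leak. Below is a minimal sketch of the reference-cleanup part, assuming the torch-directml package is installed; generate_once() is a hypothetical stand-in, not an AUTOMATIC1111 function, and as far as I know DirectML exposes no equivalent of torch.cuda.empty_cache(), so del plus gc.collect() is about the only lever available at the Python level.

```python
# Minimal sketch, assuming `pip install torch-directml`.
# generate_once() is a hypothetical placeholder for one generation step,
# not part of AUTOMATIC1111 or SHARK.
import gc

import torch
import torch_directml

device = torch_directml.device()  # documented torch-directml entry point

def generate_once(shape=(1, 4, 64, 64)):
    # Stand-in for a diffusion step: allocate a tensor on the DML device,
    # do some work, and move the result back to the CPU before returning.
    latents = torch.randn(*shape).to(device)
    result = latents * 2.0
    return result.to("cpu")

for _ in range(4):
    image = generate_once()
    # Drop every reference to device tensors, then force garbage collection.
    # Driver-held memory may still not be returned, which matches the
    # "VRAM stays at 24/24 GB" behaviour described above.
    del image
    gc.collect()
```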