Replies: 2 comments
-
So does it work now? Also, see here.
-
Yeah, it's working now. CPU only, basically. I tried a version compiled for AMD but it crashed, probably because it can't handle an APU. AMD has really dropped the ball on this.
-
Update: Setting COMMANDLINE_ARGS=--precision full --no-half --skip-torch-cuda-test in webui-user.bat appears to bypass this issue.
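For reference, a quick check in plain PyTorch (my own sketch, not taken from the webui or its launcher) of what --skip-torch-cuda-test is presumably working around on this APU setup:

```python
import torch

# Rough sanity check, not the webui code: with a CPU-only (or non-ROCm)
# PyTorch build there is no usable CUDA/HIP device on a Ryzen APU, so a
# GPU availability check fails unless it is skipped.
print(torch.__version__)
print(torch.cuda.is_available())  # expected: False on this setup
```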
AMD Ryzen 7 5700U, an APU (GPU and CPU on the same die), Windows 11. Why would easydiffusion work and this would not?
I get a lengthy screen of error messages.
"RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'"
"RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'"
"Stable diffusion model failed to load"
So yeah...
I can run easydiffusion but not AUTOMATIC1111. I got it installed and selected a model that I know works on my machine from easydiffusion, but it will not generate; I just get a lengthy error message. I fed the error message to ChatGPT, but it was apparently too vague for it to really understand. It said the following: "Switch to a different data type: Instead of using 'Half', you could try using 'Float' (also known as float32) or 'Double' (also known as float64). Depending on your specific use case, this may or may not be an option. Use a GPU with support for float16 operations..."
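For what it's worth, here is a minimal sketch in plain PyTorch (not the webui's own code) of what those 'Half' errors mean and why forcing float32 with --precision full --no-half avoids them; whether the half-precision call actually raises depends on the PyTorch build:

```python
import torch

# Many CPU kernels have no float16 ("Half") implementation, so running a
# model in half precision on a CPU/APU can fail with errors like the ones
# quoted above.  Behaviour varies by PyTorch version.
layer = torch.nn.Linear(8, 8).half()
x = torch.randn(1, 8, dtype=torch.float16)

try:
    layer(x)  # on many CPU builds: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
except RuntimeError as e:
    print(e)

# Casting everything to float32, which is what --precision full --no-half
# ask the webui to do, sidesteps the missing half-precision CPU kernels,
# at the cost of more memory and slower generation.
layer_fp32 = layer.float()
print(layer_fp32(x.float()).dtype)  # torch.float32
```

My guess is that easydiffusion already falls back to full precision on this kind of hardware, while AUTOMATIC1111 defaults to half unless told otherwise.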
I'm aware of the hardware limitations I have (an AMD APU, no NVIDIA card), but as I said, it worked with easydiffusion, so I'm not inclined to just write it off as a lack of hardware. If anyone has any ideas or can point me to where I should look, I'd appreciate it.