Can't get Stable Diffusion DirectML to work on my GPU #14941
CanisDirusPrime asked this question in Q&A (unanswered).
Reply: You read every guide except this one. Re-install everything from scratch by following the steps from the link. When you are done installing, edit the …
I am running a 5800X3D CPU with 32 GB of RAM and an AMD RX 6950 XT. I've tried every explanation I've found, both on the AMD forums and here, to get Stable Diffusion DirectML to run on my GPU, but it just refuses to do so. I've been trying for a week now, to no avail. I also don't have an ONNX tab or an Olive tab in the UI (both of which I've downloaded several times). Any help would be greatly appreciated. Here are links to some of the things I've tried (just a few of them, btw):
https://community.amd.com/t5/ai/how-to-automatic1111-stable-diffusion-webui-with-directml/ba-p/649027
https://www.youtube.com/watch?v=mKxt0kxD5C0
https://github.com/microsoft/Olive/blob/main/examples/directml/stable_diffusion/README.md#setup
And, of course, the READMEs for both the standard SD and DirectML versions.
I've also downloaded the Stable-Diffusion-WebUI-DirectML, k-diffusion, and Stability-AI stablediffusion extensions. None of these seem to make a difference.
I've enabled the ONNX runtime in settings, enabled Olive in settings (along with all the required checkboxes), and added the sd_unet checkpoint option (whatever you call it) under quick settings.
My webui-user.bat COMMANDLINE_ARGS are: --no-half --precision full --no-half-vae --medvram --opt-sub-quad-attention --opt-split-attention-v1 --skip-torch-cuda-test. I tried adding --use-directml, but that just throws an error and the UI won't start at all.
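For reference, here's roughly what my webui-user.bat looks like. This is a sketch from memory rather than an exact copy; the commented-out --use-directml line is the one that breaks startup when I enable it:

@echo off
rem webui-user.bat (sketch of my setup, not an exact copy)

set PYTHON=
set GIT=
set VENV_DIR=

rem the arguments I currently launch with
set COMMANDLINE_ARGS=--no-half --precision full --no-half-vae --medvram --opt-sub-quad-attention --opt-split-attention-v1 --skip-torch-cuda-test

rem enabling this line is what makes the webui throw an error and refuse to start
rem set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% --use-directml

call webui.bat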
I've also tried it with and without xformers.
The one thing that gives me hope is that my GPU usage flashes up to between 4 and 7% for a split second, then the CPU goes to 40-50% and sits there for 3-5 minutes before I get the image.
Finally, I have tried both the standard stable-diffusion-webui and the stable-diffusion-webui-directml versions with all of the options, to no avail. I've tried running them from Miniconda and Python 3.10.6 directly, and in different environments (I have a couple: olive-env and automatic_dmlplugin, mainly).
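When I rebuild one of those environments, these are roughly the steps I follow (a sketch, not an exact transcript; I'm assuming the DirectML fork is lshqqytiger's, since that matches my folder name, and that the first launch of webui-user.bat creates the venv and installs the requirements):

rem from an Anaconda/Miniconda prompt
conda create -n automatic1111_olive python=3.10 -y
conda activate automatic1111_olive

rem fresh clone of the DirectML fork (assumed URL)
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
cd stable-diffusion-webui-directml

rem first launch builds the venv and installs torch and the other requirements
webui-user.bat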
Here's the console output when I launch it from my conda prompt:
(automatic1111_olive) C:\stable-diffusion-webui-directml>webui-user.bat
venv "C:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Version: 1.7.0
Commit hash: 7ed2ff1
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Invalid version: 'onnxruntime-directml'
Warning: Failed to install onnxruntime-directml package, DirectML extension will not work.
Launching Web UI with arguments: --no-half --precision full --no-half-vae --medvram --opt-sub-quad-attention --opt-split-attention-v1 --skip-torch-cuda-test
Style database not found: C:\stable-diffusion-webui-directml\styles.csv
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
C:\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
C:\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
ControlNet preprocessor location: C:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2024-02-16 15:33:08,478 - ControlNet - INFO - ControlNet v1.1.440
2024-02-16 15:33:08,592 - ControlNet - INFO - ControlNet v1.1.440
*** Error loading script: img2img.py
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-directml\modules\scripts.py", line 469, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "C:\stable-diffusion-webui-directml\modules\script_loading.py", line 10, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\stable-diffusion-webui-directml\extensions\stablediffusion\scripts\img2img.py", line 16, in <module>
    from imwatermark import WatermarkEncoder
ModuleNotFoundError: No module named 'imwatermark'
*** Error loading script: txt2img.py
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-directml\modules\scripts.py", line 469, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "C:\stable-diffusion-webui-directml\modules\script_loading.py", line 10, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\stable-diffusion-webui-directml\extensions\stablediffusion\scripts\txt2img.py", line 14, in <module>
    from imwatermark import WatermarkEncoder
ModuleNotFoundError: No module named 'imwatermark'
Loading weights [ec41bd2a82] from C:\stable-diffusion-webui-directml\models\Stable-diffusion\photon_v1.safetensors
2024-02-16 15:33:09,002 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: C:\stable-diffusion-webui-directml\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 4.9s (prepare environment: 7.8s, initialize shared: 1.4s, load scripts: 1.1s, create ui: 0.8s, gradio launch: 0.5s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 3.9s (load weights from disk: 1.0s, create model: 0.5s, apply weights to model: 2.0s, apply float(): 0.4s, calculate empty prompt: 0.1s).
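Given the warnings in that log (the "Invalid version" / failed onnxruntime-directml install, the missing NVIDIA driver message, and the missing imwatermark module), these are the checks and fixes I can run from inside the venv. This is a sketch under a few assumptions: that the fork relies on the torch-directml and onnxruntime-directml packages, and that invisible-watermark is the PyPI package that provides the imwatermark module.

rem run from C:\stable-diffusion-webui-directml

rem which ONNX Runtime execution providers are actually available (should include DmlExecutionProvider)
venv\Scripts\python.exe -c "import onnxruntime; print(onnxruntime.get_available_providers())"

rem does torch-directml see the RX 6950 XT at all
venv\Scripts\python.exe -c "import torch_directml; print(torch_directml.device_count(), torch_directml.device_name(0))"

rem reinstall the DirectML build of ONNX Runtime that the startup log failed to install
venv\Scripts\python.exe -m pip install --force-reinstall onnxruntime-directml

rem the imwatermark import errors in the extension scripts come from this package being absent
venv\Scripts\python.exe -m pip install invisible-watermark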
When generating a txt2img prompt (Flying golden dragon), this is the console output I see:
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:27<00:00, 7.35s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [02:24<00:00, 7.21s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [02:24<00:00, 7.27s/it]
And here's my sensor panel while this is going on:
And the image it produced (not a bad image, actually, for just 3 words):
If y'all need any other info to help me figure this out, I'm ready to provide it. Thanks in advance!