RX 6600 is not used #6184
jilek-josef started this conversation in General
-
I have encountered the same issue.
-
Hi, I am on bare-metal Ubuntu and I successfully managed to set up SD Web UI, aside from one thing: my CPU does all the hard work. I launch it with `TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py --no-half --skip-torch-cuda-test`, however my GPU (RX 6600) is not used. Please help.
Log:

```
(venv) pc@pc:~/stable-diffusion-webui$ TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py --no-half --skip-torch-cuda-test
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
Commit hash: e672cfb
Installing requirements for Web UI
Launching Web UI with arguments: --no-half
/home/pc/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0
Warning: caught exception 'Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice', memory monitor disabled
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [3e1a125f] from /home/pc/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-full-opt.ckpt
Applying cross attention optimization (InvokeAI).
Model loaded.
Textual inversion embeddings loaded(0):
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch()
```
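The `Error 101: hipErrorInvalidDevice` raised during `hipGetDeviceCount()` typically means the HIP runtime cannot open the GPU device nodes at all, so PyTorch silently falls back to the CPU. One common cause on Ubuntu is the user not belonging to the groups that own `/dev/kfd` and `/dev/dri/render*`. A quick diagnostic worth running (a sketch, assuming stock Ubuntu group names `video` and `render`):

```shell
# Check which groups the current user belongs to; 'video' and 'render'
# should appear for ROCm GPU access on a default Ubuntu install.
groups

# Inspect the ROCm device nodes and their group ownership.
# (|| true keeps this non-fatal on machines without a ROCm GPU.)
ls -l /dev/kfd /dev/dri/render* 2>/dev/null || true

# If the groups are missing, add them and log out/in again:
# sudo usermod -aG video,render "$USER"
```

If the group membership is already correct, the next thing to verify is that the installed torch wheel is actually the ROCm build rather than a CPU-only or CUDA one (`pip show torch` should report a version containing `rocm`).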