AMD GPU with WSL2 #6902
ciufflegend
started this conversation in
General
Replies: 1 comment
-
WSL2 does not support AMD GPUs.
-
Hi, I have a Radeon 380X and I'm trying to do GPU compute with WSL2 on Ubuntu 22.04.
I get the traceback below; has anyone been able to make an AMD GPU work with WSL2?
```
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2/' python launch.py --precision full --no-half
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
Commit hash: e0e80050091ea7f58ae17c69f31d1b5de5e0ae20
Installing torch and torchvision
Traceback (most recent call last):
  File "/mnt/d/AI_Art/stable-diffusion-webui/launch.py", line 314, in <module>
    prepare_environment()
  File "/mnt/d/AI_Art/stable-diffusion-webui/launch.py", line 228, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "/mnt/d/AI_Art/stable-diffusion-webui/launch.py", line 89, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "/mnt/d/AI_Art/stable-diffusion-webui/launch.py", line 65, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/mnt/d/AI_Art/stable-diffusion-webui/venv/bin/python" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: /mnt/d/AI_Art/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
```
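The error message itself names the usual CPU-only fallback: passing `--skip-torch-cuda-test` makes `launch.py` skip the GPU probe that raises the `AssertionError`. A minimal sketch, assuming this version of `launch.py` reads extra flags from the `COMMANDLINE_ARGS` environment variable (as the error text suggests) and that you accept slow, CPU-only generation since the 380X stays unused:

```shell
# CPU-only fallback sketch: skip the failing torch.cuda.is_available() check.
# --skip-torch-cuda-test, --precision full and --no-half all come straight
# from the error message and the original command; nothing here enables the GPU.
export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"
python launch.py
```

This only silences the check; the underlying `hipErrorInvalidDevice` means the ROCm build of PyTorch still cannot see the card from inside WSL2.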