
Solutions for PyTorch installation failures ("CUDA is not available") #4

@EzioGuevara

Description


Environment

GPU: AMD Radeon RX 7900 XTX (gfx1100)
OS: Windows 11 + WSL2 Ubuntu 24.04
WSL Version: 2.6.2.0 (newer Linux kernels should also work; mine is 6.6.87.2-microsoft-standard-WSL2)
Driver: AMD Software Adrenalin Edition 32.0.22029.9039 for WSL2 (the latest gaming driver should also work; the exact driver version is not the critical factor)
Python: 3.12
ROCm: 7.1.1 (older releases such as 6.4.2 should also work)
PyTorch: 2.6.0+rocm6.4.2

Problem

ROCm installs successfully and rocminfo detects the GPU correctly, but PyTorch fails to see it:
Symptoms

```shell
# ROCm can see the GPU
$ rocminfo | grep "gfx"
  Name:                    gfx1100

# But PyTorch cannot
$ python3 -c "import torch; print(torch.cuda.is_available())"
False
```

Root Cause
PyTorch wheels bundle an incompatible libhsa-runtime64.so library that conflicts with WSL2's DirectX-based GPU passthrough (/dev/dxg). WSL2 doesn't use traditional /dev/kfd devices.
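A quick way to confirm which device node the kernel actually exposes (a minimal sketch, not part of the original report): WSL2 surfaces the GPU through the DirectX passthrough node `/dev/dxg`, while bare-metal ROCm uses `/dev/kfd`.

```shell
# Check which GPU device node exists: WSL2 exposes /dev/dxg via DirectX
# passthrough, while bare-metal ROCm installs expose /dev/kfd.
gpu_node="none"
[ -e /dev/kfd ] && gpu_node="/dev/kfd"   # native ROCm KFD device
[ -e /dev/dxg ] && gpu_node="/dev/dxg"   # WSL2 DirectX passthrough
echo "GPU device node: $gpu_node"
```

Under a working WSL2 setup this should report `/dev/dxg`; seeing `/dev/kfd` instead indicates a bare-metal-style ROCm stack.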

Solution

After installing the PyTorch wheels, remove the incompatible bundled HSA runtime library:

```shell
location=$(pip show torch | grep Location | awk -F ": " '{print $2}')
cd ${location}/torch/lib/
rm libhsa-runtime64.so
```
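A reversible variant of the same fix (a sketch, not the author's exact steps): rename the bundled HSA runtime instead of deleting it, so the original file can be restored by renaming it back.

```shell
# Disable the bundled HSA runtime reversibly instead of deleting it.
location=$(pip show torch 2>/dev/null | awk -F ': ' '/^Location/ {print $2}')
lib="${location}/torch/lib/libhsa-runtime64.so"
status="skipped"    # nothing to do if the library is not present
if [ -f "$lib" ]; then
    mv "$lib" "${lib}.disabled"
    status="renamed"
fi
echo "libhsa-runtime64.so: $status"
```

To undo the change, rename the `.disabled` file back to `libhsa-runtime64.so`.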

However, this method only works with the ROCm 6.4.2 wheels; it is not compatible with the latest release, 7.1.1 (as of December 15, 2025).

Result

```shell
$ python3 -c "import torch; print(torch.cuda.is_available())"
True
$ python3 -c "import torch; print(torch.cuda.get_device_name(0))"
AMD Radeon RX 7900 XTX
```
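Beyond the availability flag, it is worth running a small tensor operation to confirm the runtime actually computes on the GPU. The check below is a guarded sketch (not from the original report) that degrades gracefully when torch or a GPU is absent; note that ROCm builds of PyTorch report through the `torch.cuda` API.

```shell
# Guarded end-to-end check: run a tiny tensor op on the GPU if possible.
result=$(python3 - <<'EOF' 2>/dev/null
import importlib.util
if importlib.util.find_spec("torch") is None:
    print("torch not installed")
else:
    import torch
    if torch.cuda.is_available():  # ROCm builds also report via the cuda API
        x = torch.ones(4, device="cuda")
        print(f"sum on {torch.cuda.get_device_name(0)}: {x.sum().item()}")
    else:
        print("no GPU visible")
EOF
)
echo "$result"
```

On the setup above this should print the sum computed on the RX 7900 XTX.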
