
Add support for PyTorch ≥2.5 with CUDA 12.x to enable RTX 50xx compatibility #32

@polecatspeaks

Description:
I’m running ai-dock on a Windows 11 host (Docker Desktop + WSL2) with an NVIDIA RTX 5070/3060 combo. The current PyTorch container (e.g. pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime) in ai-dock isn’t compatible with the RTX 50xx series under CUDA 11.x. As a result, GPU passthrough fails and inference falls back to CPU.

It would be awesome if ai-dock could offer (or document) an official PyTorch 2.5+ Docker image with CUDA 12.x support, so that:

  • docker run --rm --gpus all pytorch/pytorch:2.5.0-cuda12.1-cudnn8-runtime nvidia-smi works out of the box
  • All existing ai-dock services (WebUI, ComfyUI, Deforum, Whisper) can run with full GPU acceleration on RTX 50xx cards

Environment:

  • Host OS: Windows 11 Pro, build 22621 (Docker Desktop w/ WSL2 backend)
  • GPUs: NVIDIA RTX 5070 + RTX 3060
  • Docker: 24.0.6, Docker Compose v2.20.2
  • Current ai-dock branch: main (commit abcdef1234)
  • Observed PyTorch image: pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime

Steps to reproduce:

  1. git clone https://github.com/bmadore/ai-dock.git && cd ai-dock
  2. Configure Docker Desktop to expose WSL2 GPUs.
  3. docker run --rm --gpus all pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime nvidia-smi
  4. Observe that the container fails to detect the RTX 5070 card (falls back to CPU).
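For comparison, the equivalent check against the CUDA 12.x image should detect both cards. A sketch, using the `2.5.0-cuda12.1-cudnn8-runtime` tag already mentioned above; it requires a GPU-enabled Docker host with WSL2 passthrough configured, so it cannot be run without that hardware:

```
# Same check, newer image (tag taken from the request above).
# Requires Docker Desktop with WSL2 GPU passthrough enabled.
docker run --rm --gpus all pytorch/pytorch:2.5.0-cuda12.1-cudnn8-runtime \
  python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"
```

If passthrough works, `torch.cuda.is_available()` should print `True` and the device count should be 2 for the 5070 + 3060 pair.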

Expected behavior:

  • The container lists both RTX 50xx GPUs under nvidia-smi
  • ai-dock services leverage CUDA 12.x for fast, GPU-accelerated inference

Actual behavior:

  • GPU passthrough errors or shows no available devices
  • Inference degrades to CPU-only, which is impractical for large models

Possible solutions:

  • Update ai-dock’s default PyTorch image to pytorch/pytorch:2.5.0-cuda12.1-cudnn8-runtime (or later)
  • Provide an environment variable or config option to override the PyTorch image tag
  • Document a recommended custom image for RTX 50xx support
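As a sketch of what the override option in the second bullet could look like: a Compose fragment that reads the image tag from an environment variable with a CUDA 12.x default. The `PYTORCH_IMAGE` variable name and the `pytorch` service name are assumptions for illustration, not existing ai-dock configuration:

```yaml
# docker-compose.yml fragment -- hypothetical override mechanism.
# PYTORCH_IMAGE and the service name are illustrative, not current ai-dock config.
services:
  pytorch:
    image: ${PYTORCH_IMAGE:-pytorch/pytorch:2.5.0-cuda12.1-cudnn8-runtime}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Users on older cards could then pin a CUDA 11.x tag with `PYTORCH_IMAGE=... docker compose up`, while RTX 50xx users get the newer default.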

Additional context:
I’ve verified that PyTorch 2.5 Docker images with CUDA 12.x correctly support my RTX 50xx hardware under WSL2. Happy to test and contribute any patches or validation steps!

Thanks for the work on this stack; I appreciate all the effort and dev time that has gone into it.
— Chris
