RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input. #580

@Sumsarium

Description

Hi,
Since Medaka's switch from TensorFlow to PyTorch, we've been encountering errors when running on a GPU (though the timing may simply coincide with the switch).

The error occurs primarily with metagenomic samples, while:

  • Running on CPU works without problems (but takes days instead of hours).
  • Prokaryotic isolate genomes (typically 1-5 contigs) run fine on GPU.

Here’s what we’ve tested so far:

  • Pip installation (tested with Medaka v2.2.0, v2.1.1, and v2.1.0)
  • Same issue via conda
  • Same issue across multiple metagenomic samples
  • The error seems related to small contigs
  • Tested on different servers (RTX 3090 and RTX 4090, each with 24 GB VRAM)
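In case it helps with triage, here are the runtime versions PyTorch itself reports inside the venv (these can differ from the system-wide nvcc and driver versions shown below):

```python
import torch

# Versions PyTorch was built against / loads at runtime; on this setup
# nvcc reports CUDA 12.0 while the pip wheels bundle the cu12.8 libraries.
print(torch.__version__)               # torch version, e.g. 2.9.1
print(torch.version.cuda)              # CUDA toolkit the wheel was built with (None on CPU-only builds)
print(torch.backends.cudnn.version())  # bundled cuDNN build (None if unavailable)
print(torch.cuda.is_available())       # whether a usable GPU is visible
```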

The error:

[11:59:14 - Feature] Pileup counts do not span requested region, requested u6790815ctg:0-3882, received 1-3881.
Traceback (most recent call last):
  File "/data/testrun/medaka_venv/bin/medaka", line 7, in <module>
    sys.exit(main())
             ^^^^^^
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/medaka/medaka.py", line 932, in main
    args.func(args)
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/medaka/prediction.py", line 238, in predict
    new_remainders = run_prediction(
                     ^^^^^^^^^^^^^^^
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/medaka/prediction.py", line 47, in run_prediction
    class_probs = model.predict_on_batch(batch)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/medaka/models.py", line 313, in predict_on_batch
    x = self.forward(x).detach().cpu()
        ^^^^^^^^^^^^^^^
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/medaka/architectures/gru.py", line 66, in forward
    x = self.gru(x)[0]
        ^^^^^^^^^^^
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/testrun/medaka_venv/lib/python3.12/site-packages/torch/nn/modules/rnn.py", line 1394, in forward
    result = _VF.gru(
             ^^^^^^^^
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
Failed to run medaka inference.
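For what it's worth, the error message points at cuDNN's RNN kernels rejecting a non-contiguous input tensor. A defensive pattern (sketched below with toy shapes, not Medaka's actual feature dimensions) is to force a contiguous copy before the GRU forward pass; I don't know whether this is where Medaka's batch becomes non-contiguous, it just illustrates the failure mode named in the message:

```python
import torch
import torch.nn as nn

# Toy GRU analogous to the call site in medaka/architectures/gru.py:
# batch-first input of shape (batch, seq_len, features).
gru = nn.GRU(input_size=10, hidden_size=16, batch_first=True)

# A transposed view is non-contiguous; on GPU, cuDNN's RNN kernels can
# reject such inputs with CUDNN_STATUS_NOT_SUPPORTED. (CPU accepts them,
# which would match the CPU runs working here.)
x = torch.randn(8, 10, 32).transpose(1, 2)  # shape (8, 32, 10)
assert not x.is_contiguous()

# Workaround: materialise a contiguous copy before the forward pass.
out, _ = gru(x.contiguous())
print(out.shape)  # torch.Size([8, 32, 16])
```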

Installed via pip:

Package                  Version
------------------------ -----------
certifi                  2025.11.12
cffi                     2.0.0
charset-normalizer       3.4.4
cloudpickle              3.1.2
edlib                    1.3.9.post1
filelock                 3.20.0
fsspec                   2025.12.0
h5py                     3.15.1
idna                     3.11
importlib_metadata       8.7.0
intervaltree             3.1.0
Jinja2                   3.1.6
MarkupSafe               3.0.3
medaka                   2.2.0
mpmath                   1.3.0
networkx                 3.6.1
numpy                    2.3.5
nvidia-cublas-cu12       12.8.4.1
nvidia-cuda-cupti-cu12   12.8.90
nvidia-cuda-nvrtc-cu12   12.8.93
nvidia-cuda-runtime-cu12 12.8.90
nvidia-cudnn-cu12        9.10.2.21
nvidia-cufft-cu12        11.3.3.83
nvidia-cufile-cu12       1.13.1.3
nvidia-curand-cu12       10.3.9.90
nvidia-cusolver-cu12     11.7.3.90
nvidia-cusparse-cu12     12.5.8.93
nvidia-cusparselt-cu12   0.7.1
nvidia-nccl-cu12         2.27.5
nvidia-nvjitlink-cu12    12.8.93
nvidia-nvshmem-cu12      3.3.20
nvidia-nvtx-cu12         12.8.90
ont-fast5-api            4.1.3
ont-mappy                2.28
ont-parasail             1.3.4
orjson                   3.11.5
packaging                25.0
pip                      25.3
progressbar33            2.4
pyabpoa                  1.5.5
pycparser                2.23
pysam                    0.23.0
pyspoa                   0.3.2
pyvers                   0.1.0
requests                 2.32.5
setuptools               80.9.0
sortedcontainers         2.4.0
sympy                    1.14.0
tensordict               0.10.0
toml                     0.10.2
torch                    2.9.1
tqdm                     4.67.1
triton                   3.5.1
typing_extensions        4.15.0
urllib3                  2.6.2
wurlitzer                3.1.1
zipp                     3.23.0

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        Off |   00000000:01:00.0  On |                  N/A |
|  0%   31C    P8             23W /  420W |     398MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            2811      G   /usr/lib/xorg/Xorg                      194MiB |
|    0   N/A  N/A            3226      G   /usr/bin/gnome-shell                     25MiB |
|    0   N/A  N/A            3934      G   .../teamviewer/tv_bin/TeamViewer         14MiB |
|    0   N/A  N/A            4717      G   /usr/bin/nautilus                        15MiB |
|    0   N/A  N/A            5765      G   /proc/self/exe                           65MiB |
+-----------------------------------------------------------------------------------------+

Linux 6.14.0-37-generic #37~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 20 10:25:38 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

I hope that you are able to assist. Any help would be much appreciated.
