
CUDA 11 Compatibility #7

@jamesheatonrdm

Description


I want to train a model on my GPU, a GeForce 3060, which only supports CUDA 11 and above.

If I use the CUDA (10.1) and PyTorch (0.4.1) versions specified in the requirements, I cannot send anything to the GPU. Calling .to(device) hangs Python for several minutes and then fails with the error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
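For context, the GeForce 3060 is an Ampere card (compute capability 8.6), and each CUDA toolkit can only generate code for GPU architectures up to a cutoff, so CUDA 10.1 cannot produce kernels the card can run. A minimal sketch of that version check (the table and helper below are my own summary of NVIDIA's release notes, not a PyTorch or CUDA API):

```python
# Minimum CUDA toolkit version needed to compile for a given GPU
# compute capability (my own summary, not exhaustive or authoritative).
MIN_CUDA_FOR_ARCH = {
    (7, 0): (9, 0),   # Volta  (e.g. V100)
    (7, 5): (10, 0),  # Turing (e.g. RTX 20xx)
    (8, 6): (11, 1),  # Ampere (e.g. RTX 30xx, including the 3060)
}

def cuda_supports_gpu(cuda_version, capability):
    """Return True if the given CUDA toolkit can target this GPU."""
    required = MIN_CUDA_FOR_ARCH.get(capability)
    return required is not None and cuda_version >= required

# CUDA 10.1 cannot target the 3060's sm_86, so kernels never launch
# and cuDNN surfaces CUDNN_STATUS_NOT_SUPPORTED instead.
print(cuda_supports_gpu((10, 1), (8, 6)))  # False
print(cuda_supports_gpu((11, 1), (8, 6)))  # True
```

This is consistent with the hang-then-fail behaviour: the toolkit has no binary or PTX code for the card, so the first GPU call cannot succeed.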

If I instead install the version of PyTorch that matches my CUDA version, following the instructions here: https://pytorch.org/get-started/locally/, then running the trainer fails with:
ninja: build stopped: subcommand failed

This is caused by multiple errors in the C++ files due to deprecated APIs, e.g.:
/home/james/ML/Projects/Self-supervised-Monocular-Trained-Depth-Estimation-using-Self-attention-and-Discrete-Disparity-Volum/venv/lib/python3.6/site-packages/torch/include/ATen/Functions.h:467:22: note: no known conversion for argument 1 from ‘at::DeprecatedTypeProperties’ to ‘c10::IntArrayRef {aka c10::ArrayRef<long int>}’

My questions are as follows:

Is it possible to build the version of PyTorch this project requires (0.4.1) with CUDA 11?

If not, is the only fix to go through and resolve all of the deprecation errors?

Any help is appreciated.
