
RuntimeError:CUDA error:API call is not supported in the installed CUDA driver #82

@LikeLidoA

Description


hello @chenwydj

I'm stuck at step one (sobs).

My environment:
Hardware: A100-40G
NVIDIA driver: 450.xxx (I forgot the exact version)
CUDA: 11.0
Python: 3.6.9
cuDNN: 8.0.4
TensorRT: 7.2.5.1

Before running "train_search.py" I checked all the requirements, and the TensorRT samples run fine.

When I run "train_search.py", the training part works: I get a folder named like "search-pretrain-256x512_F12.L16_batch3-20220608-xxxxxx" with output inside, and the terminal shows that all 20 epochs have finished.

But something goes wrong at the validation step. The terminal prints "use TensorRT for latency test" and then:

RuntimeError: CUDA error: API call is not supported in the installed CUDA driver

Is updating the driver the only way to solve this problem?
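This error typically means the installed driver is older than the minimum required by the CUDA runtime the program was built against. A minimal sketch (not from this repo) of the version check you can do by hand, assuming NVIDIA's published minimum of driver 450.36.06 for CUDA 11.0 on Linux (compare against the version reported by `nvidia-smi`):

```python
# Check whether an installed NVIDIA driver version string meets the
# minimum driver version required by CUDA 11.0 on Linux (450.36.06,
# per NVIDIA's CUDA/driver compatibility table).

MIN_DRIVER_FOR_CUDA_11_0 = (450, 36, 6)

def parse_driver_version(version: str) -> tuple:
    """Turn a driver string like '450.51.05' into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))

def driver_supports_cuda_11_0(version: str) -> bool:
    """True if the driver is new enough for the CUDA 11.0 runtime."""
    return parse_driver_version(version) >= MIN_DRIVER_FOR_CUDA_11_0

# Example: a 450.xx driver older than 450.36.06 would hit this error.
print(driver_supports_cuda_11_0("450.51.05"))  # True
print(driver_supports_cuda_11_0("450.36.01"))  # False
```

If your 450.xxx driver is below that minimum, updating it (or using a CUDA build matching the driver) is the usual fix; this sketch only checks the version numbers, it does not query the GPU.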
