How can I make onnxruntime_server work with the CUDA backend? #7053
Unanswered
TaylorHere
asked this question in Other Q&A
Replies: 2 comments 1 reply
-
It looks like I built version 1.7.0 against a lower CUDA version. I'm now trying the CUDA and cuDNN versions referenced in the documentation.
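A quick way to spot this kind of mismatch is to compare what is actually installed on the machine with the CUDA/cuDNN versions listed in the onnxruntime release notes for the version you built. A minimal diagnostic sketch (standard NVIDIA tooling, nothing onnxruntime-specific):

```shell
# Toolkit version the build tools will compile against
nvcc --version

# Driver version and the highest CUDA runtime it supports
nvidia-smi

# Confirm cuDNN is visible to the dynamic loader
ldconfig -p | grep libcudnn
```

If the toolkit reported by `nvcc` is older than what the onnxruntime release expects, the CUDA execution provider can fail to load even though the build itself succeeds.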
1 reply
-
onnxruntime server is deprecated; we no longer support it.
0 replies
-
I built the onnxruntime_server binary from the master branch inside a container built from Dockerfile.ubuntu_server,
following the Server Build Instructions.
Then I deployed the binary as
But I found there is no GPU memory or GPU utilization at all, so it does not seem to be running on the CUDA backend.
I also tried building with the -DUSE_CUDA=1 flag, but got output like:
How can I enable CUDA support?
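For what it's worth, onnxruntime's CUDA support is normally enabled through the top-level build.sh wrapper rather than a raw CMake define like -DUSE_CUDA=1, which the CMake project may silently ignore. A hedged sketch, assuming the 1.x-era build script where the server target still existed (the `--build_server` flag in particular should be checked against the branch you are on):

```shell
# Sketch: enable the CUDA execution provider at build time.
# Passing the CUDA and cuDNN locations explicitly avoids picking up
# a mismatched toolkit from the container's default paths.
./build.sh --config Release \
    --use_cuda \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/lib/x86_64-linux-gnu \
    --build_server   # assumption: server build flag on the 1.x branches
```

Afterward, GPU memory usage in `nvidia-smi` while serving a request is a reasonable smoke test that the CUDA provider is actually active.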