How to use use_tf32 in Python CUDAExecutionProvider? #20193
Unanswered
DavorJordacevic asked this question in API Q&A
Replies: 2 comments 3 replies
-
Hey Davor,
```python
providers = [("CUDAExecutionProvider", {"use_tf32": 0})]
sess_options = ort.SessionOptions()
sess = ort.InferenceSession("my_model.onnx", sess_options=sess_options, providers=providers)
```

Hope that helps :)
0 replies
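Expanding on the snippet above: a hedged sketch of how the provider-options list fits together. The `(name, options_dict)` tuple form is how per-provider options are passed to `InferenceSession`; `use_tf32` itself requires onnxruntime-gpu 1.18 or later, `"my_model.onnx"` is a placeholder path, and the `build_providers` helper is illustrative, not part of any API.

```python
# Sketch: building a providers list with use_tf32 disabled, with a CPU
# fallback. Assumes onnxruntime-gpu >= 1.18 for the use_tf32 option;
# build_providers is a hypothetical helper for illustration.

def build_providers(use_tf32: bool):
    # Provider options are plain (name, dict) tuples; the dict values
    # are coerced to strings internally, so 0/1 integers are accepted.
    return [
        ("CUDAExecutionProvider", {"use_tf32": int(use_tf32)}),
        "CPUExecutionProvider",  # fallback if the CUDA EP is unavailable
    ]

providers = build_providers(use_tf32=False)

try:
    import onnxruntime as ort  # needs onnxruntime-gpu for the CUDA EP
    sess = ort.InferenceSession("my_model.onnx", providers=providers)
    print(sess.get_providers())  # which providers were actually applied
except Exception:
    pass  # onnxruntime missing or model path invalid; list is still valid
```

Listing `CPUExecutionProvider` last means the session can still be created on machines without a CUDA-capable GPU.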
-
Hello, thank you. Is that version (1.18) available to download? I am getting this error with onnxruntime-gpu==1.17.1.
Here is the code:
3 replies
-
Hello,
Can someone explain whether it is possible to set the use_tf32 flag via the Python API for CUDAExecutionProvider?
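For context on what the flag controls, here is an illustrative sketch (not ONNX Runtime code) of what TF32 does numerically: TF32 keeps float32's 8-bit exponent but reduces the mantissa to 10 bits, letting matmuls run faster on Ampere and newer GPUs at slightly reduced precision, and `use_tf32=0` forces full-precision FP32 math instead. The truncation below is a simplification; real hardware rounds to nearest.

```python
# Illustration: emulate TF32's reduced mantissa on a Python float.
import struct

def tf32_round(x: float) -> float:
    # Reinterpret the value as float32 bits and zero the low 13 mantissa
    # bits, leaving the 10-bit TF32 mantissa (truncation for simplicity).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & ~0x1FFF))[0]

print(tf32_round(1.0))  # exactly representable, unchanged
print(tf32_round(0.1))  # loses the low mantissa bits float32 keeps
```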