I was going to file this as an issue, but it's probably not a problem with ONNX itself so much as with the perennial mess that is getting CUDA versions to line up.
Any ideas?
Describe the bug
Attempting to optimise a model with ORTOptimizer throws this exception:
Traceback (most recent call last):
...
File "/home/ubuntu/elevate.jobtitles/.mamba/envs/jobtitles/lib/python3.7/site-packages/optimum/onnxruntime/optimization.py", line 142, in export
only_onnxruntime=optimization_config.optimize_with_onnxruntime_only,
File "/home/ubuntu/elevate.jobtitles/.mamba/envs/jobtitles/lib/python3.7/site-packages/onnxruntime/transformers/optimizer.py", line 238, in optimize_model
disabled_optimizers=disabled_optimizers,
File "/home/ubuntu/elevate.jobtitles/.mamba/envs/jobtitles/lib/python3.7/site-packages/onnxruntime/transformers/optimizer.py", line 103, in optimize_by_onnxruntime
onnx_model_path, sess_options, providers=["CUDAExecutionProvider"], **kwargs
File "/home/ubuntu/elevate.jobtitles/.mamba/envs/jobtitles/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/ubuntu/elevate.jobtitles/.mamba/envs/jobtitles/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 395, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:122 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:116 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true] CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version ; GPU=0 ; hostname=ip-172-31-33-171 ; expr=cudaSetDevice(info_.device_id);
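For what it's worth, `CUDA failure 35` in that traceback is `cudaErrorInsufficientDriver`: the installed NVIDIA driver is older than the CUDA runtime the `onnxruntime-gpu` wheel was built against. One quick sanity check is to compare the "CUDA Version" shown in the `nvidia-smi` banner (the newest CUDA runtime the driver supports) against the runtime version of the ONNX Runtime build. A minimal sketch of that comparison; the function names and the sample banner line are illustrative, not part of any library:

```python
import re

def max_cuda_supported(nvidia_smi_header: str) -> tuple:
    """Extract the highest CUDA runtime version the installed driver
    supports, as reported in the nvidia-smi banner line."""
    m = re.search(r"CUDA Version:\s*(\d+)\.(\d+)", nvidia_smi_header)
    if m is None:
        raise ValueError("could not find 'CUDA Version:' in nvidia-smi output")
    return (int(m.group(1)), int(m.group(2)))

def driver_supports_runtime(nvidia_smi_header: str, runtime: tuple) -> bool:
    """Error 35 means the driver is older than the runtime: the driver's
    reported CUDA version must be >= the runtime's version. (Minor-version
    compatibility can relax this within a major version, but only if the
    driver meets that major version's baseline.)"""
    return max_cuda_supported(nvidia_smi_header) >= runtime

# Example nvidia-smi banner line (made up for illustration):
sample = "| NVIDIA-SMI 470.57.02   Driver Version: 470.57.02   CUDA Version: 11.4 |"
print(driver_supports_runtime(sample, (11, 4)))  # True: driver is new enough
print(driver_supports_runtime(sample, (11, 6)))  # False: would reproduce failure 35
```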
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 22.04
ONNX Runtime installed from (source or binary): pip
I'm sure some version of something somewhere is not quite what something else was expecting.
It's just ridiculous that in 2022 working with CUDA is still such a mess.
According to https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements, "Onnx Runtime built with CUDA 11.4 should be compatible with any CUDA 11.x version" — so this should work? CUDA 11.4 isn't even available from NVIDIA's Ubuntu repo any more.
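The claim quoted from the docs is CUDA's "minor version compatibility": from CUDA 11.0 on, a binary built against one 11.x runtime should load against any other 11.x runtime, provided the driver itself meets the 11.x baseline. A rough sketch of that rule (a hypothetical helper, not a real API):

```python
def runtimes_compatible(built_with: tuple, installed: tuple) -> bool:
    """CUDA minor version compatibility (CUDA >= 11.0): a binary built
    against one 11.x runtime should run against any other 11.x runtime,
    assuming the driver meets that major version's baseline. So: same
    major version => compatible."""
    return built_with[0] == installed[0]

print(runtimes_compatible((11, 4), (11, 7)))  # True: 11.4 build on an 11.7 install
print(runtimes_compatible((11, 4), (10, 2)))  # False: major versions differ
```

Note that error 35 is about the driver, not the toolkit, so minor-version compatibility wouldn't help here: the driver on the machine appears to predate the 11.x baseline, and updating the NVIDIA driver (rather than downgrading the toolkit) is likely the fix.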
Many thanks.