What is the correct way to create an OrtValue from a pytorch tensor on CUDA? #7529
-
In Python, if I have a numpy array, I can create an OrtValue from it. But if I have a torch tensor that already lives on CUDA, is there a way to create the OrtValue directly from the torch tensor, such that the tensor doesn't have to be copied to CPU memory and then back to GPU memory?
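For context, a minimal sketch of the situation being described, assuming an onnxruntime build with CUDA support and an available CUDA device (this is illustrative only and not runnable without GPU libraries):

```python
import numpy as np
import onnxruntime as ort
import torch

# From a numpy array, an OrtValue can be placed directly on a CUDA device:
x = np.random.rand(2, 3).astype(np.float32)
ortvalue = ort.OrtValue.ortvalue_from_numpy(x, device_type="cuda", device_id=0)

# From a CUDA torch tensor, the only obvious route forces a
# GPU -> CPU -> GPU round trip, which is what the question wants to avoid:
t = torch.rand(2, 3, device="cuda")
ortvalue2 = ort.OrtValue.ortvalue_from_numpy(
    t.cpu().numpy(),      # copies the tensor to CPU memory...
    device_type="cuda",   # ...and then copies it back to the GPU
    device_id=0,
)
```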
Replies: 4 comments 7 replies
-
Hmm - I don't think we currently have a way to do this. What you want is to be able to provide a data pointer and metadata for that data pointer: type, shape, and device information. May I ask what your scenario is? Do you want to feed an output from Torch as input to an ONNX graph via ORT's Python interface, and are you thereby looking to reduce the round-trip latency for the data?
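One way to supply exactly that data pointer plus type/shape/device metadata is ORT's IOBinding API, which lets a session consume a device buffer in place. A hedged sketch, assuming a CUDA-enabled build and a model with input `"X"` and output `"Y"` (the model path and the input/output names are placeholders):

```python
import numpy as np
import onnxruntime as ort
import torch

sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

t = torch.rand(2, 3, device="cuda", dtype=torch.float32)
t = t.contiguous()  # bind_input expects a contiguous buffer

binding = sess.io_binding()
# Hand ORT the raw device pointer plus the metadata it needs --
# no copy through CPU memory.
binding.bind_input(
    name="X",                 # placeholder input name
    device_type="cuda",
    device_id=0,
    element_type=np.float32,
    shape=tuple(t.shape),
    buffer_ptr=t.data_ptr(),
)
binding.bind_output("Y", device_type="cuda")  # keep the result on the GPU

sess.run_with_iobinding(binding)
outputs = binding.get_outputs()  # list of OrtValue, still on device
```

Note that the torch tensor must stay alive (and unmodified) for the duration of the run, since ORT reads from its memory directly.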
-
On the other hand, is there a way to convert an OrtValue to a PyTorch tensor on the same device? I need to run some tensor operations provided by PyTorch before passing the OrtValue to the next module.
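For the reverse direction, the portable (but copying) route goes through numpy; a minimal sketch, assuming the OrtValue `ov` came out of a CUDA session:

```python
import torch

# OrtValue.numpy() copies the data to host memory; moving it back to
# CUDA costs a second copy, so this is NOT zero-copy.
cpu_array = ov.numpy()
t = torch.from_numpy(cpu_array).to("cuda")
```

A true same-device conversion would need shared-memory interop such as DLPack; whether and how that is exposed depends on the ORT version, so check the Python API docs for the release you are using.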
-
Has there been any progress made on this issue yet?
-
Hi all,