I have solved this problem by updating to the latest build of ONNX Runtime 1.14.

There is still a slight problem: it becomes slower than CUDA again if you change the batch size of your input, whereas the CUDA execution provider lets you change the batch size with no penalty.
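For illustration only, here is a minimal sketch of one possible workaround: pinning the model's free batch dimension to a fixed value and padding smaller batches up to it, so the DirectML execution provider compiles the graph once for a single shape. It assumes the Python `onnxruntime-directml` package, a hypothetical model file `model.onnx`, an input named `input`, and a symbolic batch dimension named `batch_size`; the thread does not say which language or model the original poster was using.

```python
import numpy as np
import onnxruntime as ort

FIXED_BATCH = 4  # assumed fixed batch size for this sketch

sess_options = ort.SessionOptions()
# Replace the symbolic "batch_size" dimension with a concrete value so the
# DirectML EP does not have to re-specialize the graph when the batch changes.
sess_options.add_free_dimension_override_by_name("batch_size", FIXED_BATCH)

session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    sess_options=sess_options,
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

def run_padded(batch: np.ndarray) -> np.ndarray:
    """Pad a smaller batch up to FIXED_BATCH, run, and trim the results."""
    n = batch.shape[0]
    if n < FIXED_BATCH:
        pad = np.zeros((FIXED_BATCH - n, *batch.shape[1:]), dtype=batch.dtype)
        batch = np.concatenate([batch, pad], axis=0)
    outputs = session.run(None, {"input": batch})  # "input" is an assumed input name
    return outputs[0][:n]
```

The trade-off is wasted compute on the padding rows, but every call sees the same input shape, which avoids the per-shape penalty described above.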

Answer selected by elephantpanda
Labels: ep:DML (issues related to the DirectML execution provider)