Replies: 1 comment
I have not heard of anyone who has done so. As long as you can extract the raw memory underlying the PyTorch tensors (which I assume must be possible for custom CUDA kernels to be possible), it should not be so difficult to use Futhark's C API to operate on them.
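A minimal sketch of what that interop might look like, assuming a Futhark program compiled as a library with `futhark cuda prog.fut --library`, which generates `prog.c`/`prog.h`. The entry-point name `main` and the `[]f32 -> []f32` type are placeholders for illustration; the pointer passed in would come from the PyTorch side (e.g. `tensor.data_ptr()` via ctypes or a C++ extension):

```c
#include <stdint.h>
#include "prog.h"  /* generated by `futhark cuda prog.fut --library` */

/* `data` is the raw buffer extracted from the PyTorch tensor;
 * `out` must have room for `n` floats. */
void run_futhark(const float *data, int64_t n, float *out) {
    struct futhark_context_config *cfg = futhark_context_config_new();
    struct futhark_context *ctx = futhark_context_new(cfg);

    /* This copies the data into a Futhark-managed device array; the
     * futhark_new_raw_* variants can wrap an existing device pointer
     * directly and avoid the copy. */
    struct futhark_f32_1d *in = futhark_new_f32_1d(ctx, data, n);
    struct futhark_f32_1d *res;

    futhark_entry_main(ctx, &res, in);
    futhark_context_sync(ctx);
    futhark_values_f32_1d(ctx, res, out);

    futhark_free_f32_1d(ctx, in);
    futhark_free_f32_1d(ctx, res);
    futhark_context_free(ctx);
    futhark_context_config_free(cfg);
}
```

This goes host-to-host for simplicity; keeping the data on the GPU the whole time would require the raw-pointer variants of the array constructors, whose exact signatures depend on the backend and Futhark version.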
-
I could not find any resources about writing custom CUDA kernels for the PyTorch framework. Is this something that has been investigated?