MLX on Mac for model experimentation and Nvidia CUDA for deployment #2338
-
I saw some CUDA-related code in the library. Is this meant to support a workflow where you build a model on an Apple device for initial experimentation and then deploy it on Nvidia CUDA for large-scale training? That would avoid translating the code to PyTorch, which is very time-consuming when going from a JAX-style API to autograd.
Replies: 1 comment
-
It goes in both directions. You can develop locally on your Mac and run at scale on Nvidia GPUs in the cloud. Then you (and others) can take the same model you trained in the cloud, use it on your Mac, and potentially export / deploy it on any Apple platform that supports Metal.