MLX on CUDA #2422
awni announced in Announcements
4 comments · 7 replies
- Does MLX fall back to CPU for the missing operations? (1 reply)
- What's the general usage of mlx with CUDA support? Macs with external GPUs? (2 replies)
- Hello, thanks for the hard work! Any chance it will work on ARM CPUs soon (NVIDIA Jetson, Ampere Altra Max, etc.)? (2 replies)
- This is great, was waiting for a long time! (2 replies)
Quick Start
The first version of the MLX CUDA back-end is up on PyPI and ready to use. To get started:
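A minimal install sketch, assuming the CUDA build is published as the cuda extra of the mlx package on PyPI (quote the brackets so the shell does not expand them):

```
pip install "mlx[cuda]"
```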
The CUDA back-end works well for many common cases, with notable exceptions listed below. For example, you can use mlx-lm to generate text and train or fine-tune LLMs. To do so, first install mlx-lm:
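```
pip install mlx-lm
```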
Then generate text with:
```
mlx_lm.generate \
    --model meta-llama/Llama-3.2-3B-Instruct \
    --prompt "Write a story about Einstein" \
    -m 512
```
And LoRA fine-tune with:
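As a sketch, a typical mlx_lm.lora invocation looks like the following, where ./my_data is a hypothetical path to a dataset in the format mlx-lm expects:

```
mlx_lm.lora \
    --model meta-llama/Llama-3.2-3B-Instruct \
    --train \
    --data ./my_data \
    --iters 600
```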
Dependencies
Note the current minimum version requirements for the CUDA toolkit and driver.
Missing Operations
A few operations are not yet supported on the CUDA back-end, including:

- Some linear algebra operations (mx.linalg.svd, mx.linalg.eig, etc.)

As always, if you run into any problems, such as missing operations, bugs, or performance cliffs, please open an issue with steps to reproduce.
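Ops that lack a CUDA kernel can typically be run on the CPU device explicitly by passing a stream, which is the pattern MLX uses for device placement in general. A minimal sketch, assuming mx.linalg.svd still requires a CPU stream:

```python
import mlx.core as mx

a = mx.random.normal((4, 4))

# Explicitly place the unsupported op on the CPU device via a stream.
u, s, vt = mx.linalg.svd(a, stream=mx.cpu)
mx.eval(u, s, vt)

print(s)  # singular values of a
```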