I don't see Apple GPUs listed on https://github.com/mudler/LocalAGI?tab=readme-ov-file#%EF%B8%8F-hardware-configurations. Is it not a supported hardware configuration, and will it fall back to CPU? https://localai.io/features/gpu-acceleration/ says Metal is still in development. Does anyone know what the status is?

Replies: 2 comments
-
@mudler ❤️
-
The support for Apple silicon is a bit sparse at the moment. One of the main reasons is that I don't have such hardware to test on, so I can only test via CI, which makes it really hard to iterate on development. That being said, the llama.cpp backend should work, but some users reported missing-library errors, which are fixed by following the steps used to compile it in the CI. I plan to dedicate a cycle to bringing Metal and Apple support on par, but, as I said, it is a slow process without physical hardware to test on, and there has already been a lot of back and forth because of this.
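
For reference, a minimal sketch of what "compiling it the way the CI does" typically looks like for llama.cpp on an Apple Silicon machine, assuming the standard upstream CMake flow; the authoritative commands are whatever the repository's CI workflow actually runs:

```sh
# Sketch only, assuming the standard upstream llama.cpp CMake build;
# check the project's CI workflow files for the authoritative steps.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# GGML_METAL defaults to ON on macOS; passed explicitly here for clarity.
cmake -B build -DGGML_METAL=ON
cmake --build build --config Release -j
```

If the binaries still report missing libraries afterwards, check that the shared libraries produced under `build/` end up next to the backend binary or on the loader path, which is presumably the kind of issue the CI steps address.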