-
Pre-installed models are only present in the All-in-One (AIO) images. You can use the standard images, which do not have any pre-installed models; see https://localai.io/docs/reference/container-images/ for a list depending on your hardware.
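For illustration, the difference is only the image tag; the tag names below are assumptions and should be checked against the linked container-images page:

```yaml
services:
  localai:
    # All-in-One image (ships with pre-installed models):
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    # Standard image (no pre-installed models) -- assumed tag, verify on the docs page:
    image: localai/localai:latest-gpu-nvidia-cuda-12
```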
-
Seconding this for @mudler, but what about adding SOTA models like LLaMA3, or maybe CodeQwen + DeepSeekCoder, directly from HuggingFace?
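For reference, a model from HuggingFace can already be used with the standard images by downloading its GGUF file into the mounted models directory and describing it with a small YAML config next to it. A minimal sketch (the model name, filename, and context size are placeholders, not verified values):

```yaml
# models/llama3-8b-instruct.yaml -- placed alongside the downloaded GGUF file
name: llama3-8b-instruct            # the name you pass as "model" in API requests
context_size: 8192                  # placeholder; set to what the model actually supports
parameters:
  model: Meta-Llama-3-8B-Instruct.Q4_K_M.gguf  # placeholder filename of the GGUF downloaded from HuggingFace
```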
-
Hi all,
can someone help me build a docker-compose.yaml file for a LocalAI setup that supports CUDA 12 WITHOUT pre-installed models? Thanks a lot to all!
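A minimal sketch of such a docker-compose.yaml, assuming the standard (non-AIO) CUDA 12 image mentioned above; the image tag and the /build/models mount path are assumptions to verify against https://localai.io/docs/reference/container-images/ and your LocalAI version, and the GPU reservation requires the NVIDIA Container Toolkit on the host:

```yaml
services:
  localai:
    # Standard (non-AIO) image: CUDA 12 acceleration, no pre-installed models.
    # Tag name is an assumption -- check the container-images reference.
    image: localai/localai:latest-gpu-nvidia-cuda-12
    ports:
      - "8080:8080"              # LocalAI's default API port
    volumes:
      - ./models:/build/models   # empty local models directory (mount path may differ per version)
    environment:
      - DEBUG=true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Starting it with `docker compose up -d` should then expose the API on http://localhost:8080 with an empty models directory.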