Alpaca doesn't have full support for AMD GPUs because of the Flatpak sandbox. If you need to run Ollama with GPU acceleration, I recommend running Ollama in a Docker container and connecting it to Alpaca using the 'Connect to remote instance' feature in the preferences dialog.
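For reference, the Docker setup described above can look roughly like this, following the upstream Ollama Docker instructions for AMD GPUs. This is a sketch: the `ollama/ollama:rocm` image tag and the device paths assume a working ROCm-capable host, so verify them against your own setup before relying on this.

```shell
# Run the ROCm build of Ollama in a container so it can use the AMD GPU.
# /dev/kfd and /dev/dri expose the GPU to the container (ROCm requires both);
# the named volume keeps downloaded models across container restarts.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

Once the container is up, point Alpaca's 'Connect to remote instance' setting at `http://localhost:11434`, the default port Ollama listens on.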
This is my first time using a local AI and Alpaca. Everything is working fine except I have one question: my GPU is not being used at all when the AI is generating an output. Typically, local AI models use the GPU for most of the processing, so I'm a bit confused about why it's only using my CPU. Why is that? If it helps, I am running Fedora Silverblue, the Alpaca Flatpak from the GNOME Store, and an AMD GPU.