Using standalone llama.cpp (not embedded) #1091
Unanswered
hopperath-dot asked this question in Q&A
Replies: 0
With the latest update, I cannot seem to change the config to use my local llama.cpp server (built with the proper ROCm params). If I just change the port under Llama C/C++ (Offline) and do not start the internal server (which runs in slow CPU mode), ProxyAI appears to fall back to the Ollama settings and tries to connect on localhost:11434. This worked fine on the earlier version of the plugin. I may be missing something simple, but so far I have not been able to figure it out. Any suggestions appreciated! (A quick connectivity check I've been using is sketched below.)
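
For reference, here is a minimal sketch of how I verify the standalone server itself is reachable before pointing the plugin at it. It assumes the server listens on localhost:8080 (adjust to whatever port llama-server was started with) and exposes llama.cpp's /health and OpenAI-compatible /v1/models endpoints; the host, port, and paths are assumptions from my build, not anything the plugin requires.

```python
# Quick connectivity check for a standalone llama.cpp server.
# Assumed setup: server on localhost:8080 exposing /health and /v1/models.
import json
import urllib.error
import urllib.request

HOST = "127.0.0.1"
PORT = 8080  # assumed port; change to match the running llama-server


def get(path: str) -> None:
    """Fetch a path from the local server and print status plus body."""
    url = f"http://{HOST}:{PORT}{path}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            print(f"{url} -> HTTP {resp.status}")
            try:
                print(json.dumps(json.loads(body), indent=2))
            except json.JSONDecodeError:
                print(body)
    except urllib.error.URLError as exc:
        print(f"{url} -> unreachable ({exc.reason})")


if __name__ == "__main__":
    get("/health")     # expected to report an "ok" status once the model is loaded
    get("/v1/models")  # lists the loaded model via the OpenAI-compatible API
```

If both endpoints respond here but the plugin still connects to localhost:11434, that would point to the provider/port selection in the plugin config rather than the server itself.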