Gemini helped solve this for me. To make this work with my local Ollama instance, I had to set the following environment variables:
AI_CHAT_ENABLED: true
OPENAI_BASE_URI: http://[your server IP]:[Ollama Port]/v1
OPENAI_MODEL: [whatever model you're using]
OPENAI_API_KEY: 'ollama-placeholder-key'

The key changes were ensuring the base URL ends in "/v1" (Ollama's OpenAI-compatible endpoint) and supplying a dummy API key (the values above are exactly what's in my install).
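If you're running the app via Docker Compose, the same settings might look like the sketch below. This is just an illustration: the service and image names are placeholders, `host.docker.internal` assumes Docker Desktop-style host access from the container, and `llama3` stands in for whatever model you have pulled. Port 11434 is Ollama's default.

```yaml
services:
  app:
    image: your-app-image          # placeholder; use your actual image
    environment:
      AI_CHAT_ENABLED: "true"
      # "/v1" is required - it's Ollama's OpenAI-compatible endpoint
      OPENAI_BASE_URI: "http://host.docker.internal:11434/v1"
      OPENAI_MODEL: "llama3"       # whatever model you're running
      # Ollama doesn't check the key, but the client needs a non-empty value
      OPENAI_API_KEY: "ollama-placeholder-key"
```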

Hope that helps someone!

Answer selected by Cowboy-Ry