Solution: to get the "/api/chat/completions" endpoint working with Ollama-hosted models, one needs to configure config.toml as if it were an OpenAI model provider:

[models.providers.openai]
models = ["llama3.3", "gpt-oss", "mistral-large"]
api_key_env = "OPENAI_API_KEY"
base_url = "https://123.45.67.89/api"
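With the provider configured this way, the endpoint can be exercised with a standard OpenAI-style chat-completions request. A minimal sketch (the base URL and model name are taken from the config above; the actual request is left commented out since it needs a live server, and the dummy bearer token is an assumption since Ollama itself does not validate API keys):

```python
import json
import urllib.request

BASE_URL = "https://123.45.67.89/api"  # matches base_url from config.toml above
MODEL = "llama3.3"                     # one of the models listed in the config

# Standard OpenAI-style chat-completions payload.
payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Placeholder token: OpenAI-style clients send this header,
        # but an Ollama backend does not check its value.
        "Authorization": "Bearer dummy-key",
    },
)

# Uncomment to actually send the request against a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```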

Answer selected by kapsner