# Using Ollama
The easiest way to use Ollama out of the box with coco is to add or update the `service` option in the coco config, setting it to `ollama`.
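For example, a minimal sketch of that shorthand (the exact config filename and location depend on your coco setup):

```json
{
  "service": "ollama"
}
```

Everything beyond the provider choice is then left to coco's defaults; the expanded form below shows how to override them.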
If you want to go deeper than the shorthand aliases of `openai` and `ollama`, you can tinker with the underlying model, temperature, endpoint, and other settings available in the `OpenAILLMService` or `OllamaLLMService` interfaces by expanding `service` into a full configuration object:
```json
{
  "defaultBranch": "main",
  "mode": "interactive",
  "service": {
    "provider": "ollama",
    "model": "qwen2.5-coder:latest",
    "endpoint": "http://localhost:11434",
    "maxConcurrent": 1,
    "tokenLimit": 2024,
    "temperature": 0.4,
    "maxParsingAttempts": 3,
    "authentication": {
      "type": "None"
    }
  }
}
```
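A couple of practical notes on this example: the `endpoint` shown is Ollama's default local address, and the configured `model` must already be available to your local Ollama server (for example, fetched ahead of time with `ollama pull qwen2.5-coder:latest`). The `authentication` type is set to `None` here since a local Ollama instance doesn't require an API key.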