Solution: GPT-OSS Agent Mode Workaround Using Ollama Model Aliases #7035
Replies: 3 comments
- This worked great, thank you!
- If the LG EXAONE 4.0 model emits <tool_call> tags, aliasing EXAONE as gpt-4o makes tool calling work as well.
- This worked successfully for a few requests, but then, without any changes, this error occurs:
my config.yaml:



-
Update (7 Aug 2025)
Solution: GPT-OSS Agent Mode Workaround Using Ollama Model Aliases
I encountered the same issue as others with GPT-OSS models where the Agent and Plan mode was greyed out.
Here's a workaround solution that doesn't require complex configuration changes:
Root Cause
Continue has an internal whitelist of model names that are recognized as "agent-capable". GPT-OSS model names like gpt-oss:120b are not on this list, causing Agent mode to be disabled.
Solution: Create Model Aliases in Ollama
Instead of trying to trick Continue with requestOptions or complex configurations, simply create an alias in Ollama:
Continue Configuration
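Before this configuration will work, the alias needs to exist in Ollama. A sketch, assuming `ollama cp` (the exact command isn't shown on this page; the gpt-4o tag name is taken from the undo step below):

```shell
# Create a second tag pointing at the same model weights (no data is copied).
# Assumption: the alias is created with `ollama cp`; only the matching
# `ollama rm gpt-4o` undo step is visible on this page.
ollama cp gpt-oss:120b gpt-4o

# Confirm both names are listed
ollama list
```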
I used this basic configuration with the aliased name:

```json
{
  "models": [
    {
      "title": "GPT-OSS 120B (Local)",
      "provider": "openai",
      "model": "gpt-4o", // the alias Continue accepts
      "apiBase": "http://localhost:11434/v1", // or your ngrok URL (I used ngrok)
      "apiKey": "ollama", // default for local Ollama
      "roles": ["chat", "edit", "apply"],
      "capabilities": ["tool_use"],
      "supportsTools": true
    }
  ]
}
```

Result
Agent and Plan mode are available again, and both model names remain in Ollama (gpt-oss:120b and gpt-4o).
Why This Works
Continue recognizes gpt-4o as agent-capable, while Ollama serves your actual GPT-OSS model under this alias. It's a clean solution that works with Continue's current model recognition logic.
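To sanity-check the alias outside of Continue, you can query Ollama's OpenAI-compatible endpoint directly; a sketch assuming Ollama on its default port with the gpt-4o alias from above:

```shell
# Ask the aliased model a question through the same API Continue uses.
# The API key is a dummy value; local Ollama accepts any bearer token.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

A JSON response coming back confirms Ollama is resolving the alias to the underlying GPT-OSS model.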
How to Undo the Workaround
When Continue adds native support for GPT-OSS model names, you can revert in two quick steps:
```shell
# Remove only the alias tag – the original model stays untouched
ollama rm gpt-4o
```

Then point your Continue config back at the original name:

```json
{
  "models": [
    {
      "model": "gpt-oss:120b", // switch back to the original name
      "apiBase": "http://localhost:11434/v1",
      "provider": "openai",
      "apiKey": "ollama"
    }
  ]
}
```

That's it: no data loss, no extra cleanup needed.
Tested with: Continue extension, Ollama, GPT-OSS 120B model
Hope this helps others with the same issue! ❤️
-Lasse