---
title: Model Capabilities
description: Understanding and configuring model capabilities for tools and image support
keywords: [capabilities, tools, function calling, image input, config]
---

Continue needs to know which features your models support in order to provide the best experience. This guide explains how model capabilities work and how to configure them.

## What Are Model Capabilities?

Model capabilities tell Continue which features a model supports:

- **`tool_use`** - Whether the model can call tools and functions
- **`image_input`** - Whether the model can process images

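In `config.yaml`, these appear as entries in a model's `capabilities` list:

```yaml
capabilities:
  - tool_use # the model can call tools/functions
  - image_input # the model can process images
```
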
Without proper capability configuration, you may encounter issues such as:

- Agent mode being unavailable (it requires tools)
- Tools not working at all
- Image uploads being disabled

## How Continue Detects Capabilities

Continue uses a two-tier system to determine model capabilities:

### 1. Automatic Detection (Default)

Continue automatically detects capabilities based on your provider and model name. For example:

- **OpenAI**: GPT-4 and GPT-3.5 Turbo models support tools
- **Anthropic**: Claude 3.5+ models support both tools and images
- **Ollama**: Most models support tools; vision models support images
- **Google**: All Gemini models support function calling

This works well for popular models, but it may not cover custom deployments or newly released models.

For implementation details, see:

- [toolSupport.ts](https://github.com/continuedev/continue/blob/main/core/llm/toolSupport.ts) - Tool capability detection logic
- [@continuedev/llm-info](https://www.npmjs.com/package/@continuedev/llm-info) - Image support detection

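For a model Continue already recognizes, no `capabilities` entry is needed. For example, a standard Anthropic configuration (using Anthropic's `claude-3-5-sonnet-latest` alias) picks up tool and image support automatically:

```yaml
models:
  - name: Claude 3.5 Sonnet
    provider: anthropic
    model: claude-3-5-sonnet-latest
    # no capabilities field needed: tool_use and image_input are autodetected
```
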
### 2. Manual Configuration

In your `config.yaml`, you can add capabilities to models that Continue doesn't detect automatically.

<Note>
You cannot override autodetection; you can only add capabilities. Continue will always use its built-in knowledge about your model in addition to any capabilities you specify.
</Note>

```yaml
models:
  - name: my-custom-gpt4
    provider: openai
    apiBase: https://my-deployment.com/v1
    model: gpt-4-custom
    capabilities:
      - tool_use
      - image_input
```

## When to Add Capabilities

Add capabilities when:

1. **Using custom deployments** - Your API endpoint serves a model with different capabilities than the standard version
2. **Using newer models** - Continue doesn't yet recognize a newly released model
3. **Experiencing issues** - Autodetection isn't working correctly for your setup
4. **Using proxy services** - Some proxy services modify model capabilities

## Configuration Examples

### Basic Configuration

Add tool support for a model that Continue doesn't recognize:

```yaml
models:
  - name: custom-model
    provider: openai
    model: my-fine-tuned-gpt4
    capabilities:
      - tool_use
```

<Info>
  The `tool_use` capability is for native tool/function calling support. The
  model must actually support tools for this to work.
</Info>

<Warning>
  **Experimental**: System message tools are available as an experimental
  feature for models without native tool support. They are not used
  automatically as a fallback and must be configured explicitly. Most models
  are trained for native tool calling, so system message tools may not work as
  well.
</Warning>

### Disable Capabilities

Explicitly set no capabilities (autodetection will still apply):

```yaml
models:
  - name: limited-claude
    provider: anthropic
    model: claude-sonnet-4-0
    capabilities: [] # An empty array doesn't disable autodetection
```

<Warning>
  An empty capabilities array does not disable autodetection. Continue will
  still detect and use the model's actual capabilities. To truly limit a
  model's capabilities, you would need to use a model that doesn't support
  those features.
</Warning>

### Multiple Capabilities

Enable both tools and image support:

```yaml
models:
  - name: multimodal-gpt
    provider: openai
    model: gpt-4-vision-preview
    capabilities:
      - tool_use
      - image_input
```

## Common Scenarios

Some providers and custom deployments may require explicit capability configuration:

- **OpenRouter**: May not preserve the original model's capabilities
- **Custom API endpoints**: May have different capabilities than standard models
- **Local models**: May need explicit capabilities if using non-standard model names

Example configuration:

```yaml
models:
  - name: custom-deployment
    provider: openai
    apiBase: https://custom-api.company.com/v1
    model: custom-gpt
    capabilities:
      - tool_use # if the deployment supports function calling
      - image_input # if the deployment supports vision
```

## Troubleshooting

For troubleshooting capability-related issues, such as Agent mode being unavailable or tools not working, see the [Troubleshooting guide](/troubleshooting#agent-mode-is-unavailable-or-tools-arent-working).

## Best Practices

1. **Start with autodetection** - Only add capabilities if you experience issues
2. **Test after changes** - Verify that tools and images work as expected
3. **Keep Continue updated** - Newer versions improve autodetection

Remember: setting capabilities only adds to autodetection. Continue will still use its built-in knowledge about your model in addition to your specified capabilities.

## Model Capability Support

This matrix shows which models support tool use and image input. Continue auto-detects these capabilities, but you can supplement them in `config.yaml` if needed.

### OpenAI

| Model         | Tool Use | Image Input | Context Window |
| :------------ | -------- | ----------- | -------------- |
| o3            | Yes      | No          | 128k           |
| o3-mini       | Yes      | No          | 128k           |
| GPT-4o        | Yes      | Yes         | 128k           |
| GPT-4 Turbo   | Yes      | Yes         | 128k           |
| GPT-4         | Yes      | No          | 8k             |
| GPT-3.5 Turbo | Yes      | No          | 16k            |

### Anthropic

| Model             | Tool Use | Image Input | Context Window |
| :---------------- | -------- | ----------- | -------------- |
| Claude 4 Sonnet   | Yes      | Yes         | 200k           |
| Claude 3.5 Sonnet | Yes      | Yes         | 200k           |
| Claude 3.5 Haiku  | Yes      | Yes         | 200k           |

### Google

| Model            | Tool Use | Image Input | Context Window |
| :--------------- | -------- | ----------- | -------------- |
| Gemini 2.5 Pro   | Yes      | Yes         | 2M             |
| Gemini 2.0 Flash | Yes      | Yes         | 1M             |

### Mistral

| Model           | Tool Use | Image Input | Context Window |
| :-------------- | -------- | ----------- | -------------- |
| Devstral Medium | Yes      | No          | 32k            |
| Mistral         | Yes      | No          | 32k            |

### DeepSeek

| Model             | Tool Use | Image Input | Context Window |
| :---------------- | -------- | ----------- | -------------- |
| DeepSeek V3       | Yes      | No          | 128k           |
| DeepSeek Coder V2 | Yes      | No          | 128k           |
| DeepSeek Chat     | Yes      | No          | 64k            |

### xAI

| Model  | Tool Use | Image Input | Context Window |
| :----- | -------- | ----------- | -------------- |
| Grok 4 | Yes      | Yes         | 128k           |

### Moonshot AI

| Model   | Tool Use | Image Input | Context Window |
| :------ | -------- | ----------- | -------------- |
| Kimi K2 | Yes      | Yes         | 128k           |

### Qwen

| Model             | Tool Use | Image Input | Context Window |
| :---------------- | -------- | ----------- | -------------- |
| Qwen Coder 3 480B | Yes      | No          | 128k           |

### Ollama (Local Models)

| Model          | Tool Use | Image Input | Context Window |
| :------------- | -------- | ----------- | -------------- |
| Qwen 3 Coder   | Yes      | No          | 32k            |
| Devstral Small | Yes      | No          | 32k            |
| Llama 3.1      | Yes      | No          | 128k           |
| Llama 3        | Yes      | No          | 8k             |
| Mistral        | Yes      | No          | 32k            |
| Codestral      | Yes      | No          | 32k            |
| Gemma 3 4B     | Yes      | No          | 8k             |

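If a local model runs under a custom tag that Continue doesn't recognize, the table above can guide an explicit configuration. A sketch (the model tag here is hypothetical):

```yaml
models:
  - name: llama3.1-local
    provider: ollama
    model: llama3.1-custom # hypothetical tag Continue may not recognize
    capabilities:
      - tool_use # Llama 3.1 supports tools, per the table above
```
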
### Notes

- **Tool Use**: Native tool/function calling support (tools are required for Agent mode)
- **Image Input**: Whether the model can process images as input
- **Context Window**: Maximum number of tokens the model can process in a single request

---

**Is your model missing or incorrect?** Help improve this documentation! You can edit this page on GitHub using the link below.