packages/docs/docs/providers/ollama.md
+27 lines changed: 27 additions & 0 deletions

@@ -64,6 +64,11 @@ export default {
  // Optional: Custom base URL (defaults to http://localhost:11434)
  // baseUrl: 'http://localhost:11434',

  // Manual override for the context window size (in tokens).
  // This is particularly useful for Ollama models, since MyCoder may not know
  // the context window size for all possible models.
  contextWindow: 32768, // Example for a 32k context window model

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
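
To see why this value matters, the sketch below shows one way a client could use `contextWindow` to track token usage and decide when to compact conversation history. The threshold and helper names here are hypothetical illustrations, not MyCoder's actual internals:

```javascript
// Hypothetical sketch: how a contextWindow value can drive usage tracking
// and history compaction. Not MyCoder's actual implementation.
const COMPACTION_THRESHOLD = 0.8; // e.g. compact once 80% of the window is used

function tokenUsagePercent(usedTokens, contextWindow) {
  return (usedTokens / contextWindow) * 100;
}

function shouldCompactHistory(usedTokens, contextWindow) {
  return usedTokens / contextWindow >= COMPACTION_THRESHOLD;
}

// With the 32k window configured above and 28,000 tokens of history:
console.log(tokenUsagePercent(28000, 32768).toFixed(1)); // "85.4"
console.log(shouldCompactHistory(28000, 32768)); // true
```

If the configured value is too large, usage is underestimated and compaction fires too late, which is exactly the overflow scenario the manual override guards against.
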
@@ -81,6 +86,28 @@ Confirmed models with tool calling support:

If using other models, verify their tool calling capabilities before attempting to use them with MyCoder.

## Context Window Configuration

Ollama supports a wide variety of models, and MyCoder may not have pre-configured context window sizes for all of them. The context window size is used to:

1. Track token usage percentage
2. Determine when to trigger automatic history compaction

For this reason, it's recommended to set the `contextWindow` configuration option manually when using Ollama models. This ensures accurate token tracking and timely history compaction, preventing context overflow.

For example, if using a model with a 32k context window:

```javascript
export default {
  provider: 'ollama',
  model: 'your-model-name',
  contextWindow: 32768, // 32k context window
  // other settings...
};
```

You can find the context window size for your specific model in the model's documentation or by checking the Ollama model card.
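
If the documentation doesn't list it, you can often read the context length from the metadata a local Ollama server exposes. The sketch below queries the `/api/show` endpoint (available in recent Ollama versions; the exact response fields vary by model family and Ollama version, so treat this as an assumption to verify):

```javascript
// Hypothetical sketch: ask a local Ollama server for a model's metadata.
// Requires Node 18+ (built-in fetch); 'your-model-name' is a placeholder.
async function getModelInfo(model) {
  const response = await fetch('http://localhost:11434/api/show', {
    method: 'POST',
    body: JSON.stringify({ model }),
  });
  return response.json();
}

getModelInfo('your-model-name').then((info) => {
  // Look for an "<architecture>.context_length" key, e.g. "llama.context_length"
  console.log(info.model_info);
});
```

Running `ollama show your-model-name` from the command line prints similar metadata, including the context length.
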
## Hardware Requirements

Running large language models locally requires significant hardware resources: