- **Ollama Integration**: Fixed Ollama models to properly generate code and provide meaningful evaluation scores instead of always returning 100%. The integration now uses Ollama's chat API for better conversation handling and includes a working demo script.
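The fix above routes generation through Ollama's chat API. As a rough sketch of what a single-turn call against a locally running Ollama server looks like (the endpoint and payload follow Ollama's documented `/api/chat` schema; the helper names are illustrative and are not CodeOptiX internals):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON response instead of a stream
    }


def chat(model: str, prompt: str) -> str:
    """Send a single-turn chat request to a local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the reply under message.content
        return json.loads(resp.read())["message"]["content"]
```

Using the chat endpoint (rather than raw completion) lets the server apply the model's chat template, which is what enables proper multi-turn conversation handling.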
Updated file: `docs/guides/ollama-integration.md`
# Ollama Integration Guide
CodeOptiX supports local Ollama models, allowing you to run evaluations without API keys! ✅ **Now working correctly** - generates code and provides proper security evaluations.
---
## ✅ Recent Updates
**CodeOptiX now works correctly with Ollama!** Recent fixes ensure that Ollama models properly generate code and return meaningful evaluation scores instead of always reporting 100%, using Ollama's chat API for better conversation handling.
### ⚠️ Known Limitations

While Ollama works great for evaluations, there are some limitations:

#### Evolution Support
- **Limited support for `codeoptix evolve`**: The evolution feature uses GEPA optimization, which requires processing very long prompts. Ollama may fail with timeouts on complex evolution tasks.
- **Recommendation**: Use cloud providers (OpenAI, Anthropic, Google) for full evolution capabilities.
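Because long GEPA prompts can stall a local server, one defensive pattern is to bound each Ollama call with a timeout and fall back when it trips. A minimal sketch (the function name, default timeout, and `None`-on-failure convention are assumptions for illustration, not CodeOptiX's actual implementation):

```python
import json
import urllib.request
from urllib.error import URLError


def chat_with_timeout(model: str, prompt: str, timeout_s: float = 120.0):
    """Single chat call against a local Ollama server.

    Returns the reply text, or None if the server is unreachable or the
    call exceeds timeout_s -- the caller can then fall back to a cloud
    provider or retry with a smaller model.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout_s) as resp:
            return json.loads(resp.read())["message"]["content"]
    except (URLError, TimeoutError):
        return None  # timed out or no Ollama server listening
```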
#### Performance Considerations

- Large models (e.g., `gpt-oss:120b`) require significant RAM and may be slow on consumer hardware.
- Evolution tasks are computationally intensive and may not complete reliably with Ollama.
For advanced features like evolution, consider cloud providers or contact us for tailored enterprise solutions.