Prerequisites
- I have searched existing issues and discussions
- I have read the documentation
- This is a question (not a bug report or feature request)
Question
Most of the LLM services mentioned in the README are too expensive for regular use. Since OpenCode is FOSS, could smaller models (<36B total parameters) be supported through Ollama, Jan, or MLX? (A rough feasibility sketch follows the list below.)
The only problems I can see with this approach are:
- Hybrid-attention models can speed up inference by roughly 4x, but possibly at the cost of output quality
- MoE models are reported to be "street smart" but not "book smart", so they would need a better harness
- Using multiple models, some for planning/reasoning and others for coding/tool calls
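
For what it's worth, Ollama already exposes an OpenAI-compatible API on localhost, which is the usual integration point for local models (Jan offers a similar local server). Below is a minimal sketch of what talking to such a backend looks like; the model name and prompt are illustrative, and it assumes Ollama is running locally with the model already pulled:

```python
# Sketch: exercise a local Ollama model through its OpenAI-compatible endpoint.
# Any OpenAI-style client should work; Ollama ignores the API key, but the SDK
# requires one, so a placeholder is fine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; not checked by Ollama
)

# "qwen2.5-coder:14b" is an illustrative <36B coding model from the Ollama library.
resp = client.chat.completions.create(
    model="qwen2.5-coder:14b",
    messages=[{"role": "user", "content": "Write a hello-world in Python."}],
)
print(resp.choices[0].message.content)
```

If OpenCode can be pointed at a custom base URL or OpenAI-compatible provider in its config, wiring in a local model should amount to the above; whether the harness holds up with small models is the open question in the list.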
Context
No response
Doctor Output (Optional)
No response
Question Category
Configuration
Additional Information
No response