[Question]: Can Oh My OpenCode use local small language models (<32B)? #585

@TomLucidor

Description

Prerequisites

  • I have searched existing issues and discussions
  • I have read the documentation
  • This is a question (not a bug report or feature request)

Question

Most of the LLM services mentioned in the README are too expensive for many users. Since OpenCode is FOSS, could smaller models (<32B total parameters) be used through Ollama, Jan, or MLX?
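
For anyone trying this, a minimal sketch of a local setup is below, assuming OpenCode's custom-provider JSON config and Ollama's OpenAI-compatible endpoint on its default port; the model tag is a placeholder, not a recommendation:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen2.5-coder:14b": {
          "name": "Qwen 2.5 Coder 14B (local)"
        }
      }
    }
  }
}
```

The `baseURL` points at Ollama's built-in OpenAI-compatible API; any model already fetched with `ollama pull` should be listable under `models` the same way.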

The only problems I can see with this are:

  • Hybrid-attention models can roughly 4x inference speed, but possibly at the cost of output quality
  • MoE models are reported to be "street-smart" but not "book-smart", so they may need a better harness
  • Using multiple models, some for planning/reasoning and others for coding/tool calls (see the sketch after this list)
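
On the last point, a split like that might look roughly like this, assuming OpenCode honors per-agent model overrides on top of the provider block above; the agent names and model tags here are hypothetical placeholders:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/qwen2.5-coder:14b",
  "agent": {
    "plan": {
      "model": "ollama/deepseek-r1:14b"
    },
    "build": {
      "model": "ollama/qwen2.5-coder:14b"
    }
  }
}
```

The idea is simply a reasoning-tuned model for the planning agent and a coder-tuned model for edits and tool calls, both served locally.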

Context

No response

Doctor Output (Optional)

No response

Question Category

Configuration

Additional Information

No response
