Orch is a Neovim plugin + optional Go daemon (orchd) that lets you run multiple LLMs in parallel, compare outputs side‑by‑side, diff and merge patches hunk‑by‑hunk, stream results as they arrive, and apply patches directly to your buffer — all without leaving Neovim.
Think of it as:
Cursor / Windsurf / Claude Desktop — but entirely inside Neovim, fully model‑agnostic, and fully under your control.
- Configure any number of models.
- Run them in parallel with a single command.
- Compare outputs across OpenAI, Anthropic, Ollama, and more.
- Unified results buffer
- Per‑model scratch sections
- Side‑by‑side floating comparison windows
- Hunk‑by‑hunk merge mode with:
  - Original view
  - Patched view
  - Unified diff
- Reopen the last hunk preview with `:OrchHunkPreview`
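Under the hood, merging a single hunk amounts to splicing the hunk's replacement lines into the original buffer at the hunk's start line. A minimal sketch in Go of that idea — `applyHunk` and its signature are illustrative assumptions, not Orch's actual code:

```go
package main

import "fmt"

// applyHunk splices repl into src, replacing delCount lines starting
// at index start (0-based). Hypothetical helper for illustration only.
func applyHunk(src []string, start, delCount int, repl []string) []string {
	out := make([]string, 0, len(src)-delCount+len(repl))
	out = append(out, src[:start]...)
	out = append(out, repl...)
	out = append(out, src[start+delCount:]...)
	return out
}

func main() {
	original := []string{"func add(a, b int) int {", "  return a + b", "}"}
	// Replace line 1 of the original with the patched version.
	patched := applyHunk(original, 1, 1, []string{"\treturn a + b"})
	for _, line := range patched {
		fmt.Println(line)
	}
}
```

Accepting or rejecting a hunk in merge mode then reduces to either performing this splice or leaving the original lines untouched.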
orchd supports true streaming output:
```json
{"event":"result","name":"gpt4","text":"..."}
{"event":"result","name":"sonnet","text":"..."}
{"event":"done"}
```
The Neovim streaming backend updates live as tokens arrive.
Toggle it at any time with `:OrchToggleStreaming`.
- `:OrchAsk` — freeform prompt
- `:OrchExplain` — explain code
- `:OrchRefactor` — refactor with diff/merge mode
- `:OrchTestGen` — generate tests
- `:OrchApply` — apply full model output
- `:OrchMerge` — interactive merge
- `:OrchCompareModels` — side‑by‑side all‑model view
- `:OrchAskPick` — Telescope‑powered model picker
- `:OrchPrintConfig` — view effective config
- `orchd --doctor` — check env vars, providers, PATH
- `orchd --config` — print full JSON request schema
- `orchd --check-request` — validate JSON against schema
Already implemented:
- Streaming responses
- Per‑model floating windows
- Diff mode with hunk merging
- Apply‑patch UI
- Model picker via Telescope
- Go backend with concurrency + streaming
Coming soon:
- Judge model (best‑of‑N selection)
- Merge model (response synthesizer)
- Conflict detection
- Workspace‑wide transformations
- Project presets
```lua
{
  "oorrwullie/orch",
  config = function()
    local orch = require("orch")
    orch.setup({
      models = {
        {
          name = "sonnet",
          provider = "anthropic",
          model = "claude-3.5-sonnet",
          api_key = os.getenv("ANTHROPIC_API_KEY"),
        },
        {
          name = "gpt4",
          provider = "openai",
          model = "gpt-4.1",
          api_key = os.getenv("OPENAI_API_KEY"),
        },
        {
          name = "local_llama",
          provider = "ollama",
          model = "llama3",
        },
      },
      backend = {
        mode = "orchd", -- or "lua"
        orchd_cmd = { "orchd" },
        timeout_seconds = 30,
        streaming = false,
      },
      keymaps = {
        enabled = true,
        prefix = "<leader>o",
      },
    })
    require("orch.config").apply_keymaps()
  end,
}
```

```vim
:OrchAsk "Explain this"
```
Select code in visual mode (`v`), then run any of:

```vim
:OrchAsk "Refactor this"
:OrchRefactor
:OrchMerge
:OrchHunkPreview
:OrchToggleStreaming
:OrchCompareModels
```
With `prefix = "<leader>o"`:

| Mapping | Action |
|---|---|
| `<leader>oa` | `OrchAsk` |
| `<leader>oe` | `OrchExplain` |
| `<leader>or` | `OrchRefactor` |
| `<leader>ot` | `OrchTestGen` |
| `<leader>oo` | Apply first model |
| `<leader>om` | Interactive merge |
| `<leader>op` | Reopen last preview |
| `<leader>os` | Toggle streaming |
Orchd is a small Go binary responsible for:
- Concurrent multi‑model fan‑out
- Streaming
- Provider abstraction
- Timeouts
- Versioning & structured CLI tools
```sh
make
sudo make install
make release VERSION=v0.1.0 MODULE_PATH=github.com/oorrwullie/orchd
```
| Flag | Description |
|---|---|
| `--version` | Print version |
| `--help` | Show help |
| `--config` | Print config schema |
| `--doctor` | Env + provider diagnostics |
| `--check-request` | Validate JSON without calling APIs |
Full CLI docs are in `docs/cli.md`; schema docs are in `docs/config.md`.
In Neovim:

```vim
:OrchPrintConfig
:messages
```

From the shell:

```sh
orchd --doctor
orchd --check-request < config.example.json
which orchd
```
- Missing API keys
- `orchd` not on PATH
- Wrong `backend.mode`
MIT License.
Contributions welcome! Especially:
- Provider adapters
- New UI modes
- Merge/judge models
- Documentation
Give Neovim the multi‑model, streaming, diff‑driven coding intelligence of modern AI IDEs — without any vendor lock‑in or closed ecosystem.