Chat with local LLMs (like DeepSeek Coder) directly inside Neovim using an interactive markdown buffer and the Ollama CLI.
The chat lives inside a Markdown file, so code snippets and text formatting work naturally with tools like marksman or markdown-preview.
Perfect for in-editor AI coding assistance, quick explanations, or note-taking with models like `deepseek-coder-v2`, `llama3`, `codellama`, etc.
- Interactive chat with Ollama in a Markdown buffer
- Auto-inserts `### User` / `### Assistant` blocks for clarity (layout sketched after this list)
- Live streaming responses, updated inline
- Cleans up ANSI escape sequences from output
- Minimal, no external dependencies
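
As a rough illustration (the exact text and spacing are produced by the plugin and may differ), the chat buffer alternates role blocks like this:

```markdown
### User
How do I reverse a table in Lua?

### Assistant
(The model's reply streams in here as it is generated.)
```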
To install with lazy.nvim:

```lua
{
  "Dheeraj-Murthy/Ollama_chat.nvim",
  config = function()
    require("ollama_chat").setup({
      model = "deepseek-coder-v2", -- optional, defaults to this
    })
  end,
}
```

Or with packer.nvim:

```lua
use {
  "Dheeraj-Murthy/Ollama_chat.nvim",
  config = function()
    require("ollama_chat").setup()
  end,
}
```
- Run `:OllamaChat` to open or jump to the chat buffer (`OllamaChat.md`); an optional convenience keymap is sketched after this list.
- Type your message under the `### User` block.
- Press `<Enter>` in normal mode to send it.
- The model’s response will stream back under a new `### Assistant` block.
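
For quicker access, you can map the command to any free key; the binding below is only a suggestion (`<leader>oc` is not defined by the plugin, only `:OllamaChat` is):

```lua
-- Optional convenience mapping; <leader>oc is an arbitrary choice,
-- :OllamaChat is the command the plugin provides.
vim.keymap.set("n", "<leader>oc", "<cmd>OllamaChat<CR>", { desc = "Open Ollama chat" })
```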
- Ollama installed (`ollama run ...` must work in your terminal)
- An Ollama-compatible model (e.g. `deepseek-coder-v2`, `codellama`, `llama3`)
Example to get started:

```sh
ollama run deepseek-coder-v2
```

Planned features:

- Session history / persistence
- Model switching from inside Neovim
- Telescope integration to browse past sessions
- Richer Markdown formatting for roles
M. S. Dheeraj Murthy
GitHub · LinkedIn
Contributions are welcome — issues, PRs, and ideas!
If you build something cool on top (prompt templates, chaining commands,
Telescope pickers, etc.), feel free to share it.
