Merged
1 change: 1 addition & 0 deletions .github/pull_request_template.md
@@ -31,6 +31,7 @@
## Checklist

- [ ] I've read the [contributing](https://github.com/olimorris/codecompanion.nvim/blob/main/CONTRIBUTING.md) guidelines and have adhered to them in this PR
+- [ ] I confirm that this PR has been created mostly by me, and not AI (unless stated in the "AI Usage" section above)
- [ ] I've run `make all` to ensure docs are generated, tests pass and [StyLua](https://github.com/JohnnyMorganz/StyLua) has formatted the code
- [ ] _(optional)_ I've added [test](https://github.com/olimorris/codecompanion.nvim/blob/main/CONTRIBUTING.md#testing) coverage for this fix/feature
- [ ] _(optional)_ I've updated the README and/or relevant docs pages
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -17,7 +17,7 @@ jobs:
       matrix:
         os: [ubuntu-latest]
         nvim_tag: [nightly, v0.11.0]
-    name: ${{ matrix.os }}
+    name: ${{ matrix.os }} / ${{ matrix.nvim_tag }}
     runs-on: ${{ matrix.os }}
     env:
       NVIM: ${{ matrix.os == 'windows-latest' && 'nvim-win64\\bin\\nvim.exe' || 'nvim' }}
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,26 @@
# Changelog

+## [18.5.1](https://github.com/olimorris/codecompanion.nvim/compare/v18.5.0...v18.5.1) (2026-01-24)
+
+
+### Bug Fixes
+
+* **adapters:** check for nil in `openai_responses` ([#2662](https://github.com/olimorris/codecompanion.nvim/issues/2662)) ([e22c043](https://github.com/olimorris/codecompanion.nvim/commit/e22c04336b5c56e3839bfebd6d88944f014f4b30))
+* **adapters:** copilot supported endpoints ([#2691](https://github.com/olimorris/codecompanion.nvim/issues/2691)) ([dd98466](https://github.com/olimorris/codecompanion.nvim/commit/dd98466a893abf499fbd69ab9526b2da7c094fb8))
+* **tools:** remove globals for better concurrent tool usage ([#2680](https://github.com/olimorris/codecompanion.nvim/issues/2680)) ([c9d74dd](https://github.com/olimorris/codecompanion.nvim/commit/c9d74dd667cf609b4f2064ae7f5471285b5356cb))
+
+## [18.5.0](https://github.com/olimorris/codecompanion.nvim/compare/v18.4.1...v18.5.0) (2026-01-21)
+
+
+### Features
+
+* **ui:** better cursor scrolling in the chat ([#2670](https://github.com/olimorris/codecompanion.nvim/issues/2670)) ([6657e6f](https://github.com/olimorris/codecompanion.nvim/commit/6657e6fd594d3c9f6dd3ab9e26a0c76b8e7082e1))
+
+
+### Bug Fixes
+
+* **chat:** buffers with duplicate short_paths ([#2665](https://github.com/olimorris/codecompanion.nvim/issues/2665)) ([e0780fa](https://github.com/olimorris/codecompanion.nvim/commit/e0780fa9fda504ffb89307cabcb6cbe1ce8eb60c))
+
## [18.4.1](https://github.com/olimorris/codecompanion.nvim/compare/v18.4.0...v18.4.1) (2026-01-16)


40 changes: 20 additions & 20 deletions CONTRIBUTING.md
@@ -8,36 +8,36 @@ Before contributing a PR, please open up a discussion to talk about it. While I

 The plugin has adopted semantic versioning. As such, any PR which breaks the existing API is unlikely to be merged.
 
-### Plugin Philosophy
+### CodeCompanion is Omakase
 
-**CodeCompanion enables developers to write better code, faster, through LLM interactions.**
+In Japanese cuisine, omakase means _"I'll leave it up to you"_ - the diner allows the chef to carefully select each course. CodeCompanion follows this philosophy: carefully curated features, rather than an all-you-can-eat buffet of every possible feature. In the world of LLMs, this means that CodeCompanion will never be at the bleeding edge. However, what it sacrifices in novelty, it makes up for in stability, reliability, and a great user experience.
 
-When proposing new features, please ensure they align with this philosophy:
+**Breaking this down:**
+- **Intentional over exhaustive** - Each new feature is carefully considered against the whole menu rather than just the course itself
+- **Complementary** - New features complement the dish rather than acting like an unnecessary side
+- **Maintainable** - Every addition is code that I commit to maintaining indefinitely
 
-**In Scope:**
-- LLM interaction modes (chat, inline, cmd, workflows, agents)
-- Tools and context that extend LLM capabilities while coding
-- Integrations (MCP, adapters) that enhance LLM assistance
-- Essential infrastructure for reviewing and applying LLM-generated changes (diff providers, edit tracking, completion/action providers)
+### AI-Assisted Contributions
 
-**Out of Scope:**
-- Elaborate UIs (beyond basic diff/review needs) or features that do not facilitate LLM-assisted code generation
-- Features better served by standalone plugins
-- General development tools not tied to LLM interactions
+While CodeCompanion itself is a tool for AI-assisted development, that does not mean I am willing to accept "vibe-coded" contributions - PRs where the contributor used an LLM to generate code but doesn't deeply understand what they're submitting.
 
-**Questions to ask:**
-1. Does this help the LLM write code?
-2. Is this essential for users to accept/reject LLM code?
-3. Or is this a nice-to-have feature that belongs in a separate plugin?
-4. To add this feature, are we looking at > 1,000 LOC of new code?
+**Red flags:**
+- User cannot explain implementation decisions when asked
+- Code doesn't match existing architectural patterns
+- Tests appear comprehensive but don't actually validate edge cases
+- Generic LLM patterns (overly defensive coding, verbose comments)
 
-**If a feature is primarily about viewing what's already happened rather than enabling the next LLM interaction, it's out of scope.**
+**What I Expect:**
+- **Understand** the codebase before contributing (use the rules, read the tests, explore the architecture)
+- **Own** your contribution - you should be able to explain every line you submit
+- **Test** thoroughly - write tests that demonstrate you understand the feature
+- **Iterate** based on feedback - PRs are conversations, not fire-and-forget submissions
 
-If your feature doesn't directly support LLM-assisted code generation or isn't minimal essential infrastructure, consider publishing it as a standalone plugin that works alongside CodeCompanion.
+> As a rule of thumb, use an LLM to create a feature _OR_ a test. But never both.
 
 ## How to Contribute
 
-1. Open up a [discussion](https://github.com/olimorris/codecompanion.nvim/discussions) to propose your idea.
+1. Open up a [discussion](https://github.com/olimorris/codecompanion.nvim/discussions) to propose your idea - save yourself time and effort by checking this is a feature that aligns with the project's goals.
 2. Fork the repository and create your branch from `main`.
 3. Add your feature or fix to your branch.
 4. Ensure your code follows the project's coding style and conventions.
2 changes: 1 addition & 1 deletion README.md
@@ -19,7 +19,7 @@
Thank you to the following people:

<p align="center">
-<!-- sponsors --><a href="https://github.com/unicell"><img src="https:&#x2F;&#x2F;github.com&#x2F;unicell.png" width="60px" alt="User avatar: Qiu Yu" /></a><a href="https://github.com/jfgordon2"><img src="https:&#x2F;&#x2F;github.com&#x2F;jfgordon2.png" width="60px" alt="User avatar: Jeff Gordon" /></a><a href="https://github.com/pratyushmittal"><img src="https:&#x2F;&#x2F;github.com&#x2F;pratyushmittal.png" width="60px" alt="User avatar: Pratyush Mittal" /></a><a href="https://github.com/JuanCrg90"><img src="https:&#x2F;&#x2F;github.com&#x2F;JuanCrg90.png" width="60px" alt="User avatar: Juan Carlos Ruiz" /></a><a href="https://github.com/Alexander-Garcia"><img src="https:&#x2F;&#x2F;github.com&#x2F;Alexander-Garcia.png" width="60px" alt="User avatar: Alexander Garcia" /></a><a href="https://github.com/LumenYoung"><img src="https:&#x2F;&#x2F;github.com&#x2F;LumenYoung.png" width="60px" alt="User avatar: Lumen Yang" /></a><a href="https://github.com/JPFrancoia"><img src="https:&#x2F;&#x2F;github.com&#x2F;JPFrancoia.png" width="60px" alt="User avatar: JPFrancoia" /></a><a href="https://github.com/pixlmint"><img src="https:&#x2F;&#x2F;github.com&#x2F;pixlmint.png" width="60px" alt="User avatar: Christian Gröber" /></a><a href="https://github.com/le4ker"><img src="https:&#x2F;&#x2F;github.com&#x2F;le4ker.png" width="60px" alt="User avatar: Panos Sakkos" /></a><!-- sponsors -->
+<!-- sponsors --><a href="https://github.com/unicell"><img src="https:&#x2F;&#x2F;github.com&#x2F;unicell.png" width="60px" alt="User avatar: Qiu Yu" /></a><a href="https://github.com/jfgordon2"><img src="https:&#x2F;&#x2F;github.com&#x2F;jfgordon2.png" width="60px" alt="User avatar: Jeff Gordon" /></a><a href="https://github.com/pratyushmittal"><img src="https:&#x2F;&#x2F;github.com&#x2F;pratyushmittal.png" width="60px" alt="User avatar: Pratyush Mittal" /></a><a href="https://github.com/JuanCrg90"><img src="https:&#x2F;&#x2F;github.com&#x2F;JuanCrg90.png" width="60px" alt="User avatar: Juan Carlos Ruiz" /></a><a href="https://github.com/Alexander-Garcia"><img src="https:&#x2F;&#x2F;github.com&#x2F;Alexander-Garcia.png" width="60px" alt="User avatar: Alexander Garcia" /></a><a href="https://github.com/LumenYoung"><img src="https:&#x2F;&#x2F;github.com&#x2F;LumenYoung.png" width="60px" alt="User avatar: Lumen Yang" /></a><a href="https://github.com/JPFrancoia"><img src="https:&#x2F;&#x2F;github.com&#x2F;JPFrancoia.png" width="60px" alt="User avatar: JPFrancoia" /></a><a href="https://github.com/pixlmint"><img src="https:&#x2F;&#x2F;github.com&#x2F;pixlmint.png" width="60px" alt="User avatar: Christian Gröber" /></a><a href="https://github.com/le4ker"><img src="https:&#x2F;&#x2F;github.com&#x2F;le4ker.png" width="60px" alt="User avatar: Panos Sakkos" /></a><a href="https://github.com/itskyedo"><img src="https:&#x2F;&#x2F;github.com&#x2F;itskyedo.png" width="60px" alt="User avatar: Kyedo" /></a><a href="https://github.com/jsit"><img src="https:&#x2F;&#x2F;github.com&#x2F;jsit.png" width="60px" alt="User avatar: Jay Sitter" /></a><!-- sponsors -->
</p>

<p align="center">If <i>you</i> love CodeCompanion and use it in your workflow, please consider <a href="https://github.com/sponsors/olimorris">sponsoring me</a></p>
14 changes: 7 additions & 7 deletions doc/codecompanion.txt
@@ -1,4 +1,4 @@
-*codecompanion.txt* For NVIM v0.11 Last change: 2026 January 10
+*codecompanion.txt* For NVIM v0.11 Last change: 2026 January 21

==============================================================================
Table of Contents *codecompanion-table-of-contents*
@@ -2048,7 +2048,7 @@ Palette - `description` - Description shown in the Action Palette -

**Optional frontmatter fields:** - `opts` - Additional options (see
|codecompanion--options| section) - `context` - Pre-loaded context (see
-|codecompanion--prompts-with-context| section)
+|codecompanion--context-placeholders| section)

**Prompt sections:** - `## system` - System messages that set the LLM’s
behaviour - `## user` - User messages containing your requests
@@ -5331,8 +5331,8 @@ required.
         role = "user",
         opts = { auto_submit = true },
         -- Scope this prompt to the cmd_runner tool
-        condition = function()
-          return _G.codecompanion_current_tool == "cmd_runner"
+        condition = function(chat)
+          return chat.tools.tool and chat.tools.tool.name == "cmd_runner"
         end,
         -- Repeat until the tests pass, as indicated by the testing flag
         -- which the cmd_runner tool sets on the chat buffer
@@ -5408,12 +5408,12 @@ Now let’s look at how we trigger the automated reflection prompts:
         role = "user",
         opts = { auto_submit = true },
         -- Scope this prompt to only run when the cmd_runner tool is active
-        condition = function()
-          return _G.codecompanion_current_tool == "cmd_runner"
+        condition = function(chat)
+          return chat.tools.tool and chat.tools.tool.name == "cmd_runner"
         end,
         -- Repeat until the tests pass, as indicated by the testing flag
         repeat_until = function(chat)
-          return chat.tools.flags.testing == true
+          return chat.tool_registry.flags.testing == true
         end,
         content = "The tests have failed. Can you edit the buffer and run the test suite again?",
       },
18 changes: 1 addition & 17 deletions doc/configuration/chat-buffer.md
@@ -40,7 +40,7 @@ require("codecompanion").setup({
   display = {
     diff = {
       enabled = true,
-      provider = providers.diff, -- inline|split|mini.diff
+      provider = providers.diff, -- inline|split|mini_diff
     },
   },
 })
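For readers applying this change to their own config, the renamed value can be set directly. The following is a minimal sketch, assuming the `setup` structure shown in the hunk above; `"mini_diff"` is the spelling this diff introduces (replacing `mini.diff`):

```lua
-- Minimal sketch: selecting the renamed diff provider.
-- "inline" and "split" remain valid values; "mini_diff" replaces "mini.diff".
require("codecompanion").setup({
  display = {
    diff = {
      enabled = true,
      provider = "mini_diff",
    },
  },
})
```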
@@ -582,22 +582,6 @@
})
```

-```lua [Debug Window]
-require("codecompanion").setup({
-  display = {
-    chat = {
-      -- Alter the sizing of the debug window
-      debug_window = {
-        ---@return number|fun(): number
-        width = vim.o.columns - 5,
-        ---@return number|fun(): number
-        height = vim.o.lines - 2,
-      },
-    },
-  },
-})
-```
-
```lua [Floating Window]
require("codecompanion").setup({
display = {
2 changes: 1 addition & 1 deletion doc/configuration/prompt-library.md
@@ -141,7 +141,7 @@ Markdown prompts consist of two main parts:

**Optional frontmatter fields:**
- `opts` - Additional options (see [Options](#options) section)
-- `context` - Pre-loaded context (see [Prompts with Context](#prompts-with-context) section)
+- `context` - Pre-loaded context (see [Context Placeholders](#context-placeholders) section)

**Prompt sections:**
- `## system` - System messages that set the LLM's behaviour
6 changes: 3 additions & 3 deletions doc/extending/agentic-workflows.md
@@ -134,12 +134,12 @@ Now let's look at how we trigger the automated reflection prompts:
       role = "user",
       opts = { auto_submit = true },
       -- Scope this prompt to only run when the cmd_runner tool is active
-      condition = function()
-        return _G.codecompanion_current_tool == "cmd_runner"
+      condition = function(chat)
+        return chat.tools.tool and chat.tools.tool.name == "cmd_runner"
       end,
       -- Repeat until the tests pass, as indicated by the testing flag
       repeat_until = function(chat)
-        return chat.tools.flags.testing == true
+        return chat.tool_registry.flags.testing == true
       end,
       content = "The tests have failed. Can you edit the buffer and run the test suite again?",
     },
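Taken together, the two renames in this hunk form a small migration recipe for user workflows. The sketch below is hedged: the new field names (`chat.tools.tool`, `chat.tool_registry.flags`) come from this diff, the removed global is the one named in it, and the rationale follows the changelog entry on removing globals for concurrent tool usage.

```lua
-- Before (removed): conditions read a shared global, which could race
-- when multiple chat buffers ran tools concurrently.
-- condition = function()
--   return _G.codecompanion_current_tool == "cmd_runner"
-- end,

-- After: the chat object is passed in, so each buffer checks its own state
condition = function(chat)
  return chat.tools.tool and chat.tools.tool.name == "cmd_runner"
end,
-- Flags likewise move from chat.tools.flags to chat.tool_registry.flags
repeat_until = function(chat)
  return chat.tool_registry.flags.testing == true
end,
```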