Conversation
Fantastic, thank you @cairijun. The holidays start from Monday for me, so I will get around to properly testing this out then. But this will be priority 1 for me.
olimorris
left a comment
Firstly, this is awesome and I'm very excited about this making its way into CodeCompanion.
I've done a first pass review. Nothing major, lots of nitpicking. Then I'll give it some proper testing over the holiday period.
Giving some thought to the UX, I think there should be a distinction between MCP "tools" and regular tools; the former should be visible in the chat buffer as @{mcp.runPython}. It makes it easier for users to filter via their completion plugin and makes the distinction clear.
Seeing how you've implemented the MCP servers:

```lua
sequentialThinking = {
  cmd = { "npx", "-y", "@modelcontextprotocol/server-sequential-thinking" },
  tool_overrides = {
    sequentialthinking = {
      output = {
        success = function(self, tools, cmd, stdout)
          local output = stdout and stdout[#stdout]
          local msg = "Sequential thinking: " .. self.args.thought
          tools.chat:add_tool_output(self, output, msg)
        end,
      },
    },
  },
},
```

...is brilliant 👏🏼. The `tool_bridge.lua` file is an excellent idea. Ideally, we wouldn't have `sequentialthinking` appear twice in the same config block, but this can be tackled towards the end of the PR.
Seeing `success = function(self, tools, cmd, stdout)`... this one is on me to refactor. I'm trying to move everything over to the `success = function(self, args)` pattern, as it's future-proof should we add additional arguments. I realized this pattern far too late on. So heads up, this may change in the future.
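A rough sketch of that target pattern (the exact fields on `args` are assumed for illustration; they aren't confirmed in this thread):

```lua
-- Hypothetical sketch: a single `args` table replaces the positional
-- (tools, cmd, stdout) parameters, so new fields can be added later
-- without breaking existing handlers.
output = {
  success = function(self, args)
    -- assumed fields: args.tools, args.cmd, args.stdout
    local stdout = args.stdout
    local output = stdout and stdout[#stdout]
    args.tools.chat:add_tool_output(self, output)
  end,
},
```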
Once again, thanks for this brilliant PR.
```lua
local M = {}

---Default tool output callbacks
local DefaultOutputCallbacks = {}
```
This should be snake case.
Also, using `Default` implies that there is an alternative for these callbacks; something other than the defaults.
We indeed have alternatives. The user can override some of the callbacks via the `tool_overrides.{TOOL_NAME}.output` config table. I changed it to `local default_output = { ... }`.
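For illustration, such an override might look like the following sketch (server and tool names are placeholders; only the `tool_overrides.{TOOL_NAME}.output` shape is taken from this thread):

```lua
mcp = {
  servers = {
    myServer = { -- placeholder server name
      cmd = { "npx", "-y", "some-mcp-server" }, -- placeholder command
      tool_overrides = {
        my_tool = { -- placeholder tool name
          output = {
            -- replaces the default success handler for this tool only
            success = function(self, tools, cmd, stdout)
              tools.chat:add_tool_output(self, stdout and stdout[#stdout])
            end,
          },
        },
      },
    },
  },
},
```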
Force-pushed from 968ca4a to 163c909
Pull request overview
This PR implements a basic MCP (Model Context Protocol) client for CodeCompanion, enabling integration with MCP servers to expose their tools as CodeCompanion tools. The implementation supports stdio transport, tool pagination, roots capability, and customizable tool behavior.
Key changes:
- Core MCP client with JSON-RPC communication over stdio transport
- Tool bridge to convert MCP tools into CodeCompanion tool format with grouping by server
- Comprehensive test coverage with mock transport for unit and integration testing
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| tests/stubs/mcp/tools.jsonl | Test fixtures containing sample MCP tool definitions for testing |
| tests/mocks/mcp_client_transport.lua | Mock transport implementation enabling testable JSON-RPC communication without external processes |
| tests/mcp/test_mcp_client.lua | Unit tests for MCP client initialization, tool loading, tool calls, timeouts, and roots capability |
| tests/interactions/chat/mcp/test_mcp_tools.lua | Integration tests verifying MCP tools work within chat interactions with tool overrides |
| tests/config.lua | Adds empty MCP config section to test configuration |
| lua/codecompanion/types.lua | Type definitions for MCP protocol structures (JSON-RPC, tools, content blocks) |
| lua/codecompanion/mcp/tool_bridge.lua | Converts MCP tool specifications to CodeCompanion tools with customizable output handlers |
| lua/codecompanion/mcp/init.lua | Module entry point that starts all configured MCP servers on first chat creation |
| lua/codecompanion/mcp/client.lua | Main client implementation handling transport, JSON-RPC, initialization, and tool operations |
| lua/codecompanion/interactions/chat/tools/init.lua | Updates JSON decode to handle null values properly for MCP tool arguments |
| lua/codecompanion/interactions/chat/init.lua | Integrates MCP startup into chat initialization flow |
| lua/codecompanion/config.lua | Adds default MCP configuration with empty servers table |
Hi @olimorris, thanks for your detailed and constructive comments! I've been a bit busy with some personal matters recently, so I'm replying slower than usual. I've pushed some commits addressing most of the comments. The generated tool groups are now named [...]. There's also a major change: I moved the output formatting logic from [...]
Force-pushed from 47946fa to 2eac686
No worries @cairijun and no rush. I too have been in the same boat. We'll definitely get this PR into CodeCompanion 👍🏼. I will be picking this back up next week too.
I've got some time planned for this weekend to take a look at this. Looking forward to it.
olimorris
left a comment
Initial reaction:
This is awesome. Really, really awesome. The fact that all I needed to do to get this working was:

```lua
mcp = {
  servers = {
    ["tavily-mcp"] = {
      cmd = { "npx", "-y", "tavily-mcp@latest" },
      env = {
        TAVILY_API_KEY = "cmd:op read op://personal/Tavily_API/credential --no-newline",
      },
    },
  },
},
```

...is tremendous.
I've put some inline comments. Pretty much minor stuff now. I will write the docs and add some more test coverage (think we might need a bit of coverage around mcp servers appearing in the completion list).
There's a couple of (hopefully) small hurdles I've noticed:
- In my example above, I'm being prompted by 1Password every time I create a new chat buffer. I may not want to add the Tavily MCP server to that chat buffer, so I think this prompt should appear when I press submit to send the response to the LLM.
- It can be hit-and-miss whether the MCP server appears in the completion list in the chat buffer. I notice that when I open a fresh Neovim and create a chat buffer, I often don't see the MCP servers; if I open a second chat buffer, I do see them. I think we may need to shift the logic of creating the MCP groups to the `providers.completion.init` file, like we do for all other tools and groups.
Otherwise, this is amazing and I can't wait to merge this into CodeCompanion. No rush and take your time. We all have lives outside of open-source. And, if you need me to pickup any bits, just say.
```lua
local log = require("codecompanion.utils.log")

local CONSTANTS = {
```
I've extracted some messaging to the top of the file as I'm likely to tweak this over time. Also, preference is to have @{mcp:tavily} rather than @{mcp.tavily} as I think it visually clarifies tavily is part of the MCP group (super minor nitpick).
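A sketch of what that extraction could look like (constant names and messages here are illustrative guesses, not the actual code):

```lua
-- Hypothetical sketch: messaging and the group prefix pulled to the top
-- of tool_bridge.lua so they are easy to tweak over time.
local CONSTANTS = {
  TOOL_PREFIX = "mcp:", -- yields group names like @{mcp:tavily}
  MSG_SUCCESS = "MCP: %s executed successfully:",
  MSG_FAILURE = "MCP: %s failed:",
}
```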
```lua
table.insert(server_prompt, server_instructions)
end

chat_tools.groups[fmt("%s%s", CONSTANTS.TOOL_PREFIX, client.name)] = {
```
I've noticed that MCP tools appearing in the completion menu in the chat buffer can be temperamental. I'm assuming there is a timing difference between when these are resolved in the chat buffer and when the `providers.completion.init` file resolves them. I do think that logic should be moved to the latter file.
The newly added 10s default timeout seems a bit too small. It affects the [...]. Also [...]
Good spot. Will amend.
|
Will aim to merge this over the weekend. Just the docs to go. I will add a subsequent PR next week to allow users to specify a JSON file that contains their servers.
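For reference, such a JSON servers file might follow the de-facto `mcpServers` format used by other MCP clients (a sketch; the exact schema CodeCompanion will accept isn't specified in this thread):

```json
{
  "mcpServers": {
    "tavily-mcp": {
      "command": "npx",
      "args": ["-y", "tavily-mcp@latest"],
      "env": {
        "TAVILY_API_KEY": "cmd:op read op://personal/Tavily_API/credential --no-newline"
      }
    }
  }
}
```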
I have several little patches that might improve user experience:
`````diff
diff --git a/lua/codecompanion/mcp/tool_bridge.lua b/lua/codecompanion/mcp/tool_bridge.lua
index 3d8a3794f..315ae2298 100644
--- a/lua/codecompanion/mcp/tool_bridge.lua
+++ b/lua/codecompanion/mcp/tool_bridge.lua
@@ -34,13 +34,19 @@ local output = {
   success = function(self, tools, cmd, stdout)
     local chat = tools.chat
     local output = M.format_tool_result_content(stdout and stdout[#stdout])
+    local output_for_user = output
+    local DISPLAY_LIMIT_BYTES = 1000
+    if #output_for_user > DISPLAY_LIMIT_BYTES then
+      local utf_offset = vim.str_utf_start(output_for_user, 1 + DISPLAY_LIMIT_BYTES)
+      output_for_user = output_for_user:sub(1, DISPLAY_LIMIT_BYTES + utf_offset) .. "\n\n...[truncated]"
+    end
     local for_user = fmt(
       [[MCP: %s executed successfully:
 ````
 %s
 ````]],
       self.name,
-      output
+      output_for_user
     )
     chat:add_tool_output(self, output, for_user)
   end,
@@ -53,13 +59,16 @@ local output = {
     local chat = tools.chat
     local err_msg = M.format_tool_result_content(stderr and stderr[#stderr] or "<NO ERROR MESSAGE>")
     local for_user = fmt(
-      [[MCP: %s failed
+      [[MCP: %s failed:
 ````
 %s
+````
+Arguments:
+````%s
 ````]],
       self.name,
-      vim.inspect(self.args),
-      err_msg
+      err_msg,
+      vim.inspect(self.args)
     )
     chat:add_tool_output(self, "MCP Tool execution failed:\n" .. err_msg, for_user)
   end,
@@ -69,7 +78,7 @@ local output = {
   ---@param tools CodeCompanion.Tools
   ---@return nil|string
   prompt = function(self, tools)
-    return fmt("Execute the `%s` MCP tool?", self.name)
+    return fmt("Execute the `%s` MCP tool?\nArguments:\n%s", self.name, vim.inspect(self.args))
   end,
 }
diff --git a/lua/codecompanion/mcp/client.lua b/lua/codecompanion/mcp/client.lua
index d6405ea67..2ae8a1a01 100644
--- a/lua/codecompanion/mcp/client.lua
+++ b/lua/codecompanion/mcp/client.lua
@@ -99,8 +99,9 @@ function StdioTransport:start(on_line_read, on_close)
   self._on_close = on_close
   adapter_utils.get_env_vars(self)
+  local cmd = adapter_utils.set_env_vars(self, self.cmd)
   self._sysobj = self.methods.job(
-    self.cmd,
+    cmd,
     {
       env = self.env_replaced or self.env,
       text = true,
`````
@cairijun great patches. Please push them |
Pushed. I also added a security warning about the [...]
Fantastic work. So excited to have this in CodeCompanion. There are some minor features I'll aim to add in before I release [...]. I will also think about how we can better debug users' MCP issues. I'm expecting there will be a handful of edge cases with adapters (like, as you said, with the [...]). Once again, thanks for your hard work @cairijun.


Description
As discussed in #2506, we want a basic MCP client implemented in CodeCompanion.
Core design:

- `MCP.Client`s are started on first chat creation.
- `@{mcpServerName}` to grant tool access to the LLM.

Features:
- `mcp-remote`
- `experimental` in the Spec
- `log:info`

Config example:
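The config example itself didn't survive extraction; below is a minimal sketch assembled from the `tavily-mcp` example earlier in this thread (not necessarily the exact example from the PR description):

```lua
require("codecompanion").setup({
  mcp = {
    servers = {
      ["tavily-mcp"] = {
        cmd = { "npx", "-y", "tavily-mcp@latest" },
        env = {
          -- resolved via an external command at runtime
          TAVILY_API_KEY = "cmd:op read op://personal/Tavily_API/credential --no-newline",
        },
      },
    },
  },
})
```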
Related Issue(s)
#2506
Screenshots
Checklist

- `make all` to ensure docs are generated, tests pass and my formatting is applied
- `CodeCompanion.has` in the `init.lua` file for my new feature