
feat: add MCP client #2549

Merged
olimorris merged 41 commits into olimorris:develop from cairijun:feat/mcp_init_ver on Feb 1, 2026

Conversation

@cairijun

@cairijun cairijun commented Dec 18, 2025

Description

As discussed in #2506, we want a basic MCP client implemented in CodeCompanion.

Core design:

  • MCP clients are started on first chat creation.
  • Tools from MCP servers are loaded as CodeCompanion tools.
  • Tools from the same MCP server are grouped into a tool group.
  • Users can use @{mcpServerName} to grant the LLM access to that server's tools.

Features:

| Category  | Feature                        | Supported | Notes                                                              |
|-----------|--------------------------------|-----------|--------------------------------------------------------------------|
| Transport | Stdio                          | Yes       |                                                                    |
| Transport | Streamable HTTP                | No        | No plan; can use mcp-remote                                        |
| Basic     | Cancellation                   | Partial   | Only triggered on timeout                                          |
| Basic     | Progress                       | No        | Possible future improvement                                        |
| Basic     | Task                           | No        | No plan; too complex and marked experimental in the spec           |
| Client    | Roots                          | Yes       | Disabled by default                                                |
| Client    | Sampling                       | No        | No plan; needs concrete examples to justify                        |
| Client    | Elicitation                    | No        | No plan; needs concrete examples                                   |
| Server    | Tools                          | Yes       | Currently only supports text content                               |
| Server    | Tool list changed notification | No        | No plan; needs concrete examples to demonstrate necessity          |
| Server    | Prompts                        | No        | Planned; needs concrete examples to help design the UX             |
| Server    | Resources                      | No        | Planned                                                            |
| Server    | Completion                     | No        | No plan; depends on the Prompts UX                                 |
| Server    | Logging                        | Partial   | Currently implemented by forwarding JSON-RPC messages to log:info  |
| Server    | Pagination                     | Yes       |                                                                    |

Config example:

local config = { ... }
config.mcp.servers = {
  -- a simple case
  tavilyMcp = {
    cmd = { "npx", "-y", "tavily-mcp@latest" },
    env = {
      TAVILY_API_KEY = "cmd:cat ~/.config/tavily_api_key",
    },
  },
  -- workspace roots reporting
  filesystem = {
    cmd = { "npx", "-y", "@modelcontextprotocol/server-filesystem" },
    roots = function()
      local roots =
        { { name = "Current Working Directory", uri = vim.uri_from_fname(vim.fn.getcwd()) } }
      vim
        .iter(vim.lsp.get_clients())
        :map(function(client)
          return client.workspace_folders
        end)
        :flatten()
        :each(function(folder)
          table.insert(roots, { name = "LSP Workspace", uri = folder.uri })
        end)
      return roots
    end,
    register_roots_list_changed = function(notify)
      vim.api.nvim_create_autocmd("LspAttach", {
        callback = function()
          notify()
        end,
      })
    end,
  },
  -- override output handlers
  sequentialThinking = {
    cmd = { "npx", "-y", "@modelcontextprotocol/server-sequential-thinking" },
    tool_overrides = {
      sequentialthinking = {
        output = {
          success = function(self, tools, cmd, stdout)
            local output = stdout and stdout[#stdout]
            local msg = "Sequential thinking: " .. self.args.thought
            tools.chat:add_tool_output(self, output, msg)
          end,
        },
      },
    },
  },
  -- a more complex example
  runPython = {
    cmd = { "uvx", "mcp-run-python", "--deps", "numpy,pandas", "stdio" },
    -- override server-level prompt
    server_instructions = function(orig)
      return orig
        .. "\nThe Python environment has the following packages pre-installed: numpy, pandas."
        .. " You can only use these packages and standard library modules."
    end,
    -- set default tool opts
    default_tool_opts = {
      require_approval_before = true,
    },
    tool_overrides = {
      run_python_code = {
        timeout_ms = 5 * 60 * 1000, -- 5 minutes
        output = {
          prompt = function(self)
            local args = self.args
            return string.format(
              "Confirm to execute the following Python code:\n```python\n%s\n```\nGlobal Variables:\n%s",
              args.python_code,
              vim.inspect(args.global_variables)
            )
          end,
        },
      },
    },
  },
}

require("codecompanion").setup(config)

Related Issue(s)

#2506

Screenshots

(two screenshots attached)

Checklist

  • I've read the contributing guidelines and have adhered to them in this PR
  • I've added test coverage for this fix/feature
  • I've run make all to ensure docs are generated, tests pass and my formatting is applied
  • (optional) I've updated CodeCompanion.has in the init.lua file for my new feature
  • (optional) I've updated the README and/or relevant docs pages

@olimorris olimorris added the P3 Low impact, low urgency label Dec 18, 2025
@olimorris
Owner

Fantastic, thank you @cairijun. The holidays start from Monday for me, so I will get around to properly testing this out then. But this will be priority 1 for me.

Owner

@olimorris olimorris left a comment

Firstly, this is awesome and I'm very excited about this making its way into CodeCompanion.

I've done a first-pass review. Nothing major, lots of nitpicking. Then I'll give it some proper testing over the holiday period.

Giving some thought to the UX, I think MCP tools should be distinguished from regular tools by appearing in the chat buffer as @{mcp.runPython}. It makes it easier for users to filter via their completion plugin and makes the distinction clear.

Seeing how you've implemented the MCP servers:

sequentialThinking = {
    cmd = { "npx", "-y", "@modelcontextprotocol/server-sequential-thinking" },
    tool_overrides = {
      sequentialthinking = {
        output = {
          success = function(self, tools, cmd, stdout)
            local output = stdout and stdout[#stdout]
            local msg = "Sequential thinking: " .. self.args.thought
            tools.chat:add_tool_output(self, output, msg)
          end,
        },
      },
    },
  },

...is brilliant 👏🏼. The tool_bridge.lua file is an excellent idea. Ideally, we wouldn't have sequentialthinking appear twice in the same config block, but this can be tackled towards the end of the PR.

Seeing success = function(self, tools, cmd, stdout)... this is on me to refactor. I'm trying to move everything over to a success = function(self, args) pattern, as it's future-proof should we add additional arguments. I realized this pattern far too late on. So heads up, this may change in the future.
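For illustration, a rough sketch of what that future shape could look like (the field names on args here are my assumption, not the PR's actual code):

-- Hypothetical sketch of the future pattern: positional parameters folded into
-- a single `args` table, so new fields can be added without breaking handlers.
-- The `args.tools` / `args.stdout` field names are assumptions.
success = function(self, args)
  local output = args.stdout and args.stdout[#args.stdout]
  args.tools.chat:add_tool_output(self, output)
end,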

Once again, thanks for this brilliant PR.

local M = {}

---Default tool output callbacks
local DefaultOutputCallbacks = {}
Owner

This should be snake case.

Also, using Default implies that there is an alternative for these callbacks; something other than the defaults.

Author

We indeed have alternatives: the user can override some of the callbacks via the tool_overrides.{TOOL_NAME}.output config table. I've changed it to local default_output = { ... }.
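For illustration, resolving the effective callbacks could be as simple as a shallow merge of the user's override over the defaults (a sketch; the helper name and shapes are assumptions, not the PR's exact code):

-- Sketch only: merge tool_overrides.{TOOL_NAME}.output over default_output
local function resolve_output(default_output, tool_overrides, tool_name)
  local override = tool_overrides[tool_name] and tool_overrides[tool_name].output or {}
  return vim.tbl_extend("force", {}, default_output, override)
end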

Copilot AI review requested due to automatic review settings December 25, 2025 10:20
Contributor

Copilot AI left a comment

Pull request overview

This PR implements a basic MCP (Model Context Protocol) client for CodeCompanion, enabling integration with MCP servers to expose their tools as CodeCompanion tools. The implementation supports stdio transport, tool pagination, roots capability, and customizable tool behavior.

Key changes:

  • Core MCP client with JSON-RPC communication over stdio transport
  • Tool bridge to convert MCP tools into CodeCompanion tool format with grouping by server
  • Comprehensive test coverage with mock transport for unit and integration testing

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 12 comments.

| File | Description |
|---|---|
| tests/stubs/mcp/tools.jsonl | Test fixtures containing sample MCP tool definitions for testing |
| tests/mocks/mcp_client_transport.lua | Mock transport implementation enabling testable JSON-RPC communication without external processes |
| tests/mcp/test_mcp_client.lua | Unit tests for MCP client initialization, tool loading, tool calls, timeouts, and roots capability |
| tests/interactions/chat/mcp/test_mcp_tools.lua | Integration tests verifying MCP tools work within chat interactions with tool overrides |
| tests/config.lua | Adds empty MCP config section to test configuration |
| lua/codecompanion/types.lua | Type definitions for MCP protocol structures (JSON-RPC, tools, content blocks) |
| lua/codecompanion/mcp/tool_bridge.lua | Converts MCP tool specifications to CodeCompanion tools with customizable output handlers |
| lua/codecompanion/mcp/init.lua | Module entry point that starts all configured MCP servers on first chat creation |
| lua/codecompanion/mcp/client.lua | Main client implementation handling transport, JSON-RPC, initialization, and tool operations |
| lua/codecompanion/interactions/chat/tools/init.lua | Updates JSON decode to handle null values properly for MCP tool arguments |
| lua/codecompanion/interactions/chat/init.lua | Integrates MCP startup into chat initialization flow |
| lua/codecompanion/config.lua | Adds default MCP configuration with empty servers table |


@cairijun
Author

cairijun commented Dec 25, 2025

Hi @olimorris, thanks for your detailed and constructive comments! I've been a bit busy with some personal matters recently, so I'm replying more slowly than usual.

I've pushed some commits addressing most of the comments.

The generated tool groups are now named mcp.{SERVER_NAME}, while the tools are named mcp_{SERVER_NAME}_{ORIG_TOOL_NAME}, without the use of ., as it is not supported by many LLM providers. This inconsistency is not visible to users, as they always use the groups and the tools themselves are hidden.
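For example (illustration only, not code from the PR), the two names for a given server/tool pair relate like this:

-- Demonstration only: how group and tool names relate for one server/tool pair
local group_name = ("mcp.%s"):format("runPython") -- "mcp.runPython"
local tool_name = ("mcp_%s_%s"):format("runPython", "run_python_code") -- "mcp_runPython_run_python_code"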

There's also a major change: I moved the output formatting logic from cmds to success/error. Now the user-provided output handlers receive the original MCP tool result content objects. It makes the handlers a bit more verbose for simple cases, but much more flexible for complex tool results. I think this is acceptable, since you normally don't need to override them for simple cases.
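As a hedged sketch of what an override looks like under the new scheme (the tool name is hypothetical, and I'm assuming stdout[#stdout] carries the MCP result's content array), a success handler now works with the content blocks directly:

tool_overrides = {
  some_tool = { -- hypothetical tool name
    output = {
      success = function(self, tools, cmd, stdout)
        -- Assumption: stdout[#stdout] is the MCP result's `content` array,
        -- e.g. { { type = "text", text = "..." }, ... }
        local content = stdout and stdout[#stdout] or {}
        local parts = {}
        for _, block in ipairs(content) do
          if block.type == "text" then
            table.insert(parts, block.text)
          end
        end
        tools.chat:add_tool_output(self, table.concat(parts, "\n"))
      end,
    },
  },
},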

@olimorris
Owner

No worries @cairijun and no rush. I too have been in the same boat. We'll definitely get this PR into CodeCompanion 👍🏼. I will be picking this back up next week too.

@olimorris
Owner

I've got some time planned for this weekend to take a look at this. Looking forward to it.

@olimorris olimorris dismissed their stale review January 11, 2026 12:14

Completed

Owner

@olimorris olimorris left a comment

Initial reaction:

This is awesome. Really, really awesome. The fact that all I needed to do to get this working was:

      mcp = {
        servers = {
          ["tavily-mcp"] = {
            cmd = { "npx", "-y", "tavily-mcp@latest" },
            env = {
              TAVILY_API_KEY = "cmd:op read op://personal/Tavily_API/credential --no-newline",
            },
          },
        },
      },

...is tremendous.

I've put some inline comments. Pretty much minor stuff now. I will write the docs and add some more test coverage (I think we might need a bit of coverage around MCP servers appearing in the completion list).

There's a couple of (hopefully) small hurdles I've noticed:

  • In my example above, I'm being prompted by 1Password every time I create a new chat buffer. I may not want to add the Tavily MCP server to that chat buffer, so I think this prompt should appear only when I've pressed submit to send the response to the LLM.
  • It can be hit-and-miss whether the MCP servers appear in the completion list in the chat buffer. I notice that when I open a fresh Neovim and create a chat buffer, I often don't see the MCP servers; if I open a second chat buffer, I do see them. I think we may need to shift the logic of creating the MCP groups to the providers.completion.init file, like we do for all other tools and groups.

Otherwise, this is amazing and I can't wait to merge this into CodeCompanion. No rush and take your time. We all have lives outside of open-source. And, if you need me to pick up any bits, just say.

@@ -0,0 +1,194 @@
local log = require("codecompanion.utils.log")

local CONSTANTS = {
Owner

I've extracted some messaging to the top of the file as I'm likely to tweak this over time. Also, my preference is to have @{mcp:tavily} rather than @{mcp.tavily}, as I think it visually clarifies that tavily is part of the MCP group (super minor nitpick).

table.insert(server_prompt, server_instructions)
end

chat_tools.groups[fmt("%s%s", CONSTANTS.TOOL_PREFIX, client.name)] = {
Owner

I've noticed that MCP tools appearing in the completion menu in the chat buffer can be temperamental. I'm assuming there is a timing difference between when these are resolved in the chat buffer and when the providers.completion.init file resolves them. I do think that logic should be moved to the latter file.

@cairijun
Author

  • In my example above, I'm being prompted by 1Password everytime I create a new chat buffer. I may not want to add the Tavily MCP server to this chat buffer so think this prompt should be when I've pressed submit to send the response to the LLM.
  • It can be hit-and-miss if the MCP server appears in the completion list in the chat buffer. I notice when I open a fresh Neovim, create a chat buffer, I often don't see the MCP servers. If I open a second chat buffer, I do see them. I think we may need to shift the logic of creating the MCP groups to the providers.completion.init file like we do for all other tools and groups.

They might share the same root cause --- MCP servers are started asynchronously on chat creation.

You're prompted by 1Password because mcp.start_servers() is triggered on Chat.new(). But that should only happen for the first chat buffer, even if the server is extremely slow to start --- unless it exited before you started another chat. I don't use 1Password and didn't realise there could be a user interaction involved.

And because the servers are started (and loaded) asynchronously, they will be missing from the completion list until they're actually available. We could add a stub tool group to the config and resolve it once the server is ready, but that may cause missing tools if the chat is submitted before the tool group is resolved. Making the procedure synchronous also seems inappropriate to me, as npx might need to install a whole universe of deps.

I looked over the Chat and ToolRegistry implementations and failed to find an appropriate mechanism allowing me to inject potentially long-running logic before the chat is actually submitted. Another approach is to use an Nvim command or a slash command to explicitly start (maybe only some) MCP servers, to avoid starting all servers on chat creation.

I personally use the MCPServerStart/MCPToolsLoaded/MCPServerExit events with fidget.nvim to set up a progress notification that makes the startup progress visible to me.

[Screenshot: fidget.nvim progress notification during MCP server startup]
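For reference, a minimal version of that wiring without fidget.nvim might look like the following (a sketch; I'm assuming the events above are emitted as User autocommands and that event.data carries the server name, so adjust the pattern names to whatever is actually fired):

-- Sketch: surface MCP server lifecycle events via vim.notify
vim.api.nvim_create_autocmd("User", {
  pattern = { "MCPServerStart", "MCPToolsLoaded", "MCPServerExit" },
  callback = function(event)
    local server = event.data and event.data.server or "unknown server"
    vim.notify(("MCP %s: %s"):format(event.match, server), vim.log.levels.INFO)
  end,
})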

To sum up, we might have the following ways to address this:

  1. Create a stub tool group without any tools and resolve it once the server is ready. The server could be started on demand, but we would still need to find a way to block chat submission before all mentioned servers are ready.
  2. Allow disabling auto-start for some servers (auto_start = false; see the sketch after this list), and have users explicitly start them with a command.
  3. Keep the current behaviour but give clearer status feedback to the users, so they know their config is OK and the servers just need some time to start.
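For option 2, the config could hypothetically look like this (the auto_start key is a proposal in this discussion, not something the PR implements):

-- Proposal sketch only: opt a server out of auto-start on chat creation
config.mcp.servers.tavilyMcp = {
  cmd = { "npx", "-y", "tavily-mcp@latest" },
  auto_start = false, -- to be started later via an explicit command
}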

@olimorris
Owner

olimorris commented Jan 28, 2026

  • Added slash command so users can manually stop and start servers
  • Servers can be turned off at the config level and enabled in the chat
  • Completion engine automatically refreshes when MCP servers are added, keeping the config immutable

@cairijun
Author

The newly added 10s default timeout seems a bit too small. It affects the initialize request, which waits for the server to start up. 10s might not be adequate for npx.

Also, CodeCompanion.MCP.ToolOverride currently uses timeout_ms as the config key, while timeout is used elsewhere. I think we'd better make them consistent.

@olimorris
Owner

Good spot. Will amend

@olimorris
Owner

Will aim to merge this over the weekend. Just the docs to go.

I will add a subsequent PR next week to allow users to specify a JSON file that contains their servers.

@cairijun
Author

cairijun commented Jan 31, 2026

I have several little patches that might improve user experience:

  1. Add output truncation to the success callback to prevent Neovim freezing due to treesitter's poor performance with large texts.
diff --git a/lua/codecompanion/mcp/tool_bridge.lua b/lua/codecompanion/mcp/tool_bridge.lua
index 3d8a3794f..315ae2298 100644
--- a/lua/codecompanion/mcp/tool_bridge.lua
+++ b/lua/codecompanion/mcp/tool_bridge.lua
@@ -34,13 +34,19 @@ local output = {
   success = function(self, tools, cmd, stdout)
     local chat = tools.chat
     local output = M.format_tool_result_content(stdout and stdout[#stdout])
+    local output_for_user = output
+    local DISPLAY_LIMIT_BYTES = 1000
+    if #output_for_user > DISPLAY_LIMIT_BYTES then
+      local utf_offset = vim.str_utf_start(output_for_user, 1 + DISPLAY_LIMIT_BYTES)
+      output_for_user = output_for_user:sub(1, DISPLAY_LIMIT_BYTES + utf_offset) .. "\n\n...[truncated]"
+    end
     local for_user = fmt(
       [[MCP: %s executed successfully:
 ````
 %s
 ````]],
       self.name,
-      output
+      output_for_user
     )
     chat:add_tool_output(self, output, for_user)
   end,
  2. Fix the error callback to properly display error messages and tool arguments.
diff --git a/lua/codecompanion/mcp/tool_bridge.lua b/lua/codecompanion/mcp/tool_bridge.lua
index 3d8a3794f..315ae2298 100644
--- a/lua/codecompanion/mcp/tool_bridge.lua
+++ b/lua/codecompanion/mcp/tool_bridge.lua
@@ -53,13 +59,16 @@ local output = {
     local chat = tools.chat
     local err_msg = M.format_tool_result_content(stderr and stderr[#stderr] or "<NO ERROR MESSAGE>")
     local for_user = fmt(
-      [[MCP: %s failed
+      [[MCP: %s failed:
 ````
 %s
+````
+Arguments:
+````%s
 ````]],
       self.name,
-      vim.inspect(self.args),
-      err_msg
+      err_msg,
+      vim.inspect(self.args)
     )
     chat:add_tool_output(self, "MCP Tool execution failed:\n" .. err_msg, for_user)
   end,
  3. Make the prompt callback include tool invocation arguments in the prompt message. Showing only the tool name seems less useful. Users can override the function if they find the prompt too verbose.
diff --git a/lua/codecompanion/mcp/tool_bridge.lua b/lua/codecompanion/mcp/tool_bridge.lua
index 3d8a3794f..315ae2298 100644
--- a/lua/codecompanion/mcp/tool_bridge.lua
+++ b/lua/codecompanion/mcp/tool_bridge.lua
@@ -69,7 +78,7 @@ local output = {
   ---@param tools CodeCompanion.Tools
   ---@return nil|string
   prompt = function(self, tools)
-    return fmt("Execute the `%s` MCP tool?", self.name)
+    return fmt("Execute the `%s` MCP tool?\nArguments:\n%s", self.name, vim.inspect(self.args))
   end,
 }
  4. Support environment variable substitution in the cmd configuration, to enable MCP servers that require an API key or other secrets passed via command-line arguments (e.g., mcp-remote).
diff --git a/lua/codecompanion/mcp/client.lua b/lua/codecompanion/mcp/client.lua
index d6405ea67..2ae8a1a01 100644
--- a/lua/codecompanion/mcp/client.lua
+++ b/lua/codecompanion/mcp/client.lua
@@ -99,8 +99,9 @@ function StdioTransport:start(on_line_read, on_close)
   self._on_close = on_close
 
   adapter_utils.get_env_vars(self)
+  local cmd = adapter_utils.set_env_vars(self, self.cmd)
   self._sysobj = self.methods.job(
-    self.cmd,
+    cmd,
     {
       env = self.env_replaced or self.env,
       text = true,
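
With that patch, a server that needs a secret on its command line could hypothetically be configured like this (assuming the same ${var} placeholder syntax that CodeCompanion adapters use; the URL and header values are placeholders):

-- Hypothetical config; the server URL and header are placeholders
mcpRemote = {
  cmd = {
    "npx", "-y", "mcp-remote", "https://example.com/mcp",
    "--header", "Authorization: Bearer ${API_KEY}",
  },
  env = {
    API_KEY = "cmd:cat ~/.config/example_api_key",
  },
},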

@olimorris
Owner

@cairijun great patches. Please push them.

@cairijun
Author

Pushed.

I also added a security warning about the roots feature in the doc to prevent potential misuse.

@olimorris
Owner

Fantastic work. So excited to have this in CodeCompanion.

There are some minor features I'll aim to add in before I release v19.0.0. Namely, allowing users to specify MCP servers in an external JSON file, which will make adding servers without restarting Neovim that much easier. Potentially also allowing people to migrate from MCPHub.nvim to this.

I will also think about how we can better debug users' MCP issues. I'm expecting there will be a handful of edge cases with adapters (like, as you said, with openai_responses).

Once again, thanks for your hard work @cairijun.

@olimorris olimorris merged commit a09c0ac into olimorris:develop Feb 1, 2026
5 checks passed