Claude desktop integration #311
Unanswered
emersonbarth asked this question in Q&A
Replies: 1 comment 1 reply
-
Hey @emersonbarth, yes, you can use mcp-agent inside an MCP server and expose that server to Claude Desktop. You will need to create an mcp-agent server first:

```python
# basic_agent_server.py
import asyncio
import os

from mcp_agent.app import MCPApp
from mcp_agent.server.app_server import create_mcp_server_for_app
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.executor.workflow import Workflow, WorkflowResult

app = MCPApp(name="basic_agent_server", description="Basic agent server example")


@app.workflow
class BasicAgentWorkflow(Workflow[str]):
    """
    A basic workflow that demonstrates how to create a simple agent.
    This workflow is used as an example of a basic agent configuration.
    """

    @app.workflow_run
    async def run(self, input: str) -> WorkflowResult[str]:
        """
        Run the basic agent workflow.

        Args:
            input: The input string to prompt the agent.

        Returns:
            WorkflowResult containing the processed data.
        """
        context = app.context

        # Add the current directory to the filesystem server's args
        context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])

        finder_agent = Agent(
            name="finder",
            instruction="""You are an agent with access to the filesystem,
            as well as the ability to fetch URLs. Your job is to identify
            the closest match to a user's request, make the appropriate tool calls,
            and return the URI and CONTENTS of the closest match.""",
            server_names=["fetch", "filesystem"],
        )

        async with finder_agent:
            # Optional sanity check: list the tools the attached servers expose
            tools = await finder_agent.list_tools()

            llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)
            result = await llm.generate_str(message=input)
            return WorkflowResult(value=result)


async def main():
    async with app.run() as agent_app:
        # Add the current directory to the filesystem server's args if needed
        context = agent_app.context
        if "filesystem" in context.config.mcp.servers:
            context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])

        # Create the MCP server that exposes both workflows and agent configurations
        mcp_server = create_mcp_server_for_app(agent_app)

        # Run the server over stdio
        await mcp_server.run_stdio_async()


if __name__ == "__main__":
    asyncio.run(main())
```

Next, make sure your `mcp_agent.config.yaml` sets the execution engine and configures the MCP servers:
```yaml
execution_engine: asyncio

logger:
  transports: [file]
  level: debug
  path: "logs/mcp-agent.jsonl"

mcp:
  servers:
    fetch:
      command: "/Users/saqadri/.local/bin/uvx" # Update this
      args: ["mcp-server-fetch"]
      description: "Fetch content at URLs from the world wide web"
    filesystem:
      command: "/Users/saqadri/.nvm/versions/node/v20.3.0/bin/npx" # Update this
      args:
        [
          "-y",
          "@modelcontextprotocol/server-filesystem",
          # Current directory will be added by the code
        ]
      description: "Read and write files on the filesystem"

openai:
  default_model: gpt-4o
  # Secrets are loaded from mcp_agent.secrets.yaml
```

Next, add your mcp-agent server to Claude Desktop's `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "mcpAgent": {
      "command": "uv",
      "args": [
        "--directory",
        "<ABSOLUTE DIR PATH>",
        "run",
        "basic_agent_server.py"
      ]
    }
  },
  "globalShortcut": ""
}
```

If you restart Claude Desktop, you should then be able to query the mcp-agent server from within Claude Desktop.
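As the comment in `mcp_agent.config.yaml` notes, secrets are read from a separate `mcp_agent.secrets.yaml` alongside the config file. A minimal sketch, assuming you only need the OpenAI key (the secrets file mirrors the config file's structure; check the mcp-agent docs for other providers):

```yaml
# mcp_agent.secrets.yaml -- keep this file out of version control
openai:
  api_key: "<YOUR OPENAI API KEY>"
```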
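If Claude Desktop doesn't pick the server up after a restart, the usual culprits are malformed JSON or a non-existent absolute path in `claude_desktop_config.json`. Here is a quick sanity check using only the Python standard library; the config path below is the macOS default, and `check_config` is a hypothetical helper name, so adjust both for your setup:

```python
import json
from pathlib import Path

# Default Claude Desktop config location on macOS (assumption; differs on Windows/Linux)
CONFIG_PATH = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"


def check_config(path: Path) -> list[str]:
    """Return a list of problems found in a Claude Desktop config file."""
    problems: list[str] = []
    try:
        config = json.loads(path.read_text())
    except FileNotFoundError:
        return [f"config not found at {path}"]
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]

    for name, server in config.get("mcpServers", {}).items():
        args = server.get("args", [])
        # For `--directory <path>` argument pairs, verify the directory exists
        for flag, value in zip(args, args[1:]):
            if flag == "--directory" and not Path(value).is_dir():
                problems.append(f"{name}: directory {value!r} does not exist")
    return problems


if __name__ == "__main__":
    for problem in check_config(CONFIG_PATH):
        print(problem)
```

If the script prints nothing, the JSON parses and every `--directory` argument points at a real directory; Claude Desktop's MCP logs (under `~/Library/Logs/Claude` on macOS) are the next place to look.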
1 reply
-
Struggling to add this MCP server to Claude Desktop: is it able to run via npm/node automatically within Claude, or is it only accessible via Python?
When I try to run `npm install`, there isn't a package.json available in the folder (I only see package-lock.json), which is how I set up other MCP servers without issue.
Do I also need to add an API key after registering on lastmileai.dev? Other MCPs haven't required this, so I'm a bit confused :/