examples/mcp_agent_server/README.md

This directory includes two implementations of the MCP Agent Server pattern:
### [Asyncio](./asyncio)

The asyncio implementation provides:

- In-memory execution with minimal setup
- Simple deployment with no external dependencies
- Fast startup and execution
### [Temporal](./temporal)

The Temporal implementation provides:

- Durable execution of workflows using Temporal as the orchestration engine
- Pause/resume capabilities via Temporal signals
- Automatic retry and recovery from failures

## Key MCP Agent Server Advantages

| Capability | Description |
| ---------------------------- | ---------------------------------------------------------------------------------- |
| **Protocol Standardization** | Agents communicate via standardized MCP protocol, ensuring interoperability |
| **Workflow Encapsulation** | Complex agent workflows are exposed as simple MCP tools |
| **Execution Flexibility** | Choose between in-memory (asyncio) or durable (Temporal) execution |
| **Client Independence** | Connect from any MCP client: Claude, VSCode, Cursor, MCP Inspector, or custom apps |
| **Multi-Agent Ecosystems** | Build systems where multiple agents can interact and collaborate |

## Getting Started

Each implementation directory contains its own README with detailed instructions. Prefer the decorator-based tool definition (`@app.tool` / `@app.async_tool`) for the simplest developer experience:

- [Asyncio Implementation](./asyncio/README.md)
- [Temporal Implementation](./temporal/README.md)

### Preferred: Declare tools with decorators

Instead of only defining workflow classes, you can expose tools directly from functions:

```python
from mcp_agent.app import MCPApp

app = MCPApp(name="my_agent_server")

@app.tool
async def do_something(arg: str) -> str:
    """Do something synchronously and return the final result."""
    return "done"

@app.async_tool(name="do_something_async")
async def do_something_async(arg: str) -> str:
    """
    Start work asynchronously.

    The tool call returns 'workflow_id' and 'run_id'. Use 'workflows-get_status' with
    the returned IDs to retrieve status and results.
    """
    # The function's return value becomes the workflow's result, which callers
    # retrieve later via 'workflows-get_status'.
    return "started"
```

- Sync tool returns the final result; no status polling needed.
- Async tool returns IDs for polling via the generic `workflows-get_status` endpoint.
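The async contract above — call the tool, get IDs back, then poll `workflows-get_status` — can be sketched with a couple of illustrative helpers. `extract_ids` and `wait_for_result` are not part of the mcp-agent API, and the status strings are assumptions:

```python
import asyncio
import json
from typing import Awaitable, Callable

def extract_ids(result_text: str) -> tuple[str, str]:
    """Parse the JSON payload an async tool call returns into (workflow_id, run_id)."""
    payload = json.loads(result_text)
    return payload["workflow_id"], payload["run_id"]

async def wait_for_result(
    get_status: Callable[[str], Awaitable[dict]],
    run_id: str,
    poll_seconds: float = 0.5,
) -> dict:
    """Poll a 'workflows-get_status'-style callable until the run finishes."""
    while True:
        status = await get_status(run_id)
        # Terminal states assumed here; check the server's actual status values.
        if status.get("status") in ("completed", "error", "cancelled"):
            return status
        await asyncio.sleep(poll_seconds)
```

In practice `get_status` would wrap a `workflows-get_status` tool call made through your MCP client session.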

## Multi-Agent Interaction Pattern

One of the most powerful capabilities enabled by the MCP Agent Server pattern is multi-agent interaction. Here's a conceptual example:

In this example:

1. Claude Desktop can use both agent servers
2. The Writing Agent can also use the Research Agent as a tool
3. All communication happens via the MCP protocol
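As a concrete sketch, the Writing Agent's own configuration might list the Research Agent's server so it can call that agent's workflows as tools (the server name, command, and path below are hypothetical):

```yaml
mcp:
  servers:
    research_agent:
      command: "uv"
      args: ["run", "research_agent_server.py"]
```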
examples/mcp_agent_server/asyncio/README.md

https://github.com/user-attachments/assets/f651af86-222d-4df0-8241-616414df66e4

This example demonstrates:
- Creating workflows with the `Workflow` base class
- Registering workflows with an `MCPApp`
- Exposing workflows as MCP tools using `create_mcp_server_for_app`, optionally using custom FastMCP settings
- Preferred: Declaring MCP tools with `@app.tool` and `@app.async_tool`
- Connecting to an MCP server using `gen_client`
- Running workflows remotely and monitoring their status

## Preferred: Define tools with decorators

You can declare tools directly from plain Python functions using `@app.tool` (sync) and `@app.async_tool` (async). This is the simplest and recommended way to expose agent logic.

```python
from typing import Optional

from mcp_agent.app import MCPApp
from mcp_agent.core.context import Context

app = MCPApp(name="basic_agent_server")

# Synchronous tool – runs to completion and returns the final result to the caller
@app.tool
async def grade_story(story: str, app_ctx: Optional[Context] = None) -> str:
    """
    Grade a student's short story and return a structured report.
    """
    # ... implement using your agents/LLMs ...
    return "Report..."

# Asynchronous tool – starts a workflow; the tool call returns IDs to poll later
@app.async_tool(name="grade_story_async")
async def grade_story_async(story: str, app_ctx: Optional[Context] = None) -> str:
    """
    Start grading the story asynchronously.

    This tool starts the workflow and returns 'workflow_id' and 'run_id'. Use the
    generic 'workflows-get_status' tool with the returned IDs to retrieve status/results.
    """
    # ... implement using your agents/LLMs; the return value becomes the
    # workflow's result, retrievable via 'workflows-get_status' ...
    return "(async run)"
```

What gets exposed:

- Sync tools appear as `<tool_name>` and return the final result (no status polling needed).
- Async tools appear as `<tool_name>` and return `{"workflow_id": ..., "run_id": ...}`; use `workflows-get_status` to query status.

These decorator-based tools are registered automatically when you call `create_mcp_server_for_app(app)`.

## Components in this Example

1. **BasicAgentWorkflow**: A simple workflow that demonstrates basic agent functionality:

The MCP agent server exposes the following tools:

- `workflows-list` - Lists available workflows and their parameter schemas
- `workflows-get_status` - Get status for a running workflow by `run_id` (and optional `workflow_id`)
- `workflows-cancel` - Cancel a running workflow

If you use the preferred decorator approach:

- Sync tool: `grade_story` (returns final result)
- Async tool: `grade_story_async` (returns `workflow_id/run_id`; poll with `workflows-get_status`)

The workflow-based endpoints (e.g., `workflows-<Workflow>-run`) are still available when you define explicit workflow classes.
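The naming convention for these endpoints can be expressed as a small helper — illustrative only, since the server derives these names automatically:

```python
# Generic endpoints every server exposes
GENERIC_TOOLS = ["workflows-list", "workflows-get_status", "workflows-cancel"]

def run_tool_name(workflow_cls_name: str) -> str:
    """Per-workflow run endpoint name, following the convention above."""
    return f"workflows-{workflow_cls_name}-run"
```

For example, registering a `BasicAgentWorkflow` class yields a `workflows-BasicAgentWorkflow-run` tool alongside the generic endpoints.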

## Prerequisites

Before running the example, you'll need to configure the necessary paths and API keys.

1. Copy the example secrets file:

   ```bash
   cp mcp_agent.secrets.yaml.example mcp_agent.secrets.yaml
   ```

2. Edit `mcp_agent.secrets.yaml` to add your API keys:

   ```yaml
   anthropic:
     api_key: "your-anthropic-api-key"
   openai:
     api_key: "your-openai-api-key"
   ```

## How to Run

### Using the Client Script

The simplest way to run the example is using the provided client script:

```bash
# Make sure you're in the mcp_agent_server/asyncio directory
uv run client.py
```
You can also run the server and client separately:

1. In one terminal, start the server:

   ```bash
   uv run basic_agent_server.py

   # Optionally, run with the example custom FastMCP settings
   uv run basic_agent_server.py --custom-fastmcp-settings
   ```

2. In another terminal, run the client:

   ```bash
   uv run client.py

   # Optionally, run with the example custom FastMCP settings
   uv run client.py --custom-fastmcp-settings
   ```

## Receiving Server Logs in the Client

The server advertises the `logging` capability (via `logging/setLevel`) and forwards its structured logs upstream using `notifications/message`. To receive these logs in a client session, pass a `logging_callback` when constructing the client session and set the desired level:

```python
from datetime import timedelta
from anyio.streams.memory import MemoryObjectReceiveStream, MemoryObjectSendStream
from mcp import ClientSession
from mcp.types import LoggingMessageNotificationParams
from mcp_agent.mcp.mcp_agent_client_session import MCPAgentClientSession

async def on_server_log(params: LoggingMessageNotificationParams) -> None:
    print(f"[SERVER LOG] [{params.level.upper()}] [{params.logger}] {params.data}")

def make_session(
    read_stream: MemoryObjectReceiveStream,
    write_stream: MemoryObjectSendStream,
    read_timeout_seconds: timedelta | None,
) -> ClientSession:
    return MCPAgentClientSession(
        read_stream=read_stream,
        write_stream=write_stream,
        read_timeout_seconds=read_timeout_seconds,
        logging_callback=on_server_log,
    )

# Later, when connecting via gen_client(..., client_session_factory=make_session)
# you can request the minimum server log level:
# await server.set_logging_level("info")
```

The example client (`client.py`) demonstrates this end-to-end: it registers a logging callback and calls `set_logging_level("info")` so logs from the server appear in the client's console.
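Beyond `set_logging_level`, a client callback can also filter locally. MCP uses syslog-style severities, so a minimal client-side filter might look like this (an illustrative helper, not part of mcp-agent):

```python
# Syslog-style severities used by MCP logging, ordered least to most severe
LEVELS = ["debug", "info", "notice", "warning", "error", "critical", "alert", "emergency"]

def should_display(level: str, minimum: str) -> bool:
    """True if a message at `level` meets or exceeds the `minimum` threshold."""
    return LEVELS.index(level) >= LEVELS.index(minimum)
```

Such a check could be applied inside `on_server_log` before printing.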

## MCP Clients

Because agents and workflows are exposed as MCP tools, this server can be used from any MCP client, like any other MCP server.

You can inspect and test the server using [MCP Inspector](https://github.com/modelcontextprotocol/inspector):

```bash
npx @modelcontextprotocol/inspector \
  uv \
  --directory /path/to/mcp-agent/examples/mcp_agent_server/asyncio \
  run \
  basic_agent_server.py
```
To use this server with Claude Desktop:

1. Open your Claude Desktop configuration file

2. Add a new server configuration:

```json
"basic-agent-server": {
"command": "/path/to/uv",
"args": [
"--directory",
"/path/to/mcp-agent/examples/mcp_agent_server/asyncio",
"run",
"basic_agent_server.py"
]
}
```

3. Restart Claude Desktop, and you'll see the server available in the tool drawer

4. (**Claude Desktop workaround**) Update the `mcp_agent.config.yaml` file with the full paths to npx/uvx on your system:

Find the full paths to `uvx` and `npx` on your system:

```bash
which uvx
which npx
```

Update the `mcp_agent.config.yaml` file with these paths:

```yaml
mcp:
servers:
fetch:
command: "/full/path/to/uvx" # Replace with your path
args: ["mcp-server-fetch"]
filesystem:
command: "/full/path/to/npx" # Replace with your path
args: ["-y", "@modelcontextprotocol/server-filesystem"]
```
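The manual `which` lookups can also be scripted; this is an optional convenience sketch using only the standard library:

```python
import shutil
from typing import Optional

def resolve_commands(names: list[str]) -> dict[str, Optional[str]]:
    """Map each command name to its absolute path on PATH, or None if missing."""
    return {name: shutil.which(name) for name in names}

# Example: resolve_commands(["uvx", "npx"]) returns the full paths to paste
# into mcp_agent.config.yaml on a system where both tools are installed.
```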

## Code Structure
