27 changes: 17 additions & 10 deletions src/mcp_agent/cli/commands/init.py
@@ -153,8 +153,8 @@ def init(
scaffolding_templates = {
"basic": "Simple agent with filesystem and fetch capabilities",
"server": "MCP server with workflow and parallel agents",
"token": "Token counting example with monitoring",
"factory": "Agent factory with router-based selection",
# "token": "Token counting example with monitoring",
@rholinshead rholinshead Oct 23, 2025

Just removing these for now. I'm generally not sure about their usefulness as templates (vs. just examples in the repo), but they were also copying an unrelated README, so they would at least need an updated README on top of validating that they work (e.g. we know token counting doesn't work for cloud workflow runs at the moment).

# "factory": "Agent factory with router-based selection",
Comment on lines +156 to +157

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Inconsistency: Token and factory templates are commented out but code paths remain.

Lines 156-157 comment out the token and factory templates, but:

  • Their template handling code (lines 401-439) is still present and functional
  • Interactive mode (lines 496-497) still offers them as choices
  • Using --template token or --template factory will fail at line 266 with "Unknown template" error
  • Selecting them in interactive mode will also fail

If these templates are intentionally excluded (per PR objectives), consider:

  1. Either remove the code paths entirely (lines 401-439 and interactive choices at 496-497)
  2. Or keep them but add a check to display a helpful message like "This template is temporarily disabled pending verification"
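Option 2 can be sketched as a small guard in front of the existing lookup. The names `DISABLED_TEMPLATES` and `validate_template` below are hypothetical, not the actual code in `init.py`:

```python
# Hypothetical sketch of option 2: keep the disabled templates listed,
# but fail early with a helpful message instead of "Unknown template".
DISABLED_TEMPLATES = {"token", "factory"}


def validate_template(template: str, templates: dict) -> None:
    if template in DISABLED_TEMPLATES:
        raise SystemExit(
            f"Template '{template}' is temporarily disabled pending verification."
        )
    if template not in templates:
        raise SystemExit(f"Unknown template: {template}")


validate_template("basic", {"basic": "Simple agent"})  # passes silently
```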

Run this script to verify template references:


🏁 Script executed:

```bash
#!/bin/bash
# Check for any other references to token/factory templates that might break

rg -n "token.*template|factory.*template" --type py -C 2
```

Length of output: 1282

I need to verify the template validation logic to confirm the exact error path. That search won't help; let me check the actual validation code in the file:

🏁 Script executed:

```bash
#!/bin/bash
# Find the template validation logic and line 266
sed -n '250,280p' src/mcp_agent/cli/commands/init.py
```

Length of output: 1097

🏁 Script executed:

```bash
#!/bin/bash
# Find where the scaffolding_templates dict is used for validation
rg -n "scaffolding_templates\[|in scaffolding_templates|get.*template" src/mcp_agent/cli/commands/init.py -A2 -B2
```

Length of output: 514

🏁 Script executed:

```bash
#!/bin/bash
# Check the complete init function signature and early validation
sed -n '240,280p' src/mcp_agent/cli/commands/init.py
```

Length of output: 1521

I need to find where the templates variable is assigned:

🏁 Script executed:

```bash
#!/bin/bash
# Find where the templates variable is defined in the init function
sed -n '130,180p' src/mcp_agent/cli/commands/init.py
```

Length of output: 77

🏁 Script executed:

```bash
#!/bin/bash
# Find where the templates variable is assigned in the init function
rg -n "templates\s*=" src/mcp_agent/cli/commands/init.py -B2 -A2 | head -30
```

Length of output: 992

🏁 Script executed:

```bash
#!/bin/bash
# Also check the function signature to understand the parameters
rg -n "def init\(" src/mcp_agent/cli/commands/init.py -A20
```

Length of output: 985

🏁 Script executed:

```bash
#!/bin/bash
# Check the complete interactive-mode templates dict and what happens after selection
rg -n "def init\(" src/mcp_agent/cli/commands/init.py -A 370 | grep -A50 "Template selection"
```

Length of output: 74

🏁 Script executed:

```bash
#!/bin/bash
# Get lines 490-520 to see interactive template selection and handling
rg -n "Interactive mode|if interactive|Template selection" src/mcp_agent/cli/commands/init.py -A15
```

Length of output: 689

🏁 Script executed:

```bash
#!/bin/bash
# Also search for where the interactive choice is processed
rg -n 'choice\s*=' src/mcp_agent/cli/commands/init.py -A3 -B3
```

Length of output: 292

🏁 Script executed:

```bash
#!/bin/bash
# Check code after line 507 to see how init() is called
rg -n "template_name" src/mcp_agent/cli/commands/init.py | grep -v "^\s*#"
```

Length of output: 254


Interactive mode offers token/factory templates but execution will fail.

The original analysis was correct. Here's the confirmed issue:

  • Lines 496-497: Interactive mode offers "token" and "factory" as choices
  • Line 506: User's selection is extracted as template_name
  • Line 542: This template_name is passed to init(template=template_name)
  • Line 265 in `init()`: validation checks `if template not in templates:`
  • The main templates dict (line 180) only includes "basic", "server", "workflow", "researcher", "gen_coding", and "chat"
  • Token and factory are commented out from scaffolding_templates (lines 156-157)
  • Result: Users selecting token/factory interactively will hit the "Unknown template" error at line 265
  • Lines 401-439: The elif blocks handling these templates are now dead code

Fix required: Either remove token/factory from interactive choices (lines 496-497) AND remove their code paths (lines 401-439), OR uncomment them in scaffolding_templates (lines 156-157) if they should be available.
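If option A is not taken, one way to keep the UI and validation from drifting apart again is to derive the interactive choices from the same dict. This is a sketch only; `interactive_choices` is a hypothetical helper, not current code:

```python
# Hypothetical sketch: a single source of truth for template choices.
scaffolding_templates = {
    "basic": "Simple agent with filesystem and fetch capabilities",
    "server": "MCP server with workflow and parallel agents",
    "minimal": "Minimal configuration files only",
}


def interactive_choices(templates: dict) -> list[str]:
    # Whatever validation accepts is exactly what the menu offers.
    return sorted(templates)


print(interactive_choices(scaffolding_templates))
# → ['basic', 'minimal', 'server']
```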

🤖 Prompt for AI Agents
In src/mcp_agent/cli/commands/init.py around lines 156-157 and 401-439
(interactive choices at ~496-497 and template validation at ~265), the
interactive menu presents "token" and "factory" but those keys are commented out
in scaffolding_templates, causing an "Unknown template" error and leaving the
template-specific code paths dead; fix by doing one of two: A) Re-enable the
templates by uncommenting the "token" and "factory" entries at lines 156-157 so
they exist in the templates dict used by init(), keeping the existing elif
blocks, or B) Remove "token" and "factory" from the interactive choices (~lines
496-497) and delete the corresponding elif handling code blocks (lines ~401-439)
so the UI and implementation stay consistent. Ensure whichever option you choose
also updates any tests/docs referencing those templates.

"minimal": "Minimal configuration files only",
}

@@ -360,7 +360,7 @@ def init(
# No separate agents.yaml needed; agent definitions live in mcp_agent.config.yaml

# Create README for the basic template
readme_content = _load_template("README_init.md")
readme_content = _load_template("README_basic.md")
if readme_content:
created = _write_readme(dir, readme_content, force)
if created:
@@ -374,23 +374,30 @@ def init(
files_created.append(created)

elif template == "server":
server_path = dir / "server.py"
server_path = dir / "main.py"
server_content = _load_template("basic_agent_server.py")
if server_content and _write(server_path, server_content, force):
files_created.append("server.py")
files_created.append("main.py")
# Make executable
try:
server_path.chmod(server_path.stat().st_mode | 0o111)
except Exception:
pass

# README for server template
readme_content = _load_template("README_init.md")
readme_content = _load_template("README_server.md")
if readme_content:
created = _write_readme(dir, readme_content, force)
if created:
files_created.append(created)

# Add basic requirements.txt
requirements_content = _load_template("requirements.txt")
if requirements_content:
created = _write_requirements(dir, requirements_content, force)
if created:
files_created.append(created)

elif template == "token":
token_path = dir / "token_example.py"
token_content = _load_template("token_counter.py")
@@ -402,7 +409,7 @@
except Exception:
pass

readme_content = _load_template("README_init.md")
readme_content = _load_template("README_token.md")
if readme_content:
created = _write_readme(dir, readme_content, force)
if created:
@@ -425,7 +432,7 @@
if agents_content and _write(agents_path, agents_content, force):
files_created.append("agents.yaml")

readme_content = _load_template("README_init.md")
readme_content = _load_template("README_factory.md")
if readme_content:
created = _write_readme(dir, readme_content, force)
if created:
@@ -448,9 +455,9 @@
run_file = entry_script_name or "main.py"
console.print(f"3. Run your agent: [cyan]uv run {run_file}[/cyan]")
elif template == "server":
console.print("3. Run the server: [cyan]uv run server.py[/cyan]")
console.print("3. Run the server: [cyan]uv run main.py[/cyan]")
console.print(
" Or serve: [cyan]mcp-agent dev serve --script server.py[/cyan]"
" Or serve: [cyan]mcp-agent dev serve --script main.py[/cyan]"
)
elif template == "token":
console.print("3. Run the example: [cyan]uv run token_example.py[/cyan]")
143 changes: 143 additions & 0 deletions src/mcp_agent/data/templates/README_server.md
@@ -0,0 +1,143 @@
# MCP-Agent Server Starter

Welcome! This project was generated by `mcp-agent init`. It demonstrates how to expose your mcp-agent application as an MCP server, making your agentic workflows available to any MCP client.

## What's included

- An `MCPApp` named `basic_agent_server` (see `main.py`).
- A workflow class `BasicAgentWorkflow`:
- Uses `Agent` to connect to `filesystem` and `fetch` MCP servers.
- Demonstrates multi-turn conversations with an LLM (OpenAI).
- Shows how to configure model preferences for specific requests.
- A tool function decorated with `@app.tool`:
- `grade_story(story: str, app_ctx?)` - Grades a student's short story using parallel agents (proofreader, fact checker, style enforcer) via `ParallelLLM`.
- Returns the final result directly to the caller (no polling needed).
- Server logs are forwarded to connected MCP clients as notifications.
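The parallel grading described above can be sketched in plain `asyncio`. The grader functions below are deterministic stand-ins; the real template wires `Agent` instances into `ParallelLLM`, which fans out and aggregates:

```python
import asyncio


# Stand-ins for the three grading agents; the real code fans Agent
# instances out through ParallelLLM and aggregates their reports.
async def proofread(story: str) -> str:
    return f"proofreader: checked {len(story.split())} words"


async def fact_check(story: str) -> str:
    return "fact checker: no factual claims found"


async def enforce_style(story: str) -> str:
    return "style enforcer: reviewed formatting"


async def grade_story(story: str) -> str:
    # Run all graders concurrently, then aggregate into one report.
    reports = await asyncio.gather(
        proofread(story), fact_check(story), enforce_style(story)
    )
    return "\n".join(reports)


print(asyncio.run(grade_story("Once upon a time")))
```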

## What gets exposed as MCP tools

When you run `main.py`, your MCP server exposes:

- `workflows-list` - Lists available workflows and their parameter schemas
- `workflows-BasicAgentWorkflow-run` - Executes the BasicAgentWorkflow with input
- `workflows-get_status` - Get status for a running workflow by `run_id`
- `workflows-cancel` - Cancel a running workflow
- `grade_story` - Synchronous tool that grades a short story and returns the final result
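The per-workflow tool names follow a simple pattern, sketched below. The exact naming lives inside `create_mcp_server_for_app`; this derivation is an assumption based on the list above, not the actual implementation:

```python
# Assumed naming scheme for the tools the server exposes, inferred from
# the list above; not the actual implementation.
def exposed_tool_names(workflows: list[str], tools: list[str]) -> list[str]:
    names = ["workflows-list", "workflows-get_status", "workflows-cancel"]
    names += [f"workflows-{w}-run" for w in workflows]
    return names + tools


print(exposed_tool_names(["BasicAgentWorkflow"], ["grade_story"]))
```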

## Quick start

1. Add your OpenAI API key to `mcp_agent.secrets.yaml` (or set `OPENAI_API_KEY` env var).

NOTE: You can use another supported provider (e.g. Anthropic) instead; just be sure to set its API key in `mcp_agent.secrets.yaml` (or set its env var) and import/use the relevant `AugmentedLLM` in `main.py`.

2. Review `mcp_agent.config.yaml`:

- Execution engine: `asyncio`
- Logger settings
- MCP servers: `filesystem`, `fetch`

3. Install dependencies and run the server:

```bash
uv pip install -r requirements.txt
uv run main.py
```

The server will start and expose its tools over SSE. You'll see:

```bash
Creating MCP server for basic_agent_server
Registered workflows:
- BasicAgentWorkflow
MCP Server settings: ...
```

4. Connect with an MCP client:

You can connect to this server using any MCP client. For example, use [MCP Inspector](https://github.com/modelcontextprotocol/inspector) to explore and test:

```bash
npx @modelcontextprotocol/inspector --transport sse --server-url http://127.0.0.1:8000/sse
```

This will launch the inspector UI where you can:

- See all available tools (`grade_story`, `workflows-BasicAgentWorkflow-run`, etc.)
- Test workflow execution
- View request/response details

5. Deploy as a remote MCP server:

When you're ready to deploy, ensure the required API keys are set in `mcp_agent.secrets.yaml` and then run:

```bash
uv run mcp-agent login
```

to authenticate to mcp-agent cloud. You will be redirected to the login page; create an mcp-agent cloud account through Google or GitHub.
⚠️ Potential issue | 🟡 Minor

Fix capitalization: "Github" → "GitHub".

The official brand name uses a capital "H".

Apply this diff:

-to authenticate to mcp-agent cloud. You will be redirected to the login page, create an mcp-agent cloud account through Google or Github.
+to authenticate to mcp-agent cloud. You will be redirected to the login page, create an mcp-agent cloud account through Google or GitHub.
🧰 Tools
🪛 LanguageTool

[uncategorized] ~77-~77: The official name of this software platform is spelled with a capital “H”.
Context: ...p-agent cloud account through Google or Github. Set up your mcp-agent cloud API Key a...

(GITHUB)

🤖 Prompt for AI Agents
In src/mcp_agent/data/templates/README_server.md around line 77, fix the brand
capitalization by changing "Github" to "GitHub" in the sentence "to authenticate
to mcp-agent cloud. You will be redirected to the login page, create an
mcp-agent cloud account through Google or Github." so it reads "...through
Google or GitHub.".


Set up your mcp-agent cloud API key and copy-paste it into your terminal:

```bash
INFO: Directing to MCP Agent Cloud API login...
Please enter your API key 🔑:
```

In your terminal, deploy the MCP app:

```bash
uv run mcp-agent deploy basic_agent_server
```

You will then be prompted to specify the type of secret to save your OpenAI API key as. Select (1) deployment secret so that it is available to the deployed server.

The `deploy` command will bundle the app files and deploy them, wrapping your app as a hosted MCP SSE server with a URL of the form:
`https://<server_id>.deployments.mcp-agent.com`.

Anything decorated with `@app.tool` (or `@app.async_tool`) runs as a Temporal workflow in the cloud.

Since the mcp-agent app is exposed as an MCP server, it can be used in any MCP client just
like any other MCP server. For example, you can inspect and test the server using MCP Inspector:

```bash
npx @modelcontextprotocol/inspector --transport sse --server-url https://<server_id>.deployments.mcp-agent.com/sse
```

## Notes

- `app_ctx` is the MCPApp Context (configuration, logger, upstream session, etc.).
- Logging uses `app.logger` and is forwarded as notifications when connected to an MCP client.
- Configuration is read from `mcp_agent.config.yaml` and `mcp_agent.secrets.yaml` (env vars supported).
- The default model is configurable (see `openai.default_model` in config).
- The server runs in `asyncio` mode and exposes tools via SSE by default.

## Key concepts demonstrated

- **Creating workflows**: Use the `@app.workflow` decorator and `Workflow` base class to define reusable workflows.
- **Defining tools**: Use `@app.tool` for synchronous tools that return results immediately.
- **Using agents**: Create `Agent` instances with specific instructions and server access (filesystem, fetch, etc.).
- **Parallel execution**: Use `ParallelLLM` to run multiple agents in parallel and aggregate their results.
- **Multi-turn conversations**: LLMs maintain conversation context across multiple `generate_str()` calls.
- **Model preferences**: Configure model selection via `RequestParams` and `ModelPreferences`.
- **Server creation**: Use `create_mcp_server_for_app()` to wrap your MCPApp as an MCP server.
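The decorator-registration pattern behind `@app.workflow` and `@app.tool` can be illustrated with a toy registry. This is a sketch of the pattern only; `MiniApp` is hypothetical, and the real `MCPApp` does far more (schemas, context, execution engine):

```python
class MiniApp:
    """Toy illustration of decorator-based registration; not the real MCPApp."""

    def __init__(self) -> None:
        self.workflows: dict[str, type] = {}
        self.tools: dict[str, object] = {}

    def workflow(self, cls: type) -> type:
        # Record the workflow class under its name, then return it unchanged.
        self.workflows[cls.__name__] = cls
        return cls

    def tool(self, fn):
        # Record the tool function under its name, then return it unchanged.
        self.tools[fn.__name__] = fn
        return fn


app = MiniApp()


@app.workflow
class BasicAgentWorkflow:
    pass


@app.tool
def grade_story(story: str) -> str:
    return f"graded a {len(story.split())}-word story"


print(sorted(app.workflows), sorted(app.tools))
```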

## Next steps

- Modify the `BasicAgentWorkflow` instructions or server list to fit your use case.
- Add more tools with `@app.tool` or `@app.async_tool` as you grow the app.
- Explore the `grade_story` tool to understand parallel agent execution.
- Customize the agents used by `ParallelLLM` (proofreader, fact checker, style enforcer).
- Read the docs and explore examples:
- GitHub: https://github.com/lastmile-ai/mcp-agent
- Docs: https://docs.mcp-agent.com/
- Discord: https://lmai.link/discord/mcp-agent

## Further reading

- Configuration reference and secrets management.
- MCP servers (stdio, SSE, streamable_http, websockets) and timeouts.
- Temporal workflows, activities, and logging/notifications when deployed.
- Agents and LLMs: `AgentSpec`, prompts, and model defaults.
- Using `@app.async_tool` for long-running workflows (returns workflow_id/run_id for polling).

Happy building!
48 changes: 14 additions & 34 deletions src/mcp_agent/data/templates/basic_agent_server.py
@@ -7,7 +7,6 @@
3. Declarative agent configuration using FastMCPApp decorators
"""

import argparse
import asyncio
import os
from typing import Optional
@@ -20,14 +19,15 @@
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm import RequestParams
from mcp_agent.workflows.llm.llm_selector import ModelPreferences
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM

# We are using the OpenAI augmented LLM for this example but you can swap with others (e.g. AnthropicAugmentedLLM)
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.parallel.parallel_llm import ParallelLLM
from mcp_agent.executor.workflow import Workflow, WorkflowResult

# Note: This is purely optional:
# if not provided, a default FastMCP server will be created by MCPApp using create_mcp_server_for_app()
mcp = FastMCP(name="basic_agent_server", description="My basic agent server example.")
mcp = FastMCP(name="basic_agent_server")

# Define the MCPApp instance. The server created for this app will advertise the
# MCP logging capability and forward structured logs upstream to connected clients.
@@ -57,8 +57,8 @@ async def run(self, input: str) -> WorkflowResult[str]:
WorkflowResult containing the processed data.
"""

logger = app.logger
context = app.context
logger = context.logger

logger.info("Current config:", data=context.config.model_dump())
logger.info(
@@ -82,7 +82,7 @@ async def run(self, input: str) -> WorkflowResult[str]:
result = await finder_agent.list_tools()
logger.info("Tools available:", data=result.model_dump())

llm = await finder_agent.attach_llm(AnthropicAugmentedLLM)
llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)

result = await llm.generate_str(
message=input,
@@ -123,12 +123,8 @@ async def grade_story(story: str, app_ctx: Optional[AppContext] = None) -> str:
story: The student's short story to grade
app_ctx: Optional MCPApp context for accessing app resources and logging
"""
# Use the context's app if available for proper logging with upstream_session
_app = app_ctx.app if app_ctx else app
# Ensure the app's logger is bound to the current context with upstream_session
if _app._logger and hasattr(_app._logger, "_bound_context"):
_app._logger._bound_context = app_ctx
logger = _app.logger
context = app_ctx or app.context
logger = context.logger
logger.info(f"grade_story: Received input: {story}")

proofreader = Agent(
@@ -184,40 +180,24 @@ async def grade_story(story: str, app_ctx: Optional[AppContext] = None) -> str:


async def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"--custom-fastmcp-settings",
action="store_true",
help="Enable custom FastMCP settings for the server",
)
args = parser.parse_args()
use_custom_fastmcp_settings = args.custom_fastmcp_settings

async with app.run() as agent_app:
# Add the current directory to the filesystem server's args if needed
context = agent_app.context
if "filesystem" in context.config.mcp.servers:
context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])

# Log registered workflows and agent configurations
agent_app.logger.info(f"Creating MCP server for {agent_app.name}")
context.logger.info(f"Creating MCP server for {agent_app.name}")

agent_app.logger.info("Registered workflows:")
context.logger.info("Registered workflows:")
for workflow_id in agent_app.workflows:
agent_app.logger.info(f" - {workflow_id}")

# Create the MCP server that exposes both workflows and agent configurations,
# optionally using custom FastMCP settings
fast_mcp_settings = (
{"host": "localhost", "port": 8001, "debug": True, "log_level": "DEBUG"}
if use_custom_fastmcp_settings
else None
)
mcp_server = create_mcp_server_for_app(agent_app, **(fast_mcp_settings or {}))
agent_app.logger.info(f"MCP Server settings: {mcp_server.settings}")
context.logger.info(f" - {workflow_id}")

mcp_server = create_mcp_server_for_app(agent_app)
context.logger.info(f"MCP Server settings: {mcp_server.settings}")

# Run the server
await mcp_server.run_stdio_async()
await mcp_server.run_sse_async()


if __name__ == "__main__":