Fix/template updates #587
```diff
@@ -153,8 +153,8 @@ def init(
     scaffolding_templates = {
         "basic": "Simple agent with filesystem and fetch capabilities",
         "server": "MCP server with workflow and parallel agents",
-        "token": "Token counting example with monitoring",
-        "factory": "Agent factory with router-based selection",
+        # "token": "Token counting example with monitoring",
+        # "factory": "Agent factory with router-based selection",
```
Comment on lines +156 to +157:
🧩 Analysis chain

**Inconsistency: the `token` and `factory` templates are commented out, but their code paths remain.**

Lines 156-157 comment out the `token` and `factory` entries in `scaffolding_templates`, yet the `elif template == "token":` and `elif template == "factory":` branches later in `init()` are unchanged, and interactive mode still offers both templates.

If these templates are intentionally excluded (per the PR objectives), consider removing their code paths and interactive choices as well.

Scripts executed to verify template references and the validation path:

```bash
#!/bin/bash
# Check for any other references to token/factory templates that might break
rg -n "token.*template|factory.*template" --type py -C 2

# Inspect the template validation logic around line 266
sed -n '250,280p' src/mcp_agent/cli/commands/init.py

# Find where the scaffolding_templates dict is used for validation
rg -n "scaffolding_templates\[|in scaffolding_templates|get.*template" src/mcp_agent/cli/commands/init.py -A2 -B2

# Check the init() signature and early validation
rg -n "def init\(" src/mcp_agent/cli/commands/init.py -A20

# Locate the interactive template selection and how the choice is processed
rg -n "Interactive mode|if interactive|Template selection" src/mcp_agent/cli/commands/init.py -A15
rg -n 'choice\s*=' src/mcp_agent/cli/commands/init.py -A3 -B3
rg -n "template_name" src/mcp_agent/cli/commands/init.py | grep -v "^\s*#"
```

**Confirmed: interactive mode offers the token/factory templates, but selecting either will fail.**

Fix required: either remove token/factory from the interactive choices (lines 496-497) AND remove their code paths (lines 401-439), OR uncomment them in `scaffolding_templates`.
```diff
         "minimal": "Minimal configuration files only",
     }
```
```diff
@@ -360,7 +360,7 @@ def init(
         # No separate agents.yaml needed; agent definitions live in mcp_agent.config.yaml

         # Create README for the basic template
-        readme_content = _load_template("README_init.md")
+        readme_content = _load_template("README_basic.md")
         if readme_content:
             created = _write_readme(dir, readme_content, force)
             if created:
```
```diff
@@ -374,23 +374,30 @@ def init(
                 files_created.append(created)

     elif template == "server":
-        server_path = dir / "server.py"
+        server_path = dir / "main.py"
         server_content = _load_template("basic_agent_server.py")
         if server_content and _write(server_path, server_content, force):
-            files_created.append("server.py")
+            files_created.append("main.py")
             # Make executable
             try:
                 server_path.chmod(server_path.stat().st_mode | 0o111)
             except Exception:
                 pass

         # README for server template
-        readme_content = _load_template("README_init.md")
+        readme_content = _load_template("README_server.md")
         if readme_content:
             created = _write_readme(dir, readme_content, force)
             if created:
                 files_created.append(created)

+        # Add basic requirements.txt
+        requirements_content = _load_template("requirements.txt")
+        if requirements_content:
+            created = _write_requirements(dir, requirements_content, force)
+            if created:
+                files_created.append(created)
+
     elif template == "token":
         token_path = dir / "token_example.py"
         token_content = _load_template("token_counter.py")
```
```diff
@@ -402,7 +409,7 @@ def init(
             except Exception:
                 pass

-        readme_content = _load_template("README_init.md")
+        readme_content = _load_template("README_token.md")
         if readme_content:
             created = _write_readme(dir, readme_content, force)
             if created:
```
```diff
@@ -425,7 +432,7 @@ def init(
         if agents_content and _write(agents_path, agents_content, force):
             files_created.append("agents.yaml")

-        readme_content = _load_template("README_init.md")
+        readme_content = _load_template("README_factory.md")
         if readme_content:
             created = _write_readme(dir, readme_content, force)
             if created:
```
```diff
@@ -448,9 +455,9 @@ def init(
         run_file = entry_script_name or "main.py"
         console.print(f"3. Run your agent: [cyan]uv run {run_file}[/cyan]")
     elif template == "server":
-        console.print("3. Run the server: [cyan]uv run server.py[/cyan]")
+        console.print("3. Run the server: [cyan]uv run main.py[/cyan]")
         console.print(
-            " Or serve: [cyan]mcp-agent dev serve --script server.py[/cyan]"
+            " Or serve: [cyan]mcp-agent dev serve --script main.py[/cyan]"
         )
     elif template == "token":
         console.print("3. Run the example: [cyan]uv run token_example.py[/cyan]")
```
New file (`@@ -0,0 +1,143 @@`):
# MCP-Agent Server Starter

Welcome! This project was generated by `mcp-agent init`. It demonstrates how to expose your mcp-agent application as an MCP server, making your agentic workflows available to any MCP client.

## What's included

- An `MCPApp` named `basic_agent_server` (see `main.py`).
- A workflow class `BasicAgentWorkflow`:
  - Uses `Agent` to connect to `filesystem` and `fetch` MCP servers.
  - Demonstrates multi-turn conversations with an LLM (OpenAI).
  - Shows how to configure model preferences for specific requests.
- A tool function decorated with `@app.tool`:
  - `grade_story(story: str, app_ctx?)` - Grades a student's short story using parallel agents (proofreader, fact checker, style enforcer) via `ParallelLLM`.
  - Returns the final result directly to the caller (no polling needed).
- Server logs are forwarded to connected MCP clients as notifications.
## What gets exposed as MCP tools

When you run `main.py`, your MCP server exposes:

- `workflows-list` - Lists available workflows and their parameter schemas
- `workflows-BasicAgentWorkflow-run` - Executes the BasicAgentWorkflow with input
- `workflows-get_status` - Gets status for a running workflow by `run_id`
- `workflows-cancel` - Cancels a running workflow
- `grade_story` - Synchronous tool that grades a short story and returns the final result
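Under the hood these are ordinary MCP tools, so any client can invoke them with a standard `tools/call` request. As a hedged sketch — the exact argument schema for each workflow is reported by `workflows-list`, and the `input` field here is an assumption — a run request might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "workflows-BasicAgentWorkflow-run",
    "arguments": {
      "input": "Fetch and summarize https://modelcontextprotocol.io"
    }
  }
}
```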
## Quick start

1. Add your OpenAI API key to `mcp_agent.secrets.yaml` (or set the `OPENAI_API_KEY` env var).

   NOTE: You can use another supported provider (e.g. Anthropic) instead; just be sure to set its API key in `mcp_agent.secrets.yaml` (or set its env var) and import/use the relevant `AugmentedLLM` in `main.py`.

2. Review `mcp_agent.config.yaml`:

   - Execution engine: `asyncio`
   - Logger settings
   - MCP servers: `filesystem`, `fetch`
3. Install dependencies and run the server:

   ```bash
   uv pip install -r requirements.txt
   uv run main.py
   ```

   The server will start and expose its tools over SSE. You'll see:

   ```bash
   Creating MCP server for basic_agent_server
   Registered workflows:
     - BasicAgentWorkflow
   MCP Server settings: ...
   ```
4. Connect with an MCP client:

   You can connect to this server using any MCP client. For example, use [MCP Inspector](https://github.com/modelcontextprotocol/inspector) to explore and test:

   ```bash
   npx @modelcontextprotocol/inspector --transport sse --server-url http://127.0.0.1:8000/sse
   ```

   This will launch the inspector UI, where you can:

   - See all available tools (`grade_story`, `workflows-BasicAgentWorkflow-run`, etc.)
   - Test workflow execution
   - View request/response details
5. Deploy as a remote MCP server:

   When you're ready to deploy, ensure the required API keys are set in `mcp_agent.secrets.yaml` and then run:

   ```bash
   uv run mcp-agent login
   ```

   to authenticate to mcp-agent cloud. You will be redirected to the login page, create an mcp-agent cloud account through Google or Github.
Review comment on the line above:

Fix capitalization: "Github" → "GitHub". The official brand name uses a capital "H". Apply this diff:

```diff
-to authenticate to mcp-agent cloud. You will be redirected to the login page, create an mcp-agent cloud account through Google or Github.
+to authenticate to mcp-agent cloud. You will be redirected to the login page, create an mcp-agent cloud account through Google or GitHub.
```
Set up your mcp-agent cloud API key and copy & paste it into your terminal:

```bash
INFO: Directing to MCP Agent Cloud API login...
Please enter your API key 🔑:
```
|
||||||
| In your terminal, deploy the MCP app: | ||||||
|
|
||||||
| ```bash | ||||||
| uv run mcp-agent deploy basic_agent_server | ||||||
| ``` | ||||||
|
|
||||||
| You will then be prompted to specify the type of secret to save your OpenAI API key as. Select (1) deployment secret so that it is available to the deployed server. | ||||||
|
|
||||||
| The `deploy` command will bundle the app files and deploy them, wrapping your app as a hosted MCP SSE server with a URL of the form: | ||||||
| `https://<server_id>.deployments.mcp-agent.com`. | ||||||
|
|
||||||
| Anything decorated with `@app.tool` (or `@app.async_tool`) runs as a Temporal workflow in the cloud. | ||||||
|
|
||||||
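For orientation, here is a minimal sketch of how such a decorated tool is declared. The names follow this README (`basic_agent_server`, `grade_story`), but treat the import path and decorator details as assumptions rather than a definitive implementation:

```python
# Illustrative sketch only; assumes mcp-agent is installed and that the
# import path matches your generated main.py.
from mcp_agent.app import MCPApp

app = MCPApp(name="basic_agent_server")

@app.tool
def grade_story(story: str) -> str:
    """Grade a student's short story and return aggregated feedback."""
    # In the generated template, this fans out to proofreader, fact checker,
    # and style enforcer agents via ParallelLLM and returns the combined result.
    ...
```

Because the decorated function is registered on the app, it appears to clients as the `grade_story` tool and, once deployed, runs as a Temporal workflow.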
Since the mcp-agent app is exposed as an MCP server, it can be used in any MCP client just like any other MCP server. For example, you can inspect and test the server using MCP Inspector:

```bash
npx @modelcontextprotocol/inspector --transport sse --server-url https://<server_id>.deployments.mcp-agent.com/sse
```
## Notes

- `app_ctx` is the MCPApp Context (configuration, logger, upstream session, etc.).
- Logging uses `app.logger` and is forwarded as notifications when connected to an MCP client.
- Configuration is read from `mcp_agent.config.yaml` and `mcp_agent.secrets.yaml` (env vars supported).
- The default model is configurable (see `openai.default_model` in config).
- The server runs in `asyncio` mode and exposes tools via SSE by default.
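To make the last three notes concrete, here is a hypothetical excerpt of `mcp_agent.config.yaml`. The key names follow what this README mentions (execution engine, logger, MCP servers, `openai.default_model`), but verify them against the file actually generated for you:

```yaml
# Illustrative sketch only - check the generated mcp_agent.config.yaml for the
# authoritative keys. API keys belong in mcp_agent.secrets.yaml, not here.
execution_engine: asyncio

logger:
  transports: [console]
  level: info

mcp:
  servers:
    fetch:
      command: uvx
      args: ["mcp-server-fetch"]
    filesystem:
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

openai:
  default_model: gpt-4o-mini
```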
## Key concepts demonstrated

- **Creating workflows**: Use the `@app.workflow` decorator and `Workflow` base class to define reusable workflows.
- **Defining tools**: Use `@app.tool` for synchronous tools that return results immediately.
- **Using agents**: Create `Agent` instances with specific instructions and server access (filesystem, fetch, etc.).
- **Parallel execution**: Use `ParallelLLM` to run multiple agents in parallel and aggregate their results.
- **Multi-turn conversations**: LLMs maintain conversation context across multiple `generate_str()` calls.
- **Model preferences**: Configure model selection via `RequestParams` and `ModelPreferences`.
- **Server creation**: Use `create_mcp_server_for_app()` to wrap your MCPApp as an MCP server.
## Next steps

- Modify the `BasicAgentWorkflow` instructions or server list to fit your use case.
- Add more tools with `@app.tool` or `@app.async_tool` as you grow the app.
- Explore the `grade_story` tool to understand parallel agent execution.
- Customize the agents used by `ParallelLLM` (proofreader, fact checker, style enforcer).
- Read the docs and explore examples:
  - GitHub: https://github.com/lastmile-ai/mcp-agent
  - Docs: https://docs.mcp-agent.com/
  - Discord: https://lmai.link/discord/mcp-agent
## Further reading

- Configuration reference and secrets management.
- MCP servers (stdio, SSE, streamable_http, websockets) and timeouts.
- Temporal workflows, activities, and logging/notifications when deployed.
- Agents and LLMs: `AgentSpec`, prompts, and model defaults.
- Using `@app.async_tool` for long-running workflows (returns workflow_id/run_id for polling).

Happy building!
Just removing these for now; generally not sure about their usefulness as templates (vs. just examples in the repo), but they were also copying an unrelated README, so they would at least need an updated README on top of validating that they work (e.g. we know token counting doesn't work for cloud workflow runs atm).