feat: improve optimizer evaluator workflow for cloud #425
Conversation
Walkthrough

The evaluator-optimizer example was converted into an async tool with parameterized inputs, model references were updated to gpt-4.1, the result is now returned (and logged), the MCP agent config adds MCP servers and updates default_model, requirements comment out the local mcp-agent install, and the README/secrets were expanded for cloud deployment.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant AgentApp as App
    participant Tool as cover_letter_writer_tool
    participant Evaluator as evaluator_optimizer
    participant Logger
    Client->>AgentApp: Invoke tool(job_posting, candidate_details, company_information)
    AgentApp->>Tool: async call
    Tool->>Evaluator: generate_str(model="gpt-4.1", messages)
    Evaluator-->>Tool: result (cover letter)
    Tool->>Logger: log "Generated cover letter: {result}"
    Tool-->>AgentApp: return result
    AgentApp-->>Client: deliver result
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
Tip: 👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.
Please see the documentation for more information. Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
```
Force-pushed from dad6c11 to 748961d (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (4)
examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml (1)
Line 25: Confirm model id and avoid double-setting it.
- Verify that "gpt-5" is a valid and enabled model for your OpenAI account/region; otherwise runs will fail at runtime.
- Since you also pass model="gpt-5" at the call site, prefer a single source of truth (config OR request_params), not both.
Would you like me to strip the per-call override in main.py to rely solely on this config?
examples/workflows/workflow_evaluator_optimizer/requirements.txt (1)
Line 2: Clarify local-dev install and consider reproducibility.
- The commented local path is fine; consider moving this to a dev extra (pyproject) or a "requirements.dev.txt" to avoid accidental commits flipping it.
- Pin or constrain anthropic/openai to avoid surprise breaks in examples run in CI or cloud.
Apply this small wording tweak for clarity:

```diff
-# mcp-agent @ file://../../../ # Link to the local mcp-agent project root, to run locally remove comment of this line
+# To run locally against the monorepo, uncomment the next line:
+# mcp-agent @ file://../../../
```

examples/workflows/workflow_evaluator_optimizer/main.py (2)
Lines 27-33: Make inputs required (or lighter) for cleaner tool schemas.

The long default texts become part of the tool schema and may clutter UIs. Prefer required params (no defaults) or short illustrative examples in the description.
Example:

```diff
-async def example_usage(
-    job_posting: str = "Software Engineer at LastMile AI. Responsibilities include developing AI systems, "
-    "collaborating with cross-functional teams, and enhancing scalability. Skills required: "
-    "Python, distributed systems, and machine learning.",
-    candidate_details: str = "Alex Johnson, 3 years in machine learning, contributor to open-source AI projects, "
-    "proficient in Python and TensorFlow. Motivated by building scalable AI systems to solve real-world problems.",
-    company_information: str = "Look up from the LastMile AI About page: https://lastmileai.dev/about"
-):
+async def example_usage(
+    job_posting: str,
+    candidate_details: str,
+    company_information: str,
+):
```
Lines 78-79: Don’t override the model if config sets it; also verify gpt-5 availability.

Let the config pick the default model to keep flexibility across environments.

```diff
-        request_params=RequestParams(model="gpt-5"),
+        request_params=RequestParams(),
```

If you need a per-call override later, wire it as a function parameter instead of hardcoding.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- examples/workflows/workflow_evaluator_optimizer/main.py (2 hunks)
- examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml (1 hunk)
- examples/workflows/workflow_evaluator_optimizer/requirements.txt (1 hunk)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-05T14:31:48.115Z
Learnt from: rholinshead
PR: lastmile-ai/mcp-agent#414
File: src/mcp_agent/logging/logger.py:18-19
Timestamp: 2025-09-05T14:31:48.115Z
Learning: In the mcp-agent logging module (src/mcp_agent/logging/logger.py), temporalio should be imported lazily with try/except ImportError to avoid making it a hard dependency. Use temporalio.workflow.in_workflow() instead of isinstance checks on internal classes like _WorkflowInstanceImpl.
Applied to files:
examples/workflows/workflow_evaluator_optimizer/requirements.txt
🧬 Code graph analysis (1)
examples/workflows/workflow_evaluator_optimizer/main.py (3)
- src/mcp_agent/app.py (2): async_tool (767-815), logger (191-208)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): RequestParams (124-174)
- src/mcp_agent/logging/logger.py (1): info (271-279)
🔇 Additional comments (1)
examples/workflows/workflow_evaluator_optimizer/main.py (1)
Lines 20-26: Good move registering this as an async tool; double-check server lifecycle.

Nice upgrade to a tool. One caveat: this function starts the app via app.run() inside the tool body. If the tool is invoked while the MCP server is already running, re-entering run() can create nested event-loop/server lifecycle issues. Please confirm the library supports this pattern; if not, split execution paths (the tool uses the existing context; main bootstraps the app).
I can propose a small refactor that extracts the core workflow into a helper using an injected context and keeps app.run() only under main. Want that?
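The re-entrancy hazard and the proposed split can be sketched generically. Everything below (the toy App class and function names) is a hypothetical stand-in, not mcp-agent's actual API; it only illustrates the pattern where the tool reuses a live context and only main bootstraps the app:

```python
import asyncio


class App:
    """Toy stand-in for an MCPApp-like object (illustrative, not the real API)."""

    def __init__(self) -> None:
        self.running = False

    async def __aenter__(self) -> "App":
        if self.running:
            # Re-entering run() while the server is live is the failure mode
            # the review warns about.
            raise RuntimeError("app.run() re-entered while already running")
        self.running = True
        return self

    async def __aexit__(self, *exc) -> None:
        self.running = False


async def generate_cover_letter(app: App, job_posting: str) -> str:
    """Core workflow: operates on an already-initialized app/context."""
    assert app.running, "expected a live app context"
    return f"cover letter for: {job_posting}"


async def cover_letter_tool(app: App, job_posting: str) -> str:
    """Tool entry point: reuses the live app instead of calling app.run() again."""
    return await generate_cover_letter(app, job_posting)


async def main() -> None:
    # Bootstrapping happens only here, exactly once.
    async with App() as app:
        print(await cover_letter_tool(app, "Software Engineer at LastMile AI"))


if __name__ == "__main__":
    asyncio.run(main())
```

With this shape, multiple tool invocations share the one live context instead of each trying to start the server.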
```python
        logger.info(f"Generated cover letter: {result}")
        return result
```
Avoid logging full generated content (PII/large payloads).
The result may contain PII and can be very large. Log a preview and metadata instead.
- logger.info(f"Generated cover letter: {result}")
+ logger.info(
+ "Generated cover letter",
+ preview=result[:200] + ("..." if len(result) > 200 else ""),
+ length=len(result),
+ )
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/workflows/workflow_evaluator_optimizer/main.py around lines 81 to
82, the code logs the full generated cover letter which may contain PII or be
very large; change the logging to avoid printing full content by logging a
truncated preview (e.g., first N characters or first line) and relevant metadata
(length, generation status, timestamp, model id) instead; keep returning the
full result but ensure logger.info only emits the safe preview and metadata.
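A minimal sketch of the truncation helper the prompt describes; the function name and the 200-character limit are illustrative choices, not part of the codebase:

```python
def preview(text: str, limit: int = 200) -> str:
    """Return at most `limit` characters of text, appending an ellipsis if truncated."""
    if len(text) <= limit:
        return text
    return text[:limit] + "..."


# At the logging call site, log the safe preview plus metadata, not the full result:
# logger.info("Generated cover letter", preview=preview(result), length=len(result))
```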
Looks great, ship it!
Force-pushed from 748961d to 8b5c59b (Compare)
- rename file to main.py
- remove description from FastMCP
Force-pushed from 8b5c59b to 7471fe2 (Compare)
```diff
-# Secrets (API keys, etc.) are stored in an mcp_agent.secrets.yaml file which can be gitignored
-default_model: gpt-4o
+# API keys are stored in mcp_agent.secrets.yaml (gitignored for security)
+default_model: gpt-4.1 # Default model for OpenAI API calls
```
Invalid model name 'gpt-4.1' - this model does not exist in OpenAI's API. This will cause all OpenAI API calls using the default model to fail with a model not found error. Should be changed to a valid OpenAI model name like 'gpt-4o' or 'gpt-4-turbo'.
```diff
-default_model: gpt-4.1 # Default model for OpenAI API calls
+default_model: gpt-4o # Default model for OpenAI API calls
```
Spotted by Diamond
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
examples/workflows/workflow_evaluator_optimizer/main.py (2)
Lines 69-75: Hard‑coded OpenAI factory prevents Anthropic‑only runs.

Make llm_factory/provider selectable via config/secrets; otherwise limit docs to OpenAI.
I can add provider auto‑detection (OpenAI if OPENAI key present, else Anthropic) with a clear error if neither is configured.
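The offered auto-detection can be sketched as follows, assuming the standard OPENAI_API_KEY / ANTHROPIC_API_KEY environment variables; the helper returns a provider label that the caller would map to mcp-agent's OpenAIAugmentedLLM or AnthropicAugmentedLLM factory, and is itself an illustration rather than library code:

```python
import os


def pick_llm_provider(env=None) -> str:
    """Pick a provider label based on which API key is configured.

    OpenAI wins if both keys are present; raises a clear error if neither is.
    """
    env = os.environ if env is None else env
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    raise RuntimeError(
        "No LLM provider configured: set OPENAI_API_KEY or ANTHROPIC_API_KEY"
    )
```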
Line 38: Do not log full config — redact secrets.

File: examples/workflows/workflow_evaluator_optimizer/main.py:38 logs context.config.model_dump(), which can expose api_key/token/secret fields (see src/mcp_agent/config.py and secrets templates). Replace with a redacted or whitelist dump, e.g. context.config.model_dump(exclude={'api_key','token','secret'}) or log an explicit dict of safe fields only.
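A generic recursive redaction helper of the kind suggested here; the key list is an illustrative assumption, and it returns a masked copy without mutating the input:

```python
SENSITIVE_KEYS = {"api_key", "token", "secret", "password"}


def redact(value):
    """Return a copy of a nested config structure with sensitive values masked."""
    if isinstance(value, dict):
        return {
            k: "***" if k in SENSITIVE_KEYS else redact(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(v) for v in value]
    return value


# At the call site (names as in the example):
# logger.info("Config loaded", config=redact(context.config.model_dump()))
```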
♻️ Duplicate comments (1)
examples/workflows/workflow_evaluator_optimizer/main.py (1)
Lines 81-82: Avoid logging full generated content (PII/large payloads).

```diff
-        logger.info(f"Generated cover letter: {result}")
+        preview = result[:400]
+        logger.info(
+            "Generated cover letter",
+            preview=preview + ("..." if len(result) > len(preview) else ""),
+            length=len(result),
+        )
```
🧹 Nitpick comments (9)
examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example (2)
Lines 9-11: Prefer commented placeholders to avoid accidental check‑ins.

Comment out the example api_key line and highlight !developer_secret as the primary path.

```diff
-api_key: your-openai-api-key
-# For cloud deployment, use developer secrets:
-# api_key: !developer_secret OPENAI_API_KEY
+# api_key: your-openai-api-key
+# For cloud deployment, prefer developer secrets:
+# api_key: !developer_secret OPENAI_API_KEY
```
Lines 16-18: Mirror the OpenAI section change for consistency.

```diff
-api_key: your-anthropic-api-key
-# For cloud deployment, use developer secrets:
-# api_key: !developer_secret ANTHROPIC_API_KEY
+# api_key: your-anthropic-api-key
+# For cloud deployment, prefer developer secrets:
+# api_key: !developer_secret ANTHROPIC_API_KEY
```

examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml (2)
Lines 8-15: Default log level “debug” is noisy for cloud. Suggest “info” + env override.

```diff
-  level: debug # Logging level (debug, info, warning, error)
+  level: info # Default to info in examples; override via env if needed
```

Optional: support LOG_LEVEL env in your logger bootstrap.
Lines 32-34: Model default aligned with code; consider removing the explicit override in code.

Since default_model is gpt-4.1, you can omit RequestParams(model="gpt-4.1") to rely on config.
examples/workflows/workflow_evaluator_optimizer/README.md (3)
Line 69: Repeat of the provider mismatch.

The “only one API key” guidance implies either provider. This will confuse Anthropic‑only users unless the code supports it.
Lines 105-113: Secrets snippet uses the config schema header.

Use a secrets schema (if available) or remove $schema from the secrets file example to prevent invalid schema validation.

```diff
-$schema: ../../../schema/mcp-agent.config.schema.json
-
 openai:
   api_key: !developer_secret OPENAI_API_KEY
 @@
 anthropic:
   api_key: !developer_secret ANTHROPIC_API_KEY
```
Lines 204-214: Sample output looks like a Python repr embedded in JSON.

Consider showing realistic JSON (not a Python‑style dict string) or clearly marking it as a string preview.

```diff
-  "result": "{'kind': 'workflow_result', 'value': '[Your optimized cover letter]'}",
+  "result": "[Your optimized cover letter preview…]",
+  "meta": { "kind": "workflow_result" }
```

examples/workflows/workflow_evaluator_optimizer/main.py (2)
Lines 76-79: Redundant explicit model override.

Config already sets default_model to gpt‑4.1. Consider relying on config unless you need to override per‑workflow.

```diff
-    result = await evaluator_optimizer.generate_str(
-        message=f"Write a cover letter for the following job posting: {job_posting}\n\nCandidate Details: {candidate_details}\n\nCompany information: {company_information}",
-        request_params=RequestParams(model="gpt-4.1"),
-    )
+    result = await evaluator_optimizer.generate_str(
+        message=f"Write a cover letter for the following job posting: {job_posting}\n\nCandidate Details: {candidate_details}\n\nCompany information: {company_information}",
+        request_params=RequestParams(),
+    )
```
Lines 27-33: Enrich tool schema with parameter descriptions.

Use typing.Annotated to surface per‑param docs in the tool’s JSON schema.

```python
from typing import Annotated

async def example_usage(
    job_posting: Annotated[str, "Job description and requirements"] = "...",
    candidate_details: Annotated[str, "Candidate background and qualifications"] = "...",
    company_information: Annotated[str, "Company details or a URL to fetch"] = "...",
):
    ...
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- examples/workflows/workflow_evaluator_optimizer/README.md (3 hunks)
- examples/workflows/workflow_evaluator_optimizer/main.py (2 hunks)
- examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml (1 hunk)
- examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example (1 hunk)
- examples/workflows/workflow_evaluator_optimizer/requirements.txt (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (1)
- examples/workflows/workflow_evaluator_optimizer/requirements.txt
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-07-22T18:59:49.368Z
Learnt from: CR
PR: lastmile-ai/mcp-agent#0
File: examples/usecases/reliable_conversation/CLAUDE.md:0-0
Timestamp: 2025-07-22T18:59:49.368Z
Learning: Applies to examples/usecases/reliable_conversation/examples/reliable_conversation/src/utils/config.py : Configuration values such as quality_threshold, max_refinement_attempts, consolidation_interval, and evaluator_model_provider must be loaded from mcp_agent.config.yaml.
Applied to files:
examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml
🧬 Code graph analysis (1)
examples/workflows/workflow_evaluator_optimizer/main.py (2)
- src/mcp_agent/app.py (2): async_tool (769-817), logger (193-210)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): RequestParams (124-174)
🔇 Additional comments (2)
examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example (1)
Line 1: Schema reference incorrect for a secrets file — remove or point to a secrets schema

Repo search found only schema/mcp-agent.config.schema.json (no mcp-agent.secrets.schema.json). examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example (line 1) currently points to ../../../schema/mcp-agent.config.schema.json — drop the $schema to avoid false validation errors, or add and reference a dedicated mcp-agent.secrets.schema.json if secrets should be validated.
examples/workflows/workflow_evaluator_optimizer/README.md (1)
Lines 7-14:

```shell
#!/bin/bash
set -euo pipefail

echo "Searching repo for provider mentions (OpenAI / Anthropic / gpt-4.1 / gpt-4o)..."
rg -n --hidden -S -C3 'Anthropic|anthropic|OpenAI|openai|gpt-4.1|gpt-4o' || true
echo
echo "Showing README if present at examples/workflows/workflow_evaluator_optimizer/README.md..."
if [ -f "examples/workflows/workflow_evaluator_optimizer/README.md" ]; then
  echo "=== README: examples/workflows/workflow_evaluator_optimizer/README.md ==="
  sed -n '1,240p' examples/workflows/workflow_evaluator_optimizer/README.md || true
else
  echo "README not found at that path."
fi
echo
echo "Searching for Python files mentioning providers under examples/workflows..."
rg -n --hidden -S --glob 'examples/workflows/**' 'Anthropic|anthropic|OpenAI|openai' -C3 || true
echo
echo "Searching for files named main.py..."
rg -n --hidden -S '\bmain\.py\b' || true
echo
echo "Show up to first 5 Python files in examples/workflows that mention OpenAI/Anthropic (if any)..."
FILES=$(rg -l --hidden -S --glob 'examples/workflows/**' --type py 'OpenAI|openai|Anthropic|anthropic' | head -n 5 || true)
for f in $FILES; do
  echo
  echo "=== File: $f ==="
  sed -n '1,240p' "$f" || true
done
```
```yaml
# Fetch server: Enables web content fetching capabilities
fetch:
  command: "uvx"
  args: ["mcp-server-fetch"]

# Filesystem server: Provides file system access capabilities
filesystem:
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-filesystem"]
```
💡 Verification agent
🧩 Analysis chain
Pin MCP server versions and ensure runtime availability of uvx/npx.
Unpinned npx/uvx pulls “latest” at runtime—supply‑chain and reproducibility risk. Pin versions and document that uvx/npx are present in the cloud image.
- fetch:
- command: "uvx"
- args: ["mcp-server-fetch"]
+ fetch:
+ command: "uvx"
+ args: ["--from", "mcp-server-fetch==X.Y.Z", "mcp-server-fetch"]
@@
- filesystem:
- command: "npx"
- args: ["-y", "@modelcontextprotocol/server-filesystem"]
+ filesystem:
+ command: "npx"
+ args: ["-y", "@modelcontextprotocol/[email protected]"]
Run-time check script to confirm availability:
🏁 Script executed:
#!/bin/bash
command -v uvx && uvx --version
command -v npx && npx --version
Length of output: 25
Pin MCP server versions and ensure uvx is present in the runtime image
uvx not found in PATH; npx present at /usr/local/bin/npx (11.4.2).
- Pin mcp-server-fetch and @modelcontextprotocol/server-filesystem to explicit versions in examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml (lines 20–28).
- Install uvx into the cloud image (or replace the fetch call with a pinned npx-based invocation); do not rely on unpinned runtime pulls.
- fetch:
- command: "uvx"
- args: ["mcp-server-fetch"]
+ fetch:
+ command: "uvx"
+ args: ["--from", "mcp-server-fetch==X.Y.Z", "mcp-server-fetch"]
@@
- filesystem:
- command: "npx"
- args: ["-y", "@modelcontextprotocol/server-filesystem"]
+ filesystem:
+ command: "npx"
+ args: ["-y", "@modelcontextprotocol/[email protected]"]
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml around
lines 20 to 28, the agents reference unpinned commands and rely on an
unavailable uvx binary; update the fetch and filesystem entries to use explicit
package versions and a runtime-available command: change fetch to a pinned
invocation (either install uvx into the runtime image and keep command "uvx" or
replace the fetch command with a pinned npx call that runs
mcp-server-fetch@<version>), and pin the filesystem package to
@modelcontextprotocol/server-filesystem@<version> (use exact semver) so both
services use fixed versions and do not depend on unpinned runtime pulls or
missing binaries.
```yaml
# NOTE: You only need to configure ONE of the following API keys (OpenAI OR Anthropic)
# Choose based on your preferred LLM provider
```
💡 Verification agent
🧩 Analysis chain
Docs say “OpenAI OR Anthropic,” but the workflow is hard‑wired to OpenAI.
main.py uses OpenAIAugmentedLLM. If users only provide Anthropic keys (per this note), the run will fail. Either implement provider selection or tighten docs to “OpenAI only” for this example.
Would you like a follow‑up patch to auto‑select OpenAI vs Anthropic based on configured secrets?
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Show the secrets example file ==="
if [ -f examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example ]; then
sed -n '1,200p' examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
else
echo "mcp_agent.secrets.yaml.example not found"
fi
echo
echo "=== Search repo for OpenAIAugmentedLLM ==="
rg -n --hidden --no-ignore-vcs -S "OpenAIAugmentedLLM" || true
echo
echo "=== Search repo for AugmentedLLM (generic) ==="
rg -n --hidden --no-ignore-vcs -S "AugmentedLLM" || true
echo
echo "=== Case-insensitive search for Anthropic ==="
rg -n --hidden --no-ignore-vcs -S -i "Anthropic" || true
echo
echo "=== Show main.py in workflow folder if present ==="
if [ -f examples/workflows/workflow_evaluator_optimizer/main.py ]; then
sed -n '1,300p' examples/workflows/workflow_evaluator_optimizer/main.py
else
echo "main.py not found in examples/workflows/workflow_evaluator_optimizer/"
fi
echo
echo "=== Search files in that workflow folder for OpenAI / Anthropic mentions ==="
rg -n --hidden --no-ignore-vcs -S -g 'examples/workflows/workflow_evaluator_optimizer/**' -i "OpenAI|openai|Anthropic|anthropic" || true
Length of output: 264090
Fix docs/examples mismatch: example hard‑codes OpenAI while secrets/README say "OpenAI OR Anthropic"
examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example (lines 3–16) and README.md state either provider is fine, but examples/workflows/workflow_evaluator_optimizer/main.py imports OpenAIAugmentedLLM (line 6) and sets llm_factory=OpenAIAugmentedLLM (line 72) — the example will fail if only Anthropic keys are configured. Either add runtime provider selection (use AnthropicAugmentedLLM when config.anthropic is present or resolve via the workflow factory) or change the example/docs to explicitly say "OpenAI only".
🤖 Prompt for AI Agents
In
examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
(lines 3–16) and examples/workflows/workflow_evaluator_optimizer/main.py
(imports at line ~6 and llm_factory set at line ~72), the example hard-codes
OpenAIAugmentedLLM while the secrets README suggests either OpenAI or Anthropic;
update the example so it won't fail when only Anthropic keys are provided by
adding runtime provider selection: detect presence of config.anthropic (or
equivalent env/secret) and set llm_factory = AnthropicAugmentedLLM when present,
otherwise use OpenAIAugmentedLLM; alternatively, if you prefer a simpler change,
modify the example README and mcp_agent.secrets.yaml.example to explicitly state
“OpenAI only” so the current main.py remains correct.
#### `c.` Deploy your agent with a single command

```bash
uv run mcp-agent deploy cover-letter-writer
```
💡 Verification agent
🧩 Analysis chain
Verify deploy slug matches the app/tool naming.
Command uses “cover-letter-writer”, app name is “cover_letter_writer”, tool is “cover_letter_writer_tool”. Confirm the expected slug for deploy.
🏁 Script executed:
#!/bin/bash
# Inspect declared tools/workflows to confirm the deploy name expected by the CLI
rg -nP '(MCPApp\(name=|@app\.async_tool\()' -C2 examples/workflows/workflow_evaluator_optimizer
Length of output: 866
Use snake_case deploy slug "cover_letter_writer"
main.py declares MCPApp(name="cover_letter_writer") and @app.async_tool(name="cover_letter_writer_tool"); README uses "cover-letter-writer". Update the deploy command in examples/workflows/workflow_evaluator_optimizer/README.md (lines 115–119) to use cover_letter_writer to match the code (see examples/workflows/workflow_evaluator_optimizer/main.py lines 18 and 20).
🤖 Prompt for AI Agents
In examples/workflows/workflow_evaluator_optimizer/README.md around lines 115 to
119, the deploy command uses the kebab-case slug "cover-letter-writer" but the
code declares MCPApp(name="cover_letter_writer") and async_tool name
"cover_letter_writer_tool"; update the README deploy command to use the
snake_case slug cover_letter_writer so it matches main.py (lines ~18 and ~20) —
replace "uv run mcp-agent deploy cover-letter-writer" with "uv run mcp-agent
deploy cover_letter_writer".