148 changes: 145 additions & 3 deletions examples/workflows/workflow_evaluator_optimizer/README.md
@@ -1,6 +1,16 @@
-# Evaluator-Optimizer Workflow example
+# Evaluator-Optimizer Workflow Example

-This example is a job cover letter refinement system, which generates a draft based on job description, company information, and candidate details. Then, the evaluator reviews the letter, provides a quality rating, and offers actionable feedback. The cycle continues until the letter meets a predefined quality standard.
+This example demonstrates a job cover letter refinement system built on the evaluator-optimizer pattern. The system generates a draft cover letter from the job description, company information, and candidate details. An evaluator agent then reviews the letter, provides a quality rating, and offers actionable feedback. This iterative cycle continues until the letter meets a predefined quality standard of "excellent".

## What's New in This Branch

- **Tool-based Architecture**: The workflow is now exposed as an MCP tool (`cover_letter_writer_tool`) that can be deployed and accessed remotely
- **Input Parameters**: The tool accepts three parameters:
- `job_posting`: The job description and requirements
- `candidate_details`: The candidate's background and qualifications
- `company_information`: Company details (can be a URL for the agent to fetch)
- **Model Update**: Default model updated from `gpt-4o` to `gpt-4.1` for enhanced performance
- **Cloud Deployment Ready**: Full support for deployment to MCP Agent Cloud

To make things interesting, we specify the company information as a URL, expecting the agent to fetch it using the MCP 'fetch' server, and then using that information to generate the cover letter.
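
For intuition, the core loop looks roughly like the sketch below — a minimal, self-contained illustration in plain Python, not the actual mcp-agent API (the real workflow uses `EvaluatorOptimizerLLM`, shown in `main.py` in this PR):

```python
from typing import Callable, Optional, Tuple

def refine_until_excellent(
    generate: Callable[[Optional[str]], str],    # drafts a letter, optionally guided by feedback
    evaluate: Callable[[str], Tuple[str, str]],  # returns (rating, actionable feedback)
    max_rounds: int = 5,
) -> str:
    """Iteratively regenerate a draft until the evaluator rates it EXCELLENT."""
    draft = generate(None)
    for _ in range(max_rounds):
        rating, feedback = evaluate(draft)
        if rating == "EXCELLENT":  # mirrors min_rating=QualityRating.EXCELLENT in main.py
            break
        draft = generate(feedback)
    return draft
```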

@@ -56,7 +66,7 @@ Copy and configure your secrets and env variables:
```bash
cp mcp_agent.secrets.yaml.example mcp_agent.secrets.yaml
```

-Then open `mcp_agent.secrets.yaml` and add your api key for your preferred LLM.
+Then open `mcp_agent.secrets.yaml` and add your API key for your preferred LLM provider. **Note: You only need to configure ONE API key** - either OpenAI or Anthropic, depending on which provider you want to use.

## (Optional) Configure tracing

@@ -70,3 +80,135 @@ Run your MCP Agent app:
```bash
uv run main.py
```

## `4` [Beta] Deploy to the Cloud

Deploy your cover letter writer agent to MCP Agent Cloud for remote access and integration.

### Prerequisites

- MCP Agent Cloud account
- API keys configured in `mcp_agent.secrets.yaml`

### Deployment Steps

#### `a.` Log in to [MCP Agent Cloud](https://docs.mcp-agent.com/cloud/overview)

```bash
uv run mcp-agent login
```

#### `b.` Update your `mcp_agent.secrets.yaml` to mark your developer secrets

Configure your secrets file to mark sensitive keys as developer secrets for secure cloud deployment:

```yaml
$schema: ../../../schema/mcp-agent.config.schema.json

openai:
  api_key: !developer_secret OPENAI_API_KEY

anthropic:
  api_key: !developer_secret ANTHROPIC_API_KEY
```

#### `c.` Deploy your agent with a single command

```bash
uv run mcp-agent deploy cover-letter-writer
```
Comment on lines +115 to +119

💡 Verification agent

🧩 Analysis chain

Verify deploy slug matches the app/tool naming.

Command uses “cover-letter-writer”, app name is “cover_letter_writer”, tool is “cover_letter_writer_tool”. Confirm the expected slug for deploy.


🏁 Script executed:

```bash
#!/bin/bash
# Inspect declared tools/workflows to confirm the deploy name expected by the CLI
rg -nP '(MCPApp\(name=|@app\.async_tool\()' -C2 examples/workflows/workflow_evaluator_optimizer
```

Length of output: 866


Use snake_case deploy slug "cover_letter_writer"

main.py declares MCPApp(name="cover_letter_writer") and @app.async_tool(name="cover_letter_writer_tool"); README uses "cover-letter-writer". Update the deploy command in examples/workflows/workflow_evaluator_optimizer/README.md (lines 115–119) to use cover_letter_writer to match the code (see examples/workflows/workflow_evaluator_optimizer/main.py lines 18 and 20).

🤖 Prompt for AI Agents
In examples/workflows/workflow_evaluator_optimizer/README.md around lines 115 to
119, the deploy command uses the kebab-case slug "cover-letter-writer" but the
code declares MCPApp(name="cover_letter_writer") and async_tool name
"cover_letter_writer_tool"; update the README deploy command to use the
snake_case slug cover_letter_writer so it matches main.py (lines ~18 and ~20) —
replace "uv run mcp-agent deploy cover-letter-writer" with "uv run mcp-agent
deploy cover_letter_writer".


#### `d.` Connect to your deployed agent as an MCP server

Once deployed, you can connect to your agent through various MCP clients:

##### Claude Desktop Integration

Configure Claude Desktop to access your agent by updating `~/.claude-desktop/config.json`:

```json
{
  "cover-letter-writer": {
    "command": "/path/to/npx",
    "args": [
      "mcp-remote",
      "https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse",
      "--header",
      "Authorization: Bearer ${BEARER_TOKEN}"
    ],
    "env": {
      "BEARER_TOKEN": "your-mcp-agent-cloud-api-token"
    }
  }
}
```

##### MCP Inspector

Use MCP Inspector to explore and test your agent:

```bash
npx @modelcontextprotocol/inspector
```

Configure the following settings in MCP Inspector:

| Setting | Value |
|---|---|
| **Transport Type** | SSE |
| **SSE URL** | `https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse` |
| **Header Name** | Authorization |
| **Bearer Token** | your-mcp-agent-cloud-api-token |

> [!TIP]
> Increase the request timeout in the Configuration settings since LLM calls may take longer than simple API calls.

##### Available Tools

Once connected to your deployed agent, you'll have access to:

**MCP Agent Cloud Default Tools:**
- `workflow-list`: List available workflows
- `workflow-run-list`: List execution runs of your agent
- `workflow-run`: Create a new workflow run
- `workflows-get_status`: Check agent run status
- `workflows-resume`: Resume a paused run
- `workflows-cancel`: Cancel a running workflow

**Your Agent's Tool:**
- `cover_letter_writer_tool`: Generate optimized cover letters with the following parameters (a client sketch follows this list):
- `job_posting`: Job description and requirements
- `candidate_details`: Candidate background and qualifications
- `company_information`: Company details or URL to fetch
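
As a concrete example, here is a minimal client sketch using the official `mcp` Python SDK to call the deployed tool over SSE. The URL, token, and argument values are placeholders; the tool and parameter names come from this example's `main.py`:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Placeholder deployment URL and token; substitute your own values.
    url = "https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse"
    headers = {"Authorization": "Bearer your-mcp-agent-cloud-api-token"}

    async with sse_client(url, headers=headers) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "cover_letter_writer_tool",
                {
                    "job_posting": "Software Engineer at LastMile AI...",
                    "candidate_details": "Alex Johnson, 3 years in machine learning...",
                    "company_information": "https://lastmileai.dev/about",
                },
            )
            print(result.content)

asyncio.run(main())
```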

##### Monitoring Your Agent

After triggering a run, you'll receive a workflow metadata object:

```json
{
  "workflow_id": "cover-letter-writer-uuid",
  "run_id": "uuid",
  "execution_id": "uuid"
}
```

Monitor logs in real-time:

```bash
uv run mcp-agent cloud logger tail "cover-letter-writer" -f
```

Check run status using `workflows-get_status` to see the generated cover letter:

```json
{
  "result": {
    "id": "run-uuid",
    "name": "cover_letter_writer_tool",
    "status": "completed",
    "result": "{'kind': 'workflow_result', 'value': '[Your optimized cover letter]'}",
    "completed": true
  }
}
```
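
Note that the inner `result` field above is a Python-repr string (single quotes), not JSON, so `json.loads` will reject it. Assuming the payload keeps this shape, the letter can be extracted with `ast.literal_eval`:

```python
import ast

# The string shown in the status response above, verbatim.
payload = "{'kind': 'workflow_result', 'value': '[Your optimized cover letter]'}"
parsed = ast.literal_eval(payload)  # safely parses the Python-literal string
print(parsed["value"])
```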
36 changes: 17 additions & 19 deletions examples/workflows/workflow_evaluator_optimizer/main.py
@@ -17,8 +17,20 @@
# The cycle continues until the letter meets a predefined quality standard.
app = MCPApp(name="cover_letter_writer")


-async def example_usage():
+@app.async_tool(name="cover_letter_writer_tool",
+                description="This tool implements an evaluator-optimizer workflow for generating "
+                            "high-quality cover letters. It takes job postings, candidate details, "
+                            "and company information as input, then iteratively generates and refines "
+                            "cover letters until they meet excellent quality standards through "
+                            "automated evaluation and feedback.")
+async def example_usage(
+    job_posting: str = "Software Engineer at LastMile AI. Responsibilities include developing AI systems, "
+    "collaborating with cross-functional teams, and enhancing scalability. Skills required: "
+    "Python, distributed systems, and machine learning.",
+    candidate_details: str = "Alex Johnson, 3 years in machine learning, contributor to open-source AI projects, "
+    "proficient in Python and TensorFlow. Motivated by building scalable AI systems to solve real-world problems.",
+    company_information: str = "Look up from the LastMile AI About page: https://lastmileai.dev/about"
+):
    async with app.run() as cover_letter_app:
        context = cover_letter_app.context
        logger = cover_letter_app.logger
@@ -61,27 +73,13 @@ async def example_usage():
            min_rating=QualityRating.EXCELLENT,
        )

-        job_posting = (
-            "Software Engineer at LastMile AI. Responsibilities include developing AI systems, "
-            "collaborating with cross-functional teams, and enhancing scalability. Skills required: "
-            "Python, distributed systems, and machine learning."
-        )
-        candidate_details = (
-            "Alex Johnson, 3 years in machine learning, contributor to open-source AI projects, "
-            "proficient in Python and TensorFlow. Motivated by building scalable AI systems to solve real-world problems."
-        )
-
-        # This should trigger a 'fetch' call to get the company information
-        company_information = (
-            "Look up from the LastMile AI About page: https://lastmileai.dev/about"
-        )

        result = await evaluator_optimizer.generate_str(
            message=f"Write a cover letter for the following job posting: {job_posting}\n\nCandidate Details: {candidate_details}\n\nCompany information: {company_information}",
-            request_params=RequestParams(model="gpt-4o"),
+            request_params=RequestParams(model="gpt-4.1"),
        )

-        logger.info(f"{result}")
+        logger.info(f"Generated cover letter: {result}")
+        return result
Comment on lines +81 to +82

⚠️ Potential issue

Avoid logging full generated content (PII/large payloads).

The result may contain PII and can be very large. Log a preview and metadata instead.

```diff
-        logger.info(f"Generated cover letter: {result}")
+        logger.info(
+            "Generated cover letter",
+            preview=result[:200] + ("..." if len(result) > 200 else ""),
+            length=len(result),
+        )
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In examples/workflows/workflow_evaluator_optimizer/main.py around lines 81 to
82, the code logs the full generated cover letter which may contain PII or be
very large; change the logging to avoid printing full content by logging a
truncated preview (e.g., first N characters or first line) and relevant metadata
(length, generation status, timestamp, model id) instead; keep returning the
full result but ensure logger.info only emits the safe preview and metadata.



if __name__ == "__main__":
examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml
@@ -1,33 +1,42 @@
$schema: ../../../schema/mcp-agent.config.schema.json

# Execution engine configuration
execution_engine: asyncio

# Logging configuration
logger:
-  type: console
-  level: debug
-  batch_size: 100
-  flush_interval: 2
-  max_queue_size: 2048
-  http_endpoint:
-  http_headers:
-  http_timeout: 5
+  type: console # Log output type (console, file, or http)
+  level: debug # Logging level (debug, info, warning, error)
+  batch_size: 100 # Number of logs to batch before sending
+  flush_interval: 2 # Interval in seconds to flush logs
+  max_queue_size: 2048 # Maximum queue size for buffered logs
+  http_endpoint: # Optional: HTTP endpoint for remote logging
+  http_headers: # Optional: Headers for HTTP logging
+  http_timeout: 5 # Timeout for HTTP logging requests

# MCP (Model Context Protocol) server configuration
mcp:
  servers:
    # Fetch server: Enables web content fetching capabilities
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]

    # Filesystem server: Provides file system access capabilities
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem"]
Comment on lines +20 to 28

💡 Verification agent

🧩 Analysis chain

Pin MCP server versions and ensure runtime availability of uvx/npx.

Unpinned npx/uvx pulls “latest” at runtime—supply‑chain and reproducibility risk. Pin versions and document that uvx/npx are present in the cloud image.

```diff
-    fetch:
-      command: "uvx"
-      args: ["mcp-server-fetch"]
+    fetch:
+      command: "uvx"
+      args: ["--from", "mcp-server-fetch==X.Y.Z", "mcp-server-fetch"]
@@
-    filesystem:
-      command: "npx"
-      args: ["-y", "@modelcontextprotocol/server-filesystem"]
+    filesystem:
+      command: "npx"
+      args: ["-y", "@modelcontextprotocol/[email protected]"]
```

Run-time check script to confirm availability:


🏁 Script executed:

```bash
#!/bin/bash
command -v uvx && uvx --version
command -v npx && npx --version
```

Length of output: 25


Pin MCP server versions and ensure uvx is present in the runtime image

uvx not found in PATH; npx present at /usr/local/bin/npx (11.4.2).

  • Pin mcp-server-fetch and @modelcontextprotocol/server-filesystem to explicit versions in examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml (lines 20–28).
  • Install uvx into the cloud image (or replace the fetch call with a pinned npx-based invocation); do not rely on unpinned runtime pulls.
```diff
-    fetch:
-      command: "uvx"
-      args: ["mcp-server-fetch"]
+    fetch:
+      command: "uvx"
+      args: ["--from", "mcp-server-fetch==X.Y.Z", "mcp-server-fetch"]
@@
-    filesystem:
-      command: "npx"
-      args: ["-y", "@modelcontextprotocol/server-filesystem"]
+    filesystem:
+      command: "npx"
+      args: ["-y", "@modelcontextprotocol/[email protected]"]
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml around
lines 20 to 28, the agents reference unpinned commands and rely on an
unavailable uvx binary; update the fetch and filesystem entries to use explicit
package versions and a runtime-available command: change fetch to a pinned
invocation (either install uvx into the runtime image and keep command "uvx" or
replace the fetch command with a pinned npx call that runs
mcp-server-fetch@<version>), and pin the filesystem package to
@modelcontextprotocol/server-filesystem@<version> (use exact semver) so both
services use fixed versions and do not depend on unpinned runtime pulls or
missing binaries.


# OpenAI configuration
openai:
-  # Secrets (API keys, etc.) are stored in an mcp_agent.secrets.yaml file which can be gitignored
-  default_model: gpt-4o
+  # API keys are stored in mcp_agent.secrets.yaml (gitignored for security)
+  default_model: gpt-4.1 # Default model for OpenAI API calls

Invalid model name 'gpt-4.1' - this model does not exist in OpenAI's API. This will cause all OpenAI API calls using the default model to fail with a model not found error. Should be changed to a valid OpenAI model name like 'gpt-4o' or 'gpt-4-turbo'.

Suggested change:

```diff
-  default_model: gpt-4.1 # Default model for OpenAI API calls
+  default_model: gpt-4o # Default model for OpenAI API calls
```

Spotted by Diamond



# OpenTelemetry (OTEL) configuration for distributed tracing
otel:
-  enabled: false
-  exporters: ["console"]
-  # If running jaeger locally, uncomment the following lines and add "otlp" to the exporters list
+  enabled: false # Set to true to enable tracing
+  exporters: ["console"] # Trace exporters (console, otlp)
+  # Uncomment below to export traces to Jaeger running locally
  # otlp_settings:
  #   endpoint: "http://localhost:4318/v1/traces"
-  service_name: "WorkflowEvaluatorOptimizerExample"
+  service_name: "WorkflowEvaluatorOptimizerExample" # Service name in traces
examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
@@ -1,7 +1,18 @@
$schema: ../../../schema/mcp-agent.config.schema.json

# NOTE: You only need to configure ONE of the following API keys (OpenAI OR Anthropic)
# Choose based on your preferred LLM provider

Comment on lines +3 to +5

💡 Verification agent

🧩 Analysis chain

Docs say “OpenAI OR Anthropic,” but the workflow is hard‑wired to OpenAI.

main.py uses OpenAIAugmentedLLM. If users only provide Anthropic keys (per this note), the run will fail. Either implement provider selection or tighten docs to “OpenAI only” for this example.

Would you like a follow‑up patch to auto‑select OpenAI vs Anthropic based on configured secrets?


🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "=== Show the secrets example file ==="
if [ -f examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example ]; then
  sed -n '1,200p' examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
else
  echo "mcp_agent.secrets.yaml.example not found"
fi

echo
echo "=== Search repo for OpenAIAugmentedLLM ==="
rg -n --hidden --no-ignore-vcs -S "OpenAIAugmentedLLM" || true

echo
echo "=== Search repo for AugmentedLLM (generic) ==="
rg -n --hidden --no-ignore-vcs -S "AugmentedLLM" || true

echo
echo "=== Case-insensitive search for Anthropic ==="
rg -n --hidden --no-ignore-vcs -S -i "Anthropic" || true

echo
echo "=== Show main.py in workflow folder if present ==="
if [ -f examples/workflows/workflow_evaluator_optimizer/main.py ]; then
  sed -n '1,300p' examples/workflows/workflow_evaluator_optimizer/main.py
else
  echo "main.py not found in examples/workflows/workflow_evaluator_optimizer/"
fi

echo
echo "=== Search files in that workflow folder for OpenAI / Anthropic mentions ==="
rg -n --hidden --no-ignore-vcs -S -g 'examples/workflows/workflow_evaluator_optimizer/**' -i "OpenAI|openai|Anthropic|anthropic" || true
```

Length of output: 264090


Fix docs/examples mismatch: example hard‑codes OpenAI while secrets/README say "OpenAI OR Anthropic"

examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example (lines 3–16) and README.md state either provider is fine, but examples/workflows/workflow_evaluator_optimizer/main.py imports OpenAIAugmentedLLM (line 6) and sets llm_factory=OpenAIAugmentedLLM (line 72) — the example will fail if only Anthropic keys are configured. Either add runtime provider selection (use AnthropicAugmentedLLM when config.anthropic is present or resolve via the workflow factory) or change the example/docs to explicitly say "OpenAI only".
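
For illustration, the suggested runtime selection could look like the sketch below (assuming these mcp-agent import paths; verify them against the installed version):

```python
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

def pick_llm_factory(context):
    """Prefer Anthropic when its key is configured; otherwise fall back to OpenAI."""
    anthropic_cfg = getattr(context.config, "anthropic", None)
    if anthropic_cfg and getattr(anthropic_cfg, "api_key", None):
        return AnthropicAugmentedLLM
    return OpenAIAugmentedLLM
```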

🤖 Prompt for AI Agents
In
examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
(lines 3–16) and examples/workflows/workflow_evaluator_optimizer/main.py
(imports at line ~6 and llm_factory set at line ~72), the example hard-codes
OpenAIAugmentedLLM while the secrets README suggests either OpenAI or Anthropic;
update the example so it won't fail when only Anthropic keys are provided by
adding runtime provider selection: detect presence of config.anthropic (or
equivalent env/secret) and set llm_factory = AnthropicAugmentedLLM when present,
otherwise use OpenAIAugmentedLLM; alternatively, if you prefer a simpler change,
modify the example README and mcp_agent.secrets.yaml.example to explicitly state
“OpenAI only” so the current main.py remains correct.

# OpenAI Configuration (if using OpenAI models)
# Create an API key at: https://platform.openai.com/api-keys
openai:
-  api_key: openai_api_key
+  api_key: your-openai-api-key
+  # For cloud deployment, use developer secrets:
+  # api_key: !developer_secret OPENAI_API_KEY

# Anthropic Configuration (if using Claude models)
# Create an API key at: https://console.anthropic.com/settings/keys
anthropic:
-  api_key: anthropic_api_key
+  api_key: your-anthropic-api-key
+  # For cloud deployment, use developer secrets:
+  # api_key: !developer_secret ANTHROPIC_API_KEY
examples/workflows/workflow_evaluator_optimizer/requirements.txt
@@ -1,5 +1,5 @@
# Core framework dependency
-mcp-agent @ file://../../../ # Link to the local mcp-agent project root
+# mcp-agent @ file://../../../ # Link to the local mcp-agent project root; uncomment this line to run locally

# Additional dependencies specific to this example
anthropic