diff --git a/examples/workflows/workflow_evaluator_optimizer/README.md b/examples/workflows/workflow_evaluator_optimizer/README.md
index ed098ad15..9322dd5c6 100644
--- a/examples/workflows/workflow_evaluator_optimizer/README.md
+++ b/examples/workflows/workflow_evaluator_optimizer/README.md
@@ -1,6 +1,16 @@
-# Evaluator-Optimizer Workflow example
+# Evaluator-Optimizer Workflow Example
 
-This example is a job cover letter refinement system, which generates a draft based on job description, company information, and candidate details. Then, the evaluator reviews the letter, provides a quality rating, and offers actionable feedback. The cycle continues until the letter meets a predefined quality standard.
+This example demonstrates a job cover letter refinement system built on the evaluator-optimizer pattern. The system generates a draft cover letter from the job description, company information, and candidate details. An evaluator agent then reviews the letter, assigns a quality rating, and offers actionable feedback. This iterative cycle continues until the letter meets the predefined quality standard of "excellent".
+
+## What's New in This Branch
+
+- **Tool-based Architecture**: The workflow is now exposed as an MCP tool (`cover_letter_writer_tool`) that can be deployed and accessed remotely
+- **Input Parameters**: The tool accepts three parameters:
+  - `job_posting`: The job description and requirements
+  - `candidate_details`: The candidate's background and qualifications
+  - `company_information`: Company details (can be a URL for the agent to fetch)
+- **Model Update**: Default model updated from `gpt-4o` to `gpt-4.1`
+- **Cloud Deployment Ready**: Full support for deployment to MCP Agent Cloud
 
 To make things interesting, we specify the company information as a URL, expecting the agent to fetch it using the MCP 'fetch' server, and then using that information to generate the cover letter.
 
@@ -56,7 +66,7 @@ Copy and configure your secrets and env variables:
 cp mcp_agent.secrets.yaml.example mcp_agent.secrets.yaml
 ```
 
-Then open `mcp_agent.secrets.yaml` and add your api key for your preferred LLM.
+Then open `mcp_agent.secrets.yaml` and add your API key for your preferred LLM provider. **Note: You only need to configure ONE API key**, either OpenAI or Anthropic, depending on which provider you want to use.
 
 ## (Optional) Configure tracing
 
@@ -70,3 +80,135 @@ Run your MCP Agent app:
 ```bash
 uv run main.py
 ```
+
+## `4` [Beta] Deploy to the Cloud
+
+Deploy your cover letter writer agent to MCP Agent Cloud for remote access and integration.
+
+### Prerequisites
+
+- An MCP Agent Cloud account
+- API keys configured in `mcp_agent.secrets.yaml`
+
+### Deployment Steps
+
+#### `a.` Log in to [MCP Agent Cloud](https://docs.mcp-agent.com/cloud/overview)
+
+```bash
+uv run mcp-agent login
+```
+
+#### `b.` Update `mcp_agent.secrets.yaml` to mark your developer secrets
+
+Mark the sensitive keys in your secrets file as developer secrets so they are handled securely during cloud deployment:
+
+```yaml
+$schema: ../../../schema/mcp-agent.config.schema.json
+
+openai:
+  api_key: !developer_secret OPENAI_API_KEY
+
+anthropic:
+  api_key: !developer_secret ANTHROPIC_API_KEY
+```
+
+#### `c.` Deploy your agent with a single command
+
+```bash
+uv run mcp-agent deploy cover-letter-writer
+```
+
+#### `d.` Connect to your deployed agent as an MCP server
+
+Once deployed, you can connect to your agent through various MCP clients.
+
+##### Claude Desktop Integration
+
+Configure Claude Desktop to access your agent by updating `~/.claude-desktop/config.json`:
+
+```json
+{
+  "cover-letter-writer": {
+    "command": "/path/to/npx",
+    "args": [
+      "mcp-remote",
+      "https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse",
+      "--header",
+      "Authorization: Bearer ${BEARER_TOKEN}"
+    ],
+    "env": {
+      "BEARER_TOKEN": "your-mcp-agent-cloud-api-token"
+    }
+  }
+}
+```
+
+##### MCP Inspector
+
+Use MCP Inspector to explore and test your agent:
+
+```bash
+npx @modelcontextprotocol/inspector
+```
+
+Configure the following settings in MCP Inspector:
+
+| Setting | Value |
+| --- | --- |
+| **Transport Type** | SSE |
+| **SSE URL** | `https://[your-agent-server-id].deployments.mcp-agent-cloud.lastmileai.dev/sse` |
+| **Header Name** | Authorization |
+| **Bearer Token** | your-mcp-agent-cloud-api-token |
+
+> [!TIP]
+> Increase the request timeout in the Configuration settings, since LLM calls can take longer than simple API calls.
+
+##### Available Tools
+
+Once connected to your deployed agent, you'll have access to:
+
+**MCP Agent Cloud Default Tools:**
+- `workflow-list`: List available workflows
+- `workflow-run-list`: List execution runs of your agent
+- `workflow-run`: Create a new workflow run
+- `workflows-get_status`: Check the status of an agent run
+- `workflows-resume`: Resume a paused run
+- `workflows-cancel`: Cancel a running workflow
+
+**Your Agent's Tool:**
+- `cover_letter_writer_tool`: Generate optimized cover letters with parameters:
+  - `job_posting`: Job description and requirements
+  - `candidate_details`: Candidate background and qualifications
+  - `company_information`: Company details or a URL to fetch
+
+##### Monitoring Your Agent
+
+After triggering a run, you'll receive a workflow metadata object:
+
+```json
+{
+  "workflow_id": "cover-letter-writer-uuid",
+  "run_id": "uuid",
+  "execution_id": "uuid"
+}
+```
+
+Monitor logs in real time:
+
+```bash
+uv run mcp-agent cloud logger tail "cover-letter-writer" -f
+```
+
+Check the run status using `workflows-get_status` to see the generated cover letter:
+
+```json
+{
+  "result": {
+    "id": "run-uuid",
+    "name": "cover_letter_writer_tool",
+    "status": "completed",
+    "result": "{'kind': 'workflow_result', 'value': '[Your optimized cover letter]'}",
+    "completed": true
+  }
+}
+```
diff --git a/examples/workflows/workflow_evaluator_optimizer/main.py b/examples/workflows/workflow_evaluator_optimizer/main.py
index 748b533e9..cc15813b6 100644
--- a/examples/workflows/workflow_evaluator_optimizer/main.py
+++ b/examples/workflows/workflow_evaluator_optimizer/main.py
@@ -17,8 +17,20 @@
 # The cycle continues until the letter meets a predefined quality standard.
 app = MCPApp(name="cover_letter_writer")
 
-
-async def example_usage():
+@app.async_tool(name="cover_letter_writer_tool",
+                description="This tool implements an evaluator-optimizer workflow for generating "
+                "high-quality cover letters. It takes job postings, candidate details, "
+                "and company information as input, then iteratively generates and refines "
+                "cover letters until they meet excellent quality standards through "
+                "automated evaluation and feedback.")
+async def example_usage(
+    job_posting: str = "Software Engineer at LastMile AI. Responsibilities include developing AI systems, "
+    "collaborating with cross-functional teams, and enhancing scalability. Skills required: "
+    "Python, distributed systems, and machine learning.",
+    candidate_details: str = "Alex Johnson, 3 years in machine learning, contributor to open-source AI projects, "
+    "proficient in Python and TensorFlow. Motivated by building scalable AI systems to solve real-world problems.",
+    company_information: str = "Look up from the LastMile AI About page: https://lastmileai.dev/about",
+):
     async with app.run() as cover_letter_app:
         context = cover_letter_app.context
         logger = cover_letter_app.logger
@@ -61,27 +73,13 @@ async def example_usage():
             min_rating=QualityRating.EXCELLENT,
         )
 
-        job_posting = (
-            "Software Engineer at LastMile AI. Responsibilities include developing AI systems, "
-            "collaborating with cross-functional teams, and enhancing scalability. Skills required: "
-            "Python, distributed systems, and machine learning."
-        )
-        candidate_details = (
-            "Alex Johnson, 3 years in machine learning, contributor to open-source AI projects, "
-            "proficient in Python and TensorFlow. Motivated by building scalable AI systems to solve real-world problems."
-        )
-
-        # This should trigger a 'fetch' call to get the company information
-        company_information = (
-            "Look up from the LastMile AI About page: https://lastmileai.dev/about"
-        )
-
         result = await evaluator_optimizer.generate_str(
             message=f"Write a cover letter for the following job posting: {job_posting}\n\nCandidate Details: {candidate_details}\n\nCompany information: {company_information}",
-            request_params=RequestParams(model="gpt-4o"),
+            request_params=RequestParams(model="gpt-4.1"),
         )
 
-        logger.info(f"{result}")
+        logger.info(f"Generated cover letter: {result}")
+        return result
 
 
 if __name__ == "__main__":
diff --git a/examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml b/examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml
index eaf34dfcf..8ab8652ed 100644
--- a/examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml
+++ b/examples/workflows/workflow_evaluator_optimizer/mcp_agent.config.yaml
@@ -1,33 +1,42 @@
 $schema: ../../../schema/mcp-agent.config.schema.json
 
+# Execution engine configuration
 execution_engine: asyncio
+
+# Logging configuration
 logger:
-  type: console
-  level: debug
-  batch_size: 100
-  flush_interval: 2
-  max_queue_size: 2048
-  http_endpoint:
-  http_headers:
-  http_timeout: 5
+  type: console # Log output type (console, file, or http)
+  level: debug # Logging level (debug, info, warning, or error)
+  batch_size: 100 # Number of logs to batch before sending
+  flush_interval: 2 # Interval in seconds between log flushes
+  max_queue_size: 2048 # Maximum queue size for buffered logs
+  http_endpoint: # Optional: HTTP endpoint for remote logging
+  http_headers: # Optional: headers for HTTP logging
+  http_timeout: 5 # Timeout in seconds for HTTP logging requests
 
+# MCP (Model Context Protocol) server configuration
 mcp:
   servers:
+    # Fetch server: enables web content fetching
     fetch:
       command: "uvx"
       args: ["mcp-server-fetch"]
+
+    # Filesystem server: provides file system access
     filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem"]
 
+# OpenAI configuration
 openai:
-  # Secrets (API keys, etc.) are stored in an mcp_agent.secrets.yaml file which can be gitignored
-  default_model: gpt-4o
+  # API keys are stored in mcp_agent.secrets.yaml (gitignored for security)
+  default_model: gpt-4.1 # Default model for OpenAI API calls
 
+# OpenTelemetry (OTEL) configuration for distributed tracing
 otel:
-  enabled: false
-  exporters: ["console"]
-  # If running jaeger locally, uncomment the following lines and add "otlp" to the exporters list
+  enabled: false # Set to true to enable tracing
+  exporters: ["console"] # Trace exporters (console, otlp)
+  # Uncomment below to export traces to a local Jaeger instance
   # otlp_settings:
   #   endpoint: "http://localhost:4318/v1/traces"
-  service_name: "WorkflowEvaluatorOptimizerExample"
+  service_name: "WorkflowEvaluatorOptimizerExample" # Service name shown in traces
diff --git a/examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example b/examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
index 99e5c606d..f99939819 100644
--- a/examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
+++ b/examples/workflows/workflow_evaluator_optimizer/mcp_agent.secrets.yaml.example
@@ -1,7 +1,18 @@
 $schema: ../../../schema/mcp-agent.config.schema.json
 
+# NOTE: You only need to configure ONE of the following API keys (OpenAI OR Anthropic)
+# Choose based on your preferred LLM provider
+
+# OpenAI configuration (if using OpenAI models)
+# Create an API key at: https://platform.openai.com/api-keys
 openai:
-  api_key: openai_api_key
+  api_key: your-openai-api-key
+  # For cloud deployment, use developer secrets:
+  # api_key: !developer_secret OPENAI_API_KEY
 
+# Anthropic configuration (if using Claude models)
+# Create an API key at: https://console.anthropic.com/settings/keys
 anthropic:
-  api_key: anthropic_api_key
+  api_key: your-anthropic-api-key
+  # For cloud deployment, use developer secrets:
+  # api_key: !developer_secret ANTHROPIC_API_KEY
diff --git a/examples/workflows/workflow_evaluator_optimizer/requirements.txt b/examples/workflows/workflow_evaluator_optimizer/requirements.txt
index 07907e9bb..bb26f0bb4 100644
--- a/examples/workflows/workflow_evaluator_optimizer/requirements.txt
+++ b/examples/workflows/workflow_evaluator_optimizer/requirements.txt
@@ -1,5 +1,5 @@
 # Core framework dependency
-mcp-agent @ file://../../../ # Link to the local mcp-agent project root
+# mcp-agent @ file://../../../ # Local mcp-agent project root; uncomment this line to run the example locally
 
 # Additional dependencies specific to this example
 anthropic
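
As a sanity check on the new tool signature, the sketch below mirrors how `main.py` combines the three tool parameters into the single prompt passed to `generate_str`. The `build_message` helper is hypothetical (it does not exist in the diff); the f-string layout is taken directly from the `message=` argument in `main.py`.

```python
# Hypothetical helper mirroring the prompt assembly in main.py's
# evaluator_optimizer.generate_str(...) call; not part of the patch itself.

def build_message(job_posting: str, candidate_details: str, company_information: str) -> str:
    """Combine the three cover_letter_writer_tool parameters into one prompt string."""
    return (
        f"Write a cover letter for the following job posting: {job_posting}\n\n"
        f"Candidate Details: {candidate_details}\n\n"
        f"Company information: {company_information}"
    )


if __name__ == "__main__":
    # Abbreviated versions of the default parameter values from main.py.
    message = build_message(
        job_posting="Software Engineer at LastMile AI.",
        candidate_details="Alex Johnson, 3 years in machine learning.",
        company_information="Look up from the LastMile AI About page: https://lastmileai.dev/about",
    )
    print(message)
```

Because the defaults are plain keyword arguments on `example_usage`, a caller can override any subset of them; the prompt structure stays fixed while the inputs vary.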