12 changes: 12 additions & 0 deletions examples/cloud/mcp/README.md
@@ -184,3 +184,15 @@ This will launch the MCP Inspector UI where you can:
- See all available tools
- Test workflow execution
- View request/response details

Make sure Inspector is configured with the following settings:

| Setting | Value |
| ---------------- | --------------------------------------------------- |
| _Transport Type_ | _SSE_ |
| _SSE_ | _https://[server_id].deployments.mcp-agent.com/sse_ |
| _Header Name_ | _Authorization_ |
| _Bearer Token_ | _your-mcp-agent-cloud-api-token_ |

> [!TIP]
> In the Configuration, increase the request timeout. Because your agents make LLM calls, requests are expected to take longer than simple API calls.
5 changes: 3 additions & 2 deletions examples/cloud/mcp/mcp_agent.config.yaml
@@ -1,8 +1,9 @@
$schema: ../../schema/mcp-agent.config.schema.json

execution_engine: asyncio
logger:
  transports: [file]
  transports: [console]
  level: debug
  path: "logs/mcp-agent.jsonl"

mcp:
servers:
159 changes: 159 additions & 0 deletions examples/cloud/observability/README.md
@@ -0,0 +1,159 @@
# Observability Example (OpenTelemetry + Langfuse)

This example demonstrates how to instrument an mcp-agent application with observability features using OpenTelemetry and an OTLP exporter (Langfuse). It shows how to automatically trace tool calls, workflows, LLM calls, and add custom tracing spans.

## What's included

- `main.py` – exposes a `grade_story_async` tool that uses parallel LLM processing with multiple specialized agents (proofreader, fact checker, style enforcer, and grader). Demonstrates both automatic instrumentation by mcp-agent and manual OpenTelemetry span creation.
- `mcp_agent.config.yaml` – configures the execution engine, logging, and enables OpenTelemetry with a custom service name.
- `mcp_agent.secrets.yaml.example` – template for configuring API keys and the Langfuse OTLP exporter endpoint with authentication headers.
- `requirements.txt` – lists dependencies including mcp-agent and OpenAI.

## Features

- **Automatic instrumentation**: Tool calls, workflows, and LLM interactions are automatically traced by mcp-agent
- **Custom tracing**: Example of adding manual OpenTelemetry spans with custom attributes
- **Langfuse integration**: OTLP exporter configuration for sending traces to Langfuse; you can alternatively use your preferred OTLP exporter endpoint

## Prerequisites

- Python 3.10+
- [UV](https://github.com/astral-sh/uv) package manager
- API key for OpenAI
- Langfuse account (for observability dashboards)

## Configuration

Before running the example, you'll need to configure API keys and observability settings.

### API Keys and Observability Setup

1. Copy the example secrets file:

```bash
cd examples/cloud/observability
cp mcp_agent.secrets.yaml.example mcp_agent.secrets.yaml
```

2. Edit `mcp_agent.secrets.yaml` to add your credentials:

```yaml
openai:
  api_key: "your-openai-api-key"

otel:
  exporters:
    - otlp:
        endpoint: "https://us.cloud.langfuse.com/api/public/otel/v1/traces"
        headers:
          Authorization: "Basic AUTH_STRING"
```

3. Generate the Langfuse basic auth token:

a. Sign up for a [Langfuse account](https://langfuse.com/) if you don't have one

b. Obtain your Langfuse public and secret keys from the project settings

c. Generate the base64-encoded basic auth token (a Python alternative is sketched after these steps):

```bash
echo -n "pk-lf-YOUR-PUBLIC-KEY:sk-lf-YOUR-SECRET-KEY" | base64
```

d. Replace `AUTH_STRING` in the config with the generated base64 string

> See [Langfuse OpenTelemetry documentation](https://langfuse.com/integrations/native/opentelemetry#opentelemetry-endpoint) for more details, including the OTLP endpoint for EU data region.
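
If you prefer to build the `AUTH_STRING` without the shell, the same encoding can be produced with a few lines of Python (a minimal sketch; the keys below are placeholders for your real Langfuse project keys):

```python
import base64

# Placeholder Langfuse keys -- substitute your project's public and secret keys.
public_key = "pk-lf-YOUR-PUBLIC-KEY"
secret_key = "sk-lf-YOUR-SECRET-KEY"

# Basic auth is base64("public_key:secret_key").
auth_string = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
print(f"Authorization: Basic {auth_string}")
```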

## Test Locally

1. Install dependencies:

```bash
uv pip install -r requirements.txt
```

2. Start the mcp-agent server locally with SSE transport:

```bash
uv run main.py
```

3. Use [MCP Inspector](https://github.com/modelcontextprotocol/inspector) to explore and test the server:

```bash
npx @modelcontextprotocol/inspector --transport sse --server-url http://127.0.0.1:8000/sse
```

4. In MCP Inspector, test the `grade_story_async` tool with a sample story (a programmatic alternative is sketched after this list). The tool will:

- Create a custom trace span for the magic number calculation
- Automatically trace the parallel LLM execution
- Send all traces to Langfuse for visualization

5. View your traces in the Langfuse dashboard to see:
- Complete execution flow
- Timing for each agent
- LLM calls and responses
- Custom span attributes
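
As an alternative to the Inspector UI, you can exercise the tool programmatically with the MCP Python client. The sketch below is illustrative: it assumes the server from step 2 is still running on the default local SSE endpoint, and it uses the `mcp` package that mcp-agent already depends on.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Connect to the locally running mcp-agent server over SSE.
    async with sse_client("http://127.0.0.1:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List the exposed tools, then call grade_story_async with a sample story.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            result = await session.call_tool(
                "grade_story_async",
                {"story": "Once upon a time, a robot learned to paint."},
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```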

## Deploy to mcp-agent Cloud

You can deploy this mcp-agent app as a hosted app in mcp-agent Cloud.

1. In your terminal, authenticate into mcp-agent cloud by running:

```bash
uv run mcp-agent login
```

2. You will be redirected to the login page, where you can create an mcp-agent cloud account through Google or GitHub

3. Set up your mcp-agent cloud API key, then copy and paste it into your terminal:

```bash
uv run mcp-agent login
INFO: Directing to MCP Agent Cloud API login...
Please enter your API key 🔑:
```

4. In your terminal, deploy the MCP app:

```bash
uv run mcp-agent deploy observability-example
```

5. When prompted, specify the type of secret to save your API keys. Select (1) deployment secret so that they are available to the deployed server.

The `deploy` command will bundle the app files and deploy them, producing a server URL of the form:
`https://<server_id>.deployments.mcp-agent.com`.

## MCP Clients

Since the mcp-agent app is exposed as an MCP server, it can be used in any MCP client just
like any other MCP server.

### MCP Inspector

You can inspect and test the deployed server using [MCP Inspector](https://github.com/modelcontextprotocol/inspector):

```bash
npx @modelcontextprotocol/inspector --transport sse --server-url https://<server_id>.deployments.mcp-agent.com/sse
```

This will launch the MCP Inspector UI where you can:

- See all available tools
- Test the `grade_story_async` tool and the `ResearchWorkflow` workflow execution

Make sure Inspector is configured with the following settings:

| Setting | Value |
| ---------------- | --------------------------------------------------- |
| _Transport Type_ | _SSE_ |
| _SSE_ | _https://[server_id].deployments.mcp-agent.com/sse_ |
| _Header Name_ | _Authorization_ |
| _Bearer Token_ | _your-mcp-agent-cloud-api-token_ |

> [!TIP]
> In the Configuration, increase the request timeout. Because your agents make LLM calls, requests are expected to take longer than simple API calls.
131 changes: 131 additions & 0 deletions examples/cloud/observability/main.py
@@ -0,0 +1,131 @@
"""
Observability Example MCP App

This example demonstrates a very basic MCP app with observability features using OpenTelemetry.

mcp-agent automatically instruments workflows (runs, tasks/activities), tool calls, LLM calls, and more,
allowing you to trace and monitor the execution of your app. You can also add custom tracing spans as needed.

"""

import asyncio
from typing import List, Optional

from opentelemetry import trace

from mcp_agent.agents.agent import Agent
from mcp_agent.app import MCPApp
from mcp_agent.core.context import Context as AppContext
from mcp_agent.executor.workflow import Workflow
from mcp_agent.server.app_server import create_mcp_server_for_app
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.parallel.parallel_llm import ParallelLLM

app = MCPApp(name="observability_example_app")


# You can always explicitly trace using opentelemetry as usual
def get_magic_number(original_number: int = 0) -> int:
Collaborator: One other thing that may be great to add would be a function decorated as an app.workflow_task, which is then called via app_ctx.executor.execute. That will execute the function as an activity, and will be great to show that. Totally optional, but wanted to suggest that for completeness.

Member Author: Done (screenshot attached).

    # Open a custom span and record attributes on it; these spans appear in
    # Langfuse alongside the spans that mcp-agent creates automatically.
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("some_tool_function") as span:
        span.set_attribute("example.attribute", "value")
        result = 42 + original_number
        span.set_attribute("result", result)
        return result


# Workflows (runs, tasks/activities), tool calls, LLM calls, etc. are automatically traced by mcp-agent
@app.workflow_task()
async def gather_sources(query: str) -> list[str]:
    app.context.logger.info("Gathering sources", data={"query": query})
    return [f"https://example.com/search?q={query}"]


@app.workflow
class ResearchWorkflow(Workflow[None]):
    @app.workflow_run
    async def run(self, topic: str) -> List[str]:
        sources = await self.context.executor.execute(gather_sources, topic)
        self.context.logger.info(
            "Workflow completed", data={"topic": topic, "sources": sources}
        )
        return sources


@app.async_tool(name="grade_story_async")
async def grade_story_async(story: str, app_ctx: Optional[AppContext] = None) -> str:
    """
    Grade a student's short story with parallel agents and return the compiled feedback.

    Args:
        story: The student's short story to grade
        app_ctx: Optional MCPApp context for accessing app resources and logging
    """

    context = app_ctx or app.context
    await context.info(f"[grade_story_async] Received input: {story}")

    magic_number = get_magic_number(10)
    await context.info(f"[grade_story_async] Magic number computed: {magic_number}")

    proofreader = Agent(
        name="proofreader",
        instruction="""Review the short story for grammar, spelling, and punctuation errors.
        Identify any awkward phrasing or structural issues that could improve clarity.
        Provide detailed feedback on corrections.""",
    )

    fact_checker = Agent(
        name="fact_checker",
        instruction="""Verify the factual consistency within the story. Identify any contradictions,
        logical inconsistencies, or inaccuracies in the plot, character actions, or setting.
        Highlight potential issues with reasoning or coherence.""",
    )

    style_enforcer = Agent(
        name="style_enforcer",
        instruction="""Analyze the story for adherence to style guidelines.
        Evaluate the narrative flow, clarity of expression, and tone. Suggest improvements to
        enhance storytelling, readability, and engagement.""",
    )

    grader = Agent(
        name="grader",
        instruction="""Compile the feedback from the Proofreader and Fact Checker
        into a structured report. Summarize key issues and categorize them by type.
        Provide actionable recommendations for improving the story,
        and give an overall grade based on the feedback.""",
    )

    parallel = ParallelLLM(
        fan_in_agent=grader,
        fan_out_agents=[proofreader, fact_checker, style_enforcer],
        llm_factory=OpenAIAugmentedLLM,
        context=context,
    )

    await context.info("[grade_story_async] Starting parallel LLM")

    try:
        result = await parallel.generate_str(
            message=f"Student short story submission: {story}",
        )
    except Exception as e:
        await context.error(f"[grade_story_async] Error generating result: {e}")
        return ""

    if not result:
        await context.error("[grade_story_async] No result from parallel LLM")
        return ""

    return result


# NOTE: This main function is useful for local testing but will be ignored in the cloud deployment.
async def main():
    async with app.run() as agent_app:
        mcp_server = create_mcp_server_for_app(agent_app)
        await mcp_server.run_sse_async()


if __name__ == "__main__":
    asyncio.run(main())
11 changes: 11 additions & 0 deletions examples/cloud/observability/mcp_agent.config.yaml
@@ -0,0 +1,11 @@
$schema: ../../schema/mcp-agent.config.schema.json

execution_engine: asyncio
logger:
  transports: [console]
  level: debug

otel:
  enabled: true
  service_name: "BasicObservabilityExample"
  # OTLP exporter endpoint and headers are configured in mcp_agent.secrets.yaml
14 changes: 14 additions & 0 deletions examples/cloud/observability/mcp_agent.secrets.yaml.example
@@ -0,0 +1,14 @@
openai:
  api_key: sk-your-openai-key

otel:
  # Define the Langfuse OTLP exporter (including headers) here so
  # mcp_agent.config.yaml does not need a duplicate entry.
  # See https://langfuse.com/integrations/native/opentelemetry#opentelemetry-endpoint
  # for info on the OTLP endpoint for the EU data region and for the basic auth generation command:
  # `echo -n "pk-lf-1234567890:sk-lf-1234567890" | base64`
  exporters:
    - otlp:
        endpoint: "https://us.cloud.langfuse.com/api/public/otel/v1/traces"
        headers:
          Authorization: "Basic AUTH_STRING"
5 changes: 5 additions & 0 deletions examples/cloud/observability/requirements.txt
@@ -0,0 +1,5 @@
# Core framework dependency
mcp-agent @ file://../../../ # Link to the local mcp-agent project root
⚠️ Potential issue | 🔴 Critical

Fix the local dependency path format in examples/cloud/observability/requirements.txt.

The `mcp-agent @ file://../../../` entry uses a relative path with the `file://` scheme, which is non-standard: per PEP 508, `file://` URLs should use absolute paths. In practice pip cannot resolve this form during installation and reports that pyproject.toml is not found, even though it exists at the repository root.

Replace the entry with one of these standard alternatives:

  • `-e ../../../` (editable install with a relative path)
  • an absolute path with the `file://` scheme (for example built from an environment variable)

Suggested replacement for line 2 of requirements.txt:

-e ../../../



# Additional dependencies specific to this example
openai
2 changes: 1 addition & 1 deletion examples/temporal/requirements.txt
@@ -1,5 +1,5 @@
# Core framework dependency
mcp-agent @ file://../../ # Link to the local mcp-agent project root. Remove @ file://../../ for cloud deployment
mcp-agent @ file://../../ # Link to the local mcp-agent project root

# Additional dependencies specific to this example
anthropic