
Commit ff82bc0

rholinshead and andrew-lastmile authored and committed
Feat/cloud observability example (lastmile-ai#599)
* wip
* Clean up observability example + README
* Remove unnecessary filesystem mcp server
* Remove spurious comment in requirements
* Add activity/task to example
1 parent 65a618d commit ff82bc0

File tree

8 files changed (+336, -3 lines changed)


examples/cloud/mcp/README.md

Lines changed: 12 additions & 0 deletions
```diff
@@ -184,3 +184,15 @@ This will launch the MCP Inspector UI where you can:
 - See all available tools
 - Test workflow execution
 - View request/response details
+
+Make sure Inspector is configured with the following settings:
+
+| Setting          | Value                                                |
+| ---------------- | ---------------------------------------------------- |
+| _Transport Type_ | _SSE_                                                |
+| _SSE_            | _https://[server_id].deployments.mcp-agent.com/sse_  |
+| _Header Name_    | _Authorization_                                      |
+| _Bearer Token_   | _your-mcp-agent-cloud-api-token_                     |
+
+> [!TIP]
+> In the Configuration, change the request timeout to a longer time period. Since your agents make LLM calls, requests will take longer than simple API calls.
```

examples/cloud/mcp/mcp_agent.config.yaml

Lines changed: 3 additions & 2 deletions
```diff
@@ -1,8 +1,9 @@
+$schema: ../../schema/mcp-agent.config.schema.json
+
 execution_engine: asyncio
 logger:
-  transports: [file]
+  transports: [console]
   level: debug
-  path: "logs/mcp-agent.jsonl"
 
 mcp:
   servers:
```
examples/cloud/observability/README.md

Lines changed: 159 additions & 0 deletions (new file)

# Observability Example (OpenTelemetry + Langfuse)

This example demonstrates how to instrument an mcp-agent application with OpenTelemetry and an OTLP exporter (Langfuse). It shows how tool calls, workflows, and LLM calls are traced automatically, and how to add custom tracing spans.

## What's included

- `main.py` – exposes a `grade_story_async` tool that uses parallel LLM processing with multiple specialized agents (proofreader, fact checker, style enforcer, and grader). Demonstrates both automatic instrumentation by mcp-agent and manual OpenTelemetry span creation.
- `mcp_agent.config.yaml` – configures the execution engine and logging, and enables OpenTelemetry with a custom service name.
- `mcp_agent.secrets.yaml.example` – template for configuring API keys and the Langfuse OTLP exporter endpoint with authentication headers.
- `requirements.txt` – lists dependencies, including mcp-agent and OpenAI.

## Features

- **Automatic instrumentation**: Tool calls, workflows, and LLM interactions are automatically traced by mcp-agent
- **Custom tracing**: Manual OpenTelemetry spans with custom attributes (see the sketch below)
- **Langfuse integration**: OTLP exporter configuration for sending traces to Langfuse; you can alternatively use your preferred OTLP exporter endpoint
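The custom-tracing pattern is plain OpenTelemetry. A minimal sketch of what `main.py` does (the span and attribute names are illustrative):

```python
from opentelemetry import trace


def get_magic_number(original_number: int = 0) -> int:
    # Spans created manually are exported through the same OTLP pipeline
    # as mcp-agent's automatic traces, so they nest in the Langfuse trace tree.
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("some_tool_function") as span:
        span.set_attribute("example.attribute", "value")
        result = 42 + original_number
        span.set_attribute("result", result)
        return result
```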
## Prerequisites

- Python 3.10+
- [UV](https://github.com/astral-sh/uv) package manager
- API key for OpenAI
- Langfuse account (for observability dashboards)

## Configuration

Before running the example, you'll need to configure API keys and observability settings.

### API Keys and Observability Setup

1. Copy the example secrets file:

   ```bash
   cd examples/cloud/observability
   cp mcp_agent.secrets.yaml.example mcp_agent.secrets.yaml
   ```

2. Edit `mcp_agent.secrets.yaml` to add your credentials:

   ```yaml
   openai:
     api_key: "your-openai-api-key"

   otel:
     exporters:
       - otlp:
           endpoint: "https://us.cloud.langfuse.com/api/public/otel/v1/traces"
           headers:
             Authorization: "Basic AUTH_STRING"
   ```

3. Generate the Langfuse basic auth token:

   a. Sign up for a [Langfuse account](https://langfuse.com/) if you don't have one

   b. Obtain your Langfuse public and secret keys from the project settings

   c. Generate the base64-encoded basic auth token:

      ```bash
      echo -n "pk-lf-YOUR-PUBLIC-KEY:sk-lf-YOUR-SECRET-KEY" | base64
      ```

   d. Replace `AUTH_STRING` in the config with the generated base64 string

> See the [Langfuse OpenTelemetry documentation](https://langfuse.com/integrations/native/opentelemetry#opentelemetry-endpoint) for more details, including the OTLP endpoint for the EU data region.
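If you prefer Python to the shell for generating the token, an equivalent sketch (the keys are placeholders):

```python
import base64

public_key = "pk-lf-YOUR-PUBLIC-KEY"  # placeholder Langfuse public key
secret_key = "sk-lf-YOUR-SECRET-KEY"  # placeholder Langfuse secret key

# Basic auth is base64("public_key:secret_key")
auth_string = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
print(f'Authorization: "Basic {auth_string}"')
```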
## Test Locally

1. Install dependencies:

   ```bash
   uv pip install -r requirements.txt
   ```

2. Start the mcp-agent server locally with SSE transport:

   ```bash
   uv run main.py
   ```

3. Use [MCP Inspector](https://github.com/modelcontextprotocol/inspector) to explore and test the server (a programmatic alternative is sketched after this list):

   ```bash
   npx @modelcontextprotocol/inspector --transport sse --server-url http://127.0.0.1:8000/sse
   ```

4. In MCP Inspector, test the `grade_story_async` tool with a sample story. The tool will:

   - Create a custom trace span for the magic number calculation
   - Automatically trace the parallel LLM execution
   - Send all traces to Langfuse for visualization

5. View your traces in the Langfuse dashboard to see:

   - Complete execution flow
   - Timing for each agent
   - LLM calls and responses
   - Custom span attributes
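As a programmatic alternative to Inspector, the server can be exercised with the MCP Python SDK's SSE client. A minimal sketch, assuming the server from step 2 is running on port 8000:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def smoke_test() -> None:
    # Connect to the locally running SSE server
    async with sse_client("http://127.0.0.1:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [tool.name for tool in tools.tools])
            result = await session.call_tool(
                "grade_story_async", {"story": "Once upon a time..."}
            )
            print(result)


asyncio.run(smoke_test())
```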
## Deploy to mcp-agent Cloud

You can deploy this app as a hosted mcp-agent app in the cloud.

1. In your terminal, authenticate into mcp-agent cloud by running:

   ```bash
   uv run mcp-agent login
   ```

2. You will be redirected to the login page; create an mcp-agent cloud account through Google or GitHub.

3. Set up your mcp-agent cloud API key and paste it into your terminal when prompted:

   ```bash
   uv run mcp-agent login
   INFO: Directing to MCP Agent Cloud API login...
   Please enter your API key 🔑:
   ```

4. In your terminal, deploy the MCP app:

   ```bash
   uv run mcp-agent deploy observability-example
   ```

5. When prompted, specify the type of secret to save your API keys. Select (1) deployment secret so that they are available to the deployed server.

The `deploy` command will bundle the app files and deploy them, producing a server URL of the form `https://<server_id>.deployments.mcp-agent.com`.
## MCP Clients

Since the mcp-agent app is exposed as an MCP server, it can be used in any MCP client just like any other MCP server.

### MCP Inspector

You can inspect and test the deployed server using [MCP Inspector](https://github.com/modelcontextprotocol/inspector):

```bash
npx @modelcontextprotocol/inspector --transport sse --server-url https://<server_id>.deployments.mcp-agent.com/sse
```

This will launch the MCP Inspector UI where you can:

- See all available tools
- Test execution of the `grade_story_async` tool and the `ResearchWorkflow` workflow

Make sure Inspector is configured with the following settings:

| Setting          | Value                                                |
| ---------------- | ---------------------------------------------------- |
| _Transport Type_ | _SSE_                                                |
| _SSE_            | _https://[server_id].deployments.mcp-agent.com/sse_  |
| _Header Name_    | _Authorization_                                      |
| _Bearer Token_   | _your-mcp-agent-cloud-api-token_                     |

> [!TIP]
> In the Configuration, change the request timeout to a longer time period. Since your agents make LLM calls, requests will take longer than simple API calls.
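Other MCP clients can attach the same credentials programmatically. A sketch with the MCP Python SDK, passing the bearer token as an SSE header (the server ID and token are placeholders):

```python
from mcp import ClientSession
from mcp.client.sse import sse_client

URL = "https://<server_id>.deployments.mcp-agent.com/sse"  # placeholder server ID
HEADERS = {"Authorization": "Bearer your-mcp-agent-cloud-api-token"}  # placeholder token


async def call_deployed_tool(story: str) -> str:
    async with sse_client(URL, headers=HEADERS) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("grade_story_async", {"story": story})
            # Return the first text block of the tool result
            return result.content[0].text
```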
examples/cloud/observability/main.py

Lines changed: 131 additions & 0 deletions (new file)
```python
"""
Observability Example MCP App

This example demonstrates a very basic MCP app with observability features using OpenTelemetry.

mcp-agent automatically instruments workflows (runs, tasks/activities), tool calls, LLM calls, and more,
allowing you to trace and monitor the execution of your app. You can also add custom tracing spans as needed.
"""

import asyncio
from typing import List, Optional

from opentelemetry import trace

from mcp_agent.agents.agent import Agent
from mcp_agent.app import MCPApp
from mcp_agent.core.context import Context as AppContext
from mcp_agent.executor.workflow import Workflow
from mcp_agent.server.app_server import create_mcp_server_for_app
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.parallel.parallel_llm import ParallelLLM

app = MCPApp(name="observability_example_app")


# You can always explicitly trace using OpenTelemetry as usual
def get_magic_number(original_number: int = 0) -> int:
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("some_tool_function") as span:
        span.set_attribute("example.attribute", "value")
        result = 42 + original_number
        span.set_attribute("result", result)
        return result


# Workflows (runs, tasks/activities), tool calls, LLM calls, etc. are automatically traced by mcp-agent
@app.workflow_task()
async def gather_sources(query: str) -> list[str]:
    app.context.logger.info("Gathering sources", data={"query": query})
    return [f"https://example.com/search?q={query}"]


@app.workflow
class ResearchWorkflow(Workflow[None]):
    @app.workflow_run
    async def run(self, topic: str) -> List[str]:
        sources = await self.context.executor.execute(gather_sources, topic)
        self.context.logger.info(
            "Workflow completed", data={"topic": topic, "sources": sources}
        )
        return sources


@app.async_tool(name="grade_story_async")
async def grade_story_async(story: str, app_ctx: Optional[AppContext] = None) -> str:
    """
    Grade a student's short story by fanning out to specialized agents in parallel.

    Args:
        story: The student's short story to grade
        app_ctx: Optional MCPApp context for accessing app resources and logging
    """

    context = app_ctx or app.context
    await context.info(f"[grade_story_async] Received input: {story}")

    magic_number = get_magic_number(10)
    await context.info(f"[grade_story_async] Magic number computed: {magic_number}")

    proofreader = Agent(
        name="proofreader",
        instruction="""Review the short story for grammar, spelling, and punctuation errors.
        Identify any awkward phrasing or structural issues that could improve clarity.
        Provide detailed feedback on corrections.""",
    )

    fact_checker = Agent(
        name="fact_checker",
        instruction="""Verify the factual consistency within the story. Identify any contradictions,
        logical inconsistencies, or inaccuracies in the plot, character actions, or setting.
        Highlight potential issues with reasoning or coherence.""",
    )

    style_enforcer = Agent(
        name="style_enforcer",
        instruction="""Analyze the story for adherence to style guidelines.
        Evaluate the narrative flow, clarity of expression, and tone. Suggest improvements to
        enhance storytelling, readability, and engagement.""",
    )

    grader = Agent(
        name="grader",
        instruction="""Compile the feedback from the Proofreader and Fact Checker
        into a structured report. Summarize key issues and categorize them by type.
        Provide actionable recommendations for improving the story,
        and give an overall grade based on the feedback.""",
    )

    parallel = ParallelLLM(
        fan_in_agent=grader,
        fan_out_agents=[proofreader, fact_checker, style_enforcer],
        llm_factory=OpenAIAugmentedLLM,
        context=context,
    )

    await context.info("[grade_story_async] Starting parallel LLM")

    try:
        result = await parallel.generate_str(
            message=f"Student short story submission: {story}",
        )
    except Exception as e:
        await context.error(f"[grade_story_async] Error generating result: {e}")
        return ""

    if not result:
        await context.error("[grade_story_async] No result from parallel LLM")
        return ""

    return result


# NOTE: This main function is useful for local testing but will be ignored in the cloud deployment.
async def main():
    async with app.run() as agent_app:
        mcp_server = create_mcp_server_for_app(agent_app)
        await mcp_server.run_sse_async()


if __name__ == "__main__":
    asyncio.run(main())
```
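The custom span in `get_magic_number` only sets attributes; the same pattern extends to span events and nested child spans using standard OpenTelemetry APIs. A sketch (the function, span, and event names are illustrative):

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)


def summarize_story(story: str) -> int:
    with tracer.start_as_current_span("summarize_story") as span:
        span.set_attribute("story.chars", len(story))
        # Events mark timestamped points within a span and carry their own attributes
        span.add_event("received", {"story.chars": len(story)})
        with tracer.start_as_current_span("count_words"):
            # Nested spans appear as children in the Langfuse trace tree
            return len(story.split())
```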
examples/cloud/observability/mcp_agent.config.yaml

Lines changed: 11 additions & 0 deletions (new file)

```yaml
$schema: ../../schema/mcp-agent.config.schema.json

execution_engine: asyncio
logger:
  transports: [console]
  level: debug

otel:
  enabled: true
  service_name: "BasicObservabilityExample"
  # OTLP exporter endpoint and headers are configured in mcp_agent.secrets.yaml
```
examples/cloud/observability/mcp_agent.secrets.yaml.example

Lines changed: 14 additions & 0 deletions (new file)

```yaml
openai:
  api_key: sk-your-openai-key

otel:
  # Define the Langfuse OTLP exporter (including headers) here so
  # mcp_agent.config.yaml does not need a duplicate entry.
  # See https://langfuse.com/integrations/native/opentelemetry#opentelemetry-endpoint
  # for info on the OTLP endpoint for the EU data region and for the basic auth generation command:
  # `echo -n "pk-lf-1234567890:sk-lf-1234567890" | base64`
  exporters:
    - otlp:
        endpoint: "https://us.cloud.langfuse.com/api/public/otel/v1/traces"
        headers:
          Authorization: "Basic AUTH_STRING"
```
examples/cloud/observability/requirements.txt

Lines changed: 5 additions & 0 deletions (new file)

```
# Core framework dependency
mcp-agent @ file://../../../ # Link to the local mcp-agent project root

# Additional dependencies specific to this example
openai
```

examples/temporal/requirements.txt

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,5 +1,5 @@
 # Core framework dependency
-mcp-agent @ file://../../ # Link to the local mcp-agent project root. Remove @ file://../../ for cloud deployment
+mcp-agent @ file://../../ # Link to the local mcp-agent project root
 
 # Additional dependencies specific to this example
 anthropic
```
