52 commits
- 04612c3 Add resource management to MCPAggregator and Agent classes (StreetLamb, May 27, 2025)
- b2ad462 Update MCPAggregator to fetch resources using URI (StreetLamb, May 28, 2025)
- c995ec5 Add utility modules for content and resource handling (StreetLamb, May 28, 2025)
- 2d4f3bd Add resource URI parameter to LLM generation methods in AugmentedLLM … (StreetLamb, May 28, 2025)
- f9161bf Fix type conversion for resource URI in MCPAggregator (StreetLamb, May 28, 2025)
- 6967174 Add basic example to showcase using mcp resources (StreetLamb, May 28, 2025)
- 6bbb18d Refactor resource URI handling in AugmentedLLM and OpenAIAugmentedLLM… (StreetLamb, May 29, 2025)
- d5265a5 Enhance MCP Primitives example to demonstrate resource usage and serv… (StreetLamb, May 29, 2025)
- 3b6be04 Add OpenAIConverter class for converting MCP message types to OpenAI … (StreetLamb, May 29, 2025)
- fec2a5a Add AnthropicConverter for converting MCP message types to Anthropic … (StreetLamb, May 29, 2025)
- fbda14a Add PromptMessageMultipart class for handling multiple content parts … (StreetLamb, May 29, 2025)
- f8c844a Add resource_uris parameter to generate_str and related methods in An… (StreetLamb, May 29, 2025)
- efb8108 Add resource_uris parameter to generate methods in BedrockAugmentedLL… (StreetLamb, May 29, 2025)
- 6d4d6c6 Add resource handling in AzureAugmentedLLM and implement AzureConvert… (StreetLamb, May 29, 2025)
- 38f2b60 Add resource handling and GoogleConverter for multipart message conve… (StreetLamb, May 29, 2025)
- b6db823 Add resource_uris parameter to generate_structured method in OllamaAu… (StreetLamb, May 29, 2025)
- 1e0f411 Merge branch 'main' of https://github.com/lastmile-ai/mcp-agent into … (StreetLamb, May 29, 2025)
- 551a075 Refactor resource handling in LLM classes to use attached resources i… (StreetLamb, May 30, 2025)
- 874537e Add prompt attachment functionality to LLM classes and update message… (StreetLamb, May 30, 2025)
- 9c908aa Add demo server implementation with resource and prompt handling (StreetLamb, May 30, 2025)
- 345fecf Update README to include prompts in MCP primitives example (StreetLamb, May 30, 2025)
- b034621 Refactor settings in main.py (StreetLamb, May 30, 2025)
- 7d9120f Refactor LLM message handling to integrate PromptMessage support and … (StreetLamb, May 31, 2025)
- 0d81b9a Remove unused settings and health status resources from demo server; … (StreetLamb, May 31, 2025)
- effa673 Refactor and add comments in example (StreetLamb, May 31, 2025)
- c27b588 Refactor assertion in TestAnthropicAugmentedLLM to improve readability (StreetLamb, May 31, 2025)
- 66550c1 Add create_prompt method to generate prompt messages from names and r… (StreetLamb, Jun 1, 2025)
- aa23e59 Update README and main.py to reflect changes in resource and prompt r… (StreetLamb, Jun 1, 2025)
- fced30c Enhance MCPAggregator to return resources alongside tools and prompts… (StreetLamb, Jun 1, 2025)
- e4a256b Add comprehensive tests for MIME utilities and multipart converters (StreetLamb, Jun 1, 2025)
- 332d71d Refactor resource URI handling to use AnyUrl for improved type safety… (StreetLamb, Jun 1, 2025)
- b096b6e Fix exception class docstring and update file tags in OpenAIConverter… (StreetLamb, Jun 1, 2025)
- ccdd310 Add comprehensive tests for Azure, Bedrock, and Google multipart conv… (StreetLamb, Jun 1, 2025)
- 5da0ec4 Minor code formatting (StreetLamb, Jun 1, 2025)
- d6bc44c Add tests for generating responses with various input types in Augmen… (StreetLamb, Jun 1, 2025)
- fa96d30 Refactor message conversion methods to use unified mixed message hand… (StreetLamb, Jun 1, 2025)
- 11956c5 Refactor message tracing logic in AugmentedLLM to simplify attribute … (StreetLamb, Jun 1, 2025)
- bd65320 Minor formatting (StreetLamb, Jun 1, 2025)
- cbbbd9c Refactor AzureConverter tests to assert list content structure for te… (StreetLamb, Jun 1, 2025)
- 2ba325f Fix potential issues raised by coderabbitai (StreetLamb, Jun 1, 2025)
- 7a8ece7 Refactor URI handling to use str() for better compatibility and clari… (StreetLamb, Jun 1, 2025)
- 5b47509 Refactor URI handling in Azure and Google converters to use str() for… (StreetLamb, Jun 2, 2025)
- 6242777 Remove unnecessary import of AnyUrl in test_create_fallback_text_with… (StreetLamb, Jun 2, 2025)
- a1f43e0 Add async get_poem tool to retrieve poems based on a topic; fix loggi… (StreetLamb, Jun 3, 2025)
- 22242c9 Store active LLM instance in context for MCP sampling callbacks; upda… (StreetLamb, Jun 4, 2025)
- ad3e04a Implement SamplingHandler for human-in-the-loop sampling requests; re… (StreetLamb, Jun 4, 2025)
- e1eee3a Refactor human approval workflow in SamplingHandler to include reject… (StreetLamb, Jun 4, 2025)
- cf6f205 Update requirements and lock files to include fastmcp dependency and … (StreetLamb, Jun 4, 2025)
- e81f8b2 Refactor get_poem tool to get_haiku; update sampling logic for haiku … (StreetLamb, Jun 4, 2025)
- 26eb541 Refactor demo server to streamline user data retrieval; update main a… (StreetLamb, Jun 4, 2025)
- 3601bd0 Merge branch 'main' of https://github.com/lastmile-ai/mcp-agent into … (StreetLamb, Jun 24, 2025)
- 8dcbf8b Remove external FastMCP dependency, update example to use native Fast… (StreetLamb, Jun 24, 2025)
examples/mcp_primitives/mcp_basic_agent/demo_server.py (91 changes: 49 additions & 42 deletions)
@@ -1,54 +1,14 @@
from mcp.server.fastmcp import FastMCP
import datetime
from mcp.types import ModelPreferences, ModelHint, SamplingMessage, TextContent
import json

# Store server start time
SERVER_START_TIME = datetime.datetime.utcnow()

mcp = FastMCP("Resource Demo MCP Server")

# Define some static resources
STATIC_RESOURCES = {
    "demo://docs/readme": {
        "name": "README",
        "description": "A sample README file.",
        "content_type": "text/markdown",
        "content": "# Demo Resource Server\n\nThis is a sample README resource provided by the demo MCP server.",
    },
    "demo://data/users": {
        "name": "User Data",
        "description": "Sample user data in JSON format.",
        "content_type": "application/json",
        "content": json.dumps(
            [
                {"id": 1, "name": "Alice"},
                {"id": 2, "name": "Bob"},
                {"id": 3, "name": "Charlie"},
            ],
            indent=2,
        ),
    },
}


@mcp.resource("demo://docs/readme")
def get_readme():
    """Provide the README file content."""
    meta = STATIC_RESOURCES["demo://docs/readme"]
    return meta["content"]


@mcp.resource("demo://data/users")
def get_users():
    """Provide user data."""
    meta = STATIC_RESOURCES["demo://data/users"]
    return meta["content"]


@mcp.resource("demo://{city}/weather")
def get_weather(city: str) -> str:
    """Provide a simple weather report for a given city."""
    return f"It is sunny in {city} today!"
    return "# Demo Resource Server\n\nThis is a sample README resource provided by the demo MCP server."


@mcp.prompt()
@@ -60,6 +20,53 @@ def echo(message: str) -> str:
    return f"Prompt: {message}"


@mcp.resource("demo://data/friends")
def get_users():
"""Provide my friend list."""
return (
json.dumps(
[
{"id": 1, "friend": "Alice"},
],
),
)


@mcp.prompt()
def get_haiku_prompt(topic: str) -> str:
    """Get a haiku prompt about a given topic."""
    return f"I am fascinated about {topic}. Can you generate a haiku combining {topic} + my friend name?"


@mcp.tool()
async def get_haiku(topic: str) -> str:
    """Get a haiku about a given topic."""
    haiku = await mcp.get_context().session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(
                    type="text", text=f"Generate a haiku about {topic}."
                ),
            )
        ],
        system_prompt="You are a poet.",
        max_tokens=100,
        temperature=0.7,
        model_preferences=ModelPreferences(
            hints=[ModelHint(name="gpt-4o-mini")],
            costPriority=0.1,
            speedPriority=0.8,
            intelligencePriority=0.1,
        ),
    )

    if isinstance(haiku.content, TextContent):
        return haiku.content.text
    else:
        return "Haiku generation failed, unexpected content type."


def main():
    """Main entry point for the MCP server."""
    mcp.run()
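`get_haiku` exercises MCP sampling: instead of calling a model itself, the server sends a `create_message` request back to the connected client and steers model selection through `ModelPreferences` (cost, speed, and intelligence priorities plus a `gpt-4o-mini` hint). Below is a minimal client-side sketch, assuming the stock `mcp` Python SDK; the stub callback, topic, and server path are illustrative only:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.types import CreateMessageResult, TextContent


async def stub_sampling_callback(context, params):
    # A real client would run its own LLM here; we return canned text.
    return CreateMessageResult(
        role="assistant",
        model="stub-model",
        content=TextContent(type="text", text="placeholder haiku"),
    )


async def main():
    server = StdioServerParameters(command="python", args=["demo_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(
            read, write, sampling_callback=stub_sampling_callback
        ) as session:
            await session.initialize()
            result = await session.call_tool("get_haiku", {"topic": "rivers"})
            print(result.content)


asyncio.run(main())
```

In mcp-agent itself, that callback role is played by `_handle_sampling_callback` in `mcp_agent_client_session.py`, shown later in this diff.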
examples/mcp_primitives/mcp_basic_agent/main.py (13 changes: 10 additions & 3 deletions)
@@ -11,6 +11,8 @@
)
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.human_input.handler import console_input_callback


settings = Settings(
    execution_engine="asyncio",
@@ -30,7 +32,9 @@

# Settings can either be specified programmatically,
# or loaded from mcp_agent.config.yaml/mcp_agent.secrets.yaml
app = MCPApp(name="mcp_basic_agent")  # settings=settings)
app = MCPApp(
    name="mcp_basic_agent", human_input_callback=console_input_callback
)  # settings=settings)


async def example_usage():
@@ -71,13 +75,16 @@ async def example_usage():
    )

    llm = await agent.attach_llm(OpenAIAugmentedLLM)
    res = await llm.generate_str(
    summary = await llm.generate_str(
        [
            "Summarise what are my prompts and resources?",
            *combined_messages,
        ]
    )
    logger.info(f"Summary: {res}")
    logger.info(f"Summary: {summary}")

    haiku = await llm.generate_str("Write me a haiku")
    logger.info(f"Haiku: {haiku}")


if __name__ == "__main__":
src/mcp_agent/agents/agent.py (2 changes: 2 additions & 0 deletions)
@@ -199,6 +199,8 @@ async def attach_llm(
            value = getattr(self.llm, attr, None)
            if value is not None:
                span.set_attribute(f"llm.{attr}", value)

        self.context.active_llm = self.llm
        return self.llm

    async def initialize(self, force: bool = False):
src/mcp_agent/core/context.py (3 changes: 3 additions & 0 deletions)
@@ -73,6 +73,9 @@ class Context(BaseModel):
    # Use this flag to conditionally serialize expensive data for tracing
    tracing_enabled: bool = False

    # Store the currently active LLM instance for MCP sampling callbacks
    active_llm: Optional[Any] = None  # AugmentedLLM instance

    model_config = ConfigDict(
        extra="allow",
        arbitrary_types_allowed=True,  # Tell Pydantic to defer type evaluation
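Together with the `attach_llm` change above, this field lets any component holding the shared `Context` reuse the most recently attached LLM, which is what the sampling callback relies on. A minimal sketch; the helper function is hypothetical, and `generate_str` is the same method `main.py` uses:

```python
from mcp_agent.core.context import Context


async def answer_with_active_llm(context: Context, prompt: str) -> str:
    # Reuse whichever LLM was most recently attached via Agent.attach_llm().
    llm = context.active_llm
    if llm is None:
        raise RuntimeError("No LLM has been attached to the context yet")
    return await llm.generate_str(prompt)
```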
src/mcp_agent/mcp/mcp_agent_client_session.py (44 changes: 6 additions & 38 deletions)
@@ -39,7 +39,6 @@
    Implementation,
    JSONRPCMessage,
    ServerRequest,
    TextContent,
    ListRootsResult,
    NotificationParams,
    RequestParams,
@@ -58,6 +57,7 @@
    MCP_TOOL_NAME,
)
from mcp_agent.tracing.telemetry import get_tracer, record_attributes
from mcp_agent.mcp.sampling_handler import SamplingHandler

if TYPE_CHECKING:
    from mcp_agent.core.context import Context
@@ -108,6 +108,7 @@ def __init__(
        )

        self.server_config: Optional[MCPServerSettings] = None
        self._sampling_handler = SamplingHandler(context=self.context)

        # Session ID handling for Streamable HTTP transport
        self._get_session_id_callback: Optional[Callable[[], str | None]] = None
@@ -320,45 +321,11 @@ async def _handle_sampling_callback(
        context: RequestContext["ClientSession", Any],
        params: CreateMessageRequestParams,
    ) -> CreateMessageResult | ErrorData:
        logger.info("Handling sampling request: %s", params)
        config = self.context.config
        logger.info(f"Handling sampling request: {params}")
        server_session = self.context.upstream_session
        if server_session is None:
            # TODO: saqadri - consider whether we should be handling the sampling request here as a client
            logger.warning(
                "Error: No upstream client available for sampling requests. Request:",
                data=params,
            )
            try:
                from anthropic import AsyncAnthropic

                client = AsyncAnthropic(api_key=config.anthropic.api_key)

                response = await client.messages.create(
                    model="claude-3-sonnet-20240229",
                    max_tokens=params.maxTokens,
                    messages=[
                        {
                            "role": m.role,
                            "content": m.content.text
                            if hasattr(m.content, "text")
                            else m.content.data,
                        }
                        for m in params.messages
                    ],
                    system=getattr(params, "systemPrompt", None),
                    temperature=getattr(params, "temperature", 0.7),
                    stop_sequences=getattr(params, "stopSequences", None),
                )

                return CreateMessageResult(
                    model="claude-3-sonnet-20240229",
                    role="assistant",
                    content=TextContent(type="text", text=response.content[0].text),
                )
            except Exception as e:
                logger.error(f"Error handling sampling request: {e}")
                return ErrorData(code=-32603, message=str(e))
            # Enhanced sampling with human approval workflow
            return await self._sampling_handler.handle_sampling_with_human_approval(params)
        else:
            try:
                # If a server_session is available, we'll pass-through the sampling request to the upstream client
@@ -376,6 +343,7 @@ except Exception as e:
            except Exception as e:
                return ErrorData(code=-32603, message=str(e))


    async def _handle_list_roots_callback(
        self,
        context: RequestContext["ClientSession", Any],
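`SamplingHandler` itself is added elsewhere in this PR (see commits `ad3e04a` and `e1eee3a`), so its implementation is not visible in this diff. A rough sketch of the shape such a handler might take, assuming only what the diff shows (a `context` constructor argument and a `handle_sampling_with_human_approval` method); every other name and detail below is an assumption:

```python
from mcp.types import (
    CreateMessageRequestParams,
    CreateMessageResult,
    ErrorData,
    TextContent,
)


class SamplingHandlerSketch:
    """Illustrative only; approximates the human-in-the-loop flow this PR wires in."""

    def __init__(self, context):
        self.context = context

    async def handle_sampling_with_human_approval(
        self, params: CreateMessageRequestParams
    ) -> CreateMessageResult | ErrorData:
        # Ask a human before fulfilling the server's sampling request
        # (the real handler uses the app's human-input callback).
        answer = input("Server requests an LLM completion. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return ErrorData(code=-32603, message="Sampling request rejected by user")

        # Reuse the LLM stored on the shared context by Agent.attach_llm().
        llm = self.context.active_llm
        prompt = "\n".join(
            m.content.text for m in params.messages if hasattr(m.content, "text")
        )
        text = await llm.generate_str(prompt)
        return CreateMessageResult(
            role="assistant",
            model="active-llm",  # a real handler would report the actual model used
            content=TextContent(type="text", text=text),
        )
```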
src/mcp_agent/mcp/mcp_aggregator.py (23 changes: 23 additions & 0 deletions)
@@ -1209,6 +1209,29 @@ def getter(item: NamespacedResource):
        # No match found
        return None, None

    def _find_server_name_from_uri(self, uri: str) -> str | None:
        """
        Find the server name from a resource URI.

        Args:
            uri: The URI of the resource.

        Returns:
            Server name if found, None otherwise.
        """
        capability_map = self._server_to_resource_map

        def getter(item: NamespacedResource):
            return str(item.resource.uri)

        for server_name, resources in capability_map.items():
            for resource in resources:
                if uri == getter(resource):
                    return server_name

        # No match found
        return None

    async def _start_server(self, server_name: str):
        if self.connection_persistence:
            logger.info(