Add Agent.to_mcp() method (#3076)
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
@@ -1,11 +1,19 @@

```python
import base64
from collections.abc import Sequence
from typing import Any, Literal, cast

import logfire
from pydantic.alias_generators import to_snake

from pydantic_ai.agent.abstract import AbstractAgent

from . import exceptions, messages
from .agent import AgentDepsT, OutputDataT

try:
    from mcp import types as mcp_types
    from mcp.server.lowlevel.server import Server, StructuredContent
    from mcp.types import Tool
except ImportError as _import_error:
    raise ImportError(
        'Please install the `mcp` package to use the MCP server, '
```
@@ -121,3 +129,45 @@ def map_from_sampling_content(

```python
        return messages.TextPart(content=content.text)
    else:
        raise NotImplementedError('Image and Audio responses in sampling are not yet supported')


def agent_to_mcp(
    agent: AbstractAgent[AgentDepsT, OutputDataT],
    *,
    server_name: str | None = None,
    tool_name: str | None = None,
    tool_description: str | None = None,
    # TODO(Marcelo): Should this actually be a factory that is created in every tool call?
    deps: AgentDepsT = None,
) -> Server:
    server_name = to_snake((server_name or agent.name or 'PydanticAI Agent').replace(' ', '_'))
    tool_name = to_snake((tool_name or agent.name or 'PydanticAI Agent').replace(' ', '_'))
```
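The server and tool names above are derived by replacing spaces with underscores and snake-casing the result. A rough, hand-rolled sketch of that normalization (a stand-in for pydantic's `to_snake`, assumed to behave similarly on these inputs, not the actual implementation):

```python
import re


def to_snake_sketch(name: str) -> str:
    """Illustrative stand-in for pydantic.alias_generators.to_snake."""
    underscored = name.replace(' ', '_')
    # Insert an underscore between a lowercase letter/digit and a following uppercase letter.
    return re.sub(r'(?<=[a-z0-9])(?=[A-Z])', '_', underscored).lower()
```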
DouweM marked this conversation as resolved.

```python
    app = Server(name=server_name)

    async def list_tools() -> list[Tool]:
        return [
            Tool(
                name=tool_name,
                description=tool_description,
                inputSchema={'type': 'object', 'properties': {'prompt': {'type': 'string'}}},
                # TODO(Marcelo): How do I get this?
                outputSchema={'type': 'object', 'properties': {}},
            )
        ]
```

Review thread on the `outputSchema` TODO:

> There's not currently a nice way to get this, but it'd be useful to have a new […] In the case of the […]

> Note that this changes a bit with some refactoring I did in #2970, but directionally it's the same: there's not currently a nice way to get this, and it's especially tricky for tool output, because we don't have a union of all types handy. I should be able to implement this pretty quickly though, once that images PR with the output types refactor merges.

> Should this PR wait for it then?

> Yep

> I'll wait for it then.
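The `inputSchema` above is plain JSON Schema. A tiny illustrative sketch (hypothetical helper, not part of the PR or the `mcp` library) of what checking incoming arguments against that schema amounts to; note the schema declares no `required` keys, so an empty argument dict also passes:

```python
PROMPT_SCHEMA = {'type': 'object', 'properties': {'prompt': {'type': 'string'}}}


def check_args(schema: dict, args: dict) -> bool:
    """Minimal validator: checks only the declared property types."""
    if schema.get('type') == 'object' and not isinstance(args, dict):
        return False
    type_map = {'string': str, 'object': dict}
    for key, subschema in schema.get('properties', {}).items():
        if key in args:
            expected = type_map.get(subschema.get('type'))
            if expected is not None and not isinstance(args[key], expected):
                return False
    return True
```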
```python
    async def call_tool(name: str, args: dict[str, Any]) -> StructuredContent:
        if name != tool_name:
            raise ValueError(f'Unknown tool: {name}')

        # TODO(Marcelo): Should we pass the `message_history` instead?
        prompt = cast(str, args['prompt'])

        logfire.info(f'Calling tool: {name} with args: {args}')

        result = await agent.run(user_prompt=prompt, deps=deps)

        return dict(result=result.output)
```

Review thread on the `message_history` TODO:

> I think just the prompt is fine; when would the LLM generate an entire message history?

> Hmm, I think the point is that we need to maintain the history in the session... Good point!

> We may need to create a database abstraction here. 🤔

> Are you sure the tool should be stateful like that? If it's essentially a subagent, wouldn't multiple calls be expected to start separate subagents? I think continuing the conversation should be explicit, with some conversation ID returned and passed in.

> Yes, if the client wants to create a new conversation, they can open a new session. The MCP spec handles this with a session ID.

Review thread on the `cast`:

> Can we use a typed dict for args so we don't have to cast?

> No, that would be incorrect... What I actually need is to check if […]

> I'd expect the library to validate that the args match the type hint, no?

> Explained via Slack; answering here: no.
```python
    app.list_tools()(list_tools)
    app.call_tool()(call_tool)

    return app
```

Review thread on the registration calls:

> These could be decorators, right?

> Decorators inside a function are treated as misused type-wise.

> Lame
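`list_tools()` and `call_tool()` here are decorator factories; calling the returned decorator directly, as the PR does, has exactly the same effect as using `@`-syntax. A stdlib sketch of why the two spellings are equivalent (the registry is illustrative, not the `mcp` Server API):

```python
def make_registry():
    handlers = {}

    def register(name: str):
        # Decorator factory: returns a decorator that records the function.
        def decorator(fn):
            handlers[name] = fn
            return fn
        return decorator

    return handlers, register


handlers, register = make_registry()


@register('a')  # decorator syntax
def handler_a():
    return 'a'


def handler_b():
    return 'b'


register('b')(handler_b)  # direct call: same effect as the decorator syntax
```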
Review comment on the `deps` parameter:

> I think a union of static deps and a deps factory makes sense, if the deps factory would get the tool call `_meta`.
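The suggestion above, accepting either static deps or a factory that receives the tool call's `_meta`, can be sketched as follows (names hypothetical; a callable deps object would be ambiguous under this check, which is a known limitation of the sketch):

```python
from typing import Any, Callable, TypeVar, Union

DepsT = TypeVar('DepsT')
# Either a ready-made deps value, or a factory taking the tool-call metadata.
DepsOrFactory = Union[DepsT, Callable[[dict[str, Any]], DepsT]]


def resolve_deps(deps: DepsOrFactory[DepsT], meta: dict[str, Any]) -> DepsT:
    """Call the factory with the tool-call metadata, or return static deps as-is."""
    return deps(meta) if callable(deps) else deps
```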