Commit 16e33e8

Merge branch 'main' into schema-flattener

2 parents 6114437 + eae558b · commit 16e33e8

12 files changed: +98 −69 lines changed

docs/logfire.md

Lines changed: 16 additions & 35 deletions

````diff
@@ -106,49 +106,30 @@ We can also query data with SQL in Logfire to monitor the performance of an application
 
 ### Monitoring HTTP Requests
 
-!!! tip "\"F**k you, show me the prompt.\""
-    As per Hamel Husain's influential 2024 blog post ["Fuck You, Show Me The Prompt."](https://hamel.dev/blog/posts/prompt/)
-    (bear with the capitalization, the point is valid), it's often useful to be able to view the raw HTTP requests and responses made to model providers.
+As per Hamel Husain's influential 2024 blog post ["Fuck You, Show Me The Prompt."](https://hamel.dev/blog/posts/prompt/)
+(bear with the capitalization, the point is valid), it's often useful to be able to view the raw HTTP requests and responses made to model providers.
 
-To observe raw HTTP requests made to model providers, you can use Logfire's [HTTPX instrumentation](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) since all provider SDKs use the [HTTPX](https://www.python-httpx.org/) library internally.
+To observe raw HTTP requests made to model providers, you can use Logfire's [HTTPX instrumentation](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) since all provider SDKs (except for [Bedrock](models/bedrock.md)) use the [HTTPX](https://www.python-httpx.org/) library internally:
 
-=== "With HTTP instrumentation"
 
-    ```py {title="with_logfire_instrument_httpx.py" hl_lines="7"}
-    import logfire
-
-    from pydantic_ai import Agent
-
-    logfire.configure()
-    logfire.instrument_pydantic_ai()
-    logfire.instrument_httpx(capture_all=True)  # (1)!
-    agent = Agent('openai:gpt-5')
-    result = agent.run_sync('What is the capital of France?')
-    print(result.output)
-    #> The capital of France is Paris.
-    ```
-
-    1. See the [`logfire.instrument_httpx` docs][logfire.Logfire.instrument_httpx] more details, `capture_all=True` means both headers and body are captured for both the request and response.
-
-    ![Logfire with HTTPX instrumentation](img/logfire-with-httpx.png)
-
-=== "Without HTTP instrumentation"
+```py {title="with_logfire_instrument_httpx.py" hl_lines="7"}
+import logfire
 
-    ```py {title="without_logfire_instrument_httpx.py"}
-    import logfire
+from pydantic_ai import Agent
 
-    from pydantic_ai import Agent
+logfire.configure()
+logfire.instrument_pydantic_ai()
+logfire.instrument_httpx(capture_all=True)  # (1)!
 
-    logfire.configure()
-    logfire.instrument_pydantic_ai()
+agent = Agent('openai:gpt-5')
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+#> The capital of France is Paris.
+```
 
-    agent = Agent('openai:gpt-5')
-    result = agent.run_sync('What is the capital of France?')
-    print(result.output)
-    #> The capital of France is Paris.
-    ```
+1. See the [`logfire.instrument_httpx` docs][logfire.Logfire.instrument_httpx] more details, `capture_all=True` means both headers and body are captured for both the request and response.
 
-    ![Logfire without HTTPX instrumentation](img/logfire-without-httpx.png)
+![Logfire with HTTPX instrumentation](img/logfire-with-httpx.png)
 
 ## Using OpenTelemetry
 
````
docs/mcp/client.md

Lines changed: 23 additions & 0 deletions

````diff
@@ -338,6 +338,29 @@ calculator_server = MCPServerSSE(
 agent = Agent('openai:gpt-5', toolsets=[weather_server, calculator_server])
 ```
 
+## Server Instructions
+
+MCP servers can provide instructions during initialization that give context about how to best interact with the server's tools. These instructions are accessible via the [`instructions`][pydantic_ai.mcp.MCPServer.instructions] property after the server connection is established.
+
+```python {title="mcp_server_instructions.py"}
+from pydantic_ai import Agent
+from pydantic_ai.mcp import MCPServerStreamableHTTP
+
+server = MCPServerStreamableHTTP('http://localhost:8000/mcp')
+agent = Agent('openai:gpt-5', toolsets=[server])
+
+@agent.instructions
+async def mcp_server_instructions():
+    return server.instructions  # (1)!
+
+async def main():
+    result = await agent.run('What is 7 plus 5?')
+    print(result.output)
+    #> The answer is 12.
+```
+
+1. The server connection is guaranteed to be established by this point, so `server.instructions` is available.
+
 ## Tool metadata
 
 MCP tools can include metadata that provides additional information about the tool's characteristics, which can be useful when [filtering tools][pydantic_ai.toolsets.FilteredToolset]. The `meta`, `annotations`, and `output_schema` fields can be found on the `metadata` dict on the [`ToolDefinition`][pydantic_ai.tools.ToolDefinition] object that's passed to filter functions.
````
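For instance, here's a minimal sketch of metadata-based filtering (not part of the commit): it assumes the server populates an `annotations` entry carrying the MCP `readOnlyHint` flag and that the `metadata` values are plain dicts; adjust the keys to whatever your server actually sends.

```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

server = MCPServerStreamableHTTP('http://localhost:8000/mcp')

# Keep only tools the server marks as read-only; `readOnlyHint` inside the
# `annotations` entry is an assumption about this particular server's metadata.
read_only_tools = server.filtered(
    lambda ctx, tool_def: bool(
        ((tool_def.metadata or {}).get('annotations') or {}).get('readOnlyHint')
    )
)

agent = Agent('openai:gpt-5', toolsets=[read_only_tools])
```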

pydantic_ai_slim/pydantic_ai/_agent_graph.py

Lines changed: 6 additions & 0 deletions

````diff
@@ -216,6 +216,12 @@ async def run(  # noqa: C901
         ctx.state.message_history = messages
         ctx.deps.new_message_index = len(messages)
 
+        # Validate that message history starts with a user message
+        if messages and isinstance(messages[0], _messages.ModelResponse):
+            raise exceptions.UserError(
+                'Message history cannot start with a `ModelResponse`. Conversations must begin with a user message.'
+            )
+
         if self.deferred_tool_results is not None:
             return await self._handle_deferred_tool_results(self.deferred_tool_results, messages, ctx)
````

pydantic_ai_slim/pydantic_ai/_output.py

Lines changed: 4 additions & 30 deletions

````diff
@@ -470,7 +470,7 @@ def __init__(
         allows_image: bool,
     ):
         super().__init__(
-            processor=PromptedOutputProcessor(processor),
+            processor=processor,
             allows_deferred_tools=allows_deferred_tools,
             allows_image=allows_image,
         )
@@ -494,13 +494,6 @@ def build_instructions(cls, template: str, object_def: OutputObjectDefinition) -
 
         return template.format(schema=json.dumps(schema))
 
-    def instructions(self, default_template: str) -> str:  # pragma: no cover
-        """Get instructions to tell model to output JSON matching the schema."""
-        template = self.template or default_template
-        object_def = self.object_def
-        assert object_def is not None
-        return self.build_instructions(template, object_def)
-
 
 @dataclass(init=False)
 class ToolOutputSchema(OutputSchema[OutputDataT]):
@@ -542,28 +535,6 @@ class BaseObjectOutputProcessor(BaseOutputProcessor[OutputDataT]):
     object_def: OutputObjectDefinition
 
 
-@dataclass(init=False)
-class PromptedOutputProcessor(BaseObjectOutputProcessor[OutputDataT]):
-    wrapped: BaseObjectOutputProcessor[OutputDataT]
-
-    def __init__(self, wrapped: BaseObjectOutputProcessor[OutputDataT]):
-        self.wrapped = wrapped
-        super().__init__(object_def=wrapped.object_def)
-
-    async def process(
-        self,
-        data: str,
-        run_context: RunContext[AgentDepsT],
-        allow_partial: bool = False,
-        wrap_validation_errors: bool = True,
-    ) -> OutputDataT:
-        text = _utils.strip_markdown_fences(data)
-
-        return await self.wrapped.process(
-            text, run_context, allow_partial=allow_partial, wrap_validation_errors=wrap_validation_errors
-        )
-
-
 @dataclass(init=False)
 class ObjectOutputProcessor(BaseObjectOutputProcessor[OutputDataT]):
     outer_typed_dict_key: str | None = None
@@ -653,6 +624,9 @@ async def process(
         Returns:
             Either the validated output data (left) or a retry message (right).
         """
+        if isinstance(data, str):
+            data = _utils.strip_markdown_fences(data)
+
         try:
             output = self.validate(data, allow_partial)
         except ValidationError as e:
````

pydantic_ai_slim/pydantic_ai/_utils.py

Lines changed: 4 additions & 2 deletions

````diff
@@ -467,12 +467,14 @@ def validate_empty_kwargs(_kwargs: dict[str, Any]) -> None:
         raise exceptions.UserError(f'Unknown keyword arguments: {unknown_kwargs}')
 
 
+_MARKDOWN_FENCES_PATTERN = re.compile(r'```(?:\w+)?\n(\{.*\})', flags=re.DOTALL)
+
+
 def strip_markdown_fences(text: str) -> str:
     if text.startswith('{'):
         return text
 
-    regex = r'```(?:\w+)?\n(\{.*\})\n```'
-    match = re.search(regex, text, re.DOTALL)
+    match = re.search(_MARKDOWN_FENCES_PATTERN, text)
     if match:
         return match.group(1)
````
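As a standalone illustration (not part of the commit) of what this pattern extracts, here is the regex exactly as it appears in the added line, run against a fenced JSON reply:

```python
import re

# Optional language tag after the opening fence, then a greedy capture of a
# JSON object; DOTALL lets `.` span newlines inside the object.
pattern = re.compile(r'```(?:\w+)?\n(\{.*\})', flags=re.DOTALL)

text = 'Sure, here is the JSON:\n```json\n{"city": "Paris"}\n```'
match = pattern.search(text)
assert match is not None
print(match.group(1))
#> {"city": "Paris"}
```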

pydantic_ai_slim/pydantic_ai/mcp.py

Lines changed: 11 additions & 0 deletions

````diff
@@ -122,6 +122,7 @@ class MCPServer(AbstractToolset[Any], ABC):
     _read_stream: MemoryObjectReceiveStream[SessionMessage | Exception]
     _write_stream: MemoryObjectSendStream[SessionMessage]
     _server_info: mcp_types.Implementation
+    _instructions: str | None
 
     def __init__(
         self,
@@ -200,6 +201,15 @@ def server_info(self) -> mcp_types.Implementation:
             )
         return self._server_info
 
+    @property
+    def instructions(self) -> str | None:
+        """Access the instructions sent by the MCP server during initialization."""
+        if not hasattr(self, '_instructions'):
+            raise AttributeError(
+                f'The `{self.__class__.__name__}.instructions` is only available after initialization.'
+            )
+        return self._instructions
+
     async def list_tools(self) -> list[mcp_types.Tool]:
         """Retrieve tools that are currently active on the server.
 
@@ -337,6 +347,7 @@ async def __aenter__(self) -> Self:
         with anyio.fail_after(self.timeout):
             result = await self._client.initialize()
             self._server_info = result.serverInfo
+            self._instructions = result.instructions
         if log_level := self.log_level:
             await self._client.set_logging_level(log_level)
````
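A quick usage sketch (not part of the commit; the URL is a placeholder): before the server's context is entered, the property raises the `AttributeError` above, and once `__aenter__` has run it returns whatever the server sent, possibly `None`:

```python
import asyncio

from pydantic_ai.mcp import MCPServerStreamableHTTP

server = MCPServerStreamableHTTP('http://localhost:8000/mcp')

async def main():
    try:
        server.instructions  # initialize() hasn't run yet, so this raises
    except AttributeError as e:
        print(e)
    async with server:  # __aenter__ calls initialize() and stores the instructions
        print(server.instructions)

asyncio.run(main())
```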

tests/mcp_server.py

Lines changed: 1 addition & 1 deletion

````diff
@@ -16,7 +16,7 @@
 )
 from pydantic import AnyUrl, BaseModel
 
-mcp = FastMCP('Pydantic AI MCP Server')
+mcp = FastMCP('Pydantic AI MCP Server', instructions='Be a helpful assistant.')
 log_level = 'unset'
 
 
````
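For context, this is roughly what a complete FastMCP server declaring instructions looks like (a sketch against the `mcp` Python SDK, not part of the commit; the server name and tool are illustrative):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP('Calculator', instructions='Use the add tool for arithmetic.')

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

if __name__ == '__main__':
    mcp.run()  # defaults to the stdio transport
```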

tests/models/test_outlines.py

Lines changed: 3 additions & 1 deletion

````diff
@@ -573,6 +573,7 @@ def test_input_format(transformers_multimodal_model: OutlinesModel, binary_image
 
     # unsupported: tool calls
    tool_call_message_history: list[ModelMessage] = [
+        ModelRequest(parts=[UserPromptPart(content='some user prompt')]),
         ModelResponse(parts=[ToolCallPart(tool_call_id='1', tool_name='get_location')]),
         ModelRequest(parts=[ToolReturnPart(tool_name='get_location', content='London', tool_call_id='1')]),
     ]
@@ -588,7 +589,8 @@
 
     # unsupported: non-image file parts
     file_part_message_history: list[ModelMessage] = [
-        ModelResponse(parts=[FilePart(content=BinaryContent(data=b'test', media_type='text/plain'))])
+        ModelRequest(parts=[UserPromptPart(content='some user prompt')]),
+        ModelResponse(parts=[FilePart(content=BinaryContent(data=b'test', media_type='text/plain'))]),
     ]
     with pytest.raises(
         UserError, match='File parts other than `BinaryImage` are not supported for Outlines models yet.'
````

tests/test_agent.py

Lines changed: 16 additions & 0 deletions

````diff
@@ -6132,3 +6132,19 @@ def llm(messages: list[ModelMessage], _info: AgentInfo) -> ModelResponse:
         ]
     )
     assert run.all_messages_json().startswith(b'[{"parts":[{"content":"Hello",')
+
+
+def test_message_history_cannot_start_with_model_response():
+    """Test that message history starting with ModelResponse raises UserError."""
+
+    agent = Agent('test')
+
+    invalid_history = [
+        ModelResponse(parts=[TextPart(content='ai response')]),
+    ]
+
+    with pytest.raises(
+        UserError,
+        match='Message history cannot start with a `ModelResponse`.',
+    ):
+        agent.run_sync('hello', message_history=invalid_history)
````

tests/test_examples.py

Lines changed: 4 additions & 0 deletions

````diff
@@ -304,6 +304,10 @@ class MockMCPServer(AbstractToolset[Any]):
     def id(self) -> str | None:
         return None  # pragma: no cover
 
+    @property
+    def instructions(self) -> str | None:
+        return None
+
     async def __aenter__(self) -> MockMCPServer:
         return self
 
````