
Commit 4476b5f

Merge branch 'main' into trigger-cf-workflow
2 parents 9e676a1 + 41dd069 commit 4476b5f


76 files changed, +8154 −695 lines changed


.gitignore

Lines changed: 0 additions & 1 deletion
```diff
@@ -15,7 +15,6 @@ examples/pydantic_ai_examples/.chat_app_messages.sqlite
 .vscode/
 /question_graph_history.json
 /docs-site/.wrangler/
-/CLAUDE.md
 node_modules/
 **.idea/
 .coverage*
```

CLAUDE.md

Lines changed: 127 additions & 0 deletions
New file contents:

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Development Commands

### Core Development Tasks
- **Install dependencies**: `make install` (requires uv, pre-commit, and deno)
- **Run all checks**: `make` (format, lint, typecheck, test with coverage)
- **Format code**: `make format`
- **Lint code**: `make lint`
- **Type checking**: `make typecheck` (uses pyright) or `make typecheck-both` (pyright + mypy)
- **Run tests**: `make test` (with coverage) or `make test-fast` (parallel, no coverage)
- **Build docs**: `make docs` or `make docs-serve` (local development)

### Single Test Commands
- **Run specific test**: `uv run pytest tests/test_agent.py::test_function_name -v`
- **Run test file**: `uv run pytest tests/test_agent.py -v`
- **Run with debug**: `uv run pytest tests/test_agent.py -v -s`

### Multi-Python Testing
- **Install all Python versions**: `make install-all-python`
- **Test all Python versions**: `make test-all-python`

## Project Architecture

### Core Components

**Agent System (`pydantic_ai_slim/pydantic_ai/agent.py`)**
- `Agent[AgentDepsT, OutputDataT]`: Main orchestrator class with generic types for dependency injection and output validation
- Entry points: `run()`, `run_sync()`, `run_stream()` methods
- Handles tool management, system prompts, and model interaction

**Model Integration (`pydantic_ai_slim/pydantic_ai/models/`)**
- Unified interface across providers: OpenAI, Anthropic, Google, Groq, Cohere, Mistral, Bedrock, HuggingFace
- Model strings: `"openai:gpt-4o"`, `"anthropic:claude-3-5-sonnet"`, `"google:gemini-1.5-pro"`
- `ModelRequestParameters` for configuration, `StreamedResponse` for streaming

**Graph-based Execution (`pydantic_graph/` + `_agent_graph.py`)**
- State machine execution through: `UserPromptNode` → `ModelRequestNode` → `CallToolsNode` (see the sketch after this list)
- `GraphAgentState` maintains message history and usage tracking
- `GraphRunContext` provides execution context
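A minimal sketch of stepping through these nodes, assuming the public `Agent.iter()` API documented for Pydantic AI (the prompt is illustrative):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


async def main():
    # Drive the execution graph node by node instead of calling run()
    async with agent.iter('What is the capital of France?') as agent_run:
        async for node in agent_run:
            # Nodes arrive as UserPromptNode -> ModelRequestNode -> CallToolsNode -> End
            print(type(node).__name__)
    print(agent_run.result.output)
```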
**Tool System (`tools.py`, `toolsets/`)**
- `@agent.tool` decorator for function registration
- `RunContext[AgentDepsT]` provides dependency injection in tools
- Support for sync/async functions with automatic schema generation

**Output Handling**
- `TextOutput`: Plain text responses
- `ToolOutput`: Structured data via tool calls
- `NativeOutput`: Provider-specific structured output
- `PromptedOutput`: Prompt-based structured extraction (see the sketch after this list)
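A minimal sketch of wiring one of these markers into an agent; the `Fruit` and `Vehicle` models are illustrative, and the `NativeOutput` arguments follow the pattern used in `docs/output.md`:

```python
from pydantic import BaseModel

from pydantic_ai import Agent, NativeOutput


class Fruit(BaseModel):
    name: str
    color: str


class Vehicle(BaseModel):
    name: str
    wheels: int


agent = Agent(
    'openai:gpt-4o',
    output_type=NativeOutput(
        [Fruit, Vehicle],
        name='Fruit_or_vehicle',
        description='Return a fruit or vehicle.',
    ),
)
# ToolOutput, PromptedOutput, and TextOutput are swapped in via output_type the same way.
```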
### Key Design Patterns

**Dependency Injection**
```python
@dataclass
class MyDeps:
    database: DatabaseConn

agent = Agent('openai:gpt-4o', deps_type=MyDeps)

@agent.tool
async def get_data(ctx: RunContext[MyDeps]) -> str:
    return await ctx.deps.database.fetch_data()
```

**Type-Safe Agents**
```python
class OutputModel(BaseModel):
    result: str
    confidence: float

agent: Agent[MyDeps, OutputModel] = Agent(
    'openai:gpt-4o',
    deps_type=MyDeps,
    output_type=OutputModel
)
```

## Workspace Structure

This is a uv workspace with multiple packages:
- **`pydantic_ai_slim/`**: Core framework (minimal dependencies)
- **`pydantic_evals/`**: Evaluation system
- **`pydantic_graph/`**: Graph execution engine
- **`examples/`**: Example applications
- **`clai/`**: CLI tool
- **`mcp-run-python/`**: MCP server implementation (Deno/TypeScript)

## Testing Strategy

- **Unit tests**: `tests/` directory with comprehensive model and component coverage
- **VCR cassettes**: `tests/cassettes/` for recorded LLM API interactions
- **Test models**: Use `TestModel` for deterministic testing (see the sketch after this list)
- **Examples testing**: `tests/test_examples.py` validates all documentation examples
- **Multi-version testing**: Python 3.9-3.13 support
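A minimal sketch of the `TestModel` override pattern for a deterministic test (the agent and test names are illustrative):

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent('openai:gpt-4o')


async def test_agent_without_real_model():
    # Swap the configured model for TestModel so the test makes no API calls
    with agent.override(model=TestModel()):
        result = await agent.run('Hello')
    assert result.output  # TestModel returns deterministic synthetic output
```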
## Key Configuration Files

- **`pyproject.toml`**: Main workspace configuration with dependency groups
- **`pydantic_ai_slim/pyproject.toml`**: Core package with model optional dependencies
- **`Makefile`**: Development task automation
- **`uv.lock`**: Locked dependencies for reproducible builds

## Important Implementation Notes

- **Model Provider Integration**: Each provider in the `models/` directory implements the `Model` abstract base class
- **Message System**: Vendor-agnostic message format in `messages.py` with rich content type support
- **Streaming Architecture**: Real-time response processing with validation during streaming
- **Error Handling**: Specific exception types with retry mechanisms at multiple levels
- **OpenTelemetry Integration**: Built-in observability support

## Documentation Development

- **Local docs**: `make docs-serve` (serves at http://localhost:8000)
- **Docs source**: `docs/` directory (MkDocs with Material theme)
- **API reference**: Auto-generated from docstrings using mkdocstrings

## Dependencies Management

- **Package manager**: uv (fast Python package manager)
- **Lock file**: `uv.lock` (commit this file)
- **Sync command**: `make sync` to update dependencies
- **Optional extras**: Define groups in `pyproject.toml` optional-dependencies

docs/ag-ui.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -150,15 +150,15 @@ app = agent.to_ag_ui(deps=StateDeps(DocumentState()))


 @agent.tool
-def update_state(ctx: RunContext[StateDeps[DocumentState]]) -> StateSnapshotEvent:
+async def update_state(ctx: RunContext[StateDeps[DocumentState]]) -> StateSnapshotEvent:
     return StateSnapshotEvent(
         type=EventType.STATE_SNAPSHOT,
         snapshot=ctx.deps.state,
     )


 @agent.tool_plain
-def custom_events() -> list[CustomEvent]:
+async def custom_events() -> list[CustomEvent]:
     return [
         CustomEvent(
             type=EventType.CUSTOM,
```

docs/api/messages.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -10,6 +10,7 @@ graph RL
     RetryPromptPart(RetryPromptPart) --- ModelRequestPart
     TextPart(TextPart) --- ModelResponsePart
     ToolCallPart(ToolCallPart) --- ModelResponsePart
+    ThinkingPart(ThinkingPart) --- ModelResponsePart
     ModelRequestPart("ModelRequestPart<br>(Union)") --- ModelRequest
     ModelRequest("ModelRequest(parts=list[...])") --- ModelMessage
     ModelResponsePart("ModelResponsePart<br>(Union)") --- ModelResponse
```

docs/api/providers.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -32,4 +32,8 @@

 ::: pydantic_ai.providers.openrouter.OpenRouterProvider

+::: pydantic_ai.providers.vercel.VercelProvider
+
 ::: pydantic_ai.providers.huggingface.HuggingFaceProvider
+
+::: pydantic_ai.providers.moonshotai.MoonshotAIProvider
```

docs/mcp/client.md

Lines changed: 48 additions & 0 deletions
The following section is added immediately before the existing "## MCP Sampling" section:

## Custom TLS / SSL configuration

In some environments you need to tweak how HTTPS connections are established – for example to trust an internal Certificate Authority, present a client certificate for **mTLS**, or (during local development only!) disable certificate verification altogether. All HTTP-based MCP client classes ([`MCPServerStreamableHTTP`][pydantic_ai.mcp.MCPServerStreamableHTTP] and [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE]) expose an `http_client` parameter that lets you pass your own pre-configured [`httpx.AsyncClient`](https://www.python-httpx.org/async/).

```python {title="mcp_custom_tls_client.py" py="3.10"}
import httpx
import ssl

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE


# Trust an internal / self-signed CA
ssl_ctx = ssl.create_default_context(cafile="/etc/ssl/private/my_company_ca.pem")

# OPTIONAL: if the server requires **mutual TLS**, load your client certificate
ssl_ctx.load_cert_chain(certfile="/etc/ssl/certs/client.crt", keyfile="/etc/ssl/private/client.key")

http_client = httpx.AsyncClient(
    verify=ssl_ctx,
    timeout=httpx.Timeout(10.0),
)

server = MCPServerSSE(
    url="http://localhost:3001/sse",
    http_client=http_client,  # (1)!
)
agent = Agent("openai:gpt-4o", toolsets=[server])


async def main():
    async with agent:
        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
        print(result.output)
        #> There are 9,208 days between January 1, 2000, and March 18, 2025.
```

1. When you supply `http_client`, Pydantic AI re-uses this client for every request. Anything supported by **httpx** (`verify`, `cert`, custom proxies, timeouts, etc.) therefore applies to all MCP traffic.

docs/mcp/server.md

Lines changed: 4 additions & 1 deletion
```diff
@@ -117,7 +117,10 @@ async def sampling_callback(
         SamplingMessage(
             role='user',
             content=TextContent(
-                type='text', text='write a poem about socks', annotations=None
+                type='text',
+                text='write a poem about socks',
+                annotations=None,
+                meta=None,
             ),
         )
     ]
```

docs/models/index.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -19,6 +19,7 @@ In addition, many providers are compatible with the OpenAI API, and can be used
 - [Grok (xAI)](openai.md#grok-xai)
 - [Ollama](openai.md#ollama)
 - [OpenRouter](openai.md#openrouter)
+- [Vercel AI Gateway](openai.md#vercel-ai-gateway)
 - [Perplexity](openai.md#perplexity)
 - [Fireworks AI](openai.md#fireworks-ai)
 - [Together AI](openai.md#together-ai)
```

docs/models/openai.md

Lines changed: 53 additions & 0 deletions
Two sections are added. The first is inserted before the existing "### Grok (xAI)" section:

### Vercel AI Gateway

To use [Vercel's AI Gateway](https://vercel.com/docs/ai-gateway), first follow the [documentation](https://vercel.com/docs/ai-gateway) instructions on obtaining an API key or OIDC token.

You can set your credentials using one of these environment variables:

```bash
export VERCEL_AI_GATEWAY_API_KEY='your-ai-gateway-api-key'
# OR
export VERCEL_OIDC_TOKEN='your-oidc-token'
```

Once you have set the environment variable, you can use it with the [`VercelProvider`][pydantic_ai.providers.vercel.VercelProvider]:

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.vercel import VercelProvider

# Uses environment variable automatically
model = OpenAIModel(
    'anthropic/claude-4-sonnet',
    provider=VercelProvider(),
)
agent = Agent(model)

# Or pass the API key directly
model = OpenAIModel(
    'anthropic/claude-4-sonnet',
    provider=VercelProvider(api_key='your-vercel-ai-gateway-api-key'),
)
agent = Agent(model)
...
```

The second is inserted before the existing "### GitHub Models" section:

### MoonshotAI

Create an API key in the [Moonshot Console](https://platform.moonshot.ai/console).
With that key you can instantiate the [`MoonshotAIProvider`][pydantic_ai.providers.moonshotai.MoonshotAIProvider]:

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.moonshotai import MoonshotAIProvider

model = OpenAIModel(
    'kimi-k2-0711-preview',
    provider=MoonshotAIProvider(api_key='your-moonshot-api-key'),
)
agent = Agent(model)
...
```

docs/output.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -325,7 +325,7 @@ agent = Agent(
     'openai:gpt-4o',
     output_type=NativeOutput(
         [Fruit, Vehicle],  # (1)!
-        name='Fruit or vehicle',
+        name='Fruit_or_vehicle',
         description='Return a fruit or vehicle.'
     ),
 )
```
