
Commit 7d98900

Merge branch 'main' into update-versins
2 parents a60e11d + 39e2877 commit 7d98900

30 files changed: +776 -161 lines

docs/a2a.md

Lines changed: 2 additions & 2 deletions
@@ -27,7 +27,7 @@ The library is designed to be used with any agentic framework, and is **not excl
 
 ### Design
 
-**FastA2A** is built on top of [Starlette](https://starlette.io), which means it's fully compatible with any ASGI server.
+**FastA2A** is built on top of [Starlette](https://www.starlette.io), which means it's fully compatible with any ASGI server.
 
 Given the nature of the A2A protocol, it's important to understand the design before using it, as a developer
 you'll need to provide some components:
@@ -66,7 +66,7 @@ pip/uv-add fasta2a
 
 The only dependencies are:
 
-- [starlette](https://starlette.io): to expose the A2A server as an [ASGI application](https://asgi.readthedocs.io/en/latest/)
+- [starlette](https://www.starlette.io): to expose the A2A server as an [ASGI application](https://asgi.readthedocs.io/en/latest/)
 - [pydantic](https://pydantic.dev): to validate the request/response messages
 - [opentelemetry-api](https://opentelemetry-python.readthedocs.io/en/latest): to provide tracing capabilities
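Since FastA2A is exposed as a standard ASGI application, any ASGI server can serve it. A minimal sketch, assuming the app object is exported from a hypothetical module `my_a2a_app.py`:

```python
# minimal sketch: serve a FastA2A app with uvicorn
# 'my_a2a_app:app' is a hypothetical module:attribute path
import uvicorn

uvicorn.run('my_a2a_app:app', host='127.0.0.1', port=8000)
```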

docs/api/providers.md

Lines changed: 2 additions & 0 deletions
@@ -28,4 +28,6 @@
 
 ::: pydantic_ai.providers.heroku.HerokuProvider
 
+::: pydantic_ai.providers.github.GitHubProvider
+
 ::: pydantic_ai.providers.openrouter.OpenRouterProvider

docs/direct.md

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ The following functions are available:
 - [`model_request`][pydantic_ai.direct.model_request]: Make a non-streamed async request to a model
 - [`model_request_sync`][pydantic_ai.direct.model_request_sync]: Make a non-streamed synchronous request to a model
 - [`model_request_stream`][pydantic_ai.direct.model_request_stream]: Make a streamed async request to a model
+- [`model_request_stream_sync`][pydantic_ai.direct.model_request_stream_sync]: Make a streamed sync request to a model
 
 ## Basic Example
 
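A minimal sketch of the newly documented `model_request_stream_sync`, assuming it mirrors the async `model_request_stream` as a synchronous context manager (the model name and prompt are illustrative):

```python
from pydantic_ai.direct import model_request_stream_sync
from pydantic_ai.messages import ModelRequest

# assumption: the sync variant is a context manager that yields stream events,
# mirroring the async model_request_stream
with model_request_stream_sync(
    'openai:gpt-4.1-mini',  # illustrative model name
    [ModelRequest.user_text_prompt('Who was Albert Einstein?')],
) as stream:
    for event in stream:
        print(event)
```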

docs/mcp/client.md

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@ Examples of both are shown below; [mcp-run-python](run-python.md) is used as the
 !!! note
     [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] requires an MCP server to be running and accepting HTTP connections before calling [`agent.run_mcp_servers()`][pydantic_ai.Agent.run_mcp_servers]. Running the server is not managed by PydanticAI.
 
-    The name "HTTP" is used since this implemented will be adapted in future to use the new
+    The name "HTTP" is used since this implementation will be adapted in future to use the new
     [Streamable HTTP](https://github.com/modelcontextprotocol/specification/pull/206) currently in development.
 
 Before creating the SSE client, we need to run the server (docs [here](run-python.md)):
@@ -371,7 +371,7 @@ async def main():
 
 _(This example is complete, it can be run "as is" with Python 3.10+)_
 
-You can disallow sampling by settings [`allow_sampling=False`][pydantic_ai.mcp.MCPServerStdio.allow_sampling] when creating the server reference, e.g.:
+You can disallow sampling by setting [`allow_sampling=False`][pydantic_ai.mcp.MCPServerStdio.allow_sampling] when creating the server reference, e.g.:
 
 ```python {title="sampling_disallowed.py" hl_lines="6" py="3.10"}
 from pydantic_ai.mcp import MCPServerStdio
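The diff truncates the example after the import; a plausible completion, where the server command and args are illustrative and only `allow_sampling=False` is the documented point:

```python
from pydantic_ai.mcp import MCPServerStdio

server = MCPServerStdio(
    'deno',  # illustrative stdio MCP server command
    args=['run', '-N', 'jsr:@pydantic/mcp-run-python', 'stdio'],
    allow_sampling=False,  # the server cannot make sampling requests back through the client
)
```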

docs/models/index.md

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ In addition, many providers are compatible with the OpenAI API, and can be used
 * [Together AI](openai.md#together-ai)
 * [Azure AI Foundry](openai.md#azure-ai-foundry)
 * [Heroku](openai.md#heroku-ai)
+* [GitHub Models](openai.md#github-models)
 
 PydanticAI also comes with [`TestModel`](../api/models/test.md) and [`FunctionModel`](../api/models/function.md)
 for testing and development.

docs/models/openai.md

Lines changed: 27 additions & 0 deletions
@@ -366,6 +366,33 @@ agent = Agent(model)
 ...
 ```
 
+### GitHub Models
+
+To use [GitHub Models](https://docs.github.com/en/github-models), you'll need a GitHub personal access token with the `models: read` permission.
+
+Once you have the token, you can use it with the [`GitHubProvider`][pydantic_ai.providers.github.GitHubProvider]:
+
+```python
+from pydantic_ai import Agent
+from pydantic_ai.models.openai import OpenAIModel
+from pydantic_ai.providers.github import GitHubProvider
+
+model = OpenAIModel(
+    'xai/grok-3-mini',  # GitHub Models uses prefixed model names
+    provider=GitHubProvider(api_key='your-github-token'),
+)
+agent = Agent(model)
+...
+```
+
+You can also set the `GITHUB_API_KEY` environment variable:
+
+```bash
+export GITHUB_API_KEY='your-github-token'
+```
+
+GitHub Models supports various model families with different prefixes. You can see the full list on the [GitHub Marketplace](https://github.com/marketplace?type=models) or the public [catalog endpoint](https://models.github.ai/catalog/models).
+
 ### Perplexity
 
 Follow the Perplexity [getting started](https://docs.perplexity.ai/guides/getting-started)
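As a follow-on to the new section: when `GITHUB_API_KEY` is set, the provider can presumably be constructed without an explicit key, matching the pattern of the other providers on this page. A hedged sketch of that assumption:

```python
import os

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.github import GitHubProvider

# assumption: GitHubProvider falls back to the GITHUB_API_KEY environment
# variable when no api_key argument is passed
os.environ.setdefault('GITHUB_API_KEY', 'your-github-token')

model = OpenAIModel('xai/grok-3-mini', provider=GitHubProvider())
agent = Agent(model)
```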

examples/pydantic_ai_examples/stream_whales.py

Lines changed: 2 additions & 15 deletions
@@ -11,7 +11,7 @@
 from typing import Annotated
 
 import logfire
-from pydantic import Field, ValidationError
+from pydantic import Field
 from rich.console import Console
 from rich.live import Live
 from rich.table import Table
@@ -51,20 +51,7 @@ async def main():
     ) as result:
         console.print('Response:', style='green')
 
-        async for message, last in result.stream_structured(debounce_by=0.01):
-            try:
-                whales = await result.validate_structured_output(
-                    message, allow_partial=not last
-                )
-            except ValidationError as exc:
-                if all(
-                    e['type'] == 'missing' and e['loc'] == ('response',)
-                    for e in exc.errors()
-                ):
-                    continue
-                else:
-                    raise
-
+        async for whales in result.stream(debounce_by=0.01):
             table = Table(
                 title='Species of Whale',
                 caption='Streaming Structured responses from GPT-4',
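The simplification works because `result.stream()` yields validated (possibly partial) structured outputs directly, so the manual `validate_structured_output` call and the filtering of partial-output `ValidationError`s are no longer needed. A minimal standalone sketch of the pattern, assuming the `output_type`/`run_stream` API used elsewhere in these docs (model and output type are illustrative):

```python
from pydantic import BaseModel

from pydantic_ai import Agent


class Whale(BaseModel):
    name: str
    length_meters: float | None = None


agent = Agent('openai:gpt-4.1-mini', output_type=list[Whale])


async def main():
    async with agent.run_stream('List three whale species with their lengths.') as result:
        # each iteration yields a progressively more complete, validated output
        async for whales in result.stream(debounce_by=0.01):
            print(whales)
```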

examples/pydantic_ai_examples/weather_agent.py

Lines changed: 30 additions & 83 deletions
@@ -12,16 +12,14 @@
 from __future__ import annotations as _annotations
 
 import asyncio
-import os
-import urllib.parse
 from dataclasses import dataclass
 from typing import Any
 
 import logfire
-from devtools import debug
 from httpx import AsyncClient
+from pydantic import BaseModel
 
-from pydantic_ai import Agent, ModelRetry, RunContext
+from pydantic_ai import Agent, RunContext
 
 # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
 logfire.configure(send_to_logfire='if-token-present')
@@ -31,51 +29,38 @@
 @dataclass
 class Deps:
     client: AsyncClient
-    weather_api_key: str | None
-    geo_api_key: str | None
 
 
 weather_agent = Agent(
-    'openai:gpt-4o',
+    'openai:gpt-4.1-mini',
     # 'Be concise, reply with one sentence.' is enough for some models (like openai) to use
     # the below tools appropriately, but others like anthropic and gemini require a bit more direction.
-    instructions=(
-        'Be concise, reply with one sentence.'
-        'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, '
-        'then use the `get_weather` tool to get the weather.'
-    ),
+    instructions='Be concise, reply with one sentence.',
     deps_type=Deps,
     retries=2,
 )
 
 
+class LatLng(BaseModel):
+    lat: float
+    lng: float
+
+
 @weather_agent.tool
-async def get_lat_lng(
-    ctx: RunContext[Deps], location_description: str
-) -> dict[str, float]:
+async def get_lat_lng(ctx: RunContext[Deps], location_description: str) -> LatLng:
     """Get the latitude and longitude of a location.
 
     Args:
         ctx: The context.
         location_description: A description of a location.
     """
-    if ctx.deps.geo_api_key is None:
-        # if no API key is provided, return a dummy response (London)
-        return {'lat': 51.1, 'lng': -0.1}
-
-    params = {'access_token': ctx.deps.geo_api_key}
-    loc = urllib.parse.quote(location_description)
+    # NOTE: the response here will be random, and is not related to the location description.
     r = await ctx.deps.client.get(
-        f'https://api.mapbox.com/geocoding/v5/mapbox.places/{loc}.json', params=params
+        'https://demo-endpoints.pydantic.workers.dev/latlng',
+        params={'location': location_description},
     )
     r.raise_for_status()
-    data = r.json()
-
-    if features := data['features']:
-        lat, lng = features[0]['center']
-        return {'lat': lat, 'lng': lng}
-    else:
-        raise ModelRetry('Could not find the location')
+    return LatLng.model_validate_json(r.content)
 
 
 @weather_agent.tool
@@ -87,70 +72,32 @@ async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str
         lat: Latitude of the location.
         lng: Longitude of the location.
     """
-    if ctx.deps.weather_api_key is None:
-        # if no API key is provided, return a dummy response
-        return {'temperature': '21 °C', 'description': 'Sunny'}
-
-    params = {
-        'apikey': ctx.deps.weather_api_key,
-        'location': f'{lat},{lng}',
-        'units': 'metric',
-    }
-    with logfire.span('calling weather API', params=params) as span:
-        r = await ctx.deps.client.get(
-            'https://api.tomorrow.io/v4/weather/realtime', params=params
-        )
-        r.raise_for_status()
-        data = r.json()
-        span.set_attribute('response', data)
-
-    values = data['data']['values']
-    # https://docs.tomorrow.io/reference/data-layers-weather-codes
-    code_lookup = {
-        1000: 'Clear, Sunny',
-        1100: 'Mostly Clear',
-        1101: 'Partly Cloudy',
-        1102: 'Mostly Cloudy',
-        1001: 'Cloudy',
-        2000: 'Fog',
-        2100: 'Light Fog',
-        4000: 'Drizzle',
-        4001: 'Rain',
-        4200: 'Light Rain',
-        4201: 'Heavy Rain',
-        5000: 'Snow',
-        5001: 'Flurries',
-        5100: 'Light Snow',
-        5101: 'Heavy Snow',
-        6000: 'Freezing Drizzle',
-        6001: 'Freezing Rain',
-        6200: 'Light Freezing Rain',
-        6201: 'Heavy Freezing Rain',
-        7000: 'Ice Pellets',
-        7101: 'Heavy Ice Pellets',
-        7102: 'Light Ice Pellets',
-        8000: 'Thunderstorm',
-    }
+    # NOTE: the responses here will be random, and are not related to the lat and lng.
+    temp_response, descr_response = await asyncio.gather(
+        ctx.deps.client.get(
+            'https://demo-endpoints.pydantic.workers.dev/number',
+            params={'min': 10, 'max': 30},
+        ),
+        ctx.deps.client.get(
+            'https://demo-endpoints.pydantic.workers.dev/weather',
+            params={'lat': lat, 'lng': lng},
+        ),
+    )
+    temp_response.raise_for_status()
+    descr_response.raise_for_status()
     return {
-        'temperature': f'{values["temperatureApparent"]:0.0f}°C',
-        'description': code_lookup.get(values['weatherCode'], 'Unknown'),
+        'temperature': f'{temp_response.text} °C',
+        'description': descr_response.text,
     }
 
 
 async def main():
     async with AsyncClient() as client:
         logfire.instrument_httpx(client, capture_all=True)
-        # create a free API key at https://www.tomorrow.io/weather-api/
-        weather_api_key = os.getenv('WEATHER_API_KEY')
-        # create a free API key at https://www.mapbox.com/
-        geo_api_key = os.getenv('GEO_API_KEY')
-        deps = Deps(
-            client=client, weather_api_key=weather_api_key, geo_api_key=geo_api_key
-        )
+        deps = Deps(client=client)
         result = await weather_agent.run(
             'What is the weather like in London and in Wiltshire?', deps=deps
         )
-        debug(result)
         print('Response:', result.output)
 
 
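For context, the new `get_lat_lng` parses the response body straight into the `LatLng` model. A quick illustration of pydantic's `model_validate_json` (the JSON payload is hypothetical; the demo endpoint's exact response isn't shown in the diff):

```python
from pydantic import BaseModel


class LatLng(BaseModel):
    lat: float
    lng: float


# hypothetical payload for illustration
raw = b'{"lat": 51.5072, "lng": -0.1276}'
print(LatLng.model_validate_json(raw))
#> lat=51.5072 lng=-0.1276
```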

examples/pydantic_ai_examples/weather_agent_gradio.py

Lines changed: 1 addition & 5 deletions
@@ -1,7 +1,6 @@
 from __future__ import annotations as _annotations
 
 import json
-import os
 
 from httpx import AsyncClient
 
@@ -18,10 +17,7 @@
 TOOL_TO_DISPLAY_NAME = {'get_lat_lng': 'Geocoding API', 'get_weather': 'Weather API'}
 
 client = AsyncClient()
-weather_api_key = os.getenv('WEATHER_API_KEY')
-# create a free API key at https://geocode.maps.co/
-geo_api_key = os.getenv('GEO_API_KEY')
-deps = Deps(client=client, weather_api_key=weather_api_key, geo_api_key=geo_api_key)
+deps = Deps(client=client)
 
 
 async def stream_from_agent(prompt: str, chatbot: list[dict], past_messages: list):

pydantic_ai_slim/pydantic_ai/_agent_graph.py

Lines changed: 3 additions & 6 deletions
@@ -641,7 +641,6 @@ async def process_function_tools( # noqa C901
     run_context = build_run_context(ctx)
 
     calls_to_run: list[tuple[Tool[DepsT], _messages.ToolCallPart]] = []
-    call_index_to_event_id: dict[int, str] = {}
    for call in tool_calls:
         if (
             call.tool_name == output_tool_name
@@ -668,7 +667,6 @@
             else:
                 event = _messages.FunctionToolCallEvent(call)
                 yield event
-                call_index_to_event_id[len(calls_to_run)] = event.call_id
                 calls_to_run.append((tool, call))
         elif mcp_tool := await _tool_from_mcp_server(call.tool_name, ctx):
             if stub_function_tools:
@@ -683,7 +681,6 @@
             else:
                 event = _messages.FunctionToolCallEvent(call)
                 yield event
-                call_index_to_event_id[len(calls_to_run)] = event.call_id
                 calls_to_run.append((mcp_tool, call))
         elif call.tool_name in output_schema.tools:
             # if tool_name is in output_schema, it means we found a output tool but an error occurred in
@@ -700,13 +697,13 @@
                 content=content,
                 tool_call_id=call.tool_call_id,
             )
-            yield _messages.FunctionToolResultEvent(part, tool_call_id=call.tool_call_id)
+            yield _messages.FunctionToolResultEvent(part)
             output_parts.append(part)
         else:
             yield _messages.FunctionToolCallEvent(call)
 
             part = _unknown_tool(call.tool_name, call.tool_call_id, ctx)
-            yield _messages.FunctionToolResultEvent(part, tool_call_id=call.tool_call_id)
+            yield _messages.FunctionToolResultEvent(part)
             output_parts.append(part)
 
     if not calls_to_run:
@@ -738,7 +735,7 @@
         for task in done:
             index = tasks.index(task)
             result = task.result()
-            yield _messages.FunctionToolResultEvent(result, tool_call_id=call_index_to_event_id[index])
+            yield _messages.FunctionToolResultEvent(result)
 
             if isinstance(result, _messages.RetryPromptPart):
                 results_by_index[index] = result
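These hunks drop the explicit `tool_call_id=` argument and the `call_index_to_event_id` bookkeeping; the apparent rationale is that the result part already carries the call id, so the event can derive it itself. A sketch of that assumption (the real `FunctionToolResultEvent` definition is not part of this commit):

```python
from dataclasses import dataclass


@dataclass
class ToolReturnPart:
    tool_name: str
    content: str
    tool_call_id: str


@dataclass
class FunctionToolResultEvent:
    result: ToolReturnPart

    @property
    def tool_call_id(self) -> str:
        # derived from the result part, so callers no longer pass it explicitly
        return self.result.tool_call_id
```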
