Support Python 3.14 #2431

Open · wants to merge 15 commits into base `add-python-314`
30 changes: 17 additions & 13 deletions .github/workflows/ci.yml
@@ -140,6 +140,7 @@ jobs:
env:
UV_PYTHON: ${{ matrix.python-version }}
CI: true
COVERAGE_PROCESS_START: ./pyproject.toml
steps:
- uses: actions/checkout@v4

@@ -151,20 +152,20 @@ jobs:
with:
deno-version: v2.x

- run: mkdir coverage
- run: mkdir .coverage

# run tests with just `pydantic-ai-slim` dependencies
- run: uv run --package pydantic-ai-slim coverage run -m pytest
- run: uv run --package pydantic-ai-slim coverage run -m pytest -n auto --dist=loadgroup
env:
COVERAGE_FILE: coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-slim
COVERAGE_FILE: .coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-slim

- run: uv run coverage run -m pytest
- run: uv run coverage run -m pytest -n auto --dist=loadgroup
env:
COVERAGE_FILE: coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-standard
COVERAGE_FILE: .coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-standard

- run: uv run --all-extras coverage run -m pytest
- run: uv run --all-extras coverage run -m pytest -n auto --dist=loadgroup
env:
COVERAGE_FILE: coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-all-extras
COVERAGE_FILE: .coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-all-extras

- run: uv run --all-extras python tests/import_examples.py

@@ -173,15 +174,15 @@ jobs:
if: matrix.python-version != '3.9'
run: |
unset UV_FROZEN
uv run --all-extras --resolution lowest-direct coverage run -m pytest
uv run --all-extras --resolution lowest-direct coverage run -m pytest -n auto --dist=loadgroup
env:
COVERAGE_FILE: coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-lowest-versions
COVERAGE_FILE: .coverage/.coverage.${{ runner.os }}-py${{ matrix.python-version }}-lowest-versions

- name: store coverage files
uses: actions/upload-artifact@v4
with:
name: coverage-${{ matrix.python-version }}
path: coverage
path: .coverage
include-hidden-files: true

coverage:
@@ -197,15 +198,15 @@
uses: actions/download-artifact@v4
with:
merge-multiple: true
path: coverage
path: .coverage

- uses: astral-sh/setup-uv@v5
with:
enable-cache: true

- run: uv sync --package pydantic-ai-slim --only-dev
- run: rm coverage/.coverage.*-py3.9-* # Exclude 3.9 coverage as it gets the wrong line numbers, causing invalid failures.
- run: uv run coverage combine coverage
- run: rm .coverage/.coverage.*-py3.9-* # Exclude 3.9 coverage as it gets the wrong line numbers, causing invalid failures.
- run: uv run coverage combine

- run: uv run coverage html --show-contexts --title "Pydantic AI coverage for ${{ github.sha }}"

@@ -228,7 +229,10 @@

- run: uv run coverage report --fail-under 100
- run: uv run diff-cover coverage.xml --fail-under 100

- run: uv run strict-no-cover
env:
COVERAGE_FILE: .coverage/.coverage
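The data-file layout these workflow steps rely on, sketched locally (a rough sketch; the OS and version names are illustrative, not taken from an actual run):

```shell
set -e
# Each matrix job writes a uniquely named data file under .coverage/
# by pointing COVERAGE_FILE at a per-job path
mkdir -p .coverage
touch .coverage/.coverage.Linux-py3.13-slim
touch .coverage/.coverage.Linux-py3.9-slim
# Python 3.9 data is deleted before `coverage combine` runs, since its
# line numbers are wrong and would cause spurious failures
rm -f .coverage/.coverage.*-py3.9-*
ls .coverage
```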

test-mcp-run-python:
runs-on: ubuntu-latest
2 changes: 1 addition & 1 deletion CLAUDE.md
@@ -10,7 +10,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
- **Format code**: `make format`
- **Lint code**: `make lint`
- **Type checking**: `make typecheck` (uses pyright) or `make typecheck-both` (pyright + mypy)
- **Run tests**: `make test` (with coverage) or `make test-fast` (parallel, no coverage)
- **Run tests**: `make test` (with coverage)
- **Build docs**: `make docs` or `make docs-serve` (local development)

### Single Test Commands
7 changes: 2 additions & 5 deletions Makefile
@@ -61,13 +61,10 @@ typecheck-both: typecheck-pyright typecheck-mypy

.PHONY: test
test: ## Run tests and collect coverage data
uv run coverage run -m pytest
COVERAGE_PROCESS_START=./pyproject.toml uv run coverage run -m pytest -n auto --dist=loadgroup
@uv run coverage combine
@uv run coverage report

.PHONY: test-fast
test-fast: ## Same as test except no coverage and 4x faster depending on hardware
uv run pytest -n auto --dist=loadgroup

.PHONY: test-all-python
test-all-python: ## Run tests on Python 3.9 to 3.13
UV_PROJECT_ENVIRONMENT=.venv39 uv run --python 3.9 --all-extras --all-packages coverage run -p -m pytest
6 changes: 6 additions & 0 deletions docs/changelog.md
@@ -12,6 +12,12 @@ Pydantic AI is still pre-version 1, so breaking changes will occur, however:
!!! note
Here's a filtered list of the breaking changes for each version to help you upgrade Pydantic AI.

### v0.5.0 (2025-08-04)

See [#2388](https://github.com/pydantic/pydantic-ai/pull/2388) - The `source` field of an `EvaluationResult` is now of type `EvaluatorSpec` rather than the actual source `Evaluator` instance, to help with serialization/deserialization.

See [#2163](https://github.com/pydantic/pydantic-ai/pull/2163) - The `EvaluationReport.print` and `EvaluationReport.console_table` methods now require most arguments be passed by keyword.

### v0.4.0 (2025-07-08)

See [#1799](https://github.com/pydantic/pydantic-ai/pull/1799) - Pydantic Evals `EvaluationReport` and `ReportCase` are now generic dataclasses instead of Pydantic models. If you were serializing them using `model_dump()`, you will now need to use the `EvaluationReportAdapter` and `ReportCaseAdapter` type adapters instead.
6 changes: 6 additions & 0 deletions docs/install.md
@@ -50,12 +50,18 @@ pip/uv-add "pydantic-ai-slim[openai]"
* `evals` — installs [`pydantic-evals`](evals.md) [PyPI ↗](https://pypi.org/project/pydantic-evals){:target="_blank"}
* `openai` — installs `openai` [PyPI ↗](https://pypi.org/project/openai){:target="_blank"}
* `vertexai` — installs `google-auth` [PyPI ↗](https://pypi.org/project/google-auth){:target="_blank"} and `requests` [PyPI ↗](https://pypi.org/project/requests){:target="_blank"}
* `google` — installs `google-genai` [PyPI ↗](https://pypi.org/project/google-genai){:target="_blank"}
* `anthropic` — installs `anthropic` [PyPI ↗](https://pypi.org/project/anthropic){:target="_blank"}
* `groq` — installs `groq` [PyPI ↗](https://pypi.org/project/groq){:target="_blank"}
* `mistral` — installs `mistralai` [PyPI ↗](https://pypi.org/project/mistralai){:target="_blank"}
* `cohere` - installs `cohere` [PyPI ↗](https://pypi.org/project/cohere){:target="_blank"}
* `bedrock` - installs `boto3` [PyPI ↗](https://pypi.org/project/boto3){:target="_blank"}
* `huggingface` - installs `huggingface-hub[inference]` [PyPI ↗](https://pypi.org/project/huggingface-hub){:target="_blank"}
* `duckduckgo` - installs `ddgs` [PyPI ↗](https://pypi.org/project/ddgs){:target="_blank"}
* `tavily` - installs `tavily-python` [PyPI ↗](https://pypi.org/project/tavily-python){:target="_blank"}
* `cli` - installs `rich` [PyPI ↗](https://pypi.org/project/rich){:target="_blank"}, `prompt-toolkit` [PyPI ↗](https://pypi.org/project/prompt-toolkit){:target="_blank"}, and `argcomplete` [PyPI ↗](https://pypi.org/project/argcomplete){:target="_blank"}
* `mcp` - installs `mcp` [PyPI ↗](https://pypi.org/project/mcp){:target="_blank"}
* `a2a` - installs `fasta2a` [PyPI ↗](https://pypi.org/project/fasta2a){:target="_blank"}
* `ag-ui` - installs `ag-ui-protocol` [PyPI ↗](https://pypi.org/project/ag-ui-protocol){:target="_blank"} and `starlette` [PyPI ↗](https://pypi.org/project/starlette){:target="_blank"}

See the [models](models/index.md) documentation for information on which optional dependencies are required for each model.
8 changes: 7 additions & 1 deletion docs/tools.md
@@ -12,7 +12,7 @@ There are a number of ways to register tools with an agent:
- via the [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator — for tools that do not need access to the agent [context][pydantic_ai.tools.RunContext]
- via the [`tools`][pydantic_ai.Agent.__init__] keyword argument to `Agent` which can take either plain functions, or instances of [`Tool`][pydantic_ai.tools.Tool]

For more advanced use cases, the [toolsets](toolsets.md) feature lets you manage collections of tools (built by you or provided by an [MCP server](mcp/client.md) or other [third party](#third-party-tools)) and register them with an agent in one go via the [`toolsets`][pydantic_ai.Agent.__init__] keyword argument to `Agent`.
For more advanced use cases, the [toolsets](toolsets.md) feature lets you manage collections of tools (built by you or provided by an [MCP server](mcp/client.md) or other [third party](#third-party-tools)) and register them with an agent in one go via the [`toolsets`][pydantic_ai.Agent.__init__] keyword argument to `Agent`. Internally, all `tools` and `toolsets` are gathered into a single [combined toolset](toolsets.md#combining-toolsets) that's made available to the model.

!!! info "Function tools vs. RAG"
Function tools are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information.
@@ -724,6 +724,12 @@ def my_flaky_tool(query: str) -> str:

Raising `ModelRetry` also generates a `RetryPromptPart` containing the exception message, which is sent back to the LLM to guide its next attempt. Both `ValidationError` and `ModelRetry` respect the `retries` setting configured on the `Tool` or `Agent`.
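Schematically, that retry loop looks like this in plain Python (a sketch only: `ModelRetry` here is a stand-in class, not the real `pydantic_ai` import, and the loop is not Pydantic AI's actual implementation):

```python
class ModelRetry(Exception):
    """Stand-in for pydantic_ai.ModelRetry."""


def my_flaky_tool(query: str) -> str:
    if query == 'bad':
        raise ModelRetry('Please try a different query.')
    return 'Success!'


def run_with_retries(queries: list[str], retries: int = 1) -> str:
    # Each ModelRetry message would become a RetryPromptPart sent back
    # to the model; once `retries` is exhausted, the error propagates.
    failures = 0
    for query in queries:
        try:
            return my_flaky_tool(query)
        except ModelRetry as exc:
            failures += 1
            if failures > retries:
                raise
            print(f'retry prompt: {exc}')
    raise RuntimeError('model gave up')


print(run_with_retries(['bad', 'good']))  # retries once, then returns 'Success!'
```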

### Parallel tool calls & concurrency

When a model returns multiple tool calls in one response, Pydantic AI schedules them concurrently using `asyncio.create_task`.

Async functions run directly on the event loop, while sync functions are offloaded to threads. For best performance, prefer async functions unless the tool does blocking I/O (and no non-blocking library is available) or CPU-bound work (like `numpy` or `scikit-learn` operations), so that simple functions aren't sent to worker threads unnecessarily.
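That scheduling model can be sketched with plain `asyncio` (hypothetical tool bodies, not Pydantic AI's internal API):

```python
import asyncio
import time


async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.1)  # non-blocking I/O stays on the event loop
    return f'{city}: sunny'


def blocking_lookup(city: str) -> str:
    time.sleep(0.1)  # blocking call; must not run on the event loop
    return f'{city}: 12C'


async def run_tool_calls() -> list[str]:
    # One task per tool call, mirroring asyncio.create_task scheduling;
    # the sync function is offloaded to a worker thread via to_thread.
    tasks = [
        asyncio.create_task(fetch_weather('London')),
        asyncio.create_task(asyncio.to_thread(blocking_lookup, 'Paris')),
    ]
    return await asyncio.gather(*tasks)


results = asyncio.run(run_tool_calls())
print(results)
```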

## Third-Party Tools

### MCP Tools {#mcp-tools}
2 changes: 1 addition & 1 deletion pydantic_ai_slim/pydantic_ai/_agent_graph.py
@@ -620,7 +620,7 @@ async def process_function_tools(  # noqa: C901
result_data = await tool_manager.handle_call(call)
except exceptions.UnexpectedModelBehavior as e:
ctx.state.increment_retries(ctx.deps.max_result_retries, e)
raise e # pragma: no cover
raise e # pragma: lax no cover
except ToolRetryError as e:
ctx.state.increment_retries(ctx.deps.max_result_retries, e)
yield _messages.FunctionToolCallEvent(call)
11 changes: 7 additions & 4 deletions pydantic_ai_slim/pydantic_ai/_function_schema.py
@@ -154,17 +154,21 @@ def function_schema(  # noqa: C901
if p.kind == Parameter.VAR_POSITIONAL:
annotation = list[annotation]

# FieldInfo.from_annotation expects a type, `annotation` is Any
required = p.default is Parameter.empty
# FieldInfo.from_annotated_attribute expects a type, `annotation` is Any
annotation = cast(type[Any], annotation)
field_info = FieldInfo.from_annotation(annotation)
if required:
field_info = FieldInfo.from_annotation(annotation)
else:
field_info = FieldInfo.from_annotated_attribute(annotation, p.default)
if field_info.description is None:
field_info.description = field_descriptions.get(field_name)

fields[field_name] = td_schema = gen_schema._generate_td_field_schema( # pyright: ignore[reportPrivateUsage]
field_name,
field_info,
decorators,
required=p.default is Parameter.empty,
required=required,
)
# noinspection PyTypeChecker
td_schema.setdefault('metadata', {})['is_model_like'] = is_model_like(annotation)
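The required/optional distinction above hinges on `inspect.Parameter.empty`; a stdlib-only sketch of the same check, using a made-up function:

```python
from inspect import Parameter, signature


def tool(query: str, limit: int = 10) -> str:
    return query[:limit]


# A parameter is required iff it has no default value -- the same
# `p.default is Parameter.empty` check used in the change above.
for p in signature(tool).parameters.values():
    print(p.name, p.default is Parameter.empty)  # query True, limit False
```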
@@ -281,7 +285,6 @@ def _build_schema(
td_schema = core_schema.typed_dict_schema(
fields,
config=core_config,
total=var_kwargs_schema is None,
extras_schema=gen_schema.generate_schema(var_kwargs_schema) if var_kwargs_schema else None,
)
return td_schema, None
4 changes: 2 additions & 2 deletions pydantic_ai_slim/pydantic_ai/exceptions.py
@@ -5,9 +5,9 @@
from typing import TYPE_CHECKING

if sys.version_info < (3, 11):
from exceptiongroup import ExceptionGroup
from exceptiongroup import ExceptionGroup as ExceptionGroup # pragma: lax no cover
else:
ExceptionGroup = ExceptionGroup
ExceptionGroup = ExceptionGroup # pragma: lax no cover

if TYPE_CHECKING:
from .messages import RetryPromptPart
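A minimal sketch of the version-gated re-export pattern being annotated here (the `exceptiongroup` backport package name is real; the demo values are made up):

```python
import sys

if sys.version_info < (3, 11):
    # Backport package; `X as X` marks the name as an intentional
    # public re-export so linters and type checkers don't flag it.
    from exceptiongroup import ExceptionGroup as ExceptionGroup
else:
    # The 3.11+ builtin; self-assignment gives the module an explicit
    # public `ExceptionGroup` attribute either way.
    ExceptionGroup = ExceptionGroup

eg = ExceptionGroup('two failures', [ValueError('a'), KeyError('b')])
print([type(e).__name__ for e in eg.exceptions])
```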
47 changes: 37 additions & 10 deletions pydantic_ai_slim/pydantic_ai/messages.py
@@ -106,7 +106,7 @@ class FileUrl(ABC):
- `GoogleModel`: `VideoUrl.vendor_metadata` is used as `video_metadata`: https://ai.google.dev/gemini-api/docs/video-understanding#customize-video-processing
"""

_media_type: str | None = field(init=False, repr=False)
_media_type: str | None = field(init=False, repr=False, compare=False)

def __init__(
self,
@@ -120,19 +120,21 @@ def __init__(
self.force_download = force_download
self._media_type = media_type

@abstractmethod
def _infer_media_type(self) -> str:
"""Return the media type of the file, based on the url."""

@property
def media_type(self) -> str:
"""Return the media type of the file, based on the url or the provided `_media_type`."""
"""Return the media type of the file, based on the URL or the provided `media_type`."""
return self._media_type or self._infer_media_type()

@abstractmethod
def _infer_media_type(self) -> str:
"""Infer the media type of the file based on the URL."""
raise NotImplementedError

@property
@abstractmethod
def format(self) -> str:
"""The file format."""
raise NotImplementedError

__repr__ = _utils.dataclasses_no_defaults_repr

@@ -182,7 +184,9 @@ def _infer_media_type(self) -> VideoMediaType:
elif self.is_youtube:
return 'video/mp4'
else:
raise ValueError(f'Unknown video file extension: {self.url}')
raise ValueError(
f'Could not infer media type from video URL: {self.url}. Explicitly provide a `media_type` instead.'
)

@property
def is_youtube(self) -> bool:
@@ -238,7 +242,9 @@ def _infer_media_type(self) -> AudioMediaType:
if self.url.endswith('.aac'):
return 'audio/aac'

raise ValueError(f'Unknown audio file extension: {self.url}')
raise ValueError(
f'Could not infer media type from audio URL: {self.url}. Explicitly provide a `media_type` instead.'
)

@property
def format(self) -> AudioFormat:
@@ -278,7 +284,9 @@ def _infer_media_type(self) -> ImageMediaType:
elif self.url.endswith('.webp'):
return 'image/webp'
else:
raise ValueError(f'Unknown image file extension: {self.url}')
raise ValueError(
f'Could not infer media type from image URL: {self.url}. Explicitly provide a `media_type` instead.'
)

@property
def format(self) -> ImageFormat:
@@ -312,9 +320,28 @@ def __init__(

def _infer_media_type(self) -> str:
"""Return the media type of the document, based on the url."""
# Common document types are hardcoded here as mime-type support for these
# extensions varies across operating systems.
if self.url.endswith(('.md', '.mdx', '.markdown')):
return 'text/markdown'
elif self.url.endswith('.asciidoc'):
return 'text/x-asciidoc'
elif self.url.endswith('.txt'):
return 'text/plain'
elif self.url.endswith('.pdf'):
return 'application/pdf'
elif self.url.endswith('.rtf'):
return 'application/rtf'
elif self.url.endswith('.docx'):
return 'application/vnd.openxmlformats-officedocument.wordprocessingml.document'
elif self.url.endswith('.xlsx'):
return 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'

type_, _ = guess_type(self.url)
if type_ is None:
raise ValueError(f'Unknown document file extension: {self.url}')
raise ValueError(
f'Could not infer media type from document URL: {self.url}. Explicitly provide a `media_type` instead.'
)
return type_
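The `guess_type` fallback behaves like this; the hardcoded branches above exist because platform MIME databases are inconsistent for lighter-weight text formats (the unknown extension below is deliberately made up):

```python
from mimetypes import guess_type

# The stdlib knows ubiquitous types out of the box...
print(guess_type('report.pdf')[0])  # application/pdf
# ...but for anything outside its table (and any platform-specific
# mime database), it simply returns None, hence the ValueError path.
print(guess_type('notes.not-a-real-ext')[0])  # None
```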

@property
2 changes: 1 addition & 1 deletion pydantic_ai_slim/pydantic_ai/models/anthropic.py
@@ -256,7 +256,7 @@ async def _messages_create(
except APIStatusError as e:
if (status_code := e.status_code) >= 400:
raise ModelHTTPError(status_code=status_code, model_name=self.model_name, body=e.body) from e
raise # pragma: no cover
raise # pragma: lax no cover

def _process_response(self, response: BetaMessage) -> ModelResponse:
"""Process a non-streamed response, and prepare a message to return."""
2 changes: 1 addition & 1 deletion pydantic_ai_slim/pydantic_ai/models/bedrock.py
@@ -665,4 +665,4 @@ async def __anext__(self) -> T:
if type(e.__cause__) is StopIteration:
raise StopAsyncIteration
else:
raise e # pragma: no cover
raise e # pragma: lax no cover
2 changes: 1 addition & 1 deletion pydantic_ai_slim/pydantic_ai/models/cohere.py
@@ -183,7 +183,7 @@ async def _chat(
except ApiError as e:
if (status_code := e.status_code) and status_code >= 400:
raise ModelHTTPError(status_code=status_code, model_name=self.model_name, body=e.body) from e
raise # pragma: no cover
raise # pragma: lax no cover

def _process_response(self, response: V2ChatResponse) -> ModelResponse:
"""Process a non-streamed response, and prepare a message to return."""
6 changes: 3 additions & 3 deletions pydantic_ai_slim/pydantic_ai/models/gemini.py
@@ -236,7 +236,7 @@ async def _make_request(

if gemini_labels := model_settings.get('gemini_labels'):
if self._system == 'google-vertex':
request_data['labels'] = gemini_labels
request_data['labels'] = gemini_labels # pragma: lax no cover

headers = {'Content-Type': 'application/json', 'User-Agent': get_user_agent()}
url = f'/{self._model_name}:{"streamGenerateContent" if streamed else "generateContent"}'
@@ -366,11 +366,11 @@ async def _map_user_prompt(self, part: UserPromptPart) -> list[_GeminiPartUnion]
inline_data={'data': downloaded_item['data'], 'mime_type': downloaded_item['data_type']}
)
content.append(inline_data)
else:
else: # pragma: lax no cover
file_data = _GeminiFileDataPart(file_data={'file_uri': item.url, 'mime_type': item.media_type})
content.append(file_data)
else:
assert_never(item)
assert_never(item) # pragma: lax no cover
return content

def _map_response_schema(self, o: OutputObjectDefinition) -> dict[str, Any]:
6 changes: 4 additions & 2 deletions pydantic_ai_slim/pydantic_ai/models/google.py
@@ -407,7 +407,7 @@ async def _map_user_prompt(self, part: UserPromptPart) -> list[PartDict]:
content.append(inline_data_dict) # type: ignore
elif isinstance(item, VideoUrl) and item.is_youtube:
file_data_dict = {'file_data': {'file_uri': item.url, 'mime_type': item.media_type}}
if item.vendor_metadata:
if item.vendor_metadata: # pragma: no branch
file_data_dict['video_metadata'] = item.vendor_metadata
content.append(file_data_dict) # type: ignore
elif isinstance(item, FileUrl):
@@ -421,7 +421,9 @@
inline_data = {'data': downloaded_item['data'], 'mime_type': downloaded_item['data_type']}
content.append({'inline_data': inline_data}) # type: ignore
else:
content.append({'file_data': {'file_uri': item.url, 'mime_type': item.media_type}})
content.append(
{'file_data': {'file_uri': item.url, 'mime_type': item.media_type}}
) # pragma: lax no cover
else:
assert_never(item)
return content