
OpenAIChatModel ignores settings from OpenAIResponsesModelSettings #3496

@duarteocarmo

Description

When OpenAIResponsesModelSettings is passed to OpenAIChatModel, the Responses-specific settings are silently ignored: the request goes through as if they had never been set, with no error or warning.

The example below is self-contained; just run it with uv run (the inline script metadata pulls in pydantic-ai).

For example, when I run it, I get the output below. The first call (test, using OpenAIResponsesModel) fails with a 400 about reasoning summaries, which shows those settings are actually being sent. The second call (very_weird_case, using OpenAIChatModel) succeeds with no error or warning, even though the same Responses-specific settings were passed:

Error in test: status_code: 400, model_name: gpt-5-nano, body: {'message': 'Your organization must be verified to generate reasoning summaries. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate.', 'type': 'invalid_request_error', 'param': 'reasoning.summary', 'code': 'unsupported_value'}
ModelResponse(parts=[TextPart(content='Based on the latest official data, the city of Ancona (municipality) has a population a bit over 100,000, with the trend being fairly stable to very slowly increasing in recent years. A reasonable 2025 estimate would be about 102,000 people, give or take a few thousand.\n\nNotes:\n- This is an approximation; official numbers come from Istat (Italian National Institute of Statistics) and local records.\n- If you need a precise figure for a specific date in 2025, check Istat’s annual estimates or the Comune di Ancona’s population reports.')], usage=RequestUsage(input_tokens=19, output_tokens=321, details={'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 192, 'rejected_prediction_tokens': 0}), model_name='gpt-5-nano-2025-08-07', timestamp=datetime.datetime(2025, 11, 20, 16, 44, 20, tzinfo=TzInfo(0)), provider_name='openai', provider_details={'finish_reason': 'stop'}, provider_response_id='chatcmpl-ceaceacac', finish_reason='stop')
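For context on why this can fail silently: a minimal illustration, assuming only that pydantic-ai's model settings classes are TypedDicts. At runtime a TypedDict is a plain dict, so there is no validation step that could reject keys a given model implementation never reads.

from pydantic_ai.models.openai import OpenAIResponsesModelSettings

settings = OpenAIResponsesModelSettings(
    openai_reasoning_effort="low",
    openai_reasoning_summary="concise",
)
# A TypedDict is a plain dict at runtime: nothing rejects Responses-only
# keys when this dict reaches a model that doesn't look for them.
print(type(settings))  # <class 'dict'>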

Example Code

# /// script
# dependencies = [
#   "pydantic-ai",
# ]
# ///

from pydantic_ai import ModelRequest
from pydantic_ai.direct import model_request_sync
from pydantic_ai.models.openai import (
    OpenAIChatModel,
    OpenAIResponsesModel,
    OpenAIResponsesModelSettings,
)


def test():
    # Baseline: OpenAIResponsesModel does apply these settings (the request
    # fails with a 400 on unverified orgs, proving they were sent).
    settings = OpenAIResponsesModelSettings(
        openai_reasoning_effort="low",
        openai_reasoning_summary="concise",
    )

    model = OpenAIResponsesModel("gpt-5-nano", settings=settings)

    model_response = model_request_sync(
        model=model,
        messages=[
            ModelRequest.user_text_prompt(
                "Population of city of Ancona in 2025? Estimate."
            )
        ],
    )
    print(model_response)


def very_weird_case():
    # Bug: OpenAIChatModel silently drops the same Responses-specific settings.
    settings = OpenAIResponsesModelSettings(
        # Both of these get completely ignored?
        openai_reasoning_effort="low",
        openai_reasoning_summary="concise",
    )

    model = OpenAIChatModel("gpt-5-nano", settings=settings)

    model_response = model_request_sync(
        model,
        messages=[
            ModelRequest.user_text_prompt(
                "Population of city of Ancona in 2025? Estimate."
            )
        ],
    )
    print(model_response)


if __name__ == "__main__":
    for function in [test, very_weird_case]:
        try:
            function()
        except Exception as e:
            print(f"Error in {function.__name__}: {e}")

Python, Pydantic AI & LLM client version

Latest.
