Description
When response_format is set to a Pydantic model, the OpenAI provider correctly calls client.chat.completions.parse(), which returns an openai.ParsedChatCompletion[T] with a typed message.parsed: T field containing a validated Pydantic model instance.
However, _convert_chat_completion round-trips the response through model_dump() → ChatCompletion.model_validate(), which serializes the Pydantic instance on message.parsed into a plain dict. The data is preserved (thanks to extra="allow" on ChatCompletionMessage), but the typed model is gone.
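The loss can be reproduced with plain Pydantic, without calling OpenAI at all. This is a minimal sketch that uses a stand-in `Message` model in place of `ChatCompletionMessage` (both rely on `extra="allow"`):

```python
from pydantic import BaseModel, ConfigDict


class CalendarEvent(BaseModel):
    name: str


class Message(BaseModel):
    # Stand-in for ChatCompletionMessage, which also uses extra="allow"
    model_config = ConfigDict(extra="allow")
    content: str


msg = Message(content="ok", parsed=CalendarEvent(name="party"))
assert isinstance(msg.parsed, CalendarEvent)  # typed before the round-trip

# The model_dump() -> model_validate() round-trip performed by _convert_chat_completion:
roundtripped = Message.model_validate(msg.model_dump())
assert isinstance(roundtripped.parsed, dict)  # typed model degraded to a plain dict
```

Because `parsed` is an extra (untyped) field, `model_dump()` serializes the nested model to a dict, and `model_validate()` has no annotation telling it to rebuild the `CalendarEvent`, so the dict survives as-is.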
Example

```python
from pydantic import BaseModel


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


client = AnyLLM.create("openai")
response = await client.acompletion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "List a birthday party event"}],
    response_format=CalendarEvent,
)

parsed = response.choices[0].message.parsed
# Expected: CalendarEvent(name='...', date='...', participants=['...'])
# Actual:   {'name': '...', 'date': '...', 'participants': ['...']}
```

Would it be possible to either preserve the ParsedChatCompletion type through the conversion, or expose a dedicated return path for structured output that keeps the typed `parsed` field?