
Commit e8e89b0

docs: updates from langchain-openai 0.3.26 (#31764)
1 parent eb08b06 commit e8e89b0

File tree

2 files changed (+259, −156 lines)
  • docs/docs/integrations/chat
  • libs/partners/openai/langchain_openai/chat_models


docs/docs/integrations/chat/openai.ipynb

Lines changed: 212 additions & 143 deletions
Large diffs are not rendered by default.

libs/partners/openai/langchain_openai/chat_models/base.py

Lines changed: 47 additions & 13 deletions
@@ -2278,11 +2278,23 @@ class GetPopulation(BaseModel):
     `docs <https://python.langchain.com/docs/integrations/chat/openai/>`_ for more
     detail.
 
+    .. note::
+        ``langchain-openai >= 0.3.26`` allows users to opt-in to an updated
+        AIMessage format when using the Responses API. Setting
+
+        .. code-block:: python
+
+            llm = ChatOpenAI(model="...", output_version="responses/v1")
+
+        will format output from reasoning summaries, built-in tool invocations, and
+        other response items into the message's ``content`` field, rather than
+        ``additional_kwargs``. We recommend this format for new applications.
+
     .. code-block:: python
 
         from langchain_openai import ChatOpenAI
 
-        llm = ChatOpenAI(model="gpt-4o-mini")
+        llm = ChatOpenAI(model="gpt-4.1-mini", output_version="responses/v1")
 
         tool = {"type": "web_search_preview"}
         llm_with_tools = llm.bind_tools([tool])
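
The hunk above switches the web search example to the opted-in ``responses/v1`` message format. A minimal sketch of how such a response might be inspected follows; only the ``text`` and ``reasoning`` block types appear elsewhere in this commit, so the handling of other item types (e.g. a web search call) is an assumption for illustration.

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4.1-mini", output_version="responses/v1")
    llm_with_tools = llm.bind_tools([{"type": "web_search_preview"}])

    response = llm_with_tools.invoke("What was a positive news story from today?")

    # With output_version="responses/v1", Responses API items (text, reasoning
    # summaries, built-in tool invocations) land in message.content rather than
    # in additional_kwargs.
    for block in response.content:
        if isinstance(block, str):
            print(block)
        elif block.get("type") == "text":
            print(block["text"])
        else:
            # e.g. a built-in web search invocation; the type strings for such
            # items are assumed to mirror Responses API item names.
            print(f"[{block.get('type', 'unknown')} item]")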
@@ -2323,7 +2335,7 @@ class GetPopulation(BaseModel):
 
         from langchain_openai import ChatOpenAI
 
-        llm = ChatOpenAI(model="gpt-4o-mini", use_responses_api=True)
+        llm = ChatOpenAI(model="gpt-4.1-mini", use_responses_api=True)
         response = llm.invoke("Hi, I'm Bob.")
         response.text()
 
@@ -2342,11 +2354,34 @@ class GetPopulation(BaseModel):
 
         "Your name is Bob. How can I help you today, Bob?"
 
+    .. versionadded:: 0.3.26
+
+    You can also initialize ChatOpenAI with :attr:`use_previous_response_id`.
+    Input messages up to the most recent response will then be dropped from request
+    payloads, and ``previous_response_id`` will be set using the ID of the most
+    recent response.
+
+    .. code-block:: python
+
+        llm = ChatOpenAI(model="gpt-4.1-mini", use_previous_response_id=True)
+
     .. dropdown:: Reasoning output
 
         OpenAI's Responses API supports `reasoning models <https://platform.openai.com/docs/guides/reasoning?api-mode=responses>`_
         that expose a summary of internal reasoning processes.
 
+        .. note::
+            ``langchain-openai >= 0.3.26`` allows users to opt-in to an updated
+            AIMessage format when using the Responses API. Setting
+
+            .. code-block:: python
+
+                llm = ChatOpenAI(model="...", output_version="responses/v1")
+
+            will format output from reasoning summaries, built-in tool invocations, and
+            other response items into the message's ``content`` field, rather than
+            ``additional_kwargs``. We recommend this format for new applications.
+
         .. code-block:: python
 
             from langchain_openai import ChatOpenAI
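
The new ``use_previous_response_id`` docs above state that input messages up to the most recent response are dropped from request payloads and replaced by ``previous_response_id``. A rough sketch of the multi-turn pattern this implies is below; the chat-history calling convention shown is an assumption for illustration and is not part of the commit.

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4.1-mini", use_previous_response_id=True)

    # First turn: a normal request.
    first = llm.invoke([{"role": "user", "content": "Hi, I'm Bob."}])

    # Second turn: the full history is passed in as usual, but per the docstring
    # everything up to (and including) the most recent response should be dropped
    # from the payload and referenced via previous_response_id instead.
    followup = llm.invoke(
        [
            {"role": "user", "content": "Hi, I'm Bob."},
            first,
            {"role": "user", "content": "What is my name?"},
        ]
    )
    print(followup.text())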
@@ -2357,24 +2392,23 @@ class GetPopulation(BaseModel):
             }
 
             llm = ChatOpenAI(
-                model="o4-mini", use_responses_api=True, model_kwargs={"reasoning": reasoning}
+                model="o4-mini", reasoning=reasoning, output_version="responses/v1"
             )
             response = llm.invoke("What is 3^3?")
 
+            # Response text
             print(f"Output: {response.text()}")
-            print(f"Reasoning: {response.additional_kwargs['reasoning']}")
 
-        .. code-block:: none
+            # Reasoning summaries
+            for block in response.content:
+                if block["type"] == "reasoning":
+                    for summary in block["summary"]:
+                        print(summary["text"])
 
-            Output: 3^3 = 27.
+        .. code-block:: none
 
-            Reasoning: {
-                'id': 'rs_67fffc44b1c08191b6ca9bead6d832590433145b1786f809',
-                'summary': [
-                    {'text': 'The user wants to know...', 'type': 'summary_text'}
-                ],
-                'type': 'reasoning'
-            }
+            Output: 3³ = 27
+            Reasoning: The user wants to know...
 
     .. dropdown:: Structured output
 
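
The removed and added lines in the last hunk contrast where reasoning output lives in the two message formats. A side-by-side sketch is below: the legacy constructor call is copied from the removed line and the opted-in call from the added one, while the contents of the ``reasoning`` dict (only its closing brace appears in the hunk) are an assumption.

    from langchain_openai import ChatOpenAI

    # Assumed reasoning settings; the hunk above only shows the closing "}".
    reasoning = {"effort": "medium", "summary": "auto"}

    # Legacy (default) format: the reasoning item is stored in additional_kwargs.
    legacy_llm = ChatOpenAI(
        model="o4-mini", use_responses_api=True, model_kwargs={"reasoning": reasoning}
    )
    legacy_msg = legacy_llm.invoke("What is 3^3?")
    legacy_summaries = [
        s["text"]
        for s in legacy_msg.additional_kwargs.get("reasoning", {}).get("summary", [])
    ]

    # Opted-in "responses/v1" format: the same item appears as a content block.
    v1_llm = ChatOpenAI(model="o4-mini", reasoning=reasoning, output_version="responses/v1")
    v1_msg = v1_llm.invoke("What is 3^3?")
    v1_summaries = [
        s["text"]
        for block in v1_msg.content
        if isinstance(block, dict) and block.get("type") == "reasoning"
        for s in block.get("summary", [])
    ]

    print(legacy_summaries, v1_summaries)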
