feat(openai): support reasoning_content parsing for Qwen-compatible models #33836
## Description

This PR adds logic to extract the `reasoning_content` (or equivalent "thinking" field) returned by Qwen models that follow the OpenAI-compatible ChatCompletion API. The goal is to properly surface the model's internal reasoning output through the `AIMessage.additional_kwargs` field without affecting any existing models or functionality.

The change addresses an existing issue (#33672), originally reported in Chinese, describing that Qwen models expose an additional `reasoning_content` field that LangChain currently ignores.

## Background
Qwen models (e.g., `qwen3-chat`) use the OpenAI-compatible endpoint (`/v1/chat/completions`) but add an extra field to the response message. Until now, LangChain dropped this field. This PR ensures it is correctly parsed and preserved in both standard and streaming responses.
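For illustration, an assumed (not verbatim) response payload with the extra field might look like this; everything except `reasoning_content` follows OpenAI's standard ChatCompletion schema:

```python
# Illustrative Qwen-compatible ChatCompletion response. Identical to
# OpenAI's schema except for the extra "reasoning_content" key inside
# the message object. Values are placeholders.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "qwen3-chat",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The answer is 42.",
                "reasoning_content": "First, consider what the question asks...",
            },
            "finish_reason": "stop",
        }
    ],
}

# The field LangChain previously discarded:
message = response["choices"][0]["message"]
print(message["reasoning_content"])
```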
## Implementation

- Added a new module `reasoning_parser.py` under `langchain_openai/chat_models/`.
- Added helpers `extract_reasoning_content()` and `extract_reasoning_delta()`, wired into `_create_chat_result()` and `_convert_chunk_to_generation_chunk()` respectively.
- Parsing is applied only to Qwen-compatible models (gated by `if "qwen" in model_name.lower()`).

## Tests

- Added unit tests in `tests/unit_tests/chat_models/test_reasoning_parser.py`, covering alias fields (e.g., `think`, `thought`).

All new and modified tests for this feature pass:
When running `pytest tests/unit_tests/chat_models -v`: `213 passed, 1 xpassed, 23 warnings in 36.95s`.

When running the full repository suite via `make test`, 7 unrelated errors appear. These errors are also reproducible on the `main` branch and are caused by pre-existing async/network tests (e.g., `test_glm4_astream`, `test_openai_astream`, `test_openai_ainvoke`) that rely on `pytest_socket` restrictions. To confirm this, the same 7 errors occur on a clean checkout of `main` without any local modifications. Therefore, no new test regressions were introduced by this PR.
## Impact

`ChatOpenAI` behavior remains the same for all standard OpenAI models.

## Notes for Reviewers
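As a reviewer aid, here is a minimal, LangChain-free sketch of how the parsed field ends up in `additional_kwargs`; `AIMessageStub` is a stand-in for the real `AIMessage`, and the payload values are placeholders:

```python
from dataclasses import dataclass, field


# Stand-in for langchain_core's AIMessage so this sketch runs without
# LangChain installed; only additional_kwargs matters here.
@dataclass
class AIMessageStub:
    content: str
    additional_kwargs: dict = field(default_factory=dict)


# Roughly what _create_chat_result() does after this PR: copy the
# extracted reasoning into additional_kwargs instead of discarding it.
raw_message = {
    "role": "assistant",
    "content": "The answer is 42.",
    "reasoning_content": "Consider the question step by step...",
}
msg = AIMessageStub(content=raw_message["content"])
if raw_message.get("reasoning_content"):
    msg.additional_kwargs["reasoning_content"] = raw_message["reasoning_content"]

print(msg.additional_kwargs["reasoning_content"])
```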
- Tests do not require an `OPENAI_API_KEY`.

## Checklist