feat(zhipuai): ZhipuAI add thinking and response_format parameter support #4359
Conversation
- Add `thinking` and `response_format` fields to `ZhiPuAiApi` and `ZhiPuAiChatOptions`
- Add `ZhiPuAiChatOptionsTests` with 16 test methods covering all aspects of the class:
  - builder pattern with all fields, including `responseFormat` and `thinking`
  - copy functionality, setters, default values, and equals/hashCode
  - tool callbacks, tool-name validation, and collection handling
  - the stop-sequences alias and fluent setters
- Add documentation for the `response-format.type` and `thinking.type` properties

Signed-off-by: YunKui Lu <[email protected]>
3d215d6 to f216e36
@mxsl-gr Requesting your review for this PR as well. Thank you!
Looks good.

Not related to this PR's changes, but with the GLM-4.5 family of models the default behavior is to have thinking mode enabled. While it does improve output quality, I still feel that default is not ideal, since it noticeably slows down responses.
@mxsl-gr Thank you for your review. If not explicitly set, this PR will not include the `thinking` field in the request body.
/**
 * Controls whether to enable the model's chain of thought. Available options: enabled (default), disabled.
 */
private @JsonProperty("thinking") ChatCompletionRequest.Thinking thinking;
Wouldn't the default value be null if it's not explicitly set?
@ilayaperumalg Yes, the `thinking` field in the request body will be null, but for GLM-4.5 and above, `thinking=null` is equivalent to enabling chain-of-thought reasoning. Here's the English documentation: https://docs.z.ai/api-reference/llm/chat-completion#body-thinking-type
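To make the null-handling concrete, here is a small, self-contained sketch. It uses a simplified stand-in record rather than the actual `ZhiPuAiApi` request type, and assumes the request class uses Jackson's `NON_NULL` inclusion: an unset `thinking` is then simply omitted from the request body, so GLM-4.5+ models fall back to their default of thinking enabled.

```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

class ThinkingSerializationSketch {

    // Simplified stand-in for the chat-completion request: with NON_NULL
    // inclusion, a null `thinking` field is dropped from the JSON body.
    @JsonInclude(JsonInclude.Include.NON_NULL)
    record Request(@JsonProperty("model") String model,
                   @JsonProperty("thinking") Thinking thinking) {
    }

    record Thinking(@JsonProperty("type") String type) {
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // No `thinking` key at all -> the model applies its own default
        // (thinking enabled for GLM-4.5 and above).
        System.out.println(mapper.writeValueAsString(new Request("glm-4.5", null)));
        // -> {"model":"glm-4.5"}

        // Explicitly disabling thinking for faster responses.
        System.out.println(mapper.writeValueAsString(
                new Request("glm-4.5", new Thinking("disabled"))));
        // -> {"model":"glm-4.5","thinking":{"type":"disabled"}}
    }
}
```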
@YunKuiLu Thanks for the clarification!
Add `thinking` and `response_format` fields to `ZhiPuAiApi` and `ZhiPuAiChatOptions`. Official API:
https://docs.bigmodel.cn/api-reference/%E6%A8%A1%E5%9E%8B-api/%E5%AF%B9%E8%AF%9D%E8%A1%A5%E5%85%A8#body-response-format
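For illustration, here is a minimal sketch of configuring the new options. It assumes the builder exposes `thinking(...)` and `responseFormat(...)` setters, as the test descriptions above suggest, and the `Thinking` / `ResponseFormat` construction is only an assumed shape derived from the documented `thinking.type` and `response_format.type` body properties, not the confirmed API of this PR.

```java
import org.springframework.ai.zhipuai.ZhiPuAiChatOptions;
import org.springframework.ai.zhipuai.api.ZhiPuAiApi.ChatCompletionRequest;

class ZhiPuAiThinkingOptionsSketch {

    // Builds chat options that explicitly disable chain-of-thought (for faster
    // responses) and request JSON output. The Thinking and ResponseFormat
    // constructors are assumptions; check the merged ZhiPuAiApi for the real types.
    static ZhiPuAiChatOptions fastJsonOptions() {
        return ZhiPuAiChatOptions.builder()
                .model("glm-4.5")
                .thinking(new ChatCompletionRequest.Thinking("disabled"))                // assumed shape
                .responseFormat(new ChatCompletionRequest.ResponseFormat("json_object")) // assumed shape
                .build();
    }
}
```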