
Commit 9470c2b

YunKuiLuWillam2004 authored and committed

docs(zhipu): fix zhipuai-chat.adoc (spring-projects#4387)

- Change default model from 'GLM-3-Turbo' to 'glm-4-air' throughout examples
- Fix formatting consistency in documentation tables

Signed-off-by: YunKui Lu <[email protected]>
Signed-off-by: 家娃 <[email protected]>
1 parent 39b282a commit 9470c2b

File tree

1 file changed

+8
-8
lines changed


spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/zhipuai-chat.adoc

Lines changed: 8 additions & 8 deletions
@@ -102,7 +102,7 @@ The prefix `spring.ai.retry` is used as the property prefix that lets you config
 
 ==== Connection Properties
 
-The prefix `spring.ai.zhiPu` is used as the property prefix that lets you connect to ZhiPuAI.
+The prefix `spring.ai.zhipuai` is used as the property prefix that lets you connect to ZhiPuAI.
 
 [cols="3,5,1", stripes=even]
 |====
@@ -133,9 +133,9 @@ The prefix `spring.ai.zhipuai.chat` is the property prefix that lets you configu
 
 | spring.ai.zhipuai.chat.enabled (Removed and no longer valid) | Enable ZhiPuAI chat model. | true
 | spring.ai.model.chat | Enable ZhiPuAI chat model. | zhipuai
-| spring.ai.zhipuai.chat.base-url | Optional overrides the spring.ai.zhipuai.base-url to provide chat specific url | https://open.bigmodel.cn/api/paas
-| spring.ai.zhipuai.chat.api-key | Optional overrides the spring.ai.zhipuai.api-key to provide chat specific api-key | -
-| spring.ai.zhipuai.chat.options.model | This is the ZhiPuAI Chat model to use | `GLM-3-Turbo` (the `GLM-3-Turbo`, `GLM-4`, `GLM-4-Air`, `GLM-4-AirX`, `GLM-4-Flash`, and `GLM-4V` point to the latest model versions)
+| spring.ai.zhipuai.chat.base-url | Optional overrides the spring.ai.zhipuai.base-url to provide chat specific url. | https://open.bigmodel.cn/api/paas
+| spring.ai.zhipuai.chat.api-key | Optional overrides the spring.ai.zhipuai.api-key to provide chat specific api-key. | -
+| spring.ai.zhipuai.chat.options.model | This is the ZhiPuAI Chat model to use. You can select between models such as: `glm-4.5`, `glm-4.5-air`, `glm-4-air`, and more. | `glm-4-air`
 | spring.ai.zhipuai.chat.options.maxTokens | The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | -
 | spring.ai.zhipuai.chat.options.temperature | What sampling temperature to use, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. | 0.7
 | spring.ai.zhipuai.chat.options.topP | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | 1.0
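For reference, the corrected default can also be set explicitly via configuration. This is an illustrative `application.properties` fragment built from the property names in the table above; the api-key reference is a placeholder, not a real credential:

```properties
# Illustrative fragment: uses the property names from the table above.
# ZHIPU_AI_API_KEY is assumed to be set in the environment.
spring.ai.zhipuai.api-key=${ZHIPU_AI_API_KEY}
spring.ai.zhipuai.chat.options.model=glm-4-air
spring.ai.zhipuai.chat.options.temperature=0.7
```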
@@ -169,7 +169,7 @@ ChatResponse response = chatModel.call(
     new Prompt(
         "Generate the names of 5 famous pirates.",
         ZhiPuAiChatOptions.builder()
-            .model(ZhiPuAiApi.ChatModel.GLM_3_Turbo.getValue())
+            .model(ZhiPuAiApi.ChatModel.GLM_4_Air.getValue())
             .temperature(0.5)
             .build()
     ));
@@ -252,7 +252,7 @@ Next, create a `ZhiPuAiChatModel` and use it for text generations:
 var zhiPuAiApi = new ZhiPuAiApi(System.getenv("ZHIPU_AI_API_KEY"));
 
 var chatModel = new ZhiPuAiChatModel(this.zhiPuAiApi, ZhiPuAiChatOptions.builder()
-    .model(ZhiPuAiApi.ChatModel.GLM_3_Turbo.getValue())
+    .model(ZhiPuAiApi.ChatModel.GLM_4_Air.getValue())
     .temperature(0.4)
     .maxTokens(200)
     .build());
@@ -284,11 +284,11 @@ ChatCompletionMessage chatCompletionMessage =
 
 // Sync request
 ResponseEntity<ChatCompletion> response = this.zhiPuAiApi.chatCompletionEntity(
-    new ChatCompletionRequest(List.of(this.chatCompletionMessage), ZhiPuAiApi.ChatModel.GLM_3_Turbo.getValue(), 0.7, false));
+    new ChatCompletionRequest(List.of(this.chatCompletionMessage), ZhiPuAiApi.ChatModel.GLM_4_Air.getValue(), 0.7, false));
 
 // Streaming request
 Flux<ChatCompletionChunk> streamResponse = this.zhiPuAiApi.chatCompletionStream(
-    new ChatCompletionRequest(List.of(this.chatCompletionMessage), ZhiPuAiApi.ChatModel.GLM_3_Turbo.getValue(), 0.7, true));
+    new ChatCompletionRequest(List.of(this.chatCompletionMessage), ZhiPuAiApi.ChatModel.GLM_4_Air.getValue(), 0.7, true));
 ----
 
 Follow the https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-zhipuai/src/main/java/org/springframework/ai/zhipuai/api/ZhiPuAiApi.java[ZhiPuAiApi.java]'s JavaDoc for further information.
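The properties table in this diff documents that chat-specific properties (e.g. `spring.ai.zhipuai.chat.api-key`) override the base ZhiPuAI properties when both are set. That fallback behavior can be sketched with plain `java.util.Properties`; this is an illustrative stand-alone class with a hypothetical `resolve` helper, not Spring's actual relaxed-binding machinery:

```java
import java.util.Properties;

// Sketch of the "chat-specific overrides base" lookup described in the
// properties table. PropertyFallback and resolve() are hypothetical names
// used only for illustration.
public class PropertyFallback {

    // Return the chat-specific value if present, else fall back to the base key.
    static String resolve(Properties props, String chatKey, String baseKey) {
        return props.getProperty(chatKey, props.getProperty(baseKey));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("spring.ai.zhipuai.api-key", "base-key");
        props.setProperty("spring.ai.zhipuai.chat.api-key", "chat-key");
        props.setProperty("spring.ai.zhipuai.base-url", "https://open.bigmodel.cn/api/paas");

        // Chat-specific key wins when both are set.
        System.out.println(resolve(props,
                "spring.ai.zhipuai.chat.api-key", "spring.ai.zhipuai.api-key"));

        // No chat-specific base-url is set, so the base value is used.
        System.out.println(resolve(props,
                "spring.ai.zhipuai.chat.base-url", "spring.ai.zhipuai.base-url"));
    }
}
```

In Spring's real binding the override happens inside the auto-configuration; the sketch only mirrors the documented precedence.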
