Changed file: models/spring-ai-azure-openai/src/test/java/org/springframework/ai/azure/openai/metadata/AzureOpenAiChatModelMetadataTests.java

Changed file: spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/azure-openai-chat.adoc (6 additions, 6 deletions)
@@ -52,8 +52,8 @@ For example, a deployment named 'MyAiDeployment' could be configured to use eith

 To get started, follow these steps to create a deployment with the default settings:

-Deployment Name: `gpt-35-turbo`
-Model Name: `gpt-35-turbo`
+Deployment Name: `gpt-4o`
+Model Name: `gpt-4o`

 This Azure configuration aligns with the default configurations of the Spring Boot Azure AI Starter and its Autoconfiguration feature.

 If you use a different Deployment Name, make sure to update the configuration property accordingly:
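For context, the configuration property in question is the chat deployment name. A minimal `application.properties` sketch, assuming the standard Spring AI Azure OpenAI starter properties (the credential values are placeholders, and `gpt-4o` matches the new default deployment name from this PR):

```properties
# Credentials for the Azure OpenAI resource (placeholder values)
spring.ai.azure.openai.api-key=${AZURE_OPENAI_API_KEY}
spring.ai.azure.openai.endpoint=${AZURE_OPENAI_ENDPOINT}

# Must match the Deployment Name configured in the Azure portal,
# not the underlying model name
spring.ai.azure.openai.chat.options.deployment-name=gpt-4o
```

If the Azure portal deployment is named differently (e.g. `MyAiDeployment` from the example above), that name goes in `deployment-name` instead.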
@@ -132,7 +132,7 @@ The prefix `spring.ai.azure.openai.chat` is the property prefix that configures

 It's important to note that within an Azure OpenAI deployment, the "Deployment Name" is distinct from the model itself.
 The confusion around these terms stems from the intention to make the Azure OpenAI client library compatible with the original OpenAI endpoint.
 The deployment structures offered by Azure OpenAI and Sam Altman's OpenAI differ significantly.
-Deployments model name to provide as part of this completions request. | gpt-35-turbo
+Deployments model name to provide as part of this completions request. | gpt-4o
 | spring.ai.azure.openai.chat.options.maxTokens | The maximum number of tokens to generate. | -
 | spring.ai.azure.openai.chat.options.temperature | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict. | 0.7
 | spring.ai.azure.openai.chat.options.topP | An alternative to sampling with temperature called nucleus sampling. This value causes the model to consider the results of tokens with the provided probability mass. | -
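The sampling options in the table above map onto properties via Spring Boot's relaxed binding (`maxTokens` becomes `max-tokens`, `topP` becomes `top-p`). A hedged sketch, setting only one of the two sampling knobs, as the table itself recommends (values are illustrative, not defaults):

```properties
# Adjust temperature OR top-p, not both: the table above notes that
# their interaction is difficult to predict
spring.ai.azure.openai.chat.options.temperature=0.7
# spring.ai.azure.openai.chat.options.top-p=0.9

# Cap completion length; the table shows no default, so unset means
# the service decides
spring.ai.azure.openai.chat.options.max-tokens=512
```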
@@ -225,7 +225,7 @@ Add a `application.properties` file, under the `src/main/resources` directory, t
Changed file: spring-ai-spring-boot-autoconfigure/src/main/java/org/springframework/ai/autoconfigure/azure/openai/AzureOpenAiChatProperties.java (1 addition, 1 deletion)
@@ -24,7 +24,7 @@ public class AzureOpenAiChatProperties {
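The body of this hunk is not shown above. Given the other changes in this PR, it presumably updates the default deployment name in the properties class from `gpt-35-turbo` to `gpt-4o`. The following is a hypothetical sketch of such a `@ConfigurationProperties`-style class, not the actual Spring AI source; all field and constant names are assumptions for illustration:

```java
// Hypothetical sketch of an Azure OpenAI chat properties holder; names and
// structure are assumptions, not the real Spring AI class.
public class AzureOpenAiChatProperties {

    // Default deployment name, consistent with the gpt-35-turbo -> gpt-4o
    // change made elsewhere in this PR (assumed, since the hunk is elided).
    public static final String DEFAULT_DEPLOYMENT_NAME = "gpt-4o";

    private String deploymentName = DEFAULT_DEPLOYMENT_NAME;

    public String getDeploymentName() {
        return deploymentName;
    }

    public void setDeploymentName(String deploymentName) {
        this.deploymentName = deploymentName;
    }
}
```

With a sketch like this, leaving the property unset binds the `gpt-4o` default, while `spring.ai.azure.openai.chat.options.deployment-name` overrides it.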