diff --git a/mcp/common/src/main/java/org/springframework/ai/mcp/McpToolUtils.java b/mcp/common/src/main/java/org/springframework/ai/mcp/McpToolUtils.java
index 58400952518..2f8f366d076 100644
--- a/mcp/common/src/main/java/org/springframework/ai/mcp/McpToolUtils.java
+++ b/mcp/common/src/main/java/org/springframework/ai/mcp/McpToolUtils.java
@@ -278,7 +278,7 @@ public static McpServerFeatures.AsyncToolSpecification toAsyncToolSpecification(
 *
 * @param toolCallback the Spring AI tool callback to convert
 * @param mimeType the MIME type of the output content
- * @return an MCP asynchronous tool specificaiotn that wraps the tool callback
+ * @return an MCP asynchronous tool specification that wraps the tool callback
 * @see McpServerFeatures.AsyncToolSpecification
 * @see Schedulers#boundedElastic()
 */
diff --git a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chatclient.adoc b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chatclient.adoc
index 16765d7ef6d..48f39dc70cc 100644
--- a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chatclient.adoc
+++ b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chatclient.adoc
@@ -296,7 +296,7 @@ Flux output = chatClient.prompt()
 You can also stream the `ChatResponse` using the method `Flux chatResponse()`.

 In the future, we will offer a convenience method that will let you return a Java entity with the reactive `stream()` method.
-In the meantime, you should use the xref:api/structured-output-converter.adoc#StructuredOutputConverter[Structured Output Converter] to convert the aggregated response explicity as shown below.
+In the meantime, you should use the xref:api/structured-output-converter.adoc#StructuredOutputConverter[Structured Output Converter] to convert the aggregated response explicitly as shown below.
 This also demonstrates the use of parameters in the fluent API that will be discussed in more detail in a later section of the documentation.

 [source,java]
diff --git a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/azure-openai-embeddings.adoc b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/azure-openai-embeddings.adoc
index 034282e29d5..a66d840db7d 100644
--- a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/azure-openai-embeddings.adoc
+++ b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/azure-openai-embeddings.adoc
@@ -145,7 +145,7 @@ The prefix `spring.ai.azure.openai` is the property prefix to configure the conn
 | spring.ai.azure.openai.endpoint | The endpoint from the Azure AI OpenAI `Keys and Endpoint` section under `Resource Management` | -
 | spring.ai.azure.openai.openai-api-key | (non Azure) OpenAI API key. Used to authenticate with the OpenAI service, instead of Azure OpenAI. This automatically sets the endpoint to https://api.openai.com/v1. Use either `api-key` or `openai-api-key` property.
-With this configuration the `spring.ai.azure.openai.embedding.options.deployment-name` is threated as an https://platform.openai.com/docs/models[OpenAi Model] name.| -
+With this configuration the `spring.ai.azure.openai.embedding.options.deployment-name` is treated as an https://platform.openai.com/docs/models[OpenAi Model] name.| -

 |====
diff --git a/spring-ai-docs/src/main/antora/modules/ROOT/pages/observability/index.adoc b/spring-ai-docs/src/main/antora/modules/ROOT/pages/observability/index.adoc
index e0341e46889..222ce3ea020 100644
--- a/spring-ai-docs/src/main/antora/modules/ROOT/pages/observability/index.adoc
+++ b/spring-ai-docs/src/main/antora/modules/ROOT/pages/observability/index.adoc
@@ -116,7 +116,7 @@ They measure the time spent in the advisor (including the time spend on the inne
 == Chat Model

 NOTE: Observability features are currently supported only for `ChatModel` implementations from the following AI model
-providers: Anthropic, Azure OpenAI, Mistral AI, Ollama, OpenAI, Vertex AI, MiniMax, Moonshot, QianFan, Zhiu AI.
+providers: Anthropic, Azure OpenAI, Mistral AI, Ollama, OpenAI, Vertex AI, MiniMax, Moonshot, QianFan, Zhipu AI.
 Additional AI model providers will be supported in a future release.

 The `gen_ai.client.operation` observations are recorded when calling the ChatModel `call` or `stream` methods.