From 6499abdb19da70cc064fa02aaeaeb0641288f04c Mon Sep 17 00:00:00 2001
From: jonghoon park
Date: Mon, 31 Mar 2025 20:18:18 +0900
Subject: [PATCH] docs: fix BakLLaVA model name spelling in multimodality documentation

Signed-off-by: Wenhao Ma <296232679@qq.com>
---
 .../main/antora/modules/ROOT/pages/api/chat/ollama-chat.adoc  | 2 +-
 .../src/main/antora/modules/ROOT/pages/api/multimodality.adoc | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/ollama-chat.adoc b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/ollama-chat.adoc
index 0b8e65f4c98..08770c5a16e 100644
--- a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/ollama-chat.adoc
+++ b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/ollama-chat.adoc
@@ -252,7 +252,7 @@ TIP: You need Ollama 0.2.8 or newer to use the functional calling capabilities a
 
 Multimodality refers to a model's ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats.
 
-Some of the models available in Ollama with multimodality support are https://ollama.com/library/llava[LLaVa] and https://ollama.com/library/bakllava[bakllava] (see the link:https://ollama.com/search?c=vision[full list]).
+Some of the models available in Ollama with multimodality support are https://ollama.com/library/llava[LLaVA] and https://ollama.com/library/bakllava[BakLLaVA] (see the link:https://ollama.com/search?c=vision[full list]).
 For further details, refer to the link:https://llava-vl.github.io/[LLaVA: Large Language and Vision Assistant].
 
 The Ollama link:https://github.com/ollama/ollama/blob/main/docs/api.md#parameters-1[Message API] provides an "images" parameter to incorporate a list of base64-encoded images with the message.
diff --git a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/multimodality.adoc b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/multimodality.adoc
index 1829b8c152e..5c1933fed25 100644
--- a/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/multimodality.adoc
+++ b/spring-ai-docs/src/main/antora/modules/ROOT/pages/api/multimodality.adoc
@@ -13,7 +13,7 @@ Contrary to those principles, the Machine Learning was often focused on speciali
 For instance, we developed audio models for tasks like text-to-speech or speech-to-text, and computer vision models for tasks such as object detection and classification.
 
 However, a new wave of multimodal large language models starts to emerge.
-Examples include OpenAI's GPT-4o , Google's Vertex AI Gemini 1.5, Anthropic's Claude3, and open source offerings Llama3.2, LLaVA and Balklava are able to accept multiple inputs, including text images, audio and video and generate text responses by integrating these inputs.
+Examples include OpenAI's GPT-4o , Google's Vertex AI Gemini 1.5, Anthropic's Claude3, and open source offerings Llama3.2, LLaVA and BakLLaVA are able to accept multiple inputs, including text images, audio and video and generate text responses by integrating these inputs.
 
 NOTE: The multimodal large language model (LLM) features enable the models to process and generate text in conjunction with other modalities such as images, audio, or video.
 
@@ -69,6 +69,6 @@ Spring AI provides multimodal support for the following chat models:
 * xref:api/chat/bedrock-converse.adoc#_multimodal[AWS Bedrock Converse]
 * xref:api/chat/azure-openai-chat.adoc#_multimodal[Azure Open AI (e.g. GPT-4o models)]
 * xref:api/chat/mistralai-chat.adoc#_multimodal[Mistral AI (e.g. Mistral Pixtral models)]
-* xref:api/chat/ollama-chat.adoc#_multimodal[Ollama (e.g. LlaVa, Baklava, Llama3.2 models)]
+* xref:api/chat/ollama-chat.adoc#_multimodal[Ollama (e.g. LLaVA, BakLLaVA, Llama3.2 models)]
 * xref:api/chat/openai-chat.adoc#_multimodal[OpenAI (e.g. GPT-4 and GPT-4o models)]
 * xref:api/chat/vertexai-gemini-chat.adoc#_multimodal[Vertex AI Gemini (e.g. gemini-1.5-pro-001, gemini-1.5-flash-001 models)]
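The `images` parameter mentioned in the patched Ollama page maps onto Spring AI's fluent chat API roughly as sketched below. This is a minimal, illustrative sketch, assuming an auto-configured Ollama-backed `ChatModel` pointed at a vision-capable model such as `llava` or `bakllava` (for example via `spring.ai.ollama.chat.options.model=llava`); the prompt text and the `/multimodal.test.png` classpath resource are made-up placeholders, and exact method signatures may vary between Spring AI milestones.

[source,java]
----
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.core.io.ClassPathResource;
import org.springframework.util.MimeTypeUtils;

class OllamaVisionExample {

    // `chatModel` is assumed to be an Ollama-backed ChatModel configured with a
    // vision-capable model (e.g. llava or bakllava).
    String describeImage(ChatModel chatModel) {
        return ChatClient.create(chatModel).prompt()
                .user(u -> u.text("Explain what you see in this picture.")
                        // The classpath image is sent to Ollama as a base64-encoded
                        // entry in the message's "images" parameter.
                        .media(MimeTypeUtils.IMAGE_PNG, new ClassPathResource("/multimodal.test.png")))
                .call()
                .content();
    }
}
----

The vision model must be available locally before the call succeeds, for example after `ollama pull llava`.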