
Commit c2aa35d
Improve Multimodality doc
1 parent ee9de90


2 files changed (+9, -11 lines)


spring-ai-docs/src/main/antora/modules/ROOT/nav.adoc

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@
 * xref:observabilty/index.adoc[]
 * xref:api/functions.adoc[Function Calling]
 * xref:api/multimodality.adoc[Multimodality]
-* xref:api/testing.adoc[]
+* xref:api/testing.adoc[LLM Evaluation]
 * xref:api/structured-output-converter.adoc[Structured Output]

 * Service Connections

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/multimodality.adoc

Lines changed: 8 additions & 10 deletions
@@ -1,23 +1,21 @@
 [[Multimodality]]
 = Multimodality API

+// image::orbis-sensualium-pictus2.jpg[Orbis Sensualium Pictus, align="center"]
+
+> "All things that are naturally connected ought to be taught in combination" - John Amos Comenius, "Orbis Sensualium Pictus", 1658
+
 Humans process knowledge, simultaneously across multiple modes of data inputs.
 The way we learn, our experiences are all multimodal.
 We don't have just vision, just audio and just text.

-These foundational principles of learning were articulated by the father of modern education link:https://en.wikipedia.org/wiki/John_Amos_Comenius[John Amos Comenius], in his work, "Orbis Sensualium Pictus", dating back to 1658.
-
-image::orbis-sensualium-pictus2.jpg[Orbis Sensualium Pictus, align="center"]
-
-> "All things that are naturally connected ought to be taught in combination"
-
-Contrary to those principles, in the past, our approach to Machine Learning was often focused on specialized models tailored to process a single modality.
+Contrary to those principles, Machine Learning was often focused on specialized models tailored to process a single modality.
 For instance, we developed audio models for tasks like text-to-speech or speech-to-text, and computer vision models for tasks such as object detection and classification.

 However, a new wave of multimodal large language models starts to emerge.
-Examples include OpenAI's GPT-4 Vision, Google's Vertex AI Gemini Pro Vision, Anthropic's Claude3, and open source offerings LLaVA and balklava are able to accept multiple inputs, including text images, audio and video and generate text responses by integrating these inputs.
+Examples include OpenAI's GPT-4o, Google's Vertex AI Gemini 1.5, Anthropic's Claude 3, and open source offerings such as Llama 3.2, LLaVA and BakLLaVA, which are able to accept multiple inputs, including text, images, audio and video, and to generate text responses by integrating these inputs.

-The multimodal large language model (LLM) features enable the models to process and generate text in conjunction with other modalities such as images, audio, or video.
+NOTE: Multimodal large language model (LLM) features enable models to process and generate text in conjunction with other modalities such as images, audio, or video.

 == Spring AI Multimodality

@@ -68,7 +66,7 @@ and produce a response like:
 Spring AI provides multimodal support for the following chat models:

 * xref:api/chat/openai-chat.adoc#_multimodal[OpenAI (e.g. GPT-4 and GPT-4o models)]
-* xref:api/chat/ollama-chat.adoc#_multimodal[Ollama (e.g. LlaVa and Baklava models)]
+* xref:api/chat/ollama-chat.adoc#_multimodal[Ollama (e.g. LLaVA, BakLLaVA and Llama 3.2 models)]
 * xref:api/chat/vertexai-gemini-chat.adoc#_multimodal[Vertex AI Gemini (e.g. gemini-1.5-pro-001, gemini-1.5-flash-001 models)]
 * xref:api/chat/anthropic-chat.adoc#_multimodal[Anthropic Claude 3]
 * xref:api/chat/bedrock/bedrock-anthropic3.adoc#_multimodal[AWS Bedrock Anthropic Claude 3]
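
For context on the API this doc describes: a multimodal request pairs a text instruction with one or more `Media` attachments on the same user message. The sketch below shows the general shape, assuming a configured `ChatModel` bean and a classpath image named `/multimodal.test.png` (both placeholders here); exact `Media`/`UserMessage` constructor signatures vary between Spring AI milestone releases.

[source,java]
----
import java.util.List;

import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.model.Media;
import org.springframework.core.io.ClassPathResource;
import org.springframework.util.MimeTypeUtils;

// Minimal sketch, not the exact snippet from the doc: send an image plus a
// text question to a multimodal chat model and read back its text answer.
class MultimodalitySketch {

    String describe(ChatModel chatModel) {
        // Attach the image as Media alongside the text instruction, so the
        // model receives both modalities in a single user message.
        var userMessage = new UserMessage(
                "Explain what do you see in this picture?",
                List.of(new Media(MimeTypeUtils.IMAGE_PNG,
                        new ClassPathResource("/multimodal.test.png"))));

        // The model integrates the text and image inputs and answers in text.
        ChatResponse response = chatModel.call(new Prompt(List.of(userMessage)));
        return response.getResult().getOutput().getContent();
    }
}
----

Any of the chat models listed above (OpenAI, Ollama, Vertex AI Gemini, Anthropic, Bedrock) can back the `ChatModel` here; what differs per provider is which media types it accepts.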
