models/spring-ai-bedrock-converse/src/test/java/org/springframework/ai/bedrock/converse/client/BedrockNovaChatClientIT.java
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/bedrock.adoc (16 additions & 0 deletions)
= Amazon Bedrock

[NOTE]
====
Following the Bedrock recommendations, Spring AI is transitioning to using Amazon Bedrock's Converse API for all chat conversation implementations in Spring AI.
While the existing `InvokeModel API` supports conversation applications, we strongly recommend adopting the xref:api/chat/bedrock-converse.adoc[Bedrock Converse API] for several key benefits:

- Unified Interface: Write your code once and use it with any supported Amazon Bedrock model
- Model Flexibility: Seamlessly switch between different conversation models without code changes
- Extended Functionality: Support for model-specific parameters through dedicated structures
- Tool Support: Native integration with function calling and tool usage capabilities
- Multimodal Capabilities: Built-in support for vision and other multimodal features
- Future-Proof: Aligned with Amazon Bedrock's recommended best practices

The Converse API does not support embedding operations, so these will remain in the current API, and the embedding model functionality in the existing `InvokeModel API` will be maintained.
====
link:https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html[Amazon Bedrock] is a managed service that provides foundation models from various AI providers, available through a unified API.

Spring AI supports https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html[all the Chat and Embedding AI models] available through Amazon Bedrock by implementing the Spring interfaces `ChatModel`, `StreamingChatModel`, and `EmbeddingModel`.
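To make the "Model Flexibility" benefit concrete: with the Converse-based starter, switching models is a configuration change rather than a code change. Below is a minimal sketch, assuming the `spring-ai-bedrock-converse-spring-boot-starter` auto-configuration and its `spring.ai.bedrock.converse.chat` property prefix; the model IDs are illustrative, so verify exact property names and model IDs against your Spring AI and Bedrock versions.

```properties
# application.properties sketch: hypothetical model IDs for illustration
spring.ai.bedrock.converse.chat.options.model=anthropic.claude-3-5-sonnet-20240620-v1:0
# Switching providers is just a different model ID, e.g.:
# spring.ai.bedrock.converse.chat.options.model=amazon.nova-pro-v1:0
```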
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/bedrock-converse.adoc (113 additions & 8 deletions)
[NOTE]
====
Following the Bedrock recommendations, Spring AI is transitioning to using Amazon Bedrock's Converse API for all chat conversation implementations in Spring AI.
While the existing xref:api/bedrock-chat.adoc[InvokeModel API] supports conversation applications, we strongly recommend adopting the Converse API for all chat conversation models.

The Converse API does not support embedding operations, so these will remain in the current API, and the embedding model functionality in the existing `InvokeModel API` will be maintained.
====
Multimodality refers to a model's ability to simultaneously understand and process information from various sources, including text, image, video, PDF, DOC, HTML, MD, and other data formats.

The Bedrock Converse API supports multimodal inputs, including text and image inputs, and can generate a text response based on the combined input.

You need a model that supports multimodal inputs, such as the Anthropic Claude or Amazon Nova models.
=== Images

For link:https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html[models] that support vision multimodality, such as Amazon Nova, Anthropic Claude, and Llama 3.2, the Bedrock Converse API allows you to include multiple images in the payload. These models can analyze the passed images and answer questions, classify an image, or summarize images based on provided instructions.

Currently, Bedrock Converse supports `base64`-encoded images with the `image/jpeg`, `image/png`, `image/gif`, and `image/webp` MIME types.

Spring AI's `Message` interface supports multimodal AI models by introducing the `Media` type.
It contains data and information about media attachments in messages, using Spring's `org.springframework.util.MimeType` and a `java.lang.Object` for the raw media data.

Below is a simple code example demonstrating the combination of user text with an image.

[source,java]
----
String response = ChatClient.create(chatModel)
        .prompt()
        .user(u -> u.text("Explain what do you see on this picture?")
                .media(Media.Format.IMAGE_PNG, new ClassPathResource("/test.png")))
        .call()
        .content();

logger.info(response);
----

It takes as input the `test.png` image:

image::multimodal.test.png[Multimodal Test Image, 200, 200, align="left"]

along with the text message "Explain what do you see on this picture?", and generates a response like:

----
The image shows a close-up view of a wire fruit basket containing several pieces of fruit.
...
----
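Since Bedrock Converse transmits inline images `base64`-encoded (the framework handles this for you when you pass a `Resource`), here is a self-contained sketch of what that encoding step looks like, using only the JDK. The byte array is a stand-in for real image data, not the contents of `/test.png`.

```java
import java.util.Base64;

public class ImageBase64Sketch {

    // Encode raw image bytes to the base64 form used for inline images in the payload.
    static String encodeImage(byte[] imageBytes) {
        return Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) {
        // First four bytes of the PNG magic number, standing in for a real image payload
        byte[] pngPrefix = { (byte) 0x89, 'P', 'N', 'G' };
        System.out.println(encodeImage(pngPrefix)); // prints iVBORw==
    }
}
```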
=== Video

The link:https://docs.aws.amazon.com/nova/latest/userguide/modalities-video.html[Amazon Nova models] allow you to include a single video in the payload, which can be provided either in base64 format or through an Amazon S3 URI.

Currently, Bedrock Nova supports videos of the `video/x-matroska`, `video/quicktime`, `video/mp4`, `video/webm`, `video/x-flv`, `video/mpeg`, `video/x-ms-wmv`, and `video/3gpp` MIME types.

Spring AI's `Message` interface supports multimodal AI models by introducing the `Media` type.
It contains data and information about media attachments in messages, using Spring's `org.springframework.util.MimeType` and a `java.lang.Object` for the raw media data.

Below is a simple code example demonstrating the combination of user text with a video.

[source,java]
----
String response = ChatClient.create(chatModel)
        .prompt()
        .user(u -> u.text("Explain what do you see in this video?")
                .media(Media.Format.VIDEO_MP4, new ClassPathResource("/test.video.mp4")))
        .call()
        .content();

logger.info(response);
----

It takes as input the `test.video.mp4` video:

image::test.video.jpeg[Multimodal Test Video, 200, 200, align="left"]

along with the text message "Explain what do you see in this video?", and generates a response like:

----
The video shows a group of baby chickens, also known as chicks, huddled together on a surface
...
----
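When building a `Media` payload from a file, the MIME type can be derived from the file extension. Below is a small illustrative helper; the extension-to-type table is an assumption based on the MIME type list above, not a Spring AI API.

```java
import java.util.Map;

public class VideoMimeTypes {

    // Extension-to-MIME-type table for the video formats listed above (illustrative only).
    static final Map<String, String> BY_EXTENSION = Map.of(
            "mkv", "video/x-matroska",
            "mov", "video/quicktime",
            "mp4", "video/mp4",
            "webm", "video/webm",
            "flv", "video/x-flv",
            "mpeg", "video/mpeg",
            "wmv", "video/x-ms-wmv",
            "3gp", "video/3gpp");

    static String mimeTypeFor(String fileName) {
        String ext = fileName.substring(fileName.lastIndexOf('.') + 1).toLowerCase();
        String mimeType = BY_EXTENSION.get(ext);
        if (mimeType == null) {
            throw new IllegalArgumentException("Unsupported video format: " + ext);
        }
        return mimeType;
    }

    public static void main(String[] args) {
        System.out.println(mimeTypeFor("test.video.mp4")); // prints video/mp4
    }
}
```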
=== Documents

For some models, Bedrock allows you to include documents in the payload through Converse API document support, which can be provided in bytes.
The document support has two different variants, as explained below:

- **Text document types** (txt, csv, html, md, and so on), where the emphasis is on text understanding. These use cases include answering based on the textual elements of the document.
- **Media document types** (pdf, docx, xlsx), where the emphasis is on vision-based understanding to answer questions. These use cases include answering questions based on charts, graphs, and so on.

Currently, the Anthropic link:https://docs.anthropic.com/en/docs/build-with-claude/pdf-support[PDF support (beta)] and Amazon Bedrock Nova models support document multimodality.

Below is a simple code example demonstrating the combination of user text with a media document.

[source,java]
----
String response = ChatClient.create(chatModel)
        .prompt()
        .user(u -> u.text(
                "You are a very professional document summarization specialist. Please summarize the given document.")
                .media(Media.Format.DOC_PDF, new ClassPathResource("/spring-ai-reference-overview.pdf")))
        .call()
        .content();

logger.info(response);
----

It takes as input the `spring-ai-reference-overview.pdf` document:

image::test.pdf.png[Multimodal Test PNG, 200, 200, align="left"]

along with the text message "You are a very professional document summarization specialist. Please summarize the given document.", and generates a response like:

----
**Introduction:**
- Spring AI is designed to simplify the development of applications with artificial intelligence (AI) capabilities, aiming to avoid unnecessary complexity.
...
----
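The two document variants described above suggest routing by file type before choosing a prompt. Below is a minimal sketch of that split; the extension sets simply mirror the examples given above and are not an exhaustive or authoritative support list.

```java
import java.util.Locale;
import java.util.Set;

public class DocumentVariants {

    // Extensions taken from the two variant descriptions above; actual support varies per model.
    static final Set<String> TEXT_TYPES = Set.of("txt", "csv", "html", "md");
    static final Set<String> MEDIA_TYPES = Set.of("pdf", "docx", "xlsx");

    static String variantOf(String extension) {
        String ext = extension.toLowerCase(Locale.ROOT);
        if (TEXT_TYPES.contains(ext)) {
            return "text";  // text understanding: answers based on textual elements
        }
        if (MEDIA_TYPES.contains(ext)) {
            return "media"; // vision-based understanding: charts, graphs, layout
        }
        return "unsupported";
    }

    public static void main(String[] args) {
        System.out.println(variantOf("pdf")); // prints media
        System.out.println(variantOf("md"));  // prints text
    }
}
```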

== Sample Controller

Create a new Spring Boot project and add the `spring-ai-bedrock-converse-spring-boot-starter` to your dependencies.
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/bedrock/bedrock-anthropic.adoc (13 additions & 0 deletions)
= Bedrock Anthropic 2 Chat

[NOTE]
====
Following the Bedrock recommendations, Spring AI is transitioning to using Amazon Bedrock's Converse API for all chat conversation implementations in Spring AI.
While the existing `InvokeModel API` supports conversation applications, we strongly recommend adopting the xref:api/chat/bedrock-converse.adoc[Bedrock Converse API] for several key benefits:

- Unified Interface: Write your code once and use it with any supported Amazon Bedrock model
- Model Flexibility: Seamlessly switch between different conversation models without code changes
- Extended Functionality: Support for model-specific parameters through dedicated structures
- Tool Support: Native integration with function calling and tool usage capabilities
- Multimodal Capabilities: Built-in support for vision and other multimodal features
- Future-Proof: Aligned with Amazon Bedrock's recommended best practices
====

NOTE: The Anthropic 2 Chat API is deprecated and replaced by the new Anthropic Claude 3 Message API.
Please use the xref:api/chat/bedrock/bedrock-anthropic3.adoc[Anthropic Claude 3 Message API] for new projects.
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/bedrock/bedrock-anthropic3.adoc (13 additions & 0 deletions)
= Bedrock Anthropic 3

[NOTE]
====
Following the Bedrock recommendations, Spring AI is transitioning to using Amazon Bedrock's Converse API for all chat conversation implementations in Spring AI.
While the existing `InvokeModel API` supports conversation applications, we strongly recommend adopting the xref:api/chat/bedrock-converse.adoc[Bedrock Converse API] for several key benefits:

- Unified Interface: Write your code once and use it with any supported Amazon Bedrock model
- Model Flexibility: Seamlessly switch between different conversation models without code changes
- Extended Functionality: Support for model-specific parameters through dedicated structures
- Tool Support: Native integration with function calling and tool usage capabilities
- Multimodal Capabilities: Built-in support for vision and other multimodal features
- Future-Proof: Aligned with Amazon Bedrock's recommended best practices
====

link:https://www.anthropic.com/[Anthropic Claude] is a family of foundational AI models that can be used in a variety of applications.

The Claude model has the following high-level features
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/bedrock/bedrock-cohere.adoc (13 additions & 0 deletions)
= Cohere Chat

[NOTE]
====
Following the Bedrock recommendations, Spring AI is transitioning to using Amazon Bedrock's Converse API for all chat conversation implementations in Spring AI.
While the existing `InvokeModel API` supports conversation applications, we strongly recommend adopting the xref:api/chat/bedrock-converse.adoc[Bedrock Converse API] for several key benefits:

- Unified Interface: Write your code once and use it with any supported Amazon Bedrock model
- Model Flexibility: Seamlessly switch between different conversation models without code changes
- Extended Functionality: Support for model-specific parameters through dedicated structures
- Tool Support: Native integration with function calling and tool usage capabilities
- Multimodal Capabilities: Built-in support for vision and other multimodal features
- Future-Proof: Aligned with Amazon Bedrock's recommended best practices
====

Provides the Bedrock Cohere chat model.
Integrate generative AI capabilities into essential apps and workflows that improve business outcomes.
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/bedrock/bedrock-jurassic2.adoc (13 additions & 0 deletions)
= Jurassic-2 Chat

[NOTE]
====
Following the Bedrock recommendations, Spring AI is transitioning to using Amazon Bedrock's Converse API for all chat conversation implementations in Spring AI.
While the existing `InvokeModel API` supports conversation applications, we strongly recommend adopting the xref:api/chat/bedrock-converse.adoc[Bedrock Converse API] for several key benefits:

- Unified Interface: Write your code once and use it with any supported Amazon Bedrock model
- Model Flexibility: Seamlessly switch between different conversation models without code changes
- Extended Functionality: Support for model-specific parameters through dedicated structures
- Tool Support: Native integration with function calling and tool usage capabilities
- Multimodal Capabilities: Built-in support for vision and other multimodal features
- Future-Proof: Aligned with Amazon Bedrock's recommended best practices
====

https://aws.amazon.com/bedrock/jurassic/[AI21 Labs Jurassic on Amazon Bedrock] Jurassic is AI21 Labs' family of reliable FMs for the enterprise, powering sophisticated language generation tasks – such as question answering, text generation, search, and summarization – across thousands of live applications.
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/bedrock/bedrock-llama.adoc (13 additions & 0 deletions)
= Llama Chat

[NOTE]
====
Following the Bedrock recommendations, Spring AI is transitioning to using Amazon Bedrock's Converse API for all chat conversation implementations in Spring AI.
While the existing `InvokeModel API` supports conversation applications, we strongly recommend adopting the xref:api/chat/bedrock-converse.adoc[Bedrock Converse API] for several key benefits:

- Unified Interface: Write your code once and use it with any supported Amazon Bedrock model
- Model Flexibility: Seamlessly switch between different conversation models without code changes
- Extended Functionality: Support for model-specific parameters through dedicated structures
- Tool Support: Native integration with function calling and tool usage capabilities
- Multimodal Capabilities: Built-in support for vision and other multimodal features
- Future-Proof: Aligned with Amazon Bedrock's recommended best practices
====

https://ai.meta.com/llama/[Meta's Llama Chat] is part of the Llama collection of large language models.
It excels in dialogue-based applications with a parameter scale ranging from 7 billion to 70 billion.
Leveraging public datasets and over 1 million human annotations, Llama Chat offers context-aware dialogues.