spring-ai-docs/src/main/antora/modules/ROOT/pages/concepts.adoc (9 additions, 9 deletions)
@@ -70,11 +70,11 @@ Initially starting as simple strings, prompts have evolved to include multiple m
== Embeddings

Embeddings are numerical representations of text, images, or videos that capture relationships between inputs.

Embeddings work by converting text, images, and videos into arrays of floating point numbers, called vectors.
These vectors are designed to capture the meaning of the text, images, and videos.
The length of the embedding array is called the vector's dimensionality.

By calculating the numerical distance between the vector representations of two pieces of text, an application can determine the similarity between the objects used to generate the embedding vectors.
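A common choice for that distance is cosine similarity over the two vectors. The following is a minimal, illustrative Java sketch, not part of the Spring AI API; the `float[]` arguments stand in for vectors produced by an embedding model and must share the same dimensionality.

[source,java]
----
// Illustrative sketch: cosine similarity between two embedding vectors.
// The float[] inputs stand in for the output of an embedding model.
public final class EmbeddingMath {

    public static double cosineSimilarity(float[] a, float[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("Vectors must have the same dimensionality");
        }
        double dot = 0.0;
        double normA = 0.0;
        double normB = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        // A value close to 1.0 means the underlying inputs are semantically similar.
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
----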
@@ -169,13 +169,13 @@ This is the reason to use a vector database. It is very good at finding similar
image::spring-ai-rag.jpg[Spring AI RAG, width=1000, align="center"]

-* The xref::api/etl-pipeline.adoc[ETL pipeline] provides further information about orchestrating the flow of extracting data from the data sources and store it in a structured vector store, ensuring data is in the optimal format for retrieval by the AI model.
-* The xref::api/chatclient.adoc#_retrieval_augmented_generation[ChatClient - RAG] explains how to use the `QuestionAnswerAdvisor` advisor to enable the RAG capability to your application.
+* The xref::api/etl-pipeline.adoc[ETL pipeline] provides further information about orchestrating the flow of extracting data from data sources and storing it in a structured vector store, ensuring data is in the optimal format for retrieval when passing it to the AI model.
+* The xref::api/chatclient.adoc#_retrieval_augmented_generation[ChatClient - RAG] explains how to use the `QuestionAnswerAdvisor` advisor to enable the RAG capability in your application.
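As a rough sketch of the `QuestionAnswerAdvisor` usage referenced above (exact package names and builder methods can differ between Spring AI versions, and the service name and wiring here are hypothetical):

[source,java]
----
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.QuestionAnswerAdvisor;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.stereotype.Service;

// Sketch: a ChatClient call augmented with documents retrieved from a vector store.
// Assumes a ChatClient.Builder and a VectorStore (already populated, for example by
// the ETL pipeline) are available as Spring beans.
@Service
public class RagAnswerService {

    private final ChatClient chatClient;
    private final VectorStore vectorStore;

    public RagAnswerService(ChatClient.Builder builder, VectorStore vectorStore) {
        this.chatClient = builder.build();
        this.vectorStore = vectorStore;
    }

    public String answer(String question) {
        return this.chatClient.prompt()
                // The advisor retrieves similar documents and adds them to the prompt
                // before the request is sent to the model.
                .advisors(new QuestionAnswerAdvisor(this.vectorStore))
                .user(question)
                .call()
                .content();
    }
}
----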
[[concept-fc]]
=== Function Calling

-Large Language Models (LLMs) are frozen after training, leading to stale knowledge and they are unable to access or modify external data.
+Large Language Models (LLMs) are frozen after training, leading to stale knowledge, and they are unable to access or modify external data.
The xref::api/functions.adoc[Function Calling] mechanism addresses these shortcomings.
It allows you to register your own functions to connect the large language models to the APIs of external systems.
@@ -188,8 +188,8 @@ Additionally, you can define and reference multiple functions in a single prompt
-* (1) perform a chat request along with a function definition information.
-Later provides the `name`, `description` (e.g. explaining when the Model should call the function), and `input parameters` (e.g. the function's input parameters schema).
+* (1) perform a chat request sending along function definition information.
+The latter provides the `name`, `description` (e.g. explaining when the Model should call the function), and `input parameters` (e.g. the function's input parameters schema).
* (2) when the Model decides to call the function, it will call the function with the input parameters and return the output to the model.
* (3) Spring AI handles this conversation for you.
It dispatches the function call to the appropriate function and returns the result to the model.
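As an illustrative sketch of steps (1) to (3) in the bean-registration style described in the Function Calling docs; the `currentWeather` name, the request/response records, and the `.functions(...)` method are assumptions and may differ across Spring AI versions:

[source,java]
----
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;

// Hypothetical example: a weather lookup exposed to the model as a callable function.
@Configuration
public class WeatherFunctionConfig {

    public record WeatherRequest(String city) {}

    public record WeatherResponse(double temperatureCelsius) {}

    @Bean
    @Description("Get the current temperature for a city") // helps the model decide when to call it
    public Function<WeatherRequest, WeatherResponse> currentWeather() {
        // Replace with a call to a real weather API; hard-coded here for illustration.
        return request -> new WeatherResponse(22.0);
    }
}
----

Referencing the function by name in a request sends its definition along with the chat request (step 1); when the model asks for the call, Spring AI invokes the bean and returns the result to the model (steps 2 and 3):

[source,java]
----
// Given a configured ChatClient instance.
String reply = chatClient.prompt()
        .user("Should I pack an umbrella for Amsterdam today?")
        .functions("currentWeather") // name of the function bean defined above
        .call()
        .content();
----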