However, this step is optional. Indeed, if you are just testing or don’t have a PostgreSQL database available, LangChain4j also supports an in-memory embedding store. This makes it easy to get started without setting up any external infrastructure.
To use the in-memory store, you will just need to replace the embedding store configuration in the code we are going to write later:
}
```
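To illustrate the optional swap mentioned above, here is a minimal sketch of what the in-memory configuration could look like. It uses LangChain4j's `InMemoryEmbeddingStore`, which keeps embeddings in the JVM heap; the class name `InMemoryStoreSketch` is a hypothetical placeholder, not part of the tutorial's code.

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class InMemoryStoreSketch {
    public static void main(String[] args) {
        // For tests only: embeddings live in memory and are lost when the JVM stops,
        // so no PostgreSQL instance is required.
        EmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>();
        System.out.println("Store ready: " + (embeddingStore != null));
    }
}
```

The rest of the code stays identical: only the `EmbeddingStore` instantiation changes, which is what makes this swap convenient for quick experiments.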
Note that the chatbot will use the streaming mode, as explained in the [Memory Chatbot with LangChain4j](/pages/public_cloud/ai_machine_learning/endpoints_tuto_10_memory_chatbot_langchain4j) tutorial.
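As a reminder of what streaming mode means in practice, here is a hedged sketch based on LangChain4j's 0.x streaming API; the `model` parameter is assumed to be an already configured `StreamingChatLanguageModel`, and the class and method names are illustrative, not taken from the tutorial's code.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.output.Response;

public class StreamingSketch {
    // Stream the answer token by token instead of waiting for the full response.
    static void ask(StreamingChatLanguageModel model, String question) {
        model.generate(question, new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token); // each token is printed as soon as it arrives
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println(); // end of the streamed answer
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```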
### Test the chatbot without a knowledge base
As you can see below, the LLM gives an answer, but not the expected one.
This is not a surprise, since the model was trained before OVHcloud created AI Endpoints. The model does not know this platform.
That is why we are going to create a knowledge base to improve the LLM's answers.
You can find an example file in our public-cloud-examples GitHub repository.
To do this, we are going to create chunks from our document. A chunk is a part of the document that will be transformed into a vector.
It’s then used to perform a similarity search. This is a delicate phase, and in this example, the chunking is based on the number of characters. In a more complex use case, you will create chunks based on the meaning of the text.
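To make the character-based strategy concrete, here is a small, self-contained sketch of fixed-size chunking with an overlap between consecutive chunks. It is a simplified stand-in for what a document splitter does, not the tutorial's actual splitter; the class name `CharacterChunker` and the sample sizes are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class CharacterChunker {
    // Split text into chunks of at most `size` characters; consecutive chunks
    // share `overlap` characters so that context is not cut mid-sentence too harshly.
    static List<String> chunk(String text, int size, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = size - overlap;
        for (int start = 0; start < text.length(); start += step) {
            chunks.add(text.substring(start, Math.min(start + size, text.length())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Each chunk overlaps the previous one by 4 characters.
        for (String c : chunk("AI Endpoints is a serverless platform.", 16, 4)) {
            System.out.println(c);
        }
    }
}
```

Meaning-aware splitters (by sentence, paragraph, or section) follow the same idea but choose the cut points from the structure of the text instead of a raw character count.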
```java
public class RAGStreamingChatbot {
    // ...
}
```
Next, you transform the text into vectors and store them.
If you do not have a PostgreSQL managed instance, you can use the in-memory store as mentioned earlier (only for test purposes).
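The embed-and-store step described above can be sketched with LangChain4j's `EmbeddingModel` and `EmbeddingStore` APIs. This is a fragment under assumptions: `embeddingModel`, `embeddingStore`, and the `segments` list produced by the chunking step are presumed to be already in scope, whatever store (pgvector or in-memory) you picked.

```java
// Embed every chunk in one call, then store each vector next to its text segment
// so that a similarity search can later return the original text.
Response<List<Embedding>> embeddings = embeddingModel.embedAll(segments);
embeddingStore.addAll(embeddings.content(), segments);
```

Storing the `TextSegment` alongside each `Embedding` is what allows the retriever to hand the matching text back to the LLM at query time.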
```java
public class RAGStreamingChatbot {
    // ...
}
```
Thanks to your knowledge base, our new chatbot will answer with relevant information.
## Conclusion
You've now created a Retrieval-Augmented Generation (RAG) chatbot using your own documents and the OVHcloud AI Endpoints platform. LangChain4j's integration with embedding models and embedding stores makes RAG implementation straightforward and even production-ready.