Instead of passing the prompt directly to the LLM, in the RAG approach you:
1. Generate vector embeddings from an existing dataset or corpus (for example, the dataset you want to use to add additional context to the LLM's response). An existing dataset or corpus could be product documentation, research data, technical specifications, or your product catalog and descriptions.
2. Store the output embeddings in a Vectorize database index.

When a user initiates a prompt, instead of passing it (without additional context) to the LLM, you _augment_ it with additional context:

1. The user prompt is passed into the same ML model used for your dataset, returning a vector embedding representation of the query.