Commit 2928dc2

Apply pencil edits for blocking issues in PR review
1 parent 652f2b4 commit 2928dc2

File tree

1 file changed (+8, -8 lines changed)

  • learn-pr/wwl-data-ai/retrieval-augmented-generation-azure-databricks/includes

learn-pr/wwl-data-ai/retrieval-augmented-generation-azure-databricks/includes/2-workflow.md

Lines changed: 8 additions & 8 deletions
@@ -28,10 +28,10 @@ You can use RAG for chatbots, search enhancement, and content creation and summa
 
 The RAG workflow is built on four essential components that work together:
 
-1. **Embeddings** - Convert text into mathematical vectors that capture meaning
-2. **Vector databases** - Store and organize these vectors for fast searching
-3. **Search and retrieval** - Find the most relevant information based on user queries
-4. **Prompt augmentation** - Combine retrieved information with the original question
+- **Embeddings** - Convert text into mathematical vectors that capture meaning
+- **Vector databases** - Store and organize these vectors for fast searching
+- **Search and retrieval** - Find the most relevant information based on user queries
+- **Prompt augmentation** - Combine retrieved information with the original question
 
 Think of these components as building blocks: embeddings translate everything into a common language, vector databases organize this information, search and retrieval find what's needed, and prompt augmentation puts it all together for the AI to use.
 
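The four components listed in this hunk (embeddings, a vector database, and search/retrieval) can be sketched end-to-end. This is a minimal illustration only — it uses toy bag-of-words counts as the "embeddings" and an in-memory list as the "vector database", not the actual Azure Databricks stack the module describes:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words vector. Real RAG systems use
    learned dense embeddings that capture meaning, not word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A tiny in-memory "vector database": (document, vector) pairs.
docs = [
    "Employees accrue 20 vacation days per year.",
    "The office closes at 6 pm on Fridays.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    """Search and retrieval: return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("How many vacation days do employees get?"))
# → ['Employees accrue 20 vacation days per year.']
```

The shared-word overlap ("employees", "vacation", "days") is what ranks the HR document first here; a real embedding model would also match paraphrases with no word overlap.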
@@ -67,9 +67,9 @@ After finding the most relevant documents, the RAG system combines this informat
 
 The augmentation process looks like this:
 
-- Start with the user's question: "What's our vacation policy?"
-- Add retrieved context: Include relevant excerpts from your HR documents
-- Create augmented prompt: "Based on these HR policy documents: [retrieved content], what's our vacation policy?"
+1. Start with the user's question: "What's our vacation policy?"
+1. Add retrieved context: Include relevant excerpts from your HR documents
+1. Create augmented prompt: "Based on these HR policy documents: [retrieved content], what's our vacation policy?"
 
 The LLM now has both the user's question **and** the specific information needed to answer it accurately. This is called "in-context learning" because the LLM learns from the context provided in the prompt rather than from its original training data.
 
 In the final step, the augmented prompt is sent to the Large Language Model (LLM), which generates a response based on both the question and the retrieved information. The LLM can include citations of the original sources, allowing users to verify where the information came from.
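The three augmentation steps renumbered in this hunk amount to simple string assembly before the LLM call. A sketch, with `augment_prompt` as a hypothetical helper name (the LLM call itself is omitted):

```python
def augment_prompt(question, retrieved):
    """Prompt augmentation: combine the retrieved excerpts with the
    user's original question into one in-context-learning prompt."""
    context = "\n".join(retrieved)
    return f"Based on these HR policy documents: {context}\nQuestion: {question}"

prompt = augment_prompt(
    "What's our vacation policy?",
    ["Employees accrue 20 vacation days per year."],
)
print(prompt)
```

The resulting prompt carries both the question and the retrieved context, which is exactly what lets the LLM answer from documents it was never trained on.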
@@ -81,4 +81,4 @@ The complete RAG workflow combines all the components we've reviewed into a unif
 
 The key mechanism is **in-context learning** - instead of retraining the LLM, you provide relevant information as context in each prompt, allowing the LLM to generate informed responses without permanent modification.
 
-Advanced implementations might include feedback loops to refine results when the initial response doesn't meet quality thresholds.
+Advanced implementations might include feedback loops to refine results when the initial response doesn't meet quality thresholds.
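The feedback loop mentioned in this final hunk can be sketched as a retry that widens retrieval when a response misses the quality bar. All names here (`generate_with_feedback`, the stub LLM and quality check) are hypothetical, standing in for whatever model and evaluation an actual implementation would use:

```python
def generate_with_feedback(question, llm, retrieve, is_good_enough, max_attempts=3):
    """Feedback-loop sketch: retrieve more context and regenerate while
    the response doesn't meet the quality threshold."""
    answer = ""
    for attempt in range(1, max_attempts + 1):
        context = retrieve(question, k=attempt)  # widen retrieval each round
        answer = llm(f"Based on: {context}\nQuestion: {question}")
        if is_good_enough(answer):
            break
    return answer

# Stubs, just to make the loop runnable.
fake_llm = lambda prompt: "policy answer" if "vacation" in prompt else "I don't know"
fake_retrieve = lambda q, k=1: ["vacation policy excerpt"][:k]
result = generate_with_feedback(
    "What's our vacation policy?", fake_llm, fake_retrieve,
    lambda a: a != "I don't know",
)
print(result)
# → policy answer
```

Note that nothing in the loop modifies the model itself — each retry only changes the context in the prompt, consistent with the in-context-learning mechanism described above.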
