Commit 7422a9c

updated diagram
Signed-off-by: Francisco Javier Arceo <[email protected]>
1 parent c19e1b7 commit 7422a9c

1 file changed: 0 additions & 28 deletions

module_4_rag/README.md

Lines changed: 0 additions & 28 deletions
@@ -50,33 +50,5 @@ flowchart TD;
     A[Pull Data] --> B[Batch Score Embeddings];
     B[Batch Score Embeddings] --> C[Materialize Online];
     C[Materialize Online] --> D[Retrieval Augmented Generation];
-    D[Retrieval Augmented Generation] --> E[Store User Interaction];
-    E[Store User Interaction] --> F[Update Training Labels];
-    F[Update Training Labels] --> H[Fine Tuning];
-    H[Fine Tuning] -. Backpropagate .-> B[Batch Score Embeddings];
 ```
 
-
-A simple example of the user experience:
-
-```
-Q: Can you tell me about Chicago?
-A: Here's some wikipedia facts about Chicago...
-```
-
-# Limitations
-A common issue with RAG and LLMs is hallucination. There are two common
-approaches:
-
-1. Prompt engineering
-   - This approach is the most obvious but is susceptible to prompt injection
-
-2. Build a Classifier to return the "I don't know" response
-   - This approach is less obvious, requires another model, more training data,
-   and fine tuning
-
-We can, in fact, use both approaches to further attempt to minimize the
-likelihood of prompt injection.
-
-This demo will display both.
-

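The removed "Limitations" text above pairs prompt engineering with an "I don't know" classifier to reduce hallucination. A minimal sketch of how those two guards might be combined in a RAG answer path, assuming hypothetical `embed`, `retrieve`, and `generate` callables and an illustrative similarity threshold (none of these are part of this repository):

```python
import numpy as np

# Guard 1 (prompt engineering): instruct the model to answer only from retrieved context.
GUARDED_PROMPT = (
    "Answer only from the context below. If the context does not contain "
    "the answer, reply exactly: I don't know.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question, embed, retrieve, generate, threshold=0.3):
    """Guard 2 (classifier-style check): refuse when retrieval confidence is low."""
    q_vec = embed(question)      # question embedding (hypothetical callable)
    docs = retrieve(q_vec)       # top-k (vector, text) pairs from the online store
    if not docs or max(cosine(q_vec, vec) for vec, _ in docs) < threshold:
        return "I don't know"
    context = "\n".join(text for _, text in docs)
    return generate(GUARDED_PROMPT.format(context=context, question=question))
```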