---
title: Leveraging RAG in GenAI to teach new information
linkTitle: Leveraging RAG in GenAI
description: This guide walks through the process of setting up and utilizing a GenAI stack with Retrieval-Augmented Generation (RAG) systems and graph databases. Learn how to integrate graph databases like Neo4j with AI models for more accurate, contextually aware responses.
keywords: Docker, GenAI, Retrieval-Augmented Generation, RAG, Graph Databases, Neo4j, AI, LLM
summary: |
  This guide explains setting up a GenAI stack with Retrieval-Augmented Generation (RAG) and Neo4j, covering key concepts, deployment steps, and a case study. It also includes troubleshooting tips for optimizing AI performance with real-time data.
tags: [ai]
params:
  time: 35 minutes
---

This guide walks you through setting up a powerful AI stack that integrates Retrieval-Augmented Generation (RAG) systems with graph databases. You'll learn how to:

- Set up a GenAI system that enhances language models with real-time data from graph databases
- Use Docker to containerize and deploy the complete GenAI stack, including graph databases and AI models
- Leverage Neo4j for efficient information retrieval and context-aware AI responses
- Inspect data in the database to ensure smooth operation of AI-driven applications

## Introduction to Graph Databases

NoSQL databases generally fall into four main categories:

* Document-based databases
* Key-value stores
* Column-oriented databases
* Graph-based databases

Graph databases are specialized database systems designed to store and query data in which the relationships between entities are as important as the entities themselves. Unlike traditional relational databases, graph databases use nodes (vertices) to store entities and edges to store the relationships between them, making them ideal for handling highly connected data.

### Graph Databases vs SQL Databases

1. Data Model:
   - SQL: Uses tables with rows and columns; relationships are expressed through foreign keys
   - Graph: Uses nodes and edges; relationships are first-class and far more flexible

2. Schema Flexibility:
   - SQL: Rigid schema; changes typically require migrations
   - Graph: Flexible schema; new relationship types can be added without migrations

3. Use Cases:
   - SQL: Structured data with fixed relationships
   - Graph: Social networks, recommendation engines, knowledge graphs

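
To make the node-and-edge model and the schema-flexibility point concrete, here is a minimal sketch using plain Python data structures. The node names and relationship types are invented for illustration, and the dictionaries stand in for a real graph database, which would index and query this data far more efficiently.

```python
# Minimal sketch of the graph data model: labeled nodes plus typed edges.
# Plain Python structures stand in for a real graph database here.

nodes = {
    "alice": {"label": "Person"},
    "bob": {"label": "Person"},
    "neo4j": {"label": "Technology"},
}

# Each edge is (source, relationship_type, target).
edges = [
    ("alice", "KNOWS", "bob"),
    ("alice", "USES", "neo4j"),
]

# Adding a brand-new relationship type requires no schema migration:
edges.append(("bob", "RECOMMENDS", "neo4j"))

def neighbors(node, rel_type):
    """All targets reachable from `node` via edges of `rel_type`."""
    return [t for s, r, t in edges if s == node and r == rel_type]

print(neighbors("alice", "KNOWS"))     # ['bob']
print(neighbors("bob", "RECOMMENDS"))  # ['neo4j']
```
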
## Understanding RAG (Retrieval-Augmented Generation)

RAG (Retrieval-Augmented Generation) is a hybrid framework that enhances the capabilities of large language models by integrating information retrieval. It combines three core components:

- **Information retrieval** from an external knowledge base
- **Large Language Model (LLM)** for generating responses
- **Vector embeddings** to enable semantic search

In a RAG system, vector embeddings are used to represent the semantic meaning of text in a way that a machine can understand and process. For instance, the words "dog" and "puppy" will have similar embeddings because they share similar meanings. By integrating these embeddings into the RAG framework, the system can combine the generative power of large language models with the ability to pull in highly relevant, contextually aware data from external sources.
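
To make "similar embeddings" concrete, here is a small sketch that compares vectors with cosine similarity, the measure typically used for semantic search. The four-dimensional vectors are made up purely for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, invented for illustration only.
dog = [0.9, 0.1, 0.8, 0.2]
puppy = [0.85, 0.15, 0.75, 0.25]
car = [0.05, 0.9, 0.1, 0.85]

print(cosine_similarity(dog, puppy))  # close to 1.0: similar meaning
print(cosine_similarity(dog, car))    # much lower: unrelated meaning
```
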

The system operates as follows:

1. Queries are transformed into vector embeddings to capture semantic meaning.
2. Relevant information is retrieved from the knowledge base using these embeddings.
3. The retrieved data is used to augment the input prompt for the LLM.
4. The LLM generates responses that blend the model's inherent knowledge with the retrieved context.

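The four steps above can be sketched end to end. Everything here is a deliberately simplified stand-in: `embed` is a toy bag-of-words function rather than a real embedding model, the knowledge base is a two-document in-memory list, and the final LLM call is omitted, returning the augmented prompt instead.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a bag-of-words count vector.
    A real RAG system would call an embedding model here."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# An in-memory "knowledge base" of pre-embedded documents.
documents = [
    "Apache NiFi is a data integration and dataflow automation tool.",
    "Neo4j is a graph database that stores nodes and relationships.",
]
index = [(doc, embed(doc)) for doc in documents]

def rag_prompt(question, top_k=1):
    q_vec = embed(question)  # step 1: embed the query
    # Step 2: retrieve the most similar documents from the knowledge base.
    ranked = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # Step 3: augment the prompt; step 4 would send it to the LLM.
    return f"Context:\n{context}\n\nQuestion: {question}"

print(rag_prompt("What is Apache NiFi?"))
```

With this toy retriever, asking about NiFi pulls the NiFi document into the prompt, so even a model that has never heard of NiFi can answer from the supplied context.
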
### When RAG is Effective

1. Knowledge Limitations:
   - LLMs have fixed knowledge cutoffs
   - Cannot access real-time or specialized information
   - May hallucinate or provide outdated information

2. Business Requirements:
   - Need for accurate, up-to-date information
   - Compliance and auditability
   - Domain-specific knowledge integration

3. Cost Efficiency:
   - Reduced need for model fine-tuning
   - Smaller models can perform well with good context
   - Lower computational requirements

## A Case Study in RAG Implementation

### What is Apache NiFi?

Apache NiFi is a powerful data integration tool, but in this guide it serves as a test case rather than a tool: since it is relatively new, large language models (LLMs) may have limited or no inherent knowledge about it.

```text
Question: What is Apache Nifi?
RAG: Disabled

Hello! I'm here to help you with your question about Apache NiFi. Unfortunately, I don't know the answer to that question. I'm just an AI and my knowledge cutoff is December 2022, so I may not be familiar with the latest technologies or software. Can you please provide more context or details about Apache NiFi? Maybe there's something I can help you with related to it.
```

```text
Question: What is Apache Nifi?
RAG: Enabled

Answer:

Apache NiFi is an open-source, distributed data processing system that allows you to collect, transform, and transport data across various sources and sinks. It provides a graphical user interface for designing and managing data flows, as well as a set of APIs for integrating with other systems.

According to this Stack Overflow question, the free Neo4J JDBC driver doesn't understand SQL, only Cypher, so you would need to use a non-free BI Connector to connect to Neo4j from Apache Drill.

Additionally, this Stack Overflow question suggests that Apache POI does work in a Neo4j User Defined Function. However, there may be an unknown glitch in the standalone UDF mentioned in the initial question.

To export data from Neo4j to Excel using APOC Extended procedures, you can use apoc.export.xls.query which takes a Cypher query and exports the results to an Excel file.

References:

How to connect and query Neo4j Database on Apache Drill?
Is a Neo4j UDF compatible with Apache POI?
```

## Setting Up GenAI Stack with GPU Acceleration on Linux

To set up and run the GenAI stack on a Linux host with GPU acceleration, execute the following command:

```bash
docker compose --profile linux-gpu up -d
```

### Setting Up on Other Platforms

For instructions on how to set up the stack on other platforms, refer to [this page](https://github.com/docker/genai-stack).

---

### Notes

- **Initial Startup**: The first startup may take some time because the system needs to download a large language model (LLM).
- **Monitoring Progress**: You can monitor the download and initialization progress by viewing the logs.

Run the following command to view the logs:

```bash
docker compose logs
```

Wait for specific lines in the logs indicating that the download is complete and the stack is ready. These lines typically confirm successful setup and initialization.

```text
pull-model-1 exited with code 0
database-1 | 2024-12-29 09:35:53.269+0000 INFO Started.
pdf_bot-1 | You can now view your Streamlit app in your browser.
loader-1 | You can now view your Streamlit app in your browser.
bot-1 | You can now view your Streamlit app in your browser.
```

You can now access the interface at [http://localhost:8501/](http://localhost:8501/) to ask questions. For example, you can try the sample question:

```text
What is Apache Nifi?
```

The response should be similar to the following:

```text
... I'm just an AI and my knowledge cutoff is December 2022...
```

Now it's time to teach the AI some new tricks. First, connect to [loader-1](http://localhost:8502/). Instead of using the "neo4j" tag, change it to the "apache-nifi" tag, then click the **Import** button. After the import is successful, access Neo4j to verify the data.

After logging in to [http://localhost:7474/](http://localhost:7474/) using the credentials from the `.env` file, you can run queries on Neo4j. Using the Cypher query language, you can inspect the data stored in the database.

To count the nodes by label, run the following query:

```cypher
MATCH (n)
RETURN DISTINCT labels(n) AS NodeTypes, count(*) AS Count
ORDER BY Count DESC;
```
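
If you prefer to run this query programmatically rather than in the Neo4j Web UI, a sketch using the official Neo4j Python driver (`pip install neo4j`) could look like the following. The URI, username, and password here are placeholders — substitute the values from your `.env` file.

```python
# Hypothetical programmatic access to the same count query, using the
# official Neo4j Python driver. The guard lets this sketch load even
# where the driver isn't installed.
try:
    from neo4j import GraphDatabase
except ImportError:
    GraphDatabase = None

# The same node-count query shown above.
COUNT_QUERY = (
    "MATCH (n) "
    "RETURN DISTINCT labels(n) AS NodeTypes, count(*) AS Count "
    "ORDER BY Count DESC"
)

def count_nodes(uri="bolt://localhost:7687", user="neo4j", password="password"):
    """Run the node-count query and return (labels, count) rows.

    The URI and credentials are placeholders; use the values from your
    `.env` file.
    """
    if GraphDatabase is None:
        raise RuntimeError("Install the driver first: pip install neo4j")
    driver = GraphDatabase.driver(uri, auth=(user, password))
    try:
        with driver.session() as session:
            return [(rec["NodeTypes"], rec["Count"]) for rec in session.run(COUNT_QUERY)]
    finally:
        driver.close()
```
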

You can also run the following query to visualize the database schema:

```cypher
CALL db.schema.visualization()
```

To list the relationship types in the database, run the following query:

```cypher
CALL db.relationshipTypes()
```

Now you're ready to let the LLM use this information. Go back to [http://localhost:8501/](http://localhost:8501/), select the **RAG** checkbox, and ask the same question again. The LLM will now provide a more detailed answer.

Keep in mind that new questions are continually added to Stack Overflow, and because of the inherent randomness in most AI models, the answers may vary and won't be identical to those in this example.

Feel free to start over with another [Stack Overflow tag](https://stackoverflow.com/tags). To drop all data in Neo4j, you can use the following command in the Neo4j Web UI:

```cypher
MATCH (n)
DETACH DELETE n;
```

For optimal results, choose a tag that the LLM is not familiar with.
