No faster way to get started than by diving in and playing around with one of our demos.

Need specific sample code to help get started with Redis? Start here.

## Getting started with Redis & Vector Search

| Recipe | Description |
| --- | --- |
|[/redis-intro/redis_intro.ipynb](python-recipes/redis-intro/redis_intro.ipynb)| The place to start if brand new to Redis |
|[/vector-search/00_redispy.ipynb](python-recipes/vector-search/00_redispy.ipynb)| Vector search with the Redis Python client |
|[/vector-search/01_redisvl.ipynb](python-recipes/vector-search/01_redisvl.ipynb)| Vector search with the Redis Vector Library |
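
If you want a quick feel for what these recipes cover before opening a notebook, here is a minimal vector search sketch with the Redis Vector Library. It is a sketch only, assuming redisvl 0.3+, a local Redis Stack at `redis://localhost:6379`, and a toy 4-dimensional embedding for illustration (the notebooks use real embedding models).

```python
import numpy as np

from redisvl.index import SearchIndex
from redisvl.query import VectorQuery

# Toy schema: one text field plus a 4-dim float32 vector field (illustrative only).
schema = {
    "index": {"name": "docs", "prefix": "doc"},
    "fields": [
        {"name": "content", "type": "text"},
        {
            "name": "embedding",
            "type": "vector",
            "attrs": {
                "dims": 4,
                "distance_metric": "cosine",
                "algorithm": "flat",
                "datatype": "float32",
            },
        },
    ],
}

index = SearchIndex.from_dict(schema, redis_url="redis://localhost:6379")
index.create(overwrite=True)

# Load a couple of documents; vectors are stored as raw float32 bytes.
index.load([
    {
        "content": "Redis is an in-memory data store.",
        "embedding": np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32).tobytes(),
    },
    {
        "content": "Vector search returns semantically similar items.",
        "embedding": np.array([0.9, 0.1, 0.2, 0.3], dtype=np.float32).tobytes(),
    },
])

# KNN query: return the 2 documents closest to a (toy) query vector.
query = VectorQuery(
    vector=[0.85, 0.1, 0.25, 0.3],
    vector_field_name="embedding",
    return_fields=["content"],
    num_results=2,
)
for hit in index.query(query):
    print(hit["content"], hit["vector_distance"])
```
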
## Getting started with RAG
**Retrieval Augmented Generation** (aka RAG) is a technique to enhance the ability of an LLM to respond to user queries. The **retrieval** part of RAG is supported by a vector database, which can return semantically relevant results to a user’s query, serving as contextual information to **augment** the **generative** capabilities of an LLM.
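
As a quick illustration of that retrieve, augment, generate loop before opening the notebooks, here is a minimal sketch with the Redis Vector Library and an OpenAI chat model. The index layout, the `embed` helper, and the model name are illustrative assumptions rather than what the recipes use.

```python
from openai import OpenAI
from redisvl.index import SearchIndex
from redisvl.query import VectorQuery

client = OpenAI()  # needs OPENAI_API_KEY in the environment


def answer(question: str, index: SearchIndex, embed) -> str:
    """RAG in three steps: retrieve from Redis, augment the prompt, generate.

    `index` is a redisvl SearchIndex like the `docs` index in the sketch above;
    `embed` is any callable that turns text into a vector of matching dims.
    """
    # 1. Retrieve: pull the most semantically similar chunks from Redis.
    query = VectorQuery(
        vector=embed(question),
        vector_field_name="embedding",
        return_fields=["content"],
        num_results=3,
    )
    context = "\n\n".join(hit["content"] for hit in index.query(query))

    # 2. Augment: place the retrieved chunks into the prompt as grounding context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    # 3. Generate: ask the LLM for an answer grounded in that context.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```
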
To get started with RAG, either from scratch or using a popular framework like LlamaIndex or LangChain, start with these recipes:

| Recipe | Description |
| --- | --- |
|[/RAG/01_redisvl.ipynb](python-recipes/RAG/01_redisvl.ipynb)| RAG from scratch with the Redis Vector Library |
|[/RAG/02_langchain.ipynb](python-recipes/RAG/02_langchain.ipynb)| RAG using Redis and LangChain |
|[/RAG/03_llamaindex.ipynb](python-recipes/RAG/03_llamaindex.ipynb)| RAG using Redis and LlamaIndex |
|[/RAG/04_advanced_redisvl.ipynb](python-recipes/RAG/04_advanced_redisvl.ipynb)| Advanced RAG with redisvl |
|[/RAG/05_nvidia_ai_rag_redis.ipynb](python-recipes/RAG/05_nvidia_ai_rag_redis.ipynb)| RAG using Redis and NVIDIA |

## Semantic Cache
An estimated 31% of LLM queries are potentially redundant ([source](https://arxiv.org/pdf/2403.02694)). Redis enables semantic caching to help cut down on LLM costs quickly.

| Recipe | Description |
| --- | --- |
|[/semantic-cache/semantic_caching_gemini.ipynb](python-recipes/semantic-cache/semantic_caching_gemini.ipynb)| Build a semantic cache with Redis and Google Gemini |
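
To see the mechanics at a glance, here is a minimal check-then-store sketch using redisvl's `SemanticCache` extension. It assumes a local Redis instance, the default vectorizer dependencies, and a hypothetical `call_llm` helper; the recipe above builds the full Gemini version.

```python
from redisvl.extensions.llmcache import SemanticCache

# distance_threshold controls how semantically close a new prompt must be
# to an already-cached prompt to count as a cache hit.
cache = SemanticCache(
    name="llmcache",
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,
)

prompt = "What is the capital of France?"

# Check for a semantically similar prompt before paying for an LLM call.
if hits := cache.check(prompt=prompt):
    answer = hits[0]["response"]             # reuse the cached response
else:
    answer = call_llm(prompt)                # hypothetical LLM call
    cache.store(prompt=prompt, response=answer)

print(answer)
```
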
## Advanced RAG
For further insights on enhancing RAG applications with dense content representations, query re-writing, and other techniques, see the recipe below, followed by a short query re-writing sketch.

| Recipe | Description |
| --- | --- |
|[/RAG/04_advanced_redisvl.ipynb](python-recipes/RAG/04_advanced_redisvl.ipynb)| Notebook for additional tips and techniques to improve RAG quality |
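
One of those techniques, query re-writing, is easy to illustrate: before a question is embedded and sent to Redis, an LLM restates it as a clear, self-contained search query so retrieval has more signal to work with. The sketch below assumes an OpenAI chat model; the model name and prompt are illustrative, not what the notebook uses.

```python
from openai import OpenAI

client = OpenAI()  # needs OPENAI_API_KEY in the environment


def rewrite_query(raw_query: str) -> str:
    """Restate a terse or ambiguous user query as a self-contained search query
    before it is embedded and used for retrieval."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Rewrite the user's question as a clear, "
                           "self-contained search query. Return only the query.",
            },
            {"role": "user", "content": raw_query},
        ],
    )
    return response.choices[0].message.content.strip()


# e.g. "how fast is it?" might become "What is the query latency of Redis vector search?"
# The rewritten query is then embedded and used in a VectorQuery as in the RAG sketch above.
```
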