Reorganize AI index page navigation and add video tutorials section
- Reorganize top image cards to prioritize RedisVL, Search and Query, and LangCache
- Update second card to 'Use Redis Query Engine to search data' linking to search-and-query
- Update third card to 'Use LangCache to store LLM responses' linking to langcache
- Move the original Vector and RAG quickstart links to the Quickstarts section as a simple list
- Add 'Video tutorials' bullet point to overview section before Benchmarks
- Improve navigation alignment with main site navigation structure
This reorganization better reflects the current product priorities and provides clearer user pathways to key Redis AI capabilities.
content/develop/ai/_index.md (8 additions & 2 deletions)
@@ -14,9 +14,9 @@ hideListLinks: true
Redis stores and indexes vector embeddings that semantically represent unstructured data such as text passages, images, videos, or audio. Store vectors and the associated metadata within [hashes]({{< relref "/develop/data-types/hashes" >}}) or [JSON]({{< relref "/develop/data-types/json" >}}) documents for [indexing]({{< relref "/develop/ai/search-and-query/indexing" >}}) and [querying]({{< relref "/develop/ai/search-and-query/query" >}}).
{{< image-card image="images/ai-brain.svg" alt="AI Redis icon" title="Use LangCache to store LLM responses" url="/develop/ai/langcache/" >}}
</div>
#### Overview
@@ -27,6 +27,7 @@ This page is organized into a few sections depending on what you're trying to do
* **Quickstarts** - Short, focused guides to get you started with key features or workflows in minutes.
* **Tutorials** - In-depth walkthroughs that dive deeper into specific use cases or processes. These step-by-step guides help you master essential tasks and workflows.
* **Integrations** - Guides and resources to help you connect and use the product with popular tools, frameworks, or platforms.
+ * **Video tutorials** - Watch our AI video collection featuring practical tutorials and demonstrations.
* **Benchmarks** - Performance comparisons and metrics to demonstrate how the product performs under various scenarios. This helps you understand its efficiency and capabilities.
* **Best practices** - Recommendations and guidelines for maximizing effectiveness and avoiding common pitfalls. This section equips you to use the product effectively and efficiently.
@@ -60,6 +61,11 @@ Learn to perform vector search and use gateways and semantic caching in your AI/
Quickstarts or recipes are useful when you are trying to build specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for your AI agent.
Retrieval-augmented generation (RAG) is a technique that enhances an LLM's ability to respond to user queries. The retrieval part of RAG is backed by a vector database, which returns semantically relevant results for a user's query; those results serve as contextual information that augments the LLM's generative capabilities.
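The retrieval step described above can be sketched in a few lines of plain Python. This is an illustrative toy, not Redis client code: the hard-coded three-dimensional embeddings and the in-memory cosine scan stand in for a real embedding model and a vector index such as the Redis Query Engine.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs, k=2):
    """Return the text of the k docs most similar to the query vector.

    A real system would delegate this to a KNN query against a vector
    index instead of scanning every document in memory.
    """
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy corpus with made-up 3-dimensional embeddings.
docs = [
    {"text": "Redis supports vector search", "embedding": [0.9, 0.1, 0.0]},
    {"text": "LangCache stores LLM responses", "embedding": [0.1, 0.9, 0.0]},
    {"text": "Hashes hold field-value pairs", "embedding": [0.0, 0.2, 0.9]},
]

# Retrieved passages would be prepended to the LLM prompt as context.
context = retrieve([1.0, 0.0, 0.0], docs, k=1)
```

In production the same shape holds: embed the query, ask the vector database for the nearest neighbors, and feed the returned passages to the LLM alongside the original question.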