
Commit 2cd6ef3

Reorganize AI index page navigation and add video tutorials section
- Reorganize top image cards to prioritize RedisVL, Search and Query, and LangCache
- Update second card to 'Use Redis Query Engine to search data' linking to search-and-query
- Update third card to 'Use LangCache to store LLM responses' linking to langcache
- Move original Vector and RAG quickstart links to Quickstarts section as simple list
- Add 'Video tutorials' bullet point to overview section before Benchmarks
- Improve navigation alignment with main site navigation structure

This reorganization better reflects the current product priorities and provides clearer user pathways to key Redis AI capabilities.
1 parent a3bac29 commit 2cd6ef3

File tree

1 file changed (+8 −2 lines)


content/develop/ai/_index.md

Lines changed: 8 additions & 2 deletions
@@ -14,9 +14,9 @@ hideListLinks: true
 Redis stores and indexes vector embeddings that semantically represent unstructured data including text passages, images, videos, or audio. Store vectors and the associated metadata within [hashes]({{< relref "/develop/data-types/hashes" >}}) or [JSON]({{< relref "/develop/data-types/json" >}}) documents for [indexing]({{< relref "/develop/ai/search-and-query/indexing" >}}) and [querying]({{< relref "/develop/ai/search-and-query/query" >}}).

 <div class="grid grid-cols-1 md:grid-cols-3 gap-6 my-8">
-{{< image-card image="images/ai-cube.svg" alt="AI Redis icon" title="Redis vector database quick start guide" url="/develop/get-started/vector-database" >}}
-{{< image-card image="images/ai-brain.svg" alt="AI Redis icon" title="Retrieval-Augmented Generation quick start guide" url="/develop/get-started/rag" >}}
 {{< image-card image="images/ai-lib.svg" alt="AI Redis icon" title="Redis vector Python client library documentation" url="/develop/ai/redisvl/" >}}
+{{< image-card image="images/ai-cube.svg" alt="AI Redis icon" title="Use Redis Query Engine to search data" url="/develop/ai/search-and-query/" >}}
+{{< image-card image="images/ai-brain.svg" alt="AI Redis icon" title="Use LangCache to store LLM responses" url="/develop/ai/langcache/" >}}
 </div>

 #### Overview
@@ -27,6 +27,7 @@ This page is organized into a few sections depending on what you're trying to do
 * **Quickstarts** - Short, focused guides to get you started with key features or workflows in minutes.
 * **Tutorials** - In-depth walkthroughs that dive deeper into specific use cases or processes. These step-by-step guides help you master essential tasks and workflows.
 * **Integrations** - Guides and resources to help you connect and use the product with popular tools, frameworks, or platforms.
+* **Video tutorials** - Watch our AI video collection featuring practical tutorials and demonstrations.
 * **Benchmarks** - Performance comparisons and metrics to demonstrate how the product performs under various scenarios. This helps you understand its efficiency and capabilities.
 * **Best practices** - Recommendations and guidelines for maximizing effectiveness and avoiding common pitfalls. This section equips you to use the product effectively and efficiently.

@@ -60,6 +61,11 @@ Learn to perform vector search and use gateways and semantic caching in your AI/

 Quickstarts or recipes are useful when you are trying to build specific functionality. For example, you might want to do RAG with LangChain or set up LLM memory for your AI agent.

+Get started with these foundational guides:
+
+* [Redis vector database quick start guide]({{< relref "/develop/get-started/vector-database" >}})
+* [Retrieval-Augmented Generation quick start guide]({{< relref "/develop/get-started/rag" >}})
+
 #### RAG
 Retrieval Augmented Generation (aka RAG) is a technique to enhance the ability of an LLM to respond to user queries. The retrieval part of RAG is supported by a vector database, which can return semantically relevant results to a user's query, serving as contextual information to augment the generative capabilities of an LLM.
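For context on the workflow the updated intro paragraph and the new "Use Redis Query Engine to search data" card point to, here is a minimal sketch of storing an embedding plus metadata in a hash and running a KNN query with redis-py. The index name, key prefix, field names, and the toy 4-dimensional vectors are illustrative assumptions, not anything defined by this docs change.

```python
import numpy as np
import redis
from redis.commands.search.field import TagField, TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)

# Index hashes whose keys start with "doc:"; the vector field uses a FLAT
# index over 4-dimensional float32 embeddings with cosine distance.
# (Assumes the index does not already exist.)
schema = (
    TextField("content"),
    TagField("genre"),
    VectorField(
        "embedding",
        "FLAT",
        {"TYPE": "FLOAT32", "DIM": 4, "DISTANCE_METRIC": "COSINE"},
    ),
)
r.ft("idx:docs").create_index(
    schema,
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# Store a vector and its associated metadata in a hash.
r.hset(
    "doc:1",
    mapping={
        "content": "Redis is an in-memory data store.",
        "genre": "databases",
        "embedding": np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32).tobytes(),
    },
)

# KNN query: return the 3 hashes whose embeddings are closest to the query vector.
query = (
    Query("*=>[KNN 3 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("content", "genre", "score")
    .dialect(2)
)
result = r.ft("idx:docs").search(
    query,
    query_params={"vec": np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32).tobytes()},
)
for doc in result.docs:
    print(doc.id, doc.score, doc.content)
```

Running it needs a Redis instance with the Query Engine available (for example the redis/redis-stack image) plus the redis and numpy Python packages.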

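The #### RAG paragraph in the last hunk is purely conceptual, so here is a rough sketch of the retrieve-then-generate loop it describes, reusing the hypothetical idx:docs index from the sketch above. embed() and generate_answer() are placeholders for whatever embedding model and LLM client you actually use; they are not redis-py or documented APIs.

```python
import numpy as np
import redis
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)


def embed(text: str) -> bytes:
    """Placeholder: swap in a real embedding model; here we fake a 4-dim vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(4, dtype=np.float32).tobytes()


def generate_answer(prompt: str) -> str:
    """Placeholder for an LLM call (hosted API, local model, etc.)."""
    return f"[LLM response to a prompt of {len(prompt)} characters]"


def rag_answer(question: str, k: int = 3) -> str:
    # 1. Retrieval: find the k documents whose embeddings are closest to the question.
    query = (
        Query(f"*=>[KNN {k} @embedding $vec AS score]")
        .sort_by("score")
        .return_fields("content", "score")
        .dialect(2)
    )
    result = r.ft("idx:docs").search(query, query_params={"vec": embed(question)})

    # 2. Augmentation: splice the retrieved passages into the prompt as context.
    passages = [
        d.content.decode() if isinstance(d.content, bytes) else d.content
        for d in result.docs
    ]
    context = "\n".join(passages)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    # 3. Generation: the LLM answers with the retrieved context in front of it.
    return generate_answer(prompt)


print(rag_answer("How does Redis store vector embeddings?"))
```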
0 commit comments

Comments
 (0)