diff --git a/src/content/docs/autorag/configuration/cache.mdx b/src/content/docs/autorag/configuration/cache.mdx
index 4ca268292d7e225..3f8ca29707ee21e 100644
--- a/src/content/docs/autorag/configuration/cache.mdx
+++ b/src/content/docs/autorag/configuration/cache.mdx
@@ -47,4 +47,4 @@ The similarity threshold decides how close two prompts need to be to reuse a cac
| Broad | Moderate match, more hits | "What’s the weather like today?" matches with "Tell me today’s weather" |
| Loose | Low similarity, max reuse | "What’s the weather like today?" matches with "Give me the forecast" |
-Test these values to see which works best with your application.
+Test these values to see which works best with your [RAG application](/autorag/).
diff --git a/src/content/docs/autorag/configuration/models.mdx b/src/content/docs/autorag/configuration/models.mdx
index ccc739c33a1eba9..ec1787d041f1b02 100644
--- a/src/content/docs/autorag/configuration/models.mdx
+++ b/src/content/docs/autorag/configuration/models.mdx
@@ -36,4 +36,4 @@ If you choose **Smart Default** in your model selection, then AutoRAG will selec
### Per-request generation model override
-While the generation model can be set globally at the AutoRAG instance level, you can also override it on a per-request basis in the [AI Search API](/autorag/usage/rest-api/#ai-search). This is useful if your application requires dynamic selection of generation models based on context or user preferences.
+While the generation model can be set globally at the AutoRAG instance level, you can also override it on a per-request basis in the [AI Search API](/autorag/usage/rest-api/#ai-search). This is useful if your [RAG application](/autorag/) requires dynamic selection of generation models based on context or user preferences.
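As a rough sketch, a per-request override over the REST API might look like the following; the account ID, AutoRAG name, API token, and model name are placeholders, and the `model` field is the override described above:

```ts
// Sketch: override the instance-level generation model for a single AI Search call.
// ACCOUNT_ID, AUTORAG_NAME, API_TOKEN, and the model name are placeholders.
const ACCOUNT_ID = "your-account-id";
const AUTORAG_NAME = "my-autorag";
const API_TOKEN = "your-api-token";

const response = await fetch(
	`https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/autorag/rags/${AUTORAG_NAME}/ai-search`,
	{
		method: "POST",
		headers: {
			"Content-Type": "application/json",
			Authorization: `Bearer ${API_TOKEN}`,
		},
		body: JSON.stringify({
			query: "How do I configure caching?",
			// Per-request override of the generation model
			model: "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
		}),
	},
);
console.log(await response.json());
```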
diff --git a/src/content/docs/autorag/get-started.mdx b/src/content/docs/autorag/get-started.mdx
index 04ee044e7ae3de0..a31d9b174220399 100644
--- a/src/content/docs/autorag/get-started.mdx
+++ b/src/content/docs/autorag/get-started.mdx
@@ -6,7 +6,7 @@ sidebar:
head:
- tag: title
content: Get started with AutoRAG
- Description: Get started creating fully-managed, retrieval-augmented generation pipelines with Cloudflare AutoRAG.
+description: Get started creating fully-managed, retrieval-augmented generation pipelines with Cloudflare AutoRAG.
---
AutoRAG allows developers to create fully managed retrieval-augmented generation (RAG) pipelines to power AI applications with accurate and up-to-date information without needing to manage infrastructure.
@@ -55,7 +55,7 @@ Once indexing is complete, you can run your first query:
## 5. Add to your application
-There are multiple ways you can add AutoRAG to your applications:
+There are multiple ways to add AutoRAG to your [RAG application](/autorag/):
- [Workers Binding](/autorag/usage/workers-binding/)
- [REST API](/autorag/usage/rest-api/)
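For example, a minimal Worker using the Workers Binding might look like this sketch; the binding name `AI` and the AutoRAG instance name `my-autorag` are assumptions for illustration:

```ts
// Sketch: a Worker that queries an AutoRAG instance through the AI binding.
// The binding name (AI) and AutoRAG instance name (my-autorag) are placeholders.
export interface Env {
	AI: Ai;
}

export default {
	async fetch(_request: Request, env: Env): Promise<Response> {
		const answer = await env.AI.autorag("my-autorag").aiSearch({
			query: "What is AutoRAG?",
		});
		return Response.json(answer);
	},
} satisfies ExportedHandler<Env>;
```

The REST API exposes the same search operations for applications running outside of Workers.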
diff --git a/src/content/docs/autorag/index.mdx b/src/content/docs/autorag/index.mdx
index 5c287f9b750b95d..e17b48b0ee6b331 100644
--- a/src/content/docs/autorag/index.mdx
+++ b/src/content/docs/autorag/index.mdx
@@ -2,6 +2,7 @@
pcx_content_type: overview
title: Overview
type: overview
+description: Build scalable, fully-managed RAG applications with Cloudflare AutoRAG. Create retrieval-augmented generation pipelines to deliver accurate, context-aware AI without managing infrastructure.
sidebar:
order: 1
head:
@@ -20,13 +21,12 @@ import {
} from "~/components";
- Create fully-managed RAG pipelines to power your AI applications with accurate
- and up-to-date information.
+ Create fully-managed RAG applications that continuously update and scale on Cloudflare.
-AutoRAG lets you create fully-managed, retrieval-augmented generation (RAG) pipelines that continuously updates and scales on Cloudflare. With AutoRAG, you can integrate context-aware AI into your applications without managing infrastructure.
+AutoRAG lets you create retrieval-augmented generation (RAG) pipelines that power your AI applications with accurate and up-to-date information, so you can integrate context-aware AI without managing infrastructure.
You can use AutoRAG to build:
diff --git a/src/content/docs/autorag/tutorial/brower-rendering-autorag-tutorial.mdx b/src/content/docs/autorag/tutorial/brower-rendering-autorag-tutorial.mdx
index 6cba012b924f8fa..ee0b69c2c2c4efd 100644
--- a/src/content/docs/autorag/tutorial/brower-rendering-autorag-tutorial.mdx
+++ b/src/content/docs/autorag/tutorial/brower-rendering-autorag-tutorial.mdx
@@ -158,7 +158,7 @@ You can view the progress of your indexing job in the Overview page of your Auto
Once AutoRAG finishes indexing your content, you’re ready to start asking it questions. You can open up your AutoRAG instance, navigate to the Playground tab, and ask a question based on your uploaded content, like “What is AutoRAG?”.
-Once you’re happy with the results in the Playground, you can integrate AutoRAG directly into the application that you are building. If you are using a Worker to build your application, then you can use the AI binding to directly call your AutoRAG:
+Once you’re happy with the results in the Playground, you can integrate AutoRAG directly into the application that you are building. If you are using a Worker to build your [RAG application](/autorag/), then you can use the AI binding to directly call your AutoRAG:
```jsonc
{
diff --git a/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx b/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx
index 68b1b8eee677cfc..e2c91573cb80b23 100644
--- a/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx
+++ b/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx
@@ -50,7 +50,7 @@ When a user initiates a prompt, instead of passing it (without additional contex
3. These vectors are used to look up the content they relate to (if not embedded directly alongside the vectors as metadata).
4. This content is provided as context alongside the original user prompt, providing additional context to the LLM and allowing it to return an answer that is likely to be far more contextual than the standalone prompt.
-Create a RAG today with [AutoRAG](/autorag) to deploy a fully managed RAG pipeline in just a few clicks. AutoRAG automatically sets up Vectorize, handles continuous indexing, and serves responses through a single API.
+[Create a RAG application today with AutoRAG](/autorag/) to deploy a fully managed RAG pipeline in just a few clicks. AutoRAG automatically sets up Vectorize, handles continuous indexing, and serves responses through a single API.
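As a rough illustration of the flow above, here is a minimal Worker sketch that performs these steps by hand with Workers AI and Vectorize; the binding names, model names, and the `text` metadata field are illustrative assumptions rather than a prescribed setup:

```ts
// Sketch: a hand-built retrieval-augmented generation flow with Workers AI + Vectorize.
// Binding names, model names, and the `text` metadata field are assumptions.
export interface Env {
	AI: Ai;
	VECTORIZE: Vectorize;
}

export default {
	async fetch(_request: Request, env: Env): Promise<Response> {
		const prompt = "How do I enable caching?";

		// Convert the user prompt into an embedding (vector).
		const embedding = await env.AI.run("@cf/baai/bge-base-en-v1.5", {
			text: [prompt],
		});

		// Look up the closest vectors and their stored metadata (step 3).
		const results = await env.VECTORIZE.query(embedding.data[0], {
			topK: 3,
			returnMetadata: "all",
		});

		// Provide the retrieved content as context alongside the prompt (step 4).
		const context = results.matches
			.map((match) => String(match.metadata?.text ?? ""))
			.join("\n");

		const answer = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
			messages: [
				{ role: "system", content: `Answer using this context:\n${context}` },
				{ role: "user", content: prompt },
			],
		});

		return Response.json(answer);
	},
} satisfies ExportedHandler<Env>;
```

AutoRAG runs these same embed, retrieve, and generate steps for you behind a single `aiSearch()` call.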
1 You can learn more about the theory behind RAG by reading the [RAG
paper](https://arxiv.org/abs/2005.11401).