Commit 56f13d5

added image fix links

1 parent 9167606 commit 56f13d5

File tree

8 files changed: +18 -18 lines changed

src/content/docs/autorag/configuration/models.mdx

Lines changed: 1 addition & 1 deletion
@@ -32,4 +32,4 @@ If you choose Smart Default in your model selection then AutoRAG will select a C

 ### Per-request generation model override

-While the generation model can be set globally at the AutoRAG instance level, you can also override it on a per-request basis in the [AI Search API](/autorag/use-autorag/rest-api/#ai-search). This is useful if your application requires dynamic selection of generation models based on context or user preferences.
+While the generation model can be set globally at the AutoRAG instance level, you can also override it on a per-request basis in the [AI Search API](/autorag/usage/rest-api/#ai-search). This is useful if your application requires dynamic selection of generation models based on context or user preferences.
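The per-request override described in this hunk can be sketched as follows. This is a hedged illustration, not the canonical docs example: the endpoint path follows the AI Search API page linked above, but the account ID, instance name, model ID, and query here are placeholder values.

```typescript
// Placeholder identifiers -- substitute your own values.
const accountId = "{ACCOUNT_ID}";
const ragName = "my-autorag"; // hypothetical AutoRAG instance name

// AI Search endpoint for this instance (path per the REST API docs).
const url = `https://api.cloudflare.com/client/v4/accounts/${accountId}/autorag/rags/${ragName}/ai-search`;

// The `model` field overrides the instance-level generation model
// for this single request only.
const body = {
  query: "How do I configure retrieval?",
  model: "@cf/meta/llama-3.3-70b-instruct-fp8-fast", // per-request override (example model ID)
};

// The actual call (requires an API token) would look like:
// await fetch(url, {
//   method: "POST",
//   headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```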

src/content/docs/autorag/configuration/retrieval-configuration.mdx

Lines changed: 1 addition & 1 deletion
@@ -39,6 +39,6 @@ If no results meet the threshold, AutoRAG will not generate a response.

 ## Configuration

-These values can be configured at the AutoRAG instance level or overridden on a per-request basis using the [REST API](/autorag/use-autorag/rest-api/) or the [Workers binding](/autorag/use-autorag/workers-binding/).
+These values can be configured at the AutoRAG instance level or overridden on a per-request basis using the [REST API](/autorag/usage/rest-api/) or the [Workers binding](/autorag/usage/workers-binding/).

 Use the parameters `match_threshold` and `max_num_results` to customize retrieval behavior per request.
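A per-request retrieval override using those two parameters might look like this. The parameter names come from the doc text in the hunk above; the query and values are illustrative.

```typescript
// Example request payload overriding retrieval behavior for one request.
const searchRequest = {
  query: "quarterly revenue figures",
  match_threshold: 0.4, // minimum similarity score a chunk must meet
  max_num_results: 5,   // cap on how many chunks are retrieved
};

// Sanity-check the values: a similarity threshold lives in [0, 1],
// and the result cap must be positive.
const valid =
  searchRequest.match_threshold >= 0 &&
  searchRequest.match_threshold <= 1 &&
  searchRequest.max_num_results > 0;
```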

src/content/docs/autorag/get-started.mdx

Lines changed: 3 additions & 3 deletions
@@ -51,9 +51,9 @@ Once indexing is complete, you can run your first query:
 3. Select **Search with AI** or **Search**.
 4. Enter a **query** to test out its response.

-## Use AutoRAG
+## Usage

 Cloudflare provides multiple ways for developers to use AutoRAG in their applications:

-- [Workers Binding](/autorag/use-autorag/workers-binding/)
-- [REST API](/autorag/use-autorag/rest-api/)
+- [Workers Binding](/autorag/usage/workers-binding/)
+- [REST API](/autorag/usage/rest-api/)
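A minimal sketch of the Workers binding route, under stated assumptions: the binding name (`AI`), instance name (`my-rag`), and the shape of the `aiSearch` response here are placeholders taken for illustration, not guaranteed API details -- see the Workers binding page for the real interface.

```typescript
// Hypothetical shape of the AutoRAG binding exposed on the Worker env.
interface Env {
  AI: {
    autorag(name: string): {
      aiSearch(params: { query: string }): Promise<{ response: string }>;
    };
  };
}

// A Worker that forwards a fixed query to an AutoRAG instance.
const worker = {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const answer = await env.AI.autorag("my-rag").aiSearch({
      query: "What is AutoRAG?",
    });
    return Response.json(answer);
  },
};

export default worker;
```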

src/content/docs/autorag/how-autorag-works.mdx

Lines changed: 11 additions & 11 deletions
@@ -2,7 +2,7 @@
 pcx_content_type: concept
 title: How AutoRAG works
 sidebar:
-  order: 3
+  order: 2
 ---

 AutoRAG simplifies the process of building and managing a Retrieval-Augmented Generation (RAG) pipeline using Cloudflare's serverless platform. Instead of manually stitching together components like Workers AI, Vectorize, and writing custom logic for indexing, retrieval, and generation, AutoRAG handles it all for you. It also continuously indexes your data to ensure responses stay accurate and up-to-date.

@@ -19,27 +19,27 @@ Indexing begins automatically when you create an AutoRAG instance and connect a
 Here is what happens during indexing:

 1. **Data ingestion:** AutoRAG reads from your connected data source. Files that are unsupported or exceed size limits are flagged and reported as indexing errors.
-2. **Markdown conversion:** AutoRAG uses a Worker powered by [Workers AI’s Markdown Conversion](/workers-ai/markdown-conversion/) to convert all data into structured Markdown. This ensures consistency across diverse file types. For images, Workers AI is used to perform object detection followed by vision-to-language transformation to convert images into Markdown text.
-3. **Chunking:** The extracted text is chunked to improve retrieval granularity.
+2. **Markdown conversion:** AutoRAG uses [Workers AI’s Markdown Conversion](/workers-ai/markdown-conversion/) to convert all data into structured Markdown. This ensures consistency across diverse file types. For images, Workers AI is used to perform object detection followed by vision-to-language transformation to convert images into Markdown text.
+3. **Chunking:** The extracted text is chunked into smaller pieces to improve retrieval granularity.
 4. **Embedding:** Each chunk is embedded using Workers AI’s embedding model to transform the content into vectors.
-5. **Vector storage:** The resulting vectors, along with metadata like source location and file name, are stored in a Vectorize index created on your account.
+5. **Vector storage:** The resulting vectors, along with metadata like source location and file name, are stored in a Vectorize database created on your account.

-[INSERT IMAGE]
+![Indexing](~/assets/images/autorag/indexing.png)
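The chunking step (step 3) can be illustrated with a toy fixed-size, overlapping-window splitter. AutoRAG's actual chunker is internal to the service; the sizes and text below are arbitrary placeholders chosen only to show the idea.

```typescript
// Toy chunker: split text into fixed-size windows that overlap,
// so a sentence cut at a boundary still appears whole in one chunk.
function chunkText(text: string, size = 20, overlap = 5): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}

const chunks = chunkText("AutoRAG converts files to Markdown before indexing.");
```

Each chunk shares its last `overlap` characters with the start of the next one; in a real pipeline each chunk would then be embedded separately in step 4.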

 ## Querying

 Once indexing is complete, AutoRAG is ready to respond to end-user queries in real time.

 Here’s how the querying pipeline works:

-1. **Receive query from AutoRAG API:** The query workflow begins when you send a request to the AutoRAG API.
-2. **Query rewriting (optional):** The input query can be rewritten by one of Workers AI’s LLMs to improve semantic retrieval, if enabled.
-3. **Embedding the query:** The rewritten (or original) query is turned into a vector via the same embedding model in Workers AI.
-4. **Querying Vectorize index:** The query vector is [searched](/vectorize/best-practices/query-vectors/) against the Vectorize index associated with your AutoRAG instance.
+1. **Receive query from AutoRAG API:** The query workflow begins when you send a request to either AutoRAG’s AI Search or Search endpoint.
+2. **Query rewriting (optional):** AutoRAG provides the option to rewrite the input query using one of Workers AI’s LLMs, transforming the original query into a more effective search query to improve retrieval quality.
+3. **Embedding the query:** The rewritten (or original) query is transformed into a vector via the same embedding model used to embed your data, so that it can be compared against your vectorized data to find the most relevant matches.
+4. **Querying Vectorize index:** The query vector is searched against the stored vectors in the Vectorize database associated with your AutoRAG.
 5. **Content retrieval:** Vectorize returns the most relevant chunks and their metadata, and the original content is retrieved from the R2 bucket. These are passed to a text-generation model.
-6. **Response generation:** A text-generation model from Workers AI is used to a response using the retrieved content and the original user’s query using the generation model you select.
+6. **Response generation:** A text-generation model from Workers AI is used to generate a response using the retrieved content and the original user’s query.

-[INSERT IMAGE]
+![Querying](~/assets/images/autorag/querying.png)
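Steps 3 and 4 of the pipeline above can be sketched with a toy similarity search: embed the query, then rank stored vectors by cosine similarity, which is what Vectorize does at much larger scale. The vectors and chunk IDs below are made-up placeholders, not real embedding output.

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Tiny stand-in for the Vectorize index: chunk IDs plus their vectors.
const stored = [
  { id: "chunk-a", vec: [0.9, 0.1, 0.0] },
  { id: "chunk-b", vec: [0.0, 0.2, 0.9] },
];

// Embedding of the (possibly rewritten) query -- step 3.
const queryVec = [1, 0, 0];

// Rank stored chunks by similarity -- step 4. The top match's original
// content would then be fetched from R2 in step 5.
const ranked = stored
  .map((s) => ({ id: s.id, score: cosine(queryVec, s.vec) }))
  .sort((x, y) => y.score - x.score);
```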

 ## Get Started

src/content/docs/autorag/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ Automatically and continuously index your data source, keeping your content fres

 </Feature>

-<Feature header="Workers Binding" href="/autorag/use-autorag/workers-binding/" cta="Add to Worker">
+<Feature header="Workers Binding" href="/autorag/usage/workers-binding/" cta="Add to Worker">

 Call your AutoRAG instance for search or AI search directly from a Cloudflare Worker using the native binding integration.

src/content/docs/autorag/usage/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 pcx_content_type: navigation
 title: Usage
 sidebar:
-  order: 4
+  order: 3
 group:
 hideIndex: true
 ---
