
Commit 5f42b72

progress 2

1 parent 418c453 · commit 5f42b72

3 files changed: +211, -1 lines changed

src/content/docs/autorag/configuration/index.mdx

Lines changed: 22 additions & 1 deletion
@@ -5,4 +5,25 @@ sidebar:
  order: 5
---

-something about all the configurations
+import { MetaInfo, Type } from "~/components";

When creating an AutoRAG instance, you can customize how your RAG pipeline ingests, processes, and responds to data using a set of configuration options. Some settings can be updated after the instance is created, while others are fixed at creation time.

The table below lists all available configuration options:

| Configuration               | Editable after creation | Description                                                                     |
| --------------------------- | ----------------------- | ------------------------------------------------------------------------------- |
| Data source                 | no                      | The source where your knowledge base is stored (e.g. an R2 bucket)              |
| Chunk size                  | yes                     | Number of tokens per chunk                                                      |
| Chunk overlap               | yes                     | Number of overlapping tokens between chunks                                     |
| Embedding model             | no                      | Model used to generate vector embeddings                                        |
| Query rewrite               | yes                     | Enable or disable query rewriting before retrieval                              |
| Query rewrite model         | yes                     | Model used for query rewriting                                                  |
| Query rewrite system prompt | yes                     | Custom system prompt to guide query rewriting behavior                          |
| Match threshold             | yes                     | Minimum similarity score required for a vector match                            |
| Maximum number of results   | yes                     | Maximum number of vector matches returned (`top_k`)                             |
| Generation model            | yes                     | Model used to generate the final response                                       |
| Generation system prompt    | yes                     | Custom system prompt to guide response generation                               |
| AI Gateway                  | yes                     | AI Gateway for monitoring and controlling model usage                           |
| AutoRAG name                | no                      | Name of your AutoRAG instance                                                   |
| Service API token           | yes                     | API token that grants AutoRAG permission to configure resources on your account |
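
To make the table concrete, here is a purely illustrative TypeScript sketch of a configuration object that mirrors the options above. The field names, types, and example values are hypothetical and do not reflect AutoRAG's actual API schema; the model names are only examples of Workers AI models.

```ts
// Hypothetical configuration shape mirroring the table above.
// Field names are illustrative only, not AutoRAG's actual schema.
interface AutoRAGConfig {
	// Fixed at creation time
	name: string;                               // AutoRAG name
	dataSource: { type: "r2"; bucket: string }; // Data source
	embeddingModel: string;                     // Embedding model

	// Editable after creation
	chunkSize: number;                // tokens per chunk
	chunkOverlap: number;             // overlapping tokens between chunks
	queryRewrite: boolean;            // enable or disable query rewriting
	queryRewriteModel?: string;
	queryRewriteSystemPrompt?: string;
	matchThreshold: number;           // minimum similarity score for a match
	maxResults: number;               // maximum vector matches returned (top_k)
	generationModel: string;
	generationSystemPrompt?: string;
	aiGateway?: string;               // AI Gateway used to monitor model usage
	serviceApiToken: string;          // token AutoRAG uses to configure resources
}

const exampleConfig: AutoRAGConfig = {
	name: "docs-rag",
	dataSource: { type: "r2", bucket: "my-docs-bucket" },
	embeddingModel: "@cf/baai/bge-base-en-v1.5",
	chunkSize: 256,
	chunkOverlap: 32,
	queryRewrite: true,
	queryRewriteModel: "@cf/meta/llama-3.1-8b-instruct",
	queryRewriteSystemPrompt: "Rewrite the question to improve retrieval.",
	matchThreshold: 0.4,
	maxResults: 10,
	generationModel: "@cf/meta/llama-3.1-8b-instruct",
	generationSystemPrompt: "Answer using only the retrieved context.",
	aiGateway: "my-gateway",
	serviceApiToken: "<SERVICE_API_TOKEN>",
};
```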

src/content/docs/autorag/configuration/indexing.mdx

Lines changed: 79 additions & 0 deletions
@@ -15,3 +15,82 @@ import {
	MetaInfo,
	Type,
} from "~/components";

AutoRAG automatically indexes your data into vector embeddings optimized for semantic search. Once a data source is connected, indexing runs continuously in the background to keep your knowledge base fresh and queryable.

## Supported Data Source

AutoRAG currently supports Cloudflare R2 as the data source for indexing.

To get started, [configure an R2 bucket](/r2/get-started/) containing your data. AutoRAG will automatically scan and process supported files stored in that bucket.

## Supported File Types and Limits

AutoRAG supports the following file formats:

- `.pdf`, `.docx`, `.txt`, `.csv`, `.html`, `.xml`, `.md`
- Image files such as `.png` and `.jpeg` (used for OCR and image-to-text conversion via Workers AI)

**File limits:**

- Maximum file size: 10 MB
- Unsupported or oversized files are skipped and logged as errors (a pre-upload check like the sketch below can catch these early)
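
If you stage files locally before uploading them to R2, a quick pre-upload check can flag files that indexing would skip. This is an informal sketch, not an official tool; it only encodes the supported extensions and the 10 MB limit listed above.

```ts
import { readdir, stat } from "node:fs/promises";
import { extname, join } from "node:path";

// Supported formats and size limit taken from the lists above.
const SUPPORTED_EXTENSIONS = new Set([
	".pdf", ".docx", ".txt", ".csv", ".html", ".xml", ".md", ".png", ".jpeg",
]);
const MAX_FILE_BYTES = 10 * 1024 * 1024; // 10 MB

// Report which files in a local folder would be skipped during indexing.
async function checkFolder(dir: string): Promise<void> {
	for (const name of await readdir(dir)) {
		const filePath = join(dir, name);
		const info = await stat(filePath);
		if (!info.isFile()) continue;

		if (!SUPPORTED_EXTENSIONS.has(extname(name).toLowerCase())) {
			console.warn(`${name}: unsupported format, would be skipped`);
		} else if (info.size > MAX_FILE_BYTES) {
			console.warn(`${name}: ${info.size} bytes exceeds the 10 MB limit`);
		}
	}
}

checkFolder("./docs").catch(console.error);
```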
## Continuous Indexing

AutoRAG continuously monitors your data source for updates and reindexes your data automatically.

- **Automatic sync**: AutoRAG checks for updates in the connected R2 bucket every 4 hours.
- **Manual sync**: You can manually trigger a sync by clicking **Sync Index** in the dashboard or calling the API.
- **Pause indexing**: You can pause indexing to temporarily stop all scheduled checks and reprocessing.

During each cycle, AutoRAG only reprocesses files that have been added or modified since the last indexing run.

## Indexing Workflow

For a breakdown of the full indexing workflow, including ingestion, Markdown conversion, chunking, embedding, and storage, refer to the [How AutoRAG Works](../how-it-works) page.

That page includes a detailed diagram of the indexing and query-time processes.

## Indexing Statuses

Each AutoRAG instance has an associated indexing status to help monitor its state:

| Status             | Description                                                                |
| ------------------ | -------------------------------------------------------------------------- |
| `active`           | Indexing is running on schedule and up to date                             |
| `waiting_to_start` | A new indexing cycle is queued but has not yet started                     |
| `indexing`         | Indexing is currently in progress                                          |
| `paused`           | Indexing is manually paused and will not check for updates                 |
| `error`            | A failure occurred (e.g. expired Service API token, misconfigured source)  |

Indexing status is visible in the dashboard and available via API.
## File Deletions

If you delete a file from your R2 bucket, AutoRAG does not automatically remove the corresponding data from your vector index.

To remove deleted content from search results, you can:

- Manually delete the associated vectors via the API (see the sketch below), or
- Recreate your AutoRAG instance with a fresh data source

Automatic deletion support may be added in the future.
## Indexing Performance

AutoRAG processes files in parallel for efficient indexing. The total indexing time depends on the number and type of files in your R2 bucket.

Factors that affect performance include:

- Total number of files and their sizes
- File formats (e.g., PDFs take longer to process than plain text)
- Latency of the Workers AI models used for embedding and image processing

Indexing large datasets may take several minutes to complete.

## Best Practices

- Keep files under the size limit so they are not skipped during indexing.
- Use structured formats (Markdown, HTML, plain text) for more accurate embeddings.
- Keep your Service API token up to date to prevent indexing errors.

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@

---
pcx_content_type: concept
title: Similarity cache
sidebar:
  order: 4
---

Semantic caching (similarity-based caching) in AutoRAG lets you serve responses from Cloudflare’s cache for queries that are _similar enough_ to previous requests, not just exact matches. This speeds up response times and cuts costs by reusing answers for questions that are close in meaning.

Unlike basic caching, which only works for identical requests, this feature uses MinHash with Locality-Sensitive Hashing (LSH) to compare prompts based on their content. It is useful when users ask similar questions in different ways, like "What’s the weather today?" and "How’s the weather today?", and you want to reuse cached responses.

You can control how strict or flexible the similarity matching is with customizable thresholds. Cached responses stay valid for 30 days before expiring.

## How It Works

When a request comes in:

1. AutoRAG checks whether a _similar_ prompt (based on your chosen threshold) has been answered before.
2. If a match is found, it returns the cached response instantly.
3. If no match is found, it generates a new response, caches it for 30 days, and links it to related data (like document chunks) for future use.

Similarity is measured on a scale from 0 (completely different) to 1 (identical). You pick how close prompts need to be to count as a match: stricter settings require near-identical prompts, while looser ones allow more variation.

To see if a response came from the cache, check the `cf-aig-cache-status` header: `HIT` for cached, `MISS` for new.
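
For example, a client can read the header after calling whatever endpoint serves your AutoRAG responses. The URL and request body below are placeholders for your own setup; only the `cf-aig-cache-status` header name and its `HIT`/`MISS` values come from the documentation above.

```ts
// Check whether a response was served from the similarity cache.
async function askAutoRAG(query: string): Promise<void> {
	// Placeholder endpoint: substitute the route that serves your AutoRAG responses.
	const response = await fetch("https://example.com/autorag/ask", {
		method: "POST",
		headers: { "content-type": "application/json" },
		body: JSON.stringify({ query }),
	});

	// "HIT" when served from the cache, "MISS" when freshly generated.
	const cacheStatus = response.headers.get("cf-aig-cache-status");
	console.log(`cf-aig-cache-status: ${cacheStatus ?? "not set"}`);
	console.log(await response.text());
}

askAutoRAG("What's the weather today?");
```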
---
## How Similarity Matching Works

We use _MinHash with Locality-Sensitive Hashing (LSH)_ to figure out whether two prompts are similar. Here’s how it works, step by step, with some real examples (an illustrative code sketch follows the list):

1. **Break It Down**:
   We split your prompt into small overlapping pieces to capture its meaning.

   - Example: "What’s the weather like today?" becomes pieces like "What’s the weather," "the weather like," and "weather like today."
   - Example: "How’s the weather today?" becomes "How’s the weather" and "the weather today."

2. **Make a Fingerprint**:
   We turn those pieces into a compact code, a “fingerprint,” that sums up the prompt. Prompts with lots of overlapping pieces get similar fingerprints.

   - Example: "What’s the weather like today?" and "How’s the weather today?" share bits like "the weather," so their fingerprints are close.
   - Example: "What’s the weather like today?" vs. "Tell me about cats" have no overlap, so their fingerprints are very different.

3. **Group Similar Ones**:
   We group prompts with similar fingerprints into buckets. This way, we only check a small group instead of every past prompt.

   - Example: "What’s the weather like today?" lands in a "weather questions" bucket with "How’s the weather today?" but not "Tell me about cats."
   - Example: "Give me a recipe for cake" goes into a "recipe" bucket with "How do I bake a cake?" but not "What’s the time?"

4. **Compare Fast**:
   For a new prompt, we check its fingerprint against the buckets. If it’s close enough (based on your threshold), we grab the cached answer.

   - Example: New prompt "What’s today’s weather?" matches "What’s the weather like today?" (85% similar) and gets the cached response: "It’s sunny, 72°F."
   - Example: New prompt "How do I cook pasta?" matches "Give me a recipe for pasta" (75% similar) and reuses: "Boil water, add pasta, cook 10 mins."
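
The sketch below is an illustrative TypeScript version of the same idea: shingling, MinHash fingerprints, and LSH buckets. It is not AutoRAG's actual implementation; the hash function, signature length, band count, and threshold are arbitrary demonstration choices.

```ts
// Illustrative MinHash + LSH sketch, not AutoRAG's implementation.

// 1. Break It Down: split a prompt into overlapping word trigrams ("pieces").
function shingles(prompt: string, size = 3): Set<string> {
	const words = prompt
		.toLowerCase()
		.replace(/[^\w\s]/g, "")
		.split(/\s+/)
		.filter(Boolean);
	const pieces = new Set<string>();
	for (let i = 0; i + size <= words.length; i++) {
		pieces.add(words.slice(i, i + size).join(" "));
	}
	return pieces;
}

// Simple seeded 32-bit hash (FNV-1a variant) to simulate a family of hash functions.
function hash32(text: string, seed: number): number {
	let h = (0x811c9dc5 ^ seed) >>> 0;
	for (let i = 0; i < text.length; i++) {
		h ^= text.charCodeAt(i);
		h = Math.imul(h, 0x01000193) >>> 0;
	}
	return h >>> 0;
}

// 2. Make a Fingerprint: the MinHash signature keeps the minimum hash per seed.
function minhashSignature(prompt: string, numHashes = 64): number[] {
	const signature = new Array<number>(numHashes).fill(0xffffffff);
	for (const piece of shingles(prompt)) {
		for (let seed = 0; seed < numHashes; seed++) {
			const h = hash32(piece, seed);
			if (h < signature[seed]) signature[seed] = h;
		}
	}
	return signature;
}

// Estimated similarity = fraction of signature positions that agree.
function estimatedSimilarity(a: number[], b: number[]): number {
	let matching = 0;
	for (let i = 0; i < a.length; i++) {
		if (a[i] === b[i]) matching++;
	}
	return matching / a.length;
}

// 3. Group Similar Ones: bucket signatures by bands so a new prompt is only
// compared against cached prompts that share at least one band.
function lshBucketKeys(signature: number[], bands = 16): string[] {
	const rowsPerBand = signature.length / bands;
	const keys: string[] = [];
	for (let band = 0; band < bands; band++) {
		const rows = signature.slice(band * rowsPerBand, (band + 1) * rowsPerBand);
		keys.push(`${band}:${rows.join(",")}`);
	}
	return keys;
}

// 4. Compare Fast: check bucket overlap, then the similarity estimate.
// Real systems tune shingle size, normalization, and band counts; the numbers
// printed here are only meaningful for this toy configuration.
const THRESHOLD = 0.85;
const cachedSig = minhashSignature("What is the weather like in London today?");
const newSig = minhashSignature("What's the weather like in London today?");

const newBuckets = new Set(lshBucketKeys(newSig));
const isCandidate = lshBucketKeys(cachedSig).some((key) => newBuckets.has(key));
const score = estimatedSimilarity(cachedSig, newSig);

console.log({ isCandidate, score, reuseCache: isCandidate && score >= THRESHOLD });
```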
### Real-World Examples

- **Weather Chatbot**:

  - Cached: "What’s the weather like today?" → "Sunny, 72°F."
  - New: "How’s the weather today?" → 85% similar, returns "Sunny, 72°F" from cache.
  - New: "What’s the time?" → 10% similar, generates a new response.

- **Recipe App**:

  - Cached: "How do I bake a cake?" → "Mix flour, sugar, eggs; bake at 350°F for 30 mins."
  - New: "Give me a cake recipe" → 75% similar, reuses the cached steps.
  - New: "How’s the weather?" → 5% similar, no match, new response generated.

- **Support Bot**:

  - Cached: "How do I reset my password?" → "Click ‘Forgot Password’ and follow the link."
  - New: "How can I change my password?" → 80% similar, uses the cached answer.
  - New: "What’s your return policy?" → 20% similar, fetches a fresh answer.

This method is fast because it doesn’t compare every word; it uses those fingerprints and buckets to zoom in on likely matches.

---
## Choosing a Threshold

The similarity threshold decides how close two prompts need to be to reuse a cached response. Here’s what you can pick from:

- **Super Strict Match (95%)**:

  - For near-identical prompts, like "What’s the weather?" and "What’s the weather today?"
  - Fewer cache hits, but highly accurate answers.

- **Close Enough (85%)**:

  - For very similar prompts, like "What’s today’s weather?" and "How’s the weather today?"
  - Balances speed and accuracy (our recommended default).

- **Flexible Friend (75%)**:

  - For fairly similar prompts, like "Tell me about cats" and "What are cats like?"
  - More cache hits, still keeps things relevant.

- **Anything Goes (60%)**:

  - For loosely related prompts, like "What’s the weather?" and "What’s the forecast?"
  - Maximizes reuse, but may stretch relevance a bit.

Test these out to find what fits your app best. Higher thresholds (like 95%) are pickier, while lower ones (like 60%) are more forgiving.

---
:::caution[Cache Behavior Notes]

- **Volatile Cache**: If two similar requests arrive at the same time, the first may not be cached in time for the second to use it, resulting in a `MISS`.
- **30-Day Cache**: Cached responses last 30 days, then expire automatically. Custom durations are not currently supported.
- **Data Dependency**: Cached responses are tied to specific document chunks. If those chunks change or are deleted, the cache is cleared to keep answers fresh.

:::
