Commit eb487f7

Add autorag recipes page

1 parent 9bca212 commit eb487f7

File tree

3 files changed: +93 -3 lines changed


src/content/docs/autorag/configuration/indexing.mdx

Lines changed: 1 addition & 1 deletion
@@ -32,6 +32,6 @@ Factors that affect performance include:

 To ensure smooth and reliable indexing:

-- Make sure your files are within the size limit (10 MB) and in a supported format to avoid being skipped.
+- Make sure your files are within the [**size limit**](/autorag/platform/limits-pricing/#limits) and in a supported format to avoid being skipped.
 - Keep your Service API token valid to prevent indexing failures.
 - Regularly clean up outdated or unnecessary content in your knowledge base to avoid hitting [Vectorize index limits](/vectorize/platform/limits/).
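The first tip above can be turned into a pre-flight check before uploading files to the data source. This is a minimal sketch with a hypothetical file-listing shape; the 10 MB figure mirrors the limit previously stated in this page, and the current value should be confirmed on the limits page.

```typescript
// Hypothetical stand-in for however your upload pipeline describes files.
interface CandidateFile {
  name: string;
  size: number; // bytes
}

// Assumed limit, taken from the previous wording of this doc; confirm the
// current value on /autorag/platform/limits-pricing/#limits.
const MAX_FILE_BYTES = 10 * 1024 * 1024;

// Drop files that would be silently skipped at indexing time.
function filterIndexableFiles(files: CandidateFile[]): CandidateFile[] {
  return files.filter((f) => f.size <= MAX_FILE_BYTES);
}
```

Filtering up front makes the skip visible in your own logs instead of discovering it after an indexing run.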
Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
---
pcx_content_type: concept
title: Recipes
sidebar:
  order: 4
---

import {
  Badge,
  Description,
  Render,
  TabItem,
  Tabs,
  WranglerConfig,
  MetaInfo,
  Type,
} from "~/components";

This section provides practical examples and recipes for common use cases. These examples use the [Workers Binding](/autorag/usage/workers-binding/), but they can be easily adapted to use the [REST API](/autorag/usage/rest-api/) instead.
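As a sketch of that adaptation, the binding's `search` call can be replaced by an HTTP request. The endpoint path and payload shape below are assumptions for illustration only; confirm them against the REST API reference before use.

```typescript
// Build the HTTP equivalent of `env.AI.autorag(ragName).search({ query })`.
// The URL shape is an assumption, not taken from the docs.
function buildAutoRagSearchRequest(
  accountId: string,
  ragName: string,
  query: string,
  apiToken: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/autorag/rags/${ragName}/search`,
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ query }),
    },
  };
}
```

You would then `fetch(url, init)` and read the JSON response body where the recipes below call the binding directly.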

## Bring your own model

This recipe uses AutoRAG for chunk retrieval while asking a different model to answer the user's question.

```ts
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

export interface Env {
  AI: Ai;
  OPENAI_API_KEY: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    const url = new URL(request.url);
    const userQuery = url.searchParams.get('query') ?? 'How do I train a llama to deliver coffee?';
    const searchResult = await env.AI.autorag('my-rag').search({ query: userQuery });

    if (searchResult.data.length === 0) {
      return Response.json({ text: `No data found for query "${userQuery}"` });
    }

    const chunks = searchResult.data
      .map((item) => {
        const data = item.content.map((content) => content.text).join('\n\n');
        return `<file name="${item.filename}">${data}</file>`;
      })
      .join('\n\n');

    // Create the provider with the key bound to the Worker; the default
    // `openai` export reads process.env, which is not available in Workers.
    const openai = createOpenAI({ apiKey: env.OPENAI_API_KEY });

    const generateResult = await generateText({
      model: openai('gpt-4o-mini'),
      messages: [
        {
          role: 'system',
          content: 'You are a helpful assistant and your task is to answer the user question using the provided files.',
        },
        { role: 'user', content: chunks },
        { role: 'user', content: userQuery },
      ],
    });

    return Response.json({ text: generateResult.text });
  },
} satisfies ExportedHandler<Env>;
```
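The prompt-assembly step in this recipe can be isolated into a small pure helper, which makes it easy to unit-test on its own. The interface here is a simplified stand-in for the binding's result shape, not the binding's actual type.

```typescript
// Simplified stand-in for the shape of `searchResult.data` in the recipe.
interface SearchChunk {
  filename: string;
  content: { text: string }[];
}

// Join each chunk's text blocks and wrap them in a <file> tag, exactly as
// the recipe does inline before handing the context to the model.
function formatChunks(data: SearchChunk[]): string {
  return data
    .map((item) => {
      const text = item.content.map((c) => c.text).join('\n\n');
      return `<file name="${item.filename}">${text}</file>`;
    })
    .join('\n\n');
}
```

Keeping this as a separate function also makes it straightforward to swap in a different context format (for example, Markdown sections instead of `<file>` tags) without touching the fetch handler.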

## Simple search engine

Using the `search` method you can build a simple but fast search engine.

To replicate this example remember to:

- Disable `rewrite_query`, as you want to match the original user query
- Configure your AutoRAG instance to use small chunk sizes; 256 tokens is usually enough

```ts
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    const url = new URL(request.url);
    const userQuery = url.searchParams.get('query') ?? 'How do I train a llama to deliver coffee?';
    const searchResult = await env.AI.autorag('my-rag').search({
      query: userQuery,
      rewrite_query: false,
    });

    return Response.json({
      files: searchResult.data.map((obj) => obj.filename),
    });
  },
} satisfies ExportedHandler<Env>;
```
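Because `search` returns chunk-level results, several entries can point at the same file, so the `files` array in this recipe may contain duplicates. A sketch of collapsing the results into a ranked list of unique filenames:

```typescript
// Deduplicate chunk-level results into unique filenames, preserving the
// order in which files first appear (i.e. their best-ranked chunk).
function uniqueFilenames(data: { filename: string }[]): string[] {
  const seen = new Set<string>();
  const files: string[] = [];
  for (const item of data) {
    if (!seen.has(item.filename)) {
      seen.add(item.filename);
      files.push(item.filename);
    }
  }
  return files;
}
```

In the Worker above you would return `{ files: uniqueFilenames(searchResult.data) }` instead of mapping over the raw results.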

src/content/products/autorag.yaml

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ product:
   meta:
     title: AutoRAG
     description: Create fully managed RAG pipelines for your AI applications.
-    author: '@cloudflare'
+    author: "@cloudflare"

 resources:
-  discord: https://discord.gg/cloudflaredev
+  discord: https://discord.gg/cloudflaredev
