diff --git a/public/__redirects b/public/__redirects index 15440da4cf8f6d..021d20cf4bc63e 100644 --- a/public/__redirects +++ b/public/__redirects @@ -177,6 +177,9 @@ # api-shield /api-shield/security/sequential-abuse-detection/ /api-shield/security/sequence-analytics/ 301 +# autorag +/autorag/usage/recipes/ /autorag/how-to/ 301 + # bots /bots/about/plans/ /bots/plans/ 301 /bots/about/plans/biz-and-ent/ /bots/plans/biz-and-ent/ 301 diff --git a/src/content/changelog/autorag/2025-04-23-autorag-metadata-filtering.mdx b/src/content/changelog/autorag/2025-04-23-autorag-metadata-filtering.mdx new file mode 100644 index 00000000000000..d9f10a7c3341fc --- /dev/null +++ b/src/content/changelog/autorag/2025-04-23-autorag-metadata-filtering.mdx @@ -0,0 +1,36 @@ +--- +title: Metadata filtering and multitenancy support in AutoRAG +description: Add metadata filters to AutoRAG queries to enable multitenancy and control the scope of retrieved results. +products: + - autorag +date: 2025-04-23T06:00:00Z +--- + +You can now filter [AutoRAG](/autorag) search results by `folder` and `modified_date` using [metadata filtering](/autorag/configuration/metadata-filtering/) to narrow down the scope of your query. + +This makes it easy to build [multitenant experiences](/autorag/how-to/multitenancy/) where each user can only access their own data. By organizing your content into per-tenant folders and applying a `folder` filter at query time, you ensure that each tenant retrieves only their own documents. + +**Example folder structure:** + +```bash +customer-a/logs/ +customer-a/contracts/ +customer-b/contracts/ +``` + +**Example query:** + +```js +const response = await env.AI.autorag("my-autorag").search({ + query: "When did I sign my agreement contract?", + filters: { + type: "eq", + key: "folder", + value: "customer-a/contracts/", + }, +}); +``` + +To use metadata filtering, create a new AutoRAG or reindex your existing data.
To reindex all content in an existing AutoRAG, update any chunking setting and select **Sync index**. Metadata filtering is available for all data indexed on or after **April 21, 2025**. + +If you are new to AutoRAG, see the [Getting started guide](/autorag/get-started/). diff --git a/src/content/docs/autorag/configuration/metadata-filtering.mdx b/src/content/docs/autorag/configuration/metadata-filtering.mdx new file mode 100644 index 00000000000000..81b9db5bad5fe6 --- /dev/null +++ b/src/content/docs/autorag/configuration/metadata-filtering.mdx @@ -0,0 +1,117 @@ +--- +pcx_content_type: concept +title: Metadata filtering +sidebar: + order: 6 +--- + +Metadata filtering narrows down search results based on metadata, so only relevant content is retrieved. Filters are applied before retrieval, so you only query the scope of documents that matter. + +Here is an example of metadata filtering using [Workers Binding](/autorag/usage/workers-binding/), but it can be easily adapted to use the [REST API](/autorag/usage/rest-api/) instead. + +```js +const answer = await env.AI.autorag("my-autorag").search({ + query: "How do I train a llama to deliver coffee?", + filters: { + type: "and", + filters: [ + { + type: "eq", + key: "folder", + value: "llama/logistics/", + }, + { + type: "gte", + key: "modified_date", + value: "1735689600000", // unix timestamp for 2025-01-01 + }, + ], + }, +}); +``` + +## Metadata attributes + +You can currently filter by the `folder` and `modified_date` of an R2 object. Custom metadata attributes are not yet supported. + +### `folder` + +The directory of the object. For example, the `folder` of the object at `llama/logistics/llama-logistics.mdx` is `llama/logistics/`. Note that the `folder` does not include a leading `/`. + +Note that the `folder` filter only includes files exactly in that folder, so files in subdirectories are not included.
For example, specifying `folder: "llama/"` will match files in `llama/` but not files in `llama/logistics/`. + +### `modified_date` + +The timestamp indicating when the object was last modified. Comparisons are supported using a 13-digit Unix timestamp (milliseconds), but values are rounded down to the nearest second. For example, `1735689600999` or `2025-01-01 00:00:00.999 UTC` will be rounded down to `1735689600000`, corresponding to `2025-01-01 00:00:00 UTC`. + +## Filter schema + +You can create a simple comparison filter, or combine multiple comparison filters using a compound filter. + +### Comparison filter + +You can compare a metadata attribute (for example, `folder` or `modified_date`) with a target value using a comparison filter. + +```js +filters: { + type: "operator", + key: "metadata_attribute", + value: "target_value" +} +``` + +The available operators for the comparison are: + +| Operator | Description | +| -------- | ------------------------- | +| `eq` | Equals | +| `ne` | Not equals | +| `gt` | Greater than | +| `gte` | Greater than or equal to | +| `lt` | Less than | +| `lte` | Less than or equal to | + +### Compound filter + +You can use a compound filter to combine multiple comparison filters with a logical operator. + +```js +filters: { + type: "compound_operator", + filters: [...] +} +``` + +The available compound operators are: `and`, `or`. + +Note the following limitations with the compound operators: + +- Nesting combinations of `and` and `or` is not supported; you can use only a single `and` or a single `or`. +- When using `or`: + - Only the `eq` operator is allowed. + - All conditions must filter on the **same key** (for example, all on `folder`). + +## Response + +You can see the metadata attributes of your retrieved data in the response under the property `attributes` for each retrieved chunk.
For example: + +```js +"data": [ + { + "file_id": "llama001", + "filename": "llama/logistics/llama-logistics.md", + "score": 0.45, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/logistics/", + }, + "content": [ + { + "id": "llama001", + "type": "text", + "text": "Llamas can carry 3 drinks max." + } + ] + } +] +``` diff --git a/src/content/docs/autorag/get-started.mdx b/src/content/docs/autorag/get-started.mdx index 670534ea322108..04ee044e7ae3de 100644 --- a/src/content/docs/autorag/get-started.mdx +++ b/src/content/docs/autorag/get-started.mdx @@ -1,5 +1,5 @@ --- -title: Get started +title: Getting started pcx_content_type: get-started sidebar: order: 2 diff --git a/src/content/docs/autorag/how-to/bring-your-own-generation-model.mdx b/src/content/docs/autorag/how-to/bring-your-own-generation-model.mdx new file mode 100644 index 00000000000000..45770ae0e7bb45 --- /dev/null +++ b/src/content/docs/autorag/how-to/bring-your-own-generation-model.mdx @@ -0,0 +1,83 @@ +--- +pcx_content_type: concept +title: Bring your own generation model +sidebar: + order: 5 +--- + +import { + Badge, + Description, + Render, + TabItem, + Tabs, + WranglerConfig, + MetaInfo, + Type, +} from "~/components"; + +When using `AI Search`, AutoRAG uses a Workers AI model to generate the response. If you want to generate responses with a model outside of Workers AI, you can still use AutoRAG for search and bring your own model for generation. + +Here is an example of how you can use an OpenAI model to generate your responses. This example uses [Workers Binding](/autorag/usage/workers-binding/), but can be easily adapted to use the [REST API](/autorag/usage/rest-api/) instead.
+ +```ts +import { openai } from "@ai-sdk/openai"; +import { generateText } from "ai"; + +export interface Env { + AI: Ai; + OPENAI_API_KEY: string; +} + +export default { + async fetch(request, env): Promise<Response> { + // Parse incoming url + const url = new URL(request.url); + + // Get the user query or default to a predefined one + const userQuery = + url.searchParams.get("query") ?? + "How do I train a llama to deliver coffee?"; + + // Search for documents in AutoRAG + const searchResult = await env.AI.autorag("my-rag").search({ + query: userQuery, + }); + + if (searchResult.data.length === 0) { + // No matching documents + return Response.json({ text: `No data found for query "${userQuery}"` }); + } + + // Join all document chunks into a single string + const chunks = searchResult.data + .map((item) => { + const data = item.content + .map((content) => { + return content.text; + }) + .join("\n\n"); + + return `${data}`; + }) + .join("\n\n"); + + // Send the user query + matched documents to OpenAI for an answer + const generateResult = await generateText({ + model: openai("gpt-4o-mini"), + messages: [ + { + role: "system", + content: + "You are a helpful assistant and your task is to answer the user question using the provided files.", + }, + { role: "user", content: chunks }, + { role: "user", content: userQuery }, + ], + }); + + // Return the generated answer + return Response.json({ text: generateResult.text }); + }, +} satisfies ExportedHandler; +``` diff --git a/src/content/docs/autorag/how-to/index.mdx b/src/content/docs/autorag/how-to/index.mdx new file mode 100644 index 00000000000000..36463f7c584c31 --- /dev/null +++ b/src/content/docs/autorag/how-to/index.mdx @@ -0,0 +1,12 @@ +--- +pcx_content_type: navigation +title: How to +sidebar: + order: 4 + group: + hideIndex: true +--- + +import { DirectoryListing } from "~/components"; + +<DirectoryListing /> diff --git a/src/content/docs/autorag/how-to/multitenancy.mdx b/src/content/docs/autorag/how-to/multitenancy.mdx new file mode
100644 index 00000000000000..b3541108033a78 --- /dev/null +++ b/src/content/docs/autorag/how-to/multitenancy.mdx @@ -0,0 +1,41 @@ +--- +pcx_content_type: concept +title: Create multitenancy +sidebar: + order: 5 +--- + +AutoRAG supports multitenancy by letting you segment content by tenant, so each user, customer, or workspace can only access their own data. This is typically done by organizing documents into per-tenant folders and applying [metadata filters](/autorag/configuration/metadata-filtering/) at query time. + +## 1. Organize content by tenant + +When uploading files to R2, structure your content by tenant using unique folder paths. + +Example folder structure: + +```bash +customer-a/logs/ +customer-a/contracts/ +customer-b/contracts/ +``` + +When indexing, AutoRAG will automatically store the folder path as metadata under the `folder` attribute. It is recommended to enforce folder separation during upload or indexing to prevent accidental data access across tenants. + +## 2. Search using folder filters + +To ensure a tenant only retrieves their own documents, apply a `folder` filter when performing a search. + +Example using [Workers Binding](/autorag/usage/workers-binding/): + +```js +const response = await env.AI.autorag("my-autorag").search({ + query: "When did I sign my agreement contract?", + filters: { + type: "eq", + key: "folder", + value: "customer-a/contracts/", + }, +}); +``` + +To filter across multiple folders, or to add date-based filtering, you can use a [compound filter](/autorag/configuration/metadata-filtering/#compound-filter) with an array of comparison filters.
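To make the compound-filter shape concrete, here is a small sketch. The `buildTenantFilter` helper is hypothetical (not part of the AutoRAG API); it builds an `or` filter across several tenant folders, respecting the documented limitations that `or` only allows `eq` comparisons and that all conditions must target the same key:

```javascript
// Hypothetical helper (not part of AutoRAG): build a filter that matches
// documents in any of the given tenant folders. With a single folder this is
// a plain `eq` comparison; with several it becomes an `or` compound filter,
// which AutoRAG only allows with `eq` comparisons on the same key.
function buildTenantFilter(folders) {
  if (folders.length === 1) {
    return { type: "eq", key: "folder", value: folders[0] };
  }
  return {
    type: "or",
    filters: folders.map((folder) => ({
      type: "eq",
      key: "folder",
      value: folder,
    })),
  };
}

// Scope a query to two of customer-a's folders.
const filters = buildTenantFilter(["customer-a/contracts/", "customer-a/logs/"]);
```

The resulting object can then be passed as the `filters` option of `search()`. For date-based scoping instead, use an `and` compound whose comparisons include a `gte` or `lte` on `modified_date`.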
diff --git a/src/content/docs/autorag/how-to/simple-search-engine.mdx b/src/content/docs/autorag/how-to/simple-search-engine.mdx new file mode 100644 index 00000000000000..f170bb28abc469 --- /dev/null +++ b/src/content/docs/autorag/how-to/simple-search-engine.mdx @@ -0,0 +1,36 @@ +--- +pcx_content_type: concept +title: Create a simple search engine +sidebar: + order: 5 +--- + +By using the `search` method, you can implement a simple but fast search engine. This example uses [Workers Binding](/autorag/usage/workers-binding/), but can be easily adapted to use the [REST API](/autorag/usage/rest-api/) instead. + +To replicate this example, remember to: + +- Disable `rewrite_query`, as you want to match the original user query +- Configure your AutoRAG to use small chunk sizes; 256 tokens is usually enough + +```ts +export interface Env { + AI: Ai; +} + +export default { + async fetch(request, env): Promise<Response> { + const url = new URL(request.url); + const userQuery = + url.searchParams.get("query") ?? + "How do I train a llama to deliver coffee?"; + const searchResult = await env.AI.autorag("my-rag").search({ + query: userQuery, + rewrite_query: false, + }); + + return Response.json({ + files: searchResult.data.map((obj) => obj.filename), + }); + }, +} satisfies ExportedHandler; +``` diff --git a/src/content/docs/autorag/index.mdx b/src/content/docs/autorag/index.mdx index 5c92c1fe88f658..5c287f9b750b95 100644 --- a/src/content/docs/autorag/index.mdx +++ b/src/content/docs/autorag/index.mdx @@ -55,6 +55,12 @@ Automatically and continuously index your data source, keeping your content fresh for retrieval. + + +Create multitenancy by scoping search to each tenant’s data using folder-based metadata filters. + + + Call your AutoRAG instance for search or AI Search directly from a Cloudflare Worker using the native binding integration.
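One practical note on the search-engine example above: `search` returns one entry per matched chunk, so the same filename can appear several times when multiple chunks of a file match. A minimal deduplication sketch in plain JavaScript (the sample results are illustrative, not real AutoRAG output):

```javascript
// Deduplicate filenames while keeping first-appearance order
// (results arrive sorted by score, best match first).
function uniqueFilenames(results) {
  const seen = new Set();
  const files = [];
  for (const item of results) {
    if (!seen.has(item.filename)) {
      seen.add(item.filename);
      files.push(item.filename);
    }
  }
  return files;
}

// Illustrative data shaped like `searchResult.data`.
const results = [
  { filename: "llama/logistics/llama-logistics.md", score: 0.45 },
  { filename: "llama/llama-commands.md", score: 0.4 },
  { filename: "llama/logistics/llama-logistics.md", score: 0.35 },
];
const files = uniqueFilenames(results);
// → ["llama/logistics/llama-logistics.md", "llama/llama-commands.md"]
```

In the Worker, you would apply this to `searchResult.data` before building the JSON response.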
diff --git a/src/content/docs/autorag/usage/recipes.mdx b/src/content/docs/autorag/usage/recipes.mdx deleted file mode 100644 index 7ed9d89b298195..00000000000000 --- a/src/content/docs/autorag/usage/recipes.mdx +++ /dev/null @@ -1,99 +0,0 @@ ---- -pcx_content_type: concept -title: Recipes -sidebar: - order: 5 ---- - -import { - Badge, - Description, - Render, - TabItem, - Tabs, - WranglerConfig, - MetaInfo, - Type, -} from "~/components"; - -This section provides practical examples and recipes for common use cases. These examples are done using [Workers Binding](/autorag/usage/workers-binding/) but can be easely adapted to use the [REST API](/autorag/usage/rest-api/) instead. - -## Bring your own model - -You can use AutoRAG for search while leveraging a model outside of Workers AI to generate responses. Here is an example of how you can use an OpenAI model to generate your responses. - -```ts -import {openai} from '@ai-sdk/openai'; -import {generateText} from "ai"; - -export interface Env { - AI: Ai; - OPENAI_API_KEY: string; -} - -export default { - async fetch(request, env): Promise<Response> { - // Parse incoming url - const url = new URL(request.url) - - // Get the user query or default to a predefined one - const userQuery = url.searchParams.get('query') ?? 'How do I train a llama to deliver coffee?'
- - // Search for documents in AutoRAG - const searchResult = await env.AI.autorag('my-rag').search({query: userQuery}) - - if (searchResult.data.length === 0) { - // No matching documents - return Response.json({text: `No data found for query "${userQuery}"`}) - } - - // Join all document chunks into a single string - const chunks = searchResult.data.map((item) => { - const data = item.content.map((content) => { - return content.text - }).join('\n\n') - - return `${data}` - }).join('\n\n') - - // Send the user query + matched documents to openai for answer - const generateResult = await generateText({ - model: openai("gpt-4o-mini"), - messages: [ - {role: 'system', content: 'You are a helpful assistant and your task is to answer the user question using the provided files.'}, - {role: 'user', content: chunks}, - {role: 'user', content: userQuery}, - ], - }); - - // Return the generated answer - return Response.json({text: generateResult.text}); - }, -} satisfies ExportedHandler; -``` - -## Simple search engine - -Using the `search` method you can implement a simple but fast search engine. - -To replicate this example remember to: -- Disable `rewrite_query` as you want to match the original user query -- Configure your AutoRAG to have small chunk sizes, usually 256 tokens is enough - -```ts -export interface Env { - AI: Ai; -} - -export default { - async fetch(request, env): Promise<Response> { - const url = new URL(request.url) - const userQuery = url.searchParams.get('query') ?? 'How do I train a llama to deliver coffee?'
- const searchResult = await env.AI.autorag('my-rag').search({query: userQuery, rewrite_query: false}) - - return Response.json({ - files: searchResult.data.map((obj) => obj.filename) - }) - }, -} satisfies ExportedHandler; -``` diff --git a/src/content/docs/autorag/usage/rest-api.mdx b/src/content/docs/autorag/usage/rest-api.mdx index 9685f3cb8e63bb..ad5359e7738c48 100644 --- a/src/content/docs/autorag/usage/rest-api.mdx +++ b/src/content/docs/autorag/usage/rest-api.mdx @@ -41,10 +41,10 @@ curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/autorag/rags/{AU -d '{ "query": "How do I train a llama to deliver coffee?", "model": @cf/meta/llama-3.3-70b-instruct-sd, - "rewrite_query": true, + "rewrite_query": false, "max_num_results": 10, "ranking_options": { - "score_threshold": 0.6 + "score_threshold": 0.3 }, "stream": true, }' @@ -81,7 +81,7 @@ curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/autorag/rags/{AU "rewrite_query": true, "max_num_results": 10, "ranking_options": { - "score_threshold": 0.6 + "score_threshold": 0.3 }, }' diff --git a/src/content/docs/autorag/usage/workers-binding.mdx b/src/content/docs/autorag/usage/workers-binding.mdx index 97606e0fdbe065..c5fafe2c4f6d28 100644 --- a/src/content/docs/autorag/usage/workers-binding.mdx +++ b/src/content/docs/autorag/usage/workers-binding.mdx @@ -40,8 +40,9 @@ const answer = await env.AI.autorag("my-autorag").aiSearch({ rewrite_query: true, max_num_results: 2, ranking_options: { - score_threshold: 0.7, + score_threshold: 0.3, }, + stream: true, }); ``` @@ -61,9 +62,12 @@ This is the response structure without `stream` enabled. 
"data": [ { "file_id": "llama001", - "filename": "docs/llama-logistics.md", - "score": 0.98, - "attributes": {}, + "filename": "llama/logistics/llama-logistics.md", + "score": 0.45, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/logistics/", + }, "content": [ { "id": "llama001", @@ -74,9 +78,12 @@ This is the response structure without `stream` enabled. }, { "file_id": "llama042", - "filename": "docs/llama-commands.md", - "score": 0.95, - "attributes": {}, + "filename": "llama/llama-commands.md", + "score": 0.4, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/", + }, "content": [ { "id": "llama042", @@ -102,7 +109,7 @@ const answer = await env.AI.autorag("my-autorag").search({ rewrite_query: true, max_num_results: 2, ranking_options: { - score_threshold: 0.7, + score_threshold: 0.3, }, }); ``` @@ -120,9 +127,12 @@ const answer = await env.AI.autorag("my-autorag").search({ "data": [ { "file_id": "llama001", - "filename": "docs/llama-logistics.md", - "score": 0.98, - "attributes": {}, + "filename": "llama/logistics/llama-logistics.md", + "score": 0.45, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/logistics/", + }, "content": [ { "id": "llama001", @@ -133,9 +143,12 @@ const answer = await env.AI.autorag("my-autorag").search({ }, { "file_id": "llama042", - "filename": "docs/llama-commands.md", - "score": 0.95, - "attributes": {}, + "filename": "llama/llama-commands.md", + "score": 0.4, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/", + }, "content": [ { "id": "llama042", diff --git a/src/content/partials/autorag/ai-search-api-params.mdx b/src/content/partials/autorag/ai-search-api-params.mdx index abbe380e1d7981..96759837561f1f 100644 --- a/src/content/partials/autorag/ai-search-api-params.mdx +++ 
b/src/content/partials/autorag/ai-search-api-params.mdx @@ -27,6 +27,10 @@ Configurations for customizing result ranking. Defaults to `{}`. - `score_threshold` - The minimum match score required for a result to be considered a match. Defaults to `0`. Must be between `0` and `1`. -`streaming` +`stream` Returns a stream of results as they are available. Defaults to `false`. + +`filters` + +Narrow down search results based on metadata, like folder and date, so only relevant content is retrieved. For more details, refer to [Metadata filtering](/autorag/configuration/metadata-filtering). diff --git a/src/content/partials/autorag/ai-search-response.mdx b/src/content/partials/autorag/ai-search-response.mdx index 8f130e6aa0e289..744942aff82064 100644 --- a/src/content/partials/autorag/ai-search-response.mdx +++ b/src/content/partials/autorag/ai-search-response.mdx @@ -12,9 +12,12 @@ "data": [ { "file_id": "llama001", - "filename": "docs/llama-logistics.md", - "score": 0.98, - "attributes": {}, + "filename": "llama/logistics/llama-logistics.md", + "score": 0.45, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/logistics/", + }, "content": [ { "id": "llama001", @@ -25,9 +28,12 @@ }, { "file_id": "llama042", - "filename": "docs/llama-commands.md", - "score": 0.95, - "attributes": {}, + "filename": "llama/llama-commands.md", + "score": 0.4, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/", + }, "content": [ { "id": "llama042", diff --git a/src/content/partials/autorag/search-api-params.mdx b/src/content/partials/autorag/search-api-params.mdx index 9033f12a121882..87750ffb8d3d10 100644 --- a/src/content/partials/autorag/search-api-params.mdx +++ b/src/content/partials/autorag/search-api-params.mdx @@ -22,3 +22,7 @@ Configurations for customizing result ranking. Defaults to `{}`. 
- `score_threshold` - The minimum match score required for a result to be considered a match. Defaults to `0`. Must be between `0` and `1`. + +`filters` + +Narrow down search results based on metadata, like folder and date, so only relevant content is retrieved. For more details, refer to [Metadata filtering](/autorag/configuration/metadata-filtering). diff --git a/src/content/partials/autorag/search-response.mdx b/src/content/partials/autorag/search-response.mdx index 283857fc8d2c9d..fefd5eea30cd2e 100644 --- a/src/content/partials/autorag/search-response.mdx +++ b/src/content/partials/autorag/search-response.mdx @@ -11,9 +11,12 @@ "data": [ { "file_id": "llama001", - "filename": "docs/llama-logistics.md", - "score": 0.98, - "attributes": {}, + "filename": "llama/logistics/llama-logistics.md", + "score": 0.45, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/logistics/", + }, "content": [ { "id": "llama001", @@ -24,9 +27,12 @@ }, { "file_id": "llama042", - "filename": "docs/llama-commands.md", - "score": 0.95, - "attributes": {}, + "filename": "llama/llama-commands.md", + "score": 0.4, + "attributes": { + "modified_date": 1735689600000, // unix timestamp for 2025-01-01 + "folder": "llama/", + }, "content": [ { "id": "llama042", diff --git a/src/content/release-notes/autorag.yaml b/src/content/release-notes/autorag.yaml index b9678c7ac2367d..97374315d39c18 100644 --- a/src/content/release-notes/autorag.yaml +++ b/src/content/release-notes/autorag.yaml @@ -5,6 +5,10 @@ productLink: "/autorag/" productArea: Developer platform productAreaLink: /workers/platform/changelog/platform/ entries: + - publish_date: "2025-04-23" + title: Response streaming added to AutoRAG binding + description: |- + AutoRAG now supports response streaming in the `AI Search` method of the [Workers binding](/autorag/usage/workers-binding/), allowing you to stream results as they’re retrieved by setting `stream: true`.
- publish_date: "2025-04-07" title: AutoRAG is now in open beta! description: |-