diff --git a/src/components/GitHubCode.astro b/src/components/GitHubCode.astro index 2f3a56e34898082..47ef6ab0a93a82d 100644 --- a/src/components/GitHubCode.astro +++ b/src/components/GitHubCode.astro @@ -57,7 +57,7 @@ if (lines) { x.includes(``), ); - if (!startTag || !endTag) { + if (startTag === -1 || endTag === -1) { throw new Error(`[GitHubCode] Unable to find a region using tag "${tag}".`); } diff --git a/src/content/docs/analytics/graphql-api/tutorials/end-customer-analytics.mdx b/src/content/docs/analytics/graphql-api/tutorials/end-customer-analytics.mdx index a4daf981a9a6b05..ffaf522ba8cb4b6 100644 --- a/src/content/docs/analytics/graphql-api/tutorials/end-customer-analytics.mdx +++ b/src/content/docs/analytics/graphql-api/tutorials/end-customer-analytics.mdx @@ -8,7 +8,7 @@ title: Querying HTTP events by hostname with GraphQL In this example, we are going to use the GraphQL Analytics API to query aggregated metrics about HTTP events by hostname over a specific period of time. -The following API call will request the number of visits and edge response bytes for the custom hostname `hostname.example.com` over a four day period. Be sure to replace `CLOUDFLARE_ZONE_ID` AND `API_TOKEN` with your zone ID and API credentials, and adjust the `datetime_geq` and `datetime_leq` values as needed. +The following API call will request the number of visits and edge response bytes for the custom hostname `hostname.example.com` over a four day period. Be sure to replace `CLOUDFLARE_ZONE_TAG` and `API_TOKEN`[^1] with your zone ID and API credentials, and adjust the `datetime_geq` and `datetime_leq` values as needed. 
### API Call @@ -30,7 +30,7 @@ echo '{ "query": } }", "variables": { - "zoneTag": "", + "zoneTag": "", "filter": { "datetime_geq": "2022-07-20T11:00:00Z", "datetime_lt": "2022-07-24T12:00:00Z", @@ -191,3 +191,5 @@ https://api.cloudflare.com/client/v4/graphql \ } }' | jq -r 'try .data.viewer.zones[].topPaths[] | "\"\(.dimensions.metric)\": \(.sum.edgeResponseBytes)"' | sort ``` + +[^1]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. \ No newline at end of file diff --git a/src/content/docs/analytics/graphql-api/tutorials/querying-access-login-events.mdx b/src/content/docs/analytics/graphql-api/tutorials/querying-access-login-events.mdx index 523bb4288ecbad2..fdd3264b42cc131 100644 --- a/src/content/docs/analytics/graphql-api/tutorials/querying-access-login-events.mdx +++ b/src/content/docs/analytics/graphql-api/tutorials/querying-access-login-events.mdx @@ -8,9 +8,8 @@ In this example, we are going to use the GraphQL Analytics API to retrieve logs The following API call will request logs for a single Access login event and output the requested fields. The authentication request is identified by its **Ray ID**, which you can obtain from the `403` Forbidden page shown to the user. -You will need to insert your API credentials in `` and `` and substitute your own values for the following variables: +You will need to insert your ``, your API credentials in ``[^1], and substitute your own values for the following variables: -* `accountTag`: Your Cloudflare account ID. * `rayID`: A unique identifier assigned to the authentication request. * `datetimeStart`: The earliest event time to query (no earlier than September 16, 2022). * `datetimeEnd`: The latest event time to query. Be sure to specify a time range that includes the login event you are querying. 
@@ -46,7 +45,7 @@ echo '{ "query": } }", "variables": { - "accountTag": "699d98642c564d2e855e9661899b7252", + "accountTag": "", "rayId": "74e4ac510dfdc44f", "datetimeStart": "2022-09-20T14:36:38Z", "datetimeEnd": "2022-09-22T14:36:38Z" @@ -103,3 +102,5 @@ Rather than filter by `cfRayId`, you may also [filter](/analytics/graphql-api/fe ``` You can compare the query results to your Access policies to understand why a user was blocked. For example, if your application requires a valid mTLS certificate, Access blocked the request shown above because `mtlsStatus`, `mtlsCommonName`, and `mtlsCertSerialId` are empty. + +[^1]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. \ No newline at end of file diff --git a/src/content/docs/analytics/graphql-api/tutorials/querying-firewall-events.mdx b/src/content/docs/analytics/graphql-api/tutorials/querying-firewall-events.mdx index 1f869340ae14068..9a9c75e6e4a08cf 100644 --- a/src/content/docs/analytics/graphql-api/tutorials/querying-firewall-events.mdx +++ b/src/content/docs/analytics/graphql-api/tutorials/querying-firewall-events.mdx @@ -6,7 +6,7 @@ title: Querying Firewall Events with GraphQL In this example, we are going to use the GraphQL Analytics API to query for Firewall Events over a specified time period. -The following API call will request Firewall Events over a one hour period, and output the requested fields. Be sure to replace ``, ``, and `` with your zone tag and API credentials, and adjust the `datetime_geg` and `datetime_leq` values to your liking. +The following API call will request Firewall Events over a one hour period, and output the requested fields. Be sure to replace ``, ``, and ``[^1] with your zone tag and API credentials, and adjust the `datetime_geq` and `datetime_leq` values to your liking. 
## API Call @@ -34,7 +34,7 @@ echo '{ "query": } }", "variables": { - "zoneTag": "", + "zoneTag": "", "filter": { "datetime_geq": "2022-07-24T11:00:00Z", "datetime_leq": "2022-07-24T12:00:00Z" @@ -182,3 +182,5 @@ https://api.cloudflare.com/client/v4/graphql \ #=> "errors": null #=> } ``` + +[^1]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. \ No newline at end of file diff --git a/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-ids-samples.mdx b/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-ids-samples.mdx index e81b33fccf00d71..00b8a116764462c 100644 --- a/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-ids-samples.mdx +++ b/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-ids-samples.mdx @@ -6,7 +6,7 @@ title: Querying Magic Firewall Intrusion Detection System (IDS) samples with Gra In this example, we are going to use the GraphQL Analytics API to query for IDS samples over a specified time period. -The following API call will request IDS samples over a one hour period, and output the requested fields. Be sure to replace ``, ``, and `` with your account tag and API credentials, and adjust the `datetime_geg` and `datetime_leq` values to your liking. +The following API call will request IDS samples over a one hour period, and output the requested fields. Be sure to replace `` and ``[^1] with your account tag and API credentials, and adjust the `datetime_geq` and `datetime_leq` values to your liking. 
## API Call @@ -31,7 +31,7 @@ echo '{ "query": } }", "variables": { - "accountTag": "", + "accountTag": "", "filter": { "datetime_geq": "2023-06-20T11:00:00.000Z", "datetime_leq": "2023-06-20T12:00:00.000Z", @@ -101,3 +101,5 @@ https://api.cloudflare.com/client/v4/graphql \ #=> "errors": null #=> } ``` + +[^1]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. \ No newline at end of file diff --git a/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-samples.mdx b/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-samples.mdx index ed2624e8ae1c9ec..598181b891a80cb 100644 --- a/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-samples.mdx +++ b/src/content/docs/analytics/graphql-api/tutorials/querying-magic-firewall-samples.mdx @@ -6,7 +6,7 @@ title: Querying Magic Firewall Samples with GraphQL In this example, we are going to use the GraphQL Analytics API to query for Magic Firewall Samples over a specified time period. -The following API call will request Magic Firewall Samples over a one hour period, and output the requested fields. Be sure to replace ``, ``, and `` with your zone tag and API credentials, and adjust the `datetime_geg` and `datetime_leq` values to your liking. +The following API call will request Magic Firewall Samples over a one hour period, and output the requested fields. Be sure to replace `` and ``[^1] with your account tag and API credentials, and adjust the `datetime_geq` and `datetime_leq` values to your liking. 
## API Call @@ -33,7 +33,7 @@ echo '{ "query": } }", "variables": { - "accountTag": "", + "accountTag": "", "filter": { "datetime_geq": "2022-07-24T11:00:00Z", "datetime_leq": "2022-07-24T11:10:00Z" @@ -106,3 +106,5 @@ https://api.cloudflare.com/client/v4/graphql \ #=> "errors": null #=> } ``` + +[^1]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. \ No newline at end of file diff --git a/src/content/docs/analytics/graphql-api/tutorials/querying-workers-metrics.mdx b/src/content/docs/analytics/graphql-api/tutorials/querying-workers-metrics.mdx index 5e3a5a7c908ae2b..0bc60957fb0db3a 100644 --- a/src/content/docs/analytics/graphql-api/tutorials/querying-workers-metrics.mdx +++ b/src/content/docs/analytics/graphql-api/tutorials/querying-workers-metrics.mdx @@ -8,7 +8,7 @@ products: In this example, we are going to use the GraphQL Analytics API to query for Workers Metrics over a specified time period. We can query up to one month of data for dates up to three months ago. -The following API call will request a Worker script's metrics over a one day period, and output the requested fields. Be sure to replace ``, ``, and `` with your API credentials, and adjust the `datetimeStart`, `datetimeEnd`, and `scriptName` variables as needed. +The following API call will request a Worker script's metrics over a one day period, and output the requested fields. Be sure to replace `` and ``[^1] with your API credentials, and adjust the `datetimeStart`, `datetimeEnd`, and `scriptName` variables as needed. 
## API Call @@ -41,7 +41,7 @@ echo '{ "query": } }", "variables": { - "accountTag": "", + "accountTag": "", "datetimeStart": "2022-08-04T00:00:00.000Z", "datetimeEnd": "2022-08-04T01:00:00.000Z", "scriptName": "worker-subrequest-test-client" @@ -126,3 +126,5 @@ https://api.cloudflare.com/client/v4/graphql \ #=> "errors": null #=> } ``` + +[^1]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions. \ No newline at end of file diff --git a/src/content/docs/cache/how-to/cache-keys.mdx b/src/content/docs/cache/how-to/cache-keys.mdx index 95f757e5b79a3c8..d8c13121cbceb67 100644 --- a/src/content/docs/cache/how-to/cache-keys.mdx +++ b/src/content/docs/cache/how-to/cache-keys.mdx @@ -83,11 +83,8 @@ Headers control which headers go into the Cache Key. Similar to Query String, yo When you include a header, the header value is included in the Cache Key. For example, if an HTTP request contains an HTTP header like `X-Auth-API-key: 12345`, and you include the `X-Auth-API-Key header` in your Cache Key Template, then `12345` appears in the Cache Key. -To check for the presence of a header without including its actual value, use the `check_presence` option. +In the **Check if header contains** section, you can add header names and their values to the cache key. For custom headers, values are optional, but for the following restricted headers, you must include one to three specific values: -Currently, you can only exclude the `Origin` header. The `Origin` header is always included unless explicitly excluded. Including the [Origin header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Origin) in the Cache Key is important to enforce [CORS](https://developer.mozilla.org/en-US/docs/Glossary/CORS). 
Additionally, you cannot include the following headers: - -* Headers that have high cardinality and risk sharding the cache * `accept` * `accept-charset` * `accept-encoding` @@ -95,6 +92,13 @@ Currently, you can only exclude the `Origin` header. The `Origin` header is alwa * `accept-language` * `referer` * `user-agent` + +To check for the presence of a header without including its actual value, use the **Check presence of** option. + +Currently, you can only exclude the `Origin` header. The `Origin` header is always included unless explicitly excluded. Including the [Origin header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Origin) in the Cache Key is important to enforce [CORS](https://developer.mozilla.org/en-US/docs/Glossary/CORS). + +Additionally, you cannot include the following headers: + * Headers that re-implement cache or proxy features * `connection` * `content-length` diff --git a/src/content/docs/cache/how-to/cache-rules/index.mdx b/src/content/docs/cache/how-to/cache-rules/index.mdx index 9c1041816edff78..3b22f3d39b8e13c 100644 --- a/src/content/docs/cache/how-to/cache-rules/index.mdx +++ b/src/content/docs/cache/how-to/cache-rules/index.mdx @@ -9,6 +9,11 @@ Use Cache Rules to customize cache settings on Cloudflare. Cache Rules allows yo Cache Rules can be created in the [dashboard](/cache/how-to/cache-rules/create-dashboard/), via [API](/cache/how-to/cache-rules/create-api/) or [Terraform](/cache/how-to/cache-rules/terraform-example/). +:::note +Rules can be versioned. Refer to the [Version Management](/version-management/) documentation for more information. 
+::: + + ## Rules templates diff --git a/src/content/docs/cache/how-to/cache-rules/settings.mdx b/src/content/docs/cache/how-to/cache-rules/settings.mdx index 024bc2dedcece08..aa1c8c3bfe4786c 100644 --- a/src/content/docs/cache/how-to/cache-rules/settings.mdx +++ b/src/content/docs/cache/how-to/cache-rules/settings.mdx @@ -178,7 +178,17 @@ Define the request components used to define a [custom cache key](/cache/how-to/ Enterprise customers have these additional options for custom cache keys: * In the **Query string** section, you can select **All query string parameters**, **All query string parameters except** and enter an exception, **No query parameters except** and enter the parameters, or **Ignore query string** (also available for pay-as-you-go customers). -* In the **Headers** section, you can include headers names and their values, check the presence of another header, and **Include origin header**. +* In the **Headers** section, you can specify header names along with their values. For custom headers, values are optional; however, for the following restricted headers, you must include one to three specific values: + + * `accept` + * `accept-charset` + * `accept-encoding` + * `accept-datetime` + * `accept-language` + * `referer` + * `user-agent` + + To check for a header's presence without including its value, use the **Check presence of** option. You can also choose whether to **Include origin header**. * In the **Cookie** section, you can include cookie names and their values, and check for the presence of another cookie. * In the **Host** section, you can select **Use original host** and **Resolved host**. In the **User** section, you can select **Device type**, **Country**, and **Language**. 
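The two cache-key header sections above can be made concrete with a small sketch. This is purely illustrative (Cloudflare's real cache key encoding is internal and not exposed, and `buildCacheKey` is a hypothetical helper), but it shows why including a header's value shards the cache per value, while a presence check only records whether the header exists:

```typescript
// Illustrative only: a simplified stand-in for a cache key template.
// Cloudflare's actual cache key format differs; this just contrasts
// including a header *value* with a presence-only check.
function buildCacheKey(
  url: string,
  headers: Record<string, string>, // header names assumed lowercase
  includeValueOf: string[],        // headers whose values go into the key
  checkPresenceOf: string[],       // headers recorded only as present/absent
): string {
  const valueParts = includeValueOf.map(
    (name) => `${name.toLowerCase()}=${headers[name.toLowerCase()] ?? ""}`,
  );
  const presenceParts = checkPresenceOf.map(
    (name) => `${name.toLowerCase()}?${name.toLowerCase() in headers ? 1 : 0}`,
  );
  return [url, ...valueParts, ...presenceParts].join("|");
}

const headers = { "x-auth-api-key": "12345" };
// Including the value puts "12345" in the key:
// buildCacheKey("https://example.com/a", headers, ["X-Auth-API-Key"], [])
//   → "https://example.com/a|x-auth-api-key=12345"
// Checking presence only records that the header exists:
// buildCacheKey("https://example.com/a", headers, [], ["X-Auth-API-Key"])
//   → "https://example.com/a|x-auth-api-key?1"
```

The restricted headers listed above (`accept`, `user-agent`, and so on) are exactly the high-cardinality case: including their raw values would produce a near-unique key per client.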
diff --git a/src/content/docs/cache/how-to/purge-cache/index.mdx b/src/content/docs/cache/how-to/purge-cache/index.mdx index 85374c528234bf5..b0a5e585373971b 100644 --- a/src/content/docs/cache/how-to/purge-cache/index.mdx +++ b/src/content/docs/cache/how-to/purge-cache/index.mdx @@ -10,4 +10,8 @@ Cloudflare's Instant Purge ensures that updates to your content are reflected im +:::note +If versioning is active on your zone and multiple environments are configured, you can select the specific environment you want to purge. For more details, refer to the [Version Management](/version-management/) documentation. +::: + diff --git a/src/content/docs/cache/how-to/purge-cache/purge-everything.mdx b/src/content/docs/cache/how-to/purge-cache/purge-everything.mdx index 6b1ba5f53726219..4a927cee751c285 100644 --- a/src/content/docs/cache/how-to/purge-cache/purge-everything.mdx +++ b/src/content/docs/cache/how-to/purge-cache/purge-everything.mdx @@ -15,6 +15,10 @@ Purging everything instantly clears all resources from your CDN cache in all Clo 3. Under **Purge Cache**, select **Purge Everything**. A warning window appears. 4. If you agree, select **Purge Everything**. +:::note +When purging everything for a non-production cache environment, all files for that specific cache environment will be purged. However, when purging everything for the production environment, all files will be purged across all environments. +::: + ## Resulting cache status Purge Everything invalidates the resource, resulting in the `CF-Cache-Status` header indicating [`EXPIRED`](/cache/concepts/cache-responses/#expired) for subsequent requests. 
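The dashboard steps in the purge sections above also have an API equivalent. The sketch below only assembles the endpoint URL and request body; the zone ID is a placeholder, and the actual `fetch` call is left commented out because it needs real credentials. The endpoint and `purge_everything` payload follow the public Cloudflare purge API.

```typescript
// Illustrative sketch: build a "purge everything" request.
// The zone ID below is a placeholder; substitute your own.
const zoneId = "023e105f4ecef8ad9ca31a8372d0c353";
const purgeUrl = `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`;
const purgeBody = JSON.stringify({ purge_everything: true });

// To send it for real (API_TOKEN is a placeholder for a valid token):
// await fetch(purgeUrl, {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${API_TOKEN}`,
//     "Content-Type": "application/json",
//   },
//   body: purgeBody,
// });
```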
diff --git a/src/content/docs/cloudflare-one/insights/logs/index.mdx b/src/content/docs/cloudflare-one/insights/logs/index.mdx index 6ee8254b6b37904..c230f8e3aa57c7d 100644 --- a/src/content/docs/cloudflare-one/insights/logs/index.mdx +++ b/src/content/docs/cloudflare-one/insights/logs/index.mdx @@ -26,6 +26,7 @@ Cloudflare Zero Trust logs are stored for a varying period of time based on the | **Network logs** | 24 hours | 30 days | 24 hours | 30 days | 30 days | | **HTTP logs** | 24 hours | 30 days | 24 hours | 30 days | 30 days | | **DEX logs** | 7 days | 7 days | 7 days | 7 days | 7 days | +| **Device posture logs** | 30 days | 30 days | 30 days | 30 days | 30 days | 1 Enterprise users on per query plans cannot store DNS logs via Cloudflare. You can still export logs via [Logpush](/cloudflare-one/insights/logs/logpush/). diff --git a/src/content/docs/email-security/deployment/api/setup/gsuite-bcc-setup/add-domain.mdx b/src/content/docs/email-security/deployment/api/setup/gsuite-bcc-setup/add-domain.mdx index 403cb579fc14b68..a844f018ce69e1f 100644 --- a/src/content/docs/email-security/deployment/api/setup/gsuite-bcc-setup/add-domain.mdx +++ b/src/content/docs/email-security/deployment/api/setup/gsuite-bcc-setup/add-domain.mdx @@ -20,7 +20,7 @@ To set up Email Security (formerly Area 1) for Gmail: - **Domain**: Enter the domain you want to set up BCC from Google. - **Configured As**: Select Hops, enter `2`. - **Forwarding To**: Enter `google.com`. - - **Outbound TLS**: Select Forward all messages over TLS. + - **Outbound TLS**: Select **Forward all messages over TLS**. - **Quarantine policy**: Ensure no policy is selected. 5. Select **Publish Domain**. 
diff --git a/src/content/docs/kv/api/write-key-value-pairs.mdx b/src/content/docs/kv/api/write-key-value-pairs.mdx index 7394756c5fc335e..93592b659f4ab16 100644 --- a/src/content/docs/kv/api/write-key-value-pairs.mdx +++ b/src/content/docs/kv/api/write-key-value-pairs.mdx @@ -136,6 +136,138 @@ await env.NAMESPACE.put(key, value, { }); ``` +### Limits to KV writes to the same key + +Workers KV has a maximum of 1 write to the same key per second. Writes made to the same key within 1 second will cause rate limiting (`429`) errors to be thrown. + +You should not write more than once per second to the same key. Consider consolidating your writes to a key within a Worker invocation to a single write, or wait at least 1 second between writes. + +The following example serves as a demonstration of how multiple writes to the same key may return errors by forcing concurrent writes within a single Worker invocation. This is not a pattern that should be used in production. + +```typescript +export default { + async fetch(request, env, ctx): Promise<Response> { + // Rest of code omitted + const key = "common-key"; + const parallelWritesCount = 20; + + // Helper function to attempt a write to KV and handle errors + const attemptWrite = async (i: number) => { + try { + await env.YOUR_KV_NAMESPACE.put(key, `Write attempt #${i}`); + return { attempt: i, success: true }; + } catch (error) { + // An error may be thrown if a write to the same key is made within 1 second, with a message. 
For example: + // error: { + // "message": "KV PUT failed: 429 Too Many Requests" + // } + + return { + attempt: i, + success: false, + error: { message: (error as Error).message }, + }; + } + }; + + // Send all requests in parallel and collect results + const results = await Promise.all( + Array.from({ length: parallelWritesCount }, (_, i) => + attemptWrite(i + 1), + ), + ); + // Results will look like: + // [ + // { + // "attempt": 1, + // "success": true + // }, + // { + // "attempt": 2, + // "success": false, + // "error": { + // "message": "KV PUT failed: 429 Too Many Requests" + // } + // }, + // ... + // ] + + return new Response(JSON.stringify(results), { + headers: { "Content-Type": "application/json" }, + }); + }, +}; +``` + +To handle these errors, we recommend implementing retry logic with exponential backoff. Here is a simple approach to adding retries to the above code. + +```typescript +export default { + async fetch(request, env, ctx): Promise<Response> { + // Rest of code omitted + const key = "common-key"; + const parallelWritesCount = 20; + + // Helper function to attempt a write to KV with retries + const attemptWrite = async (i: number) => { + return await retryWithBackoff(async () => { + await env.YOUR_KV_NAMESPACE.put(key, `Write attempt #${i}`); + return { attempt: i, success: true }; + }); + }; + + // Send all requests in parallel and collect results + const results = await Promise.all( + Array.from({ length: parallelWritesCount }, (_, i) => + attemptWrite(i + 1), + ), + ); + + return new Response(JSON.stringify(results), { + headers: { "Content-Type": "application/json" }, + }); + }, +}; + +async function retryWithBackoff( + fn: Function, + maxAttempts = 5, + initialDelay = 1000, +) { + let attempts = 0; + let delay = initialDelay; + + while (attempts < maxAttempts) { + try { + // Attempt the function + return await fn(); + } catch (error) { + // Check if the error is a rate limit error + if ( + (error as Error).message.includes( + "KV PUT failed: 
429 Too Many Requests", + ) + ) { + attempts++; + if (attempts >= maxAttempts) { + throw new Error("Max retry attempts reached"); + } + + // Wait for the backoff period + console.warn(`Attempt ${attempts} failed. Retrying in ${delay} ms...`); + await new Promise((resolve) => setTimeout(resolve, delay)); + + // Exponential backoff + delay *= 2; + } else { + // If it's a different error, rethrow it + throw error; + } + } + } +} +``` + ## Other methods to access KV You can also [write key-value pairs from the command line with Wrangler](/kv/reference/kv-commands/#create) and [write data via the API](/api/operations/workers-kv-namespace-write-key-value-pair-with-metadata). diff --git a/src/content/docs/pages/get-started/direct-upload.mdx b/src/content/docs/pages/get-started/direct-upload.mdx index 53345c21ecbf7c9..73fc8e4ce6cf4f0 100644 --- a/src/content/docs/pages/get-started/direct-upload.mdx +++ b/src/content/docs/pages/get-started/direct-upload.mdx @@ -25,7 +25,7 @@ After you have your prebuilt assets ready, there are two ways to begin uploading :::note -Within a Direct Upload project, you can switch between creating deployments with either Wrangler or drag and drop. However, you cannot create deployments with Direct Upload on a project that you created through Git integration on the dashboard. Only projects created with Direct Upload can be updated with Direct Upload. +Within a Direct Upload project, you can switch between creating deployments with either Wrangler or drag and drop. For existing Git-integrated projects, you can manually create deployments using [`wrangler deploy`](/workers/wrangler/commands/#deploy). However, you cannot use drag and drop on the dashboard with existing Git-integrated projects. 
::: @@ -141,4 +141,4 @@ If using the drag and drop method, a red warning symbol will appear next to an a Drag and drop deployments made from the Cloudflare dashboard do not currently support compiling a `functions` folder of [Pages Functions](/pages/functions/). To deploy a `functions` folder, you must use Wrangler. When deploying a project using Wrangler, if a `functions` folder exists where the command is run, that `functions` folder will be uploaded with the project. -However, note that a `_worker.js` file is supported by both Wrangler and drag-and-drop deployments made from the dashboard. +However, note that a `_worker.js` file is supported by both Wrangler and drag and drop deployments made from the dashboard. \ No newline at end of file diff --git a/src/content/docs/ruleset-engine/rules-language/fields/http-request-body.mdx b/src/content/docs/ruleset-engine/rules-language/fields/http-request-body.mdx index b9bcc3418a8da06..0d9e10280b7932f 100644 --- a/src/content/docs/ruleset-engine/rules-language/fields/http-request-body.mdx +++ b/src/content/docs/ruleset-engine/rules-language/fields/http-request-body.mdx @@ -28,7 +28,7 @@ The Cloudflare Rules language supports these HTTP body fields. `http.request.body.raw` `String` -Represents the unaltered HTTP request body. +The unaltered HTTP request body. When the value of `http.request.body.truncated` is true, the return value may be truncated. @@ -58,7 +58,7 @@ This field may have a value larger than the one returned by `len(http.request.bo `http.request.body.form` `Map>` -Represents the HTTP request body of a form as a Map (or associative array). Populated when the Content-Type header is `application/x-www-form-urlencoded`. +The HTTP request body of a form represented as a Map (or associative array). Populated when the `Content-Type` header is `application/x-www-form-urlencoded`. The values are not pre-processed and retain the original case used in the request. 
@@ -83,9 +83,9 @@ Example value: `http.request.body.form.names` `Array` -Represents the names of the form fields in an HTTP request where the content type is `application/x-www-form-urlencoded`. +The names of the form fields in an HTTP request where the content type is `application/x-www-form-urlencoded`. -The names are not pre-processed and retain the original case used in the request. They are listed in the same order as in the request. +Names are not pre-processed and retain the original case used in the request. They are listed in the same order as in the request. Duplicate names are listed multiple times. @@ -109,7 +109,7 @@ Example value: `http.request.body.form.values` `Array` -Represents the values of the form fields in an HTTP request where the content type is `application/x-www-form-urlencoded`. +The values of the form fields in an HTTP request where the content type is `application/x-www-form-urlencoded`. The values are not pre-processed and retain the original case used in the request. They are listed in the same order as in the request. @@ -149,7 +149,7 @@ This field is available on all Cloudflare plans. `http.request.body.multipart` `Map>` -A Map (or associative array) of multipart names to multipart values in the request body. +A Map (or associative array) representation of multipart names to multipart values in the request body. Example value: diff --git a/src/content/docs/ruleset-engine/rules-language/fields/http-request-header.mdx b/src/content/docs/ruleset-engine/rules-language/fields/http-request-header.mdx index 67aff5abd29118c..640efc8717cff19 100644 --- a/src/content/docs/ruleset-engine/rules-language/fields/http-request-header.mdx +++ b/src/content/docs/ruleset-engine/rules-language/fields/http-request-header.mdx @@ -16,7 +16,7 @@ The Cloudflare Rules language supports these HTTP header fields. `http.request.headers` `Map>` -Represents HTTP request headers as a Map (or associative array). 
+The HTTP request headers represented as a Map (or associative array). The keys of the associative array are the names of HTTP request headers converted to lowercase. @@ -44,7 +44,7 @@ Example value: `http.request.headers.names` `Array` -Represents the names of the headers in the HTTP request. +The names of the headers in the HTTP request. The names are not pre-processed and retain the original case used in the request. @@ -72,7 +72,7 @@ Example value: `["content-type"]` `http.request.headers.values` `Array` -Represents the values of the headers in the HTTP request. +The values of the headers in the HTTP request. The values are not pre-processed and retain the original case used in the request. @@ -122,7 +122,7 @@ When `true`, `http.request.headers`, `http.request.headers.names`, and `http.req `http.request.accepted_languages` `Array` -Represents the list of language tags provided in the [`Accept-Language`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Language) HTTP request header, sorted by weight (`;q=`, with a default weight of `1`) in descending order. +List of language tags provided in the [`Accept-Language`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Language) HTTP request header, sorted by weight (`;q=`, with a default weight of `1`) in descending order. If the HTTP header is not present in the request or is empty, `http.request.accepted_languages[0]` will return a "[missing value](/ruleset-engine/rules-language/values/#notes)", which the `concat()` function will handle as an empty string. 
diff --git a/src/content/docs/ruleset-engine/rules-language/fields/http-request-response.mdx b/src/content/docs/ruleset-engine/rules-language/fields/http-request-response.mdx index 85f3295304f3875..89f3255b6c006c8 100644 --- a/src/content/docs/ruleset-engine/rules-language/fields/http-request-response.mdx +++ b/src/content/docs/ruleset-engine/rules-language/fields/http-request-response.mdx @@ -16,7 +16,7 @@ The Cloudflare Rules language supports these HTTP response fields. `http.response.code` `Integer` -Represents the HTTP status code returned to the client, either set by a Cloudflare product or returned by the origin server. +The HTTP status code returned to the client, either set by a Cloudflare product or returned by the origin server. Example value: `403` @@ -25,7 +25,7 @@ Example value: `http.response.headers` `Map>` -Represents HTTP response headers as a Map (or associative array). +The HTTP response headers represented as a Map (or associative array). When there are repeating headers, the array includes them in the order they appear in the response. The keys convert to lowercase. @@ -47,7 +47,7 @@ Example value: ## `http.response.headers.names` -Represents the names of the headers in the HTTP response. The names are not pre-processed and retain the original case used in the response. +The names of the headers in the HTTP response. The names are not pre-processed and retain the original case used in the response. The order of header names is not guaranteed but will match `http.response.headers.values`. @@ -67,7 +67,7 @@ Example value: `["content-type"]` ## `http.response.headers.values` -Represents the values of the headers in the HTTP response. +The values of the headers in the HTTP response. The values are not pre-processed and retain the original case used in the response. 
@@ -155,7 +155,7 @@ Note: This field is only available in [HTTP response header modifications](/rule `cf.response.error_type` `String` -Contains a string with the type of error in the response being returned. The default value is an empty string (`""`). +A string with the type of error in the response being returned. The default value is an empty string (`""`). The available values are the following: diff --git a/src/content/docs/ruleset-engine/rules-language/fields/standard-fields.mdx b/src/content/docs/ruleset-engine/rules-language/fields/standard-fields.mdx index 2a0cb2ab5ca0440..2b935f8a102f55d 100644 --- a/src/content/docs/ruleset-engine/rules-language/fields/standard-fields.mdx +++ b/src/content/docs/ruleset-engine/rules-language/fields/standard-fields.mdx @@ -8,7 +8,7 @@ head: content: Standard fields | Fields reference --- -import { Details } from "~/components"; +import { Details, Render } from "~/components"; Most standard fields use the same naming conventions as [Wireshark display fields](https://www.wireshark.org/docs/wsug_html_chunked/ChWorkBuildDisplayFilterSection.html). However, there are some subtle differences between Cloudflare and Wireshark: @@ -34,7 +34,7 @@ The Cloudflare Rules language supports these standard fields. `http.cookie` `String` -Represents the entire cookie as a string. +The entire cookie as a string. Example value: @@ -46,7 +46,7 @@ session=8521F670545D7865F79C3D7BEDC29CCE;-background=light `http.host` `String` -Represents the hostname used in the full request URI. +The hostname used in the full request URI. Example value: @@ -58,7 +58,7 @@ www.example.org `http.referer` `String` -Represents the HTTP Referer request header, which contains the address of the web page that linked to the currently requested page. +The HTTP `Referer` request header, which contains the address of the web page that linked to the currently requested page. 
Example value: @@ -70,7 +70,7 @@ Referer: htt­ps://developer.example.org/en-US/docs/Web/JavaScript `http.request.full_uri` `String` -Represents the full URI as received by the web server (does not include `#fragment`, which is not sent to web servers). +The full URI as received by the web server (does not include `#fragment`, which is not sent to web servers). Example value: @@ -82,7 +82,7 @@ htt­ps://www.example.org/articles/index?section=539061&expand=comments `http.request.method` `String` -Represents the HTTP method, returned as a string of uppercase characters. +The HTTP method, returned as a string of uppercase characters. Example value: @@ -94,7 +94,7 @@ GET `http.request.cookies` `Map>` -Represents the `Cookie` HTTP header associated with a request as a Map (associative array). The cookie values are not pre-processed and retain the original case used in the request. +The `Cookie` HTTP header associated with a request represented as a Map (associative array). The cookie values are not pre-processed and retain the original case used in the request. **Decoding:** The cookie names are URL decoded. If two cookies have the same name after decoding, their value arrays are merged. @@ -114,7 +114,7 @@ Example value: `http.request.timestamp.sec` `Integer` -Represents the timestamp when Cloudflare received the request, expressed as Unix time in seconds. This value is 10 digits long. +The timestamp when Cloudflare received the request, expressed as UNIX time in seconds. This value is 10 digits long. To obtain the timestamp milliseconds, use the `http.request.timestamp.msec` field. @@ -130,7 +130,7 @@ When validating HMAC tokens in an expression, pass this field as the `currentTim `http.request.timestamp.msec` `Integer` -Represents the millisecond when Cloudflare received the request, between 0 and 999. +The millisecond when Cloudflare received the request, between 0 and 999. 
To obtain the complete timestamp, use both `http.request.timestamp.sec` and `http.request.timestamp.msec` fields.

@@ -146,7 +146,7 @@

`http.request.uri` `String`

-Represents the URI path and query string of the request.
+The URI path and query string of the request.

Example value:

@@ -158,7 +158,7 @@ Example value:

`http.request.uri.path` `String`

-Represents the URI path of the request.
+The URI path of the request.

Example value:

@@ -197,7 +197,7 @@ Example values:

`http.request.uri.query` `String`

-Represents the entire query string, without the `?` delimiter.
+The entire query string, without the `?` delimiter.

Example value:

@@ -209,7 +209,7 @@ section=539061&expand=comments

`http.user_agent` `String`

-Represents the HTTP User-Agent request header, which contains a characteristic string to identify the client operating system and web browser.
+The HTTP `User-Agent` request header, which contains a characteristic string to identify the client operating system and web browser.

Example value:

@@ -221,7 +221,7 @@ Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65

`http.request.version` `String`

-Represents the version of the HTTP protocol used. Use this field when different checks are needed for different versions.
+The version of the HTTP protocol used. Use this field when different checks are needed for different versions.

Example values:

@@ -232,7 +232,7 @@ Example values:

`http.x_forwarded_for` `String`

-Represents the full `X-Forwarded-For` HTTP header.
+The full `X-Forwarded-For` HTTP header.

Example value:

@@ -244,7 +244,7 @@ Example value:

`ip.src` `IP address`

-Represents the client TCP IP address, which may be adjusted to reflect the actual address of the client using HTTP headers such as `X-Forwarded-For` or `X-Real-IP`.
+The client TCP IP address, which may be adjusted to reflect the actual address of the client using HTTP headers such as `X-Forwarded-For` or `X-Real-IP`. Example value: @@ -256,7 +256,7 @@ Example value: `ip.src.lat` `String` -Represents the latitude associated with the client IP address. +The latitude associated with the client IP address. Example value: @@ -268,7 +268,7 @@ Example value: `ip.src.lon` `String` -Represents the longitude associated with the client IP address. +The longitude associated with the client IP address. Example value: @@ -280,7 +280,7 @@ Example value: `ip.src.city` `String` -Represents the city associated with the client IP address. +The city associated with the client IP address. Example value: @@ -292,7 +292,7 @@ San Francisco `ip.src.postal_code` `String` -Represents the postal code associated with the incoming request. +The postal code associated with the incoming request. Example value: @@ -304,7 +304,7 @@ Example value: `ip.src.metro_code` `String` -Represents the metro code or Designated Market Area (DMA) code associated with the incoming request. +The metro code or Designated Market Area (DMA) code associated with the incoming request. Example value: @@ -316,7 +316,7 @@ Example value: `ip.src.region` `String` -Represents the region name associated with the incoming request. +The region name associated with the incoming request. Example value: @@ -328,7 +328,7 @@ Texas `ip.src.region_code` `String` -Represents the region code associated with the incoming request. +The region code associated with the incoming request. Example value: @@ -340,7 +340,7 @@ TX `ip.src.timezone.name` `String` -Represents the name of the timezone associated with the incoming request. This field is only available in rewrite expressions of [Transform Rules](/rules/transform/). +The name of the timezone associated with the incoming request. This field is only available in rewrite expressions of [Transform Rules](/rules/transform/). 
Example value: @@ -352,7 +352,7 @@ America/Chicago `ip.src.asnum` `Number` -Represents the 16- or 32-bit integer representing the Autonomous System (AS) number associated with the client IP address. +The 16-bit or 32-bit integer representing the Autonomous System (AS) number associated with the client IP address. This field has the same value as the `ip.geoip.asnum` field, which is deprecated. The `ip.geoip.asnum` field is still available for new and existing rules, but you should use the `ip.src.asnum` field instead. @@ -360,7 +360,7 @@ This field has the same value as the `ip.geoip.asnum` field, which is deprecated `ip.src.continent` `String` -Represents the continent code associated with the client IP address: +The continent code associated with the client IP address: - **AF**: Africa - **AN**: Antarctica @@ -377,7 +377,7 @@ This field has the same value as the `ip.geoip.continent` field, which is deprec `ip.src.country` `String` -Represents the 2-letter country code in [ISO 3166-1 Alpha 2](https://www.iso.org/obp/ui/#search/code/) format. +The 2-letter country code in [ISO 3166-1 Alpha 2](https://www.iso.org/obp/ui/#search/code/) format. Example value: @@ -393,7 +393,7 @@ This field has the same value as the `ip.geoip.country` field, which is deprecat `ip.src.subdivision_1_iso_code` `String` -Represents the ISO 3166-2 code for the first-level region associated with the IP address. When the actual value is not available, this field contains an empty string. +The ISO 3166-2 code for the first-level region associated with the IP address. When the actual value is not available, this field contains an empty string. Example value: @@ -409,7 +409,7 @@ This field has the same value as the `ip.geoip.subdivision_1_iso_code` field, wh `ip.src.subdivision_2_iso_code` `String` -Represents the ISO 3166-2 code for the second-level region associated with the IP address. When the actual value is not available, this field contains an empty string. 
+The ISO 3166-2 code for the second-level region associated with the IP address. When the actual value is not available, this field contains an empty string. Example value: @@ -476,7 +476,12 @@ This field has the same value as the `ip.geoip.is_in_european_union` field, whic `raw.http.request.full_uri` `String` -Similar to the [`http.request.full_uri`](#httprequestfull_uri) non-raw field. Represents the full URI as received by the web server without the URI fragment (if any) and without any transformation. +The raw full URI as received by the web server without the URI fragment (if any) and without any transformation. + + **Note:** This raw field may include some basic normalization done by Cloudflare's HTTP server. However, this can change in the future. @@ -484,7 +489,12 @@ Similar to the [`http.request.full_uri`](#httprequestfull_uri) non-raw field. Re `raw.http.request.uri` `String` -Similar to the [`http.request.uri`](#httprequesturi) non-raw field. Represents the URI path and query string of the request without any transformation. +The raw URI path and query string of the request without any transformation. + + **Note:** This raw field may include some basic normalization done by Cloudflare's HTTP server. However, this can change in the future. @@ -492,7 +502,12 @@ Similar to the [`http.request.uri`](#httprequesturi) non-raw field. Represents t `raw.http.request.uri.path` `String` -Similar to the [`http.request.uri.path`](#httprequesturipath) non-raw field. Represents the URI path of the request without any transformation. +The raw URI path of the request without any transformation. + + **Note:** This raw field may include some basic normalization done by Cloudflare's HTTP server. However, this can change in the future. @@ -500,13 +515,26 @@ Similar to the [`http.request.uri.path`](#httprequesturipath) non-raw field. 
Rep `raw.http.request.uri.path.extension` `String` -Similar to the [`http.request.uri.path.extension`](#httprequesturipathextension) non-raw field. Represents the file extension in the request URI path without any transformation. +The raw file extension in the request URI path without any transformation. + + ## `raw.http.request.uri.query` `raw.http.request.uri.query` `String` -Similar to the [`http.request.uri.query`](#httprequesturiquery) non-raw field. Represents the entire query string without the `?` delimiter and without any transformation. +The entire query string without the `?` delimiter and without any transformation. + + **Note:** This raw field may include some basic normalization done by Cloudflare's HTTP server. However, this can change in the future. diff --git a/src/content/docs/ruleset-engine/rules-language/fields/uri.mdx b/src/content/docs/ruleset-engine/rules-language/fields/uri.mdx index 2bf89a132816bda..beff5428b9af851 100644 --- a/src/content/docs/ruleset-engine/rules-language/fields/uri.mdx +++ b/src/content/docs/ruleset-engine/rules-language/fields/uri.mdx @@ -8,6 +8,8 @@ head: content: URI argument and value fields | Fields reference --- +import { Render } from "~/components"; + The Cloudflare Rules language includes URI argument and value fields associated with HTTP requests. Many of these fields return [arrays](/ruleset-engine/rules-language/values/#arrays) containing the respective values. The Cloudflare Rules language supports these URI argument and value fields. @@ -16,7 +18,7 @@ The Cloudflare Rules language supports these URI argument and value fields. `http.request.uri.args` `Map>` -Represents the HTTP URI arguments associated with a request as a Map (associative array). +The HTTP URI arguments associated with a request represented as a Map (associative array). When an argument repeats, the array contains multiple items in the order they appear in the request. 
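As a sketch of how the `http.request.uri.args` Map can be used in an expression (the argument name `page` and the compared value are assumptions for illustration):

```txt
any(http.request.uri.args["page"][*] == "admin")
```

Because an argument can repeat, indexing with `[*]` inside `any()` checks every value in the array rather than only the first.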
@@ -37,7 +39,7 @@ Example value: `http.request.uri.args.names` `Array` -Represents the names of the arguments in the HTTP URI query string. The names are not pre-processed and retain the original case used in the request. +The names of the arguments in the HTTP URI query string. The names are not pre-processed and retain the original case used in the request. When a name repeats, the array contains multiple items in the order that they appear in the request. @@ -56,7 +58,7 @@ Example value: `http.request.uri.args.values` `Array` -Represents the values of arguments in the HTTP URI query string. The values are not pre-processed and retain the original case used in the request. They are in the same order as in the request. +The values of arguments in the HTTP URI query string. The values are not pre-processed and retain the original case used in the request. They are in the same order as in the request. Duplicated values are listed multiple times. @@ -75,16 +77,35 @@ Example value: `raw.http.request.uri.args` `Map>` -Contains the same field values as [`http.request.uri.args`](#httprequesturiargs). +The raw HTTP URI arguments associated with a request represented as a Map (associative array). + + ## `raw.http.request.uri.args.names` `raw.http.request.uri.args.names` `Array` -Contains the same field values as [`http.request.uri.args.names`](#httprequesturiargsnames). +The raw names of the arguments in the HTTP URI query string. + + ## `raw.http.request.uri.args.values` `raw.http.request.uri.args.values` `Array` -Contains the same field values as [`http.request.uri.args.values`](#httprequesturiargsvalues). +The raw values of arguments in the HTTP URI query string. 
+
+
diff --git a/src/content/docs/security-center/cloudforce-one/index.mdx b/src/content/docs/security-center/cloudforce-one/index.mdx
index d91403b03ba621b..7debafebc708d8d 100644
--- a/src/content/docs/security-center/cloudforce-one/index.mdx
+++ b/src/content/docs/security-center/cloudforce-one/index.mdx
@@ -21,6 +21,25 @@ To review and manage Request for Information:
 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
 2. Go to **Security Center** > **Cloudforce One Requests**, then populate the required fields.
+### Submit RFIs
+
+To submit RFIs (Request for Information):
+
+1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
+2. Go to **Cloudforce One Requests** and select **New Request**.
+3. Fill in the required fields, then select **Save**.
+
+Once you select **Save**, the dashboard displays an overview of the shared information, consisting of:
+
+- **Status**: When you submit the RFI, the status is `Open`. Once the team accepts the RFI, the status changes to `Accept`. When the team completes its response to your RFI, the status changes to `Complete`.
+- **Priority**: The priority of the request.
+- **Request type**: The type of request, such as DDoS Attack or Passive DNS Resolution.
+- **Request content**: The content of the request.
+
+The **Responses** section allows you to add clarifying questions and comments.
+
+To view your RFI, select **Cloudforce One Requests** on the sidebar, locate your RFI, then select **View**. From here, you can also edit your existing RFI by selecting **Edit**.
+
 ### Upload and download attachment

 You can also choose to upload and download an attachment.
diff --git a/src/content/docs/speed/optimization/content/speed-brain.mdx b/src/content/docs/speed/optimization/content/speed-brain.mdx
index f43515d163970f1..623b4de0fe4d465 100644
--- a/src/content/docs/speed/optimization/content/speed-brain.mdx
+++ b/src/content/docs/speed/optimization/content/speed-brain.mdx
@@ -55,7 +55,7 @@ The configuration looks like this:
 }
 ```
-This configuration instructs the browser to initiate prefetch requests for future navigations. These prefetch requests will include the `sec-purpose: prefetch` HTTP request header. Prefetches that are not successful will respond with a `406` status code (used to be `503`). Prefetches that are successful will respond with a `200` status code.
+This configuration instructs the browser to initiate prefetch requests for future navigations. These prefetch requests will include the `sec-purpose: prefetch` HTTP request header. Prefetches that are not successful will respond with a `503` status code. Prefetches that are successful will respond with a `200` status code.

 ## Test Speed Brain
diff --git a/src/content/docs/vectorize/best-practices/query-vectors.mdx b/src/content/docs/vectorize/best-practices/query-vectors.mdx
index 11599965553d6c1..a45c80be1ec8ea9 100644
--- a/src/content/docs/vectorize/best-practices/query-vectors.mdx
+++ b/src/content/docs/vectorize/best-practices/query-vectors.mdx
@@ -67,6 +67,15 @@ This would return a set of matches resembling the following, based on the distan
 Refer to [Vectorize API](/vectorize/reference/client-api/) for additional examples.
+## Query by vector identifier
+
+Vectorize offers the ability to search for vectors similar to a vector that is already present in the index using the `queryById()` operation. It can be considered a single operation that combines the `getById()` and `query()` operations.
+
+```ts
+// The query yields matches only if a vector with the ID `some-vector-id` is already present in the index.
+let matches = await env.YOUR_INDEX.queryById("some-vector-id"); +``` + ## Control over scoring precision and query accuracy When querying vectors, you can specify to either use high-precision scoring, thereby increasing the precision of the query matches scores as well as the accuracy of the query results, or use approximate scoring for faster response times. diff --git a/src/content/docs/vectorize/get-started/intro.mdx b/src/content/docs/vectorize/get-started/intro.mdx index f92c27ef7342781..e49b7823a7efa5b 100644 --- a/src/content/docs/vectorize/get-started/intro.mdx +++ b/src/content/docs/vectorize/get-started/intro.mdx @@ -395,6 +395,8 @@ export default { } satisfies ExportedHandler; ``` +You can also use the Vectorize `queryById()` operation to search for vectors similar to a vector that is already present in the index. + ## 7. Deploy your Worker Before deploying your Worker globally, log in with your Cloudflare account by running: diff --git a/src/content/docs/vectorize/reference/client-api.mdx b/src/content/docs/vectorize/reference/client-api.mdx index f0706c5ed4f40b9..d15857b26c8f8f7 100644 --- a/src/content/docs/vectorize/reference/client-api.mdx +++ b/src/content/docs/vectorize/reference/client-api.mdx @@ -83,6 +83,24 @@ For legacy Vectorize (V1) indexes, `topK` is limited to 20, and the `returnMetad ::: +### Query vectors by ID + +```ts +let matches = await env.YOUR_INDEX.queryById("some-vector-id"); +``` + +Query an index using a vector that is already present in the index. + +Query options remain the same as the query operation described above. 
+ +```ts +let matches = await env.YOUR_INDEX.queryById("some-vector-id", { + topK: 5, + returnValues: true, + returnMetadata: "all", +}); +``` + ### Get vectors by ID ```ts diff --git a/src/content/docs/workers/configuration/multipart-upload-metadata.mdx b/src/content/docs/workers/configuration/multipart-upload-metadata.mdx index 25fac9277146fe4..2a86a9ee06e1a1a 100644 --- a/src/content/docs/workers/configuration/multipart-upload-metadata.mdx +++ b/src/content/docs/workers/configuration/multipart-upload-metadata.mdx @@ -39,6 +39,18 @@ At a minimum, the `main_module` key is required to upload a Worker. * The part name that contains the module entry point of the Worker that will be executed. For example, `main.js`. +* `assets` + + * [Asset](/workers/static-assets/) configuration for a Worker. + * `config` + * [html_handling](/workers/static-assets/routing/#1-html_handling) determines the redirects and rewrites of requests for HTML content. + * [not_found_handling](/workers/static-assets/routing/#2-not_found_handling) determines the response when a request does not match a static asset, and there is no Worker script. + * `jwt` field provides a token authorizing assets to be attached to a Worker. + +* `keep_assets` + + * Specifies whether assets should be retained from a previously uploaded Worker version; used in lieu of providing a completion token. + * `bindings` array\[object] optional * [Bindings](#bindings) to expose in the Worker. 
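To make the `assets` metadata concrete, the following TypeScript sketch assembles the multipart form for a Worker upload. The handling values and the `jwt` placeholder are assumptions for illustration; substitute the completion token returned when you upload the assets themselves.

```typescript
// Sketch: building the multipart form for a Worker upload that serves static assets.
// `main.js` is the module entry point; the jwt value is a placeholder, not a real token.
const metadata = {
  main_module: "main.js",
  assets: {
    config: {
      html_handling: "auto-trailing-slash",
      not_found_handling: "404-page",
    },
    jwt: "<COMPLETION_TOKEN>", // placeholder for the assets completion token
  },
  bindings: [],
};

const form = new FormData();
// The metadata part is JSON; each module is appended as its own named part.
form.append("metadata", JSON.stringify(metadata));
form.append(
  "main.js",
  new Blob(['export default { fetch() { return new Response("ok"); } }'], {
    type: "application/javascript+module",
  }),
  "main.js",
);
```

The resulting `form` can then be sent as the body of the upload request for the Worker.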
@@ -126,6 +138,10 @@ Workers can interact with resources on the Cloudflare Developer Platform using [
 "name": "",
 "dataset": ""
 },
+ {
+ "type": "assets",
+ "name": ""
+ },
 {
 "type": "browser_rendering",
 "name": ""
diff --git a/src/content/docs/workers/wrangler/commands.mdx b/src/content/docs/workers/wrangler/commands.mdx
index f491c1fc6509a3c..330359590e37418 100644
--- a/src/content/docs/workers/wrangler/commands.mdx
+++ b/src/content/docs/workers/wrangler/commands.mdx
@@ -139,6 +139,12 @@ wrangler docs []

 ## `init`

+:::note
+
+The `init` command will be removed in a future version. Please use `npm create cloudflare@latest` instead.
+
+:::
+
 Create a new project via the [create-cloudflare-cli (C3) tool](/workers/get-started/guide/#1-create-a-new-worker-project). A variety of web frameworks are available to choose from as well as templates. Dependencies are installed by default, with the option to deploy your project immediately.

 ```txt
@@ -591,8 +597,10 @@ npx wrangler vectorize query [OPTIONS]

 - `INDEX_NAME`
   - The name of the Vectorize index to query.
-- `--vector`
-  - Vector against which the Vectorize index is queried.
+- `--vector`
+  - Vector against which the Vectorize index is queried. Either this or the `--vector-id` parameter must be provided.
+- `--vector-id`
+  - Identifier for a vector that is already present in the index against which the index is queried. Either this or the `--vector` parameter must be provided.
 - `--top-k`
   - The number of vectors to query (default: `5`).
 - `--return-values`
@@ -924,6 +932,74 @@ List R2 bucket in the current account.

 wrangler r2 bucket list
 ```

+### `domain add`
+
+Connect a [custom domain](/r2/buckets/public-buckets/#custom-domains) to an R2 bucket.
+
+```txt
+wrangler r2 bucket domain add [OPTIONS]
+```
+
+- `NAME`
+  - The name of the R2 bucket to connect a custom domain to.
+- `--domain`
+  - The custom domain to connect to the R2 bucket.
+- `--zone-id` + - The [zone ID](/fundamentals/setup/find-account-and-zone-ids/) associated with the custom domain. +- `--min-tls` + - Set the minimum TLS version for the custom domain (defaults to 1.0 if not set). +- `--jurisdiction` + - The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to [jurisdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions). +- `--force` + - Skip confirmation when adding the custom domain. + +### `domain remove` + +Remove a [custom domain](/r2/buckets/public-buckets/#custom-domains) from an R2 bucket. + +```txt +wrangler r2 bucket domain remove [OPTIONS] +``` + +- `NAME` + - The name of the R2 bucket to remove the custom domain from. +- `--domain` + - The custom domain to remove from the R2 bucket. +- `--jurisdiction` + - The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to [jurisdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions). +- `--force` + - Skip confirmation when removing the custom domain. + +### `domain update` + +Update settings for a [custom domain](/r2/buckets/public-buckets/#custom-domains) connected to an R2 bucket. + +```txt +wrangler r2 bucket domain update [OPTIONS] +``` + +- `NAME` + - The name of the R2 bucket associated with the custom domain to update. +- `--domain` + - The custom domain whose settings will be updated. +- `--min-tls` + - Update the minimum TLS version for the custom domain. +- `--jurisdiction` + - The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to [jurisdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions). + +### `domain list` + +List [custom domains](/r2/buckets/public-buckets/#custom-domains) for an R2 bucket. + +```txt +wrangler r2 bucket domain list [OPTIONS] +``` + +- `NAME` + - The name of the R2 bucket whose connected custom domains will be listed. 
+- `--jurisdiction` + - The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to [jurisdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions). + ### `notification create` Create an [event notification](/r2/buckets/event-notifications/) rule for an R2 bucket. @@ -990,7 +1066,7 @@ wrangler r2 bucket sippy enable [OPTIONS] - `--r2-secret-access-key` - Your R2 Secret Access Key. Requires read and write access. - `--jurisdiction` - - The jurisdiction where this R2 bucket is located, if a jurisdiction has been specified. Refer to [Jurisdictional Restrictions](/r2/reference/data-location/#jurisdictional-restrictions). + - The jurisdiction where the bucket exists, if a jurisdiction has been specified. Refer to [jurisdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions). - **AWS S3 provider-specific options:** - `--key-id` - Your AWS Access Key ID. Requires [read and list access](/r2/data-migration/sippy/#amazon-s3). @@ -1262,7 +1338,7 @@ wrangler workflows list ``` - `--page` - - Show a specific page from the listing. You can configure page size using "per-page". + - Show a specific page from the listing. You can configure page size using "per-page". - `--per-page` - Configure the maximum number of Workflows to show per page. @@ -1325,7 +1401,6 @@ wrangler workflows instances terminate [OPTIONS] - `ID` - The ID of a Workflow instance. -{/* ### `instances pause` Pause (until resumed) a Workflow instance. @@ -1352,8 +1427,6 @@ wrangler workflows instances resume [OPTIONS] - `ID` - The ID of a Workflow instance. -*/} - ### `describe` ```sh @@ -1367,7 +1440,7 @@ wrangler workflows describe [OPTIONS] Trigger (create) a Workflow instance. ```sh -wrangler workflows describe [OPTIONS] +wrangler workflows trigger [OPTIONS] ``` - `WORKFLOW_NAME` @@ -1377,9 +1450,11 @@ wrangler workflows describe [OPTIONS] ```sh # Pass optional params to the Workflow. 
- wrangler workflows instances trigger my-workflow '{"hello":"world"}'
+ wrangler workflows trigger my-workflow '{"hello":"world"}'
 ```

+{/*
+
 ### `delete`

Delete (unregister) a Workflow.

@@ -1391,6 +1466,8 @@ wrangler workflows delete [OPTIONS]

- `WORKFLOW_NAME`
  - The name of a registered Workflow.

+*/}
+
 ## `tail`

Start a session to livestream logs from a deployed Worker.
diff --git a/src/content/docs/workflows/build/sleeping-and-retrying.mdx b/src/content/docs/workflows/build/sleeping-and-retrying.mdx
index cdfaf298f754992..fc366956e047862 100644
--- a/src/content/docs/workflows/build/sleeping-and-retrying.mdx
+++ b/src/content/docs/workflows/build/sleeping-and-retrying.mdx
@@ -70,7 +70,7 @@ const defaultConfig: WorkflowStepConfig = {

When providing your own `StepConfig`, you can configure:

* The total number of attempts to make for a step
-* The delay between attempts
+* The delay between attempts (either a `number` of milliseconds or a human-readable string)
* What backoff algorithm to apply between each attempt: any of `constant`, `linear`, or `exponential`
* When to timeout (in duration) before considering the step as failed (including during a retry attempt)
diff --git a/src/content/docs/workflows/build/trigger-workflows.mdx b/src/content/docs/workflows/build/trigger-workflows.mdx
index 8efa29646664f75..1bd5cbe0ef5a601 100644
--- a/src/content/docs/workflows/build/trigger-workflows.mdx
+++ b/src/content/docs/workflows/build/trigger-workflows.mdx
@@ -62,20 +62,20 @@ interface Env {

export default {
  async fetch(req: Request, env: Env) {
-    //
-    const instanceId = new URL(req.url).searchParams.get("instanceId")
+    // Get instanceId from query parameters
+    const instanceId = new URL(req.url).searchParams.get("instanceId")

-    // If an ?instanceId= query parameter is provided, fetch the status
-    // of an existing Workflow by its ID.
- if (instanceId) { - let instance = await env.MY_WORKFLOW.get(instanceId); + // If an ?instanceId= query parameter is provided, fetch the status + // of an existing Workflow by its ID. + if (instanceId) { + let instance = await env.MY_WORKFLOW.get(instanceId); return Response.json({ status: await instance.status(), }); - } + } - // Else, create a new instance of our Workflow, passing in any (optional) params - // and return the ID. + // Else, create a new instance of our Workflow, passing in any (optional) + // params and return the ID. const newId = await crypto.randomUUID(); let instance = await env.MY_WORKFLOW.create({ id: newId }); return Response.json({ diff --git a/src/content/docs/workflows/get-started/cli-quick-start.mdx b/src/content/docs/workflows/get-started/cli-quick-start.mdx index c5b41a89a2ab0bf..35bda25a060a330 100644 --- a/src/content/docs/workflows/get-started/cli-quick-start.mdx +++ b/src/content/docs/workflows/get-started/cli-quick-start.mdx @@ -7,7 +7,7 @@ sidebar: --- -import { Render, PackageManagers } from "~/components" +import { GitHubCode, Render, PackageManagers } from "~/components" :::note @@ -49,91 +49,12 @@ This will create a new folder called `workflows-tutorial`, which contains two fi Open the `src/index.ts` file in your text editor. This file contains the following code, which is the most basic instance of a Workflow definition: -```ts title="src/index.ts" -import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers'; - -type Env = { - // Add your bindings here, e.g. Workers KV, D1, Workers AI, etc. 
- MY_WORKFLOW: Workflow; -}; - -// User-defined params passed to your workflow -type Params = { - email: string; - metadata: Record; -}; - -export class MyWorkflow extends WorkflowEntrypoint { - async run(event: WorkflowEvent, step: WorkflowStep) { - // Can access bindings on `this.env` - // Can access params on `event.params` - - const files = await step.do('my first step', async () => { - // Fetch a list of files from $SOME_SERVICE - return { - inputParams: event, - files: [ - 'doc_7392_rev3.pdf', - 'report_x29_final.pdf', - 'memo_2024_05_12.pdf', - 'file_089_update.pdf', - 'proj_alpha_v2.pdf', - 'data_analysis_q2.pdf', - 'notes_meeting_52.pdf', - 'summary_fy24_draft.pdf', - ], - }; - }); - - const apiResponse = await step.do('some other step', async () => { - let resp = await fetch('https://api.cloudflare.com/client/v4/ips'); - return await resp.json(); - }); - - await step.sleep('wait on something', '1 minute'); - - await step.do( - 'make a call to write that could maybe, just might, fail', - // Define a retry strategy - { - retries: { - limit: 5, - delay: '5 second', - backoff: 'exponential', - }, - timeout: '15 minutes', - }, - async () => { - // Do stuff here, with access to the state from our previous steps - if (Math.random() > 0.5) { - throw new Error('API call to $STORAGE_SYSTEM failed'); - } - }, - ); - } -} - -export default { - async fetch(req: Request, env: Env): Promise { - let id = new URL(req.url).searchParams.get('instanceId'); - - // Get the status of an existing instance, if provided - if (id) { - let instance = await env.MY_WORKFLOW.get(id); - return Response.json({ - status: await instance.status(), - }); - } - - // Spawn a new instance and return the ID and status - let instance = await env.MY_WORKFLOW.create(); - return Response.json({ - id: instance.id, - details: await instance.status(), - }); - }, -}; -``` + Specifically, the code above: @@ -172,15 +93,13 @@ You can [bind to a Workflow](/workers/runtime-apis/bindings/#what-is-a-binding) 
To bind a Workflow to a Worker, you need to define a `[[workflows]]` binding in your `wrangler.toml` configuration: -```toml title="wrangler.toml" -[[workflows]] -# name of your workflow -name = "workflows-starter" -# binding name env.MY_WORKFLOW -binding = "MY_WORKFLOW" -# this is class that extends the Workflow class in src/index.ts -class_name = "MyWorkflow" -``` + You can then invoke the methods on this binding directly from your Worker script's `env` parameter. The `Workflow` type has methods for: @@ -190,36 +109,13 @@ You can then invoke the methods on this binding directly from your Worker script For example, the following Worker will fetch the status of an existing Workflow instance by ID (if supplied), else it will create a new Workflow instance and return its ID: -```ts title="src/index.ts" -// Import the Workflow definition -import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent} from 'cloudflare:workers'; - -interface Env { - // Matches the binding definition in your wrangler.toml - MY_WORKFLOW: Workflow; -} - -export default { - async fetch(req: Request, env: Env): Promise { - let id = new URL(req.url).searchParams.get('instanceId'); - - // Get the status of an existing instance, if provided - if (id) { - let instance = await env.MY_WORKFLOW.get(id); - return Response.json({ - status: await instance.status(), - }); - } - - // Spawn a new instance and return the ID and status - let instance = await env.MY_WORKFLOW.create(); - return Response.json({ - id: instance.id, - details: await instance.status(), - }); - }, -}; -``` + Refer to the [triggering Workflows](/workflows/build/trigger-workflows/) documentation for how to trigger a Workflow from other Workers' handler functions. 
diff --git a/src/content/docs/workflows/get-started/guide.mdx b/src/content/docs/workflows/get-started/guide.mdx
index 6aafe1a1ba70d82..791469ed2484ec9 100644
--- a/src/content/docs/workflows/get-started/guide.mdx
+++ b/src/content/docs/workflows/get-started/guide.mdx
@@ -7,7 +7,7 @@ sidebar:
 
 ---
 
-import { Render, PackageManagers } from "~/components"
+import { GitHubCode, Render, PackageManagers } from "~/components"
 
 :::note
 
@@ -45,70 +45,12 @@ This will create a new folder called `workflows-starter`.
 
 Open the `src/index.ts` file in your text editor. This file contains the following code, which is the most basic instance of a Workflow definition:
 
-```ts title="src/index.ts"
-import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';
-
-type Env = {
-	// Add your bindings here, e.g. Workers KV, D1, Workers AI, etc.
-	MY_WORKFLOW: Workflow;
-};
-
-// User-defined params passed to your workflow
-type Params = {
-	email: string;
-	metadata: Record<string, string>;
-};
-
-export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
-	async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
-		// Can access bindings on `this.env`
-		// Can access params on `event.params`
-
-		const files = await step.do('my first step', async () => {
-			// Fetch a list of files from $SOME_SERVICE
-			return {
-				inputParams: event,
-				files: [
-					'doc_7392_rev3.pdf',
-					'report_x29_final.pdf',
-					'memo_2024_05_12.pdf',
-					'file_089_update.pdf',
-					'proj_alpha_v2.pdf',
-					'data_analysis_q2.pdf',
-					'notes_meeting_52.pdf',
-					'summary_fy24_draft.pdf',
-				],
-			};
-		});
-
-		const apiResponse = await step.do('some other step', async () => {
-			let resp = await fetch('https://api.cloudflare.com/client/v4/ips');
-			return await resp.json();
-		});
-
-		await step.sleep('wait on something', '1 minute');
-
-		await step.do(
-			'make a call to write that could maybe, just might, fail',
-			// Define a retry strategy
-			{
-				retries: {
-					limit: 5,
-					delay: '5 second',
-					backoff: 'exponential',
-				},
-				timeout: '15 minutes',
-			},
-			async () => {
-				// Do stuff here, with access to the state from our previous steps
-				if (Math.random() > 0.5) {
-					throw new Error('API call to $STORAGE_SYSTEM failed');
-				}
-			},
-		);
-	}
-}
-```
+
 
 A Workflow definition:
 
@@ -132,27 +74,13 @@ A `step` is what makes a Workflow powerful, as you can encapsulate errors and pe
 
 At its most basic, a step looks like this:
 
-```ts title="src/index.ts"
-// Import the Workflow definition
-import { WorkflowEntrypoint, WorkflowEvent, WorkflowStep } from "cloudflare:workers"
-
-type Params = {}
-
-// Create your own class that implements a Workflow
-export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
-	// Define a run() method
-	async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
-		// Define one or more steps that optionally return state.
-		let state = step.do("my first step", async () => {
-
-		})
-
-		step.do("my second step", async () => {
-
-		})
-	}
-}
-```
+
 
 Each call to `step.do` accepts three arguments:
 
@@ -185,20 +113,13 @@ Before you can deploy a Workflow, you need to configure it.
 
 Open the `wrangler.toml` file at the root of your `workflows-starter` folder, which contains the following `[[workflows]]` configuration:
 
-```toml title="wrangler.toml"
-#:schema node_modules/wrangler/config-schema.json
-name = "workflows-starter"
-main = "src/index.ts"
-compatibility_date = "2024-10-22"
-
-[[workflows]]
-# name of your workflow
-name = "workflows-starter"
-# binding name env.MY_WORKFLOW
-binding = "MY_WORKFLOW"
-# this is class that extends the Workflow class in src/index.ts
-class_name = "MyWorkflow"
-```
+
 
 :::note
 
@@ -219,35 +140,13 @@ We have a very basic Workflow definition, but now need to provide a way to call
 
 Return to the `src/index.ts` file we created in the previous step and add a `fetch` handler that _binds_ to our Workflow. This binding allows us to create new Workflow instances, fetch the status of an existing Workflow, pause and/or terminate a Workflow.
-```ts title="src/index.ts"
-// This is in the same file as your Workflow definition
-
-export default {
-	async fetch(req: Request, env: Env): Promise<Response> {
-		let url = new URL(req.url);
-
-		if (url.pathname.startsWith('/favicon')) {
-			return Response.json({}, { status: 404 });
-		}
-
-		// Get the status of an existing instance, if provided
-		let id = url.searchParams.get('instanceId');
-		if (id) {
-			let instance = await env.MY_WORKFLOW.get(id);
-			return Response.json({
-				status: await instance.status(),
-			});
-		}
-
-		// Spawn a new instance and return the ID and status
-		let instance = await env.MY_WORKFLOW.create();
-		return Response.json({
-			id: instance.id,
-			details: await instance.status(),
-		});
-	},
-};
-```
+
 
 The code here exposes a HTTP endpoint that generates a random ID and runs the Workflow, returning the ID and the Workflow status. It also accepts an optional `instanceId` query parameter that retrieves the status of a Workflow instance by its ID.
 
@@ -267,96 +166,12 @@ This is the full contents of the `src/index.ts` file pulled down when you used t
 
 Before you deploy, you can review the full Workflows code and the `fetch` handler that will allow you to trigger your Workflow over HTTP:
 
-```ts title="src/index.ts"
-import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';
-
-type Env = {
-	// Add your bindings here, e.g. Workers KV, D1, Workers AI, etc.
-	MY_WORKFLOW: Workflow;
-};
-
-// User-defined params passed to your workflow
-type Params = {
-	email: string;
-	metadata: Record<string, string>;
-};
-
-export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
-	async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
-		// Can access bindings on `this.env`
-		// Can access params on `event.params`
-
-		const files = await step.do('my first step', async () => {
-			// Fetch a list of files from $SOME_SERVICE
-			return {
-				inputParams: event,
-				files: [
-					'doc_7392_rev3.pdf',
-					'report_x29_final.pdf',
-					'memo_2024_05_12.pdf',
-					'file_089_update.pdf',
-					'proj_alpha_v2.pdf',
-					'data_analysis_q2.pdf',
-					'notes_meeting_52.pdf',
-					'summary_fy24_draft.pdf',
-				],
-			};
-		});
-
-		const apiResponse = await step.do('some other step', async () => {
-			let resp = await fetch('https://api.cloudflare.com/client/v4/ips');
-			return await resp.json();
-		});
-
-		await step.sleep('wait on something', '1 minute');
-
-		await step.do(
-			'make a call to write that could maybe, just might, fail',
-			// Define a retry strategy
-			{
-				retries: {
-					limit: 5,
-					delay: '5 second',
-					backoff: 'exponential',
-				},
-				timeout: '15 minutes',
-			},
-			async () => {
-				// Do stuff here, with access to the state from our previous steps
-				if (Math.random() > 0.5) {
-					throw new Error('API call to $STORAGE_SYSTEM failed');
-				}
-			},
-		);
-	}
-}
-
-export default {
-	async fetch(req: Request, env: Env): Promise<Response> {
-		let url = new URL(req.url);
-
-		if (url.pathname.startsWith('/favicon')) {
-			return Response.json({}, { status: 404 });
-		}
-
-		// Get the status of an existing instance, if provided
-		let id = url.searchParams.get('instanceId');
-		if (id) {
-			let instance = await env.MY_WORKFLOW.get(id);
-			return Response.json({
-				status: await instance.status(),
-			});
-		}
-
-		// Spawn a new instance and return the ID and status
-		let instance = await env.MY_WORKFLOW.create();
-		return Response.json({
-			id: instance.id,
-			details: await instance.status(),
-		});
-	},
-};
-```
+
 
 ## 5. Deploy your Workflow
 
@@ -507,6 +322,8 @@ curl -s https://workflows-starter.YOUR_WORKERS_SUBDOMAIN.workers.dev/
 
 {"id":"16ac31e5-db9d-48ae-a58f-95b95422d0fa","details":{"status":"queued","error":null,"output":null}}
 ```
 
+{/*
+
 ## 7. (Optional) Clean up
 
 You can optionally delete the Workflow, which will prevent the creation of any (all) instances by using `wrangler`:
 
@@ -517,6 +334,8 @@ npx wrangler workflows delete my-workflow
 
 Re-deploying the Workers script containing your Workflow code will re-create the Workflow.
 
+*/}
+
 ---
 
 ## Next steps
diff --git a/src/content/partials/magic-transit/graphql/query-magic-transit-bandwidth-graphql.mdx b/src/content/partials/magic-transit/graphql/query-magic-transit-bandwidth-graphql.mdx
index 3c761c0da5b04b5..3ce16e0a44a715b 100644
--- a/src/content/partials/magic-transit/graphql/query-magic-transit-bandwidth-graphql.mdx
+++ b/src/content/partials/magic-transit/graphql/query-magic-transit-bandwidth-graphql.mdx
@@ -7,7 +7,7 @@ import { Markdown } from "~/components";
 
 In this example, you are going to use the GraphQL Analytics API to query {props.productName} ingress tunnel traffic over a specified time period.
 
-The following API call will request {props.productName} ingress tunnel traffic over a one-hour period and output the requested fields. Be sure to replace `` with your account ID, ``, ``, and `` with your API credentials, and adjust the `datetime_geq` and `datetime_leq` values as needed.
+The following API call will request {props.productName} ingress tunnel traffic over a one-hour period and output the requested fields. Be sure to replace `` with your account ID, ``, ``[^1] (legacy) or ``[^2] (preferred method) with your API credentials, and adjust the `datetime_geq` and `datetime_leq` values as needed.
 
 The following example queries for ingress traffic. To query for egress, change the value in the direction filter.
@@ -38,7 +38,7 @@ PAYLOAD='{ "query":
   }
 }",
 "variables": {
-   "accountTag": "",
+   "accountTag": "",
    "direction": "ingress",
    "datetimeStart": "2022-05-04T11:00:00.000Z",
    "datetimeEnd": "2022-05-04T12:00:00.000Z"
@@ -102,3 +102,6 @@ curl https://api.cloudflare.com/client/v4/graphql \
 
 #=> "errors": null
 #=> }
 ```
+
+[^1]: Refer to [Authenticate with a Cloudflare API key](/analytics/graphql-api/getting-started/authentication/api-key-auth/) for more information.
+[^2]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions.
\ No newline at end of file
diff --git a/src/content/partials/magic-transit/graphql/query-magic-transit-health-checks.mdx b/src/content/partials/magic-transit/graphql/query-magic-transit-health-checks.mdx
index cb1ba3ad9565bd2..258b460cd143e8f 100644
--- a/src/content/partials/magic-transit/graphql/query-magic-transit-health-checks.mdx
+++ b/src/content/partials/magic-transit/graphql/query-magic-transit-health-checks.mdx
@@ -7,7 +7,7 @@ import { Markdown } from "~/components";
 
 In this example, you are going to use the GraphQL Analytics API to query {props.productName} health check results which are aggregated from individual health checks carried out by Cloudflare servers to Generic Routing Encapsulation (GRE) tunnels you have set up to work with {props.productName} during the onboarding process. You can query up to one week of data for dates up to three months ago.
 
-The following API call will request a particular account's tunnel health checks over a one day period for a particular Cloudflare data center, and outputs the requested fields. Be sure to replace ``, ``, and `` with your API credentials, and adjust the `datetimeStart`, `datetimeEnd` variables as needed.
+The following API call will request a particular account's tunnel health checks over a one day period for a particular Cloudflare data center, and outputs the requested fields. Be sure to replace `` and ``[^1] with your API credentials, and adjust the `datetimeStart`, `datetimeEnd` variables as needed.
 
 It will return the tunnel health check results by Cloudflare data center. The result for each data center is aggregated from the healthchecks conducted on individual servers. The tunnel state field in the value represents the state of the tunnel. These states are used by {props.productName} for routing. The value `0` for the tunnel state represents it being down, the value `0.5` being degraded and the value `1` as healthy.
 
@@ -37,7 +37,7 @@ echo '{ "query":
   }
 }",
 "variables": {
-   "accountTag": "",
+   "accountTag": "",
    "datetimeStart": "2022-08-04T00:00:00.000Z",
    "datetimeEnd": "2022-08-04T01:00:00.000Z"
  }
@@ -96,3 +96,5 @@ https://api.cloudflare.com/client/v4/graphql \
 
 #=> "errors": null
 #=> }
 ```
+
+[^1]: Refer to [Configure an Analytics API token](/analytics/graphql-api/getting-started/authentication/api-token-auth/) for more information on configuration and permissions.
\ No newline at end of file
diff --git a/src/content/partials/ruleset-engine/raw-fields-definition-with-link.mdx b/src/content/partials/ruleset-engine/raw-fields-definition-with-link.mdx
new file mode 100644
index 000000000000000..2d876447ed90489
--- /dev/null
+++ b/src/content/partials/ruleset-engine/raw-fields-definition-with-link.mdx
@@ -0,0 +1,8 @@
+---
+params:
+  - fieldLink
+---
+
+import { Markdown } from "~/components";
+
+This is the raw field version of the field. Raw fields, prefixed with `raw.`, preserve original request values for later evaluations. These fields are immutable during the entire request evaluation workflow, and they are not affected by the actions of previously matched rules.
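Reviewer note: the footnotes added in the hunks above distinguish two credential styles for the GraphQL Analytics API. A quick sketch of the difference, written for this review and assuming the standard Cloudflare API headers (`X-Auth-Email`/`X-Auth-Key` for the legacy global key, `Authorization: Bearer` for the preferred scoped token):

```typescript
// Illustrative helper (not part of the docs' shell examples) contrasting
// the legacy API-key credential pair with the preferred API token.
type LegacyKey = { email: string; apiKey: string };
type ApiToken = { apiToken: string };

function authHeaders(creds: LegacyKey | ApiToken): Record<string, string> {
	if ('apiToken' in creds) {
		// Preferred: a scoped API token sent as a bearer credential.
		return { Authorization: `Bearer ${creds.apiToken}` };
	}
	// Legacy: the global API key, paired with the account email.
	return { 'X-Auth-Email': creds.email, 'X-Auth-Key': creds.apiKey };
}

console.log(authHeaders({ apiToken: 'example-token' }));
// { Authorization: 'Bearer example-token' }
```

The token path is preferred because it can be scoped to Analytics read permissions, which is exactly what the linked token-configuration page covers.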
diff --git a/src/content/partials/version-management/product-limitations.mdx b/src/content/partials/version-management/product-limitations.mdx
index 7e438a84a3284b3..5e20b8d6629fe22 100644
--- a/src/content/partials/version-management/product-limitations.mdx
+++ b/src/content/partials/version-management/product-limitations.mdx
@@ -22,10 +22,8 @@ Version Management does not currently support or have limited support for the fo
-- [Cache](/workers/runtime-apis/cache/) configurations are versioned, but cache keys are not.
-- Caching a new URL on staging would cache it for production as well.
-- Purging cache on staging would purge it on production too.
-- Promoting a new version to production would wipe all exiting cache.
+- [Cache Reserve](/cache/advanced-configuration/cache-reserve/) is intended for production use only.
+- [Tiered Cache](/cache/how-to/tiered-cache/) does not support versioning.
diff --git a/src/pages/workers/ai.astro b/src/pages/workers/ai.astro
index 6e0775e893fdbb8..414dcb49d5b17b3 100644
--- a/src/pages/workers/ai.astro
+++ b/src/pages/workers/ai.astro
@@ -30,14 +30,13 @@ import CursorLight from "~/assets/images/workers/ai/cursor-light.png";

 			Cursor is an experimental AI assistant, trained to answer questions
-			about Cloudflare's Developer Platform and powered by
+			about Cloudflare and powered by
 			Cloudflare Workers, Workers AI, Vectorize, and AI Gateway.
-			Cursor is here to help answer your Cloudflare Workers and Developer
-			Platform questions, so ask away!
+			Cursor is here to help answer your Cloudflare questions, so ask away!
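Reviewer note on the `GitHubCode.astro` hunk at the top of this PR: swapping `!startTag` for `startTag === -1` fixes a classic `Array.prototype.findIndex` pitfall. `findIndex` returns `-1` (truthy) when nothing matches and `0` (falsy) when the match is the first line, so a truthiness check gets both cases wrong. A minimal demonstration (the sample lines are made up for illustration):

```typescript
// findIndex returns -1 on no match and a zero-based index on a match.
const lines = ['<!-- start -->', 'const x = 1;', '<!-- end -->'];

const found = lines.findIndex((x) => x.includes('start')); // 0: a valid match
const missing = lines.findIndex((x) => x.includes('absent')); // -1: no match

console.log(!found); // true — the old `!startTag` check would throw on a match at index 0
console.log(!missing); // false — and would let the -1 failure value slip through
console.log(missing === -1); // true — the corrected "not found" test
```

Checking `=== -1` is the only reliable "not found" test for `findIndex` and `indexOf` alike.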