diff --git a/src/content/changelog/ai-crawl-control/2025-12-10-pay-per-crawl-enhancements.mdx b/src/content/changelog/ai-crawl-control/2025-12-10-pay-per-crawl-enhancements.mdx index 3f27e57aef2ecf4..4cfb33c51a13ecd 100644 --- a/src/content/changelog/ai-crawl-control/2025-12-10-pay-per-crawl-enhancements.mdx +++ b/src/content/changelog/ai-crawl-control/2025-12-10-pay-per-crawl-enhancements.mdx @@ -20,7 +20,7 @@ Payment headers (`crawler-exact-price` or `crawler-max-price`) must now be inclu ### New `crawler-error` header -Pay Per Crawl error responses now include a new `crawler-error` header with 11 specific [error codes](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/crawl-pages/#error-code-reference) for programmatic handling. Error response bodies remain unchanged for compatibility. These codes enable robust error handling, automated retry logic, and accurate spending tracking. +Pay Per Crawl error responses now include a new `crawler-error` header with 11 specific [error codes](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/error-codes/) for programmatic handling. Error response bodies remain unchanged for compatibility. These codes enable robust error handling, automated retry logic, and accurate spending tracking. ## For site owners diff --git a/src/content/changelog/queues/2025-08-19-event-subscriptions.mdx b/src/content/changelog/queues/2025-08-19-event-subscriptions.mdx index db4db43ffc78661..fcbfb52680d58b5 100644 --- a/src/content/changelog/queues/2025-08-19-event-subscriptions.mdx +++ b/src/content/changelog/queues/2025-08-19-event-subscriptions.mdx @@ -12,7 +12,7 @@ You can now subscribe to events from other Cloudflare services (for example, [Wo Event subscriptions allow you to receive messages when events occur across your Cloudflare account. 
Cloudflare products can publish structured events to a queue, which you can then consume with [Workers](/workers/) or [pull via HTTP from anywhere](/queues/configuration/pull-consumers/). -To create a subscription, use the dashboard or [Wrangler](/workers/wrangler/commands/#subscription-create): +To create a subscription, use the dashboard or [Wrangler](/workers/wrangler/commands/queues/#subscription-create): ```bash npx wrangler queues subscription create my-queue --source r2 --events bucket.created diff --git a/src/content/changelog/workers/2025-06-09-workers-integrations-changes.mdx b/src/content/changelog/workers/2025-06-09-workers-integrations-changes.mdx index 88c7d9d0a56441a..4f72936643bde15 100644 --- a/src/content/changelog/workers/2025-06-09-workers-integrations-changes.mdx +++ b/src/content/changelog/workers/2025-06-09-workers-integrations-changes.mdx @@ -6,7 +6,7 @@ products: date: 2025-06-09 --- -Workers native integrations were [originally launched in May 2023](https://blog.cloudflare.com/announcing-database-integrations/) to connect to popular database and observability providers with your Worker in just a few clicks. We are changing how developers connect Workers to these external services. The **Integrations** tab in the dashboard has been removed in favor of a more direct, command-line-based approach using [Wrangler secrets](/workers/wrangler/commands/#secret). +Workers native integrations were [originally launched in May 2023](https://blog.cloudflare.com/announcing-database-integrations/) to connect to popular database and observability providers with your Worker in just a few clicks. We are changing how developers connect Workers to these external services. The **Integrations** tab in the dashboard has been removed in favor of a more direct, command-line-based approach using [Wrangler secrets](/workers/wrangler/commands/general/#secret). 
## What's changed diff --git a/src/content/changelog/workers/2025-07-23-workers-preview-urls.mdx b/src/content/changelog/workers/2025-07-23-workers-preview-urls.mdx index 34ef32a063ec566..cdfb5847ef6dfaf 100644 --- a/src/content/changelog/workers/2025-07-23-workers-preview-urls.mdx +++ b/src/content/changelog/workers/2025-07-23-workers-preview-urls.mdx @@ -31,7 +31,7 @@ When you create a pull request: ## Custom alias name -You can also assign a custom preview alias using the [Wrangler CLI](/workers/wrangler/), by passing the `--preview-alias` flag when [uploading a version](/workers/wrangler/commands/#versions-upload) of your Worker: +You can also assign a custom preview alias using the [Wrangler CLI](/workers/wrangler/), by passing the `--preview-alias` flag when [uploading a version](/workers/wrangler/commands/general/#versions-upload) of your Worker: ```bash wrangler versions upload --preview-alias staging diff --git a/src/content/changelog/workers/2025-09-11-increased-version-rollback-limit.mdx b/src/content/changelog/workers/2025-09-11-increased-version-rollback-limit.mdx index 71c52fcc523d5bf..ad92d184dcdbf29 100644 --- a/src/content/changelog/workers/2025-09-11-increased-version-rollback-limit.mdx +++ b/src/content/changelog/workers/2025-09-11-increased-version-rollback-limit.mdx @@ -13,6 +13,6 @@ This allows you to: * Split traffic using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/) between your latest code and any of the 100 most recent versions. -You can do this through the Cloudflare dashboard or with [Wrangler's rollback command](/workers/wrangler/commands/#rollback) +You can do this through the Cloudflare dashboard or with [Wrangler's rollback command](/workers/wrangler/commands/general/#rollback) Learn more about [versioned deployments](/workers/configuration/versions-and-deployments/) and [rollbacks](/workers/configuration/versions-and-deployments/rollbacks/). 
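Commentary on the queues event-subscriptions entry above: it shows creating a subscription with Wrangler, and mentions consuming the resulting messages with a Worker. A minimal sketch of such a consumer follows; the event body shape and binding names here are assumptions for illustration, not the documented schema.

```typescript
// Hypothetical event shape; the real schema may differ.
type CloudflareEvent = {
	source: string; // e.g. "r2"
	type: string; // e.g. "bucket.created"
	payload: { name?: string };
};

// Pure helper so the routing logic can be exercised in isolation.
export function describeEvent(event: CloudflareEvent): string {
	if (event.source === "r2" && event.type === "bucket.created") {
		return `R2 bucket created: ${event.payload.name ?? "unknown"}`;
	}
	return `Unhandled event: ${event.source}/${event.type}`;
}

// Workers queue consumer entry point (runs on Cloudflare, not locally).
export default {
	async queue(batch: { messages: { body: CloudflareEvent }[] }): Promise<void> {
		for (const message of batch.messages) {
			console.log(describeEvent(message.body));
		}
	},
};
```

Keeping the per-message logic in a plain function, separate from the `queue()` handler, makes it testable without a Workers runtime.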
diff --git a/src/content/changelog/workers/2025-11-21-wrangler-deploy-remote-config-management.mdx b/src/content/changelog/workers/2025-11-21-wrangler-deploy-remote-config-management.mdx index 253a446b009f215..4e5276bec3fe7ba 100644 --- a/src/content/changelog/workers/2025-11-21-wrangler-deploy-remote-config-management.mdx +++ b/src/content/changelog/workers/2025-11-21-wrangler-deploy-remote-config-management.mdx @@ -9,7 +9,7 @@ date: 2025-11-21 import { Example } from "~/components"; Until now, if a Worker had been previously deployed via the [Cloudflare Dashboard](https://dash.cloudflare.com), a subsequent deployment done via the Cloudflare Workers CLI, [**Wrangler**](/workers/wrangler/) -(through the [`deploy` command](/workers/wrangler/commands/#deploy)), would allow the user to override the Worker's dashboard settings without providing details on +(through the [`deploy` command](/workers/wrangler/commands/general/#deploy)), would allow the user to override the Worker's dashboard settings without providing details on what dashboard settings would be lost. Now instead, `wrangler deploy` presents a helpful representation of the differences between the [local configuration](/workers/wrangler/configuration/) diff --git a/src/content/changelog/workers/2025-12-18-wrangler-auth-token.mdx b/src/content/changelog/workers/2025-12-18-wrangler-auth-token.mdx index 0551d19f31c166c..fab8b0ce259c30a 100644 --- a/src/content/changelog/workers/2025-12-18-wrangler-auth-token.mdx +++ b/src/content/changelog/workers/2025-12-18-wrangler-auth-token.mdx @@ -6,7 +6,7 @@ products: date: 2025-12-18 --- -Wrangler now includes a new [`wrangler auth token`](/workers/wrangler/commands/#auth-token) command that retrieves your current authentication token or credentials for use with other tools and scripts. 
+Wrangler now includes a new [`wrangler auth token`](/workers/wrangler/commands/general/#auth-token) command that retrieves your current authentication token or credentials for use with other tools and scripts. ```sh wrangler auth token diff --git a/src/content/changelog/workers/2026-01-09-wrangler-tab-completion.mdx b/src/content/changelog/workers/2026-01-09-wrangler-tab-completion.mdx index b10ba944c4e9c5b..e20749eec2f7155 100644 --- a/src/content/changelog/workers/2026-01-09-wrangler-tab-completion.mdx +++ b/src/content/changelog/workers/2026-01-09-wrangler-tab-completion.mdx @@ -37,4 +37,4 @@ wrangler kv # shows subcommands: namespace, key, bulk Tab completions are dynamically generated from Wrangler's command registry, so they stay up-to-date as new commands and options are added. This feature is powered by [`@bomb.sh/tab`](https://github.com/bombshell-dev/tab/). -See the [`wrangler complete` documentation](/workers/wrangler/commands/#complete) for more details. +See the [`wrangler complete` documentation](/workers/wrangler/commands/general/#complete) for more details. diff --git a/src/content/changelog/workers/2026-01-11-wrangler-types-check.mdx b/src/content/changelog/workers/2026-01-11-wrangler-types-check.mdx index f4b9216a38f58af..071de99742d1429 100644 --- a/src/content/changelog/workers/2026-01-11-wrangler-types-check.mdx +++ b/src/content/changelog/workers/2026-01-11-wrangler-types-check.mdx @@ -16,4 +16,4 @@ npx wrangler types --check If your types are up to date, the command will succeed silently. If they are out of date, you'll see an error message indicating which files need to be regenerated. -For more information, see the [Wrangler types documentation](/workers/wrangler/commands/#types). +For more information, see the [Wrangler types documentation](/workers/wrangler/commands/general/#types). 
diff --git a/src/content/changelog/workers/2026-01-13-wrangler-types-multi-environment.mdx b/src/content/changelog/workers/2026-01-13-wrangler-types-multi-environment.mdx index cdd650c6050f027..9c6c716442099b5 100644 --- a/src/content/changelog/workers/2026-01-13-wrangler-types-multi-environment.mdx +++ b/src/content/changelog/workers/2026-01-13-wrangler-types-multi-environment.mdx @@ -20,4 +20,4 @@ If you want the previous behavior of generating types for only a specific enviro wrangler types --env production ``` -Learn more about [generating types for your Worker](/workers/wrangler/commands/#types) in the Wrangler documentation. +Learn more about [generating types for your Worker](/workers/wrangler/commands/general/#types) in the Wrangler documentation. diff --git a/src/content/changelog/workers/2026-02-12-quick-editor-dev-tools-deprecation.mdx b/src/content/changelog/workers/2026-02-12-quick-editor-dev-tools-deprecation.mdx index 9b6544b208d7ab6..6a6702be9c1c1d6 100644 --- a/src/content/changelog/workers/2026-02-12-quick-editor-dev-tools-deprecation.mdx +++ b/src/content/changelog/workers/2026-02-12-quick-editor-dev-tools-deprecation.mdx @@ -12,4 +12,4 @@ This aligns our logging with `wrangler tail` and gives us the opportunity to foc We have made improvements to this logging viewer based on your feedback such that you can log object and array types, and easily clear the list of logs. This does not include class instances. Limitations are documented in the [Workers Playground docs](/workers/playground/). -If you do need to develop your Worker with a remote inspector, you can still do this using Wrangler locally. Cloning a project from your quick editor to your computer for local development can be done with the `wrangler init --from-dash` command. For more information, refer to [Wrangler commands](/workers/wrangler/commands/#init). +If you do need to develop your Worker with a remote inspector, you can still do this using Wrangler locally. 
Cloning a project from your quick editor to your computer for local development can be done with the `wrangler init --from-dash` command. For more information, refer to [Wrangler commands](/workers/wrangler/commands/general/#init). diff --git a/src/content/changelog/workers/2026-02-25-wrangler-autoconfig-ga.mdx b/src/content/changelog/workers/2026-02-25-wrangler-autoconfig-ga.mdx index 22692fd971883c3..1695050f48c3236 100644 --- a/src/content/changelog/workers/2026-02-25-wrangler-autoconfig-ga.mdx +++ b/src/content/changelog/workers/2026-02-25-wrangler-autoconfig-ga.mdx @@ -8,7 +8,7 @@ date: 2026-02-25 You can now deploy any existing project to Cloudflare Workers — even without a Wrangler configuration file — and `wrangler deploy` will _just work_. -Starting with Wrangler **4.68.0**, running [`wrangler deploy`](/workers/wrangler/commands/#deploy) [automatically configures your project](/workers/framework-guides/automatic-configuration/) by detecting your framework, installing required adapters, and deploying it to Cloudflare Workers. +Starting with Wrangler **4.68.0**, running [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) [automatically configures your project](/workers/framework-guides/automatic-configuration/) by detecting your framework, installing required adapters, and deploying it to Cloudflare Workers. ### Using Wrangler locally @@ -24,7 +24,7 @@ When you run `wrangler deploy` in a project without a configuration file, Wrangl 4. Generates a `wrangler.jsonc` [configuration file](/workers/wrangler/configuration/) 5. Deploys your project to Cloudflare Workers -You can also use [`wrangler setup`](/workers/wrangler/commands/#setup) to configure without deploying, or pass [`--yes`](/workers/wrangler/commands/#deploy) to skip prompts. +You can also use [`wrangler setup`](/workers/wrangler/commands/general/#setup) to configure without deploying, or pass [`--yes`](/workers/wrangler/commands/general/#deploy) to skip prompts. 
### Using the Cloudflare dashboard diff --git a/src/content/docs/agents/api-reference/mcp-agent-api.mdx b/src/content/docs/agents/api-reference/mcp-agent-api.mdx index 6242a1ad4925dc4..2613aa848983fc5 100644 --- a/src/content/docs/agents/api-reference/mcp-agent-api.mdx +++ b/src/content/docs/agents/api-reference/mcp-agent-api.mdx @@ -149,7 +149,7 @@ Available jurisdictions include `"eu"` (European Union) and `"fedramp"` (FedRAMP ## Hibernation support -`McpAgent` instances automatically support [WebSockets Hibernation](/durable-objects/best-practices/websockets/#websocket-hibernation-api), allowing stateful MCP servers to sleep during inactive periods while preserving their state. This means your agents only consume compute resources when actively processing requests, optimizing costs while maintaining the full context and conversation history. +`McpAgent` instances automatically support [WebSockets Hibernation](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), allowing stateful MCP servers to sleep during inactive periods while preserving their state. This means your agents only consume compute resources when actively processing requests, optimizing costs while maintaining the full context and conversation history. Hibernation is enabled by default and requires no additional configuration. diff --git a/src/content/docs/agents/getting-started/add-to-existing-project.mdx b/src/content/docs/agents/getting-started/add-to-existing-project.mdx index 00d721bbcc52d5f..aa31e364251f994 100644 --- a/src/content/docs/agents/getting-started/add-to-existing-project.mdx +++ b/src/content/docs/agents/getting-started/add-to-existing-project.mdx @@ -214,7 +214,7 @@ Configure assets in the Wrangler configuration file: ## 6. Generate TypeScript types -Do not hand-write your `Env` interface. Run [`wrangler types`](/workers/wrangler/commands/#types) to generate a type definition file that matches your Wrangler configuration. 
This catches mismatches between your config and code at compile time instead of at deploy time. +Do not hand-write your `Env` interface. Run [`wrangler types`](/workers/wrangler/commands/general/#types) to generate a type definition file that matches your Wrangler configuration. This catches mismatches between your config and code at compile time instead of at deploy time. Re-run `wrangler types` whenever you add or rename a binding. diff --git a/src/content/docs/agents/getting-started/testing-your-agent.mdx b/src/content/docs/agents/getting-started/testing-your-agent.mdx index 39d1a7928e9fdae..e8f0e7fad05b8de 100644 --- a/src/content/docs/agents/getting-started/testing-your-agent.mdx +++ b/src/content/docs/agents/getting-started/testing-your-agent.mdx @@ -149,4 +149,4 @@ Your worker has access to the following bindings: This spins up a local development server that runs the same runtime as Cloudflare Workers, and allows you to iterate on your Agent's code and test it locally without deploying it. -Visit the [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) docs to review the CLI flags and configuration options. +Visit the [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/general/#dev) docs to review the CLI flags and configuration options. 
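Commentary on the `wrangler types` entry above: a sketch of why generated `Env` types catch mismatches at compile time. The stand-in type and the `CACHE_KV` binding name below are hypothetical, invented only to illustrate the point.

```typescript
// Stand-in for the KV namespace type that generated Worker types provide.
interface KVNamespaceLike {
	get(key: string): Promise<string | null>;
}

// A shape like the one `wrangler types` would generate from the Wrangler
// config; the CACHE_KV binding name is made up for this sketch.
interface Env {
	CACHE_KV: KVNamespaceLike;
}

export async function readCached(env: Env, key: string): Promise<string | null> {
	// If the binding were renamed in the Wrangler config, regenerated types
	// would turn this property access into a compile-time error instead of a
	// runtime failure after deploy.
	return env.CACHE_KV.get(key);
}
```

With a hand-written `Env`, the same rename would type-check fine and only fail once the Worker runs.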
diff --git a/src/content/docs/ai-crawl-control/configuration/ai-crawl-control-with-waf.mdx b/src/content/docs/ai-crawl-control/configuration/ai-crawl-control-with-waf.mdx index e17852eb7133221..ef2dba03389440e 100644 --- a/src/content/docs/ai-crawl-control/configuration/ai-crawl-control-with-waf.mdx +++ b/src/content/docs/ai-crawl-control/configuration/ai-crawl-control-with-waf.mdx @@ -44,7 +44,7 @@ You may have both of the following features enabled: - [WAF custom rule to block traffic from specific countries](/waf/custom-rules/use-cases/block-traffic-from-specific-countries/) - AI Crawl Control's [pay per crawl](/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/) to charge AI crawlers when they request access to your content -Since WAF custom rules are enforced before pay per crawl, traffic (including AI crawlers) from your blocked countries will continue to be blocked, even if they provide the [required headers](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/crawl-pages/#1-include-required-headers) for pay per crawl. +Since WAF custom rules are enforced before pay per crawl, traffic (including AI crawlers) from your blocked countries will continue to be blocked, even if they provide the [required headers](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/crawl-pages/#1-identify-payment-requirements) for pay per crawl. 
### Allowed search engine bots via WAF custom rule vs pay per crawl diff --git a/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/connect-to-stripe.mdx b/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/connect-to-stripe.mdx index 3657f3c657ed4f6..e7380f2d8093fc4 100644 --- a/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/connect-to-stripe.mdx +++ b/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/connect-to-stripe.mdx @@ -43,6 +43,6 @@ Cloudflare is not responsible for configuring spending limits. Ensure you have c ## Billing -Charges are recorded upon successful delivery of content that is requested with valid [crawler price headers](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/crawl-pages/#22-include-payment-headers). +Charges are recorded upon successful delivery of content that is requested with valid [crawler price headers](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-ai-owner/crawl-pages/#21-include-payment-headers). Invoices are created and managed via Stripe. Crawlers are responsible for setting and enforcing their own spending limits. 
\ No newline at end of file diff --git a/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/monitor-activity.mdx b/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/monitor-activity.mdx index 29a8c2d19f9005e..c20b665f29a76db 100644 --- a/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/monitor-activity.mdx +++ b/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/monitor-activity.mdx @@ -41,7 +41,7 @@ The metrics help you understand: - Request patterns and trends - Robots.txt violations -For detailed information about available metrics, refer to [View AI Crawl Control metrics](/ai-crawl-control/features/analyze-ai-traffic/#view-ai-crawl-control-metrics). +For detailed information about available metrics, refer to [View AI Crawl Control metrics](/ai-crawl-control/features/analyze-ai-traffic/#view-the-metrics-tab). :::note[Balance visibility] Your accrued earnings balance is not currently visible in the dashboard. You can request balance updates from your Cloudflare team. diff --git a/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/set-a-pay-per-crawl-price.mdx b/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/set-a-pay-per-crawl-price.mdx index a060311c3919bf2..613f775e14b93ff 100644 --- a/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/set-a-pay-per-crawl-price.mdx +++ b/src/content/docs/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/set-a-pay-per-crawl-price.mdx @@ -23,7 +23,8 @@ click E "/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owne Once your domain's visibility is set to **Visible** in Account Settings, you can set a pay per crawl price and enable pay per crawl for that domain. -{/* prettier-ignore */} +{/* prettier-ignore-start */} + 1. 
Go to **AI Crawl Control**. @@ -32,10 +33,12 @@ Once your domain's visibility is set to **Visible** in Account Settings, you can 2. Go to the **Settings** tab. 3. In the **Pay Per Crawl** card, select **Enable**. 4. Set your default per crawl price. This is the amount charged for each successful content retrieval (HTTP 200 response) by an AI crawler. - - (Optional) To set different prices for different content, select **Enable custom pricing**. Refer to [Advanced configuration](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/advanced-configuration/#custom-pricing) for details. + - (Optional) To set different prices for different content, select **Enable custom pricing**. Refer to [Advanced configuration](/ai-crawl-control/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/advanced-configuration/) for details. 5. Select **Save**. +{/* prettier-ignore-end */} + After enabling and setting a price, the domain's status in Account Settings will change to **Enabled**. :::note[Pricing considerations] diff --git a/src/content/docs/ai-gateway/integrations/aig-workers-ai-binding.mdx b/src/content/docs/ai-gateway/integrations/aig-workers-ai-binding.mdx index 3bb7af60fd0a777..262971fdd761188 100644 --- a/src/content/docs/ai-gateway/integrations/aig-workers-ai-binding.mdx +++ b/src/content/docs/ai-gateway/integrations/aig-workers-ai-binding.mdx @@ -108,7 +108,7 @@ Up to this point, you have created an AI binding for your Worker and configured ## 4. 
Develop locally with Wrangler -While in your project directory, test Workers AI locally by running [`wrangler dev`](/workers/wrangler/commands/#dev): +While in your project directory, test Workers AI locally by running [`wrangler dev`](/workers/wrangler/commands/general/#dev): ```bash npx wrangler dev diff --git a/src/content/docs/ai-gateway/tutorials/deploy-aig-worker.mdx b/src/content/docs/ai-gateway/tutorials/deploy-aig-worker.mdx index fb8edce8a72b3d9..b1f1ee84e9fbb3a 100644 --- a/src/content/docs/ai-gateway/tutorials/deploy-aig-worker.mdx +++ b/src/content/docs/ai-gateway/tutorials/deploy-aig-worker.mdx @@ -89,7 +89,7 @@ export default { }; ``` -To make this work, you need to use [`wrangler secret put`](/workers/wrangler/commands/#secret-put) to set your `OPENAI_API_KEY`. This will save the API key to your environment so your Worker can access it when deployed. This key is the API key you created earlier in the OpenAI dashboard: +To make this work, you need to use [`wrangler secret put`](/workers/wrangler/commands/general/#secret-put) to set your `OPENAI_API_KEY`. This will save the API key to your environment so your Worker can access it when deployed. This key is the API key you created earlier in the OpenAI dashboard: diff --git a/src/content/docs/browser-rendering/workers-bindings/browser-rendering-with-DO.mdx b/src/content/docs/browser-rendering/workers-bindings/browser-rendering-with-DO.mdx index 978696f94cfb71b..0e25f013b97f81c 100644 --- a/src/content/docs/browser-rendering/workers-bindings/browser-rendering-with-DO.mdx +++ b/src/content/docs/browser-rendering/workers-bindings/browser-rendering-with-DO.mdx @@ -247,7 +247,7 @@ Run `npx wrangler dev` to test your Worker locally. ## 7. Deploy -Run [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) to deploy your Worker to the Cloudflare global network. +Run [`npx wrangler deploy`](/workers/wrangler/commands/general/#deploy) to deploy your Worker to the Cloudflare global network. 
## Related resources diff --git a/src/content/docs/cloudflare-for-platforms/workers-for-platforms/configuration/hostname-routing.mdx b/src/content/docs/cloudflare-for-platforms/workers-for-platforms/configuration/hostname-routing.mdx index 2c415e79a34ce2c..5cfa40082556952 100644 --- a/src/content/docs/cloudflare-for-platforms/workers-for-platforms/configuration/hostname-routing.mdx +++ b/src/content/docs/cloudflare-for-platforms/workers-for-platforms/configuration/hostname-routing.mdx @@ -8,14 +8,14 @@ sidebar: import { Render, PackageManagers, WranglerConfig } from "~/components"; -You can use [dynamic dispatch](/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/) Workers to route millions of vanity domains or subdomains to Workers without hitting traditional [route limits](/workers/platform/limits/#number-of-routes-per-zone). These hostnames can be subdomains under your managed domain (e.g. `customer1.saas.com`) or vanity domains controlled by your end customers (e.g. `mystore.com`), which can be managed through [custom hostnames](/cloudflare-for-platforms/cloudflare-for-saas/domain-support/). +You can use [dynamic dispatch](/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/) Workers to route millions of vanity domains or subdomains to Workers without hitting traditional [route limits](/workers/platform/limits/#routes-and-domains). These hostnames can be subdomains under your managed domain (e.g. `customer1.saas.com`) or vanity domains controlled by your end customers (e.g. `mystore.com`), which can be managed through [custom hostnames](/cloudflare-for-platforms/cloudflare-for-saas/domain-support/). ## (Recommended) Wildcard route with a dispatch Worker Configure a wildcard [Route](/workers/configuration/routing/routes/) (`*/*`) on your SaaS domain (the domain where you configure custom hostnames) to point to your dynamic dispatch Worker. 
This allows you to: - **Support both subdomains and vanity domains**: Handle `customer1.myplatform.com` (subdomain) and `shop.customer.com` (custom hostname) with the same routing logic. -- **Avoid route limits**: Instead of creating individual routes for every domain, which can cause you to hit [Routes limits](/workers/platform/limits/#number-of-routes-per-zone), you can handle the routing logic in code and proxy millions of domains to individual Workers. +- **Avoid route limits**: Instead of creating individual routes for every domain, which can cause you to hit [Routes limits](/workers/platform/limits/#routes-and-domains), you can handle the routing logic in code and proxy millions of domains to individual Workers. - **Programmatically control routing logic**: Write custom code to route requests based on hostname, [custom metadata](/cloudflare-for-platforms/cloudflare-for-saas/domain-support/custom-metadata/), path, or any other properties. :::note diff --git a/src/content/docs/cloudflare-one/access-controls/ai-controls/saas-mcp.mdx b/src/content/docs/cloudflare-one/access-controls/ai-controls/saas-mcp.mdx index f0499b8b400e6b5..fa9fb497dc56cd8 100644 --- a/src/content/docs/cloudflare-one/access-controls/ai-controls/saas-mcp.mdx +++ b/src/content/docs/cloudflare-one/access-controls/ai-controls/saas-mcp.mdx @@ -23,7 +23,7 @@ This guide walks through how to deploy a remote Devices", splitTunnelsURL: "/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/route-traffic/split-tunnels/", warpDeploymentURL: "/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/", diff --git a/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/intune.mdx b/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/intune.mdx index bedcbf31235fa63..7df779057955a84 100644 --- 
a/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/intune.mdx +++ b/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/intune.mdx @@ -120,7 +120,7 @@ The following steps outline deploying the Cloudflare One Client on macOS using I ### Prerequisites - A [Microsoft Intune account](https://login.microsoftonline.com/). -- A Cloudflare account that has a [Zero Trust organization](/cloudflare-one/setup/#create-a-zero-trust-organization). +- A Cloudflare account that has a [Zero Trust organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization). - macOS devices enrolled in Intune. ### Deployment order @@ -438,7 +438,7 @@ By completing this step, you deliver the Cloudflare One Client to targeted iOS d 6. Review your configuration in **Review + create** and select **Create**. -By completing this step, you preconfigure the Cloudflare One Agent with your [Zero Trust organization](/cloudflare-one/setup/#create-a-zero-trust-organization) and connection settings so that enrolled iOS devices automatically apply a consistent Cloudflare One Client configuration when the app installs. +By completing this step, you preconfigure the Cloudflare One Agent with your [Zero Trust organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization) and connection settings so that enrolled iOS devices automatically apply a consistent Cloudflare One Client configuration when the app installs. 
### Intune configuration diff --git a/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/jamf.mdx b/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/jamf.mdx index 068dfd6399097ad..9b7cd7e0ea22e17 100644 --- a/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/jamf.mdx +++ b/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/partners/jamf.mdx @@ -16,7 +16,7 @@ This guide covers how to deploy the Cloudflare One Client (formerly WARP) using ### Prerequisites - A [Jamf Pro account](https://www.jamf.com/products/jamf-pro/) -- A Cloudflare account that has a [Zero Trust organization](/cloudflare-one/setup/#create-a-zero-trust-organization) +- A Cloudflare account that has a [Zero Trust organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization) - macOS devices enrolled in Jamf ### 1. 
Upload the Cloudflare One Client package diff --git a/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/troubleshooting/client-errors.mdx b/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/troubleshooting/client-errors.mdx index b8545a15680722a..7ead8ce04edc90d 100644 --- a/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/troubleshooting/client-errors.mdx +++ b/src/content/docs/cloudflare-one/team-and-resources/devices/cloudflare-one-client/troubleshooting/client-errors.mdx @@ -260,7 +260,7 @@ The device is not connected to a Wi-Fi network or LAN that has connectivity to t ### Cause -The device is not authenticated to an [organization](/cloudflare-one/setup/#create-a-zero-trust-organization) because: +The device is not authenticated to an [organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization) because: - The device was revoked in Zero Trust. - The registration was corrupted or deleted for an unknown reason. @@ -295,7 +295,7 @@ The device is not authenticated to an [organization](/cloudflare-one/setup/#crea #### Cause -Your device was unenrolled from your company's [organization](/cloudflare-one/setup/#create-a-zero-trust-organization) by an administrator on your account. +Your device was unenrolled from your company's [organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization) by an administrator on your account. 
 #### Resolution

diff --git a/src/content/docs/cloudflare-one/team-and-resources/devices/device-registration.mdx b/src/content/docs/cloudflare-one/team-and-resources/devices/device-registration.mdx
index f7296f977379485..da367fee6fbb9a0 100644
--- a/src/content/docs/cloudflare-one/team-and-resources/devices/device-registration.mdx
+++ b/src/content/docs/cloudflare-one/team-and-resources/devices/device-registration.mdx
@@ -7,7 +7,7 @@ sidebar:

 import { Render, TabItem, Tabs, APIRequest } from "~/components";

-A device registration represents an individual session of the [Cloudflare One Client](/cloudflare-one/team-and-resources/devices/cloudflare-one-client/) on a physical device, linking a user (or service token) and the device to your [Zero Trust organization](/cloudflare-one/setup/#create-a-zero-trust-organization). A device registration is created when the Cloudflare One Client first authenticates. Each device registration has associated configuration, which includes a unique public key, device profile, and virtual IP addresses (one IPv4 and one IPv6).
+A device registration represents an individual session of the [Cloudflare One Client](/cloudflare-one/team-and-resources/devices/cloudflare-one-client/) on a physical device, linking a user (or service token) and the device to your [Zero Trust organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization). A device registration is created when the Cloudflare One Client first authenticates. Each device registration has associated configuration, which includes a unique public key, device profile, and virtual IP addresses (one IPv4 and one IPv6).

 A single physical device can have [multiple device registrations](/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/mdm-deployment/windows-multiuser/), for example, if multiple users share a single laptop and each enrolls the Cloudflare One Client with their own credentials.
diff --git a/src/content/docs/cloudflare-one/tutorials/deploy-client-headless-linux.mdx b/src/content/docs/cloudflare-one/tutorials/deploy-client-headless-linux.mdx
index 0fa6645a8ae16a7..44fa0c2e75bc0bc 100644
--- a/src/content/docs/cloudflare-one/tutorials/deploy-client-headless-linux.mdx
+++ b/src/content/docs/cloudflare-one/tutorials/deploy-client-headless-linux.mdx
@@ -20,7 +20,7 @@ This tutorial focuses on deploying the Cloudflare One Client as an endpoint devi

 ## Prerequisites

-- [Cloudflare Zero Trust account](/cloudflare-one/setup/#create-a-zero-trust-organization)
+- [Cloudflare Zero Trust account](/cloudflare-one/setup/#2-create-a-zero-trust-organization)

 ## 1. Create a service token

diff --git a/src/content/docs/cloudflare-wan/zero-trust/cloudflare-one-client.mdx b/src/content/docs/cloudflare-wan/zero-trust/cloudflare-one-client.mdx
index 078ec39b808eeab..e507b389129ad07 100644
--- a/src/content/docs/cloudflare-wan/zero-trust/cloudflare-one-client.mdx
+++ b/src/content/docs/cloudflare-wan/zero-trust/cloudflare-one-client.mdx
@@ -16,7 +16,7 @@ import { Render } from "~/components";
   params={{
     warpURL: "/cloudflare-one/team-and-resources/devices/cloudflare-one-client/",
     greIpsecURL: "/cloudflare-wan/configuration/manually/how-to/configure-tunnel-endpoints/#add-tunnels",
-    setupZeroTrustAccountURL: "/cloudflare-one/setup/#create-a-zero-trust-organization",
+    setupZeroTrustAccountURL: "/cloudflare-one/setup/#2-create-a-zero-trust-organization",
     ztDashPath: "My Team > Devices",
     splitTunnelsURL: "/cloudflare-one/team-and-resources/devices/cloudflare-one-client/configure/route-traffic/split-tunnels/",
     warpDeploymentURL: "/cloudflare-one/team-and-resources/devices/cloudflare-one-client/deployment/",

diff --git a/src/content/docs/containers/index.mdx b/src/content/docs/containers/index.mdx
index bf85bf33c7ed507..c454884427d89c1 100644
--- a/src/content/docs/containers/index.mdx
+++ b/src/content/docs/containers/index.mdx
@@ -37,7 +37,7 @@ Enhance your Workers
 with serverless containers

 Run code written in any programming language, built for any runtime, as part of apps built on [Workers](/workers).
 Deploy your container image to Region:Earth without worrying about managing infrastructure - just define your
-Worker and [`wrangler deploy`](/workers/wrangler/commands/#deploy).
+Worker and [`wrangler deploy`](/workers/wrangler/commands/general/#deploy).

 With Containers you can run:

@@ -146,7 +146,7 @@ regional placement, Workflow and Queue integrations, AI-generated code execution

 Learn more about the commands to develop, build and push images, and deploy

diff --git a/src/content/docs/containers/local-dev.mdx b/src/content/docs/containers/local-dev.mdx
index 74d3bab264ca966..f72b93d79ed7801 100644
--- a/src/content/docs/containers/local-dev.mdx
+++ b/src/content/docs/containers/local-dev.mdx
@@ -5,7 +5,7 @@ sidebar:
   order: 6
 ---

-You can run both your container and your Worker locally by simply running [`npx wrangler dev`](/workers/wrangler/commands/#dev) (or `vite dev` for Vite projects using the [Cloudflare Vite plugin](/workers/vite-plugin/)) in your project's directory.
+You can run both your container and your Worker locally by simply running [`npx wrangler dev`](/workers/wrangler/commands/general/#dev) (or `vite dev` for Vite projects using the [Cloudflare Vite plugin](/workers/vite-plugin/)) in your project's directory.

 To develop Container-enabled Workers locally, you will need to first ensure that a Docker compatible CLI tool and Engine are installed. For instance, you could use [Docker Desktop](https://docs.docker.com/desktop/) or [Colima](https://github.com/abiosoft/colima).
diff --git a/src/content/docs/containers/platform-details/image-management.mdx b/src/content/docs/containers/platform-details/image-management.mdx
index c97292b00117b5c..71b52039a3b9267 100644
--- a/src/content/docs/containers/platform-details/image-management.mdx
+++ b/src/content/docs/containers/platform-details/image-management.mdx
@@ -118,7 +118,7 @@ The following example grants access to all image repositories under AWS account
 }
 ```

-You can then use the credentials for the IAM User to [configure a registry in Wrangler](/workers/wrangler/commands/#containers-registries).
+You can then use the credentials for the IAM User to [configure a registry in Wrangler](/workers/wrangler/commands/containers/#containers-registries).

 Wrangler will prompt you to create a Secrets Store store if one does not already exist, and then create your secret.

 - SQL API methods accessed with `ctx.storage.sql` are only allowed on [Durable Object classes with SQLite storage backend](/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class) and will return an error if called on Durable Object classes with a KV-storage backend.
-- When writing data, every row update of an index counts as an additional row. However, indexes may be beneficial for read-heavy use cases. Refer to [Index for SQLite Durable Objects](/durable-objects/best-practices/access-durable-objects-storage/#index-for-sqlite-durable-objects).
+- When writing data, every row update of an index counts as an additional row. However, indexes may be beneficial for read-heavy use cases. Refer to [Index for SQLite Durable Objects](/durable-objects/best-practices/access-durable-objects-storage/#indexes-in-sqlite).
 - Writing data to [SQLite virtual tables](https://www.sqlite.org/vtab.html) also counts towards rows written.
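The index write-amplification rule in the bullets above is easy to mis-estimate, so a tiny helper makes the arithmetic explicit. This is an illustrative model of the stated rule only (one extra row written per index per affected row), not official billing code, and `billedRowsWritten` is a hypothetical name.

```typescript
// Illustrative model of the billing note above: each changed row is written
// once, plus once more for every index that must be updated along with it.
function billedRowsWritten(rowsChanged: number, indexesOnTable: number): number {
  return rowsChanged * (1 + indexesOnTable);
}

// A 10-row update on a table with 2 indexes bills as 30 rows written.
console.log(billedRowsWritten(10, 2)); // prints: 30
```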
diff --git a/src/content/docs/durable-objects/api/state.mdx b/src/content/docs/durable-objects/api/state.mdx
index 90224bba7b8357e..b1be20e5e19bb09 100644
--- a/src/content/docs/durable-objects/api/state.mdx
+++ b/src/content/docs/durable-objects/api/state.mdx
@@ -165,11 +165,11 @@ class MyDurableObject(DurableObject):

 ### `acceptWebSocket`

-`acceptWebSocket` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.
+`acceptWebSocket` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.

 `acceptWebSocket` adds a WebSocket to the set of WebSockets attached to the Durable Object. Once called, any incoming messages will be delivered by calling the Durable Object's `webSocketMessage` handler, and `webSocketClose` will be invoked upon disconnect. After calling `acceptWebSocket`, the WebSocket is accepted and its `send` and `close` methods can be used.

-The [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api) takes the place of the standard [WebSockets API](/workers/runtime-apis/websockets/). Therefore, `ws.accept` must not have been called separately and `ws.addEventListener` method will not receive events as they will instead be delivered to the Durable Object.
+The [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api) takes the place of the standard [WebSockets API](/workers/runtime-apis/websockets/). Therefore, `ws.accept` must not have been called separately and `ws.addEventListener` method will not receive events as they will instead be delivered to the Durable Object.
 The WebSocket Hibernation API permits a maximum of 32,768 WebSocket connections per Durable Object, but the CPU and memory usage of a given workload may further limit the practical number of simultaneous connections.

@@ -184,7 +184,7 @@ The WebSocket Hibernation API permits a maximum of 32,768 WebSocket connections

 ### `getWebSockets`

-`getWebSockets` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.
+`getWebSockets` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.

 `getWebSockets` returns an `Array` which is the set of WebSockets attached to the Durable Object. An optional tag argument can be used to filter the list according to tags supplied when calling [`DurableObjectState::acceptWebSocket`](/durable-objects/api/state/#acceptwebsocket).

@@ -204,7 +204,7 @@ Disconnected WebSockets are not returned by this method, but `getWebSockets` may

 ### `setWebSocketAutoResponse`

-`setWebSocketAutoResponse` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.
+`setWebSocketAutoResponse` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.

 `setWebSocketAutoResponse` sets an automatic response, auto-response, for the request provided for all WebSockets attached to the Durable Object.
 If a request is received matching the provided request then the auto-response will be returned without waking WebSockets in hibernation and incurring billable duration charges.

@@ -238,7 +238,7 @@ Disconnected WebSockets are not returned by this method, but `getWebSockets` may

 ### `getWebSocketAutoResponseTimestamp`

-`getWebSocketAutoResponseTimestamp` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.
+`getWebSocketAutoResponseTimestamp` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.

 `getWebSocketAutoResponseTimestamp` gets the most recent `Date` on which the given WebSocket sent an auto-response, or null if the given WebSocket never sent an auto-response.

@@ -252,7 +252,7 @@ Disconnected WebSockets are not returned by this method, but `getWebSockets` may

 ### `setHibernatableWebSocketEventTimeout`

-`setHibernatableWebSocketEventTimeout` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.
+`setHibernatableWebSocketEventTimeout` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.

 `setHibernatableWebSocketEventTimeout` sets the maximum amount of time in milliseconds that a WebSocket event can run for.
@@ -268,7 +268,7 @@ If no parameter or a parameter of `0` is provided and a timeout has been previou

 ### `getHibernatableWebSocketEventTimeout`

-`getHibernatableWebSocketEventTimeout` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.
+`getHibernatableWebSocketEventTimeout` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.

 `getHibernatableWebSocketEventTimeout` gets the currently set hibernatable WebSocket event timeout if one has been set via [`DurableObjectState::setHibernatableWebSocketEventTimeout`](/durable-objects/api/state/#sethibernatablewebsocketeventtimeout).

@@ -282,7 +282,7 @@ If no parameter or a parameter of `0` is provided and a timeout has been previou

 ### `getTags`

-`getTags` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.
+`getTags` is part of the [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), which allows a Durable Object to be removed from memory to save costs while keeping its WebSockets connected.

 `getTags` returns tags associated with a given WebSocket. This method throws an exception if the WebSocket has not been associated with the Durable Object via [`DurableObjectState::acceptWebSocket`](/durable-objects/api/state/#acceptwebsocket).
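The `acceptWebSocket`/`getWebSockets`/`getTags`/`setWebSocketAutoResponse` semantics touched by this file's hunks can be sketched as a plain in-memory mock. This is a minimal illustration of the documented behavior, not the real runtime API: plain string ids stand in for live WebSocket objects, and `HibernationRegistry` is a hypothetical class name.

```typescript
// Mock of the hibernation bookkeeping described in state.mdx: accepted
// sockets, their tags, and an exact-match auto-response pair.
class HibernationRegistry {
  private tags = new Map<string, string[]>(); // socket id -> tags
  private autoRequest?: string;
  private autoResponse?: string;
  lastAutoResponseAt: Date | null = null;

  acceptWebSocket(id: string, socketTags: string[] = []): void {
    this.tags.set(id, socketTags);
  }

  // With no tag, every attached socket; with a tag, only sockets carrying it.
  getWebSockets(tag?: string): string[] {
    const ids = [...this.tags.keys()];
    return tag === undefined
      ? ids
      : ids.filter((id) => this.tags.get(id)!.includes(tag));
  }

  // Throws for sockets never passed to acceptWebSocket, like the real API.
  getTags(id: string): string[] {
    const socketTags = this.tags.get(id);
    if (socketTags === undefined) throw new Error("WebSocket was not accepted");
    return socketTags;
  }

  setWebSocketAutoResponse(request: string, response: string): void {
    this.autoRequest = request;
    this.autoResponse = response;
  }

  // An exact match is answered without "waking" the object; anything else
  // (null here) would instead be delivered to the webSocketMessage handler.
  handleMessage(message: string): string | null {
    if (this.autoRequest !== undefined && message === this.autoRequest) {
      this.lastAutoResponseAt = new Date();
      return this.autoResponse!;
    }
    return null;
  }
}

const registry = new HibernationRegistry();
registry.acceptWebSocket("client-a", ["room:lobby"]);
registry.acceptWebSocket("client-b", ["room:game"]);
registry.setWebSocketAutoResponse("ping", "pong");
console.log(registry.handleMessage("ping")); // prints: pong
console.log(registry.handleMessage("hello")); // prints: null
```

The `getWebSocketAutoResponseTimestamp` behavior is mirrored by `lastAutoResponseAt`: it stays `null` until the first auto-response is sent.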
diff --git a/src/content/docs/durable-objects/best-practices/rules-of-durable-objects.mdx b/src/content/docs/durable-objects/best-practices/rules-of-durable-objects.mdx
index 661bb1e16b29d76..8d6193c6350b4a8 100644
--- a/src/content/docs/durable-objects/best-practices/rules-of-durable-objects.mdx
+++ b/src/content/docs/durable-objects/best-practices/rules-of-durable-objects.mdx
@@ -1410,7 +1410,7 @@ export class Subscription extends DurableObject {

 ### Clean up storage with `deleteAll()`

-To fully clear a Durable Object's storage, call `deleteAll()`. Simply deleting individual keys or dropping tables is not sufficient, as some internal metadata may remain. Workers with a compatibility date before [2026-02-24](/workers/configuration/compatibility-flags/#delete-all-deletes-alarms) and an alarm set should delete the alarm first with `deleteAlarm()`.
+To fully clear a Durable Object's storage, call `deleteAll()`. Simply deleting individual keys or dropping tables is not sufficient, as some internal metadata may remain. Workers with a compatibility date before [2026-02-24](/workers/configuration/compatibility-flags/#durable-object-deleteall-deletes-alarms) and an alarm set should delete the alarm first with `deleteAlarm()`.

 ```ts

diff --git a/src/content/docs/durable-objects/concepts/what-are-durable-objects.mdx b/src/content/docs/durable-objects/concepts/what-are-durable-objects.mdx
index e3ef1e994026ff9..4727822044c6797 100644
--- a/src/content/docs/durable-objects/concepts/what-are-durable-objects.mdx
+++ b/src/content/docs/durable-objects/concepts/what-are-durable-objects.mdx
@@ -50,7 +50,7 @@ The [Durable Object Storage API](/durable-objects/api/sqlite-storage-api/) allow

 There are two flavors of the storage API, a [key-value (KV) API](/durable-objects/api/legacy-kv-storage-api/) and an [SQL API](/durable-objects/api/sqlite-storage-api/).
-When using the [new SQLite in Durable Objects storage backend](/durable-objects/reference/durable-objects-migrations/#enable-sqlite-storage-backend-on-new-durable-object-class-migration), you have access to both the APIs. However, if you use the previous storage backend you only have access to the key-value API.
+When using the [new SQLite in Durable Objects storage backend](/durable-objects/reference/durable-objects-migrations/#create-migration), you have access to both the APIs. However, if you use the previous storage backend you only have access to the key-value API.

 ### Alarms API

@@ -64,7 +64,7 @@ WebSockets are long-lived TCP connections that enable bi-directional, real-time

 Because Durable Objects provide a single-point-of-coordination between Cloudflare Workers, a single Durable Object instance can be used in parallel with WebSockets to coordinate between multiple clients, such as participants in a chat room or a multiplayer game.

-Durable Objects support the [WebSocket Standard API](/durable-objects/best-practices/websockets/#websocket-standard-api), as well as the [WebSockets Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api) which extends the Web Standard WebSocket API to reduce costs by not incurring billing charges during periods of inactivity.
+Durable Objects support the [WebSocket Standard API](/durable-objects/best-practices/websockets/#websocket-standard-api), as well as the [WebSockets Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api) which extends the Web Standard WebSocket API to reduce costs by not incurring billing charges during periods of inactivity.
 ### RPC

diff --git a/src/content/docs/durable-objects/get-started.mdx b/src/content/docs/durable-objects/get-started.mdx
index 25384749cfc9f33..9aa4a213f93ac7d 100644
--- a/src/content/docs/durable-objects/get-started.mdx
+++ b/src/content/docs/durable-objects/get-started.mdx
@@ -303,7 +303,7 @@ Refer to [Durable Objects migrations](/durable-objects/reference/durable-objects

 ## 6. Develop a Durable Object Worker locally

-To test your Durable Object locally, run [`wrangler dev`](/workers/wrangler/commands/#dev):
+To test your Durable Object locally, run [`wrangler dev`](/workers/wrangler/commands/general/#dev):

 ```sh
 npx wrangler dev

diff --git a/src/content/docs/durable-objects/index.mdx b/src/content/docs/durable-objects/index.mdx
index 51a43c510cbd6f2..ce848ad79212495 100644
--- a/src/content/docs/durable-objects/index.mdx
+++ b/src/content/docs/durable-objects/index.mdx
@@ -38,7 +38,7 @@ Use Durable Objects to build applications that need coordination among multiple

 :::note
 SQLite-backed Durable Objects are now available on the Workers Free plan with these [limits](/durable-objects/platform/pricing/).

-[SQLite storage](/durable-objects/best-practices/access-durable-objects-storage/) and corresponding [Storage API](/durable-objects/api/sqlite-storage-api/) methods like `sql.exec` have moved from beta to general availability. New Durable Object classes should use wrangler configuration for [SQLite storage](/durable-objects/best-practices/access-durable-objects-storage/#wrangler-configuration-for-sqlite-durable-objects).
+[SQLite storage](/durable-objects/best-practices/access-durable-objects-storage/) and corresponding [Storage API](/durable-objects/api/sqlite-storage-api/) methods like `sql.exec` have moved from beta to general availability. New Durable Object classes should use wrangler configuration for [SQLite storage](/durable-objects/reference/durable-objects-migrations/#create-migration).
 :::

 ### What are Durable Objects?
@@ -63,7 +63,7 @@ Learn how Durable Objects provide transactional, strongly consistent, and serial

-
+

 Learn how WebSocket Hibernation allows you to manage the connections of multiple clients at scale.

diff --git a/src/content/docs/durable-objects/observability/metrics-and-analytics.mdx b/src/content/docs/durable-objects/observability/metrics-and-analytics.mdx
index 6505f1dc10fa018..8ef732024882dc9 100644
--- a/src/content/docs/durable-objects/observability/metrics-and-analytics.mdx
+++ b/src/content/docs/durable-objects/observability/metrics-and-analytics.mdx
@@ -80,7 +80,7 @@ Use [GraphQL Introspection](/analytics/graphql-api/features/discovery/introspect

 Durable Objects using [WebSockets](/durable-objects/best-practices/websockets/) will see request metrics across several GraphQL datasets because WebSockets have different types of requests.

 * Metrics for a WebSocket connection itself is represented in `durableObjectsInvocationsAdaptiveGroups` once the connection closes. Since WebSocket connections are long-lived, connections often do not terminate until the Durable Object terminates.
-* Metrics for incoming and outgoing WebSocket messages on a WebSocket connection are available in `durableObjectsPeriodicGroups`. If a WebSocket connection uses [WebSocket Hibernation](/durable-objects/best-practices/websockets/#websocket-hibernation-api), incoming WebSocket messages are instead represented in `durableObjectsInvocationsAdaptiveGroups`.
+* Metrics for incoming and outgoing WebSocket messages on a WebSocket connection are available in `durableObjectsPeriodicGroups`. If a WebSocket connection uses [WebSocket Hibernation](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api), incoming WebSocket messages are instead represented in `durableObjectsInvocationsAdaptiveGroups`.
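The dataset split described in the bullets above can be expressed as a small routing function. This is a simplification for illustration only (it models incoming messages, and the function and type names are hypothetical); the dataset names themselves come from the text.

```typescript
// Simplified routing of WebSocket metrics to GraphQL datasets, per the docs:
// connection lifecycle metrics, and incoming messages on hibernating
// connections, land in the invocations dataset; other messages are periodic.
type WsMetric = { kind: "connection" | "message"; hibernating: boolean };

function datasetFor(metric: WsMetric): string {
  if (metric.kind === "connection") return "durableObjectsInvocationsAdaptiveGroups";
  return metric.hibernating
    ? "durableObjectsInvocationsAdaptiveGroups"
    : "durableObjectsPeriodicGroups";
}

console.log(datasetFor({ kind: "message", hibernating: false })); // prints: durableObjectsPeriodicGroups
```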
 ## Example GraphQL query for Durable Objects

diff --git a/src/content/docs/durable-objects/observability/troubleshooting.mdx b/src/content/docs/durable-objects/observability/troubleshooting.mdx
index 4b2ced33dbb8fad..d4b20b5fa192a2d 100644
--- a/src/content/docs/durable-objects/observability/troubleshooting.mdx
+++ b/src/content/docs/durable-objects/observability/troubleshooting.mdx
@@ -7,7 +7,7 @@ sidebar:

 ## Debugging

-[`wrangler dev`](/workers/wrangler/commands/#dev) and [`wrangler tail`](/workers/wrangler/commands/#tail) are both available to help you debug your Durable Objects.
+[`wrangler dev`](/workers/wrangler/commands/general/#dev) and [`wrangler tail`](/workers/wrangler/commands/general/#tail) are both available to help you debug your Durable Objects.

 The `wrangler dev --remote` command opens a tunnel from your local development environment to Cloudflare's global network, letting you test your Durable Objects code in the Workers environment as you write it.

diff --git a/src/content/docs/durable-objects/platform/known-issues.mdx b/src/content/docs/durable-objects/platform/known-issues.mdx
index a4123e42f2137a4..a0999b692715960 100644
--- a/src/content/docs/durable-objects/platform/known-issues.mdx
+++ b/src/content/docs/durable-objects/platform/known-issues.mdx
@@ -26,11 +26,11 @@ For this reason, it is best practice to ensure that API changes between your Wor

 ## Development tools

-[`wrangler tail`](/workers/wrangler/commands/#tail) logs from requests that are upgraded to WebSockets are delayed until the WebSocket is closed. `wrangler tail` should not be connected to a Worker that you expect will receive heavy volumes of traffic.
+[`wrangler tail`](/workers/wrangler/commands/general/#tail) logs from requests that are upgraded to WebSockets are delayed until the WebSocket is closed. `wrangler tail` should not be connected to a Worker that you expect will receive heavy volumes of traffic.
 The Workers editor in the [Cloudflare dashboard](https://dash.cloudflare.com/) allows you to interactively edit and preview your Worker and Durable Objects. In the editor, Durable Objects can only be talked to by a preview request if the Worker being previewed both exports the Durable Object class and binds to it. Durable Objects exported by other Workers cannot be talked to in the editor preview.

-[`wrangler dev`](/workers/wrangler/commands/#dev) has read access to Durable Object storage, but writes will be kept in memory and will not affect persistent data. However, if you specify the `script_name` explicitly in the [Durable Object binding](/workers/runtime-apis/bindings/), then writes will affect persistent data. Wrangler will emit a warning in that case.
+[`wrangler dev`](/workers/wrangler/commands/general/#dev) has read access to Durable Object storage, but writes will be kept in memory and will not affect persistent data. However, if you specify the `script_name` explicitly in the [Durable Object binding](/workers/runtime-apis/bindings/), then writes will affect persistent data. Wrangler will emit a warning in that case.

 ## Alarms in local development

diff --git a/src/content/docs/durable-objects/platform/pricing.mdx b/src/content/docs/durable-objects/platform/pricing.mdx
index 7a4a00f69fcfb2d..25c5776206910e4 100644
--- a/src/content/docs/durable-objects/platform/pricing.mdx
+++ b/src/content/docs/durable-objects/platform/pricing.mdx
@@ -27,7 +27,7 @@ On Workers Free plan:

 These examples exclude the costs for the Workers calling the Durable Objects. When modelling the costs of a Durable Object, note that:

 - Inactive objects receiving no requests do not incur any duration charges.
-- The [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api) can dramatically reduce duration-related charges for Durable Objects communicating with clients over the WebSocket protocol, especially if messages are only transmitted occasionally at sparse intervals.
+- The [WebSocket Hibernation API](/durable-objects/best-practices/websockets/#durable-objects-hibernation-websocket-api) can dramatically reduce duration-related charges for Durable Objects communicating with clients over the WebSocket protocol, especially if messages are only transmitted occasionally at sparse intervals.

 ### Example 1

diff --git a/src/content/docs/durable-objects/reference/durable-objects-migrations.mdx b/src/content/docs/durable-objects/reference/durable-objects-migrations.mdx
index ab43f972f86990a..93e6b9eddcb185d 100644
--- a/src/content/docs/durable-objects/reference/durable-objects-migrations.mdx
+++ b/src/content/docs/durable-objects/reference/durable-objects-migrations.mdx
@@ -29,7 +29,7 @@ You must initiate a migration process when you:

 :::note

-Updating the code for an existing Durable Object class does not require a migration. To update the code for an existing Durable Object class, run [`npx wrangler deploy`](/workers/wrangler/commands/#deploy). This is true even for changes to how the code interacts with persistent storage. Because of [global uniqueness](/durable-objects/platform/known-issues/#global-uniqueness), you do not have to be concerned about old and new code interacting with the same storage simultaneously. However, it is your responsibility to ensure that the new code is backwards compatible with existing stored data.
+Updating the code for an existing Durable Object class does not require a migration. To update the code for an existing Durable Object class, run [`npx wrangler deploy`](/workers/wrangler/commands/general/#deploy). This is true even for changes to how the code interacts with persistent storage. Because of [global uniqueness](/durable-objects/platform/known-issues/#global-uniqueness), you do not have to be concerned about old and new code interacting with the same storage simultaneously. However, it is your responsibility to ensure that the new code is backwards compatible with existing stored data.

 :::

diff --git a/src/content/docs/durable-objects/tutorials/build-a-seat-booking-app.mdx b/src/content/docs/durable-objects/tutorials/build-a-seat-booking-app.mdx
index 164548ec82a23d7..f7614e88144d54d 100644
--- a/src/content/docs/durable-objects/tutorials/build-a-seat-booking-app.mdx
+++ b/src/content/docs/durable-objects/tutorials/build-a-seat-booking-app.mdx
@@ -12,13 +12,13 @@ description: >-

 import { Render, PackageManagers, Details, WranglerConfig } from "~/components";

-In this tutorial, you will learn how to build a seat reservation app using Durable Objects. This app will allow users to book a seat for a flight. The app will be written in TypeScript and will use the new [SQLite storage backend in Durable Object](/durable-objects/best-practices/access-durable-objects-storage/#sqlite-storage-backend) to store the data.
+In this tutorial, you will learn how to build a seat reservation app using Durable Objects. This app will allow users to book a seat for a flight. The app will be written in TypeScript and will use the new [SQLite storage backend in Durable Object](/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class) to store the data.

 Using Durable Objects, you can write reusable code that can handle coordination and state management for multiple clients. Moreover, writing data to SQLite in Durable Objects is synchronous and uses local disks, therefore all queries are executed with great performance. You can learn more about SQLite storage in Durable Objects in the [SQLite in Durable Objects blog post](https://blog.cloudflare.com/sqlite-in-durable-objects).
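For reference, enabling the SQLite backend that the seat-booking tutorial relies on is done with a `new_sqlite_classes` migration in the Wrangler configuration. A minimal sketch, where the class and binding names are placeholders:

```toml
# Sketch only: "Bookings" and "BOOKINGS" are placeholder names.
[[durable_objects.bindings]]
name = "BOOKINGS"
class_name = "Bookings"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["Bookings"]
```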
 :::note[SQLite in Durable Objects]

-SQLite in Durable Objects is currently in beta. You can learn more about the limitations of SQLite in Durable Objects in the [SQLite in Durable Objects documentation](/durable-objects/best-practices/access-durable-objects-storage/#sqlite-storage-backend).
+SQLite in Durable Objects is currently in beta. You can learn more about the limitations of SQLite in Durable Objects in the [SQLite in Durable Objects documentation](/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class).

 :::

diff --git a/src/content/docs/email-routing/email-workers/local-development.mdx b/src/content/docs/email-routing/email-workers/local-development.mdx
index 8d5b0c19d88dca5..1fb43fafcc13bc6 100644
--- a/src/content/docs/email-routing/email-workers/local-development.mdx
+++ b/src/content/docs/email-routing/email-workers/local-development.mdx
@@ -8,7 +8,7 @@ sidebar:

 import { Render, Type, MetaInfo, WranglerConfig } from "~/components";

-You can test the behavior of an Email Worker script in local development using Wrangler with [wrangler dev](/workers/wrangler/commands/#dev), or using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
+You can test the behavior of an Email Worker script in local development using Wrangler with [wrangler dev](/workers/wrangler/commands/general/#dev), or using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/).
This is the minimal wrangler configuration required to run an Email Worker locally: diff --git a/src/content/docs/email-routing/limits.mdx b/src/content/docs/email-routing/limits.mdx index 1d80ce1f47e66c6..7131ce9a3eaa8f4 100644 --- a/src/content/docs/email-routing/limits.mdx +++ b/src/content/docs/email-routing/limits.mdx @@ -10,7 +10,7 @@ import { Render } from "~/components" ## Email Workers size limits -When you process emails with Email Workers and you are on [Workers’ free pricing tier](/workers/platform/pricing/) you might encounter an allocation error. This may happen due to the size of the emails you are processing and/or the complexity of your Email Worker. Refer to [Worker limits](/workers/platform/limits/#worker-limits) for more information. +When you process emails with Email Workers and you are on [Workers’ free pricing tier](/workers/platform/pricing/) you might encounter an allocation error. This may happen due to the size of the emails you are processing and/or the complexity of your Email Worker. Refer to [Worker limits](/workers/platform/limits/#account-plan-limits) for more information. You can use the [log functionality for Workers](/workers/observability/logs/) to look for messages related to CPU limits (such as `EXCEEDED_CPU`) and troubleshoot any issues regarding allocation errors. diff --git a/src/content/docs/fundamentals/manage-members/dashboard-sso.mdx b/src/content/docs/fundamentals/manage-members/dashboard-sso.mdx index ab01a31f67ed21f..eb3d211ed23d6d7 100644 --- a/src/content/docs/fundamentals/manage-members/dashboard-sso.mdx +++ b/src/content/docs/fundamentals/manage-members/dashboard-sso.mdx @@ -30,7 +30,7 @@ Cloudflare Dashboard SSO is available for free to all plans. 2. You must be a super administrator and be able to access the Cloudflare API. -3. A Cloudflare Zero Trust organization with any subscription tier (including Free) must be created. 
To set up a Cloudflare Zero Trust organization, refer to [Create a Cloudflare Zero Trust organization](/cloudflare-one/setup/#create-a-zero-trust-organization). +3. A Cloudflare Zero Trust organization with any subscription tier (including Free) must be created. To set up a Cloudflare Zero Trust organization, refer to [Create a Cloudflare Zero Trust organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization). ## 1. Set up an IdP diff --git a/src/content/docs/hyperdrive/configuration/local-development.mdx b/src/content/docs/hyperdrive/configuration/local-development.mdx index b5a71863f98adc4..8365e88c85f7ac0 100644 --- a/src/content/docs/hyperdrive/configuration/local-development.mdx +++ b/src/content/docs/hyperdrive/configuration/local-development.mdx @@ -116,10 +116,10 @@ Use `wrangler dev --remote` with caution. Since your Worker runs in Cloudflare's ::: -Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/#dev) to learn more about how to configure a local development session. +Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/general/#dev) to learn more about how to configure a local development session. ## Related resources -- Use [`wrangler dev`](/workers/wrangler/commands/#dev) to run your Worker and Hyperdrive locally and debug issues before deploying. +- Use [`wrangler dev`](/workers/wrangler/commands/general/#dev) to run your Worker and Hyperdrive locally and debug issues before deploying. - Learn [how Hyperdrive works](/hyperdrive/concepts/how-hyperdrive-works/). - Understand how to [configure query caching in Hyperdrive](/hyperdrive/concepts/query-caching/). 
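As the Hyperdrive local-development page above describes, a Hyperdrive binding can be pointed at a local database during `wrangler dev` via a `localConnectionString`. A hedged sketch of such a Wrangler config — the binding name, config ID, and connection string are all illustrative placeholders, not values from these docs:

```toml
# Hypothetical Hyperdrive binding sketch for local development.
# "HYPERDRIVE", the id, and the connection string are placeholders.
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<YOUR_HYPERDRIVE_CONFIG_ID>"
# Used only by `wrangler dev`; production traffic goes through Hyperdrive.
localConnectionString = "postgresql://user:password@localhost:5432/app_db"
```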
diff --git a/src/content/docs/images/manage-images/serve-images/serve-private-images.mdx b/src/content/docs/images/manage-images/serve-images/serve-private-images.mdx index d4291350f8635d0..e826639c46bbab8 100644 --- a/src/content/docs/images/manage-images/serve-images/serve-private-images.mdx +++ b/src/content/docs/images/manage-images/serve-images/serve-private-images.mdx @@ -29,7 +29,7 @@ Signed URLs are generated server-side to protect your signing key. The example b The Worker accepts a regular Images URL and returns a signed URL that expires after one day. Adjust the `EXPIRATION` value to set a different expiry period. :::note -Never hardcode your signing key in source code. Store it as a secret using [`npx wrangler secret put`](/workers/wrangler/commands/#secret) and access it via the `env` parameter. For more information, refer to [Secrets](/workers/configuration/secrets/). +Never hardcode your signing key in source code. Store it as a secret using [`npx wrangler secret put`](/workers/wrangler/commands/general/#secret) and access it via the `env` parameter. For more information, refer to [Secrets](/workers/configuration/secrets/). ::: diff --git a/src/content/docs/kv/get-started.mdx b/src/content/docs/kv/get-started.mdx index dc8813051087020..746cfae3751f1fa 100644 --- a/src/content/docs/kv/get-started.mdx +++ b/src/content/docs/kv/get-started.mdx @@ -339,7 +339,7 @@ You can view key-value pairs directly from the dashboard. :::note -When using [`wrangler dev`](/workers/wrangler/commands/#dev) to develop locally, Wrangler defaults to using a local version of KV to avoid interfering with any of your live production data in KV. This means that reading keys that you have not written locally returns null. +When using [`wrangler dev`](/workers/wrangler/commands/general/#dev) to develop locally, Wrangler defaults to using a local version of KV to avoid interfering with any of your live production data in KV. 
This means that reading keys that you have not written locally returns null. To have `wrangler dev` connect to your Workers KV namespace running on Cloudflare's global network, you can set `"remote" : true` in the KV binding configuration. Refer to the [remote bindings documentation](/workers/development-testing/#remote-bindings) for more information. diff --git a/src/content/docs/kv/index.mdx b/src/content/docs/kv/index.mdx index 56df1c961836aaf..e0dbd89af528136 100644 --- a/src/content/docs/kv/index.mdx +++ b/src/content/docs/kv/index.mdx @@ -159,7 +159,7 @@ See the full Workers KV [REST API and SDK reference](/api/resources/kv/) for det -The Workers command-line interface, Wrangler, allows you to [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#publish) your Workers projects. +The Workers command-line interface, Wrangler, allows you to [create](/workers/wrangler/commands/general/#init), [test](/workers/wrangler/commands/general/#dev), and [deploy](/workers/wrangler/commands/general/#deploy) your Workers projects. diff --git a/src/content/docs/learning-paths/workers/get-started/c3-and-wrangler.mdx b/src/content/docs/learning-paths/workers/get-started/c3-and-wrangler.mdx index 5d8f1790ad6a538..e52f9eb0665c06f 100644 --- a/src/content/docs/learning-paths/workers/get-started/c3-and-wrangler.mdx +++ b/src/content/docs/learning-paths/workers/get-started/c3-and-wrangler.mdx @@ -29,7 +29,7 @@ You will use C3 for new project creation. [Wrangler](/workers/wrangler/) is a command-line tool for building with Cloudflare developer products. -With Wrangler, you can [develop](/workers/wrangler/commands/#dev) your Worker locally and remotely, [roll back](/workers/wrangler/commands/#rollback) to a previous deployment of your Worker, [delete](/workers/wrangler/commands/#delete) a Worker and its bound Developer Platform resources, and more.
Refer to [Wrangler Commands](/workers/wrangler/commands/) to view the full reference of Wrangler commands. +With Wrangler, you can [develop](/workers/wrangler/commands/general/#dev) your Worker locally and remotely, [roll back](/workers/wrangler/commands/general/#rollback) to a previous deployment of your Worker, [delete](/workers/wrangler/commands/general/#delete) a Worker and its bound Developer Platform resources, and more. Refer to [Wrangler Commands](/workers/wrangler/commands/) to view the full reference of Wrangler commands. When you run C3 to create your project, C3 will install the latest version of Wrangler and you do not need to install Wrangler again. You can [update Wrangler](/workers/wrangler/install-and-update/#update-wrangler) to a newer version in your project to access new Wrangler capabilities and features. diff --git a/src/content/docs/learning-paths/workers/get-started/first-worker.mdx b/src/content/docs/learning-paths/workers/get-started/first-worker.mdx index 8e1bb44fffa1559..52db4665cb61c09 100644 --- a/src/content/docs/learning-paths/workers/get-started/first-worker.mdx +++ b/src/content/docs/learning-paths/workers/get-started/first-worker.mdx @@ -47,7 +47,7 @@ You will be asked if you would like to deploy the project to Cloudflare. - If you choose to deploy, you will be asked to authenticate (if not logged in already), and your project will be deployed to the Cloudflare global network and available on your custom [`workers.dev` subdomain](/workers/configuration/routing/workers-dev/). -- If you choose not to deploy, go to the newly created project directory to begin writing code. Deploy your project by running the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command. +- If you choose not to deploy, go to the newly created project directory to begin writing code. Deploy your project by running the [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) command. 
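The project you write code in before running `wrangler deploy` centers on a single fetch handler. A minimal sketch of a hypothetical `src/index.js` — the file name, greeting text, and use of the request path are illustrative, not the exact template C3 generates:

```javascript
// Minimal Workers module sketch (hypothetical src/index.js).
// The fetch handler runs once per incoming request and must return a Response.
const worker = {
	async fetch(request) {
		const url = new URL(request.url);
		// Echo the request path in a plain-text greeting (illustrative only).
		return new Response(`Hello from ${url.pathname}!`);
	},
};

export default worker;
```

Deploying this with `wrangler deploy` would serve the greeting from your `workers.dev` subdomain.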
Refer to [How to run Wrangler commands](/workers/wrangler/commands/#how-to-run-wrangler-commands) to learn how to run Wrangler commands according to your package manager. diff --git a/src/content/docs/pages/configuration/api.mdx b/src/content/docs/pages/configuration/api.mdx index a7346d1b663db23..08700dc669dceef 100644 --- a/src/content/docs/pages/configuration/api.mdx +++ b/src/content/docs/pages/configuration/api.mdx @@ -48,7 +48,7 @@ export default { method: "POST", headers: { "Content-Type": "application/json;charset=UTF-8", - // We recommend you store the API token as a secret using the Workers dashboard or using Wrangler as documented here: https://developers.cloudflare.com/workers/wrangler/commands/#secret + // We recommend you store the API token as a secret using the Workers dashboard or using Wrangler as documented here: https://developers.cloudflare.com/workers/wrangler/commands/general/#secret Authorization: `Bearer ${env.API_TOKEN}`, }, }; @@ -74,7 +74,7 @@ export default { const init = { headers: { "Content-Type": "application/json;charset=UTF-8", - // We recommend you store the API token as a secret using the Workers dashboard or using Wrangler as documented here: https://developers.cloudflare.com/workers/wrangler/commands/#secret + // We recommend you store the API token as a secret using the Workers dashboard or using Wrangler as documented here: https://developers.cloudflare.com/workers/wrangler/commands/general/#secret Authorization: `Bearer ${env.API_TOKEN}`, }, }; @@ -119,7 +119,7 @@ export default { const init = { headers: { "content-type": "application/json;charset=UTF-8", - // We recommend you store the API token as a secret using the Workers dashboard or using Wrangler as documented here: https://developers.cloudflare.com/workers/wrangler/commands/#secret + // We recommend you store the API token as a secret using the Workers dashboard or using Wrangler as documented here: 
https://developers.cloudflare.com/workers/wrangler/commands/general/#secret Authorization: `Bearer ${env.API_TOKEN}`, }, }; diff --git a/src/content/docs/pages/functions/bindings.mdx b/src/content/docs/pages/functions/bindings.mdx index e699bdf3a04ff1f..0e064fc991dfa66 100644 --- a/src/content/docs/pages/functions/bindings.mdx +++ b/src/content/docs/pages/functions/bindings.mdx @@ -66,7 +66,7 @@ export const onRequest: PagesFunction = async (context) => { You can interact with your KV namespace bindings locally in one of two ways: -- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/#dev-1). +- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1). - Pass arguments to `wrangler pages dev` directly. To interact with your KV namespace binding locally by passing arguments to the Wrangler CLI, add `-k ` or `--kv=` to the `wrangler pages dev` command. For example, if your KV namespace is bound your Function via the `TODO_LIST` binding, access the KV namespace in local development by running: @@ -132,7 +132,7 @@ export const onRequestGet: PagesFunction = async (context) => { You can interact with your Durable Object bindings locally in one of two ways: -- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/#dev-1). +- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1). - Pass arguments to `wrangler pages dev` directly. While developing locally, to interact with a Durable Object namespace, run `wrangler dev` in the directory of the Worker exporting the Durable Object. In another terminal, run `wrangler pages dev` in the directory of your Pages project. 
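The Wrangler-file approach mentioned for these bindings might look like the following sketch, reusing the `TODO_LIST` KV binding name from this page — the project name, output directory, and namespace ID are placeholder values:

```toml
# Hypothetical Pages project config sketch; only the TODO_LIST binding
# name comes from this page, the other values are placeholders.
name = "my-pages-project"
pages_build_output_dir = "./dist"

[[kv_namespaces]]
binding = "TODO_LIST"
id = "<YOUR_KV_NAMESPACE_ID>"
```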
@@ -196,7 +196,7 @@ export const onRequest: PagesFunction = async (context) => { You can interact with your R2 bucket bindings locally in one of two ways: -- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/#dev-1). +- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1). - Pass arguments to `wrangler pages dev` directly. :::note @@ -268,7 +268,7 @@ export const onRequest: PagesFunction = async (context) => { You can interact with your D1 database bindings locally in one of two ways: -- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/#dev-1). +- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1). - Pass arguments to `wrangler pages dev` directly. To interact with a D1 database via the Wrangler CLI while [developing locally](/d1/best-practices/local-development/#develop-locally-with-pages), add `--d1 =` to the `wrangler pages dev` command. @@ -435,7 +435,7 @@ export const onRequest: PagesFunction = async (context) => { To bind Workers AI to your Pages Function, you can configure a Workers AI binding in the [Wrangler configuration file](/pages/functions/wrangler-configuration/#workers-ai) or the Cloudflare dashboard. -When developing locally using Wrangler, you can define an AI binding using the `--ai` flag. Start Wrangler in development mode by running [`wrangler pages dev --ai AI`](/workers/wrangler/commands/#dev) to expose the `context.env.AI` binding. +When developing locally using Wrangler, you can define an AI binding using the `--ai` flag. Start Wrangler in development mode by running [`wrangler pages dev --ai AI`](/workers/wrangler/commands/pages/#dev-1) to expose the `context.env.AI` binding.
To configure a Workers AI binding via the Cloudflare dashboard: @@ -493,7 +493,7 @@ export const onRequest: PagesFunction = async (context) => { You can interact with your Workers AI bindings locally in one of two ways: -- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/#dev-1). +- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1). - Pass arguments to `wrangler pages dev` directly. To interact with a Workers AI binding via the Wrangler CLI while developing locally, run: @@ -549,7 +549,7 @@ export const onRequest: PagesFunction = async (context) => { You can interact with your Service bindings locally in one of two ways: -- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/#dev-1). +- Configure your Pages project's Wrangler file and run [`npx wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1). - Pass arguments to `wrangler pages dev` directly. To interact with a [Service binding](/workers/runtime-apis/bindings/service-bindings/) while developing locally, run the Worker you want to bind to via `wrangler dev` and in parallel, run `wrangler pages dev` with `--service =` where `SCRIPT_NAME` indicates the name of the Worker. For example, if your Worker is called `my-worker`, connect with this Worker by running it via `npx wrangler dev` (in the Worker's directory) alongside `npx wrangler pages dev --service MY_SERVICE=my-worker` (in the Pages' directory). Interact with this binding by using `context.env` (for example, `context.env.MY_SERVICE`). @@ -715,7 +715,7 @@ export const onRequest: PagesFunction = async (context) => { ### Interact with your Hyperdrive binding locally -To interact with your Hyperdrive binding locally, you must provide a local connection string to your database that your Pages project will connect to directly. 
You can set an environment variable `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_` with the connection string of the database, or use the Wrangler file to configure your Hyperdrive binding with a `localConnectionString` as specified in [Hyperdrive documentation for local development](/hyperdrive/configuration/local-development/). Then, run [`npx wrangler pages dev `](/workers/wrangler/commands/#dev-1). +To interact with your Hyperdrive binding locally, you must provide a local connection string to your database that your Pages project will connect to directly. You can set an environment variable `CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_` with the connection string of the database, or use the Wrangler file to configure your Hyperdrive binding with a `localConnectionString` as specified in [Hyperdrive documentation for local development](/hyperdrive/configuration/local-development/). Then, run [`npx wrangler pages dev `](/workers/wrangler/commands/pages/#dev-1). ## Analytics Engine @@ -830,7 +830,7 @@ export const onRequest: PagesFunction = async (context) => { You can interact with your environment variables locally in one of two ways: - Configure your Pages project's Wrangler file and running `npx wrangler pages dev`. -- Pass arguments to [`wrangler pages dev`](/workers/wrangler/commands/#dev-1) directly. +- Pass arguments to [`wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1) directly. 
To interact with your environment variables locally via the Wrangler CLI, add `--binding==` to the `wrangler pages dev` command: diff --git a/src/content/docs/pages/functions/debugging-and-logging.mdx b/src/content/docs/pages/functions/debugging-and-logging.mdx index 2e98204e8ce5e70..b54c949cb7067cd 100644 --- a/src/content/docs/pages/functions/debugging-and-logging.mdx +++ b/src/content/docs/pages/functions/debugging-and-logging.mdx @@ -7,7 +7,7 @@ sidebar: import { DashButton } from "~/components"; -Access your Functions logs by using the Cloudflare dashboard or the [Wrangler CLI](/workers/wrangler/commands/#deployment-tail). +Access your Functions logs by using the Cloudflare dashboard or the [Wrangler CLI](/workers/wrangler/commands/pages/#deployment-tail). Logs are a powerful debugging tool that can help you test and monitor the behavior of your Pages Functions once they have been deployed. Logs are available for every deployment of your Pages project. @@ -85,7 +85,7 @@ The output of each `wrangler pages deployment tail` log is a structured JSON obj } ``` -`wrangler pages deployment tail` allows you to customize a logging session to better suit your needs. Refer to the [`wrangler pages deployment tail` documentation](/workers/wrangler/commands/#deployment-tail) for available configuration options. +`wrangler pages deployment tail` allows you to customize a logging session to better suit your needs. Refer to the [`wrangler pages deployment tail` documentation](/workers/wrangler/commands/pages/#deployment-tail) for available configuration options. 
## View logs in the Cloudflare Dashboard diff --git a/src/content/docs/pages/functions/get-started.mdx b/src/content/docs/pages/functions/get-started.mdx index 27d0d3242b59c05..33a2c69d8b0ea28 100644 --- a/src/content/docs/pages/functions/get-started.mdx +++ b/src/content/docs/pages/functions/get-started.mdx @@ -44,7 +44,7 @@ Refer to [Routing](/pages/functions/routing/) for more information on route cust [Workers runtime features](/workers/runtime-apis/) are configurable on Pages Functions, including [compatibility with a subset of Node.js APIs](/workers/runtime-apis/nodejs) and the ability to set a [compatibility date or compatibility flag](/workers/configuration/compatibility-dates/). -Set these configurations by passing an argument to your [Wrangler](/workers/wrangler/commands/#dev-1) command or by setting them in the dashboard. To set Pages compatibility flags in the Cloudflare dashboard: +Set these configurations by passing an argument to your [Wrangler](/workers/wrangler/commands/pages/#dev-1) command or by setting them in the dashboard. To set Pages compatibility flags in the Cloudflare dashboard: 1. Log into the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Select **Workers & Pages** and select your Pages project. @@ -58,7 +58,7 @@ Additionally, use other Cloudflare products such as [D1](/d1/) (serverless DB) a After you have set up your Function, deploy your Pages project. Deploy your project by: - Connecting your [Git provider](/pages/get-started/git-integration/). -- Using [Wrangler](/workers/wrangler/commands/#pages) from the command line. +- Using [Wrangler](/workers/wrangler/commands/pages/#pages) from the command line. 
:::caution diff --git a/src/content/docs/pages/functions/local-development.mdx b/src/content/docs/pages/functions/local-development.mdx index cf587570e9f7d8b..3fa1726c44c32ad 100644 --- a/src/content/docs/pages/functions/local-development.mdx +++ b/src/content/docs/pages/functions/local-development.mdx @@ -25,13 +25,13 @@ This will then start serving your Pages project. You can press `b` to open the b :::note -If you have a [Wrangler configuration file](/pages/functions/wrangler-configuration/) file configured for your Pages project, you can run [`wrangler pages dev`](/workers/wrangler/commands/#dev-1) without specifying a directory. +If you have a [Wrangler configuration file](/pages/functions/wrangler-configuration/) configured for your Pages project, you can run [`wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1) without specifying a directory. ::: ### HTTPS support -To serve your local development server over HTTPS with a self-signed certificate, you can [set `local_protocol` via the [Wrangler configuration file](/pages/functions/wrangler-configuration/#local-development-settings) or you can pass the `--local-protocol=https` argument to [`wrangler pages dev`](/workers/wrangler/commands/#dev-1): +To serve your local development server over HTTPS with a self-signed certificate, you can set `local_protocol` via the [Wrangler configuration file](/pages/functions/wrangler-configuration/#local-development-settings) or you can pass the `--local-protocol=https` argument to [`wrangler pages dev`](/workers/wrangler/commands/pages/#dev-1): ```sh npx wrangler pages dev --local-protocol=https diff --git a/src/content/docs/pages/functions/metrics.mdx b/src/content/docs/pages/functions/metrics.mdx index cbe70ad912e5fba..a104842a0741cbd 100644 --- a/src/content/docs/pages/functions/metrics.mdx +++ b/src/content/docs/pages/functions/metrics.mdx @@ -48,7 +48,7 @@ Function invocation statuses indicate whether a Function executed successfully o | Exceeded
resources^1 | Worker script exceeded runtime limits | 1102, 1027 | exceededResources | | Internal error^2 | Workers runtime encountered an error | | internalError | -1. The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](/workers/platform/limits/#request-limits). The most common cause is excessive CPU time, but is also caused by a script exceeding startup time or free tier limits. +1. The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](/workers/platform/limits/#request-and-response-limits). The most common cause is excessive CPU time, but it can also be caused by a script exceeding startup time or free tier limits. 2. The Internal Error status may appear when the Workers runtime fails to process a request due to an internal failure in our system. These errors are not caused by any issue with the Function code nor any resource limit. While requests with Internal Error status are rare, some may appear during normal operation. These requests are not counted towards usage for billing purposes. If you notice an elevated rate of requests with Internal Error status, review [www.cloudflarestatus.com](http://www.cloudflarestatus.com). To further investigate exceptions, refer to [Debugging and Logging](/pages/functions/debugging-and-logging) diff --git a/src/content/docs/pages/functions/source-maps.mdx b/src/content/docs/pages/functions/source-maps.mdx index abffbec3e009681..d7c6d483f6a56ff 100644 --- a/src/content/docs/pages/functions/source-maps.mdx +++ b/src/content/docs/pages/functions/source-maps.mdx @@ -17,7 +17,7 @@ Support for uploading source maps for Pages is available now in open beta.
Minim ## Source Maps -To enable source maps, provide the `--upload-source-maps` flag to [`wrangler pages deploy`](/workers/wrangler/commands/#deploy-1) or add the following to your Pages application's [Wrangler configuration file](/pages/functions/wrangler-configuration/) if you are using the Pages build environment: +To enable source maps, provide the `--upload-source-maps` flag to [`wrangler pages deploy`](/workers/wrangler/commands/general/#deploy-1) or add the following to your Pages application's [Wrangler configuration file](/pages/functions/wrangler-configuration/) if you are using the Pages build environment: @@ -29,7 +29,7 @@ To enable source maps, provide the `--upload-source-maps` flag to [`wrangler pag -When uploading source maps is enabled, Wrangler will automatically generate and upload source map files when you run [`wrangler pages deploy`](/workers/wrangler/commands/#deploy-1). +When uploading source maps is enabled, Wrangler will automatically generate and upload source map files when you run [`wrangler pages deploy`](/workers/wrangler/commands/general/#deploy-1). ## Stack traces diff --git a/src/content/docs/pages/functions/typescript.mdx b/src/content/docs/pages/functions/typescript.mdx index e6e3f0016883dba..e488ba60682253e 100644 --- a/src/content/docs/pages/functions/typescript.mdx +++ b/src/content/docs/pages/functions/typescript.mdx @@ -30,7 +30,7 @@ Then configure the types by creating a `functions/tsconfig.json` file: } ``` -See [the `wrangler types` command docs](/workers/wrangler/commands/#types) for more details. +See [the `wrangler types` command docs](/workers/wrangler/commands/general/#types) for more details. If you already have a `tsconfig.json` at the root of your project, you may wish to explicitly exclude the `/functions` directory to avoid conflicts. 
To exclude the `/functions` directory: diff --git a/src/content/docs/pages/get-started/direct-upload.mdx b/src/content/docs/pages/get-started/direct-upload.mdx index 6a34f5dca009fd2..e9e2b30a8d72c09 100644 --- a/src/content/docs/pages/get-started/direct-upload.mdx +++ b/src/content/docs/pages/get-started/direct-upload.mdx @@ -32,7 +32,7 @@ After you have your prebuilt assets ready, there are two ways to begin uploading :::note -Within a Direct Upload project, you can switch between creating deployments with either Wrangler or drag and drop. For existing Git-integrated projects, you can manually create deployments using [`wrangler deploy`](/workers/wrangler/commands/#deploy). However, you cannot use drag and drop on the dashboard with existing Git-integrated projects. +Within a Direct Upload project, you can switch between creating deployments with either Wrangler or drag and drop. For existing Git-integrated projects, you can manually create deployments using [`wrangler deploy`](/workers/wrangler/commands/general/#deploy). However, you cannot use drag and drop on the dashboard with existing Git-integrated projects. ::: @@ -51,7 +51,7 @@ To begin, install [`npm`](https://docs.npmjs.com/getting-started). Then [install #### Create your project -Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/#login). Then run the [`pages project create` command](/workers/wrangler/commands/#project-create): +Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/general/#login). Then run the [`pages project create` command](/workers/wrangler/commands/pages/#project-create): ```sh npx wrangler pages project create @@ -63,7 +63,7 @@ Subsequent deployments will reuse both of these values (saved in your `node_modu #### Deploy your assets -From here, you have created an empty project and can now deploy your assets for your first deployment and for all subsequent deployments in your production environment. 
To do this, run the [`wrangler pages deploy`](/workers/wrangler/commands/#deploy-1) command: +From here, you have created an empty project and can now deploy your assets for your first deployment and for all subsequent deployments in your production environment. To do this, run the [`wrangler pages deploy`](/workers/wrangler/commands/general/#deploy-1) command: ```sh npx wrangler pages deploy @@ -97,13 +97,13 @@ If you would like to streamline the project creation and asset deployment steps, #### Other useful commands -If you would like to use Wrangler to obtain a list of all available projects for Direct Upload, use [`pages project list`](/workers/wrangler/commands/#project-list): +If you would like to use Wrangler to obtain a list of all available projects for Direct Upload, use [`pages project list`](/workers/wrangler/commands/pages/#project-list): ```sh npx wrangler pages project list ``` -If you would like to use Wrangler to obtain a list of all unique preview URLs for a particular project, use [`pages deployment list`](/workers/wrangler/commands/#deployment-list): +If you would like to use Wrangler to obtain a list of all unique preview URLs for a particular project, use [`pages deployment list`](/workers/wrangler/commands/pages/#deployment-list): ```sh npx wrangler pages deployment list diff --git a/src/content/docs/pages/how-to/refactor-a-worker-to-pages-functions.mdx b/src/content/docs/pages/how-to/refactor-a-worker-to-pages-functions.mdx index 51a2728833ca77c..9588afbbf31b364 100644 --- a/src/content/docs/pages/how-to/refactor-a-worker-to-pages-functions.mdx +++ b/src/content/docs/pages/how-to/refactor-a-worker-to-pages-functions.mdx @@ -49,7 +49,7 @@ This step creates the boilerplate to write your Airtable submission Worker. Afte The following code block shows an example of a Worker that handles Airtable form submission. -The `submitHandler` async function is called if the pathname of the work is `/submit`. 
This function checks that the request method is a `POST` request and then proceeds to parse and post the form entries to Airtable using your credentials, which you can store using [Wrangler `secret`](/workers/wrangler/commands/#secret). +The `submitHandler` async function is called if the pathname of the request is `/submit`. This function checks that the request method is a `POST` request and then proceeds to parse and post the form entries to Airtable using your credentials, which you can store using [Wrangler `secret`](/workers/wrangler/commands/general/#secret). ```js export default { diff --git a/src/content/docs/pages/how-to/use-direct-upload-with-continuous-integration.mdx b/src/content/docs/pages/how-to/use-direct-upload-with-continuous-integration.mdx index c7cd6741bf7debf..d04e963bc1074bd 100644 --- a/src/content/docs/pages/how-to/use-direct-upload-with-continuous-integration.mdx +++ b/src/content/docs/pages/how-to/use-direct-upload-with-continuous-integration.mdx @@ -156,7 +156,7 @@ Wrangler requires a Node version of at least `16.17.0`. You must upgrade your No ::: -You can modify the Wrangler command with any [`wrangler pages deploy` options](/workers/wrangler/commands/#deploy-1). +You can modify the Wrangler command with any [`wrangler pages deploy` options](/workers/wrangler/commands/general/#deploy-1). After all the specified steps, define a `workflow` at the end of your file. You can learn more about creating a custom process with CircleCI from the [official documentation](https://circleci.com/docs/2.0/concepts/). @@ -195,4 +195,4 @@ env: This will set the Node.js version to 18. You have also set branches you want your continuous integration to run on. Finally, input your `PROJECT NAME` in the script section and your CI process should work as expected. -You can also modify the Wrangler command with any [`wrangler pages deploy` options](/workers/wrangler/commands/#deploy-1).
+You can also modify the Wrangler command with any [`wrangler pages deploy` options](/workers/wrangler/commands/general/#deploy-1). diff --git a/src/content/docs/pages/tutorials/build-an-api-with-pages-functions.mdx b/src/content/docs/pages/tutorials/build-an-api-with-pages-functions.mdx index db54b3df2c6f142..3b2fa7c5cd12591 100644 --- a/src/content/docs/pages/tutorials/build-an-api-with-pages-functions.mdx +++ b/src/content/docs/pages/tutorials/build-an-api-with-pages-functions.mdx @@ -219,7 +219,7 @@ After you have configured your Pages application and Pages Function, deploy your ### Deploy with Wrangler -In your `blog-frontend` directory, run [`wrangler pages deploy`](/workers/wrangler/commands/#deploy-1) to deploy your project to the Cloudflare dashboard. +In your `blog-frontend` directory, run [`wrangler pages deploy`](/workers/wrangler/commands/general/#deploy-1) to deploy your project to the Cloudflare dashboard. ```sh wrangler pages deploy blog-frontend diff --git a/src/content/docs/pages/tutorials/localize-a-website.mdx b/src/content/docs/pages/tutorials/localize-a-website.mdx index 7bd5cdb76f41503..7f770f3741f4196 100644 --- a/src/content/docs/pages/tutorials/localize-a-website.mdx +++ b/src/content/docs/pages/tutorials/localize-a-website.mdx @@ -157,7 +157,7 @@ class ElementHandler { } ``` -To review that everything looks as expected, use the preview functionality built into Wrangler. Call [`wrangler pages dev ./public`](/workers/wrangler/commands/#dev) to open up a live preview of your project. The command is refreshed after every code change that you make. +To review that everything looks as expected, use the preview functionality built into Wrangler. Call [`wrangler pages dev ./public`](/workers/wrangler/commands/pages/#dev-1) to open up a live preview of your project. The preview refreshes after every code change that you make.
You can expand on this translation functionality to provide country-specific translations, based on the incoming request’s `Accept-Language` header. By taking this header, parsing it, and passing the parsed language into your `ElementHandler`, you can retrieve a translated string in your user’s home language, provided that it is defined in `strings`. diff --git a/src/content/docs/pipelines/pipelines/manage-pipelines.mdx b/src/content/docs/pipelines/pipelines/manage-pipelines.mdx index a27f56745efaaae..67f71e57e9082fc 100644 --- a/src/content/docs/pipelines/pipelines/manage-pipelines.mdx +++ b/src/content/docs/pipelines/pipelines/manage-pipelines.mdx @@ -30,7 +30,7 @@ Pipelines execute SQL statements that define how data flows from streams to sink ### Wrangler CLI -To create a pipeline, run the [`pipelines create`](/workers/wrangler/commands/#pipelines-create) command: +To create a pipeline, run the [`pipelines create`](/workers/wrangler/commands/pipelines/#pipelines-create) command: ```bash npx wrangler pipelines create my-pipeline \ @@ -44,7 +44,7 @@ npx wrangler pipelines create my-pipeline \ --sql-file pipeline.sql ``` -Alternatively, to use the interactive setup wizard that helps you configure a stream, sink, and pipeline, run the [`pipelines setup`](/workers/wrangler/commands/#pipelines-setup) command: +Alternatively, to use the interactive setup wizard that helps you configure a stream, sink, and pipeline, run the [`pipelines setup`](/workers/wrangler/commands/pipelines/#pipelines-setup) command: ```bash npx wrangler pipelines setup @@ -112,13 +112,13 @@ FROM my_stream ### Wrangler CLI -To view a specific pipeline, run the [`pipelines get`](/workers/wrangler/commands/#pipelines-get) command: +To view a specific pipeline, run the [`pipelines get`](/workers/wrangler/commands/pipelines/#pipelines-get) command: ```bash npx wrangler pipelines get ``` -To list all pipelines in your account, run the [`pipelines list`](/workers/wrangler/commands/#pipelines-list) 
command: +To list all pipelines in your account, run the [`pipelines list`](/workers/wrangler/commands/pipelines/#pipelines-list) command: ```bash npx wrangler pipelines list @@ -139,7 +139,7 @@ Deleting a pipeline stops data flow from the connected stream to sink. ### Wrangler CLI -To delete a pipeline, run the [`pipelines delete`](/workers/wrangler/commands/#pipelines-delete) command: +To delete a pipeline, run the [`pipelines delete`](/workers/wrangler/commands/pipelines/#pipelines-delete) command: ```bash npx wrangler pipelines delete diff --git a/src/content/docs/pipelines/reference/legacy-pipelines.mdx b/src/content/docs/pipelines/reference/legacy-pipelines.mdx index 67630e1de2df297..5871d84c062596b 100644 --- a/src/content/docs/pipelines/reference/legacy-pipelines.mdx +++ b/src/content/docs/pipelines/reference/legacy-pipelines.mdx @@ -7,7 +7,7 @@ sidebar: Legacy pipelines, those created before September 25, 2025 via the legacy API, are on a deprecation path. -To check if your pipelines are legacy pipelines, view them in the dashboard under **Pipelines** > **Pipelines** or run the [`pipelines list`](/workers/wrangler/commands/#pipelines-list) command in [Wrangler](/workers/wrangler/). Legacy pipelines are labeled "legacy" in both locations. +To check if your pipelines are legacy pipelines, view them in the dashboard under **Pipelines** > **Pipelines** or run the [`pipelines list`](/workers/wrangler/commands/pipelines/#pipelines-list) command in [Wrangler](/workers/wrangler/). Legacy pipelines are labeled "legacy" in both locations. New pipelines offer SQL transformations, multiple output formats, and improved architecture. 
diff --git a/src/content/docs/pipelines/sinks/available-sinks/r2-data-catalog.mdx b/src/content/docs/pipelines/sinks/available-sinks/r2-data-catalog.mdx index 0ce4d384fc217a6..73076b5f9e2d100 100644 --- a/src/content/docs/pipelines/sinks/available-sinks/r2-data-catalog.mdx +++ b/src/content/docs/pipelines/sinks/available-sinks/r2-data-catalog.mdx @@ -8,7 +8,7 @@ sidebar: R2 Data Catalog sinks write processed data from pipelines as [Apache Iceberg](https://iceberg.apache.org/) tables to [R2 Data Catalog](/r2/data-catalog/). Iceberg tables provide ACID transactions, schema evolution, and time travel capabilities for analytics workloads. -To create an R2 Data Catalog sink, run the [`pipelines sinks create`](/workers/wrangler/commands/#pipelines-sinks-create) command and specify the sink type, target bucket, namespace, and table name: +To create an R2 Data Catalog sink, run the [`pipelines sinks create`](/workers/wrangler/commands/pipelines/#pipelines-sinks-create) command and specify the sink type, target bucket, namespace, and table name: ```bash npx wrangler pipelines sinks create my-sink \ diff --git a/src/content/docs/pipelines/sinks/available-sinks/r2.mdx b/src/content/docs/pipelines/sinks/available-sinks/r2.mdx index 7f1bef3a91e9447..59593658c80e009 100644 --- a/src/content/docs/pipelines/sinks/available-sinks/r2.mdx +++ b/src/content/docs/pipelines/sinks/available-sinks/r2.mdx @@ -8,7 +8,7 @@ sidebar: R2 sinks write processed data from pipelines as raw files to [R2 object storage](/r2/). They currently support writing to JSON and Parquet formats. 
-To create an R2 sink, run the [`pipelines sinks create`](/workers/wrangler/commands/#pipelines-sinks-create) command and specify the sink type and target [bucket](/r2/buckets/): +To create an R2 sink, run the [`pipelines sinks create`](/workers/wrangler/commands/pipelines/#pipelines-sinks-create) command and specify the sink type and target [bucket](/r2/buckets/): ```bash npx wrangler pipelines sinks create my-sink \ diff --git a/src/content/docs/pipelines/sinks/manage-sinks.mdx b/src/content/docs/pipelines/sinks/manage-sinks.mdx index 67857528a6cf3ea..ffecd6b509a7362 100644 --- a/src/content/docs/pipelines/sinks/manage-sinks.mdx +++ b/src/content/docs/pipelines/sinks/manage-sinks.mdx @@ -30,7 +30,7 @@ Sinks are made available to pipelines as SQL tables using the sink name (e.g., ` ### Wrangler CLI -To create a sink, run the [`pipelines sinks create`](/workers/wrangler/commands/#pipelines-sinks-create) command: +To create a sink, run the [`pipelines sinks create`](/workers/wrangler/commands/pipelines/#pipelines-sinks-create) command: ```bash npx wrangler pipelines sinks create \ @@ -40,7 +40,7 @@ npx wrangler pipelines sinks create \ For sink-specific configuration options, refer to [Available sinks](/pipelines/sinks/available-sinks/). 
-Alternatively, to use the interactive setup wizard that helps you configure a stream, sink, and pipeline, run the [`pipelines setup`](/workers/wrangler/commands/#pipelines-setup) command: +Alternatively, to use the interactive setup wizard that helps you configure a stream, sink, and pipeline, run the [`pipelines setup`](/workers/wrangler/commands/pipelines/#pipelines-setup) command: ```bash npx wrangler pipelines setup @@ -59,13 +59,13 @@ npx wrangler pipelines setup ### Wrangler CLI -To view a specific sink, run the [`pipelines sinks get`](/workers/wrangler/commands/#pipelines-sinks-get) command: +To view a specific sink, run the [`pipelines sinks get`](/workers/wrangler/commands/pipelines/#pipelines-sinks-get) command: ```bash npx wrangler pipelines sinks get ``` -To list all sinks in your account, run the [`pipelines sinks list`](/workers/wrangler/commands/#pipelines-sinks-list) command: +To list all sinks in your account, run the [`pipelines sinks list`](/workers/wrangler/commands/pipelines/#pipelines-sinks-list) command: ```bash npx wrangler pipelines sinks list @@ -86,7 +86,7 @@ npx wrangler pipelines sinks list ### Wrangler CLI -To delete a sink, run the [`pipelines sinks delete`](/workers/wrangler/commands/#pipelines-sinks-delete) command: +To delete a sink, run the [`pipelines sinks delete`](/workers/wrangler/commands/pipelines/#pipelines-sinks-delete) command: ```bash npx wrangler pipelines sinks delete diff --git a/src/content/docs/pipelines/streams/manage-streams.mdx b/src/content/docs/pipelines/streams/manage-streams.mdx index 5b904fcd6af4be4..396983a7b60f257 100644 --- a/src/content/docs/pipelines/streams/manage-streams.mdx +++ b/src/content/docs/pipelines/streams/manage-streams.mdx @@ -30,13 +30,13 @@ Streams are made available to pipelines as SQL tables using the stream name (for ### Wrangler CLI -To create a stream, run the [`pipelines streams create`](/workers/wrangler/commands/#pipelines-streams-create) command: +To create a stream, run the 
[`pipelines streams create`](/workers/wrangler/commands/pipelines/#pipelines-streams-create) command: ```bash npx wrangler pipelines streams create ``` -Alternatively, to use the interactive setup wizard that helps you configure a stream, sink, and pipeline, run the [`pipelines setup`](/workers/wrangler/commands/#pipelines-setup) command: +Alternatively, to use the interactive setup wizard that helps you configure a stream, sink, and pipeline, run the [`pipelines setup`](/workers/wrangler/commands/pipelines/#pipelines-setup) command: ```bash npx wrangler pipelines setup @@ -128,13 +128,13 @@ Events that do not match the defined schema are accepted during ingestion but wi ### Wrangler CLI -To view a specific stream, run the [`pipelines streams get`](/workers/wrangler/commands/#pipelines-streams-get) command: +To view a specific stream, run the [`pipelines streams get`](/workers/wrangler/commands/pipelines/#pipelines-streams-get) command: ```bash npx wrangler pipelines streams get ``` -To list all streams in your account, run the [`pipelines streams list`](/workers/wrangler/commands/#pipelines-streams-list) command: +To list all streams in your account, run the [`pipelines streams list`](/workers/wrangler/commands/pipelines/#pipelines-streams-list) command: ```bash npx wrangler pipelines streams list @@ -180,7 +180,7 @@ For details on configuring authentication tokens and making authenticated reques ### Wrangler CLI -To delete a stream, run the [`pipelines streams delete`](/workers/wrangler/commands/#pipelines-streams-delete) command: +To delete a stream, run the [`pipelines streams delete`](/workers/wrangler/commands/pipelines/#pipelines-streams-delete) command: ```bash npx wrangler pipelines streams delete diff --git a/src/content/docs/pipelines/streams/writing-to-streams.mdx b/src/content/docs/pipelines/streams/writing-to-streams.mdx index f839fdcefdd43b1..ec16d69f4296238 100644 --- a/src/content/docs/pipelines/streams/writing-to-streams.mdx +++ 
b/src/content/docs/pipelines/streams/writing-to-streams.mdx @@ -55,7 +55,7 @@ export default { ### Typed pipeline bindings -When a stream has a defined schema, running `wrangler types` generates schema-specific TypeScript types for your pipeline bindings. Instead of the generic `Pipeline`, your bindings get a named record type with full autocomplete and compile-time type checking. Refer to the [`wrangler types` documentation](/workers/wrangler/commands/#types) to learn more. +When a stream has a defined schema, running `wrangler types` generates schema-specific TypeScript types for your pipeline bindings. Instead of the generic `Pipeline`, your bindings get a named record type with full autocomplete and compile-time type checking. Refer to the [`wrangler types` documentation](/workers/wrangler/commands/general/#types) to learn more. #### Generated types diff --git a/src/content/docs/queues/configuration/local-development.mdx b/src/content/docs/queues/configuration/local-development.mdx index f943c77623acbde..4932314b2c9c002 100644 --- a/src/content/docs/queues/configuration/local-development.mdx +++ b/src/content/docs/queues/configuration/local-development.mdx @@ -34,7 +34,7 @@ Your worker has access to the following bindings: Local development sessions create a standalone, local-only environment that mirrors the production environment Queues runs in so you can test your Workers _before_ you deploy to production. -Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/#dev) to learn more about how to configure a local development session. +Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/general/#dev) to learn more about how to configure a local development session. ## Separating producer & consumer Workers Wrangler supports running multiple Workers simultaneously with a single command. 
If your architecture separates the producer and consumer into distinct Workers, you can use this functionality to test the entire message flow locally. diff --git a/src/content/docs/queues/get-started.mdx b/src/content/docs/queues/get-started.mdx index 75e5d48cc2056f9..9bbca8691bfc548 100644 --- a/src/content/docs/queues/get-started.mdx +++ b/src/content/docs/queues/get-started.mdx @@ -151,7 +151,7 @@ You have built a queue and a producer Worker to publish messages to the queue. Y A consumer Worker receives messages from your queue. When the consumer Worker receives your queue's messages, it can write them to another source, such as a logging console or storage objects. -In this guide, you will create a consumer Worker and use it to log and inspect the messages with [`wrangler tail`](/workers/wrangler/commands/#tail). You will create your consumer Worker in the same Worker project that you created your producer Worker. +In this guide, you will create a consumer Worker and use it to log and inspect the messages with [`wrangler tail`](/workers/wrangler/commands/general/#tail). You will create your consumer Worker in the same Worker project that you created your producer Worker. :::note diff --git a/src/content/docs/queues/platform/limits.mdx b/src/content/docs/queues/platform/limits.mdx index b32bc1738c9cba8..437313f714418cb 100644 --- a/src/content/docs/queues/platform/limits.mdx +++ b/src/content/docs/queues/platform/limits.mdx @@ -42,7 +42,7 @@ The following limits apply to both Workers Paid and Workers Free plans with the ### Increasing Queue Consumer Worker CPU Limits -[Queue consumer Workers](/queues/reference/how-queues-works/#consumers) are Worker scripts, and share the same [per invocation CPU limits](/workers/platform/limits/#worker-limits) as any Workers do. Note that CPU time is active processing time: not time spent waiting on network requests, storage calls, or other general I/O. 
+[Queue consumer Workers](/queues/reference/how-queues-works/#consumers) are Worker scripts, and share the same [per invocation CPU limits](/workers/platform/limits/#account-plan-limits) as any Workers do. Note that CPU time is active processing time: not time spent waiting on network requests, storage calls, or other general I/O. By default, the maximum CPU time per consumer Worker invocation is set to 30 seconds, but can be increased by setting `limits.cpu_ms` in your Wrangler configuration: diff --git a/src/content/docs/r2-sql/query-data.mdx b/src/content/docs/r2-sql/query-data.mdx index 73c966c0fc71bf4..0475aa299173ac7 100644 --- a/src/content/docs/r2-sql/query-data.mdx +++ b/src/content/docs/r2-sql/query-data.mdx @@ -12,7 +12,7 @@ Query [Apache Iceberg](https://iceberg.apache.org/) tables managed by [R2 Data C ## Get your warehouse name -To query data with R2 SQL, you'll need your warehouse name associated with your [catalog](/r2/data-catalog/manage-catalogs/). To retrieve it, you can run the [`r2 bucket catalog get` command](/workers/wrangler/commands/#r2-bucket-catalog-get): +To query data with R2 SQL, you'll need your warehouse name associated with your [catalog](/r2/data-catalog/manage-catalogs/). To retrieve it, you can run the [`r2 bucket catalog get` command](/workers/wrangler/commands/r2/#r2-bucket-catalog-get): ```bash npx wrangler r2 bucket catalog get @@ -40,7 +40,7 @@ WRANGLER_R2_SQL_AUTH_TOKEN=YOUR_API_TOKEN Where `YOUR_API_TOKEN` is the token you created with the [required permissions](#authentication). For more information on setting environment variables, refer to [Wrangler system environment variables](/workers/wrangler/system-environment-variables/). 
-To run a SQL query, run the [`r2 sql query` command](/workers/wrangler/commands/#r2-sql-query): +To run a SQL query, run the [`r2 sql query` command](/workers/wrangler/commands/r2/#r2-sql-query): ```bash npx wrangler r2 sql query "SELECT * FROM namespace.table_name limit 10;" diff --git a/src/content/docs/r2/api/workers/workers-api-usage.mdx b/src/content/docs/r2/api/workers/workers-api-usage.mdx index 7912e9a4dbdadfc..77fca2010a4dfb1 100644 --- a/src/content/docs/r2/api/workers/workers-api-usage.mdx +++ b/src/content/docs/r2/api/workers/workers-api-usage.mdx @@ -361,7 +361,7 @@ This secret is now available as `AUTH_KEY_SECRET` on the `env` parameter in your ## 6. Deploy your Worker -With your Worker and bucket set up, run the `npx wrangler deploy` [command](/workers/wrangler/commands/#deploy) to deploy to Cloudflare's global network: +With your Worker and bucket set up, run the `npx wrangler deploy` [command](/workers/wrangler/commands/general/#deploy) to deploy to Cloudflare's global network: ```sh npx wrangler deploy diff --git a/src/content/docs/r2/buckets/bucket-locks.mdx b/src/content/docs/r2/buckets/bucket-locks.mdx index 3df691778dbdcd7..0ade9ceee47841d 100644 --- a/src/content/docs/r2/buckets/bucket-locks.mdx +++ b/src/content/docs/r2/buckets/bucket-locks.mdx @@ -28,14 +28,14 @@ Before getting started, you will need: 1. Install [`npm`](https://docs.npmjs.com/getting-started). 2. Install [Wrangler, the Developer Platform CLI](/workers/wrangler/install-and-update/). -3. Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/#login). -4. Add a bucket lock rule to your bucket by running the [`r2 bucket lock add` command](/workers/wrangler/commands/#r2-bucket-lock-add). +3. Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/general/#login). +4. Add a bucket lock rule to your bucket by running the [`r2 bucket lock add` command](/workers/wrangler/commands/r2/#r2-bucket-lock-add). 
```sh npx wrangler r2 bucket lock add [OPTIONS] ``` -Alternatively, you can set the entire bucket lock configuration for a bucket from a JSON file using the [`r2 bucket lock set` command](/workers/wrangler/commands/#r2-bucket-lock-set). +Alternatively, you can set the entire bucket lock configuration for a bucket from a JSON file using the [`r2 bucket lock set` command](/workers/wrangler/commands/r2/#r2-bucket-lock-set). ```sh npx wrangler r2 bucket lock set --file @@ -99,7 +99,7 @@ If your bucket is setup with [jurisdictional restrictions](/r2/reference/data-lo ### Wrangler -To list bucket lock rules, run the [`r2 bucket lock list` command](/workers/wrangler/commands/#r2-bucket-lock-list): +To list bucket lock rules, run the [`r2 bucket lock list` command](/workers/wrangler/commands/r2/#r2-bucket-lock-list): ```sh npx wrangler r2 bucket lock list @@ -122,7 +122,7 @@ For more information on required parameters and examples of how to get bucket lo ### Wrangler -To remove a bucket lock rule, run the [`r2 bucket lock remove` command](/workers/wrangler/commands/#r2-bucket-lock-remove): +To remove a bucket lock rule, run the [`r2 bucket lock remove` command](/workers/wrangler/commands/r2/#r2-bucket-lock-remove): ```sh npx wrangler r2 bucket lock remove --id diff --git a/src/content/docs/r2/buckets/create-buckets.mdx b/src/content/docs/r2/buckets/create-buckets.mdx index 9ed4b7e4ab39ed7..964655c8eb66a6a 100644 --- a/src/content/docs/r2/buckets/create-buckets.mdx +++ b/src/content/docs/r2/buckets/create-buckets.mdx @@ -17,7 +17,7 @@ The R2 support in Wrangler allows you to manage buckets and perform basic operat ## Bucket-Level Operations -Create a bucket with the [`r2 bucket create`](/workers/wrangler/commands/#r2-bucket-create) command: +Create a bucket with the [`r2 bucket create`](/workers/wrangler/commands/r2/#r2-bucket-create) command: ```sh wrangler r2 bucket create your-bucket-name @@ -33,13 +33,13 @@ The placeholder text is only for the example. 
::: -List buckets in the current account with the [`r2 bucket list`](/workers/wrangler/commands/#r2-bucket-list) command: +List buckets in the current account with the [`r2 bucket list`](/workers/wrangler/commands/r2/#r2-bucket-list) command: ```sh wrangler r2 bucket list ``` -Delete a bucket with the [`r2 bucket delete`](/workers/wrangler/commands/#r2-bucket-delete) command. Note that the bucket must be empty and all objects must be deleted. +Delete a bucket with the [`r2 bucket delete`](/workers/wrangler/commands/r2/#r2-bucket-delete) command. Note that the bucket must be empty and all objects must be deleted. ```sh wrangler r2 bucket delete BUCKET_TO_DELETE diff --git a/src/content/docs/r2/buckets/event-notifications.mdx b/src/content/docs/r2/buckets/event-notifications.mdx index 496443727023289..ed99ee2fb8c4419 100644 --- a/src/content/docs/r2/buckets/event-notifications.mdx +++ b/src/content/docs/r2/buckets/event-notifications.mdx @@ -35,7 +35,7 @@ To begin, install [`npm`](https://docs.npmjs.com/getting-started). Then [install #### Enable event notifications on your R2 bucket -Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/#login). Then add an [event notification rule](/r2/buckets/event-notifications/#event-notification-rules) to your bucket by running the [`r2 bucket notification create` command](/workers/wrangler/commands/#r2-bucket-notification-create). +Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/general/#login). Then add an [event notification rule](/r2/buckets/event-notifications/#event-notification-rules) to your bucket by running the [`r2 bucket notification create` command](/workers/wrangler/commands/r2/#r2-bucket-notification-create). 
```sh npx wrangler r2 bucket notification create --event-type --queue diff --git a/src/content/docs/r2/buckets/object-lifecycles.mdx b/src/content/docs/r2/buckets/object-lifecycles.mdx index 5437bc4c9cb6cac..c414ce0ca8c0428 100644 --- a/src/content/docs/r2/buckets/object-lifecycles.mdx +++ b/src/content/docs/r2/buckets/object-lifecycles.mdx @@ -42,14 +42,14 @@ When you create an object lifecycle rule, you can specify which prefix you would 1. Install [`npm`](https://docs.npmjs.com/getting-started). 2. Install [Wrangler, the Developer Platform CLI](/workers/wrangler/install-and-update/). -3. Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/#login). -4. Add a lifecycle rule to your bucket by running the [`r2 bucket lifecycle add` command](/workers/wrangler/commands/#r2-bucket-lifecycle-add). +3. Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/general/#login). +4. Add a lifecycle rule to your bucket by running the [`r2 bucket lifecycle add` command](/workers/wrangler/commands/r2/#r2-bucket-lifecycle-add). ```sh npx wrangler r2 bucket lifecycle add [OPTIONS] ``` -Alternatively you can set the entire lifecycle configuration for a bucket from a JSON file using the [`r2 bucket lifecycle set` command](/workers/wrangler/commands/#r2-bucket-lifecycle-set). +Alternatively you can set the entire lifecycle configuration for a bucket from a JSON file using the [`r2 bucket lifecycle set` command](/workers/wrangler/commands/r2/#r2-bucket-lifecycle-set). ```sh npx wrangler r2 bucket lifecycle set --file @@ -145,7 +145,7 @@ await client ### Wrangler -To get the list of lifecycle rules associated with your bucket, run the [`r2 bucket lifecycle list` command](/workers/wrangler/commands/#r2-bucket-lifecycle-list). +To get the list of lifecycle rules associated with your bucket, run the [`r2 bucket lifecycle list` command](/workers/wrangler/commands/r2/#r2-bucket-lifecycle-list). 
```sh npx wrangler r2 bucket lifecycle list @@ -190,7 +190,7 @@ console.log( ### Wrangler -To remove a specific lifecycle rule from your bucket, run the [`r2 bucket lifecycle remove` command](/workers/wrangler/commands/#r2-bucket-lifecycle-remove). +To remove a specific lifecycle rule from your bucket, run the [`r2 bucket lifecycle remove` command](/workers/wrangler/commands/r2/#r2-bucket-lifecycle-remove). ```sh npx wrangler r2 bucket lifecycle remove --id diff --git a/src/content/docs/r2/data-catalog/manage-catalogs.mdx b/src/content/docs/r2/data-catalog/manage-catalogs.mdx index ef4d15d4d15b1c4..276c1d13a6c363f 100644 --- a/src/content/docs/r2/data-catalog/manage-catalogs.mdx +++ b/src/content/docs/r2/data-catalog/manage-catalogs.mdx @@ -43,7 +43,7 @@ Enabling the catalog on a bucket turns on the REST catalog interface and provide -To enable the catalog on your bucket, run the [`r2 bucket catalog enable command`](/workers/wrangler/commands/#r2-bucket-catalog-enable): +To enable the catalog on your bucket, run the [`r2 bucket catalog enable command`](/workers/wrangler/commands/r2/#r2-bucket-catalog-enable): ```bash npx wrangler r2 bucket catalog enable @@ -70,7 +70,7 @@ When you disable the catalog on a bucket, it immediately stops serving requests -To disable the catalog on your bucket, run the [`r2 bucket catalog disable command`](/workers/wrangler/commands/#r2-bucket-catalog-disable): +To disable the catalog on your bucket, run the [`r2 bucket catalog disable command`](/workers/wrangler/commands/r2/#r2-bucket-catalog-disable): ```bash npx wrangler r2 bucket catalog disable @@ -104,7 +104,7 @@ Refer to [Authenticate your Iceberg engine](#authenticate-your-iceberg-engine) f -To enable the compaction on your catalog, run the [`r2 bucket catalog compaction enable` command](/workers/wrangler/commands/#r2-bucket-catalog-compaction-enable): +To enable the compaction on your catalog, run the [`r2 bucket catalog compaction enable` 
command](/workers/wrangler/commands/r2/#r2-bucket-catalog-compaction-enable): ```bash # Enable catalog-level compaction (all tables) @@ -146,7 +146,7 @@ Disabling compaction will prevent the process from running for all tables (catal -To disable the compaction on your catalog, run the [`r2 bucket catalog compaction disable` command](/workers/wrangler/commands/#r2-bucket-catalog-compaction-disable): +To disable the compaction on your catalog, run the [`r2 bucket catalog compaction disable` command](/workers/wrangler/commands/r2/#r2-bucket-catalog-compaction-disable): ```bash # Disable catalog-level compaction (all tables) @@ -167,7 +167,7 @@ Snapshot expiration automatically removes old table snapshots to reduce metadata Snapshot expiration commands are available as of Wrangler version 4.56.0. ::: -To enable snapshot expiration on your catalog, run the [`r2 bucket catalog snapshot-expiration enable` command](/workers/wrangler/commands/#r2-bucket-catalog-snapshot-expiration-enable): +To enable snapshot expiration on your catalog, run the [`r2 bucket catalog snapshot-expiration enable` command](/workers/wrangler/commands/r2/#r2-bucket-catalog-snapshot-expiration-enable): ```bash # Enable catalog-level snapshot expiration (all tables) diff --git a/src/content/docs/r2/data-migration/sippy.mdx b/src/content/docs/r2/data-migration/sippy.mdx index ae7316ba9cfc7db..daac6010c417e0b 100644 --- a/src/content/docs/r2/data-migration/sippy.mdx +++ b/src/content/docs/r2/data-migration/sippy.mdx @@ -60,7 +60,7 @@ To begin, install [`npm`](https://docs.npmjs.com/getting-started). Then [install #### Enable Sippy on your R2 bucket -Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/#login). Then run the [`r2 bucket sippy enable` command](/workers/wrangler/commands/#r2-bucket-sippy-enable): +Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/general/#login). 
Then run the [`r2 bucket sippy enable` command](/workers/wrangler/commands/r2/#r2-bucket-sippy-enable): ```sh npx wrangler r2 bucket sippy enable @@ -114,7 +114,7 @@ You can optionally select a time window to query. This defaults to the last 24 h ### Wrangler -To disable Sippy, run the [`r2 bucket sippy disable` command](/workers/wrangler/commands/#r2-bucket-sippy-disable): +To disable Sippy, run the [`r2 bucket sippy disable` command](/workers/wrangler/commands/r2/#r2-bucket-sippy-disable): ```sh npx wrangler r2 bucket sippy disable diff --git a/src/content/docs/r2/objects/delete-objects.mdx b/src/content/docs/r2/objects/delete-objects.mdx index 05dfb021ea1e188..021a91d5ad09eca 100644 --- a/src/content/docs/r2/objects/delete-objects.mdx +++ b/src/content/docs/r2/objects/delete-objects.mdx @@ -95,7 +95,7 @@ Deleting objects from a bucket is irreversible. ::: -Use [Wrangler](/workers/wrangler/install-and-update/) to delete objects. Run the [`r2 object delete` command](/workers/wrangler/commands/#r2-object-delete): +Use [Wrangler](/workers/wrangler/install-and-update/) to delete objects. Run the [`r2 object delete` command](/workers/wrangler/commands/r2/#r2-object-delete): ```sh wrangler r2 object delete test-bucket/image.png diff --git a/src/content/docs/r2/objects/download-objects.mdx b/src/content/docs/r2/objects/download-objects.mdx index 83441b0a5922890..a82d497b85416f8 100644 --- a/src/content/docs/r2/objects/download-objects.mdx +++ b/src/content/docs/r2/objects/download-objects.mdx @@ -99,7 +99,7 @@ For details on generating and using presigned URLs, refer to [Presigned URLs](/r ## Download via Wrangler -Use [Wrangler](/workers/wrangler/install-and-update/) to download objects. Run the [`r2 object get` command](/workers/wrangler/commands/#r2-object-get): +Use [Wrangler](/workers/wrangler/install-and-update/) to download objects. 
Run the [`r2 object get` command](/workers/wrangler/commands/r2/#r2-object-get): ```sh wrangler r2 object get test-bucket/image.png diff --git a/src/content/docs/r2/objects/upload-objects.mdx b/src/content/docs/r2/objects/upload-objects.mdx index e12c1e74e2081e9..bfc269cb4c8b217 100644 --- a/src/content/docs/r2/objects/upload-objects.mdx +++ b/src/content/docs/r2/objects/upload-objects.mdx @@ -790,7 +790,7 @@ Wrangler supports uploading files up to 315 MB and only allows one object at a t ::: -Use [Wrangler](/workers/wrangler/install-and-update/) to upload objects. Run the [`r2 object put` command](/workers/wrangler/commands/#r2-object-put): +Use [Wrangler](/workers/wrangler/install-and-update/) to upload objects. Run the [`r2 object put` command](/workers/wrangler/commands/r2/#r2-object-put): ```sh wrangler r2 object put test-bucket/image.png --file=image.png diff --git a/src/content/docs/r2/tutorials/summarize-pdf.mdx b/src/content/docs/r2/tutorials/summarize-pdf.mdx index 0ee2b74718a898c..9d82961ea358ecf 100644 --- a/src/content/docs/r2/tutorials/summarize-pdf.mdx +++ b/src/content/docs/r2/tutorials/summarize-pdf.mdx @@ -463,7 +463,7 @@ The queue handler now adds the summary to the R2 bucket as a text file. ## 9. Enable event notifications -Your `queue` handler is ready to handle incoming event notification messages. You need to enable event notifications with the [`wrangler r2 bucket notification create` command](/workers/wrangler/commands/#r2-bucket-notification-create) for your bucket. The following command creates an event notification for the `object-create` event type for the `pdf` suffix: +Your `queue` handler is ready to handle incoming event notification messages. You need to enable event notifications with the [`wrangler r2 bucket notification create` command](/workers/wrangler/commands/r2/#r2-bucket-notification-create) for your bucket. 
The following command creates an event notification for the `object-create` event type for the `pdf` suffix: ```sh npx wrangler r2 bucket notification create --event-type object-create --queue pdf-summarizer --suffix "pdf" @@ -475,7 +475,7 @@ An event notification is created for the `pdf` suffix. When a new file with the ## 10. Deploy your Worker -To deploy your Worker, run the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command: +To deploy your Worker, run the [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) command: ```sh npx wrangler deploy @@ -487,7 +487,7 @@ In the output of the `wrangler deploy` command, copy the URL. This is the URL of To test the application, navigate to the URL of your deployed application and upload a PDF file. Alternatively, you can use the [Cloudflare dashboard](https://dash.cloudflare.com/) to upload a PDF file. -To view the logs, you can use the [`wrangler tail`](/workers/wrangler/commands/#tail) command. +To view the logs, you can use the [`wrangler tail`](/workers/wrangler/commands/general/#tail) command. ```sh npx wrangler tail diff --git a/src/content/docs/r2/tutorials/upload-logs-event-notifications.mdx b/src/content/docs/r2/tutorials/upload-logs-event-notifications.mdx index 5c0edc5eac26d6c..d1910f4be3751f4 100644 --- a/src/content/docs/r2/tutorials/upload-logs-event-notifications.mdx +++ b/src/content/docs/r2/tutorials/upload-logs-event-notifications.mdx @@ -136,7 +136,7 @@ export default { ## 7. Deploy your Worker -To deploy your consumer Worker, run the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command: +To deploy your consumer Worker, run the [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) command: ```sh npx wrangler deploy @@ -144,7 +144,7 @@ npx wrangler deploy ## 8. 
Enable event notifications -Now that you have your consumer Worker ready to handle incoming event notification messages, you need to enable event notifications with the [`wrangler r2 bucket notification create` command](/workers/wrangler/commands/#r2-bucket-notification-create) for `example-upload-bucket`: +Now that you have your consumer Worker ready to handle incoming event notification messages, you need to enable event notifications with the [`wrangler r2 bucket notification create` command](/workers/wrangler/commands/r2/#r2-bucket-notification-create) for `example-upload-bucket`: ```sh npx wrangler r2 bucket notification create example-upload-bucket --event-type object-create --queue example-event-notification-queue diff --git a/src/content/docs/secrets-store/integrations/workers.mdx b/src/content/docs/secrets-store/integrations/workers.mdx index ab4b9bd3ff4829a..6f554d1faa65f64 100644 --- a/src/content/docs/secrets-store/integrations/workers.mdx +++ b/src/content/docs/secrets-store/integrations/workers.mdx @@ -23,7 +23,7 @@ This is different from Workers [Variables and Secrets](/workers/configuration/se - You should also have a store created under the **Secrets Store** tab on the Dashboard. The first store in your account is created automatically when a user with [Super Administrator or Secrets Store Admin role](/secrets-store/access-control/) interacts with it. - - If no store exists in your account yet and you have the necessary permissions, you can use the [Wrangler command](/workers/wrangler/commands/#secrets-store-store) `secrets-store store create --remote` to create your first store. + - If no store exists in your account yet and you have the necessary permissions, you can use the [Wrangler command](/workers/wrangler/commands/secrets-store/#secrets-store-store) `secrets-store store create --remote` to create your first store. :::caution[Local development mode] This guide assumes you are working in production. 
To use Secrets Store locally, you must use `secrets-store secret` [Wrangler commands](/workers/wrangler/commands/) without the `--remote` flag. @@ -38,7 +38,7 @@ You may also add account secrets directly from the Workers settings on the dashb ::: -Use the [Wrangler command](/workers/wrangler/commands/#secrets-store-secret) `secrets-store secret create`. +Use the [Wrangler command](/workers/wrangler/commands/secrets-store/#secrets-store-secret) `secrets-store secret create`. To use the following example, replace the store ID and secret name by your actual data. You can find and copy the store ID from the [Secrets Store tab](https://dash.cloudflare.com/?to=/:account/secrets-store/) on the dashboard or use `wrangler secrets-store store list`. @@ -69,7 +69,7 @@ npx wrangler secrets-store secret create --name MY_SECRET_NAME --scop -You can find and copy the store ID from the [Secrets Store tab](https://dash.cloudflare.com/?to=/:account/secrets-store/) on the dashboard or use the [Wrangler command](/workers/wrangler/commands/#secrets-store-store). Also, make sure your secret `name` does not contain spaces. +You can find and copy the store ID from the [Secrets Store tab](https://dash.cloudflare.com/?to=/:account/secrets-store/) on the dashboard or use the [Wrangler command](/workers/wrangler/commands/secrets-store/#secrets-store-store). Also, make sure your secret `name` does not contain spaces. Refer to [Secrets Store API](/api/resources/secrets_store/) for the full API documentation. 
diff --git a/src/content/docs/secrets-store/manage-secrets/how-to.mdx b/src/content/docs/secrets-store/manage-secrets/how-to.mdx index 532c79bdb5e15f1..abb9bfd4a4fc48b 100644 --- a/src/content/docs/secrets-store/manage-secrets/how-to.mdx +++ b/src/content/docs/secrets-store/manage-secrets/how-to.mdx @@ -13,7 +13,7 @@ You must have a [Super Administrator or Secrets Store Admin role](/secrets-store ## Manage via Wrangler -[Wrangler](/workers/wrangler/) is a command-line interface (CLI) that allows you to manage [Cloudflare Workers](/workers/) projects. Refer to [Wrangler commands](/workers/wrangler/commands/#secrets-store-secret) for guidance on how to use it with Secrets Store. +[Wrangler](/workers/wrangler/) is a command-line interface (CLI) that allows you to manage [Cloudflare Workers](/workers/) projects. Refer to [Wrangler commands](/workers/wrangler/commands/secrets-store/#secrets-store-secret) for guidance on how to use it with Secrets Store. ## Create a secret diff --git a/src/content/docs/secrets-store/manage-secrets/index.mdx b/src/content/docs/secrets-store/manage-secrets/index.mdx index af19467595ff102..982de4d9ae9ee87 100644 --- a/src/content/docs/secrets-store/manage-secrets/index.mdx +++ b/src/content/docs/secrets-store/manage-secrets/index.mdx @@ -22,7 +22,7 @@ If you use [Wrangler](/secrets-store/manage-secrets/how-to/#manage-via-wrangler) ## Resources -- [Manage via Wrangler](/workers/wrangler/commands/#secrets-store-secret) +- [Manage via Wrangler](/workers/wrangler/commands/secrets-store/#secrets-store-secret) - [Create a secret](/secrets-store/manage-secrets/how-to/#create-a-secret) - [Duplicate a secret](/secrets-store/manage-secrets/how-to/#duplicate-a-secret) - [Edit a secret](/secrets-store/manage-secrets/how-to/#edit-a-secret) diff --git a/src/content/docs/style-guide/components/resources-by-selector.mdx b/src/content/docs/style-guide/components/resources-by-selector.mdx index 03d7546122f01f5..6fb230b4a9326d6 100644 --- 
a/src/content/docs/style-guide/components/resources-by-selector.mdx +++ b/src/content/docs/style-guide/components/resources-by-selector.mdx @@ -50,4 +50,4 @@ import { ResourcesBySelector } from "~/components"; - `showLastUpdated` - If set to `true`, will add the last updated date, which is added in the [`updated` frontmatter value](/style-guide/frontmatter/custom-properties/#updated). + If set to `true`, will add the last updated date, which is added in the [`updated` frontmatter value](/style-guide/frontmatter/custom-properties/#properties). diff --git a/src/content/docs/vectorize/best-practices/create-indexes.mdx b/src/content/docs/vectorize/best-practices/create-indexes.mdx index cbaa0d8aa52cf25..026826899705257 100644 --- a/src/content/docs/vectorize/best-practices/create-indexes.mdx +++ b/src/content/docs/vectorize/best-practices/create-indexes.mdx @@ -33,7 +33,7 @@ To create an index with `wrangler`: npx wrangler vectorize create your-index-name --dimensions=NUM_DIMENSIONS --metric=SELECTED_METRIC ``` -To create an index that can accept vector embeddings from Worker's AI's [`@cf/baai/bge-base-en-v1.5`](/workers-ai/models/#text-embeddings) embedding model, which outputs vectors with 768 dimensions, use the following command: +To create an index that can accept vector embeddings from Workers AI's [`@cf/baai/bge-base-en-v1.5`](/workers-ai/models/?tasks=Text+Embeddings) embedding model, which outputs vectors with 768 dimensions, use the following command: ```sh npx wrangler vectorize create your-index-name --dimensions=768 --metric=cosine @@ -89,7 +89,7 @@ The following table highlights some example embeddings models and their output d | Google Cloud - `multimodalembedding` | 1408 | Multi-modal (text, images) | :::note[Learn more about Workers AI] -Refer to the [Workers AI documentation](/workers-ai/models/#text-embeddings) to learn about its built-in embedding models.
+Refer to the [Workers AI documentation](/workers-ai/models/?tasks=Text+Embeddings) to learn about its built-in embedding models. ::: ## Distance metrics diff --git a/src/content/docs/vectorize/best-practices/query-vectors.mdx b/src/content/docs/vectorize/best-practices/query-vectors.mdx index a45c80be1ec8ea9..18762ac794abe80 100644 --- a/src/content/docs/vectorize/best-practices/query-vectors.mdx +++ b/src/content/docs/vectorize/best-practices/query-vectors.mdx @@ -85,7 +85,7 @@ High-precision scoring is enabled by setting `returnValues: true` on your query. ## Workers AI -If you are generating embeddings from a [Workers AI](/workers-ai/models/#text-embeddings) text embedding model, the response type from `env.AI.run()` is an object that includes both the `shape` of the response vector - e.g. `[1,768]` - and the vector `data` as an array of vectors: +If you are generating embeddings from a [Workers AI](/workers-ai/models/?tasks=Text+Embeddings) text embedding model, the response type from `env.AI.run()` is an object that includes both the `shape` of the response vector - e.g. `[1,768]` - and the vector `data` as an array of vectors: ```ts interface EmbeddingResponse { diff --git a/src/content/docs/vectorize/get-started/embeddings.mdx b/src/content/docs/vectorize/get-started/embeddings.mdx index 71caf4c4a0b1e01..d239fb46a4b80bb 100644 --- a/src/content/docs/vectorize/get-started/embeddings.mdx +++ b/src/content/docs/vectorize/get-started/embeddings.mdx @@ -134,7 +134,7 @@ Specifically: ## 4. Set up Workers AI -Before you deploy your embedding example, ensure your Worker uses your model catalog, including the [text embedding model](/workers-ai/models/#text-embeddings) built-in. +Before you deploy your embedding example, ensure your Worker uses your model catalog, including the [text embedding model](/workers-ai/models/?tasks=Text+Embeddings) built-in. 
From within the `embeddings-tutorial` directory, open your Wrangler file in your editor and add the new `[[ai]]` binding to make Workers AI's models available in your Worker: diff --git a/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx b/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx index 1de1bb3c7225b2c..924e1bd248e2f2b 100644 --- a/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx +++ b/src/content/docs/vectorize/reference/what-is-a-vector-database.mdx @@ -69,7 +69,7 @@ In Vectorize, a database and an index are the same concept. Each index you creat Vector embeddings represent the features of a machine learning model as a numerical vector (array of numbers). They are a one-way representation that encodes how a machine learning model understands the input(s) provided to it, based on how the model was originally trained and its' internal structure. -For example, a [text embedding model](/workers-ai/models/#text-embeddings) available in Workers AI is able to take text input and represent it as a 768-dimension vector. The text `This is a story about an orange cloud`, when represented as a vector embedding, resembles the following: +For example, a [text embedding model](/workers-ai/models/?tasks=Text+Embeddings) available in Workers AI is able to take text input and represent it as a 768-dimension vector. The text `This is a story about an orange cloud`, when represented as a vector embedding, resembles the following: ```json [-0.019273685291409492,-0.01913292706012726,<764 dimensions here>,0.0007094172760844231,0.043409910053014755] diff --git a/src/content/docs/waf/analytics/security-analytics.mdx b/src/content/docs/waf/analytics/security-analytics.mdx index 0ff0e7993830294..3f1431d1a88a6ba 100644 --- a/src/content/docs/waf/analytics/security-analytics.mdx +++ b/src/content/docs/waf/analytics/security-analytics.mdx @@ -100,7 +100,7 @@ Only available in the [new security dashboard](/security/). 
The suspicious activity section gives you information about suspicious requests that were identified by the Cloudflare detections you have enabled. The supported detections include: -- [Account takeover](/bots/additional-configurations/detection-ids/#account-takeover-detections) +- [Account takeover](/bots/additional-configurations/detection-ids/account-takeover-detections/) - [Leaked credential check](/waf/detections/leaked-credentials/) (only for user and password leaked) - [Malicious uploads](/waf/detections/malicious-uploads/) - [WAF attack score](/waf/detections/attack-score/) diff --git a/src/content/docs/waf/detections/leaked-credentials/examples.mdx b/src/content/docs/waf/detections/leaked-credentials/examples.mdx index 1933ac19b6d38e5..b4b97002ee0b6cc 100644 --- a/src/content/docs/waf/detections/leaked-credentials/examples.mdx +++ b/src/content/docs/waf/detections/leaked-credentials/examples.mdx @@ -17,9 +17,9 @@ import { Example } from "~/components"; Access to the `cf.waf.credential_check.username_and_password_leaked` field requires a Pro plan or above. ::: -[Create a rate limiting rule](/waf/rate-limiting-rules/create-zone-dashboard/) using [account takeover (ATO) detection](/bots/additional-configurations/detection-ids/#account-takeover-detections) and leaked credentials fields to limit volumetric attacks from particular IP addresses, JA4 Fingerprints, or countries. +[Create a rate limiting rule](/waf/rate-limiting-rules/create-zone-dashboard/) using [account takeover (ATO) detection](/bots/additional-configurations/detection-ids/account-takeover-detections/) and leaked credentials fields to limit volumetric attacks from particular IP addresses, JA4 Fingerprints, or countries. 
-The following example rule applies rate limiting to requests with a specific [ATO detection ID](/bots/additional-configurations/detection-ids/#account-takeover-detections) (corresponding to `Observes all login traffic to the zone`) that contain a previously leaked username and password: +The following example rule applies rate limiting to requests with a specific [ATO detection ID](/bots/additional-configurations/detection-ids/account-takeover-detections/) (corresponding to `Observes all login traffic to the zone`) that contain a previously leaked username and password: diff --git a/src/content/docs/workers-ai/get-started/workers-wrangler.mdx b/src/content/docs/workers-ai/get-started/workers-wrangler.mdx index 1189e92bfe18a1b..bbb8b1a8805249e 100644 --- a/src/content/docs/workers-ai/get-started/workers-wrangler.mdx +++ b/src/content/docs/workers-ai/get-started/workers-wrangler.mdx @@ -101,7 +101,7 @@ Up to this point, you have created an AI binding for your Worker and configured ## 4. Develop locally with Wrangler -While in your project directory, test Workers AI locally by running [`wrangler dev`](/workers/wrangler/commands/#dev): +While in your project directory, test Workers AI locally by running [`wrangler dev`](/workers/wrangler/commands/general/#dev): ```sh npx wrangler dev diff --git a/src/content/docs/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai.mdx b/src/content/docs/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai.mdx index 8abda360652a61e..d78e43173350795 100644 --- a/src/content/docs/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai.mdx +++ b/src/content/docs/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai.mdx @@ -69,9 +69,9 @@ cd rag-ai-tutorial ## 2. 
Develop with Wrangler CLI -The Workers command-line interface, [Wrangler](/workers/wrangler/install-and-update/), allows you to [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects. C3 will install Wrangler in projects by default. +The Workers command-line interface, [Wrangler](/workers/wrangler/install-and-update/), allows you to [create](/workers/wrangler/commands/general/#init), [test](/workers/wrangler/commands/general/#dev), and [deploy](/workers/wrangler/commands/general/#deploy) your Workers projects. C3 will install Wrangler in projects by default. -After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to test your Worker locally during development. +After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/general/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to test your Worker locally during development. ```sh npx wrangler dev @@ -87,7 +87,7 @@ To begin using Cloudflare's AI products, you can add the `ai` block to the [Wran If you have not used Wrangler before, it will try to open your web browser to login with your Cloudflare account. -If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation for more information. +If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/general/#login) documentation for more information. 
::: diff --git a/src/content/docs/workers/best-practices/workers-best-practices.mdx b/src/content/docs/workers/best-practices/workers-best-practices.mdx index 00c71010825fb93..36916453a1204d7 100644 --- a/src/content/docs/workers/best-practices/workers-best-practices.mdx +++ b/src/content/docs/workers/best-practices/workers-best-practices.mdx @@ -56,7 +56,7 @@ For more information, refer to [Node.js compatibility](/workers/runtime-apis/nod ### Generate binding types with wrangler types -Do not hand-write your `Env` interface. Run [`wrangler types`](/workers/wrangler/commands/#types) to generate a type definition file that matches your actual Wrangler configuration. This catches mismatches between your config and code at compile time instead of at deploy time. +Do not hand-write your `Env` interface. Run [`wrangler types`](/workers/wrangler/commands/general/#types) to generate a type definition file that matches your actual Wrangler configuration. This catches mismatches between your config and code at compile time instead of at deploy time. Re-run `wrangler types` whenever you add or rename a binding. @@ -79,7 +79,7 @@ export default { -For more information, refer to [wrangler types](/workers/wrangler/commands/#types). +For more information, refer to [wrangler types](/workers/wrangler/commands/general/#types). ### Store secrets with wrangler secret, not in source diff --git a/src/content/docs/workers/ci-cd/builds/advanced-setups.mdx b/src/content/docs/workers/ci-cd/builds/advanced-setups.mdx index e81ce45b191b7e5..ecd92ae20d51891 100644 --- a/src/content/docs/workers/ci-cd/builds/advanced-setups.mdx +++ b/src/content/docs/workers/ci-cd/builds/advanced-setups.mdx @@ -69,7 +69,7 @@ When a new commit is made to `ecommerce-monorepo`, a build and deploy will be tr You can use [Wrangler Environments](/workers/wrangler/environments/) with Workers Builds by completing the following steps: -1. 
[Deploy via Wrangler](/workers/wrangler/commands/#deploy) to create the Workers for your environments on the Dashboard, if you do not already have them. +1. [Deploy via Wrangler](/workers/wrangler/commands/general/#deploy) to create the Workers for your environments on the Dashboard, if you do not already have them. 2. Find the Workers for your environments. They are typically named `[name of Worker] - [environment name]`. 3. Connect your repository to each of the Workers for your environment. 4. In each of the Workers, edit your Wrangler commands to include the flag `--env: ` in the build configurations for both the deploy command, and the non-production branch deploy command ([if applicable](/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds)). diff --git a/src/content/docs/workers/ci-cd/builds/configuration.mdx b/src/content/docs/workers/ci-cd/builds/configuration.mdx index eeb39d648f2877c..880177fe78e07e7 100644 --- a/src/content/docs/workers/ci-cd/builds/configuration.mdx +++ b/src/content/docs/workers/ci-cd/builds/configuration.mdx @@ -33,7 +33,7 @@ Note that when you update and save build settings, the updated settings will be | **Git repository** | Choose the Git repository you would like to connect your Worker to. | | **Git branch** | Select the branch you would like Cloudflare to listen to for new commits. This will be defaulted to `main`. | | **Build command** _(Optional)_ | Set a build command if your project requires a build step (e.g. `npm run build`). This is necessary, for example, when using a [front-end framework](/workers/ci-cd/builds/configuration/#framework-support) such as Next.js or Remix. | -| **[Deploy command](/workers/ci-cd/builds/configuration/#deploy-command)** | The deploy command lets you set the [specific Wrangler command](/workers/wrangler/commands/#deploy) used to deploy your Worker. Your deploy command will default to `npx wrangler deploy` but you may customize this command. 
Workers Builds will use the Wrangler version set in your `package json`. | +| **[Deploy command](/workers/ci-cd/builds/configuration/#deploy-command)** | The deploy command lets you set the [specific Wrangler command](/workers/wrangler/commands/general/#deploy) used to deploy your Worker. Your deploy command will default to `npx wrangler deploy` but you may customize this command. Workers Builds will use the Wrangler version set in your `package json`. | | **[Non-production branch deploy command](/workers/ci-cd/builds/configuration/#non-production-branch-deploy-command)** | Set a command to run when executing [a build for commit on a non-production branch](/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds). This will default to `npx wrangler versions upload` but you may customize this command. Workers Builds will use the Wrangler version set in your `package json`. | | **Root directory** _(Optional)_ | Specify the path to your project. The root directory defines where the build command will be run and can be helpful in [monorepos](/workers/ci-cd/builds/advanced-setups/#monorepos) to isolate a specific project within the repository for builds. | | **[API token](/workers/ci-cd/builds/configuration/#api-token)** _(Optional)_ | The API token is used to authenticate your build request and authorize the upload and deployment of your Worker to Cloudflare. By default, Cloudflare will automatically generate an API token for your account when using Workers Builds, and continue to use this API token for all subsequent builds. Alternatively, you can [create your own API token](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/authentication/#generate-tokens), or select one that you already own. 
| diff --git a/src/content/docs/workers/ci-cd/builds/git-integration/index.mdx b/src/content/docs/workers/ci-cd/builds/git-integration/index.mdx index 25300d8c9954fe9..259e7f2cae1bfd4 100644 --- a/src/content/docs/workers/ci-cd/builds/git-integration/index.mdx +++ b/src/content/docs/workers/ci-cd/builds/git-integration/index.mdx @@ -16,7 +16,7 @@ Adding a Git integration also lets you monitor build statuses directly in your G Cloudflare supports connecting Cloudflare Workers to your GitHub and GitLab repositories. Workers Builds does not currently support connecting self-hosted instances of GitHub or GitLab. -If you using a different Git provider (e.g. Bitbucket), you can use an [external CI/CD provider (e.g. GitHub Actions)](/workers/ci-cd/external-cicd/) and deploy using [Wrangler CLI](/workers/wrangler/commands/#deploy). +If you are using a different Git provider (e.g. Bitbucket), you can use an [external CI/CD provider (e.g. GitHub Actions)](/workers/ci-cd/external-cicd/) and deploy using the [Wrangler CLI](/workers/wrangler/commands/general/#deploy). ## Add a Git Integration diff --git a/src/content/docs/workers/ci-cd/external-cicd/github-actions.mdx b/src/content/docs/workers/ci-cd/external-cicd/github-actions.mdx index 7da89c8c4a3f34d..113cba8e6ef4e12 100644 --- a/src/content/docs/workers/ci-cd/external-cicd/github-actions.mdx +++ b/src/content/docs/workers/ci-cd/external-cicd/github-actions.mdx @@ -9,7 +9,7 @@ You can deploy Workers with [GitHub Actions](https://github.com/marketplace/acti ## 1. Authentication -When running Wrangler locally, authentication to the Cloudflare API happens via the [`wrangler login`](/workers/wrangler/commands/#login) command, which initiates an interactive authentication flow. Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](/fundamentals/api/get-started/create-token/) and [account ID](/fundamentals/account/find-account-and-zone-ids/) to authenticate with the Cloudflare API.
+When running Wrangler locally, authentication to the Cloudflare API happens via the [`wrangler login`](/workers/wrangler/commands/general/#login) command, which initiates an interactive authentication flow. Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](/fundamentals/api/get-started/create-token/) and [account ID](/fundamentals/account/find-account-and-zone-ids/) to authenticate with the Cloudflare API. ### Cloudflare account ID diff --git a/src/content/docs/workers/ci-cd/external-cicd/gitlab-cicd.mdx b/src/content/docs/workers/ci-cd/external-cicd/gitlab-cicd.mdx index 3fb8bd3ec765987..f2e3a0fdbee8034 100644 --- a/src/content/docs/workers/ci-cd/external-cicd/gitlab-cicd.mdx +++ b/src/content/docs/workers/ci-cd/external-cicd/gitlab-cicd.mdx @@ -9,7 +9,7 @@ You can deploy Workers with [GitLab CI/CD](https://docs.gitlab.com/ee/ci/pipelin ## 1. Authentication -When running Wrangler locally, authentication to the Cloudflare API happens via the [`wrangler login`](/workers/wrangler/commands/#login) command, which initiates an interactive authentication flow. Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](/fundamentals/api/get-started/create-token/) and [account ID](/fundamentals/account/find-account-and-zone-ids/) to authenticate with the Cloudflare API. +When running Wrangler locally, authentication to the Cloudflare API happens via the [`wrangler login`](/workers/wrangler/commands/general/#login) command, which initiates an interactive authentication flow. Since CI/CD environments are non-interactive, Wrangler requires a [Cloudflare API token](/fundamentals/api/get-started/create-token/) and [account ID](/fundamentals/account/find-account-and-zone-ids/) to authenticate with the Cloudflare API. 
### Cloudflare account ID @@ -49,4 +49,4 @@ Don't store the value of `CLOUDFLARE_API_TOKEN` in your repository, as it gives ### GitLab Pipelines -Refer to [GitLab's blog](https://about.gitlab.com/blog/2022/11/21/deploy-remix-with-gitlab-and-cloudflare/) for an example pipeline. Under the `script` key, replace `npm run deploy` with [`npx wrangler deploy`](/workers/wrangler/commands/#deploy). +Refer to [GitLab's blog](https://about.gitlab.com/blog/2022/11/21/deploy-remix-with-gitlab-and-cloudflare/) for an example pipeline. Under the `script` key, replace `npm run deploy` with [`npx wrangler deploy`](/workers/wrangler/commands/general/#deploy). diff --git a/src/content/docs/workers/configuration/compatibility-dates.mdx b/src/content/docs/workers/configuration/compatibility-dates.mdx index d631dc2e676b2f4..f0915d8e94ed41a 100644 --- a/src/content/docs/workers/configuration/compatibility-dates.mdx +++ b/src/content/docs/workers/configuration/compatibility-dates.mdx @@ -14,7 +14,7 @@ The compatibility date and flags are how you, as a developer, opt into these run ## Setting compatibility date -When you start your project, you should always set `compatibility_date` to the current date. You should occasionally update the `compatibility_date` field. When updating, you should refer to the [compatibility flags](/workers/configuration/compatibility-flags) page to find out what has changed, and you should be careful to test your Worker to see if the changes affect you, updating your code as necessary. The new compatibility date takes effect when you next run the [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) command. +When you start your project, you should always set `compatibility_date` to the current date. You should occasionally update the `compatibility_date` field. 
When updating, you should refer to the [compatibility flags](/workers/configuration/compatibility-flags) page to find out what has changed, and you should be careful to test your Worker to see if the changes affect you, updating your code as necessary. The new compatibility date takes effect when you next run the [`npx wrangler deploy`](/workers/wrangler/commands/general/#deploy) command. There is no need to update your `compatibility_date` if you do not want to. The Workers runtime will support old compatibility dates forever. If, for some reason, Cloudflare finds it is necessary to make a change that will break live Workers, Cloudflare will actively contact affected developers. That said, Cloudflare aims to avoid this if at all possible. diff --git a/src/content/docs/workers/configuration/cron-triggers.mdx b/src/content/docs/workers/configuration/cron-triggers.mdx index 99efc6a3d48d651..dd9af8d7bfcb5ce 100644 --- a/src/content/docs/workers/configuration/cron-triggers.mdx +++ b/src/content/docs/workers/configuration/cron-triggers.mdx @@ -190,7 +190,7 @@ Some common time intervals that may be useful for setting up your Cron Trigger: ## Test Cron Triggers locally -Test Cron Triggers using Wrangler with [`wrangler dev`](/workers/wrangler/commands/#dev), or using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). This will expose a `/cdn-cgi/handler/scheduled` route which can be used to test using a HTTP request. +Test Cron Triggers using Wrangler with [`wrangler dev`](/workers/wrangler/commands/general/#dev), or using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). This will expose a `/cdn-cgi/handler/scheduled` route which can be used to test using a HTTP request. 
```sh curl "http://localhost:8787/cdn-cgi/handler/scheduled" diff --git a/src/content/docs/workers/configuration/integrations/apis.mdx b/src/content/docs/workers/configuration/integrations/apis.mdx index 449d22303138b64..fd2122715a6b3a5 100644 --- a/src/content/docs/workers/configuration/integrations/apis.mdx +++ b/src/content/docs/workers/configuration/integrations/apis.mdx @@ -29,7 +29,7 @@ async function handleRequest(request) { ## Authentication -If your API requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command: +If your API requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/general/#secret) command: ```sh wrangler secret put SECRET_NAME diff --git a/src/content/docs/workers/configuration/integrations/external-services.mdx b/src/content/docs/workers/configuration/integrations/external-services.mdx index 5f4dc8d06d79b22..03d9dfe8e45d12f 100644 --- a/src/content/docs/workers/configuration/integrations/external-services.mdx +++ b/src/content/docs/workers/configuration/integrations/external-services.mdx @@ -7,7 +7,7 @@ Many external services provide libraries and SDKs to interact with their APIs. W ## Authentication -If your service requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command: +If your service requires authentication, use Wrangler secrets to securely store your credentials. 
To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/general/#secret) command: ```sh wrangler secret put SECRET_NAME diff --git a/src/content/docs/workers/configuration/integrations/index.mdx b/src/content/docs/workers/configuration/integrations/index.mdx index 0105f4419cb1cde..363acb965ada742 100644 --- a/src/content/docs/workers/configuration/integrations/index.mdx +++ b/src/content/docs/workers/configuration/integrations/index.mdx @@ -23,7 +23,7 @@ To use any of the available integrations: * Determine which integration you want to use and make sure you have the necessary accounts and credentials for it. * In your Cloudflare Workers code, import the necessary libraries or modules for the integration. * Use the provided APIs and functions to connect to the integration and access its data or functionality. -* Store necessary secrets and keys using secrets via [`wrangler secret put `](/workers/wrangler/commands/#secret). +* Store necessary secrets and keys using [`wrangler secret put`](/workers/wrangler/commands/general/#secret).
## Tips and best practices diff --git a/src/content/docs/workers/configuration/previews.mdx b/src/content/docs/workers/configuration/previews.mdx index dd1f5b95492bf32..c476bb5e70d6bfc 100644 --- a/src/content/docs/workers/configuration/previews.mdx +++ b/src/content/docs/workers/configuration/previews.mdx @@ -41,8 +41,8 @@ Every time you create a new [version](/workers/configuration/versions-and-deploy New versions of a Worker are created when you run: -- [`wrangler deploy`](/workers/wrangler/commands/#deploy) -- [`wrangler versions upload`](/workers/wrangler/commands/#versions-upload) +- [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) +- [`wrangler versions upload`](/workers/wrangler/commands/general/#versions-upload) - Or when you make edits via the Cloudflare dashboard If Preview URLs have been enabled, they are public and available immediately after version creation. @@ -53,7 +53,7 @@ Minimum required Wrangler version: 3.74.0. Check your version by running `wrangl #### View versioned preview URLs using Wrangler -The [`wrangler versions upload`](/workers/wrangler/commands/#versions-upload) command uploads a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker and returns a preview URL for each version uploaded. +The [`wrangler versions upload`](/workers/wrangler/commands/general/#versions-upload) command uploads a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker and returns a preview URL for each version uploaded. #### View versioned preview URLs on the Workers dashboard diff --git a/src/content/docs/workers/configuration/secrets.mdx b/src/content/docs/workers/configuration/secrets.mdx index da30ec070ccf56f..c18fce4040ca291 100644 --- a/src/content/docs/workers/configuration/secrets.mdx +++ b/src/content/docs/workers/configuration/secrets.mdx @@ -77,7 +77,7 @@ Secrets described on this page are defined and managed on a per-Worker level. 
If
#### Via Wrangler

-Secrets can be added through [`wrangler secret put`](/workers/wrangler/commands/#secret) or [`wrangler versions secret put`](/workers/wrangler/commands/#versions-secret-put) commands.
+Secrets can be added through [`wrangler secret put`](/workers/wrangler/commands/general/#secret) or [`wrangler versions secret put`](/workers/wrangler/commands/general/#versions-secret-put) commands.

`wrangler secret put` creates a new version of the Worker and deploys it immediately.

@@ -85,7 +85,7 @@ Secrets can be added through [`wrangler secret put`](/workers/wrangler/commands/
npx wrangler secret put
```

-If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret put` command. This will only create a new version of the Worker, that can then be deploying using [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy).
+If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret put` command. This will only create a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy).

:::note
Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag.
@@ -112,7 +112,7 @@ To add a secret via the dashboard:

#### Via Wrangler

-Secrets can be deleted through [`wrangler secret delete`](/workers/wrangler/commands/#secret-delete) or [`wrangler versions secret delete`](/workers/wrangler/commands/#versions-secret-delete) commands.
+Secrets can be deleted through [`wrangler secret delete`](/workers/wrangler/commands/general/#secret-delete) or [`wrangler versions secret delete`](/workers/wrangler/commands/general/#versions-secret-delete) commands.

`wrangler secret delete` creates a new version of the Worker and deploys it immediately.
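The two-step flow described here (stage a secret in a new version, then deploy that version) can be sketched as follows. This is a hedged example, not part of the patched docs: `API_KEY` is a placeholder secret name, and the commands assume you are in a Worker project directory with an authenticated Wrangler install.

```sh
# Stage the secret in a new version of the Worker without deploying it.
# API_KEY is a placeholder secret name for this sketch.
npx wrangler versions secret put API_KEY

# Roll the staged version out (all at once, or gradually via the prompts).
npx wrangler versions deploy
```

This keeps the credential change out of production until you explicitly deploy the new version.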
@@ -120,7 +120,7 @@ Secrets can be deleted through [`wrangler secret delete`](/workers/wrangler/comm
npx wrangler secret delete
```

-If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret delete` command. This will only create a new version of the Worker, that can then be deploying using [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy).
+If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret delete` command. This will only create a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy).

```sh
npx wrangler versions secret delete
@@ -144,5 +144,5 @@ To delete a secret from your Worker project via the dashboard:

## Related resources

-- [Wrangler secret commands](/workers/wrangler/commands/#secret) - Review the Wrangler commands to create, delete and list secrets.
+- [Wrangler secret commands](/workers/wrangler/commands/general/#secret) - Review the Wrangler commands to create, delete and list secrets.
- [Cloudflare Secrets Store](/secrets-store/) - Encrypt and store sensitive information as secrets that are securely reusable across your account.
diff --git a/src/content/docs/workers/configuration/sites/start-from-scratch.mdx b/src/content/docs/workers/configuration/sites/start-from-scratch.mdx
index c53136d787b0aa7..5ebaf4f7c4638b9 100644
--- a/src/content/docs/workers/configuration/sites/start-from-scratch.mdx
+++ b/src/content/docs/workers/configuration/sites/start-from-scratch.mdx
@@ -25,7 +25,7 @@ This guide shows how to quickly start a new Workers Sites project from scratch.

3. Run `npm install` to install all dependencies.

-4. You can preview your site by running the [`wrangler dev`](/workers/wrangler/commands/#dev) command:
+4.
You can preview your site by running the [`wrangler dev`](/workers/wrangler/commands/general/#dev) command: ```sh wrangler dev diff --git a/src/content/docs/workers/configuration/versions-and-deployments/gradual-deployments.mdx b/src/content/docs/workers/configuration/versions-and-deployments/gradual-deployments.mdx index a39a086eee2ce8a..713f248ee9f1a60 100644 --- a/src/content/docs/workers/configuration/versions-and-deployments/gradual-deployments.mdx +++ b/src/content/docs/workers/configuration/versions-and-deployments/gradual-deployments.mdx @@ -46,7 +46,7 @@ Answer `yes` or `no` to using TypeScript. Answer `yes` to deploying your applica #### 2. Create a new version of the Worker -To create a new version of the Worker, edit the Worker code by changing the `Response` content to your desired text and upload the Worker by using the [`wrangler versions upload`](/workers/wrangler/commands/#versions-upload) command. +To create a new version of the Worker, edit the Worker code by changing the `Response` content to your desired text and upload the Worker by using the [`wrangler versions upload`](/workers/wrangler/commands/general/#versions-upload) command. ```sh npx wrangler versions upload @@ -56,7 +56,7 @@ This will create a new version of the Worker that is not automatically deployed. #### 3. Create a new deployment -Use the [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy) command to +Use the [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy) command to create a new deployment that splits traffic between two versions of the Worker. Follow the interactive prompts to create a deployment with the versions uploaded in [step #1](/workers/configuration/versions-and-deployments/gradual-deployments/#1-create-and-deploy-a-new-worker) and [step #2](/workers/configuration/versions-and-deployments/gradual-deployments/#2-create-a-new-version-of-the-worker). Select your desired percentages for each version. 
```sh @@ -165,7 +165,7 @@ curl -s https://example.com -H 'Cloudflare-Workers-Version-Overrides: my-worker- The dictionary can contain multiple key-value pairs. Each key indicates the name of the Worker the override should be applied to. The value indicates the version ID that should be used and must be a [String](https://www.rfc-editor.org/rfc/rfc8941#name-strings). -A version override will only be applied if the specified version is in the current deployment. The versions in the current deployment can be found using the [`wrangler deployments list`](/workers/wrangler/commands/#deployments-list) command or on the **Workers & Pages** page of the Cloudflare dashboard > Select your Workers > Deployments > Active Deployment. +A version override will only be applied if the specified version is in the current deployment. The versions in the current deployment can be found using the [`wrangler deployments list`](/workers/wrangler/commands/general/#deployments-list) command or on the **Workers & Pages** page of the Cloudflare dashboard > Select your Workers > Deployments > Active Deployment. :::note[Verifying that the version override was applied] @@ -190,7 +190,7 @@ In this example, your deployment is initially configured to route all traffic to | :----------------------------------: | :--------: | | db7cd8d3-4425-4fe7-8c81-01bf963b6067 | 100% | -Create a new deployment using [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy) and specify 0% for the new version whilst keeping the previous version at 100%. +Create a new deployment using [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy) and specify 0% for the new version whilst keeping the previous version at 100%. 
| Version ID | Percentage | | :----------------------------------: | :--------: | diff --git a/src/content/docs/workers/configuration/versions-and-deployments/index.mdx b/src/content/docs/workers/configuration/versions-and-deployments/index.mdx index 5aba8e440346e4c..fe6e7fb8a6c7c12 100644 --- a/src/content/docs/workers/configuration/versions-and-deployments/index.mdx +++ b/src/content/docs/workers/configuration/versions-and-deployments/index.mdx @@ -46,8 +46,8 @@ Review the different ways you can create versions of your Worker and deploy them A new version that is automatically deployed to 100% of traffic when: -- Changes are uploaded with [`wrangler deploy`](/workers/wrangler/commands/#deploy) via the Cloudflare Dashboard -- Changes are deployed with the command [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) via [Workers Builds](/workers/ci-cd/builds) +- Changes are uploaded with [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) via the Cloudflare Dashboard +- Changes are deployed with the command [`npx wrangler deploy`](/workers/wrangler/commands/general/#deploy) via [Workers Builds](/workers/ci-cd/builds) - Changes are uploaded with the [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/) #### Upload a new version to be gradually deployed or deployed at a later time @@ -58,12 +58,12 @@ Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag. ::: -To create a new version of your Worker that is not deployed immediately, use the [`wrangler versions upload`](/workers/wrangler/commands/#versions-upload) command or create a new version via the Cloudflare dashboard using the **Save** button. You can find the **Save** option under the down arrow beside the "Deploy" button. 
+To create a new version of your Worker that is not deployed immediately, use the [`wrangler versions upload`](/workers/wrangler/commands/general/#versions-upload) command or create a new version via the Cloudflare dashboard using the **Save** button. You can find the **Save** option under the down arrow beside the "Deploy" button.

-Versions created in this way can then be deployed all at once or gradually deployed using the [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy) command or via the Cloudflare dashboard under the **Deployments** tab.
+Versions created in this way can then be deployed all at once or gradually deployed using the [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy) command or via the Cloudflare dashboard under the **Deployments** tab.

:::note
-When using [Wrangler](/workers/wrangler/), changes made to a Worker's triggers [routes, domains](/workers/configuration/routing/) or [cron triggers](/workers/configuration/cron-triggers/) need to be applied with the command [`wrangler triggers deploy`](/workers/wrangler/commands/#triggers).
+When using [Wrangler](/workers/wrangler/), changes made to a Worker's triggers ([routes and domains](/workers/configuration/routing/) or [cron triggers](/workers/configuration/cron-triggers/)) need to be applied with the command [`wrangler triggers deploy`](/workers/wrangler/commands/general/#triggers).
:::

:::note
@@ -78,7 +78,7 @@ See examples of creating a Worker, Versions, and Deployments directly with the A

#### Via Wrangler

-Wrangler allows you to view the 100 most recent versions and deployments. Refer to the [`versions list`](/workers/wrangler/commands/#list-4) and [`deployments`](/workers/wrangler/commands/#list-5) documentation to view the commands.
+Wrangler allows you to view the 100 most recent versions and deployments.
Refer to the [`versions list`](/workers/wrangler/commands/general/#list-4) and [`deployments`](/workers/wrangler/commands/general/#list-5) documentation to view the commands. #### Via the Cloudflare dashboard @@ -94,16 +94,16 @@ To view your deployments in the Cloudflare dashboard: ### First upload -You must use [C3](/workers/get-started/guide/#1-create-a-new-worker-project) or [`wrangler deploy`](/workers/wrangler/commands/#deploy) the first time you create a new Workers project. Using [`wrangler versions upload`](/workers/wrangler/commands/#versions-upload) the first time you upload a Worker will fail. +You must use [C3](/workers/get-started/guide/#1-create-a-new-worker-project) or [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) the first time you create a new Workers project. Using [`wrangler versions upload`](/workers/wrangler/commands/general/#versions-upload) the first time you upload a Worker will fail. ### Service worker syntax -Service worker syntax is not supported for versions that are uploaded through [`wrangler versions upload`](/workers/wrangler/commands/#versions-upload). You must use ES modules format. +Service worker syntax is not supported for versions that are uploaded through [`wrangler versions upload`](/workers/wrangler/commands/general/#versions-upload). You must use ES modules format. Refer to [Migrate from Service Workers to ES modules](/workers/reference/migrate-to-module-workers/#advantages-of-migrating) to learn how to migrate your Workers from the service worker format to the ES modules format. ### Durable Object migrations -Uploading a version with [Durable Object migrations](/durable-objects/reference/durable-objects-migrations/) is not supported. Use [`wrangler deploy`](/workers/wrangler/commands/#deploy) if you are applying a [Durable Object migration](/durable-objects/reference/durable-objects-migrations/). 
+Uploading a version with [Durable Object migrations](/durable-objects/reference/durable-objects-migrations/) is not supported. Use [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) if you are applying a [Durable Object migration](/durable-objects/reference/durable-objects-migrations/). This will be supported in the near future. diff --git a/src/content/docs/workers/configuration/versions-and-deployments/rollbacks.mdx b/src/content/docs/workers/configuration/versions-and-deployments/rollbacks.mdx index f7f9187bbcbda56..479aa94a6c47c3f 100644 --- a/src/content/docs/workers/configuration/versions-and-deployments/rollbacks.mdx +++ b/src/content/docs/workers/configuration/versions-and-deployments/rollbacks.mdx @@ -6,11 +6,11 @@ description: Revert to an older version of your Worker. --- import { DashButton } from "~/components"; -You can roll back to a previously deployed [version](/workers/configuration/versions-and-deployments/#versions) of your Worker using [Wrangler](/workers/wrangler/commands/#rollback) or the Cloudflare dashboard. Rolling back to a previous version of your Worker will immediately create a new [deployment](/workers/configuration/versions-and-deployments/#deployments) with the version specified and become the active deployment across all your deployed routes and domains. +You can roll back to a previously deployed [version](/workers/configuration/versions-and-deployments/#versions) of your Worker using [Wrangler](/workers/wrangler/commands/general/#rollback) or the Cloudflare dashboard. Rolling back to a previous version of your Worker will immediately create a new [deployment](/workers/configuration/versions-and-deployments/#deployments) with the version specified and become the active deployment across all your deployed routes and domains. ## Via Wrangler -To roll back to a specified version of your Worker via Wrangler, use the [`wrangler rollback`](/workers/wrangler/commands/#rollback) command. 
+To roll back to a specified version of your Worker via Wrangler, use the [`wrangler rollback`](/workers/wrangler/commands/general/#rollback) command. ## Via the Cloudflare Dashboard @@ -38,7 +38,7 @@ You can only roll back to the 100 most recently published versions. :::note -When using Wrangler in interactive mode, only the 10 most recent versions will be displayed for selection. To roll back to an older version (beyond the 10 most recent), you must specify the version ID directly on the command line. Refer to the [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy) documentation for details on specifying version IDs. +When using Wrangler in interactive mode, only the 10 most recent versions will be displayed for selection. To roll back to an older version (beyond the 10 most recent), you must specify the version ID directly on the command line. Refer to the [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy) documentation for details on specifying version IDs. We plan to address this limitation soon to allow displaying all 100 available versions in interactive mode. diff --git a/src/content/docs/workers/databases/connecting-to-databases.mdx b/src/content/docs/workers/databases/connecting-to-databases.mdx index 8123c8838e89485..d4aacf18c7a97b1 100644 --- a/src/content/docs/workers/databases/connecting-to-databases.mdx +++ b/src/content/docs/workers/databases/connecting-to-databases.mdx @@ -65,7 +65,7 @@ Once you have installed the necessary packages, use the APIs provided by these p ## Authentication -If your database requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command: +If your database requires authentication, use Wrangler secrets to securely store your credentials. 
To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/general/#secret) command: ```sh wrangler secret put diff --git a/src/content/docs/workers/databases/third-party-integrations/index.mdx b/src/content/docs/workers/databases/third-party-integrations/index.mdx index e8b547abec000b5..cfe7e973396b698 100644 --- a/src/content/docs/workers/databases/third-party-integrations/index.mdx +++ b/src/content/docs/workers/databases/third-party-integrations/index.mdx @@ -19,7 +19,7 @@ If your Worker is connecting to a regional database, you can reduce your query l ## Database credentials -When you rotate or update database credentials, you must update the corresponding [secrets](/workers/configuration/secrets/) in your Worker. Use the [`wrangler secret put`](/workers/wrangler/commands/#secret) command to update secrets securely or update the secret directly in the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/settings). +When you rotate or update database credentials, you must update the corresponding [secrets](/workers/configuration/secrets/) in your Worker. Use the [`wrangler secret put`](/workers/wrangler/commands/general/#secret) command to update secrets securely or update the secret directly in the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/settings). ## Database limits diff --git a/src/content/docs/workers/development-testing/index.mdx b/src/content/docs/workers/development-testing/index.mdx index 759f0bd243add96..c896d1a2222e9e7 100644 --- a/src/content/docs/workers/development-testing/index.mdx +++ b/src/content/docs/workers/development-testing/index.mdx @@ -38,7 +38,7 @@ When developing Workers, it's important to understand two distinct concepts: **You can start a local development server using:** -1. 
The Cloudflare Workers CLI [**Wrangler**](/workers/wrangler/), using the built-in [`wrangler dev`](/workers/wrangler/commands/#dev) command. +1. The Cloudflare Workers CLI [**Wrangler**](/workers/wrangler/), using the built-in [`wrangler dev`](/workers/wrangler/commands/general/#dev) command. @@ -413,7 +413,7 @@ async function startOrUpdateDevSession() { ## `wrangler dev --remote` (Legacy) -Separate from Miniflare-powered local development, Wrangler also offers a fully remote development mode via [`wrangler dev --remote`](/workers/wrangler/commands/#dev). Remote development is [**not** supported in the Vite plugin](/workers/development-testing/wrangler-vs-vite/). +Separate from Miniflare-powered local development, Wrangler also offers a fully remote development mode via [`wrangler dev --remote`](/workers/wrangler/commands/general/#dev). Remote development is [**not** supported in the Vite plugin](/workers/development-testing/wrangler-vs-vite/). @@ -432,4 +432,4 @@ When using remote development, all bindings automatically connect to their remot ### Limitations -- When you run a remote development session using the `--remote` flag, a limit of 50 [routes](/workers/configuration/routing/routes/) per zone is enforced. Learn more in[ Workers platform limits](/workers/platform/limits/#number-of-routes-per-zone-when-using-wrangler-dev---remote). +- When you run a remote development session using the `--remote` flag, a limit of 50 [routes](/workers/configuration/routing/routes/) per zone is enforced. Learn more in[ Workers platform limits](/workers/platform/limits/#routes-and-domains-when-using-wrangler-dev---remote). 
diff --git a/src/content/docs/workers/development-testing/local-data.mdx b/src/content/docs/workers/development-testing/local-data.mdx index 3b377b62888e421..0518d64f8efdb8c 100644 --- a/src/content/docs/workers/development-testing/local-data.mdx +++ b/src/content/docs/workers/development-testing/local-data.mdx @@ -31,7 +31,7 @@ When you first start developing, your local resources will be empty. You'll need Since version 3.60.0, Wrangler supports the `kv ...` syntax. If you are using versions below 3.60.0, the command follows the `kv:...` syntax. Learn more in the [Wrangler commands for KV page](/kv/reference/kv-commands/). ::: -#### [Add a single key-value pair](/workers/wrangler/commands/#kv-key) +#### [Add a single key-value pair](/workers/wrangler/commands/kv/#kv-key) -#### [Bulk upload](/workers/wrangler/commands/#kv-bulk) +#### [Bulk upload](/workers/wrangler/commands/kv/#kv-bulk) -You may also include [other metadata](/workers/wrangler/commands/#r2-object-put). +You may also include [other metadata](/workers/wrangler/commands/r2/#r2-object-put). ### D1 databases -#### [Execute a SQL statement](/workers/wrangler/commands/#d1-execute) +#### [Execute a SQL statement](/workers/wrangler/commands/d1/#d1-execute) -#### [Execute a SQL file](/workers/wrangler/commands/#d1-execute) +#### [Execute a SQL file](/workers/wrangler/commands/d1/#d1-execute) @@ -124,7 +124,7 @@ Wrangler will detect your framework, show the configuration it will apply, and p ### Configure without deploying -To configure your project without deploying, use [`wrangler setup`](/workers/wrangler/commands/#setup): +To configure your project without deploying, use [`wrangler setup`](/workers/wrangler/commands/general/#setup): @@ -140,7 +140,7 @@ This outputs a summary of the configuration that would be generated. 
## Non-interactive mode -To skip the confirmation prompts, use the [`--yes` flag](/workers/wrangler/commands/#deploy): +To skip the confirmation prompts, use the [`--yes` flag](/workers/wrangler/commands/general/#deploy): diff --git a/src/content/docs/workers/framework-guides/web-apps/vike.mdx b/src/content/docs/workers/framework-guides/web-apps/vike.mdx index c176237ac9a0a4b..5f2e8422d74f75d 100644 --- a/src/content/docs/workers/framework-guides/web-apps/vike.mdx +++ b/src/content/docs/workers/framework-guides/web-apps/vike.mdx @@ -108,7 +108,7 @@ env.LOG_LEVEL ## TypeScript -If you use TypeScript, run [`wrangler types`](/workers/wrangler/commands/#types) whenever you change your Cloudflare configuration to update the `worker-configuration.d.ts` file. +If you use TypeScript, run [`wrangler types`](/workers/wrangler/commands/general/#types) whenever you change your Cloudflare configuration to update the `worker-configuration.d.ts` file. -If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation. +If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/general/#login) documentation. diff --git a/src/content/docs/workers/languages/rust/index.mdx b/src/content/docs/workers/languages/rust/index.mdx index a72a0dde9a348c6..824b2ebf0e8c870 100644 --- a/src/content/docs/workers/languages/rust/index.mdx +++ b/src/content/docs/workers/languages/rust/index.mdx @@ -51,7 +51,7 @@ Your project will be created in a new directory that you named, in which you wil ## 2. Develop locally -After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command to start a local server for developing your Worker. This will allow you to test your Worker in development. 
+After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/general/#dev) command to start a local server for developing your Worker. This will allow you to test your Worker in development. ```sh npx wrangler dev @@ -60,7 +60,7 @@ npx wrangler dev If you have not used Wrangler before, it will try to open your web browser to login with your Cloudflare account. :::note -If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation for more information. +If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/general/#login) documentation for more information. ::: Go to [http://localhost:8787](http://localhost:8787) to review your Worker running. Any changes you make to your code will trigger a rebuild, and reloading the page will show you the up-to-date output of your Worker. diff --git a/src/content/docs/workers/languages/typescript/index.mdx b/src/content/docs/workers/languages/typescript/index.mdx index 7dd27e60efd3145..3afcaf22e15cf83 100644 --- a/src/content/docs/workers/languages/typescript/index.mdx +++ b/src/content/docs/workers/languages/typescript/index.mdx @@ -12,7 +12,7 @@ import { TabItem, Tabs, PackageManagers, Render } from "~/components"; TypeScript is a first-class language on Cloudflare Workers. All APIs provided in Workers are fully typed, and type definitions are generated directly from [workerd](https://github.com/cloudflare/workerd), the open-source Workers runtime. -We recommend you generate types for your Worker by running [`wrangler types`](/workers/wrangler/commands/#types). Cloudflare also publishes type definitions to [GitHub](https://github.com/cloudflare/workers-types) and [npm](https://www.npmjs.com/package/@cloudflare/workers-types) (`npm install -D @cloudflare/workers-types`). 
+We recommend you generate types for your Worker by running [`wrangler types`](/workers/wrangler/commands/general/#types). Cloudflare also publishes type definitions to [GitHub](https://github.com/cloudflare/workers-types) and [npm](https://www.npmjs.com/package/@cloudflare/workers-types) (`npm install -D @cloudflare/workers-types`).

Generate types that match your Worker's configuration @@ -34,7 +34,7 @@ To ensure that your type definitions always match your Worker's configuration, y -See [the `wrangler types` command docs](/workers/wrangler/commands/#types) for more details. +See [the `wrangler types` command docs](/workers/wrangler/commands/general/#types) for more details. :::note diff --git a/src/content/docs/workers/observability/errors.mdx b/src/content/docs/workers/observability/errors.mdx index 46220785c5d82f4..3457a1ea9b95804 100644 --- a/src/content/docs/workers/observability/errors.mdx +++ b/src/content/docs/workers/observability/errors.mdx @@ -18,12 +18,12 @@ When a Worker running in production has an error that prevents it from returning | `1101` | Worker threw a JavaScript exception. | | `1102` | Worker exceeded [CPU time limit](/workers/platform/limits/#cpu-time). | | `1103` | The owner of this worker needs to contact [Cloudflare Support](/support/contacting-cloudflare-support/) | -| `1015` | Worker hit the [burst rate limit](/workers/platform/limits/#burst-rate). | +| `1015` | Worker hit the [burst rate limit](/workers/platform/limits/#daily-requests). | | `1019` | Worker hit [loop limit](#loop-limit). | | `1021` | Worker has requested a host it cannot access. | | `1022` | Cloudflare has failed to route the request to the Worker. | | `1024` | Worker cannot make a subrequest to a Cloudflare-owned IP address. | -| `1027` | Worker exceeded free tier [daily request limit](/workers/platform/limits/#daily-request). | +| `1027` | Worker exceeded free tier [daily request limit](/workers/platform/limits/#daily-requests). | | `1042` | Worker tried to fetch from another Worker on the same zone, which is only [supported](/workers/runtime-apis/fetch/) when the [`global_fetch_strictly_public` compatibility flag](/workers/configuration/compatibility-flags/#global-fetch-strictly-public) is used. | | `10162` | Module has an unsupported Content-Type. 
|
diff --git a/src/content/docs/workers/observability/logs/real-time-logs.mdx b/src/content/docs/workers/observability/logs/real-time-logs.mdx
index f60c4050d6bb8ba..ed34441c06158e1 100644
--- a/src/content/docs/workers/observability/logs/real-time-logs.mdx
+++ b/src/content/docs/workers/observability/logs/real-time-logs.mdx
@@ -36,7 +36,7 @@ To view real-time logs associated with any deployed Worker using the Cloudflare

To view real-time logs associated with any deployed Worker using Wrangler:

1. Go to your Worker project directory.
-2. Run [`npx wrangler tail`](/workers/wrangler/commands/#tail).
+2. Run [`npx wrangler tail`](/workers/wrangler/commands/general/#tail).

This will log any incoming requests to your application available in your local terminal.

@@ -72,13 +72,13 @@ npx wrangler tail | jq .event.request.url

"https://www.bytesized.xyz/page-data/app-data.json"
```

-You can customize how `wrangler tail` works to fit your needs. Refer to [the `wrangler tail` documentation](/workers/wrangler/commands/#tail) for available configuration options.
+You can customize how `wrangler tail` works to fit your needs. Refer to [the `wrangler tail` documentation](/workers/wrangler/commands/general/#tail) for available configuration options.

## Limits

:::note

-You can filter real-time logs in the dashboard or using [`wrangler tail`](/workers/wrangler/commands/#tail). If your Worker has a high volume of messages, filtering real-time logs can help mitgate messages from being dropped.
+You can filter real-time logs in the dashboard or using [`wrangler tail`](/workers/wrangler/commands/general/#tail). If your Worker has a high volume of messages, filtering real-time logs can help mitigate messages from being dropped.
::: diff --git a/src/content/docs/workers/observability/metrics-and-analytics.mdx b/src/content/docs/workers/observability/metrics-and-analytics.mdx index 2c414dfcad65069..ca2d391dfb534ae 100644 --- a/src/content/docs/workers/observability/metrics-and-analytics.mdx +++ b/src/content/docs/workers/observability/metrics-and-analytics.mdx @@ -101,11 +101,11 @@ Worker invocation statuses indicate whether a Worker executed successfully or fa | Exceeded resources¹ | Worker exceeded runtime limits | 1102, 1027 | `exceededResources` | | Internal error² | Workers runtime encountered an error | | `internalError` | -¹ The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](/workers/platform/limits/#request-limits). The most common cause is excessive CPU time, but is also caused by a Worker exceeding startup time or free tier limits. +¹ The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](/workers/platform/limits/#request-and-response-limits). The most common cause is excessive CPU time, but it can also be caused by a Worker exceeding startup time or free tier limits. ² The Internal Error status may appear when the Workers runtime fails to process a request due to an internal failure in our system. These errors are not caused by any issue with the Worker code nor any resource limit. While requests with Internal Error status are rare, some may appear during normal operation. These requests are not counted towards usage for billing purposes. If you notice an elevated rate of requests with Internal Error status, review [www.cloudflarestatus.com](https://www.cloudflarestatus.com/). -To further investigate exceptions, use [`wrangler tail`](/workers/wrangler/commands/#tail). +To further investigate exceptions, use [`wrangler tail`](/workers/wrangler/commands/general/#tail).
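When run with `--format json`, `wrangler tail` emits one JSON object per invocation, so the exception investigation described above can also be scripted outside of `jq`. A minimal sketch in Python, assuming the `outcome` field and the `.event.request.url` shape shown in the real-time logs examples (field names may vary by Wrangler version):

```python
import json

def exception_urls(tail_lines):
    """Collect request URLs for invocations whose outcome was an exception.

    Assumes each input line is one JSON object shaped like `wrangler tail`
    JSON output: an `outcome` field plus `event.request.url`.
    """
    urls = []
    for line in tail_lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        if entry.get("outcome") == "exception":
            urls.append(entry.get("event", {}).get("request", {}).get("url"))
    return urls

# Example with two captured log lines:
sample = [
    '{"outcome": "ok", "event": {"request": {"url": "https://example.com/"}}}',
    '{"outcome": "exception", "event": {"request": {"url": "https://example.com/broken"}}}',
]
print(exception_urls(sample))  # ['https://example.com/broken']
```

In practice you would pipe `npx wrangler tail --format json` into a script like this, or read a saved capture from a file.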
### Request duration diff --git a/src/content/docs/workers/observability/source-maps.mdx b/src/content/docs/workers/observability/source-maps.mdx index 5e4b0ac485183c8..be3ebac6d02a0b2 100644 --- a/src/content/docs/workers/observability/source-maps.mdx +++ b/src/content/docs/workers/observability/source-maps.mdx @@ -25,7 +25,7 @@ To enable source maps, add the following to your Worker's [Wrangler configuratio -When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy). +When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy). ​​ :::note diff --git a/src/content/docs/workers/platform/limits.mdx b/src/content/docs/workers/platform/limits.mdx index 55559c221723664..17a1290ebbec944 100644 --- a/src/content/docs/workers/platform/limits.mdx +++ b/src/content/docs/workers/platform/limits.mdx @@ -281,7 +281,7 @@ To reduce Worker size: A Worker must parse and execute its global scope (top-level code outside of handlers) within 1 second. Larger bundles and expensive initialization code in global scope increase startup time. -When the platform rejects a deployment because the Worker exceeds the startup time limit, the validation returns the error `Script startup exceeded CPU time limit` (error code `10021`). Wrangler automatically generates a CPU profile that you can import into Chrome DevTools or open in VS Code. Refer to [`wrangler check startup`](/workers/wrangler/commands/#startup) for more details. 
+When the platform rejects a deployment because the Worker exceeds the startup time limit, the validation returns the error `Script startup exceeded CPU time limit` (error code `10021`). Wrangler automatically generates a CPU profile that you can import into Chrome DevTools or open in VS Code. Refer to [`wrangler check startup`](/workers/wrangler/commands/general/#startup) for more details. To measure startup time, run `npx wrangler@latest deploy` or `npx wrangler@latest versions upload`. Wrangler reports `startup_time_ms` in the output. diff --git a/src/content/docs/workers/platform/pricing.mdx b/src/content/docs/workers/platform/pricing.mdx index 9b747db787efbfb..b13ec84f3ac442c 100644 --- a/src/content/docs/workers/platform/pricing.mdx +++ b/src/content/docs/workers/platform/pricing.mdx @@ -9,7 +9,7 @@ description: Workers plans and pricing information. import { GlossaryTooltip, Render, DashButton } from "~/components"; -By default, users have access to the Workers Free plan. The Workers Free plan includes limited usage of Workers, Pages Functions, Workers KV and Hyperdrive. Read more about the [Free plan limits](/workers/platform/limits/#worker-limits). +By default, users have access to the Workers Free plan. The Workers Free plan includes limited usage of Workers, Pages Functions, Workers KV and Hyperdrive. Read more about the [Free plan limits](/workers/platform/limits/#account-plan-limits). The Workers Paid plan includes Workers, Pages Functions, Workers KV, Hyperdrive, and Durable Objects usage for a minimum charge of $5 USD per month for an account. The plan includes increased initial usage allotments, with clear charges for usage that exceeds the base plan. There are no additional charges for data transfer (egress) or throughput (bandwidth). @@ -28,7 +28,7 @@ Users on the Workers Paid plan have access to the Standard usage model. 
Workers | | Requests1, 2, 3 | Duration | CPU time | | ------------ | ------------------------------------------------------------------ | ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Free** | 100,000 per day | No charge for duration | 10 milliseconds of CPU time per invocation | -| **Standard** | 10 million included per month
+$0.30 per additional million | No charge or limit for duration | 30 million CPU milliseconds included per month
+$0.02 per additional million CPU milliseconds

Max of [5 minutes of CPU time](/workers/platform/limits/#worker-limits) per invocation (default: 30 seconds)
Max of 15 minutes of CPU time per [Cron Trigger](/workers/configuration/cron-triggers/) or [Queue Consumer](/queues/configuration/javascript-apis/#consumer) invocation | +| **Standard** | 10 million included per month
+$0.30 per additional million | No charge or limit for duration | 30 million CPU milliseconds included per month
+$0.02 per additional million CPU milliseconds

Max of [5 minutes of CPU time](/workers/platform/limits/#account-plan-limits) per invocation (default: 30 seconds)
Max of 15 minutes of CPU time per [Cron Trigger](/workers/configuration/cron-triggers/) or [Queue Consumer](/queues/configuration/javascript-apis/#consumer) invocation | 1 Inbound requests to your Worker. Cloudflare does not bill for [subrequests](/workers/platform/limits/#subrequests) you make from your Worker. diff --git a/src/content/docs/workers/playground.mdx b/src/content/docs/workers/playground.mdx index c8cdde6e1bc14c3..bb8d66b5f14060e 100644 --- a/src/content/docs/workers/playground.mdx +++ b/src/content/docs/workers/playground.mdx @@ -72,7 +72,7 @@ The log viewer supports the following: At this time, the log viewer does not support logging class instances or their properties (for example, `request.url`). -If you need a more complete development experience with full debugging capabilities, you can use [Wrangler](/workers/wrangler/install-and-update/) locally. To clone an existing Worker from your dashboard for local development, sign up and use the [`wrangler init --from-dash`](/workers/wrangler/commands/#init) command once your worker is deployed. +If you need a more complete development experience with full debugging capabilities, you can use [Wrangler](/workers/wrangler/install-and-update/) locally. To clone an existing Worker from your dashboard for local development, sign up and use the [`wrangler init --from-dash`](/workers/wrangler/commands/general/#init) command once your worker is deployed. 
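The Standard usage model in the pricing table above reduces to simple arithmetic: a $5 minimum, plus $0.30 per million requests beyond the included 10 million, plus $0.02 per million CPU milliseconds beyond the included 30 million. A sketch with hypothetical usage figures (the rates come from the table; the example numbers do not):

```python
def standard_monthly_cost(requests, cpu_ms):
    """Rough Workers Standard cost: $5 minimum, 10M requests included
    then $0.30/million, 30M CPU-ms included then $0.02/million."""
    base = 5.00
    extra_requests = max(0, requests - 10_000_000) / 1_000_000 * 0.30
    extra_cpu = max(0, cpu_ms - 30_000_000) / 1_000_000 * 0.02
    return round(base + extra_requests + extra_cpu, 2)

# Hypothetical month: 15M requests and 50M CPU milliseconds
# = $5 + (5M extra requests -> $1.50) + (20M extra CPU-ms -> $0.40)
print(standard_monthly_cost(15_000_000, 50_000_000))  # 6.9
```

This is an illustration of the metered quantities only; it ignores any other billed products on the account.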
## Share diff --git a/src/content/docs/workers/runtime-apis/bindings/mTLS.mdx b/src/content/docs/workers/runtime-apis/bindings/mTLS.mdx index 371ba8bb56ca012..05730633678320f 100644 --- a/src/content/docs/workers/runtime-apis/bindings/mTLS.mdx +++ b/src/content/docs/workers/runtime-apis/bindings/mTLS.mdx @@ -20,7 +20,7 @@ Currently, mTLS for Workers cannot be used for requests made to a service that i ::: -First, upload a certificate and its private key to your account using the [`wrangler mtls-certificate`](/workers/wrangler/commands/#mtls-certificate) command: +First, upload a certificate and its private key to your account using the [`wrangler mtls-certificate`](/workers/wrangler/commands/certificates/#mtls-certificate) command: :::caution diff --git a/src/content/docs/workers/runtime-apis/bindings/service-bindings/index.mdx b/src/content/docs/workers/runtime-apis/bindings/service-bindings/index.mdx index 4f6da02c64e98f9..2f465d7086329cf 100644 --- a/src/content/docs/workers/runtime-apis/bindings/service-bindings/index.mdx +++ b/src/content/docs/workers/runtime-apis/bindings/service-bindings/index.mdx @@ -133,7 +133,7 @@ For more about the lifecycle of calling a Worker over a Service Binding via RPC, ## Local development -Local development is supported for Service bindings. For each Worker, open a new terminal and use [`wrangler dev`](/workers/wrangler/commands/#dev) in the relevant directory. When running `wrangler dev`, service bindings will show as `connected`/`not connected` depending on whether Wrangler can find a running `wrangler dev` session for that Worker. For example: +Local development is supported for Service bindings. For each Worker, open a new terminal and use [`wrangler dev`](/workers/wrangler/commands/general/#dev) in the relevant directory. When running `wrangler dev`, service bindings will show as `connected`/`not connected` depending on whether Wrangler can find a running `wrangler dev` session for that Worker. 
For example: ```sh $ wrangler dev diff --git a/src/content/docs/workers/runtime-apis/context.mdx b/src/content/docs/workers/runtime-apis/context.mdx index 31a95158c7a5ae2..a1bca22be4da9ae 100644 --- a/src/content/docs/workers/runtime-apis/context.mdx +++ b/src/content/docs/workers/runtime-apis/context.mdx @@ -178,7 +178,7 @@ Note that `props` values specified in this way are allowed to contain any "persi ### TypeScript types for `ctx.exports` and `ctx.props` -If using TypeScript, you should use [the `wrangler types` command](/workers/wrangler/commands/#types) to auto-generate types for your project. The generated types will ensure `ctx.exports` is typed correctly. +If using TypeScript, you should use [the `wrangler types` command](/workers/wrangler/commands/general/#types) to auto-generate types for your project. The generated types will ensure `ctx.exports` is typed correctly. When declaring an entrypoint class that accepts `props`, make sure to declare it as `extends WorkerEntrypoint`, where `Props` is the type of `ctx.props`. See the example above. diff --git a/src/content/docs/workers/runtime-apis/handlers/scheduled.mdx b/src/content/docs/workers/runtime-apis/handlers/scheduled.mdx index ba661f9a35d50e9..d73b9ba93009cb3 100644 --- a/src/content/docs/workers/runtime-apis/handlers/scheduled.mdx +++ b/src/content/docs/workers/runtime-apis/handlers/scheduled.mdx @@ -13,7 +13,7 @@ When a Worker is invoked via a [Cron Trigger](/workers/configuration/cron-trigge You can test the behavior of your `scheduled()` handler in local development using Wrangler. -Cron Triggers can be tested using `Wrangler` by passing in the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This will expose a `/__scheduled` (or `/cdn-cgi/handler/scheduled` for Python Workers) route which can be used to test using a http request. To simulate different cron patterns, a `cron` query parameter can be passed in. 
+Cron Triggers can be tested using `Wrangler` by passing in the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/general/#dev). This will expose a `/__scheduled` (or `/cdn-cgi/handler/scheduled` for Python Workers) route which can be used to test using an HTTP request. To simulate different cron patterns, a `cron` query parameter can be passed in. ```sh npx wrangler dev --test-scheduled diff --git a/src/content/docs/workers/runtime-apis/headers.mdx b/src/content/docs/workers/runtime-apis/headers.mdx index b57b1ad9a719a98..4e5a45e217e5510 100644 --- a/src/content/docs/workers/runtime-apis/headers.mdx +++ b/src/content/docs/workers/runtime-apis/headers.mdx @@ -32,7 +32,7 @@ headers.get('x-foo'); //=> "hello, world" The Workers implementation of the `Headers` API differs from the web standard in several ways. These differences are intentional, and reflect the server-side nature of the Workers runtime. :::note[TypeScript users] -Workers type definitions (from `@cloudflare/workers-types` or generated via [`wrangler types`](/workers/wrangler/commands/#types)) define a `Headers` type that includes Workers-specific methods like `getAll()`. This type is not directly compatible with the standard `Headers` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions. +Workers type definitions (from `@cloudflare/workers-types` or generated via [`wrangler types`](/workers/wrangler/commands/general/#types)) define a `Headers` type that includes Workers-specific methods like `getAll()`. This type is not directly compatible with the standard `Headers` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions.
::: ### `getAll()` method diff --git a/src/content/docs/workers/runtime-apis/request.mdx b/src/content/docs/workers/runtime-apis/request.mdx index 724060aad6ae502..5fa8ad7d86058e7 100644 --- a/src/content/docs/workers/runtime-apis/request.mdx +++ b/src/content/docs/workers/runtime-apis/request.mdx @@ -432,7 +432,7 @@ Using any other type of `ReadableStream` as the body of a request will result in The Workers implementation of the `Request` interface includes several extensions to the web standard `Request` API. These differences are intentional and provide additional functionality specific to the Workers runtime. :::note[TypeScript users] -Workers type definitions (from `@cloudflare/workers-types` or generated via [`wrangler types`](/workers/wrangler/commands/#types)) define a `Request` type that includes Workers-specific properties like `cf`. This type is not directly compatible with the standard `Request` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions or create a new `Request` object. +Workers type definitions (from `@cloudflare/workers-types` or generated via [`wrangler types`](/workers/wrangler/commands/general/#types)) define a `Request` type that includes Workers-specific properties like `cf`. This type is not directly compatible with the standard `Request` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions or create a new `Request` object. 
::: ### The `cf` property diff --git a/src/content/docs/workers/runtime-apis/response.mdx b/src/content/docs/workers/runtime-apis/response.mdx index 2e183fc3dc446c0..9f18cf85f4aa610 100644 --- a/src/content/docs/workers/runtime-apis/response.mdx +++ b/src/content/docs/workers/runtime-apis/response.mdx @@ -148,7 +148,7 @@ Using any other type of `ReadableStream` as the body of a response will result i The Workers implementation of the `Response` interface includes several extensions to the web standard `Response` API. These differences are intentional and provide additional functionality specific to the Workers runtime. :::note[TypeScript users] -Workers type definitions (from `@cloudflare/workers-types` or generated via [`wrangler types`](/workers/wrangler/commands/#types)) define a `Response` type that includes Workers-specific properties like `cf` and `webSocket`. This type is not directly compatible with the standard `Response` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions. +Workers type definitions (from `@cloudflare/workers-types` or generated via [`wrangler types`](/workers/wrangler/commands/general/#types)) define a `Response` type that includes Workers-specific properties like `cf` and `webSocket`. This type is not directly compatible with the standard `Response` type from `lib.dom.d.ts`. If you are working with code that uses both Workers types and standard web types, you may need to use type assertions. ::: ### The `cf` property diff --git a/src/content/docs/workers/static-assets/get-started.mdx b/src/content/docs/workers/static-assets/get-started.mdx index b1aa514190b5dc8..8ae6907964e9ae9 100644 --- a/src/content/docs/workers/static-assets/get-started.mdx +++ b/src/content/docs/workers/static-assets/get-started.mdx @@ -59,7 +59,7 @@ cd my-static-site ### 2. 
Develop locally -After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) in the project directory to start a local server. This will allow you to preview your project locally during development. +After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/general/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development. ```sh npx wrangler dev @@ -69,7 +69,7 @@ npx wrangler dev Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). -The [`wrangler deploy`](/workers/wrangler/commands/#deploy) will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. +The [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. ```sh npx wrangler deploy @@ -117,7 +117,7 @@ cd my-dynamic-site ### 2. Develop locally -After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) in the project directory to start a local server. This will allow you to preview your project locally during development. +After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/general/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development. ```sh npx wrangler dev @@ -136,7 +136,7 @@ Then, save the files and reload the page.
Your project's output will have change Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). -The [`wrangler deploy`](/workers/wrangler/commands/#deploy) will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. +The [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. ```sh npx wrangler deploy diff --git a/src/content/docs/workers/static-assets/headers.mdx b/src/content/docs/workers/static-assets/headers.mdx index c47c7350f6bfa09..40a44b79b9363fa 100644 --- a/src/content/docs/workers/static-assets/headers.mdx +++ b/src/content/docs/workers/static-assets/headers.mdx @@ -13,7 +13,7 @@ When serving static assets, Workers will attach some headers to the response by - **`Content-Type`** - A `Content-Type` header is attached to the response if one is provided during [the asset upload process](/workers/static-assets/direct-upload/). [Wrangler](/workers/wrangler/commands/#deploy) automatically determines the MIME type of the file, based on its extension. + A `Content-Type` header is attached to the response if one is provided during [the asset upload process](/workers/static-assets/direct-upload/). [Wrangler](/workers/wrangler/commands/general/#deploy) automatically determines the MIME type of the file based on its extension.
- **`Cache-Control: public, max-age=0, must-revalidate`** diff --git a/src/content/docs/workers/static-assets/migration-guides/migrate-from-pages.mdx b/src/content/docs/workers/static-assets/migration-guides/migrate-from-pages.mdx index ad29e20a1c94762..d60dd7f0ceebfc9 100644 --- a/src/content/docs/workers/static-assets/migration-guides/migrate-from-pages.mdx +++ b/src/content/docs/workers/static-assets/migration-guides/migrate-from-pages.mdx @@ -168,7 +168,7 @@ Then, update your configuration file's `main` field to point to the location of ##### Pages Functions with a `functions/` folder -If you use **Pages Functions with a [folder of `functions/`](/pages/functions/)**, you must first compile these functions into a single Worker script with the [`wrangler pages functions build`](/workers/wrangler/commands/#functions-build) command. +If you use **Pages Functions with a [folder of `functions/`](/pages/functions/)**, you must first compile these functions into a single Worker script with the [`wrangler pages functions build`](/workers/wrangler/commands/pages/#functions-build) command. @@ -77,7 +77,7 @@ npx wrangler r2 bucket create Replace `` with your desired bucket name. Note that bucket names must be lowercase and can only contain dashes. -Next, upload a file using the [`wrangler r2 object put`](/workers/wrangler/commands/#r2-object-put) command. +Next, upload a file using the [`wrangler r2 object put`](/workers/wrangler/commands/r2/#r2-object-put) command. ```sh npx wrangler r2 object put -f @@ -236,7 +236,7 @@ app.get("/jobs", async (c) => { After you have created your Worker application and added the required functions, deploy the application. -Before you deploy, you must set the `OPENAI_API_KEY` [secret](/workers/configuration/secrets/) for your application. 
Do this by running the [`wrangler secret put`](/workers/wrangler/commands/#secret-put) command: +Before you deploy, you must set the `OPENAI_API_KEY` [secret](/workers/configuration/secrets/) for your application. Do this by running the [`wrangler secret put`](/workers/wrangler/commands/general/#secret-put) command: ```sh npx wrangler secret put OPENAI_API_KEY @@ -244,7 +244,7 @@ npx wrangler secret put OPENAI_API_KEY To deploy your Worker application to the Cloudflare global network: -1. Make sure you are in your Worker project's directory, then run the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command: +1. Make sure you are in your Worker project's directory, then run the [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) command: ```sh npx wrangler deploy diff --git a/src/content/docs/workers/tutorials/github-sms-notifications-using-twilio.mdx b/src/content/docs/workers/tutorials/github-sms-notifications-using-twilio.mdx index 97d38cdcf82320d..fd05e0bd00a76fc 100644 --- a/src/content/docs/workers/tutorials/github-sms-notifications-using-twilio.mdx +++ b/src/content/docs/workers/tutorials/github-sms-notifications-using-twilio.mdx @@ -148,7 +148,7 @@ function checkSignature(text, headers, githubSecretToken) { } ``` -To make this work, you need to use [`wrangler secret put`](/workers/wrangler/commands/#secret-put) to set your `GITHUB_SECRET_TOKEN`. This token is the secret you picked earlier when configuring you GitHub webhook: +To make this work, you need to use [`wrangler secret put`](/workers/wrangler/commands/general/#secret-put) to set your `GITHUB_SECRET_TOKEN`. This token is the secret you picked earlier when configuring your GitHub webhook: ```sh npx wrangler secret put GITHUB_SECRET_TOKEN @@ -206,7 +206,7 @@ async function sendText(accountSid, authToken, message) { } ``` -To make this work, you need to set some secrets to hide your `ACCOUNT_SID` and `AUTH_TOKEN` from the source code.
You can set secrets with [`wrangler secret put`](/workers/wrangler/commands/#secret-put) in your command line. +To make this work, you need to set some secrets to hide your `ACCOUNT_SID` and `AUTH_TOKEN` from the source code. You can set secrets with [`wrangler secret put`](/workers/wrangler/commands/general/#secret-put) in your command line. ```sh npx wrangler secret put TWILIO_ACCOUNT_SID diff --git a/src/content/docs/workers/tutorials/handle-form-submissions-with-airtable.mdx b/src/content/docs/workers/tutorials/handle-form-submissions-with-airtable.mdx index ef7d30a83950795..57ec7b2af2806ba 100644 --- a/src/content/docs/workers/tutorials/handle-form-submissions-with-airtable.mdx +++ b/src/content/docs/workers/tutorials/handle-form-submissions-with-airtable.mdx @@ -130,7 +130,7 @@ You will also need to create a **Personal access token** that you'll use to acce - Scope: the `data.records:write` scope must be set on the token - Access: access should be granted to the base you have been working with in this tutorial -The results access token should now be set in your application. To make the token available in your codebase, use the [`wrangler secret`](/workers/wrangler/commands/#secret) command. The `secret` command encrypts and stores environment variables for use in your function, without revealing them to users. +The resulting access token should now be set in your application. To make the token available in your codebase, use the [`wrangler secret`](/workers/wrangler/commands/general/#secret) command. The `secret` command encrypts and stores environment variables for use in your function, without revealing them to users.
Run `wrangler secret put`, passing `AIRTABLE_ACCESS_TOKEN` as the name of your secret: diff --git a/src/content/docs/workers/tutorials/openai-function-calls-workers.mdx b/src/content/docs/workers/tutorials/openai-function-calls-workers.mdx index 6f1f9747205e45a..97e84ebe4975ef9 100644 --- a/src/content/docs/workers/tutorials/openai-function-calls-workers.mdx +++ b/src/content/docs/workers/tutorials/openai-function-calls-workers.mdx @@ -95,7 +95,7 @@ async fetch(request, env, ctx) { }, ``` -Use [`wrangler secret put`](/workers/wrangler/commands/#secret-put) to set `OPENAI_API_KEY`. This [secret's](/workers/configuration/secrets/) value is the API key you created earlier in the OpenAI dashboard: +Use [`wrangler secret put`](/workers/wrangler/commands/general/#secret-put) to set `OPENAI_API_KEY`. This [secret's](/workers/configuration/secrets/) value is the API key you created earlier in the OpenAI dashboard: ```sh npx wrangler secret put diff --git a/src/content/docs/workers/tutorials/postgres.mdx b/src/content/docs/workers/tutorials/postgres.mdx index 7abfb1c8a1eb90a..0782ba3024fc175 100644 --- a/src/content/docs/workers/tutorials/postgres.mdx +++ b/src/content/docs/workers/tutorials/postgres.mdx @@ -92,7 +92,7 @@ postgresql://username:password@host:port/database Replace `username`, `password`, `host`, `port`, and `database` with the appropriate values for your PostgreSQL database. -Set your connection string as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text. Use [`wrangler secret put`](/workers/wrangler/commands/#secret) with the example variable name `DB_URL`: +Set your connection string as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text. 
Use [`wrangler secret put`](/workers/wrangler/commands/general/#secret) with the example variable name `DB_URL`: ```sh npx wrangler secret put DB_URL @@ -131,7 +131,7 @@ Configure each database parameter as an [environment variable](/workers/configur -To set your password as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text, use [`wrangler secret put`](/workers/wrangler/commands/#secret). `DB_PASSWORD` is an example variable name for this secret to be accessed in your Worker: +To set your password as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text, use [`wrangler secret put`](/workers/wrangler/commands/general/#secret). `DB_PASSWORD` is an example variable name for this secret to be accessed in your Worker: ```sh npx wrangler secret put DB_PASSWORD diff --git a/src/content/docs/workers/tutorials/upload-assets-with-r2.mdx b/src/content/docs/workers/tutorials/upload-assets-with-r2.mdx index 4be8dec4a750e82..69099f0bc925e4e 100644 --- a/src/content/docs/workers/tutorials/upload-assets-with-r2.mdx +++ b/src/content/docs/workers/tutorials/upload-assets-with-r2.mdx @@ -124,7 +124,7 @@ The code written above fetches and returns data from the R2 bucket when a `GET` ## Upload securely to an R2 bucket -Next, you will add the ability to upload to your R2 bucket using authentication. To securely authenticate your upload requests, use [Wrangler's secret capability](/workers/wrangler/commands/#secret). Wrangler was installed when you ran the `create cloudflare@latest` command. +Next, you will add the ability to upload to your R2 bucket using authentication. To securely authenticate your upload requests, use [Wrangler's secret capability](/workers/wrangler/commands/general/#secret). Wrangler was installed when you ran the `create cloudflare@latest` command. Create a secret value of your choice -- for instance, a random string or password. 
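The R2 upload tutorial above gates uploads behind a shared secret stored with `wrangler secret put`. Inside the Worker this boils down to comparing a request header against the stored secret; the idea can be sketched outside the Workers runtime in Python (the header name `X-Custom-Auth-Key` and the literal secret are illustrative, not taken from the tutorial):

```python
import hmac

AUTH_SECRET = "example-secret-value"  # in a Worker this would come from env.AUTH_SECRET

def is_authorized(headers):
    """Return True when the request carries the shared upload secret.

    compare_digest performs a constant-time comparison, avoiding leaking
    the secret's contents through timing differences.
    """
    supplied = headers.get("X-Custom-Auth-Key", "")
    return hmac.compare_digest(supplied, AUTH_SECRET)

print(is_authorized({"X-Custom-Auth-Key": "example-secret-value"}))  # True
print(is_authorized({"X-Custom-Auth-Key": "wrong"}))                 # False
```

A shared secret like this only protects the upload route; it is a minimal scheme, and anything stronger (signed URLs, Access policies) is out of scope for the tutorial.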
Using the Wrangler CLI, add the secret to your project as `AUTH_SECRET`: diff --git a/src/content/docs/workers/vite-plugin/reference/migrating-from-wrangler-dev.mdx b/src/content/docs/workers/vite-plugin/reference/migrating-from-wrangler-dev.mdx index 2a52740dcfee478..880982f5e5f2eb0 100644 --- a/src/content/docs/workers/vite-plugin/reference/migrating-from-wrangler-dev.mdx +++ b/src/content/docs/workers/vite-plugin/reference/migrating-from-wrangler-dev.mdx @@ -6,7 +6,7 @@ sidebar: description: Migrating from wrangler dev to the Vite plugin --- -In most cases, migrating from [`wrangler dev`](/workers/wrangler/commands/#dev) is straightforward and you can follow the instructions in [Get started](/workers/vite-plugin/get-started/). +In most cases, migrating from [`wrangler dev`](/workers/wrangler/commands/general/#dev) is straightforward and you can follow the instructions in [Get started](/workers/vite-plugin/get-started/). There are a few key differences to highlight: ## Input and output Worker config files diff --git a/src/content/docs/workers/wrangler/bundling.mdx b/src/content/docs/workers/wrangler/bundling.mdx index 0d2b7ebaa3128f8..b3a8d2a337b5dd5 100644 --- a/src/content/docs/workers/wrangler/bundling.mdx +++ b/src/content/docs/workers/wrangler/bundling.mdx @@ -94,7 +94,7 @@ Disabling bundling is not recommended in most scenarios. Use this option only wh If your build tooling already produces build artifacts suitable for direct deployment to Cloudflare, you can opt out of bundling by using the `--no-bundle` command line flag: `npx wrangler deploy --no-bundle`. If you opt out of bundling, Wrangler will not process your code and some features introduced by Wrangler bundling (for example minification, and polyfills injection) will not be available. 
-Use [Custom Builds](/workers/wrangler/custom-builds/) to customize what Wrangler will bundle and upload to the Cloudflare global network when you use [`wrangler dev`](/workers/wrangler/commands/#dev) and [`wrangler deploy`](/workers/wrangler/commands/#deploy).
+Use [Custom Builds](/workers/wrangler/custom-builds/) to customize what Wrangler will bundle and upload to the Cloudflare global network when you use [`wrangler dev`](/workers/wrangler/commands/general/#dev) and [`wrangler deploy`](/workers/wrangler/commands/general/#deploy).

## Generated Wrangler configuration
diff --git a/src/content/docs/workers/wrangler/configuration.mdx b/src/content/docs/workers/wrangler/configuration.mdx
index 062e98ee70e3714..bda6d3be4bd9638 100644
--- a/src/content/docs/workers/wrangler/configuration.mdx
+++ b/src/content/docs/workers/wrangler/configuration.mdx
@@ -236,7 +236,7 @@ The `main` key is optional for assets-only Workers.
  - Configures static assets that will be served. Refer to [Assets](/workers/static-assets/binding/) for more details.
* `migrations`
- - Maps a Durable Object from a class name to a runtime state. This communicates changes to the Durable Object (creation / deletion / rename / transfer) to the Workers runtime and provides the runtime with instructions on how to deal with those changes. Refer to [Durable Objects migrations](/durable-objects/reference/durable-objects-migrations/#durable-object-migrations-in-wranglertoml).
+ - Maps a Durable Object from a class name to a runtime state. This communicates changes to the Durable Object (creation / deletion / rename / transfer) to the Workers runtime and provides the runtime with instructions on how to deal with those changes. Refer to [Durable Objects migrations](/durable-objects/reference/durable-objects-migrations/#migration-wrangler-configuration).
* `placement`
  - Configures where your Worker runs to minimize latency to back-end services. Refer to [Placement](/workers/configuration/placement/).
@@ -552,7 +552,7 @@ To bind D1 databases to your Worker, assign an array of the below object to the

- `migrations_dir`
  - The migration directory containing the migration files. By default, `wrangler d1 migrations create` creates a folder named `migrations`. You can use `migrations_dir` to specify a different folder containing the migration files (for example, if you have a mono-repo setup, and want to use a single D1 instance across your apps/packages).
- - For more information, refer to [D1 Wrangler `migrations` commands](/workers/wrangler/commands/#migrations-create) and [D1 migrations](/d1/reference/migrations/).
+ - For more information, refer to [D1 Wrangler `migrations` commands](/workers/wrangler/commands/d1/#migrations-create) and [D1 migrations](/d1/reference/migrations/).

:::note
@@ -1495,7 +1495,7 @@ You can configure various aspects of local development, such as the local protoc

### Secrets

-[Secrets](/workers/configuration/secrets/) are a type of binding that allow you to [attach encrypted text values](/workers/wrangler/commands/#secret) to your Worker.
+[Secrets](/workers/configuration/secrets/) are a type of binding that allow you to [attach encrypted text values](/workers/wrangler/commands/general/#secret) to your Worker.
@@ -1600,7 +1600,7 @@ In many cases, this allows you to work provide just enough of an API to make a d

[Source maps](/workers/observability/source-maps/) translate compiled and minified code back to the original code that you wrote. Source maps are combined with the stack trace returned by the JavaScript runtime to present you with a stack trace.

- `upload_source_maps`
- - When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/#versions-deploy).
+ - When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/general/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/general/#versions-deploy).

Example:
diff --git a/src/content/docs/workers/wrangler/deprecations.mdx b/src/content/docs/workers/wrangler/deprecations.mdx
index d1029ff47f0c4d7..1dec24ebb06d78d 100644
--- a/src/content/docs/workers/wrangler/deprecations.mdx
+++ b/src/content/docs/workers/wrangler/deprecations.mdx
@@ -35,13 +35,13 @@ Use `npm create cloudflare@latest` for new Workers and Pages projects.

The `wrangler deploy` command is deprecated, but still active in v3. `wrangler deploy` will be fully removed in v4.

-Use [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) to deploy Workers.
+Use [`npx wrangler deploy`](/workers/wrangler/commands/general/#deploy) to deploy Workers.

#### `pages publish`

The `wrangler pages publish` command is deprecated, but still active in v3. `wrangler pages publish` will be fully removed in v4.

-Use [`wrangler pages deploy`](/workers/wrangler/commands/#deploy-1) to deploy Pages.
+Use [`wrangler pages deploy`](/workers/wrangler/commands/general/#deploy-1) to deploy Pages.

#### `version`
diff --git a/src/content/docs/workers/wrangler/environments.mdx b/src/content/docs/workers/wrangler/environments.mdx
index 4463008d6db4ecb..95ca3edaf84186d 100644
--- a/src/content/docs/workers/wrangler/environments.mdx
+++ b/src/content/docs/workers/wrangler/environments.mdx
@@ -152,7 +152,7 @@ In the example below, we have two Workers, both with a `staging` environment. `w

### Secrets for production

-You may assign environment-specific [secrets](/workers/configuration/secrets/) by running the command [`wrangler secret put -env`](/workers/wrangler/commands/#secret-put). You can also create `dotenv` type files named `.dev.vars.`.
+You may assign environment-specific [secrets](/workers/configuration/secrets/) by running the command [`wrangler secret put -env`](/workers/wrangler/commands/general/#secret-put). You can also create `dotenv` type files named `.dev.vars.`.

Like other environment variables, secrets are [non-inheritable](/workers/wrangler/configuration/#non-inheritable-keys) and must be defined per environment.
diff --git a/src/content/docs/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration.mdx b/src/content/docs/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration.mdx
index d5f7f60520218f3..2c60d40fa1a0467 100644
--- a/src/content/docs/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration.mdx
+++ b/src/content/docs/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration.mdx
@@ -168,7 +168,7 @@ Cloudflare will continue to support `rust` and `webpack` project types, but reco
  - Configures cron triggers for running a Worker on a schedule.
- `usage_model` inherited optional
- - Specifies the [Usage Model](/workers/platform/pricing/#workers) for your Worker. There are two options - [`bundled`](/workers/platform/limits/#worker-limits) and [`unbound`](/workers/platform/limits/#worker-limits). For newly created Workers, if the Usage Model is omitted it will be set to the [default Usage Model set on the account](https://dash.cloudflare.com/?account=workers/default-usage-model). For existing Workers, if the Usage Model is omitted, it will be set to the Usage Model configured in the dashboard for that Worker.
+ - Specifies the [Usage Model](/workers/platform/pricing/#workers) for your Worker. There are two options - [`bundled`](/workers/platform/limits/#account-plan-limits) and [`unbound`](/workers/platform/limits/#account-plan-limits). For newly created Workers, if the Usage Model is omitted it will be set to the [default Usage Model set on the account](https://dash.cloudflare.com/?account=workers/default-usage-model). For existing Workers, if the Usage Model is omitted, it will be set to the Usage Model configured in the dashboard for that Worker.
- `build` top level optional
  - Configures a custom build step to be run by Wrangler when building your Worker. Refer to the [custom builds documentation](#build) for more details.
@@ -220,7 +220,7 @@ Alternatively, you can define `vars` using an inline table format. This style sh

:::note

-Secrets should be handled using the [`wrangler secret`](/workers/wrangler/commands/#secret) command.
+Secrets should be handled using the [`wrangler secret`](/workers/wrangler/commands/general/#secret) command.

:::
diff --git a/src/content/docs/workflows/build/events-and-parameters.mdx b/src/content/docs/workflows/build/events-and-parameters.mdx
index 55778fc91524d4d..b4ecb6776d5b77a 100644
--- a/src/content/docs/workflows/build/events-and-parameters.mdx
+++ b/src/content/docs/workflows/build/events-and-parameters.mdx
@@ -21,7 +21,7 @@ Events are a powerful part of a Workflow, as you often want a Workflow to act on

You can pass parameters to a Workflow in three ways:

-- As an optional argument to the `create` method on a [Workflow binding](/workers/wrangler/commands/#trigger) when triggering a Workflow from a Worker.
+- As an optional argument to the `create` method on a [Workflow binding](/workers/wrangler/commands/general/#trigger) when triggering a Workflow from a Worker.
- Via the `--params` flag when using the `wrangler` CLI to trigger a Workflow.
- Via the `step.waitForEvent` API, which allows a Workflow instance to wait for an event (and optional data) to be received _while it is running_. Workflow instances can be sent events from external services over HTTP or via the Workers API for Workflows.
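As context for the `events-and-parameters` hunk above, a minimal sketch of what passing `params` to a Workflow binding's `create` method looks like. The binding name `MY_WORKFLOW` and the payload are illustrative, and the binding is mocked locally so the sketch runs outside the Workers runtime, where the real binding would come from the `env` object:

```typescript
// Sketch: passing params when triggering a Workflow from a Worker.
import { randomUUID } from "node:crypto";

interface WorkflowInstance {
  id: string;
}

interface WorkflowBinding {
  create(options?: { id?: string; params?: unknown }): Promise<WorkflowInstance>;
}

// Local stand-in for the runtime binding so this file is runnable anywhere.
const MY_WORKFLOW: WorkflowBinding = {
  async create(options) {
    return { id: options?.id ?? randomUUID() };
  },
};

// In a deployed Worker, the params object is serialized and delivered to the
// Workflow's run() handler as the event payload.
async function triggerWelcomeFlow(userId: string): Promise<string> {
  const instance = await MY_WORKFLOW.create({
    params: { userId, action: "welcome-email" },
  });
  return instance.id;
}
```

The same payload could equally be supplied with the CLI's `--params` flag mentioned in the hunk.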
diff --git a/src/content/docs/workflows/build/local-development.mdx b/src/content/docs/workflows/build/local-development.mdx
index 45190af930d6542..98af1c6965fb571 100644
--- a/src/content/docs/workflows/build/local-development.mdx
+++ b/src/content/docs/workflows/build/local-development.mdx
@@ -50,7 +50,7 @@ Your worker has access to the following bindings:

Local development sessions create a standalone, local-only environment that mirrors the production environment Workflows runs in so you can test your Workflows _before_ you deploy to production.

-Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/#dev) to learn more about how to configure a local development session.
+Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/general/#dev) to learn more about how to configure a local development session.

## Known Issues
diff --git a/src/content/docs/workflows/build/trigger-workflows.mdx b/src/content/docs/workflows/build/trigger-workflows.mdx
index 3c35a9ba6a8716d..4db3efcfe6205ac 100644
--- a/src/content/docs/workflows/build/trigger-workflows.mdx
+++ b/src/content/docs/workflows/build/trigger-workflows.mdx
@@ -13,7 +13,7 @@ You can trigger Workflows both programmatically and via the Workflows APIs, incl

1. With [Workers](/workers) via HTTP requests in a `fetch` handler, or bindings from a `queue` or `scheduled` handler
2. Using the [Workflows REST API](/api/resources/workflows/methods/list/)
-3. Via the [wrangler CLI](/workers/wrangler/commands/#workflows) in your terminal
+3. Via the [wrangler CLI](/workers/wrangler/commands/workflows/#workflows) in your terminal

## Workers API (Bindings)
diff --git a/src/content/docs/workflows/build/workers-api.mdx b/src/content/docs/workflows/build/workers-api.mdx
index 358ee5f72aa2d14..86a65a39c848719 100644
--- a/src/content/docs/workflows/build/workers-api.mdx
+++ b/src/content/docs/workflows/build/workers-api.mdx
@@ -265,7 +265,7 @@ Ensure you have a compatibility date `2024-10-22` or later installed when bindin
:::

The `Workflow` type provides methods that allow you to create, inspect the status, and manage running Workflow instances from within a Worker script.
-It is part of the generated types produced by [`wrangler types`](/workers/wrangler/commands/#types).
+It is part of the generated types produced by [`wrangler types`](/workers/wrangler/commands/general/#types).

```ts title="./worker-configuration.d.ts"
interface Env {
diff --git a/src/content/docs/workflows/python/index.mdx b/src/content/docs/workflows/python/index.mdx
index 0b9f09fd8f64bae..92f23708023083f 100644
--- a/src/content/docs/workflows/python/index.mdx
+++ b/src/content/docs/workflows/python/index.mdx
@@ -85,7 +85,7 @@ To run a Python Workflow locally, use [Wrangler](/workers/wrangler/), the CLI fo

npx wrangler@latest dev
```

-To deploy a Python Workflow to Cloudflare, run [`wrangler deploy`](/workers/wrangler/commands/#deploy):
+To deploy a Python Workflow to Cloudflare, run [`wrangler deploy`](/workers/wrangler/commands/general/#deploy):

```bash
npx wrangler@latest deploy
diff --git a/src/content/docs/workflows/reference/limits.mdx b/src/content/docs/workflows/reference/limits.mdx
index 1fae0e94bb19952..428d33b4d7c09c2 100644
--- a/src/content/docs/workflows/reference/limits.mdx
+++ b/src/content/docs/workflows/reference/limits.mdx
@@ -28,7 +28,7 @@ Workflows cannot be deployed to Workers for Platforms namespaces, as Workflows d

| Maximum state that can be persisted per Workflow instance | 100MB | 1GB |
| Maximum `step.sleep` duration | 365 days (1 year) | 365 days (1 year) |
| Maximum steps per Workflow [^5] | 1,024 | 10,000 (default) / configurable up to 25,000 |
-| Maximum Workflow executions | 100,000 per day [shared with Workers daily limit](/workers/platform/limits/#worker-limits) | Unlimited |
+| Maximum Workflow executions | 100,000 per day [shared with Workers daily limit](/workers/platform/limits/#account-plan-limits) | Unlimited |
| Concurrent Workflow instances (executions) per account [^7] | 100 | 10,000 |
| Maximum Workflow instance creation rate [^8] | 100 per second [^6] | 100 per second [^6] |
| Maximum number of [queued instances](/workflows/observability/metrics-analytics/#event-types) | 100,000 | 1,000,000 |
@@ -109,7 +109,7 @@ Each Workflow instance supports 10,000 steps by default, but this can be increas

### Increasing Workflow CPU limits

-Workflows are Worker scripts, and share the same [per invocation CPU limits](/workers/platform/limits/#worker-limits) as any Workers do. Note that CPU time is active processing time: not time spent waiting on network requests, storage calls, or other general I/O, which don't count towards your CPU time or Workflows compute consumption.
+Workflows are Worker scripts, and share the same [per invocation CPU limits](/workers/platform/limits/#account-plan-limits) as any Workers do. Note that CPU time is active processing time: not time spent waiting on network requests, storage calls, or other general I/O, which don't count towards your CPU time or Workflows compute consumption.

If your Workflow exceeds its CPU time limit, it will throw the following error:

@@ -117,7 +117,7 @@ If your Workflow exceeds its CPU time limit, it will throw the following error:

Error: Worker exceeded CPU time limit.
```

-This will appear as `exceededCpu` in [`wrangler tail`](/workers/wrangler/commands/#tail) outcomes and as `exceededResources` in [Workers metrics](/workers/observability/metrics-and-analytics/#invocation-statuses).
+This will appear as `exceededCpu` in [`wrangler tail`](/workers/wrangler/commands/general/#tail) outcomes and as `exceededResources` in [Workers metrics](/workers/observability/metrics-and-analytics/#invocation-statuses).

By default, the maximum CPU time per Workflow invocation is set to 30 seconds, but can be increased for all invocations associated with a Workflow definition by setting `limits.cpu_ms` in your Wrangler configuration:

@@ -147,7 +147,7 @@ If your Workflow exceeds its subrequest limit, it will throw the following error

Error: Too many subrequests.
```

-This will appear as `exceededResources` in [Workers metrics](/workers/observability/metrics-and-analytics/#invocation-statuses) and as `exception` in [`wrangler tail`](/workers/wrangler/commands/#tail) outcomes.
+This will appear as `exceededResources` in [Workers metrics](/workers/observability/metrics-and-analytics/#invocation-statuses) and as `exception` in [`wrangler tail`](/workers/wrangler/commands/general/#tail) outcomes.

By default, the maximum number of subrequests per Workflow instance is 10,000 on Workers Paid plans, but this can be increased up to 10 million by setting `limits.subrequests` in your Wrangler configuration:
diff --git a/src/content/docs/workflows/reference/pricing.mdx b/src/content/docs/workflows/reference/pricing.mdx
index 2ecd673b1f07d5a..75a5f1262c3e5ba 100644
--- a/src/content/docs/workflows/reference/pricing.mdx
+++ b/src/content/docs/workflows/reference/pricing.mdx
@@ -49,7 +49,7 @@ Storage is billed using gigabyte-month (GB-month) as the billing metric, identic

* Storage is calculated across all instances, and includes running, errored, sleeping and completed instances.
* By default, instance state is retained for [3 days on the Free plan](/workflows/reference/limits/) and [7 days on the Paid plan](/workflows/reference/limits/).
* When creating a Workflow instance, you can set a shorter state retention period if you do not need to retain state for errored or completed Workflows.
-* Deleting instances via the [Workers API](/workflows/build/workers-api/), [Wrangler CLI](/workers/wrangler/commands/#workflows), REST API, or dashboard will free up storage. Note that it may take a few minutes for storage limits to update.
+* Deleting instances via the [Workers API](/workflows/build/workers-api/), [Wrangler CLI](/workers/wrangler/commands/workflows/#workflows), REST API, or dashboard will free up storage. Note that it may take a few minutes for storage limits to update.

An instance that attempts to store state when your have reached the storage limit on the Free plan will cause an error to be thrown.
diff --git a/src/content/partials/cloudflare-one/email-security/onboarding-prerequisites.mdx b/src/content/partials/cloudflare-one/email-security/onboarding-prerequisites.mdx
index 99c76fc9d593871..49cd1e994f8a9f8 100644
--- a/src/content/partials/cloudflare-one/email-security/onboarding-prerequisites.mdx
+++ b/src/content/partials/cloudflare-one/email-security/onboarding-prerequisites.mdx
@@ -4,5 +4,5 @@
---

- A [Cloudflare account](https://dash.cloudflare.com/sign-up)
-- A [Zero Trust organization](/cloudflare-one/setup/#create-a-zero-trust-organization)
+- A [Zero Trust organization](/cloudflare-one/setup/#2-create-a-zero-trust-organization)
- A domain to protect
\ No newline at end of file
diff --git a/src/content/partials/durable-objects/api-storage-other-methods.mdx b/src/content/partials/durable-objects/api-storage-other-methods.mdx
index 44553d94354648b..a032c0d88ffc79d 100644
--- a/src/content/partials/durable-objects/api-storage-other-methods.mdx
+++ b/src/content/partials/durable-objects/api-storage-other-methods.mdx
@@ -32,7 +32,7 @@ import { Type, MetaInfo } from "~/components";

- `txn`

  - Provides access to the `put()`, `get()`, `delete()`, and `list()` methods documented above to run in the current transaction context. In order to get transactional behavior within a transaction closure, you must call the methods on the `txn` Object instead of on the top-level `ctx.storage` Object.

Also supports a `rollback()` function that ensures any changes made during the transaction will be rolled back rather than committed. After `rollback()` is called, any subsequent operations on the `txn` Object will fail with an exception. `rollback()` takes no parameters and returns nothing to the caller.
- - When using [the SQLite-backed storage engine](/durable-objects/best-practices/access-durable-objects-storage/#sqlite-storage-backend), the `txn` object is obsolete. Any storage operations performed directly on the `ctx.storage` object, including SQL queries using [`ctx.storage.sql.exec()`](/durable-objects/api/sqlite-storage-api/#exec), will be considered part of the transaction.
+ - When using [the SQLite-backed storage engine](/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class), the `txn` object is obsolete. Any storage operations performed directly on the `ctx.storage` object, including SQL queries using [`ctx.storage.sql.exec()`](/durable-objects/api/sqlite-storage-api/#exec), will be considered part of the transaction.

### `sync`
diff --git a/src/content/partials/durable-objects/do-faq-limits.mdx b/src/content/partials/durable-objects/do-faq-limits.mdx
index f96ce6e65bced50..b00bd7670473483 100644
--- a/src/content/partials/durable-objects/do-faq-limits.mdx
+++ b/src/content/partials/durable-objects/do-faq-limits.mdx
@@ -25,7 +25,7 @@ Durable Objects are designed such that the number of individual objects in the s

### Can I increase Durable Objects' CPU limit?

-Durable Objects are Worker scripts, and have the same [per invocation CPU limits](/workers/platform/limits/#worker-limits) as any Workers do. Note that CPU time is active processing time: not time spent waiting on network requests, storage calls, or other general I/O, which don't count towards your CPU time or Durable Objects compute consumption.
+Durable Objects are Worker scripts, and have the same [per invocation CPU limits](/workers/platform/limits/#account-plan-limits) as any Workers do. Note that CPU time is active processing time: not time spent waiting on network requests, storage calls, or other general I/O, which don't count towards your CPU time or Durable Objects compute consumption.

By default, the maximum CPU time per Durable Objects invocation (HTTP request, WebSocket message, or Alarm) is set to 30 seconds, but can be increased for all Durable Objects associated with a Durable Object definition by setting `limits.cpu_ms` in your Wrangler configuration:
diff --git a/src/content/partials/durable-objects/do-plans-note.mdx b/src/content/partials/durable-objects/do-plans-note.mdx
index 1dfe793cc087623..36771fa0ccba055 100644
--- a/src/content/partials/durable-objects/do-plans-note.mdx
+++ b/src/content/partials/durable-objects/do-plans-note.mdx
@@ -5,7 +5,7 @@

:::note
Durable Objects are available both on Workers Free and Workers Paid plans.

-- **Workers Free plan**: Only Durable Objects with [SQLite storage backend](/durable-objects/best-practices/access-durable-objects-storage/#wrangler-configuration-for-sqlite-backed-durable-objects) are available.
+- **Workers Free plan**: Only Durable Objects with [SQLite storage backend](/durable-objects/reference/durable-objects-migrations/#create-migration) are available.
- **Workers Paid plan**: Durable Objects with either SQLite storage backend or [key-value storage backend](/durable-objects/reference/durable-objects-migrations/#create-durable-object-class-with-key-value-storage) are available.

If you wish to downgrade from a Workers Paid plan to a Workers Free plan, you must first ensure that you have deleted all Durable Object namespaces with the key-value storage backend.
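The `limits.cpu_ms` setting referenced in the `do-faq-limits` hunk above sits in the Wrangler configuration file. As a sketch only, with an illustrative Worker name, binding, and value (not taken from the docs above), it might look like:

```jsonc
{
  "name": "my-durable-object-worker",
  "main": "src/index.ts",
  "compatibility_date": "2024-10-22",
  "durable_objects": {
    "bindings": [{ "name": "COUNTER", "class_name": "Counter" }]
  },
  "limits": {
    // Raise the per-invocation CPU ceiling above the 30-second default.
    // Value is in milliseconds; 60000 ms = 60 seconds.
    "cpu_ms": 60000
  }
}
```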
diff --git a/src/content/partials/durable-objects/durable-objects-vs-d1.mdx b/src/content/partials/durable-objects/durable-objects-vs-d1.mdx
index 250701de7ca9a23..e23bff19d0acc98 100644
--- a/src/content/partials/durable-objects/durable-objects-vs-d1.mdx
+++ b/src/content/partials/durable-objects/durable-objects-vs-d1.mdx
@@ -22,4 +22,4 @@ Durable Objects require a bit more effort, but in return, give you more flexibil

With SQLite in Durable Objects, you may also need to build some of your own database tooling that comes out-of-the-box with D1.

-SQL query pricing and limits are intended to be identical between D1 ([pricing](/d1/platform/pricing/), [limits](/d1/platform/limits/)) and SQLite in Durable Objects ([pricing](/durable-objects/platform/pricing/#sql-storage-billing), [limits](/durable-objects/platform/limits/)).
+SQL query pricing and limits are intended to be identical between D1 ([pricing](/d1/platform/pricing/), [limits](/d1/platform/limits/)) and SQLite in Durable Objects ([pricing](/durable-objects/platform/pricing/#sqlite-storage-backend), [limits](/durable-objects/platform/limits/)).
diff --git a/src/content/partials/fundamentals/account-permissions-table.mdx b/src/content/partials/fundamentals/account-permissions-table.mdx
index a07438db266c3c4..334e3b7e6d20215 100644
--- a/src/content/partials/fundamentals/account-permissions-table.mdx
+++ b/src/content/partials/fundamentals/account-permissions-table.mdx
@@ -172,7 +172,7 @@ import { Markdown } from "~/components";

| Workers R2 Storage {props.editWord} | Grants write access to [Cloudflare R2 Storage](/r2/). |
| Workers Scripts Read | Grants read access to [Cloudflare Workers scripts](/workers/). |
| Workers Scripts {props.editWord} | Grants write access to [Cloudflare Workers scripts](/workers/). |
-| Workers Tail Read | Grants [`wrangler tail`](/workers/wrangler/commands/#tail) read permissions. |
+| Workers Tail Read | Grants [`wrangler tail`](/workers/wrangler/commands/general/#tail) read permissions. |
| Zero Trust Read | Grants read access to [Cloudflare Zero Trust](/cloudflare-one/) resources. |
| Zero Trust Report | Grants reporting access to [Cloudflare Zero Trust](/cloudflare-one/). |
| Zero Trust {props.editWord} | Grants write access to [Cloudflare Zero Trust](/cloudflare-one/) resources. |
diff --git a/src/content/partials/workers/bindings_per_env.mdx b/src/content/partials/workers/bindings_per_env.mdx
index c68272a5d57dfbf..d878a49738a7ce9 100644
--- a/src/content/partials/workers/bindings_per_env.mdx
+++ b/src/content/partials/workers/bindings_per_env.mdx
@@ -4,9 +4,9 @@

## Local development

-**Local simulations**: During local development, your Worker code always executes locally and bindings connect to locally simulated resources [by default](/workers/development-testing/#remote-bindings). This is supported in [`wrangler dev`](/workers/wrangler/commands/#dev) and the [Cloudflare Vite plugin](/workers/vite-plugin/).
+**Local simulations**: During local development, your Worker code always executes locally and bindings connect to locally simulated resources [by default](/workers/development-testing/#remote-bindings). This is supported in [`wrangler dev`](/workers/wrangler/commands/general/#dev) and the [Cloudflare Vite plugin](/workers/vite-plugin/).

-**Remote binding connections:**: Allows you to connect to remote resources on a [per-binding basis](/workers/development-testing/#remote-bindings). This is supported in [`wrangler dev`](/workers/wrangler/commands/#dev) and the [Cloudflare Vite plugin](/workers/vite-plugin/).
+**Remote binding connections:**: Allows you to connect to remote resources on a [per-binding basis](/workers/development-testing/#remote-bindings). This is supported in [`wrangler dev`](/workers/wrangler/commands/general/#dev) and the [Cloudflare Vite plugin](/workers/vite-plugin/).
| Binding | Local simulations | Remote binding connections |
| --------------------------------------- | :---------------: | :------------------------: |
@@ -36,7 +36,7 @@

During remote development, all of your Worker code is uploaded and executed on Cloudflare's infrastructure, and bindings always connect to remote resources. **We recommend using local development with remote binding connections instead** for faster iteration and debugging.

-Supported only in [`wrangler dev --remote`](/workers/wrangler/commands/#dev) - there is **no Vite plugin equivalent**.
+Supported only in [`wrangler dev --remote`](/workers/wrangler/commands/general/#dev) - there is **no Vite plugin equivalent**.

| Binding | Remote development |
| --------------------------------------- | :----------------: |
diff --git a/src/content/partials/workers/storage-products-table.mdx b/src/content/partials/workers/storage-products-table.mdx
index 1f8daae3a2abb10..9b832c39da70b20 100644
--- a/src/content/partials/workers/storage-products-table.mdx
+++ b/src/content/partials/workers/storage-products-table.mdx
@@ -10,6 +10,6 @@

| Global coordination & stateful serverless | [Durable Objects](/durable-objects/) | Building collaborative applications; global coordination across clients; real-time WebSocket applications; strongly consistent, transactional storage. |
| Lightweight SQL database | [D1](/d1/) | Relational data, including user profiles, product listings and orders, and/or customer data. |
| Task processing, batching and messaging | [Queues](/queues/) | Background job processing (emails, notifications, APIs), message queuing, and deferred tasks. |
-| Vector search & embeddings queries | [Vectorize](/vectorize/) | Storing [embeddings](/workers-ai/models/#text-embeddings) from AI models for semantic search and classification tasks. |
+| Vector search & embeddings queries | [Vectorize](/vectorize/) | Storing [embeddings](/workers-ai/models/?tasks=Text+Embeddings) from AI models for semantic search and classification tasks. |
| Streaming ingestion | [Pipelines](/pipelines/) | Streaming data ingestion and processing, including clickstream analytics, telemetry/log data, and structured data for querying |
| Time-series metrics | [Analytics Engine](/analytics/analytics-engine/) | Write and query high-cardinality time-series data, usage metrics, and service-level telemetry using Workers and/or SQL. |
\ No newline at end of file
diff --git a/src/content/partials/workers/wrangler-typegen.mdx b/src/content/partials/workers/wrangler-typegen.mdx
index 018b08047c3e14f..5489de9d394917d 100644
--- a/src/content/partials/workers/wrangler-typegen.mdx
+++ b/src/content/partials/workers/wrangler-typegen.mdx
@@ -1 +1 @@
-If you're using TypeScript, run [`wrangler types`](/workers/wrangler/commands/#types) whenever you modify your Wrangler configuration file. This generates types for the `env` object based on your bindings, as well as [runtime types](/workers/languages/typescript/).
+If you're using TypeScript, run [`wrangler types`](/workers/wrangler/commands/general/#types) whenever you modify your Wrangler configuration file. This generates types for the `env` object based on your bindings, as well as [runtime types](/workers/languages/typescript/).
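The `wrangler types` command in the final hunk generates an `Env` interface from your bindings. As a rough, self-contained sketch of how code consumes such a generated type, with binding names and shapes invented for illustration (not taken from any generated file), and local stand-ins so it runs outside the Workers runtime:

```typescript
// Sketch: a hand-written stand-in for the Env interface that `wrangler types`
// would emit into worker-configuration.d.ts based on Wrangler config.
interface Env {
  MY_KV: { get(key: string): Promise<string | null> };
  AUTH_SECRET: string; // secrets surface on `env` as plain string values
}

// Typing handlers against Env gives compile-time checks and autocomplete
// for every binding declared in the Wrangler configuration file.
function listBindings(env: Env): string[] {
  return Object.keys(env);
}

// Local stand-in so the sketch is runnable without Cloudflare's runtime.
const fakeEnv: Env = {
  MY_KV: { get: async () => null },
  AUTH_SECRET: "example-secret",
};
```

Rerunning `wrangler types` after a config change keeps this interface in sync with the actual bindings.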