diff --git a/public/__redirects b/public/__redirects
index 8f2e0944cd84db..1d24e3ba36fd52 100644
--- a/public/__redirects
+++ b/public/__redirects
@@ -2421,4 +2421,9 @@
/ai-gateway/guardrails/* /ai-gateway/features/guardrails/:splat 301
/ai-gateway/websockets-api/* /ai-gateway/usage/websockets-api/:splat 301
-
+# Containers
+/containers/image-management /containers/platform-details/image-management 301
+/containers/scaling-and-routing /containers/platform-details/scaling-and-routing 301
+/containers/architecture /containers/platform-details/architecture 301
+/containers/durable-object-methods /containers/platform-details/durable-object-methods 301
+/containers/platform-details /containers/platform-details/architecture 301
\ No newline at end of file
diff --git a/src/content/docs/containers/architecture.mdx b/src/content/docs/containers/architecture.mdx
deleted file mode 100644
index e4e8269b77fe1c..00000000000000
--- a/src/content/docs/containers/architecture.mdx
+++ /dev/null
@@ -1,66 +0,0 @@
----
-pcx_content_type: reference
-title: Architecture
-sidebar:
- order: 9
----
-
-This page describes the architecture of Cloudflare Containers.
-
-## How and where containers run
-
-After you deploy a Worker that uses a Container, your image is uploaded to
-[Cloudflare's Registry](/containers/image-management) and distributed globally to Cloudflare's Network.
-Cloudflare will pre-schedule instances and pre-fetch images across the globe to ensure quick start
-times when scaling up the number of concurrent container instances. This allows you to call
-`env.YOUR_CONTAINER.get(id)` and get a new instance quickly without worrying
-about the underlying scaling.
-
-When a request is made to start a new container instance, the nearest location
-with a pre-fetched image is selected. Subsequent requests to the same instance,
-regardless of where they originate, will be routed to this location as long as
-the instance stays alive.
-
-Starting additional container instances will use other locations with pre-fetched images,
-and Cloudflare will automatically begin prepping additional machines behind the scenes
-for additional scaling and quick cold starts. Because there are a finite number pre-warmed
-locations, some container instances may be started in locations that are farther away from
-the end-user. This is done to ensure that the container instance starts quickly. You are
-only charged for actively running instances and not for any unused pre-warmed images.
-
-Each container instance runs inside its own VM, which provides strong
-isolation from other workloads running on Cloudflare's network. Containers
-should be built for the `linux/amd64` architecture, and should stay within
-[size limits](/containers/platform-details/#limits). Logging, metrics collection, and
-networking are automatically set up on each container.
-
-## Life of a Container Request
-
-When a request is made to any Worker, including one with an associated Container, it is generally handled
-by a datacenter in a location with the best latency between itself and the requesting user.
-A different datacenter may be selected to optimize overall latency, if [Smart Placement](/workers/configuration/smart-placement/)
-is on, or if the nearest location is under heavy load.
-
-When a request is made to a Container instance, it is sent through a Durable Object, which
-can be defined by either using a `DurableObject` or the [`Container` class](/containers/container-package), which
-extends Durable Objects with Container-specific APIs and helpers. We recommend using `Container`, see
-the [`Container` class documentation](/containers/container-package) for more details.
-
-Each Durable Object is a globally routable isolate that can execute code and store state. This allows
-developers to easily address and route to specific container instances (no matter where they are placed),
-define and run hooks on container status changes, execute recurring checks on the instance, and store persistent
-state associated with each instance.
-
-As mentioned above, when a container instance starts, it is launched in the nearest pre-warmed location. This means that
-code in a container is usually executed in a different location than the one handling the Workers request.
-
-:::note
-Currently, Durable Objects may be co-located with their associated Container instance, but often are not.
-
-Cloudflare is currently working on expanding the number of locations in which a Durable Object can run,
-which will allow container instances to always run in the same location as their Durable Object.
-:::
-
-Because all Container requests are passed through a Worker, end-users cannot make TCP or
-UDP requests to a Container instance. If you have a use case that requires inbound TCP
-or UDP from an end-user, please [let us know](https://forms.gle/AGSq54VvUje6kmKu8).
diff --git a/src/content/docs/containers/beta-info.mdx b/src/content/docs/containers/beta-info.mdx
index 4afe156719af3c..b0877ec8f7ed8c 100644
--- a/src/content/docs/containers/beta-info.mdx
+++ b/src/content/docs/containers/beta-info.mdx
@@ -2,7 +2,7 @@
pcx_content_type: reference
title: Beta Info & Roadmap
sidebar:
- order: 2
+ order: 9
---
Currently, Containers are in beta. There are several changes we plan to make prior to GA:
@@ -24,7 +24,7 @@ by calling `get()` on their binding with a unique ID.
We plan to add official support for utilization-based autoscaling and latency-aware load balancing
in the future.
-See the [Autoscaling documentation](/containers/scaling-and-routing) for more information.
+See the [Autoscaling documentation](/containers/platform-details/scaling-and-routing) for more information.
### Reduction of log noise
@@ -38,7 +38,6 @@ We plan to automatically reduce log noise in the future.
The dashboard will be updated to show:
-- the status of Container rollouts
- links from Workers to their associated Containers
### Co-locating Durable Objects and Containers
@@ -71,6 +70,6 @@ There are several areas where we wish to gather feedback from users:
- Do you want to integrate Containers with any other Cloudflare services? If so, which ones and how?
- Do you want more ways to interact with a Container via Workers? If so, how?
- Do you need different mechanisms for routing requests to containers?
-- Do you need different mechanisms for scaling containers? (see [scaling documentation](/containers/scaling-and-routing) for information on autoscaling plans)
+- Do you need different mechanisms for scaling containers? (see [scaling documentation](/containers/platform-details/scaling-and-routing) for information on autoscaling plans)
At any point during the Beta, feel free to [give feedback using this form](https://forms.gle/CscdaEGuw5Hb6H2s7).
diff --git a/src/content/docs/containers/container-package.mdx b/src/content/docs/containers/container-package.mdx
index 81c5b60b713b3b..147ae4ebb8a4e7 100644
--- a/src/content/docs/containers/container-package.mdx
+++ b/src/content/docs/containers/container-package.mdx
@@ -2,13 +2,21 @@
pcx_content_type: navigation
title: Container Package
sidebar:
- order: 8
+ order: 5
---
+import { PackageManagers } from "~/components";
+
When writing code that interacts with a container instance, you can either use a
-Durable Object directly or use the [`Container` module](https://github.com/cloudflare/containers)
+[Durable Object directly](/containers/platform-details/durable-object-methods) or use the [`Container` class](https://github.com/cloudflare/containers)
importable from [`@cloudflare/containers`](https://www.npmjs.com/package/@cloudflare/containers).
+We recommend using the `Container` class for most use cases.
+
+
+
+Then, you can define a class that extends `Container`, and use it in your Worker:
+
```javascript
import { Container } from "@cloudflare/containers";
@@ -16,13 +24,16 @@ class MyContainer extends Container {
defaultPort = 8080;
sleepAfter = "5m";
}
-```
-
-We recommend using the `Container` class for most use cases.
-Install it with `npm install @cloudflare/containers`.
+export default {
+ async fetch(request, env) {
+ // gets the instance named "hello" and forwards the incoming request to it
+ return env.MY_CONTAINER.getByName("hello").fetch(request);
+ },
+};
+```
-The `Container` class extends `DurableObject` so all Durable Object functionality is available.
+The `Container` class extends `DurableObject`, so all [Durable Object](/durable-objects) functionality is available.
It also provides additional functionality and a nice interface for common container behaviors,
such as:
diff --git a/src/content/docs/containers/examples/container-backend.mdx b/src/content/docs/containers/examples/container-backend.mdx
index a2c044927b4c2d..2a50097d134f8c 100644
--- a/src/content/docs/containers/examples/container-backend.mdx
+++ b/src/content/docs/containers/examples/container-backend.mdx
@@ -1,5 +1,4 @@
---
-
summary: A simple frontend app with a containerized backend
pcx_content_type: example
title: Static Frontend, Container Backend
@@ -33,6 +32,7 @@ For a full example, see the [Static Frontend + Container Backend Template](https
{
"class_name": "Backend",
"image": "./Dockerfile",
+ "max_instances": 3
}
],
"durable_objects": {
@@ -166,7 +166,7 @@ select of of N instances of a Container to route requests to.
In the future, we will provide improved latency-aware load balancing and autoscaling.
This will make scaling stateless instances simple and routing more efficient. See the
-[autoscaling documentation](/containers/scaling-and-routing) for more details.
+[autoscaling documentation](/containers/platform-details/scaling-and-routing) for more details.
:::
## Define a backend container
@@ -176,6 +176,7 @@ Your container should be able to handle requests to `/api/widgets`.
In this case, we'll use a simple Golang backend that returns a hard-coded list of widgets.
+
```go
package main
@@ -186,22 +187,23 @@ import (
)
func handler(w http.ResponseWriter, r \*http.Request) {
- widgets := []map[string]interface{}{
- {"id": 1, "name": "Widget A"},
- {"id": 2, "name": "Sprocket B"},
- {"id": 3, "name": "Gear C"},
- }
+ widgets := []map[string]interface{}{
+ {"id": 1, "name": "Widget A"},
+ {"id": 2, "name": "Sprocket B"},
+ {"id": 3, "name": "Gear C"},
+ }
- w.Header().Set("Content-Type", "application/json")
- w.Header().Set("Access-Control-Allow-Origin", "*")
- json.NewEncoder(w).Encode(widgets)
+ w.Header().Set("Content-Type", "application/json")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ json.NewEncoder(w).Encode(widgets)
}
func main() {
- http.HandleFunc("/api/widgets", handler)
- log.Fatal(http.ListenAndServe(":8080", nil))
+ http.HandleFunc("/api/widgets", handler)
+ log.Fatal(http.ListenAndServe(":8080", nil))
}
```
+
diff --git a/src/content/docs/containers/examples/cron.mdx b/src/content/docs/containers/examples/cron.mdx
index fc2b9e593e1122..ac982fedd20994 100644
--- a/src/content/docs/containers/examples/cron.mdx
+++ b/src/content/docs/containers/examples/cron.mdx
@@ -1,5 +1,4 @@
---
-
summary: Running a container on a schedule using Cron Triggers
pcx_content_type: example
title: Cron Container
@@ -43,9 +42,7 @@ Use a cron expression in your Wrangler config to specify the schedule:
},
"migrations": [
{
- "new_sqlite_classes": [
- "CronContainer"
- ],
+ "new_sqlite_classes": ["CronContainer"],
"tag": "v1"
}
]
@@ -61,7 +58,6 @@ import { Container, getContainer } from "@cloudflare/containers";
export class CronContainer extends Container {
sleepAfter = "5m";
- manualStart = true;
}
export default {
@@ -71,13 +67,16 @@ export default {
);
},
+ // scheduled is called when a cron trigger fires
async scheduled(
_controller: any,
env: { CRON_CONTAINER: DurableObjectNamespace },
) {
- await getContainer(env.CRON_CONTAINER).startContainer({
- envVars: {
- MESSAGE: "Start Time: " + new Date().toISOString(),
+ await getContainer(env.CRON_CONTAINER).startAndWaitForPorts({
+ startOptions: {
+ envVars: {
+ MESSAGE: "Start Time: " + new Date().toISOString(),
+ },
},
});
},
diff --git a/src/content/docs/containers/examples/env-vars-and-secrets.mdx b/src/content/docs/containers/examples/env-vars-and-secrets.mdx
index 36c46d0c942dab..973b59f9cf1faf 100644
--- a/src/content/docs/containers/examples/env-vars-and-secrets.mdx
+++ b/src/content/docs/containers/examples/env-vars-and-secrets.mdx
@@ -10,7 +10,7 @@ description: Pass in environment variables and secrets to your container
import { WranglerConfig, PackageManagers } from "~/components";
Environment variables can be passed into a Container using the `envVars` field
-in the `Container` class, or by setting manually when the Container starts.
+in the [`Container`](/containers/container-package) class, or by setting them manually when the Container starts.
Secrets can be passed into a Container by using [Worker Secrets](/workers/configuration/secrets/)
or the [Secret Store](/secrets-store/integrations/workers/), then passing them into the Container
@@ -19,25 +19,21 @@ as environment variables.
These examples show the various ways to pass in secrets and environment variables. In each, we will
be passing in:
-- the variable `"ACCOUNT_NAME"` as a hard-coded environment variable
-- the secret `"CONTAINER_SECRET_KEY"` as a secret from Worker Secrets
-- the secret `"ACCOUNT_API_KEY"` as a secret from the Secret Store
+- the variable `"ENV_VAR"` as a hard-coded environment variable
+- the secret `"WORKER_SECRET"` as a secret from Worker Secrets
+- the secret `"SECRET_STORE_SECRET"` as a secret from the Secret Store
In practice, you may just use one of the methods for storing secrets, but
we will show both for completeness.
## Creating secrets
-First, let's create the `"CONTAINER_SECRET_KEY"` secret in Worker Secrets:
+First, let's create the `"WORKER_SECRET"` secret in Worker Secrets:
-
+
Then, let's create a store called "demo" in the Secret Store, and add
-the `"ACCOUNT_API_KEY"` secret to it:
+the `"SECRET_STORE_SECRET"` secret to it:
For full details on how to create secrets, see the [Workers Secrets documentation](/workers/configuration/secrets/)
@@ -65,13 +61,13 @@ in Wrangler configuration.
{
"name": "my-container-worker",
"vars": {
- "ACCOUNT_NAME": "my-account"
+ "ENV_VAR": "my-env-var"
},
"secrets_store_secrets": [
{
"binding": "SECRET_STORE",
"store_id": "demo",
- "secret_name": "ACCOUNT_API_KEY"
+ "secret_name": "SECRET_STORE_SECRET"
}
]
// rest of the configuration...
@@ -80,11 +76,11 @@ in Wrangler configuration.
-Note that `"CONTAINER_SECRET_KEY"` does not need to be set, at it is automatically
+Note that `"WORKER_SECRET"` does not need to be specified in the Wrangler config file, as it is automatically
added to `env`.
Also note that we did not configure anything specific for environment variables
-or secrets in the container-related portion of wrangler configuration.
+or secrets in the _container-related_ portion of the Wrangler configuration file.
## Using `envVars` on the Container class
@@ -97,9 +93,9 @@ export class MyContainer extends Container {
defaultPort = 8080;
sleepAfter = "10s";
envVars = {
- ACCOUNT_NAME: env.ACCOUNT_NAME,
- ACCOUNT_API_KEY: env.SECRET_STORE.ACCOUNT_API_KEY,
- CONTAINER_SECRET_KEY: env.CONTAINER_SECRET_KEY,
+ WORKER_SECRET: env.WORKER_SECRET,
+ ENV_VAR: env.ENV_VAR,
+ // we can't set the secret store binding as a default here, as getting the secret value is asynchronous
};
}
```
@@ -111,37 +107,39 @@ set as environment variables when it launches.
But what if you want to set environment variables on a per-instance basis?
-In this case, set `manualStart` then use the `start` method to pass in environment variables for each instance.
-We'll assume that we've set additional secrets in the Secret Store.
+In this case, use the `startAndWaitForPorts()` method to pass in environment variables for each instance.
```js
export class MyContainer extends Container {
defaultPort = 8080;
sleepAfter = "10s";
- manualStart = true;
}
export default {
async fetch(request, env) {
if (new URL(request.url).pathname === "/launch-instances") {
let instanceOne = env.MY_CONTAINER.getByName("foo");
- let instanceTwo = env.MY_CONTAINER.getByName("foo");
+ let instanceTwo = env.MY_CONTAINER.getByName("bar");
// Each instance gets a different set of environment variables
- await instanceOne.start({
- envVars: {
- ACCOUNT_NAME: env.ACCOUNT_NAME + "-1",
- ACCOUNT_API_KEY: env.SECRET_STORE.ACCOUNT_API_KEY_ONE,
- CONTAINER_SECRET_KEY: env.CONTAINER_SECRET_KEY_TWO,
+ await instanceOne.startAndWaitForPorts({
+ startOptions: {
+ envVars: {
+ ENV_VAR: env.ENV_VAR + "foo",
+ WORKER_SECRET: env.WORKER_SECRET,
+ SECRET_STORE_SECRET: await env.SECRET_STORE.get(),
+ },
},
});
- await instanceTwo.start({
- envVars: {
- ACCOUNT_NAME: env.ACCOUNT_NAME + "-2",
- ACCOUNT_API_KEY: env.SECRET_STORE.ACCOUNT_API_KEY_TWO,
- CONTAINER_SECRET_KEY: env.CONTAINER_SECRET_KEY_TWO,
+ await instanceTwo.startAndWaitForPorts({
+ startOptions: {
+ envVars: {
+ ENV_VAR: env.ENV_VAR + "bar",
+ WORKER_SECRET: env.ANOTHER_WORKER_SECRET,
+ SECRET_STORE_SECRET: await env.OTHER_SECRET_STORE.get(),
+ },
},
});
return new Response("Container instances launched");
@@ -151,3 +149,7 @@ export default {
},
};
```
+
+## Build-time environment variables
+
+Finally, you can also set build-time environment variables via the `image_vars` field in the Wrangler configuration. These variables are only available while the container image is being built.
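+
+As a minimal sketch of what this could look like (assuming a hypothetical `BUILD_MODE` build argument declared with `ARG` in your Dockerfile; check the [Wrangler configuration reference](/workers/wrangler/configuration/#containers) for the exact field shape):
+
+```json
+{
+  "containers": [
+    {
+      "class_name": "MyContainer",
+      "image": "./Dockerfile",
+      "image_vars": {
+        "BUILD_MODE": "production"
+      }
+    }
+  ]
+}
+```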
diff --git a/src/content/docs/containers/examples/stateless.mdx b/src/content/docs/containers/examples/stateless.mdx
index 9ff6a3249a556e..f9f282405dfe8a 100644
--- a/src/content/docs/containers/examples/stateless.mdx
+++ b/src/content/docs/containers/examples/stateless.mdx
@@ -1,5 +1,4 @@
---
-
summary: Run multiple instances across Cloudflare's network
pcx_content_type: example
title: Stateless Instances
@@ -36,5 +35,5 @@ select of of N instances of a Container to route requests to.
In the future, we will provide improved latency-aware load balancing and autoscaling.
This will make scaling stateless instances simple and routing more efficient. See the
-[autoscaling documentation](/containers/scaling-and-routing) for more details.
+[autoscaling documentation](/containers/platform-details/scaling-and-routing) for more details.
:::
diff --git a/src/content/docs/containers/examples/status-hooks.mdx b/src/content/docs/containers/examples/status-hooks.mdx
index d71385bca9a9ee..bd8143cc5b90c8 100644
--- a/src/content/docs/containers/examples/status-hooks.mdx
+++ b/src/content/docs/containers/examples/status-hooks.mdx
@@ -1,5 +1,4 @@
---
-
summary: Execute Workers code in reaction to Container status changes
pcx_content_type: example
title: Status Hooks
@@ -9,7 +8,7 @@ description: Execute Workers code in reaction to Container status changes
---
When a Container starts, stops, and errors, it can trigger code execution in a Worker
-that has defined status hooks on the `Container` class.
+that has defined status hooks on the `Container` class. Refer to the [Container package docs](https://github.com/cloudflare/containers/blob/main/README.md#lifecycle-hooks) for more details.
```js
import { Container } from '@cloudflare/containers';
diff --git a/src/content/docs/containers/examples/websocket.mdx b/src/content/docs/containers/examples/websocket.mdx
index 3add1eeb9b37d3..be17b092a0b22a 100644
--- a/src/content/docs/containers/examples/websocket.mdx
+++ b/src/content/docs/containers/examples/websocket.mdx
@@ -1,5 +1,4 @@
---
-
summary: Forwarding a Websocket request to a Container
pcx_content_type: example
title: Websocket to Container
@@ -8,11 +7,11 @@ sidebar:
description: Forwarding a Websocket request to a Container
---
-WebSocket requests are automatically forwarded to a container using the default`fetch`
+WebSocket requests are automatically forwarded to a container using the default `fetch`
method on the `Container` class:
```js
-import { Container, getContainer } from "@cloudflare/workers-types";
+import { Container, getContainer } from "@cloudflare/containers";
export class MyContainer extends Container {
defaultPort = 8080;
@@ -27,6 +26,5 @@ export default {
};
```
-Additionally, the `containerFetch` method can be used to forward WebSocket requests as well.
-
+View a full example in the [Container class repository](https://github.com/cloudflare/containers/tree/main/examples/websocket).
{/* TODO: Add more advanced examples - like kicking off a WS request then passing messages to container from the WS */}
diff --git a/src/content/docs/containers/faq.mdx b/src/content/docs/containers/faq.mdx
index e164b6e1e4da25..7aec2e5e4c7f70 100644
--- a/src/content/docs/containers/faq.mdx
+++ b/src/content/docs/containers/faq.mdx
@@ -2,9 +2,7 @@
pcx_content_type: navigation
title: Frequently Asked Questions
sidebar:
- order: 5
- group:
- hideIndex: true
+ order: 10
---
import { WranglerConfig } from "~/components";
@@ -66,22 +64,11 @@ An Example:
## How do container updates and rollouts work?
-When you run `wrangler deploy`, the Worker code is updated immediately and Container
-instances are updated using a rolling deploy strategy. The default rollout configuration is two steps,
-where the first step updates 10% of the instances, and the second step updates the remaining 90%.
-This can be configured in your Wrangler config file using the [`rollout_step_percentage`](/workers/wrangler/configuration#containers) property.
-
-When deploying a change, you can also configure a [`rollout_active_grace_period`](/workers/wrangler/configuration#containers), which is the minimum
-number of seconds to wait before an active container instance becomes eligible for updating during a rollout.
-At that point, the container will be sent at `SIGTERM`, and still has 15 minutes to shut down gracefully.
-If the instance does not stop within 15 minutes, it is forcefully stopped with a `SIGKILL` signal.
-If you have cleanup that must occur before a Container instance is stopped, you should do it during this 15 minute period.
-
-Once stopped, the instance is replaced with a new instance running the updated code. Requests may hang while the container is starting up again.
+See [rollout documentation](/containers/platform-details/rollouts/) for details.
## How does scaling work?
-See [scaling & routing documentation](/containers/scaling-and-routing/) for details.
+See [scaling & routing documentation](/containers/platform-details/scaling-and-routing/) for details.
## What are cold starts? How fast are they?
@@ -98,7 +85,7 @@ on image size and code execution time, among other factors.
## How do I use an existing container image?
-See [image management documentation](/containers/image-management/#using-existing-images) for details.
+See [image management documentation](/containers/platform-details/image-management/#using-existing-images) for details.
## Is disk persistent? What happens to my disk when my container sleeps?
diff --git a/src/content/docs/containers/get-started.mdx b/src/content/docs/containers/get-started.mdx
index 5f6b9926a39e1b..d8f679e99e80e4 100644
--- a/src/content/docs/containers/get-started.mdx
+++ b/src/content/docs/containers/get-started.mdx
@@ -2,7 +2,7 @@
pcx_content_type: get-started
title: Getting started
sidebar:
- order: 1
+ order: 2
---
import { WranglerConfig, PackageManagers } from "~/components";
@@ -17,7 +17,7 @@ This example Worker should give you a sense for simple Container use, and provid
### Ensure Docker is running locally
In this guide, we will build and push a container image alongside your Worker code. By default, this process uses
-[Docker](https://www.docker.com/) to do so. You must have Docker running locally when you run `wrangler deploy`. For most people, the best way to install Docker is to follow the [docs for installing Docker Desktop](https://docs.docker.com/desktop/).
+[Docker](https://www.docker.com/) to do so. You must have Docker running locally when you run `wrangler deploy`. For most people, the best way to install Docker is to follow the [docs for installing Docker Desktop](https://docs.docker.com/desktop/). Other tools like [Colima](https://github.com/abiosoft/colima) may also work.
You can check that Docker is running properly by running the `docker info` command in your terminal. If Docker is running, the command will succeed. If Docker is not running,
the `docker info` command will hang or return an error including the message "Cannot connect to the Docker daemon".
@@ -29,9 +29,11 @@ the `docker info` command will hang or return an error including the message "Ca
Run the following command to create and deploy a new Worker with a container, from the starter template:
-```sh
-npm create cloudflare@latest -- --template=cloudflare/templates/containers-template
-```
+
When you want to deploy a code change to either the Worker or Container code, you can run the following command using [Wrangler CLI](/workers/wrangler/):
@@ -40,7 +42,7 @@ When you want to deploy a code change to either the Worker or Container code, yo
When you run `wrangler deploy`, the following things happen:
- Wrangler builds your container image using Docker.
-- Wrangler pushes your image to a [Container Image Registry](/containers/image-management/) that is automatically
+- Wrangler pushes your image to a [Container Image Registry](/containers/platform-details/image-management/) that is automatically
integrated with your Cloudflare account.
- Wrangler deploys your Worker, and configures Cloudflare's network to be ready to spawn instances of your container
@@ -67,7 +69,7 @@ And see images deployed to the Cloudflare Registry with the following command:
Now, open the URL for your Worker. It should look something like `https://hello-containers.YOUR_ACCOUNT_NAME.workers.dev`.
-If you make requests to the paths `/container/1` or `/container/2`, these requests are routed to specific containers.
+If you make requests to the paths `/container/1` or `/container/2`, your Worker routes requests to specific containers.
Each different path after "/container/" routes to a unique container.
If you make requests to `/lb`, you will load balanace requests to one of 3 containers chosen at random.
diff --git a/src/content/docs/containers/index.mdx b/src/content/docs/containers/index.mdx
index d44f19633e7741..c72f2a74907996 100644
--- a/src/content/docs/containers/index.mdx
+++ b/src/content/docs/containers/index.mdx
@@ -45,64 +45,73 @@ With Containers you can run:
- Applications and libraries that require a full filesystem, specific runtime, or Linux-like environment
- Existing applications and tools that have been distributed as container images
-Container instances are spun up on-demand and controlled by code you write in your [Worker](/workers). Instead of chaining together API calls or writing Kubernetes operators, you just write JavaScript:
+Container instances are spun up on-demand and controlled by code you write in your [Worker](/workers). Instead of chaining together API calls or writing Kubernetes operators, you just write JavaScript:
```js
import { Container, getContainer } from "@cloudflare/containers";
- export class MyContainer extends Container {
- defaultPort = 4000; // Port the container is listening on
- sleepAfter = "10m"; // Stop the instance if requests not sent for 10 minutes
- }
-
- export default {
- async fetch(request, env) {
- const { "session-id": sessionId } = await request.json();
- // Get the container instance for the given session ID
- const containerInstance = getContainer(env.MY_CONTAINER, sessionId);
- // Pass the request to the container instance on its default port
- return containerInstance.fetch(request);
- },
- };
- ```
-
-
-
- ```json
- {
- "name": "container-starter",
- "main": "src/index.js",
- "compatibility_date": "$today",
- "containers": [
- {
- "class_name": "MyContainer",
- "image": "./Dockerfile",
- "max_instances": 5
- }
- ],
- "durable_objects": {
- "bindings": [
- {
- "class_name": "MyContainer",
- "name": "MY_CONTAINER"
- }
- ]
- },
- "migrations": [
- {
- "new_sqlite_classes": ["MyContainer"],
- "tag": "v1"
- }
- ]
- }
- ```
-
-
+ export class MyContainer extends Container {
+ defaultPort = 4000; // Port the container is listening on
+ sleepAfter = "10m"; // Stop the instance if requests not sent for 10 minutes
+ }
+
+ export default {
+ async fetch(request, env) {
+ const { "session-id": sessionId } = await request.json();
+ // Get the container instance for the given session ID
+ const containerInstance = getContainer(env.MY_CONTAINER, sessionId);
+ // Pass the request to the container instance on its default port
+ return containerInstance.fetch(request);
+ },
+ };
+ ```
+
+
+
+ ```json
+ {
+ "name": "container-starter",
+ "main": "src/index.js",
+ "compatibility_date": "$today",
+ "containers": [
+ {
+ "class_name": "MyContainer",
+ "image": "./Dockerfile",
+ "max_instances": 5
+ }
+ ],
+ "durable_objects": {
+ "bindings": [
+ {
+ "class_name": "MyContainer",
+ "name": "MY_CONTAINER"
+ }
+ ]
+ },
+ "migrations": [
+ {
+ "new_sqlite_classes": ["MyContainer"],
+ "tag": "v1"
+ }
+ ]
+ }
+ ```
+
+
+
-Get started Containers dashboard
+
+ Get started
+
+
+ Containers dashboard
+
---
@@ -144,7 +153,11 @@ regional placement, Workflow and Queue integrations, AI-generated code execution
containers with Wrangler.
-
+
Learn about what limits Containers have and how to work within them.
@@ -158,4 +171,3 @@ regional placement, Workflow and Queue integrations, AI-generated code execution
-
diff --git a/src/content/docs/containers/local-dev.mdx b/src/content/docs/containers/local-dev.mdx
index 912dd9771890a3..bb2512c4445d6b 100644
--- a/src/content/docs/containers/local-dev.mdx
+++ b/src/content/docs/containers/local-dev.mdx
@@ -2,10 +2,10 @@
pcx_content_type: reference
title: Local Development
sidebar:
- order: 3
+ order: 6
---
-You can run both your container and your Worker locally, without additional configuration, by running [`npx wrangler dev`](/workers/wrangler/commands/#dev) (or `vite dev` for Vite projects using the [Cloudflare Vite plugin](/workers/vite-plugin/)) in your project's directory.
+You can run both your container and your Worker locally by simply running [`npx wrangler dev`](/workers/wrangler/commands/#dev) (or `vite dev` for Vite projects using the [Cloudflare Vite plugin](/workers/vite-plugin/)) in your project's directory.
To develop Container-enabled Workers locally, you will need to first ensure that a
Docker compatible CLI tool and Engine are installed. For instance, you could use [Docker Desktop](https://docs.docker.com/desktop/) or [Colima](https://github.com/abiosoft/colima).
@@ -13,11 +13,15 @@ Docker compatible CLI tool and Engine are installed. For instance, you could use
When you start a dev session, your container image will be built or downloaded. If your
[Wrangler configuration](/workers/wrangler/configuration/#containers) sets
the `image` attribute to a local path, the image will be built using the local Dockerfile.
-If the `image` attribute is set to a URL, the image will be pulled from the associated registry.
+If the `image` attribute is set to a URL, the image will be pulled from the Cloudflare registry.
+
+:::note
+Currently, the Cloudflare Vite plugin does not support registry links in local development, unlike `wrangler dev`.
+As a workaround, you can create a minimal Dockerfile that uses `FROM <registry-link>`. Make sure to `EXPOSE` a port for local dev as well.
+:::
Container instances will be launched locally when your Worker code calls to create
-a new container. This may happen when calling `.get()` on a `Container` instance or
-by calling `start()` if `manualStart` is set to `true`. Requests will then automatically be routed to the correct locally-running container.
+a new container. Requests will then automatically be routed to the correct locally-running container.
When the dev session ends, all associated container instances should be stopped, but
local images are not removed, so that they can be reused in subsequent builds.
@@ -25,10 +29,11 @@ local images are not removed, so that they can be reused in subsequent builds.
:::note
If your Worker app creates many container instances, your local machine may not be able to run as many containers concurrently as is possible when you deploy to Cloudflare.
+Also, the `max_instances` configuration option does not apply during local development.
+
Additionally, if you regularly rebuild containers locally, you may want to clear
out old container images (using `docker image prune` or similar) to reduce disk used.
-Also note that the `max_instances` configuration option is only enforced when running in production on Cloudflare's network. This limit does not apply during local development, so you may run more instances than specified.
:::
## Iterating on Container code
diff --git a/src/content/docs/containers/platform-details/architecture.mdx b/src/content/docs/containers/platform-details/architecture.mdx
new file mode 100644
index 00000000000000..0bb461ef79dba9
--- /dev/null
+++ b/src/content/docs/containers/platform-details/architecture.mdx
@@ -0,0 +1,123 @@
+---
+pcx_content_type: reference
+title: Lifecycle of a Container
+sidebar:
+ order: 1
+---
+
+## Deployment
+
+After you deploy an application with a Container, your image is uploaded to
+[Cloudflare's Registry](/containers/platform-details/image-management) and distributed globally to Cloudflare's Network.
+Cloudflare will pre-schedule instances and pre-fetch images across the globe to ensure quick start
+times when scaling up the number of concurrent container instances.
+
+Unlike Workers, which are updated immediately on deploy, container instances are updated using a rolling deploy strategy.
+This allows you to gracefully shut down any running instances during a rollout. Refer to [rollouts](/containers/platform-details/rollouts/) for more details.
+
+## Lifecycle of a Request
+
+### Client to Worker
+
+Recall that Containers are backed by Durable Objects and Workers.
+Requests are first routed through a Worker, and are generally handled
+by a datacenter in a location with the best latency between itself and the requesting user.
+A different datacenter may be selected to optimize overall latency, if [Smart Placement](/workers/configuration/smart-placement/)
+is on, or if the nearest location is under heavy load.
+
+Because all Container requests are passed through a Worker, end-users cannot make non-HTTP TCP or
+UDP requests to a Container instance. If you have a use case that requires inbound TCP
+or UDP from an end-user, please [let us know](https://forms.gle/AGSq54VvUje6kmKu8).
+
+### Worker to Durable Object
+
+From the Worker, a request passes through a Durable Object instance (the [Container package](/containers/container-package) extends a Durable Object class).
+Each Durable Object instance is a globally routable isolate that can execute code and store state. This allows
+developers to easily address and route to specific container instances (no matter where they are placed),
+define and run hooks on container status changes, execute recurring checks on the instance, and store persistent
+state associated with each instance.
+
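+A minimal sketch of what this addressing looks like from a Worker, assuming a `MY_CONTAINER` binding and the helpers from the [Container package](/containers/container-package):
+
+```js
+import { getContainer } from "@cloudflare/containers";
+
+export default {
+  async fetch(request, env) {
+    // requests for the same session ID always route to the same container instance,
+    // no matter which datacenter handled the Worker request
+    const sessionId = new URL(request.url).searchParams.get("session") ?? "default";
+    return getContainer(env.MY_CONTAINER, sessionId).fetch(request);
+  },
+};
+```
+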
+### Starting a Container
+
+When a Durable Object instance requests to start a new container instance, the **nearest location
+with a pre-fetched image** is selected.
+
+:::note
+Currently, Durable Objects may be co-located with their associated Container instance, but often are not.
+
+Cloudflare is currently working on expanding the number of locations in which a Durable Object can run,
+which will allow container instances to always run in the same location as their Durable Object.
+:::
+
+Starting additional container instances will use other locations with pre-fetched images,
+and Cloudflare will automatically begin prepping additional machines behind the scenes
+for additional scaling and quick cold starts. Because there are a finite number of pre-warmed
+locations, some container instances may be started in locations that are farther away from
+the end-user. This is done to ensure that the container instance starts quickly. You are
+only charged for actively running instances and not for any unused pre-warmed images.
+
+#### Cold starts
+
+A cold start is when a container instance is started from a completely stopped state.
+If you call `env.MY_CONTAINER.get(id)` with a completely novel ID and launch
+this instance for the first time, it will result in a cold start.
+This will start the container image from its entrypoint for the first time. Depending
+on what this entrypoint does, it will take a variable amount of time to start.
+
+Container cold starts can often be in the 2-3 second range, but this is dependent
+on image size and code execution time, among other factors.
+
+### Requests to running Containers
+
+When a request _starts_ a new container instance, the nearest location with a pre-fetched image is selected.
+Subsequent requests to a particular instance, regardless of where they originate, will be routed to this location as long as
+the instance stays alive.
+
+However, once that container instance stops and restarts, future requests could be routed to a _different_ location.
+This location will again be the nearest location to the originating request with a pre-fetched image.
+
+### Container runtime
+
+Each container instance runs inside its own VM, which provides strong
+isolation from other workloads running on Cloudflare's network. Containers
+should be built for the `linux/amd64` architecture, and should stay within
+[size limits](/containers/platform-details/limits).
+
+[Logging](/containers/faq/#how-do-container-logs-work), metrics collection, and
+[networking](/containers/faq/#how-do-i-allow-or-disallow-egress-from-my-container) are automatically set up on each container, as configured by the developer.
+
+### Container shutdown
+
+If you do not set [`sleepAfter`](https://github.com/cloudflare/containers/blob/main/README.md#properties)
+on your Container class, or stop the instance manually, the container will shut down soon after it stops receiving requests.
+By setting `sleepAfter`, the container will stay alive for approximately the specified duration after the last request.
+
+You can manually shut down a container instance by calling `stop()` or `destroy()` on it - refer to the [Container package docs](https://github.com/cloudflare/containers/blob/main/README.md#container-methods) for more details.
+
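+As a rough sketch of both behaviors (assuming a `MY_CONTAINER` binding and a hypothetical `/shutdown` route):
+
+```js
+import { Container } from "@cloudflare/containers";
+
+export class MyContainer extends Container {
+  defaultPort = 8080;
+  sleepAfter = "10m"; // instance sleeps roughly 10 minutes after the last request
+}
+
+export default {
+  async fetch(request, env) {
+    const instance = env.MY_CONTAINER.getByName("worker-1");
+    if (new URL(request.url).pathname === "/shutdown") {
+      await instance.stop(); // manually shut the instance down
+      return new Response("Stopped");
+    }
+    return instance.fetch(request);
+  },
+};
+```
+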
+When a container instance is going to be shut down, it is sent a `SIGTERM` signal,
+and then a `SIGKILL` signal after 15 minutes. You should perform any necessary
+cleanup to ensure a graceful shutdown in this time.
+
+#### Persistent disk
+
+All disk is ephemeral. When a Container instance goes to sleep, the next time
+it is started, it will have a fresh disk as defined by its container image.
+Persistent disk is something the Cloudflare team is exploring, but it
+is not slated for the near term.
+
+## An example request
+
+- A developer deploys a Container. Cloudflare automatically readies instances across its Network.
+- A request is made from a client in Bariloche, Argentina. It reaches the Worker in a nearby
+ Cloudflare location in Neuquen, Argentina.
+- This Worker request calls `getContainer(env.MY_CONTAINER, "session-1337")`. Under the hood, this brings up a Durable
+ Object, which then calls `this.ctx.container.start`.
+- This requests the nearest free Container instance. Cloudflare recognizes that an instance is free in Buenos Aires, Argentina, and
+ starts it there.
+- A different user needs to route to the same container. This user's request reaches
+ the Worker running in Cloudflare's location in San Diego, US.
+- The Worker again calls `getContainer(env.MY_CONTAINER, "session-1337")`.
+- If the initial container instance is still running, the request is routed to the original location
+ in Buenos Aires. If the initial container has gone to sleep, Cloudflare will once
+ again try to find the nearest "free" instance of the Container, likely
+ one in North America, and start an instance there.
diff --git a/src/content/docs/containers/durable-object-methods.mdx b/src/content/docs/containers/platform-details/durable-object-methods.mdx
similarity index 100%
rename from src/content/docs/containers/durable-object-methods.mdx
rename to src/content/docs/containers/platform-details/durable-object-methods.mdx
diff --git a/src/content/docs/containers/platform-details/environment-variables.mdx b/src/content/docs/containers/platform-details/environment-variables.mdx
new file mode 100644
index 00000000000000..b26bca0e7911d8
--- /dev/null
+++ b/src/content/docs/containers/platform-details/environment-variables.mdx
@@ -0,0 +1,36 @@
+---
+pcx_content_type: reference
+title: Environment Variables
+sidebar:
+ order: 7
+---
+
+import { WranglerConfig } from "~/components";
+
+## Runtime environment variables
+
+The container runtime automatically sets the following variables:
+
+- `CLOUDFLARE_APPLICATION_ID` - the ID of the Containers application
+- `CLOUDFLARE_COUNTRY_A2` - the [ISO 3166-1 Alpha 2 code](https://www.iso.org/obp/ui/#search/code/) of the country the container is placed in
+- `CLOUDFLARE_LOCATION` - the name of the location the container is placed in
+- `CLOUDFLARE_REGION` - the name of the region the container is placed in
+- `CLOUDFLARE_DURABLE_OBJECT_ID` - the ID of the Durable Object instance that the container is bound to. You can use this to identify particular container instances on the dashboard.
+
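+Inside the container these are ordinary environment variables, so your application reads them as usual. For example, a Node.js-based entrypoint (assuming your image runs Node.js) could log where it is running:
+
+```js
+// runs inside the container, not in the Worker
+console.log(
+  `Instance ${process.env.CLOUDFLARE_DURABLE_OBJECT_ID} is running in ${process.env.CLOUDFLARE_REGION}`,
+);
+```
+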
+## User-defined environment variables
+
+You can set environment variables when defining a Container in your Worker, or when starting a container instance.
+
+For example:
+
+```javascript
+class MyContainer extends Container {
+ defaultPort = 4000;
+ envVars = {
+ MY_CUSTOM_VAR: "value",
+ ANOTHER_VAR: "another_value",
+ };
+}
+```
+
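+Environment variables can also be passed to a single instance when it is started. A rough sketch, using the `startAndWaitForPorts()` options covered in the example linked below (the instance name `"instance-a"` is illustrative):
+
+```js
+const instance = env.MY_CONTAINER.getByName("instance-a");
+await instance.startAndWaitForPorts({
+  startOptions: {
+    envVars: {
+      MY_CUSTOM_VAR: "a per-instance value",
+    },
+  },
+});
+```
+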
+More details about defining environment variables and secrets can be found in [this example](/containers/examples/env-vars-and-secrets).
diff --git a/src/content/docs/containers/image-management.mdx b/src/content/docs/containers/platform-details/image-management.mdx
similarity index 62%
rename from src/content/docs/containers/image-management.mdx
rename to src/content/docs/containers/platform-details/image-management.mdx
index 70b124d6cdf579..cb0798195004a3 100644
--- a/src/content/docs/containers/image-management.mdx
+++ b/src/content/docs/containers/platform-details/image-management.mdx
@@ -11,8 +11,7 @@ import { WranglerConfig, PackageManagers } from "~/components";
## Pushing images during `wrangler deploy`
-When running `wrangler deploy`, if you set the `image` attribute in you [Wrangler configuration](/workers/wrangler/configuration/#containers)
-file to a path, wrangler will build your container image locally using Docker, then push it to a registry run by Cloudflare.
+When running `wrangler deploy`, if you set the `image` attribute in your [Wrangler configuration](/workers/wrangler/configuration/#containers) to a path to a Dockerfile, Wrangler will build your container image locally using Docker, then push it to a registry run by Cloudflare.
This registry is integrated with your Cloudflare account and is backed by [R2](/r2/). All authentication is handled automatically by
Cloudflare both when pushing and pulling images.
@@ -33,16 +32,30 @@ Just provide the path to your Dockerfile:
And deploy your Worker with `wrangler deploy`. No other image management is necessary.
-On subsequent deploys, Wrangler will only push image layers that have changed, which saves space and time on `wrangler deploy`
-calls after the initial deploy.
+On subsequent deploys, Wrangler will only push image layers that have changed, which saves space and time.
:::note
Docker or a Docker-compatible CLI tool must be running for Wrangler to build and push images.
+This is not necessary if you are using a pre-built image, as described below.
:::
## Using pre-built container images
-If you wish to use a pre-built image, first, push it to the Cloudflare Registry:
+Currently, all images must use `registry.cloudflare.com`.
+
+:::note
+We plan to allow other image registries. Cloudflare will download your image, optionally using auth credentials,
+then cache it globally in the Cloudflare Registry.
+
+This is not yet available.
+:::
+
+If you wish to use a pre-built image, first make sure it exists locally, then push it to the Cloudflare Registry:
+
+```sh
+docker pull <public-image>
+docker tag <public-image> <image>:<tag>
+```
Wrangler provides a command to push images to the Cloudflare Registry:
@@ -52,7 +65,7 @@ Wrangler provides a command to push images to the Cloudflare Registry:
args="containers push :"
/>
-Additionally, you can use the `-p` flag with `wrangler containers build` to build and push an image in one step:
+Or, you can use the `-p` flag with `wrangler containers build` to build and push an image in one step:
-Then you can specify the URL in the image attribute:
+This will output an image registry URI that you can then use in your Wrangler configuration:
```json
{
"containers": {
- "image": "registry.cloudflare.com/your-image:tag"
+ "image": "registry.cloudflare.com/your-account-id/your-image:tag"
// ...rest of config...
}
}
@@ -75,21 +88,9 @@ Then you can specify the URL in the image attribute:
-Currently, all images must use `registry.cloudflare.com`, which is the default registry for Wrangler.
-
-To use an existing image from another repo, you can pull it, tag it, then push it to the Cloudflare Registry:
-
-```bash
-docker pull
-docker tag :
-wrangler containers push :
-```
-
:::note
-We plan to allow configuring public images directly in Wrangler config. Cloudflare will
-download your image, optionally using auth credentials, then cache it globally in the Cloudflare Registry.
-
-This is not yet available.
+Currently, the Cloudflare Vite plugin does not support registry links in local development, unlike `wrangler dev`.
+As a workaround, you can create a minimal Dockerfile that uses `FROM <registry-link>`. Make sure to `EXPOSE` a port in local dev as well.
:::
## Pushing images with CI
diff --git a/src/content/docs/containers/platform-details/index.mdx b/src/content/docs/containers/platform-details/index.mdx
new file mode 100644
index 00000000000000..4e6dd8f5cf20af
--- /dev/null
+++ b/src/content/docs/containers/platform-details/index.mdx
@@ -0,0 +1,8 @@
+---
+pcx_content_type: navigation
+title: Platform Reference
+sidebar:
+ order: 4
+ group:
+ hideIndex: true
+---
diff --git a/src/content/docs/containers/platform-details.mdx b/src/content/docs/containers/platform-details/limits.mdx
similarity index 63%
rename from src/content/docs/containers/platform-details.mdx
rename to src/content/docs/containers/platform-details/limits.mdx
index 7c50684e2fa7a7..53e62996b49f26 100644
--- a/src/content/docs/containers/platform-details.mdx
+++ b/src/content/docs/containers/platform-details/limits.mdx
@@ -1,14 +1,10 @@
---
-pcx_content_type: navigation
-title: Platform
+pcx_content_type: reference
+title: Limits and Instance Types
sidebar:
order: 2
- group:
- hideIndex: true
---
-import { WranglerConfig } from "~/components";
-
## Instance Types
The memory, vCPU, and disk space for Containers are set through predefined instance types.
@@ -37,31 +33,3 @@ While in open beta, the following limits are currently in effect:
[^1]: This limit will be raised as we continue the beta.
[^2]: Delete container images with `wrangler containers delete` to free up space. Note that if you delete a container image and then [roll back](/workers/configuration/versions-and-deployments/rollbacks/) your Worker to a previous version, this version may no longer work.
-
-## Environment variables
-
-The container runtime automatically sets the following variables:
-
-- `CLOUDFLARE_APPLICATION_ID` - the ID of the Containers application
-- `CLOUDFLARE_COUNTRY_A2` - the [ISO 3166-1 Alpha 2 code](https://www.iso.org/obp/ui/#search/code/) of a country the container is placed in
-- `CLOUDFLARE_DEPLOYMENT_ID` - the ID of the container instance
-- `CLOUDFLARE_LOCATION` - a name of a location the container is placed in
-- `CLOUDFLARE_PLACEMENT_ID` - a placement ID
-- `CLOUDFLARE_REGION` - a region name
-- `CLOUDFLARE_DURABLE_OBJECT_ID` - the ID of the Durable Object that the container is bound to
-
-:::note
-If you supply environment variables with the same names, supplied values will override predefined values.
-:::
-
-Custom environment variables can be set when defining a Container in your Worker:
-
-```javascript
-class MyContainer extends Container {
- defaultPort = 4000;
- envVars = {
- MY_CUSTOM_VAR: "value",
- ANOTHER_VAR: "another_value",
- };
-}
-```
diff --git a/src/content/docs/containers/platform-details/rollouts.mdx b/src/content/docs/containers/platform-details/rollouts.mdx
new file mode 100644
index 00000000000000..4bcf9075197983
--- /dev/null
+++ b/src/content/docs/containers/platform-details/rollouts.mdx
@@ -0,0 +1,43 @@
+---
+pcx_content_type: reference
+title: Rollouts
+sidebar:
+ order: 2
+---
+
+import { WranglerConfig } from "~/components";
+
+## How rollouts work
+When you run `wrangler deploy`, the Worker code is updated immediately and Container
+instances are updated using a rolling deploy strategy. The default rollout configuration is two steps,
+where the first step updates 10% of the instances, and the second step updates the remaining 90%.
+This can be configured in your Wrangler config file using the [`rollout_step_percentage`](/workers/wrangler/configuration#containers) property.
+
+When deploying a change, you can also configure a [`rollout_active_grace_period`](/workers/wrangler/configuration#containers), which is the minimum
+number of seconds to wait before an active container instance becomes eligible for updating during a rollout.
+At that point, the container will be sent a `SIGTERM` signal, and still has 15 minutes to shut down gracefully.
+If the instance does not stop within 15 minutes, it is forcefully stopped with a `SIGKILL` signal.
+If you have cleanup that must occur before a Container instance is stopped, you should do it during this 15 minute period.
+
+Once stopped, the instance is replaced with a new instance running the updated code. Requests may hang while the container is starting up again.
+
+Here is an example configuration that sets a 5-minute grace period and a two-step rollout where the first step updates 10% of instances and the second step updates 100% of instances:
+
+
+```toml
+[[containers]]
+max_instances = 10
+class_name = "MyContainer"
+image = "./Dockerfile"
+rollout_active_grace_period = 300
+rollout_step_percentage = [10, 100]
+
+[[durable_objects.bindings]]
+name = "MY_CONTAINER"
+class_name = "MyContainer"
+
+[[migrations]]
+tag = "v1"
+new_sqlite_classes = ["MyContainer"]
+```
+
\ No newline at end of file
diff --git a/src/content/docs/containers/scaling-and-routing.mdx b/src/content/docs/containers/platform-details/scaling-and-routing.mdx
similarity index 85%
rename from src/content/docs/containers/scaling-and-routing.mdx
rename to src/content/docs/containers/platform-details/scaling-and-routing.mdx
index 83026942ab1366..0d1a6f8606af64 100644
--- a/src/content/docs/containers/scaling-and-routing.mdx
+++ b/src/content/docs/containers/platform-details/scaling-and-routing.mdx
@@ -7,15 +7,24 @@ sidebar:
### Scaling container instances with `get()`
-Currently, Containers are only scaled manually by calling `BINDING.get()` with a unique ID, then
-starting the container. Unless `manualStart` is set to `true` on the Container class, each
-instance will start when `get()` is called.
+:::note
+This section uses helpers from the [Container package](/containers/container-package).
+:::
-```
-// gets 3 container instances
-env.MY_CONTAINER.get(idOne)
-env.MY_CONTAINER.get(idTwo)
-env.MY_CONTAINER.get(idThree)
+Currently, Containers are only scaled manually by getting containers with a unique ID, then
+starting the container. Note that getting a container does not automatically start it.
+
+```typescript
+// get and start two container instances
+const containerOne = getContainer(env.MY_CONTAINER, idOne);
+await containerOne.startAndWaitForPorts();
+
+const containerTwo = getContainer(env.MY_CONTAINER, idTwo);
+await containerTwo.startAndWaitForPorts();
```
Each instance will run until its `sleepAfter` time has elapsed, or until it is manually stopped.
diff --git a/src/content/docs/containers/pricing.mdx b/src/content/docs/containers/pricing.mdx
index 88bd4002bd0502..8764044beca089 100644
--- a/src/content/docs/containers/pricing.mdx
+++ b/src/content/docs/containers/pricing.mdx
@@ -2,7 +2,7 @@
pcx_content_type: reference
title: Pricing
sidebar:
- order: 4
+ order: 11
---
## vCPU, Memory and Disk
diff --git a/src/content/docs/containers/wrangler-commands.mdx b/src/content/docs/containers/wrangler-commands.mdx
index b36d8b8d88e417..ffe30f7104b6c2 100644
--- a/src/content/docs/containers/wrangler-commands.mdx
+++ b/src/content/docs/containers/wrangler-commands.mdx
@@ -3,5 +3,5 @@ pcx_content_type: navigation
title: Wrangler Commands
external_link: /workers/wrangler/commands/#containers
sidebar:
- order: 70
+ order: 8
---
diff --git a/src/content/docs/containers/wrangler-configuration.mdx b/src/content/docs/containers/wrangler-configuration.mdx
index a1ca245635efef..f80d5e3115cb37 100644
--- a/src/content/docs/containers/wrangler-configuration.mdx
+++ b/src/content/docs/containers/wrangler-configuration.mdx
@@ -3,5 +3,5 @@ pcx_content_type: navigation
title: Wrangler Configuration
external_link: /workers/wrangler/configuration/#containers
sidebar:
- order: 60
+ order: 7
---
diff --git a/src/content/docs/workers/wrangler/configuration.mdx b/src/content/docs/workers/wrangler/configuration.mdx
index 46e4651df8c2d6..be3544cc25dedc 100644
--- a/src/content/docs/workers/wrangler/configuration.mdx
+++ b/src/content/docs/workers/wrangler/configuration.mdx
@@ -996,6 +996,8 @@ The following options are available:
request to run a container than this number, the container request will error. You may have more Durable Objects
than this number over a longer time period, but you may not have more concurrently.
+ - Defaults to 1.
+
- This value is only enforced when running in production on Cloudflare's network. This limit does not apply during local development, so you may run more instances than specified.
- `name`
diff --git a/src/content/partials/fundamentals/account-permissions-table.mdx b/src/content/partials/fundamentals/account-permissions-table.mdx
index 46457341e12eb6..9cd444e3e3ab1d 100644
--- a/src/content/partials/fundamentals/account-permissions-table.mdx
+++ b/src/content/partials/fundamentals/account-permissions-table.mdx
@@ -77,6 +77,8 @@ import { Markdown } from "~/components";
| { props.src === "dash" ? "Email Security" : "Cloud Email Security:" } {props.editWord} | Grants write access to [Email Security](/email-security/). |
| Constellation Read | Grants read access to [Constellation](/constellation/). |
| Constellation {props.editWord} | Grants write access to [Constellation](/constellation/). |
+| Containers Read | Grants read access to [Containers](/containers/). |
+| Containers {props.editWord} | Grants write access to [Containers](/containers/). |
| D1 Read | Grants read access to [D1](/d1/). |
| D1 {props.editWord} | Grants write access to [D1](/d1/). |
| DDoS Botnet Feed Read | Grants read access to Botnet Feed reports. |