When you want to deploy a code change to either the Worker or the Container code, you can run the following command using the [Wrangler CLI](/workers/wrangler/):
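Assuming a standard npm-based Workers project, the deploy command is presumably the usual Wrangler invocation (inferred from the `wrangler deploy` steps described in the hunk below):

```sh
# Builds and pushes the container image, then deploys the Worker (see the steps below)
npx wrangler deploy
```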
@@ -46,14 +46,14 @@ When you run `wrangler deploy`, the following things happen:
 integrated with your Cloudflare account.
 - Wrangler deploys your Worker, and configures Cloudflare's network to be ready to spawn instances of your container

-:::note
 The build and push usually take the longest on the first deploy. Subsequent deploys
 are faster, because they [reuse cached image layers](https://docs.docker.com/build/cache/).
-:::

+:::note
 After you deploy your Worker for the first time, you will need to wait several minutes until
 it is ready to receive requests. Unlike Workers, Containers take a few minutes to be provisioned.
 During this time, requests are sent to the Worker, but calls to the Container will error.
+:::

 ### Check deployment status
@@ -79,6 +79,7 @@ You can confirm this behavior by reading the output of each request.
 ## Understanding the Code

 Now that you've deployed your first container, let's explain what is happening in your Worker's code, in your configuration file, in your container's code, and how requests are routed.
+
 ## Each Container is backed by its own Durable Object

 Incoming requests are initially handled by the Worker, then passed to a container-enabled [Durable Object](/durable-objects).
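As a rough illustration of that pattern, a container-enabled Durable Object class can look something like the sketch below. The `@cloudflare/containers` import, the class name, and the property values are assumptions for illustration, not taken from this diff; requests would reach an instance of such a class through the `env.MY_CONTAINER` binding used in the routing hunks further down.

```js
// Illustrative sketch only: class name, port, and timeout are assumed.
import { Container } from "@cloudflare/containers";

export class MyContainer extends Container {
  // Port the containerized app listens on (assumed value).
  defaultPort = 8080;
  // Let an idle instance go to sleep after five minutes (assumed value).
  sleepAfter = "5m";
}
```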
@@ -204,9 +205,9 @@ When a request enters Cloudflare, your Worker's [`fetch` handler](/workers/runti

 ```js
 if (pathname.startsWith("/container")) {
-  let id = env.MY_CONTAINER.idFromName("container");
-  let container = env.MY_CONTAINER.get(id);
-  return await container.fetch(request);
+  const id = env.MY_CONTAINER.idFromName(pathname);
+  const container = env.MY_CONTAINER.get(id);
+  return await container.fetch(request);
 }
 ```
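The switch from `idFromName("container")` to `idFromName(pathname)` means each distinct request path now maps to its own Durable Object ID, and therefore to its own container instance. A small illustration with hypothetical paths:

```js
// Hypothetical paths, shown only to illustrate the effect of the change above.
const idA = env.MY_CONTAINER.idFromName("/container/1");
const idB = env.MY_CONTAINER.idFromName("/container/2");
console.log(idA.equals(idB)); // false: different paths map to different instances
```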
@@ -216,7 +217,7 @@ When a request enters Cloudflare, your Worker's [`fetch` handler](/workers/runti

 ```js
 if (pathname.startsWith("/lb")) {
-  let container = await getRandom(env.MY_CONTAINER, 3);
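The `/lb` branch spreads requests across a small pool of container instances via a `getRandom` helper. Its implementation is not part of this diff; a minimal sketch of what such a helper could look like, with the name and signature assumed from the call site above:

```js
// Illustrative sketch only; the real helper may differ.
// Picks one of `count` container-backed Durable Objects at random.
// Declared async to match the `await` at the call site above.
async function getRandom(binding, count) {
  const index = Math.floor(Math.random() * count);
  const id = binding.idFromName(`instance-${index}`);
  return binding.get(id);
}
```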
src/content/docs/containers/local-dev.mdx: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ sidebar:
 You can run both your container and your Worker locally, without additional configuration, by running [`npx wrangler dev`](/workers/wrangler/commands/#dev) in your project's directory.

 To develop Container-enabled Workers locally, you will need to first ensure that a
-Docker compatible CLI tool is installed. For instance, you can use [Docker Desktop](https://docs.docker.com/desktop/)
+Docker compatible CLI tool and Engine are installed. For instance, you can use [Docker Desktop](https://docs.docker.com/desktop/)
 on Mac, Windows, or Linux.

 When you run `wrangler dev`, your container image will be built or downloaded. If your
src/content/docs/containers/pricing.mdx: 14 additions & 10 deletions
@@ -9,28 +9,32 @@ sidebar:

 Containers are billed for every 10ms that they are actively running at the following rates, with included monthly usage as part of the $5 USD per month [Workers Paid plan](/workers/platform/pricing/):
 | **Workers Paid** | 25 GiB-hours/month included <br/> + $0.0000025 per additional GiB-second | 375 vCPU-minutes/month <br/> + $0.000020 per additional vCPU-second | 200 GB-hours/month <br/> + $0.00000007 per additional GB-second |

 You only pay for what you use — charges start when a request is sent to the container or when it is manually started. Charges stop after the container instance goes to sleep, which can happen automatically after a timeout. This makes it easy to scale to zero, and allows you to get high utilization even with bursty traffic.

 #### Instance Types

 When you add containers to your Worker, you specify an [instance type](/containers/platform-details/#instance-types). The instance type you select will impact your bill — larger instances include more vCPUs, memory and disk, and therefore incur additional usage costs. The following instance types are currently available, and larger instance types are coming soon:

-| Name | Memory | CPU| Disk |
-|----------|----------|------------|------|
-| dev | 256 MiB | 1/16 vCPU| 2 GB |
-| basic | 1 GiB | 1/4 vCPU| 4 GB |
-| standard | 4 GiB | 1/2 vCPU| 4 GB |
+| Name | Memory | CPU | Disk |
+|--------|-------|---------|----|
+| dev | 256 MiB | 1/16 vCPU | 2 GB |
+| basic | 1 GiB | 1/4 vCPU | 4 GB |
+| standard | 4 GiB | 1/2 vCPU | 4 GB |

 ## Network Egress

 Egress from Containers is priced at the following rates:

-TODO
+| Region | Price per GB | Included Allotment per month |
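To make those rates concrete, here is a rough cost sketch for a `basic` instance (1 GiB memory, 1/4 vCPU, 4 GB disk) that runs actively for one hour, assuming the included monthly allotments are already used up. The rates come from the Workers Paid row and the instance table above:

```js
// Rough estimate only; assumes included monthly usage is already exhausted.
const seconds = 60 * 60;                // one hour of active time
const memory = seconds * 1 * 0.0000025; // 1 GiB of memory -> $0.009
const cpu = seconds * 0.25 * 0.00002;   // 1/4 vCPU        -> $0.018
const disk = seconds * 4 * 0.00000007;  // 4 GB of disk    -> ~$0.001
console.log(`$${(memory + cpu + disk).toFixed(3)} per active hour`); // ≈ $0.028
```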