---
pcx_content_type: reference
title: Local Development
description: Learn how to run Container-enabled Workers locally with `wrangler dev` and `vite dev`.
sidebar:
  order: 6
---
You can run both your container and your Worker locally by running `npx wrangler dev` (or `vite dev` for Vite projects using the Cloudflare Vite plugin) in your project's directory.

To develop Container-enabled Workers locally, you will need to first ensure that a Docker-compatible CLI tool and Engine are installed. For instance, you could use Docker Desktop or Colima.

When you start a dev session, your container image will be built or downloaded. If your Wrangler configuration sets the `image` attribute to a local path, the image will be built using the local Dockerfile. If the `image` attribute is set to an image reference, the image will be pulled from the referenced registry, such as the Cloudflare Registry, Docker Hub, or Amazon ECR.
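As an illustrative sketch (names and values below are examples, not a complete configuration), the `image` attribute in `wrangler.jsonc` might look like:

```jsonc
{
  "containers": [
    {
      // Illustrative name: class_name must match your Container class.
      "class_name": "MyContainer",
      // A local path triggers a local build from this Dockerfile...
      "image": "./Dockerfile"
      // ...while an image reference is pulled from a registry instead:
      // "image": "docker.io/your-org/your-image:latest"
    }
  ]
}
```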

:::note
With `wrangler dev`, image references from the Cloudflare Registry, Docker Hub, and Amazon ECR are supported in local development.

With `vite dev`, image references from external registries such as Docker Hub and Amazon ECR are supported, but `vite dev` cannot pull directly from the Cloudflare Registry.

If you use a private Docker Hub or ECR image with `vite dev`, authenticate to that registry locally, for example with `docker login`.

As a workaround for Cloudflare Registry images, point `vite dev` at a local Dockerfile that uses `FROM <IMAGE_REFERENCE>`. Docker then pulls the base image during the local build. Make sure to `EXPOSE` a port for local dev as well.
:::
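The workaround described in the note above can be as small as the following Dockerfile (`<IMAGE_REFERENCE>` is a placeholder for your Cloudflare Registry image reference, and port 4000 is an example):

```dockerfile
# Placeholder: substitute your Cloudflare Registry image reference.
FROM <IMAGE_REFERENCE>

# Expose the port your server listens on so local development can connect.
EXPOSE 4000
```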

Container instances will be launched locally when your Worker code requests a new container. Requests will then automatically be routed to the correct locally-running container.

When the dev session ends, all associated container instances are stopped, but local images are not removed, so that they can be reused in subsequent builds.

:::note
If your Worker app creates many container instances, your local machine may not be able to run as many containers concurrently as is possible when you deploy to Cloudflare.

Also, the `max_instances` configuration option does not apply during local development.

Additionally, if you regularly rebuild containers locally, you may want to clear out old container images (using `docker image prune` or similar) to reduce disk usage.

:::

## Iterating on Container code

When you develop with Wrangler or Vite, your Worker's code is automatically reloaded each time you save a change, but code running within the container is not.

To rebuild your container with new code changes, press the `[r]` key in the terminal, which triggers a rebuild. Container instances will then be restarted with the newly built images.

You may prefer to set up your own code watchers and reloading mechanisms, or mount a local directory into the local container images to sync code changes. This can be done, but there is no built-in mechanism for doing so, and best practices will depend on the languages and frameworks you are using in your container code.

## Troubleshooting

### Exposing Ports

In production, all of your container's ports will be accessible by your Worker, so you do not need to specifically expose ports using the `EXPOSE` instruction in your Dockerfile.

But for local development, you will need to declare any ports you need to access in your Dockerfile with the `EXPOSE` instruction. For example, add `EXPOSE 4000` if you will be accessing port 4000.
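For instance, a minimal Dockerfile for a service listening on port 4000 might look like this (base image, file names, and command are illustrative):

```dockerfile
FROM node:22-slim
WORKDIR /app
COPY . .

# Required for local development: declare the port your Worker will connect to.
EXPOSE 4000

CMD ["node", "server.js"]
```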

If you have not exposed any ports, you will see the following error in local development:

```
The container "MyContainer" does not expose any ports. In your Dockerfile, please expose any ports you intend to connect to.
```

And if you try to connect to any port that you have not exposed in your Dockerfile you will see the following error:

```
connect(): Connection refused: container port not found. Make sure you exposed the port in your container definition.
```

You may also see this error while the container is starting up, before any ports are available. You should retry until the ports become available. If you are using the containers package, this retry logic is handled for you.
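If you are not using the containers package, the retry can be sketched generically; the attempt count and backoff schedule below are illustrative, not prescribed:

```typescript
// Retry an async operation with exponential backoff, for example a fetch to a
// container port that is not yet available while the container starts up.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  attempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 100ms, 200ms, 400ms, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

You could then wrap the request to the container, e.g. `await retryWithBackoff(() => fetchFromContainer(request))`, where `fetchFromContainer` is whatever function issues the request in your Worker.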

### Socket configuration - internal error

If you see an opaque internal error when attempting to connect to your container, you may need to set the `DOCKER_HOST` environment variable. Wrangler and Vite attempt to automatically find the socket your container engine is listening on, but if auto-detection fails, set `DOCKER_HOST` to the appropriate socket path yourself.
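For example (the paths below are common defaults, not guaranteed for your setup):

```shell
# Point local dev tooling at your container engine's socket when
# auto-detection fails. Adjust the path to match your engine.
export DOCKER_HOST=unix:///var/run/docker.sock
# Colima users typically use something like:
# export DOCKER_HOST=unix://$HOME/.colima/default/docker.sock
```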

### SSL errors with the Cloudflare One Client or a VPN

If you are running the Cloudflare One Client or a VPN that performs TLS inspection, HTTPS requests made during the Docker build process may fail with SSL or certificate errors. This happens because the VPN intercepts HTTPS traffic and re-signs it with its own certificate authority, which Docker does not trust by default.

To resolve this, you can either:

- Disable the Cloudflare One Client or your VPN while running `wrangler dev` or `wrangler deploy`, then re-enable it afterwards.

- Add the certificate to your Docker build context. The Cloudflare One Client exposes its certificate via the `NODE_EXTRA_CA_CERTS` and `SSL_CERT_FILE` environment variables on your host machine. You can pass the certificate into your Docker build as an environment variable, so that it is available during the build without being baked into the final image.

```dockerfile
RUN if [ -n "$SSL_CERT_FILE" ]; then \
      cp "$SSL_CERT_FILE" /usr/local/share/ca-certificates/Custom_CA.crt && \
      update-ca-certificates; \
    fi
```

:::note
The above Dockerfile snippet is an example. Depending on your base image, the commands to install certificates may differ (for example, Alpine uses `apk add ca-certificates` and a different certificate path).

This snippet will store the certificate in the image. Depending on whether your production environment needs the certificate, you may choose to do this only during development or use it in production too.
:::

Wrangler invokes Docker automatically when you run `wrangler dev` or `wrangler deploy`, so if you need to pass build secrets, you will need to build and push the image manually using `wrangler containers push`.