diff --git a/src/assets/images/workers-observability/axiom-example.png b/src/assets/images/workers-observability/axiom-example.png new file mode 100644 index 000000000000000..adaf00a296b78a7 Binary files /dev/null and b/src/assets/images/workers-observability/axiom-example.png differ diff --git a/src/assets/images/workers-observability/destination-setup.png b/src/assets/images/workers-observability/destination-setup.png new file mode 100644 index 000000000000000..8a08f2d9ec05a0f Binary files /dev/null and b/src/assets/images/workers-observability/destination-setup.png differ diff --git a/src/assets/images/workers-observability/destinations.png b/src/assets/images/workers-observability/destinations.png new file mode 100644 index 000000000000000..46e8e3170ec5029 Binary files /dev/null and b/src/assets/images/workers-observability/destinations.png differ diff --git a/src/assets/images/workers-observability/grafana-traces.png b/src/assets/images/workers-observability/grafana-traces.png new file mode 100644 index 000000000000000..e80d2daddc5a6cd Binary files /dev/null and b/src/assets/images/workers-observability/grafana-traces.png differ diff --git a/src/assets/images/workers-observability/honeycomb-example.png b/src/assets/images/workers-observability/honeycomb-example.png new file mode 100644 index 000000000000000..5734c2ca9a79428 Binary files /dev/null and b/src/assets/images/workers-observability/honeycomb-example.png differ diff --git a/src/assets/images/workers-observability/sentry-example.png b/src/assets/images/workers-observability/sentry-example.png new file mode 100644 index 000000000000000..6dad299d6889136 Binary files /dev/null and b/src/assets/images/workers-observability/sentry-example.png differ diff --git a/src/assets/images/workers-observability/trace-waterfall-example.png b/src/assets/images/workers-observability/trace-waterfall-example.png new file mode 100644 index 000000000000000..9b7041d0691517e Binary files /dev/null and 
b/src/assets/images/workers-observability/trace-waterfall-example.png differ diff --git a/src/content/docs/workers/observability/exporting-opentelemetry-data/axiom.mdx b/src/content/docs/workers/observability/exporting-opentelemetry-data/axiom.mdx new file mode 100644 index 000000000000000..52cc2af6fe570e8 --- /dev/null +++ b/src/content/docs/workers/observability/exporting-opentelemetry-data/axiom.mdx @@ -0,0 +1,106 @@ +--- +pcx_content_type: how-to +title: Export to Axiom +sidebar: + order: 3 +--- + +import {WranglerConfig} from "~/components"; + +Axiom is a serverless log analytics platform that helps you store, search, and analyze massive amounts of data. By exporting your Cloudflare Workers application telemetry to Axiom, you can: +- Store and query logs and traces at scale +- Create dashboards and alerts to monitor your Workers + +![Trace view with timing information displayed on a timeline](~/assets/images/workers-observability/axiom-example.png) + +This guide will walk you through exporting OpenTelemetry-compliant traces and logs to Axiom from your Cloudflare Worker application. + +## Prerequisites + +Before you begin, ensure you have: + +- An active [Axiom account](https://app.axiom.co/register) (free tier available) +- A deployed Worker that you want to monitor +- An Axiom dataset to send data to + +## Step 1: Create a dataset + +If you don't already have a dataset to send data to: +1. Log in to your [Axiom account](https://app.axiom.co/) +2. Navigate to **Datasets** in the left sidebar +3. Click **New Dataset** +4. Enter a name (e.g., `cloudflare-workers-otel`) +5. Click **Create Dataset** + +## Step 2: Get your Axiom API token and dataset + +1. Navigate to **Settings** in the left sidebar +2. Click on **API Tokens** +3. Click **Create API Token** +4.
Configure your API token: + - **Name**: Enter a descriptive name (e.g., `cloudflare-workers-otel`) + - **Permissions**: Select **Ingest** permission (required for sending telemetry data) + - **Datasets**: Choose which datasets this token can write to, or select **All Datasets** +5. Click **Create** +6. **Important**: Copy the API token immediately and store it securely - you won't be able to see it again. + +The API token will look something like: `xaat-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` + +## Step 3: Configure Cloudflare destinations + +Now you'll create destinations in the Cloudflare dashboard that point to Axiom. + +### Axiom OTLP endpoints + +Axiom provides separate OTLP endpoints for traces and logs: +- **Traces**: `https://api.axiom.co/v1/traces` +- **Logs**: `https://api.axiom.co/v1/logs` + +### Configure trace and logs destinations + +1. Navigate to your Cloudflare account's [Workers Observability](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section +2. Click **Add destination** +3. Configure your trace destination: + - **Destination Name**: `axiom-traces` (or any descriptive name) + - **Destination Type**: Select **Traces** + - **OTLP Endpoint**: `https://api.axiom.co/v1/traces` + - **Custom Headers**: Add two required headers: + - Authentication header: + - Header name: `Authorization` + - Header value: `Bearer YOUR_API_TOKEN` (the token from Step 2) + - Dataset header: + - Header name: `X-Axiom-Dataset` + - Header value: Your dataset name (e.g., `cloudflare-workers-otel`) +4. Click **Save** +5. Repeat these steps to create a logs destination (e.g., `axiom-logs`) with **Destination Type** set to **Logs** and the OTLP endpoint `https://api.axiom.co/v1/logs` + +## Step 4: Configure your Worker + +With your destinations created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export.
+ + + +```json +{ + "observability": { + "traces": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["axiom-traces"] + }, + "logs": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["axiom-logs"] + } + } +} +``` + + + +After updating your configuration, deploy your Worker for the changes to take effect. + +:::note +It may take a few minutes after deployment for data to appear in Axiom. +::: + diff --git a/src/content/docs/workers/observability/exporting-opentelemetry-data/grafana-cloud.mdx b/src/content/docs/workers/observability/exporting-opentelemetry-data/grafana-cloud.mdx new file mode 100644 index 000000000000000..7fbec8b8287cbd8 --- /dev/null +++ b/src/content/docs/workers/observability/exporting-opentelemetry-data/grafana-cloud.mdx @@ -0,0 +1,74 @@ +--- +pcx_content_type: how-to +title: Export to Grafana Cloud +sidebar: + order: 2 +--- +import {WranglerConfig} from "~/components"; + +Grafana Cloud is a fully managed observability platform that provides visualization, alerting, and analytics for your telemetry data. By exporting your Cloudflare Workers telemetry to Grafana Cloud, you can: +- Visualize distributed traces in **Grafana Tempo** to understand request flows and performance bottlenecks +- Query and analyze logs in **Grafana Loki** alongside your traces + +This guide will walk you through configuring Cloudflare Workers to export OpenTelemetry-compliant traces and logs to your Grafana Cloud stack. 
+ +![Grafana Tempo trace view showing a distributed trace for a service with multiple spans including fetch requests, durable object subrequests, and queue operations, with timing information displayed on a timeline](~/assets/images/workers-observability/grafana-traces.png) + +## Prerequisites + +Before you begin, ensure you have: + +- An active [Grafana Cloud account](https://grafana.com/auth/sign-up/create-user) (free tier available) +- A deployed Worker that you want to monitor + +## Step 1: Access the OpenTelemetry setup guide + +1. Log in to your [Grafana Cloud portal](https://grafana.com/) +2. From your organization's home page, navigate to **Connections** → **Add new connection** +3. Search for "OpenTelemetry" and select **OpenTelemetry (OTLP)** +4. Select **Quickstart**, then select **JavaScript** +5. Click **Create a new token** +6. Enter a name for your token (e.g., `cloudflare-workers-otel`) and click **Create token** +7. Click **Close** without copying the token +8. Copy and save the values of `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` from the `Environment variables` code block - you will use them as the OTLP endpoint and the auth header value in the next step + + +## Step 2: Set up destination +1. Navigate to your Cloudflare account's [Workers Observability](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section +2. Click **Add destination** and configure a destination name (e.g. `grafana-traces`) +3. Configure the OTel endpoint and custom header using the values you copied from Grafana: + * Your OTel endpoint will look like `https://otlp-gateway-prod-us-east-2.grafana.net/otlp` (append `/v1/traces` for traces and `/v1/logs` for logs) + * Your custom header should include: + * Your auth header name `Authorization` + * Your auth header value `Basic MTMxxx...` + +## Step 3: Configure your Worker + +With your destination created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export.
+ + + +```json +{ + "observability": { + "traces": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["grafana-traces"] + }, + "logs": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["grafana-logs"] + } + } +} +``` + + + +After updating your configuration, deploy your Worker for the changes to take effect. + +:::note +It may take a few minutes after deployment for data to appear in Grafana Cloud. +::: diff --git a/src/content/docs/workers/observability/exporting-opentelemetry-data/honeycomb.mdx b/src/content/docs/workers/observability/exporting-opentelemetry-data/honeycomb.mdx new file mode 100644 index 000000000000000..c809de26cb82710 --- /dev/null +++ b/src/content/docs/workers/observability/exporting-opentelemetry-data/honeycomb.mdx @@ -0,0 +1,108 @@ +--- +pcx_content_type: how-to +title: Export to Honeycomb +sidebar: + order: 1 +--- + +import {WranglerConfig} from "~/components"; + +Honeycomb is an observability platform built for high-cardinality data that helps you understand and debug your applications. By exporting your Cloudflare Workers application telemetry to Honeycomb, you can: +- Visualize traces to understand request flows and identify performance bottlenecks +- Query and analyze logs with unlimited dimensionality across any attribute +- Create custom queries and dashboards to monitor your Workers + +![Trace view including POST request, fetch operations, durable object subrequest, and queue send, with timing information displayed on a timeline](~/assets/images/workers-observability/honeycomb-example.png) + +This guide will walk you through configuring your Cloudflare Worker application to export OpenTelemetry-compliant traces and logs to Honeycomb. 
+ +## Prerequisites + +Before you begin, ensure you have: + +- An active [Honeycomb account](https://ui.honeycomb.io/signup) (free tier available) +- A deployed Worker that you want to monitor + +## Step 1: Get your Honeycomb API key + +1. Log in to your [Honeycomb account](https://ui.honeycomb.io/) +2. Navigate to your account settings by clicking on your profile icon in the top right +3. Select **Team Settings** +4. In the left sidebar, click **Environments** and click the gear icon +5. Find your environment (e.g., `production`, `test`) or create a new one +6. Under **API Keys**, click **Create Ingest API Key** +7. Configure your API key: + - **Name**: Enter a descriptive name (e.g., `cloudflare-workers-otel`) + - **Permissions**: Select **Can create services/datasets** (required for OTLP ingestion) +8. Click **Create** +9. **Important**: Copy the API key immediately and store it securely - you won't be able to see it again + +The API key will look something like: `hcaik_01hq...` + +## Step 2: Configure Cloudflare destinations + +Now you'll create destinations in the Cloudflare dashboard that point to Honeycomb. + +### Honeycomb OTLP endpoints + +Honeycomb provides separate OTLP endpoints for traces and logs: +- **Traces**: `https://api.honeycomb.io/v1/traces` +- **Logs**: `https://api.honeycomb.io/v1/logs` + +### Configure trace destination + +1. Navigate to your Cloudflare account's [Workers Observability](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section +2. Click **Add destination** +3. Configure your trace destination: + - **Destination Name**: `honeycomb-traces` (or any descriptive name) + - **Destination Type**: Select **Traces** + - **OTLP Endpoint**: `https://api.honeycomb.io/v1/traces` + - **Custom Headers**: Add the authentication header: + - Header name: `x-honeycomb-team` + - Header value: Your Honeycomb API key (e.g., `hcaik_01hq...`) +4. 
Click **Save** + +### Configure logs destination + +Repeat the process for logs: + +1. Click **Add destination** again +2. Configure your logs destination: + - **Destination Name**: `honeycomb-logs` (or any descriptive name) + - **Destination Type**: Select **Logs** + - **OTLP Endpoint**: `https://api.honeycomb.io/v1/logs` + - **Custom Headers**: Add the authentication header: + - Header name: `x-honeycomb-team` + - Header value: Your Honeycomb API key (same as above) +3. Click **Save** + +## Step 3: Configure your Worker + +With your destinations created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export. + + + +```json +{ + "observability": { + "traces": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["honeycomb-traces"] + }, + "logs": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["honeycomb-logs"] + } + } +} +``` + + + +After updating your configuration, deploy your Worker for the changes to take effect. + +:::note +It may take a few minutes after deployment for data to appear in Honeycomb. +::: diff --git a/src/content/docs/workers/observability/exporting-opentelemetry-data/index.mdx b/src/content/docs/workers/observability/exporting-opentelemetry-data/index.mdx new file mode 100644 index 000000000000000..12f7aaf5ff7a6df --- /dev/null +++ b/src/content/docs/workers/observability/exporting-opentelemetry-data/index.mdx @@ -0,0 +1,116 @@ +--- +pcx_content_type: concept +title: Exporting OpenTelemetry Data +sidebar: + order: 5 +--- + +import { Badge, DirectoryListing, WranglerConfig } from "~/components"; + +Cloudflare Workers supports exporting OpenTelemetry (OTel)-compliant telemetry data to any destination with an available OTel endpoint, allowing you to integrate with your existing monitoring and observability stack. 
+ +### Supported telemetry types + +You can export the following types of telemetry data: + +- **Traces** - Request flows through your Worker and connected services +- **Logs** - Application logs including `console.log()` output and system-generated logs + +**Note**: Exporting Worker metrics and custom metrics is not yet supported. + +### Available OpenTelemetry destinations + +Below are common OTLP endpoint formats for popular observability providers. Refer to your provider's documentation for specific details and authentication requirements. + +| Provider | Traces Endpoint | Logs Endpoint | +|----------|----------------|---------------| +| [**Honeycomb**](/workers/observability/exporting-opentelemetry-data/honeycomb/) | `https://api.honeycomb.io/v1/traces` | `https://api.honeycomb.io/v1/logs` | +| [**Grafana Cloud**](/workers/observability/exporting-opentelemetry-data/grafana-cloud/) | `https://otlp-gateway-{region}.grafana.net/otlp/v1/traces` | `https://otlp-gateway-{region}.grafana.net/otlp/v1/logs` | +| [**Axiom**](/workers/observability/exporting-opentelemetry-data/axiom/) | `https://api.axiom.co/v1/traces` | `https://api.axiom.co/v1/logs` | +| [**Sentry**](/workers/observability/exporting-opentelemetry-data/sentry/) | `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces` | `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs` | +| [**Datadog**](https://docs.datadoghq.com/opentelemetry/setup/otlp_ingest/) | Not yet available, pending release from Datadog | Not yet available, pending release from Datadog | + +:::note[Authentication] +Most providers require authentication headers. Refer to your provider's documentation for specific authentication requirements. +::: + +## Setting up OpenTelemetry-compatible destinations + +To start sending data to your destination, you'll need to create a destination in the Cloudflare dashboard.
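Before you create a destination, you can sanity-check that your provider's OTLP endpoint and authentication headers are accepted by posting a minimal OTLP/HTTP JSON log record to the endpoint yourself. The sketch below is illustrative only: the `OTLP_LOGS_ENDPOINT` and `OTLP_TOKEN` environment variables and the `Bearer` auth scheme are placeholders for whatever your provider requires, and it assumes Node 18+ for the global `fetch`.

```javascript
// Smoke-test an OTLP logs endpoint before configuring it as a destination.
// Endpoint URL and auth header are placeholders; substitute your provider's.
const endpoint = process.env.OTLP_LOGS_ENDPOINT; // e.g. "https://api.example.com/v1/logs"

// A minimal OTLP/HTTP JSON payload containing a single log record.
const payload = {
  resourceLogs: [
    {
      resource: {
        attributes: [
          { key: "service.name", value: { stringValue: "otlp-smoke-test" } },
        ],
      },
      scopeLogs: [
        {
          logRecords: [
            {
              // OTLP timestamps are nanoseconds since the Unix epoch.
              timeUnixNano: (BigInt(Date.now()) * 1_000_000n).toString(),
              severityText: "INFO",
              body: { stringValue: "hello from the OTLP smoke test" },
            },
          ],
        },
      ],
    },
  ],
};

if (endpoint) {
  fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Substitute your provider's auth header(s) here, for example:
      // "Authorization": `Bearer ${process.env.OTLP_TOKEN}`,
    },
    body: JSON.stringify(payload),
  }).then((res) => console.log("status:", res.status));
}
```

If the request returns a `2xx` status, the same endpoint URL and headers should work as the destination's **OTLP Endpoint** and **Custom Headers**.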
+ +### Creating a destination + +![Observability Destinations dashboard showing configured destinations for Grafana and Honeycomb with their respective endpoints and status](~/assets/images/workers-observability/destinations.png) + +1. Head to your account's [Workers Observability](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section of the dashboard +2. Click **Add destination** +3. Configure your destination: + - **Destination Name** - A descriptive name (e.g., "Grafana-tracing", "Honeycomb-Logs") + - **Destination Type** - Choose between "Traces" or "Logs" + - **OTLP Endpoint** - The URL where your observability platform accepts OTLP data + - **Custom Headers** (Optional) - Any authentication headers or other provider-required headers +4. Save your destination + +![Edit Destination dialog showing configuration for Honeycomb tracing with destination name, type selection, OTLP endpoint, and custom headers](~/assets/images/workers-observability/destination-setup.png) + +## Enabling OpenTelemetry export for your Worker + +After setting up destinations in the dashboard, configure your Worker to export telemetry data by updating your Wrangler configuration. The destination names in your configuration file must match the destination names configured in the dashboard. + + + +```json +{ + "observability": { + "traces": { + "enabled": true, + "destinations": ["tracing-destination-name"], + + // traces sample rate of 5% + "head_sampling_rate": 0.05, + + // (optional) disable traces in the Cloudflare dashboard + "persist": false + }, + "logs": { + "enabled": true, + "destinations": ["logs-destination-name"], + // logs sample rate of 60% + "head_sampling_rate": 0.6, + + // (optional) disable logs in the Cloudflare dashboard + "persist": false + } + } +} +``` + + + +Once you've configured your Wrangler configuration file, redeploy your Worker for the new configuration to take effect.
Note that it may take a few minutes for events to reach your destination. + +## Destination status + +After creating a destination, you can monitor its health and delivery status in the Cloudflare dashboard. Each destination displays a status indicator that shows how recently data was successfully delivered. + +### Status indicators + +| Status | Description | Troubleshooting | +|--------|-------------|-----------------| +| **Last: n minutes ago** | Data was recently delivered successfully. | | +| **Never run** | No data has been delivered to this destination. | • Check if your Worker is receiving traffic<br/>
• Review sampling rates (low rates generate less data)
| **Error** | An error occurred while attempting to deliver data to this destination. | • Verify the OTLP endpoint URL is correct<br/>
• Check authentication headers are valid
| + +## Limits and pricing + +Exporting OTel data is currently **free** for accounts on a Workers Paid subscription or higher during the early beta period. However, starting on **January 15, 2026**, tracing will be billed as part of your usage on the Workers Paid plan or contract. + +This includes the following limits and pricing: + +| Plan | Traces | Logs | Pricing | +|------|--------|------|---------| +| **Workers Free** | Not available | Not available | - | +| **Workers Paid** | 10 million events per month included | 10 million events per month included | $0.05 per million additional events | + +## Known limitations + +OpenTelemetry data export is currently in beta. Please be aware of the following limitations: +- **Metrics export not yet supported**: Exporting Worker infrastructure metrics and custom metrics via OpenTelemetry is not currently available. We are actively working to add metrics support in the future. +- **Limited OTLP support from some providers**: Some observability providers are still rolling out OTLP endpoint support. Check the [Available OpenTelemetry destinations](#available-opentelemetry-destinations) table above for current availability. diff --git a/src/content/docs/workers/observability/exporting-opentelemetry-data/sentry.mdx b/src/content/docs/workers/observability/exporting-opentelemetry-data/sentry.mdx new file mode 100644 index 000000000000000..27daebeaa5682c2 --- /dev/null +++ b/src/content/docs/workers/observability/exporting-opentelemetry-data/sentry.mdx @@ -0,0 +1,104 @@ +--- +pcx_content_type: how-to +title: Export to Sentry +sidebar: + order: 4 +--- + +import {WranglerConfig} from "~/components"; + +Sentry is a software monitoring tool that helps developers identify and debug performance issues and errors.
From end-to-end distributed tracing to performance monitoring, Sentry provides code-level observability that makes it easy to diagnose issues and learn continuously about your application code health across systems and services. By exporting your Cloudflare Workers application telemetry to Sentry, you can: +- Query logs and traces in Sentry +- Create custom alerts and dashboards to monitor your Workers + +![Sentry trace view with timing information displayed on a timeline](~/assets/images/workers-observability/sentry-example.png) + +This guide will walk you through exporting OpenTelemetry-compliant traces and logs to Sentry from your Cloudflare Worker application. + +## Prerequisites + +Before you begin, ensure you have: + +- An active [Sentry account](https://sentry.io/signup/) (free tier available) +- A deployed Worker that you want to monitor + +## Step 1: Create a Sentry project + +If you don't already have a Sentry project to send data to, you'll need to create one to start sending Cloudflare Workers application telemetry to Sentry. + +1. Log in to your [Sentry account](https://sentry.io/) +2. Navigate to **Insights** > **Projects** in the navigation sidebar, which will open a list of your projects. +3. Click [**New Project**](https://sentry.io/orgredirect/organizations/:orgslug/insights/projects/new/) +4. Fill out the project creation form and click **Create Project** to complete the process. + +## Step 2: Get your Sentry OTLP endpoints + +Sentry provides separate OTLP endpoints for traces and logs which you can use to send your telemetry data to Sentry. +- **Traces**: `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces` +- **Logs**: `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs` + +You can find your OTLP endpoints in your project settings. + +1. Go to the [Settings > Projects](https://sentry.io/orgredirect/organizations/:orgslug/settings/projects/) page in Sentry. +2.
Select your project from the list and click on the project name to open the project settings. +3. Go to the "Client Keys (DSN)" sub-page for this project under the "SDK Setup" heading. + +There you'll find your Sentry project's OTLP logs and OTLP traces endpoints, as well as authentication headers for the endpoints. Make sure to copy the endpoints and authentication headers. + +For more details on how to use Sentry's OTLP endpoints, refer to [Sentry's OTLP documentation](https://docs.sentry.io/concepts/otlp/). + +## Step 3: Set up destinations in the Cloudflare dashboard + +To set up a destination in the Cloudflare dashboard, navigate to your Cloudflare account's [Workers Observability](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/pipelines) section. Then click **Add destination** and configure either a traces or logs destination. + +### Traces destination + +To configure your traces destination, click **Add destination** and configure the following: + - **Destination Name**: `sentry-traces` (or any descriptive name) + - **Destination Type**: Select **Traces** + - **OTLP Endpoint**: Your Sentry OTLP traces endpoint (e.g., `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces`) + - **Custom Headers**: Add the Sentry authentication header: + - Header name: `x-sentry-auth` + - Header value: `sentry sentry_key={SENTRY_PUBLIC_KEY}` where `{SENTRY_PUBLIC_KEY}` is your Sentry project's public key + +### Logs destination + +To configure your logs destination, click **Add destination** and configure the following: + - **Destination Name**: `sentry-logs` (or any descriptive name) + - **Destination Type**: Select **Logs** + - **OTLP Endpoint**: Your Sentry OTLP logs endpoint (e.g., `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs`) + - **Custom Headers**: Add the Sentry authentication header: + - Header name: `x-sentry-auth` + - Header value: `sentry sentry_key={SENTRY_PUBLIC_KEY}` where `{SENTRY_PUBLIC_KEY}` is your Sentry project's
public key + + +## Step 4: Configure your Worker + +With your destinations created in the Cloudflare dashboard, update your Worker's configuration to enable telemetry export. + + + +```json +{ + "observability": { + "traces": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["sentry-traces"] + }, + "logs": { + "enabled": true, + // Must match the destination name in the dashboard + "destinations": ["sentry-logs"] + } + } +} +``` + + + +After updating your configuration, deploy your Worker for the changes to take effect. + +:::note +It may take a few minutes after deployment for data to appear in Sentry. +::: diff --git a/src/content/docs/workers/observability/query-builder.mdx b/src/content/docs/workers/observability/query-builder.mdx index 2a2dd3060654a86..9b60c683d365760 100644 --- a/src/content/docs/workers/observability/query-builder.mdx +++ b/src/content/docs/workers/observability/query-builder.mdx @@ -4,10 +4,7 @@ title: Query Builder head: [] description: Write structured queries to investigate and visualize your telemetry data. sidebar: - order: 3 - badge: - variant: tip - text: New + order: 4 --- diff --git a/src/content/docs/workers/observability/third-party-integrations/index.mdx b/src/content/docs/workers/observability/third-party-integrations/index.mdx index fe8f70b449d00c2..8861271e83849cc 100644 --- a/src/content/docs/workers/observability/third-party-integrations/index.mdx +++ b/src/content/docs/workers/observability/third-party-integrations/index.mdx @@ -8,6 +8,4 @@ sidebar: import { DirectoryListing } from "~/components"; -Send your telemetry data to third parties. 
- diff --git a/src/content/docs/workers/observability/traces/index.mdx b/src/content/docs/workers/observability/traces/index.mdx new file mode 100644 index 000000000000000..08c56adedd5064a --- /dev/null +++ b/src/content/docs/workers/observability/traces/index.mdx @@ -0,0 +1,127 @@ +--- +pcx_content_type: navigation +title: Traces +sidebar: + order: 3 + badge: + text: Beta +--- + +import { WranglerConfig } from "~/components"; + +### What is Workers tracing? + +Tracing gives you end-to-end visibility into the life of a request as it travels through your Workers application and connected services. This helps you identify performance bottlenecks, debug issues, and understand complex request flows. With tracing you can answer questions such as: + +- What is the cause of a long-running request? +- How long do subrequests from my Worker take? +- How long are my calls to my KV Namespace or R2 bucket taking? + +![Example trace showing a POST request to a cake shop with multiple spans including fetch requests and durable object operations](~/assets/images/workers-observability/trace-waterfall-example.png) + +### Automatic instrumentation + +Cloudflare Workers provides tracing instrumentation **out of the box** - no code changes or SDKs are required. Simply enable tracing on your Worker and Cloudflare automatically captures telemetry data for: + +- **Fetch calls** - All outbound HTTP requests, capturing timing, status codes, and request metadata. This enables you to quickly identify how external dependencies affect your application's performance. +- **Binding calls** - Interactions with various Worker bindings such as KV reads and writes, R2 object storage operations, and Durable Object invocations.
+- **Handler calls** - The complete lifecycle of each Worker invocation, including triggers such as [fetch handlers](/workers/runtime-apis/handlers/fetch/), + [scheduled handlers](/workers/runtime-apis/handlers/scheduled/), and [queue handlers](/queues/configuration/javascript-apis/#consumer). + +For a full list of instrumented operations, see the [spans and attributes documentation](/workers/observability/traces/spans-and-attributes). + +### How to enable tracing + +If you have already set `observability.enabled = true` in your [wrangler configuration file](/workers/wrangler/configuration/#observability), tracing **and** logs will be automatically enabled. + + + +```json +{ + "observability": { + "enabled": true + } +} +``` + + + +You can also configure tracing independently by setting `observability.traces.enabled = true` in your [wrangler configuration file](/workers/wrangler/configuration/#observability). + + + +```json +{ + "observability": { + "traces": { + "enabled": true, + // optional sampling rate (recommended for high-traffic workloads) + "head_sampling_rate": 0.05 + } + } +} +``` + + + + + +### Exporting OpenTelemetry traces to a third-party destination + +Workers tracing follows [OpenTelemetry (OTel) standards](https://opentelemetry.io/). This makes it compatible with popular observability platforms, +such as [Honeycomb](/workers/observability/exporting-opentelemetry-data/honeycomb/), [Grafana Cloud](/workers/observability/exporting-opentelemetry-data/grafana-cloud/), and +[Axiom](/workers/observability/exporting-opentelemetry-data/axiom/), while requiring zero development effort from you. If your observability provider has an available OpenTelemetry endpoint, you can export traces (and logs)! + +Learn more about exporting OpenTelemetry data from Workers [here](/workers/observability/exporting-opentelemetry-data/).
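To make the automatic instrumentation concrete, here is a hypothetical Worker sketch (an illustration, not code from these docs): with `observability.traces.enabled` set, the handler invocation and the outbound `fetch` each surface as spans, and the `console.log` line is exported as a log record correlated with the trace, all without any instrumentation code in the Worker itself.

```javascript
// Hypothetical Worker used to illustrate automatic instrumentation.
// None of the lines below call a tracing SDK; the runtime captures them.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);

    // Exported as a log record, correlated with the active trace.
    console.log("handling", url.pathname);

    if (url.pathname === "/proxy") {
      // Outbound fetches appear as child spans with timing and status code.
      const upstream = await fetch("https://example.com/");
      return new Response(`upstream status: ${upstream.status}`);
    }

    return new Response("ok");
  },
};

// In a real Worker module, this object would be the default export:
// export default worker;
```

The resulting trace would contain a handler span with a child span for the subrequest, similar to the waterfall view pictured at the top of this page.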
+ +### Sampling + +:::note[Default Sampling Rate] + +The default sampling rate is `1`, meaning 100% of requests will be traced if tracing is enabled. Set `head_sampling_rate` if you want to trace fewer requests. + +::: + +With sampling, you can trace a percentage of incoming requests in your Cloudflare Worker. +This allows you to manage volume and costs, while still providing meaningful insights into your application. + +The valid sampling range is from `0` to `1`: `0` means no invocations are traced, `1` means every request is traced, and a value such as `0.05` means five out of one hundred requests are traced. + + + +```json +{ + "observability": { + "traces": { + "enabled": true, + // set tracing sampling rate to 5% + "head_sampling_rate": 0.05 + }, + "logs": { + "enabled": true, + // set logging sampling rate to 60% + "head_sampling_rate": 0.6 + } + } +} +``` + + + +Traces and logs are sampled independently, so you can configure a separate `head_sampling_rate` for each. + +Sampling is [head-based](https://opentelemetry.io/docs/concepts/sampling/#head-sampling), meaning that non-traced requests do not incur any tracing overhead. + +### Limits and pricing + +Workers tracing is currently **free** during the initial beta period. This includes all tracing functionality such as collecting traces, storing them, and viewing them in the Cloudflare dashboard. + +Starting on January 15, 2026, tracing will be billed as part of your usage on the Workers Free, Paid, and Enterprise plans.
Each span in a trace represents one observability event, sharing the same monthly quota and pricing as [Workers logs](/workers/platform/pricing/#workers-logs): + +| | Events (trace spans or log events) | Retention | +| ---------------- | ------------------------------------------------------------------ | --------- | +| **Workers Free** | 200,000 per day | 3 Days | +| **Workers Paid** | 10 million included per month +$0.60 per additional million events | 7 Days | diff --git a/src/content/docs/workers/observability/traces/known-limitations.md b/src/content/docs/workers/observability/traces/known-limitations.md new file mode 100644 index 000000000000000..dbcc3dce5ae7d07 --- /dev/null +++ b/src/content/docs/workers/observability/traces/known-limitations.md @@ -0,0 +1,51 @@ +--- +pcx_content_type: navigation +title: Known limitations +sidebar: + order: 3 + group: + hideIndex: false +--- + +Workers tracing is currently in open beta. This page documents current limitations and any upcoming features on our roadmap. + +To provide feedback and send feature requests, head to the [Workers tracing GitHub discussion](https://github.com/cloudflare/workers-sdk/discussions/TODO-GET-THE-LINK). + +### Non-I/O operations may report time of 0 ms + +Due to [security measures put in place to prevent Spectre attacks](/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading), the Workers +Runtime does not update time until I/O events take place. This means that some spans will report a duration of `0 ms` even when the operation took longer. + +The Cloudflare Workers team is exploring security measures that would allow exposing time lengths at millisecond-level granularity in these cases. + +### Trace context propagation not yet supported + +Currently, Workers tracing does not propagate trace IDs to different platforms or accept trace IDs from other platforms.
+ +This means that spans from Workers will not be nested within traces from services that call Workers (or vice versa). + +Once implemented, trace context will flow across service boundaries, automatically linking spans together to create complete, end-to-end visibility. +Our automatic trace context propagation will follow [W3C standards](https://www.w3.org/TR/trace-context/) to ensure compatibility with your existing tools and services, +allowing traces to include spans from both Workers and other services. + +Without trace context propagation, calls to separate Workers and to Durable Objects create separate traces, rather than nested spans. + +### Incomplete span attributes + +We are planning to add more detailed attributes to each span. You can find a complete list of what is already instrumented in the [spans and attributes documentation](/workers/observability/traces/spans-and-attributes). + +Your feedback on any missing information will help us prioritize additions and changes. Please comment on the [Workers tracing GitHub discussion](https://github.com/cloudflare/workers-sdk/discussions/TODO-GET-THE-LINK) +if there are specific attributes you need to use tracing effectively. + +### Support for custom spans and attributes + +Automatic instrumentation covers many platform interactions, but we know you need visibility into your own application logic too. We're working to support the [OpenTelemetry API](https://www.npmjs.com/package/@opentelemetry/api) to make it easier for you to instrument custom spans within your application. + +### Span and attribute names subject to change + +As Workers tracing is currently in beta, span names and attribute names are not yet finalized. We may refine these names during the beta period to improve clarity and align with OpenTelemetry semantic conventions. We recommend reviewing the [spans and attributes documentation](/workers/observability/traces/spans-and-attributes) periodically for updates.
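For example, if you post-process exported span data, one way to stay resilient while names may still change is to resolve attributes through a small alias map. This is a generic sketch, not a Workers API; the alias lists below are illustrative:

```typescript
// Resolve a span attribute by trying a list of candidate names, so that
// downstream processing keeps working if a name is refined during the beta.
type Attributes = Record<string, unknown>;

// Illustrative aliases: try the preferred name first, then alternates.
const ALIASES: Record<string, string[]> = {
  workerName: ["service.name", "faas.name", "cloudflare.script_name"],
  outcome: ["cloudflare.outcome"],
};

export function getAttr(attrs: Attributes, logical: string): unknown {
  for (const name of ALIASES[logical] ?? []) {
    if (name in attrs) return attrs[name];
  }
  return undefined;
}
```

When a name changes, only the alias map needs updating, not every query or dashboard that consumes the data.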
+ +### Known bugs and other callouts + +- There are currently a few attributes that only apply to some spans (e.g. `service.name`, `faas.name`). When filtering or grouping by the Worker name across traces and logs, use `$metadata.service` instead, as it will apply consistently across all event types. +- While a trace is in progress, the event will show `Trace in Progress` on the root span. Please wait a few moments for the full trace to become available. diff --git a/src/content/docs/workers/observability/traces/spans-and-attributes.mdx b/src/content/docs/workers/observability/traces/spans-and-attributes.mdx new file mode 100644 index 000000000000000..26f340bfa84dccf --- /dev/null +++ b/src/content/docs/workers/observability/traces/spans-and-attributes.mdx @@ -0,0 +1,443 @@ +--- +pcx_content_type: concept +title: Spans and attributes +sidebar: + order: 2 +--- + +import { Render } from "~/components"; + +Cloudflare Workers provides automatic tracing instrumentation **out of the box** - no code changes or SDK required.
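For example, a Worker as simple as the hypothetical sketch below is traced automatically once `observability.traces.enabled` is set in its Wrangler configuration; the handler contains no tracing code at all:

```typescript
// Hypothetical Worker: with tracing enabled in the Wrangler config, each
// invocation automatically produces a root span for the fetch handler,
// with no instrumentation code in the Worker itself.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // A subrequest here (`await fetch(...)`) would automatically appear as
    // a child `fetch` span with attributes such as `url.full`,
    // `http.request.method`, and `http.response.status_code`.
    return new Response(JSON.stringify({ path: url.pathname }), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

The handler above is ordinary application code; the root span and its attributes come entirely from the runtime.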
+ +## Currently supported spans and attributes + +### Attributes available on all spans + +- `cloud.provider` - Always set to `cloudflare` +- `cloud.platform` - Always set to `cloudflare.workers` +- `faas.name` - The name of your Worker +- `faas.invocation_id` - A unique identifier for this specific Worker invocation +- `faas.version` - The deployed version tag of your Worker +- `faas.invoked_region` - The region where the Worker was invoked +- `service.name` - The name of your Worker +- `cloudflare.colo` - The three-letter IATA airport code of the Cloudflare data center that processed the request (e.g., `SFO`, `LHR`) +- `cloudflare.script_name` - The name of your Worker +- `cloudflare.script_tags` - Tags associated with your Worker deployment +- `cloudflare.script_version.id` - The version identifier of your deployed Worker +- `cloudflare.invocation.sequence.number` - A counter added to every emitted span and log that can be used to distinguish which was emitted first when the timestamps are the same +- `telemetry.sdk.language` - The programming language used, set to `javascript` +- `telemetry.sdk.name` - The telemetry SDK name, set to `cloudflare` + +--- + +### Attributes available on all root spans + +- `faas.trigger` - The trigger that your Worker was invoked by (e.g., `http`, `cron`, `queue`, `email`) +- `cloudflare.ray_id` - A [unique identifier](/fundamentals/reference/cloudflare-ray-id/) for every request that goes through Cloudflare +- `cloudflare.handler_type` - The type of handler that processed the request (e.g., `fetch`, `scheduled`, `queue`, `email`, `alarm`) +- `cloudflare.entrypoint` - The entrypoint that was invoked in your Worker (e.g. 
the name of your Durable Object) +- `cloudflare.execution_model` - The execution model of the Worker (e.g., `stateless`, `stateful` for Durable Objects) +- `cloudflare.outcome` - The outcome of the Worker invocation (e.g., `ok`, `exception`, `exceededCpu`, `exceededMemory`) +- `cloudflare.cpu_time_ms` - The CPU time used by the Worker invocation, in milliseconds +- `cloudflare.wall_time_ms` - The wall time used by the Worker invocation, in milliseconds + +--- + +### [Runtime API](/workers/runtime-apis/) + +#### [`fetch`](/workers/runtime-apis/handlers/fetch/) + +- `network.protocol.name` +- `network.protocol.version` +- `url.full` +- `url.scheme` +- `url.path` +- `url.query` +- `server.port` +- `server.address` +- `user_agent.original` +- `http.request.method` +- `http.request.header.content-type` +- `http.request.header.content-length` +- `http.request.header.accept` +- `http.request.header.accept-encoding` +- `http.request.body.size` +- `http.response.status_code` +- `http.response.body.size` + +#### [`cache_put`](/workers/runtime-apis/cache/#put) + +- `cache_control.expiration` +- `cache_control.revalidation` + +#### [`cache_match`](/workers/runtime-apis/cache/#match) + +#### [`cache_delete`](/workers/runtime-apis/cache/#delete) + +--- + +### [Handlers](/workers/runtime-apis/handlers/) + +#### [`Fetch Handler`](/workers/runtime-apis/handlers/fetch/) + +- `cloudflare.verified_bot_category` +- `cloudflare.asn` +- `cloudflare.response.time_to_first_byte_ms` +- `geo.timezone` +- `geo.continent.code` +- `geo.country.code` +- `geo.locality.name` +- `geo.locality.region` +- `user_agent.original` +- `user_agent.os.name` +- `user_agent.os.version` +- `user_agent.browser.name` +- `user_agent.browser.major_version` +- `user_agent.browser.version` +- `user_agent.engine.name` +- `user_agent.engine.version` +- `user_agent.device.type` +- `user_agent.device.vendor` +- `user_agent.device.model` +- `http.request.method` +- `http.request.header.accept` +- 
`http.request.header.accept-encoding` +- `http.request.header.accept-language` +- `url.full` +- `url.path` +- `network.protocol.name` + +#### [`Scheduled Handler`](/workers/runtime-apis/handlers/scheduled/) + +- `faas.cron` +- `cloudflare.scheduled_time` + +#### [`QueueHandler`](/workers/runtime-apis/handlers/queue/) + +- `cloudflare.queue.name` +- `cloudflare.queue.batch_size` + +#### [`RPC Handler`](/workers/runtime-apis/rpc/) + +- `cloudflare.jsrpc.method` + +#### [`Email Handler`](/email-routing/email-workers/runtime-api/) + +- `cloudflare.email.from` +- `cloudflare.email.to` +- `cloudflare.email.size` + +#### [`Tail Handler`](/workers/runtime-apis/handlers/tail/) + +- `cloudflare.trace.count` + +#### [`Alarm Handler`](/durable-objects/api/alarms/#alarm) + +- `cloudflare.scheduled_time` + +--- + +#### [`browser_rendering_fetch`](/browser-rendering/) + +--- + +### [Workers KV](/kv/) + +#### Attributes available on all KV spans + +- `db.system.name` +- `db.operation.name` +- `cloudflare.binding.name` +- `cloudflare.binding.type` + +#### [`kv_get`](/kv/api/read-key-value-pairs/#get-method) + +- `cloudflare.kv.query.keys` +- `cloudflare.kv.query.keys.count` +- `cloudflare.kv.query.type` +- `cloudflare.kv.query.cache_ttl` +- `cloudflare.kv.response.size` +- `cloudflare.kv.response.returned_rows` +- `cloudflare.kv.response.metadata` +- `cloudflare.kv.response.cache_status` + +#### [`kv_getWithMetadata`](/kv/api/read-key-value-pairs/#getwithmetadata-method) + +- `cloudflare.kv.query.keys` +- `cloudflare.kv.query.keys.count` +- `cloudflare.kv.query.type` +- `cloudflare.kv.query.cache_ttl` +- `cloudflare.kv.response.size` +- `cloudflare.kv.response.returned_rows` +- `cloudflare.kv.response.metadata` +- `cloudflare.kv.response.cache_status` + +#### [`kv_put`](/kv/api/write-key-value-pairs/#put-method) + +- `cloudflare.kv.query.keys` +- `cloudflare.kv.query.keys.count` +- `cloudflare.kv.query.value_type` +- `cloudflare.kv.query.expiration` +- 
`cloudflare.kv.query.expiration_ttl` +- `cloudflare.kv.query.metadata` +- `cloudflare.kv.query.payload.size` + +#### [`kv_delete`](/kv/api/delete-key-value-pairs/#delete-method) + +- `cloudflare.kv.query.keys` +- `cloudflare.kv.query.keys.count` + +#### [`kv_list`](/kv/api/list-keys/#list-method) + +- `cloudflare.kv.query.prefix` +- `cloudflare.kv.query.limit` +- `cloudflare.kv.query.cursor` +- `cloudflare.kv.response.size` +- `cloudflare.kv.response.returned_rows` +- `cloudflare.kv.response.list_complete` +- `cloudflare.kv.response.cursor` +- `cloudflare.kv.response.cache_status` +- `cloudflare.kv.response.expiration` + +--- + +### [R2](/r2/) + +#### Attributes available on all R2 spans + +- `cloudflare.binding.type` +- `cloudflare.binding.name` +- `cloudflare.r2.bucket` +- `cloudflare.r2.operation` +- `cloudflare.r2.response.success` +- `cloudflare.r2.error.message` +- `cloudflare.r2.error.code` + +#### [`r2_head`](/r2/api/workers/workers-api-reference/#bucket-method-definitions) + +- `cloudflare.r2.request.key` +- `cloudflare.r2.response.etag` +- `cloudflare.r2.response.size` +- `cloudflare.r2.response.uploaded` +- `cloudflare.r2.response.checksum.value` +- `cloudflare.r2.response.checksum.type` +- `cloudflare.r2.response.storage_class` +- `cloudflare.r2.response.ssec_key` +- `cloudflare.r2.response.content_type` +- `cloudflare.r2.response.content_encoding` +- `cloudflare.r2.response.content_disposition` +- `cloudflare.r2.response.content_language` +- `cloudflare.r2.response.cache_control` +- `cloudflare.r2.response.cache_expiry` +- `cloudflare.r2.response.custom_metadata` + +#### [`r2_get`](/r2/api/workers/workers-api-reference/#r2getoptions) + +- `cloudflare.r2.request.key` +- `cloudflare.r2.request.range.offset` +- `cloudflare.r2.request.range.length` +- `cloudflare.r2.request.range.suffix` +- `cloudflare.r2.request.range` +- `cloudflare.r2.request.ssec_key` +- `cloudflare.r2.request.only_if.etag_matches` +- 
`cloudflare.r2.request.only_if.etag_does_not_match` +- `cloudflare.r2.request.only_if.uploaded_before` +- `cloudflare.r2.request.only_if.uploaded_after` +- `cloudflare.r2.response.etag` +- `cloudflare.r2.response.size` +- `cloudflare.r2.response.uploaded` +- `cloudflare.r2.response.checksum.value` +- `cloudflare.r2.response.checksum.type` +- `cloudflare.r2.response.storage_class` +- `cloudflare.r2.response.ssec_key` +- `cloudflare.r2.response.content_type` +- `cloudflare.r2.response.content_encoding` +- `cloudflare.r2.response.content_disposition` +- `cloudflare.r2.response.content_language` +- `cloudflare.r2.response.cache_control` +- `cloudflare.r2.response.cache_expiry` +- `cloudflare.r2.response.custom_metadata` + +#### [`r2_put`](/r2/api/workers/workers-api-reference/#r2putoptions) + +- `cloudflare.r2.request.key` +- `cloudflare.r2.request.size` +- `cloudflare.r2.request.checksum.type` +- `cloudflare.r2.request.checksum.value` +- `cloudflare.r2.request.custom_metadata` +- `cloudflare.r2.request.http_metadata.content_type` +- `cloudflare.r2.request.http_metadata.content_encoding` +- `cloudflare.r2.request.http_metadata.content_disposition` +- `cloudflare.r2.request.http_metadata.content_language` +- `cloudflare.r2.request.http_metadata.cache_control` +- `cloudflare.r2.request.http_metadata.cache_expiry` +- `cloudflare.r2.request.storage_class` +- `cloudflare.r2.request.ssec_key` +- `cloudflare.r2.request.only_if.etag_matches` +- `cloudflare.r2.request.only_if.etag_does_not_match` +- `cloudflare.r2.request.only_if.uploaded_before` +- `cloudflare.r2.request.only_if.uploaded_after` +- `cloudflare.r2.response.etag` +- `cloudflare.r2.response.size` +- `cloudflare.r2.response.uploaded` +- `cloudflare.r2.response.checksum.value` +- `cloudflare.r2.response.checksum.type` +- `cloudflare.r2.response.storage_class` +- `cloudflare.r2.response.ssec_key` +- `cloudflare.r2.response.content_type` +- `cloudflare.r2.response.content_encoding` +- 
`cloudflare.r2.response.content_disposition` +- `cloudflare.r2.response.content_language` +- `cloudflare.r2.response.cache_control` +- `cloudflare.r2.response.cache_expiry` +- `cloudflare.r2.response.custom_metadata` + +#### [`r2_list`](/r2/api/workers/workers-api-reference/#r2listoptions) + +- `cloudflare.r2.request.limit` +- `cloudflare.r2.request.prefix` +- `cloudflare.r2.request.cursor` +- `cloudflare.r2.request.delimiter` +- `cloudflare.r2.request.start_after` +- `cloudflare.r2.request.include.http_metadata` +- `cloudflare.r2.request.include.custom_metadata` +- `cloudflare.r2.response.returned_objects` +- `cloudflare.r2.response.delimited_prefixes` +- `cloudflare.r2.response.truncated` +- `cloudflare.r2.response.cursor` + +#### [`r2_delete`](/r2/api/workers/workers-api-reference/#bucket-method-definitions) + +- `cloudflare.r2.request.keys` + +#### [`r2_createMultipartUpload`](/r2/api/workers/workers-api-reference/#r2multipartoptions) + +- `cloudflare.r2.request.key` +- `cloudflare.r2.request.custom_metadata` +- `cloudflare.r2.request.http_metadata.content_type` +- `cloudflare.r2.request.http_metadata.content_encoding` +- `cloudflare.r2.request.http_metadata.content_disposition` +- `cloudflare.r2.request.http_metadata.content_language` +- `cloudflare.r2.request.http_metadata.cache_control` +- `cloudflare.r2.request.http_metadata.cache_expiry` +- `cloudflare.r2.request.storage_class` +- `cloudflare.r2.request.ssec_key` +- `cloudflare.r2.response.upload_id` + +#### [`r2_uploadPart`](/r2/api/workers/workers-multipart-usage/) + +- `cloudflare.r2.request.key` +- `cloudflare.r2.request.upload_id` +- `cloudflare.r2.request.part_number` +- `cloudflare.r2.request.ssec_key` +- `cloudflare.r2.request.size` +- `cloudflare.r2.response.etag` + +#### [`r2_abortMultipartUpload`](/r2/api/workers/workers-multipart-usage/) + +- `cloudflare.r2.request.key` +- `cloudflare.r2.request.upload_id` + +#### [`r2_completeMultipartUpload`](/r2/api/workers/workers-multipart-usage/) + +- 
`cloudflare.r2.request.key` +- `cloudflare.r2.request.upload_id` +- `cloudflare.r2.request.uploaded_parts` +- `cloudflare.r2.response.etag` +- `cloudflare.r2.response.size` +- `cloudflare.r2.response.uploaded` +- `cloudflare.r2.response.checksum.value` +- `cloudflare.r2.response.checksum.type` +- `cloudflare.r2.response.storage_class` +- `cloudflare.r2.response.ssec_key` +- `cloudflare.r2.response.content_type` +- `cloudflare.r2.response.content_encoding` +- `cloudflare.r2.response.content_disposition` +- `cloudflare.r2.response.content_language` +- `cloudflare.r2.response.cache_control` +- `cloudflare.r2.response.cache_expiry` +- `cloudflare.r2.response.custom_metadata` + +--- + +### [Durable Object API](/durable-objects/) + +#### `durable_object_subrequest` + +--- + +### [Durable Object Storage SQL API](/durable-objects/api/sqlite-storage-api) + +The SQL API allows you to modify the SQLite database embedded within a Durable Object. + +#### [`durable_object_storage_exec`](/durable-objects/api/sqlite-storage-api/#exec) + +- `db.system.name` +- `db.operation.name` +- `db.query.text` +- `cloudflare.durable_object.query.bindings` +- `cloudflare.durable_object.response.rows_read` +- `cloudflare.durable_object.response.rows_written` + +#### [`durable_object_storage_getDatabaseSize`](/durable-objects/api/sqlite-storage-api/#databasesize) + +- `db.operation.name` +- `cloudflare.durable_object.response.db_size` + +#### `durable_object_storage_ingest` + +- `cloudflare.durable_object.response.rows_read` +- `cloudflare.durable_object.response.rows_written` +- `cloudflare.durable_object.response.statement_count` + +--- + +### [Durable Object Storage KV API](/durable-objects/api/legacy-kv-storage-api) + +The legacy KV-backed API allows you to modify embedded storage within a Durable Object.
+ +#### [`durable_object_storage_get`](/durable-objects/api/legacy-kv-storage-api/#do-kv-async-get) + +#### [`durable_object_storage_put`](/durable-objects/api/legacy-kv-storage-api/#do-kv-async-put) + +#### [`durable_object_storage_delete`](/durable-objects/api/legacy-kv-storage-api/#do-kv-async-delete) + +#### [`durable_object_storage_list`](/durable-objects/api/legacy-kv-storage-api/#do-kv-async-list) + +#### [`durable_object_storage_deleteAll`](/durable-objects/api/legacy-kv-storage-api/#deleteall) + +--- + +### [Durable Object Storage Alarms API](/durable-objects/api/alarms/) + +#### [`durable_object_alarms_getAlarm`](/durable-objects/api/alarms/#getalarm) + +#### [`durable_object_alarms_setAlarm`](/durable-objects/api/alarms/#setalarm) + +#### [`durable_object_alarms_deleteAlarm`](/durable-objects/api/alarms/#deletealarm) + +--- + +### [Email](/email-routing/) + +#### [`reply_email`](/email-routing/email-workers/reply-email-workers/) + +#### [`forward_email`](/email-routing/email-workers/runtime-api/) + +#### [`send_email`](/email-routing/email-workers/send-email-workers/) + +--- + +### [Queues](/queues/) + +#### [`queue_send`](/queues/configuration/javascript-apis/#queue) + +#### [`queue_sendBatch`](/queues/configuration/javascript-apis/#queue) + +--- + +### [Rate limiting](/workers/runtime-apis/bindings/rate-limit/) + +#### [`ratelimit_run`](/workers/runtime-apis/bindings/rate-limit/#best-practices) + +---
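To connect these span names back to code: in the hypothetical handler below (the binding names `CACHE` and `BUCKET` and the narrowed `Env` interface are illustrative, not the full Workers types), each awaited storage call would be captured as its own child span, `kv_get` and `r2_put` respectively, carrying the attributes listed above:

```typescript
// Hypothetical bindings, narrowed to just the calls used here so the
// sketch stays self-contained.
interface Env {
  CACHE: { get(key: string): Promise<string | null> }; // Workers KV binding
  BUCKET: { put(key: string, value: string): Promise<unknown> }; // R2 binding
}

// The KV read is recorded as a `kv_get` span (e.g. `cloudflare.kv.query.keys`,
// `cloudflare.kv.response.cache_status`); the R2 write as an `r2_put` span
// (e.g. `cloudflare.r2.request.key`, `cloudflare.r2.request.size`).
export async function readThrough(key: string, env: Env): Promise<string> {
  const cached = await env.CACHE.get(key); // -> `kv_get` span
  if (cached !== null) return cached;
  const fresh = `value:${key}`;
  await env.BUCKET.put(key, fresh); // -> `r2_put` span
  return fresh;
}
```

Both spans are emitted automatically and nest under the invocation's root span; no code in `readThrough` refers to tracing.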