diff --git a/src/content/docs/pipelines/build-with-pipelines/sources/http.mdx b/src/content/docs/pipelines/build-with-pipelines/sources/http.mdx
index 819786796338f3f..3d558809c9dcab3 100644
--- a/src/content/docs/pipelines/build-with-pipelines/sources/http.mdx
+++ b/src/content/docs/pipelines/build-with-pipelines/sources/http.mdx
@@ -8,7 +8,7 @@ head:
content: Configure HTTP endpoint
---
-import { Render, PackageManagers } from "~/components";
+import { Render, PackageManagers, DashButton } from "~/components";
Pipelines support data ingestion over HTTP. When you create a new pipeline using the default settings you will receive a globally scalable ingestion endpoint. To ingest data, make HTTP POST requests to the endpoint.
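+As a minimal sketch of an ingestion request (the endpoint URL, field names, and values are placeholders; use the endpoint returned when you created your pipeline and your own event schema):
+
+```sh
+# Send a batch of records as a JSON array; the endpoint and payload below are placeholders.
+curl -X POST "https://<PIPELINE-ID>.pipelines.cloudflare.com" \
+  --header "Content-Type: application/json" \
+  --data '[{"event":"page_view","timestamp":"2025-04-06T16:24:29.213Z"}]'
+```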
@@ -54,12 +54,16 @@ $ npx wrangler pipelines create [PIPELINE-NAME] --r2-bucket [R2-BUCKET-NAME] --r
Once authentication is turned on, you will need to include a Cloudflare API token in your request headers.
### Get API token
-1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
-2. Navigate to your [API Keys](https://dash.cloudflare.com/profile/api-tokens).
-3. Select **Create Token**.
-4. Choose the template for Workers Pipelines. Select **Continue to summary** > **Create token**. Make sure to copy the API token and save it securely.
+
+1. In the Cloudflare dashboard, go to the **Account API tokens** page.
+
+   <DashButton url="https://dash.cloudflare.com/?to=/:account/api-tokens" />
+
+2. Select **Create Token**.
+3. Choose the template for Workers Pipelines. Select **Continue to summary** > **Create token**. Make sure to copy the API token and save it securely.
### Making authenticated requests
+
Include the API token you created in the previous step in the headers for your request:
```sh
diff --git a/src/content/docs/pipelines/tutorials/send-data-from-client/index.mdx b/src/content/docs/pipelines/tutorials/send-data-from-client/index.mdx
index c162453944cf92c..504e60f555b0e57 100644
--- a/src/content/docs/pipelines/tutorials/send-data-from-client/index.mdx
+++ b/src/content/docs/pipelines/tutorials/send-data-from-client/index.mdx
@@ -12,7 +12,7 @@ tags:
- SQL
---
-import { Render, PackageManagers, Details, WranglerConfig } from "~/components";
+import { Render, PackageManagers, Details, WranglerConfig, DashButton } from "~/components";
In this tutorial, you will learn how to build a data lake of website interaction events (clickstream data), using Pipelines.
@@ -406,17 +406,21 @@ Now, you can access the application at the deployed URL. When you click on the `
## 9. View the data in R2
-You can view the data in the R2 bucket. If you are not signed in to the Cloudflare dashboard, sign in and navigate to the [R2 overview](https://dash.cloudflare.com/?to=/:account/r2/overview) page.
+To view the data in the R2 bucket:
-Open the bucket you configured for your pipeline in Step 3. You can see files, representing the clickstream data. These files are newline delimited JSON files. Each row in a file represents one click event. Download one of the files, and open it in your preferred text editor to see the output:
+1. In the Cloudflare dashboard, go to R2's **Overview** page.
-```json
-{"timestamp":"2025-04-06T16:24:29.213Z","session_id":"1234567890abcdef","user_id":"user965","event_data":{"event_id":673,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:29.213Z","product_id":2},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
-{"timestamp":"2025-04-06T16:24:30.436Z","session_id":"1234567890abcdef","user_id":"user998","event_data":{"event_id":787,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:30.436Z","product_id":4},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
-{"timestamp":"2025-04-06T16:24:31.330Z","session_id":"1234567890abcdef","user_id":"user22","event_data":{"event_id":529,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:31.330Z","product_id":4},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
-{"timestamp":"2025-04-06T16:24:31.879Z","session_id":"1234567890abcdef","user_id":"user750","event_data":{"event_id":756,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:31.879Z","product_id":4},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
-{"timestamp":"2025-04-06T16:24:33.978Z","session_id":"1234567890abcdef","user_id":"user333","event_data":{"event_id":467,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:33.978Z","product_id":6},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
-```
+   <DashButton url="https://dash.cloudflare.com/?to=/:account/r2/overview" />
+
+2. Open the bucket you configured for your pipeline in Step 3. You will see files containing the clickstream data. These are newline-delimited JSON files; each row represents one click event. Download one of the files and open it in your preferred text editor to see the output:
+
+ ```json
+ {"timestamp":"2025-04-06T16:24:29.213Z","session_id":"1234567890abcdef","user_id":"user965","event_data":{"event_id":673,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:29.213Z","product_id":2},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
+ {"timestamp":"2025-04-06T16:24:30.436Z","session_id":"1234567890abcdef","user_id":"user998","event_data":{"event_id":787,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:30.436Z","product_id":4},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
+ {"timestamp":"2025-04-06T16:24:31.330Z","session_id":"1234567890abcdef","user_id":"user22","event_data":{"event_id":529,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:31.330Z","product_id":4},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
+ {"timestamp":"2025-04-06T16:24:31.879Z","session_id":"1234567890abcdef","user_id":"user750","event_data":{"event_id":756,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:31.879Z","product_id":4},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
+ {"timestamp":"2025-04-06T16:24:33.978Z","session_id":"1234567890abcdef","user_id":"user333","event_data":{"event_id":467,"event_type":"product_view","page_url":"https://.workers.dev/","timestamp":"2025-04-06T16:24:33.978Z","product_id":6},"device_info":{"browser":"Chrome","os":"Linux","device":"Mobile","userAgent":"Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"},"referrer":""}
+ ```
## 10. Optional: Connect a query engine to your R2 bucket and query the data
Once you have collected the raw events in R2, you might want to query the events, to answer questions such as "how many events occurred?". You can connect a query engine, such as MotherDuck, to your R2 bucket.
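+As a quick local sanity check before connecting a full query engine, you could count the events directly with DuckDB. This is a sketch only; the account ID, bucket name, R2 credentials, and object path are placeholders:
+
+```sh
+# Placeholders throughout: <ACCOUNT_ID>, <BUCKET_NAME>, and the R2 access keys.
+# Adjust the object path to match how your pipeline names and partitions files.
+duckdb <<'SQL'
+INSTALL httpfs; LOAD httpfs;
+SET s3_endpoint='<ACCOUNT_ID>.r2.cloudflarestorage.com';
+SET s3_region='auto';
+SET s3_access_key_id='<R2_ACCESS_KEY_ID>';
+SET s3_secret_access_key='<R2_SECRET_ACCESS_KEY>';
+SELECT count(*) AS total_events FROM read_json_auto('s3://<BUCKET_NAME>/*.json');
+SQL
+```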
diff --git a/src/content/docs/queues/configuration/consumer-concurrency.mdx b/src/content/docs/queues/configuration/consumer-concurrency.mdx
index 9868fc9647307a0..e0a8875618d9526 100644
--- a/src/content/docs/queues/configuration/consumer-concurrency.mdx
+++ b/src/content/docs/queues/configuration/consumer-concurrency.mdx
@@ -5,7 +5,7 @@ sidebar:
order: 5
---
-import { WranglerConfig } from "~/components";
+import { WranglerConfig, DashButton } from "~/components";
Consumer concurrency allows a [consumer Worker](/queues/reference/how-queues-works/#consumers) processing messages from a queue to automatically scale out horizontally to keep up with the rate that messages are being written to a queue.
@@ -66,11 +66,13 @@ You can configure the concurrency of your consumer Worker in two ways:
To configure the concurrency settings for your consumer Worker from the dashboard:
-1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
-2. Select **Workers & Pages** > **Queues**.
-3. Select your queue > **Settings**.
-4. Select **Edit Consumer** under Consumer details.
-5. Set **Maximum consumer invocations** to a value between `1` and `250`. This value represents the maximum number of concurrent consumer invocations available to your queue.
+1. In the Cloudflare dashboard, go to the **Queues** page.
+
+   <DashButton url="https://dash.cloudflare.com/?to=/:account/workers/queues" />
+
+2. Select your queue > **Settings**.
+3. Select **Edit Consumer** under Consumer details.
+4. Set **Maximum consumer invocations** to a value between `1` and `250`. This value represents the maximum number of concurrent consumer invocations available to your queue.
To remove a fixed maximum value, select **auto (recommended)**.
diff --git a/src/content/docs/queues/examples/send-messages-from-dash.mdx b/src/content/docs/queues/examples/send-messages-from-dash.mdx
index 27b7af84825d104..35f1a1619adb6f7 100644
--- a/src/content/docs/queues/examples/send-messages-from-dash.mdx
+++ b/src/content/docs/queues/examples/send-messages-from-dash.mdx
@@ -11,17 +11,22 @@ description: Use the dashboard to send messages to a queue.
---
+import { DashButton } from "~/components";
+
Sending messages from the dashboard allows you to debug Queues or queue consumers without a producer Worker.
To send messages from the dashboard:
-1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
-2. Select **Workers & Pages** > **Queues**.
-3. Select the queue to send a message to.
-4. Select the **Messages** tab.
-5. Select **Send message**.
-6. Enter your message. You can choose your message content type by selecting the **Text** or **JSON** tabs. Alternatively, select the **Upload a file** button or drag a file over the textbox to upload a file as a message.
-7. Select **Send message**.
+1. In the Cloudflare dashboard, go to the **Queues** page.
+
+   <DashButton url="https://dash.cloudflare.com/?to=/:account/workers/queues" />
+
+2. Select the queue to send a message to.
+3. Select the **Messages** tab.
+4. Select **Send**.
+5. Choose your message **Content Type**: _Text_ or _JSON_.
+6. Enter your message. Alternatively, drag a file over the textbox to upload a file as a message.
+7. Select **Send**.
Your message will be sent to the queue.
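+To confirm the consumer received the message, you can stream the consumer Worker's logs while you send it. This is a quick sketch that assumes a consumer Worker named `my-consumer`; substitute your own Worker name:
+
+```sh
+# Stream live logs from the consumer Worker, then send a message from the dashboard.
+npx wrangler tail my-consumer --format pretty
+```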
diff --git a/src/content/docs/radar/investigate/url-scanner.mdx b/src/content/docs/radar/investigate/url-scanner.mdx
index 6c59c82b93ec99d..f90886b8faadd47 100644
--- a/src/content/docs/radar/investigate/url-scanner.mdx
+++ b/src/content/docs/radar/investigate/url-scanner.mdx
@@ -3,9 +3,10 @@ pcx_content_type: reference
title: URL Scanner
sidebar:
order: 7
-
---
+import { DashButton } from "~/components";
+
To better understand Internet usage around the world, use Cloudflare's URL Scanner. With Cloudflare's URL Scanner, you have the ability to investigate the details of a domain, IP, URL, or ASN. Cloudflare's URL Scanner is available in the Security Center of the Cloudflare dashboard, [Cloudflare Radar](https://radar.cloudflare.com/scan), and the Cloudflare [API](/api/resources/url_scanner/).
## Use the API
@@ -119,7 +120,7 @@ To fetch the scan's [screenshots](/api/resources/url_scanner/subresources/scans/
### Search scans
-Use a subset of ElasticSearch Query syntax to filter scans. Search results will include `Public` scans and your own `Unlisted` scans.
+Use a subset of Elasticsearch Query syntax to filter scans. Search results will include `Public` scans and your own `Unlisted` scans.
To search for scans to the hostname `google.com`, use the query parameter `q=page.domain:"google.com"`:
@@ -154,9 +155,11 @@ Go to [Search URL scans](/api/resources/url_scanner/subresources/scans/methods/l
Alternatively, you can search in the Security Center:
-1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
-2. Go to **Security Center** > **Investigate**.
-3. Enter your query and select **Search**.
+1. In the Cloudflare dashboard, go to the **Investigate** page.
+
+   <DashButton url="https://dash.cloudflare.com/?to=/:account/security-center/investigate" />
+
+2. Enter your query and select **Search**.
You can scan a URL by location. Scanning a URL by location allows you to analyze how a website may present different content depending on your location. This helps to expose and examine region-specific malicious activities.
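+As a rough sketch of the equivalent API request for a location-specific scan (the optional `country` field, the account ID, and the API token are placeholders and assumptions; see the [API reference](/api/resources/url_scanner/) for the full request shape):
+
+```sh
+# Assumptions: the optional "country" field on the v2 scan endpoint; <ACCOUNT_ID> and <API_TOKEN> are placeholders.
+curl "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/urlscanner/v2/scan" \
+  --header "Authorization: Bearer <API_TOKEN>" \
+  --header "Content-Type: application/json" \
+  --data '{"url": "https://example.com", "country": "US"}'
+```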