diff --git a/public/__redirects b/public/__redirects
index 0515d5c9b86afb..9ae3f191d84218 100644
--- a/public/__redirects
+++ b/public/__redirects
@@ -956,7 +956,7 @@
/load-balancing/local-traffic-management/ltm-magic-wan/ /load-balancing/private-network/magic-wan/ 301
# logs
-/logs/log-fields/ /logs/reference/log-fields/ 301
+/logs/log-fields/ /logs/logpush/logpush-job/datasets/ 301
/logs/logpull-api/ /logs/logpull/ 301
/logs/logpull-api/requesting-logs/ /logs/logpull/requesting-logs/ 301
/logs/logpush/aws-s3/ /logs/logpush/logpush-job/enable-destinations/aws-s3/ 301
@@ -2215,6 +2215,7 @@
/fundamentals/setup/manage-domains/* /fundamentals/manage-domains/:splat 301
/fundamentals/setup/manage-members/* /fundamentals/manage-members/:splat 301
/logs/get-started/enable-destinations/* /logs/logpush/logpush-job/enable-destinations/:splat 301
+/logs/reference/log-fields/* /logs/logpush/logpush-job/datasets/:splat 301
# Cloudflare One / Zero Trust
/cloudflare-one/connections/connect-networks/install-and-setup/tunnel-guide/local/as-a-service/* /cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/:splat 301
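The `_redirects` rules above pair a `/*` source with a `:splat` target, so any suffix under the old path is carried over to the new one. A minimal sketch of that substitution (illustrative only — the real matching is done by the hosting platform, not this code):

```python
# Illustrative sketch of how a `/*` rule with `:splat` rewrites a path.
def apply_splat_rule(path, source_prefix, target_prefix):
    """Return the redirect target for `path`, or None if the rule does not match."""
    if not path.startswith(source_prefix):
        return None
    splat = path[len(source_prefix):]  # the portion matched by `*`
    return target_prefix + splat       # `:splat` is replaced by that portion

# The new rule added in this diff:
old = "/logs/reference/log-fields/"
new = "/logs/logpush/logpush-job/datasets/"
print(apply_splat_rule("/logs/reference/log-fields/zone/http_requests/", old, new))
# -> /logs/logpush/logpush-job/datasets/zone/http_requests/
```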
diff --git a/src/content/changelog/access/2025-01-15-ssh-logs-and-logpush.mdx b/src/content/changelog/access/2025-01-15-ssh-logs-and-logpush.mdx
index 478108da76517b..c0413456ffaf57 100644
--- a/src/content/changelog/access/2025-01-15-ssh-logs-and-logpush.mdx
+++ b/src/content/changelog/access/2025-01-15-ssh-logs-and-logpush.mdx
@@ -8,6 +8,6 @@ date: 2025-01-15
Only available on Enterprise plans.
:::
-Cloudflare now allows you to send SSH command logs to storage destinations configured in [Logpush](/logs/logpush/), including third-party destinations. Once exported, analyze and audit the data as best fits your organization! For a list of available data fields, refer to the [SSH logs dataset](/logs/reference/log-fields/account/ssh_logs/).
+Cloudflare now allows you to send SSH command logs to storage destinations configured in [Logpush](/logs/logpush/), including third-party destinations. Once exported, analyze and audit the data as best fits your organization! For a list of available data fields, refer to the [SSH logs dataset](/logs/logpush/logpush-job/datasets/account/ssh_logs/).
To set up a Logpush job, refer to [Logpush integration](/cloudflare-one/insights/logs/logpush/).
diff --git a/src/content/changelog/browser-isolation/2025-03-03-user-action-logging.mdx b/src/content/changelog/browser-isolation/2025-03-03-user-action-logging.mdx
index 7e29cfc5bd5f29..9389fa17d04c4b 100644
--- a/src/content/changelog/browser-isolation/2025-03-03-user-action-logging.mdx
+++ b/src/content/changelog/browser-isolation/2025-03-03-user-action-logging.mdx
@@ -4,7 +4,7 @@ description: User action logs for Remote Browser Isolation
date: 2025-03-04
---
-We're excited to announce that new logging capabilities for [Remote Browser Isolation (RBI)](/cloudflare-one/policies/browser-isolation/) through [Logpush](/logs/reference/log-fields/account/) are available in Beta starting today!
+We're excited to announce that new logging capabilities for [Remote Browser Isolation (RBI)](/cloudflare-one/policies/browser-isolation/) through [Logpush](/logs/logpush/logpush-job/datasets/account/) are available in Beta starting today!
With these enhanced logs, administrators can gain visibility into end user behavior in the remote browser and track blocked data extraction attempts, along with the websites that triggered them, in an isolated session.
diff --git a/src/content/changelog/workers/2025-04-07-increase-trace-events-limit.mdx b/src/content/changelog/workers/2025-04-07-increase-trace-events-limit.mdx
index 1099ead4931ae7..c2d4c47d319c1b 100644
--- a/src/content/changelog/workers/2025-04-07-increase-trace-events-limit.mdx
+++ b/src/content/changelog/workers/2025-04-07-increase-trace-events-limit.mdx
@@ -9,7 +9,7 @@ date: 2025-04-07
You can now capture a maximum of 256 KB of log events per Workers invocation, helping you gain better visibility into application behavior.
All console.log() statements, exceptions, request metadata, and headers are automatically captured during the Worker invocation and emitted
-as [JSON object](/logs/reference/log-fields/account/workers_trace_events). [Workers Logs](/workers/observability/logs/workers-logs) deserializes
+as a [JSON object](/logs/logpush/logpush-job/datasets/account/workers_trace_events). [Workers Logs](/workers/observability/logs/workers-logs) deserializes
this object before indexing the fields and storing them. You can also capture, transform, and export the JSON object in a
[Tail Worker](/workers/observability/logs/tail-workers).
diff --git a/src/content/changelog/workers/2025-04-09-workers-timing.mdx b/src/content/changelog/workers/2025-04-09-workers-timing.mdx
index dbdb3fdcd818d3..ba2ca1139004bf 100644
--- a/src/content/changelog/workers/2025-04-09-workers-timing.mdx
+++ b/src/content/changelog/workers/2025-04-09-workers-timing.mdx
@@ -9,8 +9,8 @@ date: 2025-04-09
You can now observe and investigate the CPU time and Wall time for every Workers invocation.
- For [Workers Logs](/workers/observability/logs/workers-logs), CPU time and Wall time are surfaced in the [Invocation Log](/workers/observability/logs/workers-logs/#invocation-logs).
-- For [Tail Workers](/workers/observability/logs/tail-workers), CPU time and Wall time are surfaced at the top level of the [Workers Trace Events object](/logs/reference/log-fields/account/workers_trace_events).
-- For [Workers Logpush](/workers/observability/logs/logpush), CPU and Wall time are surfaced at the top level of the [Workers Trace Events object](/logs/reference/log-fields/account/workers_trace_events). All new jobs will have these new fields included by default. Existing jobs need to be updated to include CPU time and Wall time.
+- For [Tail Workers](/workers/observability/logs/tail-workers), CPU time and Wall time are surfaced at the top level of the [Workers Trace Events object](/logs/logpush/logpush-job/datasets/account/workers_trace_events).
+- For [Workers Logpush](/workers/observability/logs/logpush), CPU time and Wall time are surfaced at the top level of the [Workers Trace Events object](/logs/logpush/logpush-job/datasets/account/workers_trace_events). All new jobs will have these new fields included by default. Existing jobs need to be updated to include CPU time and Wall time.
You can use a Workers Logs filter to search for logs where Wall time exceeds 100ms.
diff --git a/src/content/docs/bots/additional-configurations/ja3-ja4-fingerprint/index.mdx b/src/content/docs/bots/additional-configurations/ja3-ja4-fingerprint/index.mdx
index d05e5d93773f63..e4240112a9a472 100644
--- a/src/content/docs/bots/additional-configurations/ja3-ja4-fingerprint/index.mdx
+++ b/src/content/docs/bots/additional-configurations/ja3-ja4-fingerprint/index.mdx
@@ -83,7 +83,7 @@ To get more information about potential bot requests, use these JA3 and JA4 fing
- [Bot Analytics](/bots/bot-analytics/#enterprise-bot-management)
- [Security Events](/waf/analytics/security-events/) and [Security Analytics](/waf/analytics/security-analytics/)
- [Analytics GraphQL API](/analytics/graphql-api/), specifically the **HTTP Requests** dataset
-- [Logs](/logs/reference/log-fields/zone/http_requests/)
+- [Logs](/logs/logpush/logpush-job/datasets/zone/http_requests/)
## Actions
diff --git a/src/content/docs/bots/reference/bot-management-variables.mdx b/src/content/docs/bots/reference/bot-management-variables.mdx
index 79e309f5d5fd85..7917a9b810a70a 100644
--- a/src/content/docs/bots/reference/bot-management-variables.mdx
+++ b/src/content/docs/bots/reference/bot-management-variables.mdx
@@ -47,7 +47,7 @@ and cf.bot_management.score lt 30
## Log fields
-Once you enable Bot Management, Cloudflare also surfaces bot information in its [HTTP requests log fields](/logs/reference/log-fields/zone/http_requests/):
+Once you enable Bot Management, Cloudflare also surfaces bot information in its [HTTP requests log fields](/logs/logpush/logpush-job/datasets/zone/http_requests/):
- BotDetectionIDs
- BotScore
diff --git a/src/content/docs/cache/advanced-configuration/cache-reserve.mdx b/src/content/docs/cache/advanced-configuration/cache-reserve.mdx
index 4f0f16b3bd520d..8010d2d5fabe3f 100644
--- a/src/content/docs/cache/advanced-configuration/cache-reserve.mdx
+++ b/src/content/docs/cache/advanced-configuration/cache-reserve.mdx
@@ -61,7 +61,7 @@ Not all assets are eligible for Cache Reserve. To be admitted into Cache Reserve
Like the standard CDN, Cache Reserve also uses the `cf-cache-status` header to indicate cache statuses like `MISS`, `HIT`, and `REVALIDATED`. Cache Reserve cache misses and hits are factored into the dashboard's cache hit ratio.
-Individual sampled requests that filled or were served by Cache Reserve are viewable via the [CacheReserveUsed](/logs/reference/log-fields/zone/http_requests/) Logpush field.
+Individual sampled requests that filled or were served by Cache Reserve are viewable via the [CacheReserveUsed](/logs/logpush/logpush-job/datasets/zone/http_requests/) Logpush field.
Cache Reserve monthly operations and storage usage are viewable in the dashboard.
diff --git a/src/content/docs/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics.mdx b/src/content/docs/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics.mdx
index 148869937bd719..6d9151d6b4429c 100644
--- a/src/content/docs/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics.mdx
+++ b/src/content/docs/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics.mdx
@@ -48,6 +48,6 @@ Build custom dashboards to share this information by specifying an individual cu
Using [filters](/logs/logpush/logpush-job/filters/), you can set sample rates (or not include logs altogether) based on filter criteria. This flexibility allows you to maintain selective logs for custom hostnames without massively increasing your log volume.
-Filtering is available for [all Cloudflare datasets](/logs/reference/log-fields/zone/).
+Filtering is available for [all Cloudflare datasets](/logs/logpush/logpush-job/datasets/zone/).
diff --git a/src/content/docs/cloudflare-one/connections/connect-networks/use-cases/ssh/ssh-infrastructure-access.mdx b/src/content/docs/cloudflare-one/connections/connect-networks/use-cases/ssh/ssh-infrastructure-access.mdx
index 28afe1cba22ffa..8bd91fb6dfda2c 100644
--- a/src/content/docs/cloudflare-one/connections/connect-networks/use-cases/ssh/ssh-infrastructure-access.mdx
+++ b/src/content/docs/cloudflare-one/connections/connect-networks/use-cases/ssh/ssh-infrastructure-access.mdx
@@ -149,7 +149,7 @@ To manually retrieve logs:
Only available on Enterprise plans.
:::
-Cloudflare allows you to send SSH command logs to storage destinations configured in [Logpush](/logs/logpush/), including third-party destinations. For a list of available data fields, refer to the [SSH logs dataset](/logs/reference/log-fields/account/ssh_logs/).
+Cloudflare allows you to send SSH command logs to storage destinations configured in [Logpush](/logs/logpush/), including third-party destinations. For a list of available data fields, refer to the [SSH logs dataset](/logs/logpush/logpush-job/datasets/account/ssh_logs/).
To set up the Logpush job, refer to [Logpush integration](/cloudflare-one/insights/logs/logpush/).
diff --git a/src/content/docs/cloudflare-one/insights/logs/gateway-logs/index.mdx b/src/content/docs/cloudflare-one/insights/logs/gateway-logs/index.mdx
index c0656b6179f421..1006e80fbccbc8 100644
--- a/src/content/docs/cloudflare-one/insights/logs/gateway-logs/index.mdx
+++ b/src/content/docs/cloudflare-one/insights/logs/gateway-logs/index.mdx
@@ -125,7 +125,7 @@ These settings will only apply to logs displayed in Zero Trust. Logpush data is
:::caution[Failed connection logs]
Gateway will only log TCP traffic with completed connections. If a connection is not complete (such as a TCP SYN with no SYN ACK), Gateway will not log this traffic in network logs.
-Gateway can log failed connections in [network session logs](/logs/reference/log-fields/account/zero_trust_network_sessions/). These logs are available for Enterprise users via [Logpush](/cloudflare-one/insights/logs/logpush/) or [GraphQL](/cloudflare-one/insights/analytics/gateway/#graphql-queries).
+Gateway can log failed connections in [network session logs](/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions/). These logs are available for Enterprise users via [Logpush](/cloudflare-one/insights/logs/logpush/) or [GraphQL](/cloudflare-one/insights/analytics/gateway/#graphql-queries).
:::
### Explanation of the fields
diff --git a/src/content/docs/cloudflare-one/insights/logs/logpush.mdx b/src/content/docs/cloudflare-one/insights/logs/logpush.mdx
index f1341b581011be..8bdffe90b42aa2 100644
--- a/src/content/docs/cloudflare-one/insights/logs/logpush.mdx
+++ b/src/content/docs/cloudflare-one/insights/logs/logpush.mdx
@@ -35,21 +35,21 @@ You can configure multiple destinations and add additional fields to your logs b
## Zero Trust datasets
-Refer to [Logpush log fields](/logs/reference/log-fields/) for a list of all available fields.
+Refer to [Logpush datasets](/logs/logpush/logpush-job/datasets/) for a list of all available fields.
| Dataset | Description |
| -------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [Access Requests](/logs/reference/log-fields/account/access_requests/) | HTTP requests to sites protected by Cloudflare Access |
-| [Audit Logs](/logs/reference/log-fields/account/audit_logs/) | Authentication events through Cloudflare Access |
-| [Browser Isolation User Actions](/logs/reference/log-fields/account/biso_user_actions/) | Data transfer actions performed by a user in the remote browser |
-| [CASB Findings](/logs/reference/log-fields/account/casb_findings/) | Security issues detected by Cloudflare CASB |
-| [Device Posture Results](/logs/reference/log-fields/account/device_posture_results/) | Device posture status from the WARP client |
-| [DLP Forensic Copies](/logs/reference/log-fields/account/dlp_forensic_copies/) | Entire HTTP requests or payloads of HTTP requests captured by [Cloudflare DLP](/cloudflare-one/policies/data-loss-prevention/dlp-policies/logging-options/) |
-| [Gateway DNS](/logs/reference/log-fields/account/gateway_dns/) | DNS queries inspected by Cloudflare Gateway |
-| [Gateway HTTP](/logs/reference/log-fields/account/gateway_http/) | HTTP requests inspected by Cloudflare Gateway |
-| [Gateway Network](/logs/reference/log-fields/account/gateway_network/) | Network packets inspected by Cloudflare Gateway |
-| [SSH Logs](/logs/reference/log-fields/account/ssh_logs/) | SSH command logs for [Access for Infrastructure targets](/cloudflare-one/connections/connect-networks/use-cases/ssh/ssh-infrastructure-access/) |
-| [Zero Trust Network Session Logs](/logs/reference/log-fields/account/zero_trust_network_sessions/) | Network session logs for traffic proxied by Cloudflare Gateway |
+| [Access Requests](/logs/logpush/logpush-job/datasets/account/access_requests/) | HTTP requests to sites protected by Cloudflare Access |
+| [Audit Logs](/logs/logpush/logpush-job/datasets/account/audit_logs/) | Authentication events through Cloudflare Access |
+| [Browser Isolation User Actions](/logs/logpush/logpush-job/datasets/account/biso_user_actions/) | Data transfer actions performed by a user in the remote browser |
+| [CASB Findings](/logs/logpush/logpush-job/datasets/account/casb_findings/) | Security issues detected by Cloudflare CASB |
+| [Device Posture Results](/logs/logpush/logpush-job/datasets/account/device_posture_results/) | Device posture status from the WARP client |
+| [DLP Forensic Copies](/logs/logpush/logpush-job/datasets/account/dlp_forensic_copies/) | Entire HTTP requests or payloads of HTTP requests captured by [Cloudflare DLP](/cloudflare-one/policies/data-loss-prevention/dlp-policies/logging-options/) |
+| [Gateway DNS](/logs/logpush/logpush-job/datasets/account/gateway_dns/) | DNS queries inspected by Cloudflare Gateway |
+| [Gateway HTTP](/logs/logpush/logpush-job/datasets/account/gateway_http/) | HTTP requests inspected by Cloudflare Gateway |
+| [Gateway Network](/logs/logpush/logpush-job/datasets/account/gateway_network/) | Network packets inspected by Cloudflare Gateway |
+| [SSH Logs](/logs/logpush/logpush-job/datasets/account/ssh_logs/) | SSH command logs for [Access for Infrastructure targets](/cloudflare-one/connections/connect-networks/use-cases/ssh/ssh-infrastructure-access/) |
+| [Zero Trust Network Session Logs](/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions/) | Network session logs for traffic proxied by Cloudflare Gateway |
## Parse DNS logs
diff --git a/src/content/docs/cloudflare-one/policies/gateway/http-policies/websocket.mdx b/src/content/docs/cloudflare-one/policies/gateway/http-policies/websocket.mdx
index e0cba0a7dd2b77..ac5f2fcb9345f3 100644
--- a/src/content/docs/cloudflare-one/policies/gateway/http-policies/websocket.mdx
+++ b/src/content/docs/cloudflare-one/policies/gateway/http-policies/websocket.mdx
@@ -5,7 +5,7 @@ sidebar:
order: 7
---
-Gateway does not inspect or log [WebSocket](https://datatracker.ietf.org/doc/html/rfc6455) traffic. Instead, Gateway will only log the HTTP details used to make the WebSocket connection, as well as [network session information](/logs/reference/log-fields/account/zero_trust_network_sessions/). To filter your WebSocket traffic, create a policy with the `101` HTTP response code.
+Gateway does not inspect or log [WebSocket](https://datatracker.ietf.org/doc/html/rfc6455) traffic. Instead, Gateway will only log the HTTP details used to make the WebSocket connection, as well as [network session information](/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions/). To filter your WebSocket traffic, create a policy with the `101` HTTP response code.
| Selector | Operator | Value | Action |
| ------------- | -------- | ------------------------- | ------ |
diff --git a/src/content/docs/data-localization/how-to/cloudflare-for-saas.mdx b/src/content/docs/data-localization/how-to/cloudflare-for-saas.mdx
index 3228c3f2aba53e..9a8df8c60f6669 100644
--- a/src/content/docs/data-localization/how-to/cloudflare-for-saas.mdx
+++ b/src/content/docs/data-localization/how-to/cloudflare-for-saas.mdx
@@ -62,6 +62,6 @@ Below you can find a breakdown of the different ways that you might configure Cl
## Customer Metadata Boundary
-Cloudflare for SaaS [Analytics](/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics/) based on [HTTP requests](/logs/reference/log-fields/zone/http_requests/) are fully supported by Customer Metadata Boundary.
+Cloudflare for SaaS [Analytics](/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics/) based on [HTTP requests](/logs/logpush/logpush-job/datasets/zone/http_requests/) are fully supported by Customer Metadata Boundary.
Refer to [Cloudflare for SaaS documentation](/cloudflare-for-platforms/cloudflare-for-saas/) for more information.
diff --git a/src/content/docs/data-localization/limitations.mdx b/src/content/docs/data-localization/limitations.mdx
index 832cccc6453154..dadd673fda9ca3 100644
--- a/src/content/docs/data-localization/limitations.mdx
+++ b/src/content/docs/data-localization/limitations.mdx
@@ -44,7 +44,7 @@ Regional Services does not apply to [Subrequests](/workers/platform/limits/#subr
There are certain limitations and caveats when using Customer Metadata Boundary.
-Specifically most of the Zone Analytics & Logs UI Tabs will be showing up as empty, when configuring Customer Metadata Boundary to EU only. It is recommended to use the UI [Security Analytics](/waf/analytics/security-analytics/) instead, or the [HTTP request](/logs/reference/log-fields/zone/http_requests/) logs via [Logpush](/logs/logpush/).
+Specifically, most of the Zone Analytics & Logs UI tabs will show up as empty when Customer Metadata Boundary is configured to EU only. It is recommended to use [Security Analytics](/waf/analytics/security-analytics/) in the dashboard instead, or the [HTTP request](/logs/logpush/logpush-job/datasets/zone/http_requests/) logs via [Logpush](/logs/logpush/).
To configure Customer Metadata Boundary to EU only, you must disable Log Retention for all zones within your account. Log Retention is a legacy feature of [Logpull](/logs/logpull/).
diff --git a/src/content/docs/data-localization/metadata-boundary/logpush-datasets.mdx b/src/content/docs/data-localization/metadata-boundary/logpush-datasets.mdx
index 3c03d944a3d2ee..26196037cf9a33 100644
--- a/src/content/docs/data-localization/metadata-boundary/logpush-datasets.mdx
+++ b/src/content/docs/data-localization/metadata-boundary/logpush-datasets.mdx
@@ -5,7 +5,7 @@ sidebar:
order: 4
---
-The table below lists the [Logpush datasets](/logs/reference/log-fields/) that support zones or accounts with Customer Metadata Boundary (CMB) enabled. The column **Respects CMB** indicates whether enabling CMB impacts the dataset (yes/no). The last two columns inform you if CMB is available with US and EU.
+The table below lists the [Logpush datasets](/logs/logpush/logpush-job/datasets/) that support zones or accounts with Customer Metadata Boundary (CMB) enabled. The column **Respects CMB** indicates whether enabling CMB impacts the dataset (yes/no). The last two columns inform you if CMB is available with US and EU.
Be aware that if you enable CMB for a dataset that does not support your region, no data will be pushed to your destination.
diff --git a/src/content/docs/ddos-protection/reference/logs.mdx b/src/content/docs/ddos-protection/reference/logs.mdx
index 0b86172ef9350d..928193dda5c9d5 100644
--- a/src/content/docs/ddos-protection/reference/logs.mdx
+++ b/src/content/docs/ddos-protection/reference/logs.mdx
@@ -13,4 +13,4 @@ import { GlossaryTooltip } from "~/components"
Retrieve HTTP events using [Cloudflare Logs](/logs/) to integrate them into your SIEM systems.
-Additionally, if you are a Magic Transit or a Spectrum customer on an Enterprise plan, you can export L3/4 traffic and DDoS attack logs using the [Network Analytics logs](/logs/reference/log-fields/account/network_analytics_logs/).
+Additionally, if you are a Magic Transit or a Spectrum customer on an Enterprise plan, you can export L3/4 traffic and DDoS attack logs using the [Network Analytics logs](/logs/logpush/logpush-job/datasets/account/network_analytics_logs/).
diff --git a/src/content/docs/dns/additional-options/analytics.mdx b/src/content/docs/dns/additional-options/analytics.mdx
index 489829610bc1d8..a6a84d264bfaf5 100644
--- a/src/content/docs/dns/additional-options/analytics.mdx
+++ b/src/content/docs/dns/additional-options/analytics.mdx
@@ -122,6 +122,6 @@ query GetTotalDNSQueryCount {
## Logs
-Logs let Enterprise customers view [detailed information](/logs/reference/log-fields/zone/dns_logs/) about individual DNS queries.
+Logs let Enterprise customers view [detailed information](/logs/logpush/logpush-job/datasets/zone/dns_logs/) about individual DNS queries.
For help setting up Logpush, refer to [Logpush](/logs/logpush/) documentation.
\ No newline at end of file
diff --git a/src/content/docs/dns/dns-firewall/analytics.mdx b/src/content/docs/dns/dns-firewall/analytics.mdx
index 5c715ff3a88072..b45573932da20d 100644
--- a/src/content/docs/dns/dns-firewall/analytics.mdx
+++ b/src/content/docs/dns/dns-firewall/analytics.mdx
@@ -28,7 +28,7 @@ You can also use the DNS Firewall API [reports endpoint](/api/resources/dns_fire
## Logs
-You can [set up Logpush](/logs/logpush/) to deliver [DNS Firewall logs](/logs/reference/log-fields/account/dns_firewall_logs/) to a storage service, SIEM, or log management provider.
+You can [set up Logpush](/logs/logpush/) to deliver [DNS Firewall logs](/logs/logpush/logpush-job/datasets/account/dns_firewall_logs/) to a storage service, SIEM, or log management provider.
### Response reasons
diff --git a/src/content/docs/dns/internal-dns/analytics.mdx b/src/content/docs/dns/internal-dns/analytics.mdx
index b3c1bdd560b50a..56d2df5e1d9ba5 100644
--- a/src/content/docs/dns/internal-dns/analytics.mdx
+++ b/src/content/docs/dns/internal-dns/analytics.mdx
@@ -20,6 +20,6 @@ The [fields](/analytics/graphql-api/getting-started/querying-basics/) added to c
## Logs
-Leverage Logpush jobs for [Gateway DNS](/logs/reference/log-fields/account/gateway_dns/#internaldnsfallbackstrategy). For help setting up Logpush, refer to [Logpush](/logs/logpush/) documentation.
+Leverage Logpush jobs for [Gateway DNS](/logs/logpush/logpush-job/datasets/account/gateway_dns/#internaldnsfallbackstrategy). For help setting up Logpush, refer to [Logpush](/logs/logpush/) documentation.
You can also set up [Logpush filters](/logs/logpush/logpush-job/filters/) to only push logs related to a specific [internal zone](/dns/internal-dns/internal-zones/) or [view](/dns/internal-dns/dns-views/) ID.
\ No newline at end of file
diff --git a/src/content/docs/dns/zone-setups/reference/domain-status.mdx b/src/content/docs/dns/zone-setups/reference/domain-status.mdx
index b34e3bf478e44c..c4a5fb25ad943f 100644
--- a/src/content/docs/dns/zone-setups/reference/domain-status.mdx
+++ b/src/content/docs/dns/zone-setups/reference/domain-status.mdx
@@ -74,7 +74,7 @@ Any pending zone with a paid plan (Pro, Business, Enterprise) will remain pendin
Make sure not to use pending zones for production traffic. Cloudflare responds to DNS queries for pending zones on the assigned Cloudflare nameserver IPs but there are associated risks, especially if you do not use [zone holds](/fundamentals/account/account-security/zone-holds/).
:::
-For Enterprise zones, if you want to adjust settings before zone activation, Logpush for [DNS logs](/logs/reference/log-fields/zone/dns_logs/) and [DNS Zone Transfer](/dns/zone-setups/zone-transfers/) configuration work as expected in pending state.
+For Enterprise zones, if you want to adjust settings before zone activation, Logpush for [DNS logs](/logs/logpush/logpush-job/datasets/zone/dns_logs/) and [DNS Zone Transfer](/dns/zone-setups/zone-transfers/) configuration work as expected in the pending state.
## Active
diff --git a/src/content/docs/log-explorer/log-search.mdx b/src/content/docs/log-explorer/log-search.mdx
index f68f9a83e38d1f..a8f55cdf3885d5 100644
--- a/src/content/docs/log-explorer/log-search.mdx
+++ b/src/content/docs/log-explorer/log-search.mdx
@@ -144,7 +144,7 @@ WHERE
### Which fields (or columns) are available for querying?
-All fields listed in [Log Fields](/logs/reference/log-fields/) for the [supported datasets](/log-explorer/manage-datasets/#supported-datasets) are viewable in Log Explorer.
+All fields listed in [Datasets](/logs/logpush/logpush-job/datasets/) for the [supported datasets](/log-explorer/manage-datasets/#supported-datasets) are viewable in Log Explorer.
### Why does my query not complete or time out?
diff --git a/src/content/docs/log-explorer/manage-datasets.mdx b/src/content/docs/log-explorer/manage-datasets.mdx
index d4164a8360a778..954d92dfffa536 100644
--- a/src/content/docs/log-explorer/manage-datasets.mdx
+++ b/src/content/docs/log-explorer/manage-datasets.mdx
@@ -13,8 +13,8 @@ Log Explorer allows you to enable or disable which datasets are available to que
Log Explorer currently supports the following datasets:
-- [HTTP requests](/logs/reference/log-fields/zone/http_requests/) (`FROM http_requests`)
-- [Firewall events](/logs/reference/log-fields/zone/firewall_events/) (`FROM firewall_events`)
+- [HTTP requests](/logs/logpush/logpush-job/datasets/zone/http_requests/) (`FROM http_requests`)
+- [Firewall events](/logs/logpush/logpush-job/datasets/zone/firewall_events/) (`FROM firewall_events`)
- [Access](/cloudflare-one/policies/access/)
- [CASB](/cloudflare-one/applications/casb/)
- [Secure Web Gateway](/cloudflare-one/policies/gateway/)
diff --git a/src/content/docs/logs/R2-log-retrieval.mdx b/src/content/docs/logs/R2-log-retrieval.mdx
index e7b8d612a259a0..5af8f87bfa6595 100644
--- a/src/content/docs/logs/R2-log-retrieval.mdx
+++ b/src/content/docs/logs/R2-log-retrieval.mdx
@@ -168,6 +168,6 @@ R2 does not currently have retention controls in place. You can query back as fa
-The retrieval API is compatible with all the datasets we support. The full list is available on the [Log fields](/logs/reference/log-fields/) section.
+The retrieval API is compatible with all the datasets we support. The full list is available in the [Datasets](/logs/logpush/logpush-job/datasets/) section.
diff --git a/src/content/docs/logs/faq/general-faq.mdx b/src/content/docs/logs/faq/general-faq.mdx
index 883b855e642665..dd1e3b93d0727e 100644
--- a/src/content/docs/logs/faq/general-faq.mdx
+++ b/src/content/docs/logs/faq/general-faq.mdx
@@ -39,7 +39,7 @@ Not at this time. Talk to your Cloudflare account team or [Cloudflare Support](/
## Is it possible to track cache purge requests in the logs?
-Only Purge Everything requests are logged in the [Audit Log](/logs/reference/log-fields/account/audit_logs/).
+Only Purge Everything requests are logged in the [Audit Log](/logs/logpush/logpush-job/datasets/account/audit_logs/).
diff --git a/src/content/docs/logs/instant-logs.mdx b/src/content/docs/logs/instant-logs.mdx
index b818e48e016b62..9f00273cb9d44a 100644
--- a/src/content/docs/logs/instant-logs.mdx
+++ b/src/content/docs/logs/instant-logs.mdx
@@ -23,7 +23,7 @@ Instant Logs allows Cloudflare customers to access a live stream of the traffic
4. (optional) Select **Add filter** to narrow down the events to be shown.
-Fields supported in our [HTTP requests dataset](/logs/reference/log-fields/zone/http_requests/) can be used when you add filters. Some fields with additional subscriptions required are not supported in the dashboard, you will need to use CLI instead.
+Fields supported in our [HTTP requests dataset](/logs/logpush/logpush-job/datasets/zone/http_requests/) can be used when you add filters. Some fields that require additional subscriptions are not supported in the dashboard; you will need to use the CLI instead.
Once a filter is selected and the stream has started, only log lines that match the filter criteria will appear. Filters are not applied retroactively to logs already showing in the dashboard.
@@ -33,7 +33,7 @@ Once a filter is selected and the stream has started, only log lines that match
Create a session by sending a `POST` request to the Instant Logs job endpoint with the following parameters:
-- **Fields** - List any field available in the [HTTP requests dataset](/logs/reference/log-fields/zone/http_requests/).
+- **Fields** - List any field available in the [HTTP requests dataset](/logs/logpush/logpush-job/datasets/zone/http_requests/).
- **Sample** - The sample parameter is the sample rate of the records set by the client: `"sample": 1` is 100% of records `"sample": 10` is 10% and so on.
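The **Fields** and **Sample** parameters above can be sketched as a request body. The comma-separated `fields` string and the `filter` key are assumptions based on common Logpush conventions; check the Instant Logs API reference for the exact endpoint and schema:

```python
import json

# Hypothetical Instant Logs session body; key names ("fields", "sample",
# "filter") and the comma-separated field string are assumptions, not a
# verified schema. Field names come from the HTTP requests dataset.
body = {
    "fields": "ClientIP,ClientRequestHost,EdgeResponseStatus,RayID",
    "sample": 10,   # 1 = 100% of records, 10 = 10%, and so on
    "filter": "",   # empty string = no filtering
}
print(json.dumps(body))
```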
diff --git a/src/content/docs/logs/logpull/requesting-logs.mdx b/src/content/docs/logs/logpull/requesting-logs.mdx
index 0505bba85e5b7c..cb1566eb89dd91 100644
--- a/src/content/docs/logs/logpull/requesting-logs.mdx
+++ b/src/content/docs/logs/logpull/requesting-logs.mdx
@@ -118,4 +118,4 @@ curl "https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/received?start=2
Refer to [Download jq](https://jqlang.github.io/jq/download/) for more information on obtaining and installing `jq`.
-Refer to [HTTP request fields](/logs/reference/log-fields/zone/http_requests) for the currently available fields.
+Refer to [HTTP request fields](/logs/logpush/logpush-job/datasets/zone/http_requests) for the currently available fields.
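Logpull responses are NDJSON, one record per line, which is why `jq` is suggested above. A minimal sketch of the same filtering in Python, using `EdgeResponseStatus` from the HTTP requests dataset (the sample records are made up):

```python
import json

# Illustrative: filter Logpull NDJSON output by a dataset field.
# The two sample lines below are fabricated for the example.
ndjson = (
    '{"ClientIP":"203.0.113.7","EdgeResponseStatus":503}\n'
    '{"ClientIP":"198.51.100.2","EdgeResponseStatus":200}\n'
)
records = [json.loads(line) for line in ndjson.splitlines() if line]
errors = [r for r in records if r["EdgeResponseStatus"] >= 500]
print(errors)
```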
diff --git a/src/content/docs/logs/logpush/examples/example-logpush-curl.mdx b/src/content/docs/logs/logpush/examples/example-logpush-curl.mdx
index 74195744202c8f..bf62b1073f1e67 100644
--- a/src/content/docs/logs/logpush/examples/example-logpush-curl.mdx
+++ b/src/content/docs/logs/logpush/examples/example-logpush-curl.mdx
@@ -94,7 +94,7 @@ When using Sumo Logic, you may find it helpful to have [Live Tail](https://help.
* **name** (optional) - We suggest using your domain name as the job name; the name cannot be changed after the job is created.
* **destination\_conf** - Refer to [Destination](/logs/logpush/logpush-job/api-configuration/#destination) for details.
-* **dataset** - The category of logs you want to receive. Refer to [Log fields](/logs/reference/log-fields/) for the full list of supported datasets; this parameter cannot be changed after the job is created.
+* **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets; this parameter cannot be changed after the job is created.
* **output\_options** (optional) - Refer to [Log Output Options](/logs/logpush/logpush-job/log-output-options/).
* Typically includes the desired fields and timestamp format.
* Set the timestamp format to `RFC 3339` (`&timestamps=rfc3339`) for:
diff --git a/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx b/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx
index a7665f0782564c..8f711a58bc2e04 100644
--- a/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx
@@ -10,7 +10,7 @@ import { APIRequest } from "~/components";
## Endpoints
-The table below summarizes the job operations available for both Logpush and Edge Log Delivery jobs. Make sure that Account-scoped datasets use `/accounts/{account_id}` and Zone-scoped datasets use `/zone/{zone_id}`. For more information, refer to the [Log fields](/logs/reference/log-fields/) page.
+The table below summarizes the job operations available for both Logpush and Edge Log Delivery jobs. Make sure that account-scoped datasets use `/accounts/{account_id}` and zone-scoped datasets use `/zones/{zone_id}`. For more information, refer to the [Datasets](/logs/logpush/logpush-job/datasets/) page.
You can locate `{zone_id}` and `{account_id}` arguments based on the [Find zone and account IDs](/fundamentals/account/find-account-and-zone-ids/) page.
The `{job_id}` argument is numeric, like 123456.
@@ -210,7 +210,7 @@ Logpull\_options has been replaced with Custom Log Formatting output\_options. P
If you are still using logpull\_options, here are the options that you can customize:
-1. **Fields** (optional): Refer to [Log fields](/logs/reference/log-fields/) for the currently available fields. The list of fields is also accessible directly from the API: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields`. Default fields: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default`.
+1. **Fields** (optional): Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the currently available fields. The list of fields is also accessible directly from the API: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields`. Default fields: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default`.
2. **Timestamp format** (optional): The format in which timestamp fields will be returned. Value options: `unixnano` (default), `unix`, `rfc3339`.
3. **Redaction for CVE-2021-44228** (optional): This option will replace every occurrence of `${` with `x{`. To enable it, set `CVE-2021-44228=true`.
diff --git a/src/content/docs/logs/logpush/logpush-job/custom-fields.mdx b/src/content/docs/logs/logpush/logpush-job/custom-fields.mdx
index 0786e402a6e861..45065d2e9e428e 100644
--- a/src/content/docs/logs/logpush/logpush-job/custom-fields.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/custom-fields.mdx
@@ -22,7 +22,7 @@ This default behavior can be changed. You can configure either request or respon
Custom fields can be enabled via API or the Cloudflare dashboard.
:::note
-Custom fields are only available for the [HTTP requests dataset](/logs/reference/log-fields/zone/http_requests/).
+Custom fields are only available for the [HTTP requests dataset](/logs/logpush/logpush-job/datasets/zone/http_requests/).
:::
## Enable custom rules via API
diff --git a/src/content/docs/logs/reference/log-fields/account/access_requests.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/access_requests.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/access_requests.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/access_requests.md
diff --git a/src/content/docs/logs/reference/log-fields/account/audit_logs.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/audit_logs.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/audit_logs.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/audit_logs.md
diff --git a/src/content/docs/logs/reference/log-fields/account/biso_user_actions.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/biso_user_actions.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/biso_user_actions.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/biso_user_actions.md
diff --git a/src/content/docs/logs/reference/log-fields/account/casb_findings.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/casb_findings.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/casb_findings.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/casb_findings.md
diff --git a/src/content/docs/logs/reference/log-fields/account/device_posture_results.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/device_posture_results.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/device_posture_results.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/device_posture_results.md
diff --git a/src/content/docs/logs/reference/log-fields/account/dlp_forensic_copies.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/dlp_forensic_copies.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/dlp_forensic_copies.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/dlp_forensic_copies.md
diff --git a/src/content/docs/logs/reference/log-fields/account/dns_firewall_logs.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/dns_firewall_logs.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/dns_firewall_logs.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/dns_firewall_logs.md
diff --git a/src/content/docs/logs/reference/log-fields/account/email_security_alerts.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/email_security_alerts.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/email_security_alerts.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/email_security_alerts.md
diff --git a/src/content/docs/logs/reference/log-fields/account/gateway_dns.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/gateway_dns.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/gateway_dns.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/gateway_dns.md
diff --git a/src/content/docs/logs/reference/log-fields/account/gateway_http.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/gateway_http.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/gateway_http.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/gateway_http.md
diff --git a/src/content/docs/logs/reference/log-fields/account/gateway_network.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/gateway_network.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/gateway_network.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/gateway_network.md
diff --git a/src/content/docs/logs/reference/log-fields/account/index.mdx b/src/content/docs/logs/logpush/logpush-job/datasets/account/index.mdx
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/index.mdx
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/index.mdx
diff --git a/src/content/docs/logs/reference/log-fields/account/magic_ids_detections.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/magic_ids_detections.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/magic_ids_detections.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/magic_ids_detections.md
diff --git a/src/content/docs/logs/reference/log-fields/account/network_analytics_logs.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/network_analytics_logs.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/network_analytics_logs.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/network_analytics_logs.md
diff --git a/src/content/docs/logs/reference/log-fields/account/sinkhole_http_logs.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/sinkhole_http_logs.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/sinkhole_http_logs.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/sinkhole_http_logs.md
diff --git a/src/content/docs/logs/reference/log-fields/account/ssh_logs.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/ssh_logs.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/ssh_logs.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/ssh_logs.md
diff --git a/src/content/docs/logs/reference/log-fields/account/workers_trace_events.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/workers_trace_events.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/workers_trace_events.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/workers_trace_events.md
diff --git a/src/content/docs/logs/reference/log-fields/account/zero_trust_network_sessions.md b/src/content/docs/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/account/zero_trust_network_sessions.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions.md
diff --git a/src/content/docs/logs/reference/log-fields/index.mdx b/src/content/docs/logs/logpush/logpush-job/datasets/index.mdx
similarity index 89%
rename from src/content/docs/logs/reference/log-fields/index.mdx
rename to src/content/docs/logs/logpush/logpush-job/datasets/index.mdx
index 7a1b0e5ca04728..da1972b8587f34 100644
--- a/src/content/docs/logs/reference/log-fields/index.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/datasets/index.mdx
@@ -1,8 +1,8 @@
---
-title: Log fields
+title: Datasets
pcx_content_type: navigation
sidebar:
- order: 51
+ order: 4
---
@@ -10,8 +10,8 @@ sidebar:
The datasets below describe the fields available by log category:
-* [Zone-scoped datasets](/logs/reference/log-fields/zone/)
-* [Account-scoped datasets](/logs/reference/log-fields/account/)
+* [Zone-scoped datasets](/logs/logpush/logpush-job/datasets/zone/)
+* [Account-scoped datasets](/logs/logpush/logpush-job/datasets/account/)
## API
diff --git a/src/content/docs/logs/reference/log-fields/zone/dns_logs.md b/src/content/docs/logs/logpush/logpush-job/datasets/zone/dns_logs.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/dns_logs.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/dns_logs.md
diff --git a/src/content/docs/logs/reference/log-fields/zone/firewall_events.md b/src/content/docs/logs/logpush/logpush-job/datasets/zone/firewall_events.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/firewall_events.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/firewall_events.md
diff --git a/src/content/docs/logs/reference/log-fields/zone/http_requests.md b/src/content/docs/logs/logpush/logpush-job/datasets/zone/http_requests.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/http_requests.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/http_requests.md
diff --git a/src/content/docs/logs/reference/log-fields/zone/index.mdx b/src/content/docs/logs/logpush/logpush-job/datasets/zone/index.mdx
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/index.mdx
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/index.mdx
diff --git a/src/content/docs/logs/reference/log-fields/zone/nel_reports.md b/src/content/docs/logs/logpush/logpush-job/datasets/zone/nel_reports.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/nel_reports.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/nel_reports.md
diff --git a/src/content/docs/logs/reference/log-fields/zone/page_shield_events.md b/src/content/docs/logs/logpush/logpush-job/datasets/zone/page_shield_events.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/page_shield_events.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/page_shield_events.md
diff --git a/src/content/docs/logs/reference/log-fields/zone/spectrum_events.md b/src/content/docs/logs/logpush/logpush-job/datasets/zone/spectrum_events.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/spectrum_events.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/spectrum_events.md
diff --git a/src/content/docs/logs/reference/log-fields/zone/zaraz_events.md b/src/content/docs/logs/logpush/logpush-job/datasets/zone/zaraz_events.md
similarity index 100%
rename from src/content/docs/logs/reference/log-fields/zone/zaraz_events.md
rename to src/content/docs/logs/logpush/logpush-job/datasets/zone/zaraz_events.md
diff --git a/src/content/docs/logs/logpush/logpush-job/enable-destinations/datadog.mdx b/src/content/docs/logs/logpush/logpush-job/enable-destinations/datadog.mdx
index 3ee6d9768d817b..043ecdb22179e6 100644
--- a/src/content/docs/logs/logpush/logpush-job/enable-destinations/datadog.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/enable-destinations/datadog.mdx
@@ -95,7 +95,7 @@ To create a job, make a `POST` request to the Logpush jobs endpoint with the fol
"datadog://?header_DD-API-KEY=&ddsource=cloudflare&service=&host=&ddtags="
```
-* **dataset** - The category of logs you want to receive. Refer to [Log fields](/logs/reference/log-fields/) for the full list of supported datasets.
+* **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](/logs/logpush/logpush-job/log-output-options/).
Example request using cURL:
diff --git a/src/content/docs/logs/logpush/logpush-job/enable-destinations/ibm-cloud-logs.mdx b/src/content/docs/logs/logpush/logpush-job/enable-destinations/ibm-cloud-logs.mdx
index 838d697ef3563d..0927a60cdef150 100644
--- a/src/content/docs/logs/logpush/logpush-job/enable-destinations/ibm-cloud-logs.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/enable-destinations/ibm-cloud-logs.mdx
@@ -41,7 +41,7 @@ To create a job, make a `POST` request to the Logpush jobs endpoint with the fol
- **max_upload_records** (optional) - The maximum number of log lines per batch. This must be at least 1,000 lines or more. Note that there is no way to specify a minimum number of log lines per batch. This means that log files may contain many fewer lines than specified.
- **max_upload_bytes** (optional) - The maximum uncompressed file size for a batch of logs. We recommend a default value of 2 MB per upload based on IBM's limits, which our system will enforce for this destination. Since minimum file sizes cannot be set, log files may be smaller than the specified batch size.
-- **dataset** - The category of logs you want to receive. Refer to [Log fields](/logs/reference/log-fields/) for the full list of supported datasets.
+- **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
Example request using cURL:
diff --git a/src/content/docs/logs/logpush/logpush-job/enable-destinations/new-relic.mdx b/src/content/docs/logs/logpush/logpush-job/enable-destinations/new-relic.mdx
index 2eb0f9c4904346..3f47f264688b92 100644
--- a/src/content/docs/logs/logpush/logpush-job/enable-destinations/new-relic.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/enable-destinations/new-relic.mdx
@@ -81,7 +81,7 @@ To create a job, make a `POST` request to the Logpush jobs endpoint with the fol
* **max\_upload\_bytes** (optional) - The maximum uncompressed file size of a batch of logs. This must be at least 5 MB. Note that there is no way to set a minimum file size. This means that log files may be much smaller than this batch size. Nevertheless, it is recommended to set this parameter to 5,000,000.
-* **dataset** - The category of logs you want to receive. Refer to [Log fields](/logs/reference/log-fields/) for the full list of supported datasets.
+* **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
Example request using cURL:
diff --git a/src/content/docs/logs/logpush/logpush-job/enable-destinations/r2.mdx b/src/content/docs/logs/logpush/logpush-job/enable-destinations/r2.mdx
index 8f9f33563e2320..0e5111394fc521 100644
--- a/src/content/docs/logs/logpush/logpush-job/enable-destinations/r2.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/enable-destinations/r2.mdx
@@ -18,7 +18,7 @@ If you want to use the automatic setup for your logpush job:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/login).
-2. Select the Enterprise account or domain (also known as zone) you want to use with Logpush. Depending on your choice, you have access to [account-scoped datasets](/logs/reference/log-fields/account/) and [zone-scoped datasets](/logs/reference/log-fields/zone/), respectively.
+2. Select the Enterprise account or domain (also known as zone) you want to use with Logpush. Depending on your choice, you have access to [account-scoped datasets](/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Go to **Analytics & Logs** > **Logpush**.
@@ -98,7 +98,7 @@ We recommend adding the `{DATE}` parameter in the `destination_conf` to separate
r2:///{DATE}?account-id=&access-key-id=&secret-access-key=
```
-* **dataset** - The category of logs you want to receive. Refer to [Log fields](/logs/reference/log-fields/) for the full list of supported datasets.
+* **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [API configuration options](/logs/logpush/logpush-job/api-configuration/#options).
Example request using cURL:
diff --git a/src/content/docs/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints.mdx b/src/content/docs/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints.mdx
index 59f294b8ad8ce1..02f7f91c6a1125 100644
--- a/src/content/docs/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints.mdx
@@ -91,7 +91,7 @@ To create a job, make a `POST` request to the Logpush jobs endpoint with the fol
:::
-* **dataset** - The category of logs you want to receive. Refer to [Log fields](/logs/reference/log-fields/) for the full list of supported datasets.
+* **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
* **output\_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](/logs/logpush/logpush-job/log-output-options/).
Example request using cURL:
diff --git a/src/content/docs/logs/logpush/logpush-job/enable-destinations/splunk.mdx b/src/content/docs/logs/logpush/logpush-job/enable-destinations/splunk.mdx
index f2529d16ff4b6e..dd36010c7918cb 100644
--- a/src/content/docs/logs/logpush/logpush-job/enable-destinations/splunk.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/enable-destinations/splunk.mdx
@@ -83,7 +83,7 @@ Cloudflare highly recommends setting this value to fals
"splunk://?channel=&insecure-skip-verify=&sourcetype=&header_Authorization="
```
-- **dataset** - The category of logs you want to receive. Refer to [Log fields](/logs/reference/log-fields/) for the full list of supported datasets.
+- **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
- **output_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](/logs/logpush/logpush-job/log-output-options/). For timestamp, Cloudflare recommends using `timestamps=rfc3339`.
diff --git a/src/content/docs/logs/logpush/logpush-job/filters.mdx b/src/content/docs/logs/logpush/logpush-job/filters.mdx
index 7674411d628b18..33acfb260fe8f4 100644
--- a/src/content/docs/logs/logpush/logpush-job/filters.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/filters.mdx
@@ -11,7 +11,7 @@ import { Render, APIRequest } from "~/components"
The following table represents the comparison operators that are supported and example values. Filters are added as escaped JSON strings formatted as `{"key":"","operator":"","value":""}`.
-* Refer to the [Log fields](/logs/reference/log-fields/) page for a list of fields related to each dataset.
+* Refer to the [Datasets](/logs/logpush/logpush-job/datasets/) page for a list of fields related to each dataset.
* Comparison operators define how values must relate to fields in the log line for an expression to return true.
@@ -95,6 +95,6 @@ To set filters through the dashboard:
3. Select **Add Logpush job**. A modal window will open.
4. Select the dataset you want to push to a storage service.
5. Below **Select data fields**, in the **Filter** section, you can set up your filters.
-6. You need to select a [Field](/logs/reference/log-fields/), an [Operator](/logs/logpush/logpush-job/filters/#logical-operators), and a **Value**.
+6. You need to select a [dataset field](/logs/logpush/logpush-job/datasets/), an [Operator](/logs/logpush/logpush-job/filters/#logical-operators), and a **Value**.
7. You can connect more filters using `AND` and `OR` logical operators.
8. Select **Next** to continue the setting up of your Logpush job.
diff --git a/src/content/docs/logs/reference/change-notices/2023-02-01-security-fields-updates.mdx b/src/content/docs/logs/reference/change-notices/2023-02-01-security-fields-updates.mdx
index 7843d81e65bb26..b28b92e38b36c4 100644
--- a/src/content/docs/logs/reference/change-notices/2023-02-01-security-fields-updates.mdx
+++ b/src/content/docs/logs/reference/change-notices/2023-02-01-security-fields-updates.mdx
@@ -10,8 +10,8 @@ import { GlossaryTooltip } from "~/components"
Cloudflare will deploy some updates to security-related fields in Cloudflare Logs. These updates will affect the following datasets:
-* [HTTP Requests](/logs/reference/log-fields/zone/http_requests/)
-* [Firewall Events](/logs/reference/log-fields/zone/firewall_events/)
+* [HTTP Requests](/logs/logpush/logpush-job/datasets/zone/http_requests/)
+* [Firewall Events](/logs/logpush/logpush-job/datasets/zone/firewall_events/)
## Timeline
@@ -63,7 +63,7 @@ For more information on these actions, refer to the [Actions](/ruleset-engine/ru
## HTTP Requests dataset changes
-The following fields will be renamed in the [HTTP Requests](/logs/reference/log-fields/zone/http_requests/) dataset according to the two-phase strategy outlined in the [timeline](#timeline):
+The following fields will be renamed in the [HTTP Requests](/logs/logpush/logpush-job/datasets/zone/http_requests/) dataset according to the two-phase strategy outlined in the [timeline](#timeline):
@@ -93,7 +93,7 @@ The following fields are now deprecated and they will be removed from the HTTP R
## Firewall Events dataset changes
-The following fields will be added to the [Firewall Events](/logs/reference/log-fields/zone/firewall_events/) dataset:
+The following fields will be added to the [Firewall Events](/logs/logpush/logpush-job/datasets/zone/firewall_events/) dataset:
diff --git a/src/content/docs/magic-firewall/how-to/use-logpush-with-ids.mdx b/src/content/docs/magic-firewall/how-to/use-logpush-with-ids.mdx
index c56756c839cc5f..2cd1f4c4c1bb96 100644
--- a/src/content/docs/magic-firewall/how-to/use-logpush-with-ids.mdx
+++ b/src/content/docs/magic-firewall/how-to/use-logpush-with-ids.mdx
@@ -14,7 +14,7 @@ You can use Logpush with Magic Firewall IDS to log detected risks:
* Magic IDS is an account-scoped dataset. This means the string `/zone/` in the Cloudflare API URLs in the tutorial should be replaced with `/account/`.
-* Consult the [Magic IDS Detection fields doc](/logs/reference/log-fields/account/magic_ids_detections/) to know what fields you want configured for the job.
+* Consult the [Magic IDS Detection fields documentation](/logs/logpush/logpush-job/datasets/account/magic_ids_detections/) to determine which fields you want configured for the job.
* When creating the Logpush job, the dataset field should equal `magic_ids_detections`.
diff --git a/src/content/docs/network-error-logging/how-to.mdx b/src/content/docs/network-error-logging/how-to.mdx
index f1bb691d8a3719..7aea5f6beae82a 100644
--- a/src/content/docs/network-error-logging/how-to.mdx
+++ b/src/content/docs/network-error-logging/how-to.mdx
@@ -24,4 +24,4 @@ Click a tab under **Reachability summary** to view specific information related
Under **Reachability by data center**, click a location under Data Centers to filter reachability by a specific location.
-To view the log fields available for NEL, refer to [NEL reports](/logs/reference/log-fields/zone/nel_reports/).
+To view the log fields available for NEL, refer to [NEL reports](/logs/logpush/logpush-job/datasets/zone/nel_reports/).
diff --git a/src/content/docs/network/websockets.mdx b/src/content/docs/network/websockets.mdx
index 4aa3368ca0880b..dab2d60f0e90b5 100644
--- a/src/content/docs/network/websockets.mdx
+++ b/src/content/docs/network/websockets.mdx
@@ -56,7 +56,7 @@ Cloudflare measures a single WebSocket connection in the following way:
- **Bandwidth**: Cloudflare measures data transfer sent from Cloudflare to the client. This typically means that messages from the WebSocket server behind Cloudflare to the WebSocket client are counted towards bandwidth usage.
-Once a WebSocket connection is closed, you can view your aggregated WebSocket usage through [Traffic Analytics](/analytics/account-and-zone-analytics/zone-analytics/#traffic), the [GraphQL Analytics API](/analytics/graphql-api/), and [HTTP requests logs](/logs/reference/log-fields/zone/http_requests/).
+Once a WebSocket connection is closed, you can view your aggregated WebSocket usage through [Traffic Analytics](/analytics/account-and-zone-analytics/zone-analytics/#traffic), the [GraphQL Analytics API](/analytics/graphql-api/), and [HTTP requests logs](/logs/logpush/logpush-job/datasets/zone/http_requests/).
## Technical note
@@ -72,4 +72,4 @@ When Cloudflare releases new code to its global network, we may restart servers,
Investigating issues with Websocket can be facilitated with client tools like [wscat](https://github.com/websockets/wscat).
Being able to reproduce an issue on a single URL with a minimalistic tool helps narrowing down the issue.
-The `EdgeStartTimestamp` and `EdgeStopTimestamp` fields in [HTTP requests logs](/logs/reference/log-fields/zone/http_requests/) represent the duration of the WebSocket connection (they do not represent the initial HTTP connection).
+The `EdgeStartTimestamp` and `EdgeStopTimestamp` fields in [HTTP requests logs](/logs/logpush/logpush-job/datasets/zone/http_requests/) represent the duration of the WebSocket connection (they do not represent the initial HTTP connection).
diff --git a/src/content/docs/page-shield/policies/violations.mdx b/src/content/docs/page-shield/policies/violations.mdx
index 53c8263649cb84..a07becbecf864d 100644
--- a/src/content/docs/page-shield/policies/violations.mdx
+++ b/src/content/docs/page-shield/policies/violations.mdx
@@ -130,6 +130,6 @@ https://api.cloudflare.com/client/v4/graphql \
[Cloudflare Logpush](/logs/logpush/) supports pushing logs to storage services, SIEM systems, and log management providers.
-Information about policy violations is available in the [`page_shield_events` dataset](/logs/reference/log-fields/zone/page_shield_events/).
+Information about policy violations is available in the [`page_shield_events` dataset](/logs/logpush/logpush-job/datasets/zone/page_shield_events/).
For more information on configuring Logpush jobs, refer to [Logpush](/logs/logpush/) documentation.
diff --git a/src/content/docs/r2/platform/audit-logs.mdx b/src/content/docs/r2/platform/audit-logs.mdx
index 2a8700fe2ab896..28e056328538ae 100644
--- a/src/content/docs/r2/platform/audit-logs.mdx
+++ b/src/content/docs/r2/platform/audit-logs.mdx
@@ -113,7 +113,7 @@ The following configuration actions are logged:
:::note
-Logs for data access operations, such as `GetObject` and `PutObject`, are not included in audit logs. To log HTTP requests made to public R2 buckets, use the [HTTP requests](/logs/reference/log-fields/zone/http_requests/) Logpush dataset.
+Logs for data access operations, such as `GetObject` and `PutObject`, are not included in audit logs. To log HTTP requests made to public R2 buckets, use the [HTTP requests](/logs/logpush/logpush-job/datasets/zone/http_requests/) Logpush dataset.
:::
diff --git a/src/content/docs/reference-architecture/design-guides/network-vpn-migration.mdx b/src/content/docs/reference-architecture/design-guides/network-vpn-migration.mdx
index 0e96b99379e726..19fd9c70684d1f 100644
--- a/src/content/docs/reference-architecture/design-guides/network-vpn-migration.mdx
+++ b/src/content/docs/reference-architecture/design-guides/network-vpn-migration.mdx
@@ -166,7 +166,7 @@ As steps are taken in this first phase and the first users will start accessing
Cloudflare provides visibility at different levels, available through the dashboard or exported using [Logpush](/logs/logpush/). For traffic flowing over Magic WAN IPsec tunnels, [Network Analytics](/analytics/network-analytics/) can be found in the dashboard and through the [GraphQL API](/analytics/graphql-api/). This will show sampled statistics of the traffic and can be used for trend and traffic flow analysis.
-Next are more detailed [network session logs](/logs/reference/log-fields/account/zero_trust_network_sessions/) that collect information on all network connections/sessions going through Cloudflare's secure web gateway, including unsuccessful requests. These are followed by [Gateway activity logs](/cloudflare-one/insights/logs/gateway-logs/), which contain information about triggered policies as traffic gets inspected by the gateway engine. A combination of these logs will enable full visibility into all network flows, including users' identities. Using this information, network and security teams can run their analysis on what type of traffic flows where, and use that to plan for the next steps.
+Next are more detailed [network session logs](/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions/) that collect information on all network connections/sessions going through Cloudflare's secure web gateway, including unsuccessful requests. These are followed by [Gateway activity logs](/cloudflare-one/insights/logs/gateway-logs/), which contain information about triggered policies as traffic gets inspected by the gateway engine. A combination of these logs will enable full visibility into all network flows, including users' identities. Using this information, network and security teams can run their analysis on what type of traffic flows where, and use that to plan for the next steps.
Finally, for real-time alerting, [Cloudflare Notifications](/notifications/get-started/) can be configured for events such as IPsec and `cloudflared` tunnel health, as well as Cloudflare infrastructure status in general.
diff --git a/src/content/docs/spectrum/reference/logs.mdx b/src/content/docs/spectrum/reference/logs.mdx
index 36bf07d2cc7d7b..9e6a16ececdd0e 100644
--- a/src/content/docs/spectrum/reference/logs.mdx
+++ b/src/content/docs/spectrum/reference/logs.mdx
@@ -11,7 +11,7 @@ For each connection, Spectrum logs a connect event and either a disconnect or er
## Configure Logpush
-Spectrum [log events](/logs/reference/log-fields/) can be configured through the dashboard or API, depending on your preferred [destination](/logs/logpush/logpush-job/enable-destinations/).
+Spectrum [log events](/logs/logpush/logpush-job/datasets/) can be configured through the dashboard or API, depending on your preferred [destination](/logs/logpush/logpush-job/enable-destinations/).
## Status Codes
diff --git a/src/content/docs/waf/managed-rules/payload-logging/decrypt-in-logs.mdx b/src/content/docs/waf/managed-rules/payload-logging/decrypt-in-logs.mdx
index 021f196fae740a..5c5866f93e08c7 100644
--- a/src/content/docs/waf/managed-rules/payload-logging/decrypt-in-logs.mdx
+++ b/src/content/docs/waf/managed-rules/payload-logging/decrypt-in-logs.mdx
@@ -7,7 +7,7 @@ sidebar:
import { GlossaryTooltip, RuleID } from "~/components";
-You can include the encrypted matched payload in your [Logpush](/logs/logpush/) jobs by adding the **General** > [**Metadata**](/logs/reference/log-fields/zone/firewall_events/#metadata) field from the Firewall Events dataset to your job.
+You can include the encrypted matched payload in your [Logpush](/logs/logpush/) jobs by adding the **General** > [**Metadata**](/logs/logpush/logpush-job/datasets/zone/firewall_events/#metadata) field from the Firewall Events dataset to your job.
The payload, in its encrypted form, is available in the [`encrypted_matched_data` property](#structure-of-encrypted_matched_data-property-in-logpush) of the `Metadata` field.
diff --git a/src/content/docs/workers/observability/errors.mdx b/src/content/docs/workers/observability/errors.mdx
index 479170770c5817..e1b22a285c406c 100644
--- a/src/content/docs/workers/observability/errors.mdx
+++ b/src/content/docs/workers/observability/errors.mdx
@@ -258,7 +258,7 @@ The **Client disconnected by type** chart shows the number of client disconnect
To find all your errors in Workers Logs, you can use the following filter: `$metadata.error EXISTS`. This will show all the logs that have an error associated with them. You can also filter by `$workers.outcome` to find the requests that resulted in an error. For example, you can filter by `$workers.outcome = "exception"` to find all the requests that resulted in an uncaught exception.
-All the possible outcome values can be found in the [Workers Trace Event](/logs/reference/log-fields/account/workers_trace_events/#outcome) reference.
+All the possible outcome values can be found in the [Workers Trace Event](/logs/logpush/logpush-job/datasets/account/workers_trace_events/#outcome) reference.
## Debug exceptions from `Wrangler`
diff --git a/src/content/docs/workers/observability/logs/logpush.mdx b/src/content/docs/workers/observability/logs/logpush.mdx
index 5c032da14332d7..9f42eab90b4681 100644
--- a/src/content/docs/workers/observability/logs/logpush.mdx
+++ b/src/content/docs/workers/observability/logs/logpush.mdx
@@ -11,7 +11,7 @@ sidebar:
import { WranglerConfig } from "~/components";
-[Cloudflare Logpush](/logs/logpush/) supports the ability to send [Workers Trace Event Logs](/logs/reference/log-fields/account/workers_trace_events/) to a [supported destination](/logs/logpush/logpush-job/enable-destinations/). Worker’s Trace Events Logpush includes metadata about requests and responses, unstructured `console.log()` messages and any uncaught exceptions. This product is available on the Workers Paid plan. For pricing information, refer to [Pricing](/workers/platform/pricing/#workers-trace-events-logpush).
+[Cloudflare Logpush](/logs/logpush/) supports the ability to send [Workers Trace Event Logs](/logs/logpush/logpush-job/datasets/account/workers_trace_events/) to a [supported destination](/logs/logpush/logpush-job/enable-destinations/). Workers Trace Events Logpush includes metadata about requests and responses, unstructured `console.log()` messages and any uncaught exceptions. This product is available on the Workers Paid plan. For pricing information, refer to [Pricing](/workers/platform/pricing/#workers-trace-events-logpush).
:::caution
diff --git a/src/content/docs/workers/platform/limits.mdx b/src/content/docs/workers/platform/limits.mdx
index 9784eb0ad22a5e..4c0a7282ee5f6f 100644
--- a/src/content/docs/workers/platform/limits.mdx
+++ b/src/content/docs/workers/platform/limits.mdx
@@ -89,7 +89,7 @@ doing additional work, this time spent waiting **is not** counted towards CPU ti
To understand your CPU usage:
- CPU time and Wall time are surfaced in the [invocation log](/workers/observability/logs/workers-logs/#invocation-logs) within Workers Logs.
-- For Tail Workers, CPU time and Wall time are surfaced at the top level of the [Workers Trace Events object](/logs/reference/log-fields/account/workers_trace_events/).
+- For Tail Workers, CPU time and Wall time are surfaced at the top level of the [Workers Trace Events object](/logs/logpush/logpush-job/datasets/account/workers_trace_events/).
- DevTools locally can help identify CPU intensive portions of your code. See the [CPU profiling with DevTools documentation](/workers/observability/dev-tools/cpu-usage/).
You can also set a [custom limit](/workers/wrangler/configuration/#limits) on the amount of CPU time that can be used during each invocation of your Worker.
diff --git a/src/content/docs/workers/runtime-apis/console.mdx b/src/content/docs/workers/runtime-apis/console.mdx
index ebc46aaa4b374b..b892d31e940a87 100644
--- a/src/content/docs/workers/runtime-apis/console.mdx
+++ b/src/content/docs/workers/runtime-apis/console.mdx
@@ -18,7 +18,7 @@ All methods noted as "✅ supported" have the following behavior:
* They will be written to the console in local dev (`npx wrangler@latest dev`)
* They will appear in live logs, when tailing logs in the dashboard or running [`wrangler tail`](https://developers.cloudflare.com/workers/observability/log-from-workers/#use-wrangler-tail)
-* They will create entries in the `logs` field of [Tail Worker](https://developers.cloudflare.com/workers/observability/tail-workers/) events and [Workers Trace Events](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/), which can be pushed to a destination of your choice via [Logpush](https://developers.cloudflare.com/workers/observability/logpush/).
+* They will create entries in the `logs` field of [Tail Worker](https://developers.cloudflare.com/workers/observability/tail-workers/) events and [Workers Trace Events](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/workers_trace_events/), which can be pushed to a destination of your choice via [Logpush](https://developers.cloudflare.com/workers/observability/logpush/).
All methods noted as "🟡 partial support" have the following behavior:
diff --git a/src/content/partials/cloudflare-one/gateway/order-of-enforcement.mdx b/src/content/partials/cloudflare-one/gateway/order-of-enforcement.mdx
index 8ac05170c6bcee..87191d52418c49 100644
--- a/src/content/partials/cloudflare-one/gateway/order-of-enforcement.mdx
+++ b/src/content/partials/cloudflare-one/gateway/order-of-enforcement.mdx
@@ -82,7 +82,7 @@ If the TCP connection to the destination server is successful, Gateway will appl
-Connections to Zero Trust will always appear in your [Zero Trust network session logs](/logs/reference/log-fields/account/zero_trust_network_sessions/) regardless of connection success. Because Gateway does not inspect failed connections, they will not appear in your [Gateway activity logs](/cloudflare-one/insights/logs/gateway-logs/).
+Connections to Zero Trust will always appear in your [Zero Trust network session logs](/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions/) regardless of connection success. Because Gateway does not inspect failed connections, they will not appear in your [Gateway activity logs](/cloudflare-one/insights/logs/gateway-logs/).
## Priority between policy builders
diff --git a/src/content/partials/cloudflare-one/tunnel/troubleshoot-private-networks.mdx b/src/content/partials/cloudflare-one/tunnel/troubleshoot-private-networks.mdx
index d3c0ea1b1bcd83..5b075621fcec0b 100644
--- a/src/content/partials/cloudflare-one/tunnel/troubleshoot-private-networks.mdx
+++ b/src/content/partials/cloudflare-one/tunnel/troubleshoot-private-networks.mdx
@@ -108,7 +108,7 @@ You can also use a packet capture tool such as `tcpdump` or Wireshark to trace w
If there is a problem with [TLS inspection](/cloudflare-one/policies/gateway/http-policies/tls-decryption/), the user will get an `Insecure Upstream` error when they access the application in a browser. They will probably not get an error if they access the application outside of a browser.
-Customers who have [Logpush](/cloudflare-one/insights/logs/logpush/) enabled can check the [Gateway HTTP dataset](/logs/reference/log-fields/account/gateway_http/) for any hostnames which have an elevated rate of `526` HTTP status codes.
+Customers who have [Logpush](/cloudflare-one/insights/logs/logpush/) enabled can check the [Gateway HTTP dataset](/logs/logpush/logpush-job/datasets/account/gateway_http/) for any hostnames which have an elevated rate of `526` HTTP status codes.
To troubleshoot TLS inspection:
diff --git a/src/content/partials/logs/enable-logpush-job.mdx b/src/content/partials/logs/enable-logpush-job.mdx
index b2d6543f73f0db..92cbe981738dda 100644
--- a/src/content/partials/logs/enable-logpush-job.mdx
+++ b/src/content/partials/logs/enable-logpush-job.mdx
@@ -5,7 +5,7 @@
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/login).
-2. Select the Enterprise account or domain (also known as zone) you want to use with Logpush. Depending on your choice, you have access to [account-scoped datasets](/logs/reference/log-fields/account/) and [zone-scoped datasets](/logs/reference/log-fields/zone/), respectively.
+2. Select the Enterprise account or domain (also known as zone) you want to use with Logpush. Depending on your choice, you have access to [account-scoped datasets](/logs/logpush/logpush-job/datasets/account/) and [zone-scoped datasets](/logs/logpush/logpush-job/datasets/zone/), respectively.
3. Go to **Analytics & Logs** > **Logpush**.
diff --git a/src/content/partials/logs/log-explorer-account-datasets.mdx b/src/content/partials/logs/log-explorer-account-datasets.mdx
index 7ea31e6c8d486c..27e42f217c2750 100644
--- a/src/content/partials/logs/log-explorer-account-datasets.mdx
+++ b/src/content/partials/logs/log-explorer-account-datasets.mdx
@@ -2,10 +2,10 @@
{}
---
-- [Access requests](/logs/reference/log-fields/account/access_requests/) (`FROM access_requests`)
-- [CASB Findings](/logs/reference/log-fields/account/casb_findings/) (`FROM casb_findings`)
-- [Device posture results](/logs/reference/log-fields/account/device_posture_results/) (`FROM device_posture_results`)
-- [Gateway DNS](/logs/reference/log-fields/account/gateway_dns/) (`FROM gateway_dns`)
-- [Gateway HTTP](/logs/reference/log-fields/account/gateway_http/) (`FROM gateway_http`)
-- [Gateway Network](/logs/reference/log-fields/account/gateway_network/) (`FROM gateway_network`)
-- [Zero Trust Network Session Logs](/logs/reference/log-fields/account/zero_trust_network_sessions/) (`FROM zero_trust_network_sessions`)
+- [Access requests](/logs/logpush/logpush-job/datasets/account/access_requests/) (`FROM access_requests`)
+- [CASB Findings](/logs/logpush/logpush-job/datasets/account/casb_findings/) (`FROM casb_findings`)
+- [Device posture results](/logs/logpush/logpush-job/datasets/account/device_posture_results/) (`FROM device_posture_results`)
+- [Gateway DNS](/logs/logpush/logpush-job/datasets/account/gateway_dns/) (`FROM gateway_dns`)
+- [Gateway HTTP](/logs/logpush/logpush-job/datasets/account/gateway_http/) (`FROM gateway_http`)
+- [Gateway Network](/logs/logpush/logpush-job/datasets/account/gateway_network/) (`FROM gateway_network`)
+- [Zero Trust Network Session Logs](/logs/logpush/logpush-job/datasets/account/zero_trust_network_sessions/) (`FROM zero_trust_network_sessions`)
diff --git a/src/content/partials/logs/video-send-network-analytics-logs-to-splunk.mdx b/src/content/partials/logs/video-send-network-analytics-logs-to-splunk.mdx
index a936f6036849b1..fe5c476e8d0fce 100644
--- a/src/content/partials/logs/video-send-network-analytics-logs-to-splunk.mdx
+++ b/src/content/partials/logs/video-send-network-analytics-logs-to-splunk.mdx
@@ -7,6 +7,6 @@ import { Stream } from "~/components"
### Video tutorial: Send Network Analytics logs to Splunk
-The following video shows how to integrate [Network Analytics logs](/logs/reference/log-fields/account/network_analytics_logs/) in Splunk.
+The following video shows how to integrate [Network Analytics logs](/logs/logpush/logpush-job/datasets/account/network_analytics_logs/) in Splunk.