diff --git a/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md b/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md
index 19da2e10ac130..f398f7dd36319 100644
--- a/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md
+++ b/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md
@@ -4,7 +4,7 @@ title: Best Practices for Scaling Observability Pipelines
---
{{< site-region region="gov" >}}
-Observability Pipelines is not available on the US1-FED Datadog site.
+Observability Pipelines is not available on the US1-FED Datadog site.
diff --git a/content/en/observability_pipelines/destinations/google_cloud_storage.md b/content/en/observability_pipelines/destinations/google_cloud_storage.md
index b5600d01c31c9..e097243bb5673 100644
--- a/content/en/observability_pipelines/destinations/google_cloud_storage.md
+++ b/content/en/observability_pipelines/destinations/google_cloud_storage.md
@@ -3,7 +3,7 @@ title: Google Cloud Storage Destination
disable_toc: false
---
-
+
Use the Google Cloud Storage destination to send your logs to a Google Cloud Storage bucket. If you want to send logs to Google Cloud Storage for [archiving][1] and [rehydration][2], you must [configure Log Archives](#configure-log-archives). If you do not want to rehydrate logs in Datadog, skip to [Set up the destination for your pipeline](#set-up-the-destinations).
diff --git a/content/en/observability_pipelines/install_the_worker/_index.md b/content/en/observability_pipelines/install_the_worker/_index.md
index cbf3183ab6888..e3ec2cde6f651 100644
--- a/content/en/observability_pipelines/install_the_worker/_index.md
+++ b/content/en/observability_pipelines/install_the_worker/_index.md
@@ -46,7 +46,7 @@ If you had set up the pipeline components using the [API][6] or Terraform, to ge
{{% /tab %}}
{{% tab "Linux" %}}
-For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
+For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
Follow the steps below if you want to use the one-line installation script to install the Worker. Otherwise, see [Manually install the Worker on Linux](#manually-install-the-worker-on-linux).
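For reference, a sketch of what that one-line flow typically looks like; the script URL and variable names here are assumptions based on Datadog's standard install-script pattern, so copy the exact command from the in-app setup instructions.

```shell
# Hypothetical sketch of the one-line Worker install; verify the script
# URL and variables against the Observability Pipelines setup page.
DD_API_KEY=<YOUR_API_KEY> \
DD_OP_PIPELINE_ID=<YOUR_PIPELINE_ID> \
DD_SITE=<YOUR_DATADOG_SITE> \
bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker2.sh)"
```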
@@ -121,7 +121,7 @@ See [Update Existing Pipelines][1] if you want to make changes to your pipeline'
{{% /tab %}}
{{% tab "RPM" %}}
-For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
+For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
1. Set up the Datadog `rpm` repo on your system with the below command.
**Note**: If you are running RHEL 8.1 or CentOS 8.1, use `repo_gpgcheck=0` instead of `repo_gpgcheck=1` in the configuration below.
```shell
diff --git a/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md b/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md
index 1ad97f583096d..c40013aa7c520 100644
--- a/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md
+++ b/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md
@@ -15,7 +15,7 @@ further_reading:
text: "Learn more about rehydrating log archives"
---
-The Observability Pipelines Datadog Archives destination is in beta.
+The Observability Pipelines Datadog Archives destination is in beta.
## Overview
@@ -194,7 +194,7 @@ See the [Log Archives documentation][6] for additional information.
You can configure the `datadog_archives` destination using the [configuration file](#configuration-file) or the [pipeline builder UI](#configuration-file).
-If the Worker is ingesting logs that are not coming from the Datadog Agent and are routed to the Datadog Archives destination, those logs are not tagged with reserved attributes. This means that you lose Datadog telemetry and the benefits of unified service tagging. For example, say your syslogs are sent to `datadog_archives`, and those logs have the status tagged as `severity` instead of the reserved attribute `status`, and the host tagged as `hostname` instead of the reserved attribute `host`. When these logs are rehydrated in Datadog, the `status` for all of the logs is set to `info`, and none of them have a hostname tag.
+If the Worker is ingesting logs that are not coming from the Datadog Agent and are routed to the Datadog Archives destination, those logs are not tagged with reserved attributes. This means that you lose Datadog telemetry and the benefits of unified service tagging. For example, say your syslogs are sent to `datadog_archives`, and those logs have the status tagged as `severity` instead of the reserved attribute `status`, and the host tagged as `hostname` instead of the reserved attribute `host`. When these logs are rehydrated in Datadog, the `status` for all of the logs is set to `info`, and none of them have a hostname tag.
### Configuration file
diff --git a/content/en/observability_pipelines/legacy/setup/datadog.md b/content/en/observability_pipelines/legacy/setup/datadog.md
index 3adfcc01b6686..ed7ffd8287110 100644
--- a/content/en/observability_pipelines/legacy/setup/datadog.md
+++ b/content/en/observability_pipelines/legacy/setup/datadog.md
@@ -108,8 +108,8 @@ In order to run the Worker in your AWS account, you need administrative access t
{{% /tab %}}
{{% tab "CloudFormation" %}}
-CloudFormation installs only support Remote Configuration.
-Only use CloudFormation installs for non-production-level workloads.
+CloudFormation installs only support Remote Configuration.
+Only use CloudFormation installs for non-production-level workloads.
In order to run the Worker in your AWS account, you need administrative access to that account. Collect the following pieces of information to run the Worker instances:
* The VPC ID your instances will run in.
@@ -301,7 +301,7 @@ The Observability Pipelines Worker Docker image is published to Docker Hub [here
{{% /tab %}}
{{% tab "CloudFormation" %}}
-Only use CloudFormation installs for non-production-level workloads.
+Only use CloudFormation installs for non-production-level workloads.
To install the Worker in your AWS Account, use the CloudFormation template to create a Stack:
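As a rough sketch, the equivalent AWS CLI call looks like the following; the template URL and parameter keys are placeholders, not the actual names from the Datadog-provided template.

```shell
# Sketch only: substitute the template URL and parameter names from the
# CloudFormation template that Datadog provides.
aws cloudformation create-stack \
  --stack-name observability-pipelines-worker \
  --template-url "<DATADOG_PROVIDED_TEMPLATE_URL>" \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=<API_KEY_PARAM>,ParameterValue=<YOUR_API_KEY>
```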
@@ -405,7 +405,7 @@ An NLB is provisioned by the Terraform module, and configured to point at the in
{{% /tab %}}
{{% tab "CloudFormation" %}}
-Only use CloudFormation installs for non-production-level workloads.
+Only use CloudFormation installs for non-production-level workloads.
An NLB is provisioned by the CloudFormation template, and is configured to point at the AutoScaling Group. Its DNS address is returned in the `LoadBalancerDNS` CloudFormation output.
{{% /tab %}}
@@ -442,7 +442,7 @@ By default, a 288GB EBS drive is allocated to each instance, and the sample conf
{{% /tab %}}
{{% tab "CloudFormation" %}}
-EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
+EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
By default, a 288GB EBS drive is allocated to each instance, and is auto-mounted and formatted upon instance boot.
{{% /tab %}}
diff --git a/content/en/observability_pipelines/legacy/setup/splunk.md b/content/en/observability_pipelines/legacy/setup/splunk.md
index 4b9f454c45c73..dd17e2e2ecebb 100644
--- a/content/en/observability_pipelines/legacy/setup/splunk.md
+++ b/content/en/observability_pipelines/legacy/setup/splunk.md
@@ -108,8 +108,8 @@ In order to run the Worker in your AWS account, you need administrative access t
{{% /tab %}}
{{% tab "CloudFormation" %}}
-CloudFormation installs only support Remote Configuration at this time.
-Only use CloudFormation installs for non-production-level workloads.
+CloudFormation installs only support Remote Configuration at this time.
+Only use CloudFormation installs for non-production-level workloads.
In order to run the Worker in your AWS account, you need administrative access to that account. Collect the following pieces of information to run the Worker instances:
* The VPC ID your instances will run in.
@@ -425,7 +425,7 @@ EOT
{{% /tab %}}
{{% tab "CloudFormation" %}}
-Only use CloudFormation installs for non-production-level workloads.
+Only use CloudFormation installs for non-production-level workloads.
To install the Worker in your AWS Account, use the CloudFormation template to create a Stack:
@@ -531,7 +531,7 @@ An NLB is provisioned by the Terraform module, and provisioned to point at the i
{{% /tab %}}
{{% tab "CloudFormation" %}}
-Only use CloudFormation installs for non-production-level workloads.
+Only use CloudFormation installs for non-production-level workloads.
An NLB is provisioned by the CloudFormation template, and is configured to point at the AutoScaling Group. Its DNS address is returned in the `LoadBalancerDNS` CloudFormation output.
{{% /tab %}}
@@ -568,7 +568,7 @@ By default, a 288GB EBS drive is allocated to each instance, and the sample conf
{{% /tab %}}
{{% tab "CloudFormation" %}}
-EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
+EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
By default, a 288GB EBS drive is allocated to each instance, and is auto-mounted and formatted upon instance boot.
{{% /tab %}}
diff --git a/content/en/observability_pipelines/performance.md b/content/en/observability_pipelines/performance.md
index fcb50107ff9fb..4ea6c5a5bc252 100644
--- a/content/en/observability_pipelines/performance.md
+++ b/content/en/observability_pipelines/performance.md
@@ -16,7 +16,7 @@ further_reading:
text: "Destinations"
---
-In-memory and disk buffering options for destinations are in Preview. Contact your account manager to request access.
+In-memory and disk buffering options for destinations are in Preview. Contact your account manager to request access.
## Overview
diff --git a/content/en/observability_pipelines/processors/_index.md b/content/en/observability_pipelines/processors/_index.md
index ea8d4040e2207..c0155c0053700 100644
--- a/content/en/observability_pipelines/processors/_index.md
+++ b/content/en/observability_pipelines/processors/_index.md
@@ -19,7 +19,7 @@ Select a processor in the left navigation menu to see more information about it.
## Processor groups
-Configuring a pipeline with processor groups is only available for Worker versions 2.7 and later.
+Configuring a pipeline with processor groups is only available for Worker versions 2.7 and later.
{{< img src="observability_pipelines/processors/processor_groups.png" alt="Processor groups shown in the pipeline builder" style="width:100%;" >}}
diff --git a/content/en/observability_pipelines/set_up_pipelines/_index.md b/content/en/observability_pipelines/set_up_pipelines/_index.md
index f5afc8dd011b7..1472ae82771ad 100644
--- a/content/en/observability_pipelines/set_up_pipelines/_index.md
+++ b/content/en/observability_pipelines/set_up_pipelines/_index.md
@@ -98,7 +98,7 @@ After you have set up your pipeline, see [Update Existing Pipelines][11] if you
{{% /tab %}}
{{% tab "API" %}}
-Creating pipelines using the Observability Pipelines API is in Preview. Fill out the form to request access.
+Creating pipelines using the Observability Pipelines API is in Preview. Fill out the form to request access.
1. You can use the Observability Pipelines API to [create a pipeline][1].
1. After creating the pipeline, [install the Worker][2] to send logs through it.
@@ -114,7 +114,7 @@ After you have set up your pipeline, see [Update Existing Pipelines][11] if you
{{% /tab %}}
{{% tab "Terraform" %}}
-Creating pipelines using Terraform is in Preview. Fill out the form to request access.
+Creating pipelines using Terraform is in Preview. Fill out the form to request access.
1. You can use the [datadog_observability_pipeline][1] module to create a pipeline using Terraform.
1. After creating the pipeline, [install the Worker][2] to send logs through it.
diff --git a/content/en/observability_pipelines/sources/_index.md b/content/en/observability_pipelines/sources/_index.md
index 15e0b1c673d39..4c37b3f6fcf5c 100644
--- a/content/en/observability_pipelines/sources/_index.md
+++ b/content/en/observability_pipelines/sources/_index.md
@@ -81,7 +81,7 @@ Instead of using a self-signed certificate, Datadog recommends the following:
If you must use a self-signed certificate because the above approaches are not possible, you can configure your environment to trust the self-signed certificate on the Observability Pipelines Worker host.
-Datadog does not recommend self-signed certificates. They are less secure and are not appropriate for production or internet-facing use. If you must use self-signed certificates, limit usage to internal testing only.
+Datadog does not recommend self-signed certificates. They are less secure and are not appropriate for production or internet-facing use. If you must use self-signed certificates, limit usage to internal testing only.
For the Worker host to trust the self-signed certificate:
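A minimal sketch of that trust step on a Debian/Ubuntu-based Worker host; the certificate filename is a placeholder, and other distributions use different tooling (for example, `update-ca-trust` on RHEL).

```shell
# Add the self-signed certificate to the system trust store and refresh it
# (Debian/Ubuntu example; the .crt filename is a placeholder).
sudo cp worker-selfsigned.crt /usr/local/share/ca-certificates/worker-selfsigned.crt
sudo update-ca-certificates
```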
diff --git a/content/en/opentelemetry/ingestion_sampling.md b/content/en/opentelemetry/ingestion_sampling.md
index f0c8f13de8f3a..e3de38db30a57 100644
--- a/content/en/opentelemetry/ingestion_sampling.md
+++ b/content/en/opentelemetry/ingestion_sampling.md
@@ -109,7 +109,7 @@ To configure probabilistic sampling, do one of the following:
- Probabilistic sampling will apply to spans originating from both Datadog and OTel tracing libraries.
- If you send spans both to the Datadog Agent **and** OTel collector instances, set the same seed between Datadog Agent (`DD_APM_PROBABILISTIC_SAMPLER_HASH_SEED`) and OTel collector (`hash_seed`) to ensure consistent sampling.
-`DD_OTLP_CONFIG_TRACES_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE` is deprecated and has been replaced by `DD_APM_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE`.
+`DD_OTLP_CONFIG_TRACES_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE` is deprecated and has been replaced by `DD_APM_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE`.
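On the Datadog Agent side, that configuration can be expressed with environment variables, as in this sketch; the seed and percentage values are illustrative, and the `DD_APM_PROBABILISTIC_SAMPLER_ENABLED` flag is an assumption here.

```shell
# Use the same seed as the OTel Collector's probabilistic_sampler
# hash_seed so both samplers make consistent decisions per trace ID.
export DD_APM_PROBABILISTIC_SAMPLER_ENABLED=true
export DD_APM_PROBABILISTIC_SAMPLER_HASH_SEED=22
export DD_APM_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE=20
```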
#### Considerations
diff --git a/content/en/opentelemetry/instrument/instrumentation_libraries.md b/content/en/opentelemetry/instrument/instrumentation_libraries.md
index 7453d1e07ca46..3109cae019753 100644
--- a/content/en/opentelemetry/instrument/instrumentation_libraries.md
+++ b/content/en/opentelemetry/instrument/instrumentation_libraries.md
@@ -59,7 +59,7 @@ Datadog SDKs do not support OpenTelemetry Metrics and Logs APIs. To use OpenTele
4. The Datadog SDK for Java also accepts select individual instrumentation JARs produced by OpenTelemetry's [opentelemetry-java-instrumentation][9] build, for example the [R2DBC instrumentation JAR][11].
-
+
OpenTelemetry incubator APIs are not supported.
@@ -95,7 +95,7 @@ mvn spring-boot:run -Dstart-class=com.baeldung.pagination.PaginationApplication
Open `http://127.0.0.1:8080/products` to exercise the product query. With this setup, you are using OpenTelemetry's instrumentation to ensure full observability for R2DBC queries.
-
+
Versions 2.6.0-alpha and later of these OpenTelemetry instrumentations are not supported by the Datadog Java SDK.
diff --git a/content/en/opentelemetry/integrations/_index.md b/content/en/opentelemetry/integrations/_index.md
index 2472cb6570ec9..5906c4791ffd4 100644
--- a/content/en/opentelemetry/integrations/_index.md
+++ b/content/en/opentelemetry/integrations/_index.md
@@ -22,7 +22,7 @@ Datadog collects metrics from supported OpenTelemetry receivers at no extra cost
For example, the [`dockerstatsreceiver`][15] `metadata.yaml` file lists metrics that you can collect at no extra cost.
-Ensure that you configure receivers according to OpenTelemetry receiver documentation. Incorrectly configured receivers may cause metrics to be classified as custom, resulting in additional charges.
+Ensure that you configure receivers according to OpenTelemetry receiver documentation. Incorrectly configured receivers may cause metrics to be classified as custom, resulting in additional charges.
## Datadog-supported OpenTelemetry integrations
diff --git a/content/en/opentelemetry/integrations/datadog_extension.md b/content/en/opentelemetry/integrations/datadog_extension.md
index 8a34e0d2ae978..c70558e76fef8 100644
--- a/content/en/opentelemetry/integrations/datadog_extension.md
+++ b/content/en/opentelemetry/integrations/datadog_extension.md
@@ -89,7 +89,7 @@ service:
| `timeout` | Timeout for HTTP requests | `30s` |
| `tls.insecure_skip_verify` | Skip TLS certificate verification | `false` |
-
+
Hostname Matching: If you specify a custom `hostname` in the Datadog Extension, it must match the `hostname` value in the Datadog Exporter configuration. The Datadog Extension does not have access to pipeline telemetry and cannot infer hostnames from incoming spans. It only obtains hostnames from system/cloud provider APIs or manual configuration. If telemetry has different hostname attributes than the hostname reported by the extension, the telemetry will not be correlated to the correct host, and you may see duplicate hosts in Datadog.
diff --git a/content/en/opentelemetry/integrations/kafka_metrics.md b/content/en/opentelemetry/integrations/kafka_metrics.md
index ecb0686aeb0c0..a75859cadf7a8 100644
--- a/content/en/opentelemetry/integrations/kafka_metrics.md
+++ b/content/en/opentelemetry/integrations/kafka_metrics.md
@@ -6,7 +6,7 @@ further_reading:
text: "Setting Up the OpenTelemetry Collector"
---
-
+
OTel Kafka Metrics Remapping is in public alpha. It is available in collector versions 0.93.0 and later. If you have feedback related to this feature, reach out to your account team.
diff --git a/content/en/opentelemetry/integrations/runtime_metrics/_index.md b/content/en/opentelemetry/integrations/runtime_metrics/_index.md
index 26bbfe9b1186f..287427d440088 100644
--- a/content/en/opentelemetry/integrations/runtime_metrics/_index.md
+++ b/content/en/opentelemetry/integrations/runtime_metrics/_index.md
@@ -69,7 +69,7 @@ OpenTelemetry Go applications are [instrumented manually][3]. To enable runtime
{{% tab ".NET" %}}
-The minimum supported version of the .NET OpenTelemetry SDK is `1.5.0`.
+The minimum supported version of the .NET OpenTelemetry SDK is `1.5.0`.
#### Automatic instrumentation
@@ -120,7 +120,7 @@ The OpenTelemetry runtime metrics have the following prefixes based on their sou
The following tables list the Datadog runtime metrics that are supported through OpenTelemetry mapping. "N/A" indicates that there is no OpenTelemetry equivalent metric available.
-OpenTelemetry runtime metrics are mapped to Datadog by metric name. Do not rename host metrics for OpenTelemetry runtime metrics as this breaks the mapping.
+OpenTelemetry runtime metrics are mapped to Datadog by metric name. Do not rename host metrics for OpenTelemetry runtime metrics as this breaks the mapping.
[100]: /opentelemetry/setup/collector_exporter/
[101]: /opentelemetry/setup/otlp_ingest_in_the_agent
diff --git a/content/en/opentelemetry/migrate/collector_0_95_0.md b/content/en/opentelemetry/migrate/collector_0_95_0.md
index 99afe791af8b5..743b8382862b5 100644
--- a/content/en/opentelemetry/migrate/collector_0_95_0.md
+++ b/content/en/opentelemetry/migrate/collector_0_95_0.md
@@ -16,7 +16,7 @@ To continue receiving Trace Metrics, configure the Datadog Connector in the Open
## Migrate to OpenTelemetry Collector version 0.95.0+
-To continue receiving Trace Metrics, you must configure the Datadog Connector as a part of your upgrade to OpenTelemetry Collector version 0.95.0+. Upgrading without configuring the Datadog Connector might also result in difficulties viewing the APM Traces page within the application. Monitors and dashboards based on the affected metrics might also be impacted.
+To continue receiving Trace Metrics, you must configure the Datadog Connector as a part of your upgrade to OpenTelemetry Collector version 0.95.0+. Upgrading without configuring the Datadog Connector might also result in difficulties viewing the APM Traces page within the application. Monitors and dashboards based on the affected metrics might also be impacted.
Before proceeding with the upgrade to the OTel Collector versions 0.95.0+:
- Review the [release notes](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.95.0) to understand the nature of the changes.
diff --git a/content/en/opentelemetry/migrate/ddot_collector.md b/content/en/opentelemetry/migrate/ddot_collector.md
index 98ffd01fb7db9..77419dfab83fa 100644
--- a/content/en/opentelemetry/migrate/ddot_collector.md
+++ b/content/en/opentelemetry/migrate/ddot_collector.md
@@ -16,7 +16,7 @@ If you are already using a standalone OpenTelemetry (OTel) Collector for your OT
To migrate to the DDOT Collector, you need to install the Datadog Agent and configure your applications to report the telemetry data.
-
+
The DDOT Collector only supports deployment as a DaemonSet (following the agent deployment pattern), not as a gateway. If you have an existing gateway architecture, you can use the DDOT Collector with the `loadbalancingexporter` to connect to your existing gateway layer.
@@ -212,7 +212,7 @@ datadog:
```
1. (Optional) Enable additional Datadog features:
-Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
+Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
{{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}}
datadog:
...
@@ -228,7 +228,7 @@ datadog:
processCollection: true
{{< /code-block >}}
1. (Optional) Collect pod labels and use them as tags to attach to metrics, traces, and logs:
-
+
{{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}}
datadog:
...
diff --git a/content/en/opentelemetry/migrate/migrate_operation_names.md b/content/en/opentelemetry/migrate/migrate_operation_names.md
index 270e3d59347fd..514801d8c36a5 100644
--- a/content/en/opentelemetry/migrate/migrate_operation_names.md
+++ b/content/en/opentelemetry/migrate/migrate_operation_names.md
@@ -15,7 +15,7 @@ When using OpenTelemetry with Datadog, you might see unclear or lengthy operatio
Datadog has introduced new logic for generating operation names for OpenTelemetry traces, controlled by the `enable_operation_and_resource_name_logic_v2` feature flag. This new logic improves trace visibility in service pages and standardizes operation naming according to the rules outlined below.
-
+
Breaking Change: When this new logic is active (either by opting in or by future default), it is a breaking change for monitors or dashboards that reference operation names based on the old conventions. You must update your monitors and dashboards to use the new naming conventions described in New mapping logic. If you cannot update them yet, you can opt out.
diff --git a/content/en/opentelemetry/setup/ddot_collector/_index.md b/content/en/opentelemetry/setup/ddot_collector/_index.md
index 2cf8042c572aa..a0f345270dad1 100644
--- a/content/en/opentelemetry/setup/ddot_collector/_index.md
+++ b/content/en/opentelemetry/setup/ddot_collector/_index.md
@@ -9,7 +9,7 @@ further_reading:
---
{{< site-region region="gov" >}}
-The Datadog Distribution of OpenTelemetry Collector (DDOT) is not yet FedRAMP/FIPS compliant.
+The Datadog Distribution of OpenTelemetry Collector (DDOT) is not yet FedRAMP/FIPS compliant.
• If you require a FedRAMP or FIPS-compliant data collection pipeline, use the FIPS-enabled Datadog Agent.
• If you are a GovCloud customer whose only requirement is data residency in the GovCloud (US1-FED) data center, you may use the DDOT Collector.
{{< /site-region >}}
diff --git a/content/en/opentelemetry/setup/ddot_collector/custom_components.md b/content/en/opentelemetry/setup/ddot_collector/custom_components.md
index 5b5414d095856..b501ad2b56159 100644
--- a/content/en/opentelemetry/setup/ddot_collector/custom_components.md
+++ b/content/en/opentelemetry/setup/ddot_collector/custom_components.md
@@ -9,7 +9,7 @@ further_reading:
---
{{< site-region region="gov" >}}
-FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
+FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
{{< /site-region >}}
This guide explains how to build a DDOT Collector image with additional OpenTelemetry components not included in the default DDOT Collector. To see a list of components already included in the DDOT Collector by default, see [Included components][1].
diff --git a/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md b/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md
index c4f16dbda8e13..3e240c0f9d461 100644
--- a/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md
+++ b/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md
@@ -12,7 +12,7 @@ further_reading:
---
{{< site-region region="gov" >}}
-FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
+FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
{{< /site-region >}}
## Overview
@@ -128,7 +128,7 @@ The Datadog Operator automatically binds the OpenTelemetry Collector to ports `4
4. (Optional) Enable additional Datadog features:
-Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
+Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
{{< code-block lang="yaml" filename="datadog-agent.yaml" collapsible="true" >}}
# Enable Features
@@ -205,7 +205,7 @@ If you don't want to expose the port, you can use the Agent service instead:
4. (Optional) Enable additional Datadog features:
-Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
+Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
{{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}}
datadog:
@@ -226,7 +226,7 @@ When enabling additional Datadog features, always use the Datadog or OpenTelemet
5. (Optional) Collect pod labels and use them as tags to attach to metrics, traces, and logs:
-
+
{{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}}
datadog:
@@ -501,7 +501,7 @@ data:
exporters: [debug, datadog]
{{< /code-block >}}
-The field for Collector config in the ConfigMap must be called `otel-config.yaml`.
+The field for Collector config in the ConfigMap must be called `otel-config.yaml`.
2. Reference the `otel-agent-config-map` ConfigMap in your `DatadogAgent` resource using `features.otelCollector.conf.configMap` parameter:
{{< code-block lang="yaml" filename="datadog-agent.yaml" collapsible="false" >}}
diff --git a/content/en/opentelemetry/setup/ddot_collector/install/linux.md b/content/en/opentelemetry/setup/ddot_collector/install/linux.md
index 5c4582ef80b64..d4d3e888a2458 100644
--- a/content/en/opentelemetry/setup/ddot_collector/install/linux.md
+++ b/content/en/opentelemetry/setup/ddot_collector/install/linux.md
@@ -14,7 +14,7 @@ further_reading:
{{< /callout >}}
{{< site-region region="gov" >}}
-FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
+FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
{{< /site-region >}}
## Overview
@@ -111,7 +111,7 @@ DDOT automatically binds the OpenTelemetry Collector to ports 4317 (grpc) and 43
### (Optional) Enable additional Datadog features
-Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
+Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
For a complete list of available options, refer to the fully commented reference file at `/etc/datadog-agent/datadog.yaml.example` or the sample [`config_template.yaml`][12] file.
diff --git a/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md b/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md
index dcbd4a95724a7..1ef5022b31b10 100644
--- a/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md
+++ b/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md
@@ -33,7 +33,7 @@ To get started, you first [instrument your application][3] with OpenTelemetry SD
Read the OpenTelemetry instrumentation documentation to understand how to point your instrumentation to the Agent. The `receiver` section described below follows the [OpenTelemetry Collector OTLP receiver configuration schema][5].
-Note: The supported setup is an ingesting Agent deployed on every OpenTelemetry-data-generating host. You cannot send OpenTelemetry telemetry from collectors or instrumented apps running on one host to an Agent on a different host. However, provided the Agent is local to the collector or SDK-instrumented app, you can set up multiple pipelines.
+Note: The supported setup is an ingesting Agent deployed on every OpenTelemetry-data-generating host. You cannot send OpenTelemetry telemetry from collectors or instrumented apps running on one host to an Agent on a different host. However, provided the Agent is local to the collector or SDK-instrumented app, you can set up multiple pipelines.
## Enabling OTLP Ingestion on the Datadog Agent
@@ -103,7 +103,7 @@ OTLP logs ingestion on the Datadog Agent is disabled by default so that you don'
- Set `DD_LOGS_ENABLED` to true.
- Set `DD_OTLP_CONFIG_LOGS_ENABLED` to true.
-
+
Known Issue: Starting with Agent version 7.61.0, OTLP ingestion pipelines may fail to start in Docker environments, displaying the error: `Error running the OTLP ingest pipeline: failed to register process metrics: process does not exist`.
If you are using an affected version, you can use one of these workarounds:
1. Set the environment variable `HOST_PROC` to `/proc` in your Agent Docker container (see the sketch below).
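A sketch of the first workaround for a containerized Agent, combined with the OTLP logs settings above; the image tag and API key are placeholders.

```shell
# HOST_PROC=/proc works around the process-metrics startup failure on
# affected Agent versions (7.61.0+).
docker run -d --name datadog-agent \
  -e DD_API_KEY=<YOUR_API_KEY> \
  -e DD_LOGS_ENABLED=true \
  -e DD_OTLP_CONFIG_LOGS_ENABLED=true \
  -e HOST_PROC=/proc \
  -p 4317:4317 -p 4318:4318 \
  gcr.io/datadoghq/agent:latest
```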
diff --git a/content/en/opentelemetry/troubleshooting.md b/content/en/opentelemetry/troubleshooting.md
index 46222c52b1fcd..5afee53da4344 100644
--- a/content/en/opentelemetry/troubleshooting.md
+++ b/content/en/opentelemetry/troubleshooting.md
@@ -271,7 +271,7 @@ features:
name: otel-http
```
-When configuring ports `4317` and `4318`, you must use the default names `otel-grpc` and `otel-http` respectively to avoid port conflicts.
+When configuring ports `4317` and `4318`, you must use the default names `otel-grpc` and `otel-http` respectively to avoid port conflicts.
## Further reading
diff --git a/content/en/product_analytics/session_replay/browser/_index.md b/content/en/product_analytics/session_replay/browser/_index.md
index ef29cb24b579b..5e8c48801dca4 100644
--- a/content/en/product_analytics/session_replay/browser/_index.md
+++ b/content/en/product_analytics/session_replay/browser/_index.md
@@ -69,13 +69,13 @@ if (user.isAuthenticated) {
To stop the Session Replay recording, call `stopSessionReplayRecording()`.
-When using a version of the RUM Browser SDK older than v5.0.0, Session Replay recording does not begin automatically. Call `startSessionReplayRecording()` to begin recording.
+When using a version of the RUM Browser SDK older than v5.0.0, Session Replay recording does not begin automatically. Call `startSessionReplayRecording()` to begin recording.
## Disable Session Replay
To stop session recordings, set `sessionReplaySampleRate` to `0`. This stops collecting data for the [Browser RUM & Session Replay plan][6].
-If you're using a version of the RUM Browser SDK prior to v5.0.0, set `replaySampleRate` to `0`.
+If you're using a version of the RUM Browser SDK prior to v5.0.0, set `replaySampleRate` to `0`.
## Playback history
diff --git a/content/en/profiler/connect_traces_and_profiles.md b/content/en/profiler/connect_traces_and_profiles.md
index b299f73eb3ed9..b1c5143cf4246 100644
--- a/content/en/profiler/connect_traces_and_profiles.md
+++ b/content/en/profiler/connect_traces_and_profiles.md
@@ -40,7 +40,7 @@ try (final Scope scope = tracer.activateSpan(span)) { // mandatory for Datadog c
```
-
+
diff --git a/content/en/profiler/enabling/ddprof.md b/content/en/profiler/enabling/ddprof.md
index a43cae49ade96..f8c43cf18e907 100644
--- a/content/en/profiler/enabling/ddprof.md
+++ b/content/en/profiler/enabling/ddprof.md
@@ -15,7 +15,7 @@ further_reading:
text: 'Fix problems you encounter while using the profiler'
---
-
+
`ddprof` is in beta. Datadog recommends evaluating the profiler in a non-sensitive environment before deploying in production.
diff --git a/content/en/profiler/enabling/dotnet.md b/content/en/profiler/enabling/dotnet.md
index ea0b8b35947ee..a83dadd63306a 100644
--- a/content/en/profiler/enabling/dotnet.md
+++ b/content/en/profiler/enabling/dotnet.md
@@ -47,7 +47,7 @@ Supported .NET runtimes (64-bit applications)
.NET 8
.NET 9
-
+
@@ -75,7 +75,7 @@ The following profiling features are available in the following minimum versions
- Continuous Profiler is not supported for AWS Lambda.
- Continuous Profiler does not support ARM64.
-
+
Note: Unlike APM, Continuous Profiler is not activated by default when the APM package is installed. You must explicitly enable it for the applications you want to profile.
@@ -85,7 +85,7 @@ Ensure Datadog Agent v6+ is installed and running. Datadog recommends using [Dat
Otherwise, install the profiler using the following steps, depending on your operating system.
-
+
Note: Datadog's automatic instrumentation relies on the .NET CLR Profiling API. Since this API allows only one subscriber, run only one APM solution in your application environment.
@@ -97,7 +97,7 @@ You can install the Datadog .NET Profiler machine-wide so that any services on t
{{% tab "Linux with Single Step APM Instrumentation" %}}
1. With [Single Step APM Instrumentation][1], there is nothing else to install. Go to [Enabling the Profiler](#enabling-the-profiler) to see how to activate the profiler for an application.
-
+
Note: If APM was already manually installed, you must uninstall it by removing the following environment variables (see the sketch below):
- `CORECLR_ENABLE_PROFILING`
- `CORECLR_PROFILER`
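For a process launched from a shell, clearing those variables can look like this sketch; for services managed by systemd or IIS, remove them from the unit file or system environment instead.

```shell
# Remove the manual APM instrumentation variables so Single Step
# Instrumentation can manage profiling on its own.
unset CORECLR_ENABLE_PROFILING
unset CORECLR_PROFILER
```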
@@ -147,7 +147,7 @@ To install the .NET Profiler machine-wide:
{{% tab "NuGet" %}}
-
+
Note: This installation does not instrument applications running in IIS. For applications running in IIS, follow the Windows machine-wide installation process.
@@ -160,7 +160,7 @@ To install the .NET Profiler per-application:
{{% tab "Azure App Service" %}}
-
+
Note: Only Web Apps are supported. Functions are not supported.
@@ -177,7 +177,7 @@ To install the .NET Profiler per-webapp:
## Enabling the Profiler
-
+
Note: Datadog does not recommend enabling the profiler at the machine level or for all IIS applications. If you have enabled it machine-wide, read the Troubleshooting documentation for information about reducing the overhead associated with enabling the profiler for all system applications.
@@ -277,7 +277,7 @@ To install the .NET Profiler per-webapp:
net start w3svc
```
-
+
Note: Use `stop` and `start` commands. A reset or restart does not always work.
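In other words, restart IIS with explicit stop and start commands, as in this sketch:

```shell
# Fully stop IIS, then start it again; a reset does not always re-read
# the profiler environment variables.
net stop w3svc
net start w3svc
```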
@@ -471,7 +471,7 @@ You can configure the profiler using the following environment variables. Note t
| `DD_PROFILING_HTTP_ENABLED` | Boolean | If set to `true`, enables outgoing HTTP request profiling used in Timeline user interface. Defaults to `false`. |
-
+
diff --git a/content/en/profiler/enabling/java.md b/content/en/profiler/enabling/java.md
index d97f240719fa4..7da1724d1a3e3 100644
--- a/content/en/profiler/enabling/java.md
+++ b/content/en/profiler/enabling/java.md
@@ -309,7 +309,7 @@ The allocation engine does not depend on the `/proc/sys/kernel/perf_event_parano
If the Datadog profiler CPU or wallclock engines are enabled, you can collect native stack traces. Native stack traces include things like JVM internals, native libraries used by your application or the JVM, and syscalls.
-Native stack traces are not collected by default because they usually do not provide actionable insights, and walking native stacks can potentially impact application stability. Test this setting in a non-production environment before you try using it in production.
+Native stack traces are not collected by default because they usually do not provide actionable insights, and walking native stacks can potentially impact application stability. Test this setting in a non-production environment before you try using it in production.
To enable native stack trace collection, with the understanding that it can destabilize your application, set:
diff --git a/content/en/profiler/profiler_troubleshooting/ddprof.md b/content/en/profiler/profiler_troubleshooting/ddprof.md
index 5a4884e404ed3..79d52b8d28175 100644
--- a/content/en/profiler/profiler_troubleshooting/ddprof.md
+++ b/content/en/profiler/profiler_troubleshooting/ddprof.md
@@ -9,7 +9,7 @@ further_reading:
text: 'APM Troubleshooting'
---
-
+
`ddprof` is in Preview. Datadog recommends evaluating the profiler in a non-sensitive environment before deploying in production.
diff --git a/content/en/real_user_monitoring/browser/frustration_signals.md b/content/en/real_user_monitoring/browser/frustration_signals.md
index b9f36372663a9..1c907bf102fdc 100644
--- a/content/en/real_user_monitoring/browser/frustration_signals.md
+++ b/content/en/real_user_monitoring/browser/frustration_signals.md
@@ -148,7 +148,7 @@ Frustration signals are generated from mouse clicks, not keyboard strokes.
If a session is live, it is fetching information and may cause the banners to reflect a different number than those in the timeline.
-
+
diff --git a/content/en/real_user_monitoring/browser/monitoring_page_performance.md b/content/en/real_user_monitoring/browser/monitoring_page_performance.md
index 475646956ec77..04fe3cde9a3e2 100644
--- a/content/en/real_user_monitoring/browser/monitoring_page_performance.md
+++ b/content/en/real_user_monitoring/browser/monitoring_page_performance.md
@@ -35,7 +35,7 @@ You can access performance telemetry for your views in:
## Event timings and core web vitals
-
+
diff --git a/content/en/real_user_monitoring/browser/tracking_user_actions.md b/content/en/real_user_monitoring/browser/tracking_user_actions.md
index d68dbe6d8281f..b740f5a08b001 100644
--- a/content/en/real_user_monitoring/browser/tracking_user_actions.md
+++ b/content/en/real_user_monitoring/browser/tracking_user_actions.md
@@ -73,7 +73,7 @@ For example:
```html
Try it out!
-
+
Error:
Enter a valid email address
diff --git a/content/en/real_user_monitoring/correlate_with_other_telemetry/apm/_index.md b/content/en/real_user_monitoring/correlate_with_other_telemetry/apm/_index.md
index 85ff3bf536ae4..ee2a7fc4a96fb 100644
--- a/content/en/real_user_monitoring/correlate_with_other_telemetry/apm/_index.md
+++ b/content/en/real_user_monitoring/correlate_with_other_telemetry/apm/_index.md
@@ -121,7 +121,7 @@ To start sending just your iOS application's traces to Datadog, see [iOS Trace C
- `RegExp`: matches if any substring of the URL matches the provided RegExp. For example, `/^https:\/\/[^\/]+\.my-api-domain\.com/` matches URLs like `https://foo.my-api-domain.com/path`, but not `https://notintended.com/?from=guess.my-api-domain.com`. **Note:** The RegExp is not anchored to the start of the URL unless you use `^`. Be careful, as overly broad patterns can unintentionally match unwanted URLs and cause CORS errors.
- `function`: evaluates with the URL as parameter. Returning a `boolean` set to `true` indicates a match.
-When using RegExp, the pattern is tested against the entire URL as a substring, not just the prefix. To avoid unintended matches, anchor your RegExp with `^` and be as specific as possible.
+When using RegExp, the pattern is tested against the entire URL as a substring, not just the prefix. To avoid unintended matches, anchor your RegExp with `^` and be as specific as possible.
3. _(Optional)_ Configure the `traceSampleRate` initialization parameter to keep a defined percentage of the backend traces. If not set, 100% of the traces coming from browser requests are sent to Datadog. To keep 20% of backend traces, for example:
@@ -313,7 +313,7 @@ To start sending just your iOS application's traces to Datadog, see [iOS Trace C
{{% tab "Roku RUM" %}}
{{< site-region region="gov" >}}
-RUM for Roku is not available on the US1-FED Datadog site.
+RUM for Roku is not available on the US1-FED Datadog site.
{{< /site-region >}}
1. Set up [RUM Roku Monitoring][1].
diff --git a/content/en/real_user_monitoring/feature_flag_tracking/setup.md b/content/en/real_user_monitoring/feature_flag_tracking/setup.md
index da39e8cd1417f..d21204fdbda82 100644
--- a/content/en/real_user_monitoring/feature_flag_tracking/setup.md
+++ b/content/en/real_user_monitoring/feature_flag_tracking/setup.md
@@ -119,7 +119,7 @@ To enable feature flag data collection for your React Native application:
You can start collecting feature flag data with [custom feature flag management solutions](#custom-feature-flag-management), or by using one of Datadog's integration partners listed below.
-
+
**Note**: The following special characters are not supported for Feature Flag Tracking: `.`, `:`, `+`, `-`, `=`, `&&`, `||`, `>`, `<`, `!`, `(`, `)`, `{`, `}`, `[`, `]`, `^`, `"`, `“`, `”`, `~`, `*`, `?`, `\`. Datadog recommends avoiding these characters when possible in your feature flag names. If you are required to use one of these characters, replace the character before sending the data to Datadog. For example:
diff --git a/content/en/real_user_monitoring/guide/debug-symbols.md b/content/en/real_user_monitoring/guide/debug-symbols.md
index c49470d151da2..9d2bd286b77e4 100644
--- a/content/en/real_user_monitoring/guide/debug-symbols.md
+++ b/content/en/real_user_monitoring/guide/debug-symbols.md
@@ -23,7 +23,7 @@ Use the [RUM Debug Symbols page][1] to see if there are debug symbols for your a
If there are no debug symbols for your application, [upload them][2].
-
+
Ensure that the size of each debug symbol does not exceed the limit of **500 MB**; otherwise, the upload is rejected.
For iOS dSYMs, individual files up to **2 GB** are supported.
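For iOS dSYMs specifically, a minimal upload sketch with the `datadog-ci` CLI, assuming your dSYM files live under `./dSYMs`:

```shell
# Requires a Datadog API key in the environment.
export DATADOG_API_KEY=<YOUR_API_KEY>
npx @datadog/datadog-ci dsyms upload ./dSYMs
```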
diff --git a/content/en/real_user_monitoring/guide/enable-rum-shopify-store.md b/content/en/real_user_monitoring/guide/enable-rum-shopify-store.md
index 409257fb06fba..96569b4a7a01a 100644
--- a/content/en/real_user_monitoring/guide/enable-rum-shopify-store.md
+++ b/content/en/real_user_monitoring/guide/enable-rum-shopify-store.md
@@ -11,7 +11,7 @@ further_reading:
text: 'Alerting With Conversion Rates'
---
-
+
diff --git a/content/en/real_user_monitoring/guide/mobile-sdk-upgrade.md b/content/en/real_user_monitoring/guide/mobile-sdk-upgrade.md
index 2e8ed0917e0f2..aba7733949888 100644
--- a/content/en/real_user_monitoring/guide/mobile-sdk-upgrade.md
+++ b/content/en/real_user_monitoring/guide/mobile-sdk-upgrade.md
@@ -40,7 +40,7 @@ All SDK products (RUM, Trace, Logs, Session Replay, and so on) remain modular an
{{< tabs >}}
{{% tab "Android" %}}
-
+
@@ -223,7 +223,7 @@ Refer to the official `Open Telemetry` [documentation](https://opentelemetry.io/
#### Migrating tracing from `Open Tracing` to `DatadogTracing` (transition period)
-This option has been added for compatibility and to simplify the transition from Open Tracing to Open Telemetry, but it may not be available in future major releases. Datadog recommends using Open Telemetry as the standard for tracing tasks. However, if it is not possible to enable desugaring in your project for some reason, you can use this method.
+This option has been added for compatibility and to simplify the transition from Open Tracing to Open Telemetry, but it may not be available in future major releases. Datadog recommends using Open Telemetry as the standard for tracing tasks. However, if it is not possible to enable desugaring in your project for some reason, you can use this method.
Replace the `Open Tracing` configuration:
```kotlin
GlobalTracer.registerIfAbsent(
@@ -448,7 +448,7 @@ Reference to the `com.datadoghq:dd-sdk-android` artifact should be removed from
**Note**: The Maven coordinates of all the other artifacts stay the same.
-v2 does not support Android API 19 (KitKat). The minimum SDK supported is now API 21 (Lollipop). Kotlin 1.7 is required. The SDK itself is compiled with Kotlin 1.8, so a compiler of Kotlin 1.6 and below cannot read SDK classes metadata.
+v2 does not support Android API 19 (KitKat). The minimum SDK supported is now API 21 (Lollipop). Kotlin 1.7 is required. The SDK itself is compiled with Kotlin 1.8, so a compiler of Kotlin 1.6 and below cannot read SDK classes metadata.
Should you encounter an error such as the following:
diff --git a/content/en/real_user_monitoring/guide/shadow-dom.md b/content/en/real_user_monitoring/guide/shadow-dom.md
index 593a008989646..81ea43010d633 100644
--- a/content/en/real_user_monitoring/guide/shadow-dom.md
+++ b/content/en/real_user_monitoring/guide/shadow-dom.md
@@ -8,7 +8,7 @@ further_reading:
text: 'Learn about Session Replay'
---
-
+
Datadog only supports open Shadow DOM.
diff --git a/content/en/real_user_monitoring/guide/upload-javascript-source-maps.md b/content/en/real_user_monitoring/guide/upload-javascript-source-maps.md
index cb00a1a2321b6..78a3aa8c7e80a 100644
--- a/content/en/real_user_monitoring/guide/upload-javascript-source-maps.md
+++ b/content/en/real_user_monitoring/guide/upload-javascript-source-maps.md
@@ -23,7 +23,7 @@ If your front-end JavaScript source code is minified, upload your source maps to
Configure your JavaScript bundler such that when minifying your source code, it generates source maps that directly include the related source code in the `sourcesContent` attribute.
-
+
Ensure that the size of each source map augmented with the size of the related minified file does not exceed the limit of **500 MB**.
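For example, a minimal upload sketch with the `datadog-ci` CLI; the service name, version, and path prefix are placeholders.

```shell
# Uploads every source map found under ./dist along with its minified file.
export DATADOG_API_KEY=<YOUR_API_KEY>
npx @datadog/datadog-ci sourcemaps upload ./dist \
  --service my-web-app \
  --release-version 1.0.0 \
  --minified-path-prefix https://example.com/static/js
```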
@@ -82,7 +82,7 @@ See the following example:
javascript.464388.js.map
```
-
+
If the sum of the file size for `javascript.364758.min.js` and `javascript.364758.js.map` exceeds the 500 MB limit, reduce it by configuring your bundler to split the source code into multiple smaller chunks. For more information, see Code Splitting with WebpackJS.
diff --git a/content/en/real_user_monitoring/mobile_and_tv_monitoring/ios/setup.md b/content/en/real_user_monitoring/mobile_and_tv_monitoring/ios/setup.md
index eb9c2c3129c82..54ecf41348be3 100644
--- a/content/en/real_user_monitoring/mobile_and_tv_monitoring/ios/setup.md
+++ b/content/en/real_user_monitoring/mobile_and_tv_monitoring/ios/setup.md
@@ -138,7 +138,7 @@ In the initialization snippet, set an environment name, service name, and client
The SDK should be initialized as early as possible in the app lifecycle, specifically in the `AppDelegate`'s `application(_:didFinishLaunchingWithOptions:)` callback. This ensures all measurements, including application startup duration, are captured correctly. For apps built with SwiftUI, you can use `@UIApplicationDelegateAdaptor` to hook into the `AppDelegate`.
-Initializing the SDK elsewhere (for example, later during view loading) may result in inaccurate or missing telemetry, especially around app startup performance.
+Initializing the SDK elsewhere (for example, later during view loading) may result in inaccurate or missing telemetry, especially around app startup performance.
For more information, see [Using Tags][5].
@@ -505,7 +505,7 @@ The `trackRUMView(name:)` method starts and stops a view when the `SwiftUI` view
The Datadog iOS SDK allows you to instrument tap actions of `SwiftUI` applications. The instrumentation also works with hybrid `UIKit` and `SwiftUI` applications.
-Using `.trackRUMTapAction(name:)` for `SwiftUI` controls inside a `List` can break its default gestures. For example, it may disable the `Button` action or break `NavigationLink`. To track taps in a `List` element, use the Custom Actions API instead.
+Using `.trackRUMTapAction(name:)` for `SwiftUI` controls inside a `List` can break its default gestures. For example, it may disable the `Button` action or break `NavigationLink`. To track taps in a `List` element, use the Custom Actions API instead.
To instrument a tap action on a `SwiftUI.View`, add the following method to your view declaration:
diff --git a/content/en/real_user_monitoring/ownership_of_views.md b/content/en/real_user_monitoring/ownership_of_views.md
index 691bcc8fcf74d..a6af39a572173 100644
--- a/content/en/real_user_monitoring/ownership_of_views.md
+++ b/content/en/real_user_monitoring/ownership_of_views.md
@@ -27,7 +27,7 @@ To configure team ownership for your application's views:
After you associate a view with a team, Datadog automatically attributes new event data to that team.
-If you change a team and view mapping, any past metrics or events are not retroactively tagged with the new team.
+If you change a team and view mapping, any past metrics or events are not retroactively tagged with the new team.
{{< img src="/real_user_monitoring/ownership_of_views/ownership-application-management-2.png" alt="View of the Team Ownership page, where you can assign different pages of your application to specific teams." >}}
diff --git a/content/en/real_user_monitoring/platform/generate_metrics.md b/content/en/real_user_monitoring/platform/generate_metrics.md
index c7560cd5a4670..9c24a10fd0593 100644
--- a/content/en/real_user_monitoring/platform/generate_metrics.md
+++ b/content/en/real_user_monitoring/platform/generate_metrics.md
@@ -53,14 +53,14 @@ To create a custom metric from a search query in the [RUM Explorer][5], click th
5. Select a path to group by from the dropdown menu next to **group by**. The metric tag name is the original attribute or tag name without the `@`. By default, custom metrics generated from RUM events do not contain tags unless they are explicitly added. You can use an attribute or tag dimension that exists in your RUM events such as `@error.source` or `env` to create metric tags.
-RUM-based custom metrics are considered custom metrics and billed accordingly. Avoid grouping by unbounded or extremely high-cardinality attributes such as timestamps, user IDs, request IDs, and session IDs.
+RUM-based custom metrics are considered custom metrics and billed accordingly. Avoid grouping by unbounded or extremely high-cardinality attributes such as timestamps, user IDs, request IDs, and session IDs.
6. For custom metrics created on sessions and views, select **The active session/view starts matching the query** or **The session/view becomes inactive or is completed** to set the matching criteria for sessions and views. For more information, see [Add a RUM-based metric on sessions and views](#add-a-rum-based-metric-on-sessions-and-views).
7. Add percentile aggregations for distribution metrics. You can opt-in for advanced query functionality and use globally accurate percentiles (such as P50, P75, P90, P95, and P99).
-Enabling advanced query functionality with percentiles generates more custom metrics and is billed accordingly.
+Enabling advanced query functionality with percentiles generates more custom metrics and is billed accordingly.
8. Click **Create Metric**.
diff --git a/content/en/real_user_monitoring/rum_without_limits/_index.md b/content/en/real_user_monitoring/rum_without_limits/_index.md
index c4e495eaa873f..23cdf959803f4 100644
--- a/content/en/real_user_monitoring/rum_without_limits/_index.md
+++ b/content/en/real_user_monitoring/rum_without_limits/_index.md
@@ -45,7 +45,7 @@ To get started with RUM without Limits for new applications, at the [instrumenta
4. Enable `traceContextInjection: sampled` to allow backend tracing libraries to make their own sampling decisions for sessions where the RUM SDK decides not to keep the trace.
-Steps 1, 3, and 4 may impact your APM trace ingestion. To ensure that ingested span volumes remain stable, configure the `traceSampleRate` to the previously configured `sessionSampleRate`. For instance, if you used to have `sessionSampleRate` set to 10% and you bump it to 100% for RUM without Limits, decrease the `traceSampleRate` from 100% to 10% accordingly to ingest the same amount of traces.
+Steps 1, 3, and 4 may impact your APM trace ingestion. To ensure that ingested span volumes remain stable, configure the `traceSampleRate` to the previously configured `sessionSampleRate`. For instance, if you used to have `sessionSampleRate` set to 10% and you bump it to 100% for RUM without Limits, decrease the `traceSampleRate` from 100% to 10% accordingly to ingest the same amount of traces.
5. Deploy your application to apply the configuration.
diff --git a/content/en/real_user_monitoring/session_replay/browser/_index.md b/content/en/real_user_monitoring/session_replay/browser/_index.md
index ee9c90599eb63..1d2dd49a89737 100644
--- a/content/en/real_user_monitoring/session_replay/browser/_index.md
+++ b/content/en/real_user_monitoring/session_replay/browser/_index.md
@@ -71,7 +71,7 @@ if (user.isAuthenticated) {
To stop the Session Replay recording, call `stopSessionReplayRecording()`.
-When using a version of the RUM Browser SDK older than v5.0.0, Session Replay recording does not begin automatically. Call `startSessionReplayRecording()` to begin recording.
+When using a version of the RUM Browser SDK older than v5.0.0, Session Replay recording does not begin automatically. Call `startSessionReplayRecording()` to begin recording.
## Force Session Replay
@@ -81,13 +81,13 @@ To force Session Replay recording for the rest of the current session, call `sta
When using the force option, the session is upgraded to a replayed session for the remainder of its duration, regardless of its initial sampling decision.
-The force option only upgrades an existing session to a replayed one if it is already being sampled. In other words, if sampling hasn't started yet, using the force option does not initiate one, and no replay is recorded.
+The force option only upgrades an existing session to a replayed one if it is already being sampled. In other words, if sampling hasn't started yet, using the force option does not initiate one, and no replay is recorded.
## Disable Session Replay
To stop session recordings, set `sessionReplaySampleRate` to `0`. This stops collecting data for the [Browser RUM & Session Replay plan][6].
-If you're using a version of the RUM Browser SDK prior to v5.0.0, set `replaySampleRate` to `0`.
+If you're using a version of the RUM Browser SDK prior to v5.0.0, set `replaySampleRate` to `0`.
## Retention
diff --git a/content/en/real_user_monitoring/session_replay/browser/privacy_options.md b/content/en/real_user_monitoring/session_replay/browser/privacy_options.md
index 2403bd383b121..af2a98006f4e6 100644
--- a/content/en/real_user_monitoring/session_replay/browser/privacy_options.md
+++ b/content/en/real_user_monitoring/session_replay/browser/privacy_options.md
@@ -22,7 +22,7 @@ By enabling Session Replay, you can automatically mask sensitive elements from b
## Configuration
-`defaultPrivacyLevel` and `mask-user-input` are available in the SDK v3.6.0+.
+`defaultPrivacyLevel` and `mask-user-input` are available in the SDK v3.6.0+.
To enable your privacy settings, set `defaultPrivacyLevel` to `mask`, `mask-user-input`, or `allow` in your JavaScript configuration.
diff --git a/content/en/reference_tables/_index.md b/content/en/reference_tables/_index.md
index 718f03946f0a7..a349385a57404 100644
--- a/content/en/reference_tables/_index.md
+++ b/content/en/reference_tables/_index.md
@@ -131,7 +131,7 @@ For more information, see the [Azure integration documentation][4].
### Google Cloud storage
{{% site-region region="gov" %}}
-Reference Tables are not available for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+Reference Tables are not available for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{% /site-region %}}
1. If you have not set up a Google Cloud integration with Datadog or you are using legacy Google project ID files (legacy projects are indicated in your GCP integration tile), follow the instructions for setting up the [Google Cloud Platform integration][1]. This involves creating a [Google Cloud service account][2].
diff --git a/content/en/security/application_security/setup/dotnet/linux.md b/content/en/security/application_security/setup/dotnet/linux.md
index 2504ff074e7f7..dd79ea411fe2b 100644
--- a/content/en/security/application_security/setup/dotnet/linux.md
+++ b/content/en/security/application_security/setup/dotnet/linux.md
@@ -44,7 +44,7 @@ Install the Datadog Agent by following the [setup instructions for Linux hosts][
**Download and install** the latest *Datadog .NET Tracer package* that supports your operating system and architecture.
-
+
Note on version: replace `<TRACER_VERSION>` with the latest three-component version of the library (for example, 3.21.0).
@@ -62,7 +62,7 @@ sudo tar -C /opt/datadog -xzf datadog-dotnet-apm-<TRACER_VERSION>.tar.gz && /opt
**Download and install** the latest *Datadog .NET Tracer package* that supports your operating system and architecture.
-
+
Note on version: replace `<TRACER_VERSION>` with the latest three-component version of the library (for example, 3.21.0).
@@ -79,7 +79,7 @@ sudo tar -C /opt/datadog -xzf datadog-dotnet-apm-<TRACER_VERSION>.arm64.tar.gz &
{{% /tab %}}
{{< /tabs >}}
-
+
If you are having issues installing the Tracer library, check the [Tracer Installation guide][5].
*Note on version:* replace `<TRACER_VERSION>` with the latest three-component version of the library (for example, 3.21.0).
diff --git a/content/en/security/application_security/setup/dotnet/windows.md b/content/en/security/application_security/setup/dotnet/windows.md
index 7c911128e6091..abe43141abcfd 100644
--- a/content/en/security/application_security/setup/dotnet/windows.md
+++ b/content/en/security/application_security/setup/dotnet/windows.md
@@ -73,7 +73,7 @@ net start w3svc
{{% /tab %}}
{{% tab "Standalone apps *(.NET Framework)*" %}}
-
+
Note: The .NET runtime tries to load the .NET library into any .NET process that is started with these environment variables set. You should limit instrumentation to only the applications that need to be instrumented. Don't set these environment variables globally, as this causes all .NET processes on the host to be instrumented.
@@ -86,7 +86,7 @@ Set the following required environment variables for automatic instrumentation t
{{% /tab %}}
{{% tab "Standalone apps *(.NET Core)*" %}}
-
+
Note: The .NET runtime tries to load the .NET library into any .NET process that is started with these environment variables set. You should limit instrumentation to only the applications that need to be instrumented. Don't set these environment variables globally, as this causes all .NET processes on the host to be instrumented.
diff --git a/content/en/security/application_security/setup/gateway-api.md b/content/en/security/application_security/setup/gateway-api.md
index b73038c381a38..d83616237a69e 100644
--- a/content/en/security/application_security/setup/gateway-api.md
+++ b/content/en/security/application_security/setup/gateway-api.md
@@ -15,7 +15,7 @@ further_reading:
text: "Troubleshooting App and API Protection"
---
-
+
AAP for Gateway API is experimental. Follow the instructions below to try it out.
diff --git a/content/en/security/application_security/setup/gcp/service-extensions.md b/content/en/security/application_security/setup/gcp/service-extensions.md
index 1d1e0bd7872b5..272e08cf93bae 100644
--- a/content/en/security/application_security/setup/gcp/service-extensions.md
+++ b/content/en/security/application_security/setup/gcp/service-extensions.md
@@ -85,7 +85,7 @@ To set up the App and API Protection Service Extension in GCP, use the Google Cl
5. Optionally, enable the `fail_open` setting to allow traffic to pass through if the service extension fails or times out.
-
+
Note: By default, if the service extension fails or times out, the proxy returns a 5xx error. To prevent this, enable the `fail_open` setting. When enabled, request or response processing continues without error even if the extension fails, ensuring your application remains available.
@@ -497,7 +497,7 @@ Configure the container to send traces to your Datadog Agent using the following
The App and API Protection GCP Service Extensions integration is built on top of the [Datadog Go Tracer][6] and inherits all of its environment variables. See [Configuring the Go Tracing Library][7] and [App and API Protection Library Configuration][8].
-
+
Note: As the App and API Protection GCP Service Extensions integration is built on top of the Datadog Go Tracer, it generally follows the same release process as the tracer, and its Docker images are tagged with the corresponding tracer version (for example, `v2.2.2`). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as `-docker.1`.
diff --git a/content/en/security/application_security/setup/go/setup.md b/content/en/security/application_security/setup/go/setup.md
index 0bb6793ce6b7a..a53ff3553b150 100644
--- a/content/en/security/application_security/setup/go/setup.md
+++ b/content/en/security/application_security/setup/go/setup.md
@@ -147,7 +147,7 @@ If you are building your Go application without [CGO][9], you can still enable A
$ CGO_ENABLED=0 orchestrion go build -tags appsec my-program
```
-
Disabling CGO usually guarantees a statically-linked binary. This will not be the case here.
+
Disabling CGO usually guarantees a statically-linked binary. This will not be the case here.
2. Install `libc.so.6`, `libpthread.so.0` and `libdl.so.2` on your system, as these libraries are required by the Datadog WAF:
This installation can be done by installing the `glibc` package on your system with your package manager. See [Creating a Dockerfile for App and API Protection for Go][3].
diff --git a/content/en/security/application_security/setup/istio.md b/content/en/security/application_security/setup/istio.md
index 4dfa3090e708e..c9b530cd48dc0 100644
--- a/content/en/security/application_security/setup/istio.md
+++ b/content/en/security/application_security/setup/istio.md
@@ -137,7 +137,7 @@ Configure the connection from the external processor to the Datadog Agent using
The External Processor is built on top of the [Datadog Go Tracer][7] and inherits all of its environment variables. See [Configuring the Go Tracing Library][8] and [App and API Protection Library Configuration][9].
-
+
Note: As the Datadog External Processor is built on top of the Datadog Go Tracer, it generally follows the same release process as the tracer, and its Docker images are tagged with the corresponding tracer version (for example, `v2.2.2`). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as `-docker.1`.
diff --git a/content/en/security/application_security/setup/nginx/kubernetes.md b/content/en/security/application_security/setup/nginx/kubernetes.md
index 8bcf50ae842eb..0d1b0b4002266 100644
--- a/content/en/security/application_security/setup/nginx/kubernetes.md
+++ b/content/en/security/application_security/setup/nginx/kubernetes.md
@@ -59,7 +59,7 @@ accessible by the main ingress-nginx container.
When the main ingress-nginx controller starts, the nginx configuration must be updated with the `load_module` directive,
allowing it to load the Datadog module seamlessly.
-
+
We provide a specific init container **for each ingress-nginx controller version**, starting with `v1.10.0`. This is crucial because **each** init container must match the underlying nginx version. To ensure compatibility, make sure the version of the Datadog init container matches your ingress-nginx version.
diff --git a/content/en/security/application_security/setup/python/compatibility.md b/content/en/security/application_security/setup/python/compatibility.md
index bd865af29a265..a53deecc045dd 100644
--- a/content/en/security/application_security/setup/python/compatibility.md
+++ b/content/en/security/application_security/setup/python/compatibility.md
@@ -21,7 +21,7 @@ The following App and API Protection capabilities are supported in the Python li
{{< partial name="app_and_api_protection/python/capabilities.html" >}}
-
Datadog strongly encourages you to always use the latest stable release of the tracer.
+
Datadog strongly encourages you to always use the latest stable release of the tracer.
Threat Protection requires enabling [Remote Configuration][2], which is included in the listed minimum tracer version.
diff --git a/content/en/security/cloud_security_management/_index.md b/content/en/security/cloud_security_management/_index.md
index 5947c3d813c67..0257b879132fe 100644
--- a/content/en/security/cloud_security_management/_index.md
+++ b/content/en/security/cloud_security_management/_index.md
@@ -101,7 +101,7 @@ To get more detail, use [Findings][7] to review and remediate your organization'
{{< img src="security/csm/security_graph.png" alt="Security Graph displaying an example EC2 instance" width="100%">}}
- Use the [Resource Catalog][12] to view specific misconfigurations and threats that have been reported on the hosts and resources in your environments. For more information, see the [Resource Catalog][13] documentation.
{{< site-region region="gov" >}}
-
Resource Catalog is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+
Resource Catalog is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}}
{{< img src="infrastructure/resource_catalog/resource_catalog_infra_3.png" alt="Resource Catalog map view displaying host and cloud resources grouped by category and misconfigurations." style="width:100%;" >}}
- Use the [Cloudcraft Security Map][21] to visualize your resources and any misconfigurations, vulnerabilities, identity risks, or sensitive data associated with them. For more information on these overlays, see the [Cloudcraft overlay][22] documentation.
diff --git a/content/en/security/cloud_security_management/guide/active-protection.md b/content/en/security/cloud_security_management/guide/active-protection.md
index 64d88fa63a0ef..941007e8ce8c7 100644
--- a/content/en/security/cloud_security_management/guide/active-protection.md
+++ b/content/en/security/cloud_security_management/guide/active-protection.md
@@ -6,7 +6,7 @@ further_reading:
text: "Workload Protection Detection Rules"
---
-
+
Workload Protection Active Protection is in Preview.
diff --git a/content/en/security/cloud_security_management/misconfigurations/frameworks_and_benchmarks/custom_frameworks.md b/content/en/security/cloud_security_management/misconfigurations/frameworks_and_benchmarks/custom_frameworks.md
index 5b8bdce2692e7..c05da7e067cc7 100644
--- a/content/en/security/cloud_security_management/misconfigurations/frameworks_and_benchmarks/custom_frameworks.md
+++ b/content/en/security/cloud_security_management/misconfigurations/frameworks_and_benchmarks/custom_frameworks.md
@@ -29,7 +29,7 @@ With custom frameworks, you can define and measure compliance against your own c
Next, add requirements to the framework:
-
You must add at least one requirement, control, and rule before saving the custom framework.
+
You must add at least one requirement, control, and rule before saving the custom framework.
1. Click **Add Requirement**.
1. Enter the following details:
diff --git a/content/en/security/cloud_security_management/security_graph.md b/content/en/security/cloud_security_management/security_graph.md
index c6db6fd40a97f..dc686978eb3f0 100644
--- a/content/en/security/cloud_security_management/security_graph.md
+++ b/content/en/security/cloud_security_management/security_graph.md
@@ -11,7 +11,7 @@ further_reading:
{{< /callout >}}
{{< site-region region="gov" >}}
-
Security Graph is not available in the selected site ({{< region-param key="dd_site_name" >}}).
+
Security Graph is not available in the selected site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}}
One of the most persistent challenges in cloud security is understanding how compute, storage, identity, and networking components interact with each other. With Security Graph, you can model your cloud environment as a relationship graph. Visualize and query the connections between your cloud resources, such as EC2 instances, IAM roles, S3 buckets, and security groups, combining data from your Agentless and Agent-based cloud scans. Investigate these relationships so you can surface indirect access paths, assess identity risks, and respond more effectively to emerging threats.
diff --git a/content/en/security/cloud_security_management/setup/agentless_scanning/enable.md b/content/en/security/cloud_security_management/setup/agentless_scanning/enable.md
index 97bd347bac618..b4a51fb724edf 100644
--- a/content/en/security/cloud_security_management/setup/agentless_scanning/enable.md
+++ b/content/en/security/cloud_security_management/setup/agentless_scanning/enable.md
@@ -71,7 +71,7 @@ Before setting up Agentless Scanning, ensure the following prerequisites are met
## Setup
-
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
+
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
To enable Agentless Scanning, use one of the following workflows:
@@ -86,9 +86,9 @@ Designed for new users, the quick start workflow offers an efficient setup proce
For existing users who want to add a new AWS account or enable Agentless Scanning on an existing integrated AWS account, see the instructions for Terraform or AWS CloudFormation.
-
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
+
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
-
Sensitive Data Scanner for cloud storage is in Limited Availability. Request Access to enroll.
+
Sensitive Data Scanner for cloud storage is in Limited Availability. Request Access to enroll.
##### Installation
@@ -220,9 +220,9 @@ If you've already [set up Cloud Security][10] and want to add a new cloud accoun
If you're setting up Cloud Security for the first time, you can follow the quick start workflow, which also uses AWS CloudFormation to enable Agentless Scanning.
-
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
+
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
-
Sensitive Data Scanner for cloud storage is in Limited Availability. Request Access to enroll.
+
Sensitive Data Scanner for cloud storage is in Limited Availability. Request Access to enroll.
##### Set up AWS CloudFormation
@@ -288,7 +288,7 @@ Use the Azure Resource Manager template to deploy the Agentless Scanner. The tem
{{% collapse-content title="Azure Resource Manager setup guide" level="h4" id="azure-resource-manager-setup" %}}
If you've already [set up Cloud Security][10] and want to add a new Azure subscription or enable [Agentless Scanning][1] on an existing integrated Azure subscription, you can use either [Terraform][7] or Azure Resource Manager. This article provides detailed instructions for the Azure Resource Manager approach.
-
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
+
Running Agentless scanners incurs additional costs. To optimize these costs while still ensuring reliable 12-hour scans, Datadog recommends setting up Agentless Scanning with Terraform as the default template.
{{< tabs >}}
{{% tab "New Azure subscription" %}}
diff --git a/content/en/security/cloud_siem/detect_and_monitor/historical_jobs.md b/content/en/security/cloud_siem/detect_and_monitor/historical_jobs.md
index fe0e459fe29e9..de236d2a2c86e 100644
--- a/content/en/security/cloud_siem/detect_and_monitor/historical_jobs.md
+++ b/content/en/security/cloud_siem/detect_and_monitor/historical_jobs.md
@@ -9,7 +9,7 @@ further_reading:
---
{{% site-region region="gov" %}}
-
This feature is not supported for the US1-FED site.
+
This feature is not supported for the US1-FED site.
{{% /site-region %}}
Historical Jobs allows you to backtest detections by running them against historical logs stored in Datadog Cloud SIEM.
diff --git a/content/en/security/cloud_siem/detect_and_monitor/mitre_attack_map.md b/content/en/security/cloud_siem/detect_and_monitor/mitre_attack_map.md
index 20d69ddcd0642..2ae6cf6e9c5cb 100644
--- a/content/en/security/cloud_siem/detect_and_monitor/mitre_attack_map.md
+++ b/content/en/security/cloud_siem/detect_and_monitor/mitre_attack_map.md
@@ -15,7 +15,7 @@ further_reading:
## Overview
-
+
The MITRE ATT&CK Framework is a knowledge base used to develop specific threat models and methodologies. Use the Cloud SIEM MITRE ATT&CK Map to explore and visualize the MITRE ATT&CK Framework against Datadog's out-of-the-box rules and your custom detection rules. The MITRE ATT&CK Map displays detection rule density as a heat map to provide visibility into attacker techniques. Your security teams can use the heat map to assess gaps in coverage that are relevant to their organization or team and prioritize improvements to their detection rule defenses.
diff --git a/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md b/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md
index af3b411e93758..a9ec4541da46a 100644
--- a/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md
+++ b/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md
@@ -31,7 +31,7 @@ Use [Google Cloud Dataflow][2] and the [Datadog template][3] to forward logs fro
1. [Create and run the Dataflow job](#create-and-run-the-dataflow-job)
1. [Use Cloud SIEM to triage Security Signals](#use-cloud-siem-to-triage-security-signals)
-
+
Collecting Google Cloud logs with a Pub/Sub Push subscription is in the process of being deprecated for the following reasons:
diff --git a/content/en/security/code_security/iac_security/iac_rules/_index.md b/content/en/security/code_security/iac_security/iac_rules/_index.md
index 17a56a4c5f3b8..e25c25f994f2d 100644
--- a/content/en/security/code_security/iac_security/iac_rules/_index.md
+++ b/content/en/security/code_security/iac_security/iac_rules/_index.md
@@ -13,7 +13,7 @@ further_reading:
---
{{% site-region region="gov" %}}
-
This product is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+
This product is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{% /site-region %}}
[Infrastructure as Code (IaC) Security][1] identifies misconfigurations and security risks in infrastructure-as-code files before deployment, helping ensure that cloud environments remain secure and compliant.
diff --git a/content/en/security/code_security/secret_scanning/_index.md b/content/en/security/code_security/secret_scanning/_index.md
index 94f4865277057..0da892239480a 100644
--- a/content/en/security/code_security/secret_scanning/_index.md
+++ b/content/en/security/code_security/secret_scanning/_index.md
@@ -11,7 +11,7 @@ Secret Scanning is in Preview. Contact your Customer Success Manager to get acce
{{< /callout >}}
{{% site-region region="gov" %}}
-
+
Secret Scanning is not available for the {{< region-param key="dd_site_name" >}} site.
{{% /site-region %}}
diff --git a/content/en/security/code_security/secret_scanning/generic_ci_providers.md b/content/en/security/code_security/secret_scanning/generic_ci_providers.md
index e0bf93cd58a36..d3792370bd273 100644
--- a/content/en/security/code_security/secret_scanning/generic_ci_providers.md
+++ b/content/en/security/code_security/secret_scanning/generic_ci_providers.md
@@ -7,7 +7,7 @@ algolia:
---
{{% site-region region="gov" %}}
-
+
Secret Scanning is not available for the {{< region-param key="dd_site_name" >}} site.
{{% /site-region %}}
diff --git a/content/en/security/code_security/software_composition_analysis/setup_static/_index.md b/content/en/security/code_security/software_composition_analysis/setup_static/_index.md
index 242afb38c0748..7e46ab1666168 100644
--- a/content/en/security/code_security/software_composition_analysis/setup_static/_index.md
+++ b/content/en/security/code_security/software_composition_analysis/setup_static/_index.md
@@ -92,7 +92,7 @@ See the [GitLab source code setup instructions][1] to connect GitLab to Datadog.
{{% /tab %}}
{{% tab "Azure DevOps" %}}
-
+
Repositories from Azure DevOps are supported in closed Preview. Your Azure DevOps organizations must be connected to a Microsoft Entra tenant. Join the Preview.
diff --git a/content/en/security/code_security/static_analysis/_index.md b/content/en/security/code_security/static_analysis/_index.md
index e1feb16123e25..1d0e65211b266 100644
--- a/content/en/security/code_security/static_analysis/_index.md
+++ b/content/en/security/code_security/static_analysis/_index.md
@@ -9,7 +9,7 @@ algolia:
---
{{% site-region region="gov" %}}
-
+
Code Security is not available for the {{< region-param key="dd_site_name" >}} site.
{{% /site-region %}}
diff --git a/content/en/security/code_security/static_analysis/custom_rules/_index.md b/content/en/security/code_security/static_analysis/custom_rules/_index.md
index cd1098961f097..8e805a1238b64 100644
--- a/content/en/security/code_security/static_analysis/custom_rules/_index.md
+++ b/content/en/security/code_security/static_analysis/custom_rules/_index.md
@@ -7,7 +7,7 @@ algolia:
---
{{% site-region region="gov" %}}
-
+
Code Security is not available for the {{< region-param key="dd_site_name" >}} site.
{{% /site-region %}}
@@ -56,7 +56,7 @@ To get a captured node, use the `captures` attribute of the first argument of th
- `end`: end position of the node. The position contains `line` and `col` attributes.
- `text`: the content of the node.
-
`line` and `col` attributes start at 1. Any result with `line` or `col` set to `0` is ignored.
+
`line` and `col` attributes start at 1. Any result with `line` or `col` set to `0` is ignored.
```javascript
function visit(node, filename, code) {
diff --git a/content/en/security/code_security/static_analysis/generic_ci_providers.md b/content/en/security/code_security/static_analysis/generic_ci_providers.md
index 48b285b548e8c..c2bfed6495d40 100644
--- a/content/en/security/code_security/static_analysis/generic_ci_providers.md
+++ b/content/en/security/code_security/static_analysis/generic_ci_providers.md
@@ -11,7 +11,7 @@ algolia:
---
{{% site-region region="gov" %}}
-
+
Code Analysis is not available for the {{< region-param key="dd_site_name" >}} site.
{{% /site-region %}}
diff --git a/content/en/security/code_security/static_analysis/github_actions.md b/content/en/security/code_security/static_analysis/github_actions.md
index a43307abfa234..17a79901bdd79 100644
--- a/content/en/security/code_security/static_analysis/github_actions.md
+++ b/content/en/security/code_security/static_analysis/github_actions.md
@@ -41,7 +41,7 @@ You **must** set your Datadog API and application keys as [secrets in your GitHu
Make sure to replace `dd_site` with the [Datadog site you are using][3].
-
+
Running a Datadog Static Code Analysis job as an action only supports the `push` event trigger. Other event triggers (`pull_request`, etc.) are not supported and can cause issues with the product.
diff --git a/content/en/security/code_security/static_analysis/setup/_index.md b/content/en/security/code_security/static_analysis/setup/_index.md
index d81f9561c33f5..b0b616c5e58c9 100644
--- a/content/en/security/code_security/static_analysis/setup/_index.md
+++ b/content/en/security/code_security/static_analysis/setup/_index.md
@@ -11,7 +11,7 @@ algolia:
---
{{% site-region region="gov" %}}
-
+
Code Security is not available for the {{< region-param key="dd_site_name" >}} site.
{{% /site-region %}}
@@ -71,7 +71,7 @@ See the [GitLab source code setup instructions][1] to connect GitLab to Datadog.
{{% /tab %}}
{{% tab "Azure DevOps" %}}
-
+
Repositories from Azure DevOps are supported in closed Preview. Your Azure DevOps organizations must be connected to a Microsoft Entra tenant. Join the Preview.
@@ -147,7 +147,7 @@ There are three levels of configuration:
* Repo Level Configuration (Datadog)
* Repo Level Configuration (Repo File)
-
+
By default, when no configuration is defined at the org or repo level, Datadog uses a default configuration with all default rules enabled. If you define an org-level configuration without default rules, default rules are not used. If you want to use default rules in this scenario, you must enable them.
diff --git a/content/en/security/code_security/static_analysis/static_analysis_rules/_index.md b/content/en/security/code_security/static_analysis/static_analysis_rules/_index.md
index 4de7c6c5a543f..ce179e300dd5b 100644
--- a/content/en/security/code_security/static_analysis/static_analysis_rules/_index.md
+++ b/content/en/security/code_security/static_analysis/static_analysis_rules/_index.md
@@ -258,7 +258,7 @@ further_reading:
---
{{% site-region region="gov" %}}
-
+
Code Security is not available for the {{< region-param key="dd_site_name" >}} site.
{{% /site-region %}}
diff --git a/content/en/security/notifications/variables.md b/content/en/security/notifications/variables.md
index 9d3a65be769d2..1c26cdf1f0d5b 100644
--- a/content/en/security/notifications/variables.md
+++ b/content/en/security/notifications/variables.md
@@ -93,7 +93,7 @@ The result is displayed in the ISO 8601 format: `yyyy-MM-dd HH:mm:ss±HH:mm`, fo
## Attribute variables
-
+
HIPAA-enabled Datadog organizations have access to only template variables for security notifications. Attribute variables are not supported.
diff --git a/content/en/security/threats/security_signals.md b/content/en/security/threats/security_signals.md
index aae7838a83ffe..e57f08f4df532 100644
--- a/content/en/security/threats/security_signals.md
+++ b/content/en/security/threats/security_signals.md
@@ -37,7 +37,7 @@ You can triage a signal by assigning it to a user for further investigation. The
## Create a case
{{< site-region region="gov" >}}
-
Case Management is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+
Case Management is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}}
Use [Case Management][6] to track, triage, and investigate security signals.
diff --git a/content/en/security/workload_protection/guide/active-protection.md b/content/en/security/workload_protection/guide/active-protection.md
index 40fcb2538a7f7..c19af94ca4cf0 100644
--- a/content/en/security/workload_protection/guide/active-protection.md
+++ b/content/en/security/workload_protection/guide/active-protection.md
@@ -8,7 +8,7 @@ further_reading:
text: "Workload Protection Detection Rules"
---
-
+
Workload Protection Active Protection is in Preview.
diff --git a/content/en/security/workload_protection/security_signals.md b/content/en/security/workload_protection/security_signals.md
index 76bcdf68e0ef1..c9ac77b051c0e 100644
--- a/content/en/security/workload_protection/security_signals.md
+++ b/content/en/security/workload_protection/security_signals.md
@@ -43,7 +43,7 @@ You can triage a signal by assigning it to a user for further investigation. The
## Create a case
{{< site-region region="gov" >}}
-
Case Management is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+
Case Management is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}}
Use [Case Management][6] to track, triage, and investigate security signals.
diff --git a/content/en/serverless/aws_lambda/fips-compliance.md b/content/en/serverless/aws_lambda/fips-compliance.md
index 21f94a95a4af4..d402349e37a1b 100644
--- a/content/en/serverless/aws_lambda/fips-compliance.md
+++ b/content/en/serverless/aws_lambda/fips-compliance.md
@@ -13,7 +13,7 @@ algolia:
---
{{< site-region region="us,us3,us5,eu,ap1,ap2" >}}
-
The FIPS-compliant Datadog Lambda extension is available in all AWS regions. While you can use these FIPS-compliant Lambda components with any Datadog site, end-to-end FIPS compliance requires sending data to the US1-FED site (ddog-gov.com).
+
The FIPS-compliant Datadog Lambda extension is available in all AWS regions. While you can use these FIPS-compliant Lambda components with any Datadog site, end-to-end FIPS compliance requires sending data to the US1-FED site (ddog-gov.com).
{{< /site-region >}}
Datadog provides FIPS-compliant monitoring for AWS Lambda functions through the use of FIPS-certified cryptographic modules and specially designed Lambda extension layers.
diff --git a/content/en/serverless/aws_lambda/instrumentation/go.md b/content/en/serverless/aws_lambda/instrumentation/go.md
index e244219a0b448..b76e565484e11 100644
--- a/content/en/serverless/aws_lambda/instrumentation/go.md
+++ b/content/en/serverless/aws_lambda/instrumentation/go.md
@@ -16,7 +16,7 @@ aliases:
- /serverless/aws_lambda/installation/go
---
-
If your Go Lambda functions are still using the `go1.x` runtime and you cannot migrate to the `provided.al2` runtime, you must instrument using the Datadog Forwarder. Otherwise, follow the instructions in this guide to instrument using the Datadog Lambda Extension.
+
If your Go Lambda functions are still using the `go1.x` runtime and you cannot migrate to the `provided.al2` runtime, you must instrument using the Datadog Forwarder. Otherwise, follow the instructions in this guide to instrument using the Datadog Lambda Extension.
Version 67+ of the Datadog Lambda Extension is optimized to significantly reduce cold start duration. Read more.
diff --git a/content/en/serverless/aws_lambda/instrumentation/nodejs.md b/content/en/serverless/aws_lambda/instrumentation/nodejs.md
index 67a603f9ae564..b5e02355a095d 100644
--- a/content/en/serverless/aws_lambda/instrumentation/nodejs.md
+++ b/content/en/serverless/aws_lambda/instrumentation/nodejs.md
@@ -431,7 +431,7 @@ To configure Datadog using SST v3, follow these steps:
{{% /tab %}}
{{< /tabs >}}
-
Do not install the Datadog Lambda Library as a layer and as a JavaScript package. If you installed the Datadog Lambda Library as a layer, do not include `datadog-lambda-js` in your `package.json`, or install it as a dev dependency and run `npm install --production` before deploying.
+
Do not install the Datadog Lambda Library as a layer and as a JavaScript package. If you installed the Datadog Lambda Library as a layer, do not include `datadog-lambda-js` in your `package.json`, or install it as a dev dependency and run `npm install --production` before deploying.
## FIPS compliance
diff --git a/content/en/serverless/aws_lambda/instrumentation/ruby.md b/content/en/serverless/aws_lambda/instrumentation/ruby.md
index 7138d5178d3cb..25afa32c61272 100644
--- a/content/en/serverless/aws_lambda/instrumentation/ruby.md
+++ b/content/en/serverless/aws_lambda/instrumentation/ruby.md
@@ -19,7 +19,7 @@ aliases:
- /serverless/aws_lambda/installation/ruby
---
-
If you previously set up your Lambda functions using the Datadog Forwarder, see instrumenting using the Datadog Forwarder. Otherwise, follow the instructions in this guide to instrument using the Datadog Lambda Extension.
+
If you previously set up your Lambda functions using the Datadog Forwarder, see instrumenting using the Datadog Forwarder. Otherwise, follow the instructions in this guide to instrument using the Datadog Lambda Extension.
Version 67+ of the Datadog Lambda Extension is optimized to significantly reduce cold start duration. Read more.
diff --git a/content/en/serverless/aws_lambda/profiling.md b/content/en/serverless/aws_lambda/profiling.md
index 404f37a24067b..e2fc2ce514c2e 100644
--- a/content/en/serverless/aws_lambda/profiling.md
+++ b/content/en/serverless/aws_lambda/profiling.md
@@ -13,7 +13,7 @@ further_reading:
Datadog's [Continuous Profiler][1] for AWS Lambda functions gives you visibility into the exact method name, class name, and line number in your Lambda code that is causing CPU or I/O bottlenecks.
-
+
Continuous Profiler for AWS Lambda is in Preview.
diff --git a/content/en/serverless/azure_app_service/linux_container.md b/content/en/serverless/azure_app_service/linux_container.md
index 1018021b555a8..5013bb9f6b557 100644
--- a/content/en/serverless/azure_app_service/linux_container.md
+++ b/content/en/serverless/azure_app_service/linux_container.md
@@ -232,7 +232,7 @@ Additional flags, like `--service` and `--env`, can be used to set the service a
{{% /tab %}}
{{% tab "Terraform" %}}
-
Because the Azure Web App for Containers resource does not directly support sitecontainers, you should expect drift in your configuration.
+
Because the Azure Web App for Containers resource does not directly support sitecontainers, you should expect drift in your configuration.
The [Datadog Terraform module for Linux Web Apps][1] wraps the [azurerm_linux_web_app][2] resource and automatically configures your Web App for Datadog Serverless Monitoring by adding required environment variables and the serverless-init sidecar.
diff --git a/content/en/serverless/azure_app_service/windows_code.md b/content/en/serverless/azure_app_service/windows_code.md
index ecb027578e6cb..c8bb2f05e2d31 100644
--- a/content/en/serverless/azure_app_service/windows_code.md
+++ b/content/en/serverless/azure_app_service/windows_code.md
@@ -176,14 +176,14 @@ The [Datadog Windows Web App module][2] only deploys the Web App resource and ex
3. Click **Save**. This restarts your application.
4. Stop your application by clicking **Stop**.
-
You must stop your application to successfully install Datadog.
+
You must stop your application to successfully install Datadog.
5. In your Azure Portal, navigate to the **Extensions** page and select the Datadog APM extension.
{{< img src="infrastructure/serverless/azure_app_services/choose_extension.png" alt="Example of Extensions page in Azure portal, showing .NET Datadog APM extension." style="width:100%;" >}}
6. Accept the legal terms, click **OK**, and wait for the installation to complete.
-
This step requires that your application be in a stopped state.
+
This step requires that your application be in a stopped state.
7. Start the main application by clicking **Start**:
@@ -519,7 +519,7 @@ Many organizations use [Azure Resource Management (ARM) templates](https://docs.
{{% /tab %}}
{{% tab "Java" %}}
-
Support for Java Web Apps is in Preview for extension v2.4+. Programmatic management is not available for Java Web Apps.
+
Support for Java Web Apps is in Preview for extension v2.4+. Programmatic management is not available for Java Web Apps.
Interested in support for other App Service resource types or runtimes? Sign up to be notified when a Preview becomes available.
{{% /tab %}}
diff --git a/content/en/serverless/google_cloud_run/functions_1st_gen.md b/content/en/serverless/google_cloud_run/functions_1st_gen.md
index 9ed5bc234dfe7..ff1d74ed0827c 100644
--- a/content/en/serverless/google_cloud_run/functions_1st_gen.md
+++ b/content/en/serverless/google_cloud_run/functions_1st_gen.md
@@ -6,7 +6,7 @@ title: Instrumenting 1st Gen Cloud Run Functions
This page explains how to collect traces, trace metrics, runtime metrics, and custom metrics from your Cloud Run functions (1st gen), previously known as Cloud Functions.
-
+
Migrating to 2nd gen Cloud Run functions
Datadog recommends using 2nd gen Cloud Run functions, which offer improved performance, better scaling, and better monitoring with Datadog.
diff --git a/content/en/serverless/guide/azure_app_service_linux_code_wrapper_script.md b/content/en/serverless/guide/azure_app_service_linux_code_wrapper_script.md
index 8f9df399d1407..968d157b74f1d 100644
--- a/content/en/serverless/guide/azure_app_service_linux_code_wrapper_script.md
+++ b/content/en/serverless/guide/azure_app_service_linux_code_wrapper_script.md
@@ -6,7 +6,7 @@ further_reading:
text: "Monitor your Linux web apps on Azure App Service with Datadog"
---
-
+
The AAS Linux Wrapper is now deprecated. It will continue to receive layer bumps but no new features. It will be retired on January 1, 2026, at which point no further updates will be provided. Datadog strongly recommends switching to the sidecar instrumentation method as soon as possible.
diff --git a/content/en/serverless/guide/datadog_forwarder_dotnet.md b/content/en/serverless/guide/datadog_forwarder_dotnet.md
index 1b19e155c3961..0417c87cfb88a 100644
--- a/content/en/serverless/guide/datadog_forwarder_dotnet.md
+++ b/content/en/serverless/guide/datadog_forwarder_dotnet.md
@@ -4,7 +4,7 @@ title: Instrumenting .NET Serverless Applications Using the Datadog Forwarder
---
## Overview
-
+
diff --git a/content/en/serverless/guide/datadog_forwarder_go.md b/content/en/serverless/guide/datadog_forwarder_go.md
index 2d0079e631f48..68169c7e77649 100644
--- a/content/en/serverless/guide/datadog_forwarder_go.md
+++ b/content/en/serverless/guide/datadog_forwarder_go.md
@@ -5,7 +5,7 @@ title: Instrumenting Go Serverless Applications Using the Datadog Forwarder
## Overview
-
+
diff --git a/content/en/serverless/guide/datadog_forwarder_java.md b/content/en/serverless/guide/datadog_forwarder_java.md
index 2241ac90dae9d..34b900f0ab50e 100644
--- a/content/en/serverless/guide/datadog_forwarder_java.md
+++ b/content/en/serverless/guide/datadog_forwarder_java.md
@@ -4,11 +4,11 @@ title: Instrumenting Java Serverless Applications Using the Datadog Forwarder
---
## Overview
-
+
-
+
Some older versions of `datadog-lambda-java` import `log4j <=2.14.0` as a transitive dependency. Upgrade instructions are below.
diff --git a/content/en/serverless/guide/datadog_forwarder_node.md b/content/en/serverless/guide/datadog_forwarder_node.md
index 1132cec9cad46..fa4706afd3d26 100644
--- a/content/en/serverless/guide/datadog_forwarder_node.md
+++ b/content/en/serverless/guide/datadog_forwarder_node.md
@@ -5,7 +5,7 @@ title: Instrumenting Node.js Serverless Applications Using the Datadog Forwarder
## Overview
-
+
diff --git a/content/en/serverless/guide/datadog_forwarder_python.md b/content/en/serverless/guide/datadog_forwarder_python.md
index 5efc7353ea09c..349311f265377 100644
--- a/content/en/serverless/guide/datadog_forwarder_python.md
+++ b/content/en/serverless/guide/datadog_forwarder_python.md
@@ -4,7 +4,7 @@ title: Instrumenting Python Serverless Applications Using the Datadog Forwarder
---
## Overview
-
+
diff --git a/content/en/serverless/guide/datadog_forwarder_ruby.md b/content/en/serverless/guide/datadog_forwarder_ruby.md
index 6e2c4f8ceb128..36f3d62ea5d5c 100644
--- a/content/en/serverless/guide/datadog_forwarder_ruby.md
+++ b/content/en/serverless/guide/datadog_forwarder_ruby.md
@@ -5,7 +5,7 @@ title: Instrumenting Ruby Serverless Applications Using the Datadog Forwarder
## Overview
-
+
diff --git a/content/en/service_management/case_management/customization.md b/content/en/service_management/case_management/customization.md
index 24a1291bc190a..6f3859a62579f 100644
--- a/content/en/service_management/case_management/customization.md
+++ b/content/en/service_management/case_management/customization.md
@@ -18,7 +18,7 @@ Datadog Case Management allows customization to align with your team's unique wo
## Custom Case Types
-
+
You must have Case Shared Settings Write (`cases_shared_settings_write`) permissions. For more information, see Datadog Role Permissions.
diff --git a/content/en/service_management/case_management/projects.md b/content/en/service_management/case_management/projects.md
index 49edbff419edc..74022eeb9f40f 100644
--- a/content/en/service_management/case_management/projects.md
+++ b/content/en/service_management/case_management/projects.md
@@ -22,7 +22,7 @@ To create a project:
## Delete a project
-
Deleted cases cannot be recovered.
+
Deleted cases cannot be recovered.
You can delete a project from a project's Settings page.
diff --git a/content/en/service_management/events/guides/email.md b/content/en/service_management/events/guides/email.md
index 05f1b83730c0c..87358ed959695 100644
--- a/content/en/service_management/events/guides/email.md
+++ b/content/en/service_management/events/guides/email.md
@@ -8,7 +8,7 @@ aliases:
---
{{< site-region region="gov" >}}
-
Events with email is not supported on {{< region-param key=dd_datacenter code="true" >}}
+
Events with email is not supported on {{< region-param key=dd_datacenter code="true" >}}
{{< /site-region >}}
If your application does not have an existing [Datadog integration][1], and you don't want to create a [custom Agent check][2], you can send events with email. This can also be done with messages published to an Amazon SNS topic; read the [Create Datadog Events from Amazon SNS Emails][6] guide for more information.
diff --git a/content/en/service_management/events/guides/migrating_to_new_events_features.md b/content/en/service_management/events/guides/migrating_to_new_events_features.md
index 468ef24991497..ea8aba64ce03b 100644
--- a/content/en/service_management/events/guides/migrating_to_new_events_features.md
+++ b/content/en/service_management/events/guides/migrating_to_new_events_features.md
@@ -10,7 +10,7 @@ further_reading:
text: "Troubleshoot faster with improved Datadog Events"
---
-
+
Datadog's legacy event stream and event monitors retire on June 30, 2022. Datadog is migrating all customers to a new and improved events experience. This page contains important information about this migration. Before the retirement date, follow the steps on this page to ensure that your existing event visualizations and monitors continue to work properly.
diff --git a/content/en/service_management/events/pipelines_and_processors/aggregation_key.md b/content/en/service_management/events/pipelines_and_processors/aggregation_key.md
index 9d45c15bf1b00..0fb00ebbeb412 100644
--- a/content/en/service_management/events/pipelines_and_processors/aggregation_key.md
+++ b/content/en/service_management/events/pipelines_and_processors/aggregation_key.md
@@ -12,7 +12,7 @@ Use the aggregation key processor to generate a custom aggregation key (`@aggreg
- Events originating from different sources or integrations receive distinct aggregation keys.
- By default, existing aggregation keys are overwritten by this processor. Adjust the toggle to configure this behavior.
-
Aggregation keys are included by default in Datadog Monitor alerts and are not modified by the aggregation key processor. This ensures that monitor alert events retain their original keys and are not overwritten.
+
Aggregation keys are included by default in Datadog Monitor alerts and are not modified by the aggregation key processor. This ensures that monitor alert events retain their original keys and are not overwritten.
The aggregation key processor performs the following actions:
diff --git a/content/en/service_management/on-call/_index.md b/content/en/service_management/on-call/_index.md
index 51ce865662b83..9b7fa8c273c4b 100644
--- a/content/en/service_management/on-call/_index.md
+++ b/content/en/service_management/on-call/_index.md
@@ -78,7 +78,7 @@ To restrict access to an On-Call resource:
## Start using Datadog On-Call
-
To preserve incident history, Datadog On-Call does not support deletion of resources like Pages, escalation policies, or schedules. To test On-Call without affecting your production environment, create a trial organization as a sandbox.
+
To preserve incident history, Datadog On-Call does not support deletion of resources like Pages, escalation policies, or schedules. To test On-Call without affecting your production environment, create a trial organization as a sandbox.
To get started with On-Call, [onboard an On-Call Team][1] and ensure that all Team members configure their [On-Call profile settings][2] to receive notifications.
diff --git a/content/en/service_management/on-call/automations.md b/content/en/service_management/on-call/automations.md
index 99b39b6035b72..9c9e826117df4 100644
--- a/content/en/service_management/on-call/automations.md
+++ b/content/en/service_management/on-call/automations.md
@@ -22,7 +22,7 @@ Handover automations run automatically at the start or end of an on-call shift.
By using built-in automations instead of maintaining cron jobs or custom tools, you can streamline operations, eliminate manual steps, and ensure the right actions always run when a shift changes.
-
+
If you need a specific action that isn't listed, contact your account representative or support@datadoghq.com.
diff --git a/content/en/service_management/on-call/cross_org_paging.md b/content/en/service_management/on-call/cross_org_paging.md
index d9fd9cb5ecb3c..0dee89771c543 100644
--- a/content/en/service_management/on-call/cross_org_paging.md
+++ b/content/en/service_management/on-call/cross_org_paging.md
@@ -33,7 +33,7 @@ To enable paging between orgs or datacenters, you must establish a secure connec
- `on_call_respond` - Respond to On-Call Pages
- `user_access_read` - Read user information (automatically included in most roles)
-
+
Service accounts created with Terraform may be missing the `user_access_read` permission. This permission is automatically added to roles created through the UI, but it cannot be manually added through the UI and may not be included in Terraform-configured roles. If cross-org paging fails with permission errors, add an additional role to your service account that includes the `user_access_read` permission.
diff --git a/content/en/service_management/on-call/guides/configure-mobile-device-for-on-call.md b/content/en/service_management/on-call/guides/configure-mobile-device-for-on-call.md
index f66d500711da0..9c14a78043923 100644
--- a/content/en/service_management/on-call/guides/configure-mobile-device-for-on-call.md
+++ b/content/en/service_management/on-call/guides/configure-mobile-device-for-on-call.md
@@ -94,7 +94,7 @@ You can override your device's system volume and Do Not Disturb mode for both pu
6. Test the setup of your critical push notification by tapping **Test push notifications**.
-
+
On Android, the Datadog mobile app cannot bypass system volume or Do Not Disturb settings when used within a Work Profile. As a workaround, install the Datadog mobile app on your personal profile.
diff --git a/content/en/service_management/on-call/routing_rules.md b/content/en/service_management/on-call/routing_rules.md
index 22e4fe2eaec5a..2c26db30869cd 100644
--- a/content/en/service_management/on-call/routing_rules.md
+++ b/content/en/service_management/on-call/routing_rules.md
@@ -43,7 +43,7 @@ When a Page is acknowledged or resolved in Slack, Datadog updates the original n
Routing rules use [Datadog query syntax][3] and support multiple `if/else` conditions. Rules are evaluated from top to bottom, and the final rule must act as a fallback that routes all unmatched alerts to an escalation policy.
-
Routing rule syntax is case-sensitive. For example, `tags.env:Prod` will not match `tags.env:prod`.
+
Routing rule syntax is case-sensitive. For example, `tags.env:Prod` will not match `tags.env:prod`.
**Supported attributes:**
diff --git a/content/en/service_management/status_pages/_index.md b/content/en/service_management/status_pages/_index.md
index 83d7b737b98ab..77723f87c802d 100644
--- a/content/en/service_management/status_pages/_index.md
+++ b/content/en/service_management/status_pages/_index.md
@@ -81,7 +81,7 @@ If you selected:
## Add an incident
-
Incidents published on Status Pages are not the same as incidents declared within Datadog Incident Management. Incidents on Status Pages are carefully crafted messages posted to a public website to communicate system status, and may encompass multiple internal Incident Management incidents.
+
Incidents published on Status Pages are not the same as incidents declared within Datadog Incident Management. Incidents on Status Pages are carefully crafted messages posted to a public website to communicate system status, and may encompass multiple internal Incident Management incidents.
When an issue arises, you can communicate it clearly through your status page.
diff --git a/content/en/synthetics/browser_tests/_index.md b/content/en/synthetics/browser_tests/_index.md
index 1ffaf34b22438..2c692d22cf63f 100644
--- a/content/en/synthetics/browser_tests/_index.md
+++ b/content/en/synthetics/browser_tests/_index.md
@@ -250,7 +250,7 @@ Step replay allows you to re-run one or more steps of your browser test directly
### Debugger permission
-
+
The current version of the extension does not have Chrome's debugger permission yet. As a result:
- JavaScript-based steps and keystroke simulations are not yet available.
diff --git a/content/en/synthetics/guide/browser-tests-passkeys.md b/content/en/synthetics/guide/browser-tests-passkeys.md
index d2595a92f3052..fb288005b7176 100644
--- a/content/en/synthetics/guide/browser-tests-passkeys.md
+++ b/content/en/synthetics/guide/browser-tests-passkeys.md
@@ -27,7 +27,7 @@ Passkeys in Synthetic Monitoring are handled by Virtual Authenticator global var
{{< img src="synthetics/guide/browser-tests-passkeys/new-variable-virtual-authenticator.png" alt="Create a Virtual Authenticator global variable" style="width:70%;" >}}
## Use passkeys in your Synthetic browser tests
-Synthetic Monitoring supports passkeys in browser tests for Chrome and Edge.
+Synthetic Monitoring supports passkeys in browser tests for Chrome and Edge.
### Add passkeys to a browser test
diff --git a/content/en/synthetics/guide/explore-rum-through-synthetics.md b/content/en/synthetics/guide/explore-rum-through-synthetics.md
index 4af81e60b9f01..9adcfc94b12e7 100644
--- a/content/en/synthetics/guide/explore-rum-through-synthetics.md
+++ b/content/en/synthetics/guide/explore-rum-through-synthetics.md
@@ -24,7 +24,7 @@ Synthetic browser tests embed the Real User Monitoring SDK, allowing you to expl
## Allow Synthetic data on RUM applications
-
+
If the target application is already instrumented with RUM, you should not enable RUM data collection within the synthetic test configuration as this can result in unexpected behavior.
In your browser test recording, click **Collect RUM Data on** above the **Start Recording** button and select an application to collect data on. After saving your recording and test configuration, RUM gathers test data and generates session recordings from your browser test runs.
diff --git a/content/en/synthetics/notifications/conditional_alerting.md b/content/en/synthetics/notifications/conditional_alerting.md
index e701d7b3d454d..52aeef831fcd2 100644
--- a/content/en/synthetics/notifications/conditional_alerting.md
+++ b/content/en/synthetics/notifications/conditional_alerting.md
@@ -15,7 +15,7 @@ further_reading:
Use conditional templating to change messages, set notification handles, or override alert priority based on test results. This is especially useful when routing alerts to specific teams.
-
+
To ensure notifications are delivered properly, always include a notification handle in your conditional logic. Notifications are dropped if no handle is provided. Make sure to:
diff --git a/content/en/synthetics/platform/private_locations/_index.md b/content/en/synthetics/platform/private_locations/_index.md
index cbd7501ea2daa..d541b72e34a2c 100644
--- a/content/en/synthetics/platform/private_locations/_index.md
+++ b/content/en/synthetics/platform/private_locations/_index.md
@@ -95,7 +95,7 @@ You must install .NET version 4.7.2 or later on your computer before using the M
{{< site-region region="gov" >}}
-
FIPS compliance is not supported for Windows private locations that report to `ddog-gov.com`. To disable this behavior, use the `--disableFipsCompliance` option.
+
FIPS compliance is not supported for Windows private locations that report to `ddog-gov.com`. To disable this behavior, use the `--disableFipsCompliance` option.
{{< /site-region >}}
@@ -627,7 +627,7 @@ Because Datadog already integrates with Kubernetes and AWS, it is ready-made to
Once the process is complete, click **Finish** on the installation completion page.
-
If you entered your JSON configuration, the Windows Service starts running using that configuration. If you did not enter your configuration, run `C:\Program Files\Datadog-Synthetics\Synthetics\synthetics-pl-worker.exe --config=<PathToYourConfiguration>` from a command prompt, or use the Start menu shortcut to start the Synthetics Private Location Worker.
+
If you entered your JSON configuration, the Windows Service starts running using that configuration. If you did not enter your configuration, run `C:\Program Files\Datadog-Synthetics\Synthetics\synthetics-pl-worker.exe --config=<PathToYourConfiguration>` from a command prompt, or use the Start menu shortcut to start the Synthetics Private Location Worker.
[101]: https://ddsynthetics-windows.s3.amazonaws.com/datadog-synthetics-worker-{{< synthetics-worker-version "synthetics-windows-pl" >}}.amd64.msi
[102]: https://app.datadoghq.com/synthetics/settings/private-locations
@@ -839,7 +839,7 @@ readinessProbe:
#### Additional health check configurations
-
This method of adding private location health checks is no longer supported. Datadog recommends using liveness and readiness probes.
+
This method of adding private location health checks is no longer supported. Datadog recommends using liveness and readiness probes.
The `/tmp/liveness.date` file of private location containers gets updated after every successful poll from Datadog (every 2s by default). The container is considered unhealthy if no poll has been performed recently (for example, no fetch in the last minute).
@@ -1005,7 +1005,7 @@ Users with the [Datadog Admin and Datadog Standard roles][20] can view private l
If you are using the [custom role feature][21], add your user to a custom role that includes `synthetics_private_location_read` and `synthetics_private_location_write` permissions.
-
If a test includes restricted private locations, updating the test removes those locations from the test.
+
If a test includes restricted private locations, updating the test removes those locations from the test.
## Restrict access
diff --git a/content/en/tests/code_coverage.md b/content/en/tests/code_coverage.md
index 669c2af522e49..53baa67244837 100644
--- a/content/en/tests/code_coverage.md
+++ b/content/en/tests/code_coverage.md
@@ -37,7 +37,7 @@ Ensure that [Test Optimization][1] is already set up for your language.
* `cucumber-js>=7.0.0`.
* `vitest>=2.0.0`.
-
+
Note: The Datadog Tracer does not generate code coverage. If your tests are run with code coverage enabled, `dd-trace` reports it under the `test.code_coverage.lines_pct` tag for your test sessions automatically.
@@ -288,7 +288,7 @@ DD_ENV=ci DD_SERVICE=my-python-service pytest --cov
* `datadog-ci-rb>=1.7.0`
* `simplecov>=0.18.0`.
-
+
Note: The Datadog library does not generate total code coverage. If your tests are run with code coverage enabled, `datadog-ci-rb` reports it under the `test.code_coverage.lines_pct` tag for your test sessions automatically.
@@ -305,7 +305,7 @@ This feature is enabled by default. Use `DD_CIVISIBILITY_SIMPLECOV_INSTRUMENTATI
* `go test -cover`
-
+
Note: The Datadog library does not generate total code coverage. If your tests are run with code coverage enabled, `dd-trace-go` reports it under the `test.code_coverage.lines_pct` tag for your test sessions automatically.
diff --git a/content/en/tests/flaky_management/_index.md b/content/en/tests/flaky_management/_index.md
index 67950827c8ef3..f7bf49f3a506f 100644
--- a/content/en/tests/flaky_management/_index.md
+++ b/content/en/tests/flaky_management/_index.md
@@ -14,7 +14,7 @@ further_reading:
---
{{< site-region region="gov" >}}
-Test Optimization is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.
+Test Optimization is not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.
{{< /site-region >}}
## Overview
diff --git a/content/en/tests/network.md b/content/en/tests/network.md
index 0c2eb6a58b631..7696e972a7caa 100644
--- a/content/en/tests/network.md
+++ b/content/en/tests/network.md
@@ -12,7 +12,7 @@ further_reading:
text: "Troubleshooting Test Optimization"
---
-
+
Tracers always initiate traffic to Datadog. Sessions are never initiated from Datadog back to the tracers.
diff --git a/content/en/tests/setup/dotnet.md b/content/en/tests/setup/dotnet.md
index 0f4ae50e94471..6a02376371c57 100644
--- a/content/en/tests/setup/dotnet.md
+++ b/content/en/tests/setup/dotnet.md
@@ -72,7 +72,7 @@ Install or update the `dd-trace` command using one of the following ways:
## Instrumenting tests
-
+
To instrument your test suite, prefix your test command with `dd-trace ci run`, providing the name of the service or library under test as the `--dd-service` parameter, and the environment where tests are being run (for example, `local` when running tests on a developer workstation, or `ci` when running them on a CI provider) as the `--dd-env` parameter. For example:
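A sketch of such an invocation (the service and environment names are illustrative):

```shell
dd-trace ci run --dd-service=my-dotnet-service --dd-env=ci -- dotnet test
```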
@@ -265,7 +265,7 @@ BenchmarkRunner.Run(config);
## Custom instrumentation
-
+
Note: Your custom instrumentation setup depends on the dd-trace version. To use custom instrumentation, you must keep the package versions for the `dd-trace` and `Datadog.Trace` NuGet packages in sync.
@@ -279,7 +279,7 @@ For more information about how to add spans and tags for custom instrumentation,
## Manual testing API
-
+
Note: To use the manual testing API, you must add the `Datadog.Trace` NuGet package to the target .NET project.
diff --git a/content/en/tests/setup/javascript.md b/content/en/tests/setup/javascript.md
index aa52588f22fe0..8118bd478294c 100644
--- a/content/en/tests/setup/javascript.md
+++ b/content/en/tests/setup/javascript.md
@@ -45,7 +45,7 @@ To report test results to Datadog, you need to configure the Datadog JavaScript
{{% tab "CI Provider with Auto-Instrumentation Support" %}}
{{% ci-autoinstrumentation %}}
-
+
Note: Auto-instrumentation is not supported for Cypress tests. To instrument Cypress tests, follow the manual instrumentation steps outlined below.
@@ -207,7 +207,7 @@ The format of the annotations is the following, where `$TAG_NAME` is a *string*
```
**Note**: `description` values in annotations are [typed as strings][2]. Numbers also work, but you may need to disable the typing error with `// @ts-expect-error`.
-
+
Important: The `DD_TAGS` prefix is mandatory and case-sensitive.
@@ -488,7 +488,7 @@ If the browser application being tested is instrumented using [Browser Monitorin
{{% /tab %}}
{{% tab "Vitest" %}}
-
+
@@ -632,7 +632,7 @@ For more information about `service` and `env` reserved tags, see [Unified Servi
## Manual testing API
-
+
Note: The manual testing API is available starting in dd-trace versions `5.23.0` and `4.47.0`.
diff --git a/content/en/tests/setup/junit_xml.md b/content/en/tests/setup/junit_xml.md
index b5af875602c37..9060dff82e1fb 100644
--- a/content/en/tests/setup/junit_xml.md
+++ b/content/en/tests/setup/junit_xml.md
@@ -16,7 +16,7 @@ further_reading:
text: "Troubleshooting Test Optimization"
---
-
+
Note: Datadog recommends the native instrumentation of tests over uploading JUnit XML files, as the native instrumentation provides more accurate timing results, supports distributed traces on integration tests, and includes other features that are not available with JUnit XML uploads.
@@ -107,7 +107,7 @@ DD_ENV=ci DATADOG_API_KEY=<api_key> DATADOG_SITE={{< region-param key="dd_
-Make sure that this command runs in your CI even when your tests have failed. Usually, when tests fail, the CI job aborts execution, and the upload command does not run.
+Make sure that this command runs in your CI even when your tests have failed. Usually, when tests fail, the CI job aborts execution, and the upload command does not run.
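One way to guarantee this is to capture the test exit code yourself and only propagate it after the upload (a sketch; the test script, report path, and service name are illustrative):

```shell
# Run the tests, remember their exit code, upload the results, then propagate the code
./run-tests.sh || TEST_EXIT=$?
datadog-ci junit upload --service my-service ./reports/junit.xml
exit ${TEST_EXIT:-0}
```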
{{< tabs >}}
@@ -494,7 +494,7 @@ datadog-ci junit upload --service service_name \
{{< /tabs >}}
-
+
When using bash from Git for Windows, define the `MSYS_NO_PATHCONV=1` environment variable. Otherwise, any argument starting with `/` will be expanded to a Windows path.
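For example (a sketch; the service name and report path are illustrative):

```shell
# Prevent Git Bash from rewriting arguments that start with /
MSYS_NO_PATHCONV=1 datadog-ci junit upload --service my-service ./reports/junit.xml
```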
diff --git a/content/en/tests/setup/python.md b/content/en/tests/setup/python.md
index 105825f45f105..62fb3005c80e7 100644
--- a/content/en/tests/setup/python.md
+++ b/content/en/tests/setup/python.md
@@ -169,7 +169,7 @@ For additional configurations, see [Configuration Settings][1].
### Manual testing API
-Note: The Test Optimization manual testing API is in beta and subject to change.
+Note: The Test Optimization manual testing API is in beta and subject to change.
As of version `2.13.0`, the [Datadog Python tracer][1] provides the Test Optimization API (`ddtrace.ext.test_visibility`) to submit test optimization results as needed.
diff --git a/content/en/tests/setup/ruby.md b/content/en/tests/setup/ruby.md
index 6bed0e1b7a0aa..e05baf6094254 100644
--- a/content/en/tests/setup/ruby.md
+++ b/content/en/tests/setup/ruby.md
@@ -52,7 +52,7 @@ To report test results to Datadog, you need to configure the `datadog-ci` gem:
{{% tab "CI Provider with Auto-Instrumentation Support" %}}
{{% ci-autoinstrumentation %}}
-
+
@@ -291,7 +291,7 @@ For example:
DD_ENV=ci bundle exec rake test
```
-
+
Note: When using `minitest/autorun`, ensure that `datadog/ci` is required before `minitest/autorun`.
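One way to enforce that order from the command line, as a sketch (the test path is illustrative, and loading via `-r` is an assumption rather than the documented setup):

```shell
# Load datadog/ci before the test file requires minitest/autorun
DD_ENV=ci bundle exec ruby -rdatadog/ci -Itest test/my_test.rb
```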
diff --git a/content/en/tests/setup/swift.md b/content/en/tests/setup/swift.md
index 26dc156934a86..1b0527df1863f 100644
--- a/content/en/tests/setup/swift.md
+++ b/content/en/tests/setup/swift.md
@@ -106,7 +106,7 @@ end
[1]: https://github.com/DataDog/dd-sdk-swift-testing/releases
{{% /tab %}}
{{< /tabs >}}
-Note: This framework is useful only for testing and should only be linked with the application when running tests. Do not distribute the framework to your users.
+Note: This framework is useful only for testing and should only be linked with the application when running tests. Do not distribute the framework to your users.
## Instrumenting your tests
@@ -118,7 +118,7 @@ To enable testing instrumentation, add the following environment variables to yo
{{< img src="continuous_integration/swift_env.png" alt="Swift Environments" >}}
-Set the environment variables to expand based on your main target; if the main target is not selected, the variables are not valid.
+Set the environment variables to expand based on your main target; if the main target is not selected, the variables are not valid.
For UI Tests, environment variables need to be set only in the test target, because the framework automatically injects these values into the application.
@@ -241,7 +241,7 @@ The framework enables auto-instrumentation of all supported libraries, but in so
`DD_DISABLE_CRASH_HANDLER`
: Disables crash handling and reporting. (Boolean)
-If you disable crash reporting, tests that crash are not reported at all and do not appear as test failures. If you need to disable crash handling for some of your tests, run them in a separate target, so crash handling remains enabled for the others.
+If you disable crash reporting, tests that crash are not reported at all and do not appear as test failures. If you need to disable crash handling for some of your tests, run them in a separate target, so crash handling remains enabled for the others.
### Network auto-instrumentation
diff --git a/content/en/tests/test_impact_analysis/_index.md b/content/en/tests/test_impact_analysis/_index.md
index b4b4073dd0bcc..17681a586682f 100644
--- a/content/en/tests/test_impact_analysis/_index.md
+++ b/content/en/tests/test_impact_analysis/_index.md
@@ -15,7 +15,7 @@ further_reading:
text: "Monitor all your CI pipelines with Datadog"
---
-This feature was formerly known as Intelligent Test Runner, and some tags still contain "itr".
+This feature was formerly known as Intelligent Test Runner, and some tags still contain "itr".
## Overview
diff --git a/content/en/tests/test_impact_analysis/setup/go.md b/content/en/tests/test_impact_analysis/setup/go.md
index c45ac0b114640..144bfb0d9d8a9 100644
--- a/content/en/tests/test_impact_analysis/setup/go.md
+++ b/content/en/tests/test_impact_analysis/setup/go.md
@@ -42,7 +42,7 @@ orchestrion go test ./... -cover -covermode=count -coverpkg ./...
3. `-coverpkg`: code coverage analysis for each test must be configured to apply to all package dependencies, not only to the package being tested. This way, if a dependency changes, you can track the tests affected by the change. If you run the test command from the root of the project (where the `go.mod` file is), you can use the `./...` wildcard. If not, you must manually list all package dependencies, comma-separated (`pattern1,pattern2,pattern3,...`); you can use `go list ./...` to get all the package names. See the sketch after the warning below.
-Having an incorrect `-coverpkg` value affects the ability of Test Impact Analysis to correctly track test coverage.
+Having an incorrect `-coverpkg` value affects the ability of Test Impact Analysis to correctly track test coverage.
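A sketch of both cases (joining with `paste` is one of several ways to build the comma-separated list):

```shell
# From the module root, the ./... wildcard covers every package dependency
orchestrion go test ./... -cover -covermode=count -coverpkg ./...

# From elsewhere, build the comma-separated package list explicitly
orchestrion go test ./... -cover -covermode=count -coverpkg "$(go list ./... | paste -sd, -)"
```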
## Disable skipping for specific tests
diff --git a/content/en/tracing/configure_data_security/_index.md b/content/en/tracing/configure_data_security/_index.md
index 78639d2b8eb86..bf275961fe6d1 100644
--- a/content/en/tracing/configure_data_security/_index.md
+++ b/content/en/tracing/configure_data_security/_index.md
@@ -659,7 +659,7 @@ Some tracing libraries provide an interface for processing spans to manually mod
{{< site-region region="gov" >}}
-
+
Instrumentation telemetry is not available for the {{< region-param key="dd_site_name" >}} site, but is enabled by default. To avoid errors, {{< region-param key="dd_site_name" >}} users should disable this capability by setting `DD_INSTRUMENTATION_TELEMETRY_ENABLED=false` on their application and `DD_APM_TELEMETRY_ENABLED=false` on their Agent.
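For example, as environment variables (a sketch):

```shell
# On the instrumented application
export DD_INSTRUMENTATION_TELEMETRY_ENABLED=false

# On the Datadog Agent
export DD_APM_TELEMETRY_ENABLED=false
```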
diff --git a/content/en/tracing/faq/app_analytics_agent_configuration.md b/content/en/tracing/faq/app_analytics_agent_configuration.md
index 38f4ed5c97509..994a34cfaf325 100644
--- a/content/en/tracing/faq/app_analytics_agent_configuration.md
+++ b/content/en/tracing/faq/app_analytics_agent_configuration.md
@@ -6,7 +6,7 @@ aliases:
- /tracing/guide/app_analytics_agent_configuration/
---
-
+
This page describes deprecated features with configuration information relevant to legacy App Analytics, useful for troubleshooting or modifying some old setups. To have full control over your traces, use