diff --git a/content/en/account_management/_index.md b/content/en/account_management/_index.md index e99a4f13832b1..6ea11b1729d15 100644 --- a/content/en/account_management/_index.md +++ b/content/en/account_management/_index.md @@ -13,7 +13,7 @@ further_reading: text: "Best practices for managing Datadog organizations at scale" --- {{< site-region region="gov" >}} -
The Datadog for Government platform supports only SAML or basic authentication using a username/email and password. Before configuring SAML authentication, ensure that at least one username/email and password account is established to maintain access during the setup process. Datadog recommends enabling multi-factor authentication (MFA) for password-based accounts. +
The Datadog for Government platform supports only SAML or basic authentication using a username/email and password. Before configuring SAML authentication, ensure that at least one username/email and password account is established to maintain access during the setup process. Datadog recommends enabling multi-factor authentication (MFA) for password-based accounts. If you need SAML enabled for a trial account, contact Datadog Support.
@@ -40,7 +40,7 @@ You can manage your timezone, visual accessibility preference, and email subscri Under email subscriptions, you have access to the following reports: {{< site-region region="us3,us5,gov,ap1,ap2" >}} -
Email digests are not available in the selected site ({{< region-param key="dd_site_name" >}}).
+
Email digests are not available in the selected site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}} * Daily Digest diff --git a/content/en/account_management/api-app-keys.md b/content/en/account_management/api-app-keys.md index 751b34fb96ef3..f74a5dbbceaf6 100644 --- a/content/en/account_management/api-app-keys.md +++ b/content/en/account_management/api-app-keys.md @@ -85,7 +85,7 @@ To add a Datadog application key, navigate to [**Organization Settings** > **App {{< img src="account_management/app-key.png" alt="Navigate to the Application Keys page for your organization in Datadog" style="width:80%;" >}} {{< site-region region="ap2,gov" >}} -
Make sure to securely store your application key immediately after creation, as the key secret cannot be retrieved later.
+
Make sure to securely store your application key immediately after creation, as the key secret cannot be retrieved later.
{{< /site-region >}} **Notes:** diff --git a/content/en/account_management/audit_trail/_index.md b/content/en/account_management/audit_trail/_index.md index 1af6b54f15078..d42e412b2cc57 100644 --- a/content/en/account_management/audit_trail/_index.md +++ b/content/en/account_management/audit_trail/_index.md @@ -157,7 +157,7 @@ The Inspect Changes (Diff) tab in the audit event details panel compares the con ## Filter audit events based on Reference Tables -
Reference Tables containing over 1,000,000 rows cannot be used to filter events. See Add Custom Metadata with Reference Tables for more information on how to create and manage Reference Tables.
+
Reference Tables containing over 1,000,000 rows cannot be used to filter events. See Add Custom Metadata with Reference Tables for more information on how to create and manage Reference Tables.
Reference Tables allow you to combine metadata with audit events, providing more information to investigate Datadog user behavior. Add a query filter based on a Reference Table to perform lookup queries. For more information on activating and managing this feature, see the [Reference Tables][2] guide. diff --git a/content/en/account_management/audit_trail/forwarding_audit_events.md b/content/en/account_management/audit_trail/forwarding_audit_events.md index 378b4487e962e..fd5a7ffffd1dd 100644 --- a/content/en/account_management/audit_trail/forwarding_audit_events.md +++ b/content/en/account_management/audit_trail/forwarding_audit_events.md @@ -9,13 +9,13 @@ further_reading: --- {{% site-region region="gov" %}} -
+
Audit Event Forwarding is not available in the US1-FED site.
{{% /site-region %}} {{% site-region region="us,us3,us5,eu,ap1,ap2" %}} -
+
Audit Event Forwarding is in Preview.
{{% /site-region %}} diff --git a/content/en/account_management/authn_mapping/_index.md b/content/en/account_management/authn_mapping/_index.md index f3f20c75363c3..baa52e42bb8c9 100644 --- a/content/en/account_management/authn_mapping/_index.md +++ b/content/en/account_management/authn_mapping/_index.md @@ -543,7 +543,7 @@ curl -X GET \ ### Enable or disable all mappings -
+
When mappings are enabled, all users logging in with SAML are stripped of their roles and reassigned roles based on the values in their SAML assertion. Confirm that your SAML logins contain the expected assertions before you enable mapping enforcement.
diff --git a/content/en/account_management/billing/usage_attribution.md b/content/en/account_management/billing/usage_attribution.md index 02b40e094daca..01e200a271222 100644 --- a/content/en/account_management/billing/usage_attribution.md +++ b/content/en/account_management/billing/usage_attribution.md @@ -13,7 +13,7 @@ algolia: ## Overview -
+
Usage Attribution is an advanced feature included in the Enterprise plan. For all other plans, contact your account representative or success@datadoghq.com to request this feature.
diff --git a/content/en/account_management/delete_data.md b/content/en/account_management/delete_data.md index ac994f52f49cf..d952ec914f70f 100644 --- a/content/en/account_management/delete_data.md +++ b/content/en/account_management/delete_data.md @@ -27,7 +27,7 @@ To grant an account access to delete data, perform the following steps: ### Start deletions -
Deleted data can never be recovered, and deletions cannot be undone.
+
Deleted data can never be recovered, and deletions cannot be undone.
For Logs: Deletions cannot be scoped to a specific index, and deletions occur across Indexes, Flex Indexes, and Online Archives.
diff --git a/content/en/account_management/faq/usage_control_apm.md b/content/en/account_management/faq/usage_control_apm.md index 989df99150b8a..62c9ce74c4326 100644 --- a/content/en/account_management/faq/usage_control_apm.md +++ b/content/en/account_management/faq/usage_control_apm.md @@ -2,7 +2,7 @@ title: Estimate and Control APM Usage --- -
+
This page describes deprecated features and configuration information relevant to legacy App Analytics, which can be useful for troubleshooting or for modifying older setups. To have full control over your traces, use ingestion controls and retention filters instead.
diff --git a/content/en/account_management/org_settings/service_accounts.md b/content/en/account_management/org_settings/service_accounts.md index 3c632aee5cf8b..6574553a43f09 100644 --- a/content/en/account_management/org_settings/service_accounts.md +++ b/content/en/account_management/org_settings/service_accounts.md @@ -74,7 +74,7 @@ To create a new application key, follow the steps below: The dialog box refreshes, showing you the key. Copy and paste the key into your desired location. After you close the dialog box, you cannot retrieve the value of the key. {{< site-region region="ap2,gov" >}} -
Service account application keys are one-time read only. Make sure to securely store your application key immediately after creation, as the key secret cannot be retrieved later.
+
Service account application keys can only be read once. Make sure to securely store your application key immediately after creation, as the key secret cannot be retrieved later.
{{< /site-region >}} To revoke an application key, find the key in the service account detailed view side panel and hover over it. Pencil and trash can icons appear on the right. Click the trash can to revoke the key. After the key is revoked, click **Confirm**. diff --git a/content/en/account_management/plan_and_usage/cost_details.md b/content/en/account_management/plan_and_usage/cost_details.md index 5ede06db752bb..c0f9cdc13b9b6 100644 --- a/content/en/account_management/plan_and_usage/cost_details.md +++ b/content/en/account_management/plan_and_usage/cost_details.md @@ -72,7 +72,7 @@ To query estimated cost data through the API, see [Get estimated cost across you ### Cost Summary (sub-organization) -
This feature is in limited availability. To request access and confirm your organization meets the feature criteria, contact your account representative or Customer Support.
+
This feature is in limited availability. To request access and confirm your organization meets the feature criteria, contact your account representative or Customer Support.
As a sub-organization, you can view the costs for your organization only. This restriction allows for more distributed ownership and removes the need to grant broader Admin permissions to the parent organization. diff --git a/content/en/account_management/plan_and_usage/usage_details.md b/content/en/account_management/plan_and_usage/usage_details.md index 708dca4dd6cf2..3caa891ef4b90 100644 --- a/content/en/account_management/plan_and_usage/usage_details.md +++ b/content/en/account_management/plan_and_usage/usage_details.md @@ -89,7 +89,7 @@ Time selection contains options to view usage graphs at daily, weekly, monthly o ## Billable on-demand pills and committed lines -
This feature is in beta. To request access and confirm your organization meets the feature criteria, contact your account representative or Customer Support.
+
This feature is in beta. To request access and confirm your organization meets the feature criteria, contact your account representative or Customer Support.
Purple on-demand pills highlight the portion of billable usage that is on-demand usage. Blue committed and allotted pills highlight the portion of your usage that is covered by commitments and allotments from parent products. The dashed `Committed` line shows commitments per product, without any allotments (such as Custom Metrics or Containers). @@ -139,7 +139,7 @@ This data can be downloaded as a CSV file. ## First-time usage notifications -
This feature is in beta. To request access and confirm your organization meets the feature criteria, contact your account representative or Customer Support.
+
This feature is in beta. To request access and confirm your organization meets the feature criteria, contact your account representative or Customer Support.
The first-time usage notifications feature sends email notifications when there is first-time billable usage for a new product not included in your current contract. Emails are sent approximately 48 hours after the usage first occurs during a given month. diff --git a/content/en/account_management/saml/_index.md b/content/en/account_management/saml/_index.md index 7cbefb3ad3982..e8a7bf99c731f 100644 --- a/content/en/account_management/saml/_index.md +++ b/content/en/account_management/saml/_index.md @@ -11,7 +11,7 @@ algolia: tags: ['saml'] --- {{< site-region region="gov" >}} -
The Datadog for Government site only supports SAML login.
+
The Datadog for Government site only supports SAML login.
{{< /site-region >}} ## Overview @@ -134,7 +134,7 @@ Some organizations might not want to invite all of their users to Datadog. If yo Administrators can set the default role for new JIT users. The default role is **Standard**, but you can choose to add new JIT users as **Read-Only**, **Administrators**, or any custom role. -
+
Important: If Role Mapping is enabled, it takes priority over the roles set during JIT provisioning. Without the proper Group Attribute statements, users might end up without roles and lose access to Datadog. To prevent users from being locked out after JIT provisioning, make sure to review your mapping definitions and check your assertions before enabling both Mappings and JIT.
diff --git a/content/en/account_management/saml/mapping.md b/content/en/account_management/saml/mapping.md index be2f63bdc0f3d..11007a52a47ad 100644 --- a/content/en/account_management/saml/mapping.md +++ b/content/en/account_management/saml/mapping.md @@ -39,7 +39,7 @@ It's important to understand what is sent in an assertion before turning on mapp When a user logs in who has the specified identity provider attribute, they are automatically assigned the Datadog role. Likewise, if someone has that identity provider attribute removed, they lose access to the role (unless another mapping adds it). -
+
Important: If a user does not match any mapping, they lose any roles they had previously and are prevented from logging into the org with SAML. This includes roles that may be set with Just-In-Time provisioning. Double-check your mapping definitions and inspect your own assertions before enabling Mappings to prevent any scenarios where your users are unable to log in.
diff --git a/content/en/account_management/saml/okta.md b/content/en/account_management/saml/okta.md index f438facd79f62..fb3e283d6613c 100644 --- a/content/en/account_management/saml/okta.md +++ b/content/en/account_management/saml/okta.md @@ -11,7 +11,7 @@ further_reading: --- {{% site-region region="gov" %}} -
+
In the {{< region-param key="dd_site_name" >}} site, you must manually configure the Datadog application in Okta using the legacy instructions. Ignore the instructions on this page about the preconfigured Datadog application in the Okta application catalog.
{{% /site-region %}} diff --git a/content/en/account_management/scim/entra.md b/content/en/account_management/scim/entra.md index 3bba88a3d87a1..0209ca5956ca4 100644 --- a/content/en/account_management/scim/entra.md +++ b/content/en/account_management/scim/entra.md @@ -11,7 +11,7 @@ algolia: SCIM is available with the Infrastructure Pro and Infrastructure Enterprise plans.
-
+
Due to a Microsoft freeze on third-party app updates in Entra following a security incident in late 2024, Team provisioning via SCIM is unavailable. To create Teams in Datadog, use one of the supported alternatives: SAML mapping, Terraform, diff --git a/content/en/account_management/users/_index.md b/content/en/account_management/users/_index.md index 0dfa7fac44b87..9d9e526293dff 100644 --- a/content/en/account_management/users/_index.md +++ b/content/en/account_management/users/_index.md @@ -18,7 +18,7 @@ further_reading: text: "Manage your users with the USER API" --- {{< site-region region="gov" >}} -
The Datadog for Government site only supports SAML login.
+
The Datadog for Government site only supports SAML login.
{{< /site-region >}} Datadog's **User** tab in **Organization Settings** allows you to manage your users and their associated roles. Switch between list and grid views by clicking **List View** or **Grid View** on the right. diff --git a/content/en/actions/datastore/auth.md b/content/en/actions/datastore/auth.md index 2b14f28c9b9de..b54dfd1499808 100644 --- a/content/en/actions/datastore/auth.md +++ b/content/en/actions/datastore/auth.md @@ -8,7 +8,7 @@ further_reading: --- {{< site-region region="gov" >}} -
Datastores are not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+
Datastores are not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}} ## Required Datadog role permissions diff --git a/content/en/actions/datastore/create.md b/content/en/actions/datastore/create.md index 20d8b5c41fa93..0e999b7ad1ff9 100644 --- a/content/en/actions/datastore/create.md +++ b/content/en/actions/datastore/create.md @@ -15,7 +15,7 @@ further_reading: --- {{< site-region region="gov" >}} -
App Builder is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+
App Builder is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}} You can create and manage datastores from the [Datastore page][1]. diff --git a/content/en/actions/datastore/use.md b/content/en/actions/datastore/use.md index 676f47984c1fa..e8165da2c1d03 100644 --- a/content/en/actions/datastore/use.md +++ b/content/en/actions/datastore/use.md @@ -15,7 +15,7 @@ further_reading: --- {{< site-region region="gov" >}} -
App Builder is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
+
App Builder is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
{{< /site-region >}} You can reference and perform CRUD (Create, Read, Update, and Delete) operations on a datastore inside a workflow or an app. Additionally, you can create a workflow or app directly from an existing datastore. diff --git a/content/en/actions/private_actions/_index.md b/content/en/actions/private_actions/_index.md index a323f008769e0..605e7dd76c7dc 100644 --- a/content/en/actions/private_actions/_index.md +++ b/content/en/actions/private_actions/_index.md @@ -31,7 +31,7 @@ further_reading: Private actions allow your Datadog workflows and apps to interact with services hosted on your private network without exposing them to the public internet. To use private actions, you must install a private action runner on a host in your network using Docker or [Kubernetes][1] and pair the runner with a [connection][2]. -
To install a private action runner, your organization must have Remote Configuration enabled.
+
To install a private action runner, your organization must have Remote Configuration enabled.
When you first start the runner, it generates a private key for authentication with Datadog's servers. This private key is never accessible by Datadog and ensures you exclusive access. Datadog uses a public key derived from the private key as the means to authenticate specific runners. diff --git a/content/en/actions/private_actions/run_script.md b/content/en/actions/private_actions/run_script.md index dfc5bc8ab7b80..385e10200308f 100644 --- a/content/en/actions/private_actions/run_script.md +++ b/content/en/actions/private_actions/run_script.md @@ -7,7 +7,7 @@ disable_toc: false This page explains how to use the private action runner (PAR), which allows you to run custom scripts and Linux binaries within your Datadog workflows and apps. Unlike standard private actions that call specific APIs or services, the script action gives you the flexibility to execute arbitrary commands, shell scripts, and command-line tools directly from the private action runner in your private network. -
+
Security Notice: The PAR script action runs within a containerized environment using a dedicated Linux user named scriptuser for enhanced security. Datadog enforces container sandboxing and only accepts signed tasks, but you decide which binaries and scripts are allowed. Always review every command you add to the script action allow-list, especially ones that take dynamic user input. Ensure that your actions are configured with the least privileged commands, and carefully review the permissions you share through connections. For more information, see connection security considerations.
diff --git a/content/en/actions/workflows/track.md b/content/en/actions/workflows/track.md index 5fe174288d2e4..aede197c0147e 100644 --- a/content/en/actions/workflows/track.md +++ b/content/en/actions/workflows/track.md @@ -91,7 +91,7 @@ You can also filter the output by **Status** to see only `info`, `warn`, or `err ## Track workflow billing in Usage Attribution -
+
Usage Attribution is an advanced feature included in the Enterprise plan. For all other plans, contact your account representative or success@datadoghq.com to request this feature.
diff --git a/content/en/agent/configuration/agent-commands.md b/content/en/agent/configuration/agent-commands.md index a5850b7c5eca1..fbfa02d20d995 100644 --- a/content/en/agent/configuration/agent-commands.md +++ b/content/en/agent/configuration/agent-commands.md @@ -14,7 +14,7 @@ algolia: tags: ['agent status command'] --- -
+
For Linux-based systems where the service wrapper command is not available, consult the list of alternatives.
diff --git a/content/en/agent/configuration/dual-shipping.md b/content/en/agent/configuration/dual-shipping.md index ed23d6f92ea3a..dc42218045a4d 100644 --- a/content/en/agent/configuration/dual-shipping.md +++ b/content/en/agent/configuration/dual-shipping.md @@ -12,7 +12,7 @@ further_reading: text: "Send logs to external destinations with Observability Pipelines" --- -
+
Dual shipping can impact billing if you are sending data to multiple Datadog organizations. For more information about the impact of this configuration, contact Datadog Support.
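As context for the billing note above, metrics dual shipping is configured through the Agent's `additional_endpoints` option. The following is a minimal sketch, assuming a second org on the US1 site; the URL and the `<SECOND_ORG_API_KEY>` placeholder must be replaced with your second organization's values:

{{< code-block lang="yaml" filename="datadog.yaml" >}}
# Sketch only: send metrics to a second organization in addition to the primary one.
# Each additional endpoint counts as usage in that organization, which is
# why dual shipping can affect billing.
additional_endpoints:
  "https://app.datadoghq.com":
    - <SECOND_ORG_API_KEY>
{{< /code-block >}}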
diff --git a/content/en/agent/configuration/fips-compliance.md b/content/en/agent/configuration/fips-compliance.md index 5337a7bd4de60..9fea27f78dcf9 100644 --- a/content/en/agent/configuration/fips-compliance.md +++ b/content/en/agent/configuration/fips-compliance.md @@ -54,7 +54,7 @@ The Datadog FIPS Agent does **not** support the following: [1]: /opentelemetry/setup/ddot_collector ## Compliance guidelines -
+
This is not an exhaustive list. These requirements are a baseline only. You are responsible for evaluating your environment and implementing any additional controls needed to achieve full FIPS compliance.
The following baseline controls apply to each platform. Your system may require additional controls: diff --git a/content/en/agent/configuration/network.md b/content/en/agent/configuration/network.md index 220e111be523b..fe315aa7cc417 100644 --- a/content/en/agent/configuration/network.md +++ b/content/en/agent/configuration/network.md @@ -25,7 +25,7 @@ algolia: ## Overview -
+
Traffic is always initiated by the Agent to Datadog. No sessions are ever initiated from Datadog back to the Agent.
@@ -248,7 +248,7 @@ Add all of the `ip-ranges` to your inclusion list. While only a subset are activ ## Open ports -
+
All outbound traffic is sent over SSL through TCP or UDP.

Ensure the Agent is only accessible by your applications or trusted network sources using a firewall rule or similar network restriction. Untrusted access can allow malicious actors to perform several invasive actions, including but not limited to writing traces and metrics to your Datadog account, or obtaining information about your configuration and services. @@ -367,7 +367,7 @@ The APM receiver and the DogStatsD ports are located in the **Trace Collection C # receiver_port: 8126 {{< /code-block >}} -
If you change the DogStatsD port or APM receiver port value here, you must also change the APM tracing library configuration for the corresponding port. See the information about configuring ports in the Library Configuration docs for your language.
+
If you change the DogStatsD port or APM receiver port value here, you must also change the APM tracing library configuration for the corresponding port. See the information about configuring ports in the Library Configuration docs for your language.
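As a sketch, both ports are set in `datadog.yaml`; the values below are the defaults, shown only to illustrate where the settings live:

{{< code-block lang="yaml" filename="datadog.yaml" >}}
# If you change either port, mirror the change in your tracing library configuration.
dogstatsd_port: 8125     # UDP port that DogStatsD listens on
apm_config:
  receiver_port: 8126    # TCP port that the APM trace receiver listens on
{{< /code-block >}}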
## Using proxies diff --git a/content/en/agent/faq/circleci-incident-impact-on-datadog-agent.md b/content/en/agent/faq/circleci-incident-impact-on-datadog-agent.md index c0c515e4bea21..7943b8f1f1a9b 100644 --- a/content/en/agent/faq/circleci-incident-impact-on-datadog-agent.md +++ b/content/en/agent/faq/circleci-incident-impact-on-datadog-agent.md @@ -28,7 +28,7 @@ title: Impact of the CircleCI Security Incident on the Datadog Agent -
Summary: Check your RPM-based Linux hosts (RHEL, CentOS, Rocky Linux, AlmaLinux, Amazon Linux, SUSE/SLES, Fedora) to find and fix any that trust the key with fingerprint 60A389A44A0C32BAE3C03F0B069B56F54172A230.
+
Summary: Check your RPM-based Linux hosts (RHEL, CentOS, Rocky Linux, AlmaLinux, Amazon Linux, SUSE/SLES, Fedora) to find and fix any that trust the key with fingerprint 60A389A44A0C32BAE3C03F0B069B56F54172A230.
On January 4th, 2023, Datadog was notified by CircleCI that they were investigating a [security incident][1] that may have led to leaking of stored secrets. Datadog identified a single secret stored in CircleCI that could theoretically be misused by a potential attacker, an old RPM GNU Privacy Guard (GPG) private signing key and its passphrase. This page provides information about the implications of the potential leak, actions you should take on your hosts, and the measures Datadog is taking to mitigate any risks to our customers. diff --git a/content/en/agent/faq/docker-hub.md b/content/en/agent/faq/docker-hub.md index e5cafaf9fa4a5..83be56d0bf138 100644 --- a/content/en/agent/faq/docker-hub.md +++ b/content/en/agent/faq/docker-hub.md @@ -2,7 +2,7 @@ title: Docker Hub --- -
Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.
+
Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.
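For Helm-based installs, switching registries might look like the following minimal sketch; the top-level `registry` value is assumed to be supported by your chart version, so verify it against the chart's values file:

{{< code-block lang="yaml" filename="values.yaml" >}}
# Sketch: pull Agent and Cluster Agent images from GCR instead of Docker Hub.
registry: gcr.io/datadoghq
# Or use the public ECR mirror instead:
# registry: public.ecr.aws/datadog
{{< /code-block >}}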
If you are using Docker, there are several container images available through [GCR][11], and [ECR][12]. If you need to use Docker Hub: diff --git a/content/en/agent/faq/fips_proxy.md b/content/en/agent/faq/fips_proxy.md index e05f3e280bdf5..f24d7729dc9ce 100644 --- a/content/en/agent/faq/fips_proxy.md +++ b/content/en/agent/faq/fips_proxy.md @@ -8,13 +8,13 @@ further_reading: text: "FIPS Compliance" --- -
The Datadog FIPS Proxy is no longer the recommended solution for FIPS-compliant encryption of the Datadog Agent. Use the Datadog FIPS Agent instead.
+
The Datadog FIPS Proxy is no longer the recommended solution for FIPS-compliant encryption of the Datadog Agent. Use the Datadog FIPS Agent instead.
The Datadog Agent FIPS Proxy ensures that communication between the Datadog Agent and Datadog uses FIPS-compliant encryption. The Datadog Agent FIPS Proxy is a separately distributed component that you deploy on the same host as the Datadog Agent. The proxy acts as an intermediary between the Agent and Datadog intake. The Agent communicates with the Datadog Agent FIPS Proxy, which encrypts payloads using a FIPS 140-2 validated cryptography and relays the payloads to Datadog. The Datadog Agent and the Agent FIPS Proxy must be configured in tandem to communicate with one another. -
FIPS compliance is not retained if the Datadog Agent FIPS Proxy and the Datadog Agent are not on the same host. +
FIPS compliance is not retained if the Datadog Agent FIPS Proxy and the Datadog Agent are not on the same host.
Similarly, FIPS compliance is not retained if the fips.enabled option is not set to true in datadog.yaml.
## Supported platforms and limitations @@ -120,7 +120,7 @@ The `https` option is set to `false` because the Agent uses HTTP to communicate **Host security and hardening are your responsibilities.** -
The fips.enabled setting defaults to false in the Agent. It must be set to true to ensure all communications are forwarded through the Datadog Agent FIPS Proxy.

If fips.enabled is not set to true, the Agent is not FIPS Compliant.
+
The fips.enabled setting defaults to false in the Agent. It must be set to true to ensure all communications are forwarded through the Datadog Agent FIPS Proxy.

If fips.enabled is not set to true, the Agent is not FIPS-compliant.
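For reference, a minimal sketch of the corresponding `datadog.yaml` block; the port value assumes the proxy's default port range and should match your FIPS proxy configuration:

{{< code-block lang="yaml" filename="datadog.yaml" >}}
# Sketch: route all Agent traffic through the local FIPS proxy.
fips:
  enabled: true            # defaults to false; required for FIPS compliance
  port_range_start: 9803   # first port in the local proxy's listening range
  https: false             # the Agent reaches the local proxy over HTTP
{{< /code-block >}}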
### Verify your installation @@ -182,7 +182,7 @@ The `use_https` option is set to `false` because the Agent uses HTTP to communic **Host security and hardening are your responsibilities.** -
The fips.enabled setting defaults to false in the Agent. It must be set to true to ensure all communications are forwarded through the Datadog Agent FIPS Proxy.

If fips.enabled is not set to true, the Agent is not FIPS Compliant.
+
The fips.enabled setting defaults to false in the Agent. It must be set to true to ensure all communications are forwarded through the Datadog Agent FIPS Proxy.

If fips.enabled is not set to true, the Agent is not FIPS-compliant.
{{% /tab %}} diff --git a/content/en/agent/faq/proxy_example_haproxy.md b/content/en/agent/faq/proxy_example_haproxy.md index 2aa314df17240..48dab84bc0709 100644 --- a/content/en/agent/faq/proxy_example_haproxy.md +++ b/content/en/agent/faq/proxy_example_haproxy.md @@ -11,7 +11,7 @@ private: true ## Overview -
+
Datadog discourages forwarding traffic through software like HAProxy or NGINX because it requires you to manually configure and maintain the list of specific Datadog endpoints the Agent needs to reach. This list can change, leading to potential data loss if it is not kept up to date. The only exception is if you need Deep Packet Inspection (DPI) capabilities, in which case you might consider HAProxy or NGINX because they allow you to disable TLS or use your own TLS certificates and inspect the traffic.
diff --git a/content/en/agent/faq/proxy_example_nginx.md b/content/en/agent/faq/proxy_example_nginx.md index 93415ca250560..312e1ec0f264e 100644 --- a/content/en/agent/faq/proxy_example_nginx.md +++ b/content/en/agent/faq/proxy_example_nginx.md @@ -9,7 +9,7 @@ private: true ## Overview -
+
Datadog discourages forwarding traffic through software like HAProxy or NGINX because it requires you to manually configure and maintain the list of specific Datadog endpoints the Agent needs to reach. This list can change, leading to potential data loss if it is not kept up to date. The only exception is if you need Deep Packet Inspection (DPI) capabilities, in which case you might consider HAProxy or NGINX because they allow you to disable TLS or use your own TLS certificates and inspect the traffic.
diff --git a/content/en/agent/faq/rpm-gpg-key-rotation-agent-6.md b/content/en/agent/faq/rpm-gpg-key-rotation-agent-6.md index 7fc251e2c09b2..6f10058b7dd63 100644 --- a/content/en/agent/faq/rpm-gpg-key-rotation-agent-6.md +++ b/content/en/agent/faq/rpm-gpg-key-rotation-agent-6.md @@ -2,7 +2,7 @@ title: RPM GPG Key Rotation --- -
+
This page pertains to the 2019 key rotation. For the 2022 key rotation, consult the 2022 Linux Agent Key Rotation documentation.
diff --git a/content/en/agent/guide/agent-5-kubernetes-basic-agent-usage.md b/content/en/agent/guide/agent-5-kubernetes-basic-agent-usage.md index 3702276f4c9ce..ba1408f1ff607 100644 --- a/content/en/agent/guide/agent-5-kubernetes-basic-agent-usage.md +++ b/content/en/agent/guide/agent-5-kubernetes-basic-agent-usage.md @@ -8,7 +8,7 @@ private: true {{< img src="integrations/kubernetes/k8sdashboard.png" alt="Kubernetes Dashboard" >}} -
+
The Datadog Agent v5 is supported up to Kubernetes version 1.8. For later versions of Kubernetes, use the Datadog Agent v6.
diff --git a/content/en/agent/guide/agent-5-ports.md b/content/en/agent/guide/agent-5-ports.md index f30f90ecf791a..0892b096b7452 100644 --- a/content/en/agent/guide/agent-5-ports.md +++ b/content/en/agent/guide/agent-5-ports.md @@ -6,7 +6,7 @@ private: true This page covers the ports used by Agent 5. For information on the latest version of the Agent, see [Network Traffic][1]. -
+
All outbound traffic is sent over SSL through TCP or UDP.

Ensure the Agent is only accessible by your applications or trusted network sources using a firewall rule or similar network restriction. Untrusted access can allow malicious actors to perform several invasive actions, including but not limited to writing traces and metrics to your Datadog account, or obtaining information about your configuration and services. diff --git a/content/en/agent/guide/agent-5-proxy.md b/content/en/agent/guide/agent-5-proxy.md index 477ef83511e88..483f6defe5f45 100644 --- a/content/en/agent/guide/agent-5-proxy.md +++ b/content/en/agent/guide/agent-5-proxy.md @@ -26,7 +26,7 @@ For specific information regarding Squid, see the [Squid](#squid) section of thi Traditional web proxies are supported natively by the Agent. If you need to connect to the Internet through a proxy, edit your Agent configuration file. -
+
The <HOST>:<PORT> used to proxy metrics can NOT be used to proxy logs. See the Proxy for Logs page.
diff --git a/content/en/agent/guide/azure-private-link.md b/content/en/agent/guide/azure-private-link.md index 3662ebe596d8a..06ae301cf9e4b 100644 --- a/content/en/agent/guide/azure-private-link.md +++ b/content/en/agent/guide/azure-private-link.md @@ -4,7 +4,7 @@ description: Configure Azure Private Link to send telemetry to Datadog securely --- {{% site-region region="us,us5,eu,gov,ap1,ap2" %}} -
This feature is not supported for the selected Datadog site.
+
This feature is not supported for the selected Datadog site.
{{% /site-region %}} {{% site-region region="us3" %}} diff --git a/content/en/agent/guide/datadog-disaster-recovery.md b/content/en/agent/guide/datadog-disaster-recovery.md index 0ae00367ef3c4..eddc8850373c8 100644 --- a/content/en/agent/guide/datadog-disaster-recovery.md +++ b/content/en/agent/guide/datadog-disaster-recovery.md @@ -72,7 +72,7 @@ Email your new org name to your [Customer Success Manager](mailto:success@datado {{% collapse-content title="Retrieve the public IDs and link your DDR and primary orgs " level="h5" %}} -
For security reasons, Datadog is unable to link the orgs on your behalf.
+
For security reasons, Datadog is unable to link the orgs on your behalf.
After the Datadog team has set your DDR org, use the Datadog [public API endpoint][8] to retrieve the public IDs of the primary and DDR org. @@ -167,7 +167,7 @@ Here's an example of a datadog-sync-cli command for syncing log configurations: ``` datadog-sync migrate --config config --resources="users,roles,logs_pipelines,logs_pipelines_order,logs_indexes,logs_indexes_order,logs_metrics,logs_restriction_queries" --cleanup=Force ``` -
datadog-sync-cli limitation for log standard attributes
The datadog-sync-cli is regularly being updated with new resources. At this time, syncing log standard attributes is not supported for private beta. If you use standard attributes with your log pipelines and are remapping your logs, attributes are a dependency that you need to manually re-configure in your DDR org. See the Datadog standard attribute documentation for support. +
datadog-sync-cli limitation for log standard attributes
The datadog-sync-cli is regularly updated with new resources. At this time, syncing log standard attributes is not supported during the private beta. If you use standard attributes with your log pipelines and are remapping your logs, attributes are a dependency that you need to manually reconfigure in your DDR org. See the Datadog standard attribute documentation for support.
#### Verify availability at the DDR site @@ -216,7 +216,7 @@ Then, follow the prompt to scope the hosts and telemetry (metrics, logs, traces) {{< img src="/agent/guide/ddr/ddr-fa-policy-scope.png" alt="Scope the hosts and telemetry required to failover" style="width:80%;" >}} -
+
Note: Cloud Integrations can run in either your primary or your DDR Datadog site, but not in both at the same time, so failing them over stops Cloud Integration data collection in your primary site. During an integration failover, integrations run only in the DDR data center. When you are no longer in failover, disable the failover policy to return integration data collection to the primary org.
@@ -334,7 +334,7 @@ You can test failover for your cloud integrations from your DDR organization's l On the failover landing page, you can check the status of your DDR org, or click **Fail over your integrations** to test your cloud integration failover. -
+
When no longer in failover, disable the failover policy in the DDR org to return integration data collection to the primary org.
diff --git a/content/en/agent/guide/dogstream.md b/content/en/agent/guide/dogstream.md index 7e337f917988c..56b30c88e9b02 100644 --- a/content/en/agent/guide/dogstream.md +++ b/content/en/agent/guide/dogstream.md @@ -5,7 +5,7 @@ aliases: - /agent/faq/dogstream --- -
+
This is a deprecated feature of Agent 5. New feature releases are discontinued.
@@ -61,7 +61,7 @@ If your custom log parser is not working, the first thing to check are the Agent * If all goes well you should see `dogstream: parsing {filename} with {function name} (requested {config option text})`. -
+
To test that dogstreams are working, append a new line (don't edit an existing one) to any log file you've configured the Agent to watch. The Agent only tails the end of each log file, so it doesn't notice changes you make elsewhere in the file.
diff --git a/content/en/agent/guide/environment-variables.md b/content/en/agent/guide/environment-variables.md index 00cf90c606e0a..ba02f430c0a27 100644 --- a/content/en/agent/guide/environment-variables.md +++ b/content/en/agent/guide/environment-variables.md @@ -16,7 +16,7 @@ further_reading: text: "Proxy environment variables" --- -
+
For Agent v5, reference the Docker Agent GitHub repo.
diff --git a/content/en/agent/guide/gcp-private-service-connect.md b/content/en/agent/guide/gcp-private-service-connect.md index 33cae03d23957..97752a41bb991 100644 --- a/content/en/agent/guide/gcp-private-service-connect.md +++ b/content/en/agent/guide/gcp-private-service-connect.md @@ -11,7 +11,7 @@ further_reading: --- {{% site-region region="us,us3,gov,ap1,ap2" %}} -
This feature is not supported for the selected Datadog site.
+
This feature is not supported for the selected Datadog site.
{{% /site-region %}} {{% site-region region="us5" %}} diff --git a/content/en/agent/guide/private-link.md b/content/en/agent/guide/private-link.md index 6bb55defc0dd7..9d11b00a1a594 100644 --- a/content/en/agent/guide/private-link.md +++ b/content/en/agent/guide/private-link.md @@ -26,7 +26,7 @@ further_reading: --- {{% site-region region="us3,us5,eu,gov" %}} -
Datadog PrivateLink does not support the selected Datadog site.
+
Datadog PrivateLink does not support the selected Datadog site.
{{% /site-region %}} {{% site-region region="us,ap1,ap2" %}} diff --git a/content/en/agent/guide/setup_remote_config.md b/content/en/agent/guide/setup_remote_config.md index 7b3a7ba4bf217..b877a4cb7162a 100644 --- a/content/en/agent/guide/setup_remote_config.md +++ b/content/en/agent/guide/setup_remote_config.md @@ -100,7 +100,7 @@ datadog: ### At the organization level -
Datadog does not recommend disabling Remote Configuration at the organization level. Disabling Remote Configuration at the organization level prevents Datadog components in several products across your organization from receiving configurations from Datadog.
+
Datadog does not recommend disabling Remote Configuration at the organization level. Disabling Remote Configuration at the organization level prevents Datadog components in several products across your organization from receiving configurations from Datadog.
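Disabling Remote Configuration for a single Agent is a separate, host-level setting. A minimal sketch, assuming the standard `remote_configuration` option in `datadog.yaml`:

{{< code-block lang="yaml" filename="datadog.yaml" >}}
# Sketch: opt one host out of Remote Configuration without
# disabling the capability for the whole organization.
remote_configuration:
  enabled: false
{{< /code-block >}}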
To disable Remote Configuration at the organization level: 1. Ensure you have the required `org_management` permission. diff --git a/content/en/agent/logs/advanced_log_collection.md b/content/en/agent/logs/advanced_log_collection.md index 30ec5fafe0239..0f50006cab7ac 100644 --- a/content/en/agent/logs/advanced_log_collection.md +++ b/content/en/agent/logs/advanced_log_collection.md @@ -551,7 +551,7 @@ spec: {{% /tab %}} {{< /tabs >}} -
Important! Regex patterns for multi-line logs must start at the beginning of a log. Patterns cannot be matched mid-line. A never matching pattern may cause log line losses.

Log collection works with precision of up to millisecond. Logs with greater precision are not sent even if they match the pattern.
+
Important! Regex patterns for multi-line logs must match at the beginning of a log line; patterns cannot match mid-line. A pattern that never matches can cause the loss of log lines.

Log collection works with up to millisecond precision. Logs with greater precision are not sent, even if they match the pattern.
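For example, here is a sketch of a file integration whose entries begin with an ISO date; the path, service, and source values are hypothetical:

{{< code-block lang="yaml" filename="conf.d/app.d/conf.yaml" >}}
logs:
  - type: file
    path: /var/log/app/app.log      # hypothetical path
    service: app                    # hypothetical service name
    source: custom
    log_processing_rules:
      - type: multi_line
        name: new_entry_starts_with_date
        # Lines are aggregated into one entry until the next line
        # that starts with a date; the pattern matches at line start.
        pattern: \d{4}-\d{2}-\d{2}
{{< /code-block >}}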
More examples: diff --git a/content/en/agent/logs/auto_multiline_detection.md b/content/en/agent/logs/auto_multiline_detection.md index 9a68179200241..1e76f320a5e12 100644 --- a/content/en/agent/logs/auto_multiline_detection.md +++ b/content/en/agent/logs/auto_multiline_detection.md @@ -12,7 +12,7 @@ algolia: tags: ['advanced log filter'] --- -
This feature is available for Agent version 7.65.0+ and above. For older Agent versions or to explicitly enable the legacy implementation, see Auto Multi-line Detection and Aggregation (Legacy).
+
This feature is available for Agent versions 7.65.0 and above. For older Agent versions, or to explicitly enable the legacy implementation, see Auto Multi-line Detection and Aggregation (Legacy).
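A minimal sketch of enabling the feature globally, assuming Agent 7.65.0 or later:

{{< code-block lang="yaml" filename="datadog.yaml" >}}
# Sketch: enable automatic multi-line aggregation for all configured log integrations.
logs_config:
  auto_multi_line_detection: true
{{< /code-block >}}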
## Overview diff --git a/content/en/agent/logs/auto_multiline_detection_legacy.md b/content/en/agent/logs/auto_multiline_detection_legacy.md index 1e661d40f1522..990374222aff6 100644 --- a/content/en/agent/logs/auto_multiline_detection_legacy.md +++ b/content/en/agent/logs/auto_multiline_detection_legacy.md @@ -27,7 +27,7 @@ algolia: tags: ['advanced log filter'] --- -
This document applies to Agent versions earlier than v7.65.0, or when the legacy auto multi-line detection is explicitly enabled. For newer Agent versions, please see Auto Multi-line Detection and Aggregation.
+
This document applies to Agent versions earlier than v7.65.0, or to cases where the legacy auto multi-line detection is explicitly enabled. For newer Agent versions, see Auto Multi-line Detection and Aggregation.
## Global automatic multi-line aggregation With Agent 7.37+, you can enable `auto_multi_line_detection` to automatically detect [common multi-line patterns][1] across **all** configured log integrations. diff --git a/content/en/agent/logs/proxy.md b/content/en/agent/logs/proxy.md index eb49d897ea418..2afff5ba24c0c 100644 --- a/content/en/agent/logs/proxy.md +++ b/content/en/agent/logs/proxy.md @@ -14,7 +14,7 @@ further_reading: --- {{% site-region region="us3,eu,us5,gov,ap1,ap2" %}} -
+
TCP is not available for the {{< region-param key="dd_site_name" >}} site. Contact support for more information.
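On these sites, force the HTTPS transport for logs instead. A minimal sketch, assuming the standard `logs_config.use_http` option:

{{< code-block lang="yaml" filename="datadog.yaml" >}}
# Sketch: send logs over HTTPS, since TCP is unavailable on this site.
logs_config:
  use_http: true
{{< /code-block >}}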
{{% /site-region %}} diff --git a/content/en/agent/supported_platforms/sccm.md b/content/en/agent/supported_platforms/sccm.md index 664b1e8f65309..101b12fc90e22 100644 --- a/content/en/agent/supported_platforms/sccm.md +++ b/content/en/agent/supported_platforms/sccm.md @@ -52,7 +52,7 @@ Microsoft SCCM (Systems Center Configuration Manager) is a configuration managem ### Deploy the Datadog Agent application -
Before deploying the Datadog Agent application, make sure you've installed and configured Distribution Points in Configuration Manager
+
Before deploying the Datadog Agent application, make sure you've installed and configured Distribution Points in Configuration Manager.
1. Go to **Software Library** > **Overview** > **Application Management** > **Applications** and select the Datadog Agent application you created earlier. 1. From the **Home** tab in the **Deployment** group, select **Deploy**. diff --git a/content/en/agent/troubleshooting/send_a_flare.md b/content/en/agent/troubleshooting/send_a_flare.md index d9f8ac0251c3b..0424d5c29702c 100644 --- a/content/en/agent/troubleshooting/send_a_flare.md +++ b/content/en/agent/troubleshooting/send_a_flare.md @@ -29,7 +29,7 @@ When contacting Datadog Support with Remote Configuration enabled for an Agent, ## Send a flare from the Datadog site {{< site-region region="gov" >}} -
Sending an Agent Flare from Fleet Automation is not supported for this site.
+
Sending an Agent Flare from Fleet Automation is not supported for this site.
{{< /site-region >}} To send a flare from the Datadog site, make sure you've enabled [Fleet Automation][2] and [Remote configuration][3] on the Agent. diff --git a/content/en/api/latest/rate-limits/_index.md b/content/en/api/latest/rate-limits/_index.md index a2dc5f25d1ac1..14ac2250cc09e 100644 --- a/content/en/api/latest/rate-limits/_index.md +++ b/content/en/api/latest/rate-limits/_index.md @@ -18,7 +18,7 @@ Regarding the API rate limit policy: - The rate limit for event submission is `250,000` events per minute per organization. - The rate limits for endpoints vary and are included in the headers detailed below. These can be extended on demand. -
+
The list above does not cover every rate limit on Datadog APIs. If you are experiencing rate limiting, reach out to support for more information about the APIs you're using and their limits.
| Rate Limit Headers | Description | diff --git a/content/en/api/latest/scopes/_index.md b/content/en/api/latest/scopes/_index.md index 6ec1b1a81eff6..81083fdf7d248 100644 --- a/content/en/api/latest/scopes/_index.md +++ b/content/en/api/latest/scopes/_index.md @@ -7,7 +7,7 @@ disable_sidebar: true Scopes are an authorization mechanism that allow you to limit and define the specific access applications have to an organization's Datadog data. When authorized to access data on behalf of a user or service account, applications can only access the information explicitly permitted by their assigned scopes. -
This page lists only the authorization scopes that can be assigned to OAuth clients. To view the full list of assignable permissions for scoped application keys, see Datadog Role Permissions. +
This page lists only the authorization scopes that can be assigned to OAuth clients. To view the full list of assignable permissions for scoped application keys, see Datadog Role Permissions.
  • OAuth clients → Can only be assigned authorization scopes (limited set).
  • diff --git a/content/en/api/v1/rate-limits/_index.md b/content/en/api/v1/rate-limits/_index.md index f9e71baae225e..3f3434b4b9c19 100644 --- a/content/en/api/v1/rate-limits/_index.md +++ b/content/en/api/v1/rate-limits/_index.md @@ -18,7 +18,7 @@ Regarding the API rate limit policy: - The rate limit for event submission is `50,000` events per minute per organization. - The rate limits for endpoints vary and are included in the headers detailed below. These can be extended on demand. -
    +
    The list above does not cover every rate limit on Datadog APIs. If you are experiencing rate limiting, reach out to support for more information about the APIs you're using and their limits.
    | Rate Limit Headers | Description | diff --git a/content/en/api/v2/rate-limits/_index.md b/content/en/api/v2/rate-limits/_index.md index 2b0295d5938a0..8ab293d54a9d0 100644 --- a/content/en/api/v2/rate-limits/_index.md +++ b/content/en/api/v2/rate-limits/_index.md @@ -18,7 +18,7 @@ Regarding the API rate limit policy: - The rate limit for event submission is `500,000` events per hour per organization. - The rate limits for endpoints vary and are included in the headers detailed below. These can be extended on demand. -
    +
    The list above does not cover every rate limit on Datadog APIs. If you are experiencing rate limiting, reach out to support for more information about the APIs you're using and their limits.
    | Rate Limit Headers | Description | diff --git a/content/en/bits_ai/mcp_server/setup/_index.md b/content/en/bits_ai/mcp_server/setup/_index.md index 3c81e55685ddd..c4d35027c3426 100644 --- a/content/en/bits_ai/mcp_server/setup/_index.md +++ b/content/en/bits_ai/mcp_server/setup/_index.md @@ -24,7 +24,7 @@ further_reading: The Datadog MCP Server is in Preview. There is no charge for using the Datadog MCP Server during the Preview. If you're interested in this feature and need access, complete this form. Learn more about the MCP Server on the Datadog blog. {{< /callout >}} -
    +

    Disclaimers

    • The Datadog MCP Server is not supported for production use during the Preview.
    • diff --git a/content/en/change_tracking/_index.md b/content/en/change_tracking/_index.md index 3c0d3cd6b47c1..956b259fbde9d 100644 --- a/content/en/change_tracking/_index.md +++ b/content/en/change_tracking/_index.md @@ -35,7 +35,7 @@ further_reading: --- {{< site-region region="gov" >}} -
      Change Tracking is not available in the selected site ({{< region-param key="dd_site_name" >}})
      +
      Change Tracking is not available in the selected site ({{< region-param key="dd_site_name" >}}).
      {{< /site-region >}} ## Overview diff --git a/content/en/cloud_cost_management/multisource_querying/_index.md b/content/en/cloud_cost_management/multisource_querying/_index.md index f7ead05400ed8..54729ae5af45c 100644 --- a/content/en/cloud_cost_management/multisource_querying/_index.md +++ b/content/en/cloud_cost_management/multisource_querying/_index.md @@ -104,7 +104,7 @@ The following FOCUS tags are available in Cloud Cost Management: The `all.cost` metric has [Container costs allocated][13] for AWS, Azure, and Google Cloud costs, so you can query by the [relevant container tags][14]. -
      If your organization tags with any of these FOCUS tags, Datadog recommends updating your tag key on the underlying infrastructure so that tag values do not overlap with FOCUS tag values in Cloud Cost Management.
      +
      If your organization already uses any of these FOCUS tag keys, Datadog recommends updating the tag key on the underlying infrastructure so that your tag values do not overlap with the FOCUS tag values in Cloud Cost Management.
      ## Currency conversion Cloud Cost Management retrieves the billing currency from each cloud provider's bill. When processing costs from multiple providers in different currencies, cost charges are converted to USD. This conversion is performed using the average monthly exchange rate, which is updated daily. This ensures that Cloud Cost Management can consistently and accurately represent all cost data, regardless of its original currency. To view your cost in the original billing currency, filter to a single provider. diff --git a/content/en/cloud_cost_management/recommendations/custom_recommendations.md b/content/en/cloud_cost_management/recommendations/custom_recommendations.md index ccbd8fc7f8dc2..4f8f8022822d8 100644 --- a/content/en/cloud_cost_management/recommendations/custom_recommendations.md +++ b/content/en/cloud_cost_management/recommendations/custom_recommendations.md @@ -28,7 +28,7 @@ With custom recommendations, you can: ## Customize a recommendation -
      To customize a recommendation, you must be assigned the **Cloud Cost Management - Cloud Cost Management Write** permission.
      +
      To customize a recommendation, you must be assigned the **Cloud Cost Management - Cloud Cost Management Write** permission.
      Customizations take effect within 24 hours, the next time recommendations are generated.
      diff --git a/content/en/cloud_cost_management/setup/aws.md b/content/en/cloud_cost_management/setup/aws.md index 1fabd9d6c11b5..eb068475f35a8 100644 --- a/content/en/cloud_cost_management/setup/aws.md +++ b/content/en/cloud_cost_management/setup/aws.md @@ -108,7 +108,7 @@ To enable Datadog to locate the Cost and Usage Report, complete the fields with **Note**: Datadog only supports legacy CURs generated by AWS. Do not modify or move the files generated by AWS, or attempt to provide access to files generated by a 3rd party. {{< site-region region="gov" >}} -
      The AWS Cost and Usage Reports endpoint is used to validate the above fields against the CUR export in your S3 bucket. This endpoint is not FIPS validated.
      +
      The AWS Cost and Usage Reports endpoint is used to validate the above fields against the CUR export in your S3 bucket. This endpoint is not FIPS-validated.
      {{< /site-region >}} ### Configure access to the Cost and Usage Report diff --git a/content/en/cloud_cost_management/setup/google_cloud.md b/content/en/cloud_cost_management/setup/google_cloud.md index 7abd20aef8dd3..7039cfeee527d 100644 --- a/content/en/cloud_cost_management/setup/google_cloud.md +++ b/content/en/cloud_cost_management/setup/google_cloud.md @@ -32,7 +32,7 @@ To use Google Cloud Cost Management in Datadog, follow these steps: Navigate to [Setup & Configuration][3], and select a Google Cloud Platform integration. If you do not see your desired Service Account in the list, go to the [Google Cloud Platform integration][4] to configure it. -
      +
      The Datadog Google Cloud Platform integration allows Cloud Costs to automatically monitor all projects this service account has access to. To limit infrastructure monitoring hosts for these projects, apply tags to the hosts. Then define whether the tags should be included or excluded from monitoring in the Limit Metric Collection Filters section of the integration page.
      diff --git a/content/en/cloud_cost_management/tag_explorer/_index.md b/content/en/cloud_cost_management/tag_explorer/_index.md index 60212c5542c1f..82f5167e1bf7f 100644 --- a/content/en/cloud_cost_management/tag_explorer/_index.md +++ b/content/en/cloud_cost_management/tag_explorer/_index.md @@ -65,7 +65,7 @@ For Google Cloud tags, select **Google** from the dropdown menu on the top right {{% /tab %}} {{% tab "Datadog" %}} -
      Daily Datadog costs are in Preview.
      +
      Daily Datadog costs are in Preview.
      For Datadog tags, select **Datadog** from the dropdown menu on the top right corner. @@ -74,7 +74,7 @@ For Datadog tags, select **Datadog** from the dropdown menu on the top right cor {{% /tab %}} {{% tab "Confluent Cloud" %}} -
      Confluent Cloud costs are in Preview.
      +
      Confluent Cloud costs are in Preview.
      For Confluent Cloud tags, select **Confluent Cloud** from the dropdown menu on the top right corner. @@ -83,7 +83,7 @@ For Confluent Cloud tags, select **Confluent Cloud** from the dropdown menu on t {{% /tab %}} {{% tab "Databricks" %}} -
      Databricks costs are in Preview.
      +
      Databricks costs are in Preview.
      For Databricks tags, select **Databricks** from the dropdown menu on the top right corner. @@ -92,7 +92,7 @@ For Databricks tags, select **Databricks** from the dropdown menu on the top rig {{% /tab %}} {{% tab "Fastly" %}} -
      Fastly costs are in Preview.
      +
      Fastly costs are in Preview.
      For Fastly tags, select **Fastly** from the dropdown menu on the top right corner. @@ -101,7 +101,7 @@ For Fastly tags, select **Fastly** from the dropdown menu on the top right corne {{% /tab %}} {{% tab "Elastic Cloud" %}} -
      Elastic Cloud costs are in Preview.
      +
      Elastic Cloud costs are in Preview.
      For Elastic Cloud tags, select **Elastic Cloud** from the dropdown menu on the top right corner. @@ -110,7 +110,7 @@ For Elastic Cloud tags, select **Elastic Cloud** from the dropdown menu on the t {{% /tab %}} {{% tab "MongoDB" %}} -
      MongoDB costs are in Preview.
      +
      MongoDB costs are in Preview.
      For MongoDB tags, select **MongoDB** from the dropdown menu on the top right corner. @@ -119,7 +119,7 @@ For MongoDB tags, select **MongoDB** from the dropdown menu on the top right cor {{% /tab %}} {{% tab "OpenAI" %}} -
      OpenAI costs are in Preview.
      +
      OpenAI costs are in Preview.
      For OpenAI tags, select **OpenAI** from the dropdown menu on the top right corner. @@ -128,7 +128,7 @@ For OpenAI tags, select **OpenAI** from the dropdown menu on the top right corne {{% /tab %}} {{% tab "Snowflake" %}} -
      Snowflake costs are in Preview.
      +
      Snowflake costs are in Preview.
      For Snowflake tags, select **Snowflake** from the dropdown menu on the top right corner. @@ -137,7 +137,7 @@ For Snowflake tags, select **Snowflake** from the dropdown menu on the top right {{% /tab %}} {{% tab "Twilio" %}} -
      Twilio costs are in Preview.
      +
      Twilio costs are in Preview.
      For Twilio tags, select **Twilio** from the dropdown menu on the top right corner. diff --git a/content/en/cloud_cost_management/tag_pipelines.md b/content/en/cloud_cost_management/tag_pipelines.md index 7b4252166bb77..f9d4ebc844b1a 100644 --- a/content/en/cloud_cost_management/tag_pipelines.md +++ b/content/en/cloud_cost_management/tag_pipelines.md @@ -26,7 +26,7 @@ When tag pipelines change, the new rules are automatically applied to the most r To create a ruleset, navigate to [**Cloud Cost > Settings > Tag Pipelines**][1]. -
      You can create up to 100 rules. API-based Reference Tables are not supported.
      +
      You can create up to 100 rules. API-based Reference Tables are not supported.
      Before creating individual rules, create a ruleset (a folder for your rules) by clicking **+ New Ruleset**. diff --git a/content/en/cloudcraft/account-management/cancel-subscription.md b/content/en/cloudcraft/account-management/cancel-subscription.md index 88408848bad44..d74f55b1647fe 100644 --- a/content/en/cloudcraft/account-management/cancel-subscription.md +++ b/content/en/cloudcraft/account-management/cancel-subscription.md @@ -26,7 +26,7 @@ If you do not see the **Cancel subscription** option, you may have bought your s - If you bought your subscription through the Datadog sales team, it does not auto-renew and expires at the end of your billing cycle. - If you bought your subscription through the AWS Marketplace, the cancellation process must be done through your AWS account. For more information, see [Cancel your product subscription][2] in the AWS Marketplace documentation page. -
      Canceling a subscription does not remove your data from Cloudcraft's servers. Instead, it changes your account to the free plan. If you wish to delete your account and all data from Cloudcraft's servers, contact the Cloudcraft support team. +
      Canceling a subscription does not remove your data from Cloudcraft's servers. Instead, it changes your account to the free plan. If you wish to delete your account and all data from Cloudcraft's servers, contact the Cloudcraft support team.
      [1]: /cloudcraft/getting-started/using-bits-menu/ diff --git a/content/en/cloudcraft/getting-started/crafting-better-diagrams.md b/content/en/cloudcraft/getting-started/crafting-better-diagrams.md index 66c90e46afefe..229ebc9e75a3b 100644 --- a/content/en/cloudcraft/getting-started/crafting-better-diagrams.md +++ b/content/en/cloudcraft/getting-started/crafting-better-diagrams.md @@ -33,7 +33,7 @@ Under **Region**, select the regions you want to scan. By default, `Global` and After you make your selections, regions are scanned automatically and the number of resources found is displayed next to the region name. You can click the **Sync** button above the **Region** section to trigger a manual scan of all selected regions. -
      Selecting many regions may impact performance of the live scanning process.
      +
      Selecting many regions may impact performance of the live scanning process.
      ## Filter resources diff --git a/content/en/cloudcraft/getting-started/system-requirements.md b/content/en/cloudcraft/getting-started/system-requirements.md index dc8b189f7146b..f1df341c18bb5 100644 --- a/content/en/cloudcraft/getting-started/system-requirements.md +++ b/content/en/cloudcraft/getting-started/system-requirements.md @@ -4,4 +4,4 @@ title: System Requirements For the ideal user experience, Cloudcraft recommends using Google Chrome or other Chromium-based browsers, like Microsoft Edge, Brave, and Opera. You may experience performance issues when using other browsers, such as Firefox and Safari. -
      Cloudcraft might not work correctly with beta or pre-release versions of browsers.
      +
      Cloudcraft might not work correctly with beta or pre-release versions of browsers.
      diff --git a/content/en/cloudprem/configure/ingress.md b/content/en/cloudprem/configure/ingress.md index d067701e20724..c277258c08529 100644 --- a/content/en/cloudprem/configure/ingress.md +++ b/content/en/cloudprem/configure/ingress.md @@ -20,7 +20,7 @@ Ingress is a critical component of your CloudPrem deployment. The Helm chart aut ## Public ingress -
      Only the CloudPrem gRPC API endpoints (paths starting with /cloudprem) perform mutual TLS authentication. Exposing any other endpoints through the public ingress introduces a security risk, as those endpoints would be accessible over the internet without authentication. Always restrict non-gRPC endpoints to the internal ingress.
      +
      Only the CloudPrem gRPC API endpoints (paths starting with /cloudprem) perform mutual TLS authentication. Exposing any other endpoints through the public ingress introduces a security risk, as those endpoints would be accessible over the internet without authentication. Always restrict non-gRPC endpoints to the internal ingress.
      The public ingress is essential for enabling Datadog's control plane and query service to manage and query CloudPrem clusters over the public internet. It provides secure access to the CloudPrem gRPC API through the following mechanisms: - Creates an internet-facing AWS Application Load Balancer (ALB) that accepts traffic from Datadog services diff --git a/content/en/containers/amazon_ecs/_index.md b/content/en/containers/amazon_ecs/_index.md index 757316586edf2..d492ccbd4f0bf 100644 --- a/content/en/containers/amazon_ecs/_index.md +++ b/content/en/containers/amazon_ecs/_index.md @@ -223,7 +223,7 @@ To collect Live Process information for all your containers and send it to Datad #### Cloud Network Monitoring -
      +
      This feature is only available for Linux.
      diff --git a/content/en/containers/cluster_agent/_index.md b/content/en/containers/cluster_agent/_index.md index 8b8f152949d28..f7578305381b5 100644 --- a/content/en/containers/cluster_agent/_index.md +++ b/content/en/containers/cluster_agent/_index.md @@ -38,7 +38,7 @@ If you're using Docker, the Datadog Cluster Agent is available on Docker Hub and |--------------------------------------------------|-----------------------------------------------------------| | [hub.docker.com/r/datadog/cluster-agent][2] | [gcr.io/datadoghq/cluster-agent][3] | -
      Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.
      +
      Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.
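If you install through Helm, the registry override is a single value. A minimal sketch, assuming the top-level `registry` value of the `datadog/datadog` chart:

```yaml
# values.yaml — pull Agent and Cluster Agent images from GCR instead of Docker Hub
registry: gcr.io/datadoghq
# or, for Amazon ECR:
# registry: public.ecr.aws/datadog
```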
      ### Minimum Agent and Cluster Agent versions diff --git a/content/en/containers/guide/container-discovery-management.md b/content/en/containers/guide/container-discovery-management.md index a422eb9446ebf..267227d3bcb2b 100644 --- a/content/en/containers/guide/container-discovery-management.md +++ b/content/en/containers/guide/container-discovery-management.md @@ -39,7 +39,7 @@ Use the environment variables in the table below to configure container filterin - container image name (`image`) - Kubernetes namespace (`kube_namespace`) -
      +
      The `name` parameter only applies to container names, not pod names, even if the container runs in a Kubernetes pod. diff --git a/content/en/containers/guide/datadogoperator_migration.md b/content/en/containers/guide/datadogoperator_migration.md index 567f730089b26..d192d78b44b80 100644 --- a/content/en/containers/guide/datadogoperator_migration.md +++ b/content/en/containers/guide/datadogoperator_migration.md @@ -7,11 +7,11 @@ aliases: ## Migrating to version 1.0 of the Datadog Operator -
      +
The v1alpha1 DatadogAgent reconciliation in the Operator is deprecated as of v1.2.0 and will be removed in v1.7.0. After it's removed, you will not be able to configure the Datadog Operator to reconcile the v1alpha1 DatadogAgent CRD. However, you will still be able to apply a v1alpha1 manifest with the conversion webhook enabled using datadogCRDs.migration.datadogAgents.conversionWebhook.enabled.
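For reference, the flag named in this note corresponds to the following Helm values fragment for the datadog-operator chart (a sketch based only on the key path above):

```yaml
# values.yaml — keep the v1alpha1 conversion webhook enabled during migration
datadogCRDs:
  migration:
    datadogAgents:
      conversionWebhook:
        enabled: true
```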
      -
      +
DatadogAgent v1alpha1 and the conversion webhook will be removed in v1.8.0. After they are removed, you will not be able to migrate unless you use an earlier version of the Operator.
      diff --git a/content/en/containers/guide/kubernetes_daemonset.md b/content/en/containers/guide/kubernetes_daemonset.md index 7987608311423..55456e57f43ce 100644 --- a/content/en/containers/guide/kubernetes_daemonset.md +++ b/content/en/containers/guide/kubernetes_daemonset.md @@ -7,7 +7,7 @@ further_reading: text: "Install the Datadog Agent on Kubernetes" --- -
      +
      Datadog discourages using DaemonSets to deploy the Datadog Agent because the manual process is prone to errors. Datadog recommends that you use Datadog Operator or Helm to install the Agent on Kubernetes.
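For comparison, the recommended Helm path needs only a minimal values file. A sketch assuming the standard `datadog/datadog` chart layout; the API key placeholder is illustrative:

```yaml
# values.yaml — minimal Helm-based install instead of a hand-maintained DaemonSet
datadog:
  apiKey: <DATADOG_API_KEY>  # placeholder; use your key or a Kubernetes secret
  site: datadoghq.com
```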
diff --git a/content/en/containers/kubernetes/installation.md index 8f00ad43a29ed..bb53fce892335 100644 --- a/content/en/containers/kubernetes/installation.md +++ b/content/en/containers/kubernetes/installation.md @@ -229,7 +229,7 @@ By default, the Agent image is pulled from Google Artifact Registry (`gcr.io/dat If you are deploying the Agent in an AWS environment, Datadog recommends that you use Amazon ECR. -
      Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from Google Artifact Registry or Amazon ECR. For instructions, see Changing your container registry.
      +
      Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from Google Artifact Registry or Amazon ECR. For instructions, see Changing your container registry.
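With the Datadog Operator, the equivalent change is made on the DatadogAgent resource. A sketch, assuming the `global.registry` field of the v2alpha1 CRD:

```yaml
# DatadogAgent — pull images from Google Artifact Registry instead of Docker Hub
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    registry: gcr.io/datadoghq  # or your Amazon ECR registry
```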
      {{< tabs >}} {{% tab "Datadog Operator" %}} diff --git a/content/en/containers/kubernetes/log.md b/content/en/containers/kubernetes/log.md index 83cd1052270c6..f5e0c33432966 100644 --- a/content/en/containers/kubernetes/log.md +++ b/content/en/containers/kubernetes/log.md @@ -137,7 +137,7 @@ datadog: {{% /tab %}} {{< /tabs >}} -
      +
      Warning for unprivileged installations

      When running an unprivileged installation, the Agent needs to be able to read log files in /var/log/pods. diff --git a/content/en/continuous_delivery/deployments/argocd.md b/content/en/continuous_delivery/deployments/argocd.md index 0b5500b28ac9c..b9ad80a3a06e0 100644 --- a/content/en/continuous_delivery/deployments/argocd.md +++ b/content/en/continuous_delivery/deployments/argocd.md @@ -219,7 +219,7 @@ If the command has been correctly run, deployments contain Git metadata from the If your Argo CD application deploys more than one service, Datadog can automatically infer the services deployed from an application sync. Datadog infers the services based on the Kubernetes resources that were modified. -
      +
      Automatic service discovery is not supported when Server-Side Apply is used.
      diff --git a/content/en/continuous_delivery/features/code_changes_detection.md b/content/en/continuous_delivery/features/code_changes_detection.md index fcd7d38896098..621a151a814a8 100644 --- a/content/en/continuous_delivery/features/code_changes_detection.md +++ b/content/en/continuous_delivery/features/code_changes_detection.md @@ -52,7 +52,7 @@ https://docs.datadoghq.com/integrations/guide/source-code-integration/?tab=githu {{< tabs >}} {{% tab "GitHub" %}} -
      +
      GitHub workflows running the pull_request trigger are not supported by the GitHub integration. If you are using the pull_request trigger, use the alternative method.
      @@ -70,7 +70,7 @@ To confirm that the setup is valid, select your GitHub App in the [GitHub integr {{% /tab %}} {{% tab "GitLab" %}} -
      Datadog's GitLab integration is in Preview. To request access to Datadog's GitLab integration for your organization, reach out to Datadog Support.
      +
      Datadog's GitLab integration is in Preview. To request access to Datadog's GitLab integration for your organization, reach out to Datadog Support.
      After your organization has access, follow the [GitLab installation guide][1]. @@ -86,7 +86,7 @@ When this command is executed, Datadog receives the repository URL, the commit S Run this command in CI for every new commit. When a deployment is executed for a specific commit SHA, ensure that the `datadog-ci git-metadata upload` command is run for that commit **before** the deployment event is sent. -
      +
      Do not provide the --no-gitsync option to the datadog-ci git-metadata upload command. When that option is included, the commit information is not sent to Datadog and changes are not detected.
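In practice, this means the upload runs as an early pipeline step. A sketch of a GitLab CI job; the job name, stage, and image are assumptions for illustration, and `DATADOG_API_KEY` is expected as a CI/CD variable:

```yaml
# .gitlab-ci.yml — upload Git metadata before any deployment job runs
upload-git-metadata:
  stage: .pre     # built-in stage that runs before all other stages
  image: node:20
  script:
    - npm install -g @datadog/datadog-ci
    # do not pass --no-gitsync: the commit information must reach Datadog
    - datadog-ci git-metadata upload
```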
      @@ -170,7 +170,7 @@ extensions: In this case, Code Changes Detection for deployments of the `shopist` service will consider the Git commits that include changes in the whole repository tree. -
      If a pattern is exactly ** or begins with it, enclose it in quotes, as * is reserved in YAML for anchors.
      +
      If a pattern is exactly ** or begins with it, enclose it in quotes, as * is reserved in YAML for anchors.
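A short illustration of the quoting rule; the `extensions` key mirrors the configuration fragment above, and the individual patterns are hypothetical:

```yaml
# A bare leading * is read by YAML as an alias, so these patterns must be quoted
extensions:
  - "**"             # quoted: exactly **
  - "**/*.sql"       # quoted: begins with **
  - src/checkout/    # no quoting needed
```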
      ## Further Reading diff --git a/content/en/continuous_integration/pipelines/awscodepipeline.md b/content/en/continuous_integration/pipelines/awscodepipeline.md index 55f77de54f910..b09d32a1ded59 100644 --- a/content/en/continuous_integration/pipelines/awscodepipeline.md +++ b/content/en/continuous_integration/pipelines/awscodepipeline.md @@ -116,7 +116,7 @@ If you are using [Test Optimization][8] and your pipeline contains one or more [ The AWS CodePipeline integration supports correlating **CodeBuild** actions with their respective job and pipeline spans. To enable log collection for your CodeBuild actions, see the [AWS log forwarding guide][16]. -
      Note: Log correlation for CodeBuild actions requires the CodeBuild project to have the default CloudWatch log group and log stream names.
      +
      Note: Log correlation for CodeBuild actions requires the CodeBuild project to have the default CloudWatch log group and log stream names.
      Logs are billed separately from CI Visibility. Log retention, exclusion, and indexes are configured in Logs Settings. Logs for AWS CodeBuild can be identified by the `source:codebuild` and `sourcecategory:aws` tags. diff --git a/content/en/continuous_integration/pipelines/azure.md b/content/en/continuous_integration/pipelines/azure.md index f2ce57d96e0cc..14bb74155193c 100644 --- a/content/en/continuous_integration/pipelines/azure.md +++ b/content/en/continuous_integration/pipelines/azure.md @@ -14,7 +14,7 @@ further_reading: text: "Extend Pipeline Visibility by adding custom tags and measures" --- -
      +
      Azure DevOps Server is not officially supported.
      diff --git a/content/en/continuous_integration/pipelines/custom_commands.md b/content/en/continuous_integration/pipelines/custom_commands.md index 26c50f48310b6..1b888e9c449cf 100644 --- a/content/en/continuous_integration/pipelines/custom_commands.md +++ b/content/en/continuous_integration/pipelines/custom_commands.md @@ -53,7 +53,7 @@ echo "Hello World" {{< /site-region >}} {{< site-region region="gov" >}} -
      CI Visibility is not available in the selected site ({{< region-param key="dd_site_name" >}}).
      +
      CI Visibility is not available in the selected site ({{< region-param key="dd_site_name" >}}).
      {{< /site-region >}} ### Configuration settings @@ -124,7 +124,7 @@ DATADOG_API_KEY=<key> DATADOG_SITE={{< region-param key="dd_site" >}} data {{< /site-region >}} {{< site-region region="gov" >}} -
      CI Visibility is not available in the selected site ({{< region-param key="dd_site_name" >}}).
      +
      CI Visibility is not available in the selected site ({{< region-param key="dd_site_name" >}}).
      {{< /site-region >}} ### Configuration settings diff --git a/content/en/continuous_integration/pipelines/gitlab.md b/content/en/continuous_integration/pipelines/gitlab.md index a350d055b6450..d489816bac65c 100644 --- a/content/en/continuous_integration/pipelines/gitlab.md +++ b/content/en/continuous_integration/pipelines/gitlab.md @@ -168,7 +168,7 @@ kubectl exec -it -- \ Then, configure the integration on a [project][103] by going to **Settings > Integrations > Datadog** for each project you want to instrument. -
      Due to a bug in early versions of GitLab, the Datadog integration cannot be enabled at group or instance level on GitLab versions < 14.1, even if the option is available on GitLab's UI.
      +
      Due to a bug in early versions of GitLab, the Datadog integration cannot be enabled at group or instance level on GitLab versions < 14.1, even if the option is available on GitLab's UI.
      Fill in the integration configuration settings: @@ -212,7 +212,7 @@ You can test the integration with the **Test settings** button (only available w {{% tab "GitLab < 13.7" %}} -
      Direct support with webhooks is not under development. Unexpected issues could happen. Datadog recommends that you update GitLab instead.
      +
Webhook support is not under active development, and unexpected issues may occur. Datadog recommends that you update GitLab instead.
      For older versions of GitLab, you can use [webhooks][101] to send pipeline data to Datadog. @@ -369,7 +369,7 @@ To enable collection of job logs: {{% /tab %}} {{% tab "GitLab >= 15.3" %}} -
      Datadog downloads log files directly from your GitLab logs object storage with temporary pre-signed URLs. +
Datadog downloads log files directly from your GitLab logs object storage with temporary pre-signed URLs. This means that for Datadog servers to access the storage, the storage must not have network restrictions. The endpoint, if set, should resolve to a publicly accessible URL.
      @@ -379,7 +379,7 @@ The Datadog downloads log files directly from your GitLab logs object storage with temporary pre-signed URLs. +
Datadog downloads log files directly from your GitLab logs object storage with temporary pre-signed URLs. This means that for Datadog servers to access the storage, the storage must not have network restrictions. The endpoint, if set, should resolve to a publicly accessible URL.
      diff --git a/content/en/continuous_integration/pipelines/teamcity.md b/content/en/continuous_integration/pipelines/teamcity.md index 517dfebee72d5..5bb1daca6ffc3 100644 --- a/content/en/continuous_integration/pipelines/teamcity.md +++ b/content/en/continuous_integration/pipelines/teamcity.md @@ -69,7 +69,7 @@ provide information about the user email. When one of the other username styles is used (**UserId** or **Author Name**), the plugin automatically generates an email for the user by appending `@Teamcity` to the username. For example, if the **UserId** username style is used and the Git author username is `john.doe`, the plugin generates `john.doe@Teamcity` as the Git author email. The username style is defined for [VCS Roots][11], and can be modified in the VCS Root settings. -
      The Git author email is used for +
The Git author email is used for billing purposes; therefore, there might be cost implications when username styles that do not provide an email (UserId or Author Name) are used. Reach out to the Datadog support team if you have any questions about your use case. diff --git a/content/en/continuous_testing/guide/view-continuous-testing-test-runs-in-test-optimization.md index babe41cc1f9a4..c25b5f3855f0a 100644 --- a/content/en/continuous_testing/guide/view-continuous-testing-test-runs-in-test-optimization.md +++ b/content/en/continuous_testing/guide/view-continuous-testing-test-runs-in-test-optimization.md @@ -13,7 +13,7 @@ further_reading: text: 'Working with Flaky Tests' --- -{{< site-region region="gov" >}}
      Mobile Application Testing is not supported on this Datadog site ({{< region-param key="dd_site_name" >}}).
      +{{< site-region region="gov" >}}
      Mobile Application Testing is not supported on this Datadog site ({{< region-param key="dd_site_name" >}}).
      {{< /site-region >}} ## Overview diff --git a/content/en/dashboards/configure/_index.md b/content/en/dashboards/configure/_index.md index 85b3d054538e6..be75b72739742 100644 --- a/content/en/dashboards/configure/_index.md +++ b/content/en/dashboards/configure/_index.md @@ -75,7 +75,7 @@ Copy, import, or export a dashboard's JSON using the export icon (upper right) w ### Delete dashboard -
      Dashboards must be unstarred before deletion.
      +
      Dashboards must be unstarred before deletion.
      Use this option to permanently delete your dashboard. Use the preset **Recently Deleted** list to restore deleted dashboards. Dashboards in **Recently Deleted** are permanently deleted after 30 days. For more information, see the [Dashboard list][7] documentation. diff --git a/content/en/dashboards/functions/timeshift.md b/content/en/dashboards/functions/timeshift.md index 5e700cefbbe42..54e1b598c6952 100644 --- a/content/en/dashboards/functions/timeshift.md +++ b/content/en/dashboards/functions/timeshift.md @@ -52,7 +52,7 @@ Here is an example of `system.load.1` with the `hour_before()` value shown as a ## Day before -
      The day before feature is being deprecated. Use calendar shift with a value of "-1d" instead.
      +
      The day before feature is being deprecated. Use calendar shift with a value of "-1d" instead.
      | Function | Description | Example | |:---------------|:---------------------------------------------------------------------|:-------------------------------| @@ -64,7 +64,7 @@ Here is an example of `nginx.net.connections` with the `day_before()` value show ## Week before -
      The week before feature is being deprecated. Use calendar shift with a value of "-7d" instead.
      +
      The week before feature is being deprecated. Use calendar shift with a value of "-7d" instead.
      | Function | Description | Example | |:----------------|:-------------------------------------------------------------------------------|:--------------------------------| @@ -76,7 +76,7 @@ Here is an example of `cassandra.db.read_count` with the `week_before()` value s ## Month before -
      The month before feature is being deprecated. Use calendar shift with a value of "-1mo", "-30d" or "-4w" instead, depending on your use case.
      +
      The month before feature is being deprecated. Use calendar shift with a value of "-1mo", "-30d" or "-4w" instead, depending on your use case.
      | Function | Description | Example | |:-----------------|:-------------------------------------------------------------------------------------------|:---------------------------------| diff --git a/content/en/dashboards/guide/dashboard-lists-api-v1-doc.md b/content/en/dashboards/guide/dashboard-lists-api-v1-doc.md index 7d17299ceac0f..b1ed3e2c368a5 100644 --- a/content/en/dashboards/guide/dashboard-lists-api-v1-doc.md +++ b/content/en/dashboards/guide/dashboard-lists-api-v1-doc.md @@ -15,7 +15,7 @@ Interact with your dashboard lists through the API to make it easier to organize ## Get items of a dashboard list -
      +
      This endpoint is outdated. Use the get items of a dashboard list v2 endpoint instead.
      @@ -375,7 +375,7 @@ curl -X GET \ ## Add items to a dashboard list -
      +
      This endpoint is outdated. Use the add items to a dashboard list v2 endpoint instead.
      @@ -626,7 +626,7 @@ curl -X ADD -H "Content-type: application/json" \ ## Update items of a dashboard list -
      +
      This endpoint is outdated. Use the update items of a dashboard list v2 endpoint instead.
      @@ -879,7 +879,7 @@ curl -X UPDATE -H "Content-type: application/json" \ ## Delete items from a dashboard list -
      +
      This endpoint is outdated. Use the delete items from a dashboard list v2 endpoint instead.
      diff --git a/content/en/dashboards/guide/how-to-use-terraform-to-restrict-dashboard-edit.md b/content/en/dashboards/guide/how-to-use-terraform-to-restrict-dashboard-edit.md index c55f100f7aac4..2366559367faf 100644 --- a/content/en/dashboards/guide/how-to-use-terraform-to-restrict-dashboard-edit.md +++ b/content/en/dashboards/guide/how-to-use-terraform-to-restrict-dashboard-edit.md @@ -24,7 +24,7 @@ resource "datadog_dashboard" "example" { ## Restricting a dashboard using a restriction policy -
      Restriction policies are in Preview. Contact Datadog Support or your Customer Success Manager for access.
      +
      Restriction policies are in Preview. Contact Datadog Support or your Customer Success Manager for access.
      [Restriction Policies][1] allow you to restrict the editing of dashboards and other resources to specific principals, including roles, teams, users, and service accounts. diff --git a/content/en/dashboards/guide/screenboard-api-doc.md b/content/en/dashboards/guide/screenboard-api-doc.md index d39cc863d57e3..46a2706f29e61 100644 --- a/content/en/dashboards/guide/screenboard-api-doc.md +++ b/content/en/dashboards/guide/screenboard-api-doc.md @@ -6,7 +6,7 @@ aliases: - /graphing/guide/screenboard-api-doc --- -
      +
      This endpoint is outdated. Use the new Dashboard endpoint instead.
      diff --git a/content/en/dashboards/guide/timeboard-api-doc.md b/content/en/dashboards/guide/timeboard-api-doc.md index 63de06b6bae88..bcc380e1fc17c 100644 --- a/content/en/dashboards/guide/timeboard-api-doc.md +++ b/content/en/dashboards/guide/timeboard-api-doc.md @@ -6,7 +6,7 @@ aliases: - /graphing/guide/timeboard-api-doc --- -
      +
      This endpoint is outdated. Use the new Dashboard endpoint instead.
      diff --git a/content/en/dashboards/sharing/shared_dashboards.md b/content/en/dashboards/sharing/shared_dashboards.md index ad265d8877db1..3639f325ae34a 100644 --- a/content/en/dashboards/sharing/shared_dashboards.md +++ b/content/en/dashboards/sharing/shared_dashboards.md @@ -155,7 +155,7 @@ Shared dashboards support a limited number of timeframe options and do not allow ## Edit Shared Dashboards -
      Any changes to a dashboard's content or layout are instantly reflected in the shared version. Be cautious when editing to avoid unintentionally sharing private data.
      +
      Any changes to a dashboard's content or layout are instantly reflected in the shared version. Be cautious when editing to avoid unintentionally sharing private data.
      To make a change to the share type, configuration, or recipients of a shared dashboard: diff --git a/content/en/dashboards/widgets/_index.md b/content/en/dashboards/widgets/_index.md index 448115d39c296..7e09023f2e3cd 100644 --- a/content/en/dashboards/widgets/_index.md +++ b/content/en/dashboards/widgets/_index.md @@ -169,7 +169,7 @@ Widgets not linked to global time show the data for their local time frame as ap ## Copy and paste widgets -
      Enable Static Public Data Sharing in your Organization Settings to use this feature.
      +
      Enable Static Public Data Sharing in your Organization Settings to use this feature.
      Widgets can be copied on [Dashboards][4], [Notebooks][5], [APM Service][6], and the [APM resource][7] page by using Ctrl/Cmd + C, or by selecting the share icon and choosing "Copy". diff --git a/content/en/dashboards/widgets/event_stream.md b/content/en/dashboards/widgets/event_stream.md index 6fcd291c25b35..4de4a4d59aa54 100644 --- a/content/en/dashboards/widgets/event_stream.md +++ b/content/en/dashboards/widgets/event_stream.md @@ -6,4 +6,4 @@ aliases: - /graphing/widgets/event_stream/ --- -
      The Event Stream widget is supported through the List widget.
      +
      The Event Stream widget is supported through the List widget.
      diff --git a/content/en/dashboards/widgets/event_timeline.md b/content/en/dashboards/widgets/event_timeline.md index a7ae3627a591e..9efdd205ee723 100644 --- a/content/en/dashboards/widgets/event_timeline.md +++ b/content/en/dashboards/widgets/event_timeline.md @@ -6,4 +6,4 @@ aliases: - /graphing/widgets/event_timeline/ --- -
      The Event Timeline widget is supported through the Timeseries widget.
      +
      The Event Timeline widget is supported through the Timeseries widget.
      diff --git a/content/en/dashboards/widgets/log_stream.md b/content/en/dashboards/widgets/log_stream.md index b32e92c6728f6..c2280323ebc88 100644 --- a/content/en/dashboards/widgets/log_stream.md +++ b/content/en/dashboards/widgets/log_stream.md @@ -6,4 +6,4 @@ aliases: - /graphing/widgets/log_stream/ --- -
      View the Log Management stream through the List widget.
      +
      View the Log Management stream through the List widget.
      diff --git a/content/en/dashboards/widgets/retention.md b/content/en/dashboards/widgets/retention.md index 23e6160e6f570..50e89758adcad 100644 --- a/content/en/dashboards/widgets/retention.md +++ b/content/en/dashboards/widgets/retention.md @@ -12,7 +12,7 @@ further_reading: --- {{% site-region region="gov" %}} -
      +
      The Retention widget is not available in the Datadog site ({{< region-param key="dd_site_name" >}}).
      {{% /site-region %}} diff --git a/content/en/dashboards/widgets/sankey.md b/content/en/dashboards/widgets/sankey.md index 25b5de44b7c28..247ccf386d4d1 100644 --- a/content/en/dashboards/widgets/sankey.md +++ b/content/en/dashboards/widgets/sankey.md @@ -9,7 +9,7 @@ further_reading: --- {{% site-region region="gov" %}} -
      +
      The Sankey widget is not available in the selected Datadog site ({{< region-param key="dd_site_name" >}}).
      {{% /site-region %}} diff --git a/content/en/data_jobs/airflow.md b/content/en/data_jobs/airflow.md index 6cbcc794a3ae9..d3b8a58ac3b16 100644 --- a/content/en/data_jobs/airflow.md +++ b/content/en/data_jobs/airflow.md @@ -181,7 +181,7 @@ Set `OPENLINEAGE_CLIENT_LOGGING` to `DEBUG` in the [Amazon MWAA start script][3] {{% tab "Astronomer" %}} -
      +
If you are an Astronomer customer using Astro, note that Astro offers lineage features that rely on the Airflow OpenLineage provider. Data Jobs Monitoring depends on the same OpenLineage provider and uses the Composite transport to add an additional transport.
      @@ -241,7 +241,7 @@ Check that the OpenLineage environment variables are correctly set on the Astron **Note**: Using the `.env` file to add the environment variables does not work because the variables are only applied to the local Airflow environment. {{% /tab %}} {{% tab "Google Cloud Composer" %}} -
      +
      Data Jobs Monitoring for Airflow is not yet compatible with Dataplex data lineage. Setting up OpenLineage for Data Jobs Monitoring overrides your existing Dataplex transport configuration.
      diff --git a/content/en/data_jobs/databricks.md b/content/en/data_jobs/databricks.md index 1a4059ea3c798..c7b0fdff1e1a8 100644 --- a/content/en/data_jobs/databricks.md +++ b/content/en/data_jobs/databricks.md @@ -32,7 +32,7 @@ Follow these steps to enable Data Jobs Monitoring for Databricks. {{% tab "Use a Service Principal for OAuth" %}} -
      New workspaces must authenticate using OAuth. Workspaces integrated with a Personal Access Token continue to function and can switch to OAuth at any time. After a workspace starts using OAuth, it cannot revert to a Personal Access Token.
      +
      New workspaces must authenticate using OAuth. Workspaces integrated with a Personal Access Token continue to function and can switch to OAuth at any time. After a workspace starts using OAuth, it cannot revert to a Personal Access Token.
      1. In your Databricks account, click on **User Management** in the left menu. Then, under the **Service principals** tab, click **Add service principal**. 1. Under the **Credentials & secrets** tab, click **Generate secret**. Set **Lifetime (days)** to the maximum value allowed (730), then click **Generate**. Take note of your client ID and client secret. Also take note of your account ID, which can be found by clicking on your profile in the upper-right corner. @@ -69,7 +69,7 @@ Follow these steps to enable Data Jobs Monitoring for Databricks. {{% tab "Use a Personal Access Token (Legacy)" %}} -
      This option is only available for workspaces created before July 7, 2025. New workspaces must authenticate using OAuth.
      +
      This option is only available for workspaces created before July 7, 2025. New workspaces must authenticate using OAuth.
      1. In your Databricks workspace, click on your profile in the top right corner and go to **Settings**. Select **Developer** in the left side bar. Next to **Access tokens**, click **Manage**. 1. Click **Generate new token**, enter "Datadog Integration" in the **Comment** field, set the **Lifetime (days)** value to the maximum allowed (730 days), and create a reminder to update the token before it expires. Then click **Generate**. Take note of your token. @@ -121,7 +121,7 @@ The Datadog Agent must be installed on Databricks clusters to monitor Databricks Datadog can install and manage a global init script in the Databricks workspace. The Datadog Agent is installed on all clusters in the workspace, when they start. -
      +
      • This setup does not work on Databricks clusters in Standard (formerly Shared) access mode, because global init scripts cannot be installed on those clusters. If you are using clusters with the Standard (formerly Shared) access mode, you must follow the instructions to Manually install on a specific cluster for installation on those specific clusters.
      • This install option, in which Datadog installs and manages your Datadog global init script, requires a Databricks Access Token with Workspace Admin permissions. A token with CAN VIEW access does not allow Datadog to manage the global init script of your Databricks account.
      • @@ -166,7 +166,7 @@ Optionally, you can add tags to your Databricks cluster and Spark performance me {{% tab "Manually install a global init script" %}} -
        +
        This setup does not work on Databricks clusters in Standard (formerly Shared) access mode, because global init scripts cannot be installed on those clusters. If you are using clusters with the Standard (formerly Shared) access mode, you must follow the instructions to Manually install on a specific cluster for installation on those specific clusters.
diff --git a/content/en/data_jobs/emr.md index 15f80c12e1f05..9e604fe5892a4 100644 --- a/content/en/data_jobs/emr.md +++ b/content/en/data_jobs/emr.md @@ -41,7 +41,7 @@ EMR EC2 instance profile is an IAM role assigned to every EC2 instance in an Amaz #### Permissions to get secret value using AWS Secrets Manager -
        +
        These permissions are required if you are using AWS Secrets Manager.
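For orientation, such a grant is a single IAM statement. A CloudFormation-style YAML sketch; the action shown is the standard `secretsmanager:GetSecretValue`, and the ARN is illustrative:

```yaml
# IAM policy statement letting the EMR EC2 instance profile read the secret
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action: secretsmanager:GetSecretValue
      Resource: arn:aws:secretsmanager:us-east-1:123456789012:secret:datadog-api-key-*  # illustrative
```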
        @@ -57,7 +57,7 @@ These permissions are required if you are using AWS Secrets Man #### Permissions to describe cluster -
        +
        These permissions are required if you are NOT using the default role, EMR_EC2_DefaultRole.
        diff --git a/content/en/data_streams/_index.md b/content/en/data_streams/_index.md index 6f2e16db42cbc..dceb9bc320301 100644 --- a/content/en/data_streams/_index.md +++ b/content/en/data_streams/_index.md @@ -34,7 +34,7 @@ cascade: {{% site-region region="gov" %}} -
        +
        Data Streams Monitoring is not available for the {{< region-param key="dd_site_name" >}} site.
        {{% /site-region %}} diff --git a/content/en/data_streams/data_pipeline_lineage.md b/content/en/data_streams/data_pipeline_lineage.md index 213832622cff7..b6f99f86aec0b 100644 --- a/content/en/data_streams/data_pipeline_lineage.md +++ b/content/en/data_streams/data_pipeline_lineage.md @@ -13,7 +13,7 @@ further_reading: --- {{% site-region region="gov" %}} -
        +
        Data Streams Monitoring is not available for the {{< region-param key="dd_site_name" >}} site.
        {{% /site-region %}} diff --git a/content/en/data_streams/dead_letter_queues.md b/content/en/data_streams/dead_letter_queues.md index 23eb22cf8f312..1d593ccf75f3f 100644 --- a/content/en/data_streams/dead_letter_queues.md +++ b/content/en/data_streams/dead_letter_queues.md @@ -3,7 +3,7 @@ title: Dead Letter Queues --- {{% site-region region="gov" %}} -
        +
        Data Streams Monitoring is not available for the {{< region-param key="dd_site_name" >}} site.
        {{% /site-region %}} diff --git a/content/en/data_streams/metrics_and_tags.md b/content/en/data_streams/metrics_and_tags.md index 19295e7aa77f3..a264b7e157e15 100644 --- a/content/en/data_streams/metrics_and_tags.md +++ b/content/en/data_streams/metrics_and_tags.md @@ -5,7 +5,7 @@ aliases: --- {{% site-region region="gov" %}} -
        +
        Data Streams Monitoring is not available for the {{< region-param key="dd_site_name" >}} site.
        {{% /site-region %}} diff --git a/content/en/data_streams/schema_tracking.md b/content/en/data_streams/schema_tracking.md index e97f0978cc185..48fd05cba17c6 100644 --- a/content/en/data_streams/schema_tracking.md +++ b/content/en/data_streams/schema_tracking.md @@ -3,7 +3,7 @@ title: Schema Tracking --- {{% site-region region="gov" %}} -
        +
        Data Streams Monitoring is not available for the {{< region-param key="dd_site_name" >}} site.
        {{% /site-region %}} diff --git a/content/en/data_streams/setup/_index.md b/content/en/data_streams/setup/_index.md index c77637030f518..a507b6e9efdd4 100644 --- a/content/en/data_streams/setup/_index.md +++ b/content/en/data_streams/setup/_index.md @@ -3,7 +3,7 @@ title: Setup Data Streams Monitoring --- {{% site-region region="gov" %}} -
        +
        Data Streams Monitoring is not available for the {{< region-param key="dd_site_name" >}} site.
        {{% /site-region %}} diff --git a/content/en/database_monitoring/connect_dbm_and_apm.md b/content/en/database_monitoring/connect_dbm_and_apm.md index d093c05c89a0b..37970c8b35555 100644 --- a/content/en/database_monitoring/connect_dbm_and_apm.md +++ b/content/en/database_monitoring/connect_dbm_and_apm.md @@ -76,7 +76,7 @@ APM tracer integrations support a *Propagation Mode*, which controls the amount \*\* Full mode SQL Server for Java/.NET: -
        If your application uses context_info for instrumentation, the APM tracer overwrites it.
        +
        If your application uses context_info for instrumentation, the APM tracer overwrites it.
        - The instrumentation executes a `SET context_info` command when the client issues a query, which makes an additional round-trip to the database. - Prerequisites: @@ -111,7 +111,7 @@ Datadog recommends setting the obfuscation mode to `obfuscate_and_normalize` for sql_obfuscation_mode: "obfuscate_and_normalize" ``` -
        Changing the obfuscation mode may alter the normalized SQL text. If you have monitors based on SQL text in APM traces, you may need to update them.
        +
        Changing the obfuscation mode may alter the normalized SQL text. If you have monitors based on SQL text in APM traces, you may need to update them.
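For context, a sketch of the corresponding `datadog.yaml` fragment, assuming the key sits under `apm_config`:

```yaml
# datadog.yaml — normalized SQL text may change after this; review any monitors
# that match on SQL text in APM traces (see the warning above)
apm_config:
  sql_obfuscation_mode: "obfuscate_and_normalize"
```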
        {{< tabs >}} {{% tab "Go" %}} @@ -224,7 +224,7 @@ Enable the prepared statements tracing for Postgres using **one** of the followi **Note**: The prepared statements instrumentation overwrites the `Application` property with the text `_DD_overwritten_by_tracer`, and causes an extra round trip to the database. This additional round trip normally has a negligible impact on the SQL statement execution time. -
        Enabling prepared statements tracing may cause increased connection pinning when using Amazon RDS Proxy, which reduces connection pooling efficiency. For more information, see Connection pinning on RDS Proxy.
        +
        Enabling prepared statements tracing may cause increased connection pinning when using Amazon RDS Proxy, which reduces connection pooling efficiency. For more information, see Connection pinning on RDS Proxy.
        **Tracer versions below 1.44**: Prepared statements are not supported in `full` mode for Postgres and MySQL, and all JDBC API calls that use prepared statements are automatically downgraded to `service` mode. Since most Java SQL libraries use prepared statements by default, this means that **most** Java applications are only able to use `service` mode. @@ -347,7 +347,7 @@ for doc in results: {{% tab ".NET" %}} -
        +
        This feature requires automatic instrumentation to be enabled for your .NET service.
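A sketch of wiring the propagation setting into a Kubernetes container spec; `DD_DBM_PROPAGATION_MODE` is the documented variable, and the surrounding manifest fields are illustrative:

```yaml
# Container env fragment — enable full DBM trace propagation for a .NET service
env:
  - name: DD_DBM_PROPAGATION_MODE
    value: "full"
```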
        @@ -367,7 +367,7 @@ Enable the database monitoring propagation feature by setting the following envi {{% tab "PHP" %}} -
        +
        This feature requires the tracer extension to be enabled for your PHP service.
        diff --git a/content/en/database_monitoring/guide/sql_extended_events.md b/content/en/database_monitoring/guide/sql_extended_events.md index 6b09c1da91834..5f17233d3627b 100644 --- a/content/en/database_monitoring/guide/sql_extended_events.md +++ b/content/en/database_monitoring/guide/sql_extended_events.md @@ -340,7 +340,7 @@ The default query duration threshold is `duration > 1000000` (1 second). Adjust - **Capture more queries**: Lower the threshold (for example, `duration > 500000` for 500 ms) - **Capture fewer queries**: Raise the threshold (for example, `duration > 5000000` for 5 seconds) -
        Setting thresholds too low can result in excessive event collection that affects server performance, event loss due to buffer overflow, and incomplete data, as Datadog only collects the most recent 1000 events per collection interval.
        +
        Setting thresholds too low can result in excessive event collection that affects server performance, event loss due to buffer overflow, and incomplete data, as Datadog only collects the most recent 1000 events per collection interval.
        ### Memory allocation - The default value is `MAX_MEMORY = 1024 KB`. diff --git a/content/en/database_monitoring/setup_mysql/aurora.md b/content/en/database_monitoring/setup_mysql/aurora.md index ddae539ce8cfd..bd152d21d3ec6 100644 --- a/content/en/database_monitoring/setup_mysql/aurora.md +++ b/content/en/database_monitoring/setup_mysql/aurora.md @@ -201,7 +201,7 @@ instances: instance_endpoint: '' ``` -
        Use the Aurora instance endpoint here, not the cluster endpoint.
        +
        Use the Aurora instance endpoint here, not the cluster endpoint.
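To make the endpoint distinction concrete, a sketch of the `mysql.d/conf.yaml` instance block with an illustrative instance endpoint:

```yaml
# mysql.d/conf.yaml — point the Agent at the instance endpoint
instances:
  - dbm: true
    host: mydb-instance-1.abc123xyz.us-east-1.rds.amazonaws.com  # instance endpoint, not the cluster endpoint
    port: 3306
    username: datadog
    password: "ENC[datadog_user_database_password]"
```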
        [Restart the Agent][3] to start sending MySQL metrics to Datadog. @@ -251,7 +251,7 @@ LABEL "com.datadoghq.ad.init_configs"='[{}]' LABEL "com.datadoghq.ad.instances"='[{"dbm": true, "host": "", "port": 3306,"username": "datadog","password": "ENC[datadog_user_database_password]"}]' ``` -
        Use the Aurora instance endpoint as the host, not the cluster endpoint.
        +
        Use the Aurora instance endpoint as the host, not the cluster endpoint.
        [1]: /agent/docker/integrations/?tab=docker diff --git a/content/en/database_monitoring/setup_mysql/troubleshooting.md b/content/en/database_monitoring/setup_mysql/troubleshooting.md index 6eb81dc138c19..d307ed04e1f03 100644 --- a/content/en/database_monitoring/setup_mysql/troubleshooting.md +++ b/content/en/database_monitoring/setup_mysql/troubleshooting.md @@ -195,7 +195,7 @@ performance_schema_max_sql_text_length=4096 ### Query activity is missing -
        Query Activity and Wait Event collection are not supported for Flexible Server, as these features require MySQL settings that are not available on a Flexible Server host.
        +
        Query Activity and Wait Event collection are not supported for Flexible Server, as these features require MySQL settings that are not available on a Flexible Server host.
        Before following these steps to diagnose missing query activity, ensure the Agent is running successfully and you have followed [the steps to diagnose missing agent data](#no-data-is-showing-after-configuring-database-monitoring). Below are possible causes for missing query activity. diff --git a/content/en/database_monitoring/setup_oracle/autonomous_database.md b/content/en/database_monitoring/setup_oracle/autonomous_database.md index 006395d0d5a26..a0b84df5b8754 100644 --- a/content/en/database_monitoring/setup_oracle/autonomous_database.md +++ b/content/en/database_monitoring/setup_oracle/autonomous_database.md @@ -137,7 +137,7 @@ After all Agent configuration is complete, [restart the Datadog Agent][4]. Database Monitoring supports custom queries for Oracle databases. See the [conf.yaml.example][12] to learn more about the configuration options available. -
        Running custom queries may result in additional costs or fees assessed by Oracle.
        +
        Running custom queries may result in additional costs or fees assessed by Oracle.
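As a rough shape of what `conf.yaml.example` describes, a hedged sketch of one `custom_queries` entry; the query, metric prefix, and column names are illustrative:

```yaml
# oracle.d/conf.yaml instance fragment — one custom query emitting a gauge
custom_queries:
  - metric_prefix: oracle.custom_example
    query: SELECT COUNT(*) FROM v$session  # illustrative; Oracle may bill for custom queries
    columns:
      - name: session_count
        type: gauge
    tags:
      - purpose:example
```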
        [1]: /database_monitoring/agent_integration_overhead/?tab=oracle [2]: /database_monitoring/data_collected/#sensitive-information diff --git a/content/en/database_monitoring/setup_oracle/exadata.md b/content/en/database_monitoring/setup_oracle/exadata.md index a5e4bafa2f889..639bd09be14d6 100644 --- a/content/en/database_monitoring/setup_oracle/exadata.md +++ b/content/en/database_monitoring/setup_oracle/exadata.md @@ -69,7 +69,7 @@ Configure the Agent by following the instructions for [self-hosted Oracle databa Database Monitoring supports custom queries for Oracle databases. See the [conf.yaml.example][5] to learn more about the configuration options available. -
        Running custom queries may result in additional costs or fees assessed by Oracle.
        +
        Running custom queries may result in additional costs or fees assessed by Oracle.
        [1]: /agent/configuration/agent-commands/#agent-status-and-information [2]: https://app.datadoghq.com/databases diff --git a/content/en/database_monitoring/setup_oracle/rac.md b/content/en/database_monitoring/setup_oracle/rac.md index a31d9a350c414..7fb7fc58aacc6 100644 --- a/content/en/database_monitoring/setup_oracle/rac.md +++ b/content/en/database_monitoring/setup_oracle/rac.md @@ -92,7 +92,7 @@ Set the `rac_cluster` configuration parameter to the name of your RAC cluster or Database Monitoring supports custom queries for Oracle databases. See the [conf.yaml.example][5] to learn more about the configuration options available. -
        Running custom queries may result in additional costs or fees assessed by Oracle.
        +
        Running custom queries may result in additional costs or fees assessed by Oracle.
        [1]: /agent/configuration/agent-commands/#agent-status-and-information [2]: https://app.datadoghq.com/databases diff --git a/content/en/database_monitoring/setup_oracle/rds.md b/content/en/database_monitoring/setup_oracle/rds.md index 135720f6b856c..43c75ee7118b3 100644 --- a/content/en/database_monitoring/setup_oracle/rds.md +++ b/content/en/database_monitoring/setup_oracle/rds.md @@ -125,7 +125,7 @@ Once all Agent configuration is complete, [restart the Datadog Agent][2]. Database Monitoring supports custom queries for Oracle databases. See the [conf.yaml.example][9] to learn more about the configuration options available. -
        Running custom queries may result in additional costs or fees assessed by Oracle.
        +
        Running custom queries may result in additional costs or fees assessed by Oracle.
        [1]: /database_monitoring/agent_integration_overhead/?tab=oracle [2]: /agent/configuration/agent-commands/#start-stop-and-restart-the-agent diff --git a/content/en/database_monitoring/setup_oracle/selfhosted.md b/content/en/database_monitoring/setup_oracle/selfhosted.md index 8de090763454c..72988674e43ba 100644 --- a/content/en/database_monitoring/setup_oracle/selfhosted.md +++ b/content/en/database_monitoring/setup_oracle/selfhosted.md @@ -166,7 +166,7 @@ Once all Agent configuration is complete, [restart the Datadog Agent][9]. Database Monitoring supports custom queries for Oracle databases. See the [conf.yaml.example][4] to learn more about the configuration options available. -
        Running custom queries may result in additional costs or fees assessed by Oracle.
        +
        Running custom queries may result in additional costs or fees assessed by Oracle.
        [1]: https://app.datadoghq.com/account/settings/agent/latest?platform=overview [2]: https://app.datadoghq.com/dash/integration/30990/dbm-oracle-database-overview diff --git a/content/en/database_monitoring/setup_postgres/aurora.md b/content/en/database_monitoring/setup_postgres/aurora.md index 4f9b71b822dd1..fda394b453597 100644 --- a/content/en/database_monitoring/setup_postgres/aurora.md +++ b/content/en/database_monitoring/setup_postgres/aurora.md @@ -218,7 +218,7 @@ To configure collecting Database Monitoring metrics for an Agent running on a ho # dbname: '' ``` -
        Use the Aurora instance endpoint here, not the cluster endpoint.
        +
        Use the Aurora instance endpoint here, not the cluster endpoint.
        2. [Restart the Agent][2]. diff --git a/content/en/database_monitoring/setup_postgres/heroku.md b/content/en/database_monitoring/setup_postgres/heroku.md index ea1919521bda4..598e398328359 100644 --- a/content/en/database_monitoring/setup_postgres/heroku.md +++ b/content/en/database_monitoring/setup_postgres/heroku.md @@ -105,7 +105,7 @@ The Postgres integration and, if enabled, Database Monitoring, will begin collec {{% tab "Option B: Custom Configuration" %}} ### Custom Configuration -
        +
        Important: If you tried Option A first and need to remove the DD_ENABLE_HEROKU_POSTGRES and DD_ENABLE_DBM configurations, use the commands below: ``` shell diff --git a/content/en/datadog_cloudcraft/_index.md b/content/en/datadog_cloudcraft/_index.md index 8fe2cccc92b73..82d84b69b91e4 100644 --- a/content/en/datadog_cloudcraft/_index.md +++ b/content/en/datadog_cloudcraft/_index.md @@ -42,7 +42,7 @@ Cloudcraft's core functionality is its ability to generate detailed architecture **Note**: Cloudcraft adapts to restrictive permissions by excluding inaccessible resources. For example, if you don't grant permission to list S3 buckets, the diagram excludes those buckets. If permissions block certain resources, an alert displays in the UI. -
        Note: Enabling resource collection can impact your AWS CloudWatch costs. To avoid these charges, disable Usage metrics in the Metric Collection tab of the Datadog AWS Integration.
        +
        Note: Enabling resource collection can impact your AWS CloudWatch costs. To avoid these charges, disable Usage metrics in the Metric Collection tab of the Datadog AWS Integration.
        {{< img src="/infrastructure/resource_catalog/aws_usage_toggle.png" alt="AWS Usage toggle in account settings" style="width:100%;" >}}
        diff --git a/content/en/ddsql_editor/_index.md b/content/en/ddsql_editor/_index.md index e6f54ac91c94f..3b1f5e32de6e5 100644 --- a/content/en/ddsql_editor/_index.md +++ b/content/en/ddsql_editor/_index.md @@ -43,7 +43,7 @@ GROUP BY instance_type ## Explore your telemetry -
        Querying Logs, Metrics, Spans, and RUM through DDSQL is in Preview. Use this form to request access. +
        Querying Logs, Metrics, Spans, and RUM through DDSQL is in Preview. Use this form to request access. If you want access to Spans, RUM, or other data sources not listed in the use cases section, mention them in the access request form.
diff --git a/content/en/ddsql_reference/ddsql_preview/ddsql_use_cases.md index 4055ed6088394..c4f718bcdb40b 100644 --- a/content/en/ddsql_reference/ddsql_preview/ddsql_use_cases.md +++ b/content/en/ddsql_reference/ddsql_preview/ddsql_use_cases.md @@ -10,7 +10,7 @@ further_reading: text: "Learn more about the DDSQL Editor" --- -
        +
        There are two different variants of DDSQL. The examples in this guide use DDSQL (Preview) Syntax. See the syntax documented in DDSQL Reference.
        diff --git a/content/en/ddsql_reference/ddsql_preview/reference_tables.md b/content/en/ddsql_reference/ddsql_preview/reference_tables.md index dc78fc35a1472..aae03283bbefc 100644 --- a/content/en/ddsql_reference/ddsql_preview/reference_tables.md +++ b/content/en/ddsql_reference/ddsql_preview/reference_tables.md @@ -13,7 +13,7 @@ further_reading: The DDSQL Editor is in Preview. {{< /callout >}} -
        +
        There are two different variants of DDSQL. The examples in this guide use DDSQL (Preview) Syntax. See the syntax documented in DDSQL Reference.
        diff --git a/content/en/ddsql_reference/ddsql_preview/statements.md b/content/en/ddsql_reference/ddsql_preview/statements.md index 3b0f4f69010f2..2695ab45f6d0d 100644 --- a/content/en/ddsql_reference/ddsql_preview/statements.md +++ b/content/en/ddsql_reference/ddsql_preview/statements.md @@ -162,7 +162,7 @@ INSERT INTO table_name [ (specific, columns, ...) ] VALUES ## SHOW -
        While the SHOW statement is a part of the SQL standard, the runtime parameter names are experimental. Parameters may be renamed, retyped, or deprecated in the future.
        +
        While the SHOW statement is a part of the SQL standard, the runtime parameter names are experimental. Parameters may be renamed, retyped, or deprecated in the future.
        When running queries, DDSQL references runtime parameters (environmental variables) that are not specified in the query statement itself, such as the default interval to use for metrics queries if no `BUCKET BY` is specified, or the start and end timestamp for a query. diff --git a/content/en/ddsql_reference/ddsql_preview/tags.md b/content/en/ddsql_reference/ddsql_preview/tags.md index fe82c368b1c7a..e933bc158a5dd 100644 --- a/content/en/ddsql_reference/ddsql_preview/tags.md +++ b/content/en/ddsql_reference/ddsql_preview/tags.md @@ -11,7 +11,7 @@ aliases: The DDSQL Editor is in Preview. {{< /callout >}} -
        +
        There are two different variants of DDSQL. The examples in this guide use DDSQL (Preview) Syntax. See the syntax documented in DDSQL Reference.
        diff --git a/content/en/deployment_gates/_index.md b/content/en/deployment_gates/_index.md index 5e12235d8799d..a4982c54dc86d 100644 --- a/content/en/deployment_gates/_index.md +++ b/content/en/deployment_gates/_index.md @@ -19,7 +19,7 @@ algolia: --- {{< site-region region="gov" >}} -
        Deployment Gates are not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.
        +
        Deployment Gates are not available in the selected site ({{< region-param key="dd_site_name" >}}) at this time.
        {{< /site-region >}} {{< callout url="http://datadoghq.com/product-preview/deployment-gates" >}} diff --git a/content/en/developers/custom_checks/prometheus.md b/content/en/developers/custom_checks/prometheus.md index 431d02a7efb4b..ff098972aa1c3 100644 --- a/content/en/developers/custom_checks/prometheus.md +++ b/content/en/developers/custom_checks/prometheus.md @@ -59,7 +59,7 @@ instances: ### Configuration -
        +
The names of the configuration and check files must match. If your check is called mycheck.py, your configuration file must be named mycheck.yaml.
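To make the pairing explicit, a minimal sketch; `mycheck` is the hypothetical name from the note above:

```yaml
# conf.d/mycheck.yaml — must share its base name with checks.d/mycheck.py
init_config:

instances:
  - prometheus_url: http://localhost:9090/metrics  # illustrative endpoint
```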
        diff --git a/content/en/developers/custom_checks/write_agent_check.md b/content/en/developers/custom_checks/write_agent_check.md index 720b97e9922f6..ead99975aade8 100644 --- a/content/en/developers/custom_checks/write_agent_check.md +++ b/content/en/developers/custom_checks/write_agent_check.md @@ -20,7 +20,7 @@ This page takes you through the process of building a basic "Hello world!" custo Before you create a custom Agent check, install the [Datadog Agent][1]. -
        To work with the latest version of the Agent, your custom Agent check must be Python 3 compatible.
        +
        To work with the latest version of the Agent, your custom Agent check must be Python 3 compatible.
        ### Configuration diff --git a/content/en/developers/dogstatsd/_index.md b/content/en/developers/dogstatsd/_index.md index 8e17181015686..e030d8bd34157 100644 --- a/content/en/developers/dogstatsd/_index.md +++ b/content/en/developers/dogstatsd/_index.md @@ -42,7 +42,7 @@ DogStatsD is available on Docker Hub and GCR: |--------------------------------------------------|-----------------------------------------------------------| | [hub.docker.com/r/datadog/dogstatsd][3] | [gcr.io/datadoghq/dogstatsd][4] | -
        Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.
        +
        Docker Hub is subject to image pull rate limits. If you are not a Docker Hub customer, Datadog recommends that you update your Datadog Agent and Cluster Agent configuration to pull from GCR or ECR. For instructions, see Changing your container registry.
        ## How it works @@ -374,7 +374,7 @@ options = { initialize(**options) ``` -
        +
        By default, Python DogStatsD client instances (including the statsd global instance) cannot be shared across processes but are thread-safe. Because of this, the parent process and each child process must create their own instances of the client or the buffering must be explicitly disabled by setting disable_buffering to True. See the documentation on datadog.dogstatsd for more details.
        diff --git a/content/en/developers/dogstatsd/high_throughput.md b/content/en/developers/dogstatsd/high_throughput.md index 56977aebc1572..793803b667bda 100644 --- a/content/en/developers/dogstatsd/high_throughput.md +++ b/content/en/developers/dogstatsd/high_throughput.md @@ -83,7 +83,7 @@ with dsd: dsd.gauge('example_metric.gauge_2', 1001, tags=["environment:dev"]) ``` -
        +
        By default, Python DogStatsD client instances (including the statsd global instance) cannot be shared across processes but are thread-safe. Because of this, the parent process and each child process must create their own instances of the client or the buffering must be explicitly disabled by setting disable_buffering to True. See the documentation on datadog.dogstatsd for more details.
        @@ -368,7 +368,7 @@ Avoid sending metrics in bursts in your application - this prevents the Datadog Another thing to look at to limit the maximum memory usage is to reduce the buffering. The main buffer of the DogStatsD server within the Agent is configurable with the `dogstatsd_queue_size` field (since Datadog Agent 6.1.0), its default value of `1024` induces an approximate maximum memory usage of 768MB. -
        +
        Note: Reducing the buffer size could increase the number of packet drops.
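For example, a sketch of the corresponding datadog.yaml setting (the value is illustrative):

```yaml
# datadog.yaml -- halving the default of 1024 roughly halves the buffer's
# maximum memory footprint, at the cost of more potential packet drops.
dogstatsd_queue_size: 512
```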
        @@ -384,7 +384,7 @@ See the next section on burst detection to help you detect bursts of metrics fro DogStatsD has a stats mode in which you can see which metrics are the most processed. -
        +
        Note: Enabling metrics stats mode can decrease DogStatsD performance.
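For example, a sketch of enabling it in datadog.yaml, assuming the dogstatsd_stats_enable option is available in your Agent version:

```yaml
# datadog.yaml -- enable only while debugging, then turn it back off,
# since stats mode can reduce DogStatsD throughput.
use_dogstatsd: true
dogstatsd_stats_enable: true
```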
        diff --git a/content/en/developers/dogstatsd/unix_socket.md b/content/en/developers/dogstatsd/unix_socket.md index 74767028583fc..d526997d93af8 100644 --- a/content/en/developers/dogstatsd/unix_socket.md +++ b/content/en/developers/dogstatsd/unix_socket.md @@ -38,7 +38,7 @@ To enable the Agent DogStatsD UDS: {{< tabs >}} {{% tab "Host" %}} -
        Note: The Agent install script automatically creates the socket file with the correct permissions, and use_dogstatsd: true & dogstatsd_socket: "/var/run/datadog/dsd.socket" are set by default.
        +
Note: The Agent install script automatically creates the socket file with the correct permissions; use_dogstatsd: true and dogstatsd_socket: "/var/run/datadog/dsd.socket" are set by default.
        1. Create a socket file for DogStatsD to use as a listening socket. For example: ```shell diff --git a/content/en/developers/faq/legacy-openmetrics.md b/content/en/developers/faq/legacy-openmetrics.md index 13bccf9fd3974..177bc409b7011 100644 --- a/content/en/developers/faq/legacy-openmetrics.md +++ b/content/en/developers/faq/legacy-openmetrics.md @@ -42,7 +42,7 @@ instances: ### Configuration -
        +
        The names of the configuration and check files must match. If your check is called mycheck.py your configuration file must be named mycheck.yaml.
        diff --git a/content/en/developers/ide_plugins/idea/_index.md b/content/en/developers/ide_plugins/idea/_index.md index ebdac8d196f8b..d37b72f428b34 100644 --- a/content/en/developers/ide_plugins/idea/_index.md +++ b/content/en/developers/ide_plugins/idea/_index.md @@ -20,7 +20,7 @@ further_reading: --- {{% site-region region="gov" %}} -
        +
        The Datadog extension for JetBrains IDEs is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} diff --git a/content/en/developers/ide_plugins/visual_studio/_index.md b/content/en/developers/ide_plugins/visual_studio/_index.md index 8ffdb1ad6a958..ea4454bce8852 100644 --- a/content/en/developers/ide_plugins/visual_studio/_index.md +++ b/content/en/developers/ide_plugins/visual_studio/_index.md @@ -22,7 +22,7 @@ further_reading: --- {{% site-region region="gov" %}} -
        +
        The Datadog extension for Visual Studio is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} diff --git a/content/en/developers/ide_plugins/vscode/_index.md b/content/en/developers/ide_plugins/vscode/_index.md index 94a67edfb8bd5..258a64d186423 100644 --- a/content/en/developers/ide_plugins/vscode/_index.md +++ b/content/en/developers/ide_plugins/vscode/_index.md @@ -28,7 +28,7 @@ further_reading: {{% site-region region="gov" %}} -
        +
        The Datadog extension for Visual Studio Code is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} diff --git a/content/en/dora_metrics/setup/deployments.md b/content/en/dora_metrics/setup/deployments.md index d45938b4f81b6..151fb5c17b821 100644 --- a/content/en/dora_metrics/setup/deployments.md +++ b/content/en/dora_metrics/setup/deployments.md @@ -192,7 +192,7 @@ https://docs.datadoghq.com/integrations/guide/source-code-integration/?tab=githu {{< tabs >}} {{% tab "GitHub" %}} -
        +
GitHub workflows running on the pull_request trigger are not currently supported by the GitHub integration. If you are using the pull_request trigger, use the alternative method.
        @@ -226,7 +226,7 @@ When this command is executed, Datadog receives the repository URL, the commit S Run this command in CI for every new commit. If a deployment is executed for a specific commit SHA, ensure that the `datadog-ci git-metadata upload` command is run for that commit **before** the deployment event is sent. -
        +
        Do not provide the --no-gitsync option to the datadog-ci git-metadata upload command. When that option is included, the commit information is not sent to Datadog and the change lead time metric is not calculated.
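For example, a sketch of the CI step, assuming DD_API_KEY (and DD_SITE, if needed) are already set in the CI environment:

```shell
# Run for every new commit, before the deployment event is sent.
# Do not add --no-gitsync: with it, commit information is not sent
# and change lead time is not calculated.
datadog-ci git-metadata upload
```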
        diff --git a/content/en/dora_metrics/setup/failures.md b/content/en/dora_metrics/setup/failures.md index e89327f3e36da..d382f113f4454 100644 --- a/content/en/dora_metrics/setup/failures.md +++ b/content/en/dora_metrics/setup/failures.md @@ -134,7 +134,7 @@ The matching algorithm works in the following steps: 5. If the PagerDuty service name of the incident matches a team name in the Software Catalog, the incident metrics and events are emitted with the team. 6. If there have been no matches up to this point, the incident metrics and events are emitted with the PagerDuty service and PagerDuty team provided in the incident. -
        +
        If an incident is resolved manually in PagerDuty instead of from a monitor notification, the incident resolution event does not contain monitor information and the first step of the matching algorithm is skipped.
        diff --git a/content/en/error_tracking/guides/sentry_sdk.md b/content/en/error_tracking/guides/sentry_sdk.md index 6fb92ef9a33b4..f9c7d5062d13c 100644 --- a/content/en/error_tracking/guides/sentry_sdk.md +++ b/content/en/error_tracking/guides/sentry_sdk.md @@ -6,7 +6,7 @@ further_reading: tag: "Documentation" text: "Manage Data Collection" --- -
        +
Using the Sentry SDK with Error Tracking helps you migrate to Datadog. However, to get the most out of Error Tracking, Datadog recommends using the Datadog SDKs. See Frontend Error Tracking and Backend Error Tracking.
        diff --git a/content/en/getting_started/agent/_index.md b/content/en/getting_started/agent/_index.md index b24d80606a4f9..13e65a3c922c0 100644 --- a/content/en/getting_started/agent/_index.md +++ b/content/en/getting_started/agent/_index.md @@ -150,7 +150,7 @@ Datadog agent (v. 7.XX.X) started on The Agent is set up to provide the following service checks: - `datadog.agent.up`: Returns `OK` if the Agent connects to Datadog. -
        AIX Agents do not report the datadog.agent.up service check. You can use the metric datadog.agent.running to monitor the uptime of an AIX Agent. The metric emits a value of 1 if the Agent is reporting to Datadog.
        +
        AIX Agents do not report the datadog.agent.up service check. You can use the metric datadog.agent.running to monitor the uptime of an AIX Agent. The metric emits a value of 1 if the Agent is reporting to Datadog.
        - `datadog.agent.check_status`: Returns `CRITICAL` if an Agent check is unable to send metrics to Datadog, otherwise returns `OK`. diff --git a/content/en/getting_started/containers/datadog_operator.md b/content/en/getting_started/containers/datadog_operator.md index ee2d8c7d91d08..633de1f37df98 100644 --- a/content/en/getting_started/containers/datadog_operator.md +++ b/content/en/getting_started/containers/datadog_operator.md @@ -67,7 +67,7 @@ The [Datadog Operator][1] is an open source [Kubernetes Operator][2] that enable ### Running Agents in a single container -
        Available in Operator v1.4.0 or later
        +
        Available in Operator v1.4.0 or later
        By default, the Datadog Operator creates an Agent DaemonSet with pods running multiple Agent containers. Datadog Operator v1.4.0 introduces a configuration which allows users to run Agents in a single container. In order to avoid elevating privileges for all Agents in the single container, this feature is only applicable when `system-probe` or `security-agent` is not required. For more details, see [Running as an unprivileged user][7] on the Agent Data Security page. diff --git a/content/en/getting_started/incident_management/_index.md b/content/en/getting_started/incident_management/_index.md index b2da8f9b52eb5..6f2e5d465ea46 100644 --- a/content/en/getting_started/incident_management/_index.md +++ b/content/en/getting_started/incident_management/_index.md @@ -38,7 +38,7 @@ further_reading: --- {{% site-region region="gov" %}} -
        Incident Management is not available for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +
        Incident Management is not available for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} ## Overview diff --git a/content/en/getting_started/integrations/azure.md b/content/en/getting_started/integrations/azure.md index dad74ad53f173..945eb60de3b76 100644 --- a/content/en/getting_started/integrations/azure.md +++ b/content/en/getting_started/integrations/azure.md @@ -52,7 +52,7 @@ See the following table for a summary of the various configuration options avail ***_All sites_** configurations can be used in the US3 site orgs, but only US3 site orgs can use the Azure Native integration. -
        Cloud cost management and log archives are only supported with App registration. For US3 sites that have set up the Datadog Azure Native integration, you need to create an App registration to access these functionalities. +
        Cloud cost management and log archives are only supported with App registration. For US3 sites that have set up the Datadog Azure Native integration, you need to create an App registration to access these functionalities.
        ## Setup @@ -112,7 +112,7 @@ If you are on the US3 site and use the Azure Native Integration, use the site se ### Azure Native integration If you are using the Azure Native integration, see the [Send Azure Logs with the Datadog Resource][18] guide for instructions on sending your _subscription level_, _Azure resource_, and _Azure Active Directory_ logs to Datadog. -
        Note: log archives are only supported with App registration. For US3 sites that have set up the Datadog Azure Native integration, you need to create an App registration to access these functionalities. +
        Note: log archives are only supported with App registration. For US3 sites that have set up the Datadog Azure Native integration, you need to create an App registration to access these functionalities.
        {{% /site-region %}} diff --git a/content/en/getting_started/integrations/google_cloud.md b/content/en/getting_started/integrations/google_cloud.md index bbdb4ccc63d86..ec3d528a8f9a1 100644 --- a/content/en/getting_started/integrations/google_cloud.md +++ b/content/en/getting_started/integrations/google_cloud.md @@ -46,7 +46,7 @@ Use this guide to get started monitoring your Google Cloud environment. This app {{% /site-region %}}         โ— The Google Cloud integration requires the below APIs to be enabled **for each of the projects** you want to monitor: -
        Ensure that any projects being monitored are not configured as scoping projects that pull in metrics from multiple other projects.
        +
        Ensure that any projects being monitored are not configured as scoping projects that pull in metrics from multiple other projects.
        [Cloud Monitoring API][3] : Allows Datadog to query your Google Cloud metric data. @@ -224,7 +224,7 @@ Use the [Datadog Dataflow template][14] to batch and compresses your log events You can use the [terraform-gcp-datadog-integration][64] module to manage this infrastructure through Terraform, or follow [the instructions listed here][16] to set up Log Collection. You can also use the [Stream logs from Google Cloud to Datadog][9] guide in the Google Cloud architecture center, for a more detailed explanation of the steps and architecture involved in log forwarding. For a deep dive into the benefits of the Pub/Sub to Datadog template, read [Stream your Google Cloud logs to Datadog with Dataflow][17] in the Datadog blog. -
        The Dataflow API must be enabled to use Google Cloud Dataflow. See Enabling APIs in the Google Cloud documentation for more information.
        +
        The Dataflow API must be enabled to use Google Cloud Dataflow. See Enabling APIs in the Google Cloud documentation for more information.
        ## Leveraging the Datadog Agent diff --git a/content/en/getting_started/support/_index.md b/content/en/getting_started/support/_index.md index dc1a9cba98912..fd7037ef1ca87 100644 --- a/content/en/getting_started/support/_index.md +++ b/content/en/getting_started/support/_index.md @@ -47,7 +47,7 @@ If you're not sure which option is best, feel free to use either channel to conn ## Reaching out on chat -
        Chat is available any business day between the hours of 10:00 and 19:00 Eastern Time (ET). Chat is not available for HIPAA-enabled accounts.
        +
        Chat is available any business day between the hours of 10:00 and 19:00 Eastern Time (ET). Chat is not available for HIPAA-enabled accounts.
        To get started, click **Support** on the bottom-left corner of the navigation menu. diff --git a/content/en/getting_started/tagging/unified_service_tagging.md b/content/en/getting_started/tagging/unified_service_tagging.md index 51ece4a55fae1..658df2c7b833b 100644 --- a/content/en/getting_started/tagging/unified_service_tagging.md +++ b/content/en/getting_started/tagging/unified_service_tagging.md @@ -128,7 +128,7 @@ You can also use the OpenTelemetry Resource Attributes environment variables to - name: OTEL_SERVICE_NAME value: "" ``` -
        The OTEL_SERVICE_NAME environment variable takes precedence over the service.name attribute in the OTEL_RESOURCE_ATTRIBUTES environment variable.
        +
        The OTEL_SERVICE_NAME environment variable takes precedence over the service.name attribute in the OTEL_RESOURCE_ATTRIBUTES environment variable.
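For example, a sketch with both variables set (values are illustrative); the service tag resolves to my-service:

```yaml
env:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "service.name=ignored-name,deployment.environment=prod,service.version=1.2.3"
  - name: OTEL_SERVICE_NAME
    value: "my-service"   # takes precedence over service.name above
```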
        ##### Partial configuration @@ -306,7 +306,7 @@ Requirements: {{% tab "ECS" %}} -
        +
        On ECS Fargate using Fluent Bit or FireLens, unified service tagging is only available for metrics and traces, not log collection.
        @@ -336,7 +336,7 @@ Set the `DD_ENV`, `DD_SERVICE`, and `DD_VERSION` (optional with automatic versio "com.datadoghq.tags.version": "" } ``` -
        +
        On ECS Fargate, you must add these tags to your application container, not the Datadog Agent container.
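For example, a sketch of a task definition fragment, with hypothetical container names and tag values; note the labels sit on the application container:

```json
{
  "containerDefinitions": [
    {
      "name": "my-app",
      "dockerLabels": {
        "com.datadoghq.tags.env": "prod",
        "com.datadoghq.tags.service": "my-app",
        "com.datadoghq.tags.version": "1.2.3"
      }
    },
    {
      "name": "datadog-agent"
    }
  ]
}
```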
        @@ -514,7 +514,7 @@ When using OpenTelemetry, map the following [resource attributes][16] to their c 1: `deployment.environment` is deprecated in favor of `deployment.environment.name` in [OpenTelemetry semantic conventions v1.27.0][17]. 2: `deployment.environment.name` is supported in Datadog Agent 7.58.0+ and Datadog Exporter v0.110.0+. -
        Datadog-specific environment variables like DD_SERVICE, DD_ENV or DD_VERSION are not supported out of the box in your OpenTelemetry configuration.
        +
Datadog-specific environment variables like DD_SERVICE, DD_ENV, or DD_VERSION are not supported out of the box in your OpenTelemetry configuration.
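For example, a sketch of setting the equivalent resource attributes through the environment (values are illustrative):

```shell
# deployment.environment.name requires Datadog Agent 7.58.0+ and
# Datadog Exporter v0.110.0+; use deployment.environment on older versions.
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,deployment.environment.name=prod,service.version=1.2.3"
```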
        {{< tabs >}} {{% tab "Environment variables" %}} diff --git a/content/en/getting_started/test_impact_analysis/_index.md b/content/en/getting_started/test_impact_analysis/_index.md index 2a70621c8a56a..ccd3fb40ce743 100644 --- a/content/en/getting_started/test_impact_analysis/_index.md +++ b/content/en/getting_started/test_impact_analysis/_index.md @@ -17,7 +17,7 @@ algolia: tags: ["test impact analysis", "intelligent test runner", "ci test", "ci tests", "flaky test", "flaky tests"] --- -
        This feature was formerly known as Intelligent Test Runner, and some tags still contain "itr".
        +
        This feature was formerly known as Intelligent Test Runner, and some tags still contain "itr".
        ## Overview diff --git a/content/en/infrastructure/process/_index.md b/content/en/infrastructure/process/_index.md index 8f7237f9e6f6a..e13ce4e97b9f5 100644 --- a/content/en/infrastructure/process/_index.md +++ b/content/en/infrastructure/process/_index.md @@ -25,7 +25,7 @@ further_reading: --- -
        +
        Live Processes and Live Process Monitoring are included in the Enterprise plan. For all other plans, contact your account representative or success@datadoghq.com to request this feature.
        @@ -156,7 +156,7 @@ See the standard [DaemonSet installation][1] and the [Docker Agent][2] informati {{% /tab %}} {{% tab "AWS ECS Fargate" %}} -
        You can view your ECS Fargate processes in Datadog. To see their relationship to ECS Fargate containers, use the Datadog Agent v7.50.0 or later.
        +
        You can view your ECS Fargate processes in Datadog. To see their relationship to ECS Fargate containers, use the Datadog Agent v7.50.0 or later.
        In order to collect processes, the Datadog Agent must be running as a container within the task. diff --git a/content/en/infrastructure/process/increase_process_retention.md b/content/en/infrastructure/process/increase_process_retention.md index c28e3531a68fe..4c9c3fac3bbb7 100644 --- a/content/en/infrastructure/process/increase_process_retention.md +++ b/content/en/infrastructure/process/increase_process_retention.md @@ -49,7 +49,7 @@ You can create multiple metrics using the same query by selecting the **Create A **Note**: Data points for process-based metrics are generated at ten second intervals. There may be up to a 3-minute delay from the moment the metric is created or updated, to the moment the first data point is reported. -
        Process-based metrics are considered custom metrics and billed accordingly. Avoid grouping by unbounded or extremely high cardinality tags like command and user to avoid impacting your billing.
        +
Process-based metrics are considered custom metrics and billed accordingly. Avoid grouping by unbounded or extremely high-cardinality tags such as command and user, as this can impact your billing.
        ### Update a process-based metric diff --git a/content/en/infrastructure/resource_catalog/_index.md b/content/en/infrastructure/resource_catalog/_index.md index 3c17abb766424..66a7838c08eb8 100644 --- a/content/en/infrastructure/resource_catalog/_index.md +++ b/content/en/infrastructure/resource_catalog/_index.md @@ -52,7 +52,7 @@ By default, when you navigate to the Resource Catalog, you are able to see Datad {{< img src="/infrastructure/resource_catalog/resource_catalog_settings.png" alt="The Resource Catalog configuration page for extending resource collection" width="100%">}} -
        Note: Enabling resource collection can impact your AWS CloudWatch costs. To avoid these charges, disable Usage metrics in the Metric Collection tab of the Datadog AWS Integration.
        +
        Note: Enabling resource collection can impact your AWS CloudWatch costs. To avoid these charges, disable Usage metrics in the Metric Collection tab of the Datadog AWS Integration.
        {{< img src="/infrastructure/resource_catalog/aws_usage_toggle.png" alt="AWS Usage toggle in account settings" style="width:100%;" >}}
        diff --git a/content/en/infrastructure/resource_catalog/policies/_index.md b/content/en/infrastructure/resource_catalog/policies/_index.md index f14509242dfa1..e99d1f42aa6d7 100644 --- a/content/en/infrastructure/resource_catalog/policies/_index.md +++ b/content/en/infrastructure/resource_catalog/policies/_index.md @@ -15,7 +15,7 @@ further_reading: text: "Proactively enforce infrastructure best practices with Datadog Resource Policies" --- -{{< site-region region="gov" >}}
        Resource Catalog is not available for the selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +{{< site-region region="gov" >}}
        Resource Catalog is not available for the selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{< /site-region >}} {{< callout url="https://www.datadoghq.com/product-preview/infra-governance-policies/" btn_hidden="false" header="Join the Preview!">}} diff --git a/content/en/infrastructure/resource_catalog/resource_changes/_index.md b/content/en/infrastructure/resource_catalog/resource_changes/_index.md index 9771efc001270..ff193c5e62d9c 100644 --- a/content/en/infrastructure/resource_catalog/resource_changes/_index.md +++ b/content/en/infrastructure/resource_catalog/resource_changes/_index.md @@ -7,7 +7,7 @@ further_reading: --- {{< site-region region="gov" >}} -
        Resource Changes is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +
        Resource Changes is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{< /site-region >}} {{< callout url="https://www.datadoghq.com/product-preview/recent-changes-tab/" >}} @@ -38,7 +38,7 @@ For a comprehensive list of supported resources, see the [Supported resources](# To view configuration changes for your resources, ensure that resource collection is enabled for your cloud resources so they are visible within the Resource Catalog. You can manage this setting from the [Resource Catalog settings page][2] or the relevant [cloud provider integration tile][3]. This step automatically enables [Snapshot Changes](#snapshot-changes) for your resources. -
        Note: Enabling resource collection can impact your AWS CloudWatch costs. To avoid these charges, disable Usage metrics in the Metric Collection tab of the Datadog AWS Integration.
        +
        Note: Enabling resource collection can impact your AWS CloudWatch costs. To avoid these charges, disable Usage metrics in the Metric Collection tab of the Datadog AWS Integration.
        {{< img src="/infrastructure/resource_catalog/aws_usage_toggle.png" alt="AWS Usage toggle in account settings" style="width:100%;" >}}
        diff --git a/content/en/integrations/faq/agent-5-amazon-ecs.md b/content/en/integrations/faq/agent-5-amazon-ecs.md index 59b17321b6b0a..f83bad305aa03 100644 --- a/content/en/integrations/faq/agent-5-amazon-ecs.md +++ b/content/en/integrations/faq/agent-5-amazon-ecs.md @@ -4,7 +4,7 @@ title: Amazon Elastic Container Service with Agent v5 private: true --- -
        +
This documentation describes how to set up Amazon EC2 Container Service with Datadog Agent v5.
        diff --git a/content/en/integrations/faq/troubleshooting-jmx-integrations.md b/content/en/integrations/faq/troubleshooting-jmx-integrations.md index 360e989f6e44c..bad64107a6930 100644 --- a/content/en/integrations/faq/troubleshooting-jmx-integrations.md +++ b/content/en/integrations/faq/troubleshooting-jmx-integrations.md @@ -9,7 +9,7 @@ further_reading: To verify you have access to JMX, test using JConsole or equivalent if possible. If you're unable to connect using JConsole [this article][1] may help to get you sorted. Also, if the metrics listed in your YAML aren't 1:1 with those listed in JConsole you'll need to correct this. -
        +
For Agent v5.32.8 and later, the jmxterm JAR is not shipped with the Agent. To download and use jmxterm, see the upstream project. In the examples below, change /opt/datadog-agent/agent/checks/libs/jmxterm-1.0-DATADOG-uber.jar to the path of the jmxterm JAR you downloaded from the upstream project.
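For example, a sketch of invoking a downloaded jmxterm JAR against a local JMX port (path, version, and port are illustrative):

```shell
java -jar /tmp/jmxterm-1.0.2-uber.jar -l localhost:7199
```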
        diff --git a/content/en/integrations/guide/aws-cloudwatch-metric-streams-with-kinesis-data-firehose.md b/content/en/integrations/guide/aws-cloudwatch-metric-streams-with-kinesis-data-firehose.md index 96d293022734a..9e57b9b6e2bac 100644 --- a/content/en/integrations/guide/aws-cloudwatch-metric-streams-with-kinesis-data-firehose.md +++ b/content/en/integrations/guide/aws-cloudwatch-metric-streams-with-kinesis-data-firehose.md @@ -11,7 +11,7 @@ further_reading: --- {{% site-region region="gov" %}} -
        AWS CloudWatch Metric Streams with Amazon Data Firehose is not available for the selected site ({{< region-param key="dd_site_name" >}}).
        +
        AWS CloudWatch Metric Streams with Amazon Data Firehose is not available for the selected site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} Using Amazon CloudWatch Metric Streams and Amazon Data Firehose, you can get CloudWatch metrics into Datadog with only a two to three minute latency. This is significantly faster than Datadog's default API polling approach, which provides updated metrics every 10 minutes. You can learn more about the API polling approach in the [Cloud Metric Delay documentation][1]. @@ -24,7 +24,7 @@ Using Amazon CloudWatch Metric Streams and Amazon Data Firehose, you can get Clo - Optionally specify a limited set of namespaces or metrics to stream. 2. Once you create the Metric Stream, Datadog immediately starts receiving the streamed metrics and displays them on the Datadog site with no additional configuration needed. -
        Per-namespace filtering configured in the AWS Integration tile also applies to CloudWatch Metric Streams.
        +
        Per-namespace filtering configured in the AWS Integration tile also applies to CloudWatch Metric Streams.
        ### Metric Streaming versus API polling {#streaming-vs-polling} diff --git a/content/en/integrations/guide/aws-manual-setup.md b/content/en/integrations/guide/aws-manual-setup.md index c5b65a2fd5ae4..7819395f2777d 100644 --- a/content/en/integrations/guide/aws-manual-setup.md +++ b/content/en/integrations/guide/aws-manual-setup.md @@ -52,7 +52,7 @@ Use this guide to manually set up the Datadog [AWS Integration][1]. To set up the AWS integration manually, create an IAM policy and IAM role in your AWS account, and configure the role with an AWS External ID generated in your Datadog account. This allows Datadog's AWS account to query AWS APIs on your behalf, and pull data into your Datadog account. The sections below detail the steps for creating each of these components, and then completing the setup in your Datadog account. {{< site-region region="gov" >}} -
        +
        Setting up S3 Log Archives using Role Delegation is in limited availability. Contact Datadog Support to request this feature in your Datadog for Government account.
        {{< /site-region >}} @@ -123,7 +123,7 @@ This policy defines the permissions necessary for the Datadog integration role t 5. Click **Save**. 6. Wait up to 10 minutes for data to start being collected, and then view the out-of-the-box AWS Overview Dashboard to see metrics sent by your AWS services and infrastructure. -
        If there is a Datadog is not authorized to perform sts:AssumeRole error, follow the troubleshooting steps recommended in the UI, or read the troubleshooting guide.
        +
        If there is a Datadog is not authorized to perform sts:AssumeRole error, follow the troubleshooting steps recommended in the UI, or read the troubleshooting guide.
        \*{{% mainland-china-disclaimer %}} diff --git a/content/en/integrations/guide/azure-portal.md b/content/en/integrations/guide/azure-portal.md index ab88d86efda02..0d4a65a0e1d18 100644 --- a/content/en/integrations/guide/azure-portal.md +++ b/content/en/integrations/guide/azure-portal.md @@ -13,7 +13,7 @@ further_reading: text: "Enable monitoring for enterprise-scale Azure environments in minutes with Datadog" --- -
        +
        This guide is for managing the Azure Native integration with the Datadog resource.
        diff --git a/content/en/integrations/guide/cloud-foundry-setup.md b/content/en/integrations/guide/cloud-foundry-setup.md index 69e1d85ce4d78..c1f7ce358afad 100644 --- a/content/en/integrations/guide/cloud-foundry-setup.md +++ b/content/en/integrations/guide/cloud-foundry-setup.md @@ -92,7 +92,7 @@ There are three points of integration with Datadog, each of which achieves a dif - **Datadog Cluster Agent BOSH release** - Deploy one Datadog Cluster Agent job. The job queries the CAPI and BBS API to collect cluster-level and application-level metadata to provide improved tagging capabilities in your applications and containers. - **Datadog Firehose Nozzle** - Deploy one or more Datadog Firehose Nozzle jobs. The jobs tap into your deployment's Loggregator Firehose and send all non-container metrics to Datadog. -
        +
        These integrations are meant for Cloud Foundry deployment administrators, not end users.
        diff --git a/content/en/integrations/guide/fips-integrations.md b/content/en/integrations/guide/fips-integrations.md index 2c4da6193d97c..f871e4b5a4095 100644 --- a/content/en/integrations/guide/fips-integrations.md +++ b/content/en/integrations/guide/fips-integrations.md @@ -49,7 +49,7 @@ Integrations marked out of the box ("OOTB") require no further configuration. | Zookeeper | The `use_tls` option must be enabled through the integration configuration. | -
        +
Configuring the IIS integration to query remote systems is discouraged, because remote querying relies on a Windows cryptography API that Datadog does not control.
        diff --git a/content/en/integrations/guide/jmxfetch-fips.md b/content/en/integrations/guide/jmxfetch-fips.md index 9866b4a767a22..a8529402eba5b 100644 --- a/content/en/integrations/guide/jmxfetch-fips.md +++ b/content/en/integrations/guide/jmxfetch-fips.md @@ -45,7 +45,7 @@ connector) and the client (JMXFetch). Commands provided in this section are for reference only and should be adjusted based on your specific scenario. -
        Configure the JVM in FIPS mode before generating certificates, as some Java FIPS modules reject private keys created in non-FIPS mode.
        +
        Configure the JVM in FIPS mode before generating certificates, as some Java FIPS modules reject private keys created in non-FIPS mode.
        {{< tabs >}} diff --git a/content/en/integrations/guide/source-code-integration.md b/content/en/integrations/guide/source-code-integration.md index 8c9df4835bee9..db2b43693f3b2 100644 --- a/content/en/integrations/guide/source-code-integration.md +++ b/content/en/integrations/guide/source-code-integration.md @@ -69,7 +69,7 @@ Install Datadog's [GitHub integration][101] using the [integration tile][102] or {{% /tab %}} {{% tab "GitLab (SaaS & On-Prem)" %}} -
        +
Repositories from GitLab instances are supported in closed Preview, for both GitLab.com (SaaS) and GitLab Self-Managed/Dedicated (On-Prem). For GitLab Self-Managed, your instance must be accessible from the internet. If needed, you can allowlist Datadog's webhook IP addresses to allow Datadog to connect to your instance. Join the Preview.
        @@ -81,7 +81,7 @@ Install Datadog's [GitLab Source Code integration][101] using the [integration t {{% /tab %}} {{% tab "Azure DevOps (SaaS Only)" %}} -
        +
        Repositories from Azure DevOps are supported in closed Preview. Join the Preview.
        @@ -92,7 +92,7 @@ Install Datadog's Azure DevOps Source Code integration while onboarding to [Data {{% /tab %}} {{% tab "Other SCM Providers" %}} -
        +
Repositories on self-hosted instances or private URLs are not supported out of the box. To enable this feature, contact Support.
        diff --git a/content/en/internal_developer_portal/integrations.md b/content/en/internal_developer_portal/integrations.md index 43454a7f3e59a..1ebdbc861af9d 100644 --- a/content/en/internal_developer_portal/integrations.md +++ b/content/en/internal_developer_portal/integrations.md @@ -17,7 +17,7 @@ further_reading: text: "Learn about the PagerDuty integration" --- {{% site-region region="gov" %}} -
        +
        PagerDuty and OpsGenie integrations for Internal Developer Portal are not supported in the {{< region-param key=dd_datacenter code="true" >}} site.
        {{% /site-region %}} diff --git a/content/en/internal_developer_portal/scorecards/using_scorecards.md b/content/en/internal_developer_portal/scorecards/using_scorecards.md index 88b5e384bd429..cc026cef4e0fd 100644 --- a/content/en/internal_developer_portal/scorecards/using_scorecards.md +++ b/content/en/internal_developer_portal/scorecards/using_scorecards.md @@ -45,7 +45,7 @@ You can visualize how teams' scores progress over time as they make changes and You can generate Scorecard reports, which send scheduled overviews of Scorecard information to your team's Slack channel to help everyone understand how entities and teams are meeting the expected standards. Creating a report generates a Workflow using [Datadog Workflow Automation][2], which runs at a scheduled time. -
        Running this Workflow may impact your billing. Read the pricing page for more information
        +
Running this Workflow may impact your billing. Read the pricing page for more information.
        To create a Report: diff --git a/content/en/internal_developer_portal/software_catalog/endpoints/_index.md b/content/en/internal_developer_portal/software_catalog/endpoints/_index.md index e0852c3b1e88f..a9fb5b8ac0ddf 100644 --- a/content/en/internal_developer_portal/software_catalog/endpoints/_index.md +++ b/content/en/internal_developer_portal/software_catalog/endpoints/_index.md @@ -28,7 +28,7 @@ aliases: --- {{% site-region region="gov" %}} -
        +
        Endpoint Observability is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} diff --git a/content/en/internal_developer_portal/software_catalog/entity_model/_index.md b/content/en/internal_developer_portal/software_catalog/entity_model/_index.md index 989a4d7a7a553..fda0a096f9d01 100644 --- a/content/en/internal_developer_portal/software_catalog/entity_model/_index.md +++ b/content/en/internal_developer_portal/software_catalog/entity_model/_index.md @@ -52,7 +52,7 @@ algolia: --- {{< site-region region="gov" >}} -
        Entity Model schema v3.0 is not available in the selected site at this time.
        +
        Entity Model schema v3.0 is not available in the selected site at this time.
        {{< /site-region >}} diff --git a/content/en/internal_developer_portal/software_catalog/set_up/import_entities.md b/content/en/internal_developer_portal/software_catalog/set_up/import_entities.md index 7bb73838aefb0..ae48ff84bee20 100644 --- a/content/en/internal_developer_portal/software_catalog/set_up/import_entities.md +++ b/content/en/internal_developer_portal/software_catalog/set_up/import_entities.md @@ -57,7 +57,7 @@ During import, Datadog maps Backstage data to Datadog data: | `spec.dependsOn` | `dependsOn` | | Other `spec` values | Mapped to custom tags | -
        +
The Software Catalog processes the YAML file as a whole: if any section of the file does not have kind:component or kind:system, the entire catalog-info.yaml file is rejected. Schema version v3.0 is required to use kind:system and the dependsOn field.
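As an illustration, a sketch of a catalog-info.yaml section that imports cleanly (entity names are hypothetical; kind casing follows Backstage conventions):

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: shopping-cart
spec:
  type: service
  dependsOn:
    - resource:shopping-cart-db
```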
        diff --git a/content/en/llm_observability/evaluations/ragas_evaluations.md b/content/en/llm_observability/evaluations/ragas_evaluations.md index 95004861fe60e..f01ffb3ac8ad3 100644 --- a/content/en/llm_observability/evaluations/ragas_evaluations.md +++ b/content/en/llm_observability/evaluations/ragas_evaluations.md @@ -14,7 +14,7 @@ further_reading: For a simplified setup guide, see [Ragas Quickstart][7]. -
        +
        Datadog recommends that you use sampling for Ragas evaluations. These LLM-as-a-judge evaluations are powered by your LLM provider's account. Evaluations are automatically traced and sent to Datadog. These traces contain LLM spans, which may affect your LLM Observability billing. See Sampling.
        diff --git a/content/en/logs/explorer/facets.md b/content/en/logs/explorer/facets.md index dff5c0171e3f1..7e2ca7fb29ef2 100644 --- a/content/en/logs/explorer/facets.md +++ b/content/en/logs/explorer/facets.md @@ -220,7 +220,7 @@ This is the best option if you onboard logs flowing from new sources. Rather tha ## Delete a facet -
        Deleting a facet that is being used in indexes, monitors, dashboards, restriction queries, or by other teams can cause configurations to break.
        +
        Deleting a facet that is being used in indexes, monitors, dashboards, restriction queries, or by other teams can cause configurations to break.
        To delete a facet, follow these steps: diff --git a/content/en/logs/explorer/search.md b/content/en/logs/explorer/search.md index fb7ae4c6dfe55..292b5275e1071 100644 --- a/content/en/logs/explorer/search.md +++ b/content/en/logs/explorer/search.md @@ -22,7 +22,7 @@ The [Logs Explorer][5] lets you search and view individual logs as a list. Howev ## Natural language queries {{% site-region region="gov" %}} -
        +
Natural Language Queries is not available for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} diff --git a/content/en/logs/explorer/search_syntax.md b/content/en/logs/explorer/search_syntax.md index b207e6a18799a..999c380e34fac 100644 --- a/content/en/logs/explorer/search_syntax.md +++ b/content/en/logs/explorer/search_syntax.md @@ -43,7 +43,7 @@ To combine multiple terms into a complex query, you can use any of the following ## Full-text search -
        The full-text search feature is only available in Log Management and works in monitor, dashboard, and notebook queries. The full-text search syntax cannot be used to define index filters, archive filters, log pipeline filters, rehydration filters, or in Live Tail.
        +
        The full-text search feature is only available in Log Management and works in monitor, dashboard, and notebook queries. The full-text search syntax cannot be used to define index filters, archive filters, log pipeline filters, rehydration filters, or in Live Tail.
        Use the syntax `*:search_term` to perform a full-text search across all log attributes, including the log message. diff --git a/content/en/logs/faq/logs_cost_attribution.md b/content/en/logs/faq/logs_cost_attribution.md index 0e5b950c12e99..0eb97a8bad739 100644 --- a/content/en/logs/faq/logs_cost_attribution.md +++ b/content/en/logs/faq/logs_cost_attribution.md @@ -89,7 +89,7 @@ Create custom tags so you can organize custom log usage metrics into categories ### Create a `retention_period` tag -
        Datadog recommends that you set up the retention_period tag even if your indexes all have the same retention period. This makes sure that if you start using multiple retention periods, all logs are tagged with its retention period.
        +
Datadog recommends that you set up the retention_period tag even if your indexes all have the same retention period. This ensures that if you start using multiple retention periods, each log is tagged with its retention period.
        `retention_period` is the number of days your logs are retained in Datadog indexes. Since indexing billing costs are incurred based on the number of days that the logs are retained, use the `retention_period` tag to associate each log with its retention period to see cost attribution. @@ -150,7 +150,7 @@ Use a [Category Processor][6] to create a new `retention_period` attribute to as ### Create an `online_archives` tag -
        Datadog recommends that you set up the online_archives tag even if none of your indexes have online archives enabled. This ensures that if you start using Online Archives, the relevant logs are tagged with online_archives.
        +
        Datadog recommends that you set up the online_archives tag even if none of your indexes have online archives enabled. This ensures that if you start using Online Archives, the relevant logs are tagged with online_archives.
        The `online_archives` tag indicates whether or not your logs have been routed to Online Archives. Since Online Archives are charged differently than standard indexing, use the `online_archives` tag to determine which logs have been routed to Online Archives and see cost attribution. @@ -195,7 +195,7 @@ Datadog highly recommends automating this process by using the [Datadog API endp ### Create a `sds` tag -
        Datadog recommends that you still set up the sds tag even if you are not using the Sensitive Data Scanner. This makes sure that if you start using Sensitive Data Scanner, all the relevant logs are tagged with sds.
        +
Datadog recommends that you still set up the sds tag even if you are not using the Sensitive Data Scanner. This ensures that if you start using Sensitive Data Scanner, the relevant logs are tagged with sds.
        The `sds` tag indicates whether or not your logs have been scanned by the Sensitive Data Scanner. Use the `sds` tag to estimate the costs associated with the specific usage of Sensitive Data Scanner. diff --git a/content/en/logs/guide/azure-native-logging-guide.md b/content/en/logs/guide/azure-native-logging-guide.md index fdbe9b7480b61..2facf69f6f85d 100644 --- a/content/en/logs/guide/azure-native-logging-guide.md +++ b/content/en/logs/guide/azure-native-logging-guide.md @@ -27,7 +27,7 @@ Provide insight into the operations on your resources at the [control plane][1]. To send activity logs to Datadog, select **Send subscription activity logs**. If this option is left unchecked, none of the activity logs are sent to Datadog. -
        When log collection is enabled, the Datadog resource automatically modifies the logging configurations of App Services. Azure triggers a restart for App Services when their logging configurations change.
        +
        When log collection is enabled, the Datadog resource automatically modifies the logging configurations of App Services. Azure triggers a restart for App Services when their logging configurations change.
        ## Azure resource logs diff --git a/content/en/logs/guide/collect-google-cloud-logs-with-push.md b/content/en/logs/guide/collect-google-cloud-logs-with-push.md index 1faf92e5a79ea..07e8f73278f15 100644 --- a/content/en/logs/guide/collect-google-cloud-logs-with-push.md +++ b/content/en/logs/guide/collect-google-cloud-logs-with-push.md @@ -15,7 +15,7 @@ further_reading: text: "How to send logs to Datadog while reducing data transfer fees" --- -
        +
This page describes deprecated features, with configuration information relevant to legacy Pub/Sub Push subscriptions for troubleshooting or modifying legacy setups. Pub/Sub Push subscriptions are being deprecated for the following reasons:

- For Google Cloud VPC, new Push subscriptions cannot be configured with external endpoints (see Google Cloud's [Supported products and limitations][12] page for more information)
        +
        Only a Datadog Admin can request log deletion. If you are not an Admin, make sure to include an Admin on the request so they can confirm the deletion request.
        @@ -61,7 +61,7 @@ If the options for changing your retention period, making logs un-queryable, and 1. If the request is for targeted deletion by time frame, the exact time range, in Epoch or UTC format, of the logs that contained sensitive data. 1. The name of the indexes where the sensitive data is in. 1. Confirmation that you understand the following requirement: -
        +
Datadog deletes logs by time buckets, not by query scope or precise time frame. Therefore, Datadog might have to delete a larger amount of data than your exposed logs. For example, if you need to delete all error logs from service:x that came in between 10:00 a.m. and 12:00 p.m. from index:main, Datadog might have to delete all logs in that index from 1:00 a.m. to 5:00 p.m. Datadog Support will work with you to ensure that only the necessary data is deleted.
        diff --git a/content/en/logs/guide/how-to-set-up-only-logs.md b/content/en/logs/guide/how-to-set-up-only-logs.md index 4561d48780417..e889002be6ef1 100644 --- a/content/en/logs/guide/how-to-set-up-only-logs.md +++ b/content/en/logs/guide/how-to-set-up-only-logs.md @@ -11,7 +11,7 @@ further_reading: text: "Kubernetes Log Collection" --- -
        Infrastructure Monitoring is a prerequisite to using APM. If you are an APM customer, do not turn off metric collection or you might lose critical telemetry and metric collection information.
        +
        Infrastructure Monitoring is a prerequisite to using APM. If you are an APM customer, do not turn off metric collection or you might lose critical telemetry and metric collection information.
        To disable payloads, you must be running Agent v6.4+. This disables metric data submission (including Custom Metrics) so that hosts stop showing up in Datadog. Follow these steps: diff --git a/content/en/logs/guide/reduce_data_transfer_fees.md b/content/en/logs/guide/reduce_data_transfer_fees.md index 0ccd170db39ca..992c77d954129 100644 --- a/content/en/logs/guide/reduce_data_transfer_fees.md +++ b/content/en/logs/guide/reduce_data_transfer_fees.md @@ -22,7 +22,7 @@ Send data over a private network to avoid the public internet and reduce your da ## Supported cloud providers -
        Make sure the selected Datadog site {{< region-param key="dd_site_name" code="true" >}} is correct. Cloud specific private links are not available for all Datadog sites.
        +
Make sure the selected Datadog site {{< region-param key="dd_site_name" code="true" >}} is correct. Cloud-specific private links are not available for all Datadog sites.
        {{< whatsnext desc="Connect to Datadog over:" >}} {{< nextlink href="/agent/guide/private-link/?tab=crossregionprivatelinkendpoints&site=us" >}}US1 - AWS PrivateLink{{< /nextlink >}} diff --git a/content/en/logs/log_collection/csharp.md b/content/en/logs/log_collection/csharp.md index a6e656347a569..5536bf2481683 100644 --- a/content/en/logs/log_collection/csharp.md +++ b/content/en/logs/log_collection/csharp.md @@ -323,7 +323,7 @@ Agentless logging (also known as "direct log submission") supports the following It does not require modifying your application code, or installing additional dependencies into your application. -
        +
Note: If you use log4net or NLog, an appender (log4net) or a logger (NLog) must be configured for Agentless logging to be enabled. In those cases, you can either add these extra dependencies or use Agentless logging with the Serilog sink instead.
        @@ -358,7 +358,7 @@ Enabled by default from Tracer version 3.24.0. : Enables Agentless logging. Enable for your logging framework by setting to `Serilog`, `NLog`, `Log4Net`, or `ILogger` (for `Microsoft.Extensions.Logging`). If you are using multiple logging frameworks, use a semicolon separated list of variables.
        **Example**: `Serilog;Log4Net;NLog` -
        +
        Note: If you are using a logging framework in conjunction with Microsoft.Extensions.Logging, you will generally need to use the framework name. For example, if you are using Serilog.Extensions.Logging, you should set DD_LOGS_DIRECT_SUBMISSION_INTEGRATIONS=Serilog.
        diff --git a/content/en/logs/log_collection/java.md b/content/en/logs/log_collection/java.md index 7ca8aa443dc7e..81a1209d475ab 100644 --- a/content/en/logs/log_collection/java.md +++ b/content/en/logs/log_collection/java.md @@ -520,7 +520,7 @@ Log4j 2 allows logging to a remote host, but it does not offer the ability to pr ### Configure Logback {{< site-region region="us3,us5,ap1,ap2,gov" >}} -
        The TCP endpoint is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}). For a list of logging endpoints, see Log Collection and Integrations.
        +
        The TCP endpoint is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}). For a list of logging endpoints, see Log Collection and Integrations.
        {{< /site-region >}} diff --git a/content/en/logs/log_collection/php.md b/content/en/logs/log_collection/php.md index 65995d1023abf..48fd681de1aa6 100644 --- a/content/en/logs/log_collection/php.md +++ b/content/en/logs/log_collection/php.md @@ -434,7 +434,7 @@ monolog: {{% /tab %}} {{% tab "Laravel" %}} -
        +
The function \DDTrace\current_context() was introduced in version 0.61.0.
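For example, a sketch of reading the identifiers from the current context, assuming ddtrace 0.61.0 or later:

```php
<?php
// current_context() returns the active trace and span identifiers,
// which can be attached to Monolog records for log/trace correlation.
$context = \DDTrace\current_context();
$record['dd'] = [
    'trace_id' => $context['trace_id'],
    'span_id'  => $context['span_id'],
];
```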
        diff --git a/content/en/logs/log_configuration/archives.md b/content/en/logs/log_configuration/archives.md index 973ec7335b1b1..a8b132c3c0dfa 100644 --- a/content/en/logs/log_configuration/archives.md +++ b/content/en/logs/log_configuration/archives.md @@ -78,7 +78,7 @@ Set up the [Google Cloud integration][1] for the project that holds your GCS sto ### Create a storage bucket {{< site-region region="gov" >}} -
        Sending logs to an archive is outside of the Datadog GovCloud environment, which is outside the control of Datadog. Datadog shall not be responsible for any logs that have left the Datadog GovCloud environment, including without limitation, any obligations or requirements that the user may have related to FedRAMP, DoD Impact Levels, ITAR, export compliance, data residency or similar regulations applicable to such logs.
        +
        Sending logs to an archive is outside of the Datadog GovCloud environment, which is outside the control of Datadog. Datadog shall not be responsible for any logs that have left the Datadog GovCloud environment, including without limitation, any obligations or requirements that the user may have related to FedRAMP, DoD Impact Levels, ITAR, export compliance, data residency or similar regulations applicable to such logs.
        {{< /site-region >}} {{< tabs >}} @@ -87,7 +87,7 @@ Set up the [Google Cloud integration][1] for the project that holds your GCS sto Go into your [AWS console][1] and [create an S3 bucket][2] to send your archives to. {{< site-region region="gov" >}} -
        Datadog Archives do not support bucket names with dots (.) when integrated with an S3 FIPS endpoint which relies on virtual-host style addressing. Learn more from AWS documentation. AWS FIPS and AWS Virtual Hosting.
        +
Datadog Archives do not support bucket names containing dots (.) when integrated with an S3 FIPS endpoint, which relies on virtual-host-style addressing. For more information, see the AWS documentation on AWS FIPS and AWS Virtual Hosting.
        {{< /site-region >}} **Notes:** diff --git a/content/en/logs/log_configuration/flex_logs.md b/content/en/logs/log_configuration/flex_logs.md index 083026fded0e0..20a7c6e19ec4b 100644 --- a/content/en/logs/log_configuration/flex_logs.md +++ b/content/en/logs/log_configuration/flex_logs.md @@ -70,7 +70,7 @@ Use the spectrum of log types shown in the image below to determine when to use Compute is the querying capacity to run queries for Flex Logs. It is used when querying logs in the Flex Logs tier. It is not used for ingestion or when only searching Standard Indexing logs. The available compute tiers are: -
        The compute sizes available for US3, US5, AP1, AP2, and US1-FED are Starter, XS and S.
        +
The compute sizes available for US3, US5, AP1, AP2, and US1-FED are Starter, XS, and S.
        - Starter - Extra small (XS) diff --git a/content/en/logs/log_configuration/forwarding_custom_destinations.md b/content/en/logs/log_configuration/forwarding_custom_destinations.md index eed80d041c345..8ff93910c5c1b 100644 --- a/content/en/logs/log_configuration/forwarding_custom_destinations.md +++ b/content/en/logs/log_configuration/forwarding_custom_destinations.md @@ -37,7 +37,7 @@ The following metrics report on logs that have been forwarded successfully, incl ## Set up log forwarding to custom destinations {{< site-region region="gov" >}} -
        Sending logs to a custom destination is outside of the Datadog GovCloud environment, which is outside the control of Datadog. Datadog shall not be responsible for any logs that have left the Datadog GovCloud environment, including without limitation, any obligations or requirements that the user may have related to FedRAMP, DoD Impact Levels, ITAR, export compliance, data residency or similar regulations applicable to such logs.
        +
        Sending logs to a custom destination is outside of the Datadog GovCloud environment, which is outside the control of Datadog. Datadog shall not be responsible for any logs that have left the Datadog GovCloud environment, including without limitation, any obligations or requirements that the user may have related to FedRAMP, DoD Impact Levels, ITAR, export compliance, data residency or similar regulations applicable to such logs.
        {{< /site-region >}} 1. Add webhook IPs from the {{< region-param key="ip_ranges_url" link="true" text="IP ranges list">}} to the allowlist. diff --git a/content/en/logs/log_configuration/indexes.md b/content/en/logs/log_configuration/indexes.md index 7d1a7a982e8c9..e99c470bb4585 100644 --- a/content/en/logs/log_configuration/indexes.md +++ b/content/en/logs/log_configuration/indexes.md @@ -52,7 +52,7 @@ To delete an index from your organization, use the "Delete icon" in the index ac {{< img src="logs/indexes/delete-index.png" alt="Delete index" style="width:70%;">}} -
        +
        You cannot recreate an index with the same name as the deleted one.
        diff --git a/content/en/logs/log_configuration/logs_to_metrics.md b/content/en/logs/log_configuration/logs_to_metrics.md index 72e2524e785e2..74734c0590fca 100644 --- a/content/en/logs/log_configuration/logs_to_metrics.md +++ b/content/en/logs/log_configuration/logs_to_metrics.md @@ -53,7 +53,7 @@ You can also create metrics from an Analytics search by selecting the "Generate {{< img src="logs/processing/logs_to_metrics/count_unique.png" alt="The timeseries graph configuration page with the count unique query parameter highlighted" style="width:80%;">}} -
        Log-based metrics are considered custom metrics and billed accordingly. Avoid grouping by unbounded or extremely high cardinality attributes like timestamps, user IDs, request IDs, or session IDs to avoid impacting your billing.
        +
Log-based metrics are considered custom metrics and billed accordingly. Avoid grouping by unbounded or extremely high-cardinality attributes such as timestamps, user IDs, request IDs, or session IDs, as this can impact your billing.
        ### Update a log-based metric diff --git a/content/en/logs/log_configuration/online_archives.md b/content/en/logs/log_configuration/online_archives.md index 54e7faa02ce4e..914ed133a6a3d 100644 --- a/content/en/logs/log_configuration/online_archives.md +++ b/content/en/logs/log_configuration/online_archives.md @@ -16,7 +16,7 @@ algolia: tags: ['online archives'] --- -
        +
        Online Archives is in limited availability. To request access, contact Datadog Support.
        diff --git a/content/en/logs/log_configuration/parsing.md b/content/en/logs/log_configuration/parsing.md index a71a73c3bc55a..672a99e05107c 100644 --- a/content/en/logs/log_configuration/parsing.md +++ b/content/en/logs/log_configuration/parsing.md @@ -72,7 +72,7 @@ After processing, the following structured log is generated: ### Matcher and filter -
        Grok parsing features available at query-time (in the Log Explorer) support a limited subset of matchers (data, integer, notSpace, number, and word) and filters (number and integer).

        +
        Grok parsing features available at query-time (in the Log Explorer) support a limited subset of matchers (data, integer, notSpace, number, and word) and filters (number and integer).

The following full set of matchers and filters is specific to ingest-time Grok Parser functionality.
        Here is a list of all the matchers and filters natively implemented by Datadog: diff --git a/content/en/logs/log_configuration/pipelines.md b/content/en/logs/log_configuration/pipelines.md index 58f55b94a16e0..ec29fe17e6cca 100644 --- a/content/en/logs/log_configuration/pipelines.md +++ b/content/en/logs/log_configuration/pipelines.md @@ -99,7 +99,7 @@ Specify alternate attributes to use as the source of a log's date by setting a [ **Note**: Datadog rejects a log entry if its official date is older than 18 hours in the past. -
        +
        The recognized date formats are: ISO8601, UNIX (the milliseconds EPOCH format), and RFC3164.
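For reference, one illustrative value per recognized format:

```text
ISO8601:  2017-11-06T14:03:05.000Z
UNIX:     1510004585000   (milliseconds since epoch)
RFC3164:  Nov  6 14:03:05
```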
        diff --git a/content/en/metrics/custom_metrics/historical_metrics.md b/content/en/metrics/custom_metrics/historical_metrics.md index 9b1c801a11f0c..83ae4c9f9481a 100644 --- a/content/en/metrics/custom_metrics/historical_metrics.md +++ b/content/en/metrics/custom_metrics/historical_metrics.md @@ -18,7 +18,7 @@ further_reading: {{< jqmath-vanilla >}} {{% site-region region="gov" %}} -
        Historical metrics ingestion is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +
        Historical metrics ingestion is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} ## Overview diff --git a/content/en/metrics/guide/custom_metrics_governance.md b/content/en/metrics/guide/custom_metrics_governance.md index 9a963d0ef89af..3050d55e581a4 100644 --- a/content/en/metrics/guide/custom_metrics_governance.md +++ b/content/en/metrics/guide/custom_metrics_governance.md @@ -33,7 +33,7 @@ Check out this [interactive walk through][17] of Datadog's custom metrics govern ## Prerequisites -
        Some product features require Administrator access.
        +
        Some product features require Administrator access.
        {{< whatsnext desc="This guide assumes you have an understanding of the following concepts in custom metrics:" >}} {{< nextlink href="/metrics/custom_metrics/" >}}What is considered a custom metric{{< /nextlink >}} @@ -49,7 +49,7 @@ See the steps in this section to review your total account's monthly metric usag ### Account-level visibility -
        You must have the Datadog Admin Role to access the Plan & Usage page.
        +
        You must have the Datadog Admin Role to access the Plan & Usage page.
        The [Plan and Usage][1] provides you an out-of-the-box (OOTB) summary of your account's monthly billable custom metrics usage with detailed insights on your costs, burn rate, and Top Custom Metric names. @@ -76,7 +76,7 @@ Team-level visibility enables account administrators to hold teams accountable. Individual teams might have limited insights into the costs of the metrics and tags they're submitting. This results in teams being less motivated to control their usage or even limit usage growth. It is crucial for everyone to have visibility into their usage and feel empowered to take ownership in managing those volumes and associated costs. #### Find the source of your largest custom metrics -
        You must have the Datadog Admin Role to access the Plan & Usage page.
        +
        You must have the Datadog Admin Role to access the Plan & Usage page.
        {{< img src="metrics/guide/custom_metrics_governance/team_attribution_plan_usage_table.png" alt="Navigate to the Metrics Summary from the Plan & Usage page through the Top Custom Metrics table" style="width:90%;" >}} diff --git a/content/en/metrics/guide/rate-limit.md b/content/en/metrics/guide/rate-limit.md index d3489a7a65422..b6bcb075118b0 100644 --- a/content/en/metrics/guide/rate-limit.md +++ b/content/en/metrics/guide/rate-limit.md @@ -28,7 +28,7 @@ This guide explains: When Datadog notices a cardinality increase, before any rate limits are applied, a warning [event][2] is created. If the metric cardinality continues to increase, a rate limit might be applied. If the metric is rate limited, a second event is generated stating a rate limit has been placed. View these events in the [Event Explorer][3]. -
        Datadog does not send a notification for every subsequent rate-limiting event. As a best practice, build an Event Monitor to send alerts when metrics are rate-limited in the future.
        +
        Datadog does not send a notification for every subsequent rate-limiting event. As a best practice, build an Event Monitor to send alerts when metrics are rate-limited in the future.
        ## Monitor rate limit events @@ -51,7 +51,7 @@ For more information, see the [Best Practices for Custom Metrics Governance][4] ## Submit a request to remove the rate limit -
        Only a Datadog Admin can request a removal of a metric rate limit. If you are not an Admin, make sure to include an Admin on the support ticket so they can confirm the request.
        +
        Only a Datadog Admin can request a removal of a metric rate limit. If you are not an Admin, make sure to include an Admin on the support ticket so they can confirm the request.
        After making the changes to remove the unbounded tags, submit a request to [Datadog Support][5] to remove the rate limit. In your request, provide the following information: - Name of the rate-limited metric diff --git a/content/en/mobile/guide/configure-mobile-device-for-on-call.md b/content/en/mobile/guide/configure-mobile-device-for-on-call.md index 4556920f715f8..e33e62b3a196a 100644 --- a/content/en/mobile/guide/configure-mobile-device-for-on-call.md +++ b/content/en/mobile/guide/configure-mobile-device-for-on-call.md @@ -98,7 +98,7 @@ You can override your device's system volume and Do Not Disturb mode for both pu 6. Test the setup of your critical push notification by tapping **Test push notifications**. -
        +
        On Android, the Datadog mobile app cannot bypass system volume or Do Not Disturb settings when used within a Work Profile. As a workaround, install the Datadog mobile app on your personal profile.
        diff --git a/content/en/mobile/push_notification.md b/content/en/mobile/push_notification.md index 5c87990aefcc9..d7da28c4aa268 100644 --- a/content/en/mobile/push_notification.md +++ b/content/en/mobile/push_notification.md @@ -13,7 +13,7 @@ further_reading: text: "Workflow Automation Documentation" --- {{< site-region region="gov" >}} -
        Only Incident Management push notifications are supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +
        Only Incident Management push notifications are supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{< /site-region >}} Receive mobile push notifications for [on-call alerts](#circumvent-mute-and-Do-Not-Disturb-mode-for-On-Call), [incidents](#incident-notifications), and [workflow automation updates](#workflow-automation-notifications), so you can stay informed in real time from the Datadog mobile app. @@ -59,7 +59,7 @@ For more information, see the [guide on setting up your mobile device for On-Cal ### Critical push notifications {{< site-region region="gov" >}} -
        On-Call is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +
        On-Call is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{< /site-region >}}
        Critical push notifications are only available for On-Call. If you are setting up On-Call on the Datadog mobile app for the first time, an onboarding flow takes care of notification settings and permissions. @@ -104,7 +104,7 @@ Critical push notifications are only available for On-Call. If you are setting u 6. Test the setup of your critical push notification by tapping **Test push notifications**. -
        +
        On Android, the Datadog mobile app cannot bypass system volume or Do Not Disturb settings when used within a Work Profile. As a workaround, install the Datadog mobile app on your personal profile.
        @@ -129,7 +129,7 @@ By default if you have push notifications enabled and are assigned as a commande ## Workflow automation notifications {{< site-region region="gov" >}} -
        Workflow automation is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +
        Workflow automation is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{< /site-region >}} Create [workflow automations][3] that send mobile push notifications. diff --git a/content/en/monitors/guide/custom_schedules.md b/content/en/monitors/guide/custom_schedules.md index 5f837589c735b..37a3fc7c1cb15 100644 --- a/content/en/monitors/guide/custom_schedules.md +++ b/content/en/monitors/guide/custom_schedules.md @@ -26,7 +26,7 @@ Monitor Custom Schedules are supported on events, logs, and metrics monitors wit Click **Add Custom Schedule** to configure your evaluation frequency. -
        After a custom schedule has been enabled on a monitor, the schedule cannot be disabled. Custom schedules can only be added or removed during monitor creation. +
        After a custom schedule has been enabled on a monitor, the schedule cannot be disabled. Custom schedules can only be added or removed during monitor creation.
        {{< tabs >}} diff --git a/content/en/monitors/notify/_index.md b/content/en/monitors/notify/_index.md index 7eee4a06f293b..c6849a95da9e3 100644 --- a/content/en/monitors/notify/_index.md +++ b/content/en/monitors/notify/_index.md @@ -111,7 +111,7 @@ When an incident is created from a monitor, the incident's [field values][13] ar Monitor notifications include content such as the monitor's query, the @-mentions used, metric snapshots (for metric monitors), and links back to relevant pages in Datadog. You have the option to choose which content you would like to include or exclude from notifications for individual monitors. -
        Distribution metrics with percentile aggregators (such as `p50`, `p75`, `p95`, or `p99`) do not generate a snapshot graph in notifications.
        +
        Distribution metrics with percentile aggregators (such as `p50`, `p75`, `p95`, or `p99`) do not generate a snapshot graph in notifications.
        {{< img src="monitors/notifications/monitor_notification_presets.png" alt="Set a monitor preset" style="width:70%;" >}} diff --git a/content/en/monitors/notify/notification_rules.md b/content/en/monitors/notify/notification_rules.md index 82f85ab5ec770..c356fb842fe26 100644 --- a/content/en/monitors/notify/notification_rules.md +++ b/content/en/monitors/notify/notification_rules.md @@ -21,7 +21,7 @@ Monitor notification rules are predefined sets of conditions that automate the p ## Creating notification rules -
        You must have the monitor_config_policy_write permission to create a rule.
        +
        You must have the monitor_config_policy_write permission to create a rule.
        1. Navigate to [**Monitors > Settings > Notification Rules**][1]. 1. Click **New Rule**. diff --git a/content/en/monitors/settings/_index.md b/content/en/monitors/settings/_index.md index 13c841dfe3890..c9ad8ea3ae257 100644 --- a/content/en/monitors/settings/_index.md +++ b/content/en/monitors/settings/_index.md @@ -43,7 +43,7 @@ The setting applies to **all** Monitor alert notifications, as it's an org-wide Monitor tag policies allow you to enforce data validation on tags and tag values on your Datadog monitors. This ensures that alerts are sent to the correct downstream systems and workflows for triage and processing. -
        After set up, tag policies apply to all Datadog monitors
        +
After setup, tag policies apply to all Datadog monitors:
        - To create a new monitor, it must adhere to your organization's tag policies. - Existing monitors that violate your organization's tag policies continue to provide alerts and notifications, but must be updated to match the tag policies before you can modify other settings. diff --git a/content/en/monitors/status/status_page.md b/content/en/monitors/status/status_page.md index 9a215a01ca5ff..585bc87b426f7 100644 --- a/content/en/monitors/status/status_page.md +++ b/content/en/monitors/status/status_page.md @@ -19,7 +19,7 @@ further_reading: text: "Quickly get rich, actionable context for alerts with Datadog's new Monitor Status page" --- -
        The provisional status page has limited support for monitors and their features. For more details, see Restrictions of provisional status page.

        If you are using the legacy status page, see the Status Page (Legacy) documentation
        +
        The provisional status page has limited support for monitors and their features. For more details, see Restrictions of provisional status page.

If you are using the legacy status page, see the Status Page (Legacy) documentation.
        ## Overview diff --git a/content/en/monitors/types/host.md b/content/en/monitors/types/host.md index 02d4056ba7727..f69dd5f006c02 100644 --- a/content/en/monitors/types/host.md +++ b/content/en/monitors/types/host.md @@ -25,7 +25,7 @@ Infrastructure monitoring provides visibility into your entire IT environment, i Every Datadog Agent reports a service check called `datadog.agent.up` with the status `OK`. You can monitor this check across one or more hosts by using a host monitor. -
        AIX Agents do not report the datadog.agent.up service check. You can use the metric datadog.agent.running to monitor the uptime of an AIX Agent. The metric emits a value of 1 if the Agent is reporting to Datadog.
        +
        AIX Agents do not report the datadog.agent.up service check. You can use the metric datadog.agent.running to monitor the uptime of an AIX Agent. The metric emits a value of 1 if the Agent is reporting to Datadog.
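For example, a sketch of a metric monitor query for this approach; the host tag is hypothetical:

```text
avg(last_5m):avg:datadog.agent.running{host:my-aix-host} < 1
```

Because a stopped Agent stops submitting the metric entirely, pair this threshold with a notify-no-data condition so the monitor also alerts when the metric goes missing.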
        ## Monitor creation diff --git a/content/en/monitors/types/process.md b/content/en/monitors/types/process.md index 4010052aeac93..d2aa14af2fa79 100644 --- a/content/en/monitors/types/process.md +++ b/content/en/monitors/types/process.md @@ -19,7 +19,7 @@ further_reading: text: "Monitor processes running on AWS Fargate with Datadog" --- -
        +
        Live Processes and Live Process Monitoring are included in the Enterprise plan. For all other plans, contact your account representative or success@datadoghq.com to request this feature.
        diff --git a/content/en/network_monitoring/devices/setup.md b/content/en/network_monitoring/devices/setup.md index c5d9d2c08e6df..fc18eee51cfa6 100644 --- a/content/en/network_monitoring/devices/setup.md +++ b/content/en/network_monitoring/devices/setup.md @@ -46,7 +46,7 @@ Navigate to the [Agent installation page][1], and install the [Datadog Agent][2] ### High Availability {{< site-region region="gov" >}} -
        High Availability support of the Datadog Agent is in not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        +
High Availability support of the Datadog Agent is not supported for your selected Datadog site ({{< region-param key="dd_site_name" >}}).
        {{< /site-region >}} High Availability (HA) support of the Datadog Agent in Network Device Monitoring allows you to designate an active Agent and a standby Agent, ensuring automatic failover if the active Agent encounters an issue. This setup eliminates the Agent as a single point of failure, maintaining continuous monitoring during unexpected incidents or planned maintenance, such as OS updates and Agent upgrades. diff --git a/content/en/network_monitoring/netflow/_index.md b/content/en/network_monitoring/netflow/_index.md index 269aa8d85d567..8f862623db584 100644 --- a/content/en/network_monitoring/netflow/_index.md +++ b/content/en/network_monitoring/netflow/_index.md @@ -225,7 +225,7 @@ To visualize the raw bytes/packets (sampled) sent by your devices, you can query NetFlow data is retained for 30 days by default, with options for 15, 30, 60, and 90 day retention. -
        To retain NetFlow data for longer periods of time, contact your account representative.
        +
        To retain NetFlow data for longer periods of time, contact your account representative.
        ## Troubleshooting diff --git a/content/en/network_monitoring/network_path/setup.md b/content/en/network_monitoring/network_path/setup.md index 40561f4b23f90..fe3855e819ba9 100644 --- a/content/en/network_monitoring/network_path/setup.md +++ b/content/en/network_monitoring/network_path/setup.md @@ -307,7 +307,7 @@ spec: Configure network traffic paths to allow the Agent to automatically discover and monitor network paths based on actual network traffic, eliminating the need to manually configure individual endpoints. See [exclude CIDR ranges](#exclude-cidr-ranges) to filter specific network ranges. -
        Enabling Network Path to automatically detect paths can generate a significant number of logs, particularly when monitoring network paths across a large number of hosts.
        +
        Enabling Network Path to automatically detect paths can generate a significant number of logs, particularly when monitoring network paths across a large number of hosts.
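As a sketch, traffic-based path discovery is turned on in the Agent configuration along these lines (assuming Network Performance Monitoring is already enabled; exact keys can vary by Agent version):

```yaml
# datadog.yaml -- sketch; assumes NPM is already enabled
network_path:
  connections_monitoring:
    enabled: true
```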
        {{< tabs >}} diff --git a/content/en/notebooks/_index.md b/content/en/notebooks/_index.md index 22520a04fded7..4d3c5e9279812 100644 --- a/content/en/notebooks/_index.md +++ b/content/en/notebooks/_index.md @@ -180,7 +180,7 @@ At the top of the notebook, you can see avatar images of all of the users curren Notebooks support template variables. Dynamically scope visualizations by adding and selecting template variable values. For more information, see [Template Variables][5]. -
        Some Analysis features have limited or no support for template variables. For more information, see Template Variable Support in Analysis Notebooks.
        +
        Some Analysis features have limited or no support for template variables. For more information, see Template Variable Support in Analysis Notebooks.
        ### Time controls diff --git a/content/en/notebooks/advanced_analysis/_index.md b/content/en/notebooks/advanced_analysis/_index.md index e238a201085c5..81aab19aa78c4 100644 --- a/content/en/notebooks/advanced_analysis/_index.md +++ b/content/en/notebooks/advanced_analysis/_index.md @@ -14,7 +14,7 @@ further_reading: --- {{% site-region region="gov" %}} -
        +
        Notebooks Advanced Analysis is not available in the Datadog site ({{< region-param key="dd_site_name" >}}).
        {{% /site-region %}} diff --git a/content/en/observability_pipelines/_index.md b/content/en/observability_pipelines/_index.md index ff1ab2b5d8f10..7fcac14bb47bb 100644 --- a/content/en/observability_pipelines/_index.md +++ b/content/en/observability_pipelines/_index.md @@ -50,7 +50,7 @@ further_reading: --- {{< site-region region="gov" >}} -
        Observability Pipelines is not available on the US1-FED Datadog site.
        +
        Observability Pipelines is not available on the US1-FED Datadog site.
        {{< /site-region >}} ## Overview diff --git a/content/en/observability_pipelines/advanced_configurations.md b/content/en/observability_pipelines/advanced_configurations.md index 3b1f3f7ce704e..1fa033c520d5d 100644 --- a/content/en/observability_pipelines/advanced_configurations.md +++ b/content/en/observability_pipelines/advanced_configurations.md @@ -30,7 +30,7 @@ This document explains [bootstrapping](#bootstrap-options) for the Observability ## Bootstrap Options -
        All configuration file paths specified in the pipeline need to be under /DD_OP_DATA_DIR/config. +
        All configuration file paths specified in the pipeline need to be under /DD_OP_DATA_DIR/config. Modifying files under that location while OPW is running might have adverse effects.
        diff --git a/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md b/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md index 19da2e10ac130..f398f7dd36319 100644 --- a/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md +++ b/content/en/observability_pipelines/best_practices_for_scaling_observability_pipelines.md @@ -4,7 +4,7 @@ title: Best Practices for Scaling Observability Pipelines --- {{< site-region region="gov" >}} -
        Observability Pipelines is not available on the US1-FED Datadog site.
        +
        Observability Pipelines is not available on the US1-FED Datadog site.
        {{< /site-region >}}
        diff --git a/content/en/observability_pipelines/destinations/google_cloud_storage.md b/content/en/observability_pipelines/destinations/google_cloud_storage.md index b5600d01c31c9..e097243bb5673 100644 --- a/content/en/observability_pipelines/destinations/google_cloud_storage.md +++ b/content/en/observability_pipelines/destinations/google_cloud_storage.md @@ -3,7 +3,7 @@ title: Google Cloud Storage Destination disable_toc: false --- -
        For Worker versions 2.7 and later, the Google Cloud destination supports uniform bucket-level access. Google recommends using uniform bucket-level access.
        For Worker version older than 2.7, only Access Control Lists is supported.
        +
        For Worker versions 2.7 and later, the Google Cloud destination supports uniform bucket-level access. Google recommends using uniform bucket-level access.
For Worker versions older than 2.7, only Access Control Lists are supported.
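If your Workers are all on 2.7 or later, an existing bucket can be switched to uniform bucket-level access with the gcloud CLI; the bucket name below is hypothetical:

```shell
# Enable uniform bucket-level access on the archive bucket (hypothetical name)
gcloud storage buckets update gs://my-op-archive-bucket --uniform-bucket-level-access
```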
        Use the Google Cloud Storage destination to send your logs to a Google Cloud Storage bucket. If you want to send logs to Google Cloud Storage for [archiving][1] and [rehydration][2], you must [configure Log Archives](#configure-log-archives). If you do not want to rehydrate logs in Datadog, skip to [Set up the destination for your pipeline](#set-up-the-destinations). diff --git a/content/en/observability_pipelines/install_the_worker/_index.md b/content/en/observability_pipelines/install_the_worker/_index.md index cbf3183ab6888..e3ec2cde6f651 100644 --- a/content/en/observability_pipelines/install_the_worker/_index.md +++ b/content/en/observability_pipelines/install_the_worker/_index.md @@ -46,7 +46,7 @@ If you had set up the pipeline components using the [API][6] or Terraform, to ge {{% /tab %}} {{% tab "Linux" %}} -
        For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
        +
        For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
        Follow the steps below if you want to use the one-line installation script to install the Worker. Otherwise, see [Manually install the Worker on Linux](#manually-install-the-worker-on-linux). @@ -121,7 +121,7 @@ See [Update Existing Pipelines][1] if you want to make changes to your pipeline' {{% /tab %}} {{% tab "RPM" %}} -
        For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
        +
        For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
        1. Set up the Datadog `rpm` repo on your system with the below command.
        **Note**: If you are running RHEL 8.1 or CentOS 8.1, use `repo_gpgcheck=0` instead of `repo_gpgcheck=1` in the configuration below. ```shell diff --git a/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md b/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md index 1ad97f583096d..c40013aa7c520 100644 --- a/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md +++ b/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md @@ -15,7 +15,7 @@ further_reading: text: "Learn more about rehydrating log archives" --- -
        The Observability Pipelines Datadog Archives destination is in beta.
        +
        The Observability Pipelines Datadog Archives destination is in beta.
        ## Overview @@ -194,7 +194,7 @@ See the [Log Archives documentation][6] for additional information. You can configure the `datadog_archives` destination using the [configuration file](#configuration-file) or the [pipeline builder UI](#configuration-file). -
        If the Worker is ingesting logs that are not coming from the Datadog Agent and are routed to the Datadog Archives destination, those logs are not tagged with reserved attributes. This means that you lose Datadog telemetry and the benefits of unified service tagging. For example, say your syslogs are sent to datadog_archives and those logs have the status tagged as severity instead of the reserved attribute of status and the host tagged as hostname instead of the reserved attribute host. When these logs are rehydrated in Datadog, the status for the logs are all set to info and none of the logs will have a hostname tag.
        +
If the Worker ingests logs that do not come from the Datadog Agent and routes them to the Datadog Archives destination, those logs are not tagged with reserved attributes. This means you lose Datadog telemetry and the benefits of unified service tagging. For example, suppose your syslogs are sent to datadog_archives with the status tagged as severity instead of the reserved attribute status, and the host tagged as hostname instead of the reserved attribute host. When these logs are rehydrated in Datadog, the status of every log is set to info and none of the logs have a hostname tag.
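One way to avoid this is to move such attributes onto their reserved equivalents before the logs reach the destination. A minimal sketch using a remap transform in the Worker configuration file; the source name and attribute names are assumptions based on the example above:

```yaml
transforms:
  normalize_reserved_attributes:
    type: remap
    inputs:
      - syslog_in            # hypothetical syslog source
    source: |
      # Move non-reserved attributes onto Datadog reserved attributes
      .status = del(.severity)
      .host = del(.hostname)
```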
        ### Configuration file diff --git a/content/en/observability_pipelines/legacy/setup/datadog.md b/content/en/observability_pipelines/legacy/setup/datadog.md index 3adfcc01b6686..ed7ffd8287110 100644 --- a/content/en/observability_pipelines/legacy/setup/datadog.md +++ b/content/en/observability_pipelines/legacy/setup/datadog.md @@ -108,8 +108,8 @@ In order to run the Worker in your AWS account, you need administrative access t {{% /tab %}} {{% tab "CloudFormation" %}} -
        CloudFormation installs only support Remote Configuration.
        -
        Only use CloudFormation installs for non-production-level workloads.
        +
        CloudFormation installs only support Remote Configuration.
        +
        Only use CloudFormation installs for non-production-level workloads.
        In order to run the Worker in your AWS account, you need administrative access to that account. Collect the following pieces of information to run the Worker instances: * The VPC ID your instances will run in. @@ -301,7 +301,7 @@ The Observability Pipelines Worker Docker image is published to Docker Hub [here {{% /tab %}} {{% tab "CloudFormation" %}} -
        Only use CloudFormation installs for non-production-level workloads.
        +
        Only use CloudFormation installs for non-production-level workloads.
        To install the Worker in your AWS Account, use the CloudFormation template to create a Stack: @@ -405,7 +405,7 @@ An NLB is provisioned by the Terraform module, and configured to point at the in {{% /tab %}} {{% tab "CloudFormation" %}} -
        Only use CloudFormation installs for non-production-level workloads.
        +
        Only use CloudFormation installs for non-production-level workloads.
        An NLB is provisioned by the CloudFormation template, and is configured to point at the AutoScaling Group. Its DNS address is returned in the `LoadBalancerDNS` CloudFormation output. {{% /tab %}} @@ -442,7 +442,7 @@ By default, a 288GB EBS drive is allocated to each instance, and the sample conf {{% /tab %}} {{% tab "CloudFormation" %}} -
        EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
        +
        EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
        By default, a 288GB EBS drive is allocated to each instance, and is auto-mounted and formatted upon instance boot. {{% /tab %}} diff --git a/content/en/observability_pipelines/legacy/setup/splunk.md b/content/en/observability_pipelines/legacy/setup/splunk.md index 4b9f454c45c73..dd17e2e2ecebb 100644 --- a/content/en/observability_pipelines/legacy/setup/splunk.md +++ b/content/en/observability_pipelines/legacy/setup/splunk.md @@ -108,8 +108,8 @@ In order to run the Worker in your AWS account, you need administrative access t {{% /tab %}} {{% tab "CloudFormation" %}} -
        CloudFormation installs only support Remote Configuration at this time.
        -
        Only use CloudFormation installs for non-production-level workloads.
        +
        CloudFormation installs only support Remote Configuration at this time.
        +
        Only use CloudFormation installs for non-production-level workloads.
        In order to run the Worker in your AWS account, you need administrative access to that account. Collect the following pieces of information to run the Worker instances: * The VPC ID your instances will run in. @@ -425,7 +425,7 @@ EOT {{% /tab %}} {{% tab "CloudFormation" %}} -
        Only use CloudFormation installs for non-production-level workloads.
        +
        Only use CloudFormation installs for non-production-level workloads.
        To install the Worker in your AWS Account, use the CloudFormation template to create a Stack: @@ -531,7 +531,7 @@ An NLB is provisioned by the Terraform module, and provisioned to point at the i {{% /tab %}} {{% tab "CloudFormation" %}} -
        Only use CloudFormation installs for non-production-level workloads.
        +
        Only use CloudFormation installs for non-production-level workloads.
        An NLB is provisioned by the CloudFormation template, and is configured to point at the AutoScaling Group. Its DNS address is returned in the `LoadBalancerDNS` CloudFormation output. {{% /tab %}} @@ -568,7 +568,7 @@ By default, a 288GB EBS drive is allocated to each instance, and the sample conf {{% /tab %}} {{% tab "CloudFormation" %}} -
        EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
        +
        EBS drives created by this CloudFormation template have their lifecycle tied to the instance they are created with. This leads to data loss if an instance is terminated, for example by the AutoScaling Group. For this reason, only use CloudFormation installs for non-production-level workloads.
        By default, a 288GB EBS drive is allocated to each instance, and is auto-mounted and formatted upon instance boot. {{% /tab %}} diff --git a/content/en/observability_pipelines/performance.md b/content/en/observability_pipelines/performance.md index fcb50107ff9fb..4ea6c5a5bc252 100644 --- a/content/en/observability_pipelines/performance.md +++ b/content/en/observability_pipelines/performance.md @@ -16,7 +16,7 @@ further_reading: text: "Destinations" --- -
        In-memory and disk buffering options for destinations are in Preview. Contact your account manager to request access.
        +
        In-memory and disk buffering options for destinations are in Preview. Contact your account manager to request access.
        ## Overview diff --git a/content/en/observability_pipelines/processors/_index.md b/content/en/observability_pipelines/processors/_index.md index ea8d4040e2207..c0155c0053700 100644 --- a/content/en/observability_pipelines/processors/_index.md +++ b/content/en/observability_pipelines/processors/_index.md @@ -19,7 +19,7 @@ Select a processor in the left navigation menu to see more information about it. ## Processor groups -
        Configuring a pipeline with processor groups is only available for Worker versions 2.7 and later.
        +
        Configuring a pipeline with processor groups is only available for Worker versions 2.7 and later.
        {{< img src="observability_pipelines/processors/processor_groups.png" alt="Your image description" style="width:100%;" >}} diff --git a/content/en/observability_pipelines/set_up_pipelines/_index.md b/content/en/observability_pipelines/set_up_pipelines/_index.md index f5afc8dd011b7..1472ae82771ad 100644 --- a/content/en/observability_pipelines/set_up_pipelines/_index.md +++ b/content/en/observability_pipelines/set_up_pipelines/_index.md @@ -98,7 +98,7 @@ After you have set up your pipeline, see [Update Existing Pipelines][11] if you {{% /tab %}} {{% tab "API" %}} -
        Creating pipelines using the Observability Pipelines API is in Preview. Fill out the form to request access.
        +
        Creating pipelines using the Observability Pipelines API is in Preview. Fill out the form to request access.
        1. You can use Observability Pipelines API to [create a pipeline][1]. 1. After creating the pipeline, [install the Worker][2] to send logs through it. @@ -114,7 +114,7 @@ After you have set up your pipeline, see [Update Existing Pipelines][11] if you {{% /tab %}} {{% tab "Terraform" %}} -
        Creating pipelines using Terraform is in Preview. Fill out the form to request access.
        +
        Creating pipelines using Terraform is in Preview. Fill out the form to request access.
        1. You can use the [datadog_observability_pipeline][1] module to create a pipeline using Terraform. 1. After creating the pipeline, [install the Worker][2] to send logs through it. diff --git a/content/en/observability_pipelines/sources/_index.md b/content/en/observability_pipelines/sources/_index.md index 15e0b1c673d39..4c37b3f6fcf5c 100644 --- a/content/en/observability_pipelines/sources/_index.md +++ b/content/en/observability_pipelines/sources/_index.md @@ -81,7 +81,7 @@ Instead of using a self-signed certificate, Datadog recommends the following: If you must use a self-signed certificate because the above approaches are not possible, you can configure your environment to trust the self-signed certificate on the Observability Pipelines Worker host. -
        Datadog does not recommend self-signed certificates. They are less secure and are not appropriate for production or internet-facing use. If you must use self-signed certificates, limit usage to internal testing only.
        +
        Datadog does not recommend self-signed certificates. They are less secure and are not appropriate for production or internet-facing use. If you must use self-signed certificates, limit usage to internal testing only.
        For the Worker host to trust the self-signed certificate: diff --git a/content/en/opentelemetry/ingestion_sampling.md b/content/en/opentelemetry/ingestion_sampling.md index f0c8f13de8f3a..e3de38db30a57 100644 --- a/content/en/opentelemetry/ingestion_sampling.md +++ b/content/en/opentelemetry/ingestion_sampling.md @@ -109,7 +109,7 @@ To configure probabilistic sampling, do one of the following: - Probabilistic sampling will apply to spans originating from both Datadog and OTel tracing libraries. - If you send spans both to the Datadog Agent **and** OTel collector instances, set the same seed between Datadog Agent (`DD_APM_PROBABILISTIC_SAMPLER_HASH_SEED`) and OTel collector (`hash_seed`) to ensure consistent sampling. -
        DD_OTLP_CONFIG_TRACES_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE is deprecated and has been replaced by DD_APM_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE.
        +
        DD_OTLP_CONFIG_TRACES_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE is deprecated and has been replaced by DD_APM_PROBABILISTIC_SAMPLER_SAMPLING_PERCENTAGE.
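On the collector side, a minimal sketch of the probabilistic sampler wiring; the percentage is illustrative, and `hash_seed` should match `DD_APM_PROBABILISTIC_SAMPLER_HASH_SEED` on any Datadog Agents sampling the same traffic:

```yaml
processors:
  probabilistic_sampler:
    sampling_percentage: 20   # keep roughly 20% of traces (illustrative)
    hash_seed: 22             # must match the Agent's hash seed
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [datadog]
```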
        #### Considerations diff --git a/content/en/opentelemetry/instrument/instrumentation_libraries.md b/content/en/opentelemetry/instrument/instrumentation_libraries.md index 7453d1e07ca46..3109cae019753 100644 --- a/content/en/opentelemetry/instrument/instrumentation_libraries.md +++ b/content/en/opentelemetry/instrument/instrumentation_libraries.md @@ -59,7 +59,7 @@ Datadog SDKs do not support OpenTelemetry Metrics and Logs APIs. To use OpenTele 4. The Datadog SDK for Java also accepts select individual instrumentation JARs produced by OpenTelemetry's [opentelemetry-java-instrumentation][9] build, for example the [R2DBC instrumentation JAR][11]. -
        +
        OpenTelemetry incubator APIs are not supported.
        @@ -95,7 +95,7 @@ mvn spring-boot:run -Dstart-class=com.baeldung.pagination.PaginationApplication Open `http://127.0.0.1:8080/products` to exercise the product query. With this setup, you are using OpenTelemetry's instrumentation to ensure full observability for R2DBC queries. -
        +
        Versions 2.6.0-alpha and later of these OpenTelemetry instrumentations are not supported by the Datadog Java SDK.
        diff --git a/content/en/opentelemetry/integrations/_index.md b/content/en/opentelemetry/integrations/_index.md index 2472cb6570ec9..5906c4791ffd4 100644 --- a/content/en/opentelemetry/integrations/_index.md +++ b/content/en/opentelemetry/integrations/_index.md @@ -22,7 +22,7 @@ Datadog collects metrics from supported OpenTelemetry receivers at no extra cost For example, the [`dockerstatsreceiver`][15] `metadata.yaml` file lists metrics that you can collect at no extra cost. -
        Ensure that you configure receivers according to OpenTelemetry receiver documentation. Incorrectly configured receivers may cause metrics to be classified as custom, resulting in additional charges.
        +
        Ensure that you configure receivers according to OpenTelemetry receiver documentation. Incorrectly configured receivers may cause metrics to be classified as custom, resulting in additional charges.
        ## Datadog-supported OpenTelemetry integrations diff --git a/content/en/opentelemetry/integrations/datadog_extension.md b/content/en/opentelemetry/integrations/datadog_extension.md index 8a34e0d2ae978..c70558e76fef8 100644 --- a/content/en/opentelemetry/integrations/datadog_extension.md +++ b/content/en/opentelemetry/integrations/datadog_extension.md @@ -89,7 +89,7 @@ service: | `timeout` | Timeout for HTTP requests | `30s` | | `tls.insecure_skip_verify` | Skip TLS certificate verification | `false` | -
        +
        Hostname Matching: If you specify a custom hostname in the Datadog Extension, it must match the hostname value in the Datadog Exporter configuration. The Datadog Extension does not have access to pipeline telemetry and cannot infer hostnames from incoming spans. It only obtains hostnames from system/cloud provider APIs or manual configuration. If telemetry has different hostname attributes than the hostname reported by the extension, the telemetry will not be correlated to the correct host, and you may see duplicate hosts in Datadog.
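A sketch of a matching configuration, with the same hypothetical hostname set on both sides (the `api` block follows the exporter's convention and is assumed here for the extension):

```yaml
extensions:
  datadog:
    api:
      key: ${env:DD_API_KEY}
    hostname: my-gateway-host   # hypothetical; must match the exporter below
exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
    hostname: my-gateway-host   # same value as in the extension
```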
        diff --git a/content/en/opentelemetry/integrations/kafka_metrics.md b/content/en/opentelemetry/integrations/kafka_metrics.md index ecb0686aeb0c0..a75859cadf7a8 100644 --- a/content/en/opentelemetry/integrations/kafka_metrics.md +++ b/content/en/opentelemetry/integrations/kafka_metrics.md @@ -6,7 +6,7 @@ further_reading: text: "Setting Up the OpenTelemetry Collector" --- -
        +
OTel Kafka Metrics Remapping is in public alpha. It is available in Collector versions 0.93.0 and later. If you have feedback, reach out to your account team to provide your input.
        diff --git a/content/en/opentelemetry/integrations/runtime_metrics/_index.md b/content/en/opentelemetry/integrations/runtime_metrics/_index.md index 26bbfe9b1186f..287427d440088 100644 --- a/content/en/opentelemetry/integrations/runtime_metrics/_index.md +++ b/content/en/opentelemetry/integrations/runtime_metrics/_index.md @@ -69,7 +69,7 @@ OpenTelemetry Go applications are [instrumented manually][3]. To enable runtime {{% tab ".NET" %}} -
        The minimum supported version of the .NET OpenTelemetry SDK is 1.5.0
        +
The minimum supported version of the .NET OpenTelemetry SDK is 1.5.0.
        #### Automatic instrumentation @@ -120,7 +120,7 @@ The OpenTelemetry runtime metrics have the following prefixes based on their sou The following tables list the Datadog runtime metrics that are supported through OpenTelemetry mapping. "N/A" indicates that there is no OpenTelemetry equivalent metric available. -
        OpenTelemetry runtime metrics are mapped to Datadog by metric name. Do not rename host metrics for OpenTelemetry runtime metrics as this breaks the mapping.
        +
        OpenTelemetry runtime metrics are mapped to Datadog by metric name. Do not rename host metrics for OpenTelemetry runtime metrics as this breaks the mapping.
        [100]: /opentelemetry/setup/collector_exporter/ [101]: /opentelemetry/setup/otlp_ingest_in_the_agent diff --git a/content/en/opentelemetry/migrate/collector_0_95_0.md b/content/en/opentelemetry/migrate/collector_0_95_0.md index 99afe791af8b5..743b8382862b5 100644 --- a/content/en/opentelemetry/migrate/collector_0_95_0.md +++ b/content/en/opentelemetry/migrate/collector_0_95_0.md @@ -16,7 +16,7 @@ To continue receiving Trace Metrics, configure the Datadog Connector in the Open ## Migrate to OpenTelemetry Collector version 0.95.0+ -
        To continue receiving Trace Metrics, you must configure the Datadog Connector as a part of your upgrade to OpenTelemetry Collector version 0.95.0+. Upgrading without configuring the Datadog Connector might also result in difficulties viewing the APM Traces page within the application. Monitors and dashboards based on the affected metrics might also be impacted.
        +
        To continue receiving Trace Metrics, you must configure the Datadog Connector as a part of your upgrade to OpenTelemetry Collector version 0.95.0+. Upgrading without configuring the Datadog Connector might also result in difficulties viewing the APM Traces page within the application. Monitors and dashboards based on the affected metrics might also be impacted.
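The upgrade amounts to routing traces through the Datadog Connector so that it can compute trace metrics and feed them into a metrics pipeline. A minimal sketch of the relevant wiring:

```yaml
connectors:
  datadog/connector: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # Traces go both to the connector (for trace metrics) and to Datadog
      exporters: [datadog/connector, datadog]
    metrics:
      # The connector feeds the computed trace metrics into this pipeline
      receivers: [datadog/connector]
      processors: [batch]
      exporters: [datadog]
```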
        Before proceeding with the upgrade to the OTel Collector versions 0.95.0+: - Review the [release notes](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.95.0) to understand the nature of the changes. diff --git a/content/en/opentelemetry/migrate/ddot_collector.md b/content/en/opentelemetry/migrate/ddot_collector.md index 98ffd01fb7db9..77419dfab83fa 100644 --- a/content/en/opentelemetry/migrate/ddot_collector.md +++ b/content/en/opentelemetry/migrate/ddot_collector.md @@ -16,7 +16,7 @@ If you are already using a standalone OpenTelemetry (OTel) Collector for your OT To migrate to the DDOT Collector, you need to install the Datadog Agent and configure your applications to report the telemetry data. -
        +
        The DDOT Collector only supports deployment as a DaemonSet (following the agent deployment pattern), not as a gateway. If you have an existing gateway architecture, you can use the DDOT Collector with the loadbalancingexporter to connect to your existing gateway layer.
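For the gateway case, a sketch of pointing the DDOT Collector at an existing gateway layer through the loadbalancingexporter; the gateway hostname and TLS setting are assumptions:

```yaml
exporters:
  loadbalancing:
    protocol:
      otlp:
        tls:
          insecure: true   # assumption: plaintext traffic inside the cluster
    resolver:
      dns:
        hostname: otel-gateway.example.internal   # hypothetical gateway service
        port: "4317"
```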
        @@ -212,7 +212,7 @@ datadog: ``` 1. (Optional) Enable additional Datadog features: -
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        +
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        {{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}} datadog: ... @@ -228,7 +228,7 @@ datadog: processCollection: true {{< /code-block >}} 1. (Optional) Collect pod labels and use them as tags to attach to metrics, traces, and logs: -
        Custom metrics may impact billing. See the custom metrics billing page for more information.
        +
        Custom metrics may impact billing. See the custom metrics billing page for more information.
        {{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}} datadog: ... diff --git a/content/en/opentelemetry/migrate/migrate_operation_names.md b/content/en/opentelemetry/migrate/migrate_operation_names.md index 270e3d59347fd..514801d8c36a5 100644 --- a/content/en/opentelemetry/migrate/migrate_operation_names.md +++ b/content/en/opentelemetry/migrate/migrate_operation_names.md @@ -15,7 +15,7 @@ When using OpenTelemetry with Datadog, you might see unclear or lengthy operatio Datadog has introduced new logic for generating operation names for OpenTelemetry traces, controlled by the `enable_operation_and_resource_name_logic_v2` feature flag. This new logic improves trace visibility in service pages and standardizes operation naming according to the rules outlined below. -
        +
Breaking Change: When this new logic is active (either because you opted in or because it becomes the default), it is a breaking change for monitors or dashboards that reference operation names based on the old conventions. You must update your monitors and dashboards to use the new naming conventions described in New mapping logic. If you cannot update them yet, you can opt out.
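As an illustration, one way the flag is commonly set on a Datadog Agent host is through the `DD_APM_FEATURES` environment variable; confirm the opt-in mechanism that applies to your deployment before relying on this sketch:

```shell
# Sketch: opt in to the new operation name logic on the Agent
export DD_APM_FEATURES="enable_operation_and_resource_name_logic_v2"
```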
        diff --git a/content/en/opentelemetry/setup/ddot_collector/_index.md b/content/en/opentelemetry/setup/ddot_collector/_index.md index 2cf8042c572aa..a0f345270dad1 100644 --- a/content/en/opentelemetry/setup/ddot_collector/_index.md +++ b/content/en/opentelemetry/setup/ddot_collector/_index.md @@ -9,7 +9,7 @@ further_reading: --- {{< site-region region="gov" >}} -
        The Datadog Distribution of OpenTelemetry Collector (DDOT) is not yet FedRAMP/FIPS compliant.
        +
        The Datadog Distribution of OpenTelemetry Collector (DDOT) is not yet FedRAMP/FIPS compliant.
        • If you require a FedRAMP or FIPS-compliant data collection pipeline, use the FIPS-enabled Datadog Agent.
        • If you are a GovCloud customer whose only requirement is data residency in the GovCloud (US1-FED) data center, you may use the DDOT Collector.
        {{< /site-region >}} diff --git a/content/en/opentelemetry/setup/ddot_collector/custom_components.md b/content/en/opentelemetry/setup/ddot_collector/custom_components.md index 5b5414d095856..b501ad2b56159 100644 --- a/content/en/opentelemetry/setup/ddot_collector/custom_components.md +++ b/content/en/opentelemetry/setup/ddot_collector/custom_components.md @@ -9,7 +9,7 @@ further_reading: --- {{< site-region region="gov" >}} -
        FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
        +
        FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
        {{< /site-region >}} This guide explains how to build a DDOT Collector image with additional OpenTelemetry components not included in the default DDOT Collector. To see a list of components already included in the DDOT Collector by default, see [Included components][1]. diff --git a/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md b/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md index c4f16dbda8e13..3e240c0f9d461 100644 --- a/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md +++ b/content/en/opentelemetry/setup/ddot_collector/install/kubernetes.md @@ -12,7 +12,7 @@ further_reading: --- {{< site-region region="gov" >}} -
        FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
        +
        FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
        {{< /site-region >}} ## Overview @@ -128,7 +128,7 @@ The Datadog Operator automatically binds the OpenTelemetry Collector to ports `4 4. (Optional) Enable additional Datadog features: -
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        +
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        {{< code-block lang="yaml" filename="datadog-agent.yaml" collapsible="true" >}} # Enable Features @@ -205,7 +205,7 @@ If you don't want to expose the port, you can use the Agent service instead: 4. (Optional) Enable additional Datadog features: -
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        +
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        {{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}} datadog: @@ -226,7 +226,7 @@ When enabling additional Datadog features, always use the Datadog or OpenTelemet 5. (Optional) Collect pod labels and use them as tags to attach to metrics, traces, and logs: -
        Custom metrics may impact billing. See the custom metrics billing page for more information.
        +
        Custom metrics may impact billing. See the custom metrics billing page for more information.
        {{< code-block lang="yaml" filename="datadog-values.yaml" collapsible="true" >}} datadog: @@ -501,7 +501,7 @@ data: exporters: [debug, datadog] {{< /code-block >}} -
        The field for Collector config in the ConfigMap must be called otel-config.yaml.
        +
        The field for Collector config in the ConfigMap must be called otel-config.yaml.
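A minimal sketch of such a ConfigMap; the namespace and receiver settings are illustrative, but the `otel-config.yaml` key name is required:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-config-map
  namespace: datadog              # assumption: namespace where the Agent runs
data:
  otel-config.yaml: |             # this key name is required
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
```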
        2. Reference the `otel-agent-config-map` ConfigMap in your `DatadogAgent` resource using `features.otelCollector.conf.configMap` parameter: {{< code-block lang="yaml" filename="datadog-agent.yaml" collapsible="false" >}} diff --git a/content/en/opentelemetry/setup/ddot_collector/install/linux.md b/content/en/opentelemetry/setup/ddot_collector/install/linux.md index 5c4582ef80b64..d4d3e888a2458 100644 --- a/content/en/opentelemetry/setup/ddot_collector/install/linux.md +++ b/content/en/opentelemetry/setup/ddot_collector/install/linux.md @@ -14,7 +14,7 @@ further_reading: {{< /callout >}} {{< site-region region="gov" >}} -
        FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
        +
        FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
        {{< /site-region >}} ## Overview @@ -111,7 +111,7 @@ DDOT automatically binds the OpenTelemetry Collector to ports 4317 (grpc) and 43 ### (Optional) Enable additional Datadog features -
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        +
        Enabling these features may incur additional charges. Review the pricing page and talk to your Customer Success Manager before proceeding.
        For a complete list of available options, refer to the fully commented reference file at `/etc/datadog-agent/datadog.yaml.example` or the sample [`config_template.yaml`][12] file. diff --git a/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md b/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md index dcbd4a95724a7..1ef5022b31b10 100644 --- a/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md +++ b/content/en/opentelemetry/setup/otlp_ingest_in_the_agent.md @@ -33,7 +33,7 @@ To get started, you first [instrument your application][3] with OpenTelemetry SD Read the OpenTelemetry instrumentation documentation to understand how to point your instrumentation to the Agent. The `receiver` section described below follows the [OpenTelemetry Collector OTLP receiver configuration schema][5]. -
        Note: The supported setup is an ingesting Agent deployed on every OpenTelemetry-data generating host. You cannot send OpenTelemetry telemetry from collectors or instrumented apps running one host to an Agent on a different host. But, provided the Agent is local to the collector or SDK instrumented app, you can set up multiple pipelines.
        +
Note: The supported setup is an ingesting Agent deployed on every host that generates OpenTelemetry data. You cannot send OpenTelemetry telemetry from collectors or instrumented apps running on one host to an Agent on a different host. However, provided the Agent is local to the collector or SDK-instrumented app, you can set up multiple pipelines.
        ## Enabling OTLP Ingestion on the Datadog Agent @@ -103,7 +103,7 @@ OTLP logs ingestion on the Datadog Agent is disabled by default so that you don' - Set `DD_LOGS_ENABLED` to true. - Set `DD_OTLP_CONFIG_LOGS_ENABLED` to true. -
        +
        Known Issue: Starting with Agent version 7.61.0, OTLP ingestion pipelines may fail to start in Docker environments, displaying the error: Error running the OTLP ingest pipeline: failed to register process metrics: process does not exist.

        If you are using an affected version, you can use one of these workarounds:

        1. Set the environment variable HOST_PROC to /proc in your Agent Docker container.
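For that workaround, a sketch of a Docker run command with the variable set; the image tag and OTLP settings are illustrative:

```shell
docker run -d --name datadog-agent \
  -e DD_API_KEY="${DD_API_KEY}" \
  -e DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_GRPC_ENDPOINT=0.0.0.0:4317 \
  -e HOST_PROC=/proc \
  gcr.io/datadoghq/agent:7
```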
        diff --git a/content/en/opentelemetry/troubleshooting.md b/content/en/opentelemetry/troubleshooting.md index 46222c52b1fcd..5afee53da4344 100644 --- a/content/en/opentelemetry/troubleshooting.md +++ b/content/en/opentelemetry/troubleshooting.md @@ -271,7 +271,7 @@ features: name: otel-http ``` -
        When configuring ports 4317 and 4318, you must use the default names otel-grpc and otel-http respectively to avoid port conflicts.
        +
        When configuring ports 4317 and 4318, you must use the default names otel-grpc and otel-http respectively to avoid port conflicts.
        ## Further reading diff --git a/content/en/product_analytics/session_replay/browser/_index.md b/content/en/product_analytics/session_replay/browser/_index.md index ef29cb24b579b..5e8c48801dca4 100644 --- a/content/en/product_analytics/session_replay/browser/_index.md +++ b/content/en/product_analytics/session_replay/browser/_index.md @@ -69,13 +69,13 @@ if (user.isAuthenticated) { To stop the Session Replay recording, call `stopSessionReplayRecording()`. -
        When using a version of the RUM Browser SDK older than v5.0.0, Session Replay recording does not begin automatically. Call startSessionReplayRecording() to begin recording.
        +
        When using a version of the RUM Browser SDK older than v5.0.0, Session Replay recording does not begin automatically. Call startSessionReplayRecording() to begin recording.
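A minimal sketch of starting and stopping the recording with the Browser SDK (npm installation assumed):

```javascript
import { datadogRum } from '@datadog/browser-rum';

// On SDK versions older than v5.0.0, recording must be started explicitly
datadogRum.startSessionReplayRecording();

// Stop recording at any point, for example before a sensitive view
datadogRum.stopSessionReplayRecording();
```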
        ## Disable Session Replay To stop session recordings, set `sessionReplaySampleRate` to `0`. This stops collecting data for the [Browser RUM & Session Replay plan][6]. -
        If you're using a version of the RUM Browser SDK previous to v5.0.0, set replaySampleRate to 0.
        +
If you're using a version of the RUM Browser SDK older than v5.0.0, set replaySampleRate to 0.
        ## Playback history diff --git a/content/en/profiler/connect_traces_and_profiles.md b/content/en/profiler/connect_traces_and_profiles.md index b299f73eb3ed9..b1c5143cf4246 100644 --- a/content/en/profiler/connect_traces_and_profiles.md +++ b/content/en/profiler/connect_traces_and_profiles.md @@ -40,7 +40,7 @@ try (final Scope scope = tracer.activateSpan(span)) { // mandatory for Datadog c ``` -
        +
Datadog highly recommends using the Datadog profiler instead of Java Flight Recorder (JFR).
        diff --git a/content/en/profiler/enabling/ddprof.md b/content/en/profiler/enabling/ddprof.md index a43cae49ade96..f8c43cf18e907 100644 --- a/content/en/profiler/enabling/ddprof.md +++ b/content/en/profiler/enabling/ddprof.md @@ -15,7 +15,7 @@ further_reading: text: 'Fix problems you encounter while using the profiler' --- -
        +
        ddprof is in beta. Datadog recommends evaluating the profiler in a non-sensitive environment before deploying in production.
        diff --git a/content/en/profiler/enabling/dotnet.md b/content/en/profiler/enabling/dotnet.md index ea0b8b35947ee..a83dadd63306a 100644 --- a/content/en/profiler/enabling/dotnet.md +++ b/content/en/profiler/enabling/dotnet.md @@ -47,7 +47,7 @@ Supported .NET runtimes (64-bit applications) .NET 8
        .NET 9 -
        +
        Note: For containers, more than one core is required. Read the Troubleshooting documentation for more details.
        @@ -75,7 +75,7 @@ The following profiling features are available in the following minimum versions - Continuous Profiler is not supported for AWS Lambda. - Continuous Profiler does not support ARM64. -
        +
        Note: Unlike APM, Continuous Profiler is not activated by default when the APM package is installed. You must explicitly enable it for the applications you want to profile.
        @@ -85,7 +85,7 @@ Ensure Datadog Agent v6+ is installed and running. Datadog recommends using [Dat Otherwise, install the profiler using the following steps, depending on your operating system. -
        +
        Note: Datadog's automatic instrumentation relies on the .NET CLR Profiling API. Since this API allows only one subscriber, run only one APM solution in your application environment.
        @@ -97,7 +97,7 @@ You can install the Datadog .NET Profiler machine-wide so that any services on t {{% tab "Linux with Single Step APM Instrumentation" %}} 1. With [Single Step APM Instrumentation][1], there is nothing else to install. Go to [Enabling the Profiler](#enabling-the-profiler) to see how to activate the profiler for an application. -
        +
        Note: If APM was already manually installed, you must uninstall it by removing the following environment variables:
        - CORECLR_ENABLE_PROFILING
        - CORECLR_PROFILER
        @@ -147,7 +147,7 @@ To install the .NET Profiler machine-wide: {{% tab "NuGet" %}} -
        +
        Note: This installation does not instrument applications running in IIS. For applications running in IIS, follow the Windows machine-wide installation process.
@@ -160,7 +160,7 @@ To install the .NET Profiler per-application:
{{% tab "Azure App Service" %}}

-
        +
        Note: Only Web Apps are supported. Functions are not supported.
@@ -177,7 +177,7 @@ To install the .NET Profiler per-webapp:
## Enabling the Profiler

-
        +
Note: Datadog does not recommend enabling the profiler machine-wide or for all IIS applications. If you have enabled it machine-wide, read the Troubleshooting documentation for information about reducing the overhead associated with enabling the profiler for all system applications.
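For instance, a sketch of enabling the profiler for one Linux service rather than machine-wide; the service name and entry point are hypothetical:

```shell
# Enable the Continuous Profiler for this service only.
export DD_PROFILING_ENABLED=1
export DD_SERVICE=my-web-app    # hypothetical service name
export DD_ENV=production
dotnet MyApp.dll                # hypothetical entry point
```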
@@ -277,7 +277,7 @@ To install the .NET Profiler per-webapp:
net start w3svc
```

-
        +
        Note: Use stop and start commands. A reset or restart does not always work.
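For example, from an elevated command prompt:

```shell
net stop w3svc
net start w3svc
```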
@@ -471,7 +471,7 @@ You can configure the profiler using the following environment variables. Note t
| `DD_PROFILING_HTTP_ENABLED` | Boolean | If set to `true`, enables outgoing HTTP request profiling used in the Timeline user interface. Defaults to `false`. |

-
        +
Note: For IIS applications, you must set environment variables in the Registry (under the `HKLM\System\CurrentControlSet\Services\WAS` and `HKLM\System\CurrentControlSet\Services\W3SVC` nodes) as shown in the Windows Service tab, above. The environment variables are applied to all IIS applications. Starting with IIS 10, you can set environment variables for each IIS application in the `C:\Windows\System32\inetsrv\config\applicationhost.config` file. Read the Microsoft documentation for more details.
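As a sketch, one way to set such a value from an elevated command prompt; `DD_PROFILING_ENABLED=1` is an example variable, and `/f` overwrites any existing `Environment` value, so merge in existing entries first:

```shell
:: Sketch: apply an environment variable to all IIS applications.
:: Warning: /f replaces the existing Environment value; merge entries first.
reg add "HKLM\System\CurrentControlSet\Services\WAS" /v Environment /t REG_MULTI_SZ /d "DD_PROFILING_ENABLED=1" /f
reg add "HKLM\System\CurrentControlSet\Services\W3SVC" /v Environment /t REG_MULTI_SZ /d "DD_PROFILING_ENABLED=1" /f
:: Stop and start the IIS services to pick up the change (avoid iisreset).
net stop /y was
net start w3svc
```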
diff --git a/content/en/profiler/enabling/java.md b/content/en/profiler/enabling/java.md
index d97f240719fa4..7da1724d1a3e3 100644
--- a/content/en/profiler/enabling/java.md
+++ b/content/en/profiler/enabling/java.md
@@ -309,7 +309,7 @@ The allocation engine does not depend on the `/proc/sys/kernel/perf_event_parano
If the Datadog profiler CPU or wallclock engines are enabled, you can collect native stack traces. Native stack traces include things like JVM internals, native libraries used by your application or the JVM, and syscalls.

-
        Native stack traces are not collected by default because usually they do not provide actionable insights and walking native stacks can potentially impact application stability. Test this setting in a non-production environment before you try using it in production.
        +
Native stack traces are not collected by default because they usually do not provide actionable insights and walking native stacks can impact application stability. Test this setting in a non-production environment before you try using it in production.
To enable native stack trace collection, with the understanding that it can destabilize your application, set:

diff --git a/content/en/profiler/profiler_troubleshooting/ddprof.md b/content/en/profiler/profiler_troubleshooting/ddprof.md
index 5a4884e404ed3..79d52b8d28175 100644
--- a/content/en/profiler/profiler_troubleshooting/ddprof.md
+++ b/content/en/profiler/profiler_troubleshooting/ddprof.md
@@ -9,7 +9,7 @@ further_reading:
    text: 'APM Troubleshooting'
---

-
        +
        ddprof is in Preview. Datadog recommends evaluating the profiler in a non-sensitive environment before deploying in production.
diff --git a/content/en/real_user_monitoring/browser/frustration_signals.md b/content/en/real_user_monitoring/browser/frustration_signals.md
index b9f36372663a9..1c907bf102fdc 100644
--- a/content/en/real_user_monitoring/browser/frustration_signals.md
+++ b/content/en/real_user_monitoring/browser/frustration_signals.md
@@ -148,7 +148,7 @@ Frustration signals are generated from mouse clicks, not keyboard strokes.
If a session is live, it is fetching information and may cause the banners to reflect a different number than those in the timeline.

-
        +
        To provide feedback or submit a feature request, contact Datadog Support.
diff --git a/content/en/real_user_monitoring/browser/monitoring_page_performance.md b/content/en/real_user_monitoring/browser/monitoring_page_performance.md
index 475646956ec77..04fe3cde9a3e2 100644
--- a/content/en/real_user_monitoring/browser/monitoring_page_performance.md
+++ b/content/en/real_user_monitoring/browser/monitoring_page_performance.md
@@ -35,7 +35,7 @@ You can access performance telemetry for your views in:
## Event timings and core web vitals

-
        +
Datadog's Core Web Vitals telemetry is available from the `@datadog/browser-rum` package v2.2.0+.
diff --git a/content/en/real_user_monitoring/browser/tracking_user_actions.md b/content/en/real_user_monitoring/browser/tracking_user_actions.md
index d68dbe6d8281f..b740f5a08b001 100644
--- a/content/en/real_user_monitoring/browser/tracking_user_actions.md
+++ b/content/en/real_user_monitoring/browser/tracking_user_actions.md
@@ -73,7 +73,7 @@ For example:
```html
Try it out!
-