diff --git a/content/en/account_management/audit_trail/forwarding_audit_events.md b/content/en/account_management/audit_trail/forwarding_audit_events.md index d92649e90e7..fc2e15ee236 100644 --- a/content/en/account_management/audit_trail/forwarding_audit_events.md +++ b/content/en/account_management/audit_trail/forwarding_audit_events.md @@ -61,13 +61,9 @@ Audit Event Forwarding allows you to send audit events from Datadog to custom de 6. Enter a name for the destination. 7. In the **Configure Destination** section, enter the following details: - - a. The endpoint to which you want to send the logs. The endpoint must start with `https://`. An example endpoint for Elasticsearch: `https://.us-central1.gcp.cloud.es.io`. - - b. The name of the destination index where you want to send the logs. - - c. Optionally, select the index rotation for how often you want to create a new index: `No Rotation`, `Every Hour`, `Every Day`, `Every Week`, or `Every Month`. The default is `No Rotation`. - + 1. The endpoint to which you want to send the logs. The endpoint must start with `https://`. An example endpoint for Elasticsearch: `https://.us-central1.gcp.cloud.es.io`. + 1. The name of the destination index where you want to send the logs. + 1. Optionally, select the index rotation for how often you want to create a new index: `No Rotation`, `Every Hour`, `Every Day`, `Every Week`, or `Every Month`. The default is `No Rotation`. 8. In the **Configure Authentication** section, enter the username and password for your Elasticsearch account. 9. Click **Save**. diff --git a/content/en/developers/guide/query-data-to-a-text-file-step-by-step.md b/content/en/developers/guide/query-data-to-a-text-file-step-by-step.md index 457ec31cd55..256df161d67 100644 --- a/content/en/developers/guide/query-data-to-a-text-file-step-by-step.md +++ b/content/en/developers/guide/query-data-to-a-text-file-step-by-step.md @@ -14,16 +14,11 @@ Prerequisite: Python and `pip` installed on your localhost. Windows users see [I 3. Create a new folder: `mkdir `. 4. Enter the folder: `cd `. 5. Download the script [api_query_data.py][3] to the folder created in step 3 and edit it: - - a. Replace `` and `` with your [Datadog API and app keys][4]. - - b. Replace `system.cpu.idle` with a metric you want to fetch. A list of your metrics is displayed in the [Datadog Metric Summary][5]. - - c. Optionally, replace `*` with a host to filter the data. A list of your hosts is displayed in the [Datadog Infrastructure List][6]. - - d. Optionally, change the time period to collect the data. The current setting is 3600 seconds (one hour). **Note**: If you run this too aggressively, you may reach the [Datadog API limits][7]. - - e. Save your file and confirm its location. + 1. Replace `` and `` with your [Datadog API and app keys][4]. + 1. Replace `system.cpu.idle` with a metric you want to fetch. A list of your metrics is displayed in the [Datadog Metric Summary][5]. + 1. Optionally, replace `*` with a host to filter the data. A list of your hosts is displayed in the [Datadog Infrastructure List][6]. + 1. Optionally, change the time period to collect the data. The current setting is 3600 seconds (one hour). **Note**: If you run this too aggressively, you may reach the [Datadog API limits][7]. + 1. Save your file and confirm its location. 
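+
+    As a quick sanity check of your keys and query before running the script you edited above, you can issue the same metric query directly with `curl` (a sketch: `<YOUR_API_KEY>` and `<YOUR_APP_KEY>` are placeholders, and `api.datadoghq.com` assumes the US1 Datadog site):
+
+    ```shell
+    # Pull the last hour of system.cpu.idle, averaged over all hosts
+    curl -G "https://api.datadoghq.com/api/v1/query" \
+        -H "DD-API-KEY: <YOUR_API_KEY>" \
+        -H "DD-APPLICATION-KEY: <YOUR_APP_KEY>" \
+        --data-urlencode "from=$(($(date +%s) - 3600))" \
+        --data-urlencode "to=$(date +%s)" \
+        --data-urlencode "query=avg:system.cpu.idle{*}"
+    ```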
Once the above is complete: diff --git a/content/en/developers/integrations/_index.md b/content/en/developers/integrations/_index.md index a114f524c01..15679eca932 100644 --- a/content/en/developers/integrations/_index.md +++ b/content/en/developers/integrations/_index.md @@ -84,19 +84,13 @@ Follow these steps to create a new integration with Datadog. 1. **Apply to the Datadog Partner Network.** Once accepted, a member of the Datadog Technology Partner team will reach out to schedule an introductory call. 2. **Request a Datadog sandbox account** for development via the Datadog Partner Network portal. 3. **Start developing your integration** using the Integration Developer Platform: - - a. Define the basic details about your integration. - - b. Define and write your integration code by following the instructions to create one of the following integration types: + 1. Define the basic details about your integration. + 1. Define and write your integration code by following the instructions to create one of the following integration types: - [Agent-based integration][5] - [API-based integration][6] - - c. Specify what type of data your integration queries or submits. - - d. Create a dashboard, and optionally create monitors or security rules. - - e. Fill in the remaining fields: setup and uninstallation instructions, images, support details, and other key details that help describe the value of your integration. - + 1. Specify what type of data your integration queries or submits. + 1. Create a dashboard, and optionally create monitors or security rules. + 1. Fill in the remaining fields: setup and uninstallation instructions, images, support details, and other key details that help describe the value of your integration. 4. **Test your integration** in your Datadog sandbox account. 5. **Submit your integration for review.** 6. **Once approved, your integration is published.** diff --git a/content/en/getting_started/integrations/aws.md b/content/en/getting_started/integrations/aws.md index ddb054b104c..c08b70da816 100644 --- a/content/en/getting_started/integrations/aws.md +++ b/content/en/getting_started/integrations/aws.md @@ -45,86 +45,78 @@ This process can be repeated for as many AWS accounts as necessary, or you can a ## Prerequisites -Before getting started, ensure you have the following prerequisites: - -1. An [AWS][7] account. 
Your AWS user needs the following IAM permissions to successfully run the CloudFormation template: - - * cloudformation:CreateStack - * cloudformation:CreateUploadBucket - * cloudformation:DeleteStack - * cloudformation:DescribeStacks - * cloudformation:DescribeStackEvents - * cloudformation:GetStackPolicy - * cloudformation:GetTemplateSummary - * cloudformation:ListStacks - * cloudformation:ListStackResources - * ec2:DescribeSecurityGroups - * ec2:DescribeSubnets - * ec2:DescribeVpcs - * iam:AttachRolePolicy - * iam:CreatePolicy - * iam:CreateRole - * iam:DeleteRole - * iam:DeleteRolePolicy - * iam:DetachRolePolicy - * iam:GetRole - * iam:GetRolePolicy - * iam:PassRole - * iam:PutRolePolicy - * iam:TagRole - * iam:UpdateAssumeRolePolicy - * kms:Decrypt - * lambda:AddPermission - * lambda:CreateFunction - * lambda:DeleteFunction - * lambda:GetCodeSigningConfig - * lambda:GetFunction - * lambda:GetFunctionCodeSigningConfig - * lambda:GetLayerVersion - * lambda:InvokeFunction - * lambda:PutFunctionConcurrency - * lambda:RemovePermission - * lambda:TagResource - * logs:CreateLogGroup - * logs:DeleteLogGroup - * logs:DescribeLogGroups - * logs:PutRetentionPolicy - * oam:ListSinks - * oam:ListAttachedLinks - * s3:CreateBucket - * s3:DeleteBucket - * s3:DeleteBucketPolicy - * s3:GetEncryptionConfiguration - * s3:GetObject - * s3:GetObjectVersion - * s3:PutBucketPolicy - * s3:PutBucketPublicAccessBlock - * s3:PutEncryptionConfiguration - * s3:PutLifecycleConfiguration - * secretsmanager:CreateSecret - * secretsmanager:DeleteSecret - * secretsmanager:GetSecretValue - * secretsmanager:PutSecretValue - * serverlessrepo:CreateCloudFormationTemplate +Before getting started, ensure you have an [AWS][7] account. Your AWS user needs the following IAM permissions to successfully run the CloudFormation template: + - cloudformation:CreateStack + - cloudformation:CreateUploadBucket + - cloudformation:DeleteStack + - cloudformation:DescribeStacks + - cloudformation:DescribeStackEvents + - cloudformation:GetStackPolicy + - cloudformation:GetTemplateSummary + - cloudformation:ListStacks + - cloudformation:ListStackResources + - ec2:DescribeSecurityGroups + - ec2:DescribeSubnets + - ec2:DescribeVpcs + - iam:AttachRolePolicy + - iam:CreatePolicy + - iam:CreateRole + - iam:DeleteRole + - iam:DeleteRolePolicy + - iam:DetachRolePolicy + - iam:GetRole + - iam:GetRolePolicy + - iam:PassRole + - iam:PutRolePolicy + - iam:TagRole + - iam:UpdateAssumeRolePolicy + - kms:Decrypt + - lambda:AddPermission + - lambda:CreateFunction + - lambda:DeleteFunction + - lambda:GetCodeSigningConfig + - lambda:GetFunction + - lambda:GetFunctionCodeSigningConfig + - lambda:GetLayerVersion + - lambda:InvokeFunction + - lambda:PutFunctionConcurrency + - lambda:RemovePermission + - lambda:TagResource + - logs:CreateLogGroup + - logs:DeleteLogGroup + - logs:DescribeLogGroups + - logs:PutRetentionPolicy + - oam:ListSinks + - oam:ListAttachedLinks + - s3:CreateBucket + - s3:DeleteBucket + - s3:DeleteBucketPolicy + - s3:GetEncryptionConfiguration + - s3:GetObject + - s3:GetObjectVersion + - s3:PutBucketPolicy + - s3:PutBucketPublicAccessBlock + - s3:PutEncryptionConfiguration + - s3:PutLifecycleConfiguration + - secretsmanager:CreateSecret + - secretsmanager:DeleteSecret + - secretsmanager:GetSecretValue + - secretsmanager:PutSecretValue + - serverlessrepo:CreateCloudFormationTemplate ## Setup -2. Go to the [AWS integration configuration page][8] in Datadog and click **Add AWS Account**. - -3. 
Configure the integration's settings under the **Automatically using CloudFormation** option. - a. Select the AWS regions to integrate with. - b. Add your Datadog [API key][9]. - c. Optionally, send logs and other data to Datadog with the [Datadog Forwarder Lambda][1]. - d. Optionally, enable [Cloud Security Misconfigurations][54] to scan your cloud environment, hosts, and containers for misconfigurations and security risks. - -4. Click **Launch CloudFormation Template**. This opens the AWS Console and loads the CloudFormation stack. All the parameters are filled in based on your selections in the prior Datadog form, so you do not need to edit those unless desired. +1. Go to the [AWS integration configuration page][8] in Datadog and click **Add AWS Account**. +1. Configure the integration's settings under the **Automatically using CloudFormation** option. + 1. Select the AWS regions to integrate with. + 1. Add your Datadog [API key][9]. + 1. Optionally, send logs and other data to Datadog with the [Datadog Forwarder Lambda][1]. + 1. Optionally, enable [Cloud Security Misconfigurations][54] to scan your cloud environment, hosts, and containers for misconfigurations and security risks. +1. Click **Launch CloudFormation Template**. This opens the AWS Console and loads the CloudFormation stack. All the parameters are filled in based on your selections in the prior Datadog form, so you do not need to edit those unless desired. **Note:** The `DatadogAppKey` parameter enables the CloudFormation stack to make API calls to Datadog to add and edit the Datadog configuration for this AWS account. The key is automatically generated and tied to your Datadog account. - -5. Check the required boxes from AWS and click **Create stack**. This launches the creation process for the Datadog stack along with three nested stacks. This could take several minutes. Ensure that the stack is successfully created before proceeding. - -6. After the stack is created, go back to the AWS integration tile in Datadog and click **Ready!** - -7. Wait up to 10 minutes for data to start being collected, and then view the out-of-the-box [AWS overview dashboard][12] to see metrics sent by your AWS services and infrastructure: +1. Check the required boxes from AWS and click **Create stack**. This launches the creation process for the Datadog stack along with three nested stacks. This could take several minutes. Ensure that the stack is successfully created before proceeding. +1. After the stack is created, go back to the AWS integration tile in Datadog and click **Ready!** +1. Wait up to 10 minutes for data to start being collected, and then view the out-of-the-box [AWS overview dashboard][12] to see metrics sent by your AWS services and infrastructure: {{< img src="getting_started/integrations/aws-dashboard.png" alt="The AWS overview dashboard in the Datadog account. On the left is the AWS logo and an AWS events graph showing 'No matching entries found'. In the center are graphs related to EBS volumes with numerical data displayed and a heatmap showing consistent data. 
Along the right are graphs related to ELBs showing numerical data as well as a timeseries graph showing spiky data from three sources.">}} ## Configuration diff --git a/content/en/getting_started/integrations/azure.md b/content/en/getting_started/integrations/azure.md index 72c20c6ec57..bf71d554b98 100644 --- a/content/en/getting_started/integrations/azure.md +++ b/content/en/getting_started/integrations/azure.md @@ -131,17 +131,16 @@ Follow these steps to deploy the Datadog Azure integration through [Terraform][2 - App Service Plans - Container Apps -You can also click to enable custom metric collection from [Azure Application Insights][101], and disable the collection of usage metrics. - + You can also click to enable custom metric collection from [Azure Application Insights][101], and disable the collection of usage metrics. 4. Optionally, click the resource collection toggle to disable the collection of configuration information from your Azure resources. 5. Configure log collection: - a. If a log forwarder already exists in the tenant, extend its scope to include any new subscriptions or management groups. - b. If you're creating a new log forwarder: - a. Enter a resource group name to store the log forwarder control plane. - b. Select a control plane subscription for the log-forwarding orchestration (LFO). - c. Select a region for the control plane. - See the [Architecture section][102] of the automated log forwarding guide for more information about this architecture. + - If a log forwarder already exists in the tenant, extend its scope to include any new subscriptions or management groups. + - If you're creating a new log forwarder: + 1. Enter a resource group name to store the log forwarder control plane. + 1. Select a control plane subscription for the log-forwarding orchestration (LFO). + 1. Select a region for the control plane. + See the [Architecture section][102] of the automated log forwarding guide for more information about this architecture. 6. Copy and run the command under **Initialize and apply the Terraform**. [100]: https://app.datadoghq.com/integrations/azure/ diff --git a/content/en/infrastructure/containers/configuration.md b/content/en/infrastructure/containers/configuration.md index 8c96cac1514..274ff0f98b2 100644 --- a/content/en/infrastructure/containers/configuration.md +++ b/content/en/infrastructure/containers/configuration.md @@ -417,8 +417,7 @@ field#status.conditions.HorizontalAbleToScale.status:"False" You can use the `kubernetes_state_core` check to collect custom resource metrics when running the Datadog Cluster Agent. -1. Write defintions for your custom resources and the fields to turn into metrics according to the following format: - +1. Write definitions for your custom resources and the fields to turn into metrics according to the following format: ```yaml #=(...) collectCrMetrics: @@ -456,13 +455,11 @@ You can use the `kubernetes_state_core` check to collect custom resource metrics path: [metadata, generation] ``` - By default, RBAC and API resource names are derived from the kind in groupVersionKind by converting it to lowercase, and adding an "s" suffix (for example, Kind: ENIConfig → eniconfigs). If the Custom Resource Definition (CRD) uses a different plural form, you can override this behavior by specifying the resource field. In the example above, CNINode overrides the default by setting resource: "cninode-pluralized". 
+ By default, RBAC and API resource names are derived from the kind in groupVersionKind by converting it to lowercase, and adding an "s" suffix (for example, Kind: ENIConfig → eniconfigs). If the Custom Resource Definition (CRD) uses a different plural form, you can override this behavior by specifying the resource field. In the example above, CNINode overrides the default by setting resource: "cninode-pluralized". Metric names are produced using the following rules: - - a. No prefix precified: `kubernetes_state_customresource.` - - b. Prefix precified: `kubernetes_state_customresource._` + - No prefix: `kubernetes_state_customresource.` + - Prefix: `kubernetes_state_customresource._` For more details, see [Custom Resource State Metrics][5]. @@ -492,9 +489,9 @@ You can use the `kubernetes_state_core` check to collect custom resource metrics {{% /tab %}} {{% tab "Datadog Operator" %}} -
+
This functionality requires Agent Operator v1.20+. -
+
1. Install the Datadog Operator with an option that grants the Datadog Agent permission to collect custom resources: diff --git a/content/en/integrations/guide/aws-organizations-setup.md b/content/en/integrations/guide/aws-organizations-setup.md index 8cada2cd247..ebf8ed07cf5 100644 --- a/content/en/integrations/guide/aws-organizations-setup.md +++ b/content/en/integrations/guide/aws-organizations-setup.md @@ -60,8 +60,8 @@ Copy the Template URL from the Datadog AWS integration configuration page to use - Select your Datadog APP key on Datadog AWS integration configuration page and use it in the `DatadogAppKey` parameter in the StackSet. - *Optionally:* - a. Enable [Cloud Security Misconfigurations][5] to scan your cloud environment, hosts, and containers for misconfigurations and security risks. - b. Disable metric collection if you do not want to monitor your AWS infrastructure. This is recommended only for [Cloud Cost Management][6] (CCM) or [Cloud Security Misconfigurations][5] specific use cases. + 1. Enable [Cloud Security Misconfigurations][5] to scan your cloud environment, hosts, and containers for misconfigurations and security risks. + 1. Disable metric collection if you do not want to monitor your AWS infrastructure. This is recommended only for [Cloud Cost Management][6] (CCM) or [Cloud Security Misconfigurations][5] specific use cases. 3. **Configure StackSet options** Keep the **Execution configuration** option as `Inactive` so the StackSet performs one operation at a time. diff --git a/content/en/logs/faq/logs_cost_attribution.md b/content/en/logs/faq/logs_cost_attribution.md index 0eb97a8bad7..b509b2a2ed8 100644 --- a/content/en/logs/faq/logs_cost_attribution.md +++ b/content/en/logs/faq/logs_cost_attribution.md @@ -60,9 +60,9 @@ Use a [Category Processor][6] to create a new `team` attribute for your logs. 3. Enter a name for the processor. For example, "Create team attribute". 4. Enter `team` in the **Set target category attribute** field. This creates a `team` attribute. 5. In the **Populate category** section, add a category for each team. For example, to add the tag `team:service_a` to log events that match `service:a` and `env:prod`: - a. Enter `service:a` and `env:prod` in the **All events that match** field. - b. Enter `service_a` in the **Appear under the value name** field. - c. Click **Add**. + 1. Enter `service:a` and `env:prod` in the **All events that match** field. + 1. Enter `service_a` in the **Appear under the value name** field. + 1. Click **Add**. 6. Add the other teams as separate categories. 7. Click **Create**. @@ -109,10 +109,10 @@ Use a [Category Processor][6] to create a new `index_name` attribute for identif 4. Enter **index_name** in the **Set target category attribute** field. This creates an `index_name` attribute. 5. Add a category for each index. For example, if you have an index named `retention-7` for all logs tagged with `env:staging`: {{< img src="logs/faq/logs_cost_attribution/indexes_configuration.png" alt="The indexes list showing the filter query, retention period, and whether online archives is enabled for the retention-30, retention-15, retention-7, and demo indexes" >}} -Then, in the **Populate category** section: - a. Enter `env:staging` in the **All events that match** field. - b. Enter `retention-7` in the **Appear under the value name** field. - c. Click **Add** + Then, in the **Populate category** section: + 1. Enter `env:staging` in the **All events that match** field. + 1. 
Enter `retention-7` in the **Appear under the value name** field. + 1. Click **Add**. 6. Add the other indexes as separate categories. 7. Click **Create**. @@ -127,9 +127,9 @@ Use a [Category Processor][6] to create a new `retention_period` attribute to as 3. Enter a name for the processor. For example, "Create retention_period attribute". 4. Enter `retention_period` in the **Set target category attribute** field. This creates a `retention_period` attribute. 5. Add a category for each retention period. For example, if you have a 7-day retention index named `retention-7`, then in the **Populate category** section: - a. Enter `@index_name:(retention-7)` in the **All events that match** field. - b. Enter `7` in the **Appear under the value name** field. - c. Click **Add** + 1. Enter `@index_name:(retention-7)` in the **All events that match** field. + 1. Enter `7` in the **Appear under the value name** field. + 1. Click **Add**. 6. Add the other retention periods as separate categories. 7. Click **Create**. @@ -164,14 +164,14 @@ Use a [Category Processor][6] to create a new `online_archives` attribute to ind 2. Select **Category Processor** for the processor type. 3. Enter a name for the processor. For example, "Create online_archives attribute". This creates an `online_archives` attribute. 4. In the **Populate category** section, add two categories: -
In the **first category**, the value `true` is assigned to all indexes with Online Archives enabled. For example, if logs in the index named `retention-30` go into Online Archives: - a. Enter `@index_name:(retention-30)` in the **All events that match** field. - b. Enter `true` in the **Appear under the value name** field. - c. Click **Add** -
In the **second category**, the value `false` is assigned to all other indexes. - a. Enter `*` in the **All events that match** field. - b. Enter `false` in the **Appear under the value name** field. - c. Click **Add** + - In the **first category**, the value `true` is assigned to all indexes with Online Archives enabled. For example, if logs in the index named `retention-30` go into Online Archives: + 1. Enter `@index_name:(retention-30)` in the **All events that match** field. + 1. Enter `true` in the **Appear under the value name** field. + 1. Click **Add**. + - In the **second category**, the value `false` is assigned to all other indexes. + 1. Enter `*` in the **All events that match** field. + 1. Enter `false` in the **Appear under the value name** field. + 1. Click **Add**. 5. Click **Create**. {{< img src="logs/faq/logs_cost_attribution/online_archives_attribute.png" alt="The category processor form fill in with data to create a online_archives attribute" style="width:75%" >}} @@ -203,13 +203,13 @@ For the Sensitive Data Scanner, billed usage is based on the volume of logs scan 1. Go to the [Sensitive Data Scanner][8]. 2. In each scanning group: - a. Click **Add Scanning Rule**. - b. Enter `.` in the **Define Regex to match** field to match all logs. - c. Select **Entire Event** in the **Scan the entire event or a portion of it** field. - d. Enter `sds:true` in the **Add tags** field. - e. Leave **Define action on match** on **No action**. - f. Enter a name for the scanning rule. For example, "Create sds tag". - g. Click **Create**. + 1. Click **Add Scanning Rule**. + 1. Enter `.` in the **Define Regex to match** field to match all logs. + 1. Select **Entire Event** in the **Scan the entire event or a portion of it** field. + 1. Enter `sds:true` in the **Add tags** field. + 1. Leave **Define action on match** on **No action**. + 1. Enter a name for the scanning rule. For example, "Create sds tag". + 1. Click **Create**. ## Generate custom logs metrics @@ -296,9 +296,9 @@ Datadog recommends that you configure the table widget for Log Indexing in the f 2. Select the **Table** widget. 3. Select the **events** count metric that you generated earlier to count the number of events ingested. 4. In the **from** field, add the following: - a. `datadog_index:*` to filter to only logs that have been routed to indexes. - b. `datadog_is_excluded:false` to filter to only logs that have not matched any exclusion filter. - c. `retention_period:7` to filter to only logs that are retained for 7 days. You don't need to add this tag if you have the same retention period for all your indexes and therefore did not set up this tag earlier. If you have additional `retention_period` tags, create a separate widget for each one. + 1. `datadog_index:*` to filter to only logs that have been routed to indexes. + 1. `datadog_is_excluded:false` to filter to only logs that have not matched any exclusion filter. + 1. `retention_period:7` to filter to only logs that are retained for 7 days. You don't need to add this tag if you have the same retention period for all your indexes and therefore did not set up this tag earlier. If you have additional `retention_period` tags, create a separate widget for each one. 5. Select the **sum by** field, and add the `team` tag to show the usage in events, by team. You can also add other tags for your different cost buckets. 6. Add the following formula to convert usage into costs: `Usage in millions of events` * `Unit cost for 7 days of retention`. 
If your contractual price per million of events changes, you need to update the formula manually. 7. Click **Save**. diff --git a/content/en/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose.md b/content/en/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose.md index 9cb0d02eb8f..895dd59eef4 100644 --- a/content/en/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose.md +++ b/content/en/logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose.md @@ -51,84 +51,73 @@ See the [Send AWS Services Logs with the Datadog Amazon Data Firehose Destinatio ### Configure Fluent Bit for Firehose on an EKS Fargate cluster 1. Create the `aws-observability` namespace. - -{{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} -kubectl create namespace aws-observability + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + kubectl create namespace aws-observability {{< /code-block >}} - 2. Create the following Kubernetes ConfigMap for Fluent Bit as `aws-logging-configmap.yaml`. Substitute the name of your delivery stream. - -
For the new higher performance Kinesis Firehose plugin use the plugin name kinesis_firehose instead of amazon_data_firehose.
- -{{< code-block lang="yaml" filename="" disable_copy="false" collapsible="false" >}} -apiVersion: v1 -kind: ConfigMap -metadata: - name: aws-logging - namespace: aws-observability -data: - filters.conf: | - [FILTER] - Name kubernetes - Match kube.* - Merge_Log On - Buffer_Size 0 - Kube_Meta_Cache_TTL 300s - - flb_log_cw: 'true' - - output.conf: | - [OUTPUT] - Name kinesis_firehose - Match kube.* - region - delivery_stream +
For the new, higher-performance Kinesis Firehose plugin, use the plugin name kinesis_firehose instead of amazon_data_firehose.
+ {{< code-block lang="yaml" filename="" disable_copy="false" collapsible="false" >}} + apiVersion: v1 + kind: ConfigMap + metadata: + name: aws-logging + namespace: aws-observability + data: + filters.conf: | + [FILTER] + Name kubernetes + Match kube.* + Merge_Log On + Buffer_Size 0 + Kube_Meta_Cache_TTL 300s + + flb_log_cw: 'true' + + output.conf: | + [OUTPUT] + Name kinesis_firehose + Match kube.* + region + delivery_stream {{< /code-block >}} - 3. Use `kubectl` to apply the ConfigMap manifest. - -{{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} -kubectl apply -f aws-logging-configmap.yaml + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + kubectl apply -f aws-logging-configmap.yaml {{< /code-block >}} 4. Create an IAM policy and attach it to the pod execution role to allow the log router running on AWS Fargate to write to the Amazon Data Firehose. You can use the example below, replacing the ARN in the **Resource** field with the ARN of your delivery stream, as well as specifying your region and account ID. - -{{< code-block lang="json" filename="allow_firehose_put_permission.json" disable_copy="false" collapsible="false" >}} -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VisualEditor0", - "Effect": "Allow", - "Action": [ - "firehose:PutRecord", - "firehose:PutRecordBatch" - ], - "Resource": - "arn:aws:firehose:::deliverystream/" - } -] + {{< code-block lang="json" filename="allow_firehose_put_permission.json" disable_copy="false" collapsible="false" >}} + { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VisualEditor0", + "Effect": "Allow", + "Action": [ + "firehose:PutRecord", + "firehose:PutRecordBatch" + ], + "Resource": + "arn:aws:firehose:::deliverystream/" + } + ] } {{< /code-block >}} - - a. Create the policy. - -{{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} -aws iam create-policy \ - --policy-name FluentBitEKSFargate \ - --policy-document file://allow_firehose_put_permission.json + 1. Create the policy. + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + aws iam create-policy \ + --policy-name FluentBitEKSFargate \ + --policy-document file://allow_firehose_put_permission.json {{< /code-block >}} - - b. Retrieve the Fargate Pod Execution Role and attach the IAM policy. - -{{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} - POD_EXEC_ROLE=$(aws eks describe-fargate-profile \ - --cluster-name fargate-cluster \ - --fargate-profile-name fargate-profile \ - --query 'fargateProfile.podExecutionRoleArn' --output text |cut -d '/' -f 2) - aws iam attach-role-policy \ - --policy-arn arn:aws:iam:::policy/FluentBitEKSFargate \ - --role-name $POD_EXEC_ROLE + 2. Retrieve the Fargate Pod Execution Role and attach the IAM policy. + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + POD_EXEC_ROLE=$(aws eks describe-fargate-profile \ + --cluster-name fargate-cluster \ + --fargate-profile-name fargate-profile \ + --query 'fargateProfile.podExecutionRoleArn' --output text |cut -d '/' -f 2) + aws iam attach-role-policy \ + --policy-arn arn:aws:iam:::policy/FluentBitEKSFargate \ + --role-name $POD_EXEC_ROLE {{< /code-block >}} ### Deploy a sample application @@ -136,14 +125,13 @@ aws iam create-policy \ To generate logs and test the Amazon Data Firehose delivery stream, deploy a sample workload to your EKS Fargate cluster. 1. 
Create a deployment manifest `sample-deployment.yaml`. - -{{< code-block lang="yaml" filename="sample-deployment.yaml" disable_copy="false" collapsible="false" >}} - apiVersion: apps/v1 - kind: Deployment - metadata: + {{< code-block lang="yaml" filename="sample-deployment.yaml" disable_copy="false" collapsible="false" >}} + apiVersion: apps/v1 + kind: Deployment + metadata: name: sample-app namespace: fargate-namespace - spec: + spec: selector: matchLabels: app: nginx @@ -159,72 +147,61 @@ To generate logs and test the Amazon Data Firehose delivery stream, deploy a sam ports: - containerPort: 80 {{< /code-block >}} - - 2. Create the `fargate-namespace` namespace. - - {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} - kubectl create namespace fargate-namespace - {{< /code-block >}} - - 3. Use `kubectl` to apply the deployment manifest. - - {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} - kubectl apply -f sample-deployment.yaml - {{< /code-block >}} +2. Create the `fargate-namespace` namespace. + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + kubectl create namespace fargate-namespace +{{< /code-block >}} +3. Use `kubectl` to apply the deployment manifest. + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + kubectl apply -f sample-deployment.yaml +{{< /code-block >}} ### Validation 1. Verify that `sample-app` pods are running in the namespace `fargate-namespace`. + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + kubectl get pods -n fargate-namespace +{{< /code-block >}} - {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} - kubectl get pods -n fargate-namespace - {{< /code-block >}} - -Expected output: - - {{< code-block lang="bash" filename="" disable_copy="true" collapsible="false" >}} - NAME READY STATUS RESTARTS AGE - sample-app-6c8b449b8f-kq2qz 1/1 Running 0 3m56s - sample-app-6c8b449b8f-nn2w7 1/1 Running 0 3m56s - sample-app-6c8b449b8f-wzsjj 1/1 Running 0 3m56s - {{< /code-block >}} + Expected output: + {{< code-block lang="bash" filename="" disable_copy="true" collapsible="false" >}} + NAME READY STATUS RESTARTS AGE + sample-app-6c8b449b8f-kq2qz 1/1 Running 0 3m56s + sample-app-6c8b449b8f-nn2w7 1/1 Running 0 3m56s + sample-app-6c8b449b8f-wzsjj 1/1 Running 0 3m56s +{{< /code-block >}} 2. Use `kubectl describe pod` to confirm that the Fargate logging feature is enabled. + {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + kubectl describe pod -n fargate-namespace |grep Logging +{{< /code-block >}} - {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} - kubectl describe pod -n fargate-namespace |grep Logging - {{< /code-block >}} - -Expected output: - - {{< code-block lang="bash" filename="" disable_copy="true" collapsible="false" >}} + Expected output: + {{< code-block lang="bash" filename="" disable_copy="true" collapsible="false" >}} Logging: LoggingEnabled - Normal LoggingEnabled 5m fargate-scheduler Successfully enabled logging for pod - {{< /code-block >}} + Normal LoggingEnabled 5m fargate-scheduler Successfully enabled logging for pod +{{< /code-block >}} 3. Inspect deployment logs. 
+ {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} + kubectl logs -l app=nginx -n fargate-namespace +{{< /code-block >}} - {{< code-block lang="shell" filename="" disable_copy="false" collapsible="false" >}} - kubectl logs -l app=nginx -n fargate-namespace - {{< /code-block >}} - -Expected output: - - {{< code-block lang="bash" filename="" disable_copy="true" collapsible="false" >}} - /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh - /docker-entrypoint.sh: Configuration complete; ready for start up - 2023/01/27 16:53:42 [notice] 1#1: using the "epoll" event method - 2023/01/27 16:53:42 [notice] 1#1: nginx/1.23.3 - 2023/01/27 16:53:42 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) - 2023/01/27 16:53:42 [notice] 1#1: OS: Linux 4.14.294-220.533.amzn2.x86_64 - 2023/01/27 16:53:42 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:65535 - 2023/01/27 16:53:42 [notice] 1#1: start worker processes - ... - {{< /code-block >}} + Expected output: + {{< code-block lang="bash" filename="" disable_copy="true" collapsible="false" >}} + /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh + /docker-entrypoint.sh: Configuration complete; ready for start up + 2023/01/27 16:53:42 [notice] 1#1: using the "epoll" event method + 2023/01/27 16:53:42 [notice] 1#1: nginx/1.23.3 + 2023/01/27 16:53:42 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) + 2023/01/27 16:53:42 [notice] 1#1: OS: Linux 4.14.294-220.533.amzn2.x86_64 + 2023/01/27 16:53:42 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:65535 + 2023/01/27 16:53:42 [notice] 1#1: start worker processes + ... +{{< /code-block >}} 4. Verify the logs are in Datadog. In the [Datadog Log Explorer][10], search for `@aws.firehose.arn:""`, replacing `` with your Amazon Data Firehose ARN, to filter for logs from the Amazon Data Firehose. - -{{< img src="logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_verification.jpg" alt="Verification of the nginx log lines in Datadog Log Explorer" responsive="true">}} + {{< img src="logs/guide/aws-eks-fargate-logs-with-kinesis-data-firehose/log_verification.jpg" alt="Verification of the nginx log lines in Datadog Log Explorer" responsive="true">}} ### Remap attributes for log correlation diff --git a/content/en/logs/guide/azure-manual-log-forwarding.md b/content/en/logs/guide/azure-manual-log-forwarding.md index ae4922703bd..5e64f09a8a2 100644 --- a/content/en/logs/guide/azure-manual-log-forwarding.md +++ b/content/en/logs/guide/azure-manual-log-forwarding.md @@ -55,10 +55,10 @@ If you already have a function app configured for this purpose, skip to [Add a n 1. In the Azure portal, navigate to the [Function App overview][106] and click **Create**. 2. In the **Instance Details** section, configure the following settings: - a. Select the **Code** radio button - b. For **Runtime stack**, select `Node.js` - c. For **Version**, select `18 LTS`. - d. For **Operating System**, select `Windows`. + 1. Select the **Code** radio button. + 1. For **Runtime stack**, select `Node.js`. + 1. For **Version**, select `18 LTS`. + 1. For **Operating System**, select `Windows`. 3. Configure other settings as desired. 4. Click **Review + create** to validate the resource. If validation is successful, click **Create**. 
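+
+    If you prefer to script this step, a roughly equivalent function app can be created with the Azure CLI (a sketch: the resource group, app name, storage account, and region values are placeholders you supply):
+
+    ```shell
+    # Windows consumption-plan function app on the Node.js 18 runtime
+    az functionapp create \
+        --resource-group <RESOURCE_GROUP> \
+        --name <FUNCTION_APP_NAME> \
+        --storage-account <STORAGE_ACCOUNT> \
+        --consumption-plan-location <REGION> \
+        --os-type Windows \
+        --runtime node \
+        --runtime-version 18 \
+        --functions-version 4
+    ```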
diff --git a/content/en/logs/guide/google-cloud-log-forwarding.md b/content/en/logs/guide/google-cloud-log-forwarding.md index 1ef293b4a67..ef19cdc8a7b 100644 --- a/content/en/logs/guide/google-cloud-log-forwarding.md +++ b/content/en/logs/guide/google-cloud-log-forwarding.md @@ -166,13 +166,9 @@ The default behavior for Dataflow pipeline workers is to use your project's [Com 2. From the **Log Router** tab, select **Create Sink**. 3. Provide a name for the sink. 4. Choose _Cloud Pub/Sub_ as the destination and select the Cloud Pub/Sub topic that was created for that purpose. **Note**: The Cloud Pub/Sub topic can be located in a different project. - - {{< img src="integrations/google_cloud_pubsub/creating_sink2.png" alt="Export Google Cloud Pub/Sub Logs to Pub Sub" >}} - + {{< img src="integrations/google_cloud_pubsub/creating_sink2.png" alt="Export Google Cloud Pub/Sub Logs to Pub Sub" >}} 5. Choose the logs you want to include in the sink with an optional inclusion or exclusion filter. You can filter the logs with a search query, or use the [sample function][426]. For example, to include only 10% of the logs with a `severity` level of `ERROR`, create an inclusion filter with `severity="ERROR" AND sample(insertId, 0.1)`. - - {{< img src="integrations/google_cloud_platform/sink_inclusion_filter_2.png" alt="The inclusion filter for a Google Cloud logging sink with a query of severity=ERROR and sample(insertId, 0.1)" >}} - + {{< img src="integrations/google_cloud_platform/sink_inclusion_filter_2.png" alt="The inclusion filter for a Google Cloud logging sink with a query of severity=ERROR and sample(insertId, 0.1)" >}} 6. Click **Create Sink**. **Note**: It is possible to create several exports from Google Cloud Logging to the same Cloud Pub/Sub topic with different sinks. @@ -182,31 +178,31 @@ The default behavior for Dataflow pipeline workers is to use your project's [Com 1. Go to the [Create job from template][427] page in the Google Cloud console. 2. Give the job a name and select a Dataflow regional endpoint. 3. Select `Pub/Sub to Datadog` in the **Dataflow template** dropdown, and the **Required parameters** section appears. - a. Select the input subscription in the **Pub/Sub input subscription** dropdown. - b. Enter the following in the **Datadog Logs API URL** field: -
https://{{< region-param key="http_endpoint" code="true" >}}
+ 1. Select the input subscription in the **Pub/Sub input subscription** dropdown. + 1. Enter the following in the **Datadog Logs API URL** field: +
https://{{< region-param key="http_endpoint" code="true" >}}
- **Note**: Ensure that the Datadog site selector on the right of the page is set to your [Datadog site][428] before copying the URL above. + **Note**: Ensure that the Datadog site selector on the right of the page is set to your [Datadog site][428] before copying the URL above. - c. Select the topic created to receive message failures in the **Output deadletter Pub/Sub topic** dropdown. - d. Specify a path for temporary files in your storage bucket in the **Temporary location** field. + 1. Select the topic created to receive message failures in the **Output deadletter Pub/Sub topic** dropdown. + 1. Specify a path for temporary files in your storage bucket in the **Temporary location** field. -{{< img src="integrations/google_cloud_platform/dataflow_parameters.png" alt="Required parameters in the Datadog Dataflow template" style="width:80%;">}} + {{< img src="integrations/google_cloud_platform/dataflow_parameters.png" alt="Required parameters in the Datadog Dataflow template" style="width:80%;">}} 4. Under **Optional Parameters**, check `Include full Pub/Sub message in the payload`. 5. If you created a secret in Secret Manager with your Datadog API key value as mentioned in [step 1](#1-create-a-cloud-pubsub-topic-and-subscription), enter the **resource name** of the secret in the **Google Cloud Secret Manager ID** field. -{{< img src="integrations/google_cloud_platform/dataflow_template_optional_parameters.png" alt="Optional parameters in the Datadog Dataflow template with Google Cloud Secret Manager ID and Source of the API key passed fields both highlighted" style="width:80%;">}} + {{< img src="integrations/google_cloud_platform/dataflow_template_optional_parameters.png" alt="Optional parameters in the Datadog Dataflow template with Google Cloud Secret Manager ID and Source of the API key passed fields both highlighted" style="width:80%;">}} -See [Template parameters][412] in the Dataflow template for details on using the other available options: + See [Template parameters][412] in the Dataflow template for details on using the other available options: - - `apiKeySource=KMS` with `apiKeyKMSEncryptionKey` set to your [Cloud KMS][429] key ID and `apiKey` set to the encrypted API key - - **Not recommended**: `apiKeySource=PLAINTEXT` with `apiKey` set to the plaintext API key + - `apiKeySource=KMS` with `apiKeyKMSEncryptionKey` set to your [Cloud KMS][429] key ID and `apiKey` set to the encrypted API key + - **Not recommended**: `apiKeySource=PLAINTEXT` with `apiKey` set to the plaintext API key 6. If you created a custom worker service account, select it in the **Service account email** dropdown. -{{< img src="integrations/google_cloud_platform/dataflow_template_service_account.png" alt="Optional parameters in the Datadog Dataflow template with the service account email dropdown highlighted" style="width:80%;">}} + {{< img src="integrations/google_cloud_platform/dataflow_template_service_account.png" alt="Optional parameters in the Datadog Dataflow template with the service account email dropdown highlighted" style="width:80%;">}} 7. Click **RUN JOB**. 
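+
+    The same job can also be launched from the command line with `gcloud` (a sketch: the job name, region, bucket, and resource paths are placeholders; `http-intake.logs.datadoghq.com` assumes the US1 site, and the parameter names should be checked against [Template parameters][412]):
+
+    ```shell
+    # Run the Pub/Sub to Datadog template, reading the API key from Secret Manager
+    gcloud dataflow jobs run <JOB_NAME> \
+        --gcs-location gs://dataflow-templates-<REGION>/latest/Cloud_PubSub_to_Datadog \
+        --region <REGION> \
+        --staging-location gs://<BUCKET>/temp \
+        --parameters inputSubscription=projects/<PROJECT_ID>/subscriptions/<INPUT_SUBSCRIPTION>,url=https://http-intake.logs.datadoghq.com,includePubsubMessage=true,apiKeySource=SECRET_MANAGER,apiKeySecretId=projects/<PROJECT_ID>/secrets/<SECRET_NAME>/versions/latest,outputDeadletterTopic=projects/<PROJECT_ID>/topics/<DEADLETTER_TOPIC>
+    ```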
diff --git a/content/en/logs/log_configuration/forwarding_custom_destinations.md b/content/en/logs/log_configuration/forwarding_custom_destinations.md index 1e72aa7fd95..cfd48401787 100644 --- a/content/en/logs/log_configuration/forwarding_custom_destinations.md +++ b/content/en/logs/log_configuration/forwarding_custom_destinations.md @@ -131,8 +131,8 @@ The following metrics report on logs that have been forwarded successfully, incl {{< /tabs >}} 10. In the **Select Tags to Forward** section: - a. Select whether you want **All tags**, **No tags**, or **Specific Tags** to be included. - b. Select whether you want to **Include** or **Exclude specific tags**, and specify which tags to include or exclude. + 1. Select whether you want **All tags**, **No tags**, or **Specific Tags** to be included. + 1. Select whether you want to **Include** or **Exclude specific tags**, and specify which tags to include or exclude. 11. Click **Save**. diff --git a/content/en/monitors/guide/_index.md b/content/en/monitors/guide/_index.md index 4003ac82b6e..904c578669e 100644 --- a/content/en/monitors/guide/_index.md +++ b/content/en/monitors/guide/_index.md @@ -18,7 +18,7 @@ cascade: {{< nextlink href="monitors/guide/why-did-my-monitor-settings-change-not-take-effect" >}}Monitor settings changes not taking effect{{< /nextlink >}} {{< nextlink href="monitors/guide/recovery-thresholds" >}}Recovery thresholds{{< /nextlink >}} {{< nextlink href="monitors/guide/alert_aggregation" >}}Alert aggregation{{< /nextlink >}} - {{< nextlink href="monitors/guide/notification-message-best-practices" >}}Notification message Best Practice{{< /nextlink >}} + {{< nextlink href="monitors/guide/notification-message-best-practices" >}}Notification Message Best Practices{{< /nextlink >}} {{< /whatsnext >}} {{< whatsnext desc="Tutorial:" >}} diff --git a/content/en/observability_pipelines/legacy/guide/ingest_aws_s3_logs_with_the_observability_pipelines_worker.md b/content/en/observability_pipelines/legacy/guide/ingest_aws_s3_logs_with_the_observability_pipelines_worker.md index d9787922aff..b6af740aac6 100644 --- a/content/en/observability_pipelines/legacy/guide/ingest_aws_s3_logs_with_the_observability_pipelines_worker.md +++ b/content/en/observability_pipelines/legacy/guide/ingest_aws_s3_logs_with_the_observability_pipelines_worker.md @@ -120,10 +120,10 @@ Apply the role to the running Observability Pipelines process. You can do this b ## Configure the Worker to receive notifications from the SQS queue 1. Use the below source configuration example to set up the Worker to: - a. Receive the SQS event notifications. - b. Read the associated logs in the S3 bucket. - c. Emit the logs to the console. - ```yaml + 1. Receive the SQS event notifications. + 1. Read the associated logs in the S3 bucket. + 1. Emit the logs to the console. + ```yaml sources: cloudtrail: type: aws_s3 diff --git a/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md b/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md index c40013aa7c5..9e5051004fc 100644 --- a/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md +++ b/content/en/observability_pipelines/legacy/guide/route_logs_in_datadog_rehydratable_format_to_Amazon_S3.md @@ -232,13 +232,13 @@ Replace `${DD_ARCHIVES_BUCKET}` and $`{DD_ARCHIVES_REGION}` parameters based on 1. Navigate to your [Pipeline][8]. 1. 
(Optional) Add a remap transform to tag all logs going to `datadog_archives`.
- a. Click **Edit** and then **Add More** in the **Add Transforms.
- b. Click the **Remap** tile.
- c. Enter a descriptive name for the component.
- d. In the **Inputs** field, select the source to connect this destination to.
- e. Add `.sender = "observability_pipelines_worker"` in the **Source** section.
- f. Click **Save**.
- g. Navigate back to your pipeline.
+ 1. Click **Edit** and then **Add More** in the **Add Transforms** tile.
+ 1. Click the **Remap** tile.
+ 1. Enter a descriptive name for the component.
+ 1. In the **Inputs** field, select the source to connect this destination to.
+ 1. Add `.sender = "observability_pipelines_worker"` in the **Source** section.
+ 1. Click **Save**.
+ 1. Navigate back to your pipeline.
1. Click **Edit**.
1. Click **Add More** in the **Add Destination** tile.
1. Click the **Datadog Archives** tile.
@@ -248,31 +248,31 @@ Replace `${DD_ARCHIVES_BUCKET}` and $`{DD_ARCHIVES_REGION}` parameters based on
{{< tabs >}}
{{% tab "AWS S3" %}}
-7. In the **Bucket** field, enter the name of the S3 bucket you created earlier.
-8. Enter `aws_s3` in the **Service** field.
-9. Toggle **AWS S3** to enable those specific configuration options.
-10. In the **Storage Class** field, select the storage class in the dropdown menu.
-11. Set the other configuration options based on your use case.
-12. Click **Save**.
+8. In the **Bucket** field, enter the name of the S3 bucket you created earlier.
+9. Enter `aws_s3` in the **Service** field.
+10. Toggle **AWS S3** to enable those specific configuration options.
+11. In the **Storage Class** field, select the storage class in the dropdown menu.
+12. Set the other configuration options based on your use case.
+13. Click **Save**.
{{% /tab %}}
{{% tab "Azure Blob" %}}
-7. In the **Bucket** field, enter the name of the S3 bucket you created earlier.
-8. Enter `azure_blob` in the **Service** field.
-9. Toggle **Azure Blob** to enable those specific configuration options.
-10. Enter the Azure Blob Storage Account connection string.
-11. Set the other configuration options based on your use case.
-12. Click **Save**.
+8. In the **Bucket** field, enter the name of the S3 bucket you created earlier.
+9. Enter `azure_blob` in the **Service** field.
+10. Toggle **Azure Blob** to enable those specific configuration options.
+11. Enter the Azure Blob Storage Account connection string.
+12. Set the other configuration options based on your use case.
+13. Click **Save**.
{{% /tab %}}
{{% tab "GCP Cloud Storage" %}}
-7. In the **Bucket** field, enter the name of the S3 bucket you created earlier.
-8. Enter `gcp_cloud_storage` in the **Service** field.
-9. Toggle **GCP Cloud Storage** to enable those specific configuration options.
-10. Set the configuration options based on your use case.
-11. Click **Save**.
+8. In the **Bucket** field, enter the name of the S3 bucket you created earlier.
+9. Enter `gcp_cloud_storage` in the **Service** field.
+10. Toggle **GCP Cloud Storage** to enable those specific configuration options.
+11. Set the configuration options based on your use case.
+12. Click **Save**.
{{% /tab %}} {{< /tabs >}} diff --git a/content/en/observability_pipelines/legacy/guide/sensitive_data_scanner_transform.md b/content/en/observability_pipelines/legacy/guide/sensitive_data_scanner_transform.md index 2908127bbc2..74eeb573d6b 100644 --- a/content/en/observability_pipelines/legacy/guide/sensitive_data_scanner_transform.md +++ b/content/en/observability_pipelines/legacy/guide/sensitive_data_scanner_transform.md @@ -40,13 +40,12 @@ Sensitive data, such as credit card numbers, bank routing numbers, and API keys, - **Note:** If you select hashing, the UTF-8 bytes of the match is hashed with the 64-bit fingerprint of farmhash. 1. In the **Pattern** section: - To create a custom scanning rule: - a. Select **Custom** in the **type** dropdown menu. - b. In the **Define regex** field, enter the regex pattern to check against the data. See [Using regex for custom rules](#using-regex-for-custom-rules) for more information. + 1. Select **Custom** in the **type** dropdown menu. + 1. In the **Define regex** field, enter the regex pattern to check against the data. See [Using regex for custom rules](#using-regex-for-custom-rules) for more information. - To use an out-of-the-box scanning rule: - a. Select **Library** in the **type** dropdown menu. - b. Select the scanning rule you want to use in the **Name** dropdown menu. -1. In the **Scan entire event or portion of it** section: - a. Select if you want to scan the **Entire Event** or **Specific Attributes** in the **Target** dropdown menu. + 1. Select **Library** in the **type** dropdown menu. + 1. Select the scanning rule you want to use in the **Name** dropdown menu. +1. In the **Scan entire event or portion of it** section, select if you want to scan the **Entire Event** or **Specific Attributes** in the **Target** dropdown menu. - If you are scanning the entire event, you can optionally exclude specific attributes from getting scanned. - If you are scanning specific attributes, specify which attributes you want to scan. 1. Optionally, add one or more tags to associate with the matched events. diff --git a/content/en/observability_pipelines/legacy/guide/set_quotas_for_data_sent_to_a_destination.md b/content/en/observability_pipelines/legacy/guide/set_quotas_for_data_sent_to_a_destination.md index fc7155fc13b..30551ed4174 100644 --- a/content/en/observability_pipelines/legacy/guide/set_quotas_for_data_sent_to_a_destination.md +++ b/content/en/observability_pipelines/legacy/guide/set_quotas_for_data_sent_to_a_destination.md @@ -48,15 +48,14 @@ This guide walks you through how to: 1. Click the **Quota** tile. 1. Enter a name for the component. 1. Select one or more input for the transform. -1. In the **Limits** section: - a. Select the unit type. The unit of the quota limit can be the number of events or volume of data. - b. Enter the limit in the **Max** field. -1. Enter the timeframe in the **Window** field. - For example, to configure the transform to send up to 2GB of logs per day to the destination, set: - - **Bytes** as the unit type - - `2000000000`in the **Max** field - - `24h` in the **Window** field - +1. In the **Limits** section: + 1. Select the unit type. The unit of the quota limit can be the number of events or volume of data. + 1. Enter the limit in the **Max** field. +1. Enter the timeframe in the **Window** field.
+ For example, to configure the transform to send up to 2GB of logs per day to the destination, set:
+ - **Bytes** as the unit type
+ - `2000000000` in the **Max** field
+ - `24h` in the **Window** field
1. Click **Save**.
1. For each destination or transform that ingests logs from the `quota` transform, click the component's tile and add `.dropped` for the input ID for the data sent after the limit is met.
@@ -136,19 +135,19 @@ To set up a monitor to alert when the quota is reached:
1. Navigate to the [New Monitor][5] page.
1. Select **Metric**.
1. Leave the detection method as **Threshold Alert**.
-1. In the **Define the metric** field:
- a. Enter `vector.component_sent_event_bytes_total` for the metric.
- b. In the **from** field, add `component_id:,output:dropped` where `` is the name of your `quota` transform.
- c. Enter `host` in the **sum by** field.
- d. Leave the setting to evaluate the `sum` of the query over the `last 5 minutes`.
+1. In the **Define the metric** field:
+ 1. Enter `vector.component_sent_event_bytes_total` for the metric.
+ 1. In the **from** field, add `component_id:,output:dropped` where `` is the name of your `quota` transform.
+ 1. Enter `host` in the **sum by** field.
+ 1. Leave the setting to evaluate the `sum` of the query over the `last 5 minutes`.
1. In the **Set alert conditions** section:
- a. Leave the setting to trigger when the evaluated value is `above` the threshold for any `host`.
- b. Enter `1` for the **Alert threshold**. This means that if the metric query is greater than 1, then the monitor alerts.
-See [Metric Monitors][6] for more information.
-1. In the **Configure notifications and automations** section:
- a. Enter a name for your monitor.
- b. Enter a notification message. See [Notifications][7] and [Variables][8] for more information on customizing your message.
- c. Select who and which services the notifications are sent to.
+ 1. Leave the setting to trigger when the evaluated value is `above` the threshold for any `host`.
+ 1. Enter `1` for the **Alert threshold**. This means that if the metric query is greater than 1, then the monitor alerts.
+ See [Metric Monitors][6] for more information.
+1. In the **Configure notifications and automations** section:
+ 1. Enter a name for your monitor.
+ 1. Enter a notification message. See [Notifications][7] and [Variables][8] for more information on customizing your message.
+ 1. Select who and which services the notifications are sent to.
1. Optionally, you can set [renotifications][9], tags, teams, and a [priority][10] for your monitor.
1. In the **Define permissions and audit notifications** section, you can define [permissions][11] and audit notifications.
1. Click **Create**.
diff --git a/content/en/security/application_security/exploit-prevention.md b/content/en/security/application_security/exploit-prevention.md
index 780a2902d73..dd09fd2683f 100644
--- a/content/en/security/application_security/exploit-prevention.md
+++ b/content/en/security/application_security/exploit-prevention.md
@@ -83,15 +83,10 @@ App and API Protection Exploit Prevention intercepts all SQL queries to determin
1. Navigate to [In-App WAF][4].
2. If you have applied a Datadog managed policy to your services, then follow these steps:
-
- a. Clone the policy. For example, you can use the **Managed - Block attack tools** policy.
-
- b. Add a policy name and description.
-
- c. Click on the policy you created and select the **Local File Inclusion** ruleset. Enable blocking for the **Local File Inclusion exploit** rule.
-
- d. Similarly, select the **Server-side Request Forgery** ruleset and enable blocking for the **Server-side request forgery** exploit rule.
-
+ 1. Clone the policy. For example, you can use the **Managed - Block attack tools** policy.
+ 1. Add a policy name and description.
+ 1. Click on the policy you created and select the **Local File Inclusion** ruleset. Enable blocking for the **Local File Inclusion exploit** rule.
+ 1. Similarly, select the **Server-side Request Forgery** ruleset and enable blocking for the **Server-side request forgery** exploit rule.
-3. If you have applied a custom policy for your services, you can skip Steps 2.a and 2.b for cloning a policy and directly set the Exploit Prevention rules in **blocking** mode (Steps 2.c and 2.d).
+3. If you have applied a custom policy for your services, you can skip steps 2.1 and 2.2 for cloning a policy and directly set the Exploit Prevention rules in **blocking** mode (steps 2.3 and 2.4).
diff --git a/content/en/security/application_security/setup/single_step/_index.md b/content/en/security/application_security/setup/single_step/_index.md
index 66340546a9e..61d6c6cca9f 100644
--- a/content/en/security/application_security/setup/single_step/_index.md
+++ b/content/en/security/application_security/setup/single_step/_index.md
@@ -31,14 +31,11 @@ With one command, you can install, configure, and start the Agent, while also in
For an Ubuntu host:
1. Run the one-line installation command:
-
```shell
DD_API_KEY= DD_SITE="" DD_APM_INSTRUMENTATION_ENABLED=host DD_APM_INSTRUMENTATION_LIBRARIES="java:1,python:4,js:5,dotnet:3,php:1" DD_APPSEC_ENABLED=true bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)"
```
-
- a. Replace `` with your [Datadog API key][4].
-
- b. Replace `` with your [Datadog site][3].
+ 1. Replace `` with your [Datadog API key][4].
+ 1. Replace `` with your [Datadog site][3].
You can also optionally configure the following:
diff --git a/content/en/security/application_security/troubleshooting.md index 5f501cc3d11..929bd61db70 100644 --- a/content/en/security/application_security/troubleshooting.md +++ b/content/en/security/application_security/troubleshooting.md @@ -375,28 +375,17 @@ Use this [migration guide][1] to assess any breaking changes if you upgraded you If you don't see AAP threat information in the [Trace and Signals Explorer][2] for your Node.js application, follow these steps to troubleshoot the issue: -1. Confirm the latest version of AAP is running by checking that `appsec_enabled` is `true` in the [startup logs][3] - - a. If you don't see startup logs after a request has been sent, add the environment variable `DD_TRACE_STARTUP_LOGS=true` to enable startup logs. Check the startup logs for `appsec_enabled` is `true`. - - b. If `appsec_enabled` is `false`, then AAP was not enabled correctly. See [installation instructions][4]. - - c. If `appsec_enabled` is not in the startup logs, the latest AAP version needs to be installed. See [installation instructions][4]. - -2. Is the tracer working? Can you see relevant traces on the APM dashboard? - - AAP relies on the tracer so if you don't see traces, then the tracer might not be working. See [APM Troubleshooting][5]. - +1. Confirm the latest version of AAP is running by checking that `appsec_enabled` is `true` in the [startup logs][3]. + 1. If you don't see startup logs after a request has been sent, add the environment variable `DD_TRACE_STARTUP_LOGS=true` to enable startup logs. Check the startup logs to confirm that `appsec_enabled` is `true`. + 1. If `appsec_enabled` is `false`, then AAP was not enabled correctly. See [installation instructions][4]. + 1. If `appsec_enabled` is not in the startup logs, the latest AAP version needs to be installed. See [installation instructions][4]. +2. Confirm that the tracer is working by looking for relevant traces on the APM dashboard.
    + AAP relies on the tracer, so if you don't see traces, then the tracer might not be working. See [APM Troubleshooting][5]. 3. In your application directory, run the command `npm explore @datadog/native-appsec -- npm run install` and restart your app. - - a. If `@datadog/native-appsec` is not found then the installation is incorrect. - - b. If `@datadog/native-appsec` is found when starting your application, add the command to your runtime start script. - - c. If the tracer still does not work, you might be running an unsupported runtime. - + 1. If `@datadog/native-appsec` is not found, then the installation is incorrect. + 1. If `@datadog/native-appsec` is found when starting your application, add the command to your runtime start script. + 1. If the tracer still does not work, you might be running an unsupported runtime. 4. To enable logs, add the following environment variables: - ``` DD_TRACE_DEBUG=1 DD_TRACE_LOG_LEVEL=info @@ -405,6 +394,7 @@ If you don't see AAP threat information in the [Trace and Signals Explorer][2] f [1]: https://github.com/DataDog/dd-trace-js/blob/master/MIGRATING.md [2]: https://app.datadoghq.com/security/appsec/ [3]: /tracing/troubleshooting/tracer_startup_logs/ +[4]: /security/application_security/setup/nodejs/ [5]: /tracing/troubleshooting/ {{< /programming-lang >}} {{< programming-lang lang="python" >}} diff --git a/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md b/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md index 550d29b6257..d74b7508216 100644 --- a/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md +++ b/content/en/security/cloud_siem/guide/google-cloud-config-guide-for-cloud-siem.md @@ -138,20 +138,20 @@ The default behavior for Dataflow pipeline workers is to use your project's [Com 1. Enter a name for the job. 1. Select a regional endpoint. 1. In the **Dataflow template** dropdown menu, select **Pub/Sub to Datadog**. -1. In **Required Parameters** section: - a. In the **Pub/Sub input subscription** dropdown menu, select the default subscription that was created earlier when you created a new [Pub/Sub system](#create-a-google-cloud-publishsubscription-pubsub-system). - b. Enter the following in the **Datadog Logs API URL** field: +1. In the **Required Parameters** section: + 1. In the **Pub/Sub input subscription** dropdown menu, select the default subscription that was created earlier when you created a new [Pub/Sub system](#create-a-google-cloud-publishsubscription-pubsub-system). + 1. Enter the following in the **Datadog Logs API URL** field: ``` https://{{< region-param key="http_endpoint" code="true" >}} ``` **Note**: Ensure that the Datadog site selector on the right of this documentation page is set to your Datadog site before copying the URL above. - c. In the **Output deadletter Pub/Sub topic** field, select the [additional topic](#create-an-additional-topic-and-subscription-for-outputdeadlettertopic) you created earlier for receiving messages rejected by the Datadog API. - d. Specify a path for temporary files in your storage bucket in the **Temporary location** field. + 1. In the **Output deadletter Pub/Sub topic** field, select the [additional topic](#create-an-additional-topic-and-subscription-for-outputdeadlettertopic) you created earlier for receiving messages rejected by the Datadog API. + 1. Specify a path for temporary files in your storage bucket in the **Temporary location** field. 1. 
If you [created a secret in Secret Manager](#create-a-secret-in-secret-manager) for your Datadog API key value earlier: - a. Click **Optional Parameters** to see the additional fields. - b. Enter the resource name of the secret in the **Google Cloud Secret Manager ID** field. - To get the resource name, go to your secret in [Secret Manager][8]. Click on your secret. Click on the three dots under **Action** and select **Copy resource name**. - c. Enter `SECRET_MANAGER` in the **Source of the API key passed** field. + 1. Click **Optional Parameters** to see the additional fields. + 1. Enter the resource name of the secret in the **Google Cloud Secret Manager ID** field.
+ To get the resource name, go to your secret in [Secret Manager][8]. Click on your secret. Click on the three dots under **Action** and select **Copy resource name**. + 1. Enter `SECRET_MANAGER` in the **Source of the API key passed** field. 1. If you are not using a secret for your Datadog API key value: - **Recommended**: - Set `Source of API key passed` to `KMS`. diff --git a/content/en/security/cloud_siem/triage_and_investigate/investigate_security_signals.md index 5d8a6206fc7..46310b1f67e 100644 --- a/content/en/security/cloud_siem/triage_and_investigate/investigate_security_signals.md +++ b/content/en/security/cloud_siem/triage_and_investigate/investigate_security_signals.md @@ -60,14 +60,14 @@ To view your signals by MITRE ATT&CK Tactic and Technique: 1. Click the **Signals** tab at the top of the page. 1. Click on a security signal from the table. 1. In the **What Happened** section, see the logs that matched the query. Hover over the query to see the query details. - - You can also see specific information like username or network IP. In **Rule Details**, click the funnel icon to create a suppression rule or add the information to an existing suppression. See [Create suppression rule][11] for more details. + - You can also see specific information like username or network IP. In **Rule Details**, click the funnel icon to create a suppression rule or add the information to an existing suppression. See [Create suppression rule][11] for more details. 1. In the **Next Steps** section: - a. Under **Triage**, click the dropdown to change the triage status of the signal. The default status is `OPEN`. + 1. Under **Triage**, click the dropdown to change the triage status of the signal. The default status is `OPEN`. - `Open`: Datadog Security triggered a detection based on a rule, and the resulting signal is not yet resolved. - `Under Review`: During an active investigation, change the triage status to `Under Review`. From the `Under Review` state, you can move the status to `Archived` or `Open` as needed. - `Archived`: When the detection that caused the signal has been resolved, update the status to `Archived`. When a signal is archived, you can give a reason and description for future reference. If an archived issue resurfaces, or if further investigation is necessary, the status can be changed back to `Open`. All signals are locked 30 days after they have been created.
- b. Click **Assign Signal** to assign a signal to yourself or another Datadog user. - c. Under **Take Action**, you can create a case, declare an incident, edit suppressions, or run workflows. Creating a case automatically assigns the signal to you and sets the triage status to `Under Review`. + 1. Click **Assign Signal** to assign a signal to yourself or another Datadog user. + 1. Under **Take Action**, you can create a case, declare an incident, edit suppressions, or run workflows. Creating a case automatically assigns the signal to you and sets the triage status to `Under Review`. {{< img src="security/security_monitoring/investigate_security_signals/signal_side_panel.png" alt="The signal side panel of a compromised AWS IAM user access key showing two IP addresses and their locations" style="width:90%;" >}} diff --git a/content/en/security/sensitive_data_scanner/guide/investigate_sensitive_data_findings.md b/content/en/security/sensitive_data_scanner/guide/investigate_sensitive_data_findings.md index 07763dd0c27..1f6d3531a8b 100644 --- a/content/en/security/sensitive_data_scanner/guide/investigate_sensitive_data_findings.md +++ b/content/en/security/sensitive_data_scanner/guide/investigate_sensitive_data_findings.md @@ -43,17 +43,18 @@ To investigate a finding: 1. Click on the finding in the list. 2. In the finding panel, click **View Recent Changes** to navigate to [Audit Trail][3] and see if there are any recent configuration changes that caused the sensitive data finding. 3. Use the following options to explore different types of data matching the query: - a. To view all logs related to the query in Log Explorer, click **View All Logs**.
- b. To view all traces matching the query in Trace Explorer, click **View All APM Spans**.
- c. To view all RUM events matching the query, click **View All RUM Events**.
- d. To view all events matching the query, click **View All Events**. + 1. To view all logs related to the query in Log Explorer, click **View All Logs**. + 1. To view all traces matching the query in Trace Explorer, click **View All APM Spans**. + 1. To view all RUM events matching the query, click **View All RUM Events**. + 1. To view all events matching the query, click **View All Events**. {{< img src="sensitive_data_scanner/investigate_sensitive_data_issues/findings_panel_20251015.png" alt="The findings panel showing a critical visa card scanner finding" style="width:50%;">}} -4. In the **Blast Radius** section:
- a. View the Top 10 services, hosts, and environments impacted by this sensitive data findings.
- b. Click on a service to see more information about the service in the **Software Catalog**.
- c. Click on a host to see more information about the host in the Infrastructure List page. +4. In the **Blast Radius** section: + 1. View the Top 10 services, hosts, and environments impacted by this sensitive data finding. + 1. Click on a service to see more information about the service in the **Software Catalog**. + 1. Click on a host to see more information about the host in the Infrastructure List page. {{< img src="sensitive_data_scanner/investigate_sensitive_data_issues/blast_radius_02_01_2024.png" alt="The findings panel showing the top 10 impacted services" style="width:50%;">}} -If you want to modify the Scanning Rule that was used to detect the sensitive data finding, click **Modify Rule** at the top of the panel. + + If you want to modify the Scanning Rule that was used to detect the sensitive data finding, click **Modify Rule** at the top of the panel. Additionally, you can: - Use [Case Management][1] to track, triage, and investigate the finding by clicking **Create Case** at the top of the panel. Associated cases are surfaced in the Findings page. diff --git a/content/en/tracing/guide/tutorial-enable-java-host.md index 8ce8c7beabb..3c578832918 100644 --- a/content/en/tracing/guide/tutorial-enable-java-host.md +++ b/content/en/tracing/guide/tutorial-enable-java-host.md @@ -312,46 +312,32 @@ The following steps walk you through adding annotations to the code to trace som 5. Update your build script configuration, and build the application: {{< tabs >}} - {{% tab "Maven" %}} - -a. Open `notes/pom.xml` and uncomment the lines configuring dependencies for manual tracing. The `dd-trace-api` library is used for the `@Trace` annotations, and `opentracing-util` and `opentracing-api` are used for manual span creation. - -b. Run: - - ```sh - ./mvnw clean package - - java -javaagent:../dd-java-agent.jar -Ddd.trace.sample.rate=1 -Ddd.service=notes -Ddd.env=dev -jar -Ddd.version=0.0.1 target/notes-0.0.1-SNAPSHOT.jar - ``` - - Or use the script: - - ```sh - sh ./scripts/mvn_instrumented_run.sh - ``` - + 1. Open `notes/pom.xml` and uncomment the lines configuring dependencies for manual tracing. The `dd-trace-api` library is used for the `@Trace` annotations, and `opentracing-util` and `opentracing-api` are used for manual span creation. + 1. Run: + ```sh + ./mvnw clean package + + java -javaagent:../dd-java-agent.jar -Ddd.trace.sample.rate=1 -Ddd.service=notes -Ddd.env=dev -jar -Ddd.version=0.0.1 target/notes-0.0.1-SNAPSHOT.jar + ``` + Or use the script: + ```sh + sh ./scripts/mvn_instrumented_run.sh + ``` {{% /tab %}} - {{% tab "Gradle" %}} - -a. Open `notes/build.gradle` and uncomment the lines configuring dependencies for manual tracing. The `dd-trace-api` library is used for the `@Trace` annotations, and `opentracing-util` and `opentracing-api` are used for manual span creation. - -b. Run: - ```sh - ./gradlew clean bootJar - - java -javaagent:../dd-java-agent.jar -Ddd.trace.sample.rate=1 -Ddd.service=notes -Ddd.env=dev -jar -Ddd.version=0.0.1 build/libs/notes-0.0.1-SNAPSHOT.jar - ``` - - Or use the script: - - ```sh - sh ./scripts/gradle_instrumented_run.sh - ``` - + 1. Open `notes/build.gradle` and uncomment the lines configuring dependencies for manual tracing. The `dd-trace-api` library is used for the `@Trace` annotations, and `opentracing-util` and `opentracing-api` are used for manual span creation. + 1.
Run: + ```sh + ./gradlew clean bootJar + + java -javaagent:../dd-java-agent.jar -Ddd.trace.sample.rate=1 -Ddd.service=notes -Ddd.env=dev -jar -Ddd.version=0.0.1 build/libs/notes-0.0.1-SNAPSHOT.jar + ``` + Or use the script: + ```sh + sh ./scripts/gradle_instrumented_run.sh + ``` {{% /tab %}} - {{< /tabs >}} 6. Resend some HTTP requests, specifically some `GET` requests. diff --git a/content/en/tracing/trace_collection/custom_instrumentation/android/dd-api.md b/content/en/tracing/trace_collection/custom_instrumentation/android/dd-api.md index 5d2c02b68ec..39d621688d8 100644 --- a/content/en/tracing/trace_collection/custom_instrumentation/android/dd-api.md +++ b/content/en/tracing/trace_collection/custom_instrumentation/android/dd-api.md @@ -28,684 +28,624 @@ Send [traces][1] to Datadog from your Android applications with [Datadog's ## Setup 1. Add the Gradle dependency by declaring the library as a dependency in your `build.gradle` file: - -```groovy -dependencies { - implementation "com.datadoghq:dd-sdk-android-trace:x.x.x" -} -``` - + ```groovy + dependencies { + implementation "com.datadoghq:dd-sdk-android-trace:x.x.x" + } + ``` 2. Initialize Datadog SDK with your application context, tracking consent, and the [Datadog client token][4]. For security reasons, you must use a client token: you cannot use [Datadog API keys][5] to configure Datadog SDK as they would be exposed client-side in the Android application APK byte code. For more information about setting up a client token, see the [client token documentation][4]: + {{< site-region region="us" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .build(); -{{< site-region region="us" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).build() - - Datadog.initialize(this, configuration, trackingConsent) + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + {{< site-region region="eu" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.EU1) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="eu" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = 
Configuration.Builder( + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.EU1) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + {{< site-region region="us3" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( clientToken = "", env = "", variant = "" - ).useSite(DatadogSite.EU1) - .build() + ).useSite(DatadogSite.US3) + .build() - Datadog.initialize(this, configuration, trackingConsent) + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.EU1) - .build(); + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US3) + .build(); - Datadog.initialize(this, configuration, trackingConsent); - } + Datadog.initialize(this, configuration, trackingConsent); + } } -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="us3" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US3) - .build() + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + {{< site-region region="us5" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US5) + .build() - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US3) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="us5" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US5) - .build() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US5) + .build(); - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void 
onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US5) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="gov" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US1_FED) - .build() + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + {{< site-region region="gov" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US1_FED) + .build() - Datadog.initialize(this, configuration, trackingConsent) + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US1_FED) - .build(); - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US1_FED) + .build(); + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="ap1" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.AP1) - .build() + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + {{< site-region region="ap1" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.AP1) + .build() - Datadog.initialize(this, configuration, trackingConsent) + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.AP1) - .build(); - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.AP1) + .build(); + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + {{< site-region region="ap2" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + 
```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.AP2) + .build() + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.AP2) + .build(); -{{< site-region region="ap2" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.AP2) - .build() - Datadog.initialize(this, configuration, trackingConsent) + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.AP2) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + To be compliant with the GDPR regulation, the SDK requires the tracking consent value at + initialization. + The tracking consent can be one of the following values: + * `TrackingConsent.PENDING`: The SDK starts collecting and batching the data but does not send it + to the data + collection endpoint. The SDK waits for the new tracking consent value to decide what to do with + the batched data. + * `TrackingConsent.GRANTED`: The SDK starts collecting the data and sends it to the data + collection endpoint. + * `TrackingConsent.NOT_GRANTED`: The SDK does not collect any data. You will not be able to + manually send any logs, traces, or + RUM events. + + To update the tracking consent after the SDK is initialized, call: + `Datadog.setTrackingConsent()`. + The SDK changes its behavior according to the new consent. For example, if the current tracking + consent is `TrackingConsent.PENDING` and you update it to: + * `TrackingConsent.GRANTED`: The SDK sends all current batched data and future data directly to + the data collection endpoint. + * `TrackingConsent.NOT_GRANTED`: The SDK wipes all batched data and does not collect any future + data. + + **Note**: In the credentials required for initialization, your application variant name is also + required, and should use your `BuildConfig.FLAVOR` value (or an empty string if you don't have + variants). This is important because it enables the right ProGuard `mapping.txt` file to be + automatically uploaded at build time to be able to view de-obfuscated RUM error stack traces. For + more information see the [guide to uploading Android source mapping files][7]. + + Use the utility method `isInitialized` to check if the SDK is properly initialized: + + ```kotlin + if (Datadog.isInitialized()) { + // your code here } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -To be compliant with the GDPR regulation, the SDK requires the tracking consent value at -initialization. -The tracking consent can be one of the following values: -* `TrackingConsent.PENDING`: The SDK starts collecting and batching the data but does not send it -to the data -collection endpoint. 
The SDK waits for the new tracking consent value to decide what to do with -the batched data. -* `TrackingConsent.GRANTED`: The SDK starts collecting the data and sends it to the data -collection endpoint. -* `TrackingConsent.NOT_GRANTED`: The SDK does not collect any data. You will not be able to -manually send any logs, traces, or -RUM events. - -To update the tracking consent after the SDK is initialized, call: -`Datadog.setTrackingConsent()`. -The SDK changes its behavior according to the new consent. For example, if the current tracking -consent is `TrackingConsent.PENDING` and you update it to: -* `TrackingConsent.GRANTED`: The SDK sends all current batched data and future data directly to -the data collection endpoint. -* `TrackingConsent.NOT_GRANTED`: The SDK wipes all batched data and does not collect any future -data. - -**Note**: In the credentials required for initialization, your application variant name is also -required, and should use your `BuildConfig.FLAVOR` value (or an empty string if you don't have -variants). This is important because it enables the right ProGuard `mapping.txt` file to be -automatically uploaded at build time to be able to view de-obfuscated RUM error stack traces. For -more information see the [guide to uploading Android source mapping files][7]. - -Use the utility method `isInitialized` to check if the SDK is properly initialized: + ``` -```kotlin -if (Datadog.isInitialized()) { -// your code here -} -``` - -When writing your application, you can enable development logs by calling the `setVerbosity` method. -All internal messages in the library with a priority equal to or higher than the provided level are -then logged to Android's Logcat: - -```kotlin -Datadog.setVerbosity(Log.INFO) -``` + When writing your application, you can enable development logs by calling the `setVerbosity` method. + All internal messages in the library with a priority equal to or higher than the provided level are + then logged to Android's Logcat: + ```kotlin + Datadog.setVerbosity(Log.INFO) + ``` 3. Configure and enable Trace feature: - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val traceConfig = TraceConfiguration.Builder().build() -Trace.enable(traceConfig) -``` - -{{% /tab %}} - -{{% tab "Java" %}} - -```java -TraceConfiguration traceConfig = TraceConfiguration.Builder().build(); -Trace.enable(traceConfig); -``` - -{{% /tab %}} -{{< /tabs >}} - + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val traceConfig = TraceConfiguration.Builder().build() + Trace.enable(traceConfig) + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + TraceConfiguration traceConfig = TraceConfiguration.Builder().build(); + Trace.enable(traceConfig); + ``` + {{% /tab %}} + {{< /tabs >}} 4. Configure and register the `DatadogTracer`. 
You only need to do it once, usually in your application's `onCreate()` method: - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -import com.datadog.android.trace.GlobalDatadogTracer -import com.datadog.android.trace.DatadogTracing - -GlobalDatadogTracer.registerIfAbsent( - DatadogTracing.newTracerBuilder() - .build() -) -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -import com.datadog.android.trace.GlobalDatadogTracer; -import com.datadog.android.trace.DatadogTracing; - -GlobalDatadogTracer.registerIfAbsent( - DatadogTracing.newTracerBuilder(Datadog.getInstance()).build() -); -``` - -{{% /tab %}} -{{< /tabs >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + import com.datadog.android.trace.GlobalDatadogTracer + import com.datadog.android.trace.DatadogTracing + + GlobalDatadogTracer.registerIfAbsent( + DatadogTracing.newTracerBuilder() + .build() + ) + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + import com.datadog.android.trace.GlobalDatadogTracer; + import com.datadog.android.trace.DatadogTracing; + + GlobalDatadogTracer.registerIfAbsent( + DatadogTracing.newTracerBuilder(Datadog.getInstance()).build() + ); + ``` + {{% /tab %}} + {{< /tabs >}} 5. (Optional) - Set the partial flush threshold to optimize the SDK's workload based on the number of spans your application generates. The library waits until the number of finished spans exceeds the threshold before writing them to disk. Setting this value to `1` writes each span as soon as it finishes. - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val tracer = DatadogTracing.newTracerBuilder() - .withPartialFlushMinSpans(10) - .build() -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -DatadogTracer tracer = DatadogTracing.newTracerBuilder(Datadog.getInstance()) - .withPartialFlushMinSpans(10) - .build(); -``` - -{{% /tab %}} -{{< /tabs >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val tracer = DatadogTracing.newTracerBuilder() + .withPartialFlushMinSpans(10) + .build() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogTracer tracer = DatadogTracing.newTracerBuilder(Datadog.getInstance()) + .withPartialFlushMinSpans(10) + .build(); + ``` + {{% /tab %}} + {{< /tabs >}} 6. Start a custom span using the following method: - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val tracer = GlobalDatadogTracer.get() -val span = tracer.buildSpan("").start() -// Do something ... -// ... -// Then when the span should be closed -span.finish() -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -DatadogTracer tracer = GlobalDatadogTracer.get(); -DatadogSpan span = tracer.buildSpan("").start(); -// Do something ... -// ... -// Then when the span should be closed -span.finish(); -``` - -{{% /tab %}} -{{< /tabs >}} - + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val tracer = GlobalDatadogTracer.get() + val span = tracer.buildSpan("").start() + // Do something ... + // ... + // Then when the span should be closed + span.finish() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogTracer tracer = GlobalDatadogTracer.get(); + DatadogSpan span = tracer.buildSpan("").start(); + // Do something ... + // ... + // Then when the span should be closed + span.finish(); + ``` + {{% /tab %}} + {{< /tabs >}} 7. To use scopes in synchronous calls: - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val span = tracer.buildSpan("").start() -try { - val scope = tracer.activateSpan(span) - scope?.use { - // Do something ... - // ... 
- // Start a new Scope - val childSpan = tracer.buildSpan("").start() - try { - val innerScope = tracer.activateSpan(childSpan).use { innerScope -> - // Do something ... - } - } catch (e: Throwable) { - childSpan.logThrowable(e) - } finally { - childSpan.finish() - } - } -} catch (e: Error) { -} -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -DatadogSpan span = tracer.buildSpan("").start(); -try { - DatadogScope scope = tracer.activateSpan(span); - try { - // Do something ... - // ... - // Start a new Scope - DatadogSpan childSpan = tracer.buildSpan("").start(); - try { - DatadogScope innerScope = tracer.activateSpan(childSpan); - try { - // Do something ... - } - finally { - innerScope.close(); - } - } catch( Throwable e) { - childSpan.logThrowable(e); - } finally { - childSpan.finish(); - } - } - finally { - scope.close(); - } -} catch(Error e){ -} -``` - -{{% /tab %}} -{{< /tabs >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val span = tracer.buildSpan("").start() + try { + val scope = tracer.activateSpan(span) + scope?.use { + // Do something ... + // ... + // Start a new Scope + val childSpan = tracer.buildSpan("").start() + try { + val innerScope = tracer.activateSpan(childSpan).use { innerScope -> + // Do something ... + } + } catch (e: Throwable) { + childSpan.logThrowable(e) + } finally { + childSpan.finish() + } + } + } catch (e: Error) { + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogSpan span = tracer.buildSpan("").start(); + try { + DatadogScope scope = tracer.activateSpan(span); + try { + // Do something ... + // ... + // Start a new Scope + DatadogSpan childSpan = tracer.buildSpan("").start(); + try { + DatadogScope innerScope = tracer.activateSpan(childSpan); + try { + // Do something ... + } + finally { + innerScope.close(); + } + } catch( Throwable e) { + childSpan.logThrowable(e); + } finally { + childSpan.finish(); + } + } + finally { + scope.close(); + } + } catch(Error e){ + } + ``` + {{% /tab %}} + {{< /tabs >}} 8. To use scopes in asynchronous calls: - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val span = tracer.buildSpan("").start() -try { - val scope = tracer.activateSpan(span) - scope.use { - // Do something ... - Thread { - // Step 2: reactivate the Span in the worker thread - tracer.activateSpan(span).use { - // Do something ... - } - }.start() - } -} catch (e: Throwable) { - span.logThrowable(e) -} finally { - span.finish() -} -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -DatadogSpan span = tracer.buildSpan("").start(); -try { - DatadogScope scope = tracer.activateSpan(span); - try { - // Do something ... - new Thread(() ->{ - // Step 2: reactivate the Span in the worker thread - DatadogScope scopeContinuation = tracer.activateSpan(span); - try { - // Do something - } finally { - scope.close(); - } - }).start(); - - } finally { - scope.close(); - } -} catch(Throwable e) { - span.logThrowable(e); -} finally { - span.finish(); -} -``` - -{{% /tab %}} -{{< /tabs >}} - -9. (Optional) To manually distribute traces between your environments, for example frontend to + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val span = tracer.buildSpan("").start() + try { + val scope = tracer.activateSpan(span) + scope.use { + // Do something ... + Thread { + // Step 2: reactivate the Span in the worker thread + tracer.activateSpan(span).use { + // Do something ... 
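+                         // The reactivated scope is closed automatically when this `use` block ends;
+                         // the span itself is finished separately in the outer `finally` block.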
+ } + }.start() + } + } catch (e: Throwable) { + span.logThrowable(e) + } finally { + span.finish() + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogSpan span = tracer.buildSpan("").start(); + try { + DatadogScope scope = tracer.activateSpan(span); + try { + // Do something ... + new Thread(() -> { + // Step 2: reactivate the Span in the worker thread + DatadogScope scopeContinuation = tracer.activateSpan(span); + try { + // Do something + } finally { + scopeContinuation.close(); + } + }).start(); + + } finally { + scope.close(); + } + } catch (Throwable e) { + span.logThrowable(e); + } finally { + span.finish(); + } + ``` + {{% /tab %}} + {{< /tabs >}} +9. (Optional) To manually distribute traces between your environments, for example, frontend to backend: + 1. Inject tracer context in the client request. + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val tracer = GlobalDatadogTracer.get() + val span = tracer.buildSpan("").start() + val tracedRequestBuilder = Request.Builder() + tracer.propagate().inject( + span.context(), + tracedRequestBuilder + ) { builder, key, value -> + builder?.addHeader(key, value) + } + val request = tracedRequestBuilder.build() + // Dispatch the request and finish the span after.
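+        // (The propagation callback above is invoked once for each header the tracer injects into the request.)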
+ ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogTracer tracer = GlobalDatadogTracer.get(); + DatadogSpan span = tracer.buildSpan("").start(); + Request.Builder tracedRequestBuilder = new Request.Builder(); + tracer.propagate().inject( + span.context(), + tracedRequestBuilder, + new Function3(){ + @Override + public Unit invoke(Request.Builder builder, String key, String value) { + builder.addHeader(key, value); + return Unit.INSTANCE; + } + }); + Request request = tracedRequestBuilder.build(); + // Dispatch the request and finish the span after. + ``` + {{% /tab %}} + {{< /tabs >}} + 1. Extract the client tracer context from headers in server code. + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val tracer = GlobalDatadogTracer.get() + val extractedContext = tracer.propagate() + .extract(request) { carrier, classifier -> + val headers = carrier.headers.toMultimap() + .map { it.key to it.value.joinToString(";") } + .toMap() + + for ((key, value) in headers) classifier(key, value) + } + + val serverSpan = tracer.buildSpan("").withParentContext(extractedContext).start() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogTracer tracer = GlobalDatadogTracer.get(); + DatadogSpanContext extractedContext = tracer.propagate() + .extract(request, + new Function2, Unit>() { + @Override + public Unit invoke( + Request carrier, + Function2 classifier + ) { + request.headers().forEach(pair -> { + String key = pair.component1(); + String value = pair.component2(); + + classifier.invoke(key, value); + }); + + return Unit.INSTANCE; + } + }); + DatadogSpan serverSpan = tracer.buildSpan("").withParentContext(extractedContext).start(); + ``` + {{% /tab %}} + {{< /tabs >}} + + **Note**: For code bases using the OkHttp client, Datadog provides + the [implementation below](#okhttp). 10. (Optional) To provide additional tags alongside your span: - -```kotlin -span.setTag("http.url", url) -``` - + ```kotlin + span.setTag("http.url", url) + ``` 11. (Optional) To mark a span as having an error, log it using corresponding methods: - -```kotlin -span.logThrowable(throwable) -``` - -```kotlin -span.logErrorMessage(message) -``` - + ```kotlin + span.logThrowable(throwable) + ``` + ```kotlin + span.logErrorMessage(message) + ``` 12. If you need to modify some attributes in your Span events before batching you can do so by providing an implementation of `SpanEventMapper` when enabling Trace feature: - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val traceConfig = TraceConfiguration.Builder() - // ... - .setEventMapper(spanEventMapper) - .build() -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -TraceConfiguration config = new TraceConfiguration.Builder() - // ... - .setEventMapper(spanEventMapper) - .build(); -``` - -{{% /tab %}} -{{< /tabs >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val traceConfig = TraceConfiguration.Builder() + // ... + .setEventMapper(spanEventMapper) + .build() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + TraceConfiguration config = new TraceConfiguration.Builder() + // ... + .setEventMapper(spanEventMapper) + .build(); + ``` + {{% /tab %}} + {{< /tabs >}} ## Kotlin Extensions @@ -756,45 +696,38 @@ found in the `dd-sdk-android-okhttp` library) as follows: 1. 
Add the Gradle dependency to the `dd-sdk-android-okhttp` library in the module-level `build.gradle` file: - -```groovy -dependencies { - implementation "com.datadoghq:dd-sdk-android-okhttp:x.x.x" -} -``` - + ```groovy + dependencies { + implementation "com.datadoghq:dd-sdk-android-okhttp:x.x.x" + } + ``` 2. Add `DatadogInterceptor` to your `OkHttpClient`: - -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val tracedHosts = listOf("example.com", "example.eu") -val okHttpClient = OkHttpClient.Builder() - .addInterceptor( - DatadogInterceptor.Builder(tracedHosts) - .setTraceSampler(RateBasedSampler(20f)) - .build() - ) - .build() -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -List tracedHosts = Arrays.asList("example.com", "example.eu"); -OkHttpClient okHttpClient = new OkHttpClient.Builder() - .addInterceptor( - new DatadogInterceptor.Builder(tracedHosts) - .setTraceSampler(new RateBasedSampler(20f)) - .build() - ) - .build(); -``` - -{{% /tab %}} -{{< /tabs >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val tracedHosts = listOf("example.com", "example.eu") + val okHttpClient = OkHttpClient.Builder() + .addInterceptor( + DatadogInterceptor.Builder(tracedHosts) + .setTraceSampler(RateBasedSampler(20f)) + .build() + ) + .build() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + List tracedHosts = Arrays.asList("example.com", "example.eu"); + OkHttpClient okHttpClient = new OkHttpClient.Builder() + .addInterceptor( + new DatadogInterceptor.Builder(tracedHosts) + .setTraceSampler(new RateBasedSampler(20f)) + .build() + ) + .build(); + ``` + {{% /tab %}} + {{< /tabs >}} This creates a span around each request processed by the OkHttpClient (matching the provided hosts), with all the relevant information automatically filled (URL, method, status code, error), and @@ -804,11 +737,10 @@ Network traces are sampled with an adjustable sampling rate. A sampling of 100% default. The interceptor tracks requests at the application level. You can also add a `TracingInterceptor` at -the network level to get more details, for example when following redirections. +the network level to get more details; for example, when following redirections. {{< tabs >}} {{% tab "Kotlin" %}} - ```kotlin val tracedHosts = listOf("example.com", "example.eu") val okHttpClient = OkHttpClient.Builder() @@ -824,10 +756,8 @@ val okHttpClient = OkHttpClient.Builder() ) .build() ``` - {{% /tab %}} {{% tab "Java" %}} - ```java List tracedHosts = Arrays.asList("example.com", "example.eu"); OkHttpClient okHttpClient = new OkHttpClient.Builder() @@ -843,11 +773,10 @@ OkHttpClient okHttpClient = new OkHttpClient.Builder() ) .build(); ``` - {{% /tab %}} {{< /tabs >}} -In this case trace sampling decision made by the upstream interceptor for a particular request will +In this case, trace sampling decision made by the upstream interceptor for a particular request will be respected by the downstream interceptor. 
Because of the way the OkHttp Request is executed (using a Thread pool), the request span won't be @@ -857,17 +786,14 @@ method: {{< tabs >}} {{% tab "Kotlin" %}} - ```kotlin val request = Request.Builder() .url(requestUrl) .parentSpan(parentSpan) .build() ``` - {{% /tab %}} {{% tab "Java" %}} - ```java Request.Builder requestBuilder = new Request.Builder() .url(requestUrl); Request request = OkHttpRequestExtKt .parentSpan(requestBuilder, parentSpan) .build(); ``` - {{% /tab %}} {{< /tabs >}} diff --git a/content/en/tracing/trace_collection/custom_instrumentation/opentracing/android.md index 26cd9c5d63e..63f69977b82 100644 --- a/content/en/tracing/trace_collection/custom_instrumentation/opentracing/android.md +++ b/content/en/tracing/trace_collection/custom_instrumentation/opentracing/android.md @@ -34,631 +34,603 @@ If it is not possible to add Open Telemetry to your project, you can use the int
1. Add the Gradle dependency by declaring the library as a dependency in your `build.gradle` file: - -```groovy -dependencies { - implementation "com.datadoghq:dd-sdk-android-trace:2.x.x" -} -``` - + ```groovy + dependencies { + implementation "com.datadoghq:dd-sdk-android-trace:2.x.x" + } + ``` 2. Initialize Datadog SDK with your application context, tracking consent, and the [Datadog client token][4]. For security reasons, you must use a client token: you cannot use [Datadog API keys][5] to configure Datadog SDK as they would be exposed client-side in the Android application APK byte code. For more information about setting up a client token, see the [client token documentation][4]: - -{{< site-region region="us" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).build() - - Datadog.initialize(this, configuration, trackingConsent) + {{< site-region region="us" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -{{< site-region region="eu" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.EU1) - .build() - - Datadog.initialize(this, configuration, trackingConsent) + {{< site-region region="eu" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.EU1) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.EU1) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.EU1) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab 
%}} -{{< /tabs >}} -{{< /site-region >}} + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -{{< site-region region="us3" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US3) - .build() - - Datadog.initialize(this, configuration, trackingConsent) + {{< site-region region="us3" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US3) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US3) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US3) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -{{< site-region region="us5" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US5) - .build() - - Datadog.initialize(this, configuration, trackingConsent) + {{< site-region region="us5" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US5) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US5) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US5) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -{{< site-region region="gov" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US1_FED) - .build() - - Datadog.initialize(this, configuration, trackingConsent) + {{< 
site-region region="gov" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US1_FED) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US1_FED) - .build(); - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US1_FED) + .build(); + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -{{< site-region region="ap1" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.AP1) - .build() - - Datadog.initialize(this, configuration, trackingConsent) + {{< site-region region="ap1" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.AP1) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.AP1) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.AP1) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -{{< site-region region="ap2" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.AP2) - .build() - - Datadog.initialize(this, configuration, trackingConsent) + {{< site-region region="ap2" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.AP2) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } } -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -public class SampleApplication extends 
Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.AP2) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.AP2) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } } -} -``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -To be compliant with the GDPR regulation, the SDK requires the tracking consent value at initialization. -The tracking consent can be one of the following values: -* `TrackingConsent.PENDING`: The SDK starts collecting and batching the data but does not send it to the data -collection endpoint. The SDK waits for the new tracking consent value to decide what to do with the batched data. -* `TrackingConsent.GRANTED`: The SDK starts collecting the data and sends it to the data collection endpoint. -* `TrackingConsent.NOT_GRANTED`: The SDK does not collect any data. You will not be able to manually send any logs, traces, or -RUM events. - -To update the tracking consent after the SDK is initialized, call: `Datadog.setTrackingConsent()`. -The SDK changes its behavior according to the new consent. For example, if the current tracking consent is `TrackingConsent.PENDING` and you update it to: -* `TrackingConsent.GRANTED`: The SDK sends all current batched data and future data directly to the data collection endpoint. -* `TrackingConsent.NOT_GRANTED`: The SDK wipes all batched data and does not collect any future data. - -**Note**: In the credentials required for initialization, your application variant name is also required, and should use your `BuildConfig.FLAVOR` value (or an empty string if you don't have variants). This is important because it enables the right ProGuard `mapping.txt` file to be automatically uploaded at build time to be able to view de-obfuscated RUM error stack traces. For more information see the [guide to uploading Android source mapping files][12]. - -Use the utility method `isInitialized` to check if the SDK is properly initialized: - -```kotlin -if (Datadog.isInitialized()) { - // your code here -} -``` -When writing your application, you can enable development logs by calling the `setVerbosity` method. All internal messages in the library with a priority equal to or higher than the provided level are then logged to Android's Logcat: -```kotlin -Datadog.setVerbosity(Log.INFO) -``` - + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + To be compliant with the GDPR regulation, the SDK requires the tracking consent value at initialization. + The tracking consent can be one of the following values: + * `TrackingConsent.PENDING`: The SDK starts collecting and batching the data but does not send it to the data + collection endpoint. The SDK waits for the new tracking consent value to decide what to do with the batched data. + * `TrackingConsent.GRANTED`: The SDK starts collecting the data and sends it to the data collection endpoint. + * `TrackingConsent.NOT_GRANTED`: The SDK does not collect any data. You will not be able to manually send any logs, traces, or + RUM events. + + To update the tracking consent after the SDK is initialized, call: `Datadog.setTrackingConsent()`. 
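+    For instance, once a user opts in, you might make the following call (a minimal sketch; the value you pass depends on your own consent flow):
+
+    ```kotlin
+    // Hypothetical example: the user accepted tracking in your consent dialog
+    Datadog.setTrackingConsent(TrackingConsent.GRANTED)
+    ```
+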
+    The SDK changes its behavior according to the new consent. For example, if the current tracking consent is `TrackingConsent.PENDING` and you update it to:
+    * `TrackingConsent.GRANTED`: The SDK sends all current batched data and future data directly to the data collection endpoint.
+    * `TrackingConsent.NOT_GRANTED`: The SDK wipes all batched data and does not collect any future data.
+
+    **Note**: In the credentials required for initialization, your application variant name is also required, and should use your `BuildConfig.FLAVOR` value (or an empty string if you don't have variants). This is important because it enables the right ProGuard `mapping.txt` file to be automatically uploaded at build time to be able to view de-obfuscated RUM error stack traces. For more information, see the [guide to uploading Android source mapping files][12].
+
+    Use the utility method `isInitialized` to check if the SDK is properly initialized:
+
+    ```kotlin
+    if (Datadog.isInitialized()) {
+        // your code here
+    }
+    ```
+    When writing your application, you can enable development logs by calling the `setVerbosity` method. All internal messages in the library with a priority equal to or higher than the provided level are then logged to Android's Logcat:
+    ```kotlin
+    Datadog.setVerbosity(Log.INFO)
+    ```
3. Configure and enable Trace feature:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val traceConfig = TraceConfiguration.Builder().build()
-Trace.enable(traceConfig)
-```
-{{% /tab %}}
-
-{{% tab "Java" %}}
-```java
-TraceConfiguration traceConfig = TraceConfiguration.Builder().build();
-Trace.enable(traceConfig);
-```
-{{% /tab %}}
-{{< /tabs >}}
-
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val traceConfig = TraceConfiguration.Builder().build()
+    Trace.enable(traceConfig)
+    ```
+    {{% /tab %}}
+
+    {{% tab "Java" %}}
+    ```java
+    TraceConfiguration traceConfig = new TraceConfiguration.Builder().build();
+    Trace.enable(traceConfig);
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
4. Configure and register the Android Tracer. You only need to do it once, usually in your application's `onCreate()` method:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-import io.opentracing.util.GlobalTracer
-
-val tracer = AndroidTracer.Builder().build()
-GlobalTracer.registerIfAbsent(tracer)
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-import io.opentracing.util.GlobalTracer;
-
-AndroidTracer tracer = new AndroidTracer.Builder().build();
-GlobalTracer.registerIfAbsent(tracer);
-```
-{{% /tab %}}
-{{< /tabs >}}
-
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    import io.opentracing.util.GlobalTracer
+
+    val tracer = AndroidTracer.Builder().build()
+    GlobalTracer.registerIfAbsent(tracer)
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    import io.opentracing.util.GlobalTracer;
+
+    AndroidTracer tracer = new AndroidTracer.Builder().build();
+    GlobalTracer.registerIfAbsent(tracer);
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
5. (Optional) - Set the partial flush threshold to optimize the SDK's workload based on the number of spans your application generates. The library waits until the number of finished spans exceeds the threshold before writing them to disk. Setting this value to `1` writes each span as soon as it finishes.
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-
-```kotlin
-val tracer = AndroidTracer.Builder()
-    .setPartialFlushThreshold(10)
-    .build()
-```
-
-{{% /tab %}}
-{{% tab "Java" %}}
-
-```java
-AndroidTracer tracer = new AndroidTracer.Builder()
-    .setPartialFlushThreshold(10)
-    .build();
-```
-{{% /tab %}}
-{{< /tabs >}}
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val tracer = AndroidTracer.Builder()
+        .setPartialFlushThreshold(10)
+        .build()
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    AndroidTracer tracer = new AndroidTracer.Builder()
+        .setPartialFlushThreshold(10)
+        .build();
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
6. Start a custom span using the following method:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val tracer = GlobalTracer.get()
-val span = tracer.buildSpan("").start()
-// Do something ...
-// ...
-// Then when the span should be closed
-span.finish()
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-GlobalTracer tracer = GlobalTracer.get();
-Span span = tracer.buildSpan("").start();
-// Do something ...
-// ...
-// Then when the span should be closed
-span.finish();
-```
-{{% /tab %}}
-{{< /tabs >}}
-
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val tracer = GlobalTracer.get()
+    val span = tracer.buildSpan("").start()
+    // Do something ...
+    // ...
+    // Then when the span should be closed
+    span.finish()
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    GlobalTracer tracer = GlobalTracer.get();
+    Span span = tracer.buildSpan("").start();
+    // Do something ...
+    // ...
+    // Then when the span should be closed
+    span.finish();
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
7. To use scopes in synchronous calls:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val span = tracer.buildSpan("").start()
-try {
-    val scope = tracer.activateSpan(span)
-    scope.use {
-        // Do something ...
-        // ...
-        // Start a new Scope
-        val childSpan = tracer.buildSpan("").start()
-        try {
-            tracer.activateSpan(childSpan).use {
-                // Do something ...
-            }
-        } catch(e: Error) {
-            childSpan.error(e)
-        } finally {
-            childSpan.finish()
-        }
-    }
-} catch(e: Throwable) {
-    AndroidTracer.logThrowable(span, e)
-} finally {
-    span.finish()
-}
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-Span = tracer.buildSpan("").start();
-try {
-    Scope scope = tracer.activateSpan(span);
-    try {
-        // Do something ...
-        // ...
-        // Start a new Scope
-        Span childSpan = tracer.buildSpan("").start();
-        try {
-            Scope innerScope = tracer.activateSpan(childSpan);
-            try {
-                // Do something ...
-            } finally {
-                innerScope.close();
-            }
-        } catch(Throwable e) {
-            AndroidTracer.logThrowable(childSpan, e);
-        } finally {
-            childSpan.finish();
-        }
-    }
-    finally {
-        scope.close();
-    }
-} catch(Error e) {
-    AndroidTracer.logThrowable(span, e);
-} finally {
-    span.finish();
-}
-```
-{{% /tab %}}
-{{< /tabs >}}
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val span = tracer.buildSpan("").start()
+    try {
+        val scope = tracer.activateSpan(span)
+        scope.use {
+            // Do something ...
+            // ...
+            // Start a new Scope
+            val childSpan = tracer.buildSpan("").start()
+            try {
+                tracer.activateSpan(childSpan).use {
+                    // Do something ...
+                }
+            } catch(e: Error) {
+                childSpan.error(e)
+            } finally {
+                childSpan.finish()
+            }
+        }
+    } catch(e: Throwable) {
+        AndroidTracer.logThrowable(span, e)
+    } finally {
+        span.finish()
+    }
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    Span span = tracer.buildSpan("").start();
+    try {
+        Scope scope = tracer.activateSpan(span);
+        try {
+            // Do something ...
+            // ...
+            // Start a new Scope
+            Span childSpan = tracer.buildSpan("").start();
+            try {
+                Scope innerScope = tracer.activateSpan(childSpan);
+                try {
+                    // Do something ...
+                } finally {
+                    innerScope.close();
+                }
+            } catch(Throwable e) {
+                AndroidTracer.logThrowable(childSpan, e);
+            } finally {
+                childSpan.finish();
+            }
+        }
+        finally {
+            scope.close();
+        }
+    } catch(Error e) {
+        AndroidTracer.logThrowable(span, e);
+    } finally {
+        span.finish();
+    }
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
8. To use scopes in asynchronous calls:
-
 {{< tabs >}}
 {{% tab "Kotlin" %}}
-```kotlin
-val span = tracer.buildSpan("").start()
-try{
-    val scope = tracer.activateSpan(span)
-    scope.use {
-        // Do something ...
-        doAsyncWork {
-            // Step 2: reactivate the Span in the worker thread
-            val scopeContinuation = tracer.scopeManager().activate(span)
-            scopeContinuation.use {
-                // Do something ...
-            }
-        }
-    }
-} catch(e: Throwable) {
-    AndroidTracer.logThrowable(span, e)
-} finally {
-    span.finish()
-}
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-Span span = tracer.buildSpan("").start();
-try {
-    Scope scope = tracer.activateSpan(span);
-    try {
-        // Do something ...
-        new Thread(() -> {
-            // Step 2: reactivate the Span in the worker thread
-            Scope scopeContinuation = tracer.scopeManager().activate(span);
-            try {
-                // Do something
-            } finally {
-                scope.close();
-            }
-        }).start();
-    } finally {
-        scope.close();
-    }
-} catch (Throwable e){
-    AndroidTracer.logThrowable(span, e);
-} finally {
-    span.finish();
-}
-```
-{{% /tab %}}
-{{< /tabs >}}
-
+    ```kotlin
+    val span = tracer.buildSpan("").start()
+    try{
+        val scope = tracer.activateSpan(span)
+        scope.use {
+            // Do something ...
+            doAsyncWork {
+                // Step 2: reactivate the Span in the worker thread
+                val scopeContinuation = tracer.scopeManager().activate(span)
+                scopeContinuation.use {
+                    // Do something ...
+                }
+            }
+        }
+    } catch(e: Throwable) {
+        AndroidTracer.logThrowable(span, e)
+    } finally {
+        span.finish()
+    }
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    Span span = tracer.buildSpan("").start();
+    try {
+        Scope scope = tracer.activateSpan(span);
+        try {
+            // Do something ...
+            new Thread(() -> {
+                // Step 2: reactivate the Span in the worker thread
+                Scope scopeContinuation = tracer.scopeManager().activate(span);
+                try {
+                    // Do something
+                } finally {
+                    scopeContinuation.close();
+                }
+            }).start();
+        } finally {
+            scope.close();
+        }
+    } catch (Throwable e){
+        AndroidTracer.logThrowable(span, e);
+    } finally {
+        span.finish();
+    }
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
9. (Optional) To manually distribute traces between your environments, for example, frontend to backend:
-
- a. Inject tracer context in the client request.
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val tracer = GlobalTracer.get()
-val span = tracer.buildSpan("").start()
-val tracedRequestBuilder = Request.Builder()
-tracer.inject(span.context(), Format.Builtin.TEXT_MAP_INJECT,
-    TextMapInject { key, value ->
-        tracedRequestBuilder.addHeader(key, value)
-    }
-)
-val request = tracedRequestBuilder.build()
-// Dispatch the request and finish the span after.
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-Tracer tracer = GlobalTracer.get();
-Span span = tracer.buildSpan("").start();
-Request.Builder tracedRequestBuilder = new Request.Builder();
-tracer.inject(
-    span.context(),
-    Format.Builtin.TEXT_MAP_INJECT,
-    new TextMapInject() {
-        @Override
-        public void put(String key, String value) {
-            tracedRequestBuilder.addHeader(key, value);
-        }
-    });
-Request request = tracedRequestBuilder.build();
-// Dispatch the request and finish the span after
-```
-{{% /tab %}}
-{{< /tabs >}}
-
-b. Extract the client tracer context from headers in server code.
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val tracer = GlobalTracer.get()
-val extractedContext = tracer.extract(
-    Format.Builtin.TEXT_MAP_EXTRACT,
-    TextMapExtract {
-        request.headers().toMultimap()
-            .map { it.key to it.value.joinToString(";") }
-            .toMap()
-            .entrySet()
-            .iterator()
-    }
-  )
-val serverSpan = tracer.buildSpan("").asChildOf(extractedContext).start()
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-Tracer tracer = GlobalTracer.get();
-SpanContext extractedContext = tracer.extract(
-    Format.Builtin.TEXT_MAP_EXTRACT,
-    new TextMapExtract() {
-        @Override
-        public Iterator> iterator() {
-            return request.headers().toMultimap()
-                .entrySet()
-                .stream()
-                .collect(
-                    Collectors.toMap(
-                        Map.Entry::getKey,
-                        entry -> String.join(";", entry.getValue())
-                    )
-                )
-                .entrySet()
-                .iterator();
-        }
-    });
-Span serverSpan = tracer.buildSpan("").asChildOf(extractedContext).start();
-```
-{{% /tab %}}
-{{< /tabs >}}
-
-**Note**: For code bases using the OkHttp client, Datadog provides the [implementation below](#okhttp).
+    1. Inject tracer context in the client request.
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val tracer = GlobalTracer.get()
+    val span = tracer.buildSpan("").start()
+    val tracedRequestBuilder = Request.Builder()
+    tracer.inject(span.context(), Format.Builtin.TEXT_MAP_INJECT,
+        TextMapInject { key, value ->
+            tracedRequestBuilder.addHeader(key, value)
+        }
+    )
+    val request = tracedRequestBuilder.build()
+    // Dispatch the request and finish the span after.
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    Tracer tracer = GlobalTracer.get();
+    Span span = tracer.buildSpan("").start();
+    Request.Builder tracedRequestBuilder = new Request.Builder();
+    tracer.inject(
+        span.context(),
+        Format.Builtin.TEXT_MAP_INJECT,
+        new TextMapInject() {
+            @Override
+            public void put(String key, String value) {
+                tracedRequestBuilder.addHeader(key, value);
+            }
+        });
+    Request request = tracedRequestBuilder.build();
+    // Dispatch the request and finish the span after
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
+    1. Extract the client tracer context from headers in server code.
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val tracer = GlobalTracer.get()
+    val extractedContext = tracer.extract(
+        Format.Builtin.TEXT_MAP_EXTRACT,
+        TextMapExtract {
+            request.headers().toMultimap()
+                .map { it.key to it.value.joinToString(";") }
+                .toMap()
+                .entrySet()
+                .iterator()
+        }
+    )
+    val serverSpan = tracer.buildSpan("").asChildOf(extractedContext).start()
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    Tracer tracer = GlobalTracer.get();
+    SpanContext extractedContext = tracer.extract(
+        Format.Builtin.TEXT_MAP_EXTRACT,
+        new TextMapExtract() {
+            @Override
+            public Iterator<Map.Entry<String, String>> iterator() {
+                return request.headers().toMultimap()
+                    .entrySet()
+                    .stream()
+                    .collect(
+                        Collectors.toMap(
+                            Map.Entry::getKey,
+                            entry -> String.join(";", entry.getValue())
+                        )
+                    )
+                    .entrySet()
+                    .iterator();
+            }
+        });
+    Span serverSpan = tracer.buildSpan("").asChildOf(extractedContext).start();
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
+
+    **Note**: For code bases using the OkHttp client, Datadog provides the [implementation below](#okhttp).
10. (Optional) To provide additional tags alongside your span:
-
-```kotlin
-span.setTag("http.url", url)
-```
-
+    ```kotlin
+    span.setTag("http.url", url)
+    ```
11. (Optional) To mark a span as having an error, log it using OpenTracing tags:
+    ```kotlin
+    span.log(mapOf(Fields.ERROR_OBJECT to throwable))
+    ```
+    ```kotlin
+    span.log(mapOf(Fields.MESSAGE to errorMessage))
+    ```
+    You can also use one of the following helper methods in AndroidTracer:

-```kotlin
-span.log(mapOf(Fields.ERROR_OBJECT to throwable))
-```
-```kotlin
-span.log(mapOf(Fields.MESSAGE to errorMessage))
-```
-You can also use one of the following helper method in AndroidTracer:
-
-```kotlin
-AndroidTracer.logThrowable(span, throwable)
-```
-```kotlin
-AndroidTracer.logErrorMessage(span, message)
-```
-
+    ```kotlin
+    AndroidTracer.logThrowable(span, throwable)
+    ```
+    ```kotlin
+    AndroidTracer.logErrorMessage(span, message)
+    ```
12. If you need to modify some attributes in your Span events before batching, you can do so by providing an implementation of `SpanEventMapper` when enabling Trace feature:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val traceConfig = TraceConfiguration.Builder()
-    // ...
-    .setEventMapper(spanEventMapper)
-    .build()
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-TraceConfiguration config = new TraceConfiguration.Builder()
-    // ...
-    .setEventMapper(spanEventMapper)
-    .build();
-```
-{{% /tab %}}
-{{< /tabs >}}
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val traceConfig = TraceConfiguration.Builder()
+        // ...
+        .setEventMapper(spanEventMapper)
+        .build()
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    TraceConfiguration config = new TraceConfiguration.Builder()
+        // ...
+        .setEventMapper(spanEventMapper)
+        .build();
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}

## Kotlin Extensions

@@ -730,47 +702,44 @@ In addition to manual tracing, the Datadog SDK provides the following integratio

If you want to trace your OkHttp requests, you can add the provided [Interceptor][6] (which can be found in the `dd-sdk-android-okhttp` library) as follows:

1. Add the Gradle dependency to the `dd-sdk-android-okhttp` library in the module-level `build.gradle` file:
-
-    ```groovy
-    dependencies {
-        implementation "com.datadoghq:dd-sdk-android-okhttp:x.x.x"
-    }
-    ```
-
+    ```groovy
+    dependencies {
+        implementation "com.datadoghq:dd-sdk-android-okhttp:x.x.x"
+    }
+    ```
2. Add `DatadogInterceptor` to your `OkHttpClient`:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val tracedHosts = listOf("example.com", "example.eu")
-val okHttpClient = OkHttpClient.Builder()
-    .addInterceptor(
-        DatadogInterceptor.Builder(tracedHosts)
-            .setTraceSampler(RateBasedSampler(20f))
-            .build()
-    )
-    .build()
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-List tracedHosts = Arrays.asList("example.com", "example.eu");
-OkHttpClient okHttpClient = new OkHttpClient.Builder()
-    .addInterceptor(
-        new DatadogInterceptor.Builder(tracedHosts)
-            .setTraceSampler(new RateBasedSampler(20f))
-            .build()
-    )
-    .build();
-```
-{{% /tab %}}
-{{< /tabs >}}
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val tracedHosts = listOf("example.com", "example.eu")
+    val okHttpClient = OkHttpClient.Builder()
+        .addInterceptor(
+            DatadogInterceptor.Builder(tracedHosts)
+                .setTraceSampler(RateBasedSampler(20f))
+                .build()
+        )
+        .build()
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    List<String> tracedHosts = Arrays.asList("example.com", "example.eu");
+    OkHttpClient okHttpClient = new OkHttpClient.Builder()
+        .addInterceptor(
+            new DatadogInterceptor.Builder(tracedHosts)
+                .setTraceSampler(new RateBasedSampler(20f))
+                .build()
+        )
+        .build();
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}

This creates a span around each request processed by the OkHttpClient (matching the provided hosts), with all the relevant information automatically filled (URL, method, status code, error), and propagates the tracing information to your backend to get a unified trace within Datadog.

Network traces are sampled with an adjustable sampling rate. A sampling of 20% is applied by default.

-The interceptor tracks requests at the application level. You can also add a `TracingInterceptor` at the network level to get more details, for example when following redirections.
+The interceptor tracks requests at the application level. You can also add a `TracingInterceptor` at the network level to get more details; for example, when following redirections.

{{< tabs >}}
{{% tab "Kotlin" %}}
@@ -809,7 +778,7 @@ OkHttpClient okHttpClient = new OkHttpClient.Builder()
{{% /tab %}}
{{< /tabs >}}

-In this case trace sampling decision made by the upstream interceptor for a particular request will be respected by the downstream interceptor.
+In this case, the trace sampling decision made by the upstream interceptor for a particular request will be respected by the downstream interceptor.

Because of the way the OkHttp Request is executed (using a Thread pool), the request span won't be automatically linked with the span that triggered the request.
You can manually provide a parent span in the OkHttp `Request.Builder` by using the `Request.Builder.parentSpan` extension method, as follows:

@@ -841,152 +810,148 @@ Request request = OkHttpRequestExtKt
To provide a continuous trace inside an RxJava stream, you need to follow the steps below:

1. Add the [OpenTracing for RxJava][8] dependency into your project and follow the **Readme** file
-   for instructions. For example for a continuous trace you just have to add:
-```kotlin
-    TracingRxJava3Utils.enableTracing(GlobalTracer.get())
-```
-2. Then in your project open a scope when the Observable is subscribed and close it when it completes. Any span
+   for instructions. For example, for a continuous trace, you have to add:
+   ```kotlin
+   TracingRxJava3Utils.enableTracing(GlobalTracer.get())
+   ```
+2. Then, in your project, open a scope when the Observable is subscribed and close it when it completes. 
Any span created inside the stream operators will be displayed inside this scope (parent Span): - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -var spanScope: Scope? = null -Single.fromSupplier { } - .subscribeOn(Schedulers.io()) - .map { - val span = GlobalTracer.get().buildSpan("").start() - // ... - span.finish() - } - .doOnSubscribe { - val span = GlobalTracer.get() - .buildSpan("") - .start() - spanScope = GlobalTracer.get().scopeManager().activate(span) - } - .doFinally { - GlobalTracer.get().scopeManager().activeSpan()?.let { - it.finish() - } - spanScope?.close() - } -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -ThreadLocal scopeStorage = new ThreadLocal<>(); -... -Single.fromSupplier({}) - .subscribeOn(Schedulers.io()) - .map(data -> { - final Span span = GlobalTracer.get().buildSpan("").start(); - // ... - span.finish(); - // ... - }) - .doOnSubscribe(disposable -> { - final Span span = GlobalTracer.get().buildSpan("").start(); - Scope spanScope = GlobalTracer.get().scopeManager().activate(span); - scopeStorage.set(spanScope); - }) - .doFinally(() -> { - final Span activeSpan = GlobalTracer.get().scopeManager().activeSpan(); - if (activeSpan != null) { - activeSpan.finish(); - } - Scope spanScope = scopeStorage.get(); - if (spanScope != null) { - spanScope.close(); - scopeStorage.remove(); - } - }) - }; -``` -{{% /tab %}} -{{< /tabs >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + var spanScope: Scope? = null + Single.fromSupplier { } + .subscribeOn(Schedulers.io()) + .map { + val span = GlobalTracer.get().buildSpan("").start() + // ... + span.finish() + } + .doOnSubscribe { + val span = GlobalTracer.get() + .buildSpan("") + .start() + spanScope = GlobalTracer.get().scopeManager().activate(span) + } + .doFinally { + GlobalTracer.get().scopeManager().activeSpan()?.let { + it.finish() + } + spanScope?.close() + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + ThreadLocal scopeStorage = new ThreadLocal<>(); + ... + Single.fromSupplier({}) + .subscribeOn(Schedulers.io()) + .map(data -> { + final Span span = GlobalTracer.get().buildSpan("").start(); + // ... + span.finish(); + // ... + }) + .doOnSubscribe(disposable -> { + final Span span = GlobalTracer.get().buildSpan("").start(); + Scope spanScope = GlobalTracer.get().scopeManager().activate(span); + scopeStorage.set(spanScope); + }) + .doFinally(() -> { + final Span activeSpan = GlobalTracer.get().scopeManager().activeSpan(); + if (activeSpan != null) { + activeSpan.finish(); + } + Scope spanScope = scopeStorage.get(); + if (spanScope != null) { + spanScope.close(); + scopeStorage.remove(); + } + }) + }; + ``` + {{% /tab %}} + {{< /tabs >}} ### RxJava + Retrofit For a continuous trace inside a RxJava stream that uses Retrofit for the network requests: -1. Configure the [Datadog Interceptor](#okhttp) +1. Configure the [Datadog Interceptor](#okhttp). 2. 
Use the [Retrofit RxJava][9] adapters to use synchronous Observables for the network requests: - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -Retrofit.Builder() - .baseUrl("") - .addCallAdapterFactory(RxJava3CallAdapterFactory.createSynchronous()) - .client(okHttpClient) - .build() -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -new Retrofit.Builder() - .baseUrl("") - .addCallAdapterFactory(RxJava3CallAdapterFactory.createSynchronous()) - .client(okHttpClient) - .build(); - ``` -{{% /tab %}} -{{< /tabs >}} - + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + Retrofit.Builder() + .baseUrl("") + .addCallAdapterFactory(RxJava3CallAdapterFactory.createSynchronous()) + .client(okHttpClient) + .build() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + new Retrofit.Builder() + .baseUrl("") + .addCallAdapterFactory(RxJava3CallAdapterFactory.createSynchronous()) + .client(okHttpClient) + .build(); + ``` + {{% /tab %}} + {{< /tabs >}} 3. Open a scope around your Rx stream as follows: - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -var spanScope: Scope? = null -remoteDataSource.getData(query) - .subscribeOn(Schedulers.io()) - .map { - // ... - } - .doOnSuccess { - localDataSource.persistData(it) - } - .doOnSubscribe { - val span = GlobalTracer.get().buildSpan("").start() - spanScope = GlobalTracer.get().scopeManager().activate(span) - } - .doFinally { - GlobalTracer.get().scopeManager().activeSpan()?.let { - it.finish() - } - spanScope?.close() - } -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -ThreadLocal scopeStorage = new ThreadLocal<>(); -... -remoteDataSource.getData(query) - .subscribeOn(Schedulers.io()) - .map(data -> { /*...*/ }) - .doOnSuccess(data -> { - localDataSource.persistData(data); - }) - .doOnSubscribe(disposable -> { - final Span span = GlobalTracer.get().buildSpan("").start(); - Scope spanScope = GlobalTracer.get().scopeManager().activate(span); - scopeStorage.set(spanScope); - }) - .doFinally(() -> { - final Span activeSpan = GlobalTracer.get().scopeManager().activeSpan(); - if (activeSpan != null) { - activeSpan.finish(); - } - Scope spanScope = scopeStorage.get(); - if (spanScope != null) { - spanScope.close(); - scopeStorage.remove(); - } - }); - ``` -{{% /tab %}} -{{< /tabs >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + var spanScope: Scope? = null + remoteDataSource.getData(query) + .subscribeOn(Schedulers.io()) + .map { + // ... + } + .doOnSuccess { + localDataSource.persistData(it) + } + .doOnSubscribe { + val span = GlobalTracer.get().buildSpan("").start() + spanScope = GlobalTracer.get().scopeManager().activate(span) + } + .doFinally { + GlobalTracer.get().scopeManager().activeSpan()?.let { + it.finish() + } + spanScope?.close() + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + ThreadLocal scopeStorage = new ThreadLocal<>(); + ... 
+ remoteDataSource.getData(query) + .subscribeOn(Schedulers.io()) + .map(data -> { /*...*/ }) + .doOnSuccess(data -> { + localDataSource.persistData(data); + }) + .doOnSubscribe(disposable -> { + final Span span = GlobalTracer.get().buildSpan("").start(); + Scope spanScope = GlobalTracer.get().scopeManager().activate(span); + scopeStorage.set(spanScope); + }) + .doFinally(() -> { + final Span activeSpan = GlobalTracer.get().scopeManager().activeSpan(); + if (activeSpan != null) { + activeSpan.finish(); + } + Scope spanScope = scopeStorage.get(); + if (spanScope != null) { + spanScope.close(); + scopeStorage.remove(); + } + }); + ``` + {{% /tab %}} + {{< /tabs >}} ## Batch collection diff --git a/content/en/tracing/trace_collection/dd_libraries/android.md b/content/en/tracing/trace_collection/dd_libraries/android.md index ec17deb0499..41225017f09 100644 --- a/content/en/tracing/trace_collection/dd_libraries/android.md +++ b/content/en/tracing/trace_collection/dd_libraries/android.md @@ -28,670 +28,646 @@ Send [traces][1] to Datadog from your Android applications with [Datadog's `dd-s ## Setup 1. Add the Gradle dependency by declaring the library as a dependency in your `build.gradle` file: - -```groovy -dependencies { - implementation "com.datadoghq:dd-sdk-android-trace:x.x.x" -} -``` - + ```groovy + dependencies { + implementation "com.datadoghq:dd-sdk-android-trace:x.x.x" + } + ``` 2. Initialize Datadog SDK with your application context, tracking consent, and the [Datadog client token][4]. For security reasons, you must use a client token: you cannot use [Datadog API keys][5] to configure Datadog SDK as they would be exposed client-side in the Android application APK byte code. For more information about setting up a client token, see the [client token documentation][4]: + {{< site-region region="us" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` -{{< site-region region="us" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).build() - - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .build(); - - Datadog.initialize(this, configuration, trackingConsent); - } -} -``` - -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="eu" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.EU1) - .build() - - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.EU1) - .build(); - - Datadog.initialize(this, 
configuration, trackingConsent); - } -} -``` - -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="us3" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US3) - .build() - - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US3) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); - } -} -``` - -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="us5" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US5) - .build() - - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` - -{{% /tab %}} -{{% tab "Java" %}} - -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US5) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); - } -} -``` - -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -{{< site-region region="gov" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} + {{% /tab %}} + {{% tab "Java" %}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.US1_FED) - .build() + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .build(); - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` -{{% /tab %}} -{{% tab "Java" %}} + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.US1_FED) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); - } -} -``` + {{< site-region region="eu" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.EU1) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + {{% /tab %}} + {{% tab "Java" %}} + + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.EU1) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` -{{< site-region 
region="ap1" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.AP1) - .build() + {{< site-region region="us3" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US3) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` + {{% /tab %}} + {{% tab "Java" %}} + + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US3) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` -{{% /tab %}} -{{% tab "Java" %}} + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.AP1) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); - } -} -``` + {{< site-region region="us5" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US5) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} + {{% /tab %}} + {{% tab "Java" %}} + + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US5) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` -{{< site-region region="ap2" >}} -{{< tabs >}} -{{% tab "Kotlin" %}} + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -```kotlin -class SampleApplication : Application() { - override fun onCreate() { - super.onCreate() - val configuration = Configuration.Builder( - clientToken = "", - env = "", - variant = "" - ).useSite(DatadogSite.AP2) - .build() + {{< site-region region="gov" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.US1_FED) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` - Datadog.initialize(this, configuration, trackingConsent) - } -} -``` + {{% /tab %}} + {{% tab "Java" %}} + + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.US1_FED) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` -{{% /tab %}} -{{% tab "Java" %}} + {{% /tab %}} + {{< /tabs 
>}} + {{< /site-region >}} -```java -public class SampleApplication extends Application { - @Override - public void onCreate() { - super.onCreate(); - Configuration configuration = new Configuration.Builder("", "", "") - .useSite(DatadogSite.AP2) - .build(); - - Datadog.initialize(this, configuration, trackingConsent); - } -} -``` + {{< site-region region="ap1" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.AP1) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` -{{% /tab %}} -{{< /tabs >}} -{{< /site-region >}} - -To be compliant with the GDPR regulation, the SDK requires the tracking consent value at -initialization. -The tracking consent can be one of the following values: - -* `TrackingConsent.PENDING`: The SDK starts collecting and batching the data but does not send it to - the data - collection endpoint. The SDK waits for the new tracking consent value to decide what to do with - the batched data. -* `TrackingConsent.GRANTED`: The SDK starts collecting the data and sends it to the data collection - endpoint. -* `TrackingConsent.NOT_GRANTED`: The SDK does not collect any data. You will not be able to manually - send any logs, traces, or - RUM events. - -To update the tracking consent after the SDK is initialized, call: -`Datadog.setTrackingConsent()`. -The SDK changes its behavior according to the new consent. For example, if the current tracking -consent is `TrackingConsent.PENDING` and you update it to: - -* `TrackingConsent.GRANTED`: The SDK sends all current batched data and future data directly to the - data collection endpoint. -* `TrackingConsent.NOT_GRANTED`: The SDK wipes all batched data and does not collect any future - data. - -**Note**: In the credentials required for initialization, your application variant name is also -required, and should use your `BuildConfig.FLAVOR` value (or an empty string if you don't have -variants). This is important because it enables the right ProGuard `mapping.txt` file to be -automatically uploaded at build time to be able to view de-obfuscated RUM error stack traces. For -more information see the [guide to uploading Android source mapping files][7]. - -Use the utility method `isInitialized` to check if the SDK is properly initialized: + {{% /tab %}} + {{% tab "Java" %}} + + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.AP1) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` -```kotlin -if (Datadog.isInitialized()) { - // your code here -} -``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} -When writing your application, you can enable development logs by calling the `setVerbosity` method. 
-All internal messages in the library with a priority equal to or higher than the provided level are -then logged to Android's Logcat: + {{< site-region region="ap2" >}} + {{< tabs >}} + {{% tab "Kotlin" %}} + + ```kotlin + class SampleApplication : Application() { + override fun onCreate() { + super.onCreate() + val configuration = Configuration.Builder( + clientToken = "", + env = "", + variant = "" + ).useSite(DatadogSite.AP2) + .build() + + Datadog.initialize(this, configuration, trackingConsent) + } + } + ``` -```kotlin -Datadog.setVerbosity(Log.INFO) -``` + {{% /tab %}} + {{% tab "Java" %}} + + ```java + public class SampleApplication extends Application { + @Override + public void onCreate() { + super.onCreate(); + Configuration configuration = new Configuration.Builder("", "", "") + .useSite(DatadogSite.AP2) + .build(); + + Datadog.initialize(this, configuration, trackingConsent); + } + } + ``` + {{% /tab %}} + {{< /tabs >}} + {{< /site-region >}} + + To be compliant with the GDPR regulation, the SDK requires the tracking consent value at + initialization. + The tracking consent can be one of the following values: + + * `TrackingConsent.PENDING`: The SDK starts collecting and batching the data but does not send it to + the data + collection endpoint. The SDK waits for the new tracking consent value to decide what to do with + the batched data. + * `TrackingConsent.GRANTED`: The SDK starts collecting the data and sends it to the data collection + endpoint. + * `TrackingConsent.NOT_GRANTED`: The SDK does not collect any data. You will not be able to manually + send any logs, traces, or + RUM events. + + To update the tracking consent after the SDK is initialized, call: + `Datadog.setTrackingConsent()`. + The SDK changes its behavior according to the new consent. For example, if the current tracking + consent is `TrackingConsent.PENDING` and you update it to: + + * `TrackingConsent.GRANTED`: The SDK sends all current batched data and future data directly to the + data collection endpoint. + * `TrackingConsent.NOT_GRANTED`: The SDK wipes all batched data and does not collect any future + data. + + **Note**: In the credentials required for initialization, your application variant name is also + required, and should use your `BuildConfig.FLAVOR` value (or an empty string if you don't have + variants). This is important because it enables the right ProGuard `mapping.txt` file to be + automatically uploaded at build time to be able to view de-obfuscated RUM error stack traces. For + more information see the [guide to uploading Android source mapping files][7]. + + Use the utility method `isInitialized` to check if the SDK is properly initialized: + + ```kotlin + if (Datadog.isInitialized()) { + // your code here + } + ``` + + When writing your application, you can enable development logs by calling the `setVerbosity` method. + All internal messages in the library with a priority equal to or higher than the provided level are + then logged to Android's Logcat: + + ```kotlin + Datadog.setVerbosity(Log.INFO) + ``` 3. 
Configure and enable Trace feature: - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -val traceConfig = TraceConfiguration.Builder().build() -Trace.enable(traceConfig) -``` -{{% /tab %}} - -{{% tab "Java" %}} -```java -TraceConfiguration traceConfig = new TraceConfiguration.Builder().build(); -Trace.enable(traceConfig); -``` -{{% /tab %}} -{{< /tabs >}} - + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val traceConfig = TraceConfiguration.Builder().build() + Trace.enable(traceConfig) + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + TraceConfiguration traceConfig = new TraceConfiguration.Builder().build(); + Trace.enable(traceConfig); + ``` + {{% /tab %}} + {{< /tabs >}} 4. Configure and register the `DatadogTracer`. You only need to do it once, usually in your application's `onCreate()` method: - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -import com.datadog.android.trace.GlobalDatadogTracer -import com.datadog.android.trace.DatadogTracing - -GlobalDatadogTracer.registerIfAbsent( - DatadogTracing.newTracerBuilder() - .build() -) -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -import com.datadog.android.trace.GlobalDatadogTracer; -import com.datadog.android.trace.DatadogTracing; - -GlobalDatadogTracer.registerIfAbsent( - DatadogTracing.newTracerBuilder(Datadog.getInstance()) - .build() -); -``` -{{% /tab %}} -{{< /tabs >}} - + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + import com.datadog.android.trace.GlobalDatadogTracer + import com.datadog.android.trace.DatadogTracing + + GlobalDatadogTracer.registerIfAbsent( + DatadogTracing.newTracerBuilder() + .build() + ) + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + import com.datadog.android.trace.GlobalDatadogTracer; + import com.datadog.android.trace.DatadogTracing; + + GlobalDatadogTracer.registerIfAbsent( + DatadogTracing.newTracerBuilder(Datadog.getInstance()) + .build() + ); + ``` + {{% /tab %}} + {{< /tabs >}} 5. (Optional) - Set the partial flush threshold to optimize the SDK's workload based on the number of spans your application generates. The library waits until the number of finished spans exceeds the threshold before writing them to disk. Setting this value to `1` writes each span as soon as it finishes. + {{< tabs >}} + {{% tab "Kotlin" %}} -{{< tabs >}} -{{% tab "Kotlin" %}} - -```kotlin -val tracer = DatadogTracing.newTracerBuilder() - .withPartialFlushMinSpans(10) - .build() -``` + ```kotlin + val tracer = DatadogTracing.newTracerBuilder() + .withPartialFlushMinSpans(10) + .build() + ``` -{{% /tab %}} -{{% tab "Java" %}} - -```java -DatadogTracer tracer = DatadogTracing.newTracerBuilder(Datadog.getInstance()) - .withPartialFlushMinSpans(10) - .build(); -``` -{{% /tab %}} -{{< /tabs >}} + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogTracer tracer = DatadogTracing.newTracerBuilder(Datadog.getInstance()) + .withPartialFlushMinSpans(10) + .build(); + ``` + {{% /tab %}} + {{< /tabs >}} 6. Start a custom span using the following method: - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -val tracer = GlobalDatadogTracer.get() -val span = tracer.buildSpan("").start() -// Do something ... -// ... -// Then when the span should be closed -span.finish() -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -DatadogTracer tracer = GlobalDatadogTracer.get(); -DatadogSpan span = tracer.buildSpan("").start(); -// Do something ... -// ... 
-// Then when the span should be closed -span.finish(); -``` -{{% /tab %}} -{{< /tabs >}} - + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val tracer = GlobalDatadogTracer.get() + val span = tracer.buildSpan("").start() + // Do something ... + // ... + // Then when the span should be closed + span.finish() + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogTracer tracer = GlobalDatadogTracer.get(); + DatadogSpan span = tracer.buildSpan("").start(); + // Do something ... + // ... + // Then when the span should be closed + span.finish(); + ``` + {{% /tab %}} + {{< /tabs >}} 7. To use scopes in synchronous calls: - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -val span = tracer.buildSpan("").start() -try { - val scope = tracer.activateSpan(span) - scope?.use { - // Do something ... - // ... - // Start a new Scope - val childSpan = tracer.buildSpan("").start() - try { - val innerScope = tracer.activateSpan(childSpan).use { innerScope -> - - } - } catch (e: Throwable) { - childSpan.logThrowable(e) - } finally { - childSpan.finish() - } - } -} catch (e: Error) { -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -DatadogSpan span = tracer.buildSpan("").start(); -try { - DatadogScope scope = tracer.activateSpan(span); - try { - // Do something ... - // ... - // Start a new Scope - DatadogSpan childSpan = tracer.buildSpan("").start(); - try { - DatadogScope innerScope = tracer.activateSpan(childSpan); - try { - // Do something ... - } finally { - innerScope.close(); - } - } catch (Throwable e) { - childSpan.logThrowable(e); - } finally { - childSpan.finish(); - } - } finally { - scope.close(); - } -} catch (Error e) { -} -``` -{{% /tab %}} -{{< /tabs >}} - + {{< tabs >}} + {{% tab "Kotlin" %}} + ```kotlin + val span = tracer.buildSpan("").start() + try { + val scope = tracer.activateSpan(span) + scope?.use { + // Do something ... + // ... + // Start a new Scope + val childSpan = tracer.buildSpan("").start() + try { + val innerScope = tracer.activateSpan(childSpan).use { innerScope -> + + } + } catch (e: Throwable) { + childSpan.logThrowable(e) + } finally { + childSpan.finish() + } + } + } catch (e: Error) { + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogSpan span = tracer.buildSpan("").start(); + try { + DatadogScope scope = tracer.activateSpan(span); + try { + // Do something ... + // ... + // Start a new Scope + DatadogSpan childSpan = tracer.buildSpan("").start(); + try { + DatadogScope innerScope = tracer.activateSpan(childSpan); + try { + // Do something ... + } finally { + innerScope.close(); + } + } catch (Throwable e) { + childSpan.logThrowable(e); + } finally { + childSpan.finish(); + } + } finally { + scope.close(); + } + } catch (Error e) { + } + ``` + {{% /tab %}} + {{< /tabs >}} 8. To use scopes in asynchronous calls: - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -val span = tracer.buildSpan("").start() -try { - val scope = tracer.activateSpan(span) - scope.use { - // Do something ... - Thread { - // Step 2: reactivate the Span in the worker thread - tracer.activateSpan(span).use { - // Do something ... - } - }.start() - } -} catch(e: Throwable) { - span.logThrowable(e) -} finally { - span.finish() -} -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -DatadogSpan span = tracer.buildSpan("").start(); -try { - DatadogScope scope = tracer.activateSpan(span); - try { - // Do something ... 
- new Thread(() -> { - // Step 2: reactivate the Span in the worker thread - DatadogScope scopeContinuation = tracer.activateSpan(span); - try { - // Do something - } finally { - scope.close(); - } - }).start(); - } finally { - scope.close(); - } -} catch (Throwable e){ - span.logThrowable(e); -} finally { - span.finish(); -} -``` -{{% /tab %}} -{{< /tabs >}} - -9. (Optional) To manually distribute traces between your environments, for example frontend to backend: - - a. Inject tracer context in the client request. - -{{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -val tracer = GlobalDatadogTracer.get() -val span = tracer.buildSpan("").start() -val tracedRequestBuilder = Request.Builder() -tracer.propagate().inject( - span.context(), - tracedRequestBuilder -) { builder, key, value -> - builder?.addHeader(key, value) -} -val request = tracedRequestBuilder.build() -// Dispatch the request and finish the span after. -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -DatadogTracer tracer = GlobalDatadogTracer.get(); -DatadogSpan span = tracer.buildSpan("").start(); -Request.Builder tracedRequestBuilder = new Request.Builder(); -tracer.propagate().inject( - span.context(), - tracedRequestBuilder, - new Function3(){ - @Override - public Unit invoke(Request.Builder builder, String key, String value) { - builder.addHeader(key, value); - return Unit.INSTANCE; - } - } -); -Request request = tracedRequestBuilder.build(); -// Dispatch the request and finish the span after. -``` -{{% /tab %}} -{{< /tabs >}} - - b. Extract the client tracer context from headers in server code. - {{< tabs >}} -{{% tab "Kotlin" %}} -```kotlin -val tracer = GlobalDatadogTracer.get() -val extractedContext = tracer.propagate() - .extract(request) { carrier, classifier -> - val headers = carrier.headers.toMultimap() - .map { it.key to it.value.joinToString(";") } - .toMap() - - for ((key, value) in headers) classifier(key, value) - } + {{% tab "Kotlin" %}} + ```kotlin + val span = tracer.buildSpan("").start() + try { + val scope = tracer.activateSpan(span) + scope.use { + // Do something ... + Thread { + // Step 2: reactivate the Span in the worker thread + tracer.activateSpan(span).use { + // Do something ... + } + }.start() + } + } catch(e: Throwable) { + span.logThrowable(e) + } finally { + span.finish() + } + ``` + {{% /tab %}} + {{% tab "Java" %}} + ```java + DatadogSpan span = tracer.buildSpan("").start(); + try { + DatadogScope scope = tracer.activateSpan(span); + try { + // Do something ... + new Thread(() -> { + // Step 2: reactivate the Span in the worker thread + DatadogScope scopeContinuation = tracer.activateSpan(span); + try { + // Do something + } finally { + scope.close(); + } + }).start(); + } finally { + scope.close(); + } + } catch (Throwable e){ + span.logThrowable(e); + } finally { + span.finish(); + } + ``` + {{% /tab %}} + {{< /tabs >}} -val serverSpan = tracer.buildSpan("").withParentContext(extractedContext).start() -``` -{{% /tab %}} -{{% tab "Java" %}} -```java -DatadogTracer tracer = GlobalDatadogTracer.get(); -DatadogSpanContext extractedContext = tracer.propagate() - .extract(request, - new Function2, Unit>() { - @Override - public Unit invoke( - Request carrier, - Function2 classifier - ) { - request.headers().forEach(pair -> { - String key = pair.component1(); - String value = pair.component2(); - - classifier.invoke(key, value); - }); - - return Unit.INSTANCE; - } - }); -DatadogSpan serverSpan = tracer.buildSpan("").withParentContext(extractedContext).start(); -``` +9. 
(Optional) To manually distribute traces between your environments, for example, frontend to backend:
+    1. Inject tracer context in the client request.
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val tracer = GlobalDatadogTracer.get()
+    val span = tracer.buildSpan("").start()
+    val tracedRequestBuilder = Request.Builder()
+    tracer.propagate().inject(
+        span.context(),
+        tracedRequestBuilder
+    ) { builder, key, value ->
+        builder?.addHeader(key, value)
+    }
+    val request = tracedRequestBuilder.build()
+    // Dispatch the request and finish the span after.
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    DatadogTracer tracer = GlobalDatadogTracer.get();
+    DatadogSpan span = tracer.buildSpan("").start();
+    Request.Builder tracedRequestBuilder = new Request.Builder();
+    tracer.propagate().inject(
+        span.context(),
+        tracedRequestBuilder,
+        new Function3<Request.Builder, String, String, Unit>() {
+            @Override
+            public Unit invoke(Request.Builder builder, String key, String value) {
+                builder.addHeader(key, value);
+                return Unit.INSTANCE;
+            }
+        }
+    );
+    Request request = tracedRequestBuilder.build();
+    // Dispatch the request and finish the span after.
+    ```
+    {{% /tab %}}
+    {{< /tabs >}}
+    1. Extract the client tracer context from headers in server code.
+    {{< tabs >}}
+    {{% tab "Kotlin" %}}
+    ```kotlin
+    val tracer = GlobalDatadogTracer.get()
+    val extractedContext = tracer.propagate()
+        .extract(request) { carrier, classifier ->
+            val headers = carrier.headers.toMultimap()
+                .map { it.key to it.value.joinToString(";") }
+                .toMap()
+
+            for ((key, value) in headers) classifier(key, value)
+        }
+
+    val serverSpan = tracer.buildSpan("").withParentContext(extractedContext).start()
+    ```
+    {{% /tab %}}
+    {{% tab "Java" %}}
+    ```java
+    DatadogTracer tracer = GlobalDatadogTracer.get();
+    DatadogSpanContext extractedContext = tracer.propagate()
+        .extract(request,
+            new Function2<Request, Function2<String, String, Unit>, Unit>() {
+                @Override
+                public Unit invoke(
+                    Request carrier,
+                    Function2<String, String, Unit> classifier
+                ) {
+                    request.headers().forEach(pair -> {
+                        String key = pair.component1();
+                        String value = pair.component2();
+
+                        classifier.invoke(key, value);
+                    });
+
+                    return Unit.INSTANCE;
+                }
+            });
+    DatadogSpan serverSpan = tracer.buildSpan("").withParentContext(extractedContext).start();
+    ```
    {{% /tab %}}
    {{< /tabs >}}

-**Note**: For code bases using the OkHttp client, Datadog provides the [implementation below](#okhttp).
+    **Note**: For code bases using the OkHttp client, Datadog provides the [implementation below](#okhttp).

10. (Optional) To provide additional tags alongside your span:
-
-```kotlin
-span.setTag("http.url", url)
-```
-
+    ```kotlin
+    span.setTag("http.url", url)
+    ```
11. (Optional) To mark a span as having an error, log it using the corresponding methods:
-```kotlin
-span.logThrowable(throwable)
-```
-```kotlin
-span.logErrorMessage(message)
-```
-
+    ```kotlin
+    span.logThrowable(throwable)
+    ```
+    ```kotlin
+    span.logErrorMessage(message)
+    ```
12. If you need to modify some attributes in your Span events before batching, you can do so by providing an implementation of `SpanEventMapper` when enabling Trace feature:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val traceConfig = TraceConfiguration.Builder()
-    // ...
-    .setEventMapper(spanEventMapper)
-    .build()
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-TraceConfiguration config = new TraceConfiguration.Builder()
-    // ...

## Kotlin Extensions

@@ -735,47 +711,44 @@ In addition to manual tracing, the Datadog SDK provides the following integratio

If you want to trace your OkHttp requests, you can add the provided [Interceptor][6] (which can be found in the `dd-sdk-android-okhttp` library) as follows:

1. Add the Gradle dependency to the `dd-sdk-android-okhttp` library in the module-level `build.gradle` file:
-
-```groovy
-dependencies {
-    implementation "com.datadoghq:dd-sdk-android-okhttp:x.x.x"
-}
-```
-
    ```groovy
    dependencies {
        implementation "com.datadoghq:dd-sdk-android-okhttp:x.x.x"
    }
    ```
2. Add `DatadogInterceptor` to your `OkHttpClient`:
-
-{{< tabs >}}
-{{% tab "Kotlin" %}}
-```kotlin
-val tracedHosts = listOf("example.com", "example.eu")
-val okHttpClient = OkHttpClient.Builder()
-    .addInterceptor(
-        DatadogInterceptor.Builder(tracedHosts)
-            .setTraceSampler(RateBasedSampler(20f))
-            .build()
-    )
-    .build()
-```
-{{% /tab %}}
-{{% tab "Java" %}}
-```java
-List tracedHosts = Arrays.asList("example.com", "example.eu");
-OkHttpClient okHttpClient = new OkHttpClient.Builder()
-    .addInterceptor(
-        new DatadogInterceptor.Builder(tracedHosts)
-            .setTraceSampler(new RateBasedSampler(20f))
-            .build()
-    )
-    .build();
-```
-{{% /tab %}}
-{{< /tabs >}}
    {{< tabs >}}
    {{% tab "Kotlin" %}}
    ```kotlin
    val tracedHosts = listOf("example.com", "example.eu")
    val okHttpClient = OkHttpClient.Builder()
        .addInterceptor(
            DatadogInterceptor.Builder(tracedHosts)
                .setTraceSampler(RateBasedSampler(20f))
                .build()
        )
        .build()
    ```
    {{% /tab %}}
    {{% tab "Java" %}}
    ```java
    List<String> tracedHosts = Arrays.asList("example.com", "example.eu");
    OkHttpClient okHttpClient = new OkHttpClient.Builder()
        .addInterceptor(
            new DatadogInterceptor.Builder(tracedHosts)
                .setTraceSampler(new RateBasedSampler(20f))
                .build()
        )
        .build();
    ```
    {{% /tab %}}
    {{< /tabs >}}

This creates a span around each request processed by the OkHttpClient (matching the provided hosts), with all the relevant information automatically filled (URL, method, status code, error), and propagates the tracing information to your backend to get a unified trace within Datadog.

Network traces are sampled with an adjustable sampling rate. A sampling rate of 100% is applied by default.

-The interceptor tracks requests at the application level. You can also add a `TracingInterceptor` at the network level to get more details, for example when following redirections.
+The interceptor tracks requests at the application level. You can also add a `TracingInterceptor` at the network level to get more details (for example, when following redirections).

{{< tabs >}}
{{% tab "Kotlin" %}}
@@ -814,7 +787,7 @@ OkHttpClient okHttpClient = new OkHttpClient.Builder()
{{% /tab %}}
{{< /tabs >}}
-In this case trace sampling decision made by the upstream interceptor for a particular request will be respected by the downstream interceptor.
+In this case, the trace sampling decision made by the upstream interceptor for a particular request is respected by the downstream interceptor.
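
For illustration, the combined setup described above might look like the following sketch. It assumes `TracingInterceptor` exposes a builder similar to the `DatadogInterceptor.Builder` shown earlier; check your SDK version for the exact API.
```kotlin
val tracedHosts = listOf("example.com", "example.eu")
val okHttpClient = OkHttpClient.Builder()
    // Application-level interceptor: makes the sampling decision.
    .addInterceptor(
        DatadogInterceptor.Builder(tracedHosts)
            .setTraceSampler(RateBasedSampler(20f))
            .build()
    )
    // Network-level interceptor: reuses the upstream sampling decision.
    .addNetworkInterceptor(
        TracingInterceptor.Builder(tracedHosts)
            .build()
    )
    .build()
```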

Because of the way the OkHttp request is executed (using a thread pool), the request span isn't automatically linked with the span that triggered the request. You can manually provide a parent span in the OkHttp `Request.Builder` by using the `Request.Builder.parentSpan` extension method, as follows:
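
For example, here is a minimal sketch. The URL is a placeholder, and `okHttpClient` and `callback` are assumed to be defined elsewhere:
```kotlin
val span = tracer.buildSpan("<span-name>").start()
val request = Request.Builder()
    .url("https://example.com/api") // Placeholder endpoint.
    .parentSpan(span) // Links the OkHttp request span to this parent span.
    .build()
okHttpClient.newCall(request).enqueue(callback) // Finish `span` when the call completes.
```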

diff --git a/layouts/shortcodes/observability_pipelines/processors/quota.en.md b/layouts/shortcodes/observability_pipelines/processors/quota.en.md
index fe3f8851767..ba14f352981 100644
--- a/layouts/shortcodes/observability_pipelines/processors/quota.en.md
+++ b/layouts/shortcodes/observability_pipelines/processors/quota.en.md
@@ -20,16 +20,16 @@ To set up the quota processor:
 - Logs that do not match the quota filter are sent to the next step of the pipeline.
1. In the **Unit for quota** dropdown menu, select whether you want to measure the quota by the number of `Events` or by the `Volume` in bytes.
1. Set the daily quota limit and select the unit of magnitude for your desired quota.
-1. Optional, Click **Add Field** if you want to set a quota on a specific service or region field.
- a. Enter the field name you want to partition by. See the [Partition example](#partition-example) for more information.
- i. Select the **Ignore when missing** if you want the quota applied only to events that match the partition. See the [Ignore when missing example](#example-for-the-ignore-when-missing-option) for more information.
- ii. Optional: Click **Overrides** if you want to set different quotas for the partitioned field.
- - Click **Download as CSV** for an example of how to structure the CSV.
- - Drag and drop your overrides CSV to upload it. You can also click **Browse** to select the file to upload it. See the [Overrides example](#overrides-example) for more information.
- b. Click **Add Field** if you want to add another partition.
+1. Optional: Click **Add Field** if you want to set a quota on a specific service or region field.
+ 1. Enter the field name you want to partition by. See the [Partition example](#partition-example) for more information.
+ 1. Select **Ignore when missing** if you want the quota applied only to events that match the partition. See the [Ignore when missing example](#example-for-the-ignore-when-missing-option) for more information.
+ 1. Optional: Click **Overrides** if you want to set different quotas for the partitioned field.
+ - Click **Download as CSV** for an example of how to structure the CSV.
+ - Drag and drop your overrides CSV to upload it. You can also click **Browse** to select the file. See the [Overrides example](#overrides-example) for more information.
+ 1. Click **Add Field** if you want to add another partition.
1. In the **When quota is met** dropdown menu, select whether you want to **drop events**, **keep events**, or **send events to overflow destination** when the quota has been met.
- 1. If you select **send events to overflow destination**, an overflow destination is added with the following cloud storage options: **Amazon S3**, **Azure Blob**, and **Google Cloud**.
- 1. Select the cloud storage you want to send overflow logs to. See the setup instructions for your cloud storage: [Amazon S3][5002], [Azure Blob Storage][5003], or [Google Cloud Storage][5004].
+ 1. If you select **send events to overflow destination**, an overflow destination is added with the following cloud storage options: **Amazon S3**, **Azure Blob**, and **Google Cloud**.
+ 1. Select the cloud storage you want to send overflow logs to. See the setup instructions for your cloud storage: [Amazon S3][5002], [Azure Blob Storage][5003], or [Google Cloud Storage][5004].

#### Examples