@@ -61,13 +61,9 @@ Audit Event Forwarding allows you to send audit events from Datadog to custom de

6. Enter a name for the destination.
7. In the **Configure Destination** section, enter the following details:

   1. The endpoint to which you want to send the logs. The endpoint must start with `https://`. An example endpoint for Elasticsearch: `https://<your_account>.us-central1.gcp.cloud.es.io`.
   1. The name of the destination index where you want to send the logs.
   1. Optionally, select the index rotation for how often you want to create a new index: `No Rotation`, `Every Hour`, `Every Day`, `Every Week`, or `Every Month`. The default is `No Rotation`.
8. In the **Configure Authentication** section, enter the username and password for your Elasticsearch account.
9. Click **Save**.
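
Before saving, it can be worth sanity-checking the endpoint value against the requirements in step 7a. The following helper is purely illustrative (it is not part of the Datadog UI or API):

```python
from urllib.parse import urlparse

def validate_destination_endpoint(endpoint: str) -> str:
    """Check a forwarding endpoint against the rules above.

    Returns the endpoint without a trailing slash, or raises ValueError.
    """
    parsed = urlparse(endpoint)
    if parsed.scheme != "https":
        # The destination endpoint must start with https://
        raise ValueError("The endpoint must start with https://")
    if not parsed.netloc:
        raise ValueError("The endpoint must include a hostname")
    return endpoint.rstrip("/")

# Hypothetical account name, for illustration only:
print(validate_destination_endpoint("https://my-account.us-central1.gcp.cloud.es.io/"))
```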

@@ -14,16 +14,11 @@ Prerequisite: Python and `pip` installed on your localhost. Windows users see [I
3. Create a new folder: `mkdir <NAME_OF_THE_FOLDER>`.
4. Enter the folder: `cd <NAME_OF_THE_FOLDER>`.
5. Download the script [api_query_data.py][3] to the folder created in step 3 and edit it:

   1. Replace `<YOUR_DD_API_KEY>` and `<YOUR_DD_APP_KEY>` with your [Datadog API and app keys][4].
   1. Replace `system.cpu.idle` with a metric you want to fetch. A list of your metrics is displayed in the [Datadog Metric Summary][5].
   1. Optionally, replace `*` with a host to filter the data. A list of your hosts is displayed in the [Datadog Infrastructure List][6].
   1. Optionally, change the time period to collect the data. The current setting is 3600 seconds (one hour). **Note**: If you run this too aggressively, you may reach the [Datadog API limits][7].
   1. Save your file and confirm its location.
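
The edits above can be sketched as a minimal version of the script. This is an assumed shape for `api_query_data.py`, not the script itself; the `/api/v1/query` endpoint and the `DD-API-KEY`/`DD-APPLICATION-KEY` headers are Datadog's documented API, but treat the rest as illustrative:

```python
import time

def build_query(metric: str, host: str = "*", window_s: int = 3600) -> dict:
    """Build request params for Datadog's v1 metric query endpoint."""
    now = int(time.time())
    # "*" queries all hosts; otherwise scope to a single host tag.
    scope = "*" if host == "*" else f"host:{host}"
    return {"from": now - window_s, "to": now, "query": f"{metric}{{{scope}}}"}

def fetch_series(metric, api_key, app_key, host="*", window_s=3600):
    import requests  # third-party: pip install requests
    resp = requests.get(
        "https://api.datadoghq.com/api/v1/query",
        headers={"DD-API-KEY": api_key, "DD-APPLICATION-KEY": app_key},
        params=build_query(metric, host, window_s),
    )
    resp.raise_for_status()
    return resp.json()

# Example call (replace the placeholders with your keys):
# fetch_series("system.cpu.idle", "<YOUR_DD_API_KEY>", "<YOUR_DD_APP_KEY>")
```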

Once the above is complete:

16 changes: 5 additions & 11 deletions content/en/developers/integrations/_index.md
@@ -84,19 +84,13 @@ Follow these steps to create a new integration with Datadog.
1. **Apply to the Datadog Partner Network.** Once accepted, a member of the Datadog Technology Partner team will reach out to schedule an introductory call.
2. **Request a Datadog sandbox account** for development via the Datadog Partner Network portal.
3. **Start developing your integration** using the Integration Developer Platform:

   1. Define the basic details about your integration.
   1. Define and write your integration code by following the instructions to create one of the following integration types:
      - [Agent-based integration][5]
      - [API-based integration][6]
   1. Specify what type of data your integration queries or submits.
   1. Create a dashboard, and optionally create monitors or security rules.
   1. Fill in the remaining fields: setup and uninstallation instructions, images, support details, and other key details that help describe the value of your integration.
4. **Test your integration** in your Datadog sandbox account.
5. **Submit your integration for review.**
6. **Once approved, your integration is published.**
144 changes: 68 additions & 76 deletions content/en/getting_started/integrations/aws.md
@@ -45,86 +45,78 @@ This process can be repeated for as many AWS accounts as necessary, or you can a

## Prerequisites

Before getting started, ensure you have an [AWS][7] account. Your AWS user needs the following IAM permissions to successfully run the CloudFormation template:
- cloudformation:CreateStack
- cloudformation:CreateUploadBucket
- cloudformation:DeleteStack
- cloudformation:DescribeStacks
- cloudformation:DescribeStackEvents
- cloudformation:GetStackPolicy
- cloudformation:GetTemplateSummary
- cloudformation:ListStacks
- cloudformation:ListStackResources
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- ec2:DescribeVpcs
- iam:AttachRolePolicy
- iam:CreatePolicy
- iam:CreateRole
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:DetachRolePolicy
- iam:GetRole
- iam:GetRolePolicy
- iam:PassRole
- iam:PutRolePolicy
- iam:TagRole
- iam:UpdateAssumeRolePolicy
- kms:Decrypt
- lambda:AddPermission
- lambda:CreateFunction
- lambda:DeleteFunction
- lambda:GetCodeSigningConfig
- lambda:GetFunction
- lambda:GetFunctionCodeSigningConfig
- lambda:GetLayerVersion
- lambda:InvokeFunction
- lambda:PutFunctionConcurrency
- lambda:RemovePermission
- lambda:TagResource
- logs:CreateLogGroup
- logs:DeleteLogGroup
- logs:DescribeLogGroups
- logs:PutRetentionPolicy
- oam:ListSinks
- oam:ListAttachedLinks
- s3:CreateBucket
- s3:DeleteBucket
- s3:DeleteBucketPolicy
- s3:GetEncryptionConfiguration
- s3:GetObject
- s3:GetObjectVersion
- s3:PutBucketPolicy
- s3:PutBucketPublicAccessBlock
- s3:PutEncryptionConfiguration
- s3:PutLifecycleConfiguration
- secretsmanager:CreateSecret
- secretsmanager:DeleteSecret
- secretsmanager:GetSecretValue
- secretsmanager:PutSecretValue
- serverlessrepo:CreateCloudFormationTemplate
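
These permissions map onto a single customer-managed IAM policy. The following is a trimmed, illustrative sketch only — it is not an official Datadog policy, it omits most of the actions listed above, and `Resource: "*"` should be scoped down for production use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatadogCloudFormationSetupSketch",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStack",
        "cloudformation:DescribeStacks",
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "lambda:CreateFunction",
        "logs:CreateLogGroup",
        "s3:CreateBucket",
        "secretsmanager:CreateSecret"
      ],
      "Resource": "*"
    }
  ]
}
```

Include every action from the full list above when creating the real policy.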

## Setup

1. Go to the [AWS integration configuration page][8] in Datadog and click **Add AWS Account**.
1. Configure the integration's settings under the **Automatically using CloudFormation** option.
   1. Select the AWS regions to integrate with.
   1. Add your Datadog [API key][9].
   1. Optionally, send logs and other data to Datadog with the [Datadog Forwarder Lambda][1].
   1. Optionally, enable [Cloud Security Misconfigurations][54] to scan your cloud environment, hosts, and containers for misconfigurations and security risks.
1. Click **Launch CloudFormation Template**. This opens the AWS Console and loads the CloudFormation stack. All the parameters are filled in based on your selections in the prior Datadog form, so you do not need to edit those unless desired.

   **Note:** The `DatadogAppKey` parameter enables the CloudFormation stack to make API calls to Datadog to add and edit the Datadog configuration for this AWS account. The key is automatically generated and tied to your Datadog account.

1. Check the required boxes from AWS and click **Create stack**. This launches the creation process for the Datadog stack along with three nested stacks. This could take several minutes. Ensure that the stack is successfully created before proceeding.
1. After the stack is created, go back to the AWS integration tile in Datadog and click **Ready!**
1. Wait up to 10 minutes for data to start being collected, and then view the out-of-the-box [AWS overview dashboard][12] to see metrics sent by your AWS services and infrastructure:
{{< img src="getting_started/integrations/aws-dashboard.png" alt="The AWS overview dashboard in the Datadog account. On the left is the AWS logo and an AWS events graph showing 'No matching entries found'. In the center are graphs related to EBS volumes with numerical data displayed and a heatmap showing consistent data. Along the right are graphs related to ELBs showing numerical data as well as a timeseries graph showing spiky data from three sources.">}}
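
Step 5 asks you to confirm the stack was created successfully. You can poll CloudFormation for this instead of watching the console; the sketch below uses boto3's real `describe_stacks` call, but the stack name `DatadogIntegration` is an assumption — use the name shown in your AWS Console:

```python
def stack_is_ready(describe_response: dict, stack_name: str) -> bool:
    """Return True when the named stack reports CREATE_COMPLETE."""
    for stack in describe_response.get("Stacks", []):
        if stack["StackName"] == stack_name:
            return stack["StackStatus"] == "CREATE_COMPLETE"
    return False

def check_stack(stack_name: str = "DatadogIntegration") -> bool:
    import boto3  # third-party: pip install boto3
    cfn = boto3.client("cloudformation")
    # describe_stacks returns {"Stacks": [{"StackName": ..., "StackStatus": ...}, ...]}
    return stack_is_ready(cfn.describe_stacks(StackName=stack_name), stack_name)
```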

## Configuration
15 changes: 7 additions & 8 deletions content/en/getting_started/integrations/azure.md
@@ -131,17 +131,16 @@ Follow these steps to deploy the Datadog Azure integration through [Terraform][2
- App Service Plans
- Container Apps

You can also click to enable custom metric collection from [Azure Application Insights][101], and disable the collection of usage metrics.
4. Optionally, click the resource collection toggle to disable the collection of configuration information from your Azure resources.
5. Configure log collection:
   - If a log forwarder already exists in the tenant, extend its scope to include any new subscriptions or management groups.
   - If you're creating a new log forwarder:
     1. Enter a resource group name to store the log forwarder control plane.
     1. Select a control plane subscription for the log-forwarding orchestration (LFO).
     1. Select a region for the control plane.

   See the [Architecture section][102] of the automated log forwarding guide for more information about this architecture.
6. Copy and run the command under **Initialize and apply the Terraform**.

[100]: https://app.datadoghq.com/integrations/azure/
15 changes: 6 additions & 9 deletions content/en/infrastructure/containers/configuration.md
@@ -417,8 +417,7 @@ field#status.conditions.HorizontalAbleToScale.status:"False"

You can use the `kubernetes_state_core` check to collect custom resource metrics when running the Datadog Cluster Agent.

1. Write definitions for your custom resources and the fields to turn into metrics according to the following format:
```yaml
    # (...)
    collectCrMetrics:
      # (...)
path: [metadata, generation]
```

By default, RBAC and API resource names are derived from the kind in `groupVersionKind` by converting it to lowercase and adding an "s" suffix (for example, `Kind: ENIConfig` → `eniconfigs`). If the Custom Resource Definition (CRD) uses a different plural form, you can override this behavior by specifying the `resource` field. In the example above, `CNINode` overrides the default by setting `resource: "cninode-pluralized"`.

Metric names are produced using the following rules:

- No prefix specified: `kubernetes_state_customresource.<metric.name>`
- Prefix specified: `kubernetes_state_customresource.<metricNamePrefix>_<metric.name>`

For more details, see [Custom Resource State Metrics][5].

@@ -492,9 +489,9 @@ You can use the `kubernetes_state_core` check to collect custom resource metrics
{{% /tab %}}
{{% tab "Datadog Operator" %}}

<div class="alert alert-info">
This functionality requires Agent Operator v1.20+.
</div>

1. Install the Datadog Operator with an option that grants the Datadog Agent permission to collect custom resources:

4 changes: 2 additions & 2 deletions content/en/integrations/guide/aws-organizations-setup.md
@@ -60,8 +60,8 @@ Copy the Template URL from the Datadog AWS integration configuration page to use
- Select your Datadog app key on the Datadog AWS integration configuration page and use it in the `DatadogAppKey` parameter in the StackSet.

- *Optionally:*
  1. Enable [Cloud Security Misconfigurations][5] to scan your cloud environment, hosts, and containers for misconfigurations and security risks.
  1. Disable metric collection if you do not want to monitor your AWS infrastructure. This is recommended only for [Cloud Cost Management][6] (CCM) or [Cloud Security Misconfigurations][5] specific use cases.

3. **Configure StackSet options**
Keep the **Execution configuration** option as `Inactive` so the StackSet performs one operation at a time.