@@ -77,6 +77,11 @@ The following is a list of bootstrap options, their related pipeline environment
: <li style="list-style-type: '- '">The Observability Pipelines Worker cannot route external requests through reverse proxies, such as HAProxy and NGINX.</li>
: <li style="list-style-type: '- '">The <code>DD_PROXY_HTTP(S)</code> and <code>HTTP(S)_PROXY</code> environment variables must already be exported in your environment for the Worker to resolve them. They cannot be prepended to the Worker installation script.</li>
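
For example, a minimal sketch of exporting the proxy variables before running the Worker installation command (the proxy address below is a placeholder):

```shell
# Export the proxy variables in the Worker's environment first; prepending them
# to the one-line installation script does not work. The address is a placeholder.
export DD_PROXY_HTTPS="http://proxy.example.internal:3128"
export HTTPS_PROXY="http://proxy.example.internal:3128"
# Then run the Worker installation command provided in the UI.
```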

`secret`
: **Pipeline environment variable**: None
: **Priority**: N/A
: **Description**: Connects the Worker to your secrets manager. See [Secrets Management][12] for configuration information.
Contributor Author

Link won't work until Secrets Mgmt child PR is merged: #33745

`site`
: **Pipeline environment variable**: `DD_SITE`
: **Priority**: `DD_SITE`
@@ -121,4 +126,5 @@ api:
[8]: /observability_pipelines/install_the_worker/worker_commands/#run-tap-or-top-the-worker
[9]: https://github.com/DataDog/helm-charts/blob/main/charts/observability-pipelines-worker/values.yaml#L33-L40
[10]: https://github.com/DataDog/helm-charts/blob/main/charts/observability-pipelines-worker/values.yaml#L303-L329
[11]: /remote_configuration/#security-considerations
[12]: /observability_pipelines/configuration/secrets_management/
@@ -40,12 +40,12 @@ You can create a pipeline with one of the following methods:
{{< tabs >}}
{{% tab "Logs" %}}

1. Navigate to [Observability Pipelines][7].
1. Select a [template][4] based on your use case.
1. Select and set up your [source][1].
1. Add [processors][2] to transform, redact, and enrich your log data.
1. Navigate to [Observability Pipelines][1].
1. Select a [template][2] based on your use case.
1. Select and set up your [source][3].
1. Add [processors][4] to transform, redact, and enrich your log data.
- If you want to copy a processor, click the copy icon for that processor and then use `command-v` to paste it.
1. Select and set up [destinations][3] for your processed logs.
1. Select and set up [destinations][5] for your processed logs.

### Add or remove components

@@ -56,7 +56,7 @@ You can create a pipeline with one of the following methods:
If you want to add another group of processors for a destination:
1. Click the plus sign (**+**) at the bottom of the existing processor group.
1. Click the name of the processor group to update it.
1. Optionally, enter a group filter. See [Search Syntax][11] for more information.
1. Optionally, enter a group filter. See [Search Syntax][6] for more information.
1. Click **Add** to add processors to the group.
1. If you want to copy all processors in a group and paste them into the same processor group or a different group:
1. Click the three dots on the processor group.
@@ -92,17 +92,12 @@ To delete a destination, click on the pencil icon to the top right of the destin
- You can add a total of three destinations for a pipeline.
- A specific destination can only be added once. For example, you cannot add multiple Splunk HEC destinations.

[1]: /observability_pipelines/sources/
[2]: /observability_pipelines/processors/
[3]: /observability_pipelines/destinations/
[4]: /observability_pipelines/configuration/explore_templates/
[5]: /observability_pipelines/configuration/update_existing_pipelines/
[6]: /observability_pipelines/configuration/install_the_worker/
[7]: https://app.datadoghq.com/observability-pipelines
[8]: /monitors/types/metric/
[9]: /observability_pipelines/guide/environment_variables/
[10]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options
[11]: /observability_pipelines/search_syntax/logs/
[1]: https://app.datadoghq.com/observability-pipelines
[2]: /observability_pipelines/configuration/explore_templates/
[3]: /observability_pipelines/sources/
[4]: /observability_pipelines/processors/
[5]: /observability_pipelines/destinations/
[6]: /observability_pipelines/search_syntax/logs/

{{% /tab %}}
{{% tab "Metrics" %}}
@@ -143,23 +138,16 @@ If you want to add another group of processors for a destination:

### Install the Worker and deploy the pipeline

After you have set up your source, processors, and destinations:
After you have set up your source, processors, and destinations, click **Next: Install**. See [Install the Worker][12] for instructions on how to install the Worker for your platform. See [Advanced Worker Configurations][5] for bootstrapping options.

1. Click **Next: Install**.
1. Select the platform on which you want to install the Worker.
1. Enter the [environment variables][9] for your sources and destinations, if applicable.
1. Follow the instructions on installing the Worker for your platform. The command provided in the UI to install the Worker has the relevant environment variables populated.
- See [Install the Worker][6] for more information.
- **Note**: If you are using a proxy, see the `proxy` option in [Bootstrap options][10].
1. Enable out-of-the-box monitors for your pipeline.
1. Navigate to the [Pipelines][7] page and find your pipeline.
1. Click **Enable monitors** in the **Monitors** column for your pipeline.
1. Click **Start** to set up a monitor for one of the suggested use cases.<br>
- The metric monitor is configured based on the selected use case. You can update the configuration to further customize it. See the [Metric monitor documentation][8] for more information.
If you want to make changes to your pipeline after you have deployed it, see [Update Existing Pipelines][11].

After you have set up your pipeline, see [Update Existing Pipelines][11] if you want to make any changes to it.
### Enable out-of-the-box monitors for your pipeline

See [Advanced Worker Configurations][5] for bootstrapping options.
1. Navigate to the [Pipelines][4] page and find your pipeline.
1. Click **Enable monitors** in the **Monitors** column for your pipeline.
1. Click **Start** to set up a monitor for one of the suggested use cases.<br>
- The metric monitor is configured based on the selected use case. You can update the configuration to further customize it. See the [Metric monitor documentation][13] for more information.

## Set up a pipeline with the API

@@ -228,4 +216,6 @@ To delete a pipeline in the UI:
[8]: /api/latest/observability-pipelines/#update-a-pipeline
[9]: /observability_pipelines/guide/environment_variables/
[10]: https://registry.terraform.io/providers/DataDog/datadog/latest/docs
[11]: /observability_pipelines/configuration/update_existing_pipelines/
[12]: /observability_pipelines/configuration/install_the_worker/
[13]: /monitors/types/metric/
@@ -7,7 +7,7 @@ aliases:

## Overview

For existing pipelines in Observability Pipelines, you can update and deploy changes for source settings, destination settings, and processors in the Observability Pipelines UI. But if you want to update source and destination environment variables, you need to manually update the Worker with the new values.
For existing pipelines in Observability Pipelines, you can update and deploy changes for source settings, destination settings, and processors in the Observability Pipelines UI. But if you are using environment variables and want to update the source or destination environment variables, you must manually update the Worker with the new values.

This document goes through updating the pipeline in the UI. You can also use the [update a pipeline][2] API or [datadog_observability_pipeline][3] Terraform resource to update existing pipelines.

@@ -18,9 +18,10 @@ This document goes through updating the pipeline in the UI. You can also use the
1. Click **Edit Pipeline** in the top right corner.
1. Make changes to the pipeline.
- If you are updating the source or destination settings shown in the tiles, or adding or updating processors, make the changes and then click **Deploy Changes**.
- To update source or destination environment variables, click **Go to Worker Installation Steps** and see [Update source or destination variables](#update-source-or-destination-variables) for instructions.
- To update source or destination environment variables, click **Go to Worker Installation Steps** and see [Update source or destination environment variables](#update-source-or-destination-environment-variables) for instructions.
1. If you updated secret identifiers or environment variables, restart the Worker.
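
For example, on a Linux host where the Worker runs as a systemd service, a restart sketch (the unit name is assumed to match the package name):

```shell
# Assumes the Worker is installed as the observability-pipelines-worker systemd service.
sudo systemctl restart observability-pipelines-worker
sudo systemctl status observability-pipelines-worker   # confirm the Worker picked up the new values
```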

### Update source or destination variables
### Update source or destination environment variables

On the Worker installation page:
1. Select your platform in the **Choose your installation platform** dropdown menu.
@@ -16,18 +16,48 @@ Set up the Amazon OpenSearch destination and its environment variables when you

### Set up the destination

1. Optionally, enter the name of the Amazon OpenSearch index. See [template syntax][3] if you want to route logs to different indexes based on specific fields in your logs.
1. Select an authentication strategy, **Basic** or **AWS**. For **AWS**, enter the AWS region.
1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering options is in Preview. Contact your account manager to request access.
<div class="alert alert-danger">Only enter the identifiers for the Amazon OpenSearch endpoint URL, and if applicable, username and password. Do <b>not</b> enter the actual values.</div>

1. Enter the identifier for your Amazon OpenSearch endpoint URL. If you leave it blank, the [default](#set-secrets) is used.
1. (Optional) Enter the name of the Amazon OpenSearch index. See [template syntax][3] if you want to route logs to different indexes based on specific fields in your logs.
1. Select an authentication strategy, **Basic** or **AWS**. If you selected:
- **Basic**:
- Enter the identifier for your Amazon OpenSearch username. If you leave it blank, the [default](#set-secrets) is used.
- Enter the identifier for your Amazon OpenSearch password. If you leave it blank, the [default](#set-secrets) is used.
- **AWS**:
1. Enter the AWS region.
1. (Optional) Select an AWS authentication option. The **Assume role** option should only be used if the user or role you created earlier needs to assume a different role to access the specific AWS resource and that permission has to be explicitly defined.<br>If you select **Assume role**:
1. Enter the ARN of the IAM role you want to assume.
1. Optionally, enter the assumed role session name and external ID.
1. (Optional) Toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering options is in Preview. Contact your account manager to request access.
- If left disabled, the maximum size for buffering is 500 events.
- If enabled:
1. Select the buffer type you want to set (**Memory** or **Disk**).
1. Enter the buffer size and select the unit.

### Set the environment variables
### Set secrets

{{% observability_pipelines/set_secrets_intro %}}

{{< tabs >}}
{{% tab "Secrets Management" %}}

- Amazon OpenSearch endpoint URL identifier:
- The default identifier is `DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL`.
- Amazon OpenSearch authentication username identifier:
- The default identifier is `DESTINATION_AMAZON_OPENSEARCH_USERNAME`.
- Amazon OpenSearch authentication password identifier:
- The default identifier is `DESTINATION_AMAZON_OPENSEARCH_PASSWORD`.

{{% /tab %}}

{{% tab "Environment Variables" %}}

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/amazon_opensearch %}}

{{% /tab %}}
{{< /tabs >}}

## How the destination works

### Event batching
16 changes: 15 additions & 1 deletion content/en/observability_pipelines/destinations/amazon_s3.md
@@ -90,10 +90,24 @@ Then these are the values you enter for configuring the S3 bucket for Log Archiv

{{< img src="observability_pipelines/setup/amazon_s3_archive.png" alt="The log archive configuration with the example values" style="width:70%;" >}}

### Set the environment variables
### Set secrets

{{% observability_pipelines/set_secrets_intro %}}

{{< tabs >}}
{{% tab "Secrets Management" %}}

There are no secret identifiers to configure.

{{% /tab %}}

{{% tab "Environment Variables" %}}

{{% observability_pipelines/destination_env_vars/datadog_archives_amazon_s3 %}}

{{% /tab %}}
{{< /tabs >}}

## Route logs to Snowflake using the Amazon S3 destination

You can route logs from Observability Pipelines to Snowflake using the Amazon S3 destination by configuring Snowpipe in Snowflake to automatically ingest those logs. To set this up:
@@ -20,32 +20,62 @@ You need to do the following before setting up the Amazon Security Lake destinat

Set up the Amazon Security Lake destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

**Notes**:
- When you add the Amazon Security Lake destination, the OCSF processor is automatically added so that you can convert your logs to Parquet before they are sent to Amazon Security Lake. See [Remap to OCSF documentation][3] for setup instructions.
- Only logs formatted by the OCSF processor are converted to Parquet.

### Set up the destination

1. Enter your S3 bucket name.
1. Enter the AWS region.
1. Enter the custom source name.
1. Optionally, select an [AWS authentication][5] option.
1. Enter the ARN of the IAM role you want to assume.
1. Optionally, enter the assumed role session name and external ID.
1. Optionally, toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required.<br>**Note**: All file paths are made relative to the configuration data directory, which is `/var/lib/observability-pipelines-worker/config/` by default. See [Advanced Worker Configurations][4] for more information. The file must be owned by the `observability-pipelines-worker group` and `observability-pipelines-worker` user, or at least readable by the group or user.
- `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509).
- `CA Certificate Path`: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509).
- `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering options is in Preview. Contact your account manager to request access.
- If left disabled, the maximum size for buffering is 500 events.
- If enabled:
1. Select the buffer type you want to set (**Memory** or **Disk**).
1. Enter the buffer size and select the unit.

**Notes**:
- When you add the Amazon Security Lake destination, the OCSF processor is automatically added so that you can convert your logs to Parquet before they are sent to Amazon Security Lake. See [Remap to OCSF documentation][3] for setup instructions.
- Only logs formatted by the OCSF processor are converted to Parquet.
#### Optional settings

##### AWS authentication

1. Select an [AWS authentication][5] option.
1. Enter the ARN of the IAM role you want to assume.
1. Optionally, enter the assumed role session name and external ID.
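
As a sanity check (the role ARN, session name, and external ID below are placeholders), you can verify from the host running the Worker that the role can be assumed using the AWS CLI:

```shell
# Placeholders only: substitute your role ARN, session name, and external ID.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/op-worker-security-lake \
  --role-session-name op-worker-test \
  --external-id example-external-id
```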

##### Enable TLS

### Set the environment variables
Toggle the switch to **Enable TLS**. If you enable TLS, the following certificate and key files are required.

**Note**: All file paths are made relative to the configuration data directory, which is `/var/lib/observability-pipelines-worker/config/` by default. See [Advanced Worker Configurations][4] for more information. The files must be owned by the `observability-pipelines-worker` group and `observability-pipelines-worker` user, or at least readable by the group or user.
- Enter the identifier for your Amazon Security Lake TLS key passphrase. If you leave it blank, the [default](#set-secrets) is used.
- **Note**: Only enter the identifier for the passphrase. Do **not** enter the actual passphrase.
- `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) root file in DER or PEM (X.509).
- `CA Certificate Path`: The path to the certificate file that is your Certificate Authority (CA) root file in DER or PEM (X.509).
- `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
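
As a sketch of staging these files with the required ownership (the file names and the `tls/` subdirectory are placeholders; the directory, user, and group come from the note above):

```shell
# Place the certificate and key files under the configuration data directory
# and make them readable by the Worker's user and group. File names are placeholders.
sudo mkdir -p /var/lib/observability-pipelines-worker/config/tls
sudo cp server.crt ca.crt server.key /var/lib/observability-pipelines-worker/config/tls/
sudo chown -R observability-pipelines-worker:observability-pipelines-worker \
  /var/lib/observability-pipelines-worker/config/tls
# In the UI, enter the paths relative to the config directory, for example: tls/server.crt
```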

##### Buffering options

Toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering options is in Preview. Contact your account manager to request access.
- If left disabled, the maximum size for buffering is 500 events.
- If enabled:
1. Select the buffer type you want to set (**Memory** or **Disk**).
1. Enter the buffer size and select the unit.

### Set secrets

{{% observability_pipelines/set_secrets_intro %}}

{{< tabs >}}
{{% tab "Secrets Management" %}}

- Amazon Security Lake TLS passphrase identifier (when TLS is enabled):
- The default identifier is `DESTINATION_AWS_SECURITY_LAKE_KEY_PASS`.

{{% /tab %}}

{{% tab "Environment Variables" %}}

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/amazon_security_lake %}}

{{% /tab %}}
{{< /tabs >}}

## How the destination works

### AWS Authentication
Expand Down