11 changes: 8 additions & 3 deletions config/_default/menus/main.en.yaml
@@ -5906,21 +5906,26 @@ menu:
url: /observability_pipelines/configuration/install_the_worker/run_multiple_pipelines_on_a_host/
parent: observability_pipelines_install_the_worker
weight: 10302
- name: Secrets Management
url: observability_pipelines/configuration/secrets_management
parent: observability_pipelines_configuration
identifier: observability_pipelines_secrets_management
weight: 104
- name: Live Capture
identifier: observability_pipelines/configuration/live_capture/
url: /observability_pipelines/configuration/live_capture/
parent: observability_pipelines_configuration
weight: 104
weight: 105
- name: Update Existing Pipelines
url: observability_pipelines/configuration/update_existing_pipelines
parent: observability_pipelines_configuration
identifier: observability_pipelines_update_existing_pipelines
weight: 105
weight: 106
- name: Access Control
url: observability_pipelines/configuration/access_control
parent: observability_pipelines_configuration
identifier: observability_pipelines_access_control
weight: 106
weight: 107
- name: Sources
url: observability_pipelines/sources/
parent: observability_pipelines
@@ -37,40 +37,55 @@ After setting up your pipeline using the API or Terraform, follow the instructio
{{< tabs >}}
{{% tab "Docker" %}}

Run the command below to install the Worker.

```shell
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
```
1. If you are using:
- **Secrets Manager**: Run this command to install the Worker:
```shell
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-v /path/to/local/bootstrap.yaml:/etc/observability-pipelines-worker/bootstrap.yaml \
datadog/observability-pipelines-worker run
```
- **Environment variables**: Run this command to install the Worker:

```shell
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
```

You must replace the placeholders with the following values, if applicable:
- `<DATADOG_API_KEY>`: Your Datadog API key.
- **Note**: The API key must be [enabled for Remote Configuration][1].
- `<PIPELINE_ID>`: The ID of your pipeline.
- `<DATADOG_SITE>`: The [Datadog site][2].
- `<SOURCE_ENV_VARIABLE>`: The environment variables required by the source you are using for your pipeline.
- For example: `DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282`
- See [Environment Variables][3] for a list of source environment variables.
- `<DESTINATION_ENV_VARIABLE>`: The environment variables required by the destinations you are using for your pipeline.
- For example: `DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088`
- See [Environment Variables][3] for a list of destination environment variables.
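To make the substitution concrete, the sketch below assembles a fully substituted command from example values. All of the values are hypothetical placeholders except the source and destination variables, which reuse the examples above; the snippet only prints the command for review, and on a real host you would run the printed `docker run` line directly:

```shell
# Hypothetical example values -- replace with your own.
DATADOG_API_KEY="0123456789abcdef0123456789abcdef"
PIPELINE_ID="11111111-2222-3333-4444-555555555555"
DATADOG_SITE="datadoghq.com"
SOURCE_ENV_VARIABLE="DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282"
DESTINATION_ENV_VARIABLE="DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088"

# Print the fully substituted command for review before running it.
printf 'docker run -i -e DD_API_KEY=%s -e DD_OP_PIPELINE_ID=%s -e DD_SITE=%s -e %s -e %s -p 8088:8088 datadog/observability-pipelines-worker run\n' \
  "$DATADOG_API_KEY" "$PIPELINE_ID" "$DATADOG_SITE" "$SOURCE_ENV_VARIABLE" "$DESTINATION_ENV_VARIABLE"
```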

You must replace the placeholders with the following values:
- `<DATADOG_API_KEY>`: Your Datadog API key.
- **Note**: The API key must be [enabled for Remote Configuration][1].
- `<PIPELINE_ID>`: The ID of your pipeline.
- `<DATADOG_SITE>`: The [Datadog site][2].
- `<SOURCE_ENV_VARIABLE>`: The environment variables required by the source you are using for your pipeline.
- For example: `DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282`
- See [Environment Variables][3] for a list of source environment variables.
- `<DESTINATION_ENV_VARIABLE>`: The environment variables required by the destinations you are using for your pipeline.
- For example: `DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088`
- See [Environment Variables][3] for a list of destination environment variables.

**Note**: By default, the `docker run` command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the `-p | --publish` option in the command:
```
-p 8282:8088 datadog/observability-pipelines-worker run
```
1. Modify the Worker bootstrap file to connect the Worker to your secrets manager. See [Secrets Management][4] for more information.
1. Restart the Worker container to use the updated bootstrap file:
```
docker restart <CONTAINER_NAME>
```

See [Update Existing Pipelines][3] if you want to make changes to your pipeline's configuration.

[1]: https://app.datadoghq.com/organization-settings/remote-config/setup
[2]: /getting_started/site/
[3]: /observability_pipelines/environment_variables/
[4]: /observability_pipelines/configuration/secrets_management

{{% /tab %}}
{{% tab "Kubernetes" %}}
@@ -147,33 +162,40 @@ If you are running a self-hosted and self-managed Kubernetes cluster, and define

Follow the steps below if you want to use the one-line installation script to install the Worker. Otherwise, see [Manually install the Worker on Linux](#manually-install-the-worker-on-linux).

Run the one-step command below to install the Worker.

```bash
DD_API_KEY=<DATADOG_API_KEY> DD_OP_PIPELINE_ID=<PIPELINE_ID> DD_SITE=<DATADOG_SITE> <SOURCE_ENV_VARIABLE> <DESTINATION_ENV_VARIABLE> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker2.sh)"
```

You must replace the placeholders with the following values:

- `<DATADOG_API_KEY>`: Your Datadog API key.
- **Note**: The API key must be [enabled for Remote Configuration][1].
- `<PIPELINE_ID>`: The ID of your pipeline.
- `<DATADOG_SITE>`: The [Datadog site][2].
- `<SOURCE_ENV_VARIABLE>`: The environment variables required by the source you are using for your pipeline.
- For example: `DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282`
- See [Environment Variables][3] for a list of source environment variables.
- `<DESTINATION_ENV_VARIABLE>`: The environment variables required by the destinations you are using for your pipeline.
- For example: `DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088`
- See [Environment Variables][3] for a list of destination environment variables.

**Note**: The environment variables used by the Worker in `/etc/default/observability-pipelines-worker` are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
1. If you are using:
- **Secrets Manager**: Run this one-step command to install the Worker:
```bash
DD_API_KEY=<DATADOG_API_KEY> DD_OP_PIPELINE_ID=<PIPELINE_ID> DD_SITE=<DATADOG_SITE> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker2.sh)"
```
- **Environment variables**: Run this one-step command to install the Worker:
```bash
DD_API_KEY=<DATADOG_API_KEY> DD_OP_PIPELINE_ID=<PIPELINE_ID> DD_SITE=<DATADOG_SITE> <SOURCE_ENV_VARIABLE> <DESTINATION_ENV_VARIABLE> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker2.sh)"
```
You must replace the placeholders with the following values, if applicable:
- `<DATADOG_API_KEY>`: Your Datadog API key.
- **Note**: The API key must be [enabled for Remote Configuration][1].
- `<PIPELINE_ID>`: The ID of your pipeline.
- `<DATADOG_SITE>`: The [Datadog site][2].
- `<SOURCE_ENV_VARIABLE>`: The environment variables required by the source you are using for your pipeline.
- For example: `DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282`
- See [Environment Variables][3] for a list of source environment variables.
- `<DESTINATION_ENV_VARIABLE>`: The environment variables required by the destinations you are using for your pipeline.
- For example: `DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088`
- See [Environment Variables][3] for a list of destination environment variables.
**Note**: The environment variables used by the Worker in `/etc/default/observability-pipelines-worker` are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
1. Modify the Worker bootstrap file to connect the Worker to your secrets manager. See [Secrets Management][5] for more information.
1. Restart the Worker to use the updated bootstrap file:
```
sudo systemctl restart observability-pipelines-worker
```
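The manual update that the note above describes can be sketched as follows. The snippet works on a scratch copy of the environment file with hypothetical values; on a real host you would edit `/etc/default/observability-pipelines-worker` in place with sudo and then restart the service:

```shell
# Work on a scratch copy; on a real host, edit
# /etc/default/observability-pipelines-worker directly with sudo.
envfile="$(mktemp)"
cat <<'EOF' > "$envfile"
DD_API_KEY=abc123
DD_OP_PIPELINE_ID=pipeline-1
DD_SITE=datadoghq.com
EOF

# Example change: point the Worker at the EU site.
sed -i 's/^DD_SITE=.*/DD_SITE=datadoghq.eu/' "$envfile"
grep '^DD_SITE=' "$envfile"   # prints DD_SITE=datadoghq.eu

# Then restart so the Worker picks up the new values:
#   sudo systemctl restart observability-pipelines-worker
```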

See [Update Existing Pipelines][4] if you want to make changes to your pipeline's configuration.

[1]: https://app.datadoghq.com/organization-settings/remote-config/setup
[2]: /getting_started/site/
[3]: /observability_pipelines/environment_variables/
[4]: /observability_pipelines/configuration/update_existing_pipelines
[5]: /observability_pipelines/configuration/secrets_management

{{% /tab %}}
{{% tab "CloudFormation" %}}
@@ -208,34 +230,36 @@ See [Update Existing Pipelines][1] if you want to make changes to your pipeline'
After you set up your source, destinations, and processors on the Build page of the pipeline UI, follow the steps on the Install page to install the Worker.

1. Select the platform on which you want to install the Worker.
1. Enter the [environment variables][7] for your sources and destinations, if applicable.
1. Follow the instructions on installing the Worker for your platform. The command provided in the UI to install the Worker has the relevant environment variables populated.
1. In **Review your secrets management**, if you select:
- **Secrets Manager** (Recommended): Ensure that your secrets are configured in your secrets manager.
- **Environment Variables**: Enter the [environment variables][7] for your sources and destinations, if applicable.
1. Follow the instructions on installing the Worker for your platform.

{{< tabs >}}
{{% tab "Docker" %}}

1. Click **Select API key** to choose the Datadog API key you want to use.
- **Note**: The API key must be [enabled for Remote Configuration][1].
1. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
```shell
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
```
**Note**: By default, the `docker run` command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the `-p | --publish` option in the command:
```
-p 8282:8088 datadog/observability-pipelines-worker run
```
1. Run the command provided in the UI to install the Worker. If you are using:
- **Secrets Manager**: The command points to the Worker bootstrap file that you configure to resolve secrets using your secrets manager.
- **Environment variables**: The command is automatically populated with the environment variables you entered earlier.
- **Note**: By default, the `docker run` command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the `-p | --publish` option in the command:
```
-p 8282:8088 datadog/observability-pipelines-worker run
```
1. If you are using **Secrets Manager**:
1. Modify the Worker bootstrap file to connect the Worker to your secrets manager. See [Secrets Management][3] for more information.
1. Restart the Worker container to use the updated bootstrap file:
```
docker restart <CONTAINER_NAME>
```
1. Navigate back to the Observability Pipelines installation page and click **Deploy**.

See [Update Existing Pipelines][2] if you want to make changes to your pipeline's configuration.

[1]: https://app.datadoghq.com/organization-settings/remote-config/setup
[2]: /observability_pipelines/configuration/update_existing_pipelines/
[3]: /observability_pipelines/configuration/secrets_management

{{% /tab %}}
{{% tab "Kubernetes" %}}
@@ -302,14 +326,20 @@ Follow the steps below if you want to use the one-line installation script to in
1. Click **Select API key** to choose the Datadog API key you want to use.
- **Note**: The API key must be [enabled for Remote Configuration][2].
1. Run the one-step command provided in the UI to install the Worker.

**Note**: The environment variables used by the Worker in `/etc/default/observability-pipelines-worker` are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
- **Note**: If you are using environment variables, the values in `/etc/default/observability-pipelines-worker` are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
1. If you are using **Secrets Manager**:
1. Modify the Worker bootstrap file to connect the Worker to your secrets manager. See [Secrets Management][3] for more information.
1. Restart the Worker to use the updated bootstrap file:
```
sudo systemctl restart observability-pipelines-worker
```
1. Navigate back to the Observability Pipelines installation page and click **Deploy**.

See [Update Existing Pipelines][1] if you want to make changes to your pipeline's configuration.

[1]: /observability_pipelines/configuration/update_existing_pipelines
[2]: https://app.datadoghq.com/organization-settings/remote-config/setup
[3]: /observability_pipelines/configuration/secrets_management

{{% /tab %}}
{{% tab "ECS Fargate" %}}
@@ -373,16 +403,25 @@ If you prefer not to use the one-line installation script for Linux, follow thes
sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
```
1. Add your keys, site (for example, `datadoghq.com` for US1), source, and destination environment variables to the Worker's environment file:
```shell
sudo cat <<EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
```
1. If you are using:
- **Secrets Manager**: Add your API key, site (for example, `datadoghq.com` for US1), and pipeline ID to the Worker's environment file:
```shell
sudo tee /etc/default/observability-pipelines-worker <<EOF
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
EOF
```
- **Environment variables**: Add your API key, site (for example, `datadoghq.com` for US1), source, and destination environment variables to the Worker's environment file:
```shell
sudo tee /etc/default/observability-pipelines-worker <<EOF
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
```
1. Start the Worker:
```
sudo systemctl restart observability-pipelines-worker
@@ -417,16 +456,25 @@ See [Update Existing Pipelines][1] if you want to make changes to your pipeline'
sudo yum makecache
sudo yum install observability-pipelines-worker
```
1. Add your keys, site (for example, `datadoghq.com` for US1), source, and destination environment variables to the Worker's environment file:
```shell
sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
```
1. If you are using:
- **Secrets Manager**: Add your API key, site (for example, `datadoghq.com` for US1), and pipeline ID to the Worker's environment file:
```shell
sudo tee /etc/default/observability-pipelines-worker <<-EOF
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
EOF
```
- **Environment variables**: Add your API key, site (for example, `datadoghq.com` for US1), source, and destination environment variables to the Worker's environment file:
```shell
sudo tee /etc/default/observability-pipelines-worker <<-EOF
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
```
1. Start the Worker:
```shell
sudo systemctl restart observability-pipelines-worker
@@ -0,0 +1,60 @@
---
title: Secrets Management
description: Learn how to set up the Worker to retrieve secrets from your secrets manager.
disable_toc: false
further_reading:
- link: /observability_pipelines/configuration/set_up_pipelines/
tag: "Documentation"
text: "Set up pipelines"
- link: /observability_pipelines/configuration/install_the_worker
tag: "Documentation"
text: "Install the Worker"
---

## Overview

The Observability Pipelines Worker helps you securely manage your secrets by integrating with the following secrets management solutions:

- AWS Secrets Manager
- AWS Systems Manager
- Azure Key Vault
- HashiCorp Vault
- JSON File
- YAML File
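As a purely illustrative sketch (the exact file layout depends on the backend and is covered in the backend-specific sections below; the key names here are hypothetical), a file-based backend such as the YAML File option maps secret names to the values the Worker resolves at startup:

```yaml
# Hypothetical secrets file layout -- consult the backend-specific
# section for the exact format the Worker expects.
splunk_hec_token: <SPLUNK_HEC_TOKEN>
datadog_api_key: <DATADOG_API_KEY>
```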

## Configure the Worker to retrieve secrets

{{% collapse-content title="AWS Secrets Manager" level="h4" expanded=false id="aws-secrets-manager" %}}

TKTK

{{% /collapse-content %}}
{{% collapse-content title="AWS Systems Manager" level="h4" expanded=false id="aws-systems-manager" %}}

TKTK

{{% /collapse-content %}}
{{% collapse-content title="Azure Key Vault" level="h4" expanded=false id="azure-key-vault" %}}

TKTK

{{% /collapse-content %}}
{{% collapse-content title="HashiCorp Vault" level="h4" expanded=false id="hashicorp-vault" %}}

TKTK

{{% /collapse-content %}}
{{% collapse-content title="JSON File" level="h4" expanded=false id="json-file" %}}

TKTK

{{% /collapse-content %}}
{{% collapse-content title="YAML File" level="h4" expanded=false id="yaml-file" %}}

TKTK

{{% /collapse-content %}}

## Further reading

{{< partial name="whats-next/whats-next.html" >}}