diff --git a/content/en/observability_pipelines/configuration/install_the_worker/_index.md b/content/en/observability_pipelines/configuration/install_the_worker/_index.md
index 8097c414ef70b..21761a17b94b1 100644
--- a/content/en/observability_pipelines/configuration/install_the_worker/_index.md
+++ b/content/en/observability_pipelines/configuration/install_the_worker/_index.md
@@ -126,6 +126,8 @@ The Observability Pipelines Worker supports all major Kubernetes distributions,
 
 See [Update Existing Pipelines][5] if you want to make changes to your pipeline's configuration.
 
+**Note**: If you enable [disk buffering][6] for destinations, you can use a Kubernetes [persistent volume][7] to handle backpressure when a destination is unavailable or can't keep up with the volume of data that the Worker is sending.
+
 #### Self-hosted and self-managed Kubernetes clusters
 
 If you are running a self-hosted and self-managed Kubernetes cluster, and defined zones with node labels using `topology.kubernetes.io/zone`, then you can use the Helm chart values file as is. However, if you are not using the label `topology.kubernetes.io/zone`, you need to update the `topologyKey` in the `values.yaml` file to match the key you are using. Or if you run your Kubernetes install without zones, remove the entire `topology.kubernetes.io/zone` section.
@@ -135,6 +137,8 @@
 [3]: https://app.datadoghq.com/organization-settings/remote-config/setup
 [4]: /observability_pipelines/environment_variables/
 [5]: https://github.com/DataDog/helm-charts/blob/main/charts/observability-pipelines-worker/values.yaml
+[6]: /observability_pipelines/scaling_and_performance/handling_load_and_backpressure/#disk-buffers
+[7]: https://github.com/DataDog/helm-charts/blob/23624b6e49eef98e84b21689672bb63a7a5df48b/charts/observability-pipelines-worker/values.yaml#L268
 
 {{% /tab %}}
 {{% tab "Linux" %}}
diff --git a/content/en/observability_pipelines/scaling_and_performance/handling_load_and_backpressure.md b/content/en/observability_pipelines/scaling_and_performance/handling_load_and_backpressure.md
index 4cd84039ccb4a..c5ccb2380f5a2 100644
--- a/content/en/observability_pipelines/scaling_and_performance/handling_load_and_backpressure.md
+++ b/content/en/observability_pipelines/scaling_and_performance/handling_load_and_backpressure.md
@@ -44,6 +44,25 @@ Observability Pipelines destination's buffers are configured to block events, wh
 
 Observability Pipelines destinations can be configured with disk buffers (in Preview). When disk buffering is enabled for a destination, every event is first sent through the buffer and written to the data files, before the data is sent to the downstream integration. By default, data is not synchronized for every write, but instead synchronized on an interval (500 milliseconds), which allows for high throughput with a reduced risk of data loss.
 
+### Kubernetes persistent volumes
+
+If you enable disk buffering for destinations, you can use a Kubernetes [persistent volume][1] to handle backpressure when a destination is unavailable or can't keep up with the volume of data that the Worker is sending.
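+
+As a minimal sketch, the persistent volume settings might look like the following in the chart's `values.yaml`. The key names below are illustrative assumptions, not the confirmed chart schema; see the linked `values.yaml` for the authoritative keys:
+
+```yaml
+# Hypothetical excerpt of a values.yaml for the observability-pipelines-worker
+# Helm chart; key names are assumptions, so confirm them against the chart
+# version you deploy.
+persistence:
+  enabled: true           # provision a PersistentVolumeClaim for the disk buffers
+  storageClassName: gp3   # any StorageClass available in your cluster
+  size: 288Gi             # size the volume for the backlog you expect to absorb
+```
+
+Because the buffer's data files live on the persistent volume, they can outlive individual pod restarts.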
+
 ## Further reading
 
 {{< partial name="whats-next/whats-next.html" >}}
+
+[1]: https://github.com/DataDog/helm-charts/blob/23624b6e49eef98e84b21689672bb63a7a5df48b/charts/observability-pipelines-worker/values.yaml#L268
\ No newline at end of file
diff --git a/layouts/shortcodes/observability_pipelines/install_worker/kubernetes.md b/layouts/shortcodes/observability_pipelines/install_worker/kubernetes.md
index 84871d3b40c08..fdf6eb1567e3c 100644
--- a/layouts/shortcodes/observability_pipelines/install_worker/kubernetes.md
+++ b/layouts/shortcodes/observability_pipelines/install_worker/kubernetes.md
@@ -37,6 +37,8 @@ The Observability Pipelines Worker supports all major Kubernetes distributions,
 
 See [Update Existing Pipelines][602] if you want to make changes to your pipeline's configuration.
 
+**Note**: If you enable [disk buffering][605] for destinations, you can use a Kubernetes [persistent volume][606] to handle backpressure when a destination is unavailable or can't keep up with the volume of data that the Worker is sending.
+
 #### Self-hosted and self-managed Kubernetes clusters
 
 If you are running a self-hosted and self-managed Kubernetes cluster, and defined zones with node labels using `topology.kubernetes.io/zone`, then you can use the Helm chart values file as is. However, if you are not using the label `topology.kubernetes.io/zone`, you need to update the `topologyKey` in the `values.yaml` file to match the key you are using. Or if you run your Kubernetes install without zones, remove the entire `topology.kubernetes.io/zone` section.
@@ -44,4 +46,6 @@ If you are running a self-hosted and self-managed Kubernetes cluster, and define
 [601]: /resources/yaml/observability_pipelines/v2/setup/values.yaml
 [602]: /observability_pipelines/update_existing_pipelines
 [603]: https://github.com/DataDog/helm-charts/blob/main/charts/observability-pipelines-worker/values.yaml
-[604]: https://app.datadoghq.com/organization-settings/remote-config/setup
\ No newline at end of file
+[604]: https://app.datadoghq.com/organization-settings/remote-config/setup
+[605]: /observability_pipelines/scaling_and_performance/handling_load_and_backpressure/#disk-buffers
+[606]: https://github.com/DataDog/helm-charts/blob/23624b6e49eef98e84b21689672bb63a7a5df48b/charts/observability-pipelines-worker/values.yaml#L268
\ No newline at end of file