docs/use-cases/observability/clickstack/example-datasets/kubernetes.md
import DemoArchitecture from '@site/docs/use-cases/observability/clickstack/exam
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
import hyperdx_kubernetes_data from '@site/static/images/use-cases/observability/hyperdx-kubernetes-data.png';
import copy_api_key from '@site/static/images/use-cases/observability/copy_api_key.png';
import hyperdx_cloud_datasource from '@site/static/images/use-cases/observability/hyperdx_cloud_datasource.png';
import hyperdx_create_new_source from '@site/static/images/use-cases/observability/hyperdx_create_new_source.png';
import hyperdx_create_trace_datasource from '@site/static/images/use-cases/observability/hyperdx_create_trace_datasource.png';
import dashboard_kubernetes from '@site/static/images/use-cases/observability/hyperdx-dashboard-kubernetes.png';
This guide shows you how to collect logs and metrics from your Kubernetes system, sending them to **ClickStack** for visualization and analysis. For demo data, we optionally use the ClickStack fork of the official OpenTelemetry demo.
## Prerequisites {#prerequisites}
This guide requires you to have:
- A **Kubernetes cluster** (v1.20+ recommended) with at least 32 GiB of RAM and 100 GB of disk space available on one node for ClickHouse.
- **[Helm](https://helm.sh/)** v3+
- **`kubectl`**, configured to interact with your cluster
## Deployment options {#deployment-options}
To simulate application traffic, you can optionally deploy the ClickStack fork of the OpenTelemetry demo.
If your setup needs TLS certificates, install [cert-manager](https://cert-manager.io/) using Helm:
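The installation command is elided from this excerpt. A typical cert-manager install looks like the following; note that recent chart versions use `--set crds.enabled=true`, while older versions use `--set installCRDs=true`:

```shell
# Add the Jetstack chart repository and install cert-manager into its own namespace
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```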
This **step is optional and intended for users with no existing pods to monitor**. Users with existing services deployed in their Kubernetes environment can skip it; note, however, that the demo includes instrumented microservices which generate trace and session replay data, allowing you to explore all features of ClickStack.
The following deploys the ClickStack fork of the OpenTelemetry Demo application stack within a Kubernetes cluster, tailored for observability testing and showcasing instrumentation. It includes backend microservices, load generators, telemetry pipelines, supporting infrastructure (e.g., Kafka, Redis), and SDK integrations with ClickStack.
All services are deployed to the `otel-demo` namespace. Each deployment includes:
- Automatic instrumentation with OTel and ClickStack SDKs for traces, metrics, and logs.
- All services send their instrumentation to a `my-hyperdx-hdx-oss-v2-otel-collector` OpenTelemetry collector (not deployed)
- [Forwarding of resource tags](/use-cases/observability/clickstack/ingesting-data/kubernetes#forwarding-resouce-tags-to-pods) to correlate logs, metrics, and traces via the environment variable `OTEL_RESOURCE_ATTRIBUTES`.
### Add the ClickStack Helm chart repository {#add-helm-clickstack}
To deploy ClickStack, we use the [official Helm chart](https://clickhouse.com/docs/use-cases/observability/clickstack/deployment/helm).
This requires us to add the HyperDX Helm repository:
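The commands themselves are elided from this excerpt; assuming the HyperDX charts are served from the hyperdxio GitHub Pages Helm repository (the repo alias is illustrative), they are typically:

```shell
# Add the HyperDX Helm repository and refresh the local chart index
helm repo add hyperdx https://hyperdxio.github.io/helm-charts
helm repo update
```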
With the Helm chart repository added, you can deploy ClickStack to your cluster. You can either run all components, including ClickHouse and HyperDX, within your Kubernetes environment, or use ClickHouse Cloud, where HyperDX is also available as a managed service.
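As a sketch, a default all-in-one install would look like the following. The release name `my-hyperdx` is chosen to match the `my-hyperdx-hdx-oss-v2-otel-collector` name referenced elsewhere in this guide, and the repo alias is assumed:

```shell
# Install the ClickStack chart (HyperDX, ClickHouse, and the OTel collector) with default values
helm install my-hyperdx hyperdx/hdx-oss-v2
```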
This chart also installs ClickHouse and the OTel collector. For production, it is recommended that you use the ClickHouse and OTel collector operators and/or ClickHouse Cloud.

To disable ClickHouse and the OTel collector, set the following values:
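The values block itself is elided from this excerpt. The key names below are illustrative; confirm the real ones against the chart before applying:

```yaml
# Illustrative value names - verify with: helm show values hyperdx/hdx-oss-v2
clickhouse:
  enabled: false
otel:
  enabled: false
```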
If you'd rather use ClickHouse Cloud, you can deploy ClickStack and [disable the included ClickHouse](https://clickhouse.com/docs/use-cases/observability/clickstack/deployment/helm#using-clickhouse-cloud).
:::note
The chart currently always deploys both HyperDX and MongoDB. While these components offer an alternative access path, they are not integrated with ClickHouse Cloud authentication. These components are intended for administrators in this deployment model, [providing access to the secure ingestion key](#retrieve-ingestion-api-key) needed to ingest through the deployed OTel collector, but should not be exposed to end users.
:::
```shell
# specify ClickHouse Cloud credentials
export CLICKHOUSE_URL=<CLICKHOUSE_CLOUD_URL> # full https URL
```
To verify the deployment status, run the following command and confirm all components are in the `Running` state. Note that ClickHouse will be absent from this list for users using ClickHouse Cloud:
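The command is elided here; listing the pods in the release namespace (assumed to be `otel-demo`, as used throughout this guide) is presumably sufficient:

```shell
# List pods and check their STATUS column
kubectl get pods -n otel-demo
```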
:::note
Even when using ClickHouse Cloud, the local HyperDX instance deployed in the Kubernetes cluster is still required. It provides an ingestion key, managed by the OpAMP server bundled with HyperDX, which secures ingestion through the deployed OTel collector - a capability not currently available in the ClickHouse Cloud-hosted version.
:::
For security, the service uses `ClusterIP` and is not exposed externally by default.
To access the HyperDX UI, port forward from port 3000 to the local port 8080.
```shell
kubectl port-forward \
  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
  8080:3000 \
  -n otel-demo
```
Navigate to [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
### Retrieve the ingestion API key {#retrieve-ingestion-api-key}

Navigate to [`Team Settings`](http://localhost:8080/team) and copy the `Ingestion API Key`.
<Image img={copy_api_key} alt="Copy API key" size="lg"/>
### Create API Key Kubernetes Secret {#create-api-key-kubernetes-secret}
Create a new Kubernetes secret with the Ingestion API Key and a config map containing the location of the OTel collector deployed with the ClickStack Helm chart. Later components will use these to ingest into that collector:
```shell
# create secret with the ingestion API key
# (secret and key names below are illustrative - substitute those expected by your deployment)
kubectl create secret generic hyperdx-secret \
  --from-literal=HYPERDX_API_KEY=<ingestion-api-key> \
  -n otel-demo
```
Trace and log data from demo services should now begin to flow into HyperDX.
### Add the OpenTelemetry Helm repo {#add-otel-helm-repo}
To collect Kubernetes metrics, we will deploy a standard OTel collector, configuring this to send data securely to our ClickStack collector using the above ingestion API key.
This requires us to install the OpenTelemetry Helm repo:
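The commands are elided from this excerpt; the OpenTelemetry community charts are published at the URL below:

```shell
# Add the OpenTelemetry community Helm repository
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```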
To collect logs and metrics from each node and from the cluster itself, we'll need to deploy two separate OpenTelemetry collectors, each with its own manifest. The two manifests provided - `k8s_deployment.yaml` and `k8s_daemonset.yaml` - work together to collect comprehensive telemetry data from your Kubernetes cluster.
- `k8s_deployment.yaml` deploys a **single OpenTelemetry Collector instance** responsible for collecting **cluster-wide events and metadata**. It gathers Kubernetes events, cluster metrics, and enriches telemetry data with pod labels and annotations. This collector runs as a standalone deployment with a single replica to avoid duplicate data.
- `k8s_daemonset.yaml` deploys a **DaemonSet-based collector** that runs on every node in your cluster. It collects **node-level and pod-level metrics**, as well as container logs, using components like `kubeletstats`, `hostmetrics`, and Kubernetes attribute processors. These collectors enrich logs with metadata and send them to HyperDX using the OTLP exporter.
Together, these manifests enable full-stack observability across the cluster, from infrastructure to application-level telemetry, and send the enriched data to ClickStack for centralized analysis.
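Assuming the two manifests are value files for the community `opentelemetry-collector` chart (release names illustrative), the collectors might be installed as follows:

```shell
# Cluster-wide collector (single replica)
helm install otel-collector-deployment open-telemetry/opentelemetry-collector \
  -f k8s_deployment.yaml -n otel-demo

# Per-node collector (DaemonSet)
helm install otel-collector-daemonset open-telemetry/opentelemetry-collector \
  -f k8s_daemonset.yaml -n otel-demo
```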
```yaml
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor
kubernetesAttributes:
  enabled: true
  # When enabled, the processor will extract all labels for an associated pod and add them as resource attributes.
  # The label's exact name will be the key.
  extractAllPodLabels: true
  # When enabled, the processor will extract all annotations for an associated pod and add them as resource attributes.
  # The annotation's exact name will be the key.
  extractAllPodAnnotations: true
# Configures the collector to collect node, pod, and container metrics from the API server on a kubelet.
```
### Explore Kubernetes data in HyperDX {#explore-kubernetes-data-hyperdx}
Navigate to your HyperDX UI - either using your Kubernetes-deployed instance or via ClickHouse Cloud.
<p/>
<details>
<summary>Using ClickHouse Cloud</summary>
If using ClickHouse Cloud, simply log in to your ClickHouse Cloud service and select "HyperDX" from the left menu. You will be automatically authenticated and will not need to create a user.
When prompted to create a datasource, retain all default values within the create source modal, completing the `Table` field with the value `otel_logs` to create a logs source. All other settings should be auto-detected, allowing you to click `Save New Source`.
<Image force img={hyperdx_cloud_datasource} alt="ClickHouse Cloud HyperDX Datasource" size="lg"/>
You will also need to create data sources for traces and metrics.
For example, to create sources for traces and OTel metrics, users can select `Create New Source` from the top menu.
<Image force img={hyperdx_create_new_source} alt="HyperDX create new source" size="lg"/>
From here, select the required source type followed by the appropriate table, e.g. for traces, select the table `otel_traces`. All settings should be auto-detected.
<Image force img={hyperdx_create_trace_datasource} alt="HyperDX create trace source" size="lg"/>
:::note Correlating sources
Note that different data sources in ClickStack - such as logs and traces - can be correlated with each other. To enable this, additional configuration is required on each source. For example, in the logs source, you can specify a corresponding trace source, and vice versa in the traces source. See "Correlated sources" for further details.
:::
</details>
<details>
<summary>Using self-managed deployment</summary>
To access the locally deployed HyperDX, you can port forward using the following command and access HyperDX at [http://localhost:8080](http://localhost:8080).
```shell
kubectl port-forward \
  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
  8080:3000 \
  -n otel-demo
```
:::note ClickStack in production
In production, if not using HyperDX in ClickHouse Cloud, we recommend using an ingress with TLS.
:::
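For example, a minimal ingress sketch might look like the following. The hostname, TLS secret, issuer, and backend service name are all illustrative - in particular, the service name is inferred from the `hdx-oss-v2` labels used elsewhere in this guide, so check the actual name in your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hyperdx-ingress
  namespace: otel-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod # assumes cert-manager is installed
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - hyperdx.example.com
      secretName: hyperdx-tls
  rules:
    - host: hyperdx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-hyperdx-hdx-oss-v2-app # illustrative - check `kubectl get svc -n otel-demo`
                port:
                  number: 3000
```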
To explore the Kubernetes data, navigate to the dedicated preset dashboard at `/kubernetes`, e.g. [http://localhost:8080/kubernetes](http://localhost:8080/kubernetes).
Each of the tabs, Pods, Nodes, and Namespaces, should be populated with data.