Commit 44234fe

format spelling fixes
1 parent 6371a77 commit 44234fe

File tree: 5 files changed, +125 additions, −150 deletions

docs/use-cases/observability/clickstack/example-datasets/kubernetes.md

Lines changed: 75 additions & 43 deletions
@@ -12,17 +12,20 @@ import DemoArchitecture from '@site/docs/use-cases/observability/clickstack/exam
 import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
 import hyperdx_kubernetes_data from '@site/static/images/use-cases/observability/hyperdx-kubernetes-data.png';
 import copy_api_key from '@site/static/images/use-cases/observability/copy_api_key.png';
+import hyperdx_cloud_datasource from '@site/static/images/use-cases/observability/hyperdx_cloud_datasource.png';
+import hyperdx_create_new_source from '@site/static/images/use-cases/observability/hyperdx_create_new_source.png';
+import hyperdx_create_trace_datasource from '@site/static/images/use-cases/observability/hyperdx_create_trace_datasource.png';
+import dashboard_kubernetes from '@site/static/images/use-cases/observability/hyperdx-dashboard-kubernetes.png';

 This guide allows you to collect logs and metrics from your Kubernetes system, sending them to **ClickStack** for visualization and analysis. For demo data, we optionally use the ClickStack fork of the official OpenTelemetry demo.

 ## Prerequisites {#prerequisites}

 This guide requires you to have:

-- A **Kubernetes cluster** (v1.20+ recommended) with:
-  - atleast 32GiB of RAM and 100GB of disk space available on one node for ClickHouse.
+- A **Kubernetes cluster** (v1.20+ recommended) with at least 32 GiB of RAM and 100GB of disk space available on one node for ClickHouse.
 - **[Helm](https://helm.sh/)** v3+
-- **kubectl**, configured to interact with your cluster
+- **`kubectl`**, configured to interact with your cluster

 ## Deployment options {#deployment-options}

@@ -43,8 +46,9 @@ To simulate application traffic, you can optionally deploy the ClickStack fork o

 If your setup needs TLS certificates, install [cert-manager](https://cert-manager.io/) using Helm:

-```
-# Add Cert manager repo
+```shell
+# Add cert-manager repo
+
 helm repo add jetstack https://charts.jetstack.io

 helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set startupapicheck.timeout=5m --set installCRDs=true --set global.leaderElection.namespace=cert-manager
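
The `--set` flags above can equivalently be kept in a small values file — a sketch using the corresponding cert-manager chart keys (the filename is an assumption):

```yaml
# cert-manager-values.yaml - mirrors the --set flags used above
installCRDs: true
startupapicheck:
  timeout: 5m
global:
  leaderElection:
    namespace: cert-manager
```

Applied with `helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace -f cert-manager-values.yaml`.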
@@ -54,24 +58,23 @@ helm install cert-manager jetstack/cert-manager --namespace cert-manager --creat

 This **step is optional and intended for users with no existing pods to monitor**. Users with existing services deployed in their Kubernetes environment can skip it, but the demo includes instrumented microservices which generate trace and session replay data, allowing users to explore all features of ClickStack.

-This following deploys the full ClickStack fork of the OpenTelemetry Demo application stack within a Kubernetes cluster, tailored for observability testing and showcasing instrumentation. It includes backend microservices, load generators, telemetry pipelines, supporting infrastructure (e.g., Kafka, Redis/Valkey), and integrations with HyperDX and ClickHouse.
+The following deploys the ClickStack fork of the OpenTelemetry Demo application stack within a Kubernetes cluster, tailored for observability testing and showcasing instrumentation. It includes backend microservices, load generators, telemetry pipelines, supporting infrastructure (e.g., Kafka, Redis), and SDK integrations with ClickStack.

 All services are deployed to the `otel-demo` namespace. Each deployment includes:

 - Automatic instrumentation with OTel and ClickStack SDKs for traces, metrics, and logs.
 - All services send their instrumentation to a `my-hyperdx-hdx-oss-v2-otel-collector` OpenTelemetry collector (not yet deployed)
 - [Forwarding of resource tags](/use-cases/observability/clickstack/ingesting-data/kubernetes#forwarding-resouce-tags-to-pods) to correlate logs, metrics, and traces via the environment variable `OTEL_RESOURCE_ATTRIBUTES`.

-
 ```shell
-## download demo kubernertes manifest file
+## download demo Kubernetes manifest file
 curl -O https://raw.githubusercontent.com/ClickHouse/opentelemetry-demo/refs/heads/main/kubernetes/opentelemetry-demo.yaml
 # wget alternative
 # wget https://raw.githubusercontent.com/ClickHouse/opentelemetry-demo/refs/heads/main/kubernetes/opentelemetry-demo.yaml
 kubectl apply --namespace otel-demo -f opentelemetry-demo.yaml
 ```

-On deployment of the demo, confirm all pods have been successfuly created and are in the `Running` state:
+On deployment of the demo, confirm all pods have been successfully created and are in the `Running` state:

 ```shell
 kubectl get pods -n=otel-demo
@@ -103,7 +106,7 @@ valkey-cart-5f7b667bb7-gl5v4 1/1 Running 0 13m

 ### Add the ClickStack Helm chart repository {#add-helm-clickstack}

-To deploy ClickStack we use the [official Helm chart](https://clickhouse.com/docs/use-cases/observability/clickstack/deployment/helm).
+To deploy ClickStack, we use the [official Helm chart](https://clickhouse.com/docs/use-cases/observability/clickstack/deployment/helm).

 This requires us to add the HyperDX Helm repository:

@@ -116,7 +119,6 @@ helm repo update

 With the Helm chart installed, you can deploy ClickStack to your cluster. You can either run all components, including ClickHouse and HyperDX, within your Kubernetes environment, or use ClickHouse Cloud, where HyperDX is also available as a managed service.

-
 <details>
 <summary>Self-managed deployment</summary>

@@ -138,35 +140,38 @@ helm install my-hyperdx hyperdx/hdx-oss-v2 --set clickhouse.persistence.dataSi
 ```

 :::warning ClickStack in production
-This chart also installs ClickHouse and the otel-collector. For production it is recommended that you use the clickhouse and otel-collector operators instead and/or use ClickHouse Cloud.
-
-To disable clickhouse and otel-collector, set the following values:
-```
+This chart also installs ClickHouse and the OTel collector. For production, it is recommended that you use the ClickHouse and OTel collector operators and/or use ClickHouse Cloud.
+
+To disable ClickHouse and the OTel collector, set the following values:
+
+```shell
 helm install myrelease <chart-name-or-path> --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.enabled=false
 ```
+
 :::

 </details>

 <details>
 <summary>Using ClickHouse Cloud</summary>

-If you'd rather use ClickHouse Cloud, you can deploy Clickstack and [disable the included ClickHouse](https://clickhouse.com/docs/use-cases/observability/clickstack/deployment/helm#using-clickhouse-cloud).
+If you'd rather use ClickHouse Cloud, you can deploy ClickStack and [disable the included ClickHouse](https://clickhouse.com/docs/use-cases/observability/clickstack/deployment/helm#using-clickhouse-cloud).

 :::note
 The chart currently always deploys both HyperDX and MongoDB. While these components offer an alternative access path, they are not integrated with ClickHouse Cloud authentication. These components are intended for administrators in this deployment model, [providing access to the secure ingestion key](#retrieve-ingestion-api-key) needed to ingest through the deployed OTel collector, but should not be exposed to end users.
 :::

-```
+```shell
 # specify ClickHouse Cloud credentials
 export CLICKHOUSE_URL=<CLICKHOUSE_CLOUD_URL> # full https url
 export CLICKHOUSE_USER=<CLICKHOUSE_USER>
 export CLICKHOUSE_PASSWORD=<CLICKHOUSE_PASSWORD>

 helm install my-hyperdx hyperdx/hdx-oss-v2 --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.clickhouseEndpoint=${CLICKHOUSE_URL} --set clickhouse.config.users.otelUser=${CLICKHOUSE_USER} --set clickhouse.config.users.otelUserPassword=${CLICKHOUSE_PASSWORD} --set global.storageClassName="standard-rwo" -n otel-demo
 ```
-</details>

+</details>

 To verify the deployment status, run the following command and confirm all components are in the `Running` state. Note that ClickHouse will be absent from this for users using ClickHouse Cloud:
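
The long `--set` invocation above can instead be kept in a values file. This is a sketch mirroring the same chart keys; the filename is an assumption and the storage class is environment-specific:

```yaml
# values-cloud.yaml - mirrors the --set flags used above
clickhouse:
  enabled: false          # use ClickHouse Cloud instead of the bundled ClickHouse
  persistence:
    enabled: false
  config:
    users:
      otelUser: <CLICKHOUSE_USER>
      otelUserPassword: <CLICKHOUSE_PASSWORD>
otel:
  clickhouseEndpoint: <CLICKHOUSE_CLOUD_URL>   # full https URL
global:
  storageClassName: "standard-rwo"             # environment-specific
```

Applied with `helm install my-hyperdx hyperdx/hdx-oss-v2 -f values-cloud.yaml -n otel-demo`.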

@@ -186,15 +191,15 @@ my-hyperdx-hdx-oss-v2-otel-collector-64cf698f5c-8s7qj 1/1 Running 0
 Even when using ClickHouse Cloud, the local HyperDX instance deployed in the Kubernetes cluster is still required. It provides an ingestion key managed by the OpAMP server bundled with HyperDX, which secures ingestion through the deployed OTel collector - a capability not currently available in the ClickHouse Cloud-hosted version.
 :::

-For security, the service uses ClusterIP and is not exposed externally by default.
+For security, the service uses `ClusterIP` and is not exposed externally by default.

 To access the HyperDX UI, port forward from port 3000 to the local port 8080.

 ```shell
 kubectl port-forward \
-pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
+  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
 8080:3000 \
-n otel-demo
+  -n otel-demo
 ```

 Navigate to [http://localhost:8080](http://localhost:8080) to access the HyperDX UI.
@@ -211,10 +216,9 @@ Navigate to [`Team Settings`](http://localhost:8080/team) and copy the `Ingestio

 <Image img={copy_api_key} alt="Copy API key" size="lg"/>

-
 ### Create API Key Kubernetes Secret {#create-api-key-kubernetes-secret}

-Create a new Kubernetes secret with the Ingestion API Key and config map containing the location of the OTel collector deployed with the ClickStack helm chart. This will be used by later components to allow ingest into collector deployed with the ClickStack helm chart:
+Create a new Kubernetes secret with the Ingestion API Key and a config map containing the location of the OTel collector deployed with the ClickStack Helm chart. Later components will use these to ingest into that collector:

 ```shell
 # create secret with the ingestion API key
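
The same secret and config map can be expressed declaratively. This is an illustrative sketch only — the secret name, key names, and collector endpoint below are assumptions; match them to the names the guide's `kubectl create` commands actually use before applying:

```yaml
# Illustrative equivalents of the `kubectl create secret`/`create configmap`
# commands. Names, keys, and the endpoint are placeholders for the sketch.
apiVersion: v1
kind: Secret
metadata:
  name: hyperdx-secret          # placeholder name
  namespace: otel-demo
type: Opaque
stringData:
  API_KEY: <ingestion_api_key>  # the key copied from Team Settings
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-config-vars
  namespace: otel-demo
data:
  # Assumed in-cluster OTLP/HTTP endpoint of the ClickStack collector
  YOUR_OTEL_COLLECTOR_ENDPOINT: http://my-hyperdx-hdx-oss-v2-otel-collector:4318
```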
@@ -238,7 +242,7 @@ Trace and log data from demo services should now begin to flow into HyperDX.

 ### Add the OpenTelemetry Helm repo {#add-otel-helm-repo}

-To collect Kubernetes metrics we will deploy a standard OTel collector, configuring this to send data securely to our ClickStack collector using the above ingestion API key.
+To collect Kubernetes metrics, we will deploy a standard OTel collector, configuring this to send data securely to our ClickStack collector using the above ingestion API key.

 This requires us to install the OpenTelemetry Helm repo:

@@ -249,15 +253,15 @@ helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm

 ### Deploy Kubernetes collector components {#deploy-kubernetes-collector-components}

-To collect logs and metrics from both each node and the cluster itself, we'll need to deploy two separate OpenTelemetry collectors each with its own manifest. The two manifests provided `k8s_deployment.yaml` and `k8s_daemonset.yaml` work together to collect comprehensive telemetry data from your Kubernetes cluster.
+To collect logs and metrics from both the nodes and the cluster itself, we'll need to deploy two separate OpenTelemetry collectors, each with its own manifest. The two manifests provided - `k8s_deployment.yaml` and `k8s_daemonset.yaml` - work together to collect comprehensive telemetry data from your Kubernetes cluster.

 - `k8s_deployment.yaml` deploys a **single OpenTelemetry Collector instance** responsible for collecting **cluster-wide events and metadata**. It gathers Kubernetes events, cluster metrics, and enriches telemetry data with pod labels and annotations. This collector runs as a standalone deployment with a single replica to avoid duplicate data.

 - `k8s_daemonset.yaml` deploys a **DaemonSet-based collector** that runs on every node in your cluster. It collects **node-level and pod-level metrics**, as well as container logs, using components like `kubeletstats`, `hostmetrics`, and Kubernetes attribute processors. These collectors enrich logs with metadata and send them to HyperDX using the OTLP exporter.

 Together, these manifests enable full-stack observability across the cluster, from infrastructure to application-level telemetry, and send the enriched data to ClickStack for centralized analysis.

-First install the collector as a deployment:
+First, install the collector as a deployment:

 ```shell
 # download manifest file
@@ -270,7 +274,7 @@ helm install --namespace otel-demo k8s-otel-deployment open-telemetry/openteleme
 <summary>k8s_deployment.yaml</summary>

 ```yaml
-# deployment.yaml
+# k8s_deployment.yaml
 mode: deployment

 image:
@@ -283,14 +287,14 @@ replicaCount: 1
 presets:
   kubernetesAttributes:
     enabled: true
-    # When enabled the processor will extra all labels for an associated pod and add them as resource attributes.
+    # When enabled, the processor will extract all labels for an associated pod and add them as resource attributes.
     # The label's exact name will be the key.
     extractAllPodLabels: true
-    # When enabled the processor will extra all annotations for an associated pod and add them as resource attributes.
+    # When enabled, the processor will extract all annotations for an associated pod and add them as resource attributes.
     # The annotation's exact name will be the key.
     extractAllPodAnnotations: true
-  # Configures the collector to collect kubernetes events.
-  # Adds the k8sobject receiver to the logs pipeline and collects kubernetes events by default.
+  # Configures the collector to collect Kubernetes events.
+  # Adds the k8sobject receiver to the logs pipeline and collects Kubernetes events by default.
   # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-objects-receiver
   kubernetesEvents:
     enabled: true
@@ -332,8 +336,7 @@ config:

 </details>

-
-Next deploy the collector as a DaemonSet for node and pod-level metrics and logs:
+Next, deploy the collector as a DaemonSet for node and pod-level metrics and logs:

 ```shell
 # download manifest file
@@ -344,10 +347,12 @@ helm install --namespace otel-demo k8s-otel-daemonset open-telemetry/opentelemet

 <details>

-<summary>k8s_daemonset.yaml</summary>
+<summary>
+`k8s_daemonset.yaml`
+</summary>

 ```yaml
-# daemonset.yaml
+# k8s_daemonset.yaml
 mode: daemonset

 image:
@@ -375,10 +380,10 @@ presets:
   # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor
   kubernetesAttributes:
     enabled: true
-    # When enabled the processor will extra all labels for an associated pod and add them as resource attributes.
+    # When enabled, the processor will extract all labels for an associated pod and add them as resource attributes.
     # The label's exact name will be the key.
     extractAllPodLabels: true
-    # When enabled the processor will extra all annotations for an associated pod and add them as resource attributes.
+    # When enabled, the processor will extract all annotations for an associated pod and add them as resource attributes.
     # The annotation's exact name will be the key.
     extractAllPodAnnotations: true
   # Configures the collector to collect node, pod, and container metrics from the API server on a kubelet.
@@ -453,23 +458,48 @@ config:

 ### Explore Kubernetes data in HyperDX {#explore-kubernetes-data-hyperdx}

-Navigate to your HyperDX UI - either your Kubernetes deployed instance or ClickHouse Cloud instance.
+Navigate to your HyperDX UI - either using your Kubernetes-deployed instance or via ClickHouse Cloud.

+<p/>
+<details>
+<summary>Using ClickHouse Cloud</summary>
+
+If using ClickHouse Cloud, simply log in to your ClickHouse Cloud service and select "HyperDX" from the left menu. You will be automatically authenticated and will not need to create a user.
+
+When prompted to create a datasource, retain all default values within the create source modal, completing the `Table` field with the value `otel_logs` to create a logs source. All other settings should be auto-detected, allowing you to click `Save New Source`.
+
+<Image force img={hyperdx_cloud_datasource} alt="ClickHouse Cloud HyperDX Datasource" size="lg"/>
+
+You will also need to create a datasource for traces and metrics.
+
+For example, to create sources for traces and OTel metrics, users can select `Create New Source` from the top menu.
+
+<Image force img={hyperdx_create_new_source} alt="HyperDX create new source" size="lg"/>

-If using ClickHouse Cloud, simply login into your ClickHouse Cloud service and select HyperDX from the left menu.
+From here, select the required source type followed by the appropriate table, e.g. for traces, select the table `otel_traces`. All settings should be auto-detected.

+<Image force img={hyperdx_create_trace_datasource} alt="HyperDX create trace source" size="lg"/>

+:::note Correlating sources
+Note that different data sources in ClickStack - such as logs and traces - can be correlated with each other. To enable this, additional configuration is required on each source. For example, in the logs source, you can specify a corresponding trace source, and vice versa in the traces source. See "Correlated sources" for further details.
+:::
+
+</details>
+
+<details>
+
+<summary>Using self-managed deployment</summary>

 To access the locally deployed HyperDX, port forward using the following command and access HyperDX at [http://localhost:8080](http://localhost:8080).

 ```shell
 kubectl port-forward \
-pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
+  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
 8080:3000 \
-n otel-demo
+  -n otel-demo
 ```

-:::ClickStack in production
+:::note ClickStack in production
 In production, if not using HyperDX in ClickHouse Cloud, we recommend using an ingress with TLS. For example:

 ```shell
@@ -480,10 +510,12 @@ helm upgrade my-hyperdx hyperdx/hdx-oss-v2 \
 ```
 :::

+</details>

 To explore the Kubernetes data, navigate to the dedicated preset dashboard at `/kubernetes`, e.g. [http://localhost:8080/kubernetes](http://localhost:8080/kubernetes).

+Each of the tabs, Pods, Nodes, and Namespaces, should be populated with data.

+</VerticalStepper>

-
-</VerticalStepper>
+<Image img={dashboard_kubernetes} alt="ClickHouse Kubernetes dashboard" size="lg"/>
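
For the production TLS note above, one option is a plain Kubernetes `Ingress` in front of the HyperDX service. This is a sketch only — the hostname, TLS secret name, issuer annotation, and the backend service name are assumptions, not values taken from this guide; the port matches the HyperDX UI port (3000) used in the port-forward examples:

```yaml
# Sketch of a TLS ingress for HyperDX. Hostname, secret name, issuer,
# and the service name are placeholders - verify against your release.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hyperdx
  namespace: otel-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumes a cert-manager issuer exists
spec:
  tls:
    - hosts:
        - hyperdx.example.com        # placeholder hostname
      secretName: hyperdx-tls        # placeholder TLS secret
  rules:
    - host: hyperdx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-hyperdx-hdx-oss-v2-app   # placeholder service name
                port:
                  number: 3000
```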

docs/use-cases/observability/clickstack/ingesting-data/kubernetes.md

Lines changed: 2 additions & 3 deletions
@@ -43,14 +43,14 @@ kubectl create configmap -n=otel-demo otel-config-vars --from-literal=YOUR_OTEL_

 The DaemonSet will collect logs and metrics from each node in the cluster but will not collect Kubernetes events or cluster-wide metrics.

-Download the daemonset manifest:
+Download the DaemonSet manifest:

 ```shell
 curl -O https://raw.githubusercontent.com/ClickHouse/clickhouse-docs/refs/heads/main/docs/use-cases/observability/clickstack/example-datasets/_snippets/k8s_daemonset.yaml
 ```
 <details>

-<summary>k8s_daemonset.yaml</summary>
+<summary>`k8s_daemonset.yaml`</summary>

 ```yaml
 # daemonset.yaml
@@ -229,7 +229,6 @@ config:

 </details>

-
 ## Deploying the OpenTelemetry collector {#deploying-the-otel-collector}

 The OpenTelemetry collector can now be deployed in your Kubernetes cluster using
