
Commit fac49cf

feat(cockpit-k8s): review & test doc metrics
1 parent 3a5861c commit fac49cf

File tree

1 file changed (+52 −35 lines)


observability/cockpit/how-to/send-metrics-from-k8s-to-cockpit.mdx

Lines changed: 52 additions & 35 deletions
@@ -9,32 +9,32 @@ tags: kubernetes cockpit metrics observability monitoring cluster
 categories:
 - observability
 dates:
-  validation: 2025/01/07
-  posted: 2025/01/07
+  validation: 2025/01/20
+  posted: 2025/01/20
 ---
 
 
-This page shows you how to send application metrics created in a Kubernetes cluster to your Cockpit either by using a Helm chart or by deploying a Helm chart with [Terraform](https://www.terraform.io/).
+This page shows you how to send application metrics from your Kubernetes cluster to your Cockpit, using either a Helm chart alone or a Helm chart deployed with [Terraform](https://www.terraform.io/).
 
-In this example, we use [k8s-monitoring](https://artifacthub.io/packages/helm/grafana/k8s-monitoring/1.6.16) which installs an Alloy Daemon set to your Kubernetes cluster to export metrics to your Cockpit.
+We use the [k8s-monitoring](https://artifacthub.io/packages/helm/grafana/k8s-monitoring/1.6.16) Helm chart, which installs an Alloy DaemonSet to export your Kubernetes cluster's metrics to your Cockpit.
 
 <Macro id="requirements" />
 
 - A Scaleway account logged into the [console](https://console.scaleway.com)
 - [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- [Created](/observability/cockpit/how-to/create-external-data-sources/) a custom external data source of type metrics
-- [Created](/observability/cockpit/how-to/create-token/) a Cockpit token for the same region as the data source
-- A running Kubernetes cluster containing a deployed application exposing metrics
+- [Created](/observability/cockpit/how-to/create-external-data-sources/) a custom external data source of the [metrics type](/observability/cockpit/concepts/#data-types)
+- [Created](/observability/cockpit/how-to/create-token/) a Cockpit token in the same region as the metrics data source
+- A running Kubernetes cluster with your application deployed
 - [Created](/identity-and-access-management/iam/how-to/create-api-keys/) an API key and retrieved your API secret key
 
 <Message type="important">
-  Sending metrics for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the [product pricing](https://www.scaleway.com/en/pricing/?tags=available,managedservices-observability-cockpit) for more information.
+  Sending metrics for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the [product pricing](https://www.scaleway.com/en/pricing/?tags=available,managedservices-observability-cockpit) page for more information.
 </Message>
 
 
 ## Configure the Helm chart
 
-Create a `values.yml` file to configure your Helm chart, using the example below. Make sure that you replace `$SCW_CLUSTER_NAME` with the name of your Scaleway Kubernetes cluster, `$COCKPIT_CUSTOM_DATASOURCE_HOST` with the hostname of your custom endpoint, and `$COCKPIT_TOKEN` with your Cockpit token.
+Create a `values.yml` file to configure your Helm chart, using the example below. Make sure that you replace `$SCW_CLUSTER_NAME` with the name of your Scaleway Kubernetes cluster, `$COCKPIT_CUSTOM_METRICS_DATASOURCE_URL` with the URL of your custom metrics data source (you can find it under the **API URL** section in the [Data sources tab](https://console.scaleway.com/cockpit/dataSource) of the Scaleway console), and `$COCKPIT_TOKEN` with your Cockpit token.
 
 ```yaml
 cluster:
@@ -47,7 +47,7 @@ destinations:
     protocol: "http"
     metrics:
       enabled: true
-      url: "$COCKPIT_CUSTOM_DATASOURCE_HOST/api/v1/push"
+      url: "$COCKPIT_CUSTOM_METRICS_DATASOURCE_URL/api/v1/push"
       tenantId: "$COCKPIT_TOKEN"
 
     logs:
@@ -66,31 +66,29 @@ alloy-singleton:
 ```
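
Pieced together from the fragments shown in this diff, a complete `values.yml` could look like the sketch below. The destination name, the `type` field, and the list structure are assumptions for illustration; only the cluster name, `protocol`, `url`, `tenantId`, and the `enabled` flags come from this page.

```yaml
cluster:
  name: "$SCW_CLUSTER_NAME"       # name of your Scaleway Kubernetes cluster

destinations:
  - name: "cockpit-metrics"       # hypothetical destination name
    type: "prometheus"            # assumed type for a Prometheus remote-write endpoint
    protocol: "http"
    metrics:
      enabled: true
      url: "$COCKPIT_CUSTOM_METRICS_DATASOURCE_URL/api/v1/push"
      tenantId: "$COCKPIT_TOKEN"
    logs:
      enabled: false              # this page covers metrics only
```

Refer to the chart's own values reference for the authoritative schema before deploying.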
 
 <Message type="info">
-  The template above is only an example to send metrics to your Cockpit. You can also send logs to Cockpit using this Helm chart.
-  You can check our guide to [send logs from your cluster to Cockpit](// ADD LINK TO LOGS TUTO)
+  The template above is for sending metrics to your Cockpit. You can also configure this Helm chart to send logs to Cockpit.
+  Refer to our dedicated documentation to [send logs from your cluster to Cockpit](/observability/cockpit/how-to/send-logs-from-k8s-to-cockpit).
 </Message>
 
-## Add annotation to your deployed pod to enable auto-discovery
+## Add annotations for auto-discovery
 
-In order for k8s-monitoring to discover the pods it needs to scrape, you need to add specific annotation to the pods you want to scrape
-
-Add annotation to indicate to k8s-monitoring to scrape the pods from your deployment. Make sure to replace $METRIC_PORT with your Prometheus port.
-
-### Kubernetes
+Annotations in Kubernetes provide a way to attach metadata to your resources. For `k8s-monitoring`, these annotations signal which pods should be scraped for metrics, and which port to use. Below, we add annotations telling `k8s-monitoring` to scrape the pods from your deployment. Make sure that you replace `$METRICS_PORT` with the port where your application exposes Prometheus metrics.
+
+### Kubernetes deployment template
 
 ```yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   ...
   annotations:
-    "k8s.grafana.com/metrics.portNumber": "$METRIC_PORT"
+    "k8s.grafana.com/metrics.portNumber": "$METRICS_PORT"
     "k8s.grafana.com/scrape": "true"
 spec:
   ...
 ```
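
For reference, a complete Deployment manifest with the annotations in place might look like the sketch below. The application name, image, and port `8080` are placeholders for illustration only:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                      # hypothetical application name
  annotations:
    "k8s.grafana.com/metrics.portNumber": "8080"    # port exposing Prometheus metrics
    "k8s.grafana.com/scrape": "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest          # placeholder image
          ports:
            - containerPort: 8080                   # must match metrics.portNumber
```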
 
-### Terraform
+### Terraform deployment template
 
 ```terraform
 resource "kubernetes_deployment_v1" "your_application_deployment" {
@@ -101,7 +99,7 @@ resource "kubernetes_deployment_v1" "your_application_deployment" {
   metadata {
     ...
     annotations = {
-      "k8s.grafana.com/metrics.portNumber" = "$METRIC_PORT"
+      "k8s.grafana.com/metrics.portNumber" = "$METRICS_PORT"
       "k8s.grafana.com/scrape" = "true"
     }
   }
@@ -111,21 +109,21 @@ resource "kubernetes_deployment_v1" "your_application_deployment" {
 }
 ```
 
-## Send Kubernetes metrics to your Cockpit using Helm chart with Terraform
+## Send Kubernetes metrics using a Helm chart with Terraform
 
-1. Set up the Helm Terraform provider:
+1. Create a `provider.tf` file and paste the following template to set up the Helm Terraform provider:
    ```terraform
    provider "helm" {
      kubernetes {
-       host  = your_k8s_cluster_host
-       token = your_k8s_cluster_token
+       host  = your_k8s_cluster_host  # The URL of your Kubernetes API server.
+       token = your_k8s_cluster_token # Authentication token to access the cluster.
        cluster_ca_certificate = base64decode(
-         your_k8s_cluster_ca_certificate
+         your_k8s_cluster_ca_certificate # The cluster's CA certificate.
        )
      }
    }
    ```
-2. Create a Helm release resource with the path to your `values.yml`:
+2. Create a `main.tf` file and paste the following template to create a Helm release resource. Make sure that you replace `/your-path/to/values.yml` with the actual path to your values file.
    ```
    resource "helm_release" "metrics-ingester" {
      name = "my-metrics-ingester"
@@ -138,23 +136,42 @@ resource "kubernetes_deployment_v1" "your_application_deployment" {
      values = [file("/your-path/to/values.yml")]
    }
    ```
-3. Run `terraform apply` to apply the new Terraform configuration.
+3. Save your changes.
+4. Run `terraform init` to initialize your Terraform configuration and download the necessary providers.
+5. Run `terraform apply` to apply your configuration.
+6. Type `yes` when prompted to confirm the actions.
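
Assembled from the fragments above, a complete `main.tf` for this step might look like the following sketch. The `repository` URL is an assumption (the page does not state where the chart is pulled from); the release and resource names mirror the example:

```terraform
resource "helm_release" "metrics-ingester" {
  name       = "my-metrics-ingester"
  repository = "https://grafana.github.io/helm-charts" # assumed chart repository
  chart      = "k8s-monitoring"
  version    = "1.6.16"

  # Path to the values file created in the "Configure the Helm chart" section.
  values = [file("/your-path/to/values.yml")]
}
```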
+
+## Send Kubernetes metrics using a Helm chart
 
-## Send Kubernetes metrics to your Cockpit using Helm chart
+Once you have configured your `values.yml` file, you can use Helm to deploy the metric-forwarding configuration to your Kubernetes cluster. Before installing the Helm chart, ensure that `kubectl`, the command-line tool for interacting with Kubernetes clusters, is properly connected to your cluster.
 
-1. Connect your kubectl to your Kubernetes cluster
-2. Run the following command to apply your Helm chart with the `values.yml` file:
+1. [Connect](/containers/kubernetes/how-to/connect-cluster-kubectl/) `kubectl` to your Kubernetes cluster.
+2. Run the command below to install the `k8s-monitoring` Helm chart:
    ```
-   helm install -f /your-path/to/values.yml my-metrics-ingester k8s-monitoring --version 1.6.16
+   helm install -f /your-path/to/values.yml your-metrics-ingester k8s-monitoring --version 1.6.16
    ```
-   Make sure to replace `-f` flag with the correct path to your `values.yml` file.
+   The `-f` flag specifies the path to your `values.yml` file, which contains the configuration for the Helm chart. Make sure that you replace `/your-path/to/values.yml` with the correct path to your values file, and `your-metrics-ingester` with a release name of your choice. In our configuration, we use `alloy-lm-ingester`.
+
+   Helm installs the `k8s-monitoring` chart, which includes the Alloy DaemonSet configured to collect metrics from your Kubernetes cluster.
+   The DaemonSet ensures that a pod runs on each node in your cluster, collecting metrics and forwarding them to the Prometheus endpoint specified in your Cockpit.
+
+3. Optionally, check the status of the release to ensure it was installed:
+
+   ```
+   helm list
+   ```
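
Beyond `helm list`, the following commands can help confirm that the chart deployed correctly. They assume the chart is fetched from Grafana's official Helm repository and that you have cluster access configured; the pod name in the last command is a placeholder:

```
# Add the Grafana chart repository if you have not already
# (assumption: k8s-monitoring is published there).
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Verify the release and the Alloy DaemonSet pods.
helm list
kubectl get daemonsets
kubectl get pods -o wide

# Inspect one Alloy pod's logs for remote-write errors (placeholder pod name).
kubectl logs <alloy-pod-name> | grep -i error
```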
 
 
 ## Explore your metrics in Cockpit
 
 Now that your metrics are exported to your Cockpit, you can access and query them.
 
-1. Click **Cockpit** in the Observability section of the [console](https://console.scaleway.com/) side menu. The **Cockpit** overview page displays.
+1. Click **Cockpit** in the Observability section of the Scaleway [console](https://console.scaleway.com/) side menu. The **Cockpit** overview page displays.
 2. Click **Open dashboards** to open your managed dashboards in Grafana. You are redirected to the Grafana website.
-3. Click the **Home** icon > **Explore**. Select your custom data source in the upper left corner.
-4. You can now query your metrics from your Kubernetes cluster and use the datasource to create graph.
+3. Log in to Grafana using your [Grafana credentials](/observability/cockpit/how-to/retrieve-grafana-credentials/).
+4. Click the **Home** icon, then click **Explore**.
+5. Select your custom data source in the search drop-down in the upper left corner of your screen.
+6. In the **Labels filter** drop-down, select the `cluster` label, and in the **Value** drop-down, select your cluster.
+7. Optionally, click the **Clock** icon in the top right corner of your screen to filter by time range.
+8. Click **Run query** to see your metrics. An output similar to the following should display.
+   <Lightbox src="scaleway-cpt-k8s-terraform-metrics.webp" alt="" />
