Commit 3a9f9ed

KatsuyuLaure-di authored and committed
docs(cockpit): added documentation on how to add custom metrics from k8s cluster to cockpit int-add-observability (scaleway#4176)
1 parent dbfce02 commit 3a9f9ed

2 files changed: +326 -0 lines changed

Lines changed: 149 additions & 0 deletions
---
meta:
  title: How to send logs from your Kubernetes cluster to your Cockpit
  description: Learn how to send your pod logs to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes pod logs to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and log analysis in your infrastructure.
content:
  h1: How to send logs from your Kubernetes cluster to your Cockpit
  paragraph: Learn how to send your pod logs to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes pod logs to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and log analysis in your infrastructure.
tags: kubernetes cockpit logs observability monitoring cluster
categories:
  - observability
dates:
  validation: 2025/01/20
  posted: 2025/01/20
---
In this page, we will show you how to send application logs from your Kubernetes cluster to your Cockpit, either by using a Helm chart directly or by deploying the chart with [Terraform](https://www.terraform.io/).

We will use the [k8s-monitoring](https://artifacthub.io/packages/helm/grafana/k8s-monitoring/1.6.16) Helm chart, which installs an Alloy DaemonSet to export your Kubernetes cluster's logs to your Cockpit.

<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- [Created](/observability/cockpit/how-to/create-external-data-sources/) a custom external data source of the [logs type](/observability/cockpit/concepts/#data-types)
- [Created](/observability/cockpit/how-to/create-token/) a Cockpit token in the same region as the logs data source
- A running Kubernetes cluster containing your deployed application
- [Created](/identity-and-access-management/iam/how-to/create-api-keys/) an API key and retrieved your API secret key

<Message type="important">
  Sending logs for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the [product pricing](https://www.scaleway.com/en/pricing/?tags=available,managedservices-observability-cockpit) page for more information.
</Message>
## Configure the Helm chart

Create a `values.yml` file to configure your Helm chart, using the example below. Make sure that you replace `$SCW_CLUSTER_NAME` with the name of your Scaleway Kubernetes cluster, `$COCKPIT_CUSTOM_LOGS_DATASOURCE_URL` with the URL of your custom logs data source (you can find it under the "API URL" section in the [Data sources tab](https://console.scaleway.com/cockpit/dataSource) of the Scaleway console), and `$COCKPIT_TOKEN` with your Cockpit token.

```yaml
cluster:
  name: "$SCW_CLUSTER_NAME"
global:
  scrape_interval: 60s
destinations:
  - name: "my-cockpit-logs"
    type: "loki"
    protocol: "http"
    logs:
      enabled: true
    url: "$COCKPIT_CUSTOM_LOGS_DATASOURCE_URL/loki/api/v1/push" # Your logs URL: "API URL" section of the relevant data source, in the **Data sources** tab of the Scaleway console
    tenantId: "$COCKPIT_TOKEN"

    metrics:
      enabled: false
    traces:
      enabled: false
clusterEvents:
  enabled: true
  destinations: ["my-cockpit-logs"]
# -- Node logs.
nodeLogs:
  enabled: true
  destinations: ["my-cockpit-logs"]
# -- Pod logs.
podLogs:
  enabled: true
  destinations: ["my-cockpit-logs"]
  volumeGatherSettings:
    onlyGatherNewLogLines: true

# An Alloy instance for collecting log data.
alloy-logs:
  enabled: true
  logging:
    level: info
    format: logfmt
alloy-singleton:
  enabled: true
```
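If you prefer not to edit the placeholders by hand, you can render them with a plain shell heredoc. This is an unofficial convenience sketch: the cluster name, data source URL, and token below are placeholder values you must replace with your own, and only the destination-related keys are shown.

```shell
# Placeholder values -- replace with your own cluster name, data source URL, and token.
SCW_CLUSTER_NAME="my-cluster"
COCKPIT_CUSTOM_LOGS_DATASOURCE_URL="https://logs.cockpit.fr-par.scw.cloud"
COCKPIT_TOKEN="my-cockpit-token"

# An unquoted heredoc delimiter lets the shell expand the variables,
# producing a values file with the placeholders already filled in.
cat > values.yml <<EOF
cluster:
  name: "$SCW_CLUSTER_NAME"
destinations:
  - name: "my-cockpit-logs"
    type: "loki"
    url: "$COCKPIT_CUSTOM_LOGS_DATASOURCE_URL/loki/api/v1/push"
    tenantId: "$COCKPIT_TOKEN"
EOF

# Sanity check: the rendered push URL must end in /loki/api/v1/push.
grep "url:" values.yml
```

The same approach works for the full template above; merge the rendered keys into it before installing the chart.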
<Message type="note">
  The template above is for sending logs to your Cockpit. You can also configure it to send metrics to Cockpit using this Helm chart.
  Refer to our dedicated documentation to [send metrics from your cluster to Cockpit](/observability/cockpit/how-to/send-metrics-froms-k8s-to-cockpit).
</Message>
## Send Kubernetes logs using Helm chart

Once you have configured your `values.yml` file, you can use Helm to deploy the log-forwarding configuration to your Kubernetes cluster. Before installing the Helm chart, ensure that your `kubectl` tool is properly connected to your Kubernetes cluster. `kubectl` is the command-line tool for interacting with Kubernetes clusters.

1. [Connect](/containers/kubernetes/how-to/connect-cluster-kubectl/) `kubectl` to your Kubernetes cluster.
2. Add the Grafana Helm repository, then run the command below to install the `k8s-monitoring` Helm chart:
    ```
    helm repo add grafana https://grafana.github.io/helm-charts
    helm install -f /your-path/to/values.yml name-of-your-choice-for-your-log-ingester grafana/k8s-monitoring --version 2.0.2
    ```
    The `-f` flag specifies the path to your `values.yml` file, which contains the configuration for the Helm chart. Make sure that you replace `/your-path/to/values.yml` with the correct path where your `values.yml` file is stored. Also replace `name-of-your-choice-for-your-log-ingester` with a clear name (e.g. `alloy-logs-ingester`). In our configuration, we are using `alloy-lm-ingester`.

    Helm installs the `k8s-monitoring` chart, which includes the Alloy DaemonSet configured to collect logs from your Kubernetes cluster.
    The DaemonSet ensures that a pod runs on each node of your cluster, collecting logs and forwarding them to the specified Loki endpoint in your Cockpit.

3. Optionally, check the status of the release to ensure it was installed:
    ```
    helm list
    ```
## Send Kubernetes logs using Helm chart with Terraform

You can also use Terraform to manage and deploy Helm charts, providing more automation and consistency in managing your Kubernetes resources.

1. Create a `provider.tf` file and paste the following template to set up the Helm Terraform provider:
    ```terraform
    provider "helm" {
      kubernetes {
        host  = your_k8s_cluster_host  # The URL of your Kubernetes API server.
        token = your_k8s_cluster_token # Authentication token to access the cluster.
        cluster_ca_certificate = base64decode(
          your_k8s_cluster_ca_certificate # The cluster's CA certificate.
        )
      }
    }
    ```
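If the cluster itself is managed with the Scaleway Terraform provider in the same configuration, the connection details can be wired from the cluster's kubeconfig output instead of being hard-coded. This is a sketch: the `scaleway_k8s_cluster.k8s` resource name is an assumption, and the exact kubeconfig attribute layout can vary between provider versions.

```terraform
provider "helm" {
  kubernetes {
    # Pull connection details from a cluster managed in this configuration.
    host                   = scaleway_k8s_cluster.k8s.kubeconfig[0].host
    token                  = scaleway_k8s_cluster.k8s.kubeconfig[0].token
    cluster_ca_certificate = base64decode(scaleway_k8s_cluster.k8s.kubeconfig[0].cluster_ca_certificate)
  }
}
```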
2. Create a `maint.tf` file and paste the following template to create a Helm release resource. Make sure that you replace `/your-path/to/values.yml` with the actual path to your values file.
121+
```
122+
resource "helm_release" "alloy" {
123+
name = "name-of-your-log-ingester"
124+
repository = "https://grafana.github.io/helm-charts"
125+
chart = "k8s-monitoring"
126+
version = "2.0.2"
127+
128+
namespace = "log-ingester"
129+
create_namespace = true
130+
values = [file("/your-path/to/values.yml")]
131+
}
132+
```
133+
3. Save your changes.
134+
4. Run `terraform init` to initialize your Terraform configuration and download any necessary providers.
135+
5. Run `terraform apply` to apply your configuration.
136+
6. Type `yes` when prompted to confirm the actions.
## Explore your logs in Cockpit

1. Click **Cockpit** in the Observability section of the Scaleway [console](https://console.scaleway.com/) side menu. The **Cockpit Overview** page displays.
2. Click **Open dashboards** to open your managed dashboards in Grafana. You are redirected to the Grafana website.
3. Log in to Grafana using your [Grafana credentials](/observability/cockpit/how-to/retrieve-grafana-credentials/).
4. Click the **Home** icon, then click **Explore**.
5. Select your custom data source in the drop-down on the upper left corner of your screen.
6. In the **Labels filter** drop-down, select the `cluster` label, then select your cluster in the **Value** drop-down.
7. Optionally, click the **Clock** icon on the top right corner of your screen to filter by time range.
8. Click **Run query** to see your logs. An output similar to the following should display.

<Lightbox src="scaleway-cpt-k8s-terraform-logs.webp" alt="" />
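Instead of building the query through the drop-downs, you can also type a LogQL query directly in the Explore view. A minimal example, assuming your cluster is named `my-cluster` (adjust the label value and filter text to your setup):

```
{cluster="my-cluster"} |= "error"
```

The `cluster` label is attached by the chart from the `cluster.name` value in your `values.yml`, and the `|=` line filter keeps only log lines containing the given substring.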
Lines changed: 177 additions & 0 deletions
---
meta:
  title: How to send metrics from your Kubernetes cluster to your Cockpit
  description: Learn how to send your pod metrics to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes pod metrics to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and metrics analysis in your infrastructure.
content:
  h1: How to send metrics from your Kubernetes cluster to your Cockpit
  paragraph: Learn how to send your pod metrics to your Cockpit using Scaleway's comprehensive guide. This tutorial covers sending Kubernetes pod metrics to Scaleway's Cockpit for centralized monitoring and analysis using Grafana, ensuring efficient monitoring and metrics analysis in your infrastructure.
tags: kubernetes cockpit metrics observability monitoring cluster
categories:
  - observability
dates:
  validation: 2025/01/20
  posted: 2025/01/20
---
In this page, we will show you how to send application metrics from your Kubernetes cluster to your Cockpit, either by using a Helm chart directly or by deploying the chart with [Terraform](https://www.terraform.io/).

We will use the [k8s-monitoring](https://artifacthub.io/packages/helm/grafana/k8s-monitoring/1.6.16) Helm chart, which installs an Alloy DaemonSet to export your Kubernetes cluster's metrics to your Cockpit.

<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- [Created](/observability/cockpit/how-to/create-external-data-sources/) a custom external data source of the [metrics type](/observability/cockpit/concepts/#data-types)
- [Created](/observability/cockpit/how-to/create-token/) a Cockpit token in the same region as the metrics data source
- A running Kubernetes cluster containing your deployed application
- [Created](/identity-and-access-management/iam/how-to/create-api-keys/) an API key and retrieved your API secret key

<Message type="important">
  Sending metrics for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the [product pricing](https://www.scaleway.com/en/pricing/?tags=available,managedservices-observability-cockpit) page for more information.
</Message>
## Configure the Helm chart

Create a `values.yml` file to configure your Helm chart, using the example below. Make sure that you replace `$SCW_CLUSTER_NAME` with the name of your Scaleway Kubernetes cluster, `$COCKPIT_CUSTOM_METRICS_DATASOURCE_URL` with the URL of your custom metrics data source (you can find it under the "API URL" section in the [Data sources tab](https://console.scaleway.com/cockpit/dataSource) of the Scaleway console), and `$COCKPIT_TOKEN` with your Cockpit token.

```yaml
cluster:
  name: "$SCW_CLUSTER_NAME"
global:
  scrape_interval: 60s
destinations:
  - name: "my-cockpit-metrics"
    type: "prometheus"
    protocol: "http"
    metrics:
      enabled: true
    url: "$COCKPIT_CUSTOM_METRICS_DATASOURCE_URL/api/v1/push"
    tenantId: "$COCKPIT_TOKEN"

    logs:
      enabled: false
    traces:
      enabled: false

annotationAutodiscovery:
  enabled: true
  destinations: ["my-cockpit-metrics"]

alloy-metrics:
  enabled: true
alloy-singleton:
  enabled: true
```
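As with the logs setup, the placeholders can be rendered with a shell heredoc before installing. This is an unofficial convenience sketch with placeholder values; note that the metrics push path is `/api/v1/push`, without the `/loki` prefix used for logs.

```shell
# Placeholder values -- replace with your own cluster name, data source URL, and token.
SCW_CLUSTER_NAME="my-cluster"
COCKPIT_CUSTOM_METRICS_DATASOURCE_URL="https://metrics.cockpit.fr-par.scw.cloud"
COCKPIT_TOKEN="my-cockpit-token"

# Unquoted heredoc delimiter: the shell expands the variables in place.
cat > values.yml <<EOF
cluster:
  name: "$SCW_CLUSTER_NAME"
destinations:
  - name: "my-cockpit-metrics"
    type: "prometheus"
    url: "$COCKPIT_CUSTOM_METRICS_DATASOURCE_URL/api/v1/push"
    tenantId: "$COCKPIT_TOKEN"
EOF

# Sanity check: the rendered push URL must end in /api/v1/push (no /loki prefix).
grep "url:" values.yml
```

Merge the rendered keys into the full template above before installing the chart.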
<Message type="note">
  The template above is for sending metrics to your Cockpit. You can also configure it to send logs to Cockpit using this Helm chart.
  Refer to our dedicated documentation to [send logs from your cluster to Cockpit](/observability/cockpit/how-to/send-logs-from-k8s-to-cockpit).
</Message>
## Add annotations for auto-discovery

Annotations in Kubernetes provide a way to attach metadata to your resources. For `k8s-monitoring`, these annotations signal which pods should be scraped for metrics, and which port to use. For the sake of this documentation, we are adding annotations to specify that we want `k8s-monitoring` to scrape the pods from our deployment. Make sure that you replace `$METRICS_PORT` with the port where your application exposes Prometheus metrics.

### Kubernetes deployment template

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  annotations:
    "k8s.grafana.com/metrics.portNumber": "$METRICS_PORT"
    "k8s.grafana.com/scrape": "true"
spec:
  ...
```

### Terraform deployment template

```terraform
resource "kubernetes_deployment_v1" "your_application_deployment" {
  ...
  spec {
    ...
    template {
      metadata {
        ...
        annotations = {
          "k8s.grafana.com/metrics.portNumber" = "$METRICS_PORT"
          "k8s.grafana.com/scrape"             = "true"
        }
      }
      ...
    }
  }
}
```
## Send Kubernetes metrics using Helm chart with Terraform

1. Create a `provider.tf` file and paste the following template to set up the Helm Terraform provider:
    ```terraform
    provider "helm" {
      kubernetes {
        host  = your_k8s_cluster_host  # The URL of your Kubernetes API server.
        token = your_k8s_cluster_token # Authentication token to access the cluster.
        cluster_ca_certificate = base64decode(
          your_k8s_cluster_ca_certificate # The cluster's CA certificate.
        )
      }
    }
    ```
2. Create a `main.tf` file and paste the following template to create a Helm release resource. Make sure that you replace `/your-path/to/values.yml` with the actual path to your values file.
    ```terraform
    resource "helm_release" "alloy" {
      name       = "name-of-your-metrics-ingester"
      repository = "https://grafana.github.io/helm-charts"
      chart      = "k8s-monitoring"
      version    = "2.0.2"

      namespace        = "metrics-ingester"
      create_namespace = true
      values           = [file("/your-path/to/values.yml")]
    }
    ```
3. Save your changes.
4. Run `terraform init` to initialize your Terraform configuration and download any necessary providers.
5. Run `terraform apply` to apply your configuration.
6. Type `yes` when prompted to confirm the actions.
## Send Kubernetes metrics using Helm chart

Once you have configured your `values.yml` file, you can use Helm to deploy the metric-forwarding configuration to your Kubernetes cluster. Before installing the Helm chart, ensure that your `kubectl` tool is properly connected to your Kubernetes cluster. `kubectl` is the command-line tool for interacting with Kubernetes clusters.

1. [Connect](/containers/kubernetes/how-to/connect-cluster-kubectl/) `kubectl` to your Kubernetes cluster.
2. Add the Grafana Helm repository, then run the command below to install the `k8s-monitoring` Helm chart:
    ```
    helm repo add grafana https://grafana.github.io/helm-charts
    helm install -f /your-path/to/values.yml name-of-your-choice-for-your-metric-ingester grafana/k8s-monitoring --version 2.0.2
    ```
    The `-f` flag specifies the path to your `values.yml` file, which contains the configuration for the Helm chart. Make sure that you replace `/your-path/to/values.yml` with the correct path where your `values.yml` file is stored. Also replace `name-of-your-choice-for-your-metric-ingester` with a clear name (e.g. `alloy-metrics-ingester`). In our configuration, we are using `alloy-lm-ingester`.

    Helm installs the `k8s-monitoring` chart, which includes the Alloy DaemonSet configured to collect metrics from your Kubernetes cluster.
    The DaemonSet ensures that a pod runs on each node of your cluster, collecting metrics and forwarding them to the specified Prometheus endpoint in your Cockpit.

3. Optionally, check the status of the release to ensure it was installed:
    ```
    helm list
    ```
## Explore your metrics in Cockpit

Now that your metrics are exported to your Cockpit, you can access and query them.

1. Click **Cockpit** in the Observability section of the Scaleway [console](https://console.scaleway.com/) side menu. The **Cockpit Overview** page displays.
2. Click **Open dashboards** to open your managed dashboards in Grafana. You are redirected to the Grafana website.
3. Log in to Grafana using your [Grafana credentials](/observability/cockpit/how-to/retrieve-grafana-credentials/).
4. Click the **Home** icon, then click **Explore**.
5. Select your custom data source in the drop-down on the upper left corner of your screen.
6. In the **Labels filter** drop-down, select the `cluster` label, then select your cluster in the **Value** drop-down.
7. Optionally, click the **Clock** icon on the top right corner of your screen to filter by time range.
8. Click **Run query** to see your metrics. An output similar to the following should display.

<Lightbox src="scaleway-cpt-k8s-terraform-metrics.webp" alt="" />
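You can also type a PromQL query directly in the Explore view instead of using the label drop-downs. A minimal example, assuming your cluster is named `my-cluster`:

```
up{cluster="my-cluster"}
```

`up` is a metric that Prometheus-compatible scrapers generate for every target (1 if the last scrape succeeded, 0 otherwise), so it is a convenient first query to confirm that your pods are being scraped at all.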
