Commit d401bab

docs(cpt): feedback
1 parent aebd7b5 commit d401bab


pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx

Lines changed: 35 additions & 36 deletions
@@ -8,12 +8,12 @@ content:
 categories:
   - observability cockpit
 dates:
-  validation: 2025-05-09
+  validation: 2025-05-12
   posted: 2023-11-06
 ---
 
 
-Cockpit does not support Grafana-managed alerting. It integrates with Grafana to visualize metrics, but alerts are managed through the Scaleway alert manager. You should use Grafana only to define alert rules, not to evaluate or receive alert notifications. Once the conditions of your alert rule are met, the Scaleway alert manager evaluates the rule and sends a notification to the **contact points you have configured in the Scaleway console**.
+Cockpit does not support Grafana-managed alerting. It integrates with Grafana to visualize metrics, but alerts are managed through the Scaleway alert manager. You should use Grafana only to define alert rules, not to evaluate or receive alert notifications. Once the conditions of your alert rule are met, the Scaleway alert manager evaluates the rule and sends a notification to the contact points you have configured in the Scaleway console or in Grafana.
 
 
 This page shows you how to create alert rules in Grafana for monitoring Scaleway resources integrated with Cockpit, such as Instances, Object Storage, and Kubernetes. These alerts rely on Scaleway-provided metrics, which are preconfigured and available in the **Metrics browser** drop-down when using the **Scaleway Metrics data source** in the Grafana interface. This page explains how to use the `Scaleway Metrics` data source, interpret metrics, set alert conditions, and activate alerts.
 
@@ -24,8 +24,7 @@ This page shows you how to create alert rules in Grafana for monitoring Scaleway
 - Scaleway resources you can monitor
 - [Created Grafana credentials](/cockpit/how-to/retrieve-grafana-credentials/) with the **Editor** role
 - [Enabled](/cockpit/how-to/enable-alert-manager/) the Scaleway alert manager
-- [Created](/cockpit/how-to/add-contact-points/) at least one contact point **in the Scaleway console**, otherwise, alerts will not be delivered
-- Selected the **Scaleway Alerting** alert manager in Grafana
+- [Created](/cockpit/how-to/add-contact-points/) a contact point in the Scaleway console or in Grafana (with the `Scaleway Alerting` alert manager of the same region as your `Scaleway Metrics` data source), otherwise alerts will not be delivered
 
 ## Switch to data source managed alert rules
 
@@ -45,8 +44,8 @@ Switch between the tabs below to create alerts for a Scaleway Instance, an Objec
 <TabsTab label="Scaleway Instance">
 The steps below explain how to create the metric selection and configure an alert condition that triggers when **your Instance consumes more than 10% of a single CPU core over the past 5 minutes.**
 
-1. Type a name for your alert.
-2. Select the data source you want to configure alerts for. For the sake of this documentation, we are choosing the **Scaleway Metrics** data source.
+1. Type a name for your alert. For example, `alert-for-high-cpu-usage`.
+2. Select the **Scaleway Metrics** data source.
 3. Click the **Metrics browser** drop-down.
     <Lightbox src="scaleway-metrics-browser.webp" alt="" />
     <Lightbox src="scaleway-metrics-displayed.webp" alt="" />
@@ -57,26 +56,26 @@ Switch between the tabs below to create alerts for a Scaleway Instance, an Objec
 5. Select the appropriate labels to filter your metric and target specific resources.
 6. Choose values for your selected labels. The **Resulting selector** field displays your final query selector.
     <Lightbox src="scaleway-metric-selection.webp" alt="" />
-7. Click **Use query** to validate your metric selection. Your selection displays in the query field next to the **Metrics browser** button. This prepares it for use in the alert condition, which we will define in the next steps.
-8. In the query field, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_id` and `resource_name`) correspond to those of the target resource.
+7. Click **Use query** to validate your metric selection.
+8. In the query field next to the **Metrics browser** button, paste the following query. Make sure that the values for the labels you have selected (for example, `resource_id` and `resource_name`) correspond to those of the target resource.
     ```bash
     rate(instance_server_cpu_seconds_total{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",resource_name="name-of-your-resource"}[5m]) > 0.1
     ```
 9. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
+10. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert, and optionally, add annotations.
+11. Enter a label in the **Labels** field and a name in the **Value** field.
     <Message type="note">
-    For example, to wait until the condition has been met continuously for 5 minutes, type `5` and select `minutes` in the drop-down.
+    In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+    For example, if an alert has the label `team = instances-team`, you are telling Grafana to send a notification to the Instances team when your alert `alert-for-high-cpu-usage` gets triggered. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
     </Message>
-10. Enter a namespace in the **Namespace** field to help you categorize and manage your alert, then click **Enter**.
-11. Enter a name in the **Group** field to help you categorize and manage your alert, then click **Enter**.
-12. Optionally, add a summary and a description.
-13. Click **Save rule** in the top right corner of your screen to save and activate your alert.
-14. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and your [contact point](/cockpit/concepts/#contact-points) should receive an email informing them that the alert is firing.
+12. Click **Save rule** in the top right corner of your screen to save and activate your alert.
+13. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contact point](/cockpit/concepts/#contact-points).
 </TabsTab>
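A quick way to carry out the optional check in step 13 of this tab, without waiting for real CPU load, is to save the rule once with a much lower threshold. A test-only sketch, reusing the placeholder labels from the query above:

```bash
# Test-only variant of the alert query above: the threshold is lowered from
# 0.1 (10% of a core) to 0.01 (1%), so the condition should be met almost
# immediately on any running Instance. Revert to 0.1 once the contact point
# has received the test notification.
rate(instance_server_cpu_seconds_total{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",resource_name="name-of-your-resource"}[5m]) > 0.01
```

`rate(instance_server_cpu_seconds_total{...}[5m])` approximates the number of CPU cores in use, so `0.1` corresponds to the 10% of a single core described in the tab's introduction, and `0.01` to 1%.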
 <TabsTab label="Object Storage bucket">
 The steps below explain how to create the metric selection and configure an alert condition that triggers when **the object count in your bucket exceeds a specific threshold**.
 
 1. Type a name for your alert.
-2. Select the data source you want to configure alerts for. For the sake of this documentation, we are choosing the **Scaleway Metrics** data source.
+2. Select the **Scaleway Metrics** data source.
 3. Click the **Metrics browser** drop-down.
     <Lightbox src="scaleway-metrics-browser.webp" alt="" />
     <Lightbox src="scaleway-metrics-displayed.webp" alt="" />
@@ -92,20 +91,20 @@ Switch between the tabs below to create alerts for a Scaleway Instance, an Objec
     object_storage_bucket_objects_total{region="fr-par", resource_id="my-bucket"} > 2000
     ```
 9. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
+10. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert, and optionally, add annotations.
+11. Enter a label in the **Labels** field and a name in the **Value** field.
     <Message type="note">
-    For example, to wait until the condition has been met continuously for 5 minutes, type `5` and select `minutes` in the drop-down.
+    In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+    For example, if an alert has the label `team = object-storage-team`, you are telling Grafana to send a notification to the Object Storage team when your alert is firing. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
     </Message>
-10. Enter a namespace in the **Namespace** field to help you categorize and manage your alert, then click **Enter**.
-11. Enter a name in the **Group** field to help you categorize and manage your alert, then click **Enter**.
-12. Optionally, add a summary and a description.
-13. Click **Save rule** in the top right corner of your screen to save and activate your alert.
-14. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and your [contact point](/cockpit/concepts/#contact-points) should receive an email informing them that the alert is firing.
+12. Click **Save rule** in the top right corner of your screen to save and activate your alert.
+13. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contact point](/cockpit/concepts/#contact-points).
 </TabsTab>
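A static threshold on the object count only fires once the count crosses the line. A complementary condition, sketched below on the assumption that `object_storage_bucket_objects_total` behaves as a gauge and using the same example bucket as step 8, alerts on rapid growth instead:

```bash
# Illustrative sketch: fire when more than 500 objects are added to the bucket
# within one hour. delta() measures how far the gauge moved over the window;
# the labels mirror the selector used in step 8 above.
delta(object_storage_bucket_objects_total{region="fr-par", resource_id="my-bucket"}[1h]) > 500
```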
 <TabsTab label="Kubernetes pod">
 The steps below explain how to create the metric selection and configure an alert condition that triggers when **no new pod activity occurs, which could mean your cluster is stuck or unresponsive.**
 
 1. Type a name for your alert.
-2. Select the data source you want to configure alerts for. For the sake of this documentation, we are choosing the **Scaleway Metrics** data source.
+2. Select the **Scaleway Metrics** data source.
 3. Click the **Metrics browser** drop-down.
     <Lightbox src="scaleway-metrics-browser.webp" alt="" />
     <Lightbox src="scaleway-metrics-displayed.webp" alt="" />
@@ -121,20 +120,20 @@ Switch between the tabs below to create alerts for a Scaleway Instance, an Objec
     rate(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) == 0
     ```
 9. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
+10. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert, and optionally, add annotations.
+11. Enter a label in the **Labels** field and a name in the **Value** field.
     <Message type="note">
-    For example, to wait until the condition has been met continuously for 5 minutes, type `5` and select `minutes` in the drop-down.
+    In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+    For example, if an alert has the label `team = kubernetes-team`, you are telling Grafana to send a notification to the Kubernetes team when your alert is firing. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
     </Message>
-10. Enter a namespace in the **Namespace** field to help you categorize and manage your alert, then click **Enter**.
-11. Enter a name in the **Group** field to help you categorize and manage your alert, then click **Enter**.
-12. Optionally, add a summary and a description.
-13. Click **Save rule** in the top right corner of your screen to save and activate your alert.
-14. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and your [contact point](/cockpit/concepts/#contact-points) should receive an email informing them that the alert is firing.
+12. Click **Save rule** in the top right corner of your screen to save and activate your alert.
+13. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contact point](/cockpit/concepts/#contact-points).
 </TabsTab>
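Because this condition is `== 0`, there is no threshold to lower for the optional check in step 13. One test-only sketch is to invert the comparison so the rule matches any sample, confirm the contact point is notified, then restore `== 0`:

```bash
# Test-only sketch: ">= 0" matches any sample, so the alert fires as soon as the
# rule is evaluated. Restore "== 0" after the contact point has received the test
# notification. The cluster name is the example one used in step 8 above.
rate(kubernetes_cluster_k8s_shoot_nodes_pods_usage_total{resource_name="k8s-par-quizzical-chatelet"}[15m]) >= 0
```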
 <TabsTab label="Cockpit logs">
 The steps below explain how to create the metric selection and configure an alert condition that triggers when **no logs are stored for 5 minutes, which may indicate your app or system is broken**.
 
 1. Type a name for your alert.
-2. Select the data source you want to configure alerts for. For the sake of this documentation, we are choosing the **Scaleway Metrics** data source.
+2. Select the **Scaleway Metrics** data source.
 3. Click the **Metrics browser** drop-down.
     <Lightbox src="scaleway-metrics-browser.webp" alt="" />
     <Lightbox src="scaleway-metrics-displayed.webp" alt="" />
@@ -150,18 +149,18 @@ Switch between the tabs below to create alerts for a Scaleway Instance, an Objec
     observability_cockpit_loki_chunk_store_stored_chunks_total:increase5m{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} == 0
     ```
 9. In the **Set alert evaluation behavior** field, specify how long the condition must be true before triggering the alert.
+10. Enter a name in the **Namespace** and **Group** fields to categorize and manage your alert, and optionally, add annotations.
+11. Enter a label in the **Labels** field and a name in the **Value** field.
     <Message type="note">
-    For example, to wait until the condition has been met continuously for 5 minutes, type `5` and select `minutes` in the drop-down.
+    In Grafana, notifications are sent by matching alerts to notification policies based on labels. This step is about deciding how alerts will reach you or your team (Slack, email, etc.) based on labels you attach to them. Then, you can set up rules that define who receives notifications in the **Notification policies** page.
+    For example, if an alert has the label `team = cockpit-team`, you are telling Grafana to send a notification to the Cockpit team when your alert is firing. Find out how to [configure notification policies in Grafana](/tutorials/configure-slack-alerting/#configuring-a-notification-policy).
     </Message>
-10. Enter a namespace in the **Namespace** field to help you categorize and manage your alert, then click **Enter**.
-11. Enter a name in the **Group** field to help you categorize and manage your alert, then click **Enter**.
-12. Optionally, add a summary and a description.
-13. Click **Save rule** in the top right corner of your screen to save and activate your alert. Your alert will start evaluating based on the rule you have defined.
-14. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and your [contact point](/cockpit/concepts/#contact-points) should receive an email informing them that the alert is firing.
+12. Click **Save rule** in the top right corner of your screen to save and activate your alert.
+13. Optionally, check that your configuration works by temporarily lowering the threshold. This will trigger the alert and notify your [contact point](/cockpit/concepts/#contact-points).
 </TabsTab>
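An `== 0` condition only fires while the recording rule still reports a zero value; if the series stops being reported altogether, the rule stays silent. A complementary sketch, assuming the same recording rule and placeholder resource ID as in step 8, uses `absent()` to cover that case:

```bash
# Illustrative sketch: absent() returns 1 when no series matches the selector at
# all, covering the "metric disappeared" case that the "== 0" rule above does not.
# The resource_id placeholder is the same as in step 8.
absent(observability_cockpit_loki_chunk_store_stored_chunks_total:increase5m{resource_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"})
```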
 </Tabs>
 
-You can view your firing alerts in the **Alert rules** section of your Grafana (Home > Alerting > Alert rules).
+You can view your firing alerts in the **Alert rules** section of your Grafana (**Home** > **Alerting** > **Alert rules**).
 
 <Lightbox src="scaleway-alerts-firing.webp" alt="" />
 