`content/en/scenarios/understand_impact_of_changes/7-alerting-dashboards-slos/3-slos.md`
We can now use the created Monitoring MetricSet together with Service Level Objectives, in a similar way to how we used it with dashboards and detectors/alerts before. Before we do that, let's be clear about some key concepts:
## Key Concepts of Service Level Monitoring
## Creating a new Service Level Objective
There is an easy-to-follow wizard for creating a new Service Level Objective (SLO). In the left navigation, follow the link "**Detectors & SLOs**". From there, select the third tab, "**SLOs**", and click the blue "**Create SLO**" button on the right.
The wizard guides you through a few easy steps. If everything in the previous steps worked out, you will have no problems here. ;)

In our case we want to use `Service & endpoint` as our **Metric type** instead of `Custom metric`. We filter the **Environment** down to the environment we are using during this workshop (i.e. `tagging-workshop-yourname`) and select `creditcheckservice` from the **Service and endpoint** list. Our **Indicator type** for this workshop will be `Request latency`, not `Request success`.
Now we can select our **Filters**. Since we are using `Request latency` as the **Indicator type**, which is a metric of the APM service, we can filter on `credit.score.category`. Feel free to try out what happens when you set the **Indicator type** to `Request success`.

Today we are only interested in our `exceptional` credit scores, so please select that as the filter.
In the next step we define the objective we want to reach. For the `Request latency` type, we define the **Target (%)**, the **Latency (ms)** and the **Compliance Window**. Please set these to `99`, `100` and `Last 7 days`. This will give us a good idea of what we are already achieving.
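To get a feel for what those numbers mean, here is a back-of-the-envelope error-budget calculation. The target, latency threshold and window are the values we just entered; the request rate is purely an assumption for illustration, not a workshop value:

```python
# Rough error-budget sketch for the objective above (request rate is assumed).
target_pct = 99.0            # Target (%)
latency_threshold_ms = 100   # Latency (ms)
window_days = 7              # Compliance window: Last 7 days

requests_per_minute = 1_000  # assumed traffic, for illustration only
total_requests = requests_per_minute * 60 * 24 * window_days

# Error budget: how many requests may exceed the latency threshold
# before the objective is missed.
budget_requests = total_requests * (100 - target_pct) / 100

print(f"Out of {total_requests:,} requests in {window_days} days, "
      f"up to {budget_requests:,.0f} may take longer than {latency_threshold_ms} ms.")
```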
Here we may already be in shock, or we can play around with the numbers to make them less scary. Feel free to adjust the values to see how well we achieve the objective and how much error budget we have left to burn.
The third step gives us the chance to alert (aka annoy) the people who should be aware of these SLOs so they can initiate countermeasures. These "people" can also be mechanisms like ITSM systems or webhooks that trigger automatic remediation steps.
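As a rough idea of what such a webhook integration could look like, here is a minimal receiver sketch in plain Python. The URL, port and payload fields are assumptions for illustration only; check the product documentation for the actual alert message format:

```python
# Illustrative webhook receiver that an SLO alert could call to start remediation.
# Payload fields below are made up -- map the real alert body in a real integration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class SloAlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        slo_name = payload.get("slo", "unknown")    # hypothetical field
        status = payload.get("status", "unknown")   # hypothetical field
        print(f"Received alert for SLO '{slo_name}' with status '{status}'")

        # This is where a remediation step (ticket, rollback, scale-out) would start.
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SloAlertHandler).serve_forever()
```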
Activate all categories you want to alert on and add recipients to the different alerts.
The next step is only the naming of this SLO. Have your own naming convention ready for this. In our case we would name it `creditcheckservice:score:exceptional:YOURNAME` and click the **Create** button, **BUT** you can also **just cancel the wizard** by clicking anything in the left navigation and confirming **Discard changes**.
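Before creating (or discarding) the SLO, it can help to see everything we entered in one place. The structure below is just a plain-Python recap of the wizard choices from this page, not an actual API payload:

```python
# Purely illustrative recap of the wizard inputs -- not an API payload.
slo_definition = {
    "name": "creditcheckservice:score:exceptional:YOURNAME",
    "metric_type": "Service & endpoint",
    "environment": "tagging-workshop-yourname",
    "service_endpoint": "creditcheckservice",
    "indicator_type": "Request latency",
    "filters": {"credit.score.category": "exceptional"},
    "objective": {
        "target_pct": 99,
        "latency_ms": 100,
        "compliance_window": "Last 7 days",
    },
}
print(slo_definition["name"])
```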
And with that, we have (*nearly*) successfully created an SLO, including alerting in case we miss our goals.
The `credit.score.category` tag appears again as a **Pending MetricSet**.
This mechanism creates a new dimension from the tag on a number of metrics, which can then be used to filter those metrics based on the values of that new dimension. **Important**: To differentiate between the original and the copy, the dots in the tag name are replaced by underscores for the new dimension. The metrics therefore get a dimension named `credit_score_category`, not `credit.score.category`.
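As a quick illustration of that renaming rule, here is a small plain-Python sketch. The metric name and data points are made up for illustration; only the tag-to-dimension renaming itself comes from the text above:

```python
# The renaming rule: dots in the tag name become underscores in the new dimension.
tag_name = "credit.score.category"
dimension_name = tag_name.replace(".", "_")
assert dimension_name == "credit_score_category"

# Hypothetical data points carrying the new dimension (names are made up):
datapoints = [
    {"metric": "service.request.count", "dimensions": {"credit_score_category": "exceptional"}},
    {"metric": "service.request.count", "dimensions": {"credit_score_category": "poor"}},
]

# Filtering on the new dimension, just like a dashboard, detector or SLO filter would:
exceptional = [dp for dp in datapoints
               if dp["dimensions"].get("credit_score_category") == "exceptional"]
print(len(exceptional))  # -> 1
```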
Next, let's explore how we can use this **Monitoring MetricSet**.