articles/azure-monitor/alerts/alerts-troubleshoot-metric.md (18 additions, 18 deletions)
@@ -1,5 +1,5 @@
 ---
-title: Frequently asked questions about Azure Monitor metric alerts
+title: Troubleshoot Azure Monitor metric alerts
 description: Common issues with Azure Monitor metric alerts and possible solutions.
 ms.author: abbyweisberg
 ms.topic: troubleshooting
@@ -22,38 +22,37 @@ If you believe a metric alert should have fired but it didn't, and it isn't list
 - Check that **Aggregation type** and **Aggregation granularity (Period)** are configured as expected. **Aggregation type** determines how metric values are aggregated. To learn more, see [Azure Monitor Metrics aggregation and display explained](../essentials/metrics-aggregation-explained.md#aggregation-types). **Aggregation granularity (Period)** controls how far back the evaluation aggregates the metric values each time the alert rule runs.
 - Check that **Threshold value** or **Sensitivity** are configured as expected.
 - For an alert rule that uses Dynamic Thresholds, check if advanced settings are configured. **Number of violations** might filter alerts, and **Ignore data before** can affect how the thresholds are calculated.

   > [!NOTE]
   > Dynamic thresholds require at least 3 days and 30 metric samples before they become active.

 1. **Check if the alert fired but didn't send the notification.**

    Review the [fired alerts list](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2) to see if you can locate the fired alert. If you can see the alert in the list but have an issue with some of its actions or notifications, see [Troubleshooting problems in Azure Monitor alerts](./alerts-troubleshoot.md).

 1. **Check if the alert is already active.**

    Check if there's already a fired alert on the metric time series for which you expected to get an alert. Metric alerts are stateful, which means that once an alert is fired on a specific metric time series, more alerts on that time series won't be fired until the issue is no longer observed. This design choice reduces noise. The alert is resolved automatically when the alert condition isn't met for three consecutive evaluations.

 1. **Check the dimensions used.**

    If you've selected some [dimension values for a metric](./alerts-metric-overview.md#using-dimensions), the alert rule monitors each individual metric time series (as defined by the combination of dimension values) for a threshold breach. To also monitor the aggregate metric time series, without any dimensions selected, configure another alert rule on the metric without selecting dimensions.

 1. **Check the aggregation and time granularity.**

    If you're using [metrics charts](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/metrics), ensure that:

-   * The selected **Aggregation** in the metric chart is the same as **Aggregation type** in your alert rule.
-   * The selected **Time granularity** is the same as **Aggregation granularity (Period)** in your alert rule, and isn't set to **Automatic**.
+   - The selected **Aggregation** in the metric chart is the same as **Aggregation type** in your alert rule.
+   - The selected **Time granularity** is the same as **Aggregation granularity (Period)** in your alert rule, and isn't set to **Automatic**.

 1. **Check if the alert rule is missing the first evaluation period in a time series.**

    You can reduce the likelihood of missing the first evaluation of added time series by making sure that you choose an **Aggregation granularity (Period)** that's larger than the **Frequency of evaluation** in the following cases:

    - When a new dimension value combination is added to a metric alert rule that monitors multiple dimensions.
    - When a new resource is added to the scope of a metric alert rule that monitors multiple resources.
    - When the metric is emitted after a period longer than 24 hours in which it wasn't emitted, for a metric alert rule that monitors a metric that isn't emitted continuously (a sparse metric).

-## The metric alert is not triggered every time my condition is met
+## The metric alert is not triggered every time the condition is met

 Metric alerts are stateful by default, so other alerts aren't fired if there's already a fired alert on a specific time series. To make a specific metric alert rule stateless and get alerted on every evaluation in which the alert condition is met, use one of these options:
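The stateful behavior described above (fire once per time series, auto-resolve after three consecutive healthy evaluations) can be illustrated with a short sketch. This is a simplified model for intuition only, not Azure Monitor's actual implementation:

```python
# Simplified model of stateful metric alert evaluation (illustrative only;
# not Azure Monitor's actual implementation).

class StatefulAlert:
    RESOLVE_AFTER = 3  # condition must be healthy for 3 consecutive evaluations

    def __init__(self, threshold):
        self.threshold = threshold
        self.fired = False
        self.healthy_streak = 0
        self.events = []

    def evaluate(self, value):
        breached = value > self.threshold
        if breached:
            self.healthy_streak = 0
            if not self.fired:
                self.fired = True          # fire once; further breaches are suppressed
                self.events.append("fired")
        else:
            self.healthy_streak += 1
            if self.fired and self.healthy_streak >= self.RESOLVE_AFTER:
                self.fired = False         # auto-resolve after 3 healthy evaluations
                self.events.append("resolved")

alert = StatefulAlert(threshold=80)
for v in [95, 97, 90, 50, 60, 70, 85]:
    alert.evaluate(v)
print(alert.events)  # ['fired', 'resolved', 'fired'] - one alert despite 3 breaches
```

Note how the second and third breaching values produce no new alert; a fresh alert fires only after the condition resolved.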
@@ -198,9 +197,10 @@ This error indicates an issue with the alert rule scope. This can happen when ed
 ## The service limit for metric alert rules is too small

 The allowed number of metric alert rules per subscription is subject to [service limits](../service-limits.md).

 See [Check the number of metric alert rules in use](alerts-manage-alert-rules.md#check-the-number-of-metric-alert-rules-in-use) to see how many metric alert rules are currently in use.

-If you've reached the quota limit, the following steps might help resolve the issue:
+If you've reached the service limit, the following steps might help resolve the issue:

 1. Try deleting or disabling metric alert rules that aren't used anymore.
 1. Switch to using metric alert rules that monitor multiple resources. With this capability, a single alert rule can monitor multiple resources by using only one alert rule counted against the quota. For more information about this capability and the supported resource types, see [metric-alerts](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions).
articles/azure-monitor/alerts/alerts-troubleshoot.md (17 additions, 13 deletions)
@@ -162,7 +162,7 @@ If you can see a fired alert in the portal, but its configured action didn't tri
 - After retry attempts to call the webhook fail, no action group calls the endpoint for 15 minutes.
 - The retry logic assumes that the call can be retried. The status codes 408, 429, 503, and 504, and the exceptions `HttpRequestException`, `WebException`, and `TaskCancellationException`, allow the call to be retried.

-## Action or notification happened more than once
+## The action or notification happened more than once

 If you received a notification for an alert (such as an email or an SMS) more than once, or the alert's action (such as webhook or Azure function) was triggered multiple times, follow these steps:
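The retry rules listed above can be captured in a small predicate. This is a sketch of the documented behavior (retryable status codes 408, 429, 503, 504, plus the three named exception types), not the action group's actual implementation:

```python
# Decide whether a failed webhook call is considered retryable, per the
# retry rules documented above (sketch only, not the real action-group code).

RETRYABLE_STATUS_CODES = {408, 429, 503, 504}
RETRYABLE_EXCEPTIONS = {
    "HttpRequestException",
    "WebException",
    "TaskCancellationException",
}

def should_retry(status_code=None, exception_name=None):
    """Return True if the failed webhook call may be retried."""
    if exception_name in RETRYABLE_EXCEPTIONS:
        return True
    return status_code in RETRYABLE_STATUS_CODES

print(should_retry(status_code=503))                 # True
print(should_retry(status_code=400))                 # False (client error, not retried)
print(should_retry(exception_name="WebException"))   # True
```

A receiver that wants Azure Monitor to retry a transient failure should therefore respond with one of the retryable status codes (for example, 503) rather than a non-retryable one such as 400.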
@@ -180,27 +180,31 @@ If you received a notification for an alert (such as an email or an SMS) more th
 <!-- convertborder later -->
 :::image type="content" source="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" lightbox="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" alt-text="Screenshot of multiple action groups in an alert." border="false":::

-## Action or notification has unexpected content
+## The action or notification has unexpected content

-Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is resilient and quick but occasionally suffers outages. When there are outages, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from our secondary provider might have a degraded email experience. The degradation results in slightly different email formatting and content. Since email templates differ in the two systems, maintaining parity across the two systems isn't feasible.
-
-Notifications generated by the fallback solution contain a note that says:
-
-"This is a degraded email experience. That means the formatting may be off or details could be missing. For more information on the degraded email experience, read here."
-
-If your notification doesn't contain this note and you received the alert, but believe some of its fields are missing or incorrect, check the payload format.
+1. **Was there an outage that triggered the use of the fallback email provider?**
+
+   Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is resilient and quick but occasionally suffers outages. When there are outages, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from the secondary provider might have a degraded email experience, with slightly different formatting and content. Since email templates differ in the two systems, maintaining parity across them isn't feasible.
+
+   Notifications generated by the fallback solution contain a note that says:
+
+   "This is a degraded email experience. That means the formatting may be off or details could be missing. For more information on the degraded email experience, read here."
+
+   If your notification doesn't contain this note and you received the alert, but believe some of its fields are missing or incorrect, check the payload format.
+
+1. **What format did you use when configuring the alert rule?**

-Each action type (email, webhook, etc.) has two formats - the default, legacy format, and the [common schema format](./alerts-common-schema.md). When you create an action group, you specify the format of the action. Different actions in the action groups may have different formats.
-Check if the format specified at the action level is what you expect. For example, you might have developed code that responds to alerts (webhook, function, logic app, etc.), expecting one format, but later in the action you or another person specified a different format.
+   Each action type (email, webhook, etc.) has two formats: the default, legacy format, and the [common schema format](./alerts-common-schema.md). When you create an action group, you specify the format of the action. Different actions in the action group may have different formats.
+
+   Check if the format specified at the action level is what you expect. For example, you might have developed code that responds to alerts (webhook, function, logic app, etc.) expecting one format, but later in the action you or another person specified a different format.

-Also, check the payload format (JSON) for [activity log alerts](../alerts/activity-log-alerts-webhook.md), for [log search alerts](../alerts/alerts-log-webhook.md) (both Application Insights and log analytics), for [metric alerts](alerts-metric-near-real-time.md#payload-schema), for the [common alert schema](../alerts/alerts-common-schema.md), and for the deprecated [classic metric alerts](./alerts-webhooks.md).
+   Also, check the payload format (JSON) for [activity log alerts](../alerts/activity-log-alerts-webhook.md), for [log search alerts](../alerts/alerts-log-webhook.md) (both Application Insights and Log Analytics), for [metric alerts](alerts-metric-near-real-time.md#payload-schema), for the [common alert schema](../alerts/alerts-common-schema.md), and for the deprecated [classic metric alerts](./alerts-webhooks.md).

-### **Search results are not included in log search alert notifications.**
+### **The search results are not included in the log search alert notifications.**

 As of log search alerts API version 2021-08-01, search results were removed from the alert notification payload.
 Search results are only available for alert rules created with older API versions (2018-04-16). Creation of new alert rules through the Azure portal will, by default, create the rule with the newer version.
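A webhook receiver that must handle both formats can branch on the payload's schema identifier. The sketch below assumes the common alert schema's documented top-level shape (`schemaId` of `azureMonitorCommonAlertSchema` with `data.essentials` fields such as `alertRule`, `severity`, and `monitorCondition`); verify these names against the common alert schema reference before relying on them:

```python
# Sketch: route an incoming alert payload by schema. Field names are taken
# from the documented common alert schema; verify before production use.
import json

def summarize_notification(raw_payload: str) -> str:
    payload = json.loads(raw_payload)
    if payload.get("schemaId") == "azureMonitorCommonAlertSchema":
        # Common schema: one shape for all alert types.
        essentials = payload["data"]["essentials"]
        return f'{essentials["severity"]}: {essentials["alertRule"]} ({essentials["monitorCondition"]})'
    # Legacy payloads differ per alert type; handle them separately.
    return "legacy payload: handle per the action-type-specific schema"

sample = json.dumps({
    "schemaId": "azureMonitorCommonAlertSchema",
    "data": {"essentials": {
        "alertRule": "cpu-high",          # hypothetical rule name
        "severity": "Sev2",
        "monitorCondition": "Fired",
    }},
})
print(summarize_notification(sample))  # Sev2: cpu-high (Fired)
```

Branching on `schemaId` like this is also a quick way to confirm which format an action is actually configured to send.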
@@ -222,7 +226,7 @@ Also, check the payload format (JSON) for [activity log alerts](../alerts/activi
 Custom properties are only passed to the payload for actions, such as webhook, Azure function, or logic apps. Custom properties aren't included in notifications (email/SMS/push).

-## Alert processing rule isn't working as expected
+## The alert processing rule isn't working as expected

 If you can see a fired alert in the portal, but a related alert processing rule didn't work as expected, follow these steps: