
Commit c9d1213 ("Fixes")
1 parent: cf22734

2 files changed: +35 −31 lines

articles/azure-monitor/alerts/alerts-troubleshoot-metric.md

Lines changed: 18 additions & 18 deletions
@@ -1,5 +1,5 @@
  ---
- title: Frequently asked questions about Azure Monitor metric alerts
+ title: Troubleshoot Azure Monitor metric alerts
  description: Common issues with Azure Monitor metric alerts and possible solutions.
  ms.author: abbyweisberg
  ms.topic: troubleshooting
@@ -22,38 +22,37 @@ If you believe a metric alert should have fired but it didn't, and it isn't list
  - Check that **Aggregation type** and **Aggregation granularity (Period)** are configured as expected. **Aggregation type** determines how metric values are aggregated. To learn more, see [Azure Monitor Metrics aggregation and display explained](../essentials/metrics-aggregation-explained.md#aggregation-types). **Aggregation granularity (Period)** controls how far back the evaluation aggregates the metric values each time the alert rule runs.
  - Check that **Threshold value** or **Sensitivity** are configured as expected.
  - For an alert rule that uses Dynamic Thresholds, check if advanced settings are configured. **Number of violations** might filter alerts, and **Ignore data before** can affect how the thresholds are calculated.
-   > [!NOTE]
-   > Dynamic thresholds require at least 3 days and 30 metric samples before they become active.
+   > [!NOTE]
+   > Dynamic thresholds require at least 3 days and 30 metric samples before they become active.
  1. **Check if the alert fired but didn't send the notification.**

-    Review the [fired alerts list](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2) to see if you can locate the fired alert. If you can see the alert in the list but have an issue with some of its actions or notifications, see [Troubleshooting problems in Azure Monitor alerts](./alerts-troubleshoot.md).
+    Review the [fired alerts list](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2) to see if you can locate the fired alert. If you can see the alert in the list but have an issue with some of its actions or notifications, see [Troubleshooting problems in Azure Monitor alerts](./alerts-troubleshoot.md).

  1. **Check if the alert is already active.**

-    Check if there's already a fired alert on the metric time series for which you expected to get an alert. Metric alerts are stateful, which means that once an alert is fired on a specific metric time series, more alerts on that time series won't be fired until the issue is no longer observed. This design choice reduces noise. The alert is resolved automatically when the alert condition isn't met for three consecutive evaluations.
+    Check if there's already a fired alert on the metric time series for which you expected to get an alert. Metric alerts are stateful, which means that once an alert is fired on a specific metric time series, more alerts on that time series won't be fired until the issue is no longer observed. This design choice reduces noise. The alert is resolved automatically when the alert condition isn't met for three consecutive evaluations.

  1. **Check the dimensions used.**

-    If you've selected some [dimension values for a metric](./alerts-metric-overview.md#using-dimensions), the alert rule monitors each individual metric time series (as defined by the combination of dimension values) for a threshold breach. To also monitor the aggregate metric time series, without any dimensions selected, configure another alert rule on the metric without selecting dimensions.
+    If you've selected some [dimension values for a metric](./alerts-metric-overview.md#using-dimensions), the alert rule monitors each individual metric time series (as defined by the combination of dimension values) for a threshold breach. To also monitor the aggregate metric time series, without any dimensions selected, configure another alert rule on the metric without selecting dimensions.

  1. **Check the aggregation and time granularity.**

-    If you're using [metrics charts](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/metrics), ensure that:
-
-    * The selected **Aggregation** in the metric chart is the same as **Aggregation type** in your alert rule.
-    * The selected **Time granularity** is the same as **Aggregation granularity (Period)** in your alert rule, and isn't set to **Automatic**.
+    If you're using [metrics charts](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/metrics), ensure that:
+    - The selected **Aggregation** in the metric chart is the same as **Aggregation type** in your alert rule.
+    - The selected **Time granularity** is the same as **Aggregation granularity (Period)** in your alert rule, and isn't set to **Automatic**.

  1. **Check if the alert rule is missing the first evaluation period in a time series.**

-    You can reduce the likelihood of missing the first evaluation of added time series by making sure that you choose an **Aggregation granularity (Period)** that's larger than the **Frequency of evaluation** in the following cases:
+    You can reduce the likelihood of missing the first evaluation of added time series by making sure that you choose an **Aggregation granularity (Period)** that's larger than the **Frequency of evaluation** in the following cases:

-    - When a new dimension value combination is added to a metric alert rule that monitors multiple dimensions.
-    - When a new resource is added to the scope of a metric alert rule that monitors multiple resources.
-    - When the metric is emitted after a period longer than 24 hours in which it wasn't emitted, for a metric alert rule that monitors a metric that isn't emitted continuously (a sparse metric).
+    - When a new dimension value combination is added to a metric alert rule that monitors multiple dimensions.
+    - When a new resource is added to the scope of a metric alert rule that monitors multiple resources.
+    - When the metric is emitted after a period longer than 24 hours in which it wasn't emitted, for a metric alert rule that monitors a metric that isn't emitted continuously (a sparse metric).
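The guidance above, choose a Period larger than the evaluation Frequency, can be sketched as a quick validation. This is an illustrative helper only (the function name is hypothetical, not part of any Azure SDK):

```python
from datetime import timedelta

def may_miss_first_evaluation(period: timedelta, frequency: timedelta) -> bool:
    """Heuristic from the guidance above: if Aggregation granularity (Period)
    isn't larger than the Frequency of evaluation, the first evaluation of a
    newly added time series (new dimension combination, new resource, or a
    sparse metric resuming) may be missed."""
    return period <= frequency

# A 15-minute period with a 5-minute frequency satisfies the guidance.
print(may_miss_first_evaluation(timedelta(minutes=15), timedelta(minutes=5)))  # False
print(may_miss_first_evaluation(timedelta(minutes=5), timedelta(minutes=5)))   # True
```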

- ## The metric alert is not triggered every time my condition is met
+ ## The metric alert is not triggered every time the condition is met

  Metric alerts are stateful by default, so other alerts aren't fired if there's already a fired alert on a specific time series. To make a specific metric alert rule stateless and get alerted on every evaluation in which the alert condition is met, use one of these options:

@@ -198,9 +197,10 @@ This error indicates an issue with the alert rule scope. This can happen when ed
  ## The service limits for metric alert rules are too small

  The allowed number of metric alert rules per subscription is subject to [service limits](../service-limits.md).
+
  See [Check the number of metric alert rules in use](alerts-manage-alert-rules.md#check-the-number-of-metric-alert-rules-in-use) to see how many metric alert rules are currently in use.

- If you've reached the quota limit, the following steps might help resolve the issue:
+ If you've reached the service limit, the following steps might help resolve the issue:

  1. Try deleting or disabling metric alert rules that aren't used anymore.
  1. Switch to using metric alert rules that monitor multiple resources. With this capability, a single alert rule can monitor multiple resources by using only one alert rule counted against the quota. For more information about this capability and the supported resource types, see [metric alerts](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions).

articles/azure-monitor/alerts/alerts-troubleshoot.md

Lines changed: 17 additions & 13 deletions
@@ -162,7 +162,7 @@ If you can see a fired alert in the portal, but its configured action didn't tri
  - After the attempted retries to call the webhook fail, no action group calls the endpoint for 15 minutes.
  - The retry logic assumes that the call can be retried. Calls that return status code 408, 429, 503, or 504, or that fail with `HttpRequestException`, `WebException`, or `TaskCancellationException`, are eligible for retry.

- ## Action or notification happened more than once
+ ## The action or notification happened more than once

  If you received a notification for an alert (such as an email or an SMS) more than once, or the alert's action (such as a webhook or Azure function) was triggered multiple times, follow these steps:
@@ -180,27 +180,31 @@ If you received a notification for an alert (such as an email or an SMS) more th
  <!-- convertborder later -->
  :::image type="content" source="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" lightbox="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" alt-text="Screenshot of multiple action groups in an alert." border="false":::

- ## Action or notification has unexpected content
+ ## The action or notification has unexpected content

- Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is resilient and quick but occasionally suffers outages. When there are outages, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from our secondary provider might have a degraded email experience. The degradation results in slightly different email formatting and content. Since email templates differ in the two systems, maintaining parity across the two systems isn't feasible.
-
- Notifications generated by the fallback solution contain a note that says:
-
- "This is a degraded email experience. That means the formatting may be off or details could be missing. For more information on the degraded email experience, read here."
-
- If your notification doesn't contain this note and you received the alert, but believe some of its fields are missing or incorrect, check the payload format.
+ 1. **Was there an outage that triggered the use of the fallback email provider?**
+
+    Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is resilient and quick but occasionally suffers outages. When there are outages, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from our secondary provider might have a degraded email experience. The degradation results in slightly different email formatting and content. Since email templates differ in the two systems, maintaining parity across the two systems isn't feasible.
+
+    Notifications generated by the fallback solution contain a note that says:
+
+    "This is a degraded email experience. That means the formatting may be off or details could be missing. For more information on the degraded email experience, read here."
+
+    If your notification doesn't contain this note and you received the alert, but believe some of its fields are missing or incorrect, check the payload format.
+
+ 1. **What format did you use when configuring the alert rule?**

-    Each action type (email, webhook, etc.) has two formats - the default, legacy format, and the [common schema format](./alerts-common-schema.md). When you create an action group, you specify the format of the action. Different actions in the action groups may have different formats.
+    Each action type (email, webhook, etc.) has two formats - the default, legacy format, and the [common schema format](./alerts-common-schema.md). When you create an action group, you specify the format of the action. Different actions in the action groups may have different formats.

-    For example, for webhook actions:
+    For example, for webhook actions:

-    :::image type="content" source="media/alerts-troubleshoot/webhook.png" lightbox="media/alerts-troubleshoot/webhook.png" alt-text="Screenshot of webhook action schema option." border="false":::
+    :::image type="content" source="media/alerts-troubleshoot/webhook.png" lightbox="media/alerts-troubleshoot/webhook.png" alt-text="Screenshot of webhook action schema option." border="false":::

-    Check if the format specified at the action level is what you expect. For example, you might have developed code that responds to alerts (webhook, function, logic app, etc.), expecting one format, but later in the action you or another person specified a different format.
+    Check if the format specified at the action level is what you expect. For example, you might have developed code that responds to alerts (webhook, function, logic app, etc.), expecting one format, but later in the action you or another person specified a different format.

-    Also, check the payload format (JSON) for [activity log alerts](../alerts/activity-log-alerts-webhook.md), for [log search alerts](../alerts/alerts-log-webhook.md) (both Application Insights and log analytics), for [metric alerts](alerts-metric-near-real-time.md#payload-schema), for the [common alert schema](../alerts/alerts-common-schema.md), and for the deprecated [classic metric alerts](./alerts-webhooks.md).
+    Also, check the payload format (JSON) for [activity log alerts](../alerts/activity-log-alerts-webhook.md), for [log search alerts](../alerts/alerts-log-webhook.md) (both Application Insights and log analytics), for [metric alerts](alerts-metric-near-real-time.md#payload-schema), for the [common alert schema](../alerts/alerts-common-schema.md), and for the deprecated [classic metric alerts](./alerts-webhooks.md).
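One quick way to tell which format a webhook action is sending is to inspect the payload's top-level `schemaId`, which the common alert schema sets to `azureMonitorCommonAlertSchema`. A minimal sketch; the sample payload below is illustrative and heavily truncated:

```python
import json

def uses_common_alert_schema(raw_payload: str) -> bool:
    """Return True if a webhook payload follows the common alert schema,
    which is identified by its top-level schemaId field."""
    payload = json.loads(raw_payload)
    return payload.get("schemaId") == "azureMonitorCommonAlertSchema"

# Illustrative (truncated) common-schema payload.
sample = '{"schemaId": "azureMonitorCommonAlertSchema", "data": {"essentials": {"severity": "Sev3"}}}'
print(uses_common_alert_schema(sample))                      # True
print(uses_common_alert_schema('{"status": "Activated"}'))   # False: legacy-style payload
```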
- ### **Search results are not included in log search alert notifications.**
+ ### **The search results are not included in the log search alert notifications.**

  As of log search alerts API version 2021-08-01, search results were removed from the alert notification payload.
  Search results are only available for alert rules created with older API versions (2018-04-16). Creation of new alert rules through the Azure portal will, by default, create the rule with the newer version.
@@ -222,7 +226,7 @@ Also, check the payload format (JSON) for [activity log alerts](../alerts/activi
  Custom properties are only passed to the payload for actions, such as a webhook, Azure function, or logic app. Custom properties aren't included in notifications (email/SMS/push).

- ## Alert processing rule isn't working as expected
+ ## The alert processing rule isn't working as expected

  If you can see a fired alert in the portal, but a related alert processing rule didn't work as expected, follow these steps:
