* [Combine Azure Resource Graph tables with a Log Analytics workspace](../logs/azure-monitor-data-explorer-proxy.md#combine-azure-resource-graph-tables-with-a-log-analytics-workspace)
* Not supported in government clouds
1. Select **Run** to run the alert.
1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**.
1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information.
articles/azure-monitor/alerts/alerts-create-metric-alert-rule.md (45 additions, 1 deletion)
description: This article shows you how to create a new metric alert rule.
author: AbbyMSFT
ms.author: abbyweisberg
ms.topic: how-to
ms.date: 03/07/2024
ms.reviewer: harelbr
---
Alerts triggered by these alert rules contain a payload that uses the [common alert schema](alerts-common-schema.md).
## Permissions to create metric alert rules
To create a metric alert rule, you must have the following permissions (a sample role assignment sketch follows this list):
- Read permission on the target resource of the alert rule.
- Write permission on the resource group in which the alert rule is created. If you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides.
- Read permission on any action group associated with the alert rule, if applicable.
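For example, a minimal Azure CLI sketch of granting these permissions with built-in roles might look like the following. The principal ID, resource IDs, and the choice of the **Reader** and **Monitoring Contributor** roles are illustrative assumptions; your organization might use different roles that include the required actions.

```azurecli
# Placeholder values -- replace with your own principal and resources.
principalId="<object-id-of-the-user-or-service-principal>"
targetResourceId="/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.Storage/storageAccounts/contosostorage"
alertRuleResourceGroup="/subscriptions/<sub-id>/resourceGroups/contoso-rg"

# Read permission on the target resource of the alert rule.
az role assignment create --assignee "$principalId" --role "Reader" --scope "$targetResourceId"

# Write permission on the resource group in which the alert rule is created.
az role assignment create --assignee "$principalId" --role "Monitoring Contributor" --scope "$alertRuleResourceGroup"
```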
- Metric alert rule names can't end with a space or a period.
- The combined resource group name and alert rule name can't exceed 252 characters.
> [!NOTE]
> If the alert rule name contains characters that aren't alphabetic or numeric, for example, spaces, punctuation marks, or symbols, these characters might be URL-encoded when retrieved by certain clients.
## Restrictions when you use dimensions in a metric alert rule with multiple conditions
Metric alerts support alerting on multi-dimensional metrics and defining multiple conditions, up to five conditions per alert rule.
Consider the following constraints when you use dimensions in an alert rule that contains multiple conditions:
- You can only select one value per dimension within each condition.
- You can't use the option to **Select all current and future values** (select the asterisk, \*).
- When metrics that are configured in different conditions support the same dimension, a configured dimension value must be explicitly set in the same way for all those metrics in the relevant conditions.
For example:
- Consider a metric alert rule that's defined on a storage account and monitors two conditions:
  * Total **Transactions** > 5
  * Average **SuccessE2ELatency** > 250 ms
- You want to update the first condition and only monitor transactions where the **ApiName** dimension equals `"GetBlob"`.
- Because both the **Transactions** and **SuccessE2ELatency** metrics support an **ApiName** dimension, you'll need to update both conditions and have them specify the **ApiName** dimension with a `"GetBlob"` value, as illustrated in the sketch after this list.
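The following Azure CLI sketch illustrates the resulting rule, assuming a hypothetical storage account `contosostorage` in resource group `contoso-rg`. The condition strings follow the `az monitor metrics alert create` grammar; adjust them to your environment.

```azurecli
# Both conditions explicitly set the ApiName dimension to "GetBlob".
az monitor metrics alert create \
  --name "storage-getblob-alert" \
  --resource-group "contoso-rg" \
  --scopes "/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.Storage/storageAccounts/contosostorage" \
  --condition "total Transactions > 5 where ApiName includes GetBlob" \
  --condition "avg SuccessE2ELatency > 250 where ApiName includes GetBlob" \
  --description "Both conditions filter on the same ApiName dimension value"
```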
## Considerations when creating an alert rule that contains multiple criteria
- You can only select one value per dimension within each criterion.
- You can't use an asterisk (\*) as a dimension value.
- When metrics that are configured in different criteria support the same dimension, a configured dimension value must be explicitly set in the same way for all those metrics. For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-multiple-criteria).
## Next steps
[View and manage your alert instances](alerts-manage-alert-instances.md)
articles/azure-monitor/alerts/alerts-manage-alert-rules.md (11 additions, 1 deletion)
title: Manage your alert rules
description: Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
author: AbbyMSFT
ms.author: abbyweisberg
ms.topic: how-to
ms.custom: devx-track-azurecli
ms.date: 01/14/2024
ms.reviewer: harelbr
- [Update](/rest/api/monitor/metricalerts/update): Update a metric alert rule.
- [Delete](/rest/api/monitor/metricalerts/delete): Delete a metric alert rule.
## Delete metric alert rules defined on a deleted resource
When you delete an Azure resource, associated metric alert rules aren't deleted automatically. To delete alert rules associated with a resource that's been deleted, follow these steps in the Azure portal (a CLI-based sketch follows the steps):
1. Open the resource group in which the deleted resource was defined.
1. In the list that displays the resources, select the **Show hidden types** checkbox.
1. Filter the list by Type == **microsoft.insights/metricalerts**.
1. Select the relevant alert rules and select **Delete**.
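If you prefer the command line, a minimal Azure CLI sketch is shown below. The resource group name is a placeholder; confirm which rules target the deleted resource before deleting anything.

```azurecli
# List all metric alert rules in the resource group, including rules whose
# target resource no longer exists.
az monitor metrics alert list --resource-group "contoso-rg" --output table

# Delete a rule after confirming it targeted the deleted resource.
az monitor metrics alert delete --name "<alert-rule-name>" --resource-group "contoso-rg"
```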
## Manage log search alert rules using the CLI
This section describes how to manage log search alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
articles/azure-monitor/alerts/alerts-troubleshoot-log.md (36 additions, 43 deletions)
> [!NOTE]
> This article doesn't consider cases where the Azure portal shows that an alert rule was triggered but a notification isn't received. For such cases, see [Action or notification on my alert did not work as expected](./alerts-troubleshoot.md#action-or-notification-on-my-alert-did-not-work-as-expected).
## A log search alert didn't fire when it should have
1. **Is the alert rule in a degraded or unavailable health state?**
View the health status of your log search alert rule:
See [Monitor the health of log search alert rules](log-alert-rule-health.md#monitor-the-health-of-log-search-alert-rules) to learn more.
1. **Check the log ingestion latency.**
Azure Monitor processes terabytes of customers' logs from across the world, which can cause [logs ingestion latency](../logs/data-ingestion-time.md).
Logs are semi-structured data and are inherently more latent than metrics. If you're experiencing more than a 4-minute delay in fired alerts, you should consider using [metric alerts](alerts-metric-overview.md). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
To mitigate latency, the system retries the alert evaluation multiple times. After the data arrives, the alert fires; in most cases, the fire time doesn't match the log record time.
1. **Are the actions muted or was the alert rule configured to resolve automatically?**
A common issue is thinking that the alert didn't fire when, in fact, the rule was configured so that the alert wouldn't fire. Check the advanced options of the [log search alert rule](./alerts-create-log-alert-rule.md) to verify that neither of the following options is selected:
* The **Mute actions** checkbox: allows you to mute fired alert actions for a set amount of time.
* **Automatically resolve alerts**: configures the alert to fire only once per condition being met.
## A log search alert fired when it shouldn't have
A configured [log alert rule in Azure Monitor](./alerts-log.md) might be triggered unexpectedly. The following sections describe some common reasons.
1. **Was the alert triggered due to latency issues?**
Azure Monitor processes terabytes of customer logs globally, which can cause [logs ingestion latency](../logs/data-ingestion-time.md). There are built-in capabilities to prevent false alerts, but they can still occur on very latent data (over ~30 minutes) and data with latency spikes.
Logs are semi-structured data and are inherently more latent than metrics. If you're experiencing many misfires in fired alerts, consider using [metric alerts](alerts-types.md#metric-alerts). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
Log search alerts work best when you're trying to detect specific data in the logs. They're less effective when you're trying to detect a lack of data in the logs, like alerting on virtual machine heartbeat.
1. **Was the log search alert rule disabled?**
If a log search alert rule query fails to evaluate continuously for a week, Azure Monitor disables it automatically.
The following sections list some reasons why Azure Monitor might disable a log search alert rule. Additionally, there's an example of the [Activity log](../../azure-monitor/essentials/activity-log.md) event that is submitted when a rule is disabled.
### Activity log example when rule is disabled
```json
{
  ...
}
```
1. **Was the resource targeted by the alert rule moved, renamed, or deleted?**
If a resource used as the scope of an alert rule moves, gets renamed, or is deleted, all log alert rules referring to that resource break. To fix this issue, recreate the alert rules using a valid target resource for the scope.
1. **Does the alert rule use a system-assigned managed identity?**
When you create a log alert rule with system-assigned managed identity, the identity is created without any permissions. After you create the rule, you need to assign the appropriate roles to the rule’s identity so that it can access the data you want to query. For example, you might need to give it a Reader role for the relevant Log Analytics workspaces, or a Reader role and a Database Viewer role for the relevant ADX cluster. See [managed identities](/azure/azure-monitor/alerts/alerts-create-log-alert-rule#configure-the-alert-rule-details) for more information about using managed identities in log alerts.
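For example, a minimal sketch of granting the rule's identity access to a Log Analytics workspace with the Azure CLI. The principal ID, workspace resource ID, and the choice of the built-in **Reader** role are illustrative placeholders.

```azurecli
# Placeholder values -- replace with the rule identity's principal ID and your workspace.
rulePrincipalId="<principal-id-of-the-rule-system-assigned-identity>"
workspaceId="/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.OperationalInsights/workspaces/contoso-workspace"

# Grant the rule's identity read access to the workspace it queries.
az role assignment create --assignee "$rulePrincipalId" --role "Reader" --scope "$workspaceId"
```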
1. **Is the query used in the log search alert rule valid?**
When a log alert rule is created, the query is validated for correct syntax. But sometimes the query provided in the log alert rule can start to fail. Some common reasons are:
- Rules were created via the API, and the user skipped validation.
- The query [runs on multiple resources](../logs/cross-workspace-query.md), and one or more of the resources was deleted or moved.
- The [query fails](../logs/api/errors.md) because:
  - The logging solution wasn't [deployed to the workspace](../insights/solutions.md#install-a-monitoring-solution), so tables aren't created.
  - Data stopped flowing to a table in the query for more than 30 days.
  - [Custom logs tables](../agents/data-sources-custom-logs.md) haven't been created because the data flow hasn't started.
- Changes in the [query language](/azure/kusto/query/) include a revised format for commands and functions, so the query provided earlier is no longer valid.
[Azure Advisor](../../advisor/advisor-overview.md) warns you about this behavior. It adds a recommendation about the affected log search alert rule. The category used is 'High Availability' with medium impact and a description of 'Repair your log alert rule to ensure monitoring'.
## Error messages when configuring log search alert rules
### The query couldn't be validated since you need permission for the logs
If you receive this error message when creating or editing your alert rule query, make sure you have permissions to read the target resource logs.
- Permissions required to read logs in workspace-context access mode: `Microsoft.OperationalInsights/workspaces/query/read`.
- Permissions required to read logs in resource-context access mode (including workspace-based Application Insights resource): `Microsoft.Insights/logs/tableName/read`.
See [Manage access to Log Analytics workspaces](../logs/manage-access.md) to learn more about permissions.
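As a quick check, you can list the role assignments you hold on the target scope and confirm that one of them grants the action above. This Azure CLI sketch uses placeholder values for the user and workspace.

```azurecli
# List the role assignments for a user on the workspace (or other target resource).
az role assignment list \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.OperationalInsights/workspaces/contoso-workspace" \
  --include-inherited \
  --output table
```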
### One-minute frequency is not supported for this query
There are some limitations to using a one-minute alert rule frequency. When you set the alert rule frequency to one minute, an internal manipulation is performed to optimize the query. This manipulation can cause the query to fail if it contains unsupported operations.
For a list of unsupported scenarios, see [this note](https://aka.ms/lsa_1m_limits).
### Failed to resolve scalar expression named <>
This error message can be returned when creating or editing your alert rule query if:
- You are referencing a column that doesn't exist in the table schema.
- You are referencing a column that wasn't used in a prior project clause of the query.
To mitigate this issue, you can either add the column to the previous project clause or use the [column_ifexists](https://learn.microsoft.com/azure/data-explorer/kusto/query/column-ifexists-function) function.
### ScheduledQueryRules API is not supported for read only OMS Alerts
This error message is returned when you try to update or delete rules created with the legacy API version by using the Azure portal.
## Alert rule quota was reached
For details about the number of log search alert rules per subscription and maximum limits of resources, see [Azure Monitor service limits](../service-limits.md).
If you've reached the quota limit, the following steps might help resolve the issue.
1. Delete or disable log search alert rules that aren’t used anymore.
1. Use [splitting by dimensions](alerts-types.md#monitor-multiple-instances-of-a-resource-using-dimensions) to reduce the number of rules. When you use splitting by dimensions, one rule can monitor many resources.
1. If you need the quota limit to be increased, open a support request and provide the following information:
- The Subscription IDs and Resource IDs for which the quota limit needs to be increased
- The reason for quota increase
- The resource type for the quota increase, such as **Log Analytics** or **Application Insights**
- The requested quota limit
### Check the current usage of log alert rules
#### Check the number of log alert rules in use in the Azure portal
1. On the Alerts screen in Azure Monitor, select **Alert rules**.
1. In the **Subscription** dropdown control, filter to the subscription you want. (Make sure you don't filter to a specific resource group, resource type, or resource.)
The total number of log search alert rules is displayed above the rules list.
#### Use the API to check the number of log alert rules in use
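For example, a hedged sketch that counts log search alert rules from the command line. The `az monitor scheduled-query` group might require the `scheduled-query` CLI extension, and the API version in the REST call may need to be updated for your environment.

```azurecli
# Count the log search alert rules (scheduledQueryRules) in the current subscription.
az monitor scheduled-query list --query "length(@)" --output tsv

# Alternatively, call the REST API directly and count the returned rules.
az rest --method GET \
  --url "https://management.azure.com/subscriptions/<sub-id>/providers/Microsoft.Insights/scheduledQueryRules?api-version=2021-08-01" \
  --query "length(value)"
```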