articles/azure-monitor/alerts/alerts-create-new-alert-rule.md (14 additions, 16 deletions)
@@ -5,7 +5,7 @@ author: AbbyMSFT
ms.author: abbyweisberg
ms.topic: conceptual
ms.custom: ignite-2022
- ms.date: 02/12/2023
+ ms.date: 03/05/2023
ms.reviewer: harelbr
---
# Create a new alert rule
@@ -30,23 +30,20 @@ Then you define these elements for the resulting alert actions by using:
1. On the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**.

- The **Available signal types** for your selected resources are at the bottom right of the pane.
-
> [!NOTE]
> If you select a Log analytics workspace resource, keep in mind that if the workspace receives telemetry from resources in more than one subscription, alerts are sent about those resources from different subscriptions.

:::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot that shows the select resource pane for creating a new alert rule.":::

- 1. Select **Include all future resources** to include any future resources added to the selected scope.
- 1. Select **Done**.
+ 1. Select **Apply**.
1. Select **Next: Condition** at the bottom of the page.
- 1. On the **Select a signal** pane, filter the list of signals by using the signal type and monitor service:
+ 1. On the **Select a signal** pane, you can search for the signal name or you can filter the list of signals by:
- **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
- - **Monitor service**: The service sending the signal. This list is pre-populated based on the type of alert rule you selected.
+ - **Signal source**: The service sending the signal. The list is pre-populated based on the type of alert rule you selected.

This table describes the services available for each type of alert rule:

- |Signal type |Monitor service|Description |
+ |Signal type |Signal source|Description |
|---------|---------|---------|
|Metrics|Platform |For metric signals, the monitor service is the metric namespace. "Platform" means the metrics are provided by the resource provider, namely, Azure.|
||Azure.ApplicationInsights|Customer-reported metrics, sent by the Application Insights SDK. |
@@ -60,7 +57,8 @@ Then you define these elements for the resulting alert actions by using:
|Resource health|Resource health|The service that provides the resource-level health status. |
|Service health|Service health|The service that provides the subscription-level health status. |

- 1. Select the **Signal name**, and follow the steps in the following tab that corresponds to the type of alert you're creating.
+ 1. Select the **Signal name** and **Apply**.
+ 1. Follow the steps in the tab that corresponds to the type of alert you're creating.

### [Metric alert](#tab/metric)
@@ -75,7 +73,7 @@ Then you define these elements for the resulting alert actions by using:
Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.

- If you select more than one dimension value, each time series that results from the combination will trigger its own alert and be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.
+ If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.

|Field |Description |
|---------|---------|
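
For context alongside this hunk (not part of the diff): a hedged Azure CLI sketch of the dimension-split alert described above. The resource group, storage account ID, and threshold are placeholders, and the condition grammar is worth confirming against `az monitor metrics alert create --help`.

```azurecli
# Sketch only: alert when any single API (GetBlob or PutPage) exceeds
# 100 transactions in the lookback window. Each matching dimension value
# becomes its own time series and fires its own alert.
az monitor metrics alert create \
  --name "storage-api-transactions" \
  --resource-group "my-rg" \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/<account>" \
  --condition "total transactions > 100 where ApiName includes GetBlob or PutPage" \
  --description "Per-API transaction spike"
```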
@@ -89,7 +87,7 @@ Then you define these elements for the resulting alert actions by using:
|Field |Description |
|---------|---------|
|Threshold|Select if the threshold should be evaluated based on a static value or a dynamic value.<br>A **static threshold** evaluates the rule by using the threshold value that you configure.<br>**Dynamic thresholds** use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |
- |Operator|Select the operator for comparing the metric value against the threshold. <br>If you are using dynamic thresholds, alert rules can use tailored thresholds based on metric behavior for both upper and lower bounds in the same alert rule. Select one of these operators: <br> - Greater than the upper threshold or lower than the lower threshold (default) <br> - Greater than the upper threshold <br> - Lower than the lower threshold|
+ |Operator|Select the operator for comparing the metric value against the threshold. <br>If you're using dynamic thresholds, alert rules can use tailored thresholds based on metric behavior for both upper and lower bounds in the same alert rule. Select one of these operators: <br> - Greater than the upper threshold or lower than the lower threshold (default) <br> - Greater than the upper threshold <br> - Lower than the lower threshold|
|Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max. |
|Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic. |
|Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
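
A hedged sketch (not from the article) of how the threshold, operator, and aggregation choices above map to the CLI condition grammar; the dynamic-threshold syntax is from memory, so verify it with `az monitor metrics alert create --help`. Names and IDs are placeholders.

```azurecli
# Static threshold: aggregation (avg), operator (>), threshold value (90).
az monitor metrics alert create \
  --name "cpu-static" \
  --resource-group "my-rg" \
  --scopes "<vm-resource-id>" \
  --condition "avg Percentage CPU > 90"

# Dynamic threshold: "greater than the upper threshold", medium sensitivity,
# firing after 2 violations out of the last 4 evaluations.
az monitor metrics alert create \
  --name "cpu-dynamic" \
  --resource-group "my-rg" \
  --scopes "<vm-resource-id>" \
  --condition "avg Percentage CPU > dynamic medium 2 of 4"
```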
@@ -102,9 +100,9 @@ Then you define these elements for the resulting alert actions by using:
|Field |Description |
|---------|---------|
|Check every|Select how often the alert rule checks if the condition is met. |
- |Lookback period|Select how far back to look each time the data is checked. For example, every 1 minute you’ll be looking at the past 5 minutes.|
+ |Lookback period|Select how far back to look each time the data is checked. For example, every 1 minute, look back 5 minutes.|

- 1. (Optional) In the **Advanced options** section, you can specify how many failures within a specific time period will trigger the alert. For example, you can specify that you only want to trigger an alert if there were three failures in the last hour. This setting is defined by your application business policy.
+ 1. (Optional) In the **Advanced options** section, you can specify how many failures within a specific time period trigger an alert. For example, you can specify that you only want to trigger an alert if there were three failures in the last hour. Your application business policy should determine this setting.
Select values for these fields:
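
As a rough mapping for this hunk (not part of the diff): **Check every** corresponds to the CLI's evaluation frequency and **Lookback period** to its window size. A hedged sketch that updates an existing rule, with placeholder names:

```azurecli
# Evaluate every 1 minute, looking back over the last 5 minutes of data.
az monitor metrics alert update \
  --name "cpu-alert" \
  --resource-group "my-rg" \
  --evaluation-frequency 1m \
  --window-size 5m
```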
@@ -121,7 +119,7 @@ Then you define these elements for the resulting alert actions by using:
> [!NOTE]
> If you're creating a new log alert rule, note that the current alert rule wizard is different from the earlier experience. For more information, see [Changes to the log alert rule creation experience](#changes-to-the-log-alert-rule-creation-experience).

- 1. On the **Logs** pane, write a query that will return the log events for which you want to create an alert.
+ 1. On the **Logs** pane, write a query that returns the log events for which you want to create an alert.
To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries.
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule.":::
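
For context (not part of the diff): a hedged sketch of an equivalent log search alert created from the CLI, with a placeholder KQL query and workspace ID. The `az monitor scheduled-query` condition grammar below is from memory and should be checked against `az monitor scheduled-query create --help`.

```azurecli
# Sketch only: fire when the query returns more than 3 matching rows in a
# 15-minute window, evaluated every 5 minutes.
az monitor scheduled-query create \
  --name "error-events" \
  --resource-group "my-rg" \
  --scopes "<log-analytics-workspace-resource-id>" \
  --condition "count 'Errors' > 3" \
  --condition-query Errors="Event | where EventLevelName == 'Error'" \
  --evaluation-frequency 5m \
  --window-size 15m \
  --severity 2
```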
@@ -177,7 +175,7 @@ Then you define these elements for the resulting alert actions by using:
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule.":::

- 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
+ 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting.

Select values for these fields under **Number of violations to trigger the alert**:
@@ -334,7 +332,7 @@ Then you define these elements for the resulting alert actions by using:
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot that shows the Tags tab when creating a new alert rule.":::

- 1. On the **Review + create** tab, a validation will run and inform you of any issues.
+ 1. On the **Review + create** tab, the rule is validated, and lets you know about any issues.
1. When validation passes and you've reviewed the settings, select the **Create** button.
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot that shows the Review and create tab when creating a new alert rule.":::
articles/azure-monitor/essentials/azure-monitor-workspace-overview.md (3 additions, 1 deletion)
@@ -11,7 +11,9 @@ ms.date: 01/22/2023
# Azure Monitor workspace (preview)
An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions.

-
+ > [!Note]
+ > Log Analytics workspaces contain logs and metrics data from multiple Azure resources, whereas Azure Monitor workspaces contain only metrics related to Prometheus.
+

## Contents of Azure Monitor workspace
Azure Monitor workspaces will eventually contain all metric data collected by Azure Monitor. Currently, only Prometheus metrics are data hosted in an Azure Monitor workspace.
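
For context (not part of the diff): a hedged sketch of creating an Azure Monitor workspace, assuming the preview `az monitor account` command group is available in your CLI version; the name, resource group, and region are placeholders.

```azurecli
# Create an Azure Monitor workspace (the resource that hosts Prometheus
# metrics). The command group is in preview and may change.
az monitor account create \
  --name "my-monitor-workspace" \
  --resource-group "my-rg" \
  --location "eastus"
```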
articles/azure-monitor/logs/basic-logs-configure.md (1 addition, 1 deletion)
@@ -58,7 +58,7 @@ Configure a table for Basic logs if:
| Media Services |[AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests)|
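
For context (not part of the diff): switching one of the supported tables to the Basic logs plan can also be done from the CLI; the workspace name is a placeholder and the table must be on the supported list above.

```azurecli
# Set the table plan to Basic for a table that supports it.
az monitor log-analytics workspace table update \
  --resource-group "my-rg" \
  --workspace-name "my-workspace" \
  --name "AMSLiveEventOperations" \
  --plan Basic
```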
- |[Code pipeline insights](defender-for-devops-introduction.md)| Empowers security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, including GitHub and Azure DevOps. Findings from Defender for DevOps, such as IaaC misconfigurations and exposed secrets, can then be correlated with other contextual cloud security insights to prioritize remediation in code. | Connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) repositories to Defender for Cloud |[Defender for DevOps](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
+ |[Code pipeline insights](defender-for-devops-introduction.md)| Empowers security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, including GitHub and Azure DevOps. Findings from Defender for DevOps, such as IaC misconfigurations and exposed secrets, can then be correlated with other contextual cloud security insights to prioritize remediation in code. | Connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) repositories to Defender for Cloud |[Defender for DevOps](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
+ title: Demystifying Defender for Servers | Defender for Cloud in the field
+ titleSuffix: Microsoft Defender for Cloud
+ description: Learn about different deployment options in Defender for Servers
+ ms.topic: reference
+ ms.date: 03/05/2023
+ ---
+
+ # Demystifying Defender for Servers | Defender for Cloud in the field
+
+ **Episode description**: In this episode of Defender for Cloud in the Field, Tom Janetscheck joins Yuri Diogenes to talk about the different deployment options in Defender for Servers. Tom covers the different agents available and the scenarios that will be most used for each agent, including the agentless feature. Tom also talks about the different vulnerability assessment solutions available, and how to deploy Defender for Servers at scale via policy or custom automation.