
Commit 179420b

fixes
1 parent b85f8c2

File tree

5 files changed: +17 -16 lines changed

articles/azure-monitor/best-practices-cost.md

Lines changed: 6 additions & 6 deletions
@@ -62,12 +62,12 @@ You may be able to significantly reduce your costs by optimizing the configurati

| Recommendation | Description |
|:---|:---|
-| Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](../logs/cost-logs.md#commitment-tiers) or [dedicated cluster](../logs/logs-dedicated-clusters.md), which allows you to commit to a daily minimum of data collected in exchange for a lower rate.<br><br>See [Azure Monitor Logs cost calculations and options](../logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
-| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](../logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost. |
-| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](../logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost. |
-| Regularly analyze collected data to identify trends and anomalies. | Use [Log Analytics workspace insights](../logs/log-analytics-workspace-insights-overview.md) to periodically review the amount of data collected in your workspace. In addition to helping you understand the amount of data collected by different sources, it will identify anomalies and upward trends in data collection that could result in excess cost. Further analyze data collection using methods in [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service. |
-| Create an alert when data collection is high. | To avoid unexpected bills, you should be [proactively notified anytime you experience excessive usage](../logs/analyze-usage.md#send-alert-when-data-collection-is-high). Notification allows you to address any potential anomalies before the end of your billing period. |
-| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](../logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in [When to use a daily cap](../logs/daily-cap.md#when-to-use-a-daily-cap).<br><br>If you do set a daily cap, in addition to [creating an alert when the cap is reached](../logs/log-analytics-workspace-health.md#view-log-analytics-workspace-health-and-set-up-health-status-alerts), ensure that you also [create an alert rule to be notified when some percentage has been reached (90% for example)](../logs/analyze-usage.md#send-alert-when-data-collection-is-high). This gives you an opportunity to investigate and address the cause of the increased data before the cap shuts off data collection. |
+| Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers) or [dedicated cluster](logs/logs-dedicated-clusters.md), which allows you to commit to a daily minimum of data collected in exchange for a lower rate.<br><br>See [Azure Monitor Logs cost calculations and options](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
+| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost. |
+| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost. |
+| Regularly analyze collected data to identify trends and anomalies. | Use [Log Analytics workspace insights](logs/log-analytics-workspace-insights-overview.md) to periodically review the amount of data collected in your workspace. In addition to helping you understand the amount of data collected by different sources, it will identify anomalies and upward trends in data collection that could result in excess cost. Further analyze data collection using methods in [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service. |
+| Create an alert when data collection is high. | To avoid unexpected bills, you should be [proactively notified anytime you experience excessive usage](logs/analyze-usage.md#send-alert-when-data-collection-is-high). Notification allows you to address any potential anomalies before the end of your billing period. |
+| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in [When to use a daily cap](logs/daily-cap.md#when-to-use-a-daily-cap).<br><br>If you do set a daily cap, in addition to [creating an alert when the cap is reached](logs/log-analytics-workspace-health.md#view-log-analytics-workspace-health-and-set-up-health-status-alerts), ensure that you also [create an alert rule to be notified when some percentage has been reached (90% for example)](logs/analyze-usage.md#send-alert-when-data-collection-is-high). This gives you an opportunity to investigate and address the cause of the increased data before the cap shuts off data collection. |
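For illustration, a minimal Azure CLI sketch of the pricing tier, Basic Logs, and daily cap recommendations in this table. The resource group, workspace, table, tier, and cap values are hypothetical placeholders, not recommendations for your environment.

```azurecli
# "contoso-rg" and "contoso-ws" are hypothetical placeholder names.
# Move the workspace from pay-as-you-go to a 100 GB/day commitment tier.
az monitor log-analytics workspace update \
  --resource-group contoso-rg \
  --workspace-name contoso-ws \
  --sku CapacityReservation \
  --capacity-reservation-level 100

# Switch a table used only for troubleshooting to the Basic Logs plan.
# Only certain tables, such as ContainerLogV2, support this plan.
az monitor log-analytics workspace table update \
  --resource-group contoso-rg \
  --workspace-name contoso-ws \
  --name ContainerLogV2 \
  --plan Basic

# Optional preventative measure: cap daily ingestion at 10 GB.
az monitor log-analytics workspace update \
  --resource-group contoso-rg \
  --workspace-name contoso-ws \
  --quota 10
```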
articles/azure-monitor/includes/waf-workspace-cost.md

Lines changed: 2 additions & 2 deletions
@@ -20,9 +20,9 @@ ms.date: 03/30/2023

| Recommendation | Description |
|:---|:---|
-| Configure pricing tier for the amount of data that each workspace typically collects. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](../logs/cost-logs.md#commitment-tiers), which allows you to commit to a daily minimum of data collected in exchange for a lower rate. If you collect enough data in workspaces in a single region, you can link them to a [dedicated cluster](../logs/logs-dedicated-clusters.md) and combine their collected volume using [cluster pricing](../logs/cost-logs.md#dedicated-clusters).<br><br>See [Azure Monitor Logs cost calculations and options](../logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
+| Configure pricing tier for the amount of data that each workspace typically collects. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](../logs/cost-logs.md#commitment-tiers), which allows you to commit to a daily minimum of data collected in exchange for a lower rate. If you collect enough data across workspaces in a single region, you can link them to a [dedicated cluster](../logs/logs-dedicated-clusters.md) and combine their collected volume using [cluster pricing](../logs/cost-logs.md#dedicated-clusters).<br><br>See [Azure Monitor Logs cost calculations and options](../logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](../logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently and don't use them for alerting, this query cost can be more than offset by the reduced ingestion cost. |
-| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](../logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost. |
+| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). Consider your particular requirements for having data readily available for log queries. If you need to retain data for occasional investigation or analysis of historical data, configure [Archived Logs](../logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost and still have access to it through [search jobs](../logs/search-jobs.md) or [restoring a set of data](../logs/restore.md) to the workspace. If you need to store data for extended periods for compliance reasons but don't need it available for log queries, then use [data export](../logs/logs-data-export.md) to send it to Azure storage. |
| Regularly analyze collected data to identify trends and anomalies. | Use [Log Analytics workspace insights](../logs/log-analytics-workspace-insights-overview.md) to periodically review the amount of data collected in your workspace. In addition to helping you understand the amount of data collected by different sources, it will identify anomalies and upward trends in data collection that could result in excess cost. Further analyze data collection using methods in [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service. |
| Create an alert when data collection is high. | To avoid unexpected bills, you should be [proactively notified anytime you experience excessive usage](../logs/analyze-usage.md#send-alert-when-data-collection-is-high). Notification allows you to address any potential anomalies before the end of your billing period. |
| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](../logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in [When to use a daily cap](../logs/daily-cap.md#when-to-use-a-daily-cap).<br><br>If you do set a daily cap, in addition to [creating an alert when the cap is reached](../logs/log-analytics-workspace-health.md#view-log-analytics-workspace-health-and-set-up-health-status-alerts), ensure that you also [create an alert rule to be notified when some percentage has been reached (90% for example)](../logs/analyze-usage.md#send-alert-when-data-collection-is-high). This gives you an opportunity to investigate and address the cause of the increased data before the cap shuts off data collection. |
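To make the retention and archive rows concrete, a similar hedged Azure CLI sketch reusing the same placeholder names; the table names, retention values, and storage account path are illustrative assumptions only.

```azurecli
# Keep 30 days of interactive retention for one table while retaining the
# data for roughly two years in total (the remainder is archived).
az monitor log-analytics workspace table update \
  --resource-group contoso-rg \
  --workspace-name contoso-ws \
  --name SecurityEvent \
  --retention-time 30 \
  --total-retention-time 730

# Continuously export selected tables to a storage account when data must be
# kept for compliance but doesn't need to be available for log queries.
az monitor log-analytics workspace data-export create \
  --resource-group contoso-rg \
  --workspace-name contoso-ws \
  --name export-to-storage \
  --tables SecurityEvent Heartbeat \
  --destination "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.Storage/storageAccounts/contosoarchive"
```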

articles/azure-monitor/includes/waf-workspace-operation.md

Lines changed: 0 additions & 1 deletion
@@ -12,7 +12,6 @@ ms.date: 03/30/2023
> - Design a workspace architecture with the minimal number of workspaces to meet your business requirements.
> - Use Log Analytics workspace insights to track the health and performance of your Log Analytics workspaces.
> - Create alert rules to be proactively notified of operational issues in the workspace.
-> - Use functions for log queries commonly used by your organization.

### Configuration recommendations

articles/azure-monitor/includes/waf-workspace-reliability.md

Lines changed: 8 additions & 6 deletions
@@ -6,6 +6,12 @@ ms.topic: include
ms.date: 03/30/2023
---

+Log Analytics workspaces offer a high degree of reliability without requiring any design decisions. There may be conditions where a temporary loss of access to the workspace can result in data loss, but features of other Azure Monitor components, such as data buffering with the Azure Monitor agent, help to mitigate these situations.
+
+The reliability situations to consider for Log Analytics workspaces are availability of the workspace and protection of collected data in the rare case of failure of an Azure datacenter or region. There is no standard feature for failover between workspaces in different regions, but there are strategies that you can use if you have particular requirements for availability or compliance.
+
+Some availability features require a dedicated cluster. Since this requires a commitment of at least 500 GB per day from all workspaces in the same region, reliability will not typically be your primary criterion for including dedicated clusters in your design.
+
### Design checklist

> [!div class="checklist"]

@@ -14,15 +20,11 @@ ms.date: 03/30/2023
> - If you require the workspace to be available in the case of a region failure, or you don't collect enough data for a dedicated cluster, configure data collection to send critical data to multiple workspaces in different regions.
> - If you require data to be protected in the case of datacenter or region failure, configure data export from the workspace to save data in an alternate location.

-The reliability situations to consider for Log Analytics workspaces are availability of the workspace and protection of collected data in the rare case of failure of a datacenter or region. There is no standard feature for failover between workspaces in different regions, but there are strategies that you can use if you have particular requirements for availability or compliance.
-
-Some availability features require a dedicated cluster. Since this requires a commitment of at least 500 GB per day from all workspaces in the same region, reliability will not typically be your primary criterion for including dedicated clusters in your design.
-
### Configuration recommendations

| Recommendation | Benefit |
|:---|:---|
| Create a health status alert rule for your workspace. | A [health status alert](../logs/log-analytics-workspace-health.md#view-log-analytics-workspace-health-and-set-up-health-status-alerts) will proactively notify you if a workspace becomes unavailable because of a datacenter or regional failure. |
-| If you collect enough data for a dedicated cluster, create a dedicated cluster in an availability zone. | Workspaces linked to a [dedicated cluster](../logs/logs-dedicated-clusters.md) located in an [availability zone](../logs/availability-zones.md#data-resilience---supported-regions) remain available if a datacenter fails. |
-| If you require the workspace to be available in the case of a region failure, or you don't collect enough data for a dedicated cluster, configure data collection to send critical data to multiple workspaces in different regions. | Configure your data sources to send to multiple workspaces in different regions. For example, configure DCRs for multiple workspaces for Azure Monitor agent running on virtual machines, and multiple diagnostic settings to collect resource logs from Azure resources. This configuration results in duplicate ingestion and retention charges, so only use it for critical data.<br><br>Even though the data will be available in the alternate workspace in case of failure, resources that rely on the data, such as alerts and workbooks, would not know to use this workspace. Consider storing ARM templates for critical resources with configuration for the alternate workspace in Azure DevOps or as disabled [policies](../../governance/policy/overview.md) that can be quickly enabled in a failover scenario. |
+| If you collect enough data, create a dedicated cluster in an availability zone. | Workspaces linked to a [dedicated cluster](../logs/logs-dedicated-clusters.md) located in an [availability zone](../logs/availability-zones.md#data-resilience---supported-regions) remain available if a datacenter fails. |
+| If you require the workspace to be available in the case of a region failure, or you don't collect enough data for a dedicated cluster, configure data collection to send critical data to multiple workspaces in different regions. | Configure your data sources to send to multiple workspaces in different regions. For example, configure DCRs for multiple workspaces for Azure Monitor agent running on virtual machines, and multiple diagnostic settings to collect resource logs from Azure resources. This configuration results in duplicate ingestion and retention charges, so only use it for critical data.<br><br>Even though the data will be available in the alternate workspace in case of failure, resources that rely on the data, such as alerts and workbooks, would not know to use this workspace. Consider storing ARM templates for critical resources with configuration for the alternate workspace in Azure DevOps or as disabled [policies](../../governance/policy/overview.md) that can quickly be enabled in a failover scenario. |
| If you require data to be protected in the case of datacenter or region failure, configure data export from the workspace to save data in an alternate location. | The [data export feature of Azure Monitor](../logs/logs-data-export.md) allows you to continuously export data sent to specific tables to Azure storage where it can be retained for extended periods. Use [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS to replicate this data to other regions.<br><br>Data export only supports [specific tables](../logs/logs-data-export.md?tabs=portal#supported-tables). If you require export of other tables, including custom tables, then you can use other methods of exporting data, including Logic Apps, to protect your data. The data can't be restored to the workspace and can be difficult to analyze, so this is primarily a solution to meet compliance for data retention. |
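As a sketch of the dedicated cluster recommendation, assuming the same placeholder names and a region that supports availability zones; cluster provisioning and workspace linking are long-running operations and can take some time to complete.

```azurecli
# Create a dedicated cluster with the minimum 500 GB/day commitment.
# "contoso-cluster" and eastus2 are hypothetical placeholders; clusters are
# zone-redundant only in regions that support availability zones.
az monitor log-analytics cluster create \
  --resource-group contoso-rg \
  --name contoso-cluster \
  --location eastus2 \
  --sku-capacity 500

# Link an existing workspace to the cluster. The linked service name for
# cluster linking is the literal value "cluster".
az monitor log-analytics workspace linked-service create \
  --resource-group contoso-rg \
  --workspace-name contoso-ws \
  --name cluster \
  --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.OperationalInsights/clusters/contoso-cluster"
```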
