articles/azure-monitor/includes/waf-logs-cost.md
ms.date: 03/30/2023
| Recommendation | Benefit |
|:---|:---|
| Configure pricing tier for the amount of data that each Log Analytics workspace typically collects. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](../logs/cost-logs.md#commitment-tiers), which allows you to commit to a daily minimum of data collected in exchange for a lower rate. If you collect enough data across workspaces in a single region, you can link them to a [dedicated cluster](../logs/logs-dedicated-clusters.md) and combine their collected volume using [cluster pricing](../logs/cost-logs.md#dedicated-clusters).<br><br>See [Azure Monitor Logs cost calculations and options](../logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 31 days (90 days if Sentinel is enabled on the workspace and 90 days for Application Insights data). Consider your particular requirements for having data readily available for log queries. You can significantly reduce your cost by configuring [Archived Logs](../logs/data-retention-archive.md), which allows you to retain data for up to seven years and still access it occasionally using [search jobs](../logs/search-jobs.md) or [restoring a set of data](../logs/restore.md) to the workspace. |
| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](../logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently and don't use them for alerting, this query cost can be more than offset by the reduced ingestion cost. |
| Regularly analyze collected data to identify trends and anomalies. | Use [Log Analytics workspace insights](../logs/log-analytics-workspace-insights-overview.md) to periodically review the amount of data collected in your workspace. In addition to helping you understand the amount of data collected by different sources, it will identify anomalies and upward trends in data collection that could result in excess cost. Further analyze data collection using methods in [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a set of new virtual machines, or when you onboard a new service. |
| Create an alert when data collection is high. | To avoid unexpected bills, you should be [proactively notified anytime you experience excessive usage](../logs/analyze-usage.md#send-alert-when-data-collection-is-high). Notification allows you to address any potential anomalies before the end of your billing period. |
| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](../logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in [When to use a daily cap](../logs/daily-cap.md#when-to-use-a-daily-cap).<br><br>If you do set a daily cap, in addition to [creating an alert when the cap is reached](../logs/log-analytics-workspace-health.md#view-log-analytics-workspace-health-and-set-up-health-status-alerts), ensure that you also [create an alert rule to be notified when some percentage has been reached (90% for example)](../logs/analyze-usage.md#send-alert-when-data-collection-is-high). This gives you an opportunity to investigate and address the cause of the increased data before the cap shuts off data collection. |
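The usage analysis and high-collection alerts recommended above are typically built on the `Usage` table. As a minimal sketch (assuming the standard `Usage` schema, where `Quantity` is reported in MB), a query like this could back a log search alert rule:

```kusto
// Billable data ingested into the workspace over the last 24 hours, in GB.
Usage
| where TimeGenerated > ago(24h)
| where IsBillable
| summarize IngestedGB = sum(Quantity) / 1000.
```

An alert rule on this query with a threshold at some percentage of your daily cap (90%, for example) gives you time to investigate before collection is shut off.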
articles/azure-monitor/includes/waf-logs-performance.md
| Recommendation | Benefit |
|:---|:---|
| Configure log query auditing and use Log Analytics workspace insights to identify slow and inefficient queries. |[Log query auditing](../logs/query-audit.md) stores the compute time required to run each query and the time until results are returned. [Log Analytics workspace insights](../logs/log-analytics-workspace-insights-overview.md#query-audit-tab) uses this data to list potentially inefficient queries in your workspace. Consider rewriting these queries to improve their performance. Refer to [Optimize log queries in Azure Monitor](../logs/query-optimization.md) for guidance on optimizing your log queries. |
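As a sketch of how that audit data can be queried directly (assuming the documented `LAQueryLogs` columns such as `StatsCPUTimeMs` and `ResponseDurationMs`):

```kusto
// Ten most CPU-expensive queries run in the last week, and who ran them.
LAQueryLogs
| where TimeGenerated > ago(7d)
| project TimeGenerated, AADEmail, StatsCPUTimeMs, ResponseDurationMs, QueryText
| top 10 by StatsCPUTimeMs desc
```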
articles/azure-monitor/includes/waf-logs-reliability.md
Some availability features require a dedicated cluster.

| Recommendation | Benefit |
|:---|:---|
| If you collect enough data, create a dedicated cluster in an availability zone. | Workspaces linked to a [dedicated cluster](../logs/logs-dedicated-clusters.md) located in an [availability zone](../logs/availability-zones.md#data-resilience---supported-regions) remain available if a datacenter fails. |
| If you require the workspace to be available in the case of a region failure, or you don't collect enough data for a dedicated cluster, configure data collection to send critical data to multiple workspaces in different regions. | Configure your data sources to send to multiple workspaces in different regions. For example, configure DCRs for multiple workspaces for Azure Monitor agent running on virtual machines, and multiple diagnostic settings to collect resource logs from Azure resources. This configuration results in duplicate ingestion and retention charges, so only use it for critical data.<br><br>Even though the data will be available in the alternate workspace in case of failure, resources that rely on the data such as alerts and workbooks would not know to use this workspace. Consider storing ARM templates for critical resources with configuration for the alternate workspace in Azure DevOps or as disabled [policies](../../governance/policy/overview.md) that can quickly be enabled in a failover scenario. |
| If you require data to be protected in the case of datacenter or region failure, configure data export from the workspace to save data in an alternate location. | The [data export feature of Azure Monitor](../logs/logs-data-export.md) allows you to continuously export data sent to specific tables to Azure storage where it can be retained for extended periods. Use [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including GRS and GZRS, to replicate this data to other regions. If you require export of [tables that aren't supported by data export](../logs/logs-data-export.md?tabs=portal#limitations), you can use other methods of exporting data, including Logic Apps, to protect your data. This is primarily a solution to meet compliance for data retention since the data can be difficult to analyze and restore back to the workspace. |
| Create a health status alert rule for your Log Analytics workspace. | A [health status alert](../logs/log-analytics-workspace-health.md#view-log-analytics-workspace-health-and-set-up-health-status-alerts) will proactively notify you if a workspace becomes unavailable because of a datacenter or regional failure. |
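In addition to the health status alert, operational issues that the workspace reports about itself can be reviewed with a query. A sketch using the `_LogOperation` function, assuming its documented `Level`, `Category`, and `Detail` columns:

```kusto
// Recent operational warnings and errors reported by the workspace,
// such as ingestion delays or throttling.
_LogOperation
| where TimeGenerated > ago(7d)
| where Level in ("Warning", "Error")
| project TimeGenerated, Category, Operation, Level, Detail
```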
articles/azure-monitor/includes/waf-logs-security.md
> - Determine whether to combine your operational data and your security data in the same Log Analytics workspace.
> - Configure access for different types of data in the workspace required for different roles in your organization.
> - Consider using Azure private link to remove access to your workspace from public networks.
> - Use customer managed keys if you require your own encryption key to protect data and saved queries in your workspaces.
> - Export audit data for long term retention or immutability.
> - Configure log query auditing to track which users are running queries.
| Recommendation | Benefit |
|:---|:---|
| Determine whether to combine your operational data and your security data in the same Log Analytics workspace. | Your decision whether to combine this data depends on your particular security requirements. Combining them in a single workspace gives you better visibility across all your data, although your security team may require a dedicated workspace. See [Design a Log Analytics workspace architecture](../logs/workspace-design.md) for details on making this decision for your environment. |
| Configure access for different types of data in the workspace required for different roles in your organization. | Set the [access control mode](../logs/manage-access.md#access-control-mode) for the workspace to *Use resource or workspace permissions* to allow resource owners to use [resource-context](../logs/manage-access.md#access-mode) to access their data without being granted explicit access to the workspace. This simplifies your workspace configuration and helps to ensure users will not be able to access data they shouldn't.<br><br>Assign the appropriate [built-in role](../logs/manage-access.md#azure-rbac) to grant workspace permissions to administrators at either the subscription, resource group, or workspace level depending on their scope of responsibilities.<br><br>Leverage [table level RBAC](../logs/manage-access.md#set-table-level-read-access) for users who require access to a set of tables across multiple resources. Users with table permissions have access to all the data in the table regardless of their resource permissions.<br><br>See [Manage access to Log Analytics workspaces](../logs/manage-access.md) for details on the different options for granting access to data in the workspace. |
| Consider using Azure private link to remove access to your workspace from public networks. | Connections to public endpoints are secured with end-to-end encryption. If you require a private endpoint, you can use [Azure private link](../logs/private-link-security.md) to allow resources to connect to your Log Analytics workspace through authorized private networks. Private link can also be used to force workspace data ingestion through ExpressRoute or a VPN. See [Design your Azure Private Link setup](../logs/private-link-design.md) to determine the best network and DNS topology for your environment. |
| Use customer managed keys if you require your own encryption key to protect data and saved queries in your workspaces. | Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). If you require your own encryption key and collect enough data for a [dedicated cluster](../logs/logs-dedicated-clusters.md), use [customer-managed key](../logs/customer-managed-keys.md) for greater flexibility and key lifecycle control. If you use Microsoft Sentinel, then make sure that you're familiar with the considerations at [Set up Microsoft Sentinel customer-managed key](../../sentinel/customer-managed-keys.md#considerations). |
| Export audit data for long term retention or immutability. | You may have collected audit data in your workspace that's subject to regulations requiring its long term retention. Data in a Log Analytics workspace can’t be altered, but it can be purged. Use [data export](../logs/logs-data-export.md) to send data to an Azure storage account with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to protect against data tampering. Not every type of log has the same relevance for compliance, auditing, or security, so determine the specific data types that should be exported. |
| Configure log query auditing to track which users are running queries. |[Log query auditing](../logs/query-audit.md) records the details for each query that's run in a workspace. Treat this audit data as security data and secure the [LAQueryLogs](/azure/azure-monitor/reference/tables/laquerylogs) table appropriately. Configure the audit logs for each workspace to be sent to the local workspace, or consolidate in a dedicated security workspace if you separate your operational and security data. Use [Log Analytics workspace insights](../logs/log-analytics-workspace-insights-overview.md) to periodically review this data and consider creating log query alert rules to proactively notify you if unauthorized users are attempting to run queries. |
| Determine a strategy to filter or obfuscate sensitive data in your workspace. | You may be collecting data that includes [sensitive information](../logs/personal-data-mgmt.md). Filter records that shouldn't be collected using the configuration for the particular data source. Use a [transformation](../essentials/data-collection-transformations.md) if only particular columns in the data should be removed or obfuscated.<br><br>If you have standards that require the original data to be unmodified, then you can use the ['h' literal](/azure/data-explorer/kusto/query/scalar-data-types/string#obfuscated-string-literals) in KQL queries to obfuscate query results displayed in workbooks. |
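The `h` literal mentioned in the last row can be sketched as follows; the table and column names here are hypothetical placeholders, not part of the original article:

```kusto
// The h prefix obfuscates the literal in the query text recorded in
// LAQueryLogs, so the sensitive value isn't exposed to audit readers.
// MyApp_CL and ApiKey_s are hypothetical names used for illustration.
MyApp_CL
| where ApiKey_s == h"contoso-placeholder-key"
| summarize Requests = count()
```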