articles/azure-monitor/platform/manage-cost-storage.md (16 additions, 12 deletions)
@@ -11,7 +11,7 @@ ms.service: azure-monitor
 ms.workload: na
 ms.tgt_pltfrm: na
 ms.topic: conceptual
-ms.date: 05/07/2020
+ms.date: 05/09/2020
 ms.author: bwren
 ms.subservice:
 ---
@@ -33,7 +33,7 @@ The default pricing for Log Analytics is a **Pay-As-You-Go** model based on data
 - Number of VMs monitored
 - Type of data collected from each monitored VM
 
-In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation** tiers which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higherlevel Capacity Reservation tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until after the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
+In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation** tiers which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higher-level Capacity Reservation tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until after the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
 
 In all pricing tiers, the data volume is calculated from a string representation of the data as it is prepared to be stored. Several [properties common to all data types](https://docs.microsoft.com/azure/azure-monitor/platform/log-standard-properties) are not included in the calculation of the event size, including `_ResourceId`, `_ItemId`, `_IsBillable` and `_BilledSize`.
 
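Although these properties are excluded from the billed size, they are still available to queries. As a minimal sketch (using `SecurityEvent` purely as an illustration), the billed size recorded for individual events can be inspected like this:

```
SecurityEvent
// _BilledSize is the record size in bytes; _IsBillable indicates whether the record counts toward ingestion charges
| project TimeGenerated, Computer, _IsBillable, _BilledSize
| take 10
```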
@@ -45,7 +45,7 @@ Log Analytics Clusters are collections of workspaces into a single managed Azure
 
 The cluster capacity reservation level is configured programmatically with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 1000 GB/day or more in increments of 100 GB/day. This is detailed [here](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#create-cluster-resource). If your cluster needs a reservation above 2000 GB/day contact us at [[email protected]](mailto:[email protected]).
 
-Because the billing for ingested data is done at the cluster level, workspaces associated to a cluster no longer have a pricing tier. The ingested data quantities from each workspace associated to a cluster is aggregated to calculate the daily bill for the cluster. Note that per-node allocations from [Azure Security Center](https://docs.microsoft.com/azure/security-center/) are applied at the workspace level prior to this aggregation of aggregated data across all workspaces in the cluster. Data retention is still billed at the workspace level. Note that cluster billing starts when the cluster is created, regardless of whether workspaces have been associated to the cluster.
+Because the billing for ingested data is done at the cluster level, workspaces associated to a cluster no longer have a pricing tier. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Note that per-node allocations from [Azure Security Center](https://docs.microsoft.com/azure/security-center/) are applied at the workspace level prior to this aggregation of data across all workspaces in the cluster. Data retention is still billed at the workspace level. Note that cluster billing starts when the cluster is created, regardless of whether workspaces have been associated to the cluster.
 
 ## Estimating the costs to manage your environment
 
@@ -120,8 +120,10 @@ When the retention is lowered, there is a several day grace period before the ol
 
 The retention can also be [set via Azure Resource Manager](https://docs.microsoft.com/azure/azure-monitor/platform/template-workspace-configuration#configure-a-log-analytics-workspace) using the `retentionInDays` parameter. Additionally, if you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This functionality is only exposed via Azure Resource Manager.
 
-Two data types -- `Usage` and `AzureActivity` -- are retained for 90 days by default, and there is no charge for for this 90 day retention. These data types are also free from data ingestion charges.
+Two data types -- `Usage` and `AzureActivity` -- are retained for 90 days by default, and there is no charge for this 90-day retention. These data types are also free from data ingestion charges.
+
+Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents` and `AppTraces`) are also retained for 90 days by default, and there is no charge for this 90-day retention. Their retention can be adjusted using the retention by data type functionality.
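As a quick check, here is a sketch of a `Usage` query (reusing the billable-flag handling from the alert queries later in this article) that lists the data types currently being ingested into a workspace without charge:

```
Usage
| where TimeGenerated > ago(24h)
// keep only records flagged as non-billable (free) ingestion
| where iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == false
| distinct DataType
```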
 
 ### Retention by data type
@@ -132,7 +134,7 @@ It is also possible to specify different retention settings for individual data
-Note that the data type (table) is casesensitive. To get the current per data type retention settings of a particular data type (in this example SecurityEvent), use:
+Note that the data type (table) is case-sensitive. To get the current per data type retention settings of a particular data type (in this example SecurityEvent), use:
 
 ```JSON
 GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview
@@ -160,7 +162,7 @@ Valid values for `retentionInDays` are from 30 through 730.
 
 The `Usage` and `AzureActivity` data types cannot be set with custom retention. They will take on the maximum of the default workspace retention or 90 days.
 
-A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and [Daniel Bowbyes](https://blog.bowbyes.co.nz/2016/11/02/using-armclient-to-directly-access-azure-arm-rest-apis-and-list-arm-policy-details/). Here's an example using ARMClient, setting SecurityEvent data to a 730day retention:
+A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and [Daniel Bowbyes](https://blog.bowbyes.co.nz/2016/11/02/using-armclient-to-directly-access-azure-arm-rest-apis-and-list-arm-policy-details/). Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention:
 
 ```
 armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview "{properties: {retentionInDays: 730}}"
@@ -188,9 +190,9 @@ The following steps describe how to configure a limit to manage the volume of da
 
 1. From your workspace, select **Usage and estimated costs** from the left pane.
 2. On the **Usage and estimated costs** page for the selected workspace, click **Data volume management** from the top of the page.
-3. Daily cap is **OFF** by default ? click **ON** to enable it, and then set the data volume limit in GB/day.
+3. Daily cap is **OFF** by default; click **ON** to enable it, and then set the data volume limit in GB/day.
 
-
+
 
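Once a cap is enabled, one way to keep an eye on how much billable data the workspace has ingested so far today is a `Usage` query along these lines (a sketch only; the cap resets at a workspace-specific hour, so treating the day as starting at midnight UTC is an approximation):

```
Usage
| where TimeGenerated > startofday(now())
| where QuantityUnit == "MBytes" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true
// Quantity is reported in MB, so divide by 1000 for GB
| summarize IngestedGBToday = sum(Quantity) / 1000.
```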
 ### Alert when Daily Cap reached
 
@@ -333,7 +335,6 @@ union withsource = tt *
 
 > [!TIP]
 > Use these `union *` queries sparingly as scans across data types are [resource intensive](https://docs.microsoft.com/azure/azure-monitor/log-query/query-optimization#query-performance-pane) to execute. If you do not need results **per computer** then query on the Usage data type.
 
-
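For example, here is a sketch of a `Usage`-based breakdown of billable volume per data type over the last day, which avoids scanning every table:

```
Usage
| where TimeGenerated > ago(24h)
| where QuantityUnit == "MBytes" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true
| summarize BillableDataGB = sum(Quantity) / 1000. by DataType
| sort by BillableDataGB desc
```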
 ### Data volume by Azure resource, resource group, or subscription
 
 For data from nodes hosted in Azure, to get the **size** of ingested data __per computer__, use the `_ResourceId` [property](log-standard-properties.md#_resourceid), which provides the full path to the resource:
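A sketch of such a query, built on the standard `_IsBillable` and `_BilledSize` properties described earlier:

```
union withsource = tt *
| where _IsBillable == true
// _ResourceId carries the full Azure resource path for data sent from Azure resources
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
| sort by BillableDataBytes desc
```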
@@ -365,10 +366,13 @@ Changing `subscriptionId` to `resourceGroup` will show the billable ingested dat
 
 > Some of the fields of the Usage data type, while still in the schema, have been deprecated and their values are no longer populated.
 > These are **Computer** as well as fields related to ingestion (**TotalBatches**, **BatchesWithinSla**, **BatchesOutsideSla**, **BatchesCapped** and **AverageProcessingTimeMs**).
 
+
 ### Querying for common data types
 
 To dig deeper into the source of data for a particular data type, here are some useful example queries:
 - learn more [here](https://docs.microsoft.com/azure/azure-monitor/app/pricing#data-volume-for-workspace-based-application-insights-resources)
+**Security** solution
-`SecurityEvent | summarize AggregatedValue = count() by EventID`
+**Log Management** solution
@@ -556,7 +560,7 @@ When creating the alert for the first query -- when there is more than 100 GB of
 
 - **Define alert condition** specify your Log Analytics workspace as the resource target.
 - **Alert criteria** specify the following:
-- **Signal Name** select **Custom log search**
+- **Signal Name** > select **Custom log search**
 - **Search query** to `union withsource = $table Usage | where QuantityUnit == "MBytes" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | extend Type = $table | summarize DataGB = sum((Quantity / 1000.)) by Type | where DataGB > 100`
 - **Alert logic** is **Based on** *number of results* and **Condition** is *Greater than* a **Threshold** of *0*
 - **Time period** of *1440* minutes and **Alert frequency** to every *60* minutes since the usage data only updates once per hour.
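The same search query, reflowed onto multiple lines for readability (the logic is unchanged from the inline version above):

```
union withsource = $table Usage
| where QuantityUnit == "MBytes" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true
| extend Type = $table
| summarize DataGB = sum((Quantity / 1000.)) by Type
| where DataGB > 100
```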
@@ -570,7 +574,7 @@ When creating the alert for the second query -- when it is predicted that there
 
 - **Define alert condition** specify your Log Analytics workspace as the resource target.
 - **Alert criteria** specify the following:
-- **Signal Name** select **Custom log search**
+- **Signal Name** > select **Custom log search**
 - **Search query** to `union withsource = $table Usage | where QuantityUnit == "MBytes" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | extend Type = $table | summarize EstimatedGB = sum(((Quantity * 8) / 1000.)) by Type | where EstimatedGB > 100`
 - **Alert logic** is **Based on** *number of results* and **Condition** is *Greater than* a **Threshold** of *0*
 - **Time period** of *180* minutes and **Alert frequency** to every *60* minutes since the usage data only updates once per hour.
@@ -595,7 +599,7 @@ If you are on the legacy Free pricing tier and have sent more than 500 MB of dat
 Operation | where OperationCategory == 'Data Collection Status'
 ```
 
-When data collection stops, the OperationStatus is **Warning**. When data collection starts, the OperationStatus is **Succeeded**. The following table describes reasons that data collection stops and a suggested action to resume data collection:
+When data collection stops, the `OperationStatus` is **Warning**. When data collection starts, the `OperationStatus` is **Succeeded**. The following table describes reasons that data collection stops and a suggested action to resume data collection:
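To focus on the most recent stoppages, here is a sketch that narrows the same `Operation` query to warning events:

```
Operation
| where OperationCategory == 'Data Collection Status'
| where OperationStatus == "Warning"
| sort by TimeGenerated desc
```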