Commit 0a27e1d

additional doc updates from PM team

1 parent: deada6e

articles/azure-monitor/platform/manage-cost-storage.md

Lines changed: 16 additions & 12 deletions
@@ -11,7 +11,7 @@ ms.service: azure-monitor
 ms.workload: na
 ms.tgt_pltfrm: na
 ms.topic: conceptual
-ms.date: 05/07/2020
+ms.date: 05/09/2020
 ms.author: bwren
 ms.subservice:
 ---
@@ -33,7 +33,7 @@ The default pricing for Log Analytics is a **Pay-As-You-Go** model based on data
 - Number of VMs monitored
 - Type of data collected from each monitored VM
 
-In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation** tiers which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higher level Capacity Reservation tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until after the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
+In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation** tiers, which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higher-level Capacity Reservation tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until after the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
 
 In all pricing tiers, the data volume is calculated from a string representation of the data as it is prepared to be stored. Several [properties common to all data types](https://docs.microsoft.com/azure/azure-monitor/platform/log-standard-properties) are not included in the calculation of the event size, including `_ResourceId`, `_ItemId`, `_IsBillable` and `_BilledSize`.
 
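The Capacity Reservation billing rule described above (the reservation is billed in full each day, with overage billed at the Pay-As-You-Go rate) can be sketched as follows. The rates below are illustrative placeholders, not actual Azure prices; see the linked pricing page for real figures.

```python
# Sketch of the Capacity Reservation billing rule described above.
# The rates are ILLUSTRATIVE placeholders, not actual Azure prices.
PAYG_RATE_PER_GB = 2.30       # hypothetical Pay-As-You-Go rate, $/GB
RESERVATION_GB_PER_DAY = 100  # reservation level (reservations start at 100 GB/day)
RESERVATION_DISCOUNT = 0.25   # "as much as 25%" versus Pay-As-You-Go

def daily_bill(ingested_gb: float) -> float:
    """Daily cost: the full reservation is billed at the discounted rate;
    any usage above the reservation level is billed at the PAYG rate."""
    reservation_cost = RESERVATION_GB_PER_DAY * PAYG_RATE_PER_GB * (1 - RESERVATION_DISCOUNT)
    overage_gb = max(0.0, ingested_gb - RESERVATION_GB_PER_DAY)
    return reservation_cost + overage_gb * PAYG_RATE_PER_GB
```

Note that usage below the reservation level still pays for the full reservation, which is why the reservation only saves money when daily ingestion is reliably near or above the reserved level.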
@@ -45,7 +45,7 @@ Log Analytics Clusters are collections of workspaces into a single managed Azure
 
 The cluster capacity reservation level is configured programmatically with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 1000 GB/day or more, in increments of 100 GB/day. This is detailed [here](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#create-cluster-resource). If your cluster needs a reservation above 2000 GB/day, contact us at [[email protected]](mailto:[email protected]).
 
-Because the billing for ingested data is done at the cluster level, workspaces associated to a cluster no longer have a pricing tier. The ingested data quantities from each workspace associated to a cluster is aggregated to calculate the daily bill for the cluster. Note that per-node allocations from [Azure Security Center](https://docs.microsoft.com/azure/security-center/) are applied at the workspace level prior to this aggregation of aggregated data across all workspaces in the cluster. Data retention is still billed at the workspace level. Note that cluster billing starts when the cluster is created, regardless of whether workspaces have been associated to the cluster.
+Because the billing for ingested data is done at the cluster level, workspaces associated to a cluster no longer have a pricing tier. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Note that per-node allocations from [Azure Security Center](https://docs.microsoft.com/azure/security-center/) are applied at the workspace level prior to this aggregation of data across all workspaces in the cluster. Data retention is still billed at the workspace level. Note that cluster billing starts when the cluster is created, regardless of whether workspaces have been associated to the cluster.
 
 ## Estimating the costs to manage your environment
 
@@ -120,8 +120,10 @@ When the retention is lowered, there is a several day grace period before the ol
 
 The retention can also be [set via Azure Resource Manager](https://docs.microsoft.com/azure/azure-monitor/platform/template-workspace-configuration#configure-a-log-analytics-workspace) using the `retentionInDays` parameter. Additionally, if you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This functionality is only exposed via Azure Resource Manager.
 
-Two data types -- `Usage` and `AzureActivity` -- are retained for 90 days by default, and there is no charge for for this 90 day retention. These data types are also free from data ingestion charges.
 
+Two data types -- `Usage` and `AzureActivity` -- are retained for 90 days by default, and there is no charge for this 90-day retention. These data types are also free from data ingestion charges.
+
+Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents` and `AppTraces`) are also retained for 90 days by default, and there is no charge for this 90-day retention. Their retention can be adjusted using the retention by data type functionality.
 
 
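As an illustration of the `retentionInDays` and `immediatePurgeDataOn30Days` parameters mentioned above, the workspace resource in an Azure Resource Manager template might carry properties along these lines; treat this fragment as a sketch and see the linked template documentation for the authoritative shape:

```JSON
{
  "properties": {
    "retentionInDays": 30,
    "features": {
      "immediatePurgeDataOn30Days": true
    }
  }
}
```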
### Retention by data type
@@ -132,7 +134,7 @@ It is also possible to specify different retention settings for individual data
 /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent
 ```
 
-Note that the data type (table) is case sensitive. To get the current per data type retention settings of a particular data type (in this example SecurityEvent), use:
+Note that the data type (table) is case-sensitive. To get the current retention settings of a particular data type (in this example, SecurityEvent), use:
 
 ```JSON
 GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview
@@ -160,7 +162,7 @@ Valid values for `retentionInDays` are from 30 through 730.
 
 The `Usage` and `AzureActivity` data types cannot be set with custom retention. They will take on the maximum of the default workspace retention or 90 days.
 
-A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and [Daniel Bowbyes](https://blog.bowbyes.co.nz/2016/11/02/using-armclient-to-directly-access-azure-arm-rest-apis-and-list-arm-policy-details/). Here's an example using ARMClient, setting SecurityEvent data to a 730 day retention:
+A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and [Daniel Bowbyes](https://blog.bowbyes.co.nz/2016/11/02/using-armclient-to-directly-access-azure-arm-rest-apis-and-list-arm-policy-details/). Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention:
 
 ```
 armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview "{properties: {retentionInDays: 730}}"
@@ -188,9 +190,9 @@ The following steps describe how to configure a limit to manage the volume of da
 
 1. From your workspace, select **Usage and estimated costs** from the left pane.
 2. On the **Usage and estimated costs** page for the selected workspace, click **Data volume management** from the top of the page.
-3. Daily cap is **OFF** by default ? click **ON** to enable it, and then set the data volume limit in GB/day.
+3. Daily cap is **OFF** by default > click **ON** to enable it, and then set the data volume limit in GB/day.
 
-![Log Analytics configure data limit](media/manage-cost-storage/set-daily-volume-cap-01.png)
+![Log Analytics configured data limit](media/manage-cost-storage/set-daily-volume-cap-01.png)
 
 ### Alert when Daily Cap reached
 

@@ -333,7 +335,6 @@ union withsource = tt *
 > [!TIP]
 > Use these `union *` queries sparingly as scans across data types are [resource intensive](https://docs.microsoft.com/azure/azure-monitor/log-query/query-optimization#query-performance-pane) to execute. If you do not need results **per computer**, then query on the Usage data type.
-
 ### Data volume by Azure resource, resource group, or subscription
 
 For data from nodes hosted in Azure, you can get the **size** of ingested data __per computer__ using the _ResourceId [property](log-standard-properties.md#_resourceid), which provides the full path to the resource:
@@ -365,10 +366,13 @@ Changing `subscriptionId` to `resourceGroup` will show the billable ingested dat
 > Some of the fields of the Usage data type, while still in the schema, have been deprecated and their values are no longer populated.
 > These are **Computer** as well as fields related to ingestion (**TotalBatches**, **BatchesWithinSla**, **BatchesOutsideSla**, **BatchesCapped**, and **AverageProcessingTimeMs**).
 
+
 ### Querying for common data types
 
 To dig deeper into the source of data for a particular data type, here are some useful example queries:
 
++ **Workspace-based Application Insights** resources
+   - learn more [here](https://docs.microsoft.com/azure/azure-monitor/app/pricing#data-volume-for-workspace-based-application-insights-resources)
 + **Security** solution
    - `SecurityEvent | summarize AggregatedValue = count() by EventID`
 + **Log Management** solution
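A query in the same style as the examples above can also break billable volume down per table, using the `_IsBillable` and `_BilledSize` standard properties excluded from billing that were described earlier (a sketch; `_BilledSize` is a per-record size in bytes, and the time range can be adjusted as needed):

```kusto
// Sketch: billable ingested volume per table over the last day.
union withsource = tt *
| where TimeGenerated > ago(1d) and _IsBillable == true
| summarize BillableGB = sum(_BilledSize) / 1.E9 by tt
| sort by BillableGB desc
```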
@@ -556,7 +560,7 @@ When creating the alert for the first query -- when there is more than 100 GB of
 
 - **Define alert condition** specify your Log Analytics workspace as the resource target.
 - **Alert criteria** specify the following:
-   - **Signal Name** select **Custom log search**
+   - **Signal Name** > select **Custom log search**
    - **Search query** to `union withsource = $table Usage | where QuantityUnit == "MBytes" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | extend Type = $table | summarize DataGB = sum((Quantity / 1000.)) by Type | where DataGB > 100`
    - **Alert logic** is **Based on** *number of results* and **Condition** is *Greater than* a **Threshold** of *0*
    - **Time period** of *1440* minutes and **Alert frequency** to every *60* minutes since the usage data only updates once per hour.
@@ -570,7 +574,7 @@ When creating the alert for the second query -- when it is predicted that there
 
 - **Define alert condition** specify your Log Analytics workspace as the resource target.
 - **Alert criteria** specify the following:
-   - **Signal Name** select **Custom log search**
+   - **Signal Name** > select **Custom log search**
    - **Search query** to `union withsource = $table Usage | where QuantityUnit == "MBytes" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | extend Type = $table | summarize EstimatedGB = sum(((Quantity * 8) / 1000.)) by Type | where EstimatedGB > 100`
    - **Alert logic** is **Based on** *number of results* and **Condition** is *Greater than* a **Threshold** of *0*
    - **Time period** of *180* minutes and **Alert frequency** to every *60* minutes since the usage data only updates once per hour.
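The `sum(((Quantity * 8) / 1000.))` term in the second query works because the 180-minute time period covers 3 hours, and 3 hours times 8 gives a 24-hour projection; the arithmetic can be sketched as:

```python
def estimated_daily_gb(mbytes_in_3h: float) -> float:
    """Extrapolate a 3-hour ingestion sample (in MB, as reported by the
    Usage data type) to a 24-hour estimate in GB: multiply by 8 to cover
    24 hours, then divide by 1000 to convert MB to GB."""
    return mbytes_in_3h * 8 / 1000.0

# 15,000 MB ingested over 3 hours projects to 120 GB/day, which would
# exceed the 100 GB threshold used in the alert query above.
```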
@@ -595,7 +599,7 @@ If you are on the legacy Free pricing tier and have sent more than 500 MB of dat
 Operation | where OperationCategory == 'Data Collection Status'
 ```
 
-When data collection stops, the OperationStatus is **Warning**. When data collection starts, the OperationStatus is **Succeeded**. The following table describes reasons that data collection stops and a suggested action to resume data collection:
+When data collection stops, the `OperationStatus` is **Warning**. When data collection starts, the `OperationStatus` is **Succeeded**. The following table describes reasons that data collection stops and a suggested action to resume data collection:
 
 |Reason collection stops| Solution|
 |-----------------------|---------|
