`articles/azure-monitor/platform/manage-cost-storage.md` (57 additions, 48 deletions)
```yaml
author: bwren
manager: carmonm
editor: ''
ms.assetid:
ms.service: azure-monitor
ms.workload: na
ms.tgt_pltfrm: na
ms.topic: conceptual
```
Once the alert is defined and the limit is reached, the alert is triggered and performs the configured action.
Higher usage is caused by one or both of:

- More nodes than expected sending data to Log Analytics workspace
- More data than expected being sent to Log Analytics workspace (perhaps due to starting to use a new solution or a configuration change to an existing solution)

## Understanding nodes sending data

To understand the number of nodes reporting heartbeats from the agent each day in the last month, use the following query:
```kusto
Heartbeat
| where TimeGenerated > startofday(ago(31d))
| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
| render timechart
```

The count of nodes sending billed data, and the volume of data sent by each node, can be determined with a query across all billed data types whose aggregation step is:

```kusto
| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
```
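
A fuller version of this query might look like the following sketch. It assumes the `union withsource = tt *` pattern used elsewhere in this article, derives `computerName` from the leftmost field of the fully qualified domain name, and uses a 24-hour window chosen purely for illustration:

```kusto
union withsource = tt *
| where TimeGenerated > ago(24h)                                      // illustrative time window
| where _IsBillable == true                                           // only billed data types
| extend computerName = tolower(tostring(split(Computer, '.')[0]))    // leftmost field of the FQDN
| where computerName != ""
| summarize TotalVolumeBytes = sum(_BilledSize) by computerName
| sort by TotalVolumeBytes desc
```

A `dcount(computerName)` over the same filtered set gives the node count itself.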

> [!NOTE]
> Use these `union withsource = tt *` queries sparingly as scans across data types are expensive to execute. This query replaces the old way of querying per-computer information with the Usage data type.

## Understanding ingested data volume

On the **Usage and Estimated Costs** page, the *Data ingestion per solution* chart shows the total volume of data sent and how much is being sent by each solution. This allows you to determine trends such as whether the overall data usage (or usage by a particular solution) is growing, remaining steady, or decreasing.

### Data volume by solution
The billable data volume by solution can be viewed with a query over the `Usage` data type, sketched below. Note that the clause `where IsBillable == true` filters out data types from certain solutions for which there is no ingestion charge.
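
A minimal sketch, assuming the same `Usage`-based pattern as the by-type query in the next section (the exact chart query may differ, for example in how it bins or renders the results):

```kusto
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by bin(TimeGenerated, 1d), Solution  // Quantity is reported in MB
| render columnchart
```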

### Data volume by type

You can drill in further to see data trends by data type:
```kusto
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by Solution, DataType  // Quantity is reported in MB
| sort by Solution asc, DataType asc
```

### Data volume by computer

The `Usage` data type does not include information at the computer level. To see the **size** of ingested data per computer, use the `_BilledSize` [property](log-standard-properties.md#_billedsize), which provides the size in bytes, as in the sketch below.
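
A minimal sketch of such a per-computer size query, assuming the `union withsource = tt *` pattern and `computerName` derivation used elsewhere in this article, with an illustrative 24-hour window:

```kusto
union withsource = tt *
| where TimeGenerated > ago(24h)                                      // illustrative time window
| where _IsBillable == true
| extend computerName = tolower(tostring(split(Computer, '.')[0]))    // leftmost field of the FQDN
| where computerName != ""
| summarize BillableDataBytes = sum(_BilledSize) by computerName
| sort by BillableDataBytes desc
```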

To see the **count** of ingested events per computer, change the aggregation step to:

```kusto
| summarize eventCount = count() by computerName | sort by eventCount nulls last
```

### Data volume by Azure resource, resource group, or subscription

For data from nodes hosted in Azure, you can get the **size** of ingested data __per computer__ by using the `_ResourceId` [property](log-standard-properties.md#_resourceid), which provides the full path to the resource:
```kusto
union withsource = tt *
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId | sort by BillableDataBytes nulls last
```

For data from nodes hosted in Azure, you can get the **size** of ingested data __per Azure subscription__ by parsing the `_ResourceId` property:
```kusto
union withsource = tt *
| where _IsBillable == true
| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/"
| summarize BillableDataBytes = sum(_BilledSize) by subscriptionId | sort by BillableDataBytes nulls last
```

Changing `subscriptionId` to `resourceGroup` will show the billable ingested data volume by Azure resource group.
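
Note that the `parse` pattern then also needs to capture the resource group segment. A minimal sketch of that variant (illustrative only):

```kusto
union withsource = tt *
| where _IsBillable == true
| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/" resourceGroup "/providers/" *
| summarize BillableDataBytes = sum(_BilledSize) by resourceGroup | sort by BillableDataBytes nulls last
```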

> [!NOTE]
> Some of the fields of the Usage data type, while still in the schema, have been deprecated and their values are no longer populated.
> These are **Computer** as well as fields related to ingestion (**TotalBatches**, **BatchesWithinSla**, **BatchesOutsideSla**, **BatchesCapped**, and **AverageProcessingTimeMs**).

Some suggestions for reducing the volume of logs collected include:

| Source of high data volume | How to reduce data volume |
| -------------------------- | ------------------------- |
| AzureDiagnostics | Change resource log collection to: <br> - Reduce the number of resources sending logs to Log Analytics <br> - Collect only required logs |
| Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |

### Getting nodes as billed in the Per Node pricing tier

To get a list of computers which will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes which are sending **billed data types** (some data types are free). To do this, use the `_IsBillable` [property](log-standard-properties.md#_isbillable) and use the leftmost field of the fully qualified domain name. This returns the count of computers with billed data per hour (which is the granularity at which nodes are counted and billed); the aggregation step of such a query is:

```kusto
| summarize billableNodes=dcount(computerName) by bin(TimeGenerated, 1h) | sort by TimeGenerated asc
```
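
A fuller version might look like the following sketch, again assuming the `union withsource = tt *` pattern and `computerName` derivation used in the earlier queries, over an illustrative 7-day window:

```kusto
union withsource = tt *
| where TimeGenerated > ago(7d)                                       // illustrative time window
| where _IsBillable == true
| extend computerName = tolower(tostring(split(Computer, '.')[0]))    // leftmost field of the FQDN
| where computerName != ""
| summarize billableNodes = dcount(computerName) by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```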

### Getting Security and Automation node counts

If you are on the "Per node (OMS)" pricing tier, then you are charged based on the number of nodes and solutions you use. The number of Insights and Analytics nodes for which you are being billed is shown in the table on the **Usage and Estimated Cost** page.
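
As an illustration only (not the article's own query), a minimal sketch of counting nodes that reported security data in the last day, assuming the **SecurityEvent** table is being populated in the workspace:

```kusto
// Illustrative sketch: distinct computers that sent SecurityEvent records in the last day.
// Actual Security node billing may be based on additional data types.
SecurityEvent
| where TimeGenerated > ago(1d)
| summarize securityNodes = dcount(Computer)
```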

There are some additional Log Analytics limits, some of which depend on the Log Analytics pricing tier.

- To configure an effective event collection policy, review [Azure Security Center filtering policy](../../security-center/security-center-enable-data-collection.md).