
Commit c39d679

Merge pull request #213785 from bwren/cost-management

Cost optimization rewrite with WAF

2 parents: b18bb9e + e4e2762
19 files changed: +251 −296 lines

articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md

Lines changed: 5 additions & 2 deletions
@@ -34,7 +34,6 @@ To send data to Log Analytics, create the data collection rule in the *same regi
 1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**:

    - **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
-
    - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.

    [ ![Screenshot that shows the Basics tab of the Data Collection Rule screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)

@@ -107,9 +106,12 @@ This capability is enabled as part of the Azure CLI monitor-control-service exte
 For sample templates, see [Azure Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md).

 ---
+
 ## Filter events using XPath queries

-You're charged for any data you collect in a Log Analytics workspace, so collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+Since you're charged for any data you collect in a Log Analytics workspace, you should limit data collection from your agent to only the event data that you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+
+[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]

 To specify more filters, use custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you might want to return only events from the Application event log with an event ID of 1035. The `XPathQuery` for these events would be `*[System[EventID=1035]]`. Because you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`

@@ -145,6 +147,7 @@ Examples of using a custom XPath to filter events:
 | Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
 | Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
+
 ## Next steps

 - [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
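As an aside for readers following along, the `*[System[EventID=1035]]` XPath discussed in this file can be exercised in miniature. The sketch below is a hypothetical Python illustration (not part of the commit) that applies the same EventID test to a simplified, namespace-free Windows event XML shape; the sample events are made up.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample payload shaped like simplified Windows event XML
# (real events carry namespaces and many more System fields).
events_xml = """
<Events>
  <Event><System><EventID>1035</EventID><Level>4</Level></System></Event>
  <Event><System><EventID>7036</EventID><Level>4</Level></System></Event>
  <Event><System><EventID>1035</EventID><Level>2</Level></System></Event>
</Events>
"""

def filter_events(xml_text: str, event_id: int):
    """Keep only events whose System/EventID matches, mimicking the
    effect of the DCR XPath *[System[EventID=1035]]."""
    root = ET.fromstring(xml_text)
    return [e for e in root.findall("Event")
            if e.findtext("System/EventID") == str(event_id)]

matched = filter_events(events_xml, 1035)
print(len(matched))  # 2
```

The agent evaluates the real XPath itself; the `LogName!` prefix (as in `Application!…`) only selects which event log the query runs against.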

articles/azure-monitor/agents/data-collection-rule-sample-agent.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ The sample [data collection rule](../essentials/data-collection-rule-overview.md
 - Sends all data to a Log Analytics workspace named centralWorkspace.

 > [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries)
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).

 ## Sample DCR

articles/azure-monitor/best-practices-cost.md

Lines changed: 67 additions & 147 deletions
Large diffs are not rendered by default.

articles/azure-monitor/containers/container-insights-cost.md

Lines changed: 24 additions & 59 deletions
@@ -14,9 +14,10 @@ This article provides pricing guidance for Container insights to help you unders
 * Measure costs after Container insights has been enabled for one or more containers.
 * Control the collection of data and make cost reductions.

-Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster.
+[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]

-The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected. It's also dependent on the plan selected and how long you chose to store data generated from your clusters.
+The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected, it is also dependent on the plan selected, and how long you chose to store data generated from your clusters.

 >[!NOTE]
 >All sizes and pricing are for sample estimation only. See the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region.
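The ingestion-based model this hunk describes can be sketched as back-of-envelope arithmetic. Every number below (pay-as-you-go rate per GB, free retention window, retention rate) is an illustrative assumption only, not a current price; use the linked pricing page for real figures.

```python
def monthly_log_analytics_cost(gb_per_day: float,
                               price_per_gb: float = 2.30,
                               retention_days: int = 90,
                               free_retention_days: int = 31,
                               retention_price_per_gb_month: float = 0.10) -> float:
    """Rough monthly estimate: ingestion (GB/day * 30 days * $/GB) plus
    retention beyond the free window. All rates are assumed placeholders."""
    ingestion = gb_per_day * 30 * price_per_gb
    billable_days = max(0, retention_days - free_retention_days)
    retention = gb_per_day * billable_days * retention_price_per_gb_month
    return round(ingestion + retention, 2)

# A cluster ingesting 5 GB/day with 90-day retention at the assumed rates:
print(monthly_log_analytics_cost(5))  # 374.5
```

The point of the sketch is that ingestion volume dominates, which is why the rest of this file focuses on reducing what's collected.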
@@ -29,62 +30,7 @@ The following types of data collected from a Kubernetes cluster with Container i
 - Active scraping of Prometheus metrics
 - [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.

-## What's collected from Kubernetes clusters?
-
-Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All the metrics listed here are collected every minute.
-
-### Node metrics collected
-
-The 24 metrics per node that are collected:
-
-- cpuUsageNanoCores
-- cpuCapacityNanoCores
-- cpuAllocatableNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryCapacityBytes
-- memoryAllocatableBytes
-- restartTimeEpoch
-- used (disk)
-- free (disk)
-- used_percent (disk)
-- io_time (diskio)
-- writes (diskio)
-- reads (diskio)
-- write_bytes (diskio)
-- write_time (diskio)
-- iops_in_progress (diskio)
-- read_bytes (diskio)
-- read_time (diskio)
-- err_in (net)
-- err_out (net)
-- bytes_recv (net)
-- bytes_sent (net)
-- Kubelet_docker_operations (kubelet)
-
-### Container metrics
-
-The eight metrics per container that are collected:
-
-- cpuUsageNanoCores
-- cpuRequestNanoCores
-- cpuLimitNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryRequestBytes
-- memoryLimitBytes
-- restartTimeEpoch
-
-### Cluster inventory
-
-The cluster inventory data that's collected by default:
-
-- KubePodInventory: 1 per pod per minute
-- KubeNodeInventory: 1 per node per minute
-- KubeServices: 1 per service per minute
-- ContainerInventory: 1 per container per minute
-
-## Estimate costs to monitor your AKS cluster
+## Estimating costs to monitor your AKS cluster

 The following estimation is based on an AKS cluster with the following sizing example. The estimate applies only for metrics and inventory data collected. For container logs like stdout, stderr, and environmental variables, the estimate varies based on the log sizes generated by the workload. They're excluded from our estimation.
@@ -187,10 +133,29 @@ If you use [Prometheus metric scraping](container-insights-prometheus.md), make

 ### Configure Basic Logs

-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.

 You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).

+## Data collected from Kubernetes clusters
+
+### Metric data
+Container insights includes a predefined set of metrics and inventory items collected that are written as log data in your Log Analytics workspace. All metrics in the following table are collected every one minute.
+
+| Type | Metrics |
+|:---|:---|
+| Node metrics | `cpuUsageNanoCores`<br>`cpuCapacityNanoCores`<br>`cpuAllocatableNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryCapacityBytes`<br>`memoryAllocatableBytes`<br>`restartTimeEpoch`<br>`used` (disk)<br>`free` (disk)<br>`used_percent` (disk)<br>`io_time` (diskio)<br>`writes` (diskio)<br>`reads` (diskio)<br>`write_bytes` (diskio)<br>`write_time` (diskio)<br>`iops_in_progress` (diskio)<br>`read_bytes` (diskio)<br>`read_time` (diskio)<br>`err_in` (net)<br>`err_out` (net)<br>`bytes_recv` (net)<br>`bytes_sent` (net)<br>`Kubelet_docker_operations` (kubelet) |
+| Container metrics | `cpuUsageNanoCores`<br>`cpuRequestNanoCores`<br>`cpuLimitNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryRequestBytes`<br>`memoryLimitBytes`<br>`restartTimeEpoch` |
+
+### Cluster inventory
+
+The following list is the cluster inventory data collected by default:
+
+- KubePodInventory – 1 per pod per minute
+- KubeNodeInventory – 1 per node per minute
+- KubeServices – 1 per service per minute
+- ContainerInventory – 1 per container per minute
+
 ## Next steps

 To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).

articles/azure-monitor/essentials/data-collection-transformations.md

Lines changed: 7 additions & 24 deletions
@@ -10,33 +10,16 @@ ms.reviwer: nikeist
 # Data collection transformations in Azure Monitor (preview)
 Transformations in Azure Monitor allow you to filter or modify incoming data before it's sent to a Log Analytics workspace. This article provides a basic description of transformations and how they are implemented. It provides links to other content for actually creating a transformation.

-## When to use transformations
-Transformations are useful for a variety of scenarios, including those described below.
+## Why to use transformations
+The following table describes the different goals that transformations can be used to achieve.

-### Reduce data costs
-Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.
-
-- **Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.
-
-- **Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.
-
-- **Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original.
-
-### Remove sensitive data
-You may have a data source that sends information you don't want stored for privacy or compliancy reasons.
-
-- **Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.
-
-- **Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number.
-
-### Enrich data with additional or calculated information
-Use a transformation to add information to data that provides business context or simplifies querying the data later.
+| Category | Details |
+|:---|:---|
+| Remove sensitive data | You may have a data source that sends information you don't want stored for privacy or compliancy reasons.<br><br>**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number. |
+| Enrich data with additional or calculated information | Use a transformation to add information to data that provides business context or simplifies querying the data later.<br><br>**Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.<br><br>**Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns. |
+| Reduce data costs | Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.<br><br>**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original. |

-- **Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.
-
-- **Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns.

 ## Supported tables
 Transformations may be applied to the following tables in a Log Analytics workspace.
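The goals in the rewritten table (remove rows, drop columns, obfuscate sensitive values) can be illustrated with a toy record pipeline. Real transformations are KQL queries defined in a data collection rule; the Python below, with made-up column names, is only a hypothetical sketch of the logic.

```python
import re

def transform(record: dict):
    """Toy stand-in for a workspace transformation (real ones are KQL)."""
    # Remove entire rows: drop informational events we don't need.
    if record.get("Level") == "Information":
        return None
    # Obfuscate sensitive information: mask digits in the client IP.
    if "ClientIP" in record:
        record["ClientIP"] = re.sub(r"\d", "*", record["ClientIP"])
    # Remove a column that's redundant or has minimal value.
    record.pop("RawXml", None)
    return record

rows = [
    {"Level": "Error", "ClientIP": "10.1.2.3", "RawXml": "<Event/>"},
    {"Level": "Information", "ClientIP": "10.1.2.4"},
]
out = [r for r in (transform(dict(r)) for r in rows) if r is not None]
print(out)  # [{'Level': 'Error', 'ClientIP': '**.*.*.*'}]
```

Because the transformation runs before ingestion, both the dropped row and the dropped column reduce billed data volume rather than merely hiding it at query time.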

articles/azure-monitor/essentials/diagnostic-settings.md

Lines changed: 13 additions & 0 deletions
@@ -123,6 +123,19 @@ The following table provides unique requirements for each destination including
 | Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional.<br><br>Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources. |
 | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details. |

+## Controlling costs
+
+There is a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services.
+
+You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+
+Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. Or you may want to remove unneeded columns from the data. In these cases, use [transformations](data-collection-transformations.md) on the workspace to filter logs that you don't require.
+
+You can also use transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
+
+[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]

 ## Create diagnostic settings

 You can create and edit diagnostic settings by using multiple methods.

articles/azure-monitor/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -498,7 +498,7 @@ sections:
     answer: |
       Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each month free of charge. The free allowance is large enough to cover development, and publishing an app for a few users. You can set a cap to prevent more than a specified amount of data from being processed.

-      Larger volumes of telemetry are charged by the Gb. We provide some tips on how to [limit your charges](best-practices-cost.md#application-insights).
+      Larger volumes of telemetry are charged by the Gb. We provide some tips on how to [limit your charges](best-practices-cost.md#data-collection).

       The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It's suitable if you want to use Continuous Export on a large scale.
