articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
5 additions & 2 deletions
@@ -34,7 +34,6 @@ To send data to Log Analytics, create the data collection rule in the *same regi
1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**:
- **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
- **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.
[Screenshot of the data collection rule basics.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
@@ -107,9 +106,12 @@ This capability is enabled as part of the Azure CLI monitor-control-service exte
For sample templates, see [Azure Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md).
---
## Filter events using XPath queries
-You're charged for any data you collect in a Log Analytics workspace, so collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+Since you're charged for any data you collect in a Log Analytics workspace, you should limit data collection from your agent to only the event data that you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
To specify more filters, use custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you might want to return only events from the Application event log with an event ID of 1035. The `XPathQuery` for these events would be `*[System[EventID=1035]]`. Because you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`.
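The `EventID=1035` predicate can be sanity-checked offline before you put it in a DCR. The following Python sketch applies the same condition to a hypothetical XML export of event records (an illustration only, not the agent's actual XPath engine; the sample data is invented):

```python
import xml.etree.ElementTree as ET

# Hypothetical export of Windows event records (heavily simplified schema).
sample = """<Events>
  <Event><System><EventID>1035</EventID><Level>4</Level></System></Event>
  <Event><System><EventID>7036</EventID><Level>4</Level></System></Event>
  <Event><System><EventID>1035</EventID><Level>2</Level></System></Event>
</Events>"""

root = ET.fromstring(sample)

# Rough equivalent of the query part of Application!*[System[EventID=1035]]:
# keep only events whose System/EventID element equals 1035.
matches = [ev for ev in root.findall("Event")
           if ev.findtext("System/EventID") == "1035"]

print(len(matches))  # 2 of the 3 sample events match
```

Prototyping the predicate against a few exported records like this is a cheap way to confirm the filter keeps exactly the events you expect before data starts accruing ingestion charges.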
@@ -145,6 +147,7 @@ Examples of using a custom XPath to filter events:
| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
| Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
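The level-plus-exclusion pattern from the table can be prototyped the same way. This Python sketch (sample records invented for illustration) mirrors the logic of `*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]`:

```python
import xml.etree.ElementTree as ET

# Invented sample records; Level 1-3 correspond to Critical/Error/Warning.
sample = """<Events>
  <Event><System><EventID>6</EventID><Level>2</Level></System></Event>
  <Event><System><EventID>41</EventID><Level>1</Level></System></Event>
  <Event><System><EventID>7000</EventID><Level>4</Level></System></Event>
  <Event><System><EventID>219</EventID><Level>3</Level></System></Event>
</Events>"""

root = ET.fromstring(sample)

def keep(ev):
    level = int(ev.findtext("System/Level"))
    event_id = int(ev.findtext("System/EventID"))
    # (Level=1 or Level=2 or Level=3) and (EventID != 6)
    return level in (1, 2, 3) and event_id != 6

kept = [ev.findtext("System/EventID") for ev in root.findall("Event") if keep(ev)]
print(kept)  # ['41', '219']
```

Event ID 6 is dropped despite being Level 2, and the Level 4 (Information-level in this invented sample) event is dropped by the level condition, matching the table's intent.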
## Next steps
- [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
articles/azure-monitor/agents/data-collection-rule-sample-agent.md
1 addition & 1 deletion
@@ -24,7 +24,7 @@ The sample [data collection rule](../essentials/data-collection-rule-overview.md
- Sends all data to a Log Analytics workspace named centralWorkspace.
> [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries)
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
-The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected, it is also dependent on the plan selected, and how long you chose to store data generated from your clusters.
+The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected. It's also dependent on the plan selected and how long you chose to store data generated from your clusters.
> [!NOTE]
>All sizes and pricing are for sample estimation only. See the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region.
@@ -29,62 +30,7 @@ The following types of data collected from a Kubernetes cluster with Container i
- Active scraping of Prometheus metrics
- [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.

-## What's collected from Kubernetes clusters?
-
-Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All the metrics listed here are collected every minute.
-
-### Node metrics collected
-
-The 24 metrics per node that are collected:
-
-- cpuUsageNanoCores
-- cpuCapacityNanoCores
-- cpuAllocatableNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryCapacityBytes
-- memoryAllocatableBytes
-- restartTimeEpoch
-- used (disk)
-- free (disk)
-- used_percent (disk)
-- io_time (diskio)
-- writes (diskio)
-- reads (diskio)
-- write_bytes (diskio)
-- write_time (diskio)
-- iops_in_progress (diskio)
-- read_bytes (diskio)
-- read_time (diskio)
-- err_in (net)
-- err_out (net)
-- bytes_recv (net)
-- bytes_sent (net)
-- Kubelet_docker_operations (kubelet)
-
-### Container metrics
-
-The eight metrics per container that are collected:
-
-- cpuUsageNanoCores
-- cpuRequestNanoCores
-- cpuLimitNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryRequestBytes
-- memoryLimitBytes
-- restartTimeEpoch
-
-### Cluster inventory
-
-The cluster inventory data that's collected by default:
-
-- KubePodInventory: 1 per pod per minute
-- KubeNodeInventory: 1 per node per minute
-- KubeServices: 1 per service per minute
-- ContainerInventory: 1 per container per minute
-
-## Estimate costs to monitor your AKS cluster
+## Estimating costs to monitor your AKS cluster
The following estimation is based on an AKS cluster with the following sizing example. The estimate applies only for metrics and inventory data collected. For container logs like stdout, stderr, and environmental variables, the estimate varies based on the log sizes generated by the workload. They're excluded from our estimation.
@@ -187,10 +133,29 @@ If you use [Prometheus metric scraping](container-insights-prometheus.md), make
### Configure Basic Logs
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
## Data collected from Kubernetes clusters

### Metric data

Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All metrics in the following table are collected every minute.

The following list is the cluster inventory data collected by default:

- KubePodInventory – 1 per pod per minute
- KubeNodeInventory – 1 per node per minute
- KubeServices – 1 per service per minute
- ContainerInventory – 1 per container per minute
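These per-object-per-minute rates make record volume straightforward to estimate. A rough Python sketch for a hypothetical cluster (the node, pod, service, and container counts are invented for illustration; actual ingested gigabytes also depend on per-record size):

```python
# Per-minute inventory record rates from the list above:
# one record per object of the given kind, per minute.
RATES = {
    "KubePodInventory": "pod",
    "KubeNodeInventory": "node",
    "KubeServices": "service",
    "ContainerInventory": "container",
}

# Hypothetical cluster sizing (illustrative only).
cluster = {"node": 3, "pod": 50, "service": 10, "container": 75}

records_per_hour = {
    table: cluster[unit] * 60  # 60 minutes per hour
    for table, unit in RATES.items()
}

print(records_per_hour["KubePodInventory"])  # 3000 records/hour for 50 pods
print(sum(records_per_hour.values()))        # 8280 inventory records/hour total
```

Scaling the object counts to your own cluster gives a first-order feel for how inventory volume grows with pod and container count, independent of container log volume.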
## Next steps
To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).
articles/azure-monitor/essentials/data-collection-transformations.md
7 additions & 24 deletions
@@ -10,33 +10,16 @@ ms.reviwer: nikeist
# Data collection transformations in Azure Monitor (preview)
Transformations in Azure Monitor allow you to filter or modify incoming data before it's sent to a Log Analytics workspace. This article provides a basic description of transformations and how they're implemented, with links to other content for creating a transformation.
-## When to use transformations
-
-Transformations are useful for a variety of scenarios, including those described below.
-
-### Reduce data costs
-
-Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.
-
-**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.
-
-**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.
-
-**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original.
-
-### Remove sensitive data
-
-You may have a data source that sends information you don't want stored for privacy or compliancy reasons.
-
-**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.
-
-**Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number.
-
-### Enrich data with additional or calculated information
-
-Use a transformation to add information to data that provides business context or simplifies querying the data later.
-
-**Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.
-
-**Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns.
+## Why to use transformations
+
+The following table describes the different goals that transformations can be used to achieve.
+
+| Category | Details |
+|:---|:---|
+| Remove sensitive data | You may have a data source that sends information you don't want stored for privacy or compliancy reasons.<br><br>**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number. |
+| Enrich data with additional or calculated information | Use a transformation to add information to data that provides business context or simplifies querying the data later.<br><br>**Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.<br><br>**Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns. |
+| Reduce data costs | Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.<br><br>**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original. |
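In spirit, the filtering and obfuscation goals described above behave like the following Python sketch. Real transformations are KQL queries defined in a data collection rule; the record shape and field names here are invented for illustration:

```python
import re

# Invented incoming records before transformation.
records = [
    {"Level": "Error", "CallerIp": "10.0.0.12", "Debug": "x" * 10},
    {"Level": "Verbose", "CallerIp": "203.0.113.7", "Debug": "y" * 10},
]

def transform(rec):
    if rec["Level"] == "Verbose":
        return None  # remove entire rows you don't require
    rec = {k: v for k, v in rec.items() if k != "Debug"}   # remove a column
    rec["CallerIp"] = re.sub(r"\d", "*", rec["CallerIp"])  # obfuscate digits
    return rec

out = [r for r in map(transform, records) if r is not None]
print(out)  # [{'Level': 'Error', 'CallerIp': '**.*.*.**'}]
```

Each record either survives in a smaller, masked form or is dropped entirely before ingestion, which is exactly where the cost and privacy benefits of transformations come from.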
## Supported tables
Transformations may be applied to the following tables in a Log Analytics workspace.
articles/azure-monitor/essentials/diagnostic-settings.md
13 additions & 0 deletions
@@ -123,6 +123,19 @@ The following table provides unique requirements for each destination including
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.|
| Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
## Controlling costs
There is a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services.
You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. Or you may want to remove unneeded columns from the data. In these cases, use [transformations](data-collection-transformations.md) on the workspace to filter logs that you don't require.
You can also use transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
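As a rough illustration of those last two paragraphs, this Python sketch keeps only the rows needed for alerting and drops a hypothetical large column. The `Properties` column and other field names are invented; a real workspace transformation would express this logic in KQL:

```python
# Invented resource-log records; "Properties" stands in for a large column
# that isn't needed downstream.
logs = [
    {"Level": "Error", "OperationName": "Write", "Properties": "p" * 500},
    {"Level": "Informational", "OperationName": "Read", "Properties": "q" * 500},
]

KEEP_LEVELS = {"Error"}        # rows wanted for alerting
DROP_COLUMNS = {"Properties"}  # large column with no useful information

trimmed = [
    {k: v for k, v in rec.items() if k not in DROP_COLUMNS}
    for rec in logs
    if rec["Level"] in KEEP_LEVELS
]

before = sum(len(str(rec)) for rec in logs)
after = sum(len(str(rec)) for rec in trimmed)
print(len(trimmed), after < before)  # 1 True
```

The row filter and column drop compound: storage falls both because fewer records are kept and because each kept record is smaller.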
articles/azure-monitor/faq.yml
1 addition & 1 deletion
@@ -498,7 +498,7 @@ sections:
answer: |
Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each month free of charge. The free allowance is large enough to cover development, and publishing an app for a few users. You can set a cap to prevent more than a specified amount of data from being processed.
-Larger volumes of telemetry are charged by the Gb. We provide some tips on how to [limit your charges](best-practices-cost.md#application-insights).
+Larger volumes of telemetry are charged by the Gb. We provide some tips on how to [limit your charges](best-practices-cost.md#data-collection).
The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It's suitable if you want to use Continuous Export on a large scale.