articles/azure-monitor/logs/data-ingestion-time.md (6 additions & 6 deletions)
@@ -35,8 +35,8 @@ Agents and management solutions use different strategies to collect data from a
| Windows events, Syslog events, and performance metrics | Collected immediately||
| Linux performance counters | Polled at 30-second intervals||
| IIS logs and text logs | Collected after their timestamp changes | For IIS logs, this schedule is influenced by the [rollover schedule configured on IIS](../agents/data-sources-iis-logs.md). |
-| Active Directory Replication solution | Assessment every five days | The agent collects these logs only when assessment is complete.|
-| Active Directory Assessment solution | Weekly assessment of your Active Directory infrastructure | The agent collects these logs only when assessment is complete.|
+| Active Directory Replication solution | Assessment every five days | The agent collects the logs only when assessment is complete.|
+| Active Directory Assessment solution | Weekly assessment of your Active Directory infrastructure | The agent collects the logs only when assessment is complete.|

### Agent upload frequency
@@ -85,7 +85,7 @@ Another process that adds latency is the process that handles custom logs. In so
### New custom data types provisioning

-When a new type of custom data is created from a [custom log](../agents/data-sources-custom-logs.md) or the [Data Collector API](../logs/data-collector-api.md), the system creates a dedicated storage container. This is a one-time overhead that occurs only on the first appearance of this data type.
+When a new type of custom data is created from a [custom log](../agents/data-sources-custom-logs.md) or the [Data Collector API](../logs/data-collector-api.md), the system creates a dedicated storage container. This one-time overhead occurs only on the first appearance of this data type.

### Surge protection
@@ -108,7 +108,7 @@ Ingestion time might vary for different resources under different circumstances.
|:---|:---|:---|
| Record created at data source |[TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, it will be set to the same time as _TimeReceived. | If at processing time the Time Generated value is older than 3 days, the row will be dropped. |
| Record received by Azure Monitor ingestion endpoint |[_TimeReceived](./log-standard-columns.md#_timereceived)| This field isn't optimized for mass processing and shouldn't be used to filter large datasets. |
-| Record stored in workspace and available for queries |[ingestion_time()](/azure/kusto/query/ingestiontimefunction)| We recommend using ingestion_time() if there's a need to filter only records that were ingested in a certain time window. In such cases, we recommend also adding a `TimeGenerated` filter with a larger range. |
+| Record stored in workspace and available for queries |[ingestion_time()](/azure/kusto/query/ingestiontimefunction)| We recommend using `ingestion_time()` if there's a need to filter only records that were ingested in a certain time window. In such cases, we recommend also adding a `TimeGenerated` filter with a larger range. |

### Ingestion latency delays

You can measure the latency of a specific record by comparing the result of the [ingestion_time()](/azure/kusto/query/ingestiontimefunction) function to the `TimeGenerated` property. This data can be used with various aggregations to discover how ingestion latency behaves. Examine some percentile of the ingestion time to get insights for large amounts of data.
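The article's full percentile query is only partially visible in the next hunk. As a rough illustration, a minimal sketch of this technique might look like the following (the 8-hour window and 30-minute bins are assumptions, not the article's values):

```Kusto
// End-to-end ingestion latency per record, summarized as percentiles over time
Heartbeat
| where TimeGenerated > ago(8h)
| extend E2EIngestionLatency = ingestion_time() - TimeGenerated
| summarize percentiles(E2EIngestionLatency, 50, 95) by bin(ingestion_time(), 30m)
| render timechart
```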
@@ -137,7 +137,7 @@ Heartbeat
| render timechart
```
-Use the following query to show computer ingestion time by the country/region they're located in, which is based on their IP address:
+Use the following query to show computer ingestion time by the country/region where they're located, which is based on their IP address:
```Kusto
Heartbeat
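// The rest of this query falls outside the hunk. A hedged sketch (assumption) of the
// continuation, using the standard Heartbeat column RemoteIPCountry:
// | extend E2EIngestionLatency = ingestion_time() - TimeGenerated
// | summarize percentiles(E2EIngestionLatency, 50, 95) by RemoteIPCountry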
@@ -160,7 +160,7 @@ AzureDiagnostics
### Resources that stop responding

In some cases, a resource could stop sending data. To understand if a resource is sending data or not, look at its most recent record, which can be identified by the standard `TimeGenerated` field.
-Use the _Heartbeat_ table to check the availability of a VM because a heartbeat is sent once a minute by the agent. Use the following query to list the active computers that haven’t reported heartbeat recently:
+Use the `Heartbeat` table to check the availability of a VM because a heartbeat is sent once a minute by the agent. Use the following query to list the active computers that haven’t reported heartbeat recently:
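The query itself falls outside this diff. A minimal sketch of such a query, assuming a 10-minute staleness threshold (the threshold is an illustration, not the article's value):

```Kusto
// Most recent heartbeat per computer, keeping only computers that have gone quiet
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(10m)
```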
articles/azure-monitor/logs/get-started-queries.md (26 additions & 26 deletions)
@@ -35,7 +35,7 @@ Here's a video version of this tutorial:
## Write a new query

-Queries can start with either a table name or the *search* command. It's a good idea to start with a table name because it defines a clear scope for the query. It also improves query performance and the relevance of the results.
+Queries can start with either a table name or the `search` command. It's a good idea to start with a table name because it defines a clear scope for the query. It also improves query performance and the relevance of the results.
> [!NOTE]
> KQL, which is used by Azure Monitor, is case sensitive. Language keywords are usually written in lowercase. When you use names of tables or columns in a query, be sure to use the correct case, as shown on the schema pane.
@@ -49,11 +49,11 @@ SecurityEvent
| take 10
```
-The preceding query returns 10 results from the *SecurityEvent* table, in no specific order. This common way to get a glance at a table helps you to understand its structure and content. Let's examine how it's built:
+The preceding query returns 10 results from the `SecurityEvent` table, in no specific order. This common way to get a glance at a table helps you to understand its structure and content. Let's examine how it's built:
-* The query starts with the table name *SecurityEvent*, which defines the scope of the query.
+* The query starts with the table name `SecurityEvent`, which defines the scope of the query.
* The pipe (|) character separates commands, so the output of the first command is the input of the next. You can add any number of piped elements.
-* Following the pipe is the **take** command, which returns a specific number of arbitrary records from the table.
+* Following the pipe is the `take` command, which returns a specific number of arbitrary records from the table.
We could run the query even without adding `| take 10`. The command would still be valid, but it could return up to 10,000 results.
@@ -66,36 +66,36 @@ search in (SecurityEvent) "Cryptographic"
| take 10
```
-This query searches the *SecurityEvent* table for records that contain the phrase "Cryptographic." Of those records, 10 records will be returned and displayed. If you omit the `in (SecurityEvent)` part and run only `search "Cryptographic"`, the search will go over *all* tables. The process would then take longer and be less efficient.
+This query searches the `SecurityEvent` table for records that contain the phrase "Cryptographic." Of those records, 10 records will be returned and displayed. If you omit the `in (SecurityEvent)` part and run only `search "Cryptographic"`, the search will go over *all* tables. The process would then take longer and be less efficient.
> [!IMPORTANT]
> Search queries are ordinarily slower than table-based queries because they have to process more data.
## Sort and top
-Although **take** is useful for getting a few records, the results are selected and displayed in no particular order. To get an ordered view, you could **sort** by the preferred column:
+Although `take` is useful for getting a few records, the results are selected and displayed in no particular order. To get an ordered view, you could `sort` by the preferred column:
```Kusto
SecurityEvent
| sort by TimeGenerated desc
```
-The preceding query could return too many results though, and it might also take some time. The query sorts the entire *SecurityEvent* table by the *TimeGenerated* column. The Analytics portal then limits the display to only 10,000 records. This approach isn't optimal.
+The preceding query could return too many results though, and it might also take some time. The query sorts the entire `SecurityEvent` table by the `TimeGenerated` column. The Analytics portal then limits the display to only 10,000 records. This approach isn't optimal.
-The best way to get only the latest 10 records is to use **top**, which sorts the entire table on the server side and then returns the top records:
+The best way to get only the latest 10 records is to use `top`, which sorts the entire table on the server side and then returns the top records:
```Kusto
SecurityEvent
| top 10 by TimeGenerated
```
-Descending is the default sorting order, so you would usually omit the **desc** argument. The output looks like this example.
+Descending is the default sorting order, so you would usually omit the `desc` argument. The output looks like this example.

## The where operator: Filter on a condition
Filters, as indicated by their name, filter the data by a specific condition. Filtering is the most common way to limit query results to relevant information.
-To add a filter to a query, use the **where** operator followed by one or more conditions. For example, the following query returns only *SecurityEvent* records where _Level_ equals _8_:
+To add a filter to a query, use the `where` operator followed by one or more conditions. For example, the following query returns only `SecurityEvent` records where `Level` equals `8`:
```Kusto
SecurityEvent
@@ -109,27 +109,27 @@ When you write filter conditions, you can use the following expressions:
-|*and*, *or*| Required between conditions|`Level == 16 or CommandLine != ""`|
+|`and`, `or`| Required between conditions|`Level == 16 or CommandLine != ""`|
To filter by multiple conditions, you can use either of the following approaches:
-Use **and**, as shown here:
+Use `and`, as shown here:
```Kusto
SecurityEvent
| where Level == 8 and EventID == 4672
```
-Pipe multiple **where** elements, one after the other, as shown here:
+Pipe multiple `where` elements, one after the other, as shown here:
```Kusto
SecurityEvent
| where Level == 8
| where EventID == 4672
```
-
+
> [!NOTE]
-> Values can have different types, so you might need to cast them to perform comparisons on the correct type. For example, the *SecurityEvent Level* column is of type String, so you must cast it to a numerical type, such as *int* or *long*, before you can use numerical operators on it, as shown here:
+> Values can have different types, so you might need to cast them to perform comparisons on the correct type. For example, the `SecurityEvent` `Level` column is of type String, so you must cast it to a numerical type, such as `int` or `long`, before you can use numerical operators on it, as shown here:
> `SecurityEvent | where toint(Level) >= 10`
## Specify a time range
@@ -156,7 +156,7 @@ In the preceding time filter, `ago(30m)` means "30 minutes ago." This query retu
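The body of this section falls outside the hunk. Based on the `ago(30m)` in the context line above, a minimal sketch of the kind of time filter it describes (the table and the `take` are assumptions):

```Kusto
// Return only records generated in the last 30 minutes
SecurityEvent
| where TimeGenerated > ago(30m)
| take 10
```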
## Use project and extend to select and compute columns
-Use **project** to select specific columns to include in the results:
+Use `project` to select specific columns to include in the results:
```Kusto
SecurityEvent
@@ -168,19 +168,19 @@ The preceding example generates the following output:

-You can also use **project** to rename columns and define new ones. The next example uses **project** to do the following:
+You can also use `project` to rename columns and define new ones. The next example uses `project` to do the following:

-* Select only the *Computer* and *TimeGenerated* original columns.
-* Display the *Activity* column as *EventDetails*.
-* Create a new column named *EventCode*. The **substring()** function is used to get only the first four characters from the **Activity** field.
+* Select only the `Computer` and `TimeGenerated` original columns.
+* Display the `Activity` column as `EventDetails`.
+* Create a new column named `EventCode`. The `substring()` function is used to get only the first four characters from the `Activity` field.
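The example query itself sits outside this hunk. Reconstructed from the bullets above, a hedged sketch (not necessarily the article's exact query):

```Kusto
SecurityEvent
| project Computer, TimeGenerated, EventDetails = Activity, EventCode = substring(Activity, 0, 4)
```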
-You can use **extend** to keep all original columns in the result set and define other ones. The following query uses **extend** to add the *EventCode* column. This column might not be displayed at the end of the table results. You would need to expand the details of a record to view it.
+You can use `extend` to keep all original columns in the result set and define other ones. The following query uses `extend` to add the `EventCode` column. This column might not be displayed at the end of the table results. You would need to expand the details of a record to view it.
```Kusto
SecurityEvent
@@ -189,9 +189,9 @@ SecurityEvent
```
## Use summarize to aggregate groups of rows

-Use **summarize** to identify groups of records according to one or more columns and apply aggregations to them. The most common use of **summarize** is *count*, which returns the number of results in each group.
+Use `summarize` to identify groups of records according to one or more columns and apply aggregations to them. The most common use of `summarize` is `count`, which returns the number of results in each group.
-The following query reviews all *Perf* records from the last hour, groups them by *ObjectName*, and counts the records in each group:
+The following query reviews all `Perf` records from the last hour, groups them by `ObjectName`, and counts the records in each group:
```Kusto
Perf
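// The continuation is outside this hunk. A hedged sketch (assumption) of how it
// likely reads, based on the paragraph above:
// | where TimeGenerated > ago(1h)
// | summarize count() by ObjectName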
@@ -207,15 +207,15 @@ Perf
| summarize count() by ObjectName, CounterName
```
-Another common use is to perform mathematical or statistical calculations on each group. The following example calculates the average *CounterValue* for each computer:
+Another common use is to perform mathematical or statistical calculations on each group. The following example calculates the average `CounterValue` for each computer:
```Kusto
Perf
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by Computer
```
-Unfortunately, the results of this query are meaningless because we mixed together different performance counters. To make the results more meaningful, calculate the average separately for each combination of *CounterName* and *Computer*:
+Unfortunately, the results of this query are meaningless because we mixed together different performance counters. To make the results more meaningful, calculate the average separately for each combination of `CounterName` and `Computer`:
```Kusto
Perf
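// The continuation is outside this hunk. A hedged sketch (assumption),
// per the paragraph above:
// | where TimeGenerated > ago(1h)
// | summarize avg(CounterValue) by Computer, CounterName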
@@ -226,7 +226,7 @@ Perf
### Summarize by a time column

Grouping results can also be based on a time column or another continuous value. Simply summarizing `by TimeGenerated`, though, would create groups for every single millisecond over the time range because these values are unique.
-To create groups based on continuous values, it's best to break the range into manageable units by using **bin**. The following query analyzes *Perf* records that measure free memory (*Available MBytes*) on a specific computer. It calculates the average value of each 1-hour period over the last 7 days:
+To create groups based on continuous values, it's best to break the range into manageable units by using `bin`. The following query analyzes `Perf` records that measure free memory (`Available MBytes`) on a specific computer. It calculates the average value of each 1-hour period over the last 7 days:
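The query is cut off at the end of this diff. A minimal sketch matching that description, with a hypothetical computer name:

```Kusto
Perf
| where TimeGenerated > ago(7d)
| where Computer == "ContosoVM01" // hypothetical computer name
| where CounterName == "Available MBytes"
| summarize avg(CounterValue) by bin(TimeGenerated, 1h)
```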
0 commit comments