
Commit 6f7a368
edit pass: log-articles-batch-7
1 parent 18e23c2

File tree: 3 files changed, +68 −76 lines


articles/azure-monitor/logs/data-ingestion-time.md

Lines changed: 6 additions & 6 deletions
@@ -35,8 +35,8 @@ Agents and management solutions use different strategies to collect data from a
 | Windows events, Syslog events, and performance metrics | Collected immediately| |
 | Linux performance counters | Polled at 30-second intervals| |
 | IIS logs and text logs | Collected after their timestamp changes | For IIS logs, this schedule is influenced by the [rollover schedule configured on IIS](../agents/data-sources-iis-logs.md). |
-| Active Directory Replication solution | Assessment every five days | The agent collects these logs only when assessment is complete.|
-| Active Directory Assessment solution | Weekly assessment of your Active Directory infrastructure | The agent collects these logs only when assessment is complete.|
+| Active Directory Replication solution | Assessment every five days | The agent collects the logs only when assessment is complete.|
+| Active Directory Assessment solution | Weekly assessment of your Active Directory infrastructure | The agent collects the logs only when assessment is complete.|
 
 ### Agent upload frequency
 
@@ -85,7 +85,7 @@ Another process that adds latency is the process that handles custom logs. In so
 
 ### New custom data types provisioning
 
-When a new type of custom data is created from a [custom log](../agents/data-sources-custom-logs.md) or the [Data Collector API](../logs/data-collector-api.md), the system creates a dedicated storage container. This is a one-time overhead that occurs only on the first appearance of this data type.
+When a new type of custom data is created from a [custom log](../agents/data-sources-custom-logs.md) or the [Data Collector API](../logs/data-collector-api.md), the system creates a dedicated storage container. This one-time overhead occurs only on the first appearance of this data type.
 
 ### Surge protection
 
@@ -108,7 +108,7 @@ Ingestion time might vary for different resources under different circumstances.
 |:---|:---|:---|
 | Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, it will be set to the same time as _TimeReceived. | If at processing time the Time Generated value is older than 3 days, the row will be dropped. |
 | Record received by Azure Monitor ingestion endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | This field isn't optimized for mass processing and shouldn't be used to filter large datasets. |
-| Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | We recommend using ingestion_time() if there's a need to filter only records that were ingested in a certain time window. In such cases, we recommend also adding a `TimeGenerated` filter with a larger range. |
+| Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | We recommend using `ingestion_time()` if there's a need to filter only records that were ingested in a certain time window. In such cases, we recommend also adding a `TimeGenerated` filter with a larger range. |
 
 ### Ingestion latency delays
 You can measure the latency of a specific record by comparing the result of the [ingestion_time()](/azure/kusto/query/ingestiontimefunction) function to the `TimeGenerated` property. This data can be used with various aggregations to discover how ingestion latency behaves. Examine some percentile of the ingestion time to get insights for large amounts of data.
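The percentile approach described in that context line can be sketched as a query like the following (a hypothetical example, not part of this commit's hunks; the `E2EIngestionLatency` column name is assumed):

```Kusto
// Sketch: median and 95th-percentile ingestion latency per 30-minute bucket
Heartbeat
| where TimeGenerated > ago(8h)
| extend E2EIngestionLatency = ingestion_time() - TimeGenerated
| summarize percentiles(E2EIngestionLatency, 50, 95) by bin(TimeGenerated, 30m)
```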
@@ -137,7 +137,7 @@ Heartbeat
 | render timechart
 ```
 
-Use the following query to show computer ingestion time by the country/region they're located in, which is based on their IP address:
+Use the following query to show computer ingestion time by the country/region where they're located, which is based on their IP address:
 
 ``` Kusto
 Heartbeat
@@ -160,7 +160,7 @@ AzureDiagnostics
 ### Resources that stop responding
 In some cases, a resource could stop sending data. To understand if a resource is sending data or not, look at its most recent record, which can be identified by the standard `TimeGenerated` field.
 
-Use the _Heartbeat_ table to check the availability of a VM because a heartbeat is sent once a minute by the agent. Use the following query to list the active computers that haven’t reported heartbeat recently:
+Use the `Heartbeat` table to check the availability of a VM because a heartbeat is sent once a minute by the agent. Use the following query to list the active computers that haven’t reported heartbeat recently:
 
 ``` Kusto
 Heartbeat
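The hunk is cut off after the opening `Heartbeat` line; a query of the kind the sentence describes might look like this sketch (the one-day activity window and 10-minute staleness threshold are assumptions, not taken from the commit):

```Kusto
// Sketch: computers seen in the last day whose latest heartbeat is stale
Heartbeat
| where TimeGenerated > ago(1d)
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(10m)   // 10m threshold is an assumed example
```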

articles/azure-monitor/logs/get-started-queries.md

Lines changed: 26 additions & 26 deletions
@@ -35,7 +35,7 @@ Here's a video version of this tutorial:
 
 ## Write a new query
 
-Queries can start with either a table name or the *search* command. It's a good idea to start with a table name because it defines a clear scope for the query. It also improves query performance and the relevance of the results.
+Queries can start with either a table name or the `search` command. It's a good idea to start with a table name because it defines a clear scope for the query. It also improves query performance and the relevance of the results.
 
 > [!NOTE]
 > KQL, which is used by Azure Monitor, is case sensitive. Language keywords are usually written in lowercase. When you use names of tables or columns in a query, be sure to use the correct case, as shown on the schema pane.
@@ -49,11 +49,11 @@ SecurityEvent
 | take 10
 ```
 
-The preceding query returns 10 results from the *SecurityEvent* table, in no specific order. This common way to get a glance at a table helps you to understand its structure and content. Let's examine how it's built:
+The preceding query returns 10 results from the `SecurityEvent` table, in no specific order. This common way to get a glance at a table helps you to understand its structure and content. Let's examine how it's built:
 
-* The query starts with the table name *SecurityEvent*, which defines the scope of the query.
+* The query starts with the table name `SecurityEvent`, which defines the scope of the query.
 * The pipe (|) character separates commands, so the output of the first command is the input of the next. You can add any number of piped elements.
-* Following the pipe is the **take** command, which returns a specific number of arbitrary records from the table.
+* Following the pipe is the `take` command, which returns a specific number of arbitrary records from the table.
 
 We could run the query even without adding `| take 10`. The command would still be valid, but it could return up to 10,000 results.
 
@@ -66,36 +66,36 @@ search in (SecurityEvent) "Cryptographic"
 | take 10
 ```
 
-This query searches the *SecurityEvent* table for records that contain the phrase "Cryptographic." Of those records, 10 records will be returned and displayed. If you omit the `in (SecurityEvent)` part and run only `search "Cryptographic"`, the search will go over *all* tables. The process would then take longer and be less efficient.
+This query searches the `SecurityEvent` table for records that contain the phrase "Cryptographic." Of those records, 10 records will be returned and displayed. If you omit the `in (SecurityEvent)` part and run only `search "Cryptographic"`, the search will go over *all* tables. The process would then take longer and be less efficient.
 
 > [!IMPORTANT]
 > Search queries are ordinarily slower than table-based queries because they have to process more data.
 
 ## Sort and top
-Although **take** is useful for getting a few records, the results are selected and displayed in no particular order. To get an ordered view, you could **sort** by the preferred column:
+Although `take` is useful for getting a few records, the results are selected and displayed in no particular order. To get an ordered view, you could `sort` by the preferred column:
 
 ```Kusto
 SecurityEvent
 | sort by TimeGenerated desc
 ```
 
-The preceding query could return too many results though, and it might also take some time. The query sorts the entire *SecurityEvent* table by the *TimeGenerated* column. The Analytics portal then limits the display to only 10,000 records. This approach isn't optimal.
+The preceding query could return too many results though, and it might also take some time. The query sorts the entire `SecurityEvent` table by the `TimeGenerated` column. The Analytics portal then limits the display to only 10,000 records. This approach isn't optimal.
 
-The best way to get only the latest 10 records is to use **top**, which sorts the entire table on the server side and then returns the top records:
+The best way to get only the latest 10 records is to use `top`, which sorts the entire table on the server side and then returns the top records:
 
 ```Kusto
 SecurityEvent
 | top 10 by TimeGenerated
 ```
 
-Descending is the default sorting order, so you would usually omit the **desc** argument. The output looks like this example.
+Descending is the default sorting order, so you would usually omit the `desc` argument. The output looks like this example.
 
 ![Screenshot that shows the top 10 records sorted in descending order.](media/get-started-queries/top10.png)
 
 ## The where operator: Filter on a condition
 Filters, as indicated by their name, filter the data by a specific condition. Filtering is the most common way to limit query results to relevant information.
 
-To add a filter to a query, use the **where** operator followed by one or more conditions. For example, the following query returns only *SecurityEvent* records where _Level_ equals _8_:
+To add a filter to a query, use the `where` operator followed by one or more conditions. For example, the following query returns only `SecurityEvent` records where `Level` equals `8`:
 
 ```Kusto
 SecurityEvent
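The code block is cut off at the hunk boundary; per the surrounding sentence, the filter would continue along these lines (a sketch):

```Kusto
SecurityEvent
| where Level == 8
```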
@@ -109,27 +109,27 @@ When you write filter conditions, you can use the following expressions:
 | == | Check equality<br>(case-sensitive) | `Level == 8` |
 | =~ | Check equality<br>(case-insensitive) | `EventSourceName =~ "microsoft-windows-security-auditing"` |
 | !=, <> | Check inequality<br>(both expressions are identical) | `Level != 4` |
-| *and*, *or* | Required between conditions| `Level == 16 or CommandLine != ""` |
+| `and`, `or` | Required between conditions| `Level == 16 or CommandLine != ""` |
 
 To filter by multiple conditions, you can use either of the following approaches:
 
-Use **and**, as shown here:
+Use `and`, as shown here:
 
 ```Kusto
 SecurityEvent
 | where Level == 8 and EventID == 4672
 ```
 
-Pipe multiple **where** elements, one after the other, as shown here:
+Pipe multiple `where` elements, one after the other, as shown here:
 
 ```Kusto
 SecurityEvent
 | where Level == 8
 | where EventID == 4672
 ```
-
+
 > [!NOTE]
-> Values can have different types, so you might need to cast them to perform comparisons on the correct type. For example, the *SecurityEvent Level* column is of type String, so you must cast it to a numerical type, such as *int* or *long*, before you can use numerical operators on it, as shown here:
+> Values can have different types, so you might need to cast them to perform comparisons on the correct type. For example, the `SecurityEvent Level` column is of type String, so you must cast it to a numerical type, such as `int` or `long`, before you can use numerical operators on it, as shown here:
 > `SecurityEvent | where toint(Level) >= 10`
 
 ## Specify a time range
@@ -156,7 +156,7 @@ In the preceding time filter, `ago(30m)` means "30 minutes ago." This query retu
 
 ## Use project and extend to select and compute columns
 
-Use **project** to select specific columns to include in the results:
+Use `project` to select specific columns to include in the results:
 
 ```Kusto
 SecurityEvent
@@ -168,19 +168,19 @@ The preceding example generates the following output:
 
 ![Screenshot that shows the query "project" results list.](media/get-started-queries/project.png)
 
-You can also use **project** to rename columns and define new ones. The next example uses **project** to do the following:
+You can also use `project` to rename columns and define new ones. The next example uses `project` to do the following:
 
-* Select only the *Computer* and *TimeGenerated* original columns.
-* Display the *Activity* column as *EventDetails*.
-* Create a new column named *EventCode*. The **substring()** function is used to get only the first four characters from the **Activity** field.
+* Select only the `Computer` and `TimeGenerated` original columns.
+* Display the `Activity` column as `EventDetails`.
+* Create a new column named `EventCode`. The `substring()` function is used to get only the first four characters from the `Activity` field.
 
 ```Kusto
 SecurityEvent
 | top 10 by TimeGenerated
 | project Computer, TimeGenerated, EventDetails=Activity, EventCode=substring(Activity, 0, 4)
 ```
 
-You can use **extend** to keep all original columns in the result set and define other ones. The following query uses **extend** to add the *EventCode* column. This column might not be displayed at the end of the table results. You would need to expand the details of a record to view it.
+You can use `extend` to keep all original columns in the result set and define other ones. The following query uses `extend` to add the `EventCode` column. This column might not be displayed at the end of the table results. You would need to expand the details of a record to view it.
 
 ```Kusto
 SecurityEvent
@@ -189,9 +189,9 @@ SecurityEvent
 ```
 
 ## Use summarize to aggregate groups of rows
-Use **summarize** to identify groups of records according to one or more columns and apply aggregations to them. The most common use of **summarize** is *count*, which returns the number of results in each group.
+Use `summarize` to identify groups of records according to one or more columns and apply aggregations to them. The most common use of `summarize` is `count`, which returns the number of results in each group.
 
-The following query reviews all *Perf* records from the last hour, groups them by *ObjectName*, and counts the records in each group:
+The following query reviews all `Perf` records from the last hour, groups them by `ObjectName`, and counts the records in each group:
 
 ```Kusto
 Perf
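This hunk also ends at the opening `Perf` line; a query matching the description in the changed sentence would look like the following sketch:

```Kusto
// Sketch: count of Perf records per ObjectName over the last hour
Perf
| where TimeGenerated > ago(1h)
| summarize count() by ObjectName
```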
@@ -207,15 +207,15 @@ Perf
 | summarize count() by ObjectName, CounterName
 ```
 
-Another common use is to perform mathematical or statistical calculations on each group. The following example calculates the average *CounterValue* for each computer:
+Another common use is to perform mathematical or statistical calculations on each group. The following example calculates the average `CounterValue` for each computer:
 
 ```Kusto
 Perf
 | where TimeGenerated > ago(1h)
 | summarize avg(CounterValue) by Computer
 ```
 
-Unfortunately, the results of this query are meaningless because we mixed together different performance counters. To make the results more meaningful, calculate the average separately for each combination of *CounterName* and *Computer*:
+Unfortunately, the results of this query are meaningless because we mixed together different performance counters. To make the results more meaningful, calculate the average separately for each combination of `CounterName` and `Computer`:
 
 ```Kusto
 Perf
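A query matching the "each combination of `CounterName` and `Computer`" description would continue the truncated block along these lines (a sketch):

```Kusto
// Sketch: average counter value per counter, per computer
Perf
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by CounterName, Computer
```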
@@ -226,7 +226,7 @@ Perf
 ### Summarize by a time column
 Grouping results can also be based on a time column or another continuous value. Simply summarizing `by TimeGenerated`, though, would create groups for every single millisecond over the time range because these values are unique.
 
-To create groups based on continuous values, it's best to break the range into manageable units by using **bin**. The following query analyzes *Perf* records that measure free memory (*Available MBytes*) on a specific computer. It calculates the average value of each 1-hour period over the last 7 days:
+To create groups based on continuous values, it's best to break the range into manageable units by using `bin`. The following query analyzes `Perf` records that measure free memory (`Available MBytes`) on a specific computer. It calculates the average value of each 1-hour period over the last 7 days:
 
 ```Kusto
 Perf
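The final hunk is truncated as well; a `bin`-based query of the kind described might look like this sketch (the computer name is a hypothetical placeholder):

```Kusto
// Sketch: hourly average of free memory on one computer over 7 days
Perf
| where TimeGenerated > ago(7d)
| where Computer == "ContosoVM01"          // hypothetical computer name
| where CounterName == "Available MBytes"
| summarize avg(CounterValue) by bin(TimeGenerated, 1h)
```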
