Commit 25d2297

Update Stream and Filter logs docs (#1364)

This PR closes [Issue 1405](#1405) and updates the example logs so they'll be easier to find in the UI.

1 parent 2cb3bcc commit 25d2297
File tree

2 files changed: +99 -93 lines changed

solutions/observability/logs/filter-aggregate-logs.md

Lines changed: 30 additions & 28 deletions
@@ -29,7 +29,9 @@ This guide shows you how to:
 ::::
 
 
-The examples on this page use the following ingest pipeline and index template, which you can set in **Developer Tools**. If you haven’t used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the [Parse and organize logs](/solutions/observability/logs/parse-route-logs.md) documentation.
+The examples on this page use the following ingest pipeline and index template. The pipeline and template need to be set before you create your data stream in the following steps. Set them in **Developer Tools**, which you can find by searching for `Developer Tools` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
+
+If you haven't used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the [Parse and organize logs](/solutions/observability/logs/parse-route-logs.md) documentation.
 
 Set the ingest pipeline with the following command:
 
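The hunk above references an ingest pipeline without showing it. For context, the pipeline in this doc is typically a single dissect processor that splits each raw message into a timestamp, log level, and host IP; the following is a sketch of that shape, not the exact content of the changed file (the description text and pattern are assumptions):

```console
PUT _ingest/pipeline/logs-example-default
{
  "description": "Extracts the timestamp, log level, and host IP from the message field",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}"
      }
    }
  ]
}
```

With this pipeline attached via the index template, each bulk-indexed document gains structured `@timestamp`, `log.level`, and `host.ip` fields that the later filter and aggregation examples rely on.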
@@ -94,28 +96,28 @@ Add some logs with varying timestamps and log levels to your data stream:
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-04-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
+{ "message": "2025-04-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
 { "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
+{ "message": "2025-04-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
 { "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
+{ "message": "2025-04-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 ```
 
-For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Discover:
+For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on April 14th or 15th. From Discover:
 
 1. Make sure **All logs** is selected in the **Data views** menu.
 1. Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`:
 
    ```text
    log.level: ("ERROR" or "WARN")
    ```
-1. Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`.
+1. Click the current time range, select **Absolute**, and set the **Start date** to `Apr 14, 2025 @ 00:00:00.000`.
 
    ![Set the time range start date](../../images/serverless-logs-start-date.png "")
 
-1. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`.
+1. Click the end of the current time range, select **Absolute**, and set the **End date** to `Apr 15, 2025 @ 23:59:59.999`.
 
    ![Set the time range end date](/solutions/images/serverless-logs-end-date.png "")
 
@@ -141,16 +143,16 @@ First, from **Developer Tools**, add some logs with varying timestamps and log l
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-04-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
+{ "message": "2025-04-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
 { "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
+{ "message": "2025-04-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
 { "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
+{ "message": "2025-04-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 ```
 
-Let’s say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`.
+Let’s say you want to look into an event that occurred between April 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`.
 
 ```console
 POST /logs-example-default/_search
@@ -161,8 +163,8 @@ POST /logs-example-default/_search
       {
         "range": {
           "@timestamp": {
-            "gte": "2023-09-14T00:00:00",
-            "lte": "2023-09-15T23:59:59"
+            "gte": "2025-04-14T00:00:00",
+            "lte": "2025-04-15T23:59:59"
           }
         }
       },
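Only the range clause is visible in this hunk. The full boolean query around it presumably pairs the range with a `terms` clause on `log.level` inside a `bool` filter; a sketch of the complete request with the updated dates (the surrounding structure is inferred, not shown in the diff):

```console
POST /logs-example-default/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2025-04-14T00:00:00",
              "lte": "2025-04-15T23:59:59"
            }
          }
        },
        {
          "terms": {
            "log.level": ["WARN", "ERROR"]
          }
        }
      ]
    }
  }
}
```

Using `filter` rather than `must` keeps both clauses in filter context, which skips scoring and is the idiomatic choice for yes/no conditions like these.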
@@ -186,27 +188,27 @@ The filtered results should show `WARN` and `ERROR` logs that occurred within th
 ...
 "hits": [
   {
-    "_index": ".ds-logs-example-default-2023.09.25-000001",
+    "_index": ".ds-logs-example-default-2025.04.25-000001",
     "_id": "JkwPzooBTddK4OtTQToP",
     "_score": 0,
     "_source": {
       "message": "192.168.1.101 Disk usage exceeds 90%.",
       "log": {
         "level": "WARN"
       },
-      "@timestamp": "2023-09-15T08:15:20.234Z"
+      "@timestamp": "2025-04-15T08:15:20.234Z"
     }
   },
   {
-    "_index": ".ds-logs-example-default-2023.09.25-000001",
+    "_index": ".ds-logs-example-default-2025.04.25-000001",
     "_id": "A5YSzooBMYFrNGNwH75O",
     "_score": 0,
     "_source": {
       "message": "192.168.1.102 Critical system failure detected.",
       "log": {
         "level": "ERROR"
       },
-      "@timestamp": "2023-09-14T10:30:45.789Z"
+      "@timestamp": "2025-04-14T10:30:45.789Z"
     }
   }
 ]
@@ -226,19 +228,19 @@ First, from **Developer Tools**, add some logs with varying log levels to your d
 ```console
 POST logs-example-default/_bulk
 { "create": {} }
-{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
+{ "message": "2025-04-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." }
 { "create": {} }
-{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
+{ "message": "2025-04-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." }
 { "create": {} }
-{ "message": "2023-09-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." }
+{ "message": "2025-04-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." }
 { "create": {} }
-{ "message": "2023-09-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." }
+{ "message": "2025-04-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." }
 { "create": {} }
-{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
+{ "message": "2025-04-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." }
 { "create": {} }
-{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
+{ "message": "2025-04-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." }
 { "create": {} }
-{ "message": "2023-09-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." }
+{ "message": "2025-04-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." }
 ```
 
 Next, run this command to aggregate your log data using the `log.level` field:
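The aggregation command itself falls outside the changed lines. A minimal `terms` aggregation on `log.level` would look roughly like the following (the aggregation name is illustrative, and `size=0` suppresses the individual hits so only bucket counts come back):

```console
GET /logs-example-default/_search?size=0
{
  "aggs": {
    "log_level_distribution": {
      "terms": {
        "field": "log.level"
      }
    }
  }
}
```

The response buckets then report how many of the indexed example logs carry each level (`ERROR`, `WARN`, `INFO`, `DEBUG`).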
@@ -300,8 +302,8 @@ GET /logs-example-default/_search
   "query": {
     "range": {
       "@timestamp": {
-        "gte": "2023-09-14T00:00:00",
-        "lte": "2023-09-15T23:59:59"
+        "gte": "2025-04-14T00:00:00",
+        "lte": "2025-04-15T23:59:59"
       }
     }
   },
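This final hunk updates a range clause sitting inside a date-filtered aggregation request. Combining the visible context with the aggregation the page describes, the full request plausibly looks like this (the `aggs` body is inferred from the surrounding docs, not shown in the diff):

```console
GET /logs-example-default/_search?size=0
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2025-04-14T00:00:00",
        "lte": "2025-04-15T23:59:59"
      }
    }
  },
  "aggs": {
    "log_level_distribution": {
      "terms": {
        "field": "log.level"
      }
    }
  }
}
```

Because the `query` runs before the `aggs`, the buckets count only documents whose `@timestamp` falls on April 14th or 15th.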
