# Configure advanced settings for streams [streams-advanced-settings]
The **Advanced** tab shows the underlying {{es}} configuration details of your stream. While Streams simplifies many configurations, it doesn't support modifying all pipelines and templates. From the **Advanced** tab, you can manually interact with the index or component templates, or modify other ingest pipelines used by the stream.
Use the **Data quality** tab to find failed and degraded documents in your stream. The **Data quality** tab is made up of the following components:
- **Degraded documents**: Documents with the `ignored` property, usually because of malformed fields or exceeding the limit of total fields when `ignore_above: false`. This component shows the total number of degraded documents, the percentage, and the status (**Good**, **Degraded**, **Poor**).
- **Failed documents**: Documents that were rejected during ingestion.
- **Issues**: {applies_to}`stack: preview 9.2` Find issues with specific fields, including how often and when they've occurred.
For more information on data quality, refer to the [data set quality](../../data-set-quality-monitoring.md) documentation.
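Degraded documents are the ones that carry the `_ignored` metadata field in {{es}}, so you can also find them with a regular search. A minimal sketch of a search body, assuming you run it against the data stream that backs your stream:

```json
{
  "query": {
    "exists": {
      "field": "_ignored"
    }
  }
}
```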
`solutions/observability/streams/management/extract.md`
The UI also shows indexing problems, such as mapping conflicts, so you can address them.

:::{note}
Applied changes aren't retroactive and only affect *future ingested data*.
:::

## Supported processors
Streams supports the following processors:

- [Date](./extract/date.md): convert date strings into timestamps with options for timezone, locale, and output format settings.
- [Dissect](./extract/dissect.md): extract fields from structured log messages using defined delimiters instead of patterns, making it faster than Grok and ideal for consistently formatted logs.
- [Grok](./extract/grok.md): extract fields from unstructured log messages using predefined or custom patterns; supports multiple match attempts in sequence and can automatically generate patterns with an LLM connector.
- [Set](./extract/set.md): assign a specific value to a field, creating the field if it doesn’t exist or overwriting its value if it does.
- [Rename](./extract/rename.md): change the name of a field, moving its value to a new field name and removing the original.
- [Append](./extract/append.md): add a value to an existing array field, or create the field as an array if it doesn’t exist.

## Add a processor [streams-add-processors]
Streams uses {{es}} ingest pipelines to process your data. Ingest pipelines are made up of processors that transform your data.
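For illustration, an ingest pipeline is a JSON document containing an ordered list of processors. A minimal sketch with hypothetical field names and values (Streams builds and manages the real pipeline for you):

```json
{
  "description": "Sketch of a pipeline with two processors",
  "processors": [
    {
      "set": {
        "field": "service.environment",
        "value": "production"
      }
    },
    {
      "rename": {
        "field": "msg",
        "target_field": "message"
      }
    }
  ]
}
```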
To add a processor:
1. Select **Add processor** to open a list of supported processors.
1. Select a processor from the list.
1. Select **Add processor** to save the processor.

:::{note}
Editing processors with JSON is planned for a future release, and additional processors will be supported over time.
:::

### Add conditions to processors [streams-add-processor-conditions]

You can provide a condition for each processor under **Optional fields**. Conditions are boolean expressions that are evaluated for each document. Provide a field, a value, and a comparator.
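For illustration only, a condition such as "`log.level` equals `error`" behaves like the `if` clause that {{es}} ingest processors support. A minimal sketch with hypothetical field names and values:

```json
{
  "set": {
    "if": "ctx.log?.level == 'error'",
    "field": "event.severity",
    "value": "high"
  }
}
```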
:::{dropdown} Supported comparators
Streams processors support the following comparators:
- equals
- not equals
- less than
Use the append processor to add a value to an existing array field, or create the field as an array if it doesn’t exist.
To use an append processor:
1. Set **Source Field** to the field you want to append values to.
1. Set **Target field** to the values you want to append to the **Source Field**.
This functionality uses the {{es}} append pipeline processor. Refer to the [append processor](elasticsearch://reference/enrich-processor/append-processor.md) {{es}} documentation for more information.
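For example, the underlying {{es}} append processor configuration could look like this sketch (the field name and value are hypothetical):

```json
{
  "append": {
    "field": "tags",
    "value": ["streams-processed"]
  }
}
```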
`solutions/observability/streams/management/extract/date.md`
The date processor parses date strings and uses them as the timestamp of the document.
To parse a date string using the date processor:
1. Set the **Source Field** to the field containing the timestamp.
1. Set the **Format** field to one of the accepted date formats (ISO8601, UNIX, UNIX_MS, or TAI64N) or use a Java time pattern. Refer to the [example formats](#streams-date-examples) for more information.
This functionality uses the {{es}} date pipeline processor. Refer to the [date processor](elasticsearch://reference/enrich-processor/date-processor.md) {{es}} documentation for more information.
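For example, the underlying {{es}} date processor configuration could look like this sketch (the source field name is hypothetical):

```json
{
  "date": {
    "field": "event_time",
    "formats": ["ISO8601"]
  }
}
```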
## Example formats [streams-date-examples]
The following list provides some common examples of date formats and how to parse them.
`solutions/observability/streams/management/extract/dissect.md`
# Dissect processor [streams-dissect-processor]
The dissect processor parses structured log messages and extracts fields from them. Instead of matching log messages against predefined patterns, it uses a set of delimiters to split the message into fields.
Dissect is much faster than Grok and is recommended for log messages that follow a consistent, structured format.
To parse a log message with a dissect processor:
1. Set the **Source Field** to the field you want to dissect.
1. Set the delimiters you want to use in the **Pattern** field. Refer to the [example pattern](#streams-dissect-example) for more information on setting delimiters.
This functionality uses the {{es}} dissect pipeline processor. Refer to the [dissect processor](elasticsearch://reference/enrich-processor/dissect-processor.md) {{es}} documentation for more information.
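For example, the underlying {{es}} dissect processor configuration could look like this sketch (the pattern and field names are hypothetical):

```json
{
  "dissect": {
    "field": "message",
    "pattern": "%{@timestamp} %{log.level} %{log.logger} %{message}"
  }
}
```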
## Example dissect pattern [streams-dissect-example]
The following example shows the dissect pattern for an unstructured log message.
`solutions/observability/streams/management/extract/grok.md`
# Grok processor [streams-grok-processor]
The grok processor parses unstructured log messages using a set of predefined patterns to match the messages and extract fields. The grok processor is powerful and can parse a wide variety of log formats.
You can provide multiple patterns to the grok processor. It tries to match the log message against each pattern in the order they are provided. If a pattern matches, the fields are extracted and the remaining patterns are not used.
If a pattern doesn't match, the grok processor tries the next pattern. If no patterns match, the grok processor fails and you can troubleshoot the issue. Instead of writing grok patterns yourself, you can have Streams generate patterns for you. Refer to [generate patterns](#streams-grok-patterns) for more information.
:::{tip}
To improve pipeline performance, start with the most common patterns first, then add more specific patterns. This reduces the number of times the grok processor has to run.
:::
To parse a log message with a grok processor:
1. Set the **Source Field** to the field you want to search for grok matches.
1. Set the patterns you want to use in the **Grok patterns** field. Refer to the [example pattern](#streams-grok-example) for more information on patterns.
This functionality uses the {{es}} Grok pipeline processor. Refer to the [Grok processor](elasticsearch://reference/enrich-processor/grok-processor.md) {{es}} documentation for more information.
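For example, the underlying {{es}} grok processor configuration could look like this sketch, with two patterns tried in order (the patterns and field names are hypothetical):

```json
{
  "grok": {
    "field": "message",
    "patterns": [
      "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log.level} %{GREEDYDATA:message}",
      "%{GREEDYDATA:message}"
    ]
  }
}
```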
## Example grok pattern [streams-grok-example]
Grok patterns are defined in the following format:
```
{
  "PATTERN_NAME": "pattern definition"
}
```

The previous pattern can then be used in the processor.
Requires an LLM Connector to be configured.
Instead of writing the Grok patterns by hand, you can use the **Generate Patterns** button to generate the patterns for you.
Select **Accept** to add a generated pattern to the list of patterns used by the grok processor.
### How does **Generate patterns** work? [streams-grok-pattern-generation]
% need to check to make sure this is still accurate.
Under the hood, the 100 samples on the right side are grouped into categories of similar messages. For each category, a Grok pattern is generated by sending a few samples to the LLM. Matching patterns are then shown in the UI.
:::{note}
This can incur additional costs, depending on the LLM connector you are using. Typically a single iteration uses between 1000 and 5000 tokens, depending on the number of identified categories and the length of the messages.
:::
Use the rename processor to change the name of a field, moving its value to a new field name and removing the original.
To use a rename processor:
1. Set **Source Field** to the field you want to rename.
1. Set **Target field** to the new name you want to use for the **Source Field**.
This functionality uses the {{es}} rename pipeline processor. Refer to the [rename processor](elasticsearch://reference/enrich-processor/rename-processor.md) {{es}} documentation for more information.
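For example, the underlying {{es}} rename processor configuration could look like this sketch (the field names are hypothetical):

```json
{
  "rename": {
    "field": "src_ip",
    "target_field": "source.ip"
  }
}
```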