
Commit 4daec6c

[Observability Docs][Streams] Add access to Streams in Stack (#2156)

1 parent d335847 commit 4daec6c

9 files changed, +76 -42 lines changed

Lines changed: 3 additions & 3 deletions
@@ -1,10 +1,10 @@
 ---
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---
 # Configure advanced settings [streams-advanced-settings]

-The **Advanced** tab on the **Manage stream** page shows the lower-level details of your stream. While Streams simplifies many configurations, it doesn't currently support modifying all pipelines and templates. From the **Advanced** tab, you can manually interact with the index or component templates, or modify any of the other ingest pipelines that are being used.
-This UI is intended for more advanced users.
+The **Advanced** tab on the **Manage stream** page shows the underlying configuration details of your stream. While Streams simplifies many configurations, it doesn't support modifying all pipelines and templates. From the **Advanced** tab, you can manually interact with the index or component templates, or modify other ingest pipelines that are used by the stream.

-![Screenshot of the Advanced tab](<../../../../images/logs-streams-advanced.png>)
+This UI is intended for advanced users.

solutions/observability/logs/streams/management/extract.md

Lines changed: 18 additions & 15 deletions
@@ -1,17 +1,18 @@
 ---
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---
 # Extract fields [streams-extract-fields]

-Unstructured log messages need to be parsed into meaningful fields so you can filter and analyze them quickly. Common fields to extract include timestamp and the log level, but you can also extract information like IP addresses, usernames, or ports.
+Unstructured log messages must be parsed into meaningful fields before you can filter and analyze them effectively. Commonly extracted fields include `@timestamp` and `log.level`, but you can also extract information like IP addresses, usernames, and ports.

-Use the **Extract field** tab on the **Manage stream** page to process your data. The UI simulates your changes and provides an immediate preview that's tested end-to-end.
+Use the **Processing** tab on the **Manage stream** page to process your data. The UI simulates your changes and provides an immediate preview that's tested end-to-end.

 The UI also shows indexing problems, such as mapping conflicts, so you can address them before applying changes.

 :::{note}
-Applied changes aren't retroactive and only affect *future data ingested*.
+Applied changes aren't retroactive and only affect *future ingested data*.
 :::

 ## Add a processor [streams-add-processors]
@@ -33,7 +34,7 @@ To add a processor:
 1. Select **Add Processor** to save the processor.

 :::{note}
-Editing processors with JSON is planned for a future release. More processors may be added over time.
+Editing processors with JSON is planned for a future release, and additional processors may be supported over time.
 :::

 ### Add conditions to processors [streams-add-processor-conditions]
@@ -59,12 +60,12 @@ Under **Processors for field extraction**, when you set pipeline processors to m
 When you add or edit processors, the **Data preview** updates automatically.

 :::{note}
-To avoid unexpected results, focus on adding processors rather than removing or reordering existing processors.
+To avoid unexpected results, we recommend adding processors rather than removing or reordering existing processors.
 :::

 **Data preview** loads 100 documents from your existing data and runs your changes using them.
 For any newly added processors, this simulation is reliable. You can save individual processors during the preview, and even reorder them.
-Selecting 'Save changes' applies your changes to the data stream.
+Selecting **Save changes** applies your changes to the data stream.

 If you edit the stream again, note the following:
 - Adding more processors to the end of the list will work as expected.
@@ -85,29 +86,31 @@ Turn on **Ignore missing fields** to ignore the processor if the field is not pr

 Documents fail processing for different reasons. Streams helps you to easily find and handle failures before deploying changes.

-The following example shows not all messages matched the provided Grok pattern:
+In the following screenshot, the **Failed** percentage shows that not all messages matched the provided Grok pattern:

 ![Screenshot showing some failed documents](<../../../../images/logs-streams-parsed.png>)

-You can filter your documents by selecting **Parsed** or **Failed** at the top of the table. Select **Failed** to see the documents that failed:
+You can filter your documents by selecting **Parsed** or **Failed** at the top of the table. Select **Failed** to see the documents that weren't parsed correctly:

 ![Screenshot showing the documents UI with Failed selected](<../../../../images/logs-streams-failures.png>)

 Failures are displayed at the bottom of the process editor:

 ![Screenshot showing failure notifications](<../../../../images/logs-streams-processor-failures.png>)

-These failures may be something you should address, but in some cases they also act as more of a warning.
+These failures may require action, but in some cases, they serve more as warnings.

-### Mapping Conflicts
+### Mapping conflicts

-As part of processing, Streams also checks for mapping conflicts by simulating the change end to end. If a mapping conflict is detected, Streams marks the processor as failed and displays a failure message:
+As part of processing, Streams also checks for mapping conflicts by simulating the change end to end. If a mapping conflict is detected, Streams marks the processor as failed and displays a failure message like the following:

 ![Screenshot showing mapping conflict notifications](<../../../../images/logs-streams-mapping-conflicts.png>)

+You can then use the information in the failure message to find and troubleshoot mapping issues going forward.
+
 ## Processor statistics and detected fields [streams-stats-and-detected-fields]

-Once saved, the processor also gives you a quick look at how successful the processing was for this step and which fields were added.
+Once saved, the processor provides a quick look at its success rate and the fields it added.

 ![Screenshot showing field stats](<../../../../images/logs-streams-field-stats.png>)

@@ -143,12 +146,12 @@ Streams then creates and manages the `<data_stream_name>@stream.processing` pipe
 ### User interaction with pipelines

 Do not manually modify the `<data_stream_name>@stream.processing` pipeline created by Streams.
-You can still add your own processors manually to the `@custom` pipeline if needed. Adding processors before the pipeline processor crated by Streams may cause unexpected behavior.
+You can still add your own processors manually to the `@custom` pipeline if needed. Adding processors before the pipeline processor created by Streams may cause unexpected behavior.

 ## Known limitations [streams-known-limitations]

 - Streams does not support all processors. We are working on adding more processors in the future.
 - Streams does not support all processor options. We are working on adding more options in the future.
 - The data preview simulation may not accurately reflect the changes to the existing data when editing existing processors or re-ordering them.
-- Dots in field names are not supported. You can use the dot expand processor in the `@custom` pipeline as a workaround. You need to manually add the dot processor.
-- Providing any arbitrary JSON in the Streams UI is not supported. We are working on adding this in the future.
+- Dots in field names are not supported. You can use the dot expand processor in the `@custom` pipeline as a workaround. You need to manually add the dot expand processor.
+- Providing any arbitrary JSON in the Streams UI is not supported. We are working on adding this in the future.
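
For context on the dotted-field-name limitation above: the workaround refers to the {{es}} `dot_expander` ingest processor added manually to the stream's `@custom` pipeline. A minimal sketch, assuming a data stream named `logs-myapp-default` (the pipeline name and the use of `*` to expand all fields are illustrative, not part of this change):

```
PUT _ingest/pipeline/logs-myapp-default@custom
{
  "processors": [
    {
      "dot_expander": {
        "field": "*",
        "description": "Expand dotted field names into nested objects"
      }
    }
  ]
}
```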

solutions/observability/logs/streams/management/extract/date.md

Lines changed: 3 additions & 2 deletions
@@ -1,13 +1,14 @@
 ---
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---

 # Date processor [streams-date-processor]

 The date processor parses date strings and uses them as the timestamp of the document.

-This functionality uses the {{es}} date pipeline processor. Refer to [date processor](elasticsearch://reference/enrich-processor/date-processor.md) in the {{es}} docs for more information.
+This functionality uses the {{es}} date pipeline processor. Refer to the [date processor](elasticsearch://reference/enrich-processor/date-processor.md) {{es}} documentation for more information.

 ## Examples

@@ -34,7 +35,7 @@ Sunday, October 15, 2023 => EEEE, MMMM dd, yyyy
 ```


-## Optional Fields [streams-date-optional-fields]
+## Optional fields [streams-date-optional-fields]
 The following fields are optional for the date processor:

 | Field | Description|
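
For context, a minimal sketch of the underlying {{es}} date processor for the `EEEE, MMMM dd, yyyy` example format shown above (the source field name `message_date` is illustrative):

```
{
  "date": {
    "field": "message_date",
    "formats": ["EEEE, MMMM dd, yyyy"],
    "target_field": "@timestamp"
  }
}
```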

solutions/observability/logs/streams/management/extract/dissect.md

Lines changed: 3 additions & 2 deletions
@@ -1,13 +1,14 @@
 ---
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---
 # Dissect processor [streams-dissect-processor]

 The dissect processor parses structured log messages and extracts fields from them. Unlike Grok, it does not use a set of predefined patterns to match the log messages. Instead, it uses a set of delimiters to split the log message into fields.
-Dissect is much faster than Grok and can parse slightly more structured log messages.
+Dissect is much faster than Grok and is ideal for log messages that follow a consistent, structured format.

-This functionality uses the {{es}} dissect pipeline processor. Refer to [dissect processor](elasticsearch://reference/enrich-processor/dissect-processor.md) in the {{es}} docs for more information.
+This functionality uses the {{es}} dissect pipeline processor. Refer to the [dissect processor](elasticsearch://reference/enrich-processor/dissect-processor.md) {{es}} documentation for more information.

 To parse a log message, simply name the field and list the delimiters you want to use. The dissect processor will then split the log message into fields based on the delimiters provided.
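
For context, a minimal sketch of the underlying {{es}} dissect processor for a space-delimited message like `2025-06-01T12:00:00.000Z INFO my-service Connection accepted` (the field names are illustrative):

```
{
  "dissect": {
    "field": "message",
    "pattern": "%{@timestamp} %{log.level} %{service.name} %{message_text}"
  }
}
```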

solutions/observability/logs/streams/management/extract/grok.md

Lines changed: 7 additions & 6 deletions
@@ -1,6 +1,7 @@
 ---
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---
 # Grok processor [streams-grok-processor]

@@ -11,10 +12,10 @@ If a pattern does not match, the Grok processor will try the next pattern. If no

 Start with the most common patterns first and then add more specific patterns later. This reduces the number of runs the Grok processor has to do and improves the performance of the pipeline.

-This functionality uses the {{es}} Grok pipeline processor. Refer to [Grok processor](elasticsearch://reference/enrich-processor/grok-processor.md) in the {{es}} docs for more information.
+This functionality uses the {{es}} Grok pipeline processor. Refer to the [Grok processor](elasticsearch://reference/enrich-processor/grok-processor.md) {{es}} documentation for more information.

 The Grok processor uses a set of predefined patterns to match the log messages and extract the fields.
-You can also define your own pattern definitions by expanding the `Optional fields` section. This will allow you to define your own patterns and use them in the Grok processor.
+You can also define your own pattern definitions by expanding the `Optional fields` section. You can then use those patterns in the Grok processor.
 The patterns are defined in the following format:

 ```
@@ -23,23 +24,23 @@ The patterns are defined in the following format:
 }
 ```
 Where `MY_DATE` is the name of the pattern.
-The above pattern can then be used in the processor
+The previous pattern can then be used in the processor:
 ```
 %{MY_DATE:date}
 ```

-## Generate Patterns [streams-grok-patterns]
+## Generate patterns [streams-grok-patterns]
 Requires an LLM Connector to be configured.
 Instead of writing the Grok patterns by hand, you can use the **Generate Patterns** button to generate the patterns for you.

 % TODO Elastic LLM?

 ![generated patterns](<../../../../../images/logs-streams-patterns.png>)

-Patterns can be accepted by clicking the plus icon next to the pattern. This will add the pattern to the list of patterns to be used in the Grok processor.
+Click the plus icon next to the pattern to accept it and add it to the list of patterns used by the Grok processor.

 ### How does the pattern generation work? [streams-grok-pattern-generation]
-Under the hood, the 100 samples on the right hand side are grouped into categories of similar messages. For each category, a Grok pattern is generated by sending a few samples to the LLM. Matching patterns are then shown in the UI.
+Under the hood, the 100 samples on the right side are grouped into categories of similar messages. For each category, a Grok pattern is generated by sending a few samples to the LLM. Matching patterns are then shown in the UI.

 :::{note}
 This can incur additional costs, depending on the LLM connector you are using. Typically a single iteration uses between 1000 and 5000 tokens, depending on the number of identified categories and the length of the messages.
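
For context, a minimal sketch of how a custom pattern definition like `MY_DATE` and the `%{MY_DATE:date}` expression above combine in the underlying {{es}} grok processor (the pattern value and field names are illustrative):

```
{
  "grok": {
    "field": "message",
    "patterns": ["%{MY_DATE:date} %{LOGLEVEL:log.level} %{GREEDYDATA:message_text}"],
    "pattern_definitions": {
      "MY_DATE": "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}"
    }
  }
}
```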

solutions/observability/logs/streams/management/extract/key-value.md

Lines changed: 4 additions & 3 deletions
@@ -2,12 +2,13 @@
 navigation_title: KV processor
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---
 # Key value processor [streams-kv-processor]

 The key value (KV) processor allows you to extract key-value pairs from a field and assign them to a target field or the root of the document.

-This functionality uses the {{es}} kv pipeline processor. Refer to [KV processor](elasticsearch://reference/enrich-processor/kv-processor.md) in the {{es}} docs for more information.
+This functionality uses the {{es}} KV pipeline processor. Refer to the [KV processor](elasticsearch://reference/enrich-processor/kv-processor.md) {{es}} documentation for more information.

 ## Required fields [streams-kv-required-fields]

@@ -17,7 +18,7 @@ The KV processor requires the following fields:
 | ------- | --------------- |
 | Field | The field to be parsed.|
 | Field split | Regex pattern used to delimit the key-value pairs. Typically a space character (" "). |
-| Value split | Regex pattern used to delimit the key from the value. Typically an equals sign (=). |
+| Value split | Regex pattern used to delimit the key from the value. Typically an equals sign (`=`). |

 ## Optional fields [streams-kv-optional-fields]

@@ -31,4 +32,4 @@ The following fields are optional for the KV processor:
 | Prefix | A prefix to add to extracted keys. |
 | Trim key | A string of characters to trim from extracted keys. |
 | Trim value | A string of characters to trim from extracted values. |
-| Strip brackets | Removes brackets ( (), <>, []) and quotes (', ") from extracted values.|
+| Strip brackets | Removes brackets (`()`, `<>`, `[]`) and quotes (`'`, `"`) from extracted values. |
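
For context, a minimal sketch of the underlying {{es}} kv processor for a message like `user=alice status=200 port=8080` (the field and target names are illustrative):

```
{
  "kv": {
    "field": "message",
    "field_split": " ",
    "value_split": "=",
    "target_field": "attributes"
  }
}
```
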
Lines changed: 25 additions & 5 deletions
@@ -1,6 +1,7 @@
 ---
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---

 # Manage data retention [streams-data-retention]
@@ -13,11 +14,30 @@ The **Data retention** page is made up of the following components that can help

 - **Retention period**: The minimum number of days after which the data is deleted
 - **Source**: The origin of the data retention policy.
-- **Ingestion**: Estimated ingestion per day and month calculated based on the size of all data in the stream and divided by the age of the stream. This is an estimate, and the actual ingestion may vary.
+- **Last updated**: When data retention was last updated for the selected stream.
+- **Ingestion**: Estimated ingestion per day and month, calculated based on the total size of all data in the stream divided by the stream's age. This is an estimate, and the actual ingestion may vary.
 - **Total doc count**: The total number of documents in the stream.
-- **Ingestion Rate**: Estimated ingestion rate per time bucket. The bucket interval is dynamic and adjusts based on the selected time range. The ingestion rate is calculated based on the average document size in a stream, multiplied by the number of documents in the bucket. This is an estimate, and the actual ingestion rate may vary.
+- **Ingestion Rate**: Estimated ingestion rate per time bucket. The bucket interval is dynamic and adjusts based on the selected time range. The ingestion rate is calculated using the average document size in the stream multiplied by the number of documents in each bucket. This is an estimate, and the actual ingestion rate may vary.
+- **Policy summary**: {applies_to}`stack: preview 9.1` The amount of data ingested per phase (hot, warm, cold).

-## Edit the data retention period [streams-update-data-retention]
-Select `Edit data retention` to change how long data for your stream is retained. The **Retention period** is the minimum number of days after which the data is deleted.
+## Edit the data retention [streams-update-data-retention]
+From any stream page, select **Edit data retention** to change how long your data stream retains data.

-To define a global default retention policy, refer to [project settings](../../../../../deploy-manage/deploy/elastic-cloud/project-settings.md).
+### Set a specific retention period
+The **Retention period** is the minimum number of days after which the data is deleted. To set data retention to a specific time period:
+
+1. Select **Edit data retention** > **Set specific retention days**.
+1. From here, set the period of time you want to retain data for this stream.
+
+To define a global default retention policy, refer to [project settings](../../../../../deploy-manage/deploy/elastic-cloud/project-settings.md).
+
+### Follow an ILM policy
+```{applies_to}
+stack: ga 9.1
+```
+[ILM policies](../../../../../manage-data/lifecycle/index-lifecycle-management.md) let you automate and standardize data retention across streams and other data streams. To have your streams follow an existing policy:
+
+1. Select **Edit data retention** > **Use a lifecycle policy**.
+1. Select a pre-defined ILM policy from the list.
+
+You can also create a new ILM policy. Refer to [Configure a lifecycle policy](../../../../../manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md) for more information.
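
For context, a minimal sketch of an ILM policy that a stream could follow through **Use a lifecycle policy** (the policy name and phase timings are illustrative, not part of this change):

```
PUT _ilm/policy/logs-30d
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```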

solutions/observability/logs/streams/streams.md

Lines changed: 13 additions & 6 deletions
@@ -1,6 +1,7 @@
 ---
 applies_to:
   serverless: preview
+  stack: preview 9.1
 ---

 :::{warning}
@@ -15,25 +16,31 @@ Streams provides a single, centralized UI within {{kib}} that streamlines common

 A Stream directly corresponds to an {{es}} data stream (for example, `logs-myapp-default`). Operations performed in the Streams UI configure that specific data stream.

-
 ## Required permissions

 Streams requires the following Elastic Cloud Serverless roles:

 - Admin: ability to manage all Streams.
 - Editor/Viewer: limited access, unable to perform all actions.

-## Access Streams
+## Access the Streams UI
+
+In {{obs-serverless}}, Streams is automatically available.
+
+In {{stack}} version 9.1 and later, you can enable Streams in the {{observability}} Advanced Settings. To do this:
+
+1. Go to **Management** > **Stack Management** > **Advanced Settings**, or search for "Advanced Settings" in the [global search field](../../../../explore-analyze/find-and-organize/find-apps-and-objects.md).
+1. Enable **Streams UI** under **Observability**.

-Access streams in one of the following ways:
+In {{serverless-short}} or after enabling Streams in {{stack}}, access the UI in one of the following ways:

-- From the navigation menu, select **Streams**.
+- Select **Streams** from the navigation menu or use the [global search field](../../../../explore-analyze/find-and-organize/find-apps-and-objects.md).

 - From **Discover**, expand a document's details flyout and select **Stream** or an action associated with the document's data stream. Streams will open filtered to only the selected stream. This only works for documents stored in a data stream.

-## Manage stream [streams-management-tab]
+## Manage individual streams [streams-management-tab]

-Interact with and configure your stream in the following ways:
+Interact with and configure your streams in the following ways:

 - [Data retention](./management/retention.md): Manage how your stream retains data and get insight into data ingestion and storage size under the **Data retention** tab.
 - [Processing](./management/extract.md): Parse and extract information from log messages into dedicated fields under the **Processing** tab.

0 commit comments
