**explore-analyze/report-and-share.md** (4 additions, 0 deletions)

@@ -88,6 +88,10 @@ In the following dashboard, the shareable container is highlighted:
* {applies_to}`stack: ga 9.0` From the toolbar, click **Share** > **Export** tab, then choose a file type. Note that when you create a dashboard report that includes a data table or Discover session, the PDF includes only the visible data.
* {applies_to}`stack: ga 9.1` From the toolbar, click the **Export** icon, then choose a file type.

+
+::::{note}
+When you create a dashboard report that includes a data table or Discover session, the PDF includes only the visible data.
+::::

**explore-analyze/report-and-share/automating-report-generation.md** (4 additions, 4 deletions)

@@ -16,18 +16,18 @@ To automatically generate PDF and CSV reports, generate a POST URL, then submit

Create the POST URL that triggers a report to generate PDF and CSV reports.

-### PDF reports
+### PDF and PNG reports [pdf-png-post-url]

To create the POST URL for PDF reports:

1. Go to **Dashboards**, **Visualize Library**, or **Canvas**.
2. Open the dashboard, visualization, or **Canvas** workpad you want to view as a report. From the toolbar, do one of the following:

-* {applies_to}`stack: ga 9.0` If you are using **Dashboard** or **Visualize Library**, click **Share > Export**, select the PDF option, then click **Copy POST URL**.
+* {applies_to}`stack: ga 9.0` If you are using **Dashboard** or **Visualize Library**, click **Share > Export**, select the PDF or PNG option, then click **Copy POST URL**.
* {applies_to}`stack: ga 9.0` If you are using **Canvas**, click **Share > PDF Reports**, then click **Advanced options > Copy POST URL**.
-* {applies_to}`stack: ga 9.1` Click the **Export** icon, then **PDF**. In the export flyout, copy the POST URL.
+* {applies_to}`stack: ga 9.1` Click the **Export** icon, then **PDF** or **PNG**. In the export flyout, copy the POST URL.
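After this change, the copied POST URL covers PDF and PNG exports alike. As a hedged illustration of the "submit the POST URL" step (this sketch is not part of the diff; the base URL, API key, and job-parameters path are placeholders, and response details can vary by Kibana version):

```python
# Queue a report from a copied POST URL, then poll until it is ready.
import time

import requests

KIBANA_URL = "https://my-kibana.example.com"  # placeholder Kibana base URL
POST_PATH = "/api/reporting/generate/printablePdfV2?jobParams=..."  # paste your copied POST URL path
HEADERS = {
    "kbn-xsrf": "true",  # Kibana requires this header on POST requests
    "Authorization": "ApiKey <base64-api-key>",  # placeholder credentials
}

# Submitting the POST URL queues the report and returns a path to poll.
job = requests.post(KIBANA_URL + POST_PATH, headers=HEADERS)
job.raise_for_status()
job_path = job.json()["path"]

# Poll until the report is generated, then write it to disk.
while True:
    result = requests.get(KIBANA_URL + job_path, headers=HEADERS)
    if result.status_code == 200:
        with open("report.pdf", "wb") as f:
            f.write(result.content)
        break
    time.sleep(5)  # the job is still running; try again shortly
```

The same pattern should work for CSV and PNG jobs; only the copied POST URL differs.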

**solutions/observability/apm/use-opentelemetry-with-apm.md** (26 additions, 52 deletions)

@@ -11,34 +11,22 @@ products:
  - id: observability
---

-# Use OpenTelemetry with APM [apm-open-telemetry]
-
-::::{note}
-For a complete overview of using OpenTelemetry with Elastic, explore [**Elastic Distributions of OpenTelemetry**](opentelemetry://reference/index.md).
-::::
+# Use OpenTelemetry with APM [apm-otel-elastic-distros]

[OpenTelemetry](https://opentelemetry.io/docs/concepts/what-is-opentelemetry/) is a set of APIs, SDKs, tooling, and integrations that enable the capture and management of telemetry data from your services and applications.

-Elastic integrates with OpenTelemetry, allowing you to reuse your existing instrumentation to easily send observability data to the {{stack}}. There are several ways to integrate OpenTelemetry with the {{stack}}:
-
-* [Elastic Distributions of OpenTelemetry language SDKs](/solutions/observability/apm/use-opentelemetry-with-apm.md#apm-otel-elastic-distros)
-
-## Elastic Distributions of OpenTelemetry language SDKs [apm-otel-elastic-distros]
-
-Elastic offers several distributions of OpenTelemetry language SDKs. A *distribution* is a customized version of an upstream OpenTelemetry repository. Each Elastic Distribution of OpenTelemetry is a customized version of an [OpenTelemetry language SDK](https://opentelemetry.io/docs/languages/).
+Elastic offers several distributions of OpenTelemetry language SDKs. Each Elastic Distribution of OpenTelemetry is a customized version of an [OpenTelemetry language SDK](https://opentelemetry.io/docs/languages/), ready to send data to the [Managed OTLP endpoint](opentelemetry://reference/motlp.md), Elastic APM server, or directly to {{es}}.

With an Elastic Distribution of OpenTelemetry language SDK you have access to all the features of the OpenTelemetry SDK that it customizes, plus:

-* You may get access to SDK improvements and bug fixes contributed by the Elastic team *before* the changes are available upstream in the OpenTelemetry repositories.
-* The distribution preconfigures the collection of tracing and metrics signals, applying some opinionated defaults, such as which sources are collected by default.
+* You can get access to SDK improvements and bug fixes contributed by the Elastic team before the changes are available upstream in the OpenTelemetry repositories.
+* The distribution preconfigures the collection of tracing and metrics signals, applying opinionated defaults, such as which sources are collected by default.
+* By sending data through the [EDOT Collector](opentelemetry://reference/edot-collector/index.md), you also onboard infrastructure logs and metrics.

Get started with an Elastic Distribution of OpenTelemetry language SDK:
@@ -48,28 +36,17 @@ Get started with an Elastic Distribution of OpenTelemetry language SDK:
* [**Elastic Distribution of OpenTelemetry Python**](opentelemetry://reference/edot-sdks/python/index.md)
* [**Elastic Distribution of OpenTelemetry PHP**](opentelemetry://reference/edot-sdks/php/index.md)

-::::{note}
-For more details about OpenTelemetry distributions in general, visit the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/distributions).
+::::{important}
+For a complete overview of OpenTelemetry and Elastic, explore [**Elastic Distributions of OpenTelemetry**](opentelemetry://reference/index.md).
::::

-## Upstream OpenTelemetry Collector and language SDKs [apm-otel-upstream]
+## Upstream OpenTelemetry Collector and SDKs [apm-otel-upstream]

The {{stack}} natively supports the OpenTelemetry protocol (OTLP). This means trace data and metrics collected from your applications and infrastructure by an OpenTelemetry Collector or OpenTelemetry language SDK can be sent to the {{stack}}.

-You can set up an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/), instrument your application with an [OpenTelemetry language SDK](https://opentelemetry.io/docs/languages/) that sends data to the collector, and use the collector to process and export the data to either {{apm-server-or-mis}}.
-
-::::{note}
-It’s also possible to send data directly to either {{apm-server-or-mis}} from an upstream OpenTelemetry SDK. You might do this during development or if you’re monitoring a small-scale application. Read more about when to use a collector in the [OpenTelemetry documentation](https://opentelemetry.io/docs/collector/#when-to-use-a-collector).
-::::
-
-This approach works well when you need to instrument a technology that Elastic doesn’t provide a solution for. For example, if you want to instrument C or C++ you could use the [OpenTelemetry C++ client](https://github.com/open-telemetry/opentelemetry-cpp).
+You can set up an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/), instrument your application with an [OpenTelemetry language SDK](https://opentelemetry.io/docs/languages/) that sends data to the Collector, and use the Collector to process and export the data to either the [Managed OTLP endpoint](opentelemetry://reference/motlp.md) or {{apm-server-or-mis}}.

-However, there are some limitations when using collectors and language SDKs built and maintained by OpenTelemetry, including:
+This approach works well when you need to instrument a technology that Elastic doesn’t provide a solution for. For example, if you want to instrument C or C++ you could use the [OpenTelemetry C++ client](https://github.com/open-telemetry/opentelemetry-cpp). However, there are some limitations when using upstream OpenTelemetry collectors and language SDKs, including:

* Elastic can’t provide implementation support on how to use upstream OpenTelemetry tools.
* You won’t have access to Elastic enterprise APM features.
@@ -79,24 +56,13 @@ For more on the limitations associated with using upstream OpenTelemetry tools,
[**Get started with upstream OpenTelemetry Collectors and language SDKs →**](/solutions/observability/apm/upstream-opentelemetry-collectors-language-sdks.md)

-AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic {{observability}} or {{obs-serverless}}.
-
-To get started, follow the official AWS Distro for OpenTelemetry Lambda documentation, and configure the OpenTelemetry Collector to output traces and metrics to your Elastic cluster:
-
-[**Get started with the AWS Distro for OpenTelemetry Lambda**](https://aws-otel.github.io/docs/getting-started/lambda)
-
-## Upstream OpenTelemetry with the Elastic APM agent [apm-otel-api-sdk-elastic-agent]
-
-You can use the OpenTelemetry API/SDKs with [Elastic APM agents](/solutions/observability/apm/get-started-fleet-managed-apm-server.md#_step_3_install_apm_agents) to translate OpenTelemetry API calls to Elastic APM API calls.

:::{tip}
To understand the differences between Elastic Distributions of OpenTelemetry and upstream OpenTelemetry, refer to [EDOT compared to upstream OpenTelemetry](opentelemetry://reference/compatibility/edot-vs-upstream.md).
:::

-This allows you to reuse your existing OpenTelemetry instrumentation to create Elastic APM transactions and spans — avoiding vendor lock-in and having to redo manual instrumentation.
+## Upstream OpenTelemetry with Elastic APM agent [apm-otel-api-sdk-elastic-agent]
+
+You can use the OpenTelemetry API/SDKs with [Elastic APM agents](/solutions/observability/apm/get-started-fleet-managed-apm-server.md#_step_3_install_apm_agents) to translate OpenTelemetry API calls to Elastic APM API calls. This allows you to reuse your existing OpenTelemetry instrumentation to create Elastic APM transactions and spans, avoiding vendor lock-in and having to redo manual instrumentation.

However, not all features of the OpenTelemetry API are supported when using this approach, and not all Elastic APM agents support this approach.
@@ -105,4 +71,12 @@ Find more details about how to use an OpenTelemetry API or SDK with an Elastic A
+AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic {{observability}} or {{obs-serverless}}.
+
+To get started, follow the official AWS Distro for OpenTelemetry Lambda documentation, and configure the OpenTelemetry Collector to output traces and metrics to your Elastic cluster:
+
+[**Get started with the AWS Distro for OpenTelemetry Lambda**](https://aws-otel.github.io/docs/getting-started/lambda)
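To make the upstream-SDK path described in this file concrete, here is a hedged sketch (not part of the diff) of an upstream OpenTelemetry Python SDK exporting spans over OTLP; the endpoint and token are placeholders for an APM Server or managed OTLP endpoint:

```python
# Configure the upstream OpenTelemetry SDK to send spans over OTLP/gRPC.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = OTLPSpanExporter(
    endpoint="https://my-apm-server.example.com:8200",  # placeholder endpoint
    headers={"authorization": "Bearer <secret-token>"},  # placeholder credentials
)

provider = TracerProvider(resource=Resource.create({"service.name": "my-service"}))
provider.add_span_processor(BatchSpanProcessor(exporter))  # batch spans before export
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-instrumentation")
with tracer.start_as_current_span("checkout"):
    pass  # application work happens inside the span
```

The same wiring can usually be expressed without code through the standard `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` environment variables.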

-The **Advanced** tab on the **Manage stream** page shows the lower-level details of your stream. While Streams simplifies many configurations, it doesn't currently support modifying all pipelines and templates. From the **Advanced** tab, you can manually interact with the index or component templates, or modify any of the other ingest pipelines that are being used.
-This UI is intended for more advanced users.
+The **Advanced** tab on the **Manage stream** page shows the underlying configuration details of your stream. While Streams simplifies many configurations, it doesn't support modifying all pipelines and templates. From the **Advanced** tab, you can manually interact with the index or component templates or modify other ingest pipelines that are used by the stream.

-

**solutions/observability/logs/streams/management/extract.md** (18 additions, 15 deletions)

@@ -1,17 +1,18 @@
---
applies_to:
  serverless: preview
+  stack: preview 9.1
---

# Extract fields [streams-extract-fields]

-Unstructured log messages need to be parsed into meaningful fields so you can filter and analyze them quickly. Common fields to extract include timestamp and the loglevel, but you can also extract information like IP addresses, usernames, or ports.
+Unstructured log messages must be parsed into meaningful fields before you can filter and analyze them effectively. Commonly extracted fields include `@timestamp` and `log.level`, but you can also extract information like IP addresses, usernames, and ports.

-Use the **Extract field** tab on the **Manage stream** page to process your data. The UI simulates your changes and provides an immediate preview that's tested end-to-end.
+Use the **Processing** tab on the **Manage stream** page to process your data. The UI simulates your changes and provides an immediate preview that's tested end-to-end.

The UI also shows indexing problems, such as mapping conflicts, so you can address them before applying changes.

:::{note}
-Applied changes aren't retroactive and only affect *future data ingested*.
+Applied changes aren't retroactive and only affect *future ingested data*.
:::

## Add a processor [streams-add-processors]
@@ -33,7 +34,7 @@ To add a processor:
1. Select **Add Processor** to save the processor.

:::{note}
-Editing processors with JSON is planned for a future release. More processors may be added over time.
+Editing processors with JSON is planned for a future release, and additional processors may be supported over time.
:::
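A preview like the one Streams shows can also be approximated outside the UI with the {{es}} `_ingest/pipeline/_simulate` API. A hedged sketch, not part of the diff, with an illustrative Grok pattern, sample document, and credentials:

```python
# Simulate a grok processor against a sample document before committing to it.
import requests

ES_URL = "https://my-elasticsearch.example.com:9200"  # placeholder cluster URL
HEADERS = {"Authorization": "ApiKey <base64-api-key>"}  # placeholder credentials

body = {
    "pipeline": {
        "processors": [
            {
                "grok": {
                    "field": "message",
                    "patterns": [
                        "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log.level} %{GREEDYDATA:message}"
                    ],
                }
            }
        ]
    },
    "docs": [
        {"_source": {"message": "2025-04-04T10:31:11Z WARN disk usage above 80%"}}
    ],
}

resp = requests.post(f"{ES_URL}/_ingest/pipeline/_simulate", json=body, headers=HEADERS)
resp.raise_for_status()

# Each entry reports either the transformed document or a processor error.
print(resp.json()["docs"][0])
```

Documents that don't match the pattern come back with an error entry instead of parsed fields, mirroring the **Failed** filter described later on this page.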

### Add conditions to processors [streams-add-processor-conditions]
@@ -59,12 +60,12 @@ Under **Processors for field extraction**, when you set pipeline processors to m
When you add or edit processors, the **Data preview** updates automatically.

:::{note}
-To avoid unexpected results, focus on adding processors rather than removing or reordering existing processors.
+To avoid unexpected results, we recommend adding processors rather than removing or reordering existing processors.
:::

**Data preview** loads 100 documents from your existing data and runs your changes using them.
For any newly added processors, this simulation is reliable. You can save individual processors during the preview, and even reorder them.
-Selecting 'Save changes' applies your changes to the data stream.
+Selecting **Save changes** applies your changes to the data stream.

If you edit the stream again, note the following:
- Adding more processors to the end of the list will work as expected.
@@ -85,29 +86,31 @@ Turn on **Ignore missing fields** to ignore the processor if the field is not pr

Documents fail processing for different reasons. Streams helps you to easily find and handle failures before deploying changes.

-The following example shows not all messages matched the provided Grok pattern:
+In the following screenshot, the **Failed** percentage shows that not all messages matched the provided Grok pattern:



-You can filter your documents by selecting **Parsed** or **Failed** at the top of the table. Select **Failed** to see the documents that failed:
+You can filter your documents by selecting **Parsed** or **Failed** at the top of the table. Select **Failed** to see the documents that weren't parsed correctly:



Failures are displayed at the bottom of the process editor:

-These failures may be something you should address, but in some cases they also act as more of a warning.
+These failures may require action, but in some cases, they serve more as warnings.

-### Mapping Conflicts
+### Mapping conflicts

-As part of processing, Streams also checks for mapping conflicts by simulating the change end to end. If a mapping conflict is detected, Streams marks the processor as failed and displays a failure message:
+As part of processing, Streams also checks for mapping conflicts by simulating the change end to end. If a mapping conflict is detected, Streams marks the processor as failed and displays a failure message like the following:

You can then use the information in the failure message to find and troubleshoot mapping issues going forward.

## Processor statistics and detected fields [streams-stats-and-detected-fields]

-Once saved, the processor also gives you a quick look at how successful the processing was for this step and which fields were added.
+Once saved, the processor provides a quick look at the processor's success rate and the fields that it added.


@@ -143,12 +146,12 @@ Streams then creates and manages the `<data_stream_name>@stream.processing` pipe
### User interaction with pipelines

Do not manually modify the `<data_stream_name>@stream.processing` pipeline created by Streams.
-You can still add your own processors manually to the `@custom` pipeline if needed. Adding processors before the pipeline processor crated by Streams may cause unexpected behavior.
+You can still add your own processors manually to the `@custom` pipeline if needed. Adding processors before the pipeline processor created by Streams may cause unexpected behavior.

## Known limitations [streams-known-limitations]

- Streams does not support all processors. We are working on adding more processors in the future.
- Streams does not support all processor options. We are working on adding more options in the future.
- The data preview simulation may not accurately reflect the changes to the existing data when editing existing processors or re-ordering them.
-- Dots in field names are not supported. You can use the dot expand processor in the `@custom` pipeline as a workaround. You need to manually add the dot processor.
-- Providing any arbitrary JSON in the Streams UI is not supported. We are working on adding this in the future.
+- Dots in field names are not supported. You can use the dot expand processor in the `@custom` pipeline as a workaround. You need to manually add the dot expand processor.
+- Providing any arbitrary JSON in the Streams UI is not supported. We are working on adding this in the future.
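For the dotted-field workaround noted in the limitations above, a hedged sketch (not part of the diff) of adding a `dot_expander` processor to the `@custom` pipeline follows; the cluster URL, credentials, and pipeline name are illustrative:

```python
# Register a dot_expander processor in the stream's @custom pipeline so
# dotted field names such as "client.ip" become nested objects.
# Note: PUT replaces the whole pipeline, so if the pipeline already exists,
# fetch it first and merge this processor into its existing list.
import requests

ES_URL = "https://my-elasticsearch.example.com:9200"  # placeholder cluster URL
HEADERS = {"Authorization": "ApiKey <base64-api-key>"}  # placeholder credentials

pipeline = {
    "description": "Custom processors that run alongside Streams' managed pipeline",
    "processors": [
        {"dot_expander": {"field": "*"}},  # expand every dotted top-level field
    ],
}

resp = requests.put(
    f"{ES_URL}/_ingest/pipeline/logs@custom",  # assumed @custom pipeline name
    json=pipeline,
    headers=HEADERS,
)
resp.raise_for_status()
```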