
Commit c32cbcf

Merge branch 'main' into update-screenshots-for-nav-changes-in-9.2
2 parents d1e6cfa + 7db718f commit c32cbcf

6 files changed, +65 -35 lines changed
explore-analyze/_snippets/inspect-request.md (new file)

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+The request **Inspector** is available in **Discover** and for all **Dashboards** visualization panels that are built based on a query. The available information can differ based on the request.
+
+1. Open the **Inspector**:
+   - If you're in **Discover**, select **Inspect** from the application's toolbar.
+   - If you're in **Dashboards**, open the panel menu and select **Inspect**.
+1. Open the **View** dropdown, then select **Requests**.
+1. Several tabs with different information can appear, depending on the nature of the request:
+   :::{tip}
+   Some visualizations rely on several requests. From the dropdown, select the request you want to inspect.
+   :::
+   * **Statistics**: Provides general information and statistics about the request. For example, you can check if the number of hits and query time match your expectations. If not, this may indicate an issue with the request used to build the visualization.
+   * **Clusters and shards**: Lists the {{es}} clusters and shards per cluster queried to fetch the data and shows the status of the request on each of them. With the information in this tab, you can check if the request is properly executed, especially in the case of cross-cluster search.
+
+     :::{note}
+     This tab is not available for {{esql}} queries and Vega visualizations.
+     :::
+
+   * **Request**: Provides a full view of the visualization's request, which you can copy or **Open in Console** to refine, if needed.
+   * **Response**: Provides a full view of the response returned by the request.

explore-analyze/dashboards/using.md

Lines changed: 2 additions & 15 deletions
@@ -180,21 +180,8 @@ This action is possible for all charts created using **Lens** or {{esql}}. It is

 #### View the requests that collect the data

-This action is possible for all visualization panels that are built based on a query, but the available information can differ based on the panel type.
-
-1. Open the panel menu and select **Inspect**.
-1. Open the **View** dropdown, then select **Requests**.
-1. Some visualizations rely on several requests. From the dropdown, select the request you want to inspect. Several tabs with different information can appear, depending on the panel type:
-   * **Statistics**: Provides general information and statistics about the request. For example, you can check if the number of hits and query time match your expectations. If not, this may indicate an issue with the request used to build the visualization.
-   * **Clusters and shards**: Lists the {{es}} clusters and shards per cluster queried to fetch the data and shows the status of the request on each of them. With the information in this tab, you can check if the request is properly executed, especially in case of cross-cluster search.
-
-     :::{note}
-     This tab is not available for {{esql}} and Vega visualizations.
-     :::
-
-   * **Request**: Provides a full view of the visualization's request, which you can copy or **Open in Console** to refine, if needed.
-   * **Response**: Provides a full view of the response returned by the request.
-
+:::{include} ../_snippets/inspect-request.md
+:::

 #### View the time range on specific panels

explore-analyze/discover/discover-get-started.md

Lines changed: 14 additions & 0 deletions
@@ -43,6 +43,15 @@ Your query may include multiple data types that each have tailored experiences;

 In this case **Discover** provides the default experience until it detects that you're interacting with a single type of data. For example, when you [](#look-inside-a-document).

+### View active context-aware experience
+
+You can check which experience is active for your current Discover session. This can help you confirm whether the type of data you're exploring is properly detected or if Discover is using its default experience.
+
+1. Select **Inspect** from Discover's toolbar.
+1. Open the **View** dropdown, then select **Profiles**.
+
+The various profiles listed show details such as the active solution and data source contexts, which determine Discover's context-aware experiences.
+
 ## Load data into Discover [find-the-data-you-want-to-use]

 Select the data you want to explore, and then specify the time range in which to view that data.
@@ -294,6 +303,11 @@ Note that in ES|QL mode, the **Documents** tab is named **Results**.

 Learn more about how to use ES|QL queries in [Using ES|QL](try-esql.md).

+### Inspect your Discover queries
+
+:::{include} ../_snippets/inspect-request.md
+:::
+

 ### Save your Discover session for later use [save-discover-search]

reference/fleet/kafka-output-settings.md

Lines changed: 7 additions & 7 deletions
@@ -127,27 +127,27 @@ The number of partitions created is set automatically by the Kafka broker based
 Use this option to set the Kafka topic for each {{agent}} event.

 **Default topic** $$$kafka-output-topics-default$$$
-: Set a default topic to use for events sent by {{agent}} to the Kafka output.
+: Set the default Kafka topic used for events sent by {{agent}}.

 You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Schema (ECS)](ecs://reference/index.md) field. Available fields include:

 * `data_stream.type`
 * `data_stream.dataset`
 * `data_stream.namespace`
 * `@timestamp`
-* `event-dataset`
+* `event.dataset`

-You can also set a custom field. This is useful if you need to construct a more complex or structured topic name.
+You can also set a custom field. This is useful if you need to construct a more complex or structured topic name. For example, you can use the `fields.kafka_topic` custom field to set a dynamic topic for each event.

 To set a dynamic topic value for outputting {{agent}} data to Kafka, you can add the [`add_fields` processor](/reference/fleet/add_fields-processor.md) to any integration policies on your {{fleet}}-managed {{agents}}.

-For example, the following `add_fields` processor creates a dynamic topic value by interpolating multiple [data stream fields](ecs://reference/ecs-data_stream.md):
+For example, the following `add_fields` processor creates a dynamic topic value for the `fields.kafka_topic` field by interpolating multiple [data stream fields](ecs://reference/ecs-data_stream.md):

 ```yaml
 - add_fields:
-target: ''
-fields:
-kafka_topic: '${data_stream.type}-${data_stream.dataset}-${data_stream.namespace}' <1>
+    target: ''
+    fields:
+      kafka_topic: '${data_stream.type}-${data_stream.dataset}-${data_stream.namespace}' <1>
 ```
 1. Depending on the values of the data stream fields, this generates topic names such as `logs-nginx.access-production` or `metrics-system.cpu-staging` as the value of the custom `kafka_topic` field.

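For illustration, a simpler per-policy variant of the processor above could interpolate a single data stream field with a fixed prefix. This is only a sketch: the `agent-` prefix is a hypothetical naming scheme, not something defined by these settings.

```yaml
- add_fields:
    target: ''
    fields:
      # Hypothetical scheme: fixed prefix plus the dataset name,
      # producing values such as agent-nginx.access or agent-system.cpu.
      kafka_topic: 'agent-${data_stream.dataset}'
```
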
reference/fleet/kafka-output.md

Lines changed: 18 additions & 8 deletions
@@ -164,25 +164,35 @@ Use these options to set the Kafka topic for each {{agent}} event.
 `topic` $$$kafka-topic-setting$$$
 : The default Kafka topic used for produced events.

-You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Schema (ECS)](ecs://reference/index.md) field. Available fields include:
+You can set a static topic, for example `elastic-agent`, or you can use a format string to set a topic dynamically based on an [Elastic Common Schema (ECS)](ecs://reference/index.md) field. Available fields include:

 * `data_stream.type`
 * `data_stream.dataset`
 * `data_stream.namespace`
 * `@timestamp`
-* `event-dataset`
+* `event.dataset`
+
+For example:
+
+```yaml
+topic: '${data_stream.type}'
+```

-You can also set a custom field. This is useful if you need to construct a more complex or structured topic name.
+You can also set a custom field. This is useful if you need to construct a more complex or structured topic name. For example, this configuration uses the `fields.kafka_topic` custom field to set the topic for each event:
+
+```yaml
+topic: '${fields.kafka_topic}'
+```

-To set a dynamic topic value for outputting {{agent}} data to Kafka, you can add the [`add_fields` processor](/reference/fleet/add_fields-processor.md) to the input configuration settings of your standalone {{agent}}.
+To set a dynamic topic value for outputting {{agent}} data to Kafka, you can add the [`add_fields` processor](/reference/fleet/add_fields-processor.md) to the input configuration settings of your standalone {{agent}}.

-For example, the following `add_fields` processor creates a dynamic topic value by interpolating multiple [data stream fields](ecs://reference/ecs-data_stream.md):
+For example, the following `add_fields` processor creates a dynamic topic value for the `fields.kafka_topic` field by interpolating multiple [data stream fields](ecs://reference/ecs-data_stream.md):

 ```yaml
 - add_fields:
-target: ''
-fields:
-kafka_topic: '${data_stream.type}-${data_stream.dataset}-${data_stream.namespace}' <1>
+    target: ''
+    fields:
+      kafka_topic: '${data_stream.type}-${data_stream.dataset}-${data_stream.namespace}' <1>
 ```
 1. Depending on the values of the data stream fields, this generates topic names such as `logs-nginx.access-production` or `metrics-system.cpu-staging` as the value of the custom `kafka_topic` field.

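To see how the `topic` format string and the `add_fields` processor fit together, here is a minimal sketch of a standalone {{agent}} policy fragment. The broker address, input id, and input type are placeholders, and everything except the `topic` and `processors` pieces is abbreviated or assumed:

```yaml
outputs:
  default:
    type: kafka
    hosts:
      - 'broker-1:9092'                # placeholder broker address
    topic: '${fields.kafka_topic}'     # resolved per event from the custom field added below
    # ...authentication and other Kafka output settings omitted...

inputs:
  - id: my-log-input                   # hypothetical input; streams and other settings omitted
    type: logfile
    processors:
      - add_fields:
          target: ''
          fields:
            # Produces topic names such as logs-system.syslog-default
            kafka_topic: '${data_stream.type}-${data_stream.dataset}-${data_stream.namespace}'
```
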
troubleshoot/observability/apm/processing-performance.md

Lines changed: 5 additions & 5 deletions
@@ -32,11 +32,11 @@ The results below include numbers for a synthetic workload. You can use the resu

 | Profile / Cloud | AWS | Azure | GCP |
 | --- | --- | --- | --- |
-| **1 GB**<br>(10 agents) | 15,000<br>events/second | 14,000<br>events/second | 17,000<br>events/second |
-| **4 GB**<br>(30 agents) | 29,000<br>events/second | 26,000<br>events/second | 35,000<br>events/second |
-| **8 GB**<br>(60 agents) | 50,000<br>events/second | 34,000<br>events/second | 48,000<br>events/second |
-| **16 GB**<br>(120 agents) | 96,000<br>events/second | 57,000<br>events/second | 90,000<br>events/second |
-| **32 GB**<br>(240 agents) | 133,000<br>events/second | 89,000<br>events/second | 143,000<br>events/second |
+| **1 GB**<br>(10 agents) | 19,000<br>events/second | 17,000<br>events/second | 18,000<br>events/second |
+| **4 GB**<br>(30 agents) | 33,000<br>events/second | 23,000<br>events/second | 25,000<br>events/second |
+| **8 GB**<br>(60 agents) | 52,000<br>events/second | 36,000<br>events/second | 48,000<br>events/second |
+| **16 GB**<br>(120 agents) | 74,000<br>events/second | 58,000<br>events/second | 71,000<br>events/second |
+| **32 GB**<br>(240 agents) | 127,000<br>events/second | 90,000<br>events/second | 133,000<br>events/second |

 Don’t forget that the APM Server is stateless. Several instances running do not need to know about each other. This means that with a properly sized {{es}} instance, APM Server scales out linearly.
