solutions/observability/observability-ai-assistant.md
You can [interact with the AI Assistant](#obs-ai-interact) in two ways:
* **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps.
* **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs.

The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors. Refer to the following for more information:

- [Set up the AI Assistant](#obs-ai-set-up) for more on available AI connectors.
- [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and their ratings for different use cases.
## Use cases
The {{obs-ai-assistant}} helps you:

* **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes.
* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in {{es}}.
* **Generate reports**: Create alert summaries and incident timelines with key metrics.
* **Build and execute queries**: Build {{es}} queries from natural language, convert Query DSL to {{esql}} syntax, and execute queries directly from the chat interface.
* **Visualize data**: Create time-series charts and distribution graphs from your {{es}} data.
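To illustrate the query-conversion use case, here is a sketch of the kind of translation the assistant can perform. The data stream name `logs-web-default` and the field names are hypothetical examples, not taken from this page. A Query DSL aggregation such as:

```console
GET logs-web-default/_search
{
  "size": 0,
  "query": {
    "range": { "http.response.status_code": { "gte": 500 } }
  },
  "aggs": {
    "errors_by_host": {
      "terms": { "field": "host.name", "size": 10 }
    }
  }
}
```

could be restated in {{esql}} roughly as:

```esql
FROM logs-web-default
| WHERE http.response.status_code >= 500
| STATS error_count = COUNT(*) BY host.name
| SORT error_count DESC
| LIMIT 10
```

The two are approximately equivalent: a `terms` aggregation returns the top buckets by document count, which the {{esql}} version reproduces with `SORT` and `LIMIT`.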
## Requirements [obs-ai-requirements]
The AI Assistant requires the following:
- An **Elastic deployment**:
  - For **{{observability}}**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**.
  - For **Search**: {{stack}} version **8.16.0** or later, or a **{{serverless-short}} {{es}} project**.
@@ -62,9 +65,9 @@ serverless: ga
The [**GenAI settings**](/explore-analyze/manage-access-to-ai-assistant.md) page allows you to:

- Manage which AI connectors are available in your environment.
- Enable or disable AI Assistant and other AI-powered features in your environment.
- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for {{observability}} and Search` and the `AI Assistant for Security` appear.
## Your data and the AI Assistant [data-information]
The AI Assistant connects to one of these supported LLM providers:

**Setup steps**:

1. **Create authentication credentials** with your chosen provider using the links in the previous table.
2. **Create an LLM connector** for your chosen provider by going to the **Connectors** management page in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
3. **Authenticate the connection** by entering:
    - The provider's API endpoint URL.
    - Your authentication key or secret.
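The same connector can also be created programmatically. As a hedged sketch only (it assumes an OpenAI-compatible provider; the connector name, endpoint URL, and key below are placeholders, not values from this page), the request body sent to Kibana's create-connector endpoint (`POST <kibana-url>/api/actions/connector`, with the `kbn-xsrf` header set) looks roughly like this:

```json
{
  "name": "my-llm-connector",
  "connector_type_id": ".gen-ai",
  "config": {
    "apiProvider": "OpenAI",
    "apiUrl": "https://api.openai.com/v1/chat/completions"
  },
  "secrets": {
    "apiKey": "<your-authentication-key>"
  }
}
```

The `config` object carries the provider's API endpoint URL from step 3, and `secrets` carries the authentication key. Field names differ between connector types, so check the documentation for your chosen connector before relying on this exact shape.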
::::{admonition} Recommended models
While the {{obs-ai-assistant}} is compatible with many different models, refer to the [Large language model performance matrix](/solutions/observability/llm-performance-matrix.md) to select models that perform well with your desired use cases.
::::