[Observability] Link to AI connectors and LLM performance matrix (#3832)
This PR partially addresses #3712 and adds links to the AI connectors and the LLM performance matrix. We will continue improving and aligning the AI Assistant docs across solutions; in the meantime, this change gives users links to the available third-party LLMs and connectors.
solutions/observability/observability-ai-assistant.md (14 additions, 14 deletions)
@@ -18,25 +18,25 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways:
 * **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps.
 * **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs.
 
-The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors:
+The AI Assistant integrates with your large language model (LLM) provider through our [supported {{stack}} connectors](kibana://reference/connectors-kibana/gen-ai-connectors.md). Refer to the [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and their performance ratings.
 
 ## Use cases
 
 The {{obs-ai-assistant}} helps you:
 
-* **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes
-* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in Elasticsearch
-* **Generate reports**: Create alert summaries and incident timelines with key metrics
-* **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface
-* **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data
+* **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes.
+* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in {{es}}.
+* **Generate reports**: Create alert summaries and incident timelines with key metrics.
+* **Build and execute queries**: Build {{es}} queries from natural language, convert Query DSL to {{esql}} syntax, and execute queries directly from the chat interface.
+* **Visualize data**: Create time-series charts and distribution graphs from your {{es}} data.
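The "build and execute queries" use case above can be illustrated with a small example. The index pattern and field value here are hypothetical; the assistant converts a Query DSL filter into the equivalent {{esql}} pipeline:

```esql
// Equivalent of the Query DSL request body:
// {"query": {"match": {"service.name": "checkout"}}}
FROM logs-*
| WHERE service.name == "checkout"
| LIMIT 100
```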
 
 ## Requirements [obs-ai-requirements]
 
 The AI assistant requires the following:
 
 - An **Elastic deployment**:
-  - For **Observability**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**.
+  - For **{{observability}}**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**.
   - For **Search**: {{stack}} version **8.16.0** or later, or **{{serverless-short}} {{es}} project**.
@@ -46,12 +46,12 @@ The AI assistant requires the following:
 
 - The free tier offered by third-party generative AI providers may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required.
 
-Refer to the [documentation](/deploy-manage/manage-connectors.md) for your provider to learn about supported and default models.
+Refer to the [documentation](kibana://reference/connectors-kibana/gen-ai-connectors.md) for your provider to learn about supported and default models.
 
 * The knowledge base requires a 4 GB {{ml}} node.
   - In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore, using these features will incur additional costs.
-* A self-deployed connector service if [content connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.
+* A self-deployed connector service if you're using [content connectors](elasticsearch://reference/search-connectors/index.md) to populate external data into the knowledge base.
 
 ## Manage access to AI Assistant
@@ -62,9 +62,9 @@ serverless: ga
 
 The [**GenAI settings**](/explore-analyze/manage-access-to-ai-assistant.md) page allows you to:
 
-- Manage which AI connectors are available in your environment.
+- Manage which AI connectors are available in your environment.
 - Enable or disable AI Assistant and other AI-powered features in your environment.
-- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for Observability and Search` and the `AI Assistant for Security` appear.
+- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for {{observability}} and Search` and the `AI Assistant for Security` appear.
 
 ## Your data and the AI Assistant [data-information]
@@ -98,11 +98,11 @@ The AI Assistant connects to one of these supported LLM providers:
 
 **Setup steps**:
 
-1. **Create authentication credentials** with your chosen provider using the links above.
+1. **Create authentication credentials** with your chosen provider using the links in the previous table.
 2. **Create an LLM connector** for your chosen provider by going to the **Connectors** management page in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
 3. **Authenticate the connection** by entering:
-   - The provider's API endpoint URL
-   - Your authentication key or secret
+   - The provider's API endpoint URL.
+   - Your authentication key or secret.
 
 ::::{admonition} Recommended models
 While the {{obs-ai-assistant}} is compatible with many different models, refer to the [Large language model performance matrix](/solutions/observability/llm-performance-matrix.md) to select models that perform well with your desired use cases.
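The setup steps in the diff above can also be scripted against Kibana's connector API. This is a minimal sketch, not part of the documented flow: the deployment URL and both keys are placeholders, and the `.gen-ai` connector type assumes an OpenAI-compatible provider.

```python
import json

# All values below are placeholders -- substitute your own deployment URL,
# Elastic API key, and LLM provider key.
KIBANA_URL = "https://my-deployment.kb.us-east-1.aws.elastic.cloud"
ELASTIC_API_KEY = "<elastic-api-key>"

# Request body for Kibana's create-connector endpoint
# (POST {KIBANA_URL}/api/actions/connector). ".gen-ai" is the
# OpenAI-compatible Gen AI connector type.
payload = {
    "name": "my-llm-connector",
    "connector_type_id": ".gen-ai",
    "config": {
        "apiProvider": "OpenAI",
        "apiUrl": "https://api.openai.com/v1/chat/completions",
    },
    "secrets": {"apiKey": "<llm-provider-api-key>"},
}

headers = {
    "Content-Type": "application/json",
    "kbn-xsrf": "true",  # Kibana rejects API writes without this header
    "Authorization": f"ApiKey {ELASTIC_API_KEY}",
}

# Send with the HTTP client of your choice, for example:
# requests.post(f"{KIBANA_URL}/api/actions/connector", headers=headers, json=payload)
print(json.dumps(payload, indent=2))
```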