From 2e29b3bac2369f552fa8e7ea4ef5bf5fcc031447 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Wed, 5 Nov 2025 13:56:43 -0600 Subject: [PATCH 1/6] Link to AI connectors and LLM performance matrix --- .../observability-ai-assistant.md | 25 +++++++++++-------- 1 file changed, 14 insertions(+), 11 deletions(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index 48ecea825e..20557651cb 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -18,17 +18,20 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways: * **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps. * **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs. -The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors: +The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors. Refer to following for more information: + +- [Set up the AI Assistant](#obs-ai-set-up) for more on available AI connectors. +- [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and their ratings for different use cases. ## Use cases The {{obs-ai-assistant}} helps you: -* **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes -* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in Elasticsearch +* **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes. 
+* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in {{es}}
 * **Generate reports**: Create alert summaries and incident timelines with key metrics
-* **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface
-* **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data
+* **Build and execute queries**: Build {{es}} queries from natural language, convert Query DSL to {{esql}} syntax, and execute queries directly from the chat interface
+* **Visualize data**: Create time-series charts and distribution graphs from your {{es}} data
 
 
 ## Requirements [obs-ai-requirements]
 
@@ -36,7 +39,7 @@ The AI assistant requires the following:
 
 - An **Elastic deployment**:
 
-    - For **Observability**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**.
+    - For **{{observability}}**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**.
 
     - For **Search**: {{stack}} version **8.16.0** or later, or **{{serverless-short}} {{es}} project**.
 
@@ -62,9 +65,9 @@ serverless: ga
 
 The [**GenAI settings**](/explore-analyze/manage-access-to-ai-assistant.md) page allows you to:
 
-- Manage which AI connectors are available in your environment. 
+- Manage which AI connectors are available in your environment.
 - Enable or disable AI Assistant and other AI-powered features in your environment.
-- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for Observability and Search` and the `AI Assistant for Security` appear.
+- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for Observability and Search` and the `AI Assistant for Security` appear.
## Your data and the AI Assistant [data-information] @@ -98,11 +101,11 @@ The AI Assistant connects to one of these supported LLM providers: **Setup steps**: -1. **Create authentication credentials** with your chosen provider using the links above. +1. **Create authentication credentials** with your chosen provider using the links in the previous table. 2. **Create an LLM connector** for your chosen provider by going to the **Connectors** management page in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 3. **Authenticate the connection** by entering: - - The provider's API endpoint URL - - Your authentication key or secret + - The provider's API endpoint URL. + - Your authentication key or secret. ::::{admonition} Recommended models While the {{obs-ai-assistant}} is compatible with many different models, refer to the [Large language model performance matrix](/solutions/observability/llm-performance-matrix.md) to select models that perform well with your desired use cases. From d2730d5e2681e7b4d781bed0554deb41177b18ce Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Wed, 5 Nov 2025 15:06:24 -0600 Subject: [PATCH 2/6] fix wording --- solutions/observability/observability-ai-assistant.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index 20557651cb..e65bca51b0 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -20,8 +20,8 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways: The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors. Refer to following for more information: -- [Set up the AI Assistant](#obs-ai-set-up) for more on available AI connectors. 
-- [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and their ratings for different use cases. +- [Set up the AI Assistant](#obs-ai-set-up) for available AI connectors. +- [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and performance ratings. ## Use cases From 23684fd0b48b7bf3096cf25401b4992835040df6 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Wed, 5 Nov 2025 15:24:30 -0600 Subject: [PATCH 3/6] update links --- solutions/observability/observability-ai-assistant.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index e65bca51b0..d4f538e4c5 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -49,12 +49,12 @@ The AI assistant requires the following: - The free tier offered by third-party generative AI provider may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required. - Refer to the [documentation](/deploy-manage/manage-connectors.md) for your provider to learn about supported and default models. + Refer to the [documentation](kibana://reference/connectors-kibana/gen-ai-connectors.md) for your provider to learn about supported and default models. * The knowledge base requires a 4 GB {{ml}} node. - In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore using these features will incur additional costs. -* A self-deployed connector service if [content connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base. 
+* A self-deployed connector service if you're using [content connectors](elasticsearch://reference/search-connectors/index.md) to populate external data into the knowledge base. ## Manage access to AI Assistant From 8062afc9da69376379412d45e65c2dfc4e25cb5f Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Wed, 5 Nov 2025 15:26:27 -0600 Subject: [PATCH 4/6] fix link --- solutions/observability/observability-ai-assistant.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index d4f538e4c5..d2c2dc2fe0 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -18,10 +18,7 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways: * **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps. * **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs. -The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors. Refer to following for more information: - -- [Set up the AI Assistant](#obs-ai-set-up) for available AI connectors. -- [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and performance ratings. +The AI Assistant integrates with your large language model (LLM) provider through our [supported {{stack}} connectors](kibana://reference/connectors-kibana/gen-ai-connectors.md). Refer to following for more information [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and performance ratings. 
## Use cases From 81984a6e3ab8ea31efb53c034432db7082fb4096 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 6 Nov 2025 08:27:41 -0600 Subject: [PATCH 5/6] fix wording --- solutions/observability/observability-ai-assistant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index d2c2dc2fe0..adc6d54ac3 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -18,7 +18,7 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways: * **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps. * **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs. -The AI Assistant integrates with your large language model (LLM) provider through our [supported {{stack}} connectors](kibana://reference/connectors-kibana/gen-ai-connectors.md). Refer to following for more information [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and performance ratings. +The AI Assistant integrates with your large language model (LLM) provider through our [supported {{stack}} connectors](kibana://reference/connectors-kibana/gen-ai-connectors.md). Refer to the [{{obs-ai-assistant}} LLM performance matrix](./llm-performance-matrix.md) for supported third-party LLM providers and their performance ratings. 
## Use cases From 289c0b4f804017d2804b286d10699b73d02ac62a Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 6 Nov 2025 08:29:37 -0600 Subject: [PATCH 6/6] fix punctuation --- solutions/observability/observability-ai-assistant.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index adc6d54ac3..b04eac607e 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -25,10 +25,10 @@ The AI Assistant integrates with your large language model (LLM) provider throug The {{obs-ai-assistant}} helps you: * **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes. -* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in {{es}} -* **Generate reports**: Create alert summaries and incident timelines with key metrics -* **Build and execute queries**: Build {{es}} queries from natural language, convert Query DSL to {{esql}} syntax, and execute queries directly from the chat interface -* **Visualize data**: Create time-series charts and distribution graphs from your {{es}} data +* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in {{es}}. +* **Generate reports**: Create alert summaries and incident timelines with key metrics. +* **Build and execute queries**: Build {{es}} queries from natural language, convert Query DSL to {{esql}} syntax, and execute queries directly from the chat interface. +* **Visualize data**: Create time-series charts and distribution graphs from your {{es}} data. ## Requirements [obs-ai-requirements]
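The setup steps added in patch 1 (create an LLM connector for your chosen provider, then authenticate with the provider's API endpoint URL and your key or secret) can also be scripted against Kibana's create-connector API instead of the Connectors UI. A minimal sketch in Python, assuming the OpenAI connector type (`.gen-ai`) and placeholder values for the Kibana URL and provider API key — verify field names against your stack version's connector docs before relying on it:

```python
import json

# Placeholder value -- substitute your own Kibana base URL.
KIBANA_URL = "https://my-kibana.example.com:5601"

# Body for Kibana's create-connector endpoint (POST /api/actions/connector).
# ".gen-ai" is the type ID for the OpenAI connector; other providers
# (Amazon Bedrock, Google Gemini, ...) have their own type IDs.
payload = {
    "name": "obs-ai-assistant-openai",
    "connector_type_id": ".gen-ai",
    "config": {
        "apiProvider": "OpenAI",
        # Setup step 3: the provider's API endpoint URL.
        "apiUrl": "https://api.openai.com/v1/chat/completions",
    },
    # Setup step 3: your authentication key or secret (placeholder here).
    "secrets": {"apiKey": "<your-provider-api-key>"},
}

# Kibana rejects state-changing API calls that lack the kbn-xsrf header.
headers = {"kbn-xsrf": "true", "Content-Type": "application/json"}

# Show the request this sketch would send; wire it up with any HTTP client.
request_line = f"POST {KIBANA_URL}/api/actions/connector"
print(request_line)
print(json.dumps(payload, indent=2))
```

Sending this request with your preferred HTTP client (plus authentication for your Kibana user or API key) creates a connector that the AI Assistant can then select, matching what the UI flow on the **Connectors** management page produces.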