From b1fd6e361aad2ab8daaeab87455897f35c9aa8b3 Mon Sep 17 00:00:00 2001
From: Liam Thompson
Date: Tue, 1 Apr 2025 15:31:31 +0200
Subject: [PATCH 1/6] Consolidate and clean up Observability AI Assistant page

* updated "Observability AI Assistant" to "Elastic AI Assistant for Observability and Search", added variable
* improved page structure with clear sections and better hierarchy
* consolidated privacy information, warnings, and notes into prose sections where possible to avoid admonition fatigue
* created provider table for clearer setup options
* reorganized knowledge base section with recommended approaches first
* reorganized embedding methods to prioritize simpler approach
* added clear navigation paths throughout document
* reorganized functions as subcategory of chat rather than separate feature
* improved section organization to match user workflow
* rephrased LLM limitations warning for clarity
* added quickstart guide at top for basic setup
* broke out information onto separate pages
---
 docset.yml                        |   1 +
 explore-analyze/ai-assistant.md   |   9 +-
 .../observability-ai-assistant.md | 234 ++++++++----------
 solutions/search/ai-assistant.md  |  21 ++
 solutions/toc.yml                 |   1 +
 5 files changed, 134 insertions(+), 132 deletions(-)
 create mode 100644 solutions/search/ai-assistant.md

diff --git a/docset.yml b/docset.yml
index 31132dc7b5..a9d5247c7e 100644
--- a/docset.yml
+++ b/docset.yml
@@ -275,3 +275,4 @@ subs:
   version: "9.0.0"
   release-date: "2-April-2025"
   heroku: "Elasticsearch Add-on for Heroku"
+  obs-ai-assistant: "Elastic AI Assistant for Observability and Search"
diff --git a/explore-analyze/ai-assistant.md b/explore-analyze/ai-assistant.md
index 8612ba6fb1..e1dbd02f11 100644
--- a/explore-analyze/ai-assistant.md
+++ b/explore-analyze/ai-assistant.md
@@ -15,23 +15,22 @@ mapped_urls:

 $$$token-limits$$$

-**AI Assistant** is a chat-based interactive tool that uses generative AI and ELSER, Elastic’s proprietary semantic search model, to help you with a variety of tasks related to Elasticsearch and Kibana, including:
+**AI Assistant** is a chat-based interactive tool to help you with a variety of tasks related to Elasticsearch and Kibana, including:

-- **Constructing queries**: Assists you in building queries to search and analyze your data, including converting queries from other languages to [ES|QL](query-filter/languages/esql-rest.md).
+- **Constructing queries**: Assists you in building queries to search and analyze your data, including converting queries from other languages to [ES|QL](query-filter/languages/esql.md).
 - **Indexing data**: Guides you on how to index data into Elasticsearch.
 - **Using APIs**: Calls Elasticsearch APIs on your behalf if you need specific operations performed.
 - **Generating sample data**: Helps you create sample data for testing and development purposes.
 - **Visualizing and analyzing data**: Assists you in creating visualizations and analyzing your data using Kibana.
 - **Troubleshooting**: Explains errors, messages, and suggests remediation.

-AI Assistant requires specific privileges and a generative AI connector.
+AI Assistant requires specific privileges and a generative AI connector if not using the default Elastic LLM:

 % Check [Configure AI Assistant](../deploy-manage/) for more details on how to enable and configure it.

 The capabilities and ways to interact with AI Assistant can differ for each solution. Find more information in the respective solution docs:

-% - [AI Assistant for Search](../solutions/search/)
-- [AI Assistant for Observability](../solutions/observability/observability-ai-assistant.md)
+- [{{obs-ai-assistant}}](../solutions/observability/observability-ai-assistant.md)
 - [AI Assistant for Security](../solutions/security/ai/ai-assistant.md)

 ## Prompt best practices [rag-for-esql]
diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md
index ae24157956..ed7fd9918f 100644
--- a/solutions/observability/observability-ai-assistant.md
+++ b/solutions/observability/observability-ai-assistant.md
@@ -1,104 +1,105 @@
 ---
 mapped_pages:
   - https://www.elastic.co/guide/en/observability/current/obs-ai-assistant.html
+navigation_title: "AI Assistant"
+applies_to:
+  stack: ga
+  serverless: ga
 ---

-# Observability AI Assistant [obs-ai-assistant]
-
-::::{important}
-To run the Observability AI Assistant on self-hosted Elastic stack, you need an [appropriate license](https://www.elastic.co/subscriptions).
-::::
+# {{obs-ai-assistant}} [obs-ai-assistant]

+The AI Assistant is an integration with a large language model (LLM) that helps you understand, analyze, and interact with your Elastic data.

-The AI Assistant uses generative AI to provide:
+You can [interact with the AI Assistant](#obs-ai-interact) in two ways:

-* **Contextual insights** — open prompts throughout {{observability}} that explain errors and messages and suggest remediation.
-* **Chat** — have conversations with the AI Assistant. Chat uses function calling to request, analyze, and visualize your data.
+* **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps.
+* **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs.

-:::{image} /solutions/images/observability-obs-assistant2.gif
-:alt: Observability AI assistant preview
-:screenshot:
-:::
+% :::{image} /solutions/images/observability-obs-assistant2.gif
+% :alt: Observability AI assistant preview
+% :screenshot:
+% :::

-The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors:
-
-* [OpenAI connector](kibana://reference/connectors-kibana/openai-action-type.md) for OpenAI or Azure OpenAI Service.
-* [Amazon Bedrock connector](kibana://reference/connectors-kibana/bedrock-action-type.md) for Amazon Bedrock, specifically for the Claude models.
-* [Google Gemini connector](kibana://reference/connectors-kibana/gemini-action-type.md) for Google Gemini.
-
 ::::{important}
-The AI Assistant is powered by an integration with your large language model (LLM) provider. LLMs are known to sometimes present incorrect information as if it’s correct. Elastic supports configuration and connection to the LLM provider and your knowledge base, but is not responsible for the LLM’s responses.
-
+The AI Assistant uses large language models (LLMs), which are probabilistic and liable to provide incomplete or incorrect information. Elastic supports LLM configuration and connectivity but is not responsible for response accuracy. Always verify important information before implementing suggested changes.
 ::::

+## Use cases

-::::{important}
-Also, the data you provide to the Observability AI assistant is *not* anonymized, and is stored and processed by the third-party AI provider. This includes any data used in conversations for analysis or context, such as alert or event data, detection rule configurations, and queries. Therefore, be careful about sharing any confidential or sensitive details while using this feature.
-
-::::
-
+The {{obs-ai-assistant}} helps you:
+* **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes
+* **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in Elasticsearch
+* **Generate reports**: Create alert summaries and incident timelines with key metrics
+* **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface
+* **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data

 ## Requirements [obs-ai-requirements]

 The AI assistant requires the following:

-* {{stack}} version 8.9 and later.
-* A self-deployed connector service if [search connectors](elasticsearch://reference/search-connectors/self-managed-connectors.md) are used to populate external data into the knowledge base.
-* An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
-
-    Refer to the [connector documentation](../../deploy-manage/manage-connectors.md) for your provider to learn about supported and default models.
-
-* The knowledge base requires a 4 GB {{ml}} node.

-::::{important}
-The free tier offered by third-party generative AI provider may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required.
-
-The Observability AI assistant doesn’t support connecting to a private LLM. Elastic doesn’t recommend using private LLMs with the Observability AI assistant.
-
-::::
+* Elastic deployment:

-::::{important}
-In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore using these features will incur additional costs.
-
-::::
+  - For **Observability**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**.
+
+  - For **Search**:, you must be running Elastic Stack version **8.16.0** or later, or an **{{serverless-short}} {{es}} project**.
+
+  - To run {{obs-ai-assistant}} on a self-hosted Elastic Stack, you need an [appropriate license](https://www.elastic.co/subscriptions).
+
+* An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
+    - The free tier offered by third-party generative AI providers may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required.

   Refer to the [documentation](../../deploy-manage/manage-connectors.md) for your provider to learn about supported and default models.

+* The knowledge base requires a 4 GB {{ml}} node.
+    - In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore using these features will incur additional costs.
+* A self-deployed connector service if [search connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.

 ## Your data and the AI Assistant [data-information]

-Elastic does not use customer data for model training. This includes anything you send the model, such as alert or event data, detection rule configurations, queries, and prompts. However, any data you provide to the AI Assistant will be processed by the third-party provider you chose when setting up the OpenAI connector as part of the assistant setup.
+It's important to understand how your data is handled when using the AI Assistant. Here are some key points:

-Elastic does not control third-party tools, and assumes no responsibility or liability for their content, operation, or use, nor for any loss or damage that may arise from your using such tools. Please exercise caution when using AI tools with personal, sensitive, or confidential information. Any data you submit may be used by the provider for AI training or other purposes. There is no guarantee that the provider will keep any information you provide secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.
+**Data usage by Elastic**: Elastic does not use customer data for model training, but all data is processed by third-party AI providers.

+**Anonymization**: Data sent to the AI Assistant is *not* anonymized, including alert data, configurations, queries, logs, and chat interactions.

-## Set up the AI Assistant [obs-ai-set-up]
+**Permission context**: When the AI Assistant performs searches, it uses the same permissions as the current user.

-To set up the AI Assistant:
+**Third-party processing**: Any data submitted may be used by the provider for AI training or other purposes with no guarantee of security or confidentiality.

-1. Create an authentication key with your AI provider to authenticate requests from the AI Assistant. You’ll use this in the next step. Refer to your provider’s documentation for information about creating authentication keys:
+**Telemetry collection**: Your AI provider may collect telemetry during usage. Contact them for details on what data is collected.
-    * [OpenAI API keys](https://platform.openai.com/docs/api-reference)
-    * [Azure OpenAI Service API keys](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference)
-    * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html)
-    * [Google Gemini service account keys](https://cloud.google.com/iam/docs/keys-list-get)
+## Set up the AI Assistant [obs-ai-set-up]

-2. Create a connector for your AI provider. Refer to the connector documentation to learn how:
+The AI Assistant connects to one of these supported LLM providers:

-    * [OpenAI](kibana://reference/connectors-kibana/openai-action-type.md)
-    * [Amazon Bedrock](kibana://reference/connectors-kibana/bedrock-action-type.md)
-    * [Google Gemini](kibana://reference/connectors-kibana/gemini-action-type.md)
+% TODO add | Elastic LLM (default) | No configuration needed | N/A | to table

-3. Authenticate communication between {{observability}} and the AI provider by providing the following information:
+| Provider | Configuration Guide | Authentication Guide |
+|----------|---------------------|---------------------|
+| OpenAI | [Configure connector](kibana://reference/connectors-kibana/openai-action-type.md) | [Get API key](https://platform.openai.com/docs/api-reference) |
+| Azure OpenAI | [Configure connector](kibana://reference/connectors-kibana/openai-action-type.md) | [Get API key](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference) |
+| Amazon Bedrock | [Configure connector](kibana://reference/connectors-kibana/bedrock-action-type.md) | [Get auth keys](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html) |
+| Google Gemini | [Configure connector](kibana://reference/connectors-kibana/gemini-action-type.md) | [Get service account key](https://cloud.google.com/iam/docs/keys-list-get) |

-    1. In the **URL** field, enter the AI provider’s API endpoint URL.
-    2. Under **Authentication**, enter the key or secret you created in the previous step.
+**Setup steps**:
+1. **Create authentication credentials** with your chosen provider using the links above.
+2. **Create an LLM connector**: Navigate to **Stack Management → Connectors** and create a connector for your chosen provider.
+3. **Authenticate the connection** by entering:
+   - The provider's API endpoint URL
+   - Your authentication key or secret

+::::{important}
+{{obs-ai-assistant}} doesn’t support connecting to a private LLM. Elastic doesn’t recommend using private LLMs with the AI Assistant.
+::::

 ## Add data to the AI Assistant knowledge base [obs-ai-add-data]

+:::::{dropdown} Using pre-8.12 knowledge base articles?
 ::::{important}
 **If you started using the AI Assistant in technical preview**, any knowledge base articles you created before 8.12 will have to be reindexed or upgraded before they can be used. Knowledge base articles created before 8.12 use ELSER v1. In 8.12, knowledge base articles must use ELSER v2. Options include:
@@ -106,15 +107,11 @@ To set up the AI Assistant:
 * Upgrade all knowledge base articles indexed with ELSER v1 to ELSER v2 using a [Python script](https://github.com/elastic/elasticsearch-labs/blob/main/notebooks/model-upgrades/upgrading-index-to-use-elser.ipynb).
 ::::
+:::::

 The AI Assistant uses [ELSER](../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md), Elastic’s semantic search engine, to recall data from its internal knowledge base index to create retrieval augmented generation (RAG) responses. Adding data such as Runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.

-::::{note}
-Your AI provider may collect telemetry when using the AI Assistant. Contact your AI provider for information on how data is collected.
-::::
-

 Add data to the knowledge base with one or more of the following methods:

 * [Use the knowledge base UI](#obs-ai-kb-ui) available at [AI Assistant Settings](#obs-ai-settings) page.
@@ -128,7 +125,7 @@ You can also add information to the knowledge base by asking the AI Assistant to
 To add external data to the knowledge base in {{kib}}:

 1. To open AI Assistant settings, find `AI Assistants` in the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
-2. Under **Elastic AI Assistant for Observability**, click **Manage settings**.
+2. Under **{{obs-ai-assistant}}**, click **Manage settings**.
 3. Switch to the **Knowledge base** tab.
 4. Click the **New entry** button, and choose either:
@@ -142,51 +139,53 @@ To add external data to the knowledge base in {{kib}}:
    }
    ```

-
-
 ### Use search connectors [obs-ai-search-connectors]

-::::{tip}
-The [search connectors](elasticsearch://reference/search-connectors/index.md) described in this section differ from the [Stack management → Connectors](../../deploy-manage/manage-connectors.md) configured during the [AI Assistant setup](#obs-ai-set-up). Search connectors are only needed when importing external data into the Knowledge base of the AI Assistant, while the stack connector to the LLM is required for the AI Assistant to work.
-
-::::
-
-[Connectors](elasticsearch://reference/search-connectors/index.md) allow you to index content from external sources thereby making it available for the AI Assistant. This can greatly improve the relevance of the AI Assistant’s responses. Data can be integrated from sources such as GitHub, Confluence, Google Drive, Jira, AWS S3, Microsoft Teams, Slack, and more.
-
-UI affordances for creating and managing search connectors are available in the Search Solution in {{kib}}. You can also use the {{es}} [Connector APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-connector) to create and manage search connectors.
-
-The infrastructure for deploying connectors must be [self-managed](elasticsearch://reference/search-connectors/self-managed-connectors.md).
+[Search connectors](elasticsearch://reference/search-connectors/index.md) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.

-By default, the AI Assistant queries all search connector indices. To override this behavior and customize which indices are queried, adjust the **Search connector index pattern** setting on the [AI Assistant Settings](#obs-ai-settings) page. This allows precise control over which data sources are included in AI Assistant knowledge base.
+**Requirements and limitations:**
+- For stack 9.0.0+ or {{serverless-short}}, connectors must be [self-managed](elasticsearch://reference/search-connectors/self-managed-connectors.md)
+- Manage connectors through the Search Solution in {{kib}} (pre-9.0) or via the [Connector APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-connector)
+- By default, the AI Assistant queries all search connector indices. To customize which data sources are included in the knowledge base, adjust the **Search connector index pattern** setting on the [AI Assistant Settings](#obs-ai-settings) page.

-To create a connector in the {{kib}} UI and make its content available to the AI Assistant knowledge base, follow these steps:
+**Setup process:**

-1. Open **Connectors** by finding `Content / Connectors` in the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
+1. **Create a connector**
+
+   **Use the UI**:

-    ::::{note}
-    If your {{kib}} Space doesn’t include the Search solution you will have to create the connector from a different space or change your space **Solution view** setting to `Classic`.
+ - Navigate to `Content / Connectors` in the global search field + - Create a connector for your data source (example: [GitHub connector](elasticsearch://reference/search-connectors/es-connectors-github.md)) + - If your Space lacks the Search solution, either create the connector from a different space or change your space **Solution view** to `Classic` - :::: + **Use the API**: + - Create a connector using the [Connector APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-connector) -2. Follow the instructions to create a new connector. +2. **Create embeddings** (choose one method): + - [`semantic_text` field](#obs-ai-search-connectors-semantic-text): Recommended workflow which handles model setup automatically + - [ML pipeline](#obs-ai-search-connectors-ml-embeddings): Requires manual setup of the ELSER model and inference pipeline - For example, if you create a [GitHub connector](elasticsearch://reference/search-connectors/es-connectors-github.md) you have to set a `name`, attach it to a new or existing `index`, add your `personal access token` and include the `list of repositories` to synchronize. - - Learn more about configuring and [using connectors](elasticsearch://reference/search-connectors/connectors-ui-in-kibana.md) in the Elasticsearch documentation. +#### Option 1: Use a `semantic_text` field type to create embeddings (recommended) [obs-ai-search-connectors-semantic-text] +To create the embeddings needed by the AI Assistant using a [`semantic_text`](elasticsearch://reference/elasticsearch/mapping-reference/semantic-text.md) field type: -After creating your connector, create the embeddings needed by the AI Assistant. You can do this using either: +1. Open the previously created connector, and select the **Mappings** tab. +2. Select **Add field**. +3. Under **Field type**, select **Semantic text**. +4. Under **Reference field**, select the field you want to use for model inference. +5. 
Under **Select an inference endpoint**, select the model you want to use to add the embeddings to the data. +6. Add the field to your mapping by selecting **Add field**. +7. Sync the data by selecting **Full Content** from the **Sync** menu. -* [a machine learning (ML) pipeline](#obs-ai-search-connectors-ml-embeddings): requires the ELSER ML model. -* [a `semantic_text` field type](#obs-ai-search-connectors-semantic-text): can use any available ML model (ELSER, E5, or a custom model). +The AI Assistant will now query the connector you’ve set up using the model you’ve selected. Check that the AI Assistant is using the index by asking it something related to the indexed data. +#### Option 2: Use machine learning pipelines to create embeddings [obs-ai-search-connectors-ml-embeddings] -#### Use machine learning pipelines to create AI Assistant embeddings [obs-ai-search-connectors-ml-embeddings] +This is a more complex method that requires you to set up the ELSER model and inference pipeline manually. To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field) using an **ML Inference Pipeline**: -1. Open the previously created connector, and select the **Pipelines** tab. +1. Open the previously created search connector in **Content / Connectors**, and select the **Pipelines** tab. 2. Select **Copy and customize** under `Unlock your custom pipelines`. 3. Select **Add Inference Pipeline** under `Machine Learning Inference Pipelines`. 4. Select the **ELSER (Elastic Learned Sparse EncodeR)** ML model to add the necessary embeddings to the data. @@ -207,25 +206,9 @@ After creating the pipeline, complete the following steps: Ask something to the AI Assistant related with the indexed data. 
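For reference, the `semantic_text` mapping that either workflow produces can also be applied directly with the {{es}} mappings API. The following is a minimal sketch rather than part of this change — the index name (`search-github`), the field names, and the preconfigured ELSER inference endpoint ID are assumptions you would adapt to your own connector index:

```console
PUT search-github/_mapping
{
  "properties": {
    "body": {
      "type": "text",
      "copy_to": "body_semantic"
    },
    "body_semantic": {
      "type": "semantic_text",
      "inference_id": ".elser-2-elasticsearch"
    }
  }
}
```

After the next full content sync, the AI Assistant can recall this index through semantic queries against the `body_semantic` field.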
-
-#### Use a `semantic_text` field type to create AI Assistant embeddings [obs-ai-search-connectors-semantic-text]
-
-To create the embeddings needed by the AI Assistant using a [`semantic_text`](elasticsearch://reference/elasticsearch/mapping-reference/semantic-text.md) field type:
-
-1. Open the previously created connector, and select the **Mappings** tab.
-2. Select **Add field**.
-3. Under **Field type**, select **Semantic text**.
-4. Under **Reference field**, select the field you want to use for model inference.
-5. Under **Select an inference endpoint**, select the model you want to use to add the embeddings to the data.
-6. Add the field to your mapping by selecting **Add field**.
-7. Sync the data by selecting **Full Content** from the **Sync** menu.
-
-The AI Assistant will now query the connector you’ve set up using the model you’ve selected. Check that the AI Assistant is using the index by asking it something related to the indexed data.
-
 ## Interact with the AI Assistant [obs-ai-interact]

-Chat with the AI Assistant or interact with contextual insights located throughout {{observability}}. Check the following sections for more on interacting with the AI Assistant.
+Chat with the AI Assistant or interact with contextual insights located throughout the UI. Check the following sections for more on interacting with the AI Assistant.

 ::::{tip}
 After every answer the LLM provides, let us know if the answer was helpful. Your feedback helps us improve the AI Assistant!
 ::::

 ### Chat with the assistant [obs-ai-chat]

-Select the **AI Assistant** icon (![AI Assistant icon](/solutions/images/observability-ai-assistant-icon.png "")) at the upper-right corner of any {{observability}} application to start the chat.
+Select the **AI Assistant** icon (![AI Assistant icon](/solutions/images/observability-ai-assistant-icon.png "")) at the upper-right corner of the Serverless or {{kib}} UI to start the chat.

 This opens the AI Assistant flyout, where you can ask the assistant questions about your instance:
@@ -247,18 +230,16 @@ This opens the AI Assistant flyout, where you can ask the assistant questions ab

 ::::{important}
 Asking questions about your data requires `function calling`, which enables LLMs to reliably interact with third-party generative AI providers to perform searches or run advanced functions using customer data.

-When the {{observability}} AI Assistant performs searches in the cluster, the queries are run with the same level of permissions as the user.
+When the {{obs-ai-assistant}} performs searches in the cluster, the queries are run with the same level of permissions as the user.
 ::::

+#### Suggest functions [obs-ai-functions]

-
-### Suggest functions [obs-ai-functions]
-
-::::{warning}
-This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
-::::
-
+```{applies_to}
+stack: preview
+serverless: preview
+```

 The AI Assistant uses functions to include relevant context in the chat conversation through text, data, and visual components. Both you and the AI Assistant can suggest functions. You can also edit the AI Assistant’s function suggestions and inspect function responses.
@@ -304,11 +285,11 @@ Additional functions are available when your cluster has APM data:

 AI Assistant contextual prompts throughout {{observability}} provide the following information:

-* **Universal Profiling** — explains the most expensive libraries and functions in your fleet and provides optimization suggestions.
-* **Application performance monitoring (APM)** — explains APM errors and provides remediation suggestions.
-* **Infrastructure Observability** — explains the processes running on a host.
-* **Logs** — explains log messages and generates search patterns to find similar issues.
-* **Alerting** — provides possible causes and remediation suggestions for log rate changes.
+* **Universal Profiling**: explains the most expensive libraries and functions in your fleet and provides optimization suggestions.
+* **Application performance monitoring (APM)**: explains APM errors and provides remediation suggestions.
+* **Infrastructure Observability**: explains the processes running on a host.
+* **Logs**: explains log messages and generates search patterns to find similar issues.
+* **Alerting**: provides possible causes and remediation suggestions for log rate changes.

 For example, in the log details, you’ll see prompts for **What’s this message?** and **How do I find similar log messages?**:
@@ -331,7 +312,7 @@ Continue a conversation from a contextual prompt by clicking **Start chat** to o

 Use the [Observability AI Assistant connector](kibana://reference/connectors-kibana/obs-ai-assistant-action-type.md) to add AI-generated insights and custom actions to your alerting workflows as follows:

-1. [Create (or edit) an alerting rule](incident-management/create-manage-rules.md) and specify the conditions that must be met for the alert to fire.
+1. Navigate to **Observability / Alerts** to [create (or edit) an alerting rule](incident-management/create-manage-rules.md) that uses the AI Assistant connector. Specify the conditions that must be met for the alert to fire.
 2. Under **Actions**, select the **Observability AI Assistant** connector type.
 3. In the **Connector** list, select the AI connector you created when you set up the assistant.
 4. In the **Message** field, specify the message to send to the assistant:

 :::{image}
 :screenshot:
 :::

-
 You can ask the assistant to generate a report of the alert that fired, recall any information or potential resolutions of past occurrences stored in the knowledge base, provide troubleshooting guidance and resolution steps, and also include other active alerts that may be related. As a last step, you can ask the assistant to trigger an action, such as sending the report (or any other message) to a Slack webhook.

 ::::{note}
@@ -377,7 +357,7 @@ The `server.publicBaseUrl` setting must be correctly specified under {{kib}} set
 :screenshot:
 :::

-The Observability AI Assistant connector is called when the alert fires and when it recovers.
+The {{obs-ai-assistant}} connector is called when the alert fires and when it recovers.

 To learn more about alerting, actions, and connectors, refer to [Alerting](incident-management/alerting.md).
@@ -396,14 +376,14 @@ The AI Assistant Settings page contains the following tabs:

 * **Search Connectors**: Provides a link to {{kib}} **Search** → **Content** → **Connectors** UI for connectors configuration.

-## Elastic documentation for the AI Assistant [obs-ai-product-documentation]
+### Add Elastic documentation [obs-ai-product-documentation]

-It is possible to make the Elastic official documentation available to the AI Assistant, which significantly increases its efficiency and accuracy in answering questions related to the Elastic stack and Elastic products.
+You can make the official Elastic documentation available to the AI Assistant, which significantly improves its ability to accurately answer questions about the Elastic Stack and Elastic products.

-Enabling that feature can be done from the **Settings** tab of the AI Assistant Settings page, using the "Install Elastic Documentation" action.
+Enable this feature from the **Settings** tab in AI Assistant Settings by using the "Install Elastic Documentation" action.

 ::::{important}
-Installing the product documentation in air gapped environments requires specific installation and configuration instructions, which are available in the [{{kib}} Kibana AI Assistants settings documentation](kibana://reference/configuration-reference/ai-assistant-settings.md).
+For air-gapped environments, installing product documentation requires special configuration. See the [{{kib}} AI Assistants settings documentation](kibana://reference/configuration-reference/ai-assistant-settings.md) for detailed instructions.
 ::::
diff --git a/solutions/search/ai-assistant.md b/solutions/search/ai-assistant.md
new file mode 100644
index 0000000000..f3f0d707a1
--- /dev/null
+++ b/solutions/search/ai-assistant.md
@@ -0,0 +1,21 @@
+---
+applies_to:
+  stack: ga 8.16.0
+  serverless: ga
+---
+
+# AI Assistant
+
+The {{obs-ai-assistant}} uses generative AI to help you with a variety of tasks related to {{es}} and Kibana, including:
+
+1. **Constructing queries**: Assists you in building queries to search and analyze your data.
+2. **Indexing data**: Guides you on how to index data into {{es}}.
+3. **Searching data**: Helps you search for specific data within your {{es}} indices.
+4. **Using {{es}} APIs**: Calls {{es}} APIs on your behalf if you need specific operations performed.
+5. **Generating sample data**: Helps you create sample data for testing and development purposes.
+6. **Visualizing and analyzing data**: Assists you in creating visualizations and analyzing your data using Kibana.
+7. **Explaining ES|QL**: Explains how {{esql}} works and helps you convert queries from other languages to {{esql}}.
+
+:::{tip}
+Learn more in the [{{obs-ai-assistant}}](../observability/observability-ai-assistant.md) documentation.
+::: \ No newline at end of file diff --git a/solutions/toc.yml b/solutions/toc.yml index 80cf5d3f2f..2c2574519f 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -82,6 +82,7 @@ toc: - file: search/search-applications/search-application-security.md - file: search/search-applications/search-application-client.md - file: search/apis-and-tools.md + - file: search/ai-assistant.md - file: observability.md children: - file: observability/get-started.md From 4659c504a649e84ad4209f9d8c33daa4b398e435 Mon Sep 17 00:00:00 2001 From: Liam Thompson Date: Tue, 1 Apr 2025 15:41:29 +0200 Subject: [PATCH 2/6] fix typo --- explore-analyze/ai-assistant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/explore-analyze/ai-assistant.md b/explore-analyze/ai-assistant.md index e1dbd02f11..3d3d83d856 100644 --- a/explore-analyze/ai-assistant.md +++ b/explore-analyze/ai-assistant.md @@ -24,7 +24,7 @@ $$$token-limits$$$ - **Visualizing and analyzing data**: Assists you in creating visualizations and analyzing your data using Kibana. - **Troubleshooting**: Explains errors, messages, and suggests remediation. -AI Assistant requires specific privileges and a generative AI connector if not using the default Elastic LLM: +AI Assistant requires specific privileges and a generative AI connector (if not using the default Elastic LLM). % Check [Configure AI Assistant](../deploy-manage/) for more details on how to enable and configure it. 
From 7d7d61864bf19b6431d1052b333e3ba894757ae2 Mon Sep 17 00:00:00 2001 From: Liam Thompson Date: Tue, 1 Apr 2025 15:49:20 +0200 Subject: [PATCH 3/6] typo --- solutions/observability/observability-ai-assistant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index ed7fd9918f..94cab94eab 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -43,7 +43,7 @@ The AI assistant requires the following: - For **Observability**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**. - - For **Search**:, you must be running Elastic Stack version **8.16.0** or later, or an **{{serverless-short}} {{es}} project**. + - For **Search**: {{stack}} version **8.16.0** or later, or an **{{serverless-short}} {{es}} project**. - To run {{obs-ai-assistant}} on a self-hosted Elastic stack, you need an [appropriate license](https://www.elastic.co/subscriptions). 
From 198231a66c86d941c26ebcac73471afb1e1c735b Mon Sep 17 00:00:00 2001 From: Liam Thompson Date: Tue, 1 Apr 2025 15:50:04 +0200 Subject: [PATCH 4/6] simplify table headings --- solutions/observability/observability-ai-assistant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index 94cab94eab..7ec3fdf0c6 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -78,7 +78,7 @@ The AI Assistant connects to one of these supported LLM providers: % TODO add | Elastic LLM (default) | No configuration needed | N/A | to table -| Provider | Configuration Guide | Authentication Guide | +| Provider | Configuration | Authentication | |----------|---------------------|---------------------| | OpenAI | [Configure connector](kibana://reference/connectors-kibana/openai-action-type.md) | [Get API key](https://platform.openai.com/docs/api-reference) | | Azure OpenAI | [Configure connector](kibana://reference/connectors-kibana/openai-action-type.md) | [Get API key](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference) | From 706d5abf23fba330517e188c8b15a95a1a1ebddf Mon Sep 17 00:00:00 2001 From: Liam Thompson Date: Tue, 1 Apr 2025 15:54:45 +0200 Subject: [PATCH 5/6] make pills uniform --- solutions/search/ai-assistant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/search/ai-assistant.md b/solutions/search/ai-assistant.md index f3f0d707a1..2056de4e4f 100644 --- a/solutions/search/ai-assistant.md +++ b/solutions/search/ai-assistant.md @@ -1,6 +1,6 @@ --- applies_to: - stack: ga 8.16.0 + stack: ga serverless: ga --- From c0bb631a048ef6aa4dde0d887dcf50ac63854009 Mon Sep 17 00:00:00 2001 From: Liam Thompson Date: Wed, 2 Apr 2025 11:33:35 +0200 Subject: [PATCH 6/6] Cleanup after merging preconfigured LLM info from other PR clarified llm 
connector is in ech deployments and serverless reorganized requirements section reformatted data usage info as definition list moved accuracy warning to more relevant section cut search ai page down to the bone --- solutions/_snippets/elastic-llm.md | 6 ++- .../observability-ai-assistant.md | 47 +++++++------------ solutions/search/ai-assistant.md | 14 +----- 3 files changed, 21 insertions(+), 46 deletions(-) diff --git a/solutions/_snippets/elastic-llm.md b/solutions/_snippets/elastic-llm.md index 6d1f1dccc6..0f07d59497 100644 --- a/solutions/_snippets/elastic-llm.md +++ b/solutions/_snippets/elastic-llm.md @@ -1,4 +1,6 @@ -An LLM is preconfigured as a connector, enabled by default and ready to use out of the box. +{{ech}} deployments and {{serverless-full}} projects include a preconfigured LLM connector that's enabled by default and ready to use. + Using the preconfigured LLM enables you to use features such as Playground and AI Assistant without having an account with an LLM provider or setting up an LLM connector. + The LLM is hosted as a service and will incur additional costs. -For more details, refer to the [pricing page](https://www.elastic.co/pricing). +For more details, refer to the [pricing page](https://www.elastic.co/pricing). \ No newline at end of file diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index 4c40da8f22..5c3b794d4c 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -16,21 +16,7 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways: * **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps. * **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. 
The assistant uses function calling to request, analyze, and visualize information based on your needs. -% :::{image} /solutions/images/observability-obs-assistant2.gif -:alt: Observability AI assistant preview -:screenshot: -::: - -By default, AI Assistant uses a preconfigured large language model (LLM) connector that works out of the box. -It also integrates with your LLM provider through our supported {{stack}} connectors: - -* [OpenAI connector](kibana://reference/connectors-kibana/openai-action-type.md) for OpenAI or Azure OpenAI Service. -* [Amazon Bedrock connector](kibana://reference/connectors-kibana/bedrock-action-type.md) for Amazon Bedrock, specifically for the Claude models. -* [Google Gemini connector](kibana://reference/connectors-kibana/gemini-action-type.md) for Google Gemini. - -::::{important} -The AI Assistant uses large language models (LLMs) which are probabilistic and liable to provide incomplete or incorrect information. Elastic supports LLM configuration and connectivity but is not responsible for response accuracy. Always verify important information before implementing suggested changes. -:::: +By default, AI Assistant uses a [preconfigured LLM](#preconfigured-llm-ai-assistant) connector that works out of the box. You can also connect to third-party LLM providers. ## Use cases @@ -51,14 +37,7 @@ The {{obs-ai-assistant}} helps you: The AI assistant requires the following: -* Elastic deployment: -* {{stack}} version 8.9 and later. -* A self-deployed connector service if [search connectors](elasticsearch://reference/search-connectors/self-managed-connectors.md) are used to populate external data into the knowledge base. -* If not using the [default preconfigured LLM](#preconfigured-llm-ai-assistant), you need an account with a third-party generative AI provider that preferably supports function calling. 
If your provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance. - - Refer to the [connector documentation](../../deploy-manage/manage-connectors.md) for your provider to learn about supported and default models. - -* The knowledge base requires a 4 GB {{ml}} node. +- An **Elastic deployment**: - For **Observability**: {{stack}} version **8.9** or later, or an **{{observability}} serverless project**. @@ -66,7 +45,7 @@ The AI assistant requires the following: - To run {{obs-ai-assistant}} on a self-hosted Elastic stack, you need an [appropriate license](https://www.elastic.co/subscriptions). -* An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance. +- If not using the [default preconfigured LLM](#preconfigured-llm-ai-assistant), you need an account with a third-party generative AI provider that preferably supports function calling. If your provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance. - The free tier offered by third-party generative AI provider may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required. @@ -81,21 +60,23 @@ The AI assistant requires the following: It's important to understand how your data is handled when using the AI Assistant. Here are some key points: -**Data usage by Elastic**: Elastic does not use customer data for model training, but all data is processed by third-party AI providers. 
- -**Anonymization**: Data sent to the AI Assistant is *not* anonymized, including alert data, configurations, queries, logs, and chat interactions. +**Data usage by Elastic** +: Elastic does not use customer data for model training, but all data is processed by third-party AI providers. -**Permission context**: When the AI Assistant performs searches, it uses the same permissions as the current user. +**Anonymization** +: Data sent to the AI Assistant is *not* anonymized, including alert data, configurations, queries, logs, and chat interactions. -**Third-party processing**: Any data submitted may be used by the provider for AI training or other purposes with no guarantee of security or confidentiality. +**Permission context** +: When the AI Assistant performs searches, it uses the same permissions as the current user. -To set up the AI Assistant: +**Third-party processing** +: Any data submitted may be used by the provider for AI training or other purposes with no guarantee of security or confidentiality. **Telemetry collection**: Your AI provider may collect telemetry during usage. Contact them for details on what data is collected. ## Set up the AI Assistant [obs-ai-set-up] -:::{tip} +:::{note} If you use [the preconfigured LLM](#preconfigured-llm-ai-assistant) connector, you can skip this step. Your LLM connector is ready to use. ::: @@ -231,6 +212,10 @@ After creating the pipeline, complete the following steps: ## Interact with the AI Assistant [obs-ai-interact] +::::{important} +The AI Assistant uses large language models (LLMs) which are probabilistic and liable to provide incomplete or incorrect information. Elastic supports LLM configuration and connectivity but is not responsible for response accuracy. Always verify important information before implementing suggested changes. +:::: + Chat with the AI Assistant or interact with contextual insights located throughout the UI. Check the following sections for more on interacting with the AI Assistant. 
::::{tip} diff --git a/solutions/search/ai-assistant.md b/solutions/search/ai-assistant.md index 2056de4e4f..9aff8cbefa 100644 --- a/solutions/search/ai-assistant.md +++ b/solutions/search/ai-assistant.md @@ -6,16 +6,4 @@ applies_to: # AI Assistant -The {{obs-ai-assistant}} uses generative AI to help you with a variety of tasks related to {{es}} and Kibana, including: - -1. **Constructing queries**: Assists you in building queries to search and analyze your data. -2. **Indexing data**: Guides you on how to index data into {{es}}. -3. **Searching data**: Helps you search for specific data within your {{es}} indices. -4. **Using {{es}} APIs**: Calls {{es}} APIs on your behalf if you need specific operations performed. -5. **Generating sample data**: Helps you create sample data for testing and development purposes. -6. **Visualizing and analyzing data**: Assists you in creating visualizations and analyzing your data using Kibana. -7. **Explaining ES|QL**: Explains how {{esql}} works and help you convert queries from other languages to {{esql}}. - -:::{tip} -Learn more in the [{{obs-ai-assistant}}](../observability/observability-ai-assistant.md) documentation. -::: \ No newline at end of file +The {{obs-ai-assistant}} uses generative AI to help you with a variety of tasks related to {{es}} and Kibana. Learn more in the [{{obs-ai-assistant}}](../observability/observability-ai-assistant.md) documentation. \ No newline at end of file
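
The first patch in this series registers `obs-ai-assistant` under `subs:` in `docset.yml`, and the edited pages then reference it as `{{obs-ai-assistant}}`. As a rough illustration of how this kind of `{{variable}}` substitution behaves at build time — this is only a sketch; the actual docs-builder implementation may differ — expansion amounts to a dictionary lookup over the `subs:` map, leaving unknown names untouched:

```python
import re

# Substitution variables, as defined under `subs:` in docset.yml.
# Only two entries reproduced here for illustration.
subs = {
    "obs-ai-assistant": "Elastic AI Assistant for Observability and Search",
    "kib": "Kibana",
}

def expand(text: str, subs: dict) -> str:
    """Replace each {{name}} with its value; unknown names are left as-is."""
    return re.sub(
        r"\{\{([\w-]+)\}\}",
        lambda m: subs.get(m.group(1), m.group(0)),
        text,
    )

print(expand("The {{obs-ai-assistant}} connector is called when the alert fires.", subs))
# → The Elastic AI Assistant for Observability and Search connector is called when the alert fires.
```

Centralizing the long product name behind one variable is what lets the series rename it everywhere consistently — a later rebrand only touches the single `docset.yml` entry.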