diff --git a/solutions/_snippets/elastic-llm.md b/solutions/_snippets/elastic-llm.md
new file mode 100644
index 0000000000..6d1f1dccc6
--- /dev/null
+++ b/solutions/_snippets/elastic-llm.md
@@ -0,0 +1,4 @@
+An LLM is preconfigured as a connector, enabled by default, and ready to use out of the box.
+The preconfigured LLM lets you use features such as Playground and AI Assistant without creating an account with an LLM provider or setting up an LLM connector.
+The LLM is hosted as a service and incurs additional costs.
+For more details, refer to the [pricing page](https://www.elastic.co/pricing).
diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md
index ae24157956..cebb245c70
--- a/solutions/observability/observability-ai-assistant.md
+++ b/solutions/observability/observability-ai-assistant.md
@@ -20,7 +20,8 @@ The AI Assistant uses generative AI to provide:
 :screenshot:
 :::
 
-The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors:
+By default, the AI Assistant uses a preconfigured large language model (LLM) connector that works out of the box.
+It also integrates with your own LLM provider through our supported {{stack}} connectors:
 
 * [OpenAI connector](kibana://reference/connectors-kibana/openai-action-type.md) for OpenAI or Azure OpenAI Service.
 * [Amazon Bedrock connector](kibana://reference/connectors-kibana/bedrock-action-type.md) for Amazon Bedrock, specifically for the Claude models.
@@ -37,7 +38,10 @@ Also, the data you provide to the Observability AI assistant is *not* anonymized
 
 ::::
 
+## Elastic LLM [preconfigured-llm-ai-assistant]
+:::{include} ../_snippets/elastic-llm.md
+:::
 
 ## Requirements [obs-ai-requirements]
 
 
@@ -45,7 +49,7 @@ The AI assistant requires the following:
 
 * {{stack}} version 8.9 and later.
 * A self-deployed connector service if [search connectors](elasticsearch://reference/search-connectors/self-managed-connectors.md) are used to populate external data into the knowledge base.
-* An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
+* If you are not using the preconfigured LLM connector, an account with a third-party generative AI provider that preferably supports function calling. If your provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
 
 Refer to the [connector documentation](../../deploy-manage/manage-connectors.md) for your provider to learn about supported and default models.
 
@@ -75,6 +79,11 @@ Elastic does not control third-party tools, and assumes no responsibility or lia
 ## Set up the AI Assistant [obs-ai-set-up]
 
+:::{note}
+If you use [the preconfigured LLM](#preconfigured-llm-ai-assistant) connector, you can skip this step. Your LLM connector is ready to use.
+
+:::
+
 To set up the AI Assistant:
 
 1. Create an authentication key with your AI provider to authenticate requests from the AI Assistant. You’ll use this in the next step.
 Refer to your provider’s documentation for information about creating authentication keys:
diff --git a/solutions/search/rag/playground.md b/solutions/search/rag/playground.md
index 3f6bf77a83..f7247ddb33
--- a/solutions/search/rag/playground.md
+++ b/solutions/search/rag/playground.md
@@ -31,8 +31,6 @@ Watch these video tutorials to help you get started:
 
 ::::
 
-
-
 ## How Playground works [playground-how-it-works]
 
 Here’s a simpified overview of how Playground works:
@@ -61,7 +59,10 @@ Here’s a simpified overview of how Playground works:
 
 * User can also **Download the code** to integrate into application
 
+## Elastic LLM [preconfigured-llm-playground]
+:::{include} ../../_snippets/elastic-llm.md
+:::
 
 ## Availability and prerequisites [playground-availability-prerequisites]
 
 
@@ -76,7 +77,7 @@ To use Playground, you’ll need the following:
 
     * See [ingest data](playground.md#playground-getting-started-ingest) if you’d like to ingest sample data.
 
-3. An account with a **supported LLM provider**. Playground supports the following:
+3. If you are not using the preconfigured LLM connector, an account with a supported LLM provider. Playground supports the following:
 
     * **Amazon Bedrock**
 
@@ -99,7 +100,6 @@ To use Playground, you’ll need the following:
 
     * Google Gemini 1.5 Pro
     * Google Gemini 1.5 Flash
-
 ::::{tip}
 :name: playground-local-llms
 
@@ -110,8 +110,6 @@ You can also use locally hosted LLMs that are compatible with the OpenAI SDK. On
 
 ::::
 
-
-
 ## Getting started [playground-getting-started]
 
 :::{image} /solutions/images/kibana-get-started.png
@@ -119,9 +117,13 @@ You can also use locally hosted LLMs that are compatible with the OpenAI SDK. On
 :screenshot:
 :::
 
-
 ### Connect to LLM provider [playground-getting-started-connect]
 
+:::{note}
+If you use [the preconfigured LLM](#preconfigured-llm-playground) connector, you can skip this step. Your LLM connector is ready to use.
+
+:::
+
 To get started with Playground, you need to create a [connector](../../../deploy-manage/manage-connectors.md) for your LLM provider. You can also connect to [locally hosted LLMs](playground.md#playground-local-llms) which are compatible with the OpenAI API, by using the OpenAI connector.
 
 To connect to an LLM provider, follow these steps on the Playground landing page:
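
---

Reviewer note: the preconfigured (Elastic-hosted) LLM added in this PR needs no setup. For self-managed deployments, a comparable out-of-the-box experience can be achieved by preconfiguring an LLM connector in `kibana.yml`. This is a minimal sketch, not documented in this PR: it assumes the OpenAI connector type (`.gen-ai`) and uses a hypothetical connector ID and placeholder key.

```yaml
# kibana.yml — sketch of a preconfigured OpenAI connector.
# Assumptions: self-managed deployment, your own OpenAI API key.
xpack.actions.preconfigured:
  my-openai-connector:            # hypothetical connector ID
    name: Preconfigured OpenAI    # display name shown in Kibana
    actionTypeId: .gen-ai         # OpenAI connector type
    config:
      apiProvider: OpenAI
      apiUrl: https://api.openai.com/v1/chat/completions
    secrets:
      apiKey: <your-api-key>      # placeholder; keep secrets out of version control
```

Connectors declared this way are defined at deployment time and cannot be edited from the Kibana UI, which keeps the API key out of saved objects.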