Commit 710bcee

Removes references to Preconfigured LLM.

1 parent b4cdb08

3 files changed: +3 −29 lines changed

deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md

Lines changed: 0 additions & 7 deletions

@@ -22,11 +22,4 @@ Data volumes for ingest and retention are based on the fully enriched normalized
 
 [Synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) is an optional add-on to Observability Serverless projects that allows you to periodically check the status of your services and applications. In addition to the core ingest and retention dimensions, there is a charge to execute synthetic monitors on our testing infrastructure. Browser (journey) based tests are charged per-test-run, and ping (lightweight) tests have an all-you-can-use model per location used.
 
-## Elastic Inference Service [EIS-billing]
-
-[Elastic Inference Service (EIS)](../../../explore-analyze/elastic-inference/eis.md) enables you to leverage AI-powered search as a service without deploying a model in your serverless project. EIS is configured as a default LLM for use with the Observability AI Assistant (for all observability projects).
-
-:::{note}
-Use of the Observability AI Assistant uses EIS tokens and incurs related token-based add-on billing for your serverless project.
-:::
 Refer to [Serverless billing dimensions](serverless-project-billing-dimensions.md) and the [{{ecloud}} pricing table](https://cloud.elastic.co/cloud-pricing-table?productType=serverless&project=observability) for more details about {{obs-serverless}} billing dimensions and rates.
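The per-test-run billing for browser (journey) monitors described in this file lends itself to a quick back-of-the-envelope estimate. A minimal sketch, using entirely hypothetical rates (the real per-run price comes from the linked pricing table, not from this example):

```python
def monthly_browser_test_cost(runs_per_day: float, rate_per_run: float, days: int = 30) -> float:
    """Estimate monthly spend for browser (journey) synthetic monitors,
    which are billed per test run."""
    return runs_per_day * rate_per_run * days

# A monitor running every 10 minutes executes 144 runs per day.
# 0.0015 is a made-up illustrative rate, NOT a real Elastic price.
estimate = monthly_browser_test_cost(runs_per_day=144, rate_per_run=0.0015)
print(round(estimate, 2))
```

Ping (lightweight) tests would not enter this calculation, since they are billed per location used rather than per run.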

solutions/observability/observability-ai-assistant.md

Lines changed: 2 additions & 11 deletions

@@ -16,7 +16,7 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways:
 * **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps.
 * **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs.
 
-By default, AI Assistant uses a [preconfigured LLM](#preconfigured-llm-ai-assistant) connector that works out of the box. You can also connect to third-party LLM providers.
+The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors:
 
 ## Use cases
 
@@ -28,11 +28,6 @@ The {{obs-ai-assistant}} helps you:
 * **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface
 * **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data
 
-## Preconfigured LLM [preconfigured-llm-ai-assistant]
-
-:::{include} ../_snippets/elastic-llm.md
-:::
-
 ## Requirements [obs-ai-requirements]
 
 The AI assistant requires the following:
@@ -45,7 +40,7 @@ The AI assistant requires the following:
 
 - To run {{obs-ai-assistant}} on a self-hosted Elastic stack, you need an [appropriate license](https://www.elastic.co/subscriptions).
 
-- If not using the [default preconfigured LLM](#preconfigured-llm-ai-assistant), you need an account with a third-party generative AI provider that preferably supports function calling. If your provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
+- An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
 
 - The free tier offered by third-party generative AI provider may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required.
 
@@ -76,10 +71,6 @@ It's important to understand how your data is handled when using the AI Assistan
 
 ## Set up the AI Assistant [obs-ai-set-up]
 
-:::{note}
-If you use [the preconfigured LLM](#preconfigured-llm-ai-assistant) connector, you can skip this step. Your LLM connector is ready to use.
-:::
-
 The AI Assistant connects to one of these supported LLM providers:
 
 | Provider | Configuration | Authentication |
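After this change, every setup goes through a {{stack}} connector for the chosen LLM provider. As a rough illustration, a connector of the generative AI type can be created through Kibana's Connectors API; the sketch below only builds the request payload (the Kibana URL, API key placeholder, and exact config fields are assumptions to verify against the connector documentation for your version):

```python
import json

# Hypothetical deployment URL; replace with your own Kibana endpoint.
KIBANA_URL = "https://my-kibana.example.com"

payload = {
    "name": "OpenAI connector",
    "connector_type_id": ".gen-ai",  # generative AI connector type
    "config": {
        "apiProvider": "OpenAI",
        "apiUrl": "https://api.openai.com/v1/chat/completions",
    },
    "secrets": {"apiKey": "<your-openai-api-key>"},
}

# The request itself (shown as a comment, not executed here) would be:
#   POST {KIBANA_URL}/api/actions/connector
#   headers: {"kbn-xsrf": "true", "Content-Type": "application/json"}
body = json.dumps(payload)
print(body[:60])
```

The same connector is then selectable from the AI Assistant settings, so no separate per-feature credential setup is needed.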

solutions/search/rag/playground.md

Lines changed: 1 addition & 11 deletions

@@ -59,11 +59,6 @@ Here’s a simpified overview of how Playground works:
 
 * User can also **Download the code** to integrate into application
 
-## Elastic LLM [preconfigured-llm-playground]
-
-:::{include} ../../_snippets/elastic-llm.md
-:::
-
 ## Availability and prerequisites [playground-availability-prerequisites]
 
 For Elastic Cloud and self-managed deployments Playground is available in the **Search** space in {{kib}}, under **Content** > **Playground**.
@@ -77,7 +72,7 @@ To use Playground, you’ll need the following:
 
 * See [ingest data](playground.md#playground-getting-started-ingest) if you’d like to ingest sample data.
 
-3. If not using the default preconfigured LLM connector, you will need an account with a supported LLM provider:
+3. An account with a **supported LLM provider**. Playground supports the following:
 
 * **Amazon Bedrock**
 
@@ -119,11 +114,6 @@ You can also use locally hosted LLMs that are compatible with the OpenAI SDK. On
 
 ### Connect to LLM provider [playground-getting-started-connect]
 
-:::{note}
-If you use [the preconfigured LLM](#preconfigured-llm-playground) connector, you can skip this step. Your LLM connector is ready to use.
-
-:::
-
 To get started with Playground, you need to create a [connector](../../../deploy-manage/manage-connectors.md) for your LLM provider. You can also connect to [locally hosted LLMs](playground.md#playground-local-llms) which are compatible with the OpenAI API, by using the OpenAI connector.
 
 To connect to an LLM provider, follow these steps on the Playground landing page:
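The locally hosted LLM path mentioned in this file works because the OpenAI connector can point at any OpenAI-compatible endpoint. A minimal sketch of the chat-completions request such an endpoint expects; the localhost URL and model name are assumptions (a common Ollama-style default), and nothing is actually sent here:

```python
import json

# Assumed local endpoint; an OpenAI-compatible local server often listens here.
BASE_URL = "http://localhost:11434/v1"

request_body = {
    "model": "llama3",  # hypothetical local model name
    "messages": [
        {"role": "system", "content": "You answer questions about my indexed documents."},
        {"role": "user", "content": "Summarize the retrieved context."},
    ],
}

# Playground's OpenAI connector would POST this body to:
#   {BASE_URL}/chat/completions
print(json.dumps(request_body)[:50])
```

In Playground you would supply `BASE_URL` as the connector's API URL instead of the OpenAI default, keeping the rest of the setup unchanged.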
