@@ -22,11 +22,4 @@ Data volumes for ingest and retention are based on the fully enriched normalized

[Synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) is an optional add-on to Observability Serverless projects that allows you to periodically check the status of your services and applications. In addition to the core ingest and retention dimensions, there is a charge to execute synthetic monitors on our testing infrastructure. Browser (journey) based tests are charged per-test-run, and ping (lightweight) tests have an all-you-can-use model per location used.
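
To make the two billed test types concrete, here is a minimal sketch of a browser (journey) monitor using the `@elastic/synthetics` TypeScript library; the monitor ID, schedule, and target URL are illustrative assumptions. Each scheduled execution of a journey like this is what the per-test-run charge applies to, while ping (lightweight) monitors are billed per location instead.

```ts
// Minimal sketch of a synthetic browser journey; the monitor ID, schedule,
// and URL below are illustrative assumptions, not required values.
// Each scheduled execution of this journey counts as one billable test run.
import { journey, step, monitor, expect } from '@elastic/synthetics';

journey('Checkout page loads', ({ page }) => {
  // The schedule (minutes) and locations determine how many runs accrue.
  monitor.use({ id: 'checkout-journey', schedule: 10 });

  step('visit the checkout page', async () => {
    await page.goto('https://example.com/checkout'); // hypothetical URL
  });

  step('verify the page title', async () => {
    expect(await page.title()).toContain('Checkout');
  });
});
```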

## Elastic Inference Service [EIS-billing]
[Elastic Inference Service (EIS)](../../../explore-analyze/elastic-inference/eis.md) enables you to leverage AI-powered search as a service without deploying a model in your serverless project. EIS is configured as a default LLM for use with the Observability AI Assistant (for all observability projects).

:::{note}
Using the Observability AI Assistant consumes EIS tokens and incurs related token-based add-on billing for your serverless project.
:::

Refer to [Serverless billing dimensions](serverless-project-billing-dimensions.md) and the [{{ecloud}} pricing table](https://cloud.elastic.co/cloud-pricing-table?productType=serverless&project=observability) for more details about {{obs-serverless}} billing dimensions and rates.
1 change: 0 additions & 1 deletion explore-analyze/elastic-inference.md
@@ -9,6 +9,5 @@ navigation_title: Elastic Inference

There are several ways to perform {{infer}} in the {{stack}}. This page provides a brief overview of the different methods:

* [Using EIS (Elastic Inference Service)](elastic-inference/eis.md)
* [Using the {{infer}} API](elastic-inference/inference-api.md)
* [Trained models deployed in your cluster](machine-learning/nlp/ml-nlp-overview.md)
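
As a rough illustration of the {{infer}} API option above, the sketch below creates an {{infer}} endpoint and then runs a request through it over plain HTTP; the cluster URL, endpoint ID, and model choice are assumptions for the example, not fixed requirements.

```ts
// Hedged sketch: create an inference endpoint, then run inference through it.
// The cluster URL, API key, endpoint ID, and model are all assumptions.
const ES_URL = 'https://localhost:9200'; // hypothetical cluster URL
const headers = {
  'Content-Type': 'application/json',
  Authorization: `ApiKey ${process.env.ES_API_KEY}`,
};

// 1) Create a text_embedding endpoint backed by a built-in Elastic model.
await fetch(`${ES_URL}/_inference/text_embedding/my-e5-endpoint`, {
  method: 'PUT',
  headers,
  body: JSON.stringify({
    service: 'elasticsearch',
    service_settings: {
      model_id: '.multilingual-e5-small',
      num_allocations: 1,
      num_threads: 1,
    },
  }),
});

// 2) Run inference against the new endpoint.
const res = await fetch(`${ES_URL}/_inference/text_embedding/my-e5-endpoint`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ input: 'Find me articles about serverless billing' }),
});
console.log(await res.json()); // embeddings for the input text
```
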
10 changes: 0 additions & 10 deletions explore-analyze/elastic-inference/eis.md

This file was deleted.

3 changes: 0 additions & 3 deletions explore-analyze/toc.yml
@@ -118,10 +118,7 @@ toc:
- file: transforms/transform-limitations.md
- file: elastic-inference.md
children:
- file: elastic-inference/eis.md
- file: elastic-inference/inference-api.md
children:
- file: elastic-inference/inference-api/elastic-inference-service-eis.md
- file: machine-learning.md
children:
- file: machine-learning/setting-up-machine-learning.md
13 changes: 2 additions & 11 deletions solutions/observability/observability-ai-assistant.md
@@ -16,7 +16,7 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways:
* **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps.
* **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs (a generic sketch of this pattern follows).
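
For readers unfamiliar with the term, the following is a generic, minimal sketch of the function-calling pattern (shown here with OpenAI-style tool definitions); the tool name, schema, and model are illustrative assumptions, not the assistant's actual internals.

```ts
// Hedged sketch of the generic function-calling pattern: the model is given a
// tool schema and, instead of plain text, may reply with a structured call.
// The tool name, schema, and model here are illustrative assumptions only.
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'How many errors in the last hour?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'query_logs', // hypothetical tool
        description: 'Run an aggregation over log data',
        parameters: {
          type: 'object',
          properties: {
            time_range: { type: 'string' },
            level: { type: 'string' },
          },
          required: ['time_range'],
        },
      },
    },
  ],
});

// The caller executes any requested tool call and feeds the result back.
console.log(response.choices[0].message.tool_calls);
```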

By default, AI Assistant uses a [preconfigured LLM](#preconfigured-llm-ai-assistant) connector that works out of the box. You can also connect to third-party LLM providers.
The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors:

## Use cases

@@ -28,11 +28,6 @@ The {{obs-ai-assistant}} helps you:
* **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface (see the sketch after this list)
* **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data
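
As a hedged sketch of the query-building use case, the following runs an ES|QL query with the Elasticsearch JavaScript client, much like a query the assistant might generate; the index pattern, field names, and connection details are assumptions.

```ts
// Hedged sketch of executing an ES|QL query like those the assistant can
// generate. The index pattern and field names are illustrative assumptions.
import { Client } from '@elastic/elasticsearch';

const client = new Client({
  node: 'https://localhost:9200', // hypothetical cluster URL
  auth: { apiKey: process.env.ES_API_KEY ?? '' },
});

const result = await client.esql.query({
  query: `
    FROM logs-*
    | WHERE @timestamp > NOW() - 1 hour
    | STATS error_count = COUNT(*) BY service.name
    | SORT error_count DESC
    | LIMIT 10
  `,
});

console.log(result.values); // rows of [service.name, error_count]
```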

## Preconfigured LLM [preconfigured-llm-ai-assistant]

:::{include} ../_snippets/elastic-llm.md
:::

## Requirements [obs-ai-requirements]

The AI assistant requires the following:
@@ -45,7 +40,7 @@ The AI assistant requires the following:

- To run {{obs-ai-assistant}} on a self-hosted Elastic stack, you need an [appropriate license](https://www.elastic.co/subscriptions).

- If not using the [default preconfigured LLM](#preconfigured-llm-ai-assistant), you need an account with a third-party generative AI provider that preferably supports function calling. If your provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
- An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.

- The free tier offered by third-party generative AI providers may not be sufficient for the proper functioning of the AI assistant. In most cases, a paid subscription to one of the supported providers is required.

@@ -76,10 +71,6 @@ It's important to understand how your data is handled when using the AI Assistant

## Set up the AI Assistant [obs-ai-set-up]

:::{note}
If you use [the preconfigured LLM](#preconfigured-llm-ai-assistant) connector, you can skip this step. Your LLM connector is ready to use.
:::

The AI Assistant connects to one of these supported LLM providers:

| Provider | Configuration | Authentication |
12 changes: 1 addition & 11 deletions solutions/search/rag/playground.md
@@ -59,11 +59,6 @@ Here’s a simplified overview of how Playground works:

* Users can also **Download the code** to integrate into their application

## Elastic LLM [preconfigured-llm-playground]

:::{include} ../../_snippets/elastic-llm.md
:::

## Availability and prerequisites [playground-availability-prerequisites]

For Elastic Cloud and self-managed deployments, Playground is available in the **Search** space in {{kib}}, under **Content** > **Playground**.
@@ -77,7 +72,7 @@ To use Playground, you’ll need the following:

* See [ingest data](playground.md#playground-getting-started-ingest) if you’d like to ingest sample data.

3. If not using the default preconfigured LLM connector, you will need an account with a supported LLM provider:
3. An account with a **supported LLM provider**. Playground supports the following:

* **Amazon Bedrock**

@@ -119,11 +114,6 @@ You can also use locally hosted LLMs that are compatible with the OpenAI SDK. On

### Connect to LLM provider [playground-getting-started-connect]

:::{note}
If you use [the preconfigured LLM](#preconfigured-llm-playground) connector, you can skip this step. Your LLM connector is ready to use.

:::

To get started with Playground, you need to create a [connector](../../../deploy-manage/manage-connectors.md) for your LLM provider. You can also connect to [locally hosted LLMs](playground.md#playground-local-llms) which are compatible with the OpenAI API, by using the OpenAI connector.
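
For teams that prefer automation over the UI, here is a rough sketch of registering an OpenAI connector through Kibana's connector API; the Kibana URL, connector name, and credentials are assumptions, and the UI steps below remain the documented path.

```ts
// Hedged sketch: registering an OpenAI connector via Kibana's connector API
// instead of the UI. The Kibana URL, connector name, and keys are assumptions.
const KIBANA_URL = 'https://localhost:5601'; // hypothetical Kibana URL

const res = await fetch(`${KIBANA_URL}/api/actions/connector`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'kbn-xsrf': 'true', // required by Kibana's HTTP API
    Authorization: `ApiKey ${process.env.KIBANA_API_KEY}`,
  },
  body: JSON.stringify({
    name: 'playground-openai',    // hypothetical connector name
    connector_type_id: '.gen-ai', // OpenAI connector type
    config: {
      apiProvider: 'OpenAI',
      apiUrl: 'https://api.openai.com/v1/chat/completions',
    },
    secrets: { apiKey: process.env.OPENAI_API_KEY },
  }),
});
console.log(await res.json()); // the created connector, including its id
```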

To connect to an LLM provider, follow these steps on the Playground landing page: