deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md
Lines changed: 0 additions & 7 deletions

@@ -22,11 +22,4 @@ Data volumes for ingest and retention are based on the fully enriched normalized
[Synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) is an optional add-on to Observability Serverless projects that allows you to periodically check the status of your services and applications. In addition to the core ingest and retention dimensions, there is a charge to execute synthetic monitors on our testing infrastructure. Browser (journey) based tests are charged per-test-run, and ping (lightweight) tests have an all-you-can-use model per location used.
-## Elastic Inference Service [EIS-billing]
-[Elastic Inference Service (EIS)](../../../explore-analyze/elastic-inference/eis.md) enables you to leverage AI-powered search as a service without deploying a model in your serverless project. EIS is configured as a default LLM for use with the Observability AI Assistant (for all observability projects).
-
-:::{note}
-Use of the Observability AI Assistant uses EIS tokens and incurs related token-based add-on billing for your serverless project.
-:::
-
Refer to [Serverless billing dimensions](serverless-project-billing-dimensions.md) and the [{{ecloud}} pricing table](https://cloud.elastic.co/cloud-pricing-table?productType=serverless&project=observability) for more details about {{obs-serverless}} billing dimensions and rates.
solutions/observability/observability-ai-assistant.md
Lines changed: 2 additions & 11 deletions

@@ -16,7 +16,7 @@ You can [interact with the AI Assistant](#obs-ai-interact) in two ways:
* **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps.
* **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs.
-By default, AI Assistant uses a [preconfigured LLM](#preconfigured-llm-ai-assistant) connector that works out of the box. You can also connect to third-party LLM providers.
+The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors:
## Use cases
@@ -28,11 +28,6 @@ The {{obs-ai-assistant}} helps you:
* **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface
* **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data
@@ -45,7 +40,7 @@ The AI assistant requires the following:
- To run {{obs-ai-assistant}} on a self-hosted Elastic stack, you need an [appropriate license](https://www.elastic.co/subscriptions).
-- If not using the [default preconfigured LLM](#preconfigured-llm-ai-assistant), you need an account with a third-party generative AI provider that preferably supports function calling. If your provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
+- An account with a third-party generative AI provider that preferably supports function calling. If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
- The free tier offered by a third-party generative AI provider may not be sufficient for the proper functioning of the AI Assistant. In most cases, a paid subscription to one of the supported providers is required.
@@ -76,10 +71,6 @@ It’s important to understand how your data is handled when using the AI Assistant
## Set up the AI Assistant [obs-ai-set-up]
-:::{note}
-If you use [the preconfigured LLM](#preconfigured-llm-ai-assistant) connector, you can skip this step. Your LLM connector is ready to use.
-:::
-
The AI Assistant connects to one of these supported LLM providers:
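The hunks above now point readers at supported {{stack}} connectors rather than a preconfigured LLM, so a connector has to exist before the assistant can respond. As a rough, hedged illustration only (not part of this diff), the Python sketch below creates an OpenAI-type connector through Kibana's Connectors API; the Kibana URL, credentials, and the `config`/`secrets` field names (`apiProvider`, `apiUrl`, `apiKey`) are assumptions that should be verified against the connector reference for your Kibana version.

```python
# Hedged sketch: create an OpenAI-type (".gen-ai") connector via Kibana's
# Connectors API. The URL, credentials, and config/secrets field names are
# assumptions -- verify them against your Kibana version before relying on this.
import requests

KIBANA_URL = "https://my-kibana.example.com:5601"  # assumed Kibana endpoint
AUTH = ("elastic", "changeme")                     # assumed credentials

response = requests.post(
    f"{KIBANA_URL}/api/actions/connector",
    auth=AUTH,
    headers={"kbn-xsrf": "true", "Content-Type": "application/json"},
    json={
        "name": "OpenAI connector for the AI Assistant",
        "connector_type_id": ".gen-ai",
        "config": {
            "apiProvider": "OpenAI",
            "apiUrl": "https://api.openai.com/v1/chat/completions",
        },
        "secrets": {"apiKey": "<your-api-key>"},
    },
    timeout=30,
)
response.raise_for_status()
print("Created connector:", response.json().get("id"))
```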
solutions/search/rag/playground.md
Lines changed: 1 addition & 11 deletions

@@ -59,11 +59,6 @@ Here’s a simplified overview of how Playground works:
* User can also **Download the code** to integrate into your application
-## Elastic LLM [preconfigured-llm-playground]
-
-:::{include} ../../_snippets/elastic-llm.md
-:::
-
## Availability and prerequisites [playground-availability-prerequisites]
For Elastic Cloud and self-managed deployments, Playground is available in the **Search** space in {{kib}}, under **Content** > **Playground**.
@@ -77,7 +72,7 @@ To use Playground, you’ll need the following:
* See [ingest data](playground.md#playground-getting-started-ingest) if you’d like to ingest sample data.
-3. If not using the default preconfigured LLM connector, you will need an account with a supported LLM provider:
+3. An account with a **supported LLM provider**. Playground supports the following:
* **Amazon Bedrock**
@@ -119,11 +114,6 @@ You can also use locally hosted LLMs that are compatible with the OpenAI SDK. On
### Connect to LLM provider [playground-getting-started-connect]
-:::{note}
-If you use [the preconfigured LLM](#preconfigured-llm-playground) connector, you can skip this step. Your LLM connector is ready to use.
-
-:::
-
To get started with Playground, you need to create a [connector](../../../deploy-manage/manage-connectors.md) for your LLM provider. You can also connect to [locally hosted LLMs](playground.md#playground-local-llms) that are compatible with the OpenAI API by using the OpenAI connector.
To connect to an LLM provider, follow these steps on the Playground landing page:
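This last hunk keeps the guidance that locally hosted LLMs exposing an OpenAI-compatible API can be used through the OpenAI connector. Before pointing the connector at such a server, a quick round-trip check is useful; the minimal Python sketch below uses the official `openai` client, where the base URL and model name are placeholders for your local server, not values taken from the documentation.

```python
# Minimal sketch: confirm a locally hosted, OpenAI-compatible endpoint responds
# before wiring it into the OpenAI connector. base_url and model are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumed local OpenAI-compatible server
    api_key="not-needed-locally",          # many local servers ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # assumed name of the locally served model
    messages=[{"role": "user", "content": "Reply with one word: ready"}],
)
print(response.choices[0].message.content)
```

If this round-trip succeeds, the same base URL is roughly what you would supply when creating the OpenAI connector from the Playground landing page.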