|**Behavioral analytics**| ❌ (deprecated in 9.0) | ❌ | Not available in Serverless |
|[**Clone index API**](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-clone)| ✅ |**Planned**| Anticipated in a future release |
|[**Bulk indexing**](/deploy-manage/production-guidance/optimize-performance/indexing-speed.md#_use_bulk_requests)| ✅ | ✅ | The baseline write latency in {{serverless-short}} is 200ms[^1^](#footnote-1)|
|[**Cross-cluster replication**](/deploy-manage/tools/cross-cluster-replication.md)| ✅ |**Planned**| Anticipated in a future release |
|[**Cross-cluster search**](/solutions/search/cross-cluster-search.md)| ✅ |**Planned**| Anticipated in a future release |
|**Data lifecycle management**| - [ILM](/manage-data/lifecycle/index-lifecycle-management.md) <br>- [Data stream lifecycle](/manage-data/lifecycle/data-stream.md)|[Data stream lifecycle](/manage-data/lifecycle/data-stream.md) only | - No data tiers in Serverless <br>- Optimized for common lifecycle management needs |
|[**Watcher**](/explore-analyze/alerts-cases/watcher.md)| ✅ | ❌ | Use **Kibana Alerts** instead, which provides rich integrations across use cases |
|**Web crawler**| ❌ (Managed Elastic Crawler discontinued with Enterprise Search in 9.0) | Self-managed only | Use [**self-managed crawler**](https://github.com/elastic/crawler)|
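In {{serverless-short}}, the data lifecycle management row above relies on the data stream lifecycle rather than ILM. As a minimal sketch, retention can be set directly on a data stream (the stream name and retention period here are illustrative, not from the original):

```console
PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "7d"
}
```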
^1^ $$$footnote-1$$$ In {{serverless-short}}, Elastic ensures data durability by storing indexed data in an [object store](https://www.elastic.co/blog/elastic-serverless-architecture) rather than in local replicas. Writes are batched over a 200ms window to ensure durability while optimizing performance and cost, so single-document indexing can appear slower than in {{ech}}. However, this design makes {{serverless-short}} more scalable and resilient under high indexing loads without relying on in-cluster replication for fault tolerance. To offset the higher baseline write latency, you can scale indexing throughput in {{serverless-short}} by increasing the number of concurrent indexing clients.
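Because each bulk request pays the ~200ms baseline latency regardless of size, throughput scales with batch size and with the number of concurrent writers. A minimal sketch of this pattern, assuming a `client` object that exposes a `bulk(body=...)` method (for example, the official Python client); the index name, batch size, and client count are illustrative:

```python
import json
import threading

def build_bulk_body(index, docs):
    # The _bulk API takes newline-delimited JSON: an action line
    # followed by the document source for each item.
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

def index_partition(client, index, docs, batch_size=500):
    # Send large batches so each request amortizes the ~200ms
    # baseline write latency over many documents.
    for start in range(0, len(docs), batch_size):
        batch = docs[start:start + batch_size]
        client.bulk(body=build_bulk_body(index, batch))

def index_concurrently(client, index, docs, num_clients=4):
    # Scale throughput by running several bulk writers in parallel,
    # each indexing a disjoint slice of the documents.
    threads = []
    for i in range(num_clients):
        partition = docs[i::num_clients]
        t = threading.Thread(target=index_partition,
                             args=(client, index, partition))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
```

Increasing `num_clients` raises aggregate throughput until the project's ingest capacity is reached; the per-request latency itself does not change.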
### Observability
This table compares Observability capabilities between {{ech}} deployments and Observability Complete Serverless projects. For more information on Observability Logs Essentials Serverless projects, refer to [Observability feature tiers](../../../solutions/observability/observability-serverless-feature-tiers.md).
**`solutions/observability/observability-ai-assistant.md`**
## Requirements [obs-ai-requirements]
To set up or use the AI Assistant, you need the following:
* An appropriate [Elastic subscription](https://www.elastic.co/subscriptions)
* The `Observability AI Assistant: All` {{kib}} privilege
* An [LLM connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md)
* (Optional) To use the [knowledge base](#obs-ai-add-data):
  - A 4 GB {{ml}} node

    :::{note}
    In {{ecloud}} or {{ece}}, if you have {{ml}} autoscaling enabled, {{ml}} nodes automatically start when you use the knowledge base and AI Assistant. Using these features therefore incurs additional costs.
    :::

  - A self-deployed connector service, if you use [content connectors](elasticsearch://reference/search-connectors/index.md) to add external data to the knowledge base
## Manage access to AI Assistant
Main functions:
`kibana`
: Call {{kib}} APIs on your behalf.
::::::{important}
:applies_to: self:
For self-managed deployments, you must configure [`server.publicBaseUrl`](kibana://reference/configuration-reference/general-settings.md#server-publicbaseurl) in your `kibana.yml` to use the `kibana` function.
::::::
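For self-managed deployments, the required setting is a single line in `kibana.yml`. A minimal sketch (the hostname is illustrative; use the URL at which users actually reach {{kib}}, without a trailing slash):

```yaml
# kibana.yml
server.publicBaseUrl: "https://kibana.example.com:5601"
```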
`query`
: Generate, execute, and visualize queries based on your request.
**`solutions/security/ai/ai-assistant.md`**
The Elastic AI Assistant is designed to enhance your analysis with smart dialogues.
::::{admonition} Requirements
* {applies_to}`stack: ga` An [Enterprise subscription](https://www.elastic.co/pricing).
* {applies_to}`serverless: ga` An {{sec-serverless}} project with the [EASE or Security Analytics Complete feature tier](/deploy-manage/deploy/elastic-cloud/project-settings.md).
* To use AI Assistant, the **Elastic AI Assistant: All** Security [privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) and the **Actions and Connectors: Read** management [privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).
* To set up AI Assistant, the **Actions and Connectors: All** [privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).
* An [LLM connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md), which AI Assistant uses to generate responses.
* A [machine learning node](/explore-analyze/machine-learning/setting-up-machine-learning.md).