Commit ba9e959

Merge branch 'main' into update-ccs-support-for-9.2-release

2 parents 5d255a1 + 39544aa, commit ba9e959

55 files changed: +1262 −267 lines. (Some diff content is hidden by default in this large-commit view.)

deploy-manage/monitor/autoops/ec-autoops-regions.md

Lines changed: 25 additions & 25 deletions

````diff
@@ -29,27 +29,27 @@ AutoOps for {{ECH}} is currently available in the following regions for AWS:

 | Region | Name |
 | --- | --- | --- | --- |
-| us-east-1 | N. Virginia |
-| us-east-2 | Ohio |
-| us-west-1 | N. California |
-| us-west-2 | Oregon |
-| ca-central-1 | Canada |
-| eu-west-1 | Ireland |
-| eu-west-2 | London |
-| eu-west-3 | Paris |
-| eu-north-1 | Stockholm |
-| eu-central-1 | Frankfurt |
-| eu-central-2 | Zurich |
-| eu-south-1 | Milan |
-| me-south-1 | Bahrain |
-| ap-east-1 | Hong Kong |
-| ap-northeast-1 | Tokyo |
-| ap-northeast-2 | Seoul |
-| ap-southeast-1 | Singapore |
-| ap-southeast-2 | Sydney |
-| ap-south-1 | Mumbai |
-| sa-east-1 | Sao Paulo |
-| af-south-1 | Cape Town |
+| us-east-1 | US East (N. Virginia) |
+| us-east-2 | US East (Ohio) |
+| us-west-1 | US West (N. California) |
+| us-west-2 | US West (Oregon) |
+| ca-central-1 | Canada (Central) |
+| eu-west-1 | Europe (Ireland) |
+| eu-west-2 | Europe (London) |
+| eu-west-3 | Europe (Paris) |
+| eu-north-1 | Europe (Stockholm) |
+| eu-central-1 | Europe (Frankfurt) |
+| eu-central-2 | Europe (Zurich) |
+| eu-south-1 | Europe (Milan) |
+| me-south-1 | Middle East (Bahrain) |
+| ap-east-1 | Asia Pacific (Hong Kong) |
+| ap-northeast-1 | Asia Pacific (Tokyo) |
+| ap-northeast-2 | Asia Pacific (Seoul) |
+| ap-southeast-1 | Asia Pacific (Singapore) |
+| ap-southeast-2 | Asia Pacific (Sydney) |
+| ap-south-1 | Asia Pacific (Mumbai) |
+| sa-east-1 | South America (Sao Paulo) |
+| af-south-1 | Africa (Cape Town) |

 Regions for Azure and GCP are coming soon.

@@ -68,9 +68,9 @@ AutoOps for serverless projects is currently available in the following regions

 | Region | Name |
 | --- | --- | --- | --- |
-| us-east-1 | N. Virginia |
-| eu-west-1 | Ireland |
-| ap-southeast-1 | Singapore |
-| us-west-2 | Oregon |
+| us-east-1 | US East (N. Virginia) |
+| eu-west-1 | Europe (Ireland) |
+| ap-southeast-1 | Asia Pacific (Singapore) |
+| us-west-2 | US West (Oregon) |

 The only exception is the **Search AI Lake** view, which is available in all CSP regions across AWS, Azure and GCP.
````
(New file) Lines changed: 6 additions & 0 deletions

````diff
@@ -0,0 +1,6 @@
+* To use {{kib}}'s **Snapshot and Restore** feature, you must have the following permissions:
+
+  * [Cluster privileges](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-cluster): `monitor`, `manage_slm`, `cluster:admin/snapshot`, and `cluster:admin/repository`
+  * [Index privilege](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-indices): `monitor` privilege on all the indices
+
+* To register a snapshot repository or restore a snapshot, the cluster’s global metadata must be writeable. Ensure there aren’t any [cluster blocks](elasticsearch://reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-read-only) that prevent write access. The restore operation ignores index blocks.
````
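The privilege requirements above can be expressed as a small check. The helper below is a hypothetical sketch (not part of the Elastic docs or any Elastic API); it assumes you have already resolved the user's effective cluster privileges and their index privileges on all indices, for example from the has-privileges security API.

```python
# Hypothetical helper: does a user satisfy the Snapshot and Restore
# prerequisites listed above? The privilege names are the documented ones;
# the function itself is illustrative only.
REQUIRED_CLUSTER_PRIVS = {
    "monitor",
    "manage_slm",
    "cluster:admin/snapshot",
    "cluster:admin/repository",
}

def meets_snapshot_restore_prereqs(cluster_privs, index_privs_on_all_indices):
    """cluster_privs: set of cluster privileges granted to the user.
    index_privs_on_all_indices: set of index privileges the user holds
    on every index (the docs require `monitor` on all indices)."""
    missing_cluster = REQUIRED_CLUSTER_PRIVS - set(cluster_privs)
    has_index_monitor = "monitor" in index_privs_on_all_indices
    return not missing_cluster and has_index_monitor
```

A user missing any one of the four cluster privileges, or the index-level `monitor` privilege, fails the check.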

deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md

Lines changed: 2 additions & 6 deletions

````diff
@@ -26,12 +26,8 @@ From within {{ech}}, you can restore a snapshot from a different deployment in t

 ## Prerequisites for {{ech}}

-To use Kibana's Snapshot and Restore feature, you must have the following permissions:
-
-- Cluster privileges: `monitor`, `manage_slm`, `cluster:admin/snapshot`, and `cluster:admin/repository`
-- Index privilege: `all` on the monitor index
-
-To register a snapshot repository, the cluster’s global metadata must be writable. Ensure there aren’t any cluster blocks that prevent write access.
+:::{include} _snippets/restore-snapshot-common-prerequisites.md
+:::

 ## Considerations
````

deploy-manage/tools/snapshot-and-restore/restore-snapshot.md

Lines changed: 3 additions & 4 deletions

````diff
@@ -27,12 +27,11 @@ In this guide, you’ll learn how to:
 This guide also provides tips for [restoring to another cluster](#restore-different-cluster) and [troubleshooting common restore errors](#troubleshoot-restore).

 ## Prerequisites
-- To use Kibana’s Snapshot and Restore feature, you must have the following permissions:
-  - [Cluster privileges](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-cluster): `monitor`, `manage_slm`, `cluster:admin/snapshot`, and `cluster:admin/repository`
-  - [Index privilege](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-indices): `all` on the monitor index
+:::{include} _snippets/restore-snapshot-common-prerequisites.md
+:::
+
 - You can only restore a snapshot to a running cluster with an elected [master node](/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#master-node-role). The snapshot’s repository must be registered and available to the cluster.
 - The snapshot and cluster versions must be compatible. See [Snapshot compatibility](/deploy-manage/tools/snapshot-and-restore.md#snapshot-compatibility).
-- To restore a snapshot, the cluster’s global metadata must be writable. Ensure there aren’t any cluster blocks that prevent writes. The restore operation ignores index blocks.
 - Before you restore a data stream, ensure the cluster contains a [matching index template](/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md#create-ts-index-template) with data stream enabled. To check, use [Kibana’s Index Management](/manage-data/data-store/index-basics.md#index-management-manage-index-templates) feature or the get index template API:

 ```console
````
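The "global metadata must be writable" prerequisite can be checked programmatically. The sketch below is hypothetical (not an Elastic API): it assumes you have fetched the `persistent` and `transient` sections from `GET _cluster/settings?flat_settings=true` and inspects the documented read-only block settings.

```python
# Hypothetical helper: given the flat persistent/transient cluster settings,
# decide whether cluster-wide read-only blocks would prevent registering a
# snapshot repository or restoring a snapshot.
READ_ONLY_KEYS = (
    "cluster.blocks.read_only",
    "cluster.blocks.read_only_allow_delete",
)

def metadata_writable(persistent, transient):
    # Transient settings take precedence over persistent ones.
    merged = {**persistent, **transient}
    # Settings may come back as JSON booleans or the strings "true"/"false".
    return not any(merged.get(k) in (True, "true") for k in READ_ONLY_KEYS)
```

If either block setting resolves to true, restoring will fail until the block is cleared.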

deploy-manage/tools/snapshot-and-restore/self-managed.md

Lines changed: 2 additions & 6 deletions

````diff
@@ -21,12 +21,8 @@ In this guide, you’ll learn how to:

 ## Prerequisites [snapshot-repo-prereqs]

-* To use {{kib}}'s **Snapshot and Restore** feature, you must have the following permissions:
-
-  * [Cluster privileges](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-cluster): `monitor`, `manage_slm`, `cluster:admin/snapshot`, and `cluster:admin/repository`
-  * [Index privilege](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-indices): `all` on the `monitor` index
-
-* To register a snapshot repository, the cluster’s global metadata must be writeable. Ensure there aren’t any [cluster blocks](elasticsearch://reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-read-only) that prevent write access.
+:::{include} _snippets/restore-snapshot-common-prerequisites.md
+:::

 ## Considerations [snapshot-repo-considerations]
````

explore-analyze/elastic-inference/eis.md

Lines changed: 8 additions & 24 deletions

````diff
@@ -9,64 +9,48 @@ applies_to:

 # Elastic {{infer-cap}} Service [elastic-inference-service-eis]

-The Elastic {{infer-cap}} Service (EIS) enables you to leverage AI-powered search as a service without deploying a model in your cluster.
+The Elastic {{infer-cap}} Service (EIS) enables you to leverage AI-powered search as a service without deploying a model in your environment.
 With EIS, you don't need to manage the infrastructure and resources required for {{ml}} {{infer}} by adding, configuring, and scaling {{ml}} nodes.
 Instead, you can use {{ml}} models for ingest, search, and chat independently of your {{es}} infrastructure.

 ## AI features powered by EIS [ai-features-powered-by-eis]

 * Your Elastic deployment or project comes with a default [`Elastic Managed LLM` connector](https://www.elastic.co/docs/reference/kibana/connectors-kibana/elastic-managed-llm). This connector is used in the AI Assistant, Attack Discovery, Automatic Import and Search Playground.

-* You can use [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser.md) to perform semantic search as a service (ELSER on EIS). {applies_to}`stack: preview 9.1` {applies_to}`serverless: preview`
+* You can use [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser.md) to perform semantic search as a service (ELSER on EIS). {applies_to}`stack: preview 9.1, ga 9.2` {applies_to}`serverless: ga`

 ## Region and hosting [eis-regions]

 Requests through the `Elastic Managed LLM` are currently proxying to AWS Bedrock in AWS US regions, beginning with `us-east-1`.
 The request routing does not restrict the location of your deployments.

-ELSER requests are managed by Elastic's own EIS infrastructure and are also hosted in AWS US regions, beginning with `us-east-1`. All Elastic Cloud hosted deployments and serverless projects in any CSP and region can access the endpoint. As we expand the service to Azure and GCP and more regions, we will automatically route requests to the same CSP and closest region the Elaticsearch cluster is hosted on.

+ELSER requests are managed by Elastic's own EIS infrastructure and are also hosted in AWS US regions, beginning with `us-east-1`. All Elastic Cloud hosted deployments and serverless projects in any CSP and region can access the endpoint. As we expand the service to Azure and GCP and more regions, we will automatically route requests to the same CSP and closest region the Elaticsearch cluster is hosted on.

 ## ELSER via Elastic {{infer-cap}} Service (ELSER on EIS) [elser-on-eis]

 ```{applies_to}
-stack: preview 9.1
-serverless: preview
+stack: preview 9.1, ga 9.2
+serverless: ga
 ```

-ELSER on EIS enables you to use the ELSER model on GPUs, without having to manage your own ML nodes. We expect significantly better performance for throughput and consistent search latency as compared to ML nodes, and will continue to benchmark, remove limitations and address concerns as we move towards General Availability.
+ELSER on EIS enables you to use the ELSER model on GPUs, without having to manage your own ML nodes. We expect better performance for ingest throughput than ML nodes and equivalent performance for search latency. We will continue to benchmark, remove limitations and address concerns.

 ### Using the ELSER on EIS endpoint

 You can now use `semantic_text` with the new ELSER endpoint on EIS. To learn how to use the `.elser-2-elastic` inference endpoint, refer to [Using ELSER on EIS](elasticsearch://reference/elasticsearch/mapping-reference/semantic-text.md#using-elser-on-eis).

 #### Get started with semantic search with ELSER on EIS
-[Semantic Search with `semantic_text`](/solutions/search/semantic-search/semantic-search-semantic-text.md) has a detailed tutorial on using the `semantic_text` field and using the ELSER endpoint on EIS instead of the default endpoint. This is a great way to get started and try the new endpoint.
-
-### Limitations
-
-While we do encourage experimentation, we do not recommend implementing production use cases on top of this feature while it is in Technical Preview.

-#### Uptime
-
-There are no uptime guarantees during the Technical Preview.
-While Elastic will address issues promptly, the feature may be unavailable for extended periods.
-
-#### Throughput and latency
+[Semantic Search with `semantic_text`](/solutions/search/semantic-search/semantic-search-semantic-text.md) has a detailed tutorial on using the `semantic_text` field and using the ELSER endpoint on EIS instead of the default endpoint. This is a great way to get started and try the new endpoint.

-{{infer-cap}} throughput via this endpoint is expected to exceed that of {{infer}} operations on an ML node.
-However, throughput and latency are not guaranteed.
-Performance may vary during the Technical Preview.
+### Limitations

 #### Batch size

 Batches are limited to a maximum of 16 documents.
 This is particularly relevant when using the [_bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-bulk) for data ingestion.

-#### Rate limits
-
-Rate limit for search and ingest is currently at 500 requests per minute. This allows you to ingest approximately 8000 documents per minute at 16 documents per request.
-
 ## Pricing

 All models on EIS incur a charge per million tokens. The pricing details are at our [Pricing page](https://www.elastic.co/pricing/serverless-search) for the Elastic Managed LLM and ELSER.
````
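The rate-limit figures in the removed "Rate limits" section (500 requests per minute, with the documented 16-document batch cap) can be sanity-checked with simple arithmetic. This sketch only reproduces the docs' own numbers; the function name is illustrative.

```python
# Back-of-envelope check of the removed rate-limit claim: 500 requests/minute
# at the documented maximum of 16 documents per _bulk request.
RATE_LIMIT_RPM = 500   # requests per minute (figure quoted in the old docs)
MAX_BATCH_DOCS = 16    # documented maximum batch size per request

def max_docs_per_minute(rpm=RATE_LIMIT_RPM, batch=MAX_BATCH_DOCS):
    # Upper bound on ingest throughput when every request carries a full batch.
    return rpm * batch

print(max_docs_per_minute())  # 8000, matching the "approximately 8000 documents per minute" claim
```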

explore-analyze/elastic-inference/inference-api.md

Lines changed: 6 additions & 6 deletions

(Several `-`/`+` pairs below look identical; the commit only strips trailing whitespace from those lines.)

````diff
@@ -88,7 +88,7 @@ The behavior of allocations depends on several factors:
 If you enable adaptive allocations and set the `min_number_of_allocations` to a value greater than `0`, you will be charged for the machine learning resources, even if no inference requests are sent.

 However, setting the `min_number_of_allocations` to a value greater than `0` keeps the model always available without scaling delays. Choose the configuration that best fits your workload and availability needs.
-::::
+::::

 For more information about adaptive allocations and resources, refer to the [trained model autoscaling](/deploy-manage/autoscaling/trained-model-autoscaling.md) documentation.

@@ -105,9 +105,9 @@ By default, documents are split into sentences and grouped in sections up to 250

 ### Chunking strategies

-Several strategies are available for chunking:
+Several strategies are available for chunking:

-#### `sentence`
+#### `sentence`

 The `sentence` strategy splits the input text at sentence boundaries. Each chunk contains one or more complete sentences ensuring that the integrity of sentence-level context is preserved, except when a sentence causes a chunk to exceed a word count of `max_chunk_size`, in which case it will be split across chunks. The `sentence_overlap` option defines the number of sentences from the previous chunk to include in the current chunk which is either `0` or `1`.

@@ -134,7 +134,7 @@ The default chunking strategy is `sentence`.

 #### `word`

-The `word` strategy splits the input text on individual words up to the `max_chunk_size` limit. The `overlap` option is the number of words from the previous chunk to include in the current chunk.
+The `word` strategy splits the input text on individual words up to the `max_chunk_size` limit. The `overlap` option is the number of words from the previous chunk to include in the current chunk.

 The following example creates an {{infer}} endpoint with the `elasticsearch` service that deploys the ELSER model and configures the chunking behavior with the `word` strategy, setting a maximum of 120 words per chunk and an overlap of 40 words between chunks.

@@ -158,7 +158,7 @@ PUT _inference/sparse_embedding/word_chunks
 #### `recursive`

 ```{applies_to}
-stack: ga 9.1`
+stack: ga 9.1
 ```

 The `recursive` strategy splits the input text based on a configurable list of separator patterns (for example, newlines or Markdown headers). The chunker applies these separators in order, recursively splitting any chunk that exceeds the `max_chunk_size` word limit. If no separator produces a small enough chunk, the strategy falls back to sentence-level splitting.

@@ -215,7 +215,7 @@ PUT _inference/sparse_embedding/recursive_custom_chunks
 #### `none`

 ```{applies_to}
-stack: ga 9.1`
+stack: ga 9.1
 ```

 The `none` strategy disables chunking and processes the entire input text as a single block, without any splitting or overlap. When using this strategy, you can instead [pre-chunk](https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/semantic-text#auto-text-chunking) the input by providing an array of strings, where each element acts as a separate chunk to be sent directly to the inference service without further chunking.
````
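The `word` strategy described in these hunks (split on words up to `max_chunk_size`, carrying `overlap` words into the next chunk) can be sketched in a few lines. This is an illustrative reimplementation, not Elastic's actual chunker; the parameter names mirror the documented `chunking_settings` options.

```python
def word_chunks(text, max_chunk_size, overlap):
    """Sketch of the `word` chunking strategy: whitespace-split the input
    into chunks of at most `max_chunk_size` words, with the last `overlap`
    words of each chunk repeated at the start of the next."""
    if overlap >= max_chunk_size:
        raise ValueError("overlap must be smaller than max_chunk_size")
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_chunk_size]))
        if start + max_chunk_size >= len(words):
            break  # last chunk reached the end of the input
        # next chunk starts `overlap` words before the current chunk's end
        start += max_chunk_size - overlap
    return chunks
```

With the example configuration from the docs (120 words per chunk, 40-word overlap), each chunk after the first would repeat the previous chunk's final 40 words.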
(Two binary image files changed, 120 KB and 119 KB; previews not shown.)
(New file) Lines changed: 59 additions & 0 deletions

````diff
@@ -0,0 +1,59 @@
+---
+applies_to:
+  stack: ga 9.2
+  serverless: ga
+products:
+  - id: kibana
+  - id: observability
+  - id: security
+  - id: cloud-serverless
+---
+
+# Manage access to AI Features
+
+This page describes how to use the GenAI Settings page to control access to AI-powered features in your deployments in the following ways:
+
+- Manage which AI connectors are available in your environment.
+- Enable or disable AI Assistant and other AI-powered features in your environment.
+- {applies_to}`stack: ga 9.2` {applies_to}`serverless: unavailable` Specify in which Elastic solutions the `AI Assistant for Observability and Search` and the `AI Assistant for Security` appear.
+
+## Requirements
+
+- To access the **GenAI Settings** page, you need the `Actions and connectors: all` or `Actions and connectors: read` [{{kib}} privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).
+- To modify the settings on this page, you need the `Advanced Settings: all` {{kib}} privilege.
+
+## The GenAI Settings page
+
+To manage these settings, go to the **GenAI Settings** page by using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
+
+::::{applies-switch}
+
+:::{applies-item} stack: ga 9.2
+
+![GenAI Settings page for Stack](/explore-analyze/images/ai-assistant-settings-page.png "")
+
+The **GenAI Settings** page has the following settings:
+
+- **Default AI Connector**: Use this setting to specify which connector is selected by default. This affects all AI-powered features, not just AI Assistant.
+- **Disallow all other connectors**: Enable this setting to prevent connectors other than the default connector specified above from being used in your space. This affects all AI-powered features, not just AI Assistant.
+- **AI feature visibility**: This button opens the current Space's settings page. Here you can specify which features should appear in your environment, including AI-powered features.
+- **AI Assistant visibility**: This setting allows you to choose which AI Assistants are available to use and where. You can choose to only show the AI Assistants in their native solutions, in other {{kib}} pages (for example, Discover, Dashboards, and Stack Management), or select **Hide all assistants** to disable AI Assistant throughout {{kib}}.
+
+:::
+
+:::{applies-item} serverless:
+
+![GenAI Settings page for Serverless](/explore-analyze/images/ai-assistant-settings-page-serverless.png "")
+
+The **GenAI Settings** page has the following settings:
+
+- **Default AI Connector**: Click **Manage connectors** to open the **Connectors** page, where you can create or delete AI connectors. To update these settings, you need the `Actions and connectors: all` [{{kib}} privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md).
+- **AI feature visibility**: Click **Go to Permissions tab** to access the active {{kib}} space's settings page, where you can specify which features each custom user role has access to in your environment. This includes AI-powered features such as AI Assistant.
+
+:::
+
+::::
````