
Commit b4fad00

Merge branch 'main' into managed-data-views

2 parents: 5ededbb + 97496eb

33 files changed: +160 -70 lines

deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md

Lines changed: 5 additions & 0 deletions

````diff
@@ -92,6 +92,11 @@ Enterprise Search is not available in versions 9.0+.
 | docker.elastic.co/cloud-release/kibana-cloud:9.1.1 | ECE 4.0.0 |
 | docker.elastic.co/cloud-release/elastic-agent-cloud:9.1.1 | ECE 4.0.0 |
 | | |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 9.0.8](https://download.elastic.co/cloud-enterprise/versions/9.0.8.zip) | ECE 4.0.0 |
+| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.8 | ECE 4.0.0 |
+| docker.elastic.co/cloud-release/kibana-cloud:9.0.8 | ECE 4.0.0 |
+| docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.8 | ECE 4.0.0 |
+| | |
 | [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 9.0.7](https://download.elastic.co/cloud-enterprise/versions/9.0.7.zip) | ECE 4.0.0 |
 | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.7 | ECE 4.0.0 |
 | docker.elastic.co/cloud-release/kibana-cloud:9.0.7 | ECE 4.0.0 |
````
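For an offline installation, the 9.0.8 images added above can be pre-pulled on a connected host and transferred. A minimal sketch using the image names taken verbatim from the table; the save/load step is an assumption about a typical air-gapped transfer workflow, not a step from this commit:

```sh
# Pull the 9.0.8 images listed in the updated table
docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.8
docker pull docker.elastic.co/cloud-release/kibana-cloud:9.0.8
docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.8

# Assumed transfer step for air-gapped hosts: archive, copy, then load
docker save -o ece-9.0.8-images.tar \
  docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.8 \
  docker.elastic.co/cloud-release/kibana-cloud:9.0.8 \
  docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.8
docker load -i ece-9.0.8-images.tar   # run this on the offline host
```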

deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md

Lines changed: 1 addition & 0 deletions

````diff
@@ -53,6 +53,7 @@ Following is the full list of available packs containing {{stack}} versions. Not
 | [{{es}}, {{kib}}, and APM stack pack: 9.1.3](https://download.elastic.co/cloud-enterprise/versions/9.1.3.zip) | ECE 4.0.0 |
 | [{{es}}, {{kib}}, and APM stack pack: 9.1.2](https://download.elastic.co/cloud-enterprise/versions/9.1.2.zip) | ECE 4.0.0 |
 | [{{es}}, {{kib}}, and APM stack pack: 9.1.1](https://download.elastic.co/cloud-enterprise/versions/9.1.1.zip) | ECE 4.0.0 |
+| [{{es}}, {{kib}}, and APM stack pack: 9.0.8](https://download.elastic.co/cloud-enterprise/versions/9.0.8.zip) | ECE 4.0.0 |
 | [{{es}}, {{kib}}, and APM stack pack: 9.0.7](https://download.elastic.co/cloud-enterprise/versions/9.0.7.zip) | ECE 4.0.0 |
 | [{{es}}, {{kib}}, and APM stack pack: 9.0.6](https://download.elastic.co/cloud-enterprise/versions/9.0.6.zip) | ECE 4.0.0 |
 | [{{es}}, {{kib}}, and APM stack pack: 9.0.5](https://download.elastic.co/cloud-enterprise/versions/9.0.5.zip) | ECE 4.0.0 |
````
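The pack added above is a plain zip download, so fetching it ahead of time for an air-gapped ECE host is a one-liner; the URL is taken verbatim from the new table row:

```sh
# Download the 9.0.8 stack pack for later upload to ECE
wget https://download.elastic.co/cloud-enterprise/versions/9.0.8.zip
```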

deploy-manage/monitor/autoops/cc-connect-self-managed-to-autoops.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -123,8 +123,8 @@ With this authentication method, you need to create an API key to grant access t
 
 1. From your {{ecloud}} home page, select a deployment.
 2. Go to **Stack management** > **API keys** and select **Create API key**.
-4. In the flyout, enter a name for your key and select **User API key**.
-5. Enable **Control security privileges** and enter the following script:
+3. In the flyout, enter a name for your key and select **User API key**.
+4. Enable **Control security privileges** and enter the following script:
    ```json
    {
      "autoops": {
@@ -287,8 +287,8 @@ You can use the same installation command to connect multiple clusters, but each
 
 Complete the following steps to disconnect your cluster from your Cloud organization. You need the **Organization owner** [role](/deploy-manage/monitor/autoops/cc-manage-users.md#assign-roles) to perform this action.
 
-2. Based on your [installation method](#select-installation-method), complete the steps to stop {{agent}} from shipping metrics to {{ecloud}}.
-1. Log in to [{{ecloud}}](https://cloud.elastic.co/home).
+1. Based on your [installation method](#select-installation-method), complete the steps to stop {{agent}} from shipping metrics to {{ecloud}}.
+2. Log in to [{{ecloud}}](https://cloud.elastic.co/home).
 3. On the **Connected clusters** page or the **Connected clusters** section of the home page, locate the cluster you want to disconnect.
 4. From that cluster’s actions menu, select **Disconnect cluster**.
 5. Enter the cluster’s name in the field that appears and then select **Disconnect cluster**.
````
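For readers who prefer to script the first hunk's flyout steps, the same user API key can be created with the Create API key endpoint. A hedged sketch: the key name is illustrative, and the `autoops` role descriptor's privileges are truncated in this diff, so they are left elided here rather than guessed:

```console
POST /_security/api_key
{
  "name": "autoops-agent-key",
  "role_descriptors": {
    "autoops": {
      ...
    }
  }
}
```

Paste the same privileges script from the flyout step as the body of the `autoops` descriptor.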

deploy-manage/monitor/autoops/ec-autoops-regions.md

Lines changed: 17 additions & 4 deletions

````diff
@@ -3,6 +3,7 @@ mapped_pages:
   - https://www.elastic.co/guide/en/cloud/current/ec-autoops-regions.html
 navigation_title: Regions
 applies_to:
+  serverless:
   deployment:
     self:
     ece:
@@ -18,6 +19,10 @@ products:
 
 A region is where a cloud service provider's data center hosts your deployments or clusters.
 
+::::{note}
+AutoOps is currently not available in any region for GovCloud customers.
+::::
+
 ## AutoOps for {{ECH}} regions
 
 AutoOps for {{ECH}} is currently available in the following regions for AWS:
@@ -57,7 +62,15 @@ This service is currently available in the following regions for AWS:
 :::{include} ../_snippets/autoops-cc-regions.md
 :::
 
-<br>
-::::{note}
-AutoOps is currently not available for GovCloud customers.
-::::
+## AutoOps for {{serverless-full}} regions
+
+AutoOps for serverless projects is currently available in the following regions for AWS:
+
+| Region | Name |
+| --- | --- |
+| us-east-1 | N. Virginia |
+| eu-west-1 | Ireland |
+| ap-southeast-1 | Singapore |
+| us-west-2 | Oregon |
+
+The only exception is the **Search AI Lake** view, which is available in all CSP regions across AWS, Azure and GCP.
````

deploy-manage/tools/snapshot-and-restore/s3-repository.md

Lines changed: 4 additions & 5 deletions

````diff
@@ -22,9 +22,7 @@ See [this video](https://www.youtube.com/watch?v=ACqfyzWf-xs) for a walkthrough
 
 ## Getting started [repository-s3-usage]
 
-To register an S3 repository, specify the type as `s3` when creating the repository. The repository defaults to using [ECS IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) credentials for authentication. You can also use [Kubernetes service accounts](#iam-kubernetes-service-accounts) for authentication.
-
-The only mandatory setting is the bucket name:
+To register an S3 repository, specify the type as `s3` when creating the repository. The only mandatory setting is the bucket name:
 
 ```console
 PUT _snapshot/my_s3_repository
@@ -36,6 +34,7 @@ PUT _snapshot/my_s3_repository
 }
 ```
 
+By default, an S3 repository will attempt to obtain its credentials automatically from the environment. For instance, if {{es}} is running on an AWS EC2 instance then it will attempt to use the [EC2 Instance Metadata Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) to obtain temporary credentials for the [instance IAM role](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html). Likewise, if {{es}} is running in AWS ECS, then it will automatically obtain temporary [ECS IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) credentials for authentication. You can also use [Kubernetes service accounts](#iam-kubernetes-service-accounts) for authentication. To disable this behavior, specify an access key, a secret key, and optionally a session token, in the {{es}} keystore.
 
 ## Client settings [repository-s3-client]
 
@@ -65,7 +64,7 @@ bin/elasticsearch-keystore add s3.client.default.session_token
 
 If you do not configure these settings then {{es}} will attempt to automatically obtain credentials from the environment in which it is running:
 
-* Nodes running on an instance in AWS EC2 will attempt to use the EC2 Instance Metadata Service (IMDS) to obtain instance role credentials. {{es}} supports both IMDS version 1 and IMDS version 2.
+* Nodes running on an instance in AWS EC2 will attempt to use the EC2 Instance Metadata Service (IMDS) to obtain instance role credentials. {{es}} supports IMDS version 2 only.
 * Nodes running in a container in AWS ECS and AWS EKS will attempt to obtain container role credentials similarly.
 
 You can switch from using specific credentials back to the default of using the instance role or container role by removing these settings from the keystore as follows:
@@ -385,7 +384,7 @@ There are a number of storage systems that provide an S3-compatible API, and the
 
 By default {{es}} communicates with your storage system using HTTPS, and validates the repository’s certificate chain using the JVM-wide truststore. Ensure that the JVM-wide truststore includes an entry for your repository. If you wish to use unsecured HTTP communication instead of HTTPS, set `s3.client.CLIENT_NAME.protocol` to `http`.
 
-There are many systems, including some from very well-known storage vendors, which claim to offer an S3-compatible API despite failing to emulate S3’s behavior in full. If you are using such a system for your snapshots, consider using a [shared filesystem repository](shared-file-system-repository.md) based on a standardized protocol such as NFS to access your storage system instead. The `s3` repository type requires full compatibility with S3. In particular it must support the same set of API endpoints, with the same parameters, return the same errors in case of failures, and offer consistency and performance at least as good as S3 even when accessed concurrently by multiple nodes. You will need to work with the supplier of your storage system to address any incompatibilities you encounter. Don't report {{es}} issues involving storage systems which claim to be S3-compatible unless you can demonstrate that the same issue exists when using a genuine AWS S3 repository.
+There are many systems, including some from very well-known storage vendors, which claim to offer an S3-compatible API despite failing to emulate S3’s behavior in full. If you are using such a system for your snapshots, consider using a [shared filesystem repository](shared-file-system-repository.md) based on a standardized protocol such as NFS to access your storage system instead. The `s3` repository type requires full compatibility with S3. In particular it must support the same set of API endpoints, with the same parameters, return the same errors in case of failures, and offer consistency, performance, and reliability at least as good as S3 even when accessed concurrently by multiple nodes. You will need to work with the supplier of your storage system to address any incompatibilities you encounter. Don't report {{es}} issues involving storage systems which claim to be S3-compatible unless you can demonstrate that the same issue exists when using a genuine AWS S3 repository.
 
 You can perform some basic checks of the suitability of your storage system using the [repository analysis API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze). If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible with AWS S3 and therefore unsuitable for use as a snapshot repository. However, these checks do not guarantee full compatibility.
````
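The keystore settings referenced in the second hunk follow the pattern shown in its header line; a minimal sketch of pinning static credentials for the `default` client (each command prompts for the value interactively):

```sh
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
# Only needed when using temporary credentials:
bin/elasticsearch-keystore add s3.client.default.session_token
```

For the compatibility checks in the last hunk, the repository analysis API can be invoked as, for example, `POST /_snapshot/my_s3_repository/_analyze?blob_count=10&max_blob_size=1mb`; the parameter names follow the linked API reference, and the repository name is illustrative.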

explore-analyze/find-and-organize/saved-objects.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -60,7 +60,7 @@ Granting access to `Saved Objects Management` authorizes users to manage all sav
 
 Use import and export to move objects between different {{kib}} instances. These actions are useful when you have multiple environments for development and production. Import and export also work well when you have a large number of objects to update and want to batch the process.
 
-{{kib}} also provides import and export saved objects APIs for your [Elastic Stack deployments](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects) and [serverless projects](https://www.elastic.co/docs/api/doc/serverless/operation/operation-exportsavedobjectsdefault) to automate this process.
+{{kib}} also provides import and export saved objects APIs for your [Elastic Stack deployments](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects) and [serverless projects](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-saved-objects) to automate this process.
 
 
 ### Import [saved-objects-import]
````
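For automation against the APIs those links document, a hedged sketch of an export call; the host and object type are illustrative:

```sh
# Export all dashboards from a Kibana instance as NDJSON
curl -X POST "http://localhost:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"type": ["dashboard"], "includeReferencesDeep": true}' \
  -o export.ndjson
```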

explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -106,7 +106,7 @@ Here, you’ll be able to:
 
 Inference processors added to your index-specific ML {{infer}} pipelines are normal Elasticsearch pipelines. Once created, each processor will have options to **View in Stack Management** and **Delete Pipeline**. Deleting an {{infer}} processor from within the **Content** UI deletes the pipeline and also removes its reference from your index-specific ML {{infer}} pipeline.
 
-These pipelines can also be viewed, edited, and deleted in Kibana via **Stack Management → Ingest Pipelines**, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.
+These pipelines can also be viewed, edited, and deleted in Kibana from the **Ingest Pipelines** management page, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.
 
 ## Test your ML {{infer}} pipeline [ingest-pipeline-search-inference-test-inference-pipeline]
 
````
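The Ingest pipeline APIs mentioned in the changed paragraph cover the same view and delete operations; a hedged sketch, with a hypothetical pipeline name rather than any naming convention taken from this commit:

```console
GET _ingest/pipeline/my-index-ml-inference
DELETE _ingest/pipeline/my-index-ml-inference
```
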
explore-analyze/machine-learning/nlp/ml-nlp-inference.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -19,10 +19,10 @@ After you [deploy a trained model in your cluster](ml-nlp-deploy-models.md), you
 
 ## Add an {{infer}} processor to an ingest pipeline [ml-nlp-inference-processor]
 
-In {{kib}}, you can create and edit pipelines in **{{stack-manage-app}}** > **Ingest Pipelines**. To open **Ingest Pipelines**, find **{{stack-manage-app}}** in the main menu, or use the [global search field](../../find-and-organize/find-apps-and-objects.md).
+In {{kib}}, you can create and edit pipelines from the **Ingest Pipelines** management page. You can find this page in the main menu or using the [global search field](../../find-and-organize/find-apps-and-objects.md).
 
 :::{image} /explore-analyze/images/machine-learning-ml-nlp-pipeline-lang.png
-:alt: Creating a pipeline in the Stack Management app
+:alt: Creating a pipeline
 :screenshot:
 :::
 
````

explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -117,7 +117,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.",
 
 You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used as an example for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file.
 
-Now create an ingest pipeline either in the [Stack management UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
+Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:
 
 ```js
 PUT _ingest/pipeline/ner
````
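The `ner` pipeline body is truncated in this diff. As a hedged sketch, an inference processor for this step typically looks like the following; the model ID, target field, and `field_map` are assumptions based on the linked processor reference, not taken from this commit:

```console
PUT _ingest/pipeline/ner
{
  "description": "Sketch of an NER inference pipeline",
  "processors": [
    {
      "inference": {
        "model_id": "elastic__distilbert-base-uncased-finetuned-conll03-english",
        "target_field": "ml.ner",
        "field_map": {
          "paragraph": "text_field"
        }
      }
    }
  ]
}
```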

explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -116,7 +116,7 @@ Upload the file by using the [Data Visualizer](../../../manage-data/ingest/uploa
 
 Process the initial data with an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md). It adds an embedding for each passage. For this, create a text embedding ingest pipeline and then reindex the initial data with this pipeline.
 
-Now create an ingest pipeline either in the [{{stack-manage-app}} UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
+Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:
 
 ```js
 PUT _ingest/pipeline/text-embeddings
````
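The changed paragraph ends with a reindex through the new pipeline; a hedged sketch of that step, where the source and destination index names are illustrative and only the pipeline name matches the diff:

```console
POST _reindex
{
  "source": {
    "index": "collection"
  },
  "dest": {
    "index": "collection-with-embeddings",
    "pipeline": "text-embeddings"
  }
}
```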
