deploy-manage/monitor/autoops/cc-connect-self-managed-to-autoops.md (4 additions, 4 deletions)
@@ -123,8 +123,8 @@ With this authentication method, you need to create an API key to grant access t
 
 1. From your {{ecloud}} home page, select a deployment.
 2. Go to **Stack management** > **API keys** and select **Create API key**.
-4. In the flyout, enter a name for your key and select **User API key**.
-5. Enable **Control security privileges** and enter the following script:
+3. In the flyout, enter a name for your key and select **User API key**.
+4. Enable **Control security privileges** and enter the following script:
 
 ```json
 {
   "autoops": {
@@ -287,8 +287,8 @@ You can use the same installation command to connect multiple clusters, but each
 
 Complete the following steps to disconnect your cluster from your Cloud organization. You need the **Organization owner** [role](/deploy-manage/monitor/autoops/cc-manage-users.md#assign-roles) to perform this action.
 
-2. Based on your [installation method](#select-installation-method), complete the steps to stop {{agent}} from shipping metrics to {{ecloud}}.
-1. Log in to [{{ecloud}}](https://cloud.elastic.co/home).
+1. Based on your [installation method](#select-installation-method), complete the steps to stop {{agent}} from shipping metrics to {{ecloud}}.
+2. Log in to [{{ecloud}}](https://cloud.elastic.co/home).
 3. On the **Connected clusters** page or the **Connected clusters** section of the home page, locate the cluster you want to disconnect.
 4. From that cluster’s actions menu, select **Disconnect cluster**.
 5. Enter the cluster’s name in the field that appears and then select **Disconnect cluster**.
deploy-manage/tools/snapshot-and-restore/s3-repository.md (4 additions, 5 deletions)
@@ -22,9 +22,7 @@ See [this video](https://www.youtube.com/watch?v=ACqfyzWf-xs) for a walkthrough
 
 ## Getting started [repository-s3-usage]
 
-To register an S3 repository, specify the type as `s3` when creating the repository. The repository defaults to using [ECS IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) credentials for authentication. You can also use [Kubernetes service accounts](#iam-kubernetes-service-accounts) for authentication.
-
-The only mandatory setting is the bucket name:
+To register an S3 repository, specify the type as `s3` when creating the repository. The only mandatory setting is the bucket name:
 
 ```console
 PUT _snapshot/my_s3_repository
@@ -36,6 +34,7 @@ PUT _snapshot/my_s3_repository
 }
 ```
 
+By default, an S3 repository will attempt to obtain its credentials automatically from the environment. For instance, if {{es}} is running on an AWS EC2 instance then it will attempt to use the [EC2 Instance Metadata Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) to obtain temporary credentials for the [instance IAM role](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html). Likewise, if {{es}} is running in AWS ECS, then it will automatically obtain temporary [ECS IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) credentials for authentication. You can also use [Kubernetes service accounts](#iam-kubernetes-service-accounts) for authentication. To disable this behavior, specify an access key, a secret key, and optionally a session token, in the {{es}} keystore.
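
For context, a minimal registration request with only the mandatory bucket setting filled in looks roughly like the following sketch; the repository name `my_s3_repository` and the bucket name `my-bucket` are placeholders:

```console
// Register an S3 snapshot repository pointing at an existing bucket
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket"
  }
}
```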
@@ -66,6 +65,6 @@
 If you do not configure these settings then {{es}} will attempt to automatically obtain credentials from the environment in which it is running:
 
-* Nodes running on an instance in AWS EC2 will attempt to use the EC2 Instance Metadata Service (IMDS) to obtain instance role credentials. {{es}} supports both IMDS version 1 and IMDS version 2.
+* Nodes running on an instance in AWS EC2 will attempt to use the EC2 Instance Metadata Service (IMDS) to obtain instance role credentials. {{es}} supports IMDS version 2 only.
 * Nodes running in a container in AWS ECS and AWS EKS will attempt to obtain container role credentials similarly.
 
 You can switch from using specific credentials back to the default of using the instance role or container role by removing these settings from the keystore as follows:
@@ -385,7 +384,7 @@ There are a number of storage systems that provide an S3-compatible API, and the
 
 By default {{es}} communicates with your storage system using HTTPS, and validates the repository’s certificate chain using the JVM-wide truststore. Ensure that the JVM-wide truststore includes an entry for your repository. If you wish to use unsecured HTTP communication instead of HTTPS, set `s3.client.CLIENT_NAME.protocol` to `http`.
 
-There are many systems, including some from very well-known storage vendors, which claim to offer an S3-compatible API despite failing to emulate S3’s behavior in full. If you are using such a system for your snapshots, consider using a [shared filesystem repository](shared-file-system-repository.md) based on a standardized protocol such as NFS to access your storage system instead. The `s3` repository type requires full compatibility with S3. In particular it must support the same set of API endpoints, with the same parameters, return the same errors in case of failures, and offer consistencyand performance at least as good as S3 even when accessed concurrently by multiple nodes. You will need to work with the supplier of your storage system to address any incompatibilities you encounter. Don't report {{es}} issues involving storage systems which claim to be S3-compatible unless you can demonstrate that the same issue exists when using a genuine AWS S3 repository.
+There are many systems, including some from very well-known storage vendors, which claim to offer an S3-compatible API despite failing to emulate S3’s behavior in full. If you are using such a system for your snapshots, consider using a [shared filesystem repository](shared-file-system-repository.md) based on a standardized protocol such as NFS to access your storage system instead. The `s3` repository type requires full compatibility with S3. In particular it must support the same set of API endpoints, with the same parameters, return the same errors in case of failures, and offer consistency, performance, and reliability at least as good as S3 even when accessed concurrently by multiple nodes. You will need to work with the supplier of your storage system to address any incompatibilities you encounter. Don't report {{es}} issues involving storage systems which claim to be S3-compatible unless you can demonstrate that the same issue exists when using a genuine AWS S3 repository.
 
 You can perform some basic checks of the suitability of your storage system using the [repository analysis API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze). If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible with AWS S3 and therefore unsuitable for use as a snapshot repository. However, these checks do not guarantee full compatibility.
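
As a rough sketch of such a check, a repository analysis call might look like the following; the repository name and the parameter values are illustrative, and larger values make the analysis more thorough but also more expensive:

```console
// Run a basic analysis of the repository's S3 compatibility and performance
POST /_snapshot/my_s3_repository/_analyze?blob_count=10&max_blob_size=1mb
```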
explore-analyze/find-and-organize/saved-objects.md (1 addition, 1 deletion)
@@ -60,7 +60,7 @@ Granting access to `Saved Objects Management` authorizes users to manage all sav
 
 Use import and export to move objects between different {{kib}} instances. These actions are useful when you have multiple environments for development and production. Import and export also work well when you have a large number of objects to update and want to batch the process.
 
-{{kib}} also provides import and export saved objects APIs for your [Elastic Stack deployments](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects) and [serverless projects](https://www.elastic.co/docs/api/doc/serverless/operation/operation-exportsavedobjectsdefault) to automate this process.
+{{kib}} also provides import and export saved objects APIs for your [Elastic Stack deployments](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects) and [serverless projects](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-saved-objects) to automate this process.
explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md (1 addition, 1 deletion)
@@ -106,7 +106,7 @@ Here, you’ll be able to:
 
 Inference processors added to your index-specific ML {{infer}} pipelines are normal Elasticsearch pipelines. Once created, each processor will have options to **View in Stack Management** and **Delete Pipeline**. Deleting an {{infer}} processor from within the **Content** UI deletes the pipeline and also removes its reference from your index-specific ML {{infer}} pipeline.
 
-These pipelines can also be viewed, edited, and deleted in Kibana via **Stack Management → Ingest Pipelines**, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.
+These pipelines can also be viewed, edited, and deleted in Kibana from the **Ingest Pipelines** management page, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.
 
 ## Test your ML {{infer}} pipeline [ingest-pipeline-search-inference-test-inference-pipeline]
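
As an illustrative use of the Ingest pipeline APIs mentioned above, you can inspect one of these pipelines by name before editing or deleting it; the pipeline name here is a placeholder for whatever name your index-specific ML {{infer}} pipeline actually has:

```console
// Retrieve the pipeline definition to review its processors
GET _ingest/pipeline/my-ml-inference-pipeline
```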
explore-analyze/machine-learning/nlp/ml-nlp-inference.md (2 additions, 2 deletions)
@@ -19,10 +19,10 @@ After you [deploy a trained model in your cluster](ml-nlp-deploy-models.md), you
 
 ## Add an {{infer}} processor to an ingest pipeline [ml-nlp-inference-processor]
 
-In {{kib}}, you can create and edit pipelines in **{{stack-manage-app}}** > **Ingest Pipelines**. To open **Ingest Pipelines**, find **{{stack-manage-app}}** in the main menu, or use the [global search field](../../find-and-organize/find-apps-and-objects.md).
+In {{kib}}, you can create and edit pipelines from the **Ingest Pipelines** management page. You can find this page in the main menu or using the [global search field](../../find-and-organize/find-apps-and-objects.md).
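
If you prefer the API route, a minimal pipeline with an {{infer}} processor might look like the following sketch; the pipeline name, model ID, and target field are placeholders to replace with your own values:

```console
// Create a pipeline that runs a deployed trained model on incoming documents
PUT _ingest/pipeline/my-inference-pipeline
{
  "description": "Runs a trained model against each document at ingest time",
  "processors": [
    {
      "inference": {
        "model_id": "my-trained-model-id",
        "target_field": "ml.inference"
      }
    }
  ]
}
```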
explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md (1 addition, 1 deletion)
@@ -117,7 +117,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.",
 
 You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used as an example for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file.
 
-Now create an ingest pipeline either in the [Stack management UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
+Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:
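
Once such a pipeline exists, a quick way to verify it is to simulate it against the example sentence from this page; the pipeline name and the source field name below are assumptions, so adjust them to match your actual pipeline configuration:

```console
// Simulate the NER pipeline against a single sample document
POST _ingest/pipeline/my-ner-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "text_field": "Elastic is headquartered in Mountain View, California."
      }
    }
  ]
}
```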
explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md (1 addition, 1 deletion)
@@ -116,7 +116,7 @@ Upload the file by using the [Data Visualizer](../../../manage-data/ingest/uploa
 
 Process the initial data with an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md). It adds an embedding for each passage. For this, create a text embedding ingest pipeline and then reindex the initial data with this pipeline.
 
-Now create an ingest pipeline either in the [{{stack-manage-app}} UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
+Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:
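
The reindex step mentioned above might look roughly like the following sketch; the source and destination index names and the pipeline name are placeholders for whatever you chose earlier in the walkthrough:

```console
// Reindex the uploaded passages through the text embedding pipeline
POST _reindex
{
  "source": {
    "index": "my-source-index"
  },
  "dest": {
    "index": "my-embeddings-index",
    "pipeline": "my-text-embedding-pipeline"
  }
}
```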