diff --git a/deploy-manage/api-keys/elasticsearch-api-keys.md b/deploy-manage/api-keys/elasticsearch-api-keys.md index e02ca6fc1b..5121c7d00a 100644 --- a/deploy-manage/api-keys/elasticsearch-api-keys.md +++ b/deploy-manage/api-keys/elasticsearch-api-keys.md @@ -40,7 +40,7 @@ Refer to the [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/ Refer to the [Create cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) documentation to learn more about creating cross-cluster API keys. -## Update an API key [udpate-api-key] +## Update an API key [update-api-key] To update an API key, go to the **API Keys** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md), and then click on the name of the key. You cannot update the name or the type of API key. diff --git a/deploy-manage/autoscaling/trained-model-autoscaling.md b/deploy-manage/autoscaling/trained-model-autoscaling.md index 1c6ad088d4..987a380249 100644 --- a/deploy-manage/autoscaling/trained-model-autoscaling.md +++ b/deploy-manage/autoscaling/trained-model-autoscaling.md @@ -37,7 +37,7 @@ $$$nlp-model-adaptive-resources$$$ Model allocations are independent units of work for NLP tasks. If you set the numbers of threads and allocations for a model manually, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.) -When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, a new model allocation is automatically created. When the load is low, a model allocation is automatically removed. You can explicitely set the minimum and maximum number of allocations; autoscaling will occur within these limits. +When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, a new model allocation is automatically created. When the load is low, a model allocation is automatically removed. You can explicitly set the minimum and maximum number of allocations; autoscaling will occur within these limits. ::::{note} If you set the minimum number of allocations to 1, you will be charged even if the system is not using those resources. @@ -68,7 +68,7 @@ You can enable adaptive resources for your models when starting or updating the You can choose from three levels of resource usage for your trained model deployment; autoscaling will occur within the selected level’s range. -Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section to find out the setings for the level you selected. +Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section to find out the settings for the level you selected. :::{image} /deploy-manage/images/machine-learning-ml-nlp-deployment-id-elser-v2.png :alt: ELSER deployment with adaptive resources enabled. 
diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md index a400c5e97c..a368484028 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md @@ -80,7 +80,7 @@ spec: topologyKey: kubernetes.io/hostname ``` -This is ECK default behaviour if you don’t specify any `affinity` option. To explicitly disable the default behaviour, set an empty affinity object: +This is ECK default behavior if you don’t specify any `affinity` option. To explicitly disable the default behavior, set an empty affinity object: ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md index e285e277d0..473baa79eb 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md @@ -52,7 +52,7 @@ podTemplate: linkerd.io/inject: enabled ``` -If automatic sidecar injection is enabled and [auto mounting of service account tokens](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) is not disabled on your Kubernetes cluster, examples defined elsewhere in the ECK documentation will continue to work under Linkerd without requiring any modifications. However, as the default behaviour of ECK is to enable TLS for {{es}}, {{kib}} and APM Server resources, you will not be able to view detailed traffic information from Linkerd dashboards and command-line utilities. The following sections illustrate the optional configuration necessary to enhance the integration of {{stack}} applications with Linkerd. +If automatic sidecar injection is enabled and [auto mounting of service account tokens](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) is not disabled on your Kubernetes cluster, examples defined elsewhere in the ECK documentation will continue to work under Linkerd without requiring any modifications. However, as the default behavior of ECK is to enable TLS for {{es}}, {{kib}} and APM Server resources, you will not be able to view detailed traffic information from Linkerd dashboards and command-line utilities. The following sections illustrate the optional configuration necessary to enhance the integration of {{stack}} applications with Linkerd. ### {{es}} [k8s-service-mesh-linkerd-elasticsearch] diff --git a/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md b/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md index 8de50d3200..3c536f630f 100644 --- a/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md +++ b/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md @@ -12,7 +12,7 @@ A [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/confi ECK manages a default PDB per {{es}} resource. It allows one {{es}} Pod to be taken down, as long as the cluster has a `green` health. Single-node clusters are not considered highly available and can always be disrupted. 
-In the {{es}} specification, you can change the default behaviour as follows: +In the {{es}} specification, you can change the default behavior as follows: ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 diff --git a/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md b/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md index 856f918402..8195f08144 100644 --- a/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md +++ b/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md @@ -39,5 +39,5 @@ The pre-stop lifecycle hook also tries to gracefully shut down the {{es}} node i This is done on a best effort basis. In particular requests to an {{es}} cluster already in the process of shutting down might fail if the Kubernetes service has already been removed. The script allows for `PRE_STOP_MAX_DNS_ERRORS` which default to 2 before giving up. -When using local persistent volumes a different behaviour might be desirable because the {{es}} node’s associated storage will not be available anymore on the new Kubernetes node. `PRE_STOP_SHUTDOWN_TYPE` allows to override the default shutdown type to one of the [possible values](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-shutdown-put-node). Be aware that setting it to anything other than `restart` might mean that the pre-stop hook will run longer than `terminationGracePeriodSeconds` of the Pod while moving data out of the terminating Pod and will not be able to complete unless you also adjust that value in the `podTemplate`. +When using local persistent volumes a different behavior might be desirable because the {{es}} node’s associated storage will not be available anymore on the new Kubernetes node. `PRE_STOP_SHUTDOWN_TYPE` allows to override the default shutdown type to one of the [possible values](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-shutdown-put-node). Be aware that setting it to anything other than `restart` might mean that the pre-stop hook will run longer than `terminationGracePeriodSeconds` of the Pod while moving data out of the terminating Pod and will not be able to complete unless you also adjust that value in the `podTemplate`. diff --git a/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md b/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md index 6b6dce01fc..3a03ee11ce 100644 --- a/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md @@ -17,7 +17,7 @@ When `kibanaRef` is specified, Beat tries to connect to the {{kib}} instance. If When `kubectl` is used to modify a resource, it calculates the diff between the user applied and the existing configuration. This diff has special [semantics](https://tools.ietf.org/html/rfc7396#section-1) that forces the removal of keys if they have special values. For example, if the user-applied configuration contains `some_key: null` (or equivalent `some_key: ~`), this is interpreted as an instruction to remove `some_key`. In Beats configurations, this is often a problem when it comes to defining things like [processors](beats://reference/filebeat/add-cloud-metadata.md). To avoid this problem: -* Use `some_key: {}` (empty map) or `some_key: []` (empty array) instead of `some_key: null` if doing so does not affect the behaviour. This might not be possible in all cases as some applications distinguish between null values and empty values and behave differently. 
+* Use `some_key: {}` (empty map) or `some_key: []` (empty array) instead of `some_key: null` if doing so does not affect the behavior. This might not be possible in all cases as some applications distinguish between null values and empty values and behave differently. * Instead of using `config` to define configuration inline, use `configRef` and store the configuration in a Secret. diff --git a/deploy-manage/distributed-architecture.md b/deploy-manage/distributed-architecture.md index f9cb22a41c..8553920abc 100644 --- a/deploy-manage/distributed-architecture.md +++ b/deploy-manage/distributed-architecture.md @@ -21,6 +21,6 @@ The topics in this section provides information about the architecture of {{es}} * [Reading and writing documents](distributed-architecture/reading-and-writing-documents.md): Learn how {{es}} replicates read and write operations across shards and shard copies. * [Shard allocation, relocation, and recovery](distributed-architecture/shard-allocation-relocation-recovery.md): Learn how {{es}} allocates and balances shards across nodes. * [Shard allocation awareness](distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md): Learn how to use custom node attributes to distribute shards across different racks or availability zones. -* [Disocvery and cluster formation](distributed-architecture/discovery-cluster-formation.md): Learn about the cluster formation process including voting, adding nodes and publishing the cluster state. +* [Discovery and cluster formation](distributed-architecture/discovery-cluster-formation.md): Learn about the cluster formation process including voting, adding nodes and publishing the cluster state. * [Shard request cache](/deploy-manage/distributed-architecture/shard-request-cache.md): Learn how {{es}} caches search requests to improve performance. * [{{kib}} task management](distributed-architecture/kibana-tasks-management.md): Learn how {{kib}} runs background tasks and distribute work across multiple {{kib}} instances to be persistent and scale with your deployment. \ No newline at end of file diff --git a/deploy-manage/distributed-architecture/reading-and-writing-documents.md b/deploy-manage/distributed-architecture/reading-and-writing-documents.md index 1febca4f48..d5b556d992 100644 --- a/deploy-manage/distributed-architecture/reading-and-writing-documents.md +++ b/deploy-manage/distributed-architecture/reading-and-writing-documents.md @@ -46,7 +46,7 @@ In the case that the primary itself fails, the node hosting the primary will sen Once the operation has been successfully performed on the primary, the primary has to deal with potential failures when executing it on the replica shards. This may be caused by an actual failure on the replica or due to a network issue preventing the operation from reaching the replica (or preventing the replica from responding). All of these share the same end result: a replica which is part of the in-sync replica set misses an operation that is about to be acknowledged. In order to avoid violating the invariant, the primary sends a message to the master requesting that the problematic shard be removed from the in-sync replica set. Only once removal of the shard has been acknowledged by the master does the primary acknowledge the operation. Note that the master will also instruct another node to start building a new shard copy in order to restore the system to a healthy state. 
$$$demoted-primary$$$ -While forwarding an operation to the replicas, the primary will use the replicas to validate that it is still the active primary. If the primary has been isolated due to a network partition (or a long GC) it may continue to process incoming indexing operations before realising that it has been demoted. Operations that come from a stale primary will be rejected by the replicas. When the primary receives a response from the replica rejecting its request because it is no longer the primary then it will reach out to the master and will learn that it has been replaced. The operation is then routed to the new primary. +While forwarding an operation to the replicas, the primary will use the replicas to validate that it is still the active primary. If the primary has been isolated due to a network partition (or a long GC) it may continue to process incoming indexing operations before realizing that it has been demoted. Operations that come from a stale primary will be rejected by the replicas. When the primary receives a response from the replica rejecting its request because it is no longer the primary then it will reach out to the master and will learn that it has been replaced. The operation is then routed to the new primary. ::::{admonition} What happens if there are no replicas? This is a valid scenario that can happen due to index configuration or simply because all the replicas have failed. In that case the primary is processing operations without any external validation, which may seem problematic. On the other hand, the primary cannot fail other shards on its own but request the master to do so on its behalf. This means that the master knows that the primary is the only single good copy. We are therefore guaranteed that the master will not promote any other (out-of-date) shard copy to be a new primary and that any operation indexed into the primary will not be lost. Of course, since at that point we are running with only single copy of the data, physical hardware issues can cause data loss. See [Active shards](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) for some mitigation options. diff --git a/deploy-manage/distributed-architecture/shard-request-cache.md b/deploy-manage/distributed-architecture/shard-request-cache.md index d513a4609a..f7d41621d2 100644 --- a/deploy-manage/distributed-architecture/shard-request-cache.md +++ b/deploy-manage/distributed-architecture/shard-request-cache.md @@ -85,7 +85,7 @@ Requests where `size` is greater than 0 will not be cached even if the request c ## Cache key [_cache_key] -A hash of the whole JSON body is used as the cache key. This means that if the JSON changes — for instance if keys are output in a different order — then the cache key will not be recognised. +A hash of the whole JSON body is used as the cache key. This means that if the JSON changes — for instance if keys are output in a different order — then the cache key will not be recognized. ::::{tip} Most JSON libraries support a *canonical* mode which ensures that JSON keys are always emitted in the same order. This canonical mode can be used in the application to ensure that a request is always serialized in the same way. 
diff --git a/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md b/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md index de168ad578..6736ea6a67 100644 --- a/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md +++ b/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md @@ -30,7 +30,7 @@ Because they are not resilient to any failures, we do not recommend using one-no ## Two-node clusters [high-availability-cluster-design-two-nodes] -If you have two nodes, we recommend they both be data nodes. You should also ensure every shard is stored redundantly on both nodes by setting [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md) to `1` on every index that is not a [searchable snapshot index](../../tools/snapshot-and-restore/searchable-snapshots.md). This is the default behaviour but may be overridden by an [index template](../../../manage-data/data-store/templates.md). [Auto-expand replicas](elasticsearch://reference/elasticsearch/index-settings/index-modules.md) can also achieve the same thing, but it’s not necessary to use this feature in such a small cluster. +If you have two nodes, we recommend they both be data nodes. You should also ensure every shard is stored redundantly on both nodes by setting [`index.number_of_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md) to `1` on every index that is not a [searchable snapshot index](../../tools/snapshot-and-restore/searchable-snapshots.md). This is the default behavior but may be overridden by an [index template](../../../manage-data/data-store/templates.md). [Auto-expand replicas](elasticsearch://reference/elasticsearch/index-settings/index-modules.md) can also achieve the same thing, but it’s not necessary to use this feature in such a small cluster. We recommend you set only one of your two nodes to be [master-eligible](../../distributed-architecture/clusters-nodes-shards/node-roles.md#master-node-role). This means you can be certain which of your nodes is the elected master of the cluster. The cluster can tolerate the loss of the other master-ineligible node. If you set both nodes to master-eligible, two nodes are required for a master election. Since the election will fail if either node is unavailable, your cluster cannot reliably tolerate the loss of either node. diff --git a/deploy-manage/production-guidance/kibana-alerting-production-considerations.md b/deploy-manage/production-guidance/kibana-alerting-production-considerations.md index 6b54ce2296..d94be048c7 100644 --- a/deploy-manage/production-guidance/kibana-alerting-production-considerations.md +++ b/deploy-manage/production-guidance/kibana-alerting-production-considerations.md @@ -81,7 +81,7 @@ Another related setting is `xpack.alerting.rules.maxScheduledPerMinute`, which l ### Rules that run for a long time [_rules_that_run_for_a_long_time] -Rules that run for a long time typically do so because they are issuing resource-intensive {{es}} queries or performing CPU-intensive processing. This can block the event loop, making {{kib}} inaccessible while the rule runs. By default, rule processing is cancelled after `5m` but this can be overriden using the `xpack.alerting.rules.run.timeout` configuration. This value can also be configured per rule type using `xpack.alerting.rules.run.ruleTypeOverrides`. 
For example, the following configuration sets the global timeout value to `1m` while allowing **Index Threshold** rules to run for `10m` before being cancelled. +Rules that run for a long time typically do so because they are issuing resource-intensive {{es}} queries or performing CPU-intensive processing. This can block the event loop, making {{kib}} inaccessible while the rule runs. By default, rule processing is cancelled after `5m` but this can be overridden using the `xpack.alerting.rules.run.timeout` configuration. This value can also be configured per rule type using `xpack.alerting.rules.run.ruleTypeOverrides`. For example, the following configuration sets the global timeout value to `1m` while allowing **Index Threshold** rules to run for `10m` before being cancelled. ```yaml xpack.alerting.rules.run: diff --git a/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md b/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md index f95a59176a..ee62a17729 100644 --- a/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md +++ b/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md @@ -131,7 +131,7 @@ As demonstrated in [Evaluate your capacity estimation](../../troubleshoot/kibana These estimates are based on historical data and should not be used as predictions, but can be used as a rough guide when scaling the system. -We recommend provisioning enough {{kib}} instances to ensure a buffer between the observed maximum throughput (as estimated under `observed.max_throughput_per_minute`) and the average required throughput (as estimated under `observed.avg_required_throughput_per_minute`). Otherwise there might be insufficient capacity to handle spikes of ad-hoc tasks. How much of a buffer is needed largely depends on your use case, but keep in mind that estimated throughput takes into account recent spikes and, as long as they are representative of your system’s behaviour, shouldn’t require much of a buffer. +We recommend provisioning enough {{kib}} instances to ensure a buffer between the observed maximum throughput (as estimated under `observed.max_throughput_per_minute`) and the average required throughput (as estimated under `observed.avg_required_throughput_per_minute`). Otherwise there might be insufficient capacity to handle spikes of ad-hoc tasks. How much of a buffer is needed largely depends on your use case, but keep in mind that estimated throughput takes into account recent spikes and, as long as they are representative of your system’s behavior, shouldn’t require much of a buffer. We recommend provisioning at least as many {{kib}} instances as proposed by `proposed.provisioned_kibana`, but keep in mind that this number is based on the estimated required throughput, which is based on average historical performance, and cannot accurately predict future requirements. diff --git a/deploy-manage/production-guidance/optimize-performance/size-shards.md b/deploy-manage/production-guidance/optimize-performance/size-shards.md index 616e7b12b0..03948f5ec6 100644 --- a/deploy-manage/production-guidance/optimize-performance/size-shards.md +++ b/deploy-manage/production-guidance/optimize-performance/size-shards.md @@ -66,7 +66,7 @@ Every index and every shard requires some memory and CPU resources. In most case Segments play a big role in a shard’s resource usage. Most shards contain several segments, which store its index data. 
{{es}} keeps some segment metadata in heap memory so it can be quickly retrieved for searches. As a shard grows, its segments are [merged](elasticsearch://reference/elasticsearch/index-settings/merge.md) into fewer, larger segments. This decreases the number of segments, which means less metadata is kept in heap memory. -Every mapped field also carries some overhead in terms of memory usage and disk space. By default {{es}} will automatically create a mapping for every field in every document it indexes, but you can switch off this behaviour to [take control of your mappings](../../../manage-data/data-store/mapping/explicit-mapping.md). +Every mapped field also carries some overhead in terms of memory usage and disk space. By default {{es}} will automatically create a mapping for every field in every document it indexes, but you can switch off this behavior to [take control of your mappings](../../../manage-data/data-store/mapping/explicit-mapping.md). Moreover every segment requires a small amount of heap memory for each mapped field. This per-segment-per-field heap overhead includes a copy of the field name, encoded using ISO-8859-1 if applicable or UTF-16 otherwise. Usually this is not noticeable, but you may need to account for this overhead if your shards have high segment counts and the corresponding mappings contain high field counts and/or very long field names. diff --git a/deploy-manage/remote-clusters/remote-clusters-settings.md b/deploy-manage/remote-clusters/remote-clusters-settings.md index 20bdabbf2c..683b3816ef 100644 --- a/deploy-manage/remote-clusters/remote-clusters-settings.md +++ b/deploy-manage/remote-clusters/remote-clusters-settings.md @@ -31,10 +31,10 @@ In {{es}} 8.15, the default value for `skip_unavailable` was changed from `false : Sets the time interval between regular application-level ping messages that are sent to try and keep remote cluster connections alive. If set to `-1`, application-level ping messages to this remote cluster are not sent. If unset, application-level ping messages are sent according to the global `transport.ping_schedule` setting, which defaults to `-1` meaning that pings are not sent. It is preferable to correctly configure TCP keep-alives instead of configuring a `ping_schedule`, because TCP keep-alives are handled by the operating system and not by {{es}}. By default {{es}} enables TCP keep-alives on remote cluster connections. Remote cluster connections are transport connections so the `transport.tcp.*` [advanced settings](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-settings) regarding TCP keep-alives apply to them. `cluster.remote..transport.compress` -: Per-cluster setting that enables you to configure compression for requests to a specific remote cluster. The handling cluster will automatically compress responses to compressed requests. The setting options are `true`, `indexing_data`, and `false`. If unset, defaults to the behaviour specified by the node-wide `transport.compress` setting. See the [documentation for the `transport.compress` setting](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-settings-compress) for further information. +: Per-cluster setting that enables you to configure compression for requests to a specific remote cluster. The handling cluster will automatically compress responses to compressed requests. The setting options are `true`, `indexing_data`, and `false`. 
If unset, defaults to the behavior specified by the node-wide `transport.compress` setting. See the [documentation for the `transport.compress` setting](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-settings-compress) for further information. `cluster.remote..transport.compression_scheme` -: Per-cluster setting that enables you to configure the compression scheme for requests to a specific cluster if those requests are selected to be compressed by to the `cluster.remote..transport.compress` setting. The handling cluster will automatically use the same compression scheme for responses as for the corresponding requests. The setting options are `deflate` and `lz4`. If unset, defaults to the behaviour specified by the node-wide `transport.compression_scheme` setting. See the [documentation for the `transport.compression_scheme` setting](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-settings-compression-scheme) for further information. +: Per-cluster setting that enables you to configure the compression scheme for requests to a specific cluster if those requests are selected to be compressed by the `cluster.remote..transport.compress` setting. The handling cluster will automatically use the same compression scheme for responses as for the corresponding requests. The setting options are `deflate` and `lz4`. If unset, defaults to the behavior specified by the node-wide `transport.compression_scheme` setting. See the [documentation for the `transport.compression_scheme` setting](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-settings-compression-scheme) for further information. $$$remote-cluster-credentials-setting$$$ diff --git a/deploy-manage/security/k8s-https-settings.md b/deploy-manage/security/k8s-https-settings.md index cada17e5dc..4f878a08ad 100644 --- a/deploy-manage/security/k8s-https-settings.md +++ b/deploy-manage/security/k8s-https-settings.md @@ -176,7 +176,7 @@ If your `tls.crt` is signed by an intermediate CA you may need both the Root CA kubectl create secret generic my-cert --from-file=ca.crt --from-file=tls.crt --from-file=tls.key ``` -Alternatively you can also bring your own CA certificate including a private key and let ECK issue certificates with it. Any certificate SANs you have configured as decribed in [Reserve static IP and custom domain](#k8s-static-ip-custom-domain) will also be respected when issuing certificates with this CA certificate. +Alternatively you can also bring your own CA certificate including a private key and let ECK issue certificates with it. Any certificate SANs you have configured as described in [Reserve static IP and custom domain](#k8s-static-ip-custom-domain) will also be respected when issuing certificates with this CA certificate. Create a Kubernetes secret with: diff --git a/deploy-manage/tools/snapshot-and-restore/s3-repository.md b/deploy-manage/tools/snapshot-and-restore/s3-repository.md index 7d6caac4da..aff1a146e3 100644 --- a/deploy-manage/tools/snapshot-and-restore/s3-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/s3-repository.md @@ -384,7 +384,7 @@ By default {{es}} communicates with your storage system using HTTPS, and validat [MinIO](https://minio.io) is an example of a storage system that provides an S3-compatible API. The `s3` repository type allows {{es}} to work with MinIO-backed repositories as well as repositories stored on AWS S3.
Other S3-compatible storage systems may also work with {{es}}, but these are not covered by the {{es}} test suite. -There are many systems, including some from very well-known storage vendors, which claim to offer an S3-compatible API despite failing to emulate S3’s behaviour in full. If you are using such a system for your snapshots, consider using a [shared filesystem repository](shared-file-system-repository.md) based on a standardized protocol such as NFS to access your storage system instead. The `s3` repository type requires full compatibility with S3. In particular it must support the same set of API endpoints, with the same parameters, return the same errors in case of failures, and offer consistency and performance at least as good as S3 even when accessed concurrently by multiple nodes. You will need to work with the supplier of your storage system to address any incompatibilities you encounter. Don't report {{es}} issues involving storage systems which claim to be S3-compatible unless you can demonstrate that the same issue exists when using a genuine AWS S3 repository. +There are many systems, including some from very well-known storage vendors, which claim to offer an S3-compatible API despite failing to emulate S3’s behavior in full. If you are using such a system for your snapshots, consider using a [shared filesystem repository](shared-file-system-repository.md) based on a standardized protocol such as NFS to access your storage system instead. The `s3` repository type requires full compatibility with S3. In particular it must support the same set of API endpoints, with the same parameters, return the same errors in case of failures, and offer consistency and performance at least as good as S3 even when accessed concurrently by multiple nodes. You will need to work with the supplier of your storage system to address any incompatibilities you encounter. Don't report {{es}} issues involving storage systems which claim to be S3-compatible unless you can demonstrate that the same issue exists when using a genuine AWS S3 repository. You can perform some basic checks of the suitability of your storage system using the [repository analysis API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze). If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible with AWS S3 and therefore unsuitable for use as a snapshot repository. However, these checks do not guarantee full compatibility. diff --git a/deploy-manage/tools/snapshot-and-restore/self-managed.md b/deploy-manage/tools/snapshot-and-restore/self-managed.md index d3a627a591..b1d4f67f29 100644 --- a/deploy-manage/tools/snapshot-and-restore/self-managed.md +++ b/deploy-manage/tools/snapshot-and-restore/self-managed.md @@ -64,7 +64,7 @@ Other repository types are available through official plugins: You can also use alternative storage implementations with these repository types, as long as the alternative implementation is fully compatible. For instance, [MinIO](https://minio.io) provides an alternative implementation of the AWS S3 API and you can use MinIO with the [`s3` repository type](s3-repository.md). -Note that some storage systems claim to be compatible with these repository types without emulating their behaviour in full. {{es}} requires full compatibility. 
In particular the alternative implementation must support the same set of API endpoints, return the same errors in case of failures, and offer equivalent consistency guarantees and performance even when accessed concurrently by multiple nodes. Incompatible error codes, consistency or performance may be particularly hard to track down since errors, consistency failures, and performance issues are usually rare and hard to reproduce. +Note that some storage systems claim to be compatible with these repository types without emulating their behavior in full. {{es}} requires full compatibility. In particular the alternative implementation must support the same set of API endpoints, return the same errors in case of failures, and offer equivalent consistency guarantees and performance even when accessed concurrently by multiple nodes. Incompatible error codes, consistency or performance may be particularly hard to track down since errors, consistency failures, and performance issues are usually rare and hard to reproduce. You can perform some basic checks of the suitability of your storage system using the [Repository analysis](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze) API. If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible and is therefore unsuitable for use as a snapshot repository. You will need to work with the supplier of your storage system to address any incompatibilities you encounter. diff --git a/deploy-manage/upgrade/prepare-to-upgrade.md b/deploy-manage/upgrade/prepare-to-upgrade.md index 34068d788e..32ca095b1a 100644 --- a/deploy-manage/upgrade/prepare-to-upgrade.md +++ b/deploy-manage/upgrade/prepare-to-upgrade.md @@ -57,7 +57,7 @@ Review the following best practices to upgrade your deployments. 1. Before upgrading your production deployment, test the upgrade using an isolated environment. Ensure the test and production environments use the same settings. :::{note} - The upgraded version of {{es}} may interact with its environment in different ways from the version you are currently running. It is possible that your environment behaves incorrectly in a way that does not matter to the version of {{es}} that you are currently running, but which does matter to the upgraded version. In this case, the upgraded version will not work correctly until you address the incorrect behaviour in your environment. + The upgraded version of {{es}} may interact with its environment in different ways from the version you are currently running. It is possible that your environment behaves incorrectly in a way that does not matter to the version of {{es}} that you are currently running, but which does matter to the upgraded version. In this case, the upgraded version will not work correctly until you address the incorrect behavior in your environment. ::: :::{tip} diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md b/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md index a17dd28aed..dc863003e4 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md @@ -27,7 +27,7 @@ xpack.security.operator_privileges.enabled: true If the node is already running before you make this change, you must restart the node for the feature to take effect. 
::::{warning} -The feature needs to be either enabled or disabled consistently across all nodes in a cluster. Otherwise, you can get inconsistent behaviour depending on which node first receives a request and which node executes it. +The feature needs to be either enabled or disabled consistently across all nodes in a cluster. Otherwise, you can get inconsistent behavior depending on which node first receives a request and which node executes it. :::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/role-restriction.md b/deploy-manage/users-roles/cluster-or-deployment-auth/role-restriction.md index c927415495..f525417209 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/role-restriction.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/role-restriction.md @@ -10,7 +10,7 @@ applies_to: # Role restriction [role-restriction] -Role restriction can be used to specify conditions under which a role should be effective. When conditions are not met, the role will be disabled, which will result in access being denied. Not specifying restriction means the role is not restricted and thus always effective. This is the default behaviour. +Role restriction can be used to specify conditions under which a role should be effective. When conditions are not met, the role will be disabled, which will result in access being denied. Not specifying restriction means the role is not restricted and thus always effective. This is the default behavior. Currently, the role restriction is only supported for [API keys](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key), with limitation that the API key can only have a single role descriptor. diff --git a/explore-analyze/alerts-cases/watcher/input-search.md b/explore-analyze/alerts-cases/watcher/input-search.md index 277e4fb03f..e1d44fa59b 100644 --- a/explore-analyze/alerts-cases/watcher/input-search.md +++ b/explore-analyze/alerts-cases/watcher/input-search.md @@ -138,7 +138,7 @@ The total number of hits in the search response is returned as an object in the | Name | Required | Default | Description | | --- | --- | --- | --- | | `request.search_type` | no | `query_then_fetch` | The [type](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) of search request to perform. Valid values are: `dfs_query_then_fetch` and `query_then_fetch`. The Elasticsearch default is `query_then_fetch`. | -| `request.indices` | no | - | The indices to search. If omitted, all indices are searched, which is the default behaviour in Elasticsearch. | +| `request.indices` | no | - | The indices to search. If omitted, all indices are searched, which is the default behavior in Elasticsearch. | | `request.body` | no | - | The body of the request. The [request body](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) follows the same structure you normally send in the body of a REST `_search` request. The body can be static text or include `mustache` [templates](how-watcher-works.md#templates). | | `request.template` | no | - | The body of the search template. See [configure templates](how-watcher-works.md#templates) for more information. | | `request.indices_options.expand_wildcards` | no | `open` | How to expand wildcards. Valid values are: `all`, `open`, `closed`, and `none` See [`expand_wildcards`](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#api-multi-index) for more information. 
| diff --git a/manage-data/data-store/data-streams/reindex-tsds.md b/manage-data/data-store/data-streams/reindex-tsds.md index 69384cee58..252d59e4b7 100644 --- a/manage-data/data-store/data-streams/reindex-tsds.md +++ b/manage-data/data-store/data-streams/reindex-tsds.md @@ -27,7 +27,7 @@ To avoid these limitations, use the process that is outlined below: 3. Set `index.number_of_replicas` to zero and unset the `index.lifecycle.name` index setting. 3. Run the reindex operation to completion. -4. Revert the overriden index settings in the destination index template. +4. Revert the overridden index settings in the destination index template. 5. Invoke the `rollover` api to create a new backing index that can receive new documents. ::::{note} diff --git a/manage-data/data-store/mapping/map-runtime-field.md b/manage-data/data-store/mapping/map-runtime-field.md index 9d56e5418f..840de8ec95 100644 --- a/manage-data/data-store/mapping/map-runtime-field.md +++ b/manage-data/data-store/mapping/map-runtime-field.md @@ -100,7 +100,7 @@ You can alternatively prefix the field you want to retrieve values for with `par ## Ignoring script errors on runtime fields [runtime-errorhandling] -Scripts can throw errors at runtime, e.g. on accessing missing or invalid values in documents or because of performing invalid operations. The `on_script_error` parameter can be used to control error behaviour when this happens. Setting this parameter to `continue` will have the effect of silently ignoring all errors on this runtime field. The default `fail` value will cause a shard failure which gets reported in the search response. +Scripts can throw errors at runtime, e.g. on accessing missing or invalid values in documents or because of performing invalid operations. The `on_script_error` parameter can be used to control error behavior when this happens. Setting this parameter to `continue` will have the effect of silently ignoring all errors on this runtime field. The default `fail` value will cause a shard failure which gets reported in the search response. ## Updating and removing runtime fields [runtime-updating-scripts] diff --git a/manage-data/data-store/text-analysis/configuring-built-in-analyzers.md b/manage-data/data-store/text-analysis/configuring-built-in-analyzers.md index a0f26acdc2..ace6fa98fb 100644 --- a/manage-data/data-store/text-analysis/configuring-built-in-analyzers.md +++ b/manage-data/data-store/text-analysis/configuring-built-in-analyzers.md @@ -8,7 +8,7 @@ applies_to: # Configuring built-in analyzers [configuring-analyzers] -The built-in analyzers can be used directly without any configuration. Some of them, however, support configuration options to alter their behaviour. For instance, the [`standard` analyzer](elasticsearch://reference/text-analysis/analysis-standard-analyzer.md) can be configured to support a list of stop words: +The built-in analyzers can be used directly without any configuration. Some of them, however, support configuration options to alter their behavior. 
For instance, the [`standard` analyzer](elasticsearch://reference/text-analysis/analysis-standard-analyzer.md) can be configured to support a list of stop words: ```console PUT my-index-000001 diff --git a/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md b/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md index 8c0487ca3d..945748beeb 100644 --- a/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md +++ b/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md @@ -58,7 +58,7 @@ We define 4 different types of retention: * The effective retention, or `effective_retention`, which is the retention applied at a data stream on a given moment. Effective retention cannot be set, it is derived by taking into account all the configured retention listed above and is calculated as it is described [here](#effective-retention-calculation). ::::{note} -Global default and max retention do not apply to data streams internal to elastic. Internal data streams are recognised either by having the `system` flag set to `true` or if their name is prefixed with a dot (`.`). +Global default and max retention do not apply to data streams internal to elastic. Internal data streams are recognized either by having the `system` flag set to `true` or if their name is prefixed with a dot (`.`). :::: diff --git a/reference/fleet/es-output-settings.md b/reference/fleet/es-output-settings.md index 2add2ed796..8813b8d046 100644 --- a/reference/fleet/es-output-settings.md +++ b/reference/fleet/es-output-settings.md @@ -144,7 +144,7 @@ Specify these settings to send data over a secure connection to {{es}}. In the { For descriptions of each setting, refer to [Advanced YAML configuration](#es-output-settings-yaml-config). For the `queue.mem.events`, `queue.mem.flush.min_events` and `queue.mem.flush.timeout` settings, refer to the [internal queue configuration settings](beats://reference/filebeat/configuring-internal-queue.md) in the {{filebeat}} documentation. -`Balanced` represents the new default setting (out of the box behaviour). Relative to `Balanced`, `Optimized for throughput` setting will improve EPS by 4 times, `Optimized for Scale` will perform on par and `Optimized for Latency` will show a 20% degredation in EPS (Events Per Second). These relative performance numbers were calculated from a performance testbed which operates in a controlled setting ingesting a large log file. +`Balanced` represents the new default setting (out of the box behavior). Relative to `Balanced`, `Optimized for throughput` setting will improve EPS by 4 times, `Optimized for Scale` will perform on par and `Optimized for Latency` will show a 20% degradation in EPS (Events Per Second). These relative performance numbers were calculated from a performance testbed which operates in a controlled setting ingesting a large log file. As mentioned, the `custom` preset allows you to input your own set of parameters for a finer tuning of performance. The following table is a summary of a few data points and how the resulting EPS compares to the `Balanced` setting mentioned above. diff --git a/reference/fleet/hints-annotations-autodiscovery.md b/reference/fleet/hints-annotations-autodiscovery.md index c1d6148bbe..2ff9715336 100644 --- a/reference/fleet/hints-annotations-autodiscovery.md +++ b/reference/fleet/hints-annotations-autodiscovery.md @@ -241,7 +241,7 @@ The log collection for Kubernetes autodiscovered pods can be supported by using 3. Deploy Elastic Agent Manifest 4.
Elastic Agent should be able to discover all containers inside kuernetes cluster and to collect available logs. -The previous default behaviour can be disabled with `hints.default_container_logs: false`. So this will disable the automatic logs collection from all discovered pods. Users need specifically to annotate their pod with following annotations: +The previous default behavior can be disabled with `hints.default_container_logs: false`. So this will disable the automatic logs collection from all discovered pods. Users need to specifically annotate their pods with the following annotations: ```yaml annotations: diff --git a/reference/fleet/kubernetes-provider.md b/reference/fleet/kubernetes-provider.md index a110bf636f..2b35ff255c 100644 --- a/reference/fleet/kubernetes-provider.md +++ b/reference/fleet/kubernetes-provider.md @@ -78,7 +78,7 @@ as well as using the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERN `add_resource_metadata` : (Optional) Specify filters and configration for the extra metadata, that will be added to the event. Configuration parameters: - * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behaviour `include_labels`, `exclude_labels` and `include_annotations` can be defined. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`. Wildcards are supported in these settings by using `use_regex_include: true` in combination with `include_labels`, and respectively by setting `use_regex_exclude: true` in combination with `exclude_labels`. + * `node` or `namespace`: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behavior, `include_labels`, `exclude_labels` and `include_annotations` can be defined. These settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. The enrichment of `node` or `namespace` metadata can be individually disabled by setting `enabled: false`. Wildcards are supported in these settings by using `use_regex_include: true` in combination with `include_labels`, and respectively by setting `use_regex_exclude: true` in combination with `exclude_labels`. * `deployment`: If resource is `pod` and it is created from a `deployment`, by default the deployment name isn’t added, this can be enabled by setting `deployment: true`. * `cronjob`: If resource is `pod` and it is created from a `cronjob`, by default the cronjob name isn’t added, this can be enabled by setting `cronjob: true`. Example: diff --git a/reference/index.md b/reference/index.md index 816b6b6839..f8432064ff 100644 --- a/reference/index.md +++ b/reference/index.md @@ -20,7 +20,7 @@ Explore the reference documentation for [Elastic APIs]({{apis}}). ## Security -Access detailed reference documentation on field and object schemas as well as the diffferent commands used to manage and troubleshoot Elastic Endpoint. +Access detailed reference documentation on field and object schemas as well as the different commands used to manage and troubleshoot Elastic Endpoint.
**Learn more in [Security](security/index.md)** diff --git a/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md b/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md index 55f5fbfc79..953c0b30f4 100644 --- a/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md +++ b/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md @@ -33,7 +33,7 @@ This approach requires minimal configuration as the CloudFormation template crea * `S3BackupMode`: Source record backup in Amazon S3, failed data only or all data. Default is `FailedDataOnly`. * `S3BufferInterval`: The Kinesis Firehose S3 buffer interval, in seconds. Default is `300`. * `S3BufferSize`: The Kinesis Firehose S3 buffer size, in MiB. Default is `5`. -* `S3BackupBucketARN`: By default, an S3 bucket for backup will be created. You can override this behaviour by providing an ARN of an existing S3 bucket that ensures the data can be recovered if record processing transformation does not produce the desired results. +* `S3BackupBucketARN`: By default, an S3 bucket for backup will be created. You can override this behavior by providing an ARN of an existing S3 bucket that ensures the data can be recovered if record processing transformation does not produce the desired results. * `Attributes`: List of attribute name-value pairs for HTTP endpoint separated by commas. For example "name1=value1,name2=value2". :::: diff --git a/solutions/search/ranking/learning-to-rank-ltr.md b/solutions/search/ranking/learning-to-rank-ltr.md index 211b8bedee..dc30c7acd0 100644 --- a/solutions/search/ranking/learning-to-rank-ltr.md +++ b/solutions/search/ranking/learning-to-rank-ltr.md @@ -29,7 +29,7 @@ In addition to the list of documents to sort, the LTR function also requires a s ## Judgment list [learning-to-rank-judgement-list] -The LTR model is usually trained on a judgment list, which is a set of queries and documents with a relevance grade. Judgment lists can be human or machine generated: they’re commonly populated from behavioural analytics, often with human moderation. Judgment lists determine the ideal ordering of results for a given search query. The goal of LTR is to fit the model to the judgment list rankings as closely as possible for new queries and documents. +The LTR model is usually trained on a judgment list, which is a set of queries and documents with a relevance grade. Judgment lists can be human or machine generated: they’re commonly populated from behavioral analytics, often with human moderation. Judgment lists determine the ideal ordering of results for a given search query. The goal of LTR is to fit the model to the judgment list rankings as closely as possible for new queries and documents. The judgment list is the main input used to train the model. It consists of a dataset that contains pairs of queries and documents, along with their corresponding relevance labels. The relevance judgment is typically either a binary (relevant/irrelevant) or a more granular label, such as a grade between 0 (completely irrelevant) to 4 (highly relevant). The example below uses a graded relevance judgment. 
diff --git a/solutions/search/search-pipelines.md b/solutions/search/search-pipelines.md index d6f3bd1d33..5d3416f0cd 100644 --- a/solutions/search/search-pipelines.md +++ b/solutions/search/search-pipelines.md @@ -64,7 +64,7 @@ Aside from the pipeline itself, you have a few configuration options which contr For Elastic web crawler and connectors, you can opt in or out per index. These settings are stored in Elasticsearch in the `.elastic-connectors` index, in the document that corresponds to the specific index. These settings can be changed there directly, or through the Kibana UI at **Search > Content > Indices > > Pipelines > Settings**. -You can also change the deployment wide defaults. These settings are stored in the Elasticsearch mapping for `.elastic-connectors` in the `_meta` section. These settings can be changed there directly, or from the Kibana UI at **Search > Content > Settings** tab. Changing the deployment wide defaults will not impact any existing indices, but will only impact any newly created indices defaults. Those defaults will still be able to be overriden by the index-specific settings. +You can also change the deployment wide defaults. These settings are stored in the Elasticsearch mapping for `.elastic-connectors` in the `_meta` section. These settings can be changed there directly, or from the Kibana UI at **Search > Content > Settings** tab. Changing the deployment wide defaults will not impact any existing indices, but will only impact any newly created indices defaults. Those defaults will still be able to be overridden by the index-specific settings. ### Using the API [ingest-pipeline-search-pipeline-settings-using-the-api] diff --git a/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md b/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md index d85106b8c1..7b8aae9957 100644 --- a/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md +++ b/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md @@ -28,7 +28,7 @@ If you're using {{ech}}, you can use AutoOps to monitor your cluster. AutoOps si To troubleshoot a cluster in this state, first ensure the cluster has a [stable master](discovery-troubleshooting.md). Next, focus on the nodes unexpectedly leaving the cluster ahead of all other issues. It will not be possible to solve other issues until the cluster has a stable master node and stable node membership. -Diagnostics and statistics are usually not useful in an unstable cluster. These tools only offer a view of the state of the cluster at a single point in time. Instead, look at the cluster logs to see the pattern of behaviour over time. Focus particularly on logs from the elected master. When a node leaves the cluster, logs for the elected master include a message like this (with line breaks added to make it easier to read): +Diagnostics and statistics are usually not useful in an unstable cluster. These tools only offer a view of the state of the cluster at a single point in time. Instead, look at the cluster logs to see the pattern of behavior over time. Focus particularly on logs from the elected master. When a node leaves the cluster, logs for the elected master include a message like this (with line breaks added to make it easier to read): ```text [2022-03-21T11:02:35,513][INFO ][o.e.c.c.NodeLeftExecutor] [instance-0000000000]