diff --git a/cloud-account/multifactor-authentication.md b/cloud-account/multifactor-authentication.md index b4ff172ccc..64021cd94f 100644 --- a/cloud-account/multifactor-authentication.md +++ b/cloud-account/multifactor-authentication.md @@ -60,7 +60,7 @@ To enable multifactor authentication using an authenticator app, you must verify You can remove a multifactor authentication method after it’s added by clicking **Remove**. -Before you remove an authentication method, you must set up an alternate method. If you can’t use any of your configured authentication methods — for example, if your device is lost or stolen — then [contact support](../troubleshoot/index.md). +Before you remove an authentication method, you must set up an alternate method. If you can’t use any of your configured authentication methods — for example, if your device is lost or stolen — then [contact support](../troubleshoot/index.md). ## Frequently asked questions [ec-account-security-mfa-faq] diff --git a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md index 8af1f502bb..0abf5e50e5 100644 --- a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md @@ -39,13 +39,13 @@ Data Transfer accounts for the volume of data (payload) going into, out of, and We meter and bill data transfer using three dimensions: -1. Data in (free) +1. Data in (free) : *Data in* accounts for all of the traffic going into the deployment. It includes index requests with data payload, as well as queries sent to the deployment (although the byte size of the latter is typically much smaller). -2. Data out +2. Data out : *Data out* accounts for all of the traffic coming out of the deployment. This includes search results, as well as monitoring data sent from the deployment. 
The same rate applies regardless of the destination of the data, whether to the internet, to another region, or to a cloud provider account in the same region. Data coming out of the deployment through AWS PrivateLink, GCP Private Service Connect, or Azure Private Link, is also considered *Data out*. -3. Data inter-node +3. Data inter-node : *Data inter-node* accounts for all of the traffic sent between the components of the deployment. This includes the data sync between nodes of a cluster which is managed automatically by {{es}} cluster sharding. It also includes data related to search queries executed across multiple nodes of a cluster. Note that single-node {{es}} clusters typically have lower charges, but may still incur inter-node charges accounting for data exchanged with {{kib}} nodes or other nodes, such as machine learning or APM. We provide a free allowance of 100GB per month, which includes the sum of *data out* and *data inter-node*, across all deployments in the account. Once this threshold is passed, a charge is applied for any data transfer used in excess of the 100GB monthly free allowance. @@ -71,7 +71,7 @@ Storage costs are tied to the cost of storing the backup snapshots in the underl As is common with Cloud providers, we meter and bill snapshot storage using two dimensions: -1. Storage size (GB/month) +1. Storage size (GB/month) : This is calculated by metering the storage space (GBs) occupied by all snapshots of all deployments tied to an account. The same unit price applies to all regions. To calculate the due charges, we meter the amount of storage on an hourly basis and produce an average size (in GB) for a given month. The average amount is then used to bill the account for the GB/month used within a billing cycle (a calendar month). For example, if the storage used in April 2019 was 100GB for 10 days, and then 130GB for the remaining 20 days of the month, the average storage would be 120 GB/month, calculated as (100*10 + 130*20)/30. 
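The averaging and free-allowance rules described above can be sketched in a few lines (an illustration of the arithmetic only, not actual billing code; the numbers are the April 2019 example from the text):

```python
# Sketch of the GB/month averaging described above (illustrative only).
# Each tuple is (storage size in GB, number of days it was held).
periods = [(100, 10), (130, 20)]

total_days = sum(days for _, days in periods)
average_gb_month = sum(gb * days for gb, days in periods) / total_days
print(average_gb_month)  # 120.0, i.e. (100*10 + 130*20)/30

# Only usage above the 100 GB/month free allowance is billed.
free_allowance_gb = 100
billable_gb = max(0.0, average_gb_month - free_allowance_gb)
print(billable_gb)  # 20.0
```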
@@ -79,7 +79,7 @@ As is common with Cloud providers, we meter and bill snapshot storage using two We provide a free allowance of 100 GB/month to all accounts across all the account deployments. Any metered storage usage below that amount will not be billed. Whenever the 100 GB/month threshold is crossed, we bill for the storage used in excess of the 100GB/month free allowance. -2. Storage API requests (1K Requests/month) +2. Storage API requests (1K Requests/month) : These costs are calculated by counting the total number of calls to backup or restore snapshots made by all deployments associated with an account. Unlike storage size, this dimension is cumulative, summed up across the billing cycle, and is billed at a price of 1,000 requests. We provide a free allowance of 100,000 API requests to all accounts each month across all the account deployments. Once this threshold is passed, we bill only for the use of API requests in excess of the free allowance. diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md index ca31930280..8222c4d4de 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md @@ -141,7 +141,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec [...] ``` -6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exists — use the one in the [engine] section. +6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. 
Note that multiple env variables in that configuration file exist — use the one in the [engine] section. Example: diff --git a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md index 54b9e0e5a4..3c3aa78d55 100644 --- a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md +++ b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md @@ -171,7 +171,7 @@ Using Docker or Podman as container runtime is a configuration local to the host [...] ``` -6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exists — use the one in the [engine] section. +6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exist — use the one in the [engine] section. Example: diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md index d052cfa38f..6a1285a7cb 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md @@ -93,7 +93,7 @@ data: ubi-only: false ``` -Alternatively, you can edit the `elastic-operator` StatefulSet and add flags to the `args` section of the operator container — which will trigger an automatic restart of the operator pod by the StatefulSet controller. +Alternatively, you can edit the `elastic-operator` StatefulSet and add flags to the `args` section of the operator container — which will trigger an automatic restart of the operator pod by the StatefulSet controller. 
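The StatefulSet edit described above might look like the following sketch (the container name and flag are illustrative assumptions; check the ECK configuration reference for the flags your version supports):

```yaml
# Illustrative fragment of the elastic-operator StatefulSet spec.
# Adding flags under `args` causes the StatefulSet controller to
# restart the operator Pod with the new configuration.
spec:
  template:
    spec:
      containers:
        - name: manager          # container name may differ in your install
          args:
            - manager
            - --log-verbosity=1  # example flag, not a recommendation
```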
## Configure ECK under Operator Lifecycle Manager [k8s-operator-config-olm] diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md index 92647504cc..06e3593cd2 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md @@ -10,7 +10,7 @@ products: # Configure the validating webhook [k8s-webhook] -ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources ({{eck_resources_list}}) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration — which can help you catch errors early and save time that would otherwise be spent on troubleshooting. +ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources ({{eck_resources_list}}) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration — which can help you catch errors early and save time that would otherwise be spent on troubleshooting. Validating webhooks are defined using a `ValidatingWebhookConfiguration` object that defines the following: @@ -30,7 +30,7 @@ Validating webhooks are defined using a `ValidatingWebhookConfiguration` object When using the default `operator.yaml` manifest, ECK is installed with a `ValidatingWebhookConfiguration` configured as follows: * Validate all known Elastic custom resources ({{eck_resources_list}}) on create and update. -* The operator itself is the webhook server — which is exposed through a service named `elastic-webhook-server` in the `elastic-system` namespace. 
+* The operator itself is the webhook server — which is exposed through a service named `elastic-webhook-server` in the `elastic-system` namespace. * The operator generates a certificate for the webhook and stores it in a secret named `elastic-webhook-server-cert` in the `elastic-system` namespace. This certificate is automatically rotated by the operator when it is due to expire. diff --git a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md index d8905dc404..92f6255212 100644 --- a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md +++ b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md @@ -280,7 +280,7 @@ For the container name, use the name of the Beat in lower case. For example `fil ## Default behavior [k8s-default-behavior] -If `resources` is not defined in the specification of an object, then the operator applies a default memory limit to ensure that Pods have enough resources to start correctly. This memory limit will also be applied to any user-defined init containers that do not have explict resource requirements set. As the operator cannot make assumptions about the available CPU resources in the cluster, no CPU limits will be set — resulting in the Pods having the "Burstable" QoS class. Check if this is acceptable for your use case and follow the instructions in [Set compute resources](#k8s-compute-resources) to configure appropriate limits. +If `resources` is not defined in the specification of an object, then the operator applies a default memory limit to ensure that Pods have enough resources to start correctly. This memory limit will also be applied to any user-defined init containers that do not have explicit resource requirements set. As the operator cannot make assumptions about the available CPU resources in the cluster, no CPU limits will be set — resulting in the Pods having the "Burstable" QoS class. 
Check if this is acceptable for your use case and follow the instructions in [Set compute resources](#k8s-compute-resources) to configure appropriate limits. | Type | Requests | Limits | | --- | --- | --- | diff --git a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md index 9d0661f54f..cf12350b47 100644 --- a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md +++ b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md @@ -100,7 +100,7 @@ ECK orchestrates NodeSet changes with no downtime and makes sure that: Behind the scenes, ECK translates each NodeSet specified in the {{es}} resource into a [StatefulSet in Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). The NodeSet specification is based on the StatefulSet specification: -* `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod — which corresponds to an {{es}} node. +* `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod — which corresponds to an {{es}} node. * `podTemplate` can be used to [customize some aspects of the {{es}} Pods](customize-pods.md) created by the underlying StatefulSet. * The StatefulSet name is derived from the {{es}} resource name and the NodeSet name. Each Pod in the StatefulSet gets a name generated by suffixing the pod ordinal to the StatefulSet name. {{es}} nodes have the same name as the Pod they are running on. diff --git a/deploy-manage/deploy/elastic-cloud/find-cloud-id.md b/deploy-manage/deploy/elastic-cloud/find-cloud-id.md index c46ff1e7c4..24a6367d1f 100644 --- a/deploy-manage/deploy/elastic-cloud/find-cloud-id.md +++ b/deploy-manage/deploy/elastic-cloud/find-cloud-id.md @@ -30,7 +30,7 @@ You include your Cloud ID along with your {{ecloud}} user credentials (defined i Not sure why you need Beats or Logstash? 
Here’s what they do: * [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to {{es}}. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted {{es}} cluster on {{ecloud}}. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in {{es}}. -* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted {{es}} cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion. +* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, in this case your hosted {{es}} cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion. 
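As a sketch of how the Cloud ID is typically consumed, a Beat's configuration can point at the deployment with two settings (the values below are placeholders, not a real ID):

```yaml
# metricbeat.yml (placeholder values; other Beats work the same way)
cloud.id: "my-deployment:<base64-encoded-host-information>"
cloud.auth: "elastic:<password>"
```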
## Before you begin [ec_before_you_begin_3] diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md b/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md index 061d9e475f..3b5db2a4c5 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md @@ -178,7 +178,7 @@ Because the initial node in the cluster is bootstrapped as a single-node cluster ## Directory layout of archives [targz-layout] -The archive distributions are entirely self-contained. All files and directories are, by default, contained within `$ES_HOME` — the directory created when unpacking the archive. +The archive distributions are entirely self-contained. All files and directories are, by default, contained within `$ES_HOME` — the directory created when unpacking the archive. This is convenient because you don’t have to create any directories to start using {{es}}, and uninstalling {{es}} is as easy as removing the `$ES_HOME` directory. However, you should change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on. diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md b/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md index 1542a1ec75..8506c8bd2b 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md @@ -246,7 +246,7 @@ Because the initial node in the cluster is bootstrapped as a single-node cluster ## Directory layout of `.zip` archive [windows-layout] -The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `%ES_HOME%` — the directory created when unpacking the archive. 
+The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `%ES_HOME%` — the directory created when unpacking the archive. This is very convenient because you don’t have to create any directories to start using {{es}}, and uninstalling {{es}} is as easy as removing the `%ES_HOME%` directory. However, it is advisable to change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on. diff --git a/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md b/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md index 78d20cffd9..54e650458c 100644 --- a/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md +++ b/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md @@ -103,7 +103,7 @@ By default, {{kib}} runs in the foreground, prints its logs to the standard outp ## Directory layout of `.tar.gz` archives [targz-layout] -The `.tar.gz` packages are entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive. +The `.tar.gz` packages are entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive. This is very convenient because you don’t have to create any directories to start using {{kib}}, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory. However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on. 
diff --git a/deploy-manage/deploy/self-managed/install-kibana-on-windows.md b/deploy-manage/deploy/self-managed/install-kibana-on-windows.md index 597ddafe03..a573592590 100644 --- a/deploy-manage/deploy/self-managed/install-kibana-on-windows.md +++ b/deploy-manage/deploy/self-managed/install-kibana-on-windows.md @@ -60,7 +60,7 @@ By default, {{kib}} runs in the foreground, prints its logs to `STDOUT`, and can ## Directory layout of `.zip` archive [windows-layout] -The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive. +The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive. This is very convenient because you don’t have to create any directories to start using {{kib}}, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory. However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on. diff --git a/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md b/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md index 41a9d6cce3..eee7a1de58 100644 --- a/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md +++ b/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md @@ -89,7 +89,7 @@ The following is a list of the roles that a node can perform in a cluster. A nod :name: coordinating-node -Requests like search requests or bulk-indexing requests may involve data held on different data nodes. A search request, for example, is executed in two phases which are coordinated by the node which receives the client request — the *coordinating node*. +Requests like search requests or bulk-indexing requests may involve data held on different data nodes. 
A search request, for example, is executed in two phases which are coordinated by the node which receives the client request — the *coordinating node*. In the *scatter* phase, the coordinating node forwards the request to the data nodes which hold the data. Each data node executes the request locally and returns its results to the coordinating node. In the *gather* phase, the coordinating node reduces each data node’s results into a single global result set. @@ -266,7 +266,7 @@ If you take away the ability to be able to handle master duties, to hold data, a Coordinating only nodes can benefit large clusters by offloading the coordinating node role from data and master-eligible nodes. They join the cluster and receive the full [cluster state](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-state), like every other node, and they use the cluster state to route requests directly to the appropriate place(s). ::::{warning} -Adding too many coordinating only nodes to a cluster can increase the burden on the entire cluster because the elected master node must await acknowledgement of cluster state updates from every node! The benefit of coordinating only nodes should not be overstated — data nodes can happily serve the same purpose. +Adding too many coordinating only nodes to a cluster can increase the burden on the entire cluster because the elected master node must await acknowledgement of cluster state updates from every node! The benefit of coordinating only nodes should not be overstated — data nodes can happily serve the same purpose. 
:::: diff --git a/deploy-manage/distributed-architecture/reading-and-writing-documents.md b/deploy-manage/distributed-architecture/reading-and-writing-documents.md index d481a15029..16224ffea6 100644 --- a/deploy-manage/distributed-architecture/reading-and-writing-documents.md +++ b/deploy-manage/distributed-architecture/reading-and-writing-documents.md @@ -41,7 +41,7 @@ These indexing stages (coordinating, primary, and replica) are sequential. To en ### Failure handling [_failure_handling] -Many things can go wrong during indexing — disks can get corrupted, nodes can be disconnected from each other, or some configuration mistake could cause an operation to fail on a replica despite it being successful on the primary. These are infrequent but the primary has to respond to them. +Many things can go wrong during indexing — disks can get corrupted, nodes can be disconnected from each other, or some configuration mistake could cause an operation to fail on a replica despite it being successful on the primary. These are infrequent but the primary has to respond to them. In the case that the primary itself fails, the node hosting the primary will send a message to the master about it. The indexing operation will wait (up to 1 minute, by [default](elasticsearch://reference/elasticsearch/index-settings/index-modules.md)) for the master to promote one of the replicas to be a new primary. The operation will then be forwarded to the new primary for processing. Note that the master also monitors the health of the nodes and may decide to proactively demote a primary. This typically happens when the node holding the primary is isolated from the cluster by a networking issue. See [here](#demoted-primary) for more details. 
diff --git a/deploy-manage/distributed-architecture/shard-request-cache.md b/deploy-manage/distributed-architecture/shard-request-cache.md index 7f91266aa5..9b8a7afbc1 100644 --- a/deploy-manage/distributed-architecture/shard-request-cache.md +++ b/deploy-manage/distributed-architecture/shard-request-cache.md @@ -15,7 +15,7 @@ products: When a search request is run against an index or against many indices, each involved shard executes the search locally and returns its local results to the *coordinating node*, which combines these shard-level results into a global result set. -The shard-level request cache module caches the local results on each shard. This allows frequently used (and potentially heavy) search requests to return results almost instantly. The requests cache is a very good fit for the logging use case, where only the most recent index is being actively updated — results from older indices will be served directly from the cache. +The shard-level request cache module caches the local results on each shard. This allows frequently used (and potentially heavy) search requests to return results almost instantly. The requests cache is a very good fit for the logging use case, where only the most recent index is being actively updated — results from older indices will be served directly from the cache. You can control the size and expiration of the cache at the node level using the [shard request cache settings](elasticsearch://reference/elasticsearch/configuration-reference/shard-request-cache-settings.md). @@ -30,7 +30,7 @@ Scripted queries that use the API calls which are non-deterministic, such as `Ma ## Cache invalidation [_cache_invalidation] -The cache is smart — it keeps the same *near real-time* promise as uncached search. +The cache is smart — it keeps the same *near real-time* promise as uncached search. Cached results are invalidated automatically whenever the shard refreshes to pick up changes to the documents or when you update the mapping. 
In other words you will always get the same results from the cache as you would for an uncached search request. @@ -87,7 +87,7 @@ Requests where `size` is greater than 0 will not be cached even if the request c ## Cache key [_cache_key] -A hash of the whole JSON body is used as the cache key. This means that if the JSON changes — for instance if keys are output in a different order — then the cache key will not be recognized. +A hash of the whole JSON body is used as the cache key. This means that if the JSON changes — for instance if keys are output in a different order — then the cache key will not be recognized. ::::{tip} Most JSON libraries support a *canonical* mode which ensures that JSON keys are always emitted in the same order. This canonical mode can be used in the application to ensure that a request is always serialized in the same way. diff --git a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md index 6663eb7c1f..58335ae002 100644 --- a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md +++ b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md @@ -23,8 +23,8 @@ Each collector can create zero or more monitoring documents. For example, the `i | Collector | Data Types | Description | | --- | --- | --- | | Cluster Stats | `cluster_stats` | Gathers details about the cluster state, including parts of the actual clusterstate (for example `GET /_cluster/state`) and statistics about it (for example,`GET /_cluster/stats`). This produces a single document type. In versions priorto X-Pack 5.5, this was actually three separate collectors that resulted inthree separate types: `cluster_stats`, `cluster_state`, and `cluster_info`. In5.5 and later, all three are combined into `cluster_stats`. This only runs onthe *elected* master node and the data collected (`cluster_stats`) largelycontrols the UI. 
When this data is not present, it indicates either amisconfiguration on the elected master node, timeouts related to the collectionof the data, or issues with storing the data. Only a single document is producedper collection. | -| Index Stats | `indices_stats`, `index_stats` | Gathers details about the indices in the cluster, both in summary andindividually. This creates many documents that represent parts of the indexstatistics output (for example, `GET /_stats`). This information only needs tobe collected once, so it is collected on the *elected* master node. The mostcommon failure for this collector relates to an extreme number of indices — andtherefore time to gather them — resulting in timeouts. One summary`indices_stats` document is produced per collection and one `index_stats`document is produced per index, per collection. | -| Index Recovery | `index_recovery` | Gathers details about index recovery in the cluster. Index recovery representsthe assignment of *shards* at the cluster level. If an index is not recovered,it is not usable. This also corresponds to shard restoration via snapshots. Thisinformation only needs to be collected once, so it is collected on the *elected*master node. The most common failure for this collector relates to an extremenumber of shards — and therefore time to gather them — resulting in timeouts.This creates a single document that contains all recoveries by default, whichcan be quite large, but it gives the most accurate picture of recovery in theproduction cluster. | +| Index Stats | `indices_stats`, `index_stats` | Gathers details about the indices in the cluster, both in summary and individually. This creates many documents that represent parts of the index statistics output (for example, `GET /_stats`). This information only needs to be collected once, so it is collected on the *elected* master node. The most common failure for this collector relates to an extreme number of indices — and therefore time to gather them — resulting in timeouts. One summary `indices_stats` document is produced per collection and one `index_stats` document is produced per index, per collection. | +| Index Recovery | `index_recovery` | Gathers details about index recovery in the cluster. Index recovery represents the assignment of *shards* at the cluster level. If an index is not recovered, it is not usable. This also corresponds to shard restoration via snapshots. This information only needs to be collected once, so it is collected on the *elected* master node. The most common failure for this collector relates to an extreme number of shards — and therefore time to gather them — resulting in timeouts. This creates a single document that contains all recoveries by default, which can be quite large, but it gives the most accurate picture of recovery in the production cluster. | | Shards | `shards` | Gathers details about all *allocated* shards for all indices, particularlyincluding what node the shard is allocated to. This information only needs to becollected once, so it is collected on the *elected* master node. The collectoruses the local cluster state to get the routing table without any networktimeout issues unlike most other collectors. Each shard is represented by aseparate monitoring document. | | Jobs | `job_stats` | Gathers details about all machine learning job statistics (for example, `GET/_ml/anomaly_detectors/_stats`). This information only needs to be collectedonce, so it is collected on the *elected* master node. However, for the masternode to be able to perform the collection, the master node must have`xpack.ml.enabled` set to true (default) and a license level that supports {{ml}}. | | Node Stats | `node_stats` | Gathers details about the running node, such as memory utilization and CPUusage (for example, `GET /_nodes/_local/stats`). 
This runs on *every* node with{{monitor-features}} enabled. One common failure results in the timeout of the nodestats request due to too many segment files. As a result, the collector spendstoo much time waiting for the file system stats to be calculated until itfinally times out. A single `node_stats` document is created per collection.This is collected per node to help to discover issues with nodes communicatingwith each other, but not with the monitoring cluster (for example, intermittentnetwork issues or memory pressure). | diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md index c741b01825..d84e37f772 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md @@ -46,7 +46,7 @@ deployment: 1. Create a user that has the `monitoring_user` [built-in role](../../users-roles/cluster-or-deployment-auth/built-in-roles.md) on the monitoring cluster. ::::{note} - Make sure the `monitoring_user` role has read privileges on `metrics-*` indices. If it doesn’t, create a new role with `read` and `read_cross_cluster` index privileges on `metrics-*`, then assign the new role (along with `monitoring_user`) to your user. + Make sure the `monitoring_user` role has read privileges on `metrics-*` indices. If it doesn’t, create a new role with `read` and `read_cross_cluster` index privileges on `metrics-*`, then assign the new role (along with `monitoring_user`) to your user. :::: 2. Add the `monitoring.ui.elasticsearch.username` and `monitoring.ui.elasticsearch.password` settings in the [`kibana.yml`](/deploy-manage/stack-settings.md) file. If these settings are omitted, {{kib}} uses the `elasticsearch.username` and `elasticsearch.password` setting values. For more information, see [Configuring security in {{kib}}](../../security.md). 
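The `kibana.yml` change described in step 2 can be sketched as follows. The values are placeholders, not real credentials; use the name and password of the user you created in step 1:

```yaml
# Sketch only: placeholder credentials for the monitoring UI connection.
monitoring.ui.elasticsearch.username: my_monitoring_user   # user holding the monitoring_user role
monitoring.ui.elasticsearch.password: changeme             # replace with that user's actual password
```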
diff --git a/deploy-manage/security/fips-140-2.md b/deploy-manage/security/fips-140-2.md index 3ac5fb52a3..58a6b90dc7 100644 --- a/deploy-manage/security/fips-140-2.md +++ b/deploy-manage/security/fips-140-2.md @@ -139,7 +139,7 @@ Password hashing configuration changes are not retroactive so the stored hashed #### Cached password hashing [_cached_password_hashing] $$$fips-cached-password-hashing$$$ -`ssha256` (salted `sha256`) is recommended for cache hashing. Though `PBKDF2` is compliant with FIPS-140-2, it is — by design — slow, and thus not generally suitable as a cache hashing algorithm. Cached credentials are never stored on disk, and salted `sha256` provides an adequate level of security for in-memory credential hashing, without imposing prohibitive performance overhead. You *may* use `PBKDF2`, however you should carefully assess performance impact first. Depending on your deployment, the overhead of `PBKDF2` could undo most of the performance gain of using a cache. +`ssha256` (salted `sha256`) is recommended for cache hashing. Though `PBKDF2` is compliant with FIPS-140-2, it is — by design — slow, and thus not generally suitable as a cache hashing algorithm. Cached credentials are never stored on disk, and salted `sha256` provides an adequate level of security for in-memory credential hashing, without imposing prohibitive performance overhead. You *may* use `PBKDF2`, however you should carefully assess performance impact first. Depending on your deployment, the overhead of `PBKDF2` could undo most of the performance gain of using a cache. Either set all `cache.hash_algo` settings to `ssha256` or leave them undefined, since `ssha256` is the default value for all `cache.hash_algo` settings. See [User cache and password hash algorithms](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings). 
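If you choose to set the cache hashing algorithm explicitly rather than relying on the default, the settings take the following shape. The realm names (`native1`, `ldap1`) here are hypothetical examples:

```yaml
# Hypothetical realm names; ssha256 is already the default for all cache.hash_algo settings.
xpack.security.authc.realms.native.native1.cache.hash_algo: ssha256
xpack.security.authc.realms.ldap.ldap1.cache.hash_algo: ssha256
```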
diff --git a/deploy-manage/security/ip-filtering-basic.md b/deploy-manage/security/ip-filtering-basic.md index 2382f63761..b600c69dbe 100644 --- a/deploy-manage/security/ip-filtering-basic.md +++ b/deploy-manage/security/ip-filtering-basic.md @@ -42,7 +42,7 @@ xpack.security.transport.filter.deny: "192.168.0.0/24" The `_all` keyword can be used to deny all connections that are not explicitly allowed. ```yaml -xpack.security.transport.filter.allow: [ "192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4" ] +xpack.security.transport.filter.allow: [ "192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4" ] xpack.security.transport.filter.deny: _all ``` diff --git a/deploy-manage/security/kibana-es-mutual-tls.md b/deploy-manage/security/kibana-es-mutual-tls.md index a0c05f76d9..65dd895bd6 100644 --- a/deploy-manage/security/kibana-es-mutual-tls.md +++ b/deploy-manage/security/kibana-es-mutual-tls.md @@ -13,7 +13,7 @@ products: Secure Sockets Layer (SSL) and Transport Layer Security (TLS) provide encryption for data-in-transit. While these terms are often used interchangeably, {{kib}} supports only TLS, which supersedes the old SSL protocols. -TLS requires X.509 certificates to authenticate the communicating parties and perform encryption of data-in-transit. Each certificate contains a public key and has and an associated — but separate — private key; these keys are used for cryptographic operations. {{kib}} supports certificates and private keys in PEM or PKCS#12 format. +TLS requires X.509 certificates to authenticate the communicating parties and perform encryption of data-in-transit. Each certificate contains a public key and has an associated — but separate — private key; these keys are used for cryptographic operations. {{kib}} supports certificates and private keys in PEM or PKCS#12 format. In a standard TLS configuration, the server presents a signed certificate to authenticate itself to the client.
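A minimal sketch of the client-side `kibana.yml` settings involved in a mutual TLS setup, assuming PEM-format files; all paths are placeholders:

```yaml
# Placeholder paths: in mutual TLS, Kibana presents this certificate and key
# to Elasticsearch, in addition to verifying the server's certificate via the CA.
elasticsearch.ssl.certificate: /path/to/kibana-client.crt
elasticsearch.ssl.key: /path/to/kibana-client.key
elasticsearch.ssl.certificateAuthorities: [ "/path/to/elasticsearch-ca.pem" ]
```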
In a mutual TLS configuration, the client also presents a signed certificate to authenticate itself to the server. diff --git a/deploy-manage/security/kibana-session-management.md b/deploy-manage/security/kibana-session-management.md index 4c3b6d363c..e1469369fe 100644 --- a/deploy-manage/security/kibana-session-management.md +++ b/deploy-manage/security/kibana-session-management.md @@ -30,7 +30,7 @@ xpack.security.session.idleTimeout: "3d" ## Session lifespan [session-lifespan] -You can use `xpack.security.session.lifespan` to configure the maximum session duration or "lifespan" — also known as the "absolute timeout". This and `xpack.security.session.idleTimeout` are both highly recommended. By default, a maximum session lifespan is 30 days. To define another lifespan, set the property in the [`kibana.yml`](/deploy-manage/stack-settings.md) configuration file. The lifespan is formatted as a duration of `[ms|s|m|h|d|w|M|Y]` (e.g. *20m*, *24h*, *7d*, *1w*). For example, set the lifespan to expire sessions after 7 days: +You can use `xpack.security.session.lifespan` to configure the maximum session duration or "lifespan" — also known as the "absolute timeout". Setting both this and `xpack.security.session.idleTimeout` is highly recommended. By default, the maximum session lifespan is 30 days. To define another lifespan, set the property in the [`kibana.yml`](/deploy-manage/stack-settings.md) configuration file. The lifespan is formatted as a duration of `[ms|s|m|h|d|w|M|Y]` (e.g. *20m*, *24h*, *7d*, *1w*). For example, set the lifespan to expire sessions after 7 days: ```yaml xpack.security.session.lifespan: "7d" diff --git a/deploy-manage/security/traffic-filtering.md b/deploy-manage/security/traffic-filtering.md index 8d7dae69bc..56b3b33782 100644 --- a/deploy-manage/security/traffic-filtering.md +++ b/deploy-manage/security/traffic-filtering.md @@ -108,7 +108,7 @@ To identify which rule sets are automatically applied to new deployments in your ::: :::: -2.
Select each of the rule sets — **Include by default** is checked when this rule set is automatically applied to all new deployments in its region. +2. Select each of the rule sets — **Include by default** is checked when this rule set is automatically applied to all new deployments in its region. #### View rejected requests diff --git a/explore-analyze/alerts-cases/watcher.md b/explore-analyze/alerts-cases/watcher.md index 9547288def..6194e3766c 100644 --- a/explore-analyze/alerts-cases/watcher.md +++ b/explore-analyze/alerts-cases/watcher.md @@ -33,7 +33,7 @@ All of these use-cases share a few key properties: * The relevant data or changes in data can be identified with a periodic Elasticsearch query. * The results of the query can be checked against a condition. -* One or more actions are taken if the condition is true — an email is sent, a 3rd party system is notified, or the query results are stored. +* One or more actions are taken if the condition is true — an email is sent, a 3rd party system is notified, or the query results are stored. ## How watches work [_how_watches_work] diff --git a/explore-analyze/find-and-organize/find-apps-and-objects.md b/explore-analyze/find-and-organize/find-apps-and-objects.md index 69f611b09b..e335f5a1a2 100644 --- a/explore-analyze/find-and-organize/find-apps-and-objects.md +++ b/explore-analyze/find-and-organize/find-apps-and-objects.md @@ -28,7 +28,7 @@ You can search for objects by type, name, and tag. To get the most from the sear | Search by type | `type:dashboard`
Available types: `application`, `canvas-workpad`, `dashboard`, `data-view`, `lens`, `maps`, `query`, `search`, `visualization` | | Search by tag | `tag:mytagname`
`tag:"tag name with spaces"` | | Search by type and name | `type:dashboard my_dashboard_title` | - | Advanced searches | `tag:(tagname1 or tagname2) my_dashboard_title`
`type:lens tag:(tagname1 or tagname2)`
`type:(dashboard or canvas-workpad) logs`
| + | Advanced searches | `tag:(tagname1 or tagname2) my_dashboard_title`
`type:lens tag:(tagname1 or tagname2)`
`type:(dashboard or canvas-workpad) logs`
| This example searches for visualizations with the tag `design` . diff --git a/explore-analyze/find-and-organize/saved-objects.md b/explore-analyze/find-and-organize/saved-objects.md index b2e95a1aba..c584ef6537 100644 --- a/explore-analyze/find-and-organize/saved-objects.md +++ b/explore-analyze/find-and-organize/saved-objects.md @@ -152,8 +152,8 @@ Historically, whether you imported or copied a saved object, {{kib}} would creat If you import a saved object using the "Check for existing objects" option, {{kib}} will take the following steps: -1. If {{kib}} finds a matching saved object with the exact same ID in the target space, that will be the import destination — you can **overwrite** that destination or **skip** it. -2. Otherwise, if {{kib}} finds a matching saved object with a *different* ID that has the same origin, that will be the import destination — again, you can **overwrite** that destination or **skip** it. +1. If {{kib}} finds a matching saved object with the exact same ID in the target space, that will be the import destination — you can **overwrite** that destination or **skip** it. +2. Otherwise, if {{kib}} finds a matching saved object with a *different* ID that has the same origin, that will be the import destination — again, you can **overwrite** that destination or **skip** it. 3. Otherwise, if a saved object with the exact same ID exists in a *different* space, then {{kib}} will generate a random ID for the import destination, preserving the saved object’s origin. 4. Otherwise, {{kib}} creates the saved object with the given ID. 
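The import behavior described in the steps above can be sketched with the saved objects import API. The host, file name, and `overwrite=true` choice are illustrative: `overwrite=true` corresponds to choosing **overwrite** for every conflicting destination:

```shell
# Placeholder host and file; the kbn-xsrf header is required by Kibana's HTTP APIs.
curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@export.ndjson
```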
diff --git a/explore-analyze/query-filter/languages/esql-multi-index.md b/explore-analyze/query-filter/languages/esql-multi-index.md index 1b3ba8768a..2cc560fa9c 100644 --- a/explore-analyze/query-filter/languages/esql-multi-index.md +++ b/explore-analyze/query-filter/languages/esql-multi-index.md @@ -112,7 +112,7 @@ This functionality is in technical preview and may be changed or removed in a fu :::: -{{esql}} has a way to handle [field type mismatches](#esql-multi-index-invalid-mapping). When the same field is mapped to multiple types in multiple indices, the type of the field is understood to be a *union* of the various types in the index mappings. As seen in the preceding examples, this *union type* cannot be used in the results, and cannot be referred to by the query — except in `KEEP`, `DROP` or when it’s passed to a type conversion function that accepts all the types in the *union* and converts the field to a single type. {{esql}} offers a suite of [type conversion functions](elasticsearch://reference/query-languages/esql/functions-operators/type-conversion-functions.md) to achieve this. +{{esql}} has a way to handle [field type mismatches](#esql-multi-index-invalid-mapping). When the same field is mapped to multiple types in multiple indices, the type of the field is understood to be a *union* of the various types in the index mappings. As seen in the preceding examples, this *union type* cannot be used in the results, and cannot be referred to by the query — except in `KEEP`, `DROP` or when it’s passed to a type conversion function that accepts all the types in the *union* and converts the field to a single type. {{esql}} offers a suite of [type conversion functions](elasticsearch://reference/query-languages/esql/functions-operators/type-conversion-functions.md) to achieve this. In the above examples, the query can use a command like `EVAL client_ip = TO_IP(client_ip)` to resolve the union of `ip` and `keyword` to just `ip`. 
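A sketch of that conversion in a full query. The index names are hypothetical, standing in for indices that map `client_ip` as `ip` and as `keyword` respectively:

```esql
FROM events_ip, events_keyword
| EVAL client_ip = TO_IP(client_ip)
| KEEP @timestamp, client_ip
```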
You can also use the type-conversion syntax `EVAL client_ip = client_ip::IP`. Alternatively, the query could use [`TO_STRING`](elasticsearch://reference/query-languages/esql/functions-operators/type-conversion-functions.md#esql-to_string) to convert all supported types into `KEYWORD`. diff --git a/explore-analyze/report-and-share/automating-report-generation.md b/explore-analyze/report-and-share/automating-report-generation.md index 455ea5f705..a156cb35c5 100644 --- a/explore-analyze/report-and-share/automating-report-generation.md +++ b/explore-analyze/report-and-share/automating-report-generation.md @@ -20,7 +20,7 @@ Create the POST URL that triggers a report to generate PDF and CSV reports. To create the POST URL for PDF reports: 1. Go to **Dashboards**, **Visualize Library**, or **Canvas**. -2. Open the dashboard, visualization, or **Canvas** workpad you want to view as a report. +2. Open the dashboard, visualization, or **Canvas** workpad you want to view as a report. * If you are using **Dashboard** or **Visualize Library**, from the toolbar, click **Share > Export**, select the PDF option then click **Copy POST URL**. * If you are using **Canvas**, from the toolbar, click **Share > PDF Reports**, then click **Advanced options > Copy POST URL**. 
diff --git a/explore-analyze/scripting/modules-scripting-fields.md b/explore-analyze/scripting/modules-scripting-fields.md index 36c1d78671..accf8c17c3 100644 --- a/explore-analyze/scripting/modules-scripting-fields.md +++ b/explore-analyze/scripting/modules-scripting-fields.md @@ -241,7 +241,7 @@ GET my-index-000001/_search ### Stored fields [modules-scripting-stored] -*Stored fields* — fields explicitly marked as [`"store": true`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-store.md) in the mapping — can be accessed using the `_fields['field_name'].value` or `_fields['field_name']` syntax: +*Stored fields* — fields explicitly marked as [`"store": true`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-store.md) in the mapping — can be accessed using the `_fields['field_name'].value` or `_fields['field_name']` syntax: ```console PUT my-index-000001 diff --git a/explore-analyze/visualize/legacy-editors/timelion.md b/explore-analyze/visualize/legacy-editors/timelion.md index f3b4180a61..5b985c4392 100644 --- a/explore-analyze/visualize/legacy-editors/timelion.md +++ b/explore-analyze/visualize/legacy-editors/timelion.md @@ -224,7 +224,7 @@ Move the legend to the north west position with two columns, then click **Updat :screenshot: ::: -  + ### Save and add the panel [save-the-timelion-panel] @@ -347,7 +347,7 @@ Customize and format the visualization using the following functions, then click :screenshot: ::: -  + ### Save and add the panel [save-the-network-timelion-panel] @@ -532,7 +532,7 @@ Customize and format the visualization using the following functions, then click :screenshot: ::: -  + ### Save and add the panel [save-the-outlier-timelion-panel] diff --git a/explore-analyze/visualize/maps/maps-vector-style-properties.md b/explore-analyze/visualize/maps/maps-vector-style-properties.md index 210eb127a6..21195282ec 100644 --- a/explore-analyze/visualize/maps/maps-vector-style-properties.md +++ 
b/explore-analyze/visualize/maps/maps-vector-style-properties.md @@ -24,8 +24,8 @@ You can add text labels to your Point features by configuring label style proper | **Label visibility** | Specifies the zoom range for which labels are displayed. | | **Label color** | The text color. | | **Label size** | The size of the text, in pixels. | -| **Label border color** | The color of the label border. | -| **Label border width** | The width of the label border. | +| **Label border color** | The color of the label border. | +| **Label border width** | The width of the label border. | You can symbolize Point features as **Circle markers** or **Icons**. @@ -75,8 +75,8 @@ Manage your custom icons in [settings](maps-settings.md). | **Label visibility** | Specifies the zoom range for which labels are displayed. | | **Label color** | The text color. | | **Label size** | The size of the text, in pixels. | -| **Label border color** | The color of the label border. | -| **Label border width** | The width of the label border. | +| **Label border color** | The color of the label border. | +| **Label border width** | The width of the label border. | ## Line style properties [line-style-properties] @@ -89,5 +89,5 @@ Manage your custom icons in [settings](maps-settings.md). | **Label visibility** | Specifies the zoom range for which labels are displayed. | | **Label color** | The text color. | | **Label size** | The size of the text, in pixels. | -| **Label border color** | The color of the label border. | -| **Label border width** | The width of the label border. | +| **Label border color** | The color of the label border. | +| **Label border width** | The width of the label border. | diff --git a/get-started/introduction.md b/get-started/introduction.md index 3c80ce800b..32f8f19024 100644 --- a/get-started/introduction.md +++ b/get-started/introduction.md @@ -40,7 +40,7 @@ The {{stack}} is used for a wide and growing range of use cases. 
Here are a few [**Get started with {{elastic-sec}} →**](../solutions/security/get-started.md) -This is just a sample of search, observability, and security use cases enabled by {{ecloud}}. Refer to Elastic [customer success stories](https://www.elastic.co/customers/success-stories) for concrete examples across a range of industries. +This is just a sample of search, observability, and security use cases enabled by {{ecloud}}. Refer to Elastic [customer success stories](https://www.elastic.co/customers/success-stories) for concrete examples across a range of industries. % TODO: cleanup these links, consolidate with Explore and analyze diff --git a/manage-data/data-store/mapping/dynamic-mapping.md b/manage-data/data-store/mapping/dynamic-mapping.md index 750a540a40..dacff833b6 100644 --- a/manage-data/data-store/mapping/dynamic-mapping.md +++ b/manage-data/data-store/mapping/dynamic-mapping.md @@ -10,7 +10,7 @@ products: # Dynamic mapping [dynamic-mapping] -One of the most important features of {{es}} is that it tries to get out of your way and let you start exploring your data as quickly as possible. To index a document, you don’t have to first create an index, define a mapping type, and define your fields — you can just index a document and the index, type, and fields will display automatically: +One of the most important features of {{es}} is that it tries to get out of your way and let you start exploring your data as quickly as possible. 
To index a document, you don’t have to first create an index, define a mapping type, and define your fields — you can just index a document and the index, type, and fields will display automatically: ```json PUT data/_doc/1 <1> diff --git a/manage-data/data-store/mapping/dynamic-templates.md b/manage-data/data-store/mapping/dynamic-templates.md index 94b611bd44..de215e2fa1 100644 --- a/manage-data/data-store/mapping/dynamic-templates.md +++ b/manage-data/data-store/mapping/dynamic-templates.md @@ -50,7 +50,7 @@ If a provided mapping contains an invalid mapping snippet, a validation error is * If no `match_mapping_type` has been specified but the template is valid for at least one predefined mapping type, the mapping snippet is considered valid. However, a validation error is returned at index time if a field matching the template is indexed as a different type. For example, configuring a dynamic template with no `match_mapping_type` is considered valid as string type, but if a field matching the dynamic template is indexed as a long, a validation error is returned at index time. It is recommended to configure the `match_mapping_type` to the expected JSON type or configure the desired `type` in the mapping snippet. * If the `{{name}}` placeholder is used in the mapping snippet, validation is skipped when updating the dynamic template. This is because the field name is unknown at that time. Instead, validation occurs when the template is applied at index time. -Templates are processed in order — the first matching template wins. When putting new dynamic templates through the [update mapping](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) API, all existing templates are overwritten. This allows for dynamic templates to be reordered or deleted after they were initially added. +Templates are processed in order — the first matching template wins. 
When putting new dynamic templates through the [update mapping](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) API, all existing templates are overwritten. This allows for dynamic templates to be reordered or deleted after they were initially added. ## Mapping runtime fields in a dynamic template [dynamic-mapping-runtime-fields] diff --git a/manage-data/data-store/text-analysis/anatomy-of-an-analyzer.md b/manage-data/data-store/text-analysis/anatomy-of-an-analyzer.md index 04492f9bae..561910a1ea 100644 --- a/manage-data/data-store/text-analysis/anatomy-of-an-analyzer.md +++ b/manage-data/data-store/text-analysis/anatomy-of-an-analyzer.md @@ -10,7 +10,7 @@ products: # Anatomy of an analyzer [analyzer-anatomy] -An *analyzer*  — whether built-in or custom — is just a package which contains three lower-level building blocks: *character filters*, *tokenizers*, and *token filters*. +An *analyzer* — whether built-in or custom — is just a package which contains three lower-level building blocks: *character filters*, *tokenizers*, and *token filters*. The built-in [analyzers](elasticsearch://reference/text-analysis/analyzer-reference.md) pre-package these building blocks into analyzers suitable for different languages and types of text. Elasticsearch also exposes the individual building blocks so that they can be combined to define new [`custom`](create-custom-analyzer.md) analyzers. diff --git a/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md b/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md index 253f523964..a7c8df6e41 100644 --- a/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md +++ b/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md @@ -69,7 +69,7 @@ Global default and max retention do not apply to data streams internal to elasti * By setting the `data_retention` on the data stream level. 
This retention can be configured in two ways: -  — For new data streams, it can be defined in the index template that would be applied during the data stream’s creation. You can use the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template), for example: + — For new data streams, it can be defined in the index template that would be applied during the data stream’s creation. You can use the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template), for example: ```console PUT _index_template/template @@ -88,7 +88,7 @@ Global default and max retention do not apply to data streams internal to elasti } ``` -  — For an existing data stream, it can be set via the [PUT lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle). + — For an existing data stream, it can be set via the [PUT lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle). ```console PUT _data_stream/my-data-stream/_lifecycle diff --git a/manage-data/lifecycle/rollup.md b/manage-data/lifecycle/rollup.md index a9a0f2f62d..e18b9aae54 100644 --- a/manage-data/lifecycle/rollup.md +++ b/manage-data/lifecycle/rollup.md @@ -20,7 +20,7 @@ While your analysts and data scientists may wish you stored that data indefinite Storage cost is a fixed quantity. It takes X money to store Y data. But the utility of a piece of data often changes with time. Sensor data gathered at millisecond granularity is extremely useful right now, reasonably useful if from a few weeks ago, and only marginally useful if older than a few months. -So while the cost of storing a millisecond of sensor data from ten years ago is fixed, the value of that individual sensor reading often diminishes with time. 
It’s not useless — it could easily contribute to a useful analysis — but it’s reduced value often leads to deletion rather than paying the fixed storage cost. +So while the cost of storing a millisecond of sensor data from ten years ago is fixed, the value of that individual sensor reading often diminishes with time. It’s not useless — it could easily contribute to a useful analysis — but its reduced value often leads to deletion rather than paying the fixed storage cost. ## Rollup stores historical data at reduced granularity [_rollup_stores_historical_data_at_reduced_granularity] @@ -54,6 +54,6 @@ If you were to query the raw data, you’d only see the most recent month. And i ## Rollup is multi-interval aware [_rollup_is_multi_interval_aware] -Finally, Rollup is capable of intelligently utilizing the best interval available. If you’ve worked with summarizing features of other products, you’ll find that they can be limiting. If you configure rollups at daily intervals…​ your queries and charts can only work with daily intervals. If you need a monthly interval, you have to create another rollup that explicitly stores monthly averages, etc. +Finally, Rollup is capable of intelligently utilizing the best interval available. If you’ve worked with summarizing features of other products, you’ll find that they can be limiting. If you configure rollups at daily intervals… your queries and charts can only work with daily intervals. If you need a monthly interval, you have to create another rollup that explicitly stores monthly averages, etc. The Rollup feature stores data in such a way that queries can identify the smallest available interval and use that for their processing. If you store rollups at a daily interval, queries can be executed on daily or longer intervals (weekly, monthly, etc) without the need to explicitly configure a new rollup job. This helps alleviate one of the major disadvantages of a rollup system; reduced flexibility relative to raw data.
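As a hedged illustration of multi-interval awareness, a rollup search can aggregate at a coarser interval than the job stored. The index name `sensor_rollup` is hypothetical, and this assumes a rollup job configured with a daily `date_histogram` group:

```console
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "weekly": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "1w"
      }
    }
  }
}
```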
diff --git a/reference/fleet/elastic-agent-container.md b/reference/fleet/elastic-agent-container.md index 4e913637da..2c38772e14 100644 --- a/reference/fleet/elastic-agent-container.md +++ b/reference/fleet/elastic-agent-container.md @@ -8,7 +8,7 @@ products: # Run Elastic Agent in a container [elastic-agent-container] -You can run {{agent}} inside a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the [Elastic Docker registry](https://www.docker.elastic.co/r/elastic-agent/elastic-agent). If you are running in Kubernetes, refer to [run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md). +You can run {{agent}} inside a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the [Elastic Docker registry](https://www.docker.elastic.co/r/elastic-agent/elastic-agent). If you are running in Kubernetes, refer to [run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md). Note that running {{elastic-agent}} in a container is supported only in Linux environments. For this reason we don’t currently provide {{agent}} container images for Windows. diff --git a/reference/fleet/install-elastic-agents-in-containers.md b/reference/fleet/install-elastic-agents-in-containers.md index 24276b9dfd..573918a63f 100644 --- a/reference/fleet/install-elastic-agents-in-containers.md +++ b/reference/fleet/install-elastic-agents-in-containers.md @@ -8,7 +8,7 @@ products: # Install Elastic Agents in a containerized environment [install-elastic-agents-in-containers] -You can run {{agent}} inside of a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes. +You can run {{agent}} inside of a container — either with {{fleet-server}} or standalone. 
Docker images for all versions of {{agent}} are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes. To learn how to run {{agent}}s in a containerized environment, see: @@ -24,7 +24,7 @@ To learn how to run {{agent}}s in a containerized environment, see: * [Run {{agent}} Standalone on Kubernetes](/reference/fleet/running-on-kubernetes-standalone.md) * [Scaling {{agent}} on {{k8s}}](/reference/fleet/scaling-on-kubernetes.md) * [Using a custom ingest pipeline with the {{k8s}} Integration](/reference/fleet/ingest-pipeline-kubernetes.md) -* [Run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md) — for {{eck}} users +* [Run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md) — for {{eck}} users ::::{note} Enrollment handling for {{agent}} in a containerized environment has some special nuances. diff --git a/reference/fleet/install-elastic-agents.md b/reference/fleet/install-elastic-agents.md index 3e0d115a1c..e2ffd022fd 100644 --- a/reference/fleet/install-elastic-agents.md +++ b/reference/fleet/install-elastic-agents.md @@ -39,7 +39,7 @@ You have a few options for installing and managing an {{agent}}: * **Install {{agent}} in a containerized environment** - You can run {{agent}} inside of a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes. + You can run {{agent}} inside of a container — either with {{fleet-server}} or standalone. Docker images for all versions of {{agent}} are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes. 
Refer to: @@ -54,7 +54,7 @@ You have a few options for installing and managing an {{agent}}: * [Run {{agent}} Standalone on Kubernetes](/reference/fleet/running-on-kubernetes-standalone.md) * [Scaling {{agent}} on {{k8s}}](/reference/fleet/scaling-on-kubernetes.md) - * [Run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md) — for {{eck}} users + * [Run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md) — for {{eck}} users ::::{admonition} Restrictions in {{serverless-short}} diff --git a/solutions/observability/apm/cross-cluster-search.md b/solutions/observability/apm/cross-cluster-search.md index c22fdc1e29..6c32090687 100644 --- a/solutions/observability/apm/cross-cluster-search.md +++ b/solutions/observability/apm/cross-cluster-search.md @@ -11,7 +11,7 @@ products: # Cross-cluster search with application data [apm-cross-cluster-search] -Elastic APM utilizes {{es}}'s cross-cluster search functionality. Cross-cluster search lets you run a single search request against one or more [remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md) — making it easy to search APM data across multiple sources. This means you can also have deployments per data type, making sizing and scaling more predictable, and allowing for better performance while managing multiple observability use cases. +Elastic APM utilizes {{es}}'s cross-cluster search functionality. Cross-cluster search lets you run a single search request against one or more [remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md) — making it easy to search APM data across multiple sources. This means you can also have deployments per data type, making sizing and scaling more predictable, and allowing for better performance while managing multiple observability use cases. 
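A cross-cluster search over APM data can be sketched as follows. The cluster aliases `cluster_one` and `cluster_two` and the service name are hypothetical, and `traces-apm*` assumes the default APM data stream naming:

```console
GET /cluster_one:traces-apm*,cluster_two:traces-apm*/_search
{
  "query": {
    "match": { "service.name": "my-service" }
  }
}
```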
## Set up cross-cluster search [apm-set-up-cross-cluster-search]
@@ -33,8 +33,8 @@ You can also specify certain clusters to display data from, for example, `cluste

There are two ways to edit the default {{data-source}}:

-* In the Applications UI — Find **Applications** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Go to **Settings** → **Indices** and change all `xpack.apm.indices.*` values to include remote clusters.
-* In [`kibana.yml`](/deploy-manage/stack-settings.md) — Update the [`xpack.apm.indices.*`](kibana://reference/configuration-reference/apm-settings.md) configuration values to include remote clusters.
+* In the Applications UI — Find **Applications** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Go to **Settings** → **Indices** and change all `xpack.apm.indices.*` values to include remote clusters.
+* In [`kibana.yml`](/deploy-manage/stack-settings.md) — Update the [`xpack.apm.indices.*`](kibana://reference/configuration-reference/apm-settings.md) configuration values to include remote clusters.

::::{tip}
diff --git a/solutions/observability/apm/dependencies.md b/solutions/observability/apm/dependencies.md
index f71d046fd3..8a2bb25fcb 100644
--- a/solutions/observability/apm/dependencies.md
+++ b/solutions/observability/apm/dependencies.md
@@ -13,7 +13,7 @@ products:

# Dependencies [apm-dependencies]

-APM agents collect details about external calls made from instrumented services. Sometimes, these external calls resolve into a downstream service that’s instrumented — in these cases, you can utilize [distributed tracing](/solutions/observability/apm/trace-sample-timeline.md#distributed-tracing) to drill down into problematic downstream services. Other times, though, it’s not possible to instrument a downstream dependency — like with a database or third-party service. **Dependencies** gives you a window into these uninstrumented, downstream dependencies.
+APM agents collect details about external calls made from instrumented services. Sometimes, these external calls resolve into a downstream service that’s instrumented — in these cases, you can utilize [distributed tracing](/solutions/observability/apm/trace-sample-timeline.md#distributed-tracing) to drill down into problematic downstream services. Other times, though, it’s not possible to instrument a downstream dependency — like with a database or third-party service. **Dependencies** gives you a window into these uninstrumented, downstream dependencies.

:::{image} /solutions/images/observability-dependencies.png
:alt: Dependencies view in the Applications UI
diff --git a/solutions/observability/apm/elastic-apm-events-intake-api.md b/solutions/observability/apm/elastic-apm-events-intake-api.md
index 3a0f69864e..5ce4d55aa7 100644
--- a/solutions/observability/apm/elastic-apm-events-intake-api.md
+++ b/solutions/observability/apm/elastic-apm-events-intake-api.md
@@ -14,7 +14,7 @@ products:

Most users do not need to interact directly with the events intake API.
::::

-The events intake API is what we call the internal protocol that APM agents use to talk to the APM Server. Agents communicate with the Server by sending events — captured pieces of information — in an HTTP request. Events can be:
+The events intake API is what we call the internal protocol that APM agents use to talk to the APM Server. Agents communicate with the Server by sending events — captured pieces of information — in an HTTP request. Events can be:

* Transactions
* Spans
diff --git a/solutions/observability/apm/jaeger.md b/solutions/observability/apm/jaeger.md
index a27216c251..06afff8ab4 100644
--- a/solutions/observability/apm/jaeger.md
+++ b/solutions/observability/apm/jaeger.md
@@ -132,4 +132,4 @@ There are some limitations and differences between Elastic APM and Jaeger that y

* Elastic APM’s [Learn about data types](/solutions/observability/apm/data-types.md) is different than Jaegers. For Jaeger trace data to work with Elastic’s data model, we rely on spans being tagged with the appropriate [`span.kind`](https://github.com/opentracing/specification/blob/master/semantic_conventions.md).
  * Server Jaeger spans are mapped to Elastic APM [Transactions](/solutions/observability/apm/transactions.md).
-  * Client Jaeger spans are mapped to Elastic APM [Spans](/solutions/observability/apm/spans.md) — unless the span is the root, in which case it is mapped to an Elastic APM [Transactions](/solutions/observability/apm/transactions.md).
+  * Client Jaeger spans are mapped to Elastic APM [Spans](/solutions/observability/apm/spans.md) — unless the span is the root, in which case it is mapped to an Elastic APM [Transaction](/solutions/observability/apm/transactions.md).
diff --git a/solutions/observability/apm/mobile-service-overview.md b/solutions/observability/apm/mobile-service-overview.md
index f1c96013c9..48e450257e 100644
--- a/solutions/observability/apm/mobile-service-overview.md
+++ b/solutions/observability/apm/mobile-service-overview.md
@@ -15,7 +15,7 @@ Selecting a mobile service brings you to the **Mobile service overview**.
The ** For example, see:
* Crash Rate (Crashes per session)
-* Slowest App load time — coming soon
+* Slowest App load time — coming soon
* Number of sessions
* Number of HTTP requests
* Map showing the total number of HTTP requests based on country and region
diff --git a/solutions/observability/apm/monitor-apm-server.md b/solutions/observability/apm/monitor-apm-server.md
index 64f9cc5ecc..b7d8bd0a2a 100644
--- a/solutions/observability/apm/monitor-apm-server.md
+++ b/solutions/observability/apm/monitor-apm-server.md
@@ -20,7 +20,7 @@ Select your deployment method to get started:

## {{ecloud}} [apm-monitor-apm-cloud]

-{{ecloud}} manages the installation and configuration of a monitoring agent for you — so all you have to do is flip a switch and watch the data pour in.
+{{ecloud}} manages the installation and configuration of a monitoring agent for you — so all you have to do is flip a switch and watch the data pour in.

* **{{ech}}** user? See [Stack Monitoring on {{ecloud}} deployments](/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md).
* **{{ece}}** user? See [Enable stack monitoring on ECE deployments](/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md).
diff --git a/solutions/observability/apm/secret-token.md b/solutions/observability/apm/secret-token.md
index 3c77c9d87a..758ee0427b 100644
--- a/solutions/observability/apm/secret-token.md
+++ b/solutions/observability/apm/secret-token.md
@@ -31,7 +31,7 @@ Secret tokens are not applicable for the RUM Agent, as there is no way to preven

% This should be checked and updated, it doesn't looks like it's there by default vv
::::{note}
-{{ech}} and {{ece}} deployments provision a secret token when the deployment is created. The secret token can be found and reset in the {{ecloud}} Console under the deployment's **APM & Fleet** page.
+{{ech}} and {{ece}} deployments provision a secret token when the deployment is created. The secret token can be found and reset in the {{ecloud}} Console under the deployment's **APM & Fleet** page.
::::

:::::::{tab-set}
diff --git a/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md b/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md
index 83a9851ae6..83877be14a 100644
--- a/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md
+++ b/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md
@@ -56,4 +56,4 @@ Go to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs
 :alt: scale APM
:::

-Congratulations — you now have the latest and greatest in Elastic APM!
+Congratulations — you now have the latest and greatest in Elastic APM!
diff --git a/solutions/observability/apm/switch-self-installation-to-apm-integration.md b/solutions/observability/apm/switch-self-installation-to-apm-integration.md
index 407af2a04b..77b545ac39 100644
--- a/solutions/observability/apm/switch-self-installation-to-apm-integration.md
+++ b/solutions/observability/apm/switch-self-installation-to-apm-integration.md
@@ -57,4 +57,4 @@ You’ll configure the APM integration in this step. See [Configure APM Server](
Once data from upgraded APM agents is visible in the Applications UI, it’s safe to stop the APM Server process.

-Congratulations — you now have the latest and greatest in Elastic APM!
+Congratulations — you now have the latest and greatest in Elastic APM!
diff --git a/solutions/observability/apm/switch-to-elastic-apm-integration.md b/solutions/observability/apm/switch-to-elastic-apm-integration.md
index 7dc3655c53..f2bdb9a05f 100644
--- a/solutions/observability/apm/switch-to-elastic-apm-integration.md
+++ b/solutions/observability/apm/switch-to-elastic-apm-integration.md
@@ -14,14 +14,14 @@ The APM integration offers a number of benefits over the standalone method of ru

**{{fleet}}**:

-* A single, unified way to add monitoring for logs, metrics, traces, and other types of data to each host — install one thing instead of multiple
-* Central, unified configuration management — no need to edit multiple configuration files
+* A single, unified way to add monitoring for logs, metrics, traces, and other types of data to each host — install one thing instead of multiple
+* Central, unified configuration management — no need to edit multiple configuration files

**Data streams**:

* Reduced number of fields per index, better space efficiency, and faster queries
* More granular data control
-* Errors and metrics data streams are shared with other data sources — which means better long-term integration with the logs and metrics apps
+* Errors and metrics data streams are shared with other data sources — which means better long-term integration with the logs and metrics apps
* Removes template inheritance for {{ilm-init}} policies and makes use of new {{es}} index and component templates
* Fixes _resource 'apm-{{version}}-$type' exists, but it is not an alias_ error
diff --git a/solutions/observability/apm/traces-ui.md b/solutions/observability/apm/traces-ui.md
index 3158580741..2b7bafc48c 100644
--- a/solutions/observability/apm/traces-ui.md
+++ b/solutions/observability/apm/traces-ui.md
@@ -19,7 +19,7 @@ Traces link together related transactions to show an end-to-end performance of h

**Traces** displays your application’s entry (root) transactions. Transactions with the same name are grouped together and only shown once in this table. If you’re using [distributed tracing](/solutions/observability/apm/trace-sample-timeline.md#distributed-tracing), this view is key to finding the critical paths within your application.

-By default, transactions are sorted by *Impact*. Impact helps show the most used and slowest endpoints in your service — in other words, it’s the collective amount of pain a specific endpoint is causing your users. If there’s a particular endpoint you’re worried about, select it to view its [transaction details](/solutions/observability/apm/transactions-ui.md#transaction-details).
+By default, transactions are sorted by *Impact*. Impact helps show the most used and slowest endpoints in your service — in other words, it’s the collective amount of pain a specific endpoint is causing your users. If there’s a particular endpoint you’re worried about, select it to view its [transaction details](/solutions/observability/apm/transactions-ui.md#transaction-details).

You can also use queries to filter and search the transactions shown on this page. Note that only properties available on root transactions are searchable. For example, you can’t search for `label.tier: 'high'`, as that field is only available on non-root transactions.
diff --git a/solutions/observability/apm/traces.md b/solutions/observability/apm/traces.md
index 49b4f7f276..9ad6e85528 100644
--- a/solutions/observability/apm/traces.md
+++ b/solutions/observability/apm/traces.md
@@ -20,7 +20,7 @@ When a trace travels through multiple services, as is common in a microservice a

### Why is distributed tracing important? [apm-why-distributed-tracing]

-Distributed tracing enables you to analyze performance throughout your microservice architecture by tracing the entirety of a request — from the initial web request on your front-end service all the way to database queries made on your back-end services.
+Distributed tracing enables you to analyze performance throughout your microservice architecture by tracing the entirety of a request — from the initial web request on your front-end service all the way to database queries made on your back-end services.

Tracking requests as they propagate through your services provides an end-to-end picture of where your application is spending time, where errors are occurring, and where bottlenecks are forming. Distributed tracing eliminates individual service’s data silos and reveals what’s happening outside of service borders.
@@ -331,7 +331,7 @@ public Response onIncomingRequest(Request request) throws Exception {

::::::
::::::{tab-item} .NET
-Deserialize the incoming distributed tracing context, and pass it to any of the [`StartTransaction`](apm-agent-dotnet://reference/public-api.md#api-start-transaction) or [`CaptureTransaction`](apm-agent-dotnet://reference/public-api.md#convenient-capture-transaction) APIs — all of which have an optional `DistributedTracingData` parameter. This will create a new transaction or span as a child of the incoming trace context.
+Deserialize the incoming distributed tracing context, and pass it to any of the [`StartTransaction`](apm-agent-dotnet://reference/public-api.md#api-start-transaction) or [`CaptureTransaction`](apm-agent-dotnet://reference/public-api.md#convenient-capture-transaction) APIs — all of which have an optional `DistributedTracingData` parameter. This will create a new transaction or span as a child of the incoming trace context.
Example starting a new transaction:
diff --git a/solutions/observability/apm/transaction-sampling.md b/solutions/observability/apm/transaction-sampling.md
index 9b2cd27050..69d6cf00bd 100644
--- a/solutions/observability/apm/transaction-sampling.md
+++ b/solutions/observability/apm/transaction-sampling.md
@@ -16,7 +16,7 @@ products:

:::{include} _snippets/apm-server-vs-mis.md
:::

-[Distributed tracing](/solutions/observability/apm/traces.md) can generate a substantial amount of data. More data can mean higher costs and more noise. Sampling aims to lower the amount of data ingested and the effort required to analyze that data — all while still making it easy to find anomalous patterns in your applications, detect outages, track errors, and lower mean time to recovery (MTTR).
+[Distributed tracing](/solutions/observability/apm/traces.md) can generate a substantial amount of data. More data can mean higher costs and more noise. Sampling aims to lower the amount of data ingested and the effort required to analyze that data — all while still making it easy to find anomalous patterns in your applications, detect outages, track errors, and lower mean time to recovery (MTTR).

Elastic APM supports two types of sampling:
@@ -36,7 +36,7 @@ In head-based sampling, the sampling decision for each trace is made when the tr

For example, a sampling value of `.2` indicates a transaction sample rate of `20%`. This means that only `20%` of traces will send and retain all of their associated information. The remaining traces will drop contextual information to reduce the transfer and storage size of the trace.

-Head-based sampling is quick and easy to set up. Its downside is that it’s entirely random — interesting data might be discarded purely due to chance.
+Head-based sampling is quick and easy to set up. Its downside is that it’s entirely random — interesting data might be discarded purely due to chance.
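The head-based behavior described in the hunk above can be sketched in a few lines: the root service draws a single random number against the configured rate, and every downstream service reuses that decision from the propagated trace context. The function and field names below are illustrative, not actual agent internals.

```javascript
// Illustrative sketch of head-based sampling (not real agent code).
// The decision is made once, at the root, and propagated downstream.
function startRootTrace(sampleRate, rng = Math.random) {
  return { sampled: rng() < sampleRate }; // e.g. rate .2 => ~20% of traces kept
}

// Downstream services inherit the root's decision from the trace context,
// so a trace is either fully sampled or fully unsampled.
function continueTrace(parentContext) {
  return { sampled: parentContext.sampled };
}

const root = startRootTrace(0.2, () => 0.1); // 0.1 < 0.2, so this trace is sampled
const child = continueTrace(root);
console.log(root.sampled, child.sampled); // true true
```

This also shows why head-based sampling is "entirely random": nothing about the trace's duration or outcome influences the draw.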
See [Configure head-based sampling](/solutions/observability/apm/transaction-sampling.md#apm-configure-head-based-sampling) to get started.
@@ -120,7 +120,7 @@ Tail-based sampling is only supported when writing to {{es}}. If you are using a

In tail-based sampling, the sampling decision for each trace is made after the trace has completed. This means all traces will be analyzed against a set of rules, or policies, which will determine the rate at which they are sampled.

-Unlike head-based sampling, each trace does not have an equal probability of being sampled. Because slower traces are more interesting than faster ones, tail-based sampling uses weighted random sampling — so traces with a longer root transaction duration are more likely to be sampled than traces with a fast root transaction duration.
+Unlike head-based sampling, each trace does not have an equal probability of being sampled. Because slower traces are more interesting than faster ones, tail-based sampling uses weighted random sampling — so traces with a longer root transaction duration are more likely to be sampled than traces with a fast root transaction duration.

A downside of tail-based sampling is that it results in more data being sent from APM agents to the APM Server. The APM Server will therefore use more CPU, memory, and disk than with head-based sampling. However, because the tail-based sampling decision happens in APM Server, there is less data to transfer from APM Server to {{es}}. So running APM Server close to your instrumented services can reduce any increase in transfer costs that tail-based sampling brings.
@@ -210,7 +210,7 @@ For all sampling methods, metrics are weighted by the inverse sampling rate of t

These calculation methods ensure that the APM app provides the most accurate metrics possible given the sampling strategy in use, while also accounting for the head-based sampling rate to estimate the full population of traces.
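The inverse-sampling-rate weighting mentioned in the hunk above can be illustrated with a small sketch: each event sampled at rate r stands in for 1/r events when estimating the full population. This is a simplification of the APM app's calculation, with invented numbers.

```javascript
// Sketch: estimating full-population throughput from sampled events.
// An event sampled at rate r is weighted as 1/r events.
function estimateThroughput(sampledEvents) {
  return sampledEvents.reduce((total, e) => total + 1 / e.sampleRate, 0);
}

// 20 transactions sampled at a 20% head-based rate represent ~100 transactions.
const sampled = Array.from({ length: 20 }, () => ({ sampleRate: 0.2 }));
console.log(estimateThroughput(sampled)); // 100
```

The same weighting works when different services sample at different rates, since the rate travels with each event.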
-^1^ $$$footnote-1$$$ Real User Monitoring (RUM) traces are an exception to this rule. The {{kib}} apps that utilize RUM data depend on transaction events, so non-sampled RUM traces retain transaction data — only span data is dropped.
+^1^ $$$footnote-1$$$ Real User Monitoring (RUM) traces are an exception to this rule. The {{kib}} apps that utilize RUM data depend on transaction events, so non-sampled RUM traces retain transaction data — only span data is dropped.

## Sample rates [_sample_rates]
@@ -224,7 +224,7 @@ What’s the best sampling rate? Unfortunately, there isn’t one. Sampling is d

* Services with considerably more traffic than others might be safe to sample at lower rates
* Routes that are more important than others might be sampled at higher rates
* A production service environment might warrant a higher sampling rate than a development environment
-* Failed trace outcomes might be more interesting than successful traces — thus requiring a higher sample rate
+* Failed trace outcomes might be more interesting than successful traces — thus requiring a higher sample rate

Regardless of the above, cost conscious customers are likely to be fine with a lower sample rate.
@@ -270,7 +270,7 @@ Enhanced privileges are required to use tail-based sampling. For more informatio

Enable tail-based sampling with [Enable tail-based sampling](/solutions/observability/apm/tail-based-sampling.md#sampling-tail-enabled-ref). When enabled, trace events are mapped to sampling policies. Each sampling policy must specify a sample rate, and can optionally specify other conditions. All of the policy conditions must be true for a trace event to match it.

-Trace events are matched to policies in the order specified. Each policy list must conclude with a default policy — one that only specifies a sample rate. This default policy is used to catch remaining trace events that don’t match a stricter policy. Requiring this default policy ensures that traces are only dropped intentionally. If you enable tail-based sampling and send a transaction that does not match any of the policies, APM Server will reject the transaction with the error `no matching policy`.
+Trace events are matched to policies in the order specified. Each policy list must conclude with a default policy — one that only specifies a sample rate. This default policy is used to catch remaining trace events that don’t match a stricter policy. Requiring this default policy ensures that traces are only dropped intentionally. If you enable tail-based sampling and send a transaction that does not match any of the policies, APM Server will reject the transaction with the error `no matching policy`.

::::{important}
Note that from version `9.0.0` APM Server has an unlimited storage limit, but will stop writing when the disk where the database resides reaches 80% usage. Due to how the limit is calculated and enforced, the actual disk space may still grow slightly over this disk usage based limit, or any configured storage limit.
diff --git a/solutions/observability/apm/transactions-ui.md b/solutions/observability/apm/transactions-ui.md
index 292194b063..bdca181677 100644
--- a/solutions/observability/apm/transactions-ui.md
+++ b/solutions/observability/apm/transactions-ui.md
@@ -92,7 +92,7 @@ Selecting a transaction group will bring you to the **transaction** details. Thi

### Latency distribution [transaction-duration-distribution]

-The latency distribution shows a plot of all transaction durations for the given time period. The following screenshot shows a typical distribution and indicates most of our requests were served quickly — awesome! The requests on the right are taking longer than average; we probably need to focus on them.
+The latency distribution shows a plot of all transaction durations for the given time period. The following screenshot shows a typical distribution and indicates most of our requests were served quickly — awesome! The requests on the right are taking longer than average; we probably need to focus on them.

:::{image} /solutions/images/observability-apm-transaction-duration-dist.png
:alt: Example view of latency distribution graph
diff --git a/solutions/observability/applications/user-experience.md b/solutions/observability/applications/user-experience.md
index 0d606b3393..3150e64f03 100644
--- a/solutions/observability/applications/user-experience.md
+++ b/solutions/observability/applications/user-experience.md
@@ -10,7 +10,7 @@ products:

# Real user monitoring (User Experience) [user-experience]

-{{user-experience}} provides a way to quantify and analyze the perceived performance of your web application. Unlike testing environments, {{user-experience}} data reflects real-world user experiences. Drill down further by looking at data by URL, operating system, browser, and location — all of which can impact how your application performs on end-user machines.
+{{user-experience}} provides a way to quantify and analyze the perceived performance of your web application. Unlike testing environments, {{user-experience}} data reflects real-world user experiences. Drill down further by looking at data by URL, operating system, browser, and location — all of which can impact how your application performs on end-user machines.

Powered by the APM Real user monitoring (RUM) agent, all it takes is a few lines of code to begin surfacing key user experience metrics.
@@ -21,7 +21,7 @@ Powered by the APM Real user monitoring (RUM) agent, all it takes is a few lines

## Why is {{user-experience}} important? [why-user-experience]

-Search engines are placing increasing importance on user experience when organically ranking websites. Elastic makes it easy to view your website data in the context of Google Core Web Vitals — metrics that score three key areas of user experience: loading performance, visual stability, and interactivity. These Core Web Vitals are set to become the main performance measurement in Google ranking factors. If you’re a content-based site that wants to appear in the “Top Stories” section of Google search results, you must have good Core Web Vitals.
+Search engines are placing increasing importance on user experience when organically ranking websites. Elastic makes it easy to view your website data in the context of Google Core Web Vitals — metrics that score three key areas of user experience: loading performance, visual stability, and interactivity. These Core Web Vitals are set to become the main performance measurement in Google ranking factors. If you’re a content-based site that wants to appear in the “Top Stories” section of Google search results, you must have good Core Web Vitals.

## How does {{user-experience}} work? [how-user-experience-works]
@@ -87,12 +87,12 @@ Interaction to next paint (INP)

::::{note}
Previous {{kib}} versions included the metric [First input delay (FID)](https://web.dev/fid/) in the User Experience app. Starting with version 8.12, FID was replaced with *Interaction to next paint (INP)*. The APM RUM agent started collecting INP data in version 5.16.0. If you use an earlier version of the RUM agent with {{kib}} version 8.12 or later, it will *not* capture INP data and there will be *no data* to show in the User Experience app:

-| | **Kibana version ≥ 8.12** | **Kibana version < 8.12** |
+| | **Kibana version ≥ 8.12** | **Kibana version < 8.12** |
| --- | --- | --- |
-| **RUM agent version ≥ 5.16.0** | INP data will be visible. | FID data will be visible. |
-| **RUM agent version < 5.16.0** | The INP section will be empty. | FID data will be visible. |
+| **RUM agent version ≥ 5.16.0** | INP data will be visible. | FID data will be visible. |
+| **RUM agent version < 5.16.0** | The INP section will be empty. | FID data will be visible. |

-RUM agent version ≥ 5.16.0 will continue to collect FID metrics so, while FID metrics are not shown in the User Experience app in {{kib}} versions 8.12 and later, you can choose to visualize FID metrics in a [custom dashboard](/explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md) or using [Lens](/explore-analyze/visualize/lens.md).
+RUM agent version ≥ 5.16.0 will continue to collect FID metrics so, while FID metrics are not shown in the User Experience app in {{kib}} versions 8.12 and later, you can choose to visualize FID metrics in a [custom dashboard](/explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md) or using [Lens](/explore-analyze/visualize/lens.md).

::::
@@ -105,9 +105,9 @@ Cumulative layout shift (CLS)

### Load/view distribution [user-experience-distribution]

-Operating system, browser family, and geographic location can all have a massive impact on how visitors experience your website. This data can help you understand when and where your users are visiting from, and can help you prioritize optimizations — for example, prioritizing improvements for the most popular browsers visiting your site.
+Operating system, browser family, and geographic location can all have a massive impact on how visitors experience your website. This data can help you understand when and where your users are visiting from, and can help you prioritize optimizations — for example, prioritizing improvements for the most popular browsers visiting your site.

-Don’t forget, this data also influences search engine page rankings and placement in top stories for content sites — without requiring the use of AMP.
+Don’t forget, this data also influences search engine page rankings and placement in top stories for content sites — without requiring the use of AMP.
:::{image} /solutions/images/observability-visitor-breakdown.png
:alt: {{user-experience}} visitor breakdown
diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md
index 275732a845..6960b432f3 100644
--- a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md
+++ b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md
@@ -533,7 +533,7 @@ From the **Data view** menu, select `All logs`. From here, you can quickly searc

Quickly triage and troubleshoot application performance problems with the help of Elastic application performance monitoring (APM).

-Think of a latency spike — APM can help you narrow the scope of your investigation to a single service. Because you’ve also ingested and correlated logs and metrics, you can then link the problem to CPU and memory utilization or error log entries of a particular Kubernetes pod.
+Think of a latency spike — APM can help you narrow the scope of your investigation to a single service. Because you’ve also ingested and correlated logs and metrics, you can then link the problem to CPU and memory utilization or error log entries of a particular Kubernetes pod.

### Step 1: Set up APM [_step_1_set_up_apm]
@@ -545,10 +545,10 @@ There are many ways to deploy APM when working with Kubernetes, but this guide a

If you want to manage APM yourself, there are a few alternative options:

::::{dropdown} Expand alternatives
-* [{{ecloud}} on Kubernetes (ECK)](/deploy-manage/deploy/cloud-on-k8s.md) — The Elastic recommended approach for managing APM Server deployed with Kubernetes. Built on the Kubernetes Operator pattern, ECK extends basic Kubernetes orchestration capabilities to support the setup and management of APM Server on Kubernetes.
-* Deploy APM Server as a DaemonSet — Ensure a running instance of APM Server on each node in your cluster. Useful when all pods in a node should share a single APM Server instance.
-* Deploy APM Server as a sidecar — For environments that should not share an APM Server, like when directing traces from multiple applications to separate {{es}} clusters.
-* [Download and install APM Server](/solutions/observability/apm/get-started.md) — The classic, non-Kubernetes option.
+* [{{ecloud}} on Kubernetes (ECK)](/deploy-manage/deploy/cloud-on-k8s.md) — The Elastic recommended approach for managing APM Server deployed with Kubernetes. Built on the Kubernetes Operator pattern, ECK extends basic Kubernetes orchestration capabilities to support the setup and management of APM Server on Kubernetes.
+* Deploy APM Server as a DaemonSet — Ensure a running instance of APM Server on each node in your cluster. Useful when all pods in a node should share a single APM Server instance.
+* Deploy APM Server as a sidecar — For environments that should not share an APM Server, like when directing traces from multiple applications to separate {{es}} clusters.
+* [Download and install APM Server](/solutions/observability/apm/get-started.md) — The classic, non-Kubernetes option.

::::
@@ -672,7 +672,7 @@ Before your application starts, copy the agent from the init container into a sh

The Java command line needs a way to pick up this `javaagent` configuration. You can use the standard JVM TI [JAVA_TOOL_OPTIONS](https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.md#tooloptions) environment variable to do this. It doesn’t have to be explicitly specified and is picked up automatically by the JVM when it starts.

::::{tip}
-For JVMs that don’t support this option, you can use any other environment variable — either one already defined in your startup script, like `JAVA_OPTS` in some servlet container scripts, or add a dedicated empty one that will have no effect if it’s not set.
+For JVMs that don’t support this option, you can use any other environment variable — either one already defined in your startup script, like `JAVA_OPTS` in some servlet container scripts, or add a dedicated empty one that will have no effect if it’s not set.
::::
@@ -814,7 +814,7 @@ npm install elastic-apm-node --save

**Start the agent**

-It’s important that the agent is started before you require any other modules in your Node.js application — before `express`, `http`, etc.
+It’s important that the agent is started before you require any other modules in your Node.js application — before `express`, `http`, etc.

```js
var apm = require('elastic-apm-node').start()
diff --git a/solutions/observability/logs/inspect-log-anomalies.md b/solutions/observability/logs/inspect-log-anomalies.md
index a6565dc5cc..bd62cad4de 100644
--- a/solutions/observability/logs/inspect-log-anomalies.md
+++ b/solutions/observability/logs/inspect-log-anomalies.md
@@ -9,7 +9,7 @@ products:

# Inspect log anomalies [inspect-log-anomalies]

-When the {{anomaly-detect}} features of {{ml}} are enabled, you can use the **Logs Anomalies** page to detect and inspect log anomalies and the log partitions where the log anomalies occur. This means you can easily see anomalous behavior without significant human intervention — no more manually sampling log data, calculating rates, and determining if rates are expected.
+When the {{anomaly-detect}} features of {{ml}} are enabled, you can use the **Logs Anomalies** page to detect and inspect log anomalies and the log partitions where the log anomalies occur. This means you can easily see anomalous behavior without significant human intervention — no more manually sampling log data, calculating rates, and determining if rates are expected.

**Anomalies** automatically highlight periods where the log rate is outside expected bounds and therefore may be anomalous. For example:
diff --git a/solutions/observability/synthetics/index.md b/solutions/observability/synthetics/index.md
index 98ad5f9d88..23af29119a 100644
--- a/solutions/observability/synthetics/index.md
+++ b/solutions/observability/synthetics/index.md
@@ -43,7 +43,7 @@ To set up your first monitor, refer to [Get started](/solutions/observability/sy

Real browser synthetic monitoring enables you to test critical actions and requests that an end-user would make on your site at predefined intervals and in a controlled environment. Synthetic monitoring extends traditional end-to-end testing techniques because it allows your tests to run continuously on the cloud. The result is rich, consistent, and repeatable data that you can trend and alert on.

-For example, you can test popular user journeys, like logging in, adding items to a cart, and checking out — actions that need to work for your users consistently.
+For example, you can test popular user journeys, like logging in, adding items to a cart, and checking out — actions that need to work for your users consistently.

You can run an automated Synthetics project on a real Chromium browser and view each synthetic monitoring journey in your Observability project side-by-side with your other monitors.
diff --git a/solutions/observability/synthetics/migrate-from-elastic-synthetics-integration.md b/solutions/observability/synthetics/migrate-from-elastic-synthetics-integration.md
index 7734adc192..e2fa3a3302 100644
--- a/solutions/observability/synthetics/migrate-from-elastic-synthetics-integration.md
+++ b/solutions/observability/synthetics/migrate-from-elastic-synthetics-integration.md
@@ -63,7 +63,7 @@ Below is a comparison of how you used the {{agent}} integration to create monito

Find more details in [Use {{project-monitors-cap}}](/solutions/observability/synthetics/create-monitors-with-projects.md).
-**How to use the UI ([read more](#synthetics-migrate-integration-ui))**: +**How to use the UI ([read more](#synthetics-migrate-integration-ui))**: * **{{agent}} integration**: diff --git a/solutions/search/api-quickstarts.md b/solutions/search/api-quickstarts.md index b8fa919780..92793acec7 100644 --- a/solutions/search/api-quickstarts.md +++ b/solutions/search/api-quickstarts.md @@ -20,7 +20,7 @@ Use the following quickstarts to get hands-on experience with Elasticsearch APIs - [Analyze eCommerce data with aggregations using Query DSL](/explore-analyze/query-filter/aggregations/tutorial-analyze-ecommerce-data-with-aggregations-using-query-dsl.md): Learn how to analyze data using different types of aggregations, including metrics, buckets, and pipelines. % - [Getting started with {{esql}}](esql-getting-started.md): Learn how to query and aggregate your data using {{esql}}. - [Search and filter with {{esql}}](esql-search-tutorial.md): Learn how to perform full-text and semantic search in {{esql}}, combined with filtering. -- [Semantic search](semantic-search/semantic-search-semantic-text.md): Learn how to create embeddings for your data with `semantic_text` and query using the `semantic` query. +- [Semantic search](semantic-search/semantic-search-semantic-text.md): Learn how to create embeddings for your data with `semantic_text` and query using the `semantic` query. - [Hybrid search](hybrid-semantic-text.md): Learn how to combine semantic search using`semantic_text` with full-text search. - [Bring your own dense vector embeddings](vector/bring-own-vectors.md): Learn how to ingest dense vector embeddings into Elasticsearch. 
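Taken together, the semantic and hybrid quickstarts listed above boil down to a two-step pattern: map a field as `semantic_text`, then search it with a `semantic` query. A minimal sketch (the index name `my-index`, field name `content`, and query text are hypothetical placeholders, not from the quickstarts themselves):

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "content": { "type": "semantic_text" }
    }
  }
}

GET my-index/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How do I configure monitoring?"
    }
  }
}
```

Text indexed into the `semantic_text` field is chunked and embedded automatically using the field's inference endpoint, and the `semantic` query embeds the query string the same way at search time — no separate embedding pipeline to manage.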
diff --git a/solutions/search/vector/knn.md b/solutions/search/vector/knn.md index a8f9f36769..efafe2d0ac 100644 --- a/solutions/search/vector/knn.md +++ b/solutions/search/vector/knn.md @@ -47,7 +47,7 @@ Common use cases for kNN include: {{es}} supports two methods for kNN search: -* [Approximate kNN](#approximate-knn) using the `knn` search option, `knn` query or a `knn` [retriever](../retrievers-overview.md) +* [Approximate kNN](#approximate-knn) using the `knn` search option, `knn` query or a `knn` [retriever](../retrievers-overview.md) * [Exact, brute-force kNN](#exact-knn) using a `script_score` query with a vector function In most cases, you’ll want to use approximate kNN. Approximate kNN offers lower latency at the cost of slower indexing and imperfect accuracy. diff --git a/solutions/security/get-started/configure-advanced-settings.md b/solutions/security/get-started/configure-advanced-settings.md index d45417799f..66ff107262 100644 --- a/solutions/security/get-started/configure-advanced-settings.md +++ b/solutions/security/get-started/configure-advanced-settings.md @@ -194,7 +194,7 @@ The `securitySolution:maxUnassociatedNotes` field determines the maximum number ## Exclude cold and frozen data from rules [exclude-cold-frozen-data-rule-executions] -To ensure the rules in your {{kib}} space exclude query results from cold and frozen tiers when executing, specify cold and frozen [data tiers](/manage-data/lifecycle/data-tiers.md) in the `excludedDataTiersForRuleExecution` field. Multiple data tiers must be separated by commas, for example: `data_frozen`, `data_cold`. This setting is turned off by default; turning it on can improve rule performance and reduce execution time. +To ensure the rules in your {{kib}} space exclude query results from cold and frozen tiers when executing, specify cold and frozen [data tiers](/manage-data/lifecycle/data-tiers.md) in the `excludedDataTiersForRuleExecution` field. 
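The trade-off between the two kNN methods above is easiest to see with a brute-force sketch: exact kNN (what a `script_score` query does server-side) scores *every* stored vector against the query and keeps the top `k`, so it is perfectly accurate but O(n) per search. The toy document IDs and two-dimensional vectors below are illustrative only:

```python
# Brute-force (exact) kNN sketch: score every vector, keep the top k.
# Approximate kNN avoids this full scan via an ANN index, trading a
# little accuracy for much lower search latency.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def exact_knn(query, docs, k):
    """Rank all stored vectors against the query vector; O(n) per search."""
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in docs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

docs = {
    "doc1": [1.0, 0.0],
    "doc2": [0.0, 1.0],
    "doc3": [0.7, 0.7],
}

# doc1 points almost exactly where the query does, so it ranks first.
print(exact_knn([1.0, 0.1], docs, k=2))
```

This full scan is why exact kNN is recommended only for small indices or heavily filtered searches, while approximate kNN scales to large document counts.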
Multiple data tiers must be separated by commas, for example: `data_frozen`, `data_cold`. This setting is turned off by default; turning it on can improve rule performance and reduce execution time. This setting does not apply to {{ml}} rules because {{ml}} anomalies are not stored in cold or frozen data tiers. diff --git a/solutions/security/investigate/visual-event-analyzer.md b/solutions/security/investigate/visual-event-analyzer.md index 59c6ee6952..0d816d5107 100644 --- a/solutions/security/investigate/visual-event-analyzer.md +++ b/solutions/security/investigate/visual-event-analyzer.md @@ -112,7 +112,7 @@ To expand the analyzer to a full screen, select the **Full Screen** icon above t :screenshot: ::: -The left panel contains a list of all processes related to the event, starting with the event chain’s first process. **Analyzed Events** — the event you selected to analyze from the events list or Timeline — are highlighted with a light blue outline around the cube. +The left panel contains a list of all processes related to the event, starting with the event chain’s first process. **Analyzed Events** — the event you selected to analyze from the events list or Timeline — are highlighted with a light blue outline around the cube. 
:::{image} /solutions/images/security-process-list.png :alt: process list diff --git a/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md b/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md index 1257ac1183..23a6c4c607 100644 --- a/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md +++ b/solutions/security/manage-elastic-defend/elastic-endpoint-self-protection-features.md @@ -38,7 +38,7 @@ Other Windows and macOS variants (and all Linux distributions) do not have self- For {{stack}} version >= 7.11.0, self-protection defines the following permissions: -* Users — even Administrator/root — **cannot** delete {{elastic-endpoint}} files (located at `c:\Program Files\Elastic\Endpoint` on Windows, and `/Library/Elastic/Endpoint` on macOS). +* Users — even Administrator/root — **cannot** delete {{elastic-endpoint}} files (located at `c:\Program Files\Elastic\Endpoint` on Windows, and `/Library/Elastic/Endpoint` on macOS). * Users **cannot** terminate the {{elastic-endpoint}} program or service. * Administrator/root users **can** read {{elastic-endpoint}}'s files. On Windows, the easiest way to read {{elastic-endpoint}} files is to start an Administrator `cmd.exe` prompt. On macOS, an Administrator can use the `sudo` command. * Administrator/root users **can** stop the {{elastic-agent}}'s service. On Windows, run the `sc stop "Elastic Agent"` command. On macOS, run the `sudo launchctl stop elastic-agent` command. diff --git a/solutions/security/manage-elastic-defend/event-filters.md b/solutions/security/manage-elastic-defend/event-filters.md index 85e3531a00..c0bf346b77 100644 --- a/solutions/security/manage-elastic-defend/event-filters.md +++ b/solutions/security/manage-elastic-defend/event-filters.md @@ -14,7 +14,7 @@ products: # Event filters -Event filters allow you to filter out endpoint events that you don’t want stored in {{es}} — for example, high-volume events. 
By creating event filters, you can optimize your storage in {{es}}. +Event filters allow you to filter out endpoint events that you don’t want stored in {{es}} — for example, high-volume events. By creating event filters, you can optimize your storage in {{es}}. Event filters do not lower CPU usage on hosts; {{elastic-endpoint}} still monitors events to detect and prevent possible threats, but without writing event data to {{es}}. To compare event filters with other endpoint artifacts, refer to [](/solutions/security/manage-elastic-defend/optimize-elastic-defend.md). diff --git a/solutions/security/manage-elastic-defend/trusted-applications.md b/solutions/security/manage-elastic-defend/trusted-applications.md index b16a5d5642..13a2301cfd 100644 --- a/solutions/security/manage-elastic-defend/trusted-applications.md +++ b/solutions/security/manage-elastic-defend/trusted-applications.md @@ -25,7 +25,7 @@ You must have the **Trusted Applications** [privilege](/solutions/security/confi :::: -Trusted applications create blindspots for {{elastic-defend}}, because the applications are no longer monitored for threats. One avenue attackers use to exploit these blindspots is by DLL (Dynamic Link Library) side-loading, where they leverage processes signed by trusted vendors — such as antivirus software — to execute their malicious DLLs. Such activity appears to originate from the trusted application’s process. +Trusted applications create blindspots for {{elastic-defend}}, because the applications are no longer monitored for threats. One avenue attackers use to exploit these blindspots is by DLL (Dynamic Link Library) side-loading, where they leverage processes signed by trusted vendors — such as antivirus software — to execute their malicious DLLs. Such activity appears to originate from the trusted application’s process. Trusted applications might still generate alerts in some cases, such as if the application’s process events indicate malicious behavior. 
To reduce false positive alerts, add an [Endpoint alert exception](/solutions/security/detect-and-alert/add-manage-exceptions.md#endpoint-rule-exceptions), which prevents {{elastic-defend}} from generating alerts. To compare trusted applications with other endpoint artifacts, refer to [](/solutions/security/manage-elastic-defend/optimize-elastic-defend.md). diff --git a/troubleshoot/observability/apm-agent-ruby/apm-ruby-agent.md b/troubleshoot/observability/apm-agent-ruby/apm-ruby-agent.md index 3100939b4a..6fd9dc1696 100644 --- a/troubleshoot/observability/apm-agent-ruby/apm-ruby-agent.md +++ b/troubleshoot/observability/apm-agent-ruby/apm-ruby-agent.md @@ -14,7 +14,7 @@ products: Hopefully the agent Just Works™, but depending on your situation the agent might need some tuning. -First, to learn more about what’s going on inside the agent, you can increase the amount of log messages it writes. To do this, set the log level with the option `log_level = 0` — `0` being the level of most messages, `DEBUG`. +First, to learn more about what’s going on inside the agent, you can increase the amount of log messages it writes. To do this, set the log level with the option `log_level = 0` — `0` being the level of most messages, `DEBUG`. In your `config/elastic_apm.yml`:
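A minimal sketch of that setting in the config file — assuming the plain YAML key/value form the Ruby agent's `config/elastic_apm.yml` uses, with no other options shown:

```yaml
# config/elastic_apm.yml
# 0 = DEBUG, the most verbose level — useful while troubleshooting
log_level: 0
```

Remember to restore a quieter level once you are done debugging, as DEBUG output can be voluminous in production.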