diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md b/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md index 9e5636e4cf..6df217dc21 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md @@ -341,7 +341,7 @@ If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem is If the volume driver does not support `ExpandInUsePersistentVolumes`, you must manually delete Pods after the resize so that they can be recreated automatically with the expanded filesystem.

-Any other changes in the volumeClaimTemplates—​such as changing the storage class or decreasing the volume size—​are not allowed. To make changes such as these, you must fully delete the {{ls}} resource, delete and recreate or resize the volume, and create a new {{ls}} resource.
+Any other changes in the volumeClaimTemplates—such as changing the storage class or decreasing the volume size—are not allowed. To make changes such as these, you must fully delete the {{ls}} resource, delete and recreate or resize the volume, and create a new {{ls}} resource.

Before you delete a persistent queue (PQ) volume, ensure that the queue is empty. We recommend setting `queue.drain: true` on the {{ls}} Pods to ensure that the queue is drained when Pods are shutdown. Note that you should also increase the `terminationGracePeriodSeconds` to a large enough value to allow the queue to drain.

diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md index 52afcd198b..72b261701c 100644 --- a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md +++ b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md @@ -12,7 +12,7 @@ products: The power of {{ls}} is in the plugins--[inputs](logstash-docs-md://lsr/input-plugins.md), [outputs](logstash-docs-md://lsr/output-plugins.md), [filters](logstash-docs-md://lsr/filter-plugins.md), and [codecs](logstash-docs-md://lsr/codec-plugins.md).

-In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances—​including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation.
+In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances—including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation.

In this section, we’ll cover:

diff --git a/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md b/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md index eee7a1de58..a105ddef68 100644 --- a/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md +++ b/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md @@ -185,7 +185,7 @@ node.roles: [ data ] Content data nodes are part of the content tier. Data stored in the content tier is generally a collection of items such as a product catalog or article archive. Unlike time series data, the value of the content remains relatively constant over time, so it doesn’t make sense to move it to a tier with different performance characteristics as it ages.
Content data typically has long data retention requirements, and you want to be able to retrieve items quickly regardless of how old they are.

-Content tier nodes are usually optimized for query performance—​they prioritize processing power over IO throughput so they can process complex searches and aggregations and return results quickly. While they are also responsible for indexing, content data is generally not ingested at as high a rate as time series data such as logs and metrics. From a resiliency perspective the indices in this tier should be configured to use one or more replicas.
+Content tier nodes are usually optimized for query performance—they prioritize processing power over IO throughput so they can process complex searches and aggregations and return results quickly. While they are also responsible for indexing, content data is generally not ingested at as high a rate as time series data such as logs and metrics. From a resiliency perspective the indices in this tier should be configured to use one or more replicas.

The content tier is required and is often deployed within the same node grouping as the hot tier. System indices and other indices that aren’t part of a data stream are automatically allocated to the content tier.

diff --git a/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md b/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md index faa7cb2fb8..aaf48b0e6f 100644 --- a/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md +++ b/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md @@ -33,7 +33,7 @@ It is critical that all nodes share the same setup. Otherwise, monitoring data m ::::

-When the exporters route monitoring data into the monitoring cluster, they use `_bulk` indexing for optimal performance. All monitoring data is forwarded in bulk to all enabled exporters on the same node. From there, the exporters serialize the monitoring data and send a bulk request to the monitoring cluster. There is no queuing—​in memory or persisted to disk—​so any failure during the export results in the loss of that batch of monitoring data. This design limits the impact on {{es}} and the assumption is that the next pass will succeed.
+When the exporters route monitoring data into the monitoring cluster, they use `_bulk` indexing for optimal performance. All monitoring data is forwarded in bulk to all enabled exporters on the same node. From there, the exporters serialize the monitoring data and send a bulk request to the monitoring cluster. There is no queuing—in memory or persisted to disk—so any failure during the export results in the loss of that batch of monitoring data. This design limits the impact on {{es}} and the assumption is that the next pass will succeed.

Routing monitoring data involves indexing it into the appropriate monitoring indices. Once the data is indexed, it exists in a monitoring index that, by default, is named with a daily index pattern. For {{es}} monitoring data, this is an index that matches `.monitoring-es-6-*`. From there, the data lives inside the monitoring cluster and must be curated or cleaned up as necessary. If you do not curate the monitoring data, it eventually fills up the nodes and the cluster might fail due to lack of disk space.
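For illustration, a minimal `elasticsearch.yml` sketch of the kind of `http` exporter described above (the exporter id and host are hypothetical; remember that every node must carry the same setup):

```yaml
xpack.monitoring.exporters:
  my_monitoring_cluster:   # hypothetical exporter id
    type: http             # route monitoring data to a remote monitoring cluster
    host: ["https://monitoring.example.com:9200"]
```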
diff --git a/deploy-manage/remote-clusters/ec-enable-ccs.md b/deploy-manage/remote-clusters/ec-enable-ccs.md index 5d450b0055..99e16ae299 100644 --- a/deploy-manage/remote-clusters/ec-enable-ccs.md +++ b/deploy-manage/remote-clusters/ec-enable-ccs.md @@ -13,7 +13,7 @@ products: You can configure an {{ech}} deployment to remotely access or (be accessed by) a cluster from:

-* Another {{ech}} deployment of your {{ecloud}} organization, across any region or cloud provider (AWS, GCP, Azure…​)
+* Another {{ech}} deployment of your {{ecloud}} organization, across any region or cloud provider (AWS, GCP, Azure…)
* An {{ech}} deployment of another {{ecloud}} organization
* A deployment in an {{ece}} installation
* A deployment in an {{eck}} installation

diff --git a/deploy-manage/tools/snapshot-and-restore/azure-repository.md b/deploy-manage/tools/snapshot-and-restore/azure-repository.md index 93d741ba6f..dfcf17f289 100644 --- a/deploy-manage/tools/snapshot-and-restore/azure-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/azure-repository.md @@ -197,7 +197,7 @@ PUT _snapshot/my_backup ## Repository validation rules [repository-azure-validation]

-According to the [containers naming guide](https://docs.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers—​Blobs—​and-Metadata), a container name must be a valid DNS name, conforming to the following naming rules:
+According to the [containers naming guide](https://docs.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata), a container name must be a valid DNS name, conforming to the following naming rules:

* Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
* Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.

diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md index eb1e3000f3..71bdea9ba6 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md @@ -85,7 +85,7 @@ During setup, you can map users according to their properties to {{ece}} roles. ## Step 3: Change the order of provider profiles [ece-provider-order]

-{{ece}} performs authentication checks against the configured providers, in order. When a match is found, the user search stops. The roles specified by that first profile match dictate which permissions the user is granted—​regardless of what permissions might be available in another, lower-order profile.
+{{ece}} performs authentication checks against the configured providers, in order. When a match is found, the user search stops. The roles specified by that first profile match dictate which permissions the user is granted—regardless of what permissions might be available in another, lower-order profile.

To change the provider order:

diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md b/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md index 68dd93a62d..bcd312dd73 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md @@ -31,7 +31,7 @@ xpack.security.authc: ```

1.
The username/principal of the anonymous user. Defaults to `_es_anonymous_user` if not specified.
-2. The roles to associate with the anonymous user. If no roles are specified, anonymous access is disabled—​anonymous requests will be rejected and return an authentication error.
+2. The roles to associate with the anonymous user. If no roles are specified, anonymous access is disabled—anonymous requests will be rejected and return an authentication error.
3. When `true`, a 403 HTTP status code is returned if the anonymous user does not have the permissions needed to perform the requested action and the user will NOT be prompted to provide credentials to access the requested resource. When `false`, a 401 HTTP status code is returned if the anonymous user does not have the necessary permissions and the user is prompted for credentials to access the requested resource. If you are using anonymous access in combination with HTTP, you might need to set `authz_exception` to `false` if your client does not support preemptive basic authentication. Defaults to `true`.

diff --git a/explore-analyze/alerts-cases/alerts/maintenance-windows.md b/explore-analyze/alerts-cases/alerts/maintenance-windows.md index b1bdc07424..166bcd28b9 100644 --- a/explore-analyze/alerts-cases/alerts/maintenance-windows.md +++ b/explore-analyze/alerts-cases/alerts/maintenance-windows.md @@ -25,7 +25,7 @@ By default, a maintenance window affects all rules in all {{kib}} apps within it Alerts continue to be generated, however notifications are suppressed as follows:

-* When an alert occurs during a maintenance window, there are no notifications. When the alert recovers, there are no notifications—​even if the recovery occurs after the maintenance window ends.
+* When an alert occurs during a maintenance window, there are no notifications. When the alert recovers, there are no notifications—even if the recovery occurs after the maintenance window ends.
* When an alert occurs before a maintenance window and recovers during or after the maintenance window, notifications are sent as usual.

## Configure access to maintenance windows [setup-maintenance-windows]

diff --git a/explore-analyze/alerts-cases/watcher/actions-email.md b/explore-analyze/alerts-cases/watcher/actions-email.md index 7691406697..af3764e366 100644 --- a/explore-analyze/alerts-cases/watcher/actions-email.md +++ b/explore-analyze/alerts-cases/watcher/actions-email.md @@ -130,7 +130,7 @@ See [Automating report generation](../../report-and-share/automating-report-gene

$$$email-address$$$ Email Address
-: An email address can contain two possible parts—​the address itself and an optional personal name as described in [RFC 822](http://www.ietf.org/rfc/rfc822.txt). The address can be represented either as a string of the form `user@host.domain` or `Personal Name <user@host.domain>`. You can also specify an email address as an object that contains `name` and `address` fields.
+: An email address can contain two possible parts—the address itself and an optional personal name as described in [RFC 822](http://www.ietf.org/rfc/rfc822.txt). The address can be represented either as a string of the form `user@host.domain` or `Personal Name <user@host.domain>`. You can also specify an email address as an object that contains `name` and `address` fields.
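For example, a hedged sketch of the object form inside an email action (the name and address are placeholders):

```js
"to": {
  "name": "Ops Team",
  "address": "ops@example.com"
}
```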
$$$address-list$$$

diff --git a/explore-analyze/alerts-cases/watcher/input-search.md b/explore-analyze/alerts-cases/watcher/input-search.md index 9898371ce4..4202b0420e 100644 --- a/explore-analyze/alerts-cases/watcher/input-search.md +++ b/explore-analyze/alerts-cases/watcher/input-search.md @@ -19,7 +19,7 @@ In the search input’s `request` object, you specify:

* The [search type](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search)
* The search request body

-The search request body supports the full Elasticsearch Query DSL—​it’s the same as the body of an Elasticsearch `_search` request.
+The search request body supports the full Elasticsearch Query DSL—it’s the same as the body of an Elasticsearch `_search` request.

For example, the following input retrieves all `event` documents from the `logs` index:

diff --git a/explore-analyze/alerts-cases/watcher/schedule-types.md b/explore-analyze/alerts-cases/watcher/schedule-types.md index 9ace5358c8..9a3125308c 100644 --- a/explore-analyze/alerts-cases/watcher/schedule-types.md +++ b/explore-analyze/alerts-cases/watcher/schedule-types.md @@ -34,7 +34,7 @@ If you don’t specify the `minute` attribute for an `hourly` schedule, it defau

To configure a once an hour schedule, you specify a single time with the `minute` attribute.

-For example, the following `hourly` schedule triggers at minute 30 every hour-- `12:30`, `13:30`, `14:30`, …​:
+For example, the following `hourly` schedule triggers at minute 30 every hour-- `12:30`, `13:30`, `14:30`, …:

```js
{
@@ -49,7 +49,7 @@ For example, the following `hourly` schedule triggers at minute 30 every hour--

### Configuring a multiple times hourly schedule [_configuring_a_multiple_times_hourly_schedule]

-To configure an `hourly` schedule that triggers at multiple times during the hour, you specify an array of minutes. For example, the following schedule triggers every 15 minutes every hour--`12:00`, `12:15`, `12:30`, `12:45`, `1:00`, `1:15`, …​:
+To configure an `hourly` schedule that triggers at multiple times during the hour, you specify an array of minutes. For example, the following schedule triggers every 15 minutes every hour--`12:00`, `12:15`, `12:30`, `12:45`, `1:00`, `1:15`, …:

```js
{

diff --git a/explore-analyze/alerts-cases/watcher/watch-cluster-status.md b/explore-analyze/alerts-cases/watcher/watch-cluster-status.md index 03d939cd86..58a94c34c9 100644 --- a/explore-analyze/alerts-cases/watcher/watch-cluster-status.md +++ b/explore-analyze/alerts-cases/watcher/watch-cluster-status.md @@ -148,7 +148,7 @@ GET .watcher-history*/_search?pretty ## Take action [health-take-action]

-Recording `watch_records` in the watch history is nice, but the real power of {{watcher}} is being able to do something in response to an alert. A watch’s [actions](actions.md) define what to do when the watch condition is true—​you can send emails, call third-party webhooks, or write documents to an Elasticsearch index or log when the watch condition is met.
+Recording `watch_records` in the watch history is nice, but the real power of {{watcher}} is being able to do something in response to an alert. A watch’s [actions](actions.md) define what to do when the watch condition is true—you can send emails, call third-party webhooks, or write documents to an Elasticsearch index or log when the watch condition is met.

For example, you could add an action to index the cluster status information when the status is RED.
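A sketch of what such an action could look like, assuming a hypothetical `red-alerts` destination index (the exact payload depends on your watch input):

```js
"actions": {
  "index_red_status": {
    "index": {
      "index": "red-alerts"
    }
  }
}
```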
diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md b/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md index 9ce3b244f7..2d1e46a8ed 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md @@ -28,7 +28,7 @@ You need to have a compatible visualization on **Dashboard** to create an {{anom ::::

1. Go to **Analytics > Dashboard** from the main menu, or use the [global search field](../../find-and-organize/find-apps-and-objects.md). Select a dashboard with a compatible visualization.
-2. Open the **Options (…​) menu** for the panel, then select **More**.
+2. Open the **Options (…) menu** for the panel, then select **More**.
3. Select **Create {{anomaly-job}}**. The option is only displayed if the visualization can be converted to an {{anomaly-job}} configuration.
4. (Optional) Select the layer from which the {{anomaly-job}} is created.

diff --git a/explore-analyze/query-filter/languages/sql-client-apps-powerbi.md b/explore-analyze/query-filter/languages/sql-client-apps-powerbi.md index c4ba3db474..7adfae8458 100644 --- a/explore-analyze/query-filter/languages/sql-client-apps-powerbi.md +++ b/explore-analyze/query-filter/languages/sql-client-apps-powerbi.md @@ -26,7 +26,7 @@ Elastic does not endorse, promote or provide support for this application; for n ## Data loading [_data_loading]

-First, you’ll need to choose ODBC as the source to load data from. Once launched, click on the *Get Data* button (under *Home* tab), then on the *More…​* button at the bottom of the list:
+First, you’ll need to choose ODBC as the source to load data from. Once launched, click on the *Get Data* button (under *Home* tab), then on the *More…* button at the bottom of the list:

$$$apps_pbi_fromodbc1$$$ ![apps pbi fromodbc1](/explore-analyze/images/elasticsearch-reference-apps_pbi_fromodbc1.png "")

diff --git a/explore-analyze/query-filter/languages/sql-client-apps-qlik.md b/explore-analyze/query-filter/languages/sql-client-apps-qlik.md index 99af143434..cb4bddc47f 100644 --- a/explore-analyze/query-filter/languages/sql-client-apps-qlik.md +++ b/explore-analyze/query-filter/languages/sql-client-apps-qlik.md @@ -37,14 +37,14 @@ To use the Elasticsearch SQL ODBC Driver to load data into Qlik Sense Desktop pe 2. Name app

-   …​then give it a name,
+   …then give it a name,

$$$apps_qlik_create$$$ ![apps qlik create](/explore-analyze/images/elasticsearch-reference-apps_qlik_create.png "")

3. Open app

-   …​and then open it:
+   …and then open it:

$$$apps_qlik_open$$$ ![apps qlik open](/explore-analyze/images/elasticsearch-reference-apps_qlik_open.png "")

diff --git a/explore-analyze/query-filter/languages/sql-client-apps-tableau-desktop.md b/explore-analyze/query-filter/languages/sql-client-apps-tableau-desktop.md index 43729ce768..8c55bde374 100644 --- a/explore-analyze/query-filter/languages/sql-client-apps-tableau-desktop.md +++ b/explore-analyze/query-filter/languages/sql-client-apps-tableau-desktop.md @@ -36,7 +36,7 @@ Move the {{es}} Connector for Tableau to the Tableau Desktop connectors director

* Windows: `C:\Users\[Windows User]\Documents\My Tableau Repository\Connectors`
* Mac: `/Users/[user]/Documents/My Tableau Repository/Connectors`

-Launch Tableau Desktop. In the menu, click **More…​** and select **Elasticsearch by Elastic** as the data source.
+Launch Tableau Desktop.
In the menu, click **More…** and select **Elasticsearch by Elastic** as the data source.

$$$apps_tableau_desktop_from_connector$$$ ![Select Elasticsearch by Elastic as the data source](/explore-analyze/images/elasticsearch-reference-apps_tableau_desktop_from_connector.png "")

diff --git a/explore-analyze/query-filter/languages/sql-functions-conditional.md b/explore-analyze/query-filter/languages/sql-functions-conditional.md index 64fa463ed6..aaf38feffb 100644 --- a/explore-analyze/query-filter/languages/sql-functions-conditional.md +++ b/explore-analyze/query-filter/languages/sql-functions-conditional.md @@ -159,7 +159,7 @@ COALESCE( 2. 2nd expression

-…​
+…

**N**th expression

@@ -201,7 +201,7 @@ GREATEST( 2. 2nd expression

-…​
+…

**N**th expression

@@ -360,7 +360,7 @@ LEAST( 2. 2nd expression

-…​
+…

**N**th expression

diff --git a/explore-analyze/query-filter/languages/sql-functions-datetime.md b/explore-analyze/query-filter/languages/sql-functions-datetime.md index 68c043df47..cc11f72069 100644 --- a/explore-analyze/query-filter/languages/sql-functions-datetime.md +++ b/explore-analyze/query-filter/languages/sql-functions-datetime.md @@ -1158,7 +1158,7 @@ DAY_NAME(datetime_exp) <1> **Output**: string

-**Description**: Extract the day of the week from a date/datetime in text format (`Monday`, `Tuesday`…​).
+**Description**: Extract the day of the week from a date/datetime in text format (`Monday`, `Tuesday`…).

```sql
SELECT DAY_NAME(CAST('2018-02-19T10:23:27Z' AS TIMESTAMP)) AS day;
@@ -1326,7 +1326,7 @@ MONTH_NAME(datetime_exp) <1> **Output**: string

-**Description**: Extract the month from a date/datetime in text format (`January`, `February`…​).
+**Description**: Extract the month from a date/datetime in text format (`January`, `February`…).

```sql
SELECT MONTH_NAME(CAST('2018-02-19T10:23:27Z' AS TIMESTAMP)) AS month;

diff --git a/explore-analyze/query-filter/languages/sql-lexical-structure.md b/explore-analyze/query-filter/languages/sql-lexical-structure.md index b72d0e1205..24dc63d3b2 100644 --- a/explore-analyze/query-filter/languages/sql-lexical-structure.md +++ b/explore-analyze/query-filter/languages/sql-lexical-structure.md @@ -24,7 +24,7 @@ Take the following example: SELECT * FROM table ```

-This query has four tokens: `SELECT`, `*`, `FROM` and `table`. The first three, namely `SELECT`, `*` and `FROM` are *key words* meaning words that have a fixed meaning in SQL. The token `table` is an *identifier* meaning it identifies (by name) an entity inside SQL such as a table (in this case), a column, etc…​
+This query has four tokens: `SELECT`, `*`, `FROM` and `table`. The first three, namely `SELECT`, `*` and `FROM` are *key words* meaning words that have a fixed meaning in SQL. The token `table` is an *identifier* meaning it identifies (by name) an entity inside SQL such as a table (in this case), a column, etc…

As one can see, both key words and identifiers have the *same* lexical structure and thus one cannot know whether a token is one or the other without knowing the SQL language; the complete list of key words is available in the [reserved appendix](sql-syntax-reserved.md). Do note that key words are case-insensitive meaning the previous example can be written as:

@@ -132,7 +132,7 @@ A few characters that are not alphanumeric have a dedicated meaning different fr | --- | --- | | `*` | The asterisk (or wildcard) is used in some contexts to denote all fields for a table. Can be also used as an argument to some aggregate functions.
| | `,` | Commas are used to enumerate the elements of a list. |
-| `.` | Used in numeric constants or to separate identifiers qualifiers (catalog, table, column names, etc…​). |
+| `.` | Used in numeric constants or to separate identifier qualifiers (catalog, table, column names, etc…). |
| `()` | Parentheses are used for specific SQL commands, function declarations or to enforce precedence. |

diff --git a/explore-analyze/query-filter/languages/sql-odbc-installation.md b/explore-analyze/query-filter/languages/sql-odbc-installation.md index aef15cd368..2903aa607d 100644 --- a/explore-analyze/query-filter/languages/sql-odbc-installation.md +++ b/explore-analyze/query-filter/languages/sql-odbc-installation.md @@ -151,7 +151,7 @@ Supported Windows Installer command line arguments can be viewed using: msiexec.exe /help ```

-…​or by consulting the [Windows Installer SDK Command-Line Options](https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988(v=vs.85).aspx).
+…or by consulting the [Windows Installer SDK Command-Line Options](https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988(v=vs.85).aspx).

### Command line options [odbc-msi-command-line-options]

diff --git a/explore-analyze/query-filter/languages/sql-odbc-setup.md b/explore-analyze/query-filter/languages/sql-odbc-setup.md index d7cf6bfcca..f928772d1e 100644 --- a/explore-analyze/query-filter/languages/sql-odbc-setup.md +++ b/explore-analyze/query-filter/languages/sql-odbc-setup.md @@ -18,7 +18,7 @@ Once the driver has been installed, in order for an application to be able to co

DSN (*data source name*) is a generic name given to the set of parameters an ODBC driver needs to connect to a database.

-We will refer to these parameters as *connection parameters* or *DSN* (despite some of these parameters configuring some other aspects of a driver’s functions; e.g. logging, buffer sizes…​).
+We will refer to these parameters as *connection parameters* or *DSN* (despite some of these parameters configuring some other aspects of a driver’s functions; e.g. logging, buffer sizes…).

Using a DSN is the most widely used, simplest and safest way of performing the driver configuration. Constructing a connection string, on the other hand, is the most crude way and consequently the least common method.

@@ -70,7 +70,7 @@ The configuration steps are similar for all the above points. Following is an ex #### 2.1 Launch Elasticsearch SQL ODBC Driver DSN Editor [_2_1_launch_elasticsearch_sql_odbc_driver_dsn_editor]

-Click on the *System DSN* tab, then on the *Add…​* button:
+Click on the *System DSN* tab, then on the *Add…* button:

$$$system_add$$$ ![administrator system add](/explore-analyze/images/elasticsearch-reference-administrator_system_add.png "")

@@ -192,7 +192,7 @@ One of the following SSL options can be chosen: ::::

- If using the file browser to locate the certificate - by pressing the *Browse…​* button - only files with *.pem* and *.der* extensions will be considered by default. Choose *All Files (*.*)* from the drop down, if your file ends with a different extension:
+ If using the file browser to locate the certificate - by pressing the *Browse…* button - only files with *.pem* and *.der* extensions will be considered by default.
Choose *All Files (*.*)* from the drop down, if your file ends with a different extension:

$$$dsn_editor_cert$$$ ![dsn editor security cert](/explore-analyze/images/elasticsearch-reference-dsn_editor_security_cert.png "")

diff --git a/explore-analyze/query-filter/languages/sql-syntax-select.md b/explore-analyze/query-filter/languages/sql-syntax-select.md index 4485abb3b7..3466ad9ad6 100644 --- a/explore-analyze/query-filter/languages/sql-syntax-select.md +++ b/explore-analyze/query-filter/languages/sql-syntax-select.md @@ -125,7 +125,7 @@ where:

`table_name`
: Represents the name (optionally qualified) of an existing table, either a concrete or base one (actual index) or alias.

-If the table name contains special SQL characters (such as `.`,`-`,`*`,etc…​) use double quotes to escape them:
+If the table name contains special SQL characters (such as `.`, `-`, `*`, etc…), use double quotes to escape them:

```sql
SELECT * FROM "emp" LIMIT 1;

diff --git a/explore-analyze/transforms/transform-overview.md b/explore-analyze/transforms/transform-overview.md index 89da1bb95b..e948091b17 100644 --- a/explore-analyze/transforms/transform-overview.md +++ b/explore-analyze/transforms/transform-overview.md @@ -62,7 +62,7 @@ As in the case of a pivot, a latest {{transform}} can run once or continuously. {{transforms-cap}} perform search aggregations on the source indices then index the results into the destination index. Therefore, a {{transform}} never takes less time or uses less resources than the aggregation and indexing processes.

-If your {{transform}} must process a lot of historic data, it has high resource usage initially—​particularly during the first checkpoint.
+If your {{transform}} must process a lot of historic data, it has high resource usage initially—particularly during the first checkpoint.

For better performance, make sure that your search aggregations and queries are optimized and that your {{transform}} is processing only necessary data. Consider whether you can apply a source query to the {{transform}} to reduce the scope of data it processes. Also consider whether the cluster has sufficient resources in place to support both the composite aggregation search and the indexing of its results.

diff --git a/explore-analyze/visualize/canvas/canvas-tinymath-functions.md b/explore-analyze/visualize/canvas/canvas-tinymath-functions.md index adffd4abc5..fca7d08098 100644 --- a/explore-analyze/visualize/canvas/canvas-tinymath-functions.md +++ b/explore-analyze/visualize/canvas/canvas-tinymath-functions.md @@ -36,13 +36,13 @@ abs([-1 , -2, 3, -4]) // returns [1, 2, 3, 4] ```

-## add( …​args ) [_add_args]
+## add( …args ) [_add_args]

Calculates the sum of one or more numbers/arrays passed into the function. If at least one array of numbers is passed into the function, the function will calculate the sum by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The sum of all numbers in `args` if `args` contains only numbers. Returns an array of sums of the elements at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

@@ -95,13 +95,13 @@ ceil([1.1, 2.2, 3.3]) // returns [2, 3, 4] ```

-## clamp( …​a, min, max ) [_clamp_a_min_max]
+## clamp( …a, min, max ) [_clamp_a_min_max]

Restricts value to a given range and returns closed available value.
If only `min` is provided, values are restricted to only a lower bound.

| Param | Type | Description |
| --- | --- | --- |
-| …​a | number | Array. | one or more numbers or arrays of numbers |
+| …a | number | Array. | one or more numbers or arrays of numbers |
| min | number | Array. | (optional) The minimum value this function will return. |
| max | number | Array. | (optional) The maximum value this function will return. |

@@ -328,13 +328,13 @@ log([10, 100, 1000, 10000, 100000]) // returns [1, 2, 3, 4, 5] ```

-## max( …​args ) [_max_args]
+## max( …args ) [_max_args]

Finds the maximum value of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the maximum by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The maximum value of all numbers if `args` contains only numbers. Returns an array with the the maximum values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

max([1, 9], 4, [3, 5]) // returns [max([1, 4, 3]), max([9, 4, 5])] = [4, 9]
```

-## mean( …​args ) [_mean_args]
+## mean( …args ) [_mean_args]

Finds the mean value of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mean by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The mean value of all numbers if `args` contains only numbers. Returns an array with the the mean values of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

mean([1, 9], 5, [3, 4]) // returns [mean([1, 5, 3]), mean([9, 5, 4])] = [3, 6]
```

-## median( …​args ) [_median_args]
+## median( …args ) [_median_args]

Finds the median value(s) of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the median by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The median value of all numbers if `args` contains only numbers. Returns an array with the the median values of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

median([1, 9], 2, 4, [3, 5]) // returns [median([1, 2, 4, 3]), median([9, 2, 4,
```

-## min( …​args ) [_min_args]
+## min( …args ) [_min_args]

Finds the minimum value of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the minimum by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`.
The minimum value of all numbers if `args` contains only numbers. Returns an array with the the minimum values of each index, including all scalar numbers in `args` in the calculation at each index if `a` is an array.

@@ -435,13 +435,13 @@ mod([14, 42, 65, 108], [5, 4, 14, 2]) // returns [5, 2, 9, 0] ```

-## mode( …​args ) [_mode_args]
+## mode( …args ) [_mode_args]

Finds the mode value(s) of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mode by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.>`. An array of mode value(s) of all numbers if `args` contains only numbers. Returns an array of arrays with mode value(s) of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

@@ -521,13 +521,13 @@ random(-10,10) // returns a random number between -10 (inclusive) and 10 (exclus ```

-## range( …​args ) [_range_args]
+## range( …args ) [_range_args]

Finds the range of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the range by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The range value of all numbers if `args` contains only numbers. Returns an array with the range values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

range([1, 9], 4, [3, 5]) // returns [range([1, 4, 3]), range([9, 4, 5])] = [3, 5
```

-## range( …​args ) [_range_args_2]
+## range( …args ) [_range_args_2]

Finds the range of one of more numbers/arrays of numbers into the function. If at least one array of numbers is passed into the function, the function will find the range by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The range value of all numbers if `args` contains only numbers. Returns an array with the the range values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

@@ -664,7 +664,7 @@ subtract([14, 42, 65, 108], [2, 7, 5, 12]) // returns [12, 35, 52, 96] ```

-## sum( …​args ) [_sum_args]
+## sum( …args ) [_sum_args]

Calculates the sum of one or more numbers/arrays passed into the function. If at least one array is passed, the function will sum up one or more numbers/arrays of numbers and distinct values of an array. Sum accepts arrays of different lengths.
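A usage sketch with illustrative values; unlike `add`, `sum` folds every element, array or scalar, into a single total:

```
sum(1, 2, 3) // returns 6
sum([10, 20, 30, 40], 10, 20, 30) // returns 160
```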
diff --git a/explore-analyze/visualize/graph/graph-configuration.md b/explore-analyze/visualize/graph/graph-configuration.md index 4700edb913..77cd2447b2 100644 --- a/explore-analyze/visualize/graph/graph-configuration.md +++ b/explore-analyze/visualize/graph/graph-configuration.md @@ -18,7 +18,7 @@ When a user saves a graph workspace in Kibana, it is stored in the `.kibana` ind **data**
: The visualized content (the vertices and connections displayed in the workspace).

-The data in a saved workspace is like a report—​it is a saved snapshot that potentially summarizes millions of raw documents. Once saved, these summaries are no longer controlled by security policies. Because the original documents might be deleted after a workspace is saved, there’s no practical basis for checking permissions for the data in a saved workspace.
+The data in a saved workspace is like a report—it is a saved snapshot that potentially summarizes millions of raw documents. Once saved, these summaries are no longer controlled by security policies. Because the original documents might be deleted after a workspace is saved, there’s no practical basis for checking permissions for the data in a saved workspace.

For this reason, you can configure the save policy for graph workspaces to ensure appropriate handling of your data. You can allow all users to save only the configuration information for a graph, require all users to explicitly include the workspace data, or completely disable the ability to save a workspace.

diff --git a/manage-data/data-store/near-real-time-search.md b/manage-data/data-store/near-real-time-search.md index 8b01f06ad6..176413cd21 100644 --- a/manage-data/data-store/near-real-time-search.md +++ b/manage-data/data-store/near-real-time-search.md @@ -21,7 +21,7 @@ Sitting between {{es}} and the disk is the filesystem cache. Documents in the in :name: img-pre-refresh :::

-Lucene allows new segments to be written and opened, making the documents they contain visible to search ​without performing a full commit. This is a much lighter process than a commit to disk, and can be done frequently without degrading performance.
+Lucene allows new segments to be written and opened, making the documents they contain visible to search without performing a full commit. This is a much lighter process than a commit to disk, and can be done frequently without degrading performance.

:::{image} /manage-data/images/elasticsearch-reference-lucene-written-not-committed.png :alt: The buffer contents are written to a segment, which is searchable, but is not yet committed

diff --git a/manage-data/data-store/text-analysis/specify-an-analyzer.md b/manage-data/data-store/text-analysis/specify-an-analyzer.md index a081b35141..081bc9a6b0 100644 --- a/manage-data/data-store/text-analysis/specify-an-analyzer.md +++ b/manage-data/data-store/text-analysis/specify-an-analyzer.md @@ -18,7 +18,7 @@ products: ::::{admonition} Keep it simple :class: tip

-The flexibility to specify analyzers at different levels and for different times is great…​ *but only when it’s needed*.
+The flexibility to specify analyzers at different levels and for different times is great… *but only when it’s needed*.

In most cases, a simple approach works best: Specify an analyzer for each `text` field, as outlined in [Specify the analyzer for a field](#specify-index-field-analyzer).
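For instance, a minimal sketch of that approach (the index name, field name, and analyzer choice are illustrative):

```console
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "whitespace"
      }
    }
  }
}
```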
diff --git a/manage-data/ingest.md b/manage-data/ingest.md index 46183476ce..ce1c609f25 100644 --- a/manage-data/ingest.md +++ b/manage-data/ingest.md @@ -51,7 +51,7 @@ In most cases, the *simplest option* for ingesting time series data is using {{a

* Install [Elastic Agent](/reference/fleet/index.md) on the computer(s) from which you want to collect data.
* Add the [Elastic integration](https://docs.elastic.co/en/integrations) for the data source to your deployment.

-Integrations are available for many popular platforms and services, and are a good place to start for ingesting data into Elastic solutions—​Observability, Security, and Search—​or your own search application.
+Integrations are available for many popular platforms and services, and are a good place to start for ingesting data into Elastic solutions—Observability, Security, and Search—or your own search application.

Check out the [Integration quick reference](https://docs.elastic.co/en/integrations/all_integrations) to search for available integrations. If you don’t find an integration for your data source or if you need additional processing to extend the integration, we still have you covered. Refer to [Transform and enrich data](ingest/transform-enrich.md) to learn more.

diff --git a/manage-data/ingest/ingest-reference-architectures/agent-to-es.md b/manage-data/ingest/ingest-reference-architectures/agent-to-es.md index 1af223e99c..71719ad36d 100644 --- a/manage-data/ingest/ingest-reference-architectures/agent-to-es.md +++ b/manage-data/ingest/ingest-reference-architectures/agent-to-es.md @@ -9,7 +9,7 @@ products: To ingest data into {{es}}, use the *simplest option that meets your needs* and satisfies your use case.

-Integrations offer advantages beyond easier data collection—​advantages such as dashboards, central agent management, and easy enablement of [Elastic solutions](https://www.elastic.co/products/), such as Security and Observability.
+Integrations offer advantages beyond easier data collection—advantages such as dashboards, central agent management, and easy enablement of [Elastic solutions](https://www.elastic.co/products/), such as Security and Observability.

:::{image} /manage-data/images/ingest-ea-es.png :alt: Image showing {{agent}} collecting data and sending to {{es}}

diff --git a/manage-data/ingest/ingesting-data-for-elastic-solutions.md b/manage-data/ingest/ingesting-data-for-elastic-solutions.md index b3c6091bef..200ff374d3 100644 --- a/manage-data/ingest/ingesting-data-for-elastic-solutions.md +++ b/manage-data/ingest/ingesting-data-for-elastic-solutions.md @@ -9,7 +9,7 @@ products: [] # Ingesting data for Elastic solutions [ingest-for-solutions]

-Elastic solutions—​Security, Observability, and Search—​are loaded with features and functionality to help you get value and insights from your data. [Elastic Agent](/reference/fleet/index.md) and [Elastic integrations](https://docs.elastic.co/en/integrations) can help, and are the best place to start.
+Elastic solutions—Security, Observability, and Search—are loaded with features and functionality to help you get value and insights from your data. [Elastic Agent](/reference/fleet/index.md) and [Elastic integrations](https://docs.elastic.co/en/integrations) can help, and are the best place to start.

When you use integrations with solutions, you have an integrated experience that offers easier implementation and decreases the time it takes to get insights and value from your data.
diff --git a/manage-data/lifecycle/data-tiers.md b/manage-data/lifecycle/data-tiers.md index de38b0d57a..d09420f9cf 100644 --- a/manage-data/lifecycle/data-tiers.md +++ b/manage-data/lifecycle/data-tiers.md @@ -52,7 +52,7 @@ Learn more about each data tier, including when and how it should be used. Data stored in the content tier is generally a collection of items such as a product catalog or article archive. Unlike time series data, the value of the content remains relatively constant over time, so it doesn’t make sense to move it to a tier with different performance characteristics as it ages. Content data typically has long data retention requirements, and you want to be able to retrieve items quickly regardless of how old they are.

-Content tier nodes are usually optimized for query performance—​they prioritize processing power over IO throughput so they can process complex searches and aggregations and return results quickly. While they are also responsible for indexing, content data is generally not ingested at as high a rate as time series data such as logs and metrics. From a resiliency perspective the indices in this tier should be configured to use one or more replicas.
+Content tier nodes are usually optimized for query performance—they prioritize processing power over IO throughput so they can process complex searches and aggregations and return results quickly. While they are also responsible for indexing, content data is generally not ingested at as high a rate as time series data such as logs and metrics. From a resiliency perspective the indices in this tier should be configured to use one or more replicas.

The content tier is required and is often deployed within the same node grouping as the hot tier. System indices and other indices that aren’t part of a data stream are automatically allocated to the content tier.

diff --git a/manage-data/lifecycle/rollup/getting-started-api.md b/manage-data/lifecycle/rollup/getting-started-api.md index bf73fc6fbf..937a478088 100644 --- a/manage-data/lifecycle/rollup/getting-started-api.md +++ b/manage-data/lifecycle/rollup/getting-started-api.md @@ -91,7 +91,7 @@ If you’ve worked with rollups before, you may be cautious around averages. If

For this reason, other systems tend to either omit the ability to average or store the average at multiple intervals to support more flexible querying.

-Instead, the {{rollup-features}} save the `count` and `sum` for the defined time interval. This allows us to reconstruct the average at any interval greater-than or equal to the defined interval. This gives maximum flexibility for minimal storage costs…​ and you don’t have to worry about average accuracies (no average of averages here!)
+Instead, the {{rollup-features}} save the `count` and `sum` for the defined time interval. This allows us to reconstruct the average at any interval greater-than or equal to the defined interval. This gives maximum flexibility for minimal storage costs… and you don’t have to worry about average accuracies (no average of averages here!)

::::

@@ -120,7 +120,7 @@ POST _rollup/job/sensor/_start ## Searching the rolled results [_searching_the_rolled_results]

-After the job has run and processed some data, we can use the [Rollup search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-rollup-search) endpoint to do some searching. The Rollup feature is designed so that you can use the same Query DSL syntax that you are accustomed to…​ it just happens to run on the rolled up data instead.
+After the job has run and processed some data, we can use the [Rollup search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-rollup-search) endpoint to do some searching. The Rollup feature is designed so that you can use the same Query DSL syntax that you are accustomed to… it just happens to run on the rolled up data instead.

For example, take this query:

diff --git a/manage-data/lifecycle/rollup/rollup-search-limitations.md b/manage-data/lifecycle/rollup/rollup-search-limitations.md index 2b1df06157..74ecb4e655 100644 --- a/manage-data/lifecycle/rollup/rollup-search-limitations.md +++ b/manage-data/lifecycle/rollup/rollup-search-limitations.md @@ -40,7 +40,7 @@ To help simplify the problem, we have limited search to just one rollup index at

A perhaps obvious limitation, but rollups can only aggregate on data that has been stored in the rollups. If you don’t configure the rollup job to store metrics about the `price` field, you won’t be able to use the `price` field in any query or aggregation.

-For example, the `temperature` field in the following query has been stored in a rollup job…​ but not with an `avg` metric. Which means the usage of `avg` here is not allowed:
+For example, the `temperature` field in the following query has been stored in a rollup job… but not with an `avg` metric, which means the usage of `avg` here is not allowed:

```console
GET sensor_rollup/_rollup_search

diff --git a/manage-data/lifecycle/rollup/understanding-groups.md b/manage-data/lifecycle/rollup/understanding-groups.md index 5639fcce81..d96e263dc0 100644 --- a/manage-data/lifecycle/rollup/understanding-groups.md +++ b/manage-data/lifecycle/rollup/understanding-groups.md @@ -19,7 +19,7 @@ Rollups will be removed in a future version. [Migrate](migrating-from-rollup-to-

To preserve flexibility, Rollup Jobs are defined based on how future queries may need to use the data. Traditionally, systems force the admin to make decisions about what metrics to rollup and on what interval. E.g. The average of `cpu_time` on an hourly basis. This is limiting; if, in the future, the admin wishes to see the average of `cpu_time` on an hourly basis *and* partitioned by `host_name`, they are out of luck.

-Of course, the admin can decide to rollup the `[hour, host]` tuple on an hourly basis, but as the number of grouping keys grows, so do the number of tuples the admin needs to configure. Furthermore, these `[hours, host]` tuples are only useful for hourly rollups…​ daily, weekly, or monthly rollups all require new configurations.
+Of course, the admin can decide to rollup the `[hour, host]` tuple on an hourly basis, but as the number of grouping keys grows, so do the number of tuples the admin needs to configure. Furthermore, these `[hours, host]` tuples are only useful for hourly rollups… daily, weekly, or monthly rollups all require new configurations.

Rather than force the admin to decide ahead of time which individual tuples should be rolled up, Elasticsearch’s Rollup jobs are configured based on which groups are potentially useful to future queries. For example, this configuration:

@@ -109,7 +109,7 @@ You’ll notice that the second aggregation is not only substantially larger, it }
```

-Ultimately, when configuring `groups` for a job, think in terms of how you might wish to partition data in a query at a future date…​ then include those in the config.
Because Rollup Search allows any order or combination of the grouped fields, you just need to decide if a field is useful for aggregating later, and how you might wish to use it (terms, histogram, etc).
+Ultimately, when configuring `groups` for a job, think in terms of how you might wish to partition data in a query at a future date… then include those in the config. Because Rollup Search allows any order or combination of the grouped fields, you just need to decide if a field is useful for aggregating later, and how you might wish to use it (terms, histogram, etc).

## Calendar vs fixed time intervals [rollup-understanding-group-intervals]

diff --git a/manage-data/migrate.md b/manage-data/migrate.md index 19ce26fb13..43ae099dc7 100644 --- a/manage-data/migrate.md +++ b/manage-data/migrate.md @@ -50,7 +50,7 @@ Before you migrate your {{es}} data, [define your index mappings](/manage-data/d ### Index from the source [ec-index-source]

-If you still have access to the original data source, outside of your old {{es}} cluster, you can load the data from there. This might be the simplest option, allowing you to choose the {{es}} version and take advantage of the latest features. You have the option to use any ingestion method that you want—​Logstash, Beats, the {{es}} clients, or whatever works best for you.
+If you still have access to the original data source, outside of your old {{es}} cluster, you can load the data from there. This might be the simplest option, allowing you to choose the {{es}} version and take advantage of the latest features. You have the option to use any ingestion method that you want—Logstash, Beats, the {{es}} clients, or whatever works best for you.

If the original source isn’t available or has other issues that make it non-viable, there are still two more migration options, getting the data from a remote cluster or restoring from a snapshot.

diff --git a/reference/data-analysis/kibana/tinymath-functions.md b/reference/data-analysis/kibana/tinymath-functions.md index aba0cbb219..f85adfc7c0 100644 --- a/reference/data-analysis/kibana/tinymath-functions.md +++ b/reference/data-analysis/kibana/tinymath-functions.md @@ -33,13 +33,13 @@ abs([-1 , -2, 3, -4]) // returns [1, 2, 3, 4] ```

-## add( …​args ) [_add_args]
+## add( …args ) [_add_args]

Calculates the sum of one or more numbers/arrays passed into the function. If at least one array of numbers is passed into the function, the function will calculate the sum by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The sum of all numbers in `args` if `args` contains only numbers. Returns an array of sums of the elements at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

@@ -92,13 +92,13 @@ ceil([1.1, 2.2, 3.3]) // returns [2, 3, 4] ```

-## clamp( …​a, min, max ) [_clamp_a_min_max]
+## clamp( …a, min, max ) [_clamp_a_min_max]

Restricts value to a given range and returns closed available value. If only `min` is provided, values are restricted to only a lower bound.

| Param | Type | Description |
| --- | --- | --- |
-| …​a | number | Array. | one or more numbers or arrays of numbers |
+| …a | number | Array. | one or more numbers or arrays of numbers |
| min | number | Array. | (optional) The minimum value this function will return. |
| max | number | Array.
| (optional) The maximum value this function will return. |

@@ -325,13 +325,13 @@ log([10, 100, 1000, 10000, 100000]) // returns [1, 2, 3, 4, 5] ```

-## max( …​args ) [_max_args]
+## max( …args ) [_max_args]

Finds the maximum value of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the maximum by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The maximum value of all numbers if `args` contains only numbers. Returns an array with the the maximum values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

max([1, 9], 4, [3, 5]) // returns [max([1, 4, 3]), max([9, 4, 5])] = [4, 9]
```

-## mean( …​args ) [_mean_args]
+## mean( …args ) [_mean_args]

Finds the mean value of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mean by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The mean value of all numbers if `args` contains only numbers. Returns an array with the the mean values of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

mean([1, 9], 5, [3, 4]) // returns [mean([1, 5, 3]), mean([9, 5, 4])] = [3, 6]
```

-## median( …​args ) [_median_args]
+## median( …args ) [_median_args]

Finds the median value(s) of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the median by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The median value of all numbers if `args` contains only numbers. Returns an array with the the median values of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

median([1, 9], 2, 4, [3, 5]) // returns [median([1, 2, 4, 3]), median([9, 2, 4,
```

-## min( …​args ) [_min_args]
+## min( …args ) [_min_args]

Finds the minimum value of one of more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the minimum by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| …args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The minimum value of all numbers if `args` contains only numbers. Returns an array with the the minimum values of each index, including all scalar numbers in `args` in the calculation at each index if `a` is an array.
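A quick sketch of the by-index behavior with illustrative values:

```
min([1, 9], 4, [3, 5]) // returns [min([1, 4, 3]), min([9, 4, 5])] = [1, 4]
```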
@@ -432,13 +432,13 @@ mod([14, 42, 65, 108], [5, 4, 14, 2]) // returns [4, 2, 9, 0]
```

-## mode( …​args ) [_mode_args]
+## mode( … args ) [_mode_args]

Finds the mode value(s) of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the mode by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| … args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.>`. An array of mode value(s) of all numbers if `args` contains only numbers. Returns an array of arrays with mode value(s) of each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

@@ -518,13 +518,13 @@ random(-10,10) // returns a random number between -10 (inclusive) and 10 (exclusive)
```

-## range( …​args ) [_range_args]
+## range( … args ) [_range_args]

Finds the range of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the range by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| … args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The range value of all numbers if `args` contains only numbers. Returns an array with the range values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

@@ -537,13 +537,13 @@ range([1, 9], 4, [3, 5]) // returns [range([1, 4, 3]), range([9, 4, 5])] = [3, 5]
```

-## range( …​args ) [_range_args_2]
+## range( … args ) [_range_args_2]

Finds the range of one or more numbers/arrays of numbers passed into the function. If at least one array of numbers is passed into the function, the function will find the range by index.

| Param | Type | Description |
| --- | --- | --- |
-| …​args | number | Array. | one or more numbers or arrays of numbers |
+| … args | number | Array. | one or more numbers or arrays of numbers |

**Returns**: `number` | `Array.`. The range value of all numbers if `args` contains only numbers. Returns an array with the range values at each index, including all scalar numbers in `args` in the calculation at each index if `args` contains at least one array.

@@ -661,7 +661,7 @@ subtract([14, 42, 65, 108], [2, 7, 5, 12]) // returns [12, 35, 60, 96]
```

-## sum( …​args ) [_sum_args]
+## sum( … args ) [_sum_args]

Calculates the sum of one or more numbers/arrays passed into the function. If at least one array is passed, the function will sum up one or more numbers/arrays of numbers and distinct values of an array. Sum accepts arrays of different lengths.

diff --git a/reference/fleet/add-fleet-server-mixed.md b/reference/fleet/add-fleet-server-mixed.md
index 0b48ab45e2..ae6b1728a4 100644
--- a/reference/fleet/add-fleet-server-mixed.md
+++ b/reference/fleet/add-fleet-server-mixed.md
@@ -36,7 +36,7 @@ To deploy a self-managed {{fleet-server}} on-premises to work with an {{ech}} de

* For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases).
* {{kib}} should be on the same minor version as {{es}}.
-* {{ece}} 2.9 or later—​allows you to use a hosted {{fleet-server}} on {{ecloud}}.
+* {{ece}} 2.9 or later— allows you to use a hosted {{fleet-server}} on {{ecloud}}. * Requires additional wildcard domains and certificates (which normally only cover `*.cname`, not `*.*.cname`). This enables us to provide the URL for {{fleet-server}} of `https://.fleet.`. * The deployment template must contain an {{integrations-server}} node. diff --git a/reference/fleet/agent-processors.md b/reference/fleet/agent-processors.md index 91397db830..c92db32f92 100644 --- a/reference/fleet/agent-processors.md +++ b/reference/fleet/agent-processors.md @@ -93,7 +93,7 @@ Processors have the following limitations. The {{stack}} provides several options for processing data collected by {{agent}}. The option you choose depends on what you need to do: -| If you need to…​ | Do this…​ | +| If you need to… | Do this… | | --- | --- | | Sanitize or enrich raw data at the source | Use an {{agent}} processor | | Convert data to ECS, normalize field data, or enrich incoming data | Use [ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md#pipelines-for-fleet-elastic-agent) | diff --git a/reference/fleet/data-streams-scenario3.md b/reference/fleet/data-streams-scenario3.md index 0b9cc38eb3..055f0d9894 100644 --- a/reference/fleet/data-streams-scenario3.md +++ b/reference/fleet/data-streams-scenario3.md @@ -10,7 +10,7 @@ products: # Scenario 3: Apply an ILM policy with integrations using multiple namespaces [data-streams-scenario3] -In this scenario, you have {{agent}}s collecting system metrics with the System integration in two environments—​one with the namespace `development`, and one with `production`. +In this scenario, you have {{agent}}s collecting system metrics with the System integration in two environments— one with the namespace `development`, and one with `production`. **Goal:** Customize the {{ilm-init}} policy for the `system.network` data stream in the `production` namespace. Specifically, apply the built-in `90-days-default` {{ilm-init}} policy so that data is deleted after 90 days. @@ -53,7 +53,7 @@ metrics-system.network-production@custom 1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Component Templates** 2. Click **Create component template**. -3. Use the template above to set the name—​in this case, `metrics-system.network-production@custom`. Click **Next**. +3. Use the template above to set the name— in this case, `metrics-system.network-production@custom`. Click **Next**. 4. Under **Index settings**, set the {{ilm-init}} policy name under the `lifecycle.name` key: ```json @@ -89,7 +89,7 @@ Note the following: * When duplicating the index template, do not change or remo 2. Find the index template you want to clone. The index template will have the `` and `` in its name, but not the ``. In this case, it’s `metrics-system.network`. 3. Select **Actions** > **Clone**. 4. Set the name of the new index template to `metrics-system.network-production`. -5. Change the index pattern to include a namespace—​in this case, `metrics-system.network-production*`. This ensures the previously created component template is only applied to the `production` namespace. +5. Change the index pattern to include a namespace— in this case, `metrics-system.network-production*`. This ensures the previously created component template is only applied to the `production` namespace. 6. Set the priority to `250`. This ensures that the new index template takes precedence over other index templates that match the index pattern. 7. 
Under **Component templates**, search for and add the component template created in the previous step. To ensure your namespace-specific settings are applied over other custom settings, the new template should be added below the existing `@custom` template.
8. Create the index template.

diff --git a/reference/fleet/data-streams.md b/reference/fleet/data-streams.md
index 413f56ce91..45a463f15c 100644
--- a/reference/fleet/data-streams.md
+++ b/reference/fleet/data-streams.md
@@ -136,7 +136,7 @@ Changes to component templates are not applied retroactively to existing indices

Use the [index lifecycle management](/manage-data/lifecycle/index-lifecycle-management.md) ({{ilm-init}}) feature in {{es}} to manage your {{agent}} data stream indices as they age. For example, create a new index after a certain period of time, or delete stale indices to enforce data retention standards.

-Installed integrations may have one or many associated data streams—​each with an associated {{ilm-init}} policy. By default, these data streams use an {{ilm-init}} policy that matches their data type. For example, the data stream `metrics-system.logs-*`, uses the metrics {{ilm-init}} policy as defined in the `metrics-system.logs` index template.
+Installed integrations may have one or many associated data streams— each with an associated {{ilm-init}} policy. By default, these data streams use an {{ilm-init}} policy that matches their data type. For example, the data stream `metrics-system.logs-*` uses the metrics {{ilm-init}} policy as defined in the `metrics-system.logs` index template.

Want to customize your index lifecycle management? See [Tutorials: Customize data retention policies](/reference/fleet/data-streams-ilm-tutorial.md).

diff --git a/reference/fleet/filter-agent-list-by-tags.md b/reference/fleet/filter-agent-list-by-tags.md
index 7a69bd2a45..42ca97a4d9 100644
--- a/reference/fleet/filter-agent-list-by-tags.md
+++ b/reference/fleet/filter-agent-list-by-tags.md
@@ -44,9 +44,9 @@ To manage tags in {{fleet}}:

3. In the tags menu, perform an action:

- | To…​ | Do this…​ |
+ | To… | Do this… |
 | --- | --- |
- | Create a new tag | Type the tag name and click **Create new tag…​**. Notice the tag name has a check mark to show that the tag has been added to the selected agents. |
+ | Create a new tag | Type the tag name and click **Create new tag…**. Notice the tag name has a check mark to show that the tag has been added to the selected agents. |
 | Rename a tag | Hover over the tag name and click the ellipsis button. Type a new name and press Enter. The tag will be renamed in all agents that use it, even agents that are not selected. |
 | Delete a tag | Hover over the tag name and click the ellipsis button. Click **Delete tag**. The tag will be deleted from all agents, even agents that are not selected. |
 | Add or remove a tag from an agent | Click the tag name to add or clear the check mark. In the **Tags** column, notice that the tags are added or removed. Note that the menu only shows tags that are common to all selected agents. |

diff --git a/reference/fleet/fleet-server.md b/reference/fleet/fleet-server.md
index 61bee34770..b7742f2f25 100644
--- a/reference/fleet/fleet-server.md
+++ b/reference/fleet/fleet-server.md
@@ -30,7 +30,7 @@ The following diagram shows how {{agent}}s communicate with {{fleet-server}} to

::::{admonition} **Does {{fleet-server}} run inside of {{agent}}?**
-{{fleet-server}} is a subprocess that runs inside a deployed {{agent}}. This means the deployment steps are similar to any {{agent}}, except that you enroll the agent in a special {{fleet-Server}} policy. Typically—​especially in large-scale deployments—​this agent is dedicated to running {{fleet-server}} as an {{agent}} communication host and is not configured for data collection.
+{{fleet-server}} is a subprocess that runs inside a deployed {{agent}}. This means the deployment steps are similar to any {{agent}}, except that you enroll the agent in a special {{fleet-server}} policy. Typically— especially in large-scale deployments— this agent is dedicated to running {{fleet-server}} as an {{agent}} communication host and is not configured for data collection.
::::

diff --git a/reference/fleet/hints-annotations-autodiscovery.md b/reference/fleet/hints-annotations-autodiscovery.md
index 2d8a659a34..826c944f47 100644
--- a/reference/fleet/hints-annotations-autodiscovery.md
+++ b/reference/fleet/hints-annotations-autodiscovery.md
@@ -394,7 +394,7 @@ When things do not work as expected, you may need to troubleshoot your setup. He

 tail -f /etc/elastic-agent/data/logs/elastic-agent-*.ndjson
 ```

- Verify that the hints feature is enabled in the config and look for hints-related logs like: "Generated hints mappings are …​" In these logs, you can find the mappings that are extracted out of the annotations and determine if the values can populate a specific input.
+ Verify that the hints feature is enabled in the config and look for hints-related logs like: "Generated hints mappings are …" In these logs, you can find the mappings that are extracted out of the annotations and determine if the values can populate a specific input.

3. View the {{metricbeat}} logs:

diff --git a/reference/fleet/migrate-auditbeat-to-agent.md b/reference/fleet/migrate-auditbeat-to-agent.md
index e9053f676e..548be901a0 100644
--- a/reference/fleet/migrate-auditbeat-to-agent.md
+++ b/reference/fleet/migrate-auditbeat-to-agent.md
@@ -22,12 +22,12 @@ The integrations that provide replacements for `auditd` and `file_integrity` mod

The following table describes the integrations you can use instead of {{auditbeat}} modules and datasets.

-| If you use…​ | You can use this instead…​ | Notes |
+| If you use… | You can use this instead… | Notes |
| --- | --- | --- |
| [Auditd](beats://reference/auditbeat/auditbeat-module-auditd.md) module | [Auditd Manager](integration-docs://reference/auditd_manager/index.md) integration | This integration is a direct replacement of the module. You can port rules and configuration to this integration. Starting in {{stack}} 8.4, you can also set the `immutable` flag in the audit configuration. |
| [Auditd Logs](integration-docs://reference/auditd/index.md) integration | Use this integration if you don’t need to manage rules. It only parses logs from the audit daemon `auditd`. Note that the events created by this integration are different than the ones created by [Auditd Manager](integration-docs://reference/auditd_manager/index.md), since the latter merges all related messages in a single event while [Auditd Logs](integration-docs://reference/auditd/index.md) creates one event per message. |
| [File Integrity](beats://reference/auditbeat/auditbeat-module-file_integrity.md) module | [File Integrity Monitoring](integration-docs://reference/fim/index.md) integration | This integration is a direct replacement of the module. It reports real-time events, but cannot report who made the changes. If you need to track this information, use [{{elastic-defend}}](/solutions/security/configure-elastic-defend/install-elastic-defend.md) instead. |
-| [System](beats://reference/auditbeat/auditbeat-module-system.md) module | It depends…​ | There is not a single integration that collects all this information. |
+| [System](beats://reference/auditbeat/auditbeat-module-system.md) module | It depends… | There is not a single integration that collects all this information. |
| [System.host](beats://reference/auditbeat/auditbeat-dataset-system-host.md) dataset | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:<br><br>* [system_info](https://www.osquery.io/schema/5.1.0/#system_info) for hostname, unique ID, and architecture<br>* [os_version](https://www.osquery.io/schema/5.1.0/#os_version)<br>* [interface_addresses](https://www.osquery.io/schema/5.1.0/#interface_addresses) for IPs and MACs<br> |
| [System.login](beats://reference/auditbeat/auditbeat-dataset-system-login.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Report login events. |
| | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Use the [last](https://www.osquery.io/schema/5.1.0/#last) table for Linux and macOS. |

diff --git a/reference/fleet/upgrade-elastic-agent.md b/reference/fleet/upgrade-elastic-agent.md
index 19687985c6..dbbeaa9662 100644
--- a/reference/fleet/upgrade-elastic-agent.md
+++ b/reference/fleet/upgrade-elastic-agent.md
@@ -209,7 +209,7 @@ When the upgrade process for multiple agents has been detected to have stalled,

1. On the **Agents** tab, select any set of the agents that are indicated to be stuck, and click **Actions**.
2. From the **Actions** menu, select **Restart upgrade agents**.
-3. In the **Restart upgrade…​** window, select an upgrade version.
+3. In the **Restart upgrade…** window, select an upgrade version.
4. Select the amount of time available for the maintenance window. The upgrades are spread out uniformly across this maintenance window to avoid exhausting network resources. To force selected agents to upgrade immediately when the upgrade is triggered, select **Immediately**. Avoid using this setting for batches of more than 10 agents.

diff --git a/reference/glossary/index.md b/reference/glossary/index.md
index 68dc4b5163..1b3f713ffb 100644
--- a/reference/glossary/index.md
+++ b/reference/glossary/index.md
@@ -450,7 +450,7 @@ $$$glossary-integration-policy$$$ integration policy
: An instance of an [integration](/reference/glossary/index.md#glossary-integration) that is configured for a specific use case, such as collecting logs from a specific file.

$$$glossary-integration$$$ integration
-: An easy way for external systems to connect to the {{stack}}. Whether it's collecting data or protecting systems from security threats, integrations provide out-of-the-box assets to make setup easy—​many with just a single click.
+: An easy way for external systems to connect to the {{stack}}. Whether it's collecting data or protecting systems from security threats, integrations provide out-of-the-box assets to make setup easy— many with just a single click.

## J [j-glos]

diff --git a/solutions/observability/apm/control-access-to-apm-data.md b/solutions/observability/apm/control-access-to-apm-data.md
index 338ac0b091..873287ea7f 100644
--- a/solutions/observability/apm/control-access-to-apm-data.md
+++ b/solutions/observability/apm/control-access-to-apm-data.md
@@ -10,7 +10,7 @@ products:

# Control access to APM data [apm-spaces]

-Starting in version 8.2.0, the Applications UI is [Kibana space](/deploy-manage/manage-spaces.md) aware. This allows you to separate your data—​and access to that data—​by team, use case, service environment, or any other filter that you choose.
+Starting in version 8.2.0, the Applications UI is [Kibana space](/deploy-manage/manage-spaces.md) aware. This allows you to separate your data— and access to that data— by team, use case, service environment, or any other filter that you choose.

To take advantage of this feature, your APM data needs to be written to different data streams. One way to accomplish this is with different namespaces. For example, you can send production data to an APM integration with a namespace of `production`, while sending staging data to a different APM integration with a namespace of `staging`.
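Once the data is split by namespace like this, one way to expose only one environment to a query is a filtered alias, as the next hunk discusses. A minimal sketch follows; the data stream name `traces-apm-production`, the alias name `traces-apm.production`, and the use of the `service.environment` field in the filter are illustrative assumptions, not prescribed values:

```console
// Illustrative sketch: expose only production APM traces through a filtered alias.
// The data stream and alias names below are assumptions for the example.
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "traces-apm-production",
        "alias": "traces-apm.production",
        "filter": {
          "term": {
            "service.environment": "production"
          }
        }
      }
    }
  ]
}
```

Repeating this pattern for traces, metrics, and logs in each environment yields the six aliases discussed next.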
@@ -41,7 +41,7 @@ The default index settings also query the `apm-*` data view. This data view matc

Instead of querying the default APM data views, we can create filtered aliases for the Applications UI to query. A filtered alias is a secondary name for a group of data streams that has a user-defined filter to limit the documents that the alias can access.

-To separate `staging` and `production` APM data, we’d need to create six filtered aliases—​three aliases for each service environment:
+To separate `staging` and `production` APM data, we’d need to create six filtered aliases— three aliases for each service environment:

| Index setting | `production` env | `staging` env |
| --- | --- | --- |

diff --git a/solutions/observability/apm/create-apm-rules-alerts.md b/solutions/observability/apm/create-apm-rules-alerts.md
index abe6ee642f..d9aa2ad5ee 100644
--- a/solutions/observability/apm/create-apm-rules-alerts.md
+++ b/solutions/observability/apm/create-apm-rules-alerts.md
@@ -25,7 +25,7 @@ The following APM rules are supported:

| **APM Anomaly** | Alert when either the latency, throughput, or failed transaction rate of a service is anomalous. Anomaly rules can be set at the environment level, service level, and/or transaction type level. Read more in [APM Anomaly rule →](/solutions/observability/incident-management/create-an-apm-anomaly-rule.md) |
| **Error count threshold** | Alert when the number of errors in a service exceeds a defined threshold. Error count rules can be set at the environment level, service level, and error group level. Read more in [Error count threshold rule →](/solutions/observability/incident-management/create-an-error-count-threshold-rule.md) |
| **Failed transaction rate threshold** | Alert when the rate of transaction errors in a service exceeds a defined threshold. Read more in [Failed transaction rate threshold rule →](/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md) |
-| **Latency threshold** | Alert when the latency or failed transaction rate is abnormal. Threshold rules can be as broad or as granular as you’d like, enabling you to define exactly when you want to be alerted—​whether that’s at the environment level, service name level, transaction type level, and/or transaction name level. Read more in [Latency threshold rule →](/solutions/observability/incident-management/create-latency-threshold-rule.md) |
+| **Latency threshold** | Alert when the latency or failed transaction rate is abnormal. Threshold rules can be as broad or as granular as you’d like, enabling you to define exactly when you want to be alerted— whether that’s at the environment level, service name level, transaction type level, and/or transaction name level. Read more in [Latency threshold rule →](/solutions/observability/incident-management/create-latency-threshold-rule.md) |

::::{tip}
For a complete walkthrough of the **Create rule** flyout panel, including detailed information on each configurable property, see Kibana’s [Create and manage rules](/explore-analyze/alerts-cases/alerts/create-manage-rules.md).

diff --git a/solutions/observability/apm/custom-filters.md b/solutions/observability/apm/custom-filters.md
index 85c507880b..f6a2ca407e 100644
--- a/solutions/observability/apm/custom-filters.md
+++ b/solutions/observability/apm/custom-filters.md
@@ -20,7 +20,7 @@ stack:
serverless: unavailable
```

-Ingest pipelines specify a series of processors that transform data in a specific way.
Transformation happens prior to indexing—​inflicting no performance overhead on the monitored application. Pipelines are a flexible and easy way to filter or obfuscate Elastic APM data. +Ingest pipelines specify a series of processors that transform data in a specific way. Transformation happens prior to indexing— inflicting no performance overhead on the monitored application. Pipelines are a flexible and easy way to filter or obfuscate Elastic APM data. Features of this approach: diff --git a/solutions/observability/apm/data-streams.md b/solutions/observability/apm/data-streams.md index 20f793d423..469fff4c9f 100644 --- a/solutions/observability/apm/data-streams.md +++ b/solutions/observability/apm/data-streams.md @@ -24,7 +24,7 @@ See the [{{fleet}} and {{agent}} Guide](/reference/fleet/data-streams.md) to lea ## Data stream naming scheme [apm-data-streams-naming-scheme] -APM data follows the `--` naming scheme. The `type` and `dataset` are predefined by the {{es}} apm-data plugin, but the `namespace` is your opportunity to customize how different types of data are stored in {{es}}. There is no recommendation for what to use as your namespace—​it is intentionally flexible. For example, you might create namespaces for each of your environments, like `dev`, `prod`, `production`, etc. Or, you might create namespaces that correspond to strategic business units within your organization. +APM data follows the `--` naming scheme. The `type` and `dataset` are predefined by the {{es}} apm-data plugin, but the `namespace` is your opportunity to customize how different types of data are stored in {{es}}. There is no recommendation for what to use as your namespace— it is intentionally flexible. For example, you might create namespaces for each of your environments, like `dev`, `prod`, `production`, etc. Or, you might create namespaces that correspond to strategic business units within your organization. ## APM data streams [apm-data-streams-list] @@ -46,7 +46,7 @@ Metrics * APM service summary metrics: `metrics-apm.service_summary.-` * Application metrics: `metrics-apm.app.-` - Application metrics include the instrumented service’s name—​defined in each {{apm-agent}}'s configuration—​in the data stream name. Service names therefore must follow certain index naming rules. + Application metrics include the instrumented service’s name— defined in each {{apm-agent}}'s configuration— in the data stream name. Service names therefore must follow certain index naming rules. ::::{dropdown} Service name rules * Service names are case-insensitive and must be unique. For example, you cannot have a service named `Foo` and another named `foo`. diff --git a/solutions/observability/apm/errors-ui.md b/solutions/observability/apm/errors-ui.md index 9a59f16974..50dba7978b 100644 --- a/solutions/observability/apm/errors-ui.md +++ b/solutions/observability/apm/errors-ui.md @@ -29,8 +29,8 @@ Selecting an error group ID or error message brings you to the **Error group**. :screenshot: ::: -The error group details page visualizes the number of error occurrences over time and compared to a recent time range. This allows you to quickly determine if the error rate is changing or remaining constant. You’ll also see the top 5 affected transactions—​enabling you to quickly narrow down which transactions are most impacted by the selected error. +The error group details page visualizes the number of error occurrences over time and compared to a recent time range. 
This allows you to quickly determine if the error rate is changing or remaining constant. You’ll also see the top 5 affected transactions— enabling you to quickly narrow down which transactions are most impacted by the selected error. -Further down, you’ll see an Error sample. The error shown is always the most recent to occur. The sample includes the exception message, culprit, stack trace where the error occurred, and additional contextual information to help debug the issue—​all of which can be copied with the click of a button. +Further down, you’ll see an Error sample. The error shown is always the most recent to occur. The sample includes the exception message, culprit, stack trace where the error occurred, and additional contextual information to help debug the issue— all of which can be copied with the click of a button. In some cases, you might also see a Transaction sample ID. This feature allows you to make a connection between the errors and transactions, by linking you to the specific transaction where the error occurred. This allows you to see the whole trace, including which services the request went through. \ No newline at end of file diff --git a/solutions/observability/apm/get-started-apm-server-binary.md b/solutions/observability/apm/get-started-apm-server-binary.md index 7244c7eeb6..f208d4aad3 100644 --- a/solutions/observability/apm/get-started-apm-server-binary.md +++ b/solutions/observability/apm/get-started-apm-server-binary.md @@ -88,7 +88,7 @@ See [Running on Docker](#apm-running-on-docker) for deploying Docker containers. ## Step 2: Set up and configure [apm-server-configuration] -Configure APM by editing the `apm-server.yml` configuration file. The location of this file varies by platform—​see the [Installation layout](/solutions/observability/apm/installation-layout.md) for help locating it. +Configure APM by editing the `apm-server.yml` configuration file. The location of this file varies by platform— see the [Installation layout](/solutions/observability/apm/installation-layout.md) for help locating it. A minimal configuration file might look like this: diff --git a/solutions/observability/apm/get-started-fleet-managed-apm-server.md b/solutions/observability/apm/get-started-fleet-managed-apm-server.md index 240bd5273d..a8fbc50947 100644 --- a/solutions/observability/apm/get-started-fleet-managed-apm-server.md +++ b/solutions/observability/apm/get-started-fleet-managed-apm-server.md @@ -122,13 +122,13 @@ If you don’t have a {{fleet}} setup already in place, the easiest way to get s :screenshot: ::: -4. On the **Add Elastic APM integration** page, define the host and port where APM Server will listen. Make a note of this value—​you’ll need it later. +4. On the **Add Elastic APM integration** page, define the host and port where APM Server will listen. Make a note of this value— you’ll need it later. ::::{tip} Using Docker or Kubernetes? Set the host to `0.0.0.0` to bind to all interfaces. :::: -5. Under **Agent authorization**, set a Secret token. This will be used to authorize requests from APM agents to the APM Server. Make a note of this value—​you’ll need it later. +5. Under **Agent authorization**, set a Secret token. This will be used to authorize requests from APM agents to the APM Server. Make a note of this value— you’ll need it later. 6. Click **Save and continue**. This step takes a minute or two to complete. When it’s done, you’ll have an agent policy that contains an APM integration policy for the configuration you just specified. 7. 
To view the new policy, click **Agent policy 1**. diff --git a/solutions/observability/apm/infrastructure.md b/solutions/observability/apm/infrastructure.md index ada7b9cbd7..c2a6206c80 100644 --- a/solutions/observability/apm/infrastructure.md +++ b/solutions/observability/apm/infrastructure.md @@ -23,7 +23,7 @@ The **Infrastructure** tab provides information about the containers, pods, and * **Pods**: Uses the `kubernetes.pod.name` from the [APM metrics data streams](/solutions/observability/apm/metrics.md). * **Containers**: Uses the `container.id` from the [APM metrics data streams](/solutions/observability/apm/metrics.md). -* **Hosts**: If the application is containerized—​if the APM metrics documents include `container.id`—the `host.name` is used from the infrastructure data streams (filtered by `container.id`). If not, `host.hostname` is used from the APM metrics data streams. +* **Hosts**: If the application is containerized— if the APM metrics documents include `container.id`—the `host.name` is used from the infrastructure data streams (filtered by `container.id`). If not, `host.hostname` is used from the APM metrics data streams. :::{image} /solutions/images/serverless-infra.png :alt: Example view of the Infrastructure tab in the Applications UI diff --git a/solutions/observability/apm/manage-storage.md b/solutions/observability/apm/manage-storage.md index 45086f7065..8f2bbe5e5f 100644 --- a/solutions/observability/apm/manage-storage.md +++ b/solutions/observability/apm/manage-storage.md @@ -14,5 +14,5 @@ products: The [storage and sizing guide](/solutions/observability/apm/storage-sizing-guide.md) attempts to define a "typical" storage reference for Elastic APM, and there are additional settings you can tweak to [reduce storage](/solutions/observability/apm/reduce-storage.md), or to [tune data ingestion in {{es}}](/solutions/observability/apm/tune-data-ingestion.md#apm-tune-elasticsearch). -In addition, the Applications UI makes it easy to visualize your APM data usage with [storage explorer](/solutions/observability/apm/storage-explorer.md). Storage explorer allows you to analyze the storage footprint of each of your services to see which are producing large amounts of data—​so you can better reduce the data you’re collecting or forecast and prepare for future storage needs. +In addition, the Applications UI makes it easy to visualize your APM data usage with [storage explorer](/solutions/observability/apm/storage-explorer.md). Storage explorer allows you to analyze the storage footprint of each of your services to see which are producing large amounts of data— so you can better reduce the data you’re collecting or forecast and prepare for future storage needs. diff --git a/solutions/observability/apm/mobile-service-overview.md b/solutions/observability/apm/mobile-service-overview.md index 48e450257e..b90e872284 100644 --- a/solutions/observability/apm/mobile-service-overview.md +++ b/solutions/observability/apm/mobile-service-overview.md @@ -10,7 +10,7 @@ products: # Mobile service overview [apm-mobile-service-overview] -Selecting a mobile service brings you to the **Mobile service overview**. The **Mobile service overview** contains a wide variety of charts and tables that provide high-level visibility into how a mobile service is performing for your users—​enabling you to make data-driven decisions about how to improve your user experience. +Selecting a mobile service brings you to the **Mobile service overview**. 
The **Mobile service overview** contains a wide variety of charts and tables that provide high-level visibility into how a mobile service is performing for your users— enabling you to make data-driven decisions about how to improve your user experience.

For example, see:

@@ -27,7 +27,7 @@ All of these metrics & insights can help SREs and developers better understand t

## Quick stats [mobile-service-stats]

-Understand the impact of slow application load times and variations in application crash rate on user traffic (coming soon). Visualize session and HTTP trends, and see where your users are located—​enabling you to optimize your infrastructure deployment and routing topology.
+Understand the impact of slow application load times and variations in application crash rate on user traffic (coming soon). Visualize session and HTTP trends, and see where your users are located— enabling you to optimize your infrastructure deployment and routing topology.

Note: due to the way crash rate is calculated (crashes per session), it is possible to have a rate greater than 100%, due to the fact that a session may contain multiple crashes.

diff --git a/solutions/observability/apm/observe-lambda-functions.md b/solutions/observability/apm/observe-lambda-functions.md
index abaf01bd36..27088eaaa2 100644
--- a/solutions/observability/apm/observe-lambda-functions.md
+++ b/solutions/observability/apm/observe-lambda-functions.md
@@ -36,7 +36,7 @@ Cold start is also displayed in the trace waterfall, where you can drill-down in

### Latency distribution correlation [apm-lambda-cold-start-latency]

-The [latency correlations](/solutions/observability/apm/find-transaction-latency-failure-correlations.md) feature can be used to visualize the impact of Lambda cold starts on latency—​just select the `faas.coldstart` field.
+The [latency correlations](/solutions/observability/apm/find-transaction-latency-failure-correlations.md) feature can be used to visualize the impact of Lambda cold starts on latency— just select the `faas.coldstart` field.

## AWS Lambda function grouping [apm-lambda-service-config]

diff --git a/solutions/observability/apm/service-map.md b/solutions/observability/apm/service-map.md
index 1fd3a7302b..5f9312e17a 100644
--- a/solutions/observability/apm/service-map.md
+++ b/solutions/observability/apm/service-map.md
@@ -13,7 +13,7 @@ products:

# Service Map [apm-service-maps]

-A service map is a real-time visual representation of the instrumented services in your application’s architecture. It shows you how these services are connected, along with high-level metrics like average transaction duration, requests per minute, and errors per minute. If enabled, service maps also integrate with machine learning—​for real time health indicators based on anomaly detection scores. All of these features can help you quickly and visually assess your services' status and health.
+A service map is a real-time visual representation of the instrumented services in your application’s architecture. It shows you how these services are connected, along with high-level metrics like average transaction duration, requests per minute, and errors per minute. If enabled, service maps also integrate with machine learning— for real-time health indicators based on anomaly detection scores. All of these features can help you quickly and visually assess your services' status and health.
We currently surface two types of service maps: diff --git a/solutions/observability/apm/storage-explorer.md b/solutions/observability/apm/storage-explorer.md index f653024e56..022bb9d7cb 100644 --- a/solutions/observability/apm/storage-explorer.md +++ b/solutions/observability/apm/storage-explorer.md @@ -10,7 +10,7 @@ products: # Storage Explorer [apm-storage-explorer] -Analyze your APM data and manage costs with **storage explorer**. For example, analyze the storage footprint of each of your services to see which are producing large amounts of data—​then change the sample rate of a service to lower the amount of data ingested. Or, expand the time filter to visualize data trends over time so that you can better forecast and prepare for future storage needs. +Analyze your APM data and manage costs with **storage explorer**. For example, analyze the storage footprint of each of your services to see which are producing large amounts of data— then change the sample rate of a service to lower the amount of data ingested. Or, expand the time filter to visualize data trends over time so that you can better forecast and prepare for future storage needs. :::{image} /solutions/images/observability-storage-explorer-overview.png :alt: APM Storage Explorer diff --git a/solutions/observability/apm/use-opentelemetry-with-apm.md b/solutions/observability/apm/use-opentelemetry-with-apm.md index 9b52b53bc6..2d60cec31a 100644 --- a/solutions/observability/apm/use-opentelemetry-with-apm.md +++ b/solutions/observability/apm/use-opentelemetry-with-apm.md @@ -61,7 +61,7 @@ Use the OpenTelemetry API/SDKs with [Elastic APM agents](/solutions/observabilit :screenshot: ::: -This allows you to reuse your existing OpenTelemetry instrumentation to create Elastic APM transactions and spans — ​avoiding vendor lock-in and having to redo manual instrumentation. +This allows you to reuse your existing OpenTelemetry instrumentation to create Elastic APM transactions and spans — avoiding vendor lock-in and having to redo manual instrumentation. However, not all features of the OpenTelemetry API are supported when using this approach, and not all Elastic APM agents support this approach. diff --git a/solutions/observability/apm/view-elasticsearch-index-template.md b/solutions/observability/apm/view-elasticsearch-index-template.md index 7d3a579d2b..14c46dd045 100644 --- a/solutions/observability/apm/view-elasticsearch-index-template.md +++ b/solutions/observability/apm/view-elasticsearch-index-template.md @@ -10,7 +10,7 @@ products: # View the Elasticsearch index template [apm-custom-index-template] -Index templates are used to configure the backing indices of data streams as they are created. These index templates are composed of multiple component templates—​reusable building blocks that configure index mappings, settings, and aliases. +Index templates are used to configure the backing indices of data streams as they are created. These index templates are composed of multiple component templates— reusable building blocks that configure index mappings, settings, and aliases. The default APM index templates can be viewed in {{kib}}. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Select **Index Templates** and search for `apm`. Select any of the APM index templates to view their relevant component templates. 
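If you prefer the API over the {{kib}} UI, the same information is available from the index template endpoint. A minimal sketch, assuming the default `traces-apm` template name installed by the apm-data plugin:

```console
// Minimal sketch: fetch an APM index template to see which component templates
// it is composed of. The template name traces-apm is an assumed default.
GET _index_template/traces-apm
```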
diff --git a/solutions/observability/cicd.md b/solutions/observability/cicd.md
index 9eb3fb923e..09fbcc7d83 100644
--- a/solutions/observability/cicd.md
+++ b/solutions/observability/cicd.md
@@ -345,7 +345,7 @@ export OTEL_TRACES_EXPORTER="otlp"
mvn verify
```

-You can instrument Maven builds without modifying the pom.xml file using the Maven command line argument “-Dmaven.ext.class.path=…​”
+You can instrument Maven builds without modifying the pom.xml file using the Maven command line argument “-Dmaven.ext.class.path=…”

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://elastic-apm-server.example.com:8200"

diff --git a/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md b/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md
index b9e176c0e4..ab591ae7d4 100644
--- a/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md
+++ b/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md
@@ -52,7 +52,7 @@ Expand the **quick guide** to learn how, or skip to the next section if your dat

1. In the popup, click **Add {{agent}} to your hosts** to open the **Add agent** flyout. If you accidentally close the popup or the flyout doesn’t open, go to **{{fleet}} → Agents**, then click **Add agent** to access the flyout.
2. Follow the steps in the **Add agent** flyout to download, install, and enroll the {{agent}}.

-9. When incoming data is confirmed—​after a minute or two—​click **View assets** to access the dashboards.
+9. When incoming data is confirmed— after a minute or two— click **View assets** to access the dashboards.

For more information about {{agent}} and integrations, refer to the [{{fleet}} and {{agent}} documentation](/reference/fleet/index.md).

diff --git a/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md b/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md
index 83b5341dd0..0e1970c696 100644
--- a/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md
+++ b/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md
@@ -54,7 +54,7 @@ Expand the **quick guide** to learn how, or skip to the next section if your dat

1. In the popup, click **Add {{agent}} to your hosts** to open the **Add agent** flyout. If you accidentally close the popup or the flyout doesn’t open, go to **{{fleet}} → Agents**, then click **Add agent** to access the flyout.
2. Follow the steps in the **Add agent** flyout to download, install, and enroll the {{agent}}.

-9. When incoming data is confirmed—​after a minute or two—​click **View assets** to access the dashboards.
+9. When incoming data is confirmed— after a minute or two— click **View assets** to access the dashboards.

For more information about {{agent}} and integrations, refer to the [{{fleet}} and {{agent}} documentation](/reference/fleet/index.md).

diff --git a/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md b/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md
index e7558618d9..fd4862d282 100644
--- a/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md
+++ b/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md
@@ -50,7 +50,7 @@ Expand the **quick guide** to learn how, or skip to the next section if your dat

1. In the popup, click **Add {{agent}} to your hosts** to open the **Add agent** flyout. If you accidentally close the popup or the flyout doesn’t open, go to **{{fleet}} → Agents**, then click **Add agent** to access the flyout.
2. Follow the steps in the **Add agent** flyout to download, install, and enroll the {{agent}}.

-9. When incoming data is confirmed—​after a minute or two—​click **View assets** to access the dashboards.
+9. When incoming data is confirmed— after a minute or two— click **View assets** to access the dashboards.

For more information about {{agent}} and integrations, refer to the [{{fleet}} and {{agent}} documentation](/reference/fleet/index.md).

diff --git a/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md b/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md
index f763f21151..806828aae3 100644
--- a/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md
+++ b/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md
@@ -53,7 +53,7 @@ Expand the **quick guide** to learn how, or skip to the next section if your dat

1. In the popup, click **Add {{agent}} to your hosts** to open the **Add agent** flyout. If you accidentally close the popup or the flyout doesn’t open, go to **{{fleet}} → Agents**, then click **Add agent** to access the flyout.
2. Follow the steps in the **Add agent** flyout to download, install, and enroll the {{agent}}.

-9. When incoming data is confirmed—​after a minute or two—​click **View assets** to access the dashboards.
+9. When incoming data is confirmed— after a minute or two— click **View assets** to access the dashboards.

For more information about {{agent}} and integrations, refer to the [{{fleet}} and {{agent}} documentation](/reference/fleet/index.md).

diff --git a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md
index 2491266236..ac2345fa4e 100644
--- a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md
+++ b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md
@@ -43,7 +43,7 @@ To create an anomaly detection rule:

6. For the result type:

- | Choose…​ | To generate an alert based on…​ |
+ | Choose… | To generate an alert based on… |
 | --- | --- |
 | **Bucket** | How unusual the anomaly was within the bucket of time |
 | **Record** | What individual anomalies are present in a time range |

diff --git a/solutions/observability/incident-management/create-manage-rules.md b/solutions/observability/incident-management/create-manage-rules.md
index 123e58d461..c44fbd1414 100644
--- a/solutions/observability/incident-management/create-manage-rules.md
+++ b/solutions/observability/incident-management/create-manage-rules.md
@@ -25,7 +25,7 @@ Learn more about Observability rules and how to create them:

% Serverless rules below, need to make sure we aren't missing some from stateful. Stateful page seems out of date.

-| Rule type | Name | Detects when…​ |
+| Rule type | Name | Detects when… |
| --- | --- | --- |
| AIOps | [Anomaly detection](/solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | Anomalies match specific conditions. |
| APM | [APM anomaly](/solutions/observability/incident-management/create-an-apm-anomaly-rule.md) | The latency, throughput, or failed transaction rate of a service is abnormal. |

diff --git a/solutions/search/site-or-app/search-ui.md b/solutions/search/site-or-app/search-ui.md
index 00836a98a4..1372fdeb3a 100644
--- a/solutions/search/site-or-app/search-ui.md
+++ b/solutions/search/site-or-app/search-ui.md
@@ -125,7 +125,7 @@ The Enterprise Search team at Elastic maintains this library and are happy to he

## Contribute 🚀 [overview-contribute]

-We welcome contributors to the project. Before you begin, a couple notes…​
+We welcome contributors to the project. Before you begin, a couple notes…

* Read the [Search UI Contributor’s Guide](https://github.com/elastic/search-ui/blob/main/CONTRIBUTING.md).
* Prior to opening a pull request:

diff --git a/solutions/security/ai/ai-assistant.md b/solutions/security/ai/ai-assistant.md
index 36c840ab82..8698ca4760 100644
--- a/solutions/security/ai/ai-assistant.md
+++ b/solutions/security/ai/ai-assistant.md
@@ -87,7 +87,7 @@ Each user’s chat history (up to the 99 most recent conversations) and custom Q

Use these features to adjust and act on your conversations with AI Assistant:

-* (Optional) Select a *System Prompt* at the beginning of a conversation by using the **Select Prompt** menu. System Prompts provide context to the model, informing its response. To create a System Prompt, open the System Prompts dropdown menu and click **+ Add new System Prompt…​**.
+* (Optional) Select a *System Prompt* at the beginning of a conversation by using the **Select Prompt** menu. System Prompts provide context to the model, informing its response. To create a System Prompt, open the System Prompts dropdown menu and click **+ Add new System Prompt…**.
* (Optional) Select a *Quick Prompt* at the bottom of the chat window to get help writing a prompt for a specific purpose, such as summarizing an alert or converting a query from a legacy SIEM to {{elastic-sec}}.

:::{image} /solutions/images/security-quick-prompts.png

diff --git a/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md b/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md
index c729bc3a9e..5de1c3ea3d 100644
--- a/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md
+++ b/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md
@@ -48,8 +48,8 @@ To configure an integration policy:

4. Click the **Trusted applications**, **Event filters**, **Host isolation exceptions**, and **Blocklist** tabs to review the endpoint policy artifacts assigned to this integration policy (for more information, refer to [Trusted applications](/solutions/security/manage-elastic-defend/trusted-applications.md), [Event filters](/solutions/security/manage-elastic-defend/event-filters.md), [Host isolation exceptions](/solutions/security/manage-elastic-defend/host-isolation-exceptions.md), and [Blocklist](/solutions/security/manage-elastic-defend/blocklist.md)). On these tabs, you can:

 * Expand and view an artifact: Click the arrow next to its name.
- * View an artifact’s details: Click the actions menu (**…​**), then select **View full details**.
- * Unassign an artifact: Click the actions menu (**…​**), then select **Remove from policy**. This does not delete the artifact; this just unassigns it from the current policy.
+ * View an artifact’s details: Click the actions menu (**…**), then select **View full details**.
+ * Unassign an artifact: Click the actions menu (**…**), then select **Remove from policy**. This does not delete the artifact; it just unassigns it from the current policy.
 * Assign an existing artifact: Click **Assign *x* to policy**, then select an item from the flyout. This view lists any existing artifacts that aren’t already assigned to the current policy.

::::{note}

diff --git a/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md b/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md
index 55ab17baf4..96dc16b943 100644
--- a/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md
+++ b/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md
@@ -34,7 +34,7 @@ To add alerts to a new case:

1. Do one of the following:

- * To add a single alert to a case, select the **More actions** menu (**…​**) in the Alerts table or **Take action** in the alert details flyout, then select **Add to a new case**.
+ * To add a single alert to a case, select the **More actions** menu (**…**) in the Alerts table or **Take action** in the alert details flyout, then select **Add to a new case**.
 * To add multiple alerts, select the alerts, then select **Add to a new case** from the **Bulk actions** menu.

2. Give the case a name, assign a severity level, and provide a description. You can use [Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) syntax in the case description.

@@ -60,7 +60,7 @@ To add alerts to an existing case:

1. Do one of the following:

- * To add a single alert to a case, select the **More actions** menu (**…​**) in the Alerts table or **Take action** in the alert details flyout, then select **Add to existing case**.
+ * To add a single alert to a case, select the **More actions** menu (**…**) in the Alerts table or **Take action** in the alert details flyout, then select **Add to existing case**.
 * To add multiple alerts, select the alerts, then select **Add to an existing case** from the **Bulk actions** menu.

2. From the **Select case** dialog box, select the case to which you want to attach the alert. A confirmation message is displayed with an option to view the updated case. Click the link in the notification or go to the Cases page to view the case’s details.

diff --git a/solutions/security/detect-and-alert/add-manage-exceptions.md b/solutions/security/detect-and-alert/add-manage-exceptions.md
index 9bc11a0be4..7a0e3b387b 100644
--- a/solutions/security/detect-and-alert/add-manage-exceptions.md
+++ b/solutions/security/detect-and-alert/add-manage-exceptions.md
@@ -53,7 +53,7 @@ You can add exceptions to a rule from the rule details page, the Alerts table, t

 * To add an exception from the Alerts table:

 1. Find **Alerts** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
- 2. Scroll down to the Alerts table, go to the alert you want to create an exception for, click the **More Actions** menu (**…​**), then select **Add rule exception**.
+ 2. Scroll down to the Alerts table, go to the alert you want to create an exception for, click the **More Actions** menu (**…**), then select **Add rule exception**.

 * To add an exception from the alert details flyout:

@@ -179,7 +179,7 @@ Additionally, to add an Endpoint exception to an endpoint protection rule, there

 * To add an Endpoint exception from the Alerts table:

 1. Find **Alerts** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
- 2. Scroll down to the Alerts table, and from an {{elastic-endpoint}} alert, click the **More actions** menu (**…​**), then select **Add Endpoint exception**.
+ 2. Scroll down to the Alerts table, and from an {{elastic-endpoint}} alert, click the **More actions** menu (**…**), then select **Add Endpoint exception**.

 * To add an Endpoint exception from the Shared Exception Lists page:

diff --git a/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md b/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md
index 6661148e3a..4aca196091 100644
--- a/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md
+++ b/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md
@@ -97,7 +97,7 @@ Apply shared exception lists to rules:

2. Do one of the following:

 * Select a shared exception list’s name to open its details page, then click **Link rules**.
- * Find the shared exception list you want to assign to rules, then from the **More actions** menu (**…​**), select **Link rules**.
+ * Find the shared exception list you want to assign to rules, then from the **More actions** menu (**…**), select **Link rules**.

3. Click the toggles in the **Link** column to select the rules you want to link to the exception list.

diff --git a/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md b/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md
index feb0f1deb9..e0e54acc58 100644
--- a/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md
+++ b/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md
@@ -91,7 +91,7 @@ To update a rule’s API key, log into the local cluster as a user with the priv

 1. Find **Stack Management** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then go to **Rules**.
 2. Use the search box and filters to find the rules you want to update. For example, use the **Type** filter to find rules under the **Security** category.
- 3. Select the rule’s actions menu (**…​**), then **Update API key**.
+ 3. Select the rule’s actions menu (**…**), then **Update API key**.

::::{tip}
To update multiple rules, select their checkboxes, then click **Selected *x* rules** → **Update API keys**.

diff --git a/solutions/security/detect-and-alert/manage-detection-alerts.md b/solutions/security/detect-and-alert/manage-detection-alerts.md
index 5f4f0730ff..b1fd1f112d 100644
--- a/solutions/security/detect-and-alert/manage-detection-alerts.md
+++ b/solutions/security/detect-and-alert/manage-detection-alerts.md
@@ -176,7 +176,7 @@ You can set an alert’s status to indicate whether it needs to be investigated

To change an alert’s status, do one of the following:

-* In the Alerts table, click **More actions** (**…​**) in the alert’s row, then select a status.
+* In the Alerts table, click **More actions** (**…**) in the alert’s row, then select a status.
* In the Alerts table, select the alerts you want to change, click **Selected *x* alerts** at the upper-left above the table, and then select a status.

:::{image} /solutions/images/security-alert-change-status.png

@@ -208,7 +208,7 @@ To display alert tags in the Alerts table, click **Fields** and add the `kibana.
To apply or remove alert tags on individual alerts, do one of the following: -* In the Alerts table, click **More actions** (**…​**) in an alert’s row, then click **Apply alert tags**. Select or unselect tags, then click **Apply tags**. +* In the Alerts table, click **More actions** (**… **) in an alert’s row, then click **Apply alert tags**. Select or unselect tags, then click **Apply tags**. * In an alert’s details flyout, click **Take action → Apply alert tags**. Select or unselect tags, then click **Apply tags**. To apply or remove alert tags on multiple alerts, select the alerts you want to change, then click **Selected *x* alerts** at the upper-left above the table. Click **Apply alert tags**, select or unselect tags, then click **Apply tags**. @@ -230,8 +230,8 @@ Users are not notified when they’ve been assigned to, or unassigned from, aler | Action | Instructions | | --- | --- | -| Assign users to an alert | Choose one of the following:

- **Alerts table**: Click **More actions** (**…​**) in an alert’s row, then click **Assign alert**. Select users, then click **Apply**.
- **Alert details flyout**: Click **Take action → Assign alert**. Alternatively, click the **Assign alert** icon at the top of the alert details flyout, select users, then click **Apply**.
| -| Unassign all users from an alert | Choose one of the following:

- **Alerts table**: Click **More actions** (**…​**) in an alert’s row, then click **Unassign alert**.
- **Alert details flyout**: Click **Take action → Unassign alert**.
| +| Assign users to an alert | Choose one of the following:

- **Alerts table**: Click **More actions** (**…**) in an alert’s row, then click **Assign alert**. Select users, then click **Apply**.
- **Alert details flyout**: Click **Take action → Assign alert**. Alternatively, click the **Assign alert** icon at the top of the alert details flyout, select users, then click **Apply**.
| +| Unassign all users from an alert | Choose one of the following:

- **Alerts table**: Click **More actions** (**…**) in an alert’s row, then click **Unassign alert**.
- **Alert details flyout**: Click **Take action → Unassign alert**.
| | Assign users to multiple alerts | From the Alerts table, select the alerts you want to change. Click **Selected *x* alerts** at the upper-left above the table, then click **Assign alert**. Select users, then click **Apply**.

**Note**: Users assigned to some of the selected alerts will be displayed as unassigned in the selection list. Selecting said users will assign them to all alerts they haven’t been assigned to yet.

| | Unassign users from multiple alerts | From the Alerts table, select the alerts you want to change and click **Selected *x* alerts** at the upper-left above the table. Click **Unassign alert** to remove users from the alert. |
@@ -264,7 +264,7 @@ Click the **Assignees** filter above the Alerts table, then select the users you
You can add exceptions to the rule that generated an alert directly from the Alerts table. Exceptions prevent a rule from generating alerts even when its criteria are met.
-To add an exception, click the **More actions** menu (**…​**) in the Alerts table, then select **Add exception**. Alternatively, select **Take action** → **Add rule exception** in the alert details flyout.
+To add an exception, click the **More actions** menu (**…**) in the Alerts table, then select **Add exception**. Alternatively, select **Take action** → **Add rule exception** in the alert details flyout.
For information about exceptions and how to use them, refer to [Add and manage exceptions](/solutions/security/detect-and-alert/add-manage-exceptions.md).
diff --git a/solutions/security/detect-and-alert/manage-detection-rules.md b/solutions/security/detect-and-alert/manage-detection-rules.md
index 1cd33c2421..8b4cb094b2 100644
--- a/solutions/security/detect-and-alert/manage-detection-rules.md
+++ b/solutions/security/detect-and-alert/manage-detection-rules.md
@@ -77,7 +77,7 @@ For {{ml}} rules, an indicator icon (![Error icon from rules table](/solutions/i
1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Do one of the following:
- * Edit a single rule: Select the **All actions** menu (**…​**) on a rule, then select **Edit rule settings**. Alternatively, open the rule’s details page and click **Edit rule settings**. The **Edit rule settings** view opens, where you can modify the [rule’s settings](/solutions/security/detect-and-alert/create-detection-rule.md).
+ * Edit a single rule: Select the **All actions** menu (**…**) on a rule, then select **Edit rule settings**. Alternatively, open the rule’s details page and click **Edit rule settings**. The **Edit rule settings** view opens, where you can modify the [rule’s settings](/solutions/security/detect-and-alert/create-detection-rule.md).
* Bulk edit multiple rules: Select the rules you want to edit, then select an action from the **Bulk actions** menu:
::::{note}
@@ -124,7 +124,7 @@ When duplicating a rule with exceptions, you can choose to duplicate the rule an
1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. In the Rules table, do one of the following:
- * Select the **All actions** menu (**…​**) on a rule, then select an action.
+ * Select the **All actions** menu (**…**) on a rule, then select an action.
* Select all the rules you want to modify, then select an action from the **Bulk actions** menu.
* To enable or disable a single rule, switch on the rule’s **Enabled** toggle.
* To [snooze](/solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) actions for rules, click the bell icon.
@@ -143,7 +143,7 @@ Before manually running rules, make sure you properly understand and plan for ru
1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. In the **Rules** table, do one of the following:
- * Select the **All actions** menu (**…​**) on a rule, then select **Manual run**.
+ * Select the **All actions** menu (**…**) on a rule, then select **Manual run**.
* Select all the rules you want to manually run, select the **Bulk actions** menu, then select **Manual run**.
3. Specify when the manual run starts and ends. The default selection is the current day starting three hours in the past. The rule will search for events during the selected time range.
diff --git a/solutions/security/endpoint-response-actions.md b/solutions/security/endpoint-response-actions.md
index 9dacea383a..867234a26c 100644
--- a/solutions/security/endpoint-response-actions.md
+++ b/solutions/security/endpoint-response-actions.md
@@ -37,7 +37,7 @@ Response actions are supported on all endpoint platforms (Linux, macOS, and Wind
Launch the response console from any of the following places in {{elastic-sec}}:
-* **Endpoints** page → **Actions** menu (**…​**) → **Respond**
+* **Endpoints** page → **Actions** menu (**…**) → **Respond**
* Endpoint details flyout → **Take action** → **Respond**
* Alert details flyout → **Take action** → **Respond**
* Host details page → **Respond**
diff --git a/solutions/security/endpoint-response-actions/isolate-host.md b/solutions/security/endpoint-response-actions/isolate-host.md
index c9db994006..f1f8762df8 100644
--- a/solutions/security/endpoint-response-actions/isolate-host.md
+++ b/solutions/security/endpoint-response-actions/isolate-host.md
@@ -65,7 +65,7 @@ All actions executed on a host are tracked in the host’s response actions hist
1. Find **Endpoints** in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then either:
* Select the appropriate endpoint in the **Endpoint** column, and click **Take action → Isolate host** in the endpoint details flyout.
- * Click the **Actions** menu (**…​**) on the appropriate endpoint, then select **Isolate host**.
+ * Click the **Actions** menu (**…**) on the appropriate endpoint, then select **Isolate host**.
2. Enter a comment describing why you’re isolating the host (optional).
3. Click **Confirm**.
@@ -136,7 +136,7 @@ After the host is successfully isolated, an **Isolated** status is added to the
1. Find **Endpoints** in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then either:
* Select the appropriate endpoint in the **Endpoint** column, and click **Take action → Release host** in the endpoint details flyout.
- * Click the **Actions** menu (**…​**) on the appropriate endpoint, then select **Release host**.
+ * Click the **Actions** menu (**…**) on the appropriate endpoint, then select **Release host**.
2. Enter a comment describing why you’re releasing the host (optional).
3. Click **Confirm**.
diff --git a/solutions/security/explore/hosts-page.md b/solutions/security/explore/hosts-page.md
index 8d9c611e53..417eb9a60c 100644
--- a/solutions/security/explore/hosts-page.md
+++ b/solutions/security/explore/hosts-page.md
@@ -28,7 +28,7 @@ The Hosts page has the following sections:
KPI charts show metrics for hosts and unique IPs within the time range specified in the date picker. This data is visualized using linear or bar graphs.
::::{tip}
-Hover inside a KPI chart to display the actions menu (**…​**), where you can perform these actions: inspect, open in Lens, and add to a new or existing case.
+Hover inside a KPI chart to display the actions menu (**…**), where you can perform these actions: inspect, open in Lens, and add to a new or existing case.
::::
diff --git a/solutions/security/explore/network-page.md b/solutions/security/explore/network-page.md
index 7e3e9120df..27733bd145 100644
--- a/solutions/security/explore/network-page.md
+++ b/solutions/security/explore/network-page.md
@@ -40,7 +40,7 @@ There are several ways to drill down:
You can start an investigation using the map, and the map refreshes to show related data when you run a query or update the time range.
::::{tip}
-To add and remove layers, click on the **Options** menu (**…​**) in the top right corner of the map.
+To add and remove layers, click the **Options** menu (**…**) in the top right corner of the map.
::::
diff --git a/solutions/security/explore/users-page.md b/solutions/security/explore/users-page.md
index 2025c82a7c..939b539675 100644
--- a/solutions/security/explore/users-page.md
+++ b/solutions/security/explore/users-page.md
@@ -28,7 +28,7 @@ The Users page has the following sections:
KPI charts show the total number of users and successful and failed user authentications within the time range specified in the date picker. Data in the KPI charts is visualized through linear and bar graphs.
::::{tip}
-Hover inside a KPI chart to display the actions menu (**…​**), where you can perform these actions: inspect, open in Lens, and add to a new or existing case.
+Hover inside a KPI chart to display the actions menu (**…**), where you can perform these actions: inspect, open in Lens, and add to a new or existing case.
::::
diff --git a/solutions/security/investigate/indicators-of-compromise.md b/solutions/security/investigate/indicators-of-compromise.md
index 3c20bad4cc..3a86efd93f 100644
--- a/solutions/security/investigate/indicators-of-compromise.md
+++ b/solutions/security/investigate/indicators-of-compromise.md
@@ -121,7 +121,7 @@ Attaching indicators to cases provides more context and available actions for yo
To add indicators to cases:
-1. From the Indicators table, click the **More actions** (**…​​**) menu. Alternatively, open an indicator’s details, then select **Take action**.
+1. From the Indicators table, click the **More actions** (**…**) menu. Alternatively, open an indicator’s details, then select **Take action**.
2. Select one of the following:
* **Add to existing case**: From the **Select case** dialog box, select the case to which you want to attach the indicator.
@@ -157,7 +157,7 @@ When you attach an indicator to a case, the indicator is added as a new comment
### Remove indicators from cases [delete-indicator-from-case]
-To remove an indicator attached to a case, click the **More actions** (**…​​**) menu → **Delete attachment** in the case comment.
+To remove an indicator attached to a case, click the **More actions** (**…**) menu → **Delete attachment** in the case comment.
:::{image} /solutions/images/security-remove-indicator.png
:alt: Removing an indicator from a case
@@ -169,7 +169,7 @@ To remove an indicator attached to a case, click the **More actions** (**…​
Add indicator values to the [blocklist](/solutions/security/manage-elastic-defend/blocklist.md) to prevent selected applications from running on your hosts. You can use MD5, SHA-1, or SHA-256 hash values from `file` type indicators.
-You can add indicator values to the blocklist from the Indicators table or the Indicator details flyout.
From the Indicators table, select the **More actions** (**…​​**) menu → **Add blocklist entry**. Alternatively, open an indicator’s details, then select the **Take action** menu → **Add blocklist entry**.
+You can add indicator values to the blocklist from the Indicators table or the Indicator details flyout. From the Indicators table, select the **More actions** (**…**) menu → **Add blocklist entry**. Alternatively, open an indicator’s details, then select the **Take action** menu → **Add blocklist entry**.
::::{note}
Refer to [Blocklist](/solutions/security/manage-elastic-defend/blocklist.md) for more information about blocklist entries.
diff --git a/solutions/security/investigate/open-manage-cases.md b/solutions/security/investigate/open-manage-cases.md
index 6de7abaad3..f18d9c433f 100644
--- a/solutions/security/investigate/open-manage-cases.md
+++ b/solutions/security/investigate/open-manage-cases.md
@@ -125,7 +125,7 @@ Click on an existing case to access its summary. The case summary, located under
### Manage case comments [cases-manage-comments]
-To edit, delete, or quote a comment, select the appropriate option from the **More actions** menu (**…​**).
+To edit, delete, or quote a comment, select the appropriate option from the **More actions** menu (**…**).
:::{image} /solutions/images/security-cases-manage-comments.png
:alt: Shows you a summary of the case
@@ -203,7 +203,7 @@ To add a Lens visualization to a comment within your case:
5. Click **Preview** to show how the visualization will appear in the case comment.
6. Click **Add Comment** to add the visualization to your case.
-Alternatively, while viewing a [dashboard](/solutions/security/dashboards.md) you can open a panel’s menu then click **More actions (…​) → Add to existing case** or **More actions (…​) → Add to new case**.
+Alternatively, while viewing a [dashboard](/solutions/security/dashboards.md) you can open a panel’s menu, then click **More actions (…) → Add to existing case** or **More actions (…) → Add to new case**.
After a visualization has been added to a case, you can modify or interact with it by clicking the **Open Visualization** option in the case’s comment menu.
@@ -254,7 +254,7 @@ Go to the **Similar cases** tab to access other cases with the same observables.
### Copy the case UUID [cases-copy-case-uuid]
-Each case has a universally unique identifier (UUID) that you can copy and share. To copy a case’s UUID to a clipboard, go to the Cases page and select **Actions** → **Copy Case ID** for the case you want to share. Alternatively, go to a case’s details page, then from the **More actions** menu (…​), select **Copy Case ID**.
+Each case has a universally unique identifier (UUID) that you can copy and share. To copy a case’s UUID to the clipboard, go to the Cases page and select **Actions** → **Copy Case ID** for the case you want to share. Alternatively, go to a case’s details page, then from the **More actions** menu (…), select **Copy Case ID**.
:::{image} /solutions/images/security-cases-copy-case-id.png
:alt: Copy Case ID option in More actions menu
diff --git a/solutions/security/investigate/run-osquery-from-alerts.md b/solutions/security/investigate/run-osquery-from-alerts.md
index e7e5987c8f..2c87c80045 100644
--- a/solutions/security/investigate/run-osquery-from-alerts.md
+++ b/solutions/security/investigate/run-osquery-from-alerts.md
@@ -28,7 +28,7 @@ To run Osquery from an alert:
1. Do one of the following from the Alerts table:
* Click the **View details** button to open the Alert details flyout, then click **Take action → Run Osquery**.
- * Select the **More actions** menu (**…​**), then select **Run Osquery**.
+ * Select the **More actions** menu (**…**), then select **Run Osquery**.
2. Choose to run a single query or a query pack.
3. Select one or more {{agent}}s or groups to query. Start typing in the search field to get suggestions for {{agent}}s by name, ID, platform, and policy.
diff --git a/solutions/security/manage-elastic-defend/blocklist.md b/solutions/security/manage-elastic-defend/blocklist.md
index 6bfd36e640..bd24dcb9bf 100644
--- a/solutions/security/manage-elastic-defend/blocklist.md
+++ b/solutions/security/manage-elastic-defend/blocklist.md
@@ -86,7 +86,7 @@ You can individually modify each blocklist entry. You can also change the polici
To edit a blocklist entry:
-1. Click the actions menu (**…​**) for the blocklist entry you want to edit, then select **Edit blocklist**.
+1. Click the actions menu (**…**) for the blocklist entry you want to edit, then select **Edit blocklist**.
2. Modify details as needed.
3. Click **Save**.
@@ -97,5 +97,5 @@ You can delete a blocklist entry, which removes it entirely from all {{elastic-d
To delete a blocklist entry:
-1. Click the actions menu (**…​**) for the blocklist entry you want to delete, then select **Delete blocklist**.
+1. Click the actions menu (**…**) for the blocklist entry you want to delete, then select **Delete blocklist**.
2. On the dialog that opens, verify that you are removing the correct blocklist entry, then click **Delete**. A confirmation message displays.
diff --git a/solutions/security/manage-elastic-defend/endpoints.md b/solutions/security/manage-elastic-defend/endpoints.md
index d5bd4e9387..7ceb984e90 100644
--- a/solutions/security/manage-elastic-defend/endpoints.md
+++ b/solutions/security/manage-elastic-defend/endpoints.md
@@ -52,7 +52,7 @@ The Endpoints list provides the following data:
* **IP address**: All IP addresses associated with the hostname.
* **Version**: The {{agent}} version currently running.
* **Last active**: A date and timestamp of the last time the {{agent}} was active.
-* **Actions**: Select the context menu (**…​**) to do the following:
+* **Actions**: Select the context menu (**…**) to do the following:
* **Isolate host**: [Isolate the host](/solutions/security/endpoint-response-actions/isolate-host.md) from your network, blocking communication until the host is released.
* **Respond**: Open the [response console](/solutions/security/endpoint-response-actions.md) to perform response actions directly on the host.
diff --git a/solutions/security/manage-elastic-defend/event-filters.md b/solutions/security/manage-elastic-defend/event-filters.md
index c0bf346b77..b13b5a3c4b 100644
--- a/solutions/security/manage-elastic-defend/event-filters.md
+++ b/solutions/security/manage-elastic-defend/event-filters.md
@@ -37,7 +37,7 @@ Create event filters from the **Hosts** page or the **Event filters** page.
* To create an event filter from the **Hosts** page:
1. Select the **Events** tab to view the Events table.
- 2. Find the event to filter, click the **More actions** menu (**…​**), then select **Add Endpoint event filter**.
+ 2. Find the event to filter, click the **More actions** menu (**…**), then select **Add Endpoint event filter**.
::::{tip}
Since you can only create filters for endpoint events, be sure to filter the Events table to display events generated by {{elastic-endpoint}}.
For example, in the KQL search bar, enter the following query to find endpoint network events: `event.dataset : endpoint.events.network`.
@@ -111,7 +111,7 @@ You can individually modify each event filter. You can also change the policies
To edit an event filter:
-1. Click the actions menu (**…​**) for the event filter you want to edit, then select **Edit event filter**.
+1. Click the actions menu (**…**) for the event filter you want to edit, then select **Edit event filter**.
2. Modify details or conditions as needed.
3. Click **Save**.
@@ -122,5 +122,5 @@ You can delete an event filter, which removes it entirely from all {{elastic-def
To delete an event filter:
-1. Click the actions menu (**…​**) on the event filter you want to delete, then select **Delete event filter**.
+1. Click the actions menu (**…**) on the event filter you want to delete, then select **Delete event filter**.
2. On the dialog that opens, verify that you are removing the correct event filter, then click **Delete**. A confirmation message is displayed.
diff --git a/solutions/security/manage-elastic-defend/host-isolation-exceptions.md b/solutions/security/manage-elastic-defend/host-isolation-exceptions.md
index cec3b50b1c..c4fb26e995 100644
--- a/solutions/security/manage-elastic-defend/host-isolation-exceptions.md
+++ b/solutions/security/manage-elastic-defend/host-isolation-exceptions.md
@@ -68,7 +68,7 @@ You can individually modify each host isolation exception and change the policie
To edit a host isolation exception:
-1. Click the actions menu (**…​**) for the exception you want to edit, then select **Edit Exception**.
+1. Click the actions menu (**…**) for the exception you want to edit, then select **Edit Exception**.
2. Modify details as needed.
3. Click **Save**. The newly modified exception appears at the top of the list.
@@ -79,5 +79,5 @@ You can delete a host isolation exception, which removes it entirely from all {{
To delete a host isolation exception:
-1. Click the actions menu (**…​**) on the exception you want to delete, then select **Delete Exception**.
+1. Click the actions menu (**…**) on the exception you want to delete, then select **Delete Exception**.
2. On the dialog that opens, verify that you are removing the correct host isolation exception, then click **Delete**. A confirmation message is displayed.
diff --git a/solutions/security/manage-elastic-defend/trusted-applications.md b/solutions/security/manage-elastic-defend/trusted-applications.md
index 13a2301cfd..f32dc33311 100644
--- a/solutions/security/manage-elastic-defend/trusted-applications.md
+++ b/solutions/security/manage-elastic-defend/trusted-applications.md
@@ -91,7 +91,7 @@ You can individually modify each trusted application. You can also change the po
To edit a trusted application:
-1. Click the actions menu (**…​**) on the trusted application you want to edit, then select **Edit trusted application**.
+1. Click the actions menu (**…**) on the trusted application you want to edit, then select **Edit trusted application**.
2. Modify details as needed.
3. Click **Save**.
@@ -102,5 +102,5 @@ You can delete a trusted application, which removes it entirely from all {{elast
To delete a trusted application:
-1. Click the actions menu (**…​**) on the trusted application you want to delete, then select **Delete trusted application**.
+1. Click the actions menu (**…**) on the trusted application you want to delete, then select **Delete trusted application**.
2. On the dialog that opens, verify that you are removing the correct application, then click **Delete**. A confirmation message is displayed.
diff --git a/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md b/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md
index 493d462897..4c9e1016f2 100644
--- a/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md
+++ b/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md
@@ -26,4 +26,4 @@ The **Deployments** page in the Cloud UI provides several ways to find deploymen
:alt: Add a filter
:::
- Looking for all deployments of a specific version, because you want to upgrade them? Easy. Or what about that deployments you noticed before lunch that seemed to be spending an awfully long time changing its configuration—​is it done? Just add a filter to find any ongoing configuration changes.
+ Looking for all deployments of a specific version, because you want to upgrade them? Easy. Or what about that deployment you noticed before lunch that seemed to be spending an awfully long time changing its configuration— is it done? Just add a filter to find any ongoing configuration changes.
diff --git a/troubleshoot/observability/apm/common-problems.md b/troubleshoot/observability/apm/common-problems.md
index 5e9302e503..dc042e86d9 100644
--- a/troubleshoot/observability/apm/common-problems.md
+++ b/troubleshoot/observability/apm/common-problems.md
@@ -251,7 +251,7 @@ In Elasticsearch, index templates are used to define settings and mappings that
As an example, some APM agents store cookie values in `http.request.cookies`. Since `http.request` has disabled dynamic indexing, and `http.request.cookies` is not declared in a custom mapping, the values in `http.request.cookies` are not indexed and thus not searchable.
-**Ensure an APM data view exists** As a first step, you should ensure the correct data view exists. In {{kib}}, go to **Stack Management** > **Data views**. You should see the APM data view—​the default is `traces-apm*,apm-*,logs-apm*,apm-*,metrics-apm*,apm-*`. If you don’t, the data view doesn’t exist. To fix this, navigate to the Applications UI in {{kib}} and select **Add data**. In the APM tutorial, click **Load Kibana objects** to create the APM data view.
+**Ensure an APM data view exists** As a first step, you should ensure the correct data view exists. In {{kib}}, go to **Stack Management** > **Data views**. You should see the APM data view— the default is `traces-apm*,apm-*,logs-apm*,apm-*,metrics-apm*,apm-*`. If you don’t, the data view doesn’t exist. To fix this, navigate to the Applications UI in {{kib}} and select **Add data**. In the APM tutorial, click **Load Kibana objects** to create the APM data view.
**Ensure a field is searchable** There are two things you can do to if you’d like to ensure a field is searchable:
diff --git a/troubleshoot/observability/explore-data.md b/troubleshoot/observability/explore-data.md
index 31ea740715..4d63fd1cdc 100644
--- a/troubleshoot/observability/explore-data.md
+++ b/troubleshoot/observability/explore-data.md
@@ -66,7 +66,7 @@ To create a multi-series visualization:
* User experience (RUM)
* Mobile experience
-4. Click **Select report metric** and select the options and filters you need. You will see a **Missing…​** warning if required fields (highlighted with red underline) are incomplete.
+4. Click **Select report metric** and select the options and filters you need. You will see a **Missing…** warning if required fields (highlighted with red underline) are incomplete.
5. Click **Apply changes** to see the updated visualization, or repeat the **Add series** process to expand the visualization.
6. To add the visualization to an existing case, click **Add to case** and select the correct case.
diff --git a/troubleshoot/observability/troubleshoot-logs.md b/troubleshoot/observability/troubleshoot-logs.md
index cd1fde6454..0be2c6982a 100644
--- a/troubleshoot/observability/troubleshoot-logs.md
+++ b/troubleshoot/observability/troubleshoot-logs.md
@@ -160,9 +160,9 @@ Uninstalling the current {{agent}} removes the entire current setup, including t
-### Waiting for Logs to be shipped…​ step never completes [logs-troubleshooting-wait-for-logs]
+### Waiting for Logs to be shipped… step never completes [logs-troubleshooting-wait-for-logs]
-If the **Waiting for Logs to be shipped…​** step never completes, logs are not being shipped to {{es}} or your Observability project, and there is most likely an issue with your {{agent}} configuration.
+If the **Waiting for Logs to be shipped…** step never completes, logs are not being shipped to {{es}} or your Observability project, and there is most likely an issue with your {{agent}} configuration.
#### Solution [logs-troubleshooting-wait-for-logs-solution]
diff --git a/troubleshoot/security/elastic-defend.md b/troubleshoot/security/elastic-defend.md
index c98c23a536..7feb8f7575 100644
--- a/troubleshoot/security/elastic-defend.md
+++ b/troubleshoot/security/elastic-defend.md
@@ -90,7 +90,7 @@ To restart a transform that’s not running:
1. Go to **Kibana** → **Stack Management** → **Data** → **Transforms**.
2. Enter `endpoint.metadata` in the search box to find the transforms for {{elastic-defend}}.
-3. Click the **Actions** menu (**…​**) and do one of the following for each transform, depending on the value in the **Status** column:
+3. Click the **Actions** menu (**…**) and do one of the following for each transform, depending on the value in the **Status** column:
* `stopped`: Select **Start** to restart the transform.
* `failed`: Select **Stop** to first stop the transform, and then select **Start** to restart it.