diff --git a/manage-data/_snippets/ilm-start.md b/manage-data/_snippets/ilm-start.md index 64e542deed..6b805faa94 100644 --- a/manage-data/_snippets/ilm-start.md +++ b/manage-data/_snippets/ilm-start.md @@ -6,7 +6,7 @@ To restart {{ilm-init}} and resume executing policies, use the [{{ilm-init}} sta POST _ilm/start ``` -The response will look like this: +The response looks like this: ```console-result { @@ -20,7 +20,7 @@ Verify that {{ilm}} is now running: GET _ilm/status ``` -The response will look like this: +The response looks like this: ```console-result { diff --git a/manage-data/_snippets/ilm-status.md b/manage-data/_snippets/ilm-status.md index 7a1a3cdabc..3e3d37d91a 100644 --- a/manage-data/_snippets/ilm-status.md +++ b/manage-data/_snippets/ilm-status.md @@ -1,4 +1,4 @@ -To see the current status of the {{ilm-init}} service, use the [{{ilm-init}} status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status): +To view the current status of the {{ilm-init}} service, use the [{{ilm-init}} status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status): ```console GET _ilm/status diff --git a/manage-data/_snippets/ilm-stop.md b/manage-data/_snippets/ilm-stop.md index 22dccccae5..bd0ac5f2ac 100644 --- a/manage-data/_snippets/ilm-stop.md +++ b/manage-data/_snippets/ilm-stop.md @@ -3,7 +3,7 @@ By default, the {{ilm}} service is in the `RUNNING` state and manages all indice You can stop {{ilm-init}} to suspend management operations for all indices. For example, you might stop {{ilm}} when performing scheduled maintenance or making changes to the cluster that could impact the execution of {{ilm-init}} actions. ::::{important} -When you stop {{ilm-init}}, [{{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) operations are also suspended. No snapshots will be taken as scheduled until you restart {{ilm-init}}. In-progress snapshots are not affected. 
+When you stop {{ilm-init}}, [{{slm-init}}](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) operations are also suspended. {{slm-init}} does not take snapshots as scheduled until you restart {{ilm-init}}. In-progress snapshots are not affected. :::: To stop the {{ilm-init}} service and pause execution of all lifecycle policies, use the [{{ilm-init}} stop API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop): @@ -12,7 +12,7 @@ To stop the {{ilm-init}} service and pause execution of all lifecycle policies, POST _ilm/stop ``` -The response will look like this: +The response looks like this: ```console-result { @@ -28,7 +28,7 @@ While the {{ilm-init}} service is shutting down, run the status API to verify th GET _ilm/status ``` -The response will look like this: +The response looks like this: ```console-result { diff --git a/manage-data/data-store.md b/manage-data/data-store.md index 0bd433c948..ba1d90957b 100644 --- a/manage-data/data-store.md +++ b/manage-data/data-store.md @@ -16,6 +16,6 @@ The documentation in this section details how {{es}} works as a _data store_ sta Then, learn how these documents and the fields they contain are stored and indexed in [Mapping](/manage-data/data-store/mapping.md), and how unstructured text is converted into a structured format that’s optimized for search in [Text analysis](/manage-data/data-store/text-analysis.md). -You can also read more about working with {{es}} as a data store including how to use [index templates](/manage-data/data-store/templates.md) to tell {{es}} how to configure an index when it is created, how to use [aliases](/manage-data/data-store/aliases.md) to point to multiple indices, and how to use the [command line to manage data](/manage-data/data-store/manage-data-from-the-command-line.md) stored in {{es}}. 
+You can also read more about working with {{es}} as a data store, including how to use [index templates](/manage-data/data-store/templates.md) to tell {{es}} how to configure an index when you create it, how to use [aliases](/manage-data/data-store/aliases.md) to point to multiple indices, and how to use the [command line to manage data](/manage-data/data-store/manage-data-from-the-command-line.md) stored in {{es}}. -If your use case involves working with continuous streams of time series data, you may consider using a [data stream](./data-store/data-streams.md). These are optimally suited for storing append-only data. The data can be accessed through a single, named resource, while it is stored in a series of hidden, auto-generated backing indices. +If your use case involves working with continuous streams of time series data, consider using a [data stream](./data-store/data-streams.md). These are optimally suited for storing append-only data. You can access the data through a single, named resource, while {{es}} stores it in a series of hidden, auto-generated backing indices. diff --git a/manage-data/index.md b/manage-data/index.md index 3aa5c50f86..95d0ad7850 100644 --- a/manage-data/index.md +++ b/manage-data/index.md @@ -9,13 +9,13 @@ description: Learn how to ingest, store, and manage data in Elasticsearch. Under # Manage data -Whether you're looking to build a fast and relevant search solution, monitor business-critical applications and infrastructure, monitor endpoint security data, or one of the [many other use cases Elastic supports](/get-started/introduction.md), you'll need to understand how to ingest and manage data stored in {{es}}. +Whether you're looking to build a fast and relevant search solution, monitor business-critical applications and infrastructure, monitor endpoint security data, or address one of the [many other use cases Elastic supports](/get-started/introduction.md), you need to understand how to ingest and manage data stored in {{es}}. 
## Learn how data is stored % Topic: Learning about Elastic data store primitives -The fundamental unit of storage in {{es}}, the index, is a collection of documents uniquely identified by a name or an alias. These documents go through a process called mapping, which defines how a document and the fields it contains are stored and indexed, and a process called text analysis in which unstructured text is converted into a structured format that’s optimized for search. +The fundamental unit of storage in {{es}}, the index, is a collection of documents uniquely identified by a name or an alias. These documents go through a process called mapping, which defines how {{es}} stores and indexes a document and the fields it contains, and a process called text analysis in which {{es}} converts unstructured text into a structured format that's optimized for search. **Learn more in [The Elasticsearch data store](/manage-data/data-store.md)**. @@ -23,7 +23,7 @@ The fundamental unit of storage in {{es}}, the index, is a collection of documen % Topic: Evaluating and implementing ingestion and data enrichment technologies -Before you can start searching, visualizing, and pulling actionable insights from Elastic, you have to get your data into {{es}}. Elastic offers a wide range of tools and methods for getting data into {{es}}. The best approach will depend on the kind of data you're ingesting and your specific use case. +Before you can start searching, visualizing, and pulling actionable insights from Elastic, you have to get your data into {{es}}. Elastic offers a wide range of tools and methods for getting data into {{es}}. The best approach depends on the kind of data you're ingesting and your specific use case. 
**Learn more in [Ingestion](/manage-data/ingest.md).** @@ -31,9 +31,9 @@ Before you can start searching, visualizing, and pulling actionable insights fro % Topic: Managing your data volume (lifecycle) -After you've added data to {{es}}, you'll need to manage it over time. For example, you may specify that data be deleted after a retention period or store data in multiple tiers with different performance characteristics. +After you've added data to {{es}}, you need to manage it over time. For example, you can specify that data be deleted after a retention period or store data in multiple tiers with different performance characteristics. -Strategies for managing data depend on the type of data and how it's being used. For example, with a collection of items you want to search, like a catalog of products, the value of the content remains relatively constant over time so you want to be able to retrieve items quickly regardless of how old they are. Whereas with a stream of continuously-generated timestamped data, such as log entries, the data keeps accumulating over time, so you need strategies for balancing the value of the data against the cost of storing it. +Strategies for managing data depend on the type of data and how you're using it. For example, with a collection of items you want to search, like a catalog of products, the value of the content remains relatively constant over time, so you want to be able to retrieve items quickly regardless of how old they are. In contrast, with a stream of continuously generated timestamped data, such as log entries, the data keeps accumulating over time, so you need strategies for balancing the value of the data against the cost of storing it. 
**Learn more in [Data lifecycle](/manage-data/lifecycle.md).** diff --git a/manage-data/ingest.md b/manage-data/ingest.md index 9a107e4726..a8c0b1dad1 100644 --- a/manage-data/ingest.md +++ b/manage-data/ingest.md @@ -19,7 +19,7 @@ products: Whether you call it *adding*, *indexing*, or *ingesting* data, you have to get the data into {{es}} before you can search it, visualize it, and use it for insights. -Our ingest tools are flexible, and support a wide range of scenarios. We can help you with everything from popular and straightforward use cases, all the way to advanced use cases that require additional processing in order to modify or reshape your data before it goes to {{es}}. +Our ingest tools are flexible and support a wide range of scenarios. We can help you with everything from popular and straightforward use cases, all the way to advanced use cases that require additional processing to modify or reshape your data before it goes to {{es}}. You can ingest: @@ -42,10 +42,10 @@ If you would like to try things out before you add your own data, try using our ## Ingesting time series data [ingest-time-series] -::::{admonition} What’s the best approach for ingesting time series data? +::::{admonition} What's the best approach for ingesting time series data? The best approach for ingesting data is the *simplest option* that *meets your needs* and *satisfies your use case*. -In most cases, the *simplest option* for ingesting time series data is using {{agent}} paired with an Elastic integration. +Usually, the *simplest option* for ingesting time series data is using {{agent}} paired with an Elastic integration. * Install [Elastic Agent](/reference/fleet/index.md) on the computer(s) from which you want to collect data. * Add the [Elastic integration](https://docs.elastic.co/en/integrations) for the data source to your deployment. 
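If you want to see what ingestion looks like at the API level before setting up {{agent}}, the smallest possible ingest operation is indexing a single document. This is only a sketch: the data stream name and fields below are illustrative, and it assumes a matching data-stream-enabled index template already exists so that {{es}} can auto-create the data stream:

```console
POST logs-myapp-default/_doc
{
  "@timestamp": "2025-05-01T12:00:00Z",
  "message": "user logged in"
}
```

{{es}} routes the document to the data stream's current write index, creating the data stream on first use.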
diff --git a/manage-data/lifecycle.md b/manage-data/lifecycle.md index 0d22046562..ee2e4d810c 100644 --- a/manage-data/lifecycle.md +++ b/manage-data/lifecycle.md @@ -22,7 +22,7 @@ The data you store in {{es}} generally falls into one of two categories: To help you manage your data, {{es}} offers you the following options: {{ilm-cap}}, Data stream lifecycle, and Elastic Curator. ::::{note} -[Data rollup](/manage-data/lifecycle/rollup.md) is a deprecated {{es}} feature that allows you to manage the amount of data that is stored in your cluster, similar to the downsampling functionality of {{ilm-init}} and data stream lifecycle. This feature should not be used for new deployments. +[Data rollup](/manage-data/lifecycle/rollup.md) is a deprecated {{es}} feature that allows you to manage the amount of data that your cluster stores, similar to the downsampling functionality of {{ilm-init}} and data stream lifecycle. Do not use this feature for new deployments. :::: ## {{ilm-init}} [ilm] @@ -40,9 +40,9 @@ In an {{ecloud}} or self-managed environment, ILM lets you automatically transit ::: :::: -**{{ilm-init}}** can be used to manage both indices and data streams. It allows you to do the following: +**{{ilm-init}}** can manage both indices and data streams. It allows you to do the following: -* Define the retention period of your data. The retention period is the minimum time your data will be stored in {{es}}. Data older than this period can be deleted by {{es}}. +* Define the retention period of your data. The retention period is the minimum time {{es}} stores your data. {{es}} can delete data older than this period. * Define [multiple tiers](/manage-data/lifecycle/data-tiers.md) of data nodes with different performance characteristics. * Automatically transition indices through the data tiers according to your performance needs and retention policies. 
* Leverage [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md) stored in a remote repository to provide resiliency for your older indices while reducing operating costs and maintaining search performance. @@ -52,10 +52,10 @@ In an {{ecloud}} or self-managed environment, ILM lets you automatically transit ## Data stream lifecycle [data-stream-lifecycle] -**Data stream lifecycle** is less feature rich but is focused on simplicity. It allows you to do the following: +**Data stream lifecycle** is less feature rich but focuses on simplicity. It allows you to do the following: -* Define the retention period of your data. The retention period is the minimum time your data will be stored in {{es}}. Data older than this period can be deleted by {{es}} at a later time. -* Improve the performance of your data stream by performing background operations that will optimize the way your data stream is stored. +* Define the retention period of your data. The retention period is the minimum time {{es}} stores your data. {{es}} can delete data older than this period at a later time. +* Improve the performance of your data stream by performing background operations that optimize how {{es}} stores your data stream. **[Read more in Data stream lifecycle ->](/manage-data/lifecycle/data-stream.md)** @@ -64,6 +64,6 @@ In an {{ecloud}} or self-managed environment, ILM lets you automatically transit serverless: unavailable ``` -**Elastic Curator** is a tool that allows you to manage your indices and snapshots using user-defined filters and predefined actions. If ILM provides the functionality to manage your index lifecycle, and you have at least a Basic license, consider using ILM in place of Curator. Many stack components make use of ILM by default. +**Elastic Curator** is a tool that allows you to manage your indices and snapshots using user-defined filters and predefined actions. 
If ILM provides the functionality to manage your index lifecycle, and you have at least a Basic license, consider using ILM in place of Curator. Many stack components use ILM by default. **[Read more in Elastic Curator ->](/manage-data/lifecycle/curator.md)** diff --git a/manage-data/migrate.md b/manage-data/migrate.md index 3ed4028d93..72be24cf8a 100644 --- a/manage-data/migrate.md +++ b/manage-data/migrate.md @@ -40,7 +40,7 @@ Reindex from a remote cluster For {{ech}}, if your cluster is self-managed with a self-signed certificate, you can follow this [step-by-step migration guide](migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md). Restore from a snapshot -: The new cluster must be the same size as your old one, or larger, to accommodate the data. The new cluster must also be an Elasticsearch version that is compatible with the old cluster (check [Elasticsearch snapshot version compatibility](/deploy-manage/tools/snapshot-and-restore.md#snapshot-restore-version-compatibility) for details). If you have not already done so, you will need to [set up snapshots for your old cluster](/deploy-manage/tools/snapshot-and-restore/self-managed.md) using a repository that can be accessed from the new cluster. +: The new cluster must be the same size as your old one, or larger, to accommodate the data. The new cluster must also be an {{es}} version that is compatible with the old cluster (check [Elasticsearch snapshot version compatibility](/deploy-manage/tools/snapshot-and-restore.md#snapshot-restore-version-compatibility) for details). If you have not already done so, you need to [set up snapshots for your old cluster](/deploy-manage/tools/snapshot-and-restore/self-managed.md) using a repository that the new cluster can access. 
:::{admonition} Migrating system {{es}} indices In {{es}} 8.0 and later versions, to back up and restore system indices and system data streams such as `.kibana` or `.security`, you must snapshot and restore the related feature's [feature state](/deploy-manage/tools/snapshot-and-restore.md#feature-state). @@ -75,10 +75,10 @@ Follow these steps to reindex data remotely: Otherwise, if your remote endpoint is not covered by the default patterns, adjust the setting to add the remote {{es}} cluster as an allowed host: 1. From your deployment menu, go to the **Edit** page. - 2. In the **Elasticsearch** section, select **Manage user settings and extensions**. For deployments with existing user settings, you may have to expand the **Edit elasticsearch.yml** caret for each node type instead. - 3. Add the following `reindex.remote.whitelist: [REMOTE_HOST:PORT]` user setting, where `REMOTE_HOST` is a pattern matching the URL for the remote {{es}} host that you are reindexing from, and PORT is the host port number. Do not include the `https://` prefix. + 2. In the **Elasticsearch** section, select **Manage user settings and extensions**. For deployments with existing user settings, you might have to expand the **Edit elasticsearch.yml** caret for each node type instead. + 3. Add the following `reindex.remote.whitelist: [REMOTE_HOST:PORT]` user setting, where `REMOTE_HOST` is a pattern matching the URL for the remote {{es}} host that you are reindexing from, and `PORT` is the host port number. Do not include the `https://` prefix. - Note that if you override the parameter it replaces the defaults: `["*.io:*", "*.com:*"]`. If you still want these patterns to be allowed you need to specify them explicitly in the value. + If you override the parameter, it replaces the defaults: `["*.io:*", "*.com:*"]`. If you still want these patterns to be allowed, you need to specify them explicitly in the value. 
For example: @@ -124,7 +124,7 @@ Follow these steps to reindex data remotely: Restoring from a snapshot is often the fastest and most reliable way to migrate data between {{es}} clusters. It preserves mappings, settings, and optionally parts of the cluster state such as index templates, component templates, and system indices. -System indices can be restored by including their corresponding [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) in the restore operation, allowing you to retain internal configurations related to security, {{kib}}, or other stack features. +You can restore system indices by including their corresponding [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) in the restore operation, allowing you to retain internal configurations related to security, {{kib}}, or other stack features. This method is especially useful when: @@ -140,7 +140,7 @@ When your source cluster is actively ingesting data, such as logs, metrics, or t For more information, refer to [Restore into a different cluster](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md#restore-different-cluster). ::::{note} -For {{ece}}, Amazon S3 us the most common snapshot storage, but you can restor from any accesible external storage that contains your {{es}} snapshots. +For {{ece}}, Amazon S3 is the most common snapshot storage, but you can restore from any accessible external storage that contains your {{es}} snapshots. :::: ### Step 1: Set up the repository in the new cluster [migrate-repo-setup] @@ -148,7 +148,7 @@ For {{ece}}, Amazon S3 us the most common snapshot storage, but you can restor f In this step, you’ll configure a snapshot repository in the new cluster that points to the storage location used by the old cluster. This allows the new cluster to access and restore snapshots created in the original environment. 
::::{tip} -If your new {{ech}} or {{ece}} deployment cannot connect to the same repository used by your self-managed cluster, for example if it's a private NFS share, consider one of the following alternatives: +If your new {{ech}} or {{ece}} deployment cannot connect to the same repository used by your self-managed cluster, for example if it's a private Network File System (NFS) share, consider one of the following alternatives: * [Back up your repository](/deploy-manage/tools/snapshot-and-restore/self-managed.md#snapshots-repository-backup) to a supported storage system such as AWS S3, Google Cloud Storage, or Azure Blob Storage, and then configure your new cluster to use that location for the data migration. * Expose the repository contents over `ftp`, `http`, or `https`, and use a [read-only URL repository](/deploy-manage/tools/snapshot-and-restore/read-only-url-repository.md) type in your new deployment to access the snapshots. @@ -168,8 +168,8 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository Considerations: - * If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. - * If the old cluster still has write access to the repository, register the repository as read-only to avoid data corruption. This can be done using the `readonly: true` option. + * If you're migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. + * If the old cluster still has write access to the repository, register the repository as read-only to avoid data corruption. You can do this using the `readonly: true` option. 
To connect the existing snapshot repository to your new deployment, follow the steps for the storage provider where the repository is hosted: @@ -184,7 +184,7 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository * [Create the repository](/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md#ec-create-azure-repository). ::::{important} - Although these instructions are focused on {{ech}}, you should follow the same steps for {{ece}} by configuring the repository directly **at the deployment level**. + Although these instructions focus on {{ech}}, you should follow the same steps for {{ece}} by configuring the repository directly **at the deployment level**. **Do not** configure the repository as an [ECE-managed repository](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md), which is intended for automatic snapshots of deployments. In this case, you need to add a custom repository that already contains snapshots from another cluster. :::: @@ -192,7 +192,7 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository ### Step 2: Run the snapshot restore [migrate-restore] -After the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. +After you have registered and verified the repository, you are ready to restore any data from any of its snapshots to your new cluster. You can run a restore operation using the {{kib}} Management UI, or using the {{es}} API. Refer to [Restore a snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) for more details, including API-based examples. 
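For instance, a minimal API-based restore of a snapshot's contents might look like the following sketch. The repository and snapshot names are placeholders, and the `feature_states` value is only an example; include just the feature states you actually need to migrate:

```console
POST _snapshot/my-migration-repo/my-snapshot/_restore
{
  "indices": "*",
  "include_global_state": false,
  "feature_states": ["security"]
}
```

Restoring an index that already exists in the target cluster fails, so close or delete any conflicting indices first, or use the `rename_pattern` and `rename_replacement` options to restore them under new names.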
diff --git a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md index b412ad4b22..7e75d98dca 100644 --- a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md +++ b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md @@ -5,7 +5,7 @@ products: - id: elasticsearch --- -# Use case: use Elasticsearch to manage time series data [use-elasticsearch-for-time-series-data] +# Use case: Use Elasticsearch to manage time series data [use-elasticsearch-for-time-series-data] {{es}} offers features to help you store, manage, and search time series data, such as logs and metrics. Once in {{es}}, you can analyze and visualize your data using {{kib}} and other {{stack}} features. @@ -61,7 +61,7 @@ We recommend you use dedicated nodes in the frozen tier. If needed, you can assi node.roles: [ data_content, data_hot, data_warm ] ``` -Assign your nodes any other roles needed for your cluster. For example, a small cluster may have nodes with multiple roles. +Assign your nodes any other roles needed for your cluster. For example, a small cluster can have nodes with multiple roles. 
```yaml node.roles: [ master, ingest, ml, data_hot, transform ] ``` @@ -97,7 +97,7 @@ Use any of the following repository types with searchable snapshots: * [Google Cloud Storage](../deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md) * [Azure Blob Storage](../deploy-manage/tools/snapshot-and-restore/azure-repository.md) * [Hadoop Distributed File Store (HDFS)](elasticsearch://reference/elasticsearch-plugins/repository-hdfs.md) -* [Shared filesystems](../deploy-manage/tools/snapshot-and-restore/shared-file-system-repository.md) such as NFS +* [Shared filesystems](../deploy-manage/tools/snapshot-and-restore/shared-file-system-repository.md) such as Network File System (NFS) * [Read-only HTTP and HTTPS repositories](../deploy-manage/tools/snapshot-and-restore/read-only-url-repository.md) You can also use alternative implementations of these repository types, for instance [MinIO](../deploy-manage/tools/snapshot-and-restore/s3-repository.md#repository-s3-client), as long as they are fully compatible. Use the [Repository analysis](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze) API to analyze your repository’s suitability for use with searchable snapshots. @@ -253,7 +253,7 @@ If you use {{fleet}} or {{agent}}, skip to [Search and visualize your data](#sea :::: -If you use a custom application, you need to set up your own data stream. A data stream requires a matching index template. In most cases, you compose this index template using one or more component templates. You typically use separate component templates for mappings and index settings. This lets you reuse the component templates in multiple index templates. +If you use a custom application, you need to set up your own data stream. A data stream requires a matching index template. Usually, you compose this index template using one or more component templates. Use separate component templates for mappings and index settings. 
This lets you reuse the component templates in multiple index templates. When creating your component templates, include: @@ -320,7 +320,7 @@ Use your component templates to create an index template. Specify: * One or more index patterns that match the data stream’s name. We recommend using our [data stream naming scheme](/reference/fleet/data-streams.md#data-streams-naming-scheme). * That the template is data stream enabled. * Any component templates that contain your mappings and index settings. -* A priority higher than `200` to avoid collisions with built-in templates. See [Avoid index pattern collisions](data-store/templates.md#avoid-index-pattern-collisions). +* A priority higher than `200` to avoid collisions with built-in templates. Refer to [Avoid index pattern collisions](data-store/templates.md#avoid-index-pattern-collisions). To create an index template in {{kib}}: @@ -367,9 +367,9 @@ POST my-data-stream/_doc ## Search and visualize your data [search-visualize-your-data] -To explore and search your data in {{kib}}, open the main menu and select **Discover**. See {{kib}}'s [Discover documentation](../explore-analyze/discover.md). +To explore and search your data in {{kib}}, open the main menu and select **Discover**. Refer to {{kib}}'s [Discover documentation](../explore-analyze/discover.md). -Use {{kib}}'s **Dashboard** feature to visualize your data in a chart, table, map, and more. See {{kib}}'s [Dashboard documentation](../explore-analyze/dashboards.md). +Use {{kib}}'s **Dashboard** feature to visualize your data in a chart, table, map, and more. Refer to {{kib}}'s [Dashboard documentation](../explore-analyze/dashboards.md). You can also search and aggregate your data using the [search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search). 
Use [runtime fields](data-store/mapping/define-runtime-fields-in-search-request.md) and [grok patterns](../explore-analyze/scripting/grok.md) to dynamically extract data from log messages and other unstructured content at search time. @@ -422,7 +422,7 @@ GET my-data-stream/_search } ``` -{{es}} searches are synchronous by default. Searches across frozen data, long time ranges, or large datasets may take longer. Use the [async search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-async-search-submit) to run searches in the background. For more search options, see [*The search API*](../solutions/search/querying-for-search.md). +{{es}} searches are synchronous by default. Searches across frozen data, long time ranges, or large datasets can take longer. Use the [async search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-async-search-submit) to run searches in the background. For more search options, refer to [*The search API*](../solutions/search/querying-for-search.md). ```console POST my-data-stream/_async_search