diff --git a/deploy-manage/_snippets/field-doc-sec-limitations.md b/deploy-manage/_snippets/field-doc-sec-limitations.md
index a2413e8095..100839898f 100644
--- a/deploy-manage/_snippets/field-doc-sec-limitations.md
+++ b/deploy-manage/_snippets/field-doc-sec-limitations.md
@@ -6,7 +6,7 @@ When a user’s role enables [document level security](/deploy-manage/users-role
* Document level security doesn’t affect global index statistics that relevancy scoring uses. This means that scores are computed without taking the role query into account. Documents that don’t match the role query are never returned.
* The `has_child` and `has_parent` queries aren’t supported as query parameters in the role definition. However, they can still be used in the search API with document level security enabled.
-* [Date math](elasticsearch://reference/elasticsearch/rest-apis/common-options.md#date-math) expressions cannot contain `now` in [range queries with date fields](elasticsearch://reference/query-languages/query-dsl-range-query.md#ranges-on-dates).
+* [Date math](elasticsearch://reference/elasticsearch/rest-apis/common-options.md#date-math) expressions cannot contain `now` in [range queries with date fields](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md#ranges-on-dates).
* Any query that makes remote calls to fetch query data isn’t supported, including the following queries:
* `terms` query with terms lookup
@@ -16,7 +16,7 @@ When a user’s role enables [document level security](/deploy-manage/users-role
* If suggesters are specified and document level security is enabled, the specified suggesters are ignored.
* A search request cannot be profiled if document level security is enabled.
* The [terms enum API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-terms-enum) does not return terms if document level security is enabled.
-* The [`multi_match`](elasticsearch://reference/query-languages/query-dsl-multi-match-query.md) query does not support specifying fields using wildcards.
+* The [`multi_match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-multi-match-query.md) query does not support specifying fields using wildcards.
:::{note}
While document-level security prevents users from viewing restricted documents, it’s still possible to write search requests that return aggregate information about the entire index. A user whose access is restricted to specific documents in an index could still learn about field names and terms that only exist in inaccessible documents, and count how many inaccessible documents contain a given term.
diff --git a/deploy-manage/production-guidance/optimize-performance/search-speed.md b/deploy-manage/production-guidance/optimize-performance/search-speed.md
index 483e56019a..d68f9ddd18 100644
--- a/deploy-manage/production-guidance/optimize-performance/search-speed.md
+++ b/deploy-manage/production-guidance/optimize-performance/search-speed.md
@@ -48,7 +48,7 @@ In particular, joins should be avoided. [`nested`](elasticsearch://reference/ela
## Search as few fields as possible [search-as-few-fields-as-possible]
-The more fields a [`query_string`](elasticsearch://reference/query-languages/query-dsl-query-string-query.md) or [`multi_match`](elasticsearch://reference/query-languages/query-dsl-multi-match-query.md) query targets, the slower it is. A common technique to improve search speed over multiple fields is to copy their values into a single field at index time, and then use this field at search time. This can be automated with the [`copy-to`](elasticsearch://reference/elasticsearch/mapping-reference/copy-to.md) directive of mappings without having to change the source of documents. Here is an example of an index containing movies that optimizes queries that search over both the name and the plot of the movie by indexing both values into the `name_and_plot` field.
+The more fields a [`query_string`](elasticsearch://reference/query-languages/query-dsl/query-dsl-query-string-query.md) or [`multi_match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-multi-match-query.md) query targets, the slower it is. A common technique to improve search speed over multiple fields is to copy their values into a single field at index time, and then use this field at search time. This can be automated with the [`copy-to`](elasticsearch://reference/elasticsearch/mapping-reference/copy-to.md) directive of mappings without having to change the source of documents. Here is an example of an index containing movies that optimizes queries that search over both the name and the plot of the movie by indexing both values into the `name_and_plot` field.
```console
PUT movies
@@ -146,13 +146,13 @@ GET index/_search
## Consider mapping identifiers as `keyword` [map-ids-as-keyword]
-Not all numeric data should be mapped as a [numeric](elasticsearch://reference/elasticsearch/mapping-reference/number.md) field data type. {{es}} optimizes numeric fields, such as `integer` or `long`, for [`range`](elasticsearch://reference/query-languages/query-dsl-range-query.md) queries. However, [`keyword`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md) fields are better for [`term`](elasticsearch://reference/query-languages/query-dsl-term-query.md) and other [term-level](elasticsearch://reference/query-languages/term-level-queries.md) queries.
+Not all numeric data should be mapped as a [numeric](elasticsearch://reference/elasticsearch/mapping-reference/number.md) field data type. {{es}} optimizes numeric fields, such as `integer` or `long`, for [`range`](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) queries. However, [`keyword`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md) fields are better for [`term`](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md) and other [term-level](elasticsearch://reference/query-languages/query-dsl/term-level-queries.md) queries.
Identifiers, such as an ISBN or a product ID, are rarely used in `range` queries. However, they are often retrieved using term-level queries.
Consider mapping a numeric identifier as a `keyword` if:
-* You don’t plan to search for the identifier data using [`range`](elasticsearch://reference/query-languages/query-dsl-range-query.md) queries.
+* You don’t plan to search for the identifier data using [`range`](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) queries.
* Fast retrieval is important. `term` query searches on `keyword` fields are often faster than `term` searches on numeric fields.
If you’re unsure which to use, you can use a [multi-field](elasticsearch://reference/elasticsearch/mapping-reference/multi-fields.md) to map the data as both a `keyword` *and* a numeric data type.
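For example, a multi-field mapping along these lines could index an identifier both ways (a sketch; the `products` index and field names are illustrative):

```console
PUT products
{
  "mappings": {
    "properties": {
      "product_id": {
        "type": "keyword",
        "fields": {
          "numeric": {
            "type": "long"
          }
        }
      }
    }
  }
}
```

Term-level queries can then target `product_id`, while `range` queries can target the `product_id.numeric` sub-field.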
@@ -160,7 +160,7 @@ If you’re unsure which to use, you can use a [multi-field](elasticsearch://ref
## Avoid scripts [_avoid_scripts]
-If possible, avoid using [script](../../../explore-analyze/scripting.md)-based sorting, scripts in aggregations, and the [`script_score`](elasticsearch://reference/query-languages/query-dsl-script-score-query.md) query. See [Scripts, caching, and search speed](../../../explore-analyze/scripting/scripts-search-speed.md).
+If possible, avoid using [script](../../../explore-analyze/scripting.md)-based sorting, scripts in aggregations, and the [`script_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md) query. See [Scripts, caching, and search speed](../../../explore-analyze/scripting/scripts-search-speed.md).
## Search rounded dates [_search_rounded_dates]
diff --git a/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md b/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md
index 8512edddac..9b19cca887 100644
--- a/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md
+++ b/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md
@@ -23,7 +23,7 @@ The drawback of an audited system is represented by the inevitable performance p
When using audit event ignore policies, you are accepting potential accountability gaps that could render illegitimate actions undetectable. Take time to review these policies whenever your system architecture changes.
::::
-A policy is a named set of filter rules. Each filter rule applies to a single event attribute, one of the `users`, `realms`, `actions`, `roles` or `indices` attributes. The filter rule defines a list of [Lucene regexp](elasticsearch://reference/query-languages/regexp-syntax.md), **any** of which has to match the value of the audit event attribute for the rule to match. A policy matches an event if **all** the rules comprising it match the event. An audit event is ignored, therefore not printed, if it matches **any** policy. All other non-matching events are printed as usual.
+A policy is a named set of filter rules. Each filter rule applies to a single event attribute, one of the `users`, `realms`, `actions`, `roles` or `indices` attributes. The filter rule defines a list of [Lucene regexp](elasticsearch://reference/query-languages/query-dsl/regexp-syntax.md), **any** of which has to match the value of the audit event attribute for the rule to match. A policy matches an event if **all** the rules comprising it match the event. An audit event is ignored, therefore not printed, if it matches **any** policy. All other non-matching events are printed as usual.
All policies are defined under the `xpack.security.audit.logfile.events.ignore_filters` settings namespace. For example, the following policy named *example1* matches events from the *kibana_system* or *admin_user* principals that operate over indices of the wildcard form *app-logs**:
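A policy along those lines could be sketched in `elasticsearch.yml` as follows (the principal and index names mirror the description above):

```yaml
xpack.security.audit.logfile.events.ignore_filters:
  example1:
    users: ["kibana_system", "admin_user"]
    indices: ["app-logs*"]
```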
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md b/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md
index 3202b68885..12c2414235 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md
@@ -153,7 +153,7 @@ Client authentication is enabled by default for the JWT realms. Disabling client
: Specifies a list of JWT subjects that the realm will allow. These values are typically URLs, UUIDs, or other case-sensitive string values.
`allowed_subject_patterns`
- : Analogous to `allowed_subjects` but it accepts a list of [Lucene regexp](elasticsearch://reference/query-languages/regexp-syntax.md) and wildcards for the allowed JWT subjects. Wildcards use the `*` and `?` special characters (which are escaped by `\`) to mean "any string" and "any single character" respectively, for example "a?\**", matches "a1*" and "ab*whatever", but not "a", "abc", or "abc*" (in Java strings `\` must itself be escaped by another `\`). [Lucene regexp](elasticsearch://reference/query-languages/regexp-syntax.md) must be enclosed between `/`, for example "/https?://[^/]+/?/" matches any http or https URL with no path component (matches "https://elastic.co/" but not "https://elastic.co/guide").
+ : Analogous to `allowed_subjects`, but accepts a list of [Lucene regexps](elasticsearch://reference/query-languages/query-dsl/regexp-syntax.md) and wildcards for the allowed JWT subjects. Wildcards use the `*` and `?` special characters (which can be escaped with `\`) to mean "any string" and "any single character" respectively. For example, "a?\**" matches "a1*" and "ab*whatever", but not "a", "abc", or "abc*" (in Java strings `\` must itself be escaped by another `\`). A [Lucene regexp](elasticsearch://reference/query-languages/query-dsl/regexp-syntax.md) must be enclosed between `/` characters. For example, "/https?://[^/]+/?/" matches any http or https URL with no path component (matches "https://elastic.co/" but not "https://elastic.co/guide").
At least one of the `allowed_subjects` or `allowed_subject_patterns` settings must be specified (and be non-empty) when `token_type` is `access_token`.
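As a sketch, a JWT realm configured with subject patterns might look like the following in `elasticsearch.yml` (the realm name `jwt1` and the patterns are illustrative, and other required realm settings are omitted):

```yaml
xpack.security.authc.realms.jwt.jwt1:
  token_type: access_token
  allowed_subject_patterns: ["a?\\**", "/https?://[^/]+/?/"]
```

Note that in YAML double-quoted strings, `\\` produces a single literal backslash.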
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/role-mapping-resources.md b/deploy-manage/users-roles/cluster-or-deployment-auth/role-mapping-resources.md
index b4d77b9f1a..beeaf5a654 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/role-mapping-resources.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/role-mapping-resources.md
@@ -50,7 +50,7 @@ The value specified in the field rule can be one of the following types:
| --- | --- | --- |
| Simple String | Exactly matches the provided value. | `"esadmin"` |
| Wildcard String | Matches the provided value using a wildcard. | `"*,dc=example,dc=com"` |
-| Regular Expression | Matches the provided value using a [Lucene regexp](elasticsearch://reference/query-languages/regexp-syntax.md). | `"/.*-admin[0-9]*/"` |
+| Regular Expression | Matches the provided value using a [Lucene regexp](elasticsearch://reference/query-languages/query-dsl/regexp-syntax.md). | `"/.*-admin[0-9]*/"` |
| Number | Matches an equivalent numerical value. | `7` |
| Null | Matches a null or missing value. | `null` |
| Array | Tests each element in the array in accordance with the above definitions. If *any* of elements match, the match is successful. | `["admin", "operator"]` |
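For example, a field rule combining two of these value types might look like the following (a sketch; the DN pattern and realm name are illustrative):

```json
{
  "all": [
    { "field": { "dn": "/.*-admin[0-9]*/" } },
    { "field": { "realm.name": "ldap1" } }
  ]
}
```

This rule matches users whose DN matches the regular expression *and* who authenticated through the `ldap1` realm.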
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md b/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md
index 0932047794..9b96935d38 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md
@@ -160,7 +160,7 @@ The remote indices privileges entry has an extra mandatory `clusters` field comp
}
```
-1. A list of remote cluster aliases. It supports literal strings as well as [wildcards](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#api-multi-index) and [regular expressions](elasticsearch://reference/query-languages/regexp-syntax.md). This field is required.
+1. A list of remote cluster aliases. It supports literal strings as well as [wildcards](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#api-multi-index) and [regular expressions](elasticsearch://reference/query-languages/query-dsl/regexp-syntax.md). This field is required.
2. A list of data streams, indices, and aliases to which the permissions in this entry apply. Supports wildcards (`*`).
3. The index level privileges the owners of the role have on the associated data streams and indices specified in the `names` argument.
4. Specification for document fields the owners of the role have read access to. See [Setting up field and document level security](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md) for details.
@@ -186,7 +186,7 @@ The following describes the structure of a remote cluster permissions entry:
}
```
-1. A list of remote cluster aliases. It supports literal strings as well as [wildcards](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#api-multi-index) and [regular expressions](elasticsearch://reference/query-languages/regexp-syntax.md). This field is required.
+1. A list of remote cluster aliases. It supports literal strings as well as [wildcards](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#api-multi-index) and [regular expressions](elasticsearch://reference/query-languages/query-dsl/regexp-syntax.md). This field is required.
2. The cluster level privileges for the remote cluster. The allowed values here are a subset of the [cluster privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster). The [builtin privileges API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-builtin-privileges) can be used to determine which privileges are allowed here. This field is required.
diff --git a/explore-analyze/alerts-cases/alerts/alerting-setup.md b/explore-analyze/alerts-cases/alerts/alerting-setup.md
index e5786f81bd..151a2b3df1 100644
--- a/explore-analyze/alerts-cases/alerts/alerting-setup.md
+++ b/explore-analyze/alerts-cases/alerts/alerting-setup.md
@@ -22,7 +22,7 @@ If you are using an **on-premises** {{stack}} deployment with [**security**](../
* If you are unable to access {{kib}} {{alert-features}}, ensure that you have not [explicitly disabled API keys](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#api-key-service-settings).
-The alerting framework uses queries that require the `search.allow_expensive_queries` setting to be `true`. See the scripts [documentation](elasticsearch://reference/query-languages/query-dsl-script-query.md#_allow_expensive_queries_4).
+The alerting framework uses queries that require the `search.allow_expensive_queries` setting to be `true`. See the scripts [documentation](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-query.md#_allow_expensive_queries_4).
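If the setting has been disabled, it can be re-enabled dynamically through the cluster settings API, for example:

```console
PUT _cluster/settings
{
  "persistent": {
    "search.allow_expensive_queries": true
  }
}
```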
## Production considerations and scaling guidance [alerting-setup-production]
diff --git a/explore-analyze/geospatial-analysis.md b/explore-analyze/geospatial-analysis.md
index f9b15647e0..41505476bf 100644
--- a/explore-analyze/geospatial-analysis.md
+++ b/explore-analyze/geospatial-analysis.md
@@ -32,7 +32,7 @@ Data is often messy and incomplete. [Ingest pipelines](../manage-data/ingest/tra
## Query [geospatial-query]
-[Geo queries](elasticsearch://reference/query-languages/geo-queries.md) answer location-driven questions. Find documents that intersect with, are within, are contained by, or do not intersect your query geometry. Combine geospatial queries with full text search queries for unparalleled searching experience. For example, "Show me all subscribers that live within 5 miles of our new gym location, that joined in the last year and have running mentioned in their profile".
+[Geo queries](elasticsearch://reference/query-languages/query-dsl/geo-queries.md) answer location-driven questions. Find documents that intersect with, are within, are contained by, or do not intersect your query geometry. Combine geospatial queries with full-text search queries for an unparalleled search experience. For example, "Show me all subscribers that live within 5 miles of our new gym location, that joined in the last year and have running mentioned in their profile".
## ES|QL [esql-query]
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md b/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
index 16a202dd9a..d6853ec4ca 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
@@ -37,4 +37,4 @@ For the resource levels when adaptive resources are enabled, refer to <[*Trained
Each allocation of a model deployment has a dedicated queue to buffer {{infer}} requests. The size of this queue is determined by the `queue_capacity` parameter in the [start trained model deployment API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-trained-model-deployment). When the queue reaches its maximum capacity, new requests are declined until some of the queued requests are processed, creating available capacity once again. When multiple ingest pipelines reference the same deployment, the queue can fill up, resulting in rejected requests. Consider using dedicated deployments to prevent this situation.
-{{infer-cap}} requests originating from search, such as the [`text_expansion` query](elasticsearch://reference/query-languages/query-dsl-text-expansion-query.md), have a higher priority compared to non-search requests. The {{infer}} ingest processor generates normal priority requests. If both a search query and an ingest processor use the same deployment, the search requests with higher priority skip ahead in the queue for processing before the lower priority ingest requests. This prioritization accelerates search responses while potentially slowing down ingest where response time is less critical.
+{{infer-cap}} requests originating from search, such as the [`text_expansion` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-text-expansion-query.md), have higher priority than non-search requests. The {{infer}} ingest processor generates normal-priority requests. If both a search query and an ingest processor use the same deployment, the higher-priority search requests skip ahead in the queue and are processed before the lower-priority ingest requests. This prioritization accelerates search responses while potentially slowing down ingest, where response time is less critical.
diff --git a/explore-analyze/query-filter/languages/eql.md b/explore-analyze/query-filter/languages/eql.md
index 42646e5c62..f90e113823 100644
--- a/explore-analyze/query-filter/languages/eql.md
+++ b/explore-analyze/query-filter/languages/eql.md
@@ -18,7 +18,7 @@ Event Query Language (EQL) is a query language for event-based time series data,
## Advantages of EQL [eql-advantages]
* **EQL lets you express relationships between events.**
Many query languages allow you to match single events. EQL lets you match a sequence of events across different event categories and time spans.
-* **EQL has a low learning curve.**
-[EQL syntax](elasticsearch://reference/query-languages/eql-syntax.md) looks like other common query languages, such as SQL. EQL lets you write and read queries intuitively, which makes for quick, iterative searching.
+* **EQL has a low learning curve.**
+[EQL syntax](elasticsearch://reference/query-languages/eql/eql-syntax.md) looks like other common query languages, such as SQL. EQL lets you write and read queries intuitively, which makes for quick, iterative searching.
* **EQL is designed for security use cases.**
While you can use it for any event-based data, we created EQL for threat hunting. EQL not only supports indicator of compromise (IOC) searches but can describe activity that goes beyond IOCs.
@@ -26,7 +26,7 @@ Event Query Language (EQL) is a query language for event-based time series data,
With the exception of sample queries, EQL searches require that the searched data stream or index contains a *timestamp* field. By default, EQL uses the `@timestamp` field from the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current).
-EQL searches also require an *event category* field, unless you use the [`any` keyword](elasticsearch://reference/query-languages/eql-syntax.md#eql-syntax-match-any-event-category) to search for documents without an event category field. By default, EQL uses the ECS `event.category` field.
+EQL searches also require an *event category* field, unless you use the [`any` keyword](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-syntax-match-any-event-category) to search for documents without an event category field. By default, EQL uses the ECS `event.category` field.
To use a different timestamp or event category field, see [Specify a timestamp or event category field](#specify-a-timestamp-or-event-category-field).
@@ -38,7 +38,7 @@ While no schema is required to use EQL, we recommend using the [ECS](https://www
## Run an EQL search [run-an-eql-search]
-Use the [EQL search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-eql-search) to run a [basic EQL query](elasticsearch://reference/query-languages/eql-syntax.md#eql-basic-syntax).
+Use the [EQL search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-eql-search) to run a [basic EQL query](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-basic-syntax).
```console
GET /my-data-stream/_eql/search
@@ -119,7 +119,7 @@ GET /my-data-stream/_eql/search
## Search for a sequence of events [eql-search-sequence]
-Use EQL’s [sequence syntax](elasticsearch://reference/query-languages/eql-syntax.md#eql-sequences) to search for a series of ordered events. List the event items in ascending chronological order, with the most recent event listed last:
+Use EQL’s [sequence syntax](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-sequences) to search for a series of ordered events. List the event items in ascending chronological order, with the most recent event listed last:
```console
GET /my-data-stream/_eql/search
@@ -188,7 +188,7 @@ The response’s `hits.sequences` property contains the 10 most recent matching
}
```
-Use [`with maxspan`](elasticsearch://reference/query-languages/eql-syntax.md#eql-with-maxspan-keywords) to constrain matching sequences to a timespan:
+Use [`with maxspan`](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-with-maxspan-keywords) to constrain matching sequences to a timespan:
```console
GET /my-data-stream/_eql/search
@@ -201,7 +201,7 @@ GET /my-data-stream/_eql/search
}
```
-Use `!` to match [missing events](elasticsearch://reference/query-languages/eql-syntax.md#eql-missing-events): events in a sequence that do not meet a condition within a given timespan:
+Use `!` to match [missing events](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-missing-events): events in a sequence that do not meet a condition within a given timespan:
```console
GET /my-data-stream/_eql/search
@@ -276,7 +276,7 @@ Missing events are indicated in the response as `"missing": true`:
}
```
-Use the [`by` keyword](elasticsearch://reference/query-languages/eql-syntax.md#eql-by-keyword) to match events that share the same field values:
+Use the [`by` keyword](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-by-keyword) to match events that share the same field values:
```console
GET /my-data-stream/_eql/search
@@ -320,7 +320,7 @@ The `hits.sequences.join_keys` property contains the shared field values.
}
```
-Use the [`until` keyword](elasticsearch://reference/query-languages/eql-syntax.md#eql-until-keyword) to specify an expiration event for sequences. Matching sequences must end before this event.
+Use the [`until` keyword](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-until-keyword) to specify an expiration event for sequences. Matching sequences must end before this event.
```console
GET /my-data-stream/_eql/search
@@ -337,7 +337,7 @@ GET /my-data-stream/_eql/search
## Sample chronologically unordered events [eql-search-sample]
-Use EQL’s [sample syntax](elasticsearch://reference/query-languages/eql-syntax.md#eql-samples) to search for events that match one or more join keys and a set of filters. Samples are similar to sequences, but do not return events in chronological order. In fact, sample queries can run on data without a timestamp. Sample queries can be useful to find correlations in events that don’t always occur in the same sequence, or that occur across long time spans.
+Use EQL’s [sample syntax](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-samples) to search for events that match one or more join keys and a set of filters. Samples are similar to sequences, but do not return events in chronological order. In fact, sample queries can run on data without a timestamp. Sample queries can be useful to find correlations in events that don’t always occur in the same sequence, or that occur across long time spans.
::::{dropdown} Click to show the sample data used in the examples below
```console
@@ -553,7 +553,7 @@ POST /my-index-000003/_bulk?refresh
::::
-A sample query specifies at least one join key, using the [`by` keyword](elasticsearch://reference/query-languages/eql-syntax.md#eql-by-keyword), and up to five filters:
+A sample query specifies at least one join key, using the [`by` keyword](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-by-keyword), and up to five filters:
```console
GET /my-index*/_eql/search
diff --git a/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md b/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md
index 902cb08fb8..42d7f06187 100644
--- a/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md
+++ b/explore-analyze/query-filter/languages/example-detect-threats-with-eql.md
@@ -207,7 +207,7 @@ The query matches an event, confirming `scrobj.dll` was loaded.
## Determine the likelihood of success [eql-ex-detemine-likelihood-of-success]
-In many cases, attackers use malicious scripts to connect to remote servers or download other files. Use an [EQL sequence query](elasticsearch://reference/query-languages/eql-syntax.md#eql-sequences) to check for the following series of events:
+In many cases, attackers use malicious scripts to connect to remote servers or download other files. Use an [EQL sequence query](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-sequences) to check for the following series of events:
1. A `regsvr32.exe` process
2. A load of the `scrobj.dll` library by the same process
diff --git a/explore-analyze/query-filter/languages/lucene-query-syntax.md b/explore-analyze/query-filter/languages/lucene-query-syntax.md
index 3fa32a1adc..6b1f6ec293 100644
--- a/explore-analyze/query-filter/languages/lucene-query-syntax.md
+++ b/explore-analyze/query-filter/languages/lucene-query-syntax.md
@@ -8,7 +8,7 @@ mapped_pages:
# Lucene query syntax [lucene-query]
-Lucene query syntax is available to {{kib}} users who opt out of the [{{kib}} Query Language](kql.md). Full documentation for this syntax is available as part of {{es}} [query string syntax](elasticsearch://reference/query-languages/query-dsl-query-string-query.md#query-string-syntax).
+Lucene query syntax is available to {{kib}} users who opt out of the [{{kib}} Query Language](kql.md). Full documentation for this syntax is available as part of {{es}} [query string syntax](elasticsearch://reference/query-languages/query-dsl/query-dsl-query-string-query.md#query-string-syntax).
The main reason to use the Lucene query syntax in {{kib}} is for advanced Lucene features, such as regular expressions or fuzzy term matching. However, Lucene syntax is not able to search nested objects or scripted fields.
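For instance, queries along these lines use the wildcard, boolean, and regular-expression features mentioned above (the field names are illustrative):

```
machine.os:win* AND NOT response:404
name:/joh?n(ath[oa]n)/
```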
diff --git a/explore-analyze/query-filter/languages/querydsl.md b/explore-analyze/query-filter/languages/querydsl.md
index efbcc4c5a9..62f7f0ca47 100644
--- a/explore-analyze/query-filter/languages/querydsl.md
+++ b/explore-analyze/query-filter/languages/querydsl.md
@@ -30,7 +30,7 @@ Query DSL supports a wide range of search techniques, including the following:
* [**Keyword search**](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md): Search for exact matches using `keyword` fields.
* [**Semantic search**](/solutions/search/semantic-search/semantic-search-semantic-text.md): Search `semantic_text` fields using dense or sparse vector search on embeddings generated in your {{es}} cluster.
* [**Vector search**](/solutions/search/vector/knn.md): Search for similar dense vectors using the kNN algorithm for embeddings generated outside of {{es}}.
-* [**Geospatial search**](elasticsearch://reference/query-languages/geo-queries.md): Search for locations and calculate spatial relationships using geospatial queries.
+* [**Geospatial search**](elasticsearch://reference/query-languages/query-dsl/geo-queries.md): Search for locations and calculate spatial relationships using geospatial queries.
You can also filter data using Query DSL. Filters enable you to include or exclude documents by retrieving documents that match specific field-level criteria. A query that uses the `filter` parameter indicates [filter context](#filter-context).
@@ -53,9 +53,9 @@ Run aggregations by specifying the [search API](https://www.elastic.co/docs/api/
Think of the Query DSL as an AST (Abstract Syntax Tree) of queries, consisting of two types of clauses:
-**Leaf query clauses**: Leaf query clauses look for a particular value in a particular field, such as the [`match`](elasticsearch://reference/query-languages/query-dsl-match-query.md), [`term`](elasticsearch://reference/query-languages/query-dsl-term-query.md) or [`range`](elasticsearch://reference/query-languages/query-dsl-range-query.md) queries. These queries can be used by themselves.
+**Leaf query clauses**: Leaf query clauses look for a particular value in a particular field, such as the [`match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query.md), [`term`](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md) or [`range`](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) queries. These queries can be used by themselves.
-**Compound query clauses**: Compound query clauses wrap other leaf **or** compound queries and are used to combine multiple queries in a logical fashion (such as the [`bool`](elasticsearch://reference/query-languages/query-dsl-bool-query.md) or [`dis_max`](elasticsearch://reference/query-languages/query-dsl-dis-max-query.md) query), or to alter their behavior (such as the [`constant_score`](elasticsearch://reference/query-languages/query-dsl-constant-score-query.md) query).
+**Compound query clauses**: Compound query clauses wrap other leaf **or** compound queries and are used to combine multiple queries in a logical fashion (such as the [`bool`](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md) or [`dis_max`](elasticsearch://reference/query-languages/query-dsl/query-dsl-dis-max-query.md) query), or to alter their behavior (such as the [`constant_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-constant-score-query.md) query).
Query clauses behave differently depending on whether they are used in [query context or filter context](#query-filter-context).
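As a minimal sketch, a compound `bool` clause wrapping two leaf clauses might look like the following (the `title` and `year` fields are hypothetical):

```console
GET /_search
{
  "query": {
    "bool": {
      "must":   [ { "match": { "title": "search" } } ],
      "filter": [ { "range": { "year": { "gte": 2020 } } } ]
    }
  }
}
```

Here `match` and `range` are leaf clauses; the `bool` clause combines them, with the `match` clause contributing to scoring and the `range` clause running in filter context.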
@@ -65,22 +65,22 @@ $$$query-dsl-allow-expensive-queries$$$
- Queries that need to do linear scans to identify matches:
- - [`script` queries](elasticsearch://reference/query-languages/query-dsl-script-query.md)
+ - [`script` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-query.md)
- queries on [numeric](elasticsearch://reference/elasticsearch/mapping-reference/number.md), [date](elasticsearch://reference/elasticsearch/mapping-reference/date.md), [boolean](elasticsearch://reference/elasticsearch/mapping-reference/boolean.md), [ip](elasticsearch://reference/elasticsearch/mapping-reference/ip.md), [geo_point](elasticsearch://reference/elasticsearch/mapping-reference/geo-point.md) or [keyword](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md) fields that are not indexed but have [doc values](elasticsearch://reference/elasticsearch/mapping-reference/doc-values.md) enabled
- Queries that have a high up-front cost:
- - [`fuzzy` queries](elasticsearch://reference/query-languages/query-dsl-fuzzy-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields)
- - [`regexp` queries](elasticsearch://reference/query-languages/query-dsl-regexp-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields)
- - [`prefix` queries](elasticsearch://reference/query-languages/query-dsl-prefix-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields or those without [`index_prefixes`](elasticsearch://reference/elasticsearch/mapping-reference/index-prefixes.md))
- - [`wildcard` queries](elasticsearch://reference/query-languages/query-dsl-wildcard-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields)
- - [`range` queries](elasticsearch://reference/query-languages/query-dsl-range-query.md) on [`text`](elasticsearch://reference/elasticsearch/mapping-reference/text.md) and [`keyword`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md) fields
+ - [`fuzzy` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-fuzzy-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields)
+ - [`regexp` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-regexp-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields)
+ - [`prefix` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-prefix-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields or those without [`index_prefixes`](elasticsearch://reference/elasticsearch/mapping-reference/index-prefixes.md))
+ - [`wildcard` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-wildcard-query.md) (except on [`wildcard`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md#wildcard-field-type) fields)
+ - [`range` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) on [`text`](elasticsearch://reference/elasticsearch/mapping-reference/text.md) and [`keyword`](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md) fields
- - [Joining queries](elasticsearch://reference/query-languages/joining-queries.md)
+ - [Joining queries](elasticsearch://reference/query-languages/query-dsl/joining-queries.md)
- Queries that may have a high per-document cost:
- - [`script_score` queries](elasticsearch://reference/query-languages/query-dsl-script-score-query.md)
- - [`percolate` queries](elasticsearch://reference/query-languages/query-dsl-percolate-query.md)
+ - [`script_score` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md)
+ - [`percolate` queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-percolate-query.md)
The execution of such queries can be prevented by setting `search.allow_expensive_queries` to `false` (defaults to `true`).
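For example, the setting can be changed dynamically with the cluster update settings API:

```console
PUT /_cluster/settings
{
  "persistent": {
    "search.allow_expensive_queries": false
  }
}
```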
@@ -130,8 +130,8 @@ Common filter applications include:
Filter context applies when a query clause is passed to a `filter` parameter, such as:
-* `filter` or `must_not` parameters in [`bool`](elasticsearch://reference/query-languages/query-dsl-bool-query.md) queries
-* `filter` parameter in [`constant_score`](elasticsearch://reference/query-languages/query-dsl-constant-score-query.md) queries
+* `filter` or `must_not` parameters in [`bool`](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md) queries
+* `filter` parameter in [`constant_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-constant-score-query.md) queries
* [`filter`](elasticsearch://reference/data-analysis/aggregations/search-aggregations-bucket-filter-aggregation.md) aggregations
Filters optimize query performance and efficiency, especially for structured data queries and when combined with full-text searches.
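A sketch of filter context inside a `bool` query, using hypothetical `status` and `@timestamp` fields — the `term` and `range` clauses filter documents without contributing to `_score`:

```console
GET /_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "search" } }
      ],
      "filter": [
        { "term":  { "status": "published" } },
        { "range": { "@timestamp": { "gte": "now-1d/d" } } }
      ]
    }
  }
}
```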
diff --git a/explore-analyze/query-filter/languages/sql-functions-search.md b/explore-analyze/query-filter/languages/sql-functions-search.md
index 87252476b5..c721deca7a 100644
--- a/explore-analyze/query-filter/languages/sql-functions-search.md
+++ b/explore-analyze/query-filter/languages/sql-functions-search.md
@@ -28,7 +28,7 @@ MATCH(
3. additional parameters; optional
-**Description**: A full-text search option, in the form of a predicate, available in Elasticsearch SQL that gives the user control over powerful [match](elasticsearch://reference/query-languages/query-dsl-match-query.md) and [multi_match](elasticsearch://reference/query-languages/query-dsl-multi-match-query.md) {{es}} queries.
+**Description**: A full-text search option, in the form of a predicate, available in Elasticsearch SQL that gives the user control over powerful [match](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query.md) and [multi_match](elasticsearch://reference/query-languages/query-dsl/query-dsl-multi-match-query.md) {{es}} queries.
The first parameter is the field or fields to match against. If it receives only one value, Elasticsearch SQL uses a `match` query to perform the search:
@@ -57,7 +57,7 @@ Frank Herbert |God Emperor of Dune|7.0029488
```
::::{note}
-The `multi_match` query in {{es}} has the option of [per-field boosting](elasticsearch://reference/query-languages/query-dsl-multi-match-query.md) that gives preferential weight (in terms of scoring) to fields being searched in, using the `^` character. In the example above, the `name` field has a greater weight in the final score than the `author` field when searching for `frank dune` text in both of them.
+The `multi_match` query in {{es}} has the option of [per-field boosting](elasticsearch://reference/query-languages/query-dsl/query-dsl-multi-match-query.md) that gives preferential weight (in terms of scoring) to fields being searched in, using the `^` character. In the example above, the `name` field has a greater weight in the final score than the `author` field when searching for `frank dune` text in both of them.
::::
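For illustration, a boosted multi-field `MATCH` predicate might take the following shape (the `library` index and the `name`/`author` fields mirror the surrounding examples):

```sql
SELECT author, name, SCORE() FROM library WHERE MATCH('name^2,author', 'frank dune');
```

The `^2` suffix gives matches in `name` twice the weight of matches in `author` when the final score is computed.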
@@ -98,7 +98,7 @@ QUERY(
2. additional parameters; optional
-**Description**: Just like `MATCH`, `QUERY` is a full-text search predicate that gives the user control over the [query_string](elasticsearch://reference/query-languages/query-dsl-query-string-query.md) query in {{es}}.
+**Description**: Just like `MATCH`, `QUERY` is a full-text search predicate that gives the user control over the [query_string](elasticsearch://reference/query-languages/query-dsl/query-dsl-query-string-query.md) query in {{es}}.
The first parameter is the input that is passed as-is to the `query_string` query, which means that anything `query_string` accepts in its `query` field can be used here as well:
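As a sketch, any query-string syntax can appear in the first argument; the `library` index and `name` field here are illustrative:

```sql
SELECT author, name FROM library WHERE QUERY('name:dune');
```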
@@ -159,7 +159,7 @@ SCORE()
**Description**: Returns the [relevance](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/relevance-intro.html) of a given input to the executed query. The higher the score, the more relevant the data.
::::{note}
-When doing multiple text queries in the `WHERE` clause then, their scores will be combined using the same rules as {{es}}'s [bool query](elasticsearch://reference/query-languages/query-dsl-bool-query.md).
+When running multiple text queries in the `WHERE` clause, their scores are combined using the same rules as {{es}}'s [bool query](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md).
::::
diff --git a/explore-analyze/query-filter/languages/sql-like-rlike-operators.md b/explore-analyze/query-filter/languages/sql-like-rlike-operators.md
index 2ec59eef6c..02508f74ab 100644
--- a/explore-analyze/query-filter/languages/sql-like-rlike-operators.md
+++ b/explore-analyze/query-filter/languages/sql-like-rlike-operators.md
@@ -73,7 +73,7 @@ RLIKE constant_exp <2>
**Description**: This operator is similar to `LIKE`, but the user is not limited to searching for a string based on a fixed pattern with the percent sign (`%`) and underscore (`_`); the pattern in this case is a regular expression, which allows the construction of more flexible patterns.
-For supported syntax, see [*Regular expression syntax*](elasticsearch://reference/query-languages/regexp-syntax.md).
+For supported syntax, see [*Regular expression syntax*](elasticsearch://reference/query-languages/query-dsl/regexp-syntax.md).
```sql
SELECT author, name FROM library WHERE name RLIKE 'Child.* Dune';
diff --git a/explore-analyze/query-filter/languages/sql-syntax-select.md b/explore-analyze/query-filter/languages/sql-syntax-select.md
index 39212d7262..cc82f352fe 100644
--- a/explore-analyze/query-filter/languages/sql-syntax-select.md
+++ b/explore-analyze/query-filter/languages/sql-syntax-select.md
@@ -507,7 +507,7 @@ Ordering by aggregation is possible for up to **10000** entries for memory consu
When doing full-text queries in the `WHERE` clause, results can be returned based on their [score](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/relevance-intro.html) or *relevance* to the given query.
::::{note}
-When doing multiple text queries in the `WHERE` clause then, their scores will be combined using the same rules as {{es}}'s [bool query](elasticsearch://reference/query-languages/query-dsl-bool-query.md).
+When running multiple text queries in the `WHERE` clause, their scores are combined using the same rules as {{es}}'s [bool query](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md).
::::
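To order results by relevance, `SCORE()` can appear in both the select list and the `ORDER BY` clause; the `library` index and `name` field below are illustrative:

```sql
SELECT SCORE(), * FROM library WHERE MATCH(name, 'dune') ORDER BY SCORE() DESC;
```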
diff --git a/explore-analyze/scripting/modules-scripting-fields.md b/explore-analyze/scripting/modules-scripting-fields.md
index 5f891ea4c8..075fcb076e 100644
--- a/explore-analyze/scripting/modules-scripting-fields.md
+++ b/explore-analyze/scripting/modules-scripting-fields.md
@@ -36,9 +36,9 @@ Field values can be accessed from a script using [doc-values](#modules-scripting
### Accessing the score of a document within a script [scripting-score]
-Scripts used in the [`function_score` query](elasticsearch://reference/query-languages/query-dsl-function-score-query.md), in [script-based sorting](elasticsearch://reference/elasticsearch/rest-apis/sort-search-results.md), or in [aggregations](../query-filter/aggregations.md) have access to the `_score` variable which represents the current relevance score of a document.
+Scripts used in the [`function_score` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-function-score-query.md), in [script-based sorting](elasticsearch://reference/elasticsearch/rest-apis/sort-search-results.md), or in [aggregations](../query-filter/aggregations.md) have access to the `_score` variable which represents the current relevance score of a document.
-Here’s an example of using a script in a [`function_score` query](elasticsearch://reference/query-languages/query-dsl-function-score-query.md) to alter the relevance `_score` of each document:
+Here’s an example of using a script in a [`function_score` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-function-score-query.md) to alter the relevance `_score` of each document:
```console
PUT my-index-000001/_doc/1?refresh
@@ -76,9 +76,9 @@ GET my-index-000001/_search
### Accessing term statistics of a document within a script [scripting-term-statistics]
-Scripts used in a [`script_score`](elasticsearch://reference/query-languages/query-dsl-script-score-query.md) query have access to the `_termStats` variable which provides statistical information about the terms in the child query.
+Scripts used in a [`script_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md) query have access to the `_termStats` variable which provides statistical information about the terms in the child query.
-In the following example, `_termStats` is used within a [`script_score`](elasticsearch://reference/query-languages/query-dsl-script-score-query.md) query to retrieve the average term frequency for the terms `quick`, `brown`, and `fox` in the `text` field:
+In the following example, `_termStats` is used within a [`script_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md) query to retrieve the average term frequency for the terms `quick`, `brown`, and `fox` in the `text` field:
```console
PUT my-index-000001/_doc/1?refresh
diff --git a/explore-analyze/transforms/transform-limitations.md b/explore-analyze/transforms/transform-limitations.md
index 226f02477c..c1e8cc680e 100644
--- a/explore-analyze/transforms/transform-limitations.md
+++ b/explore-analyze/transforms/transform-limitations.md
@@ -85,7 +85,7 @@ The {{transform}} retrieves data in batches which means it calculates several bu
### Handling dynamic adjustments for many terms [transform-dynamic-adjustments-limitations]
-For each checkpoint, entities are identified that have changed since the last time the check was performed. This list of changed entities is supplied as a [terms query](elasticsearch://reference/query-languages/query-dsl-terms-query.md) to the {{transform}} composite aggregation, one page at a time. Then updates are applied to the destination index for each page of entities.
+For each checkpoint, the {{transform}} identifies entities that have changed since the last check. This list of changed entities is supplied as a [terms query](elasticsearch://reference/query-languages/query-dsl/query-dsl-terms-query.md) to the {{transform}} composite aggregation, one page at a time. Updates are then applied to the destination index for each page of entities.
The page `size` is defined by `max_page_search_size` which is also used to define the number of buckets returned by the composite aggregation search. The default value is 500, the minimum is 10.
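As an illustrative sketch (the transform ID, index names, and fields are hypothetical), `max_page_search_size` is set under `settings` when creating the {{transform}}:

```console
PUT _transform/example-transform
{
  "source": { "index": "orders" },
  "dest": { "index": "orders-summary" },
  "pivot": {
    "group_by": { "customer_id": { "terms": { "field": "customer_id" } } },
    "aggregations": { "total_spent": { "sum": { "field": "price" } } }
  },
  "settings": { "max_page_search_size": 1000 }
}
```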
diff --git a/explore-analyze/visualize/maps/maps-create-filter-from-map.md b/explore-analyze/visualize/maps/maps-create-filter-from-map.md
index 46a5f492a8..591d8d486a 100644
--- a/explore-analyze/visualize/maps/maps-create-filter-from-map.md
+++ b/explore-analyze/visualize/maps/maps-create-filter-from-map.md
@@ -35,7 +35,7 @@ A spatial filter narrows search results to documents that either intersect with,
Spatial filters have the following properties:
* **Geometry label** enables you to provide a meaningful name for your spatial filter.
-* **Spatial relation** determines the [spatial relation operator](elasticsearch://reference/query-languages/query-dsl-geo-shape-query.md#geo-shape-spatial-relations) to use at search time.
+* **Spatial relation** determines the [spatial relation operator](elasticsearch://reference/query-languages/query-dsl/query-dsl-geo-shape-query.md#geo-shape-spatial-relations) to use at search time.
* **Action** specifies whether to apply the filter to the current view or to a drilldown action.
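Under the hood, a spatial filter translates to a `geo_shape` query; the following sketch uses a hypothetical `my-locations` index and `geometry` field with the `within` relation:

```console
GET /my-locations/_search
{
  "query": {
    "geo_shape": {
      "geometry": {
        "shape": {
          "type": "envelope",
          "coordinates": [ [ -74.1, 40.8 ], [ -73.9, 40.7 ] ]
        },
        "relation": "within"
      }
    }
  }
}
```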
::::{note}
diff --git a/manage-data/data-store/data-streams/modify-data-stream.md b/manage-data/data-store/data-streams/modify-data-stream.md
index 82d70cac53..3c2d53cd7e 100644
--- a/manage-data/data-store/data-streams/modify-data-stream.md
+++ b/manage-data/data-store/data-streams/modify-data-stream.md
@@ -426,7 +426,7 @@ Follow these steps:
You can also use a query to reindex only a subset of documents with each request.
- The following [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) request copies documents from `my-data-stream` to `new-data-stream`. The request uses a [`range` query](elasticsearch://reference/query-languages/query-dsl-range-query.md) to only reindex documents with a timestamp within the last week. Note the request’s `op_type` is `create`.
+ The following [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) request copies documents from `my-data-stream` to `new-data-stream`. The request uses a [`range` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) to only reindex documents with a timestamp within the last week. Note the request’s `op_type` is `create`.
```console
POST /_reindex
diff --git a/manage-data/data-store/mapping/explore-data-with-runtime-fields.md b/manage-data/data-store/mapping/explore-data-with-runtime-fields.md
index 4fe70b3128..dc3ed068d5 100644
--- a/manage-data/data-store/mapping/explore-data-with-runtime-fields.md
+++ b/manage-data/data-store/mapping/explore-data-with-runtime-fields.md
@@ -238,7 +238,7 @@ If the script didn’t include this condition, the query would fail on any shard
### Search for documents in a specific range [runtime-examples-grok-range]
-You can also run a [range query](elasticsearch://reference/query-languages/query-dsl-range-query.md) that operates on the `timestamp` field. The following query returns any documents where the `timestamp` is greater than or equal to `2020-04-30T14:31:27-05:00`:
+You can also run a [range query](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) that operates on the `timestamp` field. The following query returns any documents where the `timestamp` is greater than or equal to `2020-04-30T14:31:27-05:00`:
```console
GET my-index-000001/_search
diff --git a/manage-data/data-store/mapping/retrieve-runtime-field.md b/manage-data/data-store/mapping/retrieve-runtime-field.md
index a6da515e0f..ea3f7144c0 100644
--- a/manage-data/data-store/mapping/retrieve-runtime-field.md
+++ b/manage-data/data-store/mapping/retrieve-runtime-field.md
@@ -202,7 +202,7 @@ POST logs/_search
}
```
-1. Define a runtime field in the main search request with a type of `lookup` that retrieves fields from the target index using the [`term`](elasticsearch://reference/query-languages/query-dsl-term-query.md) queries.
+1. Define a runtime field in the main search request with a type of `lookup` that retrieves fields from the target index using [`term`](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md) queries.
2. The target index that the lookup query executes against
3. A field on the main index whose values are used as the input values of the lookup term query
4. A field on the lookup index which the lookup query searches against
diff --git a/manage-data/data-store/mapping/runtime-fields.md b/manage-data/data-store/mapping/runtime-fields.md
index 7e86a09b03..1c31ebf77a 100644
--- a/manage-data/data-store/mapping/runtime-fields.md
+++ b/manage-data/data-store/mapping/runtime-fields.md
@@ -37,7 +37,7 @@ Runtime fields can replace many of the ways you can use scripting with the `_sea
You can use [script fields](elasticsearch://reference/elasticsearch/rest-apis/retrieve-selected-fields.md#script-fields) to access values in `_source` and return calculated values based on a script evaluation. Runtime fields have the same capabilities, but provide greater flexibility because you can query and aggregate on runtime fields in a search request. Script fields can only fetch values.
-Similarly, you could write a [script query](elasticsearch://reference/query-languages/query-dsl-script-query.md) that filters documents in a search request based on a script. Runtime fields provide a very similar feature that is more flexible. You write a script to create field values and they are available everywhere, such as [`fields`](elasticsearch://reference/elasticsearch/rest-apis/retrieve-selected-fields.md), [all queries](../../../explore-analyze/query-filter/languages/querydsl.md), and [aggregations](../../../explore-analyze/query-filter/aggregations.md).
+Similarly, you could write a [script query](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-query.md) that filters documents in a search request based on a script. Runtime fields provide a very similar feature that is more flexible. You write a script to create field values and they are available everywhere, such as [`fields`](elasticsearch://reference/elasticsearch/rest-apis/retrieve-selected-fields.md), [all queries](../../../explore-analyze/query-filter/languages/querydsl.md), and [aggregations](../../../explore-analyze/query-filter/aggregations.md).
You can also use scripts to [sort search results](elasticsearch://reference/elasticsearch/rest-apis/sort-search-results.md#script-based-sorting), but that same script works exactly the same in a runtime field.
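A sketch of a runtime field defined directly in a search request (the index name, `@timestamp` field, and `day_of_week` field are hypothetical) — the field can be queried and returned like any mapped field:

```console
GET my-index-000001/_search
{
  "runtime_mappings": {
    "day_of_week": {
      "type": "keyword",
      "script": "emit(doc['@timestamp'].value.dayOfWeekEnum.getDisplayName(TextStyle.FULL, Locale.ENGLISH))"
    }
  },
  "query": { "term": { "day_of_week": "Monday" } },
  "fields": [ "day_of_week" ]
}
```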
diff --git a/manage-data/data-store/text-analysis/index-search-analysis.md b/manage-data/data-store/text-analysis/index-search-analysis.md
index e7834e1d0b..e8ad3876d0 100644
--- a/manage-data/data-store/text-analysis/index-search-analysis.md
+++ b/manage-data/data-store/text-analysis/index-search-analysis.md
@@ -14,7 +14,7 @@ Index time
: When a document is indexed, any [`text`](elasticsearch://reference/elasticsearch/mapping-reference/text.md) field values are analyzed.
Search time
-: When running a [full-text search](elasticsearch://reference/query-languages/full-text-queries.md) on a `text` field, the query string (the text the user is searching for) is analyzed. Search time is also called *query time*.
+: When running a [full-text search](elasticsearch://reference/query-languages/query-dsl/full-text-queries.md) on a `text` field, the query string (the text the user is searching for) is analyzed. Search time is also called *query time*.
For more details on text analysis at search time, refer to [Text analysis during search](/solutions/search/full-text/text-analysis-during-search.md).
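You can inspect what either phase produces with the `_analyze` API; for example, the `standard` analyzer tokenizes and lowercases the input:

```console
GET /_analyze
{
  "analyzer": "standard",
  "text": "The QUICK brown foxes jumped"
}
```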
diff --git a/manage-data/data-store/text-analysis/specify-an-analyzer.md b/manage-data/data-store/text-analysis/specify-an-analyzer.md
index 44553bf8d4..69f56074d0 100644
--- a/manage-data/data-store/text-analysis/specify-an-analyzer.md
+++ b/manage-data/data-store/text-analysis/specify-an-analyzer.md
@@ -102,9 +102,9 @@ If none of these parameters are specified, the [`standard` analyzer](elasticsear
## Specify the search analyzer for a query [specify-search-query-analyzer]
-When writing a [full-text query](elasticsearch://reference/query-languages/full-text-queries.md), you can use the `analyzer` parameter to specify a search analyzer. If provided, this overrides any other search analyzers.
+When writing a [full-text query](elasticsearch://reference/query-languages/query-dsl/full-text-queries.md), you can use the `analyzer` parameter to specify a search analyzer. If provided, this overrides any other search analyzers.
-The following [search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) request sets the `stop` analyzer as the search analyzer for a [`match`](elasticsearch://reference/query-languages/query-dsl-match-query.md) query.
+The following [search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) request sets the `stop` analyzer as the search analyzer for a [`match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query.md) query.
```console
GET my-index-000001/_search
diff --git a/manage-data/data-store/text-analysis/token-graphs.md b/manage-data/data-store/text-analysis/token-graphs.md
index 7261a83285..9be1ab7726 100644
--- a/manage-data/data-store/text-analysis/token-graphs.md
+++ b/manage-data/data-store/text-analysis/token-graphs.md
@@ -51,7 +51,7 @@ In the following graph, `domain name system` and its synonym, `dns`, both have a
[Indexing](index-search-analysis.md) ignores the `positionLength` attribute and does not support token graphs containing multi-position tokens.
-However, queries, such as the [`match`](elasticsearch://reference/query-languages/query-dsl-match-query.md) or [`match_phrase`](elasticsearch://reference/query-languages/query-dsl-match-query-phrase.md) query, can use these graphs to generate multiple sub-queries from a single query string.
+However, queries, such as the [`match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query.md) or [`match_phrase`](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query-phrase.md) query, can use these graphs to generate multiple sub-queries from a single query string.
:::::{dropdown} Example
A user runs a search for the following phrase using the `match_phrase` query:
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md
index 93561d624a..617b17955b 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md
@@ -526,5 +526,5 @@ You can add titles to the visualizations, resize and position them as you like,
2. As your final step, remember to stop Filebeat, the Node.js web server, and the client. Enter *CTRL + C* in the terminal window for each application to stop them.
-You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](beats://reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about ingesting data.
+You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](beats://reference/filebeat/index.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about ingesting data.
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
index 98aa3589cf..d4c0b65ec6 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
@@ -421,5 +421,5 @@ You can add titles to the visualizations, resize and position them as you like,
2. As your final step, remember to stop Filebeat and the Python script. Enter *CTRL + C* in both your Filebeat terminal and in your `elvis.py` terminal.
-You now know how to monitor log files from a Python application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](beats://reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about all about ingesting data.
+You now know how to monitor log files from a Python application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](beats://reference/filebeat/index.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about ingesting data.
diff --git a/manage-data/ingest/transform-enrich/data-enrichment.md b/manage-data/ingest/transform-enrich/data-enrichment.md
index d4ed37521c..a6711e2887 100644
--- a/manage-data/ingest/transform-enrich/data-enrichment.md
+++ b/manage-data/ingest/transform-enrich/data-enrichment.md
@@ -75,7 +75,7 @@ Use the **Enrich Policies** view to add data from your existing indices to incom
* The source indices that store enrich data as documents
* The fields from the source indices used to match incoming documents
* The enrich fields containing enrich data from the source indices that you want to add to incoming documents
-* An optional [query](elasticsearch://reference/query-languages/query-dsl-match-all-query.md).
+* An optional [query](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-all-query.md).
:::{image} /manage-data/images/elasticsearch-reference-management-enrich-policies.png
:alt: Enrich policies
diff --git a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md
index b10658a74a..81c96332eb 100644
--- a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md
+++ b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md
@@ -8,7 +8,7 @@ applies_to:
# Example: Enrich your data based on exact values [match-enrich-policy-type]
-`match` [enrich policies](data-enrichment.md#enrich-policy) match enrich data to incoming documents based on an exact value, such as a email address or ID, using a [`term` query](elasticsearch://reference/query-languages/query-dsl-term-query.md).
+`match` [enrich policies](data-enrichment.md#enrich-policy) match enrich data to incoming documents based on an exact value, such as an email address or ID, using a [`term` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md).
The following example creates a `match` enrich policy that adds user name and contact information to incoming documents based on an email address. It then adds the `match` enrich policy to a processor in an ingest pipeline.
diff --git a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md
index d386a2673d..286eb2b07f 100644
--- a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md
+++ b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md
@@ -8,7 +8,7 @@ applies_to:
# Example: Enrich your data based on geolocation [geo-match-enrich-policy-type]
-`geo_match` [enrich policies](data-enrichment.md#enrich-policy) match enrich data to incoming documents based on a geographic location, using a [`geo_shape` query](elasticsearch://reference/query-languages/query-dsl-geo-shape-query.md).
+`geo_match` [enrich policies](data-enrichment.md#enrich-policy) match enrich data to incoming documents based on a geographic location, using a [`geo_shape` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-geo-shape-query.md).
The following example creates a `geo_match` enrich policy that adds postal codes to incoming documents based on a set of coordinates. It then adds the `geo_match` enrich policy to a processor in an ingest pipeline.
@@ -71,7 +71,7 @@ Use the [create or update pipeline API](https://www.elastic.co/docs/api/doc/elas
* Your enrich policy.
* The `field` of incoming documents used to match the geoshape of documents from the enrich index.
* The `target_field` used to store appended enrich data for incoming documents. This field contains the `match_field` and `enrich_fields` specified in your enrich policy.
-* The `shape_relation`, which indicates how the processor matches geoshapes in incoming documents to geoshapes in documents from the enrich index. See [Spatial Relations](elasticsearch://reference/query-languages/query-dsl-shape-query.md#_spatial_relations) for valid options and more information.
+* The `shape_relation`, which indicates how the processor matches geoshapes in incoming documents to geoshapes in documents from the enrich index. See [Spatial Relations](elasticsearch://reference/query-languages/query-dsl/query-dsl-shape-query.md#_spatial_relations) for valid options and more information.
```console
PUT /_ingest/pipeline/postal_lookup
diff --git a/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md b/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md
index 1098ba802a..218c450a16 100644
--- a/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md
+++ b/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md
@@ -8,7 +8,7 @@ applies_to:
# Example: Enrich your data by matching a value to a range [range-enrich-policy-type]
-A `range` [enrich policy](data-enrichment.md#enrich-policy) uses a [`term` query](elasticsearch://reference/query-languages/query-dsl-term-query.md) to match a number, date, or IP address in incoming documents to a range of the same type in the enrich index. Matching a range to a range is not supported.
+A `range` [enrich policy](data-enrichment.md#enrich-policy) uses a [`term` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md) to match a number, date, or IP address in incoming documents to a range of the same type in the enrich index. Matching a range to a range is not supported.
The following example creates a `range` enrich policy that adds a descriptive network name and responsible department to incoming documents based on an IP address. It then adds the enrich policy to a processor in an ingest pipeline.
diff --git a/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md b/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md
index e0bd073a30..34b1681e18 100644
--- a/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md
+++ b/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md
@@ -192,7 +192,7 @@ Use the **Enrich Policies** view to add data from your existing indices to incom
* The source indices that store enrich data as documents
* The fields from the source indices used to match incoming documents
* The enrich fields containing enrich data from the source indices that you want to add to incoming documents
-* An optional [query](elasticsearch://reference/query-languages/query-dsl-match-all-query.md).
+* An optional [query](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-all-query.md).
:::{image} /manage-data/images/elasticsearch-reference-management-enrich-policies.png
:alt: Enrich policies
diff --git a/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md b/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md
index 02e041db21..81b1d7d19a 100644
--- a/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md
+++ b/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md
@@ -194,7 +194,7 @@ After creating the pipeline, complete the following steps:
Once the pipeline is set up, perform a **Full Content Sync** of the connector. The inference pipeline will process the data as follows:
- * As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a [sparse vector field](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md)) are added to capture semantic meaning and context of the data.
+ * As data comes in, ELSER is applied to the data, and embeddings (weights and tokens stored in a [sparse vector field](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md)) are added to capture the semantic meaning and context of the data.
* When you look at the ingested documents, you can see the embeddings are added to the `predicted_value` field in the documents.
2. Check if AI Assistant can use the index (optional).
diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md
index 4bd66e9bae..58f5b01819 100644
--- a/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md
+++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-auditbeat.md
@@ -5,7 +5,7 @@ mapped_pages:
# {{auditbeat}} {{anomaly-detect}} configurations [ootb-ml-jobs-auditbeat]
-These {{anomaly-job}} wizards appear in {{kib}} if you use [{{auditbeat}}](beats://reference/auditbeat/auditbeat.md) to audit process activity on your systems. For more details, see the {{dfeed}} and job definitions in GitHub.
+These {{anomaly-job}} wizards appear in {{kib}} if you use [{{auditbeat}}](beats://reference/auditbeat/index.md) to audit process activity on your systems. For more details, see the {{dfeed}} and job definitions in GitHub.
## Auditbeat docker processes [auditbeat-process-docker-ecs]
diff --git a/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md b/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md
index de759bafe7..757175eb5e 100644
--- a/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md
+++ b/reference/ingestion-tools/fleet/configuring-kubernetes-metadata.md
@@ -7,7 +7,7 @@ mapped_pages:
Kubernetes [metadata](/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md#beats-metadata) refer to contextual information extracted from Kubernetes resources. Metadata information enrich metrics and logs collected from a Kubernetes cluster, enabling deeper insights into Kubernetes environments.
-When the {{agent}}'s policy includes the [{{k8s}} Integration](integration-docs://reference/kubernetes.md) which configures the collection of Kubernetes related metrics and container logs, the mechanisms used for the metadata enrichment are:
+When the {{agent}}'s policy includes the [{{k8s}} Integration](integration-docs://reference/kubernetes/index.md), which configures the collection of Kubernetes-related metrics and container logs, the mechanisms used for the metadata enrichment are:
* [Kubernetes Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) for log collection
* Kubernetes metadata enrichers for metrics
diff --git a/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md b/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md
index 6c1eed58fc..81767bbf64 100644
--- a/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md
+++ b/reference/ingestion-tools/fleet/data-streams-pipeline-tutorial.md
@@ -89,7 +89,7 @@ Add the custom ingest pipeline to any other data streams you wish to update.
Allow time for new data to be ingested before testing your pipeline. In a new window, open {{kib}} and navigate to **{{kib}} Dev tools**.
-Use an [exists query](elasticsearch://reference/query-languages/query-dsl-exists-query.md) to ensure that the new field, "test" is being applied to documents.
+Use an [exists query](elasticsearch://reference/query-languages/query-dsl/query-dsl-exists-query.md) to verify that the new field "test" is being applied to documents.
```console
GET metrics-system.cpu-default/_search <1>
@@ -188,7 +188,7 @@ Let’s create a new custom ingest pipeline `logs@custom` that processes all log
}
```
-3. Allow some time for new data to be ingested, and then use a new [exists query](elasticsearch://reference/query-languages/query-dsl-exists-query.md) to confirm that the new field "my-logs-field" is being applied to log event documents.
+3. Allow some time for new data to be ingested, and then use a new [exists query](elasticsearch://reference/query-languages/query-dsl/query-dsl-exists-query.md) to confirm that the new field "my-logs-field" is being applied to log event documents.
For this example, we’ll check the System integration `system.syslog` dataset:
diff --git a/reference/ingestion-tools/fleet/dynamic-input-configuration.md b/reference/ingestion-tools/fleet/dynamic-input-configuration.md
index 066168d24b..adacac3b45 100644
--- a/reference/ingestion-tools/fleet/dynamic-input-configuration.md
+++ b/reference/ingestion-tools/fleet/dynamic-input-configuration.md
@@ -206,7 +206,7 @@ inputs:
### Condition syntax [condition-syntax]
-The conditions supported by {{agent}} are based on [EQL](elasticsearch://reference/query-languages/eql-syntax.md)'s boolean syntax, but add support for variables from providers and functions to manipulate the values.
+The conditions supported by {{agent}} are based on [EQL](elasticsearch://reference/query-languages/eql/eql-syntax.md)'s boolean syntax, but add support for variables from providers and for functions that manipulate the values.
**Supported operators:**
diff --git a/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md b/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md
index 7747a28652..585a98f3bc 100644
--- a/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md
+++ b/reference/ingestion-tools/fleet/elastic-agent-inputs-list.md
@@ -38,7 +38,7 @@ When you [configure inputs](/reference/ingestion-tools/fleet/elastic-agent-input
| `elasticsearch/metrics` | Collects metrics about {{es}}. | [Elasticsearch module](beats://reference/metricbeat/metricbeat-module-elasticsearch.md) ({{metricbeat}} docs) |
| `etcd/metrics` | This module targets Etcd V2 and V3. When using V2, metrics are collected using [Etcd v2 API](https://coreos.com/etcd/docs/latest/v2/api.md). When using V3, metrics are retrieved from the `/metrics`` endpoint as intended for [Etcd v3](https://coreos.com/etcd/docs/latest/metrics.md). | [Etcd module](beats://reference/metricbeat/metricbeat-module-etcd.md) ({{metricbeat}} docs) |
| `gcp/metrics` | Periodically fetches monitoring metrics from Google Cloud Platform using [Stackdriver Monitoring API](https://cloud.google.com/monitoring/api/metrics_gcp) for Google Cloud Platform services. | [Google Cloud Platform module](beats://reference/metricbeat/metricbeat-module-gcp.md) ({{metricbeat}} docs) |
-| `haproxy/metrics` | Collects stats from [HAProxy](http://www.haproxy.org/). It supports collection from TCP sockets, UNIX sockets, or HTTP with or without basic authentication. | [HAProxy module](beats://reference/metricbeat/metricbeat-overview.md) ({{metricbeat}} docs) |
+| `haproxy/metrics` | Collects stats from [HAProxy](http://www.haproxy.org/). It supports collection from TCP sockets, UNIX sockets, or HTTP with or without basic authentication. | [HAProxy module](beats://reference/metricbeat/index.md) ({{metricbeat}} docs) |
| `http/metrics` | Used to call arbitrary HTTP endpoints for which a dedicated Metricbeat module is not available. | [HTTP module](beats://reference/metricbeat/metricbeat-module-http.md) ({{metricbeat}} docs) |
| `iis/metrics` | Periodically retrieve IIS web server related metrics. | [IIS module](beats://reference/metricbeat/metricbeat-module-iis.md) ({{metricbeat}} docs) |
| `jolokia/metrics` | Collects metrics from [Jolokia agents](https://jolokia.org/reference/html/agents.html) running on a target JMX server or dedicated proxy server. | [Jolokia module](beats://reference/metricbeat/metricbeat-module-jolokia.md) ({{metricbeat}} docs) |
@@ -105,12 +105,12 @@ When you [configure inputs](/reference/ingestion-tools/fleet/elastic-agent-input
| `netflow` | Reads NetFlow and IPFIX exported flows and options records over UDP. | [NetFlow input](beats://reference/filebeat/filebeat-input-netflow.md) ({{filebeat}} docs) |
| `o365audit` | [beta] Retrieves audit messages from Office 365 and Azure AD activity logs. | [Office 365 Management Activity API input](beats://reference/filebeat/filebeat-input-o365audit.md) ({{filebeat}} docs) |
| `osquery` | Collects and decodes the result logs written by [osqueryd](https://osquery.readthedocs.io/en/latest/introduction/using-osqueryd/) in the JSON format. | - |
-| `redis` | [beta] Reads entries from Redis slowlogs. | [Redis input](beats://reference/filebeat/filebeat-overview.md) ({{filebeat}} docs) |
+| `redis` | [beta] Reads entries from Redis slowlogs. | [Redis input](beats://reference/filebeat/index.md) ({{filebeat}} docs) |
| `syslog` | Reads Syslog events as specified by RFC 3164 and RFC 5424, over TCP, UDP, or a Unix stream socket. | [Syslog input](beats://reference/filebeat/filebeat-input-syslog.md) ({{filebeat}} docs) |
| `tcp` | Reads events over TCP. | [TCP input](beats://reference/filebeat/filebeat-input-tcp.md) ({{filebeat}} docs) |
| `udp` | Reads events over UDP. | [UDP input](beats://reference/filebeat/filebeat-input-udp.md) ({{filebeat}} docs) |
-| `unix` | [beta] Reads events over a stream-oriented Unix domain socket. | [Unix input](beats://reference/filebeat/filebeat-overview.md) ({{filebeat}} docs) |
-| `winlog` | Reads from one or more event logs using Windows APIs, filters the events based on user-configured criteria, then sends the event data to the configured outputs ({{es}} or {{ls}}). | [Winlogbeat Overview](beats://reference/winlogbeat/_winlogbeat_overview.md) ({{winlogbeat}} docs) |
+| `unix` | [beta] Reads events over a stream-oriented Unix domain socket. | [Unix input](beats://reference/filebeat/index.md) ({{filebeat}} docs) |
+| `winlog` | Reads from one or more event logs using Windows APIs, filters the events based on user-configured criteria, then sends the event data to the configured outputs ({{es}} or {{ls}}). | [Winlogbeat Overview](beats://reference/winlogbeat/index.md) ({{winlogbeat}} docs) |
::::
@@ -132,7 +132,7 @@ When you [configure inputs](/reference/ingestion-tools/fleet/elastic-agent-input
| Input | Description | Learn more |
| --- | --- | --- |
-| `packet` | Sniffs the traffic between your servers, parses the application-level protocols on the fly, and correlates the messages into transactions. | [Packetbeat overview](beats://reference/packetbeat/packetbeat-overview.md) ({{packetbeat}} docs) |
+| `packet` | Sniffs the traffic between your servers, parses the application-level protocols on the fly, and correlates the messages into transactions. | [Packetbeat overview](beats://reference/packetbeat/index.md) ({{packetbeat}} docs) |
::::
diff --git a/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md b/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md
index 692aac13b0..e4379cf577 100644
--- a/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md
+++ b/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md
@@ -11,7 +11,7 @@ Custom pipelines can be used to add custom data processing, like adding fields,
## Metadata enrichment for Kubernetes [_metadata_enrichment_for_kubernetes]
-The [{{k8s}} Integration](integration-docs://reference/kubernetes.md) is used to collect logs and metrics from Kubernetes clusters with {{agent}}. During the collection, the integration enhances the collected information with extra useful information that users can correlate with different Kubernetes assets. This additional information added on top of collected data, such as labels, annotations, ancestor names of Kubernetes assets, and others, are called metadata.
+The [{{k8s}} Integration](integration-docs://reference/kubernetes/index.md) is used to collect logs and metrics from Kubernetes clusters with {{agent}}. During collection, the integration enriches the collected information with extra context that users can correlate with different Kubernetes assets. This additional information added on top of the collected data, such as labels, annotations, and ancestor names of Kubernetes assets, is called metadata.
The [{{k8s}} Provider](/reference/ingestion-tools/fleet/kubernetes-provider.md) offers the `add_resource_metadata` option to configure the metadata enrichment options.
diff --git a/reference/ingestion-tools/fleet/integrations-assets-best-practices.md b/reference/ingestion-tools/fleet/integrations-assets-best-practices.md
index c91c10c27b..46c0a230b7 100644
--- a/reference/ingestion-tools/fleet/integrations-assets-best-practices.md
+++ b/reference/ingestion-tools/fleet/integrations-assets-best-practices.md
@@ -36,7 +36,7 @@ The {{fleet}} integration assets are not supposed to work when sending arbitrary
While it’s possible to include {{fleet}} and {{agent}} integration assets in a custom integration, this is not recommended nor supported. Assets from another integration should not be referenced directly from a custom integration.
-As an example scenario, one may want to ingest Redis logs from Kafka. This can be done using the [Redis integration](integration-docs://reference/redis-intro.md), but only certain files and paths are allowed. It’s technically possible to use the [Custom Kafka Logs integration](integration-docs://reference/kafka_log.md) with a custom ingest pipeline, referencing the ingest pipeline of the Redis integration to ingest logs into the index templates of the Custom Kafka Logs integration data streams.
+As an example scenario, one may want to ingest Redis logs from Kafka. This can be done using the [Redis integration](integration-docs://reference/redis-intro.md), but only certain files and paths are allowed. It’s technically possible to use the [Custom Kafka Logs integration](integration-docs://reference/kafka_log/index.md) with a custom ingest pipeline, referencing the ingest pipeline of the Redis integration to ingest logs into the index templates of the Custom Kafka Logs integration data streams.
However, referencing assets of an integration from another custom integration is not recommended nor supported. A configuration as described above can break when the integration is upgraded, as can happen automatically.
diff --git a/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md b/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md
index 74f5d99622..d453560581 100644
--- a/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md
+++ b/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md
@@ -22,19 +22,19 @@ The following table describes the integrations you can use instead of {{auditbea
| If you use… | You can use this instead… | Notes |
| --- | --- | --- |
| [Auditd](beats://reference/auditbeat/auditbeat-module-auditd.md) module | [Auditd Manager](integration-docs://reference/auditd_manager/index.md) integration | This integration is a direct replacement of the module. You can port rules andconfiguration to this integration. Starting in {{stack}} 8.4, you can also set the`immutable` flag in the audit configuration. |
-| [Auditd Logs](integration-docs://reference/auditd.md) integration | Use this integration if you don’t need to manage rules. It only parses logs fromthe audit daemon `auditd`. Please note that the events created by this integrationare different than the ones created by[Auditd Manager](integration-docs://reference/auditd_manager/index.md), since the latter merges allrelated messages in a single event while [Auditd Logs](integration-docs://reference/auditd.md)creates one event per message. |
+| [Auditd Logs](integration-docs://reference/auditd/index.md) integration | Use this integration if you don’t need to manage rules. It only parses logs from the audit daemon `auditd`. Please note that the events created by this integration are different from the ones created by [Auditd Manager](integration-docs://reference/auditd_manager/index.md), since the latter merges all related messages in a single event while [Auditd Logs](integration-docs://reference/auditd/index.md) creates one event per message. |
| [File Integrity](beats://reference/auditbeat/auditbeat-module-file_integrity.md) module | [File Integrity Monitoring](integration-docs://reference/fim/index.md) integration | This integration is a direct replacement of the module. It reports real-timeevents, but cannot report who made the changes. If you need to track thisinformation, use [{{elastic-defend}}](/solutions/security/configure-elastic-defend/install-elastic-defend.md) instead. |
| [System](beats://reference/auditbeat/auditbeat-module-system.md) module | It depends… | There is not a single integration that collects all this information. |
-| [System.host](beats://reference/auditbeat/auditbeat-dataset-system-host.md) dataset | [Osquery](integration-docs://reference/osquery.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:
* [system_info](https://www.osquery.io/schema/5.1.0/#system_info) for hostname, unique ID, and architecture
* [os_version](https://www.osquery.io/schema/5.1.0/#os_version)
* [interface_addresses](https://www.osquery.io/schema/5.1.0/#interface_addresses) for IPs and MACs
|
+| [System.host](beats://reference/auditbeat/auditbeat-dataset-system-host.md) dataset | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:
* [system_info](https://www.osquery.io/schema/5.1.0/#system_info) for hostname, unique ID, and architecture
* [os_version](https://www.osquery.io/schema/5.1.0/#os_version)
* [interface_addresses](https://www.osquery.io/schema/5.1.0/#interface_addresses) for IPs and MACs
|
| [System.login](beats://reference/auditbeat/auditbeat-dataset-system-login.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Report login events. |
-| [Osquery](integration-docs://reference/osquery.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Use the [last](https://www.osquery.io/schema/5.1.0/#last) table for Linux and macOS. |
-| {{fleet}} [system](integration-docs://reference/system.md) integration | Collect login events for Windows through the [Security event log](integration-docs://reference/system/index.md#security). |
+| [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Use the [last](https://www.osquery.io/schema/5.1.0/#last) table for Linux and macOS. |
+| {{fleet}} [system](integration-docs://reference/system/index.md) integration | Collect login events for Windows through the [Security event log](integration-docs://reference/system/index.md#security). |
| [System.package](beats://reference/auditbeat/auditbeat-dataset-system-package.md) dataset | [System Audit](integration-docs://reference/system_audit/index.md) integration | This integration is a direct replacement of the System Package dataset. Starting in {{stack}} 8.7, you can port rules and configuration settings to this integration. This integration currently schedules collection of information such as:
* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)
* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)
* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)
|
-| [Osquery](integration-docs://reference/osquery.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:
* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)
* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)
* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)
* [apps](https://www.osquery.io/schema/5.1.0/#apps) (MacOS)
* [programs](https://www.osquery.io/schema/5.1.0/#programs) (Windows)
* [npm_packages](https://www.osquery.io/schema/5.1.0/#npm_packages)
* [atom_packages](https://www.osquery.io/schema/5.1.0/#atom_packages)
* [chocolatey_packages](https://www.osquery.io/schema/5.1.0/#chocolatey_packages)
* [portage_packages](https://www.osquery.io/schema/5.1.0/#portage_packages)
* [python_packages](https://www.osquery.io/schema/5.1.0/#python_packages)
|
+| [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:
* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)
* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)
* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)
* [apps](https://www.osquery.io/schema/5.1.0/#apps) (MacOS)
* [programs](https://www.osquery.io/schema/5.1.0/#programs) (Windows)
* [npm_packages](https://www.osquery.io/schema/5.1.0/#npm_packages)
* [atom_packages](https://www.osquery.io/schema/5.1.0/#atom_packages)
* [chocolatey_packages](https://www.osquery.io/schema/5.1.0/#chocolatey_packages)
* [portage_packages](https://www.osquery.io/schema/5.1.0/#portage_packages)
* [python_packages](https://www.osquery.io/schema/5.1.0/#python_packages)
|
| [System.process](beats://reference/auditbeat/auditbeat-dataset-system-process.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because out of the box it reports events forevery process in [ECS](integration-docs://reference/index.md) format and has excellentintegration in [Kibana](/get-started/the-stack.md). |
-| [Custom Windows event log](integration-docs://reference/winlog.md) and [Sysmon](integration-docs://reference/sysmon_linux/index.md) integrations | Provide process data. |
-| [Osquery](integration-docs://reference/osquery.md) or[Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Collect data from the [process](https://www.osquery.io/schema/5.1.0/#process) table on some OSeswithout polling. |
+| [Custom Windows event log](integration-docs://reference/winlog/index.md) and [Sysmon](integration-docs://reference/sysmon_linux/index.md) integrations | Provide process data. |
+| [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Collect data from the [process](https://www.osquery.io/schema/5.1.0/#process) table on some OSes without polling. |
| [System.socket](beats://reference/auditbeat/auditbeat-dataset-system-socket.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because it supports monitoring network connections on Linux,Windows, and MacOS. Includes process and user metadata. Currently does notdo flow accounting (byte and packet counts) or domain name enrichment (but doescollect DNS queries separately). |
-| [Osquery](integration-docs://reference/osquery.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Monitor socket events via the [socket_events](https://www.osquery.io/schema/5.1.0/#socket_events) tablefor Linux and MacOS. |
-| [System.user](beats://reference/auditbeat/auditbeat-dataset-system-user.md) dataset | [Osquery](integration-docs://reference/osquery.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Monitor local users via the [user](https://www.osquery.io/schema/5.1.0/#user) table for Linux, Windows, and MacOS. |
+| [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Monitor socket events via the [socket_events](https://www.osquery.io/schema/5.1.0/#socket_events) table for Linux and MacOS. |
+| [System.user](beats://reference/auditbeat/auditbeat-dataset-system-user.md) dataset | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Monitor local users via the [user](https://www.osquery.io/schema/5.1.0/#user) table for Linux, Windows, and MacOS. |
diff --git a/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md
index cc91f4d4e4..046ee2a793 100644
--- a/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md
+++ b/reference/ingestion-tools/fleet/running-on-aks-managed-by-fleet.md
@@ -14,9 +14,9 @@ On managed Kubernetes solutions like AKS, {{agent}} has no access to several dat
1. Metrics from [Kubernetes control plane](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) components are not available. Consequently metrics are not available for `kube-scheduler` and `kube-controller-manager` components. In this regard, the respective **dashboards** will not be populated with data.
2. **Audit logs** are available only on Kubernetes master nodes as well, hence cannot be collected by {{agent}}.
-3. Fields `orchestrator.cluster.name` and `orchestrator.cluster.url` are not populated. `orchestrator.cluster.name` field is used as a cluster selector for default Kubernetes dashboards, shipped with [Kubernetes integration](integration-docs://reference/kubernetes.md).
+3. Fields `orchestrator.cluster.name` and `orchestrator.cluster.url` are not populated. The `orchestrator.cluster.name` field is used as a cluster selector for the default Kubernetes dashboards shipped with the [Kubernetes integration](integration-docs://reference/kubernetes/index.md).
- In this regard, you can use [`add_fields` processor](beats://reference/filebeat/add-fields.md) to add `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each [Kubernetes integration](integration-docs://reference/kubernetes.md)'s component:
+ To work around this, you can use the [`add_fields` processor](beats://reference/filebeat/add-fields.md) to add `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each [Kubernetes integration](integration-docs://reference/kubernetes/index.md) component:
```yaml
- add_fields:
diff --git a/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md
index 31e16dfae7..e4ce8e6a16 100644
--- a/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md
+++ b/reference/ingestion-tools/fleet/running-on-eks-managed-by-fleet.md
@@ -14,9 +14,9 @@ On managed Kubernetes solutions like EKS, {{agent}} has no access to several dat
1. Metrics from [Kubernetes control plane](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) components are not available. Consequently metrics are not available for `kube-scheduler` and `kube-controller-manager` components. In this regard, the respective **dashboards** will not be populated with data.
2. **Audit logs** are available only on Kubernetes master nodes as well, hence cannot be collected by {{agent}}.
-3. Fields `orchestrator.cluster.name` and `orchestrator.cluster.url` are not populated. `orchestrator.cluster.name` field is used as a cluster selector for default Kubernetes dashboards, shipped with [Kubernetes integration](integration-docs://reference/kubernetes.md).
+3. Fields `orchestrator.cluster.name` and `orchestrator.cluster.url` are not populated. The `orchestrator.cluster.name` field is used as a cluster selector for the default Kubernetes dashboards shipped with the [Kubernetes integration](integration-docs://reference/kubernetes/index.md).
- In this regard, you can use [`add_fields` processor](beats://reference/filebeat/add-fields.md) to add `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each [Kubernetes integration](integration-docs://reference/kubernetes.md)'s component:
+ To work around this, you can use the [`add_fields` processor](beats://reference/filebeat/add-fields.md) to add the `orchestrator.cluster.name` and `orchestrator.cluster.url` fields for each component of the [Kubernetes integration](integration-docs://reference/kubernetes/index.md):
```yaml
- add_fields:
diff --git a/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md
index 37634e844a..aa01617d85 100644
--- a/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md
+++ b/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md
@@ -78,7 +78,7 @@ The size and the number of nodes in a Kubernetes cluster can be large at times,
### Step 2: Configure {{agent}} policy [_step_2_configure_agent_policy]
-The {{agent}} needs to be assigned to a policy to enable the proper inputs. To achieve Kubernetes observability, the policy needs to include the Kubernetes integration. Refer to [Create a policy](/reference/ingestion-tools/fleet/agent-policy.md#create-a-policy) and [Add an integration to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-integration) to learn how to configure the [Kubernetes integration](integration-docs://reference/kubernetes.md).
+The {{agent}} needs to be assigned to a policy to enable the proper inputs. To achieve Kubernetes observability, the policy needs to include the Kubernetes integration. Refer to [Create a policy](/reference/ingestion-tools/fleet/agent-policy.md#create-a-policy) and [Add an integration to a policy](/reference/ingestion-tools/fleet/agent-policy.md#add-integration) to learn how to configure the [Kubernetes integration](integration-docs://reference/kubernetes/index.md).
### Step 3: Enroll {{agent}} to the policy [_step_3_enroll_agent_to_the_policy]
diff --git a/reference/ingestion-tools/fleet/scaling-on-kubernetes.md b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md
index f2e2194d8e..55f0e5aec2 100644
--- a/reference/ingestion-tools/fleet/scaling-on-kubernetes.md
+++ b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md
@@ -31,7 +31,7 @@ The document is divided in two main sections:
#### Configure agent resources [_configure_agent_resources]
-The {{k8s}} {{observability}} is based on [Elastic {{k8s}} integration](integration-docs://reference/kubernetes.md), which collects metrics from several components:
+The {{k8s}} {{observability}} is based on [Elastic {{k8s}} integration](integration-docs://reference/kubernetes/index.md), which collects metrics from several components:
* **Per node:**
diff --git a/reference/ingestion-tools/fleet/upgrade-integration.md b/reference/ingestion-tools/fleet/upgrade-integration.md
index 0297464dcc..bb614cc78c 100644
--- a/reference/ingestion-tools/fleet/upgrade-integration.md
+++ b/reference/ingestion-tools/fleet/upgrade-integration.md
@@ -55,7 +55,7 @@ The following integrations are installed automatically when you select certain o
* [Elastic Agent](integration-docs://reference/elastic_agent/index.md) - installed automatically when the default **Collect agent logs** or **Collect agent metrics** option is enabled in an {{agent}} policy.
* [Fleet Server](integration-docs://reference/fleet_server/index.md) - installed automatically when {{fleet-server}} is set up through the {{fleet}} UI.
-* [System](integration-docs://reference/system.md) - installed automatically when the default **Collect system logs and metrics** option is enabled in an {{agent}} policy).
+* [System](integration-docs://reference/system/index.md) - installed automatically when the default **Collect system logs and metrics** option is enabled in an {{agent}} policy.
The [Elastic Defend](integration-docs://reference/endpoint/index.md) integration also has an option to upgrade installation policies automatically.
diff --git a/reference/security/fields-and-object-schemas/timeline-object-schema.md b/reference/security/fields-and-object-schemas/timeline-object-schema.md
index dd6edb117b..99a2278de2 100644
--- a/reference/security/fields-and-object-schemas/timeline-object-schema.md
+++ b/reference/security/fields-and-object-schemas/timeline-object-schema.md
@@ -118,11 +118,11 @@ This screenshot maps the Timeline UI components to their JSON objects:
| Name | Type | Description |
| --- | --- | --- |
-| `exists` | String | [Exists term query](elasticsearch://reference/query-languages/query-dsl-exists-query.md) for thespecified field (`null` when undefined). For example, `{"field":"user.name"}`. |
+| `exists` | String | [Exists term query](elasticsearch://reference/query-languages/query-dsl/query-dsl-exists-query.md) for the specified field (`null` when undefined). For example, `{"field":"user.name"}`. |
| `meta` | meta | Filter details:
* `alias` (string): UI filter name.
* `disabled` (boolean): Indicates if the filter is disabled.
* `key`(string): Field name or unique string ID.
* `negate` (boolean): Indicates if the filter query clause uses `NOT` logic.
* `params` (string): Value of `phrase` filter types.
* `type` (string): Type of filter. For example, `exists` and `range`. For more information about filtering, see [Query DSL](elasticsearch://reference/query-languages/querydsl.md).
|
-| `match_all` | String | [Match all term query](elasticsearch://reference/query-languages/query-dsl-match-all-query.md)for the specified field (`null` when undefined). |
+| `match_all` | String | [Match all term query](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-all-query.md) for the specified field (`null` when undefined). |
| `query` | String | [DSL query](elasticsearch://reference/query-languages/querydsl.md) (`null` when undefined). For example, `{"match_phrase":{"ecs.version":"1.4.0"}}`. |
-| `range` | String | [Range query](elasticsearch://reference/query-languages/query-dsl-range-query.md) (`null` whenundefined). For example, `{"@timestamp":{"gte":"now-1d","lt":"now"}}"`. |
+| `range` | String | [Range query](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) (`null` when undefined). For example, `{"@timestamp":{"gte":"now-1d","lt":"now"}}`. |
## globalNotes object [globalNotes-obj]
diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md
index 66cb9d8e58..31bc7abc1e 100644
--- a/solutions/observability/logs.md
+++ b/solutions/observability/logs.md
@@ -59,7 +59,7 @@ See [install {{agent}} in containers](/reference/ingestion-tools/fleet/install-e
{{filebeat}} is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, {{filebeat}} monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing.
-* [{{filebeat}} overview](beats://reference/filebeat/filebeat-overview.md): General information on {{filebeat}} and how it works.
+* [{{filebeat}} overview](beats://reference/filebeat/index.md): General information on {{filebeat}} and how it works.
* [{{filebeat}} quick start](beats://reference/filebeat/filebeat-installation-configuration.md): Basic installation instructions to get you started.
* [Set up and run {{filebeat}}](beats://reference/filebeat/setting-up-running.md): Information on how to install, set up, and run {{filebeat}}.
diff --git a/solutions/observability/logs/filter-aggregate-logs.md b/solutions/observability/logs/filter-aggregate-logs.md
index 6f98b6cb6c..273dcb7eee 100644
--- a/solutions/observability/logs/filter-aggregate-logs.md
+++ b/solutions/observability/logs/filter-aggregate-logs.md
@@ -131,7 +131,7 @@ For more on using Logs Explorer, refer to the [Discover](../../../explore-analyz
[Query DSL](../../../explore-analyze/query-filter/languages/querydsl.md) is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from **Developer Tools**.
-For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a [range query](elasticsearch://reference/query-languages/query-dsl-range-query.md) to filter for the specific timestamp range and a [term query](elasticsearch://reference/query-languages/query-dsl-term-query.md) to filter for `WARN` and `ERROR` log levels.
+For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a [range query](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) to filter for the specific timestamp range and a [term query](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md) to filter for `WARN` and `ERROR` log levels.
First, from **Developer Tools**, add some logs with varying timestamps and log levels to your data stream with the following command:
diff --git a/solutions/observability/logs/parse-route-logs.md b/solutions/observability/logs/parse-route-logs.md
index b779d52680..6b7fe85930 100644
--- a/solutions/observability/logs/parse-route-logs.md
+++ b/solutions/observability/logs/parse-route-logs.md
@@ -678,7 +678,7 @@ Because all of the example logs are in this range, you’ll get the following re
##### Range queries [observability-parse-log-data-range-queries]
-Use [range queries](elasticsearch://reference/query-languages/query-dsl-range-query.md) to query logs in a specific range.
+Use [range queries](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) to query logs in a specific range.
The following command searches for IP addresses greater than or equal to `192.168.1.100` and less than or equal to `192.168.1.102`.
diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md
index dbcbe0cf4b..d1e8735b21 100644
--- a/solutions/observability/observability-ai-assistant.md
+++ b/solutions/observability/observability-ai-assistant.md
@@ -199,7 +199,7 @@ After creating the pipeline, complete the following steps:
Once the pipeline is set up, perform a **Full Content Sync** of the connector. The inference pipeline will process the data as follows:
- * As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a [sparse vector field](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md)) are added to capture semantic meaning and context of the data.
+ * As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a [sparse vector field](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md)) are added to capture semantic meaning and context of the data.
* When you look at the ingested documents, you can see the embeddings are added to the `predicted_value` field in the documents.
2. Check if AI Assistant can use the index (optional).
diff --git a/solutions/search/elasticsearch-basics-quickstart.md b/solutions/search/elasticsearch-basics-quickstart.md
index 882ca2f553..20ac04dbfe 100644
--- a/solutions/search/elasticsearch-basics-quickstart.md
+++ b/solutions/search/elasticsearch-basics-quickstart.md
@@ -416,7 +416,7 @@ GET books/_search
### `match` query [getting-started-match-query]
-You can use the [`match` query](elasticsearch://reference/query-languages/query-dsl-match-query.md) to search for documents that contain a specific value in a specific field. This is the standard query for full-text searches.
+You can use the [`match` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query.md) to search for documents that contain a specific value in a specific field. This is the standard query for full-text searches.
Run the following command to search the `books` index for documents containing `brave` in the `name` field:
diff --git a/solutions/search/full-text.md b/solutions/search/full-text.md
index e316a7de6f..849563bb8a 100644
--- a/solutions/search/full-text.md
+++ b/solutions/search/full-text.md
@@ -44,7 +44,7 @@ Learn about the core components of full-text search:
Learn how to build full-text search queries using {{es}}'s query languages:
-* [Full-text queries using Query DSL](elasticsearch://reference/query-languages/full-text-queries.md)
+* [Full-text queries using Query DSL](elasticsearch://reference/query-languages/query-dsl/full-text-queries.md)
* [Full-text search functions in {{esql}}](elasticsearch://reference/query-languages/esql/esql-functions-operators.md#esql-search-functions)
**Advanced topics**
diff --git a/solutions/search/full-text/how-full-text-works.md b/solutions/search/full-text/how-full-text-works.md
index 8b424e12b9..a80bf60fec 100644
--- a/solutions/search/full-text/how-full-text-works.md
+++ b/solutions/search/full-text/how-full-text-works.md
@@ -30,6 +30,6 @@ Refer to [Test an analyzer](../../../manage-data/data-store/text-analysis/test-a
* **Full-text search query**: Query text is analyzed [the same way as the indexed text](../../../manage-data/data-store/text-analysis/index-search-analysis.md), and the resulting tokens are used to search the inverted index.
- Query DSL supports a number of [full-text queries](elasticsearch://reference/query-languages/full-text-queries.md).
+ Query DSL supports a number of [full-text queries](elasticsearch://reference/query-languages/query-dsl/full-text-queries.md).
As of 8.17, {{esql}} also supports [full-text search](elasticsearch://reference/query-languages/esql/esql-functions-operators.md#esql-search-functions) functions.
diff --git a/solutions/search/full-text/search-relevance/static-scoring-signals.md b/solutions/search/full-text/search-relevance/static-scoring-signals.md
index 9d4739b816..baa4eb6b75 100644
--- a/solutions/search/full-text/search-relevance/static-scoring-signals.md
+++ b/solutions/search/full-text/search-relevance/static-scoring-signals.md
@@ -12,12 +12,12 @@ Many domains have static signals that are known to be correlated with relevance.
There are two main queries that allow combining static score contributions with textual relevance, e.g. as computed with BM25:
-* [`script_score` query](elasticsearch://reference/query-languages/query-dsl-script-score-query.md)
-* [`rank_feature` query](elasticsearch://reference/query-languages/query-dsl-rank-feature-query.md)
+* [`script_score` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md)
+* [`rank_feature` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-rank-feature-query.md)
For instance imagine that you have a `pagerank` field that you wish to combine with the BM25 score so that the final score is equal to `score = bm25_score + pagerank / (10 + pagerank)`.
-With the [`script_score` query](elasticsearch://reference/query-languages/query-dsl-script-score-query.md) the query would look like this:
+With the [`script_score` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md) the query would look like this:
```console
GET index/_search
@@ -38,7 +38,7 @@ GET index/_search
1. `pagerank` must be mapped as a [Numeric](elasticsearch://reference/elasticsearch/mapping-reference/number.md)
-while with the [`rank_feature` query](elasticsearch://reference/query-languages/query-dsl-rank-feature-query.md) it would look like below:
+while with the [`rank_feature` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-rank-feature-query.md) it would look like this:
```console
GET _search
@@ -64,5 +64,5 @@ GET _search
1. `pagerank` must be mapped as a [`rank_feature`](elasticsearch://reference/elasticsearch/mapping-reference/rank-feature.md) field
-While both options would return similar scores, there are trade-offs: [script_score](elasticsearch://reference/query-languages/query-dsl-script-score-query.md) provides a lot of flexibility, enabling you to combine the text relevance score with static signals as you prefer. On the other hand, the [`rank_feature` query](elasticsearch://reference/elasticsearch/mapping-reference/rank-feature.md) only exposes a couple ways to incorporate static signals into the score. However, it relies on the [`rank_feature`](elasticsearch://reference/elasticsearch/mapping-reference/rank-feature.md) and [`rank_features`](elasticsearch://reference/elasticsearch/mapping-reference/rank-features.md) fields, which index values in a special way that allows the [`rank_feature` query](elasticsearch://reference/query-languages/query-dsl-rank-feature-query.md) to skip over non-competitive documents and get the top matches of a query faster.
+While both options would return similar scores, there are trade-offs: [script_score](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md) provides a lot of flexibility, enabling you to combine the text relevance score with static signals as you prefer. On the other hand, the [`rank_feature` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-rank-feature-query.md) only exposes a couple of ways to incorporate static signals into the score. However, it relies on the [`rank_feature`](elasticsearch://reference/elasticsearch/mapping-reference/rank-feature.md) and [`rank_features`](elasticsearch://reference/elasticsearch/mapping-reference/rank-features.md) fields, which index values in a special way that allows the [`rank_feature` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-rank-feature-query.md) to skip over non-competitive documents and get the top matches of a query faster.
diff --git a/solutions/search/querydsl-full-text-filter-tutorial.md b/solutions/search/querydsl-full-text-filter-tutorial.md
index c0625b81dc..af45d118f3 100644
--- a/solutions/search/querydsl-full-text-filter-tutorial.md
+++ b/solutions/search/querydsl-full-text-filter-tutorial.md
@@ -138,7 +138,7 @@ Full-text search involves executing text-based queries across one or more docume
### `match` query [_match_query]
-The [`match`](elasticsearch://reference/query-languages/query-dsl-match-query.md) query is the standard query for full-text, or "lexical", search. The query text will be analyzed according to the analyzer configuration specified on each field (or at query time).
+The [`match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query.md) query is the standard query for full-text, or "lexical", search. The query text will be analyzed according to the analyzer configuration specified on each field (or at query time).
First, search the `description` field for "fluffy pancakes":
@@ -258,7 +258,7 @@ GET /cooking_blog/_search
### Specify a minimum number of terms to match [_specify_a_minimum_number_of_terms_to_match]
-Use the [`minimum_should_match`](elasticsearch://reference/query-languages/query-dsl-minimum-should-match.md) parameter to specify the minimum number of terms a document should have to be included in the search results.
+Use the [`minimum_should_match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-minimum-should-match.md) parameter to specify the minimum number of terms a document should have to be included in the search results.
Search the title field to match at least 2 of the 3 terms: "fluffy", "pancakes", or "breakfast". This is useful for improving relevance while allowing some flexibility.
@@ -279,7 +279,7 @@ GET /cooking_blog/_search
## Step 4: Search across multiple fields at once [full-text-filter-tutorial-multi-match]
-When users enter a search query, they often don’t know (or care) whether their search terms appear in a specific field. A [`multi_match`](elasticsearch://reference/query-languages/query-dsl-multi-match-query.md) query allows searching across multiple fields simultaneously.
+When users enter a search query, they often don’t know (or care) whether their search terms appear in a specific field. A [`multi_match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-multi-match-query.md) query allows searching across multiple fields simultaneously.
Let’s start with a basic `multi_match` query:
@@ -320,7 +320,7 @@ GET /cooking_blog/_search
-Learn more about fields and per-field boosting in the [`multi_match` query](elasticsearch://reference/query-languages/query-dsl-multi-match-query.md) reference.
+Learn more about fields and per-field boosting in the [`multi_match` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-multi-match-query.md) reference.
::::{dropdown} Example response
```console-result
@@ -385,7 +385,7 @@ The `multi_match` query is often recommended over a single `match` query for mos
[Filtering](../../explore-analyze/query-filter/languages/querydsl.md#filter-context) allows you to narrow down your search results based on exact criteria. Unlike full-text searches, filters are binary (yes/no) and do not affect the relevance score. Filters execute faster than queries because excluded results don’t need to be scored.
-This [`bool`](elasticsearch://reference/query-languages/query-dsl-bool-query.md) query will return only blog posts in the "Breakfast" category.
+This [`bool`](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md) query will return only blog posts in the "Breakfast" category.
```console
GET /cooking_blog/_search
@@ -415,7 +415,7 @@ The `.keyword` suffix accesses the unanalyzed version of a field, enabling exact
### Search for posts within a date range [full-text-filter-tutorial-range-query]
-Often users want to find content published within a specific time frame. A [`range`](elasticsearch://reference/query-languages/query-dsl-range-query.md) query finds documents that fall within numeric or date ranges.
+Often users want to find content published within a specific time frame. A [`range`](elasticsearch://reference/query-languages/query-dsl/query-dsl-range-query.md) query finds documents that fall within numeric or date ranges.
```console
GET /cooking_blog/_search
@@ -438,7 +438,7 @@ GET /cooking_blog/_search
### Find exact matches [full-text-filter-tutorial-term-query]
-Sometimes users want to search for exact terms to eliminate ambiguity in their search results. A [`term`](elasticsearch://reference/query-languages/query-dsl-term-query.md) query searches for an exact term in a field without analyzing it. Exact, case-sensitive matches on specific terms are often referred to as "keyword" searches.
+Sometimes users want to search for exact terms to eliminate ambiguity in their search results. A [`term`](elasticsearch://reference/query-languages/query-dsl/query-dsl-term-query.md) query searches for an exact term in a field without analyzing it. Exact, case-sensitive matches on specific terms are often referred to as "keyword" searches.
Here you’ll search for the author "Maria Rodriguez" in the `author.keyword` field.
@@ -465,7 +465,7 @@ Avoid using the `term` query for [`text` fields](elasticsearch://reference/elast
## Step 6: Combine multiple search criteria [full-text-filter-tutorial-complex-bool]
-A [`bool`](elasticsearch://reference/query-languages/query-dsl-bool-query.md) query allows you to combine multiple query clauses to create sophisticated searches. In this tutorial scenario it’s useful for when users have complex requirements for finding recipes.
+A [`bool`](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md) query allows you to combine multiple query clauses to create sophisticated searches. In this tutorial scenario, it’s useful when users have complex requirements for finding recipes.
Let’s create a query that addresses the following user needs:
@@ -571,7 +571,7 @@ GET /cooking_blog/_search
}
```
-1. The title contains "Spicy" and "Curry", matching our should condition. With the default [best_fields](elasticsearch://reference/query-languages/query-dsl-multi-match-query.md#type-best-fields) behavior, this field contributes most to the relevance score.
+1. The title contains "Spicy" and "Curry", matching our should condition. With the default [best_fields](elasticsearch://reference/query-languages/query-dsl/query-dsl-multi-match-query.md#type-best-fields) behavior, this field contributes most to the relevance score.
2. While the description also contains matching terms, only the best matching field’s score is used by default.
3. The recipe was published within the last month, satisfying our recency preference.
4. The "Main Course" category satisfies another `should` condition.
diff --git a/solutions/search/ranking/learning-to-rank-model-training.md b/solutions/search/ranking/learning-to-rank-model-training.md
index 32e201e63e..19a6c54598 100644
--- a/solutions/search/ranking/learning-to-rank-model-training.md
+++ b/solutions/search/ranking/learning-to-rank-model-training.md
@@ -81,7 +81,7 @@ feature_extractors=[
::::{admonition} Term statistics as features
:class: note
-It is very common for an LTR model to leverage raw term statistics as features. To extract this information, you can use the [term statistics feature](../../../explore-analyze/scripting/modules-scripting-fields.md#scripting-term-statistics) provided as part of the [`script_score`](elasticsearch://reference/query-languages/query-dsl-script-score-query.md) query.
+It is very common for an LTR model to leverage raw term statistics as features. To extract this information, you can use the [term statistics feature](../../../explore-analyze/scripting/modules-scripting-fields.md#scripting-term-statistics) provided as part of the [`script_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md) query.
::::
diff --git a/solutions/search/search-applications/search-application-api.md b/solutions/search/search-applications/search-application-api.md
index 4f881ca3f2..9460489b66 100644
--- a/solutions/search/search-applications/search-application-api.md
+++ b/solutions/search/search-applications/search-application-api.md
@@ -166,7 +166,7 @@ When you actually perform a search with no parameters, it will execute the under
POST _application/search_application/my_search_application/_search
```
-Searching with the `query_string` and/or `default_field` parameters will perform a [`query_string`](elasticsearch://reference/query-languages/query-dsl-query-string-query.md) query.
+Searching with the `query_string` and/or `default_field` parameters will perform a [`query_string`](elasticsearch://reference/query-languages/query-dsl/query-dsl-query-string-query.md) query.
::::{warning}
The default template is subject to change in future versions of the Search Applications feature.
@@ -516,7 +516,7 @@ POST _application/search_application/my_search_application/_search
Text search results and ELSER search results are expected to have significantly different scores in some cases, which makes ranking challenging. To find the best search result mix for your dataset, we suggest experimenting with the boost values provided in the example template:
* `text_query_boost` to boost the BM25 query as a whole
-* [`boost`](elasticsearch://reference/query-languages/query-dsl-query-string-query.md#_boosting) fields to boost individual text search fields
+* [`boost`](elasticsearch://reference/query-languages/query-dsl/query-dsl-query-string-query.md#boosting) fields to boost individual text search fields
* [`min_score`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-min_score) parameter to omit significantly low confidence results
The above boosts should be sufficient for many use cases, but there are cases when adding a [rescore](elasticsearch://reference/elasticsearch/rest-apis/filter-search-results.md#rescore) query or [index boost](elasticsearch://reference/elasticsearch/rest-apis/search-multiple-data-streams-indices.md#index-boost) to your template may be beneficial. Remember to update your search application to use the new template using the [put search application command](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search-application-put).
diff --git a/solutions/search/search-applications/search-application-client.md b/solutions/search/search-applications/search-application-client.md
index 337e018564..716f1262bb 100644
--- a/solutions/search/search-applications/search-application-client.md
+++ b/solutions/search/search-applications/search-application-client.md
@@ -309,7 +309,7 @@ If you need to adjust `search_fields` at query request time, you can add a new p
**Use case: I want to boost results given a certain proximity to the user**
-You can add additional template parameters to send the geo-coordinates of the user. Then use [`function_score`](elasticsearch://reference/query-languages/query-dsl-function-score-query.md) to boost documents which match a certain [`geo_distance`](elasticsearch://reference/query-languages/query-dsl-geo-distance-query.md) from the user.
+You can add additional template parameters to send the geo-coordinates of the user. Then use [`function_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-function-score-query.md) to boost documents which match a certain [`geo_distance`](elasticsearch://reference/query-languages/query-dsl/query-dsl-geo-distance-query.md) from the user.
## Result fields [search-application-client-client-features-result-fields]
diff --git a/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md b/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md
index 7a9ac7286f..d0cb25ec56 100644
--- a/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md
+++ b/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md
@@ -143,7 +143,7 @@ POST _tasks//_cancel
### Semantic search by using the `sparse_vector` query [text-expansion-query]
-To perform semantic search, use the [`sparse_vector` query](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md), and provide the query text and the inference ID associated with your ELSER model. The example below uses the query text "How to avoid muscle soreness after running?", the `content_embedding` field contains the generated ELSER output:
+To perform semantic search, use the [`sparse_vector` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md), and provide the query text and the inference ID associated with your ELSER model. The example below uses the query text "How to avoid muscle soreness after running?"; the `content_embedding` field contains the generated ELSER output:
```console
GET my-index/_search
@@ -200,7 +200,7 @@ The result is the top 10 documents that are closest in meaning to your query tex
### Combining semantic search with other queries [text-expansion-compound-query]
-You can combine [`sparse_vector`](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md) with other queries in a [compound query](elasticsearch://reference/query-languages/compound-queries.md). For example, use a filter clause in a [Boolean](elasticsearch://reference/query-languages/query-dsl-bool-query.md) or a full text query with the same (or different) query text as the `sparse_vector` query. This enables you to combine the search results from both queries.
+You can combine [`sparse_vector`](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md) with other queries in a [compound query](elasticsearch://reference/query-languages/query-dsl/compound-queries.md). For example, use a filter clause in a [Boolean](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md) or a full text query with the same (or different) query text as the `sparse_vector` query. This enables you to combine the search results from both queries.
The search hits from the `sparse_vector` query tend to score higher than other {{es}} queries. Those scores can be regularized by increasing or decreasing the relevance scores of each query by using the `boost` parameter. Recall on the `sparse_vector` query can be high where there is a long tail of less relevant results. Use the `min_score` parameter to prune those less relevant documents.
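A minimal sketch of such a compound query (the inference ID `my-elser-endpoint` and the field names are assumptions for illustration):

```console
GET my-index/_search
{
  "min_score": 10,
  "query": {
    "bool": {
      "should": [
        {
          "sparse_vector": {
            "field": "content_embedding",
            "inference_id": "my-elser-endpoint",
            "query": "How to avoid muscle soreness after running?",
            "boost": 1
          }
        },
        {
          "match": {
            "content": {
              "query": "muscle soreness",
              "boost": 4
            }
          }
        }
      ]
    }
  }
}
```

Here the lexical `match` clause is boosted relative to the `sparse_vector` clause, and `min_score` prunes low-relevance hits from the long tail.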
@@ -243,7 +243,7 @@ GET my-index/_search
### Saving disk space by excluding the ELSER tokens from document source [save-space]
-The tokens generated by ELSER must be indexed for use in the [sparse_vector query](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md). However, it is not necessary to retain those terms in the document source. You can save disk space by using the [source exclude](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#include-exclude) mapping to remove the ELSER terms from the document source.
+The tokens generated by ELSER must be indexed for use in the [sparse_vector query](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md). However, it is not necessary to retain those terms in the document source. You can save disk space by using the [source exclude](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#include-exclude) mapping to remove the ELSER terms from the document source.
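For example, a mapping along these lines keeps the ELSER tokens indexed but out of the document source (the index and field names are illustrative):

```console
PUT my-index
{
  "mappings": {
    "_source": {
      "excludes": [ "content_embedding" ]
    },
    "properties": {
      "content_embedding": { "type": "sparse_vector" },
      "content": { "type": "text" }
    }
  }
}
```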
::::{warning}
Reindex uses the document source to populate the destination index. **Once the ELSER terms have been excluded from the source, they cannot be recovered through reindexing.** Excluding the tokens from the source is a space-saving optimization that should only be applied if you are certain that reindexing will not be required in the future! It’s important to carefully consider this trade-off and make sure that excluding the ELSER terms from the source aligns with your specific requirements and use case. Review the [Disabling the `_source` field](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#disable-source-field) and [Including / Excluding fields from `_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#include-exclude) sections carefully to learn more about the possible consequences of excluding the tokens from the `_source`.
diff --git a/solutions/search/the-search-api.md b/solutions/search/the-search-api.md
index 06177d579a..3f4c77f980 100644
--- a/solutions/search/the-search-api.md
+++ b/solutions/search/the-search-api.md
@@ -18,7 +18,7 @@ You can use the [search API](https://www.elastic.co/docs/api/doc/elasticsearch/o
## Run a search [run-an-es-search]
-The following request searches `my-index-000001` using a [`match`](elasticsearch://reference/query-languages/query-dsl-match-query.md) query. This query matches documents with a `user.id` value of `kimchy`.
+The following request searches `my-index-000001` using a [`match`](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-query.md) query. This query matches documents with a `user.id` value of `kimchy`.
```console
GET /my-index-000001/_search
@@ -87,10 +87,10 @@ You can use the following options to customize your searches.
**Query DSL**
[Query DSL](../../explore-analyze/query-filter/languages/querydsl.md) supports a variety of query types you can mix and match to get the results you want. Query types include:
-* [Boolean](elasticsearch://reference/query-languages/query-dsl-bool-query.md) and other [compound queries](elasticsearch://reference/query-languages/compound-queries.md), which let you combine queries and match results based on multiple criteria
-* [Term-level queries](elasticsearch://reference/query-languages/term-level-queries.md) for filtering and finding exact matches
-* [Full text queries](elasticsearch://reference/query-languages/full-text-queries.md), which are commonly used in search engines
-* [Geo](elasticsearch://reference/query-languages/geo-queries.md) and [spatial queries](elasticsearch://reference/query-languages/shape-queries.md)
+* [Boolean](elasticsearch://reference/query-languages/query-dsl/query-dsl-bool-query.md) and other [compound queries](elasticsearch://reference/query-languages/query-dsl/compound-queries.md), which let you combine queries and match results based on multiple criteria
+* [Term-level queries](elasticsearch://reference/query-languages/query-dsl/term-level-queries.md) for filtering and finding exact matches
+* [Full text queries](elasticsearch://reference/query-languages/query-dsl/full-text-queries.md), which are commonly used in search engines
+* [Geo](elasticsearch://reference/query-languages/query-dsl/geo-queries.md) and [spatial queries](elasticsearch://reference/query-languages/query-dsl/shape-queries.md)
**Aggregations**
You can use [search aggregations](../../explore-analyze/query-filter/aggregations.md) to get statistics and other analytics for your search results. Aggregations help you answer questions like:
@@ -104,7 +104,7 @@ You can use the following options to customize your searches.
**Retrieve selected fields**
The search response’s `hits.hits` property includes the full document [`_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) for each hit. To retrieve only a subset of the `_source` or other fields, see [Retrieve selected fields](elasticsearch://reference/elasticsearch/rest-apis/retrieve-selected-fields.md).
-**Sort search results**
-By default, search hits are sorted by `_score`, a [relevance score](../../explore-analyze/query-filter/languages/querydsl.md#relevance-scores) that measures how well each document matches the query. To customize the calculation of these scores, use the [`script_score`](elasticsearch://reference/query-languages/query-dsl-script-score-query.md) query. To sort search hits by other field values, see [Sort search results](elasticsearch://reference/elasticsearch/rest-apis/sort-search-results.md).
+**Sort search results**
By default, search hits are sorted by `_score`, a [relevance score](../../explore-analyze/query-filter/languages/querydsl.md#relevance-scores) that measures how well each document matches the query. To customize the calculation of these scores, use the [`script_score`](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md) query. To sort search hits by other field values, see [Sort search results](elasticsearch://reference/elasticsearch/rest-apis/sort-search-results.md).
**Run an async search**
{{es}} searches are designed to run on large volumes of data quickly, often returning results in milliseconds. For this reason, searches are *synchronous* by default. The search request waits for complete results before returning a response.
diff --git a/solutions/search/vector/bring-own-vectors.md b/solutions/search/vector/bring-own-vectors.md
index 21b347241c..1d0c357905 100644
--- a/solutions/search/vector/bring-own-vectors.md
+++ b/solutions/search/vector/bring-own-vectors.md
@@ -127,7 +127,7 @@ POST /amazon-reviews/_search
In this simple example, we’re sending a raw vector for the query text. In a real-world scenario you won’t know the query text ahead of time. You’ll need to generate query vectors, on the fly, using the same embedding model that generated the document vectors.
-For this you’ll need to deploy a text embedding model in {{es}} and use the [`query_vector_builder` parameter](elasticsearch://reference/query-languages/query-dsl-knn-query.md#knn-query-top-level-parameters). Alternatively, you can generate vectors client-side and send them directly with the search request.
+For this you’ll need to deploy a text embedding model in {{es}} and use the [`query_vector_builder` parameter](elasticsearch://reference/query-languages/query-dsl/query-dsl-knn-query.md#knn-query-top-level-parameters). Alternatively, you can generate vectors client-side and send them directly with the search request.
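A sketch of a `knn` query that builds the query vector server-side (the model ID is an assumption; use the deployed text embedding model that produced your document vectors):

```console
POST /amazon-reviews/_search
{
  "query": {
    "knn": {
      "field": "review_vector",
      "query_vector_builder": {
        "text_embedding": {
          "model_id": "my-text-embedding-model",
          "model_text": "Reliable, comfortable, and fun to ride"
        }
      }
    }
  }
}
```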
Learn how to [use a deployed text embedding model](dense-versus-sparse-ingest-pipelines.md) for semantic search.
diff --git a/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md b/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md
index d4d2e2ffd8..e6ca8a3566 100644
--- a/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md
+++ b/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md
@@ -180,12 +180,12 @@ Now it is time to perform semantic search!
## Search the data [deployed-search]
-Depending on the type of model you have deployed, you can query rank features with a [sparse vector](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md) query, or dense vectors with a kNN search.
+Depending on the type of model you have deployed, you can query rank features with a [sparse vector](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md) query, or dense vectors with a kNN search.
:::::::{tab-set}
::::::{tab-item} ELSER
-ELSER text embeddings can be queried using a [sparse vector query](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md). The sparse vector query enables you to query a [sparse vector](elasticsearch://reference/elasticsearch/mapping-reference/sparse-vector.md) field, by providing the inference ID associated with the NLP model you want to use, and the query text:
+ELSER text embeddings can be queried using a [sparse vector query](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md). The sparse vector query enables you to query a [sparse vector](elasticsearch://reference/elasticsearch/mapping-reference/sparse-vector.md) field by providing the inference ID associated with the NLP model you want to use, and the query text:
```console
GET my-index/_search
diff --git a/solutions/search/vector/knn.md b/solutions/search/vector/knn.md
index f6df5ef132..d57357afc3 100644
--- a/solutions/search/vector/knn.md
+++ b/solutions/search/vector/knn.md
@@ -103,7 +103,7 @@ To run an approximate kNN search, use the [`knn` option](https://www.elastic.co/
...
```
-3. Run the search using the [`knn` option](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-knn) or the [`knn` query](elasticsearch://reference/query-languages/query-dsl-knn-query.md) (expert case).
+3. Run the search using the [`knn` option](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-knn) or the [`knn` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-knn-query.md) (expert case).
```console
POST image-index/_search
@@ -1005,7 +1005,7 @@ POST /my-index/_search
You can use this option when you want to rescore on each shard and want more fine-grained control on the rescoring than the `rescore_vector` option provides.
-Use rescore per shard with the [knn query](elasticsearch://reference/query-languages/query-dsl-knn-query.md) and [script_score query ](elasticsearch://reference/query-languages/query-dsl-script-score-query.md). Generally, this means that there will be more rescoring per shard, but this can increase overall recall at the cost of compute.
+Use rescore per shard with the [`knn` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-knn-query.md) and [`script_score` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md). Generally, this means that there will be more rescoring per shard, which can increase overall recall at the cost of compute.
```console
POST /my-index/_search
@@ -1075,10 +1075,10 @@ To run an exact kNN search, use a `script_score` query with a vector function.
...
```
-3. Use the [search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) to run a `script_score` query containing a [vector function](elasticsearch://reference/query-languages/query-dsl-script-score-query.md#vector-functions).
+3. Use the [search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) to run a `script_score` query containing a [vector function](elasticsearch://reference/query-languages/query-dsl/query-dsl-script-score-query.md#vector-functions).
::::{tip}
- To limit the number of matched documents passed to the vector function, we recommend you specify a filter query in the `script_score.query` parameter. If needed, you can use a [`match_all` query](elasticsearch://reference/query-languages/query-dsl-match-all-query.md) in this parameter to match all documents. However, matching all documents can significantly increase search latency.
+ To limit the number of matched documents passed to the vector function, we recommend you specify a filter query in the `script_score.query` parameter. If needed, you can use a [`match_all` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-match-all-query.md) in this parameter to match all documents. However, matching all documents can significantly increase search latency.
::::
diff --git a/solutions/search/vector/sparse-vector.md b/solutions/search/vector/sparse-vector.md
index 238b1e4021..ddda4a679e 100644
--- a/solutions/search/vector/sparse-vector.md
+++ b/solutions/search/vector/sparse-vector.md
@@ -28,4 +28,4 @@ Sparse vector search with ELSER expands both documents and queries into weighted
- Deploy and configure the ELSER model
- Use the `sparse_vector` field type
- See [this overview](../semantic-search.md#using-nlp-models) for implementation options
-2. Query the index using [`sparse_vector` query](elasticsearch://reference/query-languages/query-dsl-sparse-vector-query.md).
\ No newline at end of file
+2. Query the index using the [`sparse_vector` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md).
\ No newline at end of file
diff --git a/solutions/security/detect-and-alert/create-detection-rule.md b/solutions/security/detect-and-alert/create-detection-rule.md
index 787cbd61d9..f7c8639315 100644
--- a/solutions/security/detect-and-alert/create-detection-rule.md
+++ b/solutions/security/detect-and-alert/create-detection-rule.md
@@ -166,10 +166,10 @@ To create or edit {{ml}} rules, you need:
2. To create an event correlation rule using EQL, select **Event Correlation** on the **Create new rule** page, then:
1. Define which {{es}} indices or data view the rule searches when querying for events.
- 2. Write an [EQL query](elasticsearch://reference/query-languages/eql-syntax.md) that searches for matching events or a series of matching events.
+ 2. Write an [EQL query](elasticsearch://reference/query-languages/eql/eql-syntax.md) that searches for matching events or a series of matching events.
::::{tip}
- To find events that are missing in a sequence, use the [missing events](elasticsearch://reference/query-languages/eql-syntax.md#eql-missing-events) syntax.
+ To find events that are missing in a sequence, use the [missing events](elasticsearch://reference/query-languages/eql/eql-syntax.md#eql-missing-events) syntax.
::::
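For instance, a sequence that matches when an expected follow-up event never occurs might look like this (the index, fields, and process names are illustrative):

```console
GET /my-index/_eql/search
{
  "query": """
    sequence by host.name with maxspan=1h
      [ process where process.name == "cmd.exe" ]
      ![ process where process.name == "powershell.exe" ]
  """
}
```

The `![ ... ]` clause marks the missing event; the sequence matches only if no such event follows within the `maxspan` window.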
diff --git a/solutions/security/get-started/ingest-data-to-elastic-security.md b/solutions/security/get-started/ingest-data-to-elastic-security.md
index 6d9130f0f7..d2c0686299 100644
--- a/solutions/security/get-started/ingest-data-to-elastic-security.md
+++ b/solutions/security/get-started/ingest-data-to-elastic-security.md
@@ -97,17 +97,17 @@ To populate **Hosts** data, enable these modules:
* [Auditbeat file integrity module - Linux, macOS, Windows](beats://reference/auditbeat/auditbeat-module-file_integrity.md)
* [Filebeat system module - Linux system logs](beats://reference/filebeat/filebeat-module-system.md)
* [Filebeat Santa module - macOS security events](beats://reference/filebeat/filebeat-module-santa.md)
-* [Winlogbeat - Windows event logs](beats://reference/winlogbeat/_winlogbeat_overview.md)
+* [Winlogbeat - Windows event logs](beats://reference/winlogbeat/index.md)
To populate **Network** data, enable Packetbeat protocols and Filebeat modules:
-* [{{packetbeat}}](beats://reference/packetbeat/packetbeat-overview.md)
+* [{{packetbeat}}](beats://reference/packetbeat/index.md)
* [DNS](beats://reference/packetbeat/packetbeat-dns-options.md)
* [TLS](beats://reference/packetbeat/configuration-tls.md)
* [Other supported protocols](beats://reference/packetbeat/configuration-protocols.md)
-* [{{filebeat}}](beats://reference/filebeat/filebeat-overview.md)
+* [{{filebeat}}](beats://reference/filebeat/index.md)
* [Zeek NMS module](beats://reference/filebeat/filebeat-module-zeek.md)
* [Suricata IDS module](beats://reference/filebeat/filebeat-module-suricata.md)
diff --git a/troubleshoot/elasticsearch/troubleshooting-searches.md b/troubleshoot/elasticsearch/troubleshooting-searches.md
index c7f36787e6..83a4479c36 100644
--- a/troubleshoot/elasticsearch/troubleshooting-searches.md
+++ b/troubleshoot/elasticsearch/troubleshooting-searches.md
@@ -113,7 +113,7 @@ To change the mapping of an existing field, refer to [Changing the mapping of a
## Check the field’s values [troubleshooting-check-field-values]
-Use the [`exists` query](elasticsearch://reference/query-languages/query-dsl-exists-query.md) to check whether there are documents that return a value for a field. Check that `count` in the response is not 0.
+Use the [`exists` query](elasticsearch://reference/query-languages/query-dsl/query-dsl-exists-query.md) to check whether there are documents that return a value for a field. Check that `count` in the response is not 0.
```console
GET /my-index-000001/_count