diff --git a/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md b/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md index 0414952ed9..7123d159de 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md +++ b/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md @@ -30,7 +30,7 @@ logging: ## Log in JSON format [log-in-json-ECS-example] -Log the default log format to JSON layout instead of pattern (the default). With `json` layout, log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. +Log the default log format to JSON layout instead of pattern (the default). With `json` layout, log messages will be formatted as JSON strings in [ECS format](ecs://reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. ```yaml logging: diff --git a/deploy-manage/monitor/logging-configuration/kibana-logging.md b/deploy-manage/monitor/logging-configuration/kibana-logging.md index e0dd1e2649..b8508cd616 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-logging.md +++ b/deploy-manage/monitor/logging-configuration/kibana-logging.md @@ -99,7 +99,7 @@ The pattern layout also offers a `highlight` option that allows you to highlight ### JSON layout [json-layout] -With `json` layout log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. 
+With `json` layout log messages will be formatted as JSON strings in [ECS format](ecs://reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. ## Logger hierarchy [logger-hierarchy] diff --git a/deploy-manage/production-guidance.md b/deploy-manage/production-guidance.md index c2b005180b..a87676807c 100644 --- a/deploy-manage/production-guidance.md +++ b/deploy-manage/production-guidance.md @@ -13,7 +13,7 @@ This section provides some best practices for managing your data to help you set * Build a [data architecture](/manage-data/lifecycle/data-tiers.md) that best fits your needs. Your {{ech}} deployment comes with default hot tier {{es}} nodes that store your most frequently accessed data. Based on your own access and retention policies, you can add warm, cold, frozen data tiers, and automated deletion of old data. * Make your data [highly available](/deploy-manage/tools.md) for production environments or otherwise critical data stores, and take regular [backup snapshots](tools/snapshot-and-restore.md). -* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/ecs-getting-started.md) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended. +* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](ecs://reference/ecs-getting-started.md) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended. 
## Optimize data storage and retention [ec_optimize_data_storage_and_retention] diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md index 10bd672bcd..9e4c85e2de 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md @@ -84,7 +84,7 @@ Another advanced option is the `categorization_filters` property, which can cont ## Per-partition categorization [ml-per-partition-categorization] -If you enable per-partition categorization, categories are determined independently for each partition. For example, if your data includes messages from multiple types of logs from different applications, you can use a field like the ECS [`event.dataset` field](asciidocalypse://docs/ecs/docs/reference/ecs-event.md) as the `partition_field_name` and categorize the messages for each type of log separately. +If you enable per-partition categorization, categories are determined independently for each partition. For example, if your data includes messages from multiple types of logs from different applications, you can use a field like the ECS [`event.dataset` field](ecs://reference/ecs-event.md) as the `partition_field_name` and categorize the messages for each type of log separately. If your job has multiple detectors, every detector that uses the `mlcategory` keyword must also define a `partition_field_name`. You must use the same `partition_field_name` value in all of these detectors. Otherwise, when you create or update a job and enable per-partition categorization, it fails. 
diff --git a/explore-analyze/transforms/transform-checkpoints.md b/explore-analyze/transforms/transform-checkpoints.md index db09d7eb77..c6f9e196da 100644 --- a/explore-analyze/transforms/transform-checkpoints.md +++ b/explore-analyze/transforms/transform-checkpoints.md @@ -39,7 +39,7 @@ If the cluster experiences unsuitable performance degradation due to the {{trans ## Using the ingest timestamp for syncing the {{transform}} [sync-field-ingest-timestamp] -In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](asciidocalypse://docs/ecs/docs/reference/index.md), you might already have an [`event.ingested`](asciidocalypse://docs/ecs/docs/reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}. +In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}. If you don’t have a `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}} under **Stack Management > Ingest Pipelines**. Use a [`set` processor](elasticsearch://reference/ingestion-tools/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp. 
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md index 4e866b95ba..164858dfc7 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md @@ -115,7 +115,7 @@ In this step, you’ll create a Python script that generates logs in JSON format Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time. - Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs-field-reference.md) for the full list of available fields. + Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](ecs://reference/ecs-field-reference.md) for the full list of available fields. 2. Let’s give the Python script a test run. 
Open a terminal instance in the location where you saved *elvis.py* and run the following: diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md index 552e3e795f..53243c008c 100644 --- a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md +++ b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md @@ -33,7 +33,7 @@ In **{{project-settings}} → {{manage-app}} → {{ingest-pipelines-app}}**, you To create a pipeline, click **Create pipeline → New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md). -The **New pipeline from CSV** option lets you use a file with comma-separated values (CSV) to create an ingest pipeline that maps custom data to the Elastic Common Schema (ECS). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other data sets. To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md). +The **New pipeline from CSV** option lets you use a file with comma-separated values (CSV) to create an ingest pipeline that maps custom data to the Elastic Common Schema (ECS). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other data sets. To get started, check [Map custom data to ECS](ecs://reference/ecs-converting.md). ## Test pipelines [ingest-pipelines-test-pipelines] diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines.md b/manage-data/ingest/transform-enrich/ingest-pipelines.md index c1e739850b..a019c7db8b 100644 --- a/manage-data/ingest/transform-enrich/ingest-pipelines.md +++ b/manage-data/ingest/transform-enrich/ingest-pipelines.md @@ -45,7 +45,7 @@ In {{kib}}, open the main menu and click **Stack Management > Ingest Pipelines** To create a pipeline, click **Create pipeline > New pipeline**. 
For an example tutorial, see [Example: Parse logs](example-parse-logs.md). ::::{tip} -The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md). +The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](ecs://reference/ecs-converting.md). :::: diff --git a/raw-migrated-files/docs-content/serverless/observability-plaintext-application-logs.md b/raw-migrated-files/docs-content/serverless/observability-plaintext-application-logs.md index daa214cc00..385ce79725 100644 --- a/raw-migrated-files/docs-content/serverless/observability-plaintext-application-logs.md +++ b/raw-migrated-files/docs-content/serverless/observability-plaintext-application-logs.md @@ -257,7 +257,7 @@ Also, refer to [{{filebeat}} and systemd](asciidocalypse://docs/beats/docs/refer #### Step 5: Parse logs with an ingest pipeline [observability-plaintext-application-logs-step-5-parse-logs-with-an-ingest-pipeline] -Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)-compatible fields. +Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](ecs://reference/index.md)-compatible fields. 
Create an ingest pipeline with a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured ECS fields from your log messages. In your project, go to **Developer Tools** and use a command similar to the following example: @@ -279,7 +279,7 @@ PUT _ingest/pipeline/filebeat* <1> 1. `_ingest/pipeline/filebeat*`: The name of the pipeline. Update the pipeline name to match the name of your data stream. For more information, refer to [Data stream naming scheme](/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). 2. `processors.dissect`: Adds a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. 3. `field`: The field you’re extracting data from, `message` in this case. -4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` +4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](ecs://reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields) for more on using ingest pipelines to parse your log data. 
@@ -338,7 +338,7 @@ You can add additional settings to the integration under **Custom log file** by #### Step 2: Add an ingest pipeline to your integration [observability-plaintext-application-logs-step-2-add-an-ingest-pipeline-to-your-integration] -To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)-compatible fields. +To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](ecs://reference/index.md)-compatible fields. 1. From the custom logs integration, select **Integration policies** tab. 2. Select the integration policy you created in the previous section. @@ -364,7 +364,7 @@ To aggregate or search for information in plaintext logs, use an ingest pipeline 1. `processors.dissect`: Adds a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. 2. `field`: The field you’re extracting data from, `message` in this case. - 3. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` + 3. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](ecs://reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` 6. 
Click **Create pipeline**. 7. Save and deploy your integration. diff --git a/reference/ecs.md b/reference/ecs.md index b416500375..5c21c3334d 100644 --- a/reference/ecs.md +++ b/reference/ecs.md @@ -4,6 +4,6 @@ navigation_title: ECS # Elastic Common Schema Elastic Common Schema (ECS) defines a common set of fields for ingesting data into Elasticsearch. -For field details and usage information, refer to [](asciidocalypse://docs/ecs/docs/reference/index.md). +For field details and usage information, refer to [](ecs://reference/index.md). -ECS loggers are plugins for your favorite logging libraries, which help you to format your logs into ECS-compatible JSON. Check out [](asciidocalypse://docs/ecs/docs/reference/intro.md). +ECS loggers are plugins for your favorite logging libraries, which help you to format your logs into ECS-compatible JSON. Check out [](ecs://reference/index.md). diff --git a/reference/ingestion-tools/fleet/kafka-output-settings.md b/reference/ingestion-tools/fleet/kafka-output-settings.md index 8a89f16f2e..2d694d6614 100644 --- a/reference/ingestion-tools/fleet/kafka-output-settings.md +++ b/reference/ingestion-tools/fleet/kafka-output-settings.md @@ -51,7 +51,7 @@ Use this option to set the Kafka topic for each {{agent}} event. | | | | --- | --- | -| $$$kafka-output-topics-default$$$
**Default topic**
| Set a default topic to use for events sent by {{agent}} to the Kafka output.

You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Scheme (ECS)][Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)) field. Available fields include:

* `data_stream_type`
* `data_stream.dataset`
* `data_stream.namespace`
* `@timestamp`
* `event-dataset`

You can also set a custom field. This is useful if you’re using the [`add_fields` processor](/reference/ingestion-tools/fleet/add_fields-processor.md) as part of your {{agent}} input. Otherwise, setting a custom field is not recommended.
| +| $$$kafka-output-topics-default$$$
**Default topic**
| Set a default topic to use for events sent by {{agent}} to the Kafka output.

You can set a static topic, for example `elastic-agent`, or you can choose to set a topic dynamically based on an [Elastic Common Schema (ECS)](ecs://reference/index.md) field. Available fields include:<br>

* `data_stream.type`<br>
* `data_stream.dataset`
* `data_stream.namespace`
* `@timestamp`
* `event.dataset`<br>

You can also set a custom field. This is useful if you’re using the [`add_fields` processor](/reference/ingestion-tools/fleet/add_fields-processor.md) as part of your {{agent}} input. Otherwise, setting a custom field is not recommended.
| ### Header settings [_header_settings] diff --git a/reference/observability/fields-and-object-schemas.md b/reference/observability/fields-and-object-schemas.md index 43ed93a940..105468b9b8 100644 --- a/reference/observability/fields-and-object-schemas.md +++ b/reference/observability/fields-and-object-schemas.md @@ -9,7 +9,7 @@ This section lists Elastic Common Schema (ECS) fields the Logs and Infrastructur ECS is an open source specification that defines a standard set of fields to use when storing event data in {{es}}, such as logs and metrics. -Beat modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, so manual field mapping is not required, and all data is populated automatically in the Logs and Infrastructure apps. If you cannot use {{beats}}, map your data to [ECS fields](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md)). You can also try using the experimental [ECS Mapper](https://github.com/elastic/ecs-mapper) tool. +Beat modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, so manual field mapping is not required, and all data is populated automatically in the Logs and Infrastructure apps. If you cannot use {{beats}}, map your data to [ECS fields](ecs://reference/ecs-converting.md). You can also try using the experimental [ECS Mapper](https://github.com/elastic/ecs-mapper) tool. <br>
This reference covers: diff --git a/reference/observability/fields-and-object-schemas/logs-app-fields.md b/reference/observability/fields-and-object-schemas/logs-app-fields.md index a80c464ea9..8ad247b1ac 100644 --- a/reference/observability/fields-and-object-schemas/logs-app-fields.md +++ b/reference/observability/fields-and-object-schemas/logs-app-fields.md @@ -5,7 +5,7 @@ mapped_pages: # Logs Explorer fields [logs-app-fields] -This section lists the required fields the **Logs Explorer** uses to display data. Please note that some of the fields listed are not [ECS fields](asciidocalypse://docs/ecs/docs/reference/index.md#_what_is_ecs). +This section lists the required fields the **Logs Explorer** uses to display data. Please note that some of the fields listed are not [ECS fields](ecs://reference/index.md#_what_is_ecs). `@timestamp` : Date/time when the event originated. diff --git a/reference/observability/fields-and-object-schemas/metrics-app-fields.md b/reference/observability/fields-and-object-schemas/metrics-app-fields.md index 636a4168f2..1082d9fc7f 100644 --- a/reference/observability/fields-and-object-schemas/metrics-app-fields.md +++ b/reference/observability/fields-and-object-schemas/metrics-app-fields.md @@ -5,7 +5,7 @@ mapped_pages: # Infrastructure app fields [metrics-app-fields] -This section lists the required fields the {{infrastructure-app}} uses to display data. Please note that some of the fields listed are not [ECS fields](asciidocalypse://docs/ecs/docs/reference/index.md#_what_is_ecs). +This section lists the required fields the {{infrastructure-app}} uses to display data. Please note that some of the fields listed are not [ECS fields](ecs://reference/index.md#_what_is_ecs). 
## Additional field details [_additional_field_details] diff --git a/reference/observability/serverless/infrastructure-app-fields.md b/reference/observability/serverless/infrastructure-app-fields.md index 7712722117..f1acfcd60b 100644 --- a/reference/observability/serverless/infrastructure-app-fields.md +++ b/reference/observability/serverless/infrastructure-app-fields.md @@ -5,7 +5,7 @@ mapped_pages: # Infrastructure app fields [observability-infrastructure-monitoring-required-fields] -This section lists the fields the Infrastructure UI uses to display data. Please note that some of the fields listed here are not [ECS fields](asciidocalypse://docs/ecs/docs/reference/index.md#_what_is_ecs). +This section lists the fields the Infrastructure UI uses to display data. Please note that some of the fields listed here are not [ECS fields](ecs://reference/index.md#_what_is_ecs). ## Additional field details [observability-infrastructure-monitoring-required-fields-additional-field-details] diff --git a/reference/security/fields-and-object-schemas/alert-schema.md b/reference/security/fields-and-object-schemas/alert-schema.md index 89fc0d3c44..3ef820fdbe 100644 --- a/reference/security/fields-and-object-schemas/alert-schema.md +++ b/reference/security/fields-and-object-schemas/alert-schema.md @@ -24,51 +24,51 @@ The non-ECS fields listed below are beta and subject to change. | Alert field | Description | | --- | --- | -| [`@timestamp`](asciidocalypse://docs/ecs/docs/reference/ecs-base.md#field-timestamp) | ECS field, represents the time when the alert was created or most recently updated. | -| [`message`](asciidocalypse://docs/ecs/docs/reference/ecs-base.md#field-message) | ECS field copied from the source document, if present, for custom query and indicator match rules. | -| [`tags`](asciidocalypse://docs/ecs/docs/reference/ecs-base.md#field-tags) | ECS field copied from the source document, if present, for custom query and indicator match rules. 
| -| [`labels`](asciidocalypse://docs/ecs/docs/reference/ecs-base.md#field-labels) | ECS field copied from the source document, if present, for custom query and indicator match rules. | -| [`ecs.version`](asciidocalypse://docs/ecs/docs/reference/ecs-ecs.md#field-ecs-version) | ECS mapping version of the alert. | -| [`event.kind`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-kind.md) | ECS field, always `signal` for alert documents. | -| [`event.category`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-category.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | -| [`event.type`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-type.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | -| [`event.outcome`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-outcome.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | -| [`agent.*`](asciidocalypse://docs/ecs/docs/reference/ecs-agent.md) | ECS `agent.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`client.*`](asciidocalypse://docs/ecs/docs/reference/ecs-client.md) | ECS `client.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`cloud.*`](asciidocalypse://docs/ecs/docs/reference/ecs-cloud.md) | ECS `cloud.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`container.*`](asciidocalypse://docs/ecs/docs/reference/ecs-container.md) | ECS `container.* fields` copied from the source document, if present, for custom query and indicator match rules. 
| -| [`data_stream.*`](asciidocalypse://docs/ecs/docs/reference/ecs-data_stream.md) | ECS `data_stream.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: These fields may be constant keywords in the source documents, but are copied into the alert documents as keywords. | -| [`destination.*`](asciidocalypse://docs/ecs/docs/reference/ecs-destination.md) | ECS `destination.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`dll.*`](asciidocalypse://docs/ecs/docs/reference/ecs-dll.md) | ECS `dll.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`dns.*`](asciidocalypse://docs/ecs/docs/reference/ecs-dns.md) | ECS `dns.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`error.*`](asciidocalypse://docs/ecs/docs/reference/ecs-error.md) | ECS `error.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`event.*`](asciidocalypse://docs/ecs/docs/reference/ecs-event.md) | ECS `event.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: categorization fields above (`event.kind`, `event.category`, `event.type`, `event.outcome`) are listed separately above. | -| [`file.*`](asciidocalypse://docs/ecs/docs/reference/ecs-file.md) | ECS `file.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`group.*`](asciidocalypse://docs/ecs/docs/reference/ecs-group.md) | ECS `group.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`host.*`](asciidocalypse://docs/ecs/docs/reference/ecs-host.md) | ECS `host.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`http.*`](asciidocalypse://docs/ecs/docs/reference/ecs-http.md) | ECS `http.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`log.*`](asciidocalypse://docs/ecs/docs/reference/ecs-log.md) | ECS `log.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`network.*`](asciidocalypse://docs/ecs/docs/reference/ecs-network.md) | ECS `network.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`observer.*`](asciidocalypse://docs/ecs/docs/reference/ecs-observer.md) | ECS `observer.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`orchestrator.*`](asciidocalypse://docs/ecs/docs/reference/ecs-orchestrator.md) | ECS `orchestrator.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`organization.*`](asciidocalypse://docs/ecs/docs/reference/ecs-organization.md) | ECS `organization.*` fields copied from the source document, if present, for custom query and indicator match rules. 
| -| [`package.*`](asciidocalypse://docs/ecs/docs/reference/ecs-package.md) | ECS `package.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`process.*`](asciidocalypse://docs/ecs/docs/reference/ecs-process.md) | ECS `process.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`registry.*`](asciidocalypse://docs/ecs/docs/reference/ecs-registry.md) | ECS `registry.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`related.*`](asciidocalypse://docs/ecs/docs/reference/ecs-related.md) | ECS `related.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`rule.*`](asciidocalypse://docs/ecs/docs/reference/ecs-rule.md) | ECS `rule.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: These fields are not related to the detection rule that generated the alert. | -| [`server.*`](asciidocalypse://docs/ecs/docs/reference/ecs-server.md) | ECS `server.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`service.*`](asciidocalypse://docs/ecs/docs/reference/ecs-service.md) | ECS `service.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`source.*`](asciidocalypse://docs/ecs/docs/reference/ecs-source.md) | ECS `source.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`span.*`](asciidocalypse://docs/ecs/docs/reference/ecs-tracing.md#field-span-id) | ECS `span.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`threat.*`](asciidocalypse://docs/ecs/docs/reference/ecs-threat.md) | ECS `threat.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`tls.*`](asciidocalypse://docs/ecs/docs/reference/ecs-tls.md) | ECS `tls.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`trace.*`](asciidocalypse://docs/ecs/docs/reference/ecs-tracing.md) | ECS `trace.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`transaction.*`](asciidocalypse://docs/ecs/docs/reference/ecs-tracing.md#field-transaction-id) | ECS `transaction.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`url.*`](asciidocalypse://docs/ecs/docs/reference/ecs-url.md) | ECS `url.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`user.*`](asciidocalypse://docs/ecs/docs/reference/ecs-user.md) | ECS `user.*` fields copied from the source document, if present, for custom query and indicator match rules. 
| -| [`user_agent.*`](asciidocalypse://docs/ecs/docs/reference/ecs-user_agent.md) | ECS `user_agent.*` fields copied from the source document, if present, for custom query and indicator match rules. | -| [`vulnerability.*`](asciidocalypse://docs/ecs/docs/reference/ecs-vulnerability.md) | ECS `vulnerability.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`@timestamp`](ecs://reference/ecs-base.md#field-timestamp) | ECS field, represents the time when the alert was created or most recently updated. | +| [`message`](ecs://reference/ecs-base.md#field-message) | ECS field copied from the source document, if present, for custom query and indicator match rules. | +| [`tags`](ecs://reference/ecs-base.md#field-tags) | ECS field copied from the source document, if present, for custom query and indicator match rules. | +| [`labels`](ecs://reference/ecs-base.md#field-labels) | ECS field copied from the source document, if present, for custom query and indicator match rules. | +| [`ecs.version`](ecs://reference/ecs-ecs.md#field-ecs-version) | ECS mapping version of the alert. | +| [`event.kind`](ecs://reference/ecs-allowed-values-event-kind.md) | ECS field, always `signal` for alert documents. | +| [`event.category`](ecs://reference/ecs-allowed-values-event-category.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | +| [`event.type`](ecs://reference/ecs-allowed-values-event-type.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | +| [`event.outcome`](ecs://reference/ecs-allowed-values-event-outcome.md) | ECS field, copied from the source document, if present, for custom query and indicator match rules. | +| [`agent.*`](ecs://reference/ecs-agent.md) | ECS `agent.*` fields copied from the source document, if present, for custom query and indicator match rules. 
| +| [`client.*`](ecs://reference/ecs-client.md) | ECS `client.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`cloud.*`](ecs://reference/ecs-cloud.md) | ECS `cloud.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`container.*`](ecs://reference/ecs-container.md) | ECS `container.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`data_stream.*`](ecs://reference/ecs-data_stream.md) | ECS `data_stream.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: These fields may be constant keywords in the source documents, but are copied into the alert documents as keywords. | +| [`destination.*`](ecs://reference/ecs-destination.md) | ECS `destination.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`dll.*`](ecs://reference/ecs-dll.md) | ECS `dll.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`dns.*`](ecs://reference/ecs-dns.md) | ECS `dns.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`error.*`](ecs://reference/ecs-error.md) | ECS `error.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`event.*`](ecs://reference/ecs-event.md) | ECS `event.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: the categorization fields (`event.kind`, `event.category`, `event.type`, `event.outcome`) are listed separately above. | +| [`file.*`](ecs://reference/ecs-file.md) | ECS `file.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`group.*`](ecs://reference/ecs-group.md) | ECS `group.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`host.*`](ecs://reference/ecs-host.md) | ECS `host.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`http.*`](ecs://reference/ecs-http.md) | ECS `http.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`log.*`](ecs://reference/ecs-log.md) | ECS `log.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`network.*`](ecs://reference/ecs-network.md) | ECS `network.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`observer.*`](ecs://reference/ecs-observer.md) | ECS `observer.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`orchestrator.*`](ecs://reference/ecs-orchestrator.md) | ECS `orchestrator.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`organization.*`](ecs://reference/ecs-organization.md) | ECS `organization.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`package.*`](ecs://reference/ecs-package.md) | ECS `package.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`process.*`](ecs://reference/ecs-process.md) | ECS `process.*` fields copied from the source document, if present, for custom query and indicator match rules. 
| +| [`registry.*`](ecs://reference/ecs-registry.md) | ECS `registry.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`related.*`](ecs://reference/ecs-related.md) | ECS `related.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`rule.*`](ecs://reference/ecs-rule.md) | ECS `rule.*` fields copied from the source document, if present, for custom query and indicator match rules.
NOTE: These fields are not related to the detection rule that generated the alert. | +| [`server.*`](ecs://reference/ecs-server.md) | ECS `server.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`service.*`](ecs://reference/ecs-service.md) | ECS `service.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`source.*`](ecs://reference/ecs-source.md) | ECS `source.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`span.*`](ecs://reference/ecs-tracing.md#field-span-id) | ECS `span.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`threat.*`](ecs://reference/ecs-threat.md) | ECS `threat.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`tls.*`](ecs://reference/ecs-tls.md) | ECS `tls.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`trace.*`](ecs://reference/ecs-tracing.md) | ECS `trace.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`transaction.*`](ecs://reference/ecs-tracing.md#field-transaction-id) | ECS `transaction.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`url.*`](ecs://reference/ecs-url.md) | ECS `url.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`user.*`](ecs://reference/ecs-user.md) | ECS `user.*` fields copied from the source document, if present, for custom query and indicator match rules. | +| [`user_agent.*`](ecs://reference/ecs-user_agent.md) | ECS `user_agent.*` fields copied from the source document, if present, for custom query and indicator match rules. 
| +| [`vulnerability.*`](ecs://reference/ecs-vulnerability.md) | ECS `vulnerability.*` fields copied from the source document, if present, for custom query and indicator match rules. | | `kibana.alert.ancestors.*` | Type: object | | `kibana.alert.depth` | Type: Long | | `kibana.alert.new_terms` | The value of the new term that generated this alert.
Type: keyword | diff --git a/reference/security/fields-and-object-schemas/siem-field-reference.md b/reference/security/fields-and-object-schemas/siem-field-reference.md index 5c3323a5d5..592e7ea266 100644 --- a/reference/security/fields-and-object-schemas/siem-field-reference.md +++ b/reference/security/fields-and-object-schemas/siem-field-reference.md @@ -13,7 +13,7 @@ mapped_pages: This section lists [Elastic Common Schema](asciidocalypse://ecs/docs/reference/index.md) fields that provide an optimal SIEM and security analytics experience to users. These fields are used to display data, provide rule previews, enable detection by prebuilt detection rules, provide context during rule triage and investigation, escalate to cases, and more. ::::{important} -We recommend you use {{agent}} integrations or {{beats}} to ship your data to {{elastic-sec}}. {{agent}} integrations and Beat modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, which means data they ship to {{elastic-sec}} will automatically populate the relevant ECS fields. If you plan to use a custom implementation to map your data to ECS fields (see [how to map data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md)), ensure the [always required fields](#siem-always-required-fields) are populated. Ideally, all relevant ECS fields should be populated as well. +We recommend you use {{agent}} integrations or {{beats}} to ship your data to {{elastic-sec}}. {{agent}} integrations and Beat modules (for example, [{{filebeat}} modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md)) are ECS-compliant, which means data they ship to {{elastic-sec}} will automatically populate the relevant ECS fields. 
If you plan to use a custom implementation to map your data to ECS fields (see [how to map data to ECS](ecs://reference/ecs-converting.md)), ensure the [always required fields](#siem-always-required-fields) are populated. Ideally, all relevant ECS fields should be populated as well. :::: diff --git a/reference/security/fields-and-object-schemas/timeline-object-schema.md b/reference/security/fields-and-object-schemas/timeline-object-schema.md index b2f3fb64ce..dd6edb117b 100644 --- a/reference/security/fields-and-object-schemas/timeline-object-schema.md +++ b/reference/security/fields-and-object-schemas/timeline-object-schema.md @@ -13,7 +13,7 @@ mapped_pages: The Timeline schema lists all the JSON fields and objects required to create a Timeline or a Timeline template using the Create Timeline API. ::::{important} -All column, dropzone, and filter fields must be [ECS fields](asciidocalypse://docs/ecs/docs/reference/index.md). +All column, dropzone, and filter fields must be [ECS fields](ecs://reference/index.md). :::: diff --git a/solutions/observability/apps/find-transaction-latency-failure-correlations.md b/solutions/observability/apps/find-transaction-latency-failure-correlations.md index a47c54f110..f0674d5129 100644 --- a/solutions/observability/apps/find-transaction-latency-failure-correlations.md +++ b/solutions/observability/apps/find-transaction-latency-failure-correlations.md @@ -64,7 +64,7 @@ In this example screenshot, there are transactions that are skewed to the right ## Find failed transaction correlations [correlations-error-rate] -The correlations on the **Failed transaction correlations** tab help you discover which attributes are most influential in distinguishing between transaction failures and successes. In this context, the success or failure of a transaction is determined by its [event.outcome](asciidocalypse://docs/ecs/docs/reference/ecs-event.md#field-event-outcome) value. 
For example, APM agents set the `event.outcome` to `failure` when an HTTP transaction returns a `5xx` status code. +The correlations on the **Failed transaction correlations** tab help you discover which attributes are most influential in distinguishing between transaction failures and successes. In this context, the success or failure of a transaction is determined by its [event.outcome](ecs://reference/ecs-event.md#field-event-outcome) value. For example, APM agents set the `event.outcome` to `failure` when an HTTP transaction returns a `5xx` status code. The chart highlights the failed transactions in the overall latency distribution for the transaction group. If there are attributes that have a statistically significant correlation with failed transactions, they are listed in a table. The table is sorted by scores, which are mapped to high, medium, or low impact levels. Attributes with high impact levels are more likely to contribute to failed transactions. By default, the attribute with the highest score is added to the chart. To see a different attribute in the chart, select its row in the table. diff --git a/solutions/observability/apps/tutorial-monitor-java-application.md b/solutions/observability/apps/tutorial-monitor-java-application.md index 7e7ea2f286..713049508d 100644 --- a/solutions/observability/apps/tutorial-monitor-java-application.md +++ b/solutions/observability/apps/tutorial-monitor-java-application.md @@ -917,7 +917,7 @@ You have now learned about parsing logs in either {{beats}} or {{es}}. What if w Writing out logs as plain text works and is easy to read for humans. However, first writing them out as plain text, parsing them using the `dissect` processors, and then creating a JSON again sounds tedious and burns unneeded CPU cycles. -While log4j2 has a [JSONLayout](https://logging.apache.org/log4j/2.x/manual/layouts.html#JSONLayout), you can go further and use a Library called [ecs-logging-java](https://github.com/elastic/ecs-logging-java). 
The advantage of ECS logging is that it uses the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/index.md). ECS defines a standard set of fields used when storing event data in {{es}}, such as logs and metrics. +While log4j2 has a [JSONLayout](https://logging.apache.org/log4j/2.x/manual/layouts.html#JSONLayout), you can go further and use a library called [ecs-logging-java](https://github.com/elastic/ecs-logging-java). The advantage of ECS logging is that it uses the [Elastic Common Schema](ecs://reference/index.md). ECS defines a standard set of fields used when storing event data in {{es}}, such as logs and metrics. 1. Instead of writing our logging standard, use an existing one. Let’s add the logging dependency to our Javalin application. diff --git a/solutions/observability/logs/configure-data-sources.md b/solutions/observability/logs/configure-data-sources.md index 048b3d6431..e4e7e87b6e 100644 --- a/solutions/observability/logs/configure-data-sources.md +++ b/solutions/observability/logs/configure-data-sources.md @@ -51,7 +51,7 @@ By default, the **Stream** page within the {{logs-app}} displays the following c | | | | --- | --- | | **Timestamp** | The timestamp of the log entry from the `timestamp` field. | -| **Message** | The message extracted from the document.The content of this field depends on the type of log message.If no special log message type is detected, the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs-base.md)base field, `message`, is used. | +| **Message** | The message extracted from the document. The content of this field depends on the type of log message. If no special log message type is detected, the [Elastic Common Schema (ECS)](ecs://reference/ecs-base.md) base field, `message`, is used. |
To filter the field list by that name, you can start typing a field name in the search box. diff --git a/solutions/observability/logs/inspect-log-anomalies.md b/solutions/observability/logs/inspect-log-anomalies.md index 7b04dfc489..ebb94d02ce 100644 --- a/solutions/observability/logs/inspect-log-anomalies.md +++ b/solutions/observability/logs/inspect-log-anomalies.md @@ -35,7 +35,7 @@ Create a {{ml}} job to detect anomalous log entry rates automatically. ## Anomalies chart [anomalies-chart] -The Anomalies chart shows an overall, color-coded visualization of the log entry rate, partitioned according to the value of the Elastic Common Schema (ECS) [`event.dataset`](asciidocalypse://docs/ecs/docs/reference/ecs-event.md) field. This chart helps you quickly spot increases or decreases in each partition’s log rate. +The Anomalies chart shows an overall, color-coded visualization of the log entry rate, partitioned according to the value of the Elastic Common Schema (ECS) [`event.dataset`](ecs://reference/ecs-event.md) field. This chart helps you quickly spot increases or decreases in each partition’s log rate. If you have a lot of log partitions, use the following to filter your data: diff --git a/solutions/observability/logs/parse-route-logs.md b/solutions/observability/logs/parse-route-logs.md index bed01defe5..b779d52680 100644 --- a/solutions/observability/logs/parse-route-logs.md +++ b/solutions/observability/logs/parse-route-logs.md @@ -99,7 +99,7 @@ While you can search for phrases in the `message` field, you can’t use this fi * **message** (`Disk usage exceeds 90%.`): You can search for phrases or words in the message field. ::::{note} -These fields are part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md). The ECS defines a common set of fields that you can use across Elastic when storing data, including log and metric data. +These fields are part of the [Elastic Common Schema (ECS)](ecs://reference/index.md). 
The ECS defines a common set of fields that you can use across Elastic when storing data, including log and metric data. :::: @@ -242,14 +242,14 @@ The previous command sets the following values for your index template: The example index template above sets the following component templates: -* `logs@mappings`: general mappings for log data streams that include disabling automatic date detection from `string` fields and specifying mappings for [`data_stream` ECS fields](asciidocalypse://docs/ecs/docs/reference/ecs-data_stream.md). +* `logs@mappings`: general mappings for log data streams that include disabling automatic date detection from `string` fields and specifying mappings for [`data_stream` ECS fields](ecs://reference/ecs-data_stream.md). * `logs@settings`: general settings for log data streams including the following: * The default lifecycle policy that rolls over when the primary shard reaches 50 GB or after 30 days. * The default pipeline uses the ingest timestamp if there is no specified `@timestamp` and places a hook for the `logs@custom` pipeline. If a `logs@custom` pipeline is installed, it’s applied to logs ingested into this data stream. * Sets the [`ignore_malformed`](elasticsearch://reference/elasticsearch/mapping-reference/ignore-malformed.md) flag to `true`. When ingesting a large batch of log data, a single malformed field like an IP address can cause the entire batch to fail. When set to true, malformed fields with a mapping type that supports this flag are still processed. * `logs@custom`: a predefined component template that is not installed by default. Use this name to install a custom component template to override or extend any of the default mappings or settings. - * `ecs@mappings`: dynamic templates that automatically ensure your data stream mappings comply with the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md). 
+ * `ecs@mappings`: dynamic templates that automatically ensure your data stream mappings comply with the [Elastic Common Schema (ECS)](ecs://reference/index.md). @@ -478,7 +478,7 @@ The results should show only the high-severity logs: Extracting the `host.ip` field lets you filter logs by host IP addresses allowing you to focus on specific hosts that you’re having issues with or find disparities between hosts. -The `host.ip` field is part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md). Through the ECS, the `host.ip` field is mapped as an [`ip` field type](elasticsearch://reference/elasticsearch/mapping-reference/ip.md). `ip` field types allow range queries so you can find logs with IP addresses in a specific range. You can also query `ip` field types using Classless Inter-Domain Routing (CIDR) notation to find logs from a particular network or subnet. +The `host.ip` field is part of the [Elastic Common Schema (ECS)](ecs://reference/index.md). Through the ECS, the `host.ip` field is mapped as an [`ip` field type](elasticsearch://reference/elasticsearch/mapping-reference/ip.md). `ip` field types allow range queries so you can find logs with IP addresses in a specific range. You can also query `ip` field types using Classless Inter-Domain Routing (CIDR) notation to find logs from a particular network or subnet. This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields: diff --git a/solutions/observability/logs/plaintext-application-logs.md b/solutions/observability/logs/plaintext-application-logs.md index 0fa33952f2..ec914c3cb3 100644 --- a/solutions/observability/logs/plaintext-application-logs.md +++ b/solutions/observability/logs/plaintext-application-logs.md @@ -232,7 +232,7 @@ By default, Windows log files are stored in `C:\ProgramData\filebeat\Logs`. 
#### Step 5: Parse logs with an ingest pipeline [step-5-plaintext-parse-logs-with-an-ingest-pipeline] -Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)-compatible fields. +Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](ecs://reference/index.md)-compatible fields. Create an ingest pipeline that defines a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured ECS fields from your log messages. In your project, navigate to **Developer Tools** and using a command similar to the following example: @@ -254,7 +254,7 @@ PUT _ingest/pipeline/filebeat* <1> 1. `_ingest/pipeline/filebeat*`: The name of the pipeline. Update the pipeline name to match the name of your data stream. For more information, refer to [Data stream naming scheme](/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). 2. `processors.dissect`: Adds a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. 3. `field`: The field you’re extracting data from, `message` in this case. -4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}` is required. `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` +4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}` is required. `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](ecs://reference/index.md) fields. 
This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields) for more on using ingest pipelines to parse your log data. @@ -294,7 +294,7 @@ To add the custom logs integration to your project: #### Step 2: Add an ingest pipeline to your integration [step-2-plaintext-add-an-ingest-pipeline-to-your-integration] -To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md)-compatible fields. +To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](ecs://reference/index.md)-compatible fields. 1. From the custom logs integration, select **Integration policies** tab. 2. Select the integration policy you created in the previous section. @@ -320,7 +320,7 @@ To aggregate or search for information in plaintext logs, use an ingest pipeline 1. `processors.dissect`: Adds a [dissect processor](elasticsearch://reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. 2. `field`: The field you’re extracting data from, `message` in this case. - 3. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` + 3. `pattern`: The pattern of the elements in your log data. 
The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](ecs://reference/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` 6. Click **Create pipeline**. 7. Save and deploy your integration. diff --git a/solutions/security/dashboards/data-quality-dashboard.md b/solutions/security/dashboards/data-quality-dashboard.md index 1a6b95a72c..836415dd9c 100644 --- a/solutions/security/dashboards/data-quality-dashboard.md +++ b/solutions/security/dashboards/data-quality-dashboard.md @@ -10,7 +10,7 @@ applies_to: # Data Quality dashboard -The Data Quality dashboard shows you whether your data is correctly mapped to the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/index.md) (ECS). Successful [mapping](/manage-data/data-store/mapping.md) enables you to search, visualize, and interact with your data throughout {{elastic-sec}} and {{kib}}. +The Data Quality dashboard shows you whether your data is correctly mapped to the [Elastic Common Schema](ecs://reference/index.md) (ECS). Successful [mapping](/manage-data/data-store/mapping.md) enables you to search, visualize, and interact with your data throughout {{elastic-sec}} and {{kib}}. :::{image} ../../../images/security-data-qual-dash.png :alt: The Data Quality dashboard diff --git a/solutions/security/detect-and-alert/create-manage-value-lists.md b/solutions/security/detect-and-alert/create-manage-value-lists.md index 580190f493..fce45c7552 100644 --- a/solutions/security/detect-and-alert/create-manage-value-lists.md +++ b/solutions/security/detect-and-alert/create-manage-value-lists.md @@ -14,7 +14,7 @@ Value lists hold multiple values of the same Elasticsearch data type, such as IP Value lists are lists of items with the same {{es}} [data type](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md). 
You can create value lists with these types: -* `Keywords` (many [ECS fields](asciidocalypse://docs/ecs/docs/reference/ecs-field-reference.md) are keywords) +* `Keywords` (many [ECS fields](ecs://reference/ecs-field-reference.md) are keywords) * `IP Addresses` * `IP Ranges` * `Text` diff --git a/solutions/security/detect-and-alert/view-detection-alert-details.md b/solutions/security/detect-and-alert/view-detection-alert-details.md index 45cf1f8a5d..abbda8d2a6 100644 --- a/solutions/security/detect-and-alert/view-detection-alert-details.md +++ b/solutions/security/detect-and-alert/view-detection-alert-details.md @@ -330,8 +330,8 @@ The expanded Prevalence view provides the following details: * **Field**: Shows [highlighted fields](/solutions/security/detect-and-alert/view-detection-alert-details.md#investigation-section) for the alert and any custom highlighted fields that were added to the alert’s rule. * **Value**: Shows values for highlighted fields and any custom highlighted fields that were added to the alert’s rule. -* **Alert count**: Shows the total number of alert documents that have identical highlighted field values, including the alert you’re currently examining. For example, if the `host.name` field has an alert count of 5, that means there are five total alerts with the same `host.name` value. The Alert count column only retrieves documents that contain the [`event.kind:signal`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair. -* **Document count**: Shows the total number of event documents that have identical field values. A dash (`——`) displays if there are no event documents that match the field value. The Document count column only retrieves documents that don’t contain the [`event.kind:signal`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair. 
+* **Alert count**: Shows the total number of alert documents that have identical highlighted field values, including the alert you’re currently examining. For example, if the `host.name` field has an alert count of 5, that means there are five total alerts with the same `host.name` value. The Alert count column only retrieves documents that contain the [`event.kind:signal`](ecs://reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair. +* **Document count**: Shows the total number of event documents that have identical field values. A dash (`——`) displays if there are no event documents that match the field value. The Document count column only retrieves documents that don’t contain the [`event.kind:signal`](ecs://reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair. The following features require a [Platinum subscription](https://www.elastic.co/pricing) or higher in {{stack}} or the appropriate [{{serverless-short}} project tier](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) diff --git a/solutions/security/explore/configure-network-map-data.md b/solutions/security/explore/configure-network-map-data.md index 79c1b02ff8..ceea727544 100644 --- a/solutions/security/explore/configure-network-map-data.md +++ b/solutions/security/explore/configure-network-map-data.md @@ -37,7 +37,7 @@ For example, to display data that is stored in indices matching the index patter ## Add geoIP data [geoip-data] -When the ECS [source.geo.location and destination.geo.location](asciidocalypse://docs/ecs/docs/reference/ecs-geo.md) fields are mapped, network data is displayed on the map. +When the ECS [source.geo.location and destination.geo.location](ecs://reference/ecs-geo.md) fields are mapped, network data is displayed on the map. 
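Once `source.geo.location` and `destination.geo.location` are mapped, enriched documents carry geo coordinates in the ECS `{lat, lon}` shape. A minimal Python sketch of that document shape — the IP addresses, coordinates, and the `has_geo` helper are illustrative, not part of any Elastic API:

```python
# Hypothetical enriched event: the shape the ECS geo fields take after a
# geoIP enrichment step. Field names follow ECS; the IP addresses and
# coordinates are invented for illustration.
event = {
    "source": {
        "ip": "203.0.113.5",
        "geo": {"location": {"lat": 40.7128, "lon": -74.0060}},
    },
    "destination": {
        "ip": "198.51.100.7",
        "geo": {"location": {"lat": 51.5074, "lon": -0.1278}},
    },
}

def has_geo(doc: dict, side: str) -> bool:
    """True if the given side ("source" or "destination") carries a plottable geo location."""
    return "location" in doc.get(side, {}).get("geo", {})

print(has_geo(event, "source"), has_geo(event, "destination"))
```

Events missing these locations simply don't appear on the map, so checking for the nested `geo.location` key is a quick way to verify an enrichment step worked.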
If you use Beats, configure a geoIP processor to add data to the relevant fields: diff --git a/solutions/security/get-started/automatic-import.md b/solutions/security/get-started/automatic-import.md index cef8310df1..e496133dcb 100644 --- a/solutions/security/get-started/automatic-import.md +++ b/solutions/security/get-started/automatic-import.md @@ -24,7 +24,7 @@ This feature is in technical preview. It may change in the future, and you shoul Automatic Import helps you quickly parse, ingest, and create [ECS mappings](https://www.elastic.co/elasticsearch/common-schema) for data from sources that don’t yet have prebuilt Elastic integrations. This can accelerate your migration to {{elastic-sec}}, and help you quickly add new data sources to an existing SIEM solution in {{elastic-sec}}. Automatic Import uses a large language model (LLM) with specialized instructions to quickly analyze your source data and create a custom integration. -While Elastic has 400+ [prebuilt data integrations](https://docs.elastic.co/en/integrations), Automatic Import helps you extend data coverage to other security-relevant technologies and applications. Elastic integrations (including those created by Automatic Import) normalize data to [the Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/index.md), which creates uniformity across dashboards, search, alerts, machine learning, and more. +While Elastic has 400+ [prebuilt data integrations](https://docs.elastic.co/en/integrations), Automatic Import helps you extend data coverage to other security-relevant technologies and applications. Elastic integrations (including those created by Automatic Import) normalize data to [the Elastic Common Schema (ECS)](ecs://reference/index.md), which creates uniformity across dashboards, search, alerts, machine learning, and more. 
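The normalization that integrations perform can be sketched in a few lines of Python. The vendor field names here (`ts`, `act`, `src`, `msg`) are invented for illustration; the output keys are standard ECS fields:

```python
# A minimal sketch of the field renaming an ECS-normalizing integration
# performs. Input keys are a hypothetical vendor format; output keys are
# standard ECS field names.
def to_ecs(raw: dict) -> dict:
    return {
        "@timestamp": raw["ts"],
        "event": {"action": raw["act"], "kind": "event"},
        "source": {"ip": raw["src"]},
        "message": raw.get("msg", ""),
    }

doc = to_ecs({"ts": "2024-05-01T12:00:00Z", "act": "login", "src": "10.0.0.1"})
print(doc["event"]["action"])
```

Because every integration emits the same target fields, a dashboard or detection rule written against `source.ip` works regardless of which vendor produced the event.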
 ::::{tip}
 Click [here](https://elastic.navattic.com/automatic-import) to access an interactive demo that shows the feature in action, before setting it up yourself.
diff --git a/solutions/security/get-started/enable-threat-intelligence-integrations.md b/solutions/security/get-started/enable-threat-intelligence-integrations.md
index 05fd55617e..f64c03fbbc 100644
--- a/solutions/security/get-started/enable-threat-intelligence-integrations.md
+++ b/solutions/security/get-started/enable-threat-intelligence-integrations.md
@@ -72,7 +72,7 @@ There are a few scenarios when data won’t display in the Threat Intelligence v
 2. Update the `securitySolution:defaultThreatIndex` [advanced setting](configure-advanced-settings.md#update-threat-intel-indices) by adding the appropriate index pattern name after the default {{fleet}} threat intelligence index pattern (`logs-ti*`), for example, `logs-ti*`,`custom-ti-index*`.
 
 ::::{note}
- Threat intelligence indices aren’t required to be ECS compatible. However, we strongly recommend compatibility if you’d like your alerts to be enriched with relevant threat indicator information. You can find a list of ECS-compliant threat intelligence fields at [Threat Fields](asciidocalypse://docs/ecs/docs/reference/ecs-threat.md).
+ Threat intelligence indices aren’t required to be ECS compatible. However, we strongly recommend compatibility if you’d like your alerts to be enriched with relevant threat indicator information. You can find a list of ECS-compliant threat intelligence fields at [Threat Fields](ecs://reference/ecs-threat.md).
 ::::
diff --git a/solutions/security/get-started/ingest-data-to-elastic-security.md b/solutions/security/get-started/ingest-data-to-elastic-security.md
index ad4d9201dd..902f868a70 100644
--- a/solutions/security/get-started/ingest-data-to-elastic-security.md
+++ b/solutions/security/get-started/ingest-data-to-elastic-security.md
@@ -21,7 +21,7 @@ To ingest data, you can use:
 
 ::::{important}
 If you use a third-party collector to ship data to {{elastic-sec}}, you must map its fields to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Additionally, you must add its index to the {{elastic-sec}} indices (update the **`securitySolution:defaultIndex`** [advanced setting](/solutions/security/get-started/configure-advanced-settings.md#update-sec-indices)).
 
-{{elastic-sec}} uses the [`host.name`](asciidocalypse://docs/ecs/docs/reference/ecs-host.md) ECS field as the primary key for identifying hosts.
+{{elastic-sec}} uses the [`host.name`](ecs://reference/ecs-host.md) ECS field as the primary key for identifying hosts.
diff --git a/troubleshoot/kibana/using-kibana-server-logs.md b/troubleshoot/kibana/using-kibana-server-logs.md
index 8cb6d2fdf9..7994e873e6 100644
--- a/troubleshoot/kibana/using-kibana-server-logs.md
+++ b/troubleshoot/kibana/using-kibana-server-logs.md
@@ -41,7 +41,7 @@ logging.loggers:
 ```
 
 ::::{warning}
-Kibana’s `file` appender is configured to produce logs in [ECS JSON](asciidocalypse://docs/ecs/docs/reference/index.md) format. It’s the only format that includes the meta information necessary for [log correlation](asciidocalypse://docs/apm-agent-nodejs/docs/reference/logs.md) out-of-the-box.
+Kibana’s `file` appender is configured to produce logs in [ECS JSON](ecs://reference/index.md) format. It’s the only format that includes the meta information necessary for [log correlation](asciidocalypse://docs/apm-agent-nodejs/docs/reference/logs.md) out-of-the-box.
 ::::
 
@@ -49,7 +49,7 @@ The next step is to define what [observability tools](https://www.elastic.co/obs
 
 ## APM UI [debugging-logs-apm-ui]
 
-**Prerequisites** {{kib}} logs are configured to be in [ECS JSON](asciidocalypse://docs/ecs/docs/reference/index.md) format to include tracing identifiers.
+**Prerequisites** {{kib}} logs are configured to be in [ECS JSON](ecs://reference/index.md) format to include tracing identifiers.
 
 To debug {{kib}} with the APM UI, you must set up the APM infrastructure. You can find instructions for the setup process [on the Observability integrations page](/solutions/observability/logs/stream-application-logs.md).
 
@@ -58,7 +58,7 @@ Once you set up the APM infrastructure, you can enable the APM agent and put {{k
 
 ## Plain {{kib}} logs [plain-kibana-logs]
 
-**Prerequisites** {{kib}} logs are configured to be in [ECS JSON](asciidocalypse://docs/ecs/docs/reference/index.md) format to include tracing identifiers.
+**Prerequisites** {{kib}} logs are configured to be in [ECS JSON](ecs://reference/index.md) format to include tracing identifiers.
 
 Open {{kib}} Logs and search for an operation you are interested in. For example, suppose you want to investigate the response times for queries to the `/internal/telemetry/clusters/_stats` {{kib}} endpoint. Open Kibana Logs and search for the HTTP server response for the endpoint. It looks similar to the following (some fields are omitted for brevity).
 
@@ -71,6 +71,6 @@ Open {{kib}} Logs and search for an operation you are interested in. For example
 }
 ```
 
-You are interested in the [trace.id](asciidocalypse://docs/ecs/docs/reference/ecs-tracing.md#field-trace-id) field, which is a unique identifier of a trace. The `trace.id` provides a way to group multiple events, like transactions, which belong together. You can search for `"trace":{"id":"9b99131a6f66587971ef085ef97dfd07"}` to get all the logs that belong to the same trace. This enables you to see how many {{es}} requests were triggered during the `9b99131a6f66587971ef085ef97dfd07` trace, what they looked like, what {{es}} endpoints were hit, and so on.
+You are interested in the [trace.id](ecs://reference/ecs-tracing.md#field-trace-id) field, which is a unique identifier of a trace. The `trace.id` provides a way to group multiple events, like transactions, which belong together. You can search for `"trace":{"id":"9b99131a6f66587971ef085ef97dfd07"}` to get all the logs that belong to the same trace. This enables you to see how many {{es}} requests were triggered during the `9b99131a6f66587971ef085ef97dfd07` trace, what they looked like, what {{es}} endpoints were hit, and so on.