diff --git a/deploy-manage/autoscaling/trained-model-autoscaling.md b/deploy-manage/autoscaling/trained-model-autoscaling.md index ac4d2b1ca9..81bedd36f0 100644 --- a/deploy-manage/autoscaling/trained-model-autoscaling.md +++ b/deploy-manage/autoscaling/trained-model-autoscaling.md @@ -32,6 +32,7 @@ The available resources of self-managed deployments are static, so trained model {{serverless-full}} Security and Observability projects are only charged for data ingestion and retention. They are not charged for processing power (VCU usage), which is used for more complex operations, like running advanced search models. For example, in Search projects, models such as ELSER require significant processing power to provide more accurate search results. ## Enabling autoscaling through APIs - adaptive allocations [enabling-autoscaling-through-apis-adaptive-allocations] + $$$nlp-model-adaptive-resources$$$ Model allocations are independent units of work for NLP tasks. If you set the numbers of threads and allocations for a model manually, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.) @@ -55,6 +56,7 @@ When you create inference endpoints on {{serverless-short}} using Kibana, adapti ::: ### Optimizing for typical use cases [optimizing-for-typical-use-cases] + You can optimize your model deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of {{infer}} requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes. * If you want to optimize for ingest, set the number of threads to `1` (`"threads_per_allocation": 1`). @@ -86,6 +88,11 @@ The used resources for trained model deployments depend on three factors: * the use case you optimize the model deployment for (ingest or search) * whether model autoscaling is enabled with adaptive allocations/resources to have dynamic resources, or disabled for static resources +::::{note} +On {{serverless-short}}, VCUs for {{ml}} are based on the amount of vCPU and memory consumed. For {{ml}}, `1` VCU equals `0.125` of vCPU and `1GB` of memory, where vCPUs are measured by allocations multiplied by threads, and where memory is the amount consumed by trained models or {{ml}} jobs. +As a math formula, `VCUs = 8 * allocations * threads`, or `1` VCU for every `1GB` of memory consumed, whichever is greater. +:::: + If you use a self-managed cluster or ECK, vCPUs level ranges are derived from the `total_ml_processors` and `max_single_ml_node_processors` values. Use the [get {{ml}} info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-info) to check these values. The following tables show you the number of allocations, threads, and vCPUs available in ECE and ECH when adaptive resources are enabled or disabled. 
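To make the VCU arithmetic in the note above concrete, here is a minimal sketch. The allocation, thread, and memory figures are hypothetical; the ratios come straight from the note (`1` VCU = `0.125` vCPU, and `1` VCU per `1GB` of memory, whichever is greater):

```sh
# Hypothetical deployment: 2 allocations x 4 threads, model consuming 24GB of memory.
allocations=2
threads=4
memory_gb=24

cpu_vcus=$(( 8 * allocations * threads ))  # 8 VCUs per vCPU; vCPUs = allocations * threads
mem_vcus=$memory_gb                        # 1 VCU per 1GB of memory consumed

# Billing uses whichever figure is greater.
if [ "$cpu_vcus" -gt "$mem_vcus" ]; then
  echo "VCUs: $cpu_vcus (vCPU-bound)"
else
  echo "VCUs: $mem_vcus (memory-bound)"
fi
```

With these example figures, the vCPU term (64 VCUs) dominates the memory term (24 VCUs).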
diff --git a/deploy-manage/deploy/cloud-enterprise/configure.md b/deploy-manage/deploy/cloud-enterprise/configure.md index cc569297ad..12ed8b7040 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure.md +++ b/deploy-manage/deploy/cloud-enterprise/configure.md @@ -33,7 +33,7 @@ Other sections of the documentation describe important ECE features to consider: * [Configure allocator affinity](configure-allocator-affinity.md) - Determine how ECE distributes your Elastic Stack deployments across allocators. * [Change allocator disconnect timeout](change-allocator-disconnect-timeout.md) - Configure how long ECE waits before considering allocators to be disconnected. * [Migrate ECE to Podman hosts](./migrate-ece-to-podman-hosts.md) - If you are running a Docker based installation and you need to migrate to Podman. -* [Migrate ECE on Podman hosts to SELinux in enforcing mode](migrate-ece-on-podman-hosts-to-selinux-enforce.md) - Migrate ECE to SELinux in `enforcing` mode using Podman. +* [Migrate ECE on Podman hosts to SELinux in enforcing mode](../../security/secure-your-elastic-cloud-enterprise-installation/migrate-ece-on-podman-hosts-to-selinux-enforce.md) - Migrate ECE to SELinux in `enforcing` mode using Podman. ## Maintenance activities diff --git a/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md b/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md index a9a70ba715..51624138a7 100644 --- a/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md +++ b/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md @@ -52,5 +52,5 @@ To start orchestrating your {{es}} clusters, refer to [](./working-with-deployme The following tasks are only needed on certain circumstances: * [Migrate ECE to Podman hosts](./migrate-ece-to-podman-hosts.md) -* [Migrate ECE on Podman hosts to SELinux enforce](./migrate-ece-on-podman-hosts-to-selinux-enforce.md) +* [Migrate ECE on Podman hosts to SELinux enforce](./../../security/secure-your-elastic-cloud-enterprise-installation/migrate-ece-on-podman-hosts-to-selinux-enforce.md) * [Change allocator disconnect timeout](./change-allocator-disconnect-timeout.md) diff --git a/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md b/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md index 1fe7274782..d502a5d2e5 100644 --- a/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md +++ b/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md @@ -29,6 +29,7 @@ After installing or upgrading to version 2.10 or later: * For Kibana, the certificate needs to allow for ***.kb.** * For APM, the certificate needs to allow for ***.apm.** * For Fleet, the certificate needs to allow for ***.fleet.** + * For Universal Profiling, the certificate needs to allow for ***.profiling.** and ***.symbols.** 3. In the **Platform** menu, select **Settings**. 4. Under the **Enable custom endpoint alias naming**, toggle the setting to allow platform administrators and deployment managers to choose a simplified, unique URL for the endpoint. 
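Following on from the wildcard list above, the sketch below generates a certificate signing request whose SAN entries cover each product suffix. This is a non-authoritative example: the `ece.example.com` domain, the file names, and the inclusion of an `*.es.` entry for {{es}} endpoints are assumptions to adapt to your own environment and certificate authority workflow:

```sh
# Requires OpenSSL 1.1.1+ for the -addext flag. All DNS names below are placeholders.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout ece-wildcard.key -out ece-wildcard.csr \
  -subj "/CN=*.ece.example.com" \
  -addext "subjectAltName=DNS:*.ece.example.com,DNS:*.es.ece.example.com,DNS:*.kb.ece.example.com,DNS:*.apm.ece.example.com,DNS:*.fleet.ece.example.com,DNS:*.profiling.ece.example.com,DNS:*.symbols.ece.example.com"
```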
diff --git a/deploy-manage/deploy/cloud-on-k8s.md b/deploy-manage/deploy/cloud-on-k8s.md index c931c15bec..2d19e3105c 100644 --- a/deploy-manage/deploy/cloud-on-k8s.md +++ b/deploy-manage/deploy/cloud-on-k8s.md @@ -80,11 +80,11 @@ Alpha, beta, and stable API versions follow the same [conventions used by Kubern ECK is compatible with the following Elastic Stack applications: -* Elasticsearch, Kibana, APM Server: 6.8+, 7.1+, 8+ -* Enterprise Search: 7.7+, 8+ -* Beats: 7.0+, 8+ -* Elastic Agent: 7.10+ (standalone), 7.14+ (Fleet), 8+ -* Elastic Maps Server: 7.11+, 8+ +* Elasticsearch, Kibana, APM Server: 7.17+, 8+ +* Enterprise Search: 7.17+, 8+ +* Beats: 7.17+, 8+ +* Elastic Agent: 7.10+ (standalone), 7.17+ (Fleet), 8+ +* Elastic Maps Server: 7.17+, 8+ * Logstash: 8.7+ Elastic Stack application images for the OpenShift-certified Elasticsearch (ECK) Operator are only available from version 7.10 and later. diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md index 9dc95ca9be..a0c088145e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md @@ -12,7 +12,7 @@ mapped_pages: This page explains the various methods for configuring and applying ECK settings. ::::{tip} -For a detailed list and description of all available settings in ECK, refer to [ECK configuration flags](asciidocalypse://docs/cloud-on-k8s/docs/reference/eck-configuration-flags.md). +For a detailed list and description of all available settings in ECK, refer to [ECK configuration flags](cloud-on-k8s://reference/eck-configuration-flags.md). :::: By default, the ECK installation includes a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with an `eck.yaml` key where you can add, remove, or update configuration settings. This ConfigMap is mounted into the operator’s container as a file, and provided to the application through the `--config` flag. @@ -56,7 +56,7 @@ If you installed ECK using the manifests and the commands listed in [Deploy ECK] You can update the ConfigMap directly using the command `kubectl edit configmap elastic-operator -n elastic-operator` or modify the installation manifests and reapply them with `kubectl apply -f `. -The following shows the default `elastic-operator` ConfigMap, for reference purposes. Refer to [ECK configuration flags](asciidocalypse://docs/cloud-on-k8s/docs/reference/eck-configuration-flags.md) for a complete list of available settings. +The following shows the default `elastic-operator` ConfigMap, for reference purposes. Refer to [ECK configuration flags](cloud-on-k8s://reference/eck-configuration-flags.md) for a complete list of available settings. ```yaml apiVersion: v1 diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md index 22a88db27f..faa5017fdf 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md @@ -44,7 +44,7 @@ The cluster that you deployed in this quickstart guide only allocates a persiste :::: -For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](asciidocalypse://docs/cloud-on-k8s/docs/reference/k8s-api-reference.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). 
You can also retrieve information about a CRD from the cluster. For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): +For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the cluster. For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd elasticsearch diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md index 84a30062ea..a60e034f1a 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md @@ -43,7 +43,7 @@ kubectl annotate namespace elastic-stack linkerd.io/inject=enabled Any Elasticsearch, Kibana, or APM Server resources deployed to a namespace with the above annotation will automatically join the mesh. -Alternatively, if you only want specific resources to join the mesh, add the `linkerd.io/inject: enabled` annotation to the `podTemplate` (check [API documentation](asciidocalypse://docs/cloud-on-k8s/docs/reference/k8s-api-reference.md)) of the resource as follows: +Alternatively, if you only want specific resources to join the mesh, add the `linkerd.io/inject: enabled` annotation to the `podTemplate` (check [API documentation](cloud-on-k8s://reference/api-docs.md)) of the resource as follows: ```yaml podTemplate: diff --git a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md index 9364c9067d..8e27c7a539 100644 --- a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md @@ -66,7 +66,7 @@ To deploy a simple [{{kib}}](/get-started/the-stack.md#stack-components-kibana) ``` -For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](asciidocalypse://docs/cloud-on-k8s/docs/reference/k8s-api-reference.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the instance. For example, describe the {{kib}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): +For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the instance. 
For example, describe the {{kib}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd kibana diff --git a/deploy-manage/deploy/deployment-comparison.md b/deploy-manage/deploy/deployment-comparison.md index 4c71a81591..da68b528a8 100644 --- a/deploy-manage/deploy/deployment-comparison.md +++ b/deploy-manage/deploy/deployment-comparison.md @@ -12,7 +12,7 @@ For more details about feature availability in Serverless, check [](elastic-clou | [Security configurations](/deploy-manage/security.md) | Full control | Limited control | Limited control | | [Authentication realms](/deploy-manage/users-roles.md) | Available | Available | Available, through Elastic Cloud only | | [Custom roles](/deploy-manage/users-roles.md) | Available | Available | Available | -| [Audit logging](/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md) | Available | Available | No | +| [Audit logging](/deploy-manage/security/logging-configuration/security-event-audit-logging.md) | Available | Available | No | ## Infrastructure and cluster management diff --git a/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md b/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md index 23f677a7a8..65d396d437 100644 --- a/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md +++ b/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md @@ -16,11 +16,11 @@ Plugins extend the core functionality of {{es}}. There are many suitable plugins * Analysis plugins, to provide analyzers targeted at languages other than English. * Scripting plugins, to provide additional scripting languages. -Plugins can come from different sources: the official ones created or at least maintained by Elastic, community-sourced plugins from other users, and plugins that you provide. Some of the official plugins are always provided with our service, and can be [enabled per deployment](elasticsearch://reference/elasticsearch-plugins/cloud/ec-adding-elastic-plugins.md). +Plugins can come from different sources: the official ones created or at least maintained by Elastic, community-sourced plugins from other users, and plugins that you provide. Some of the official plugins are always provided with our service, and can be [enabled per deployment](elasticsearch://reference/elasticsearch-plugins/plugin-management.md). There are two ways to add plugins to a hosted deployment in {{ecloud}}: -* [Enable one of the official plugins already available in {{ecloud}}](elasticsearch://reference/elasticsearch-plugins/cloud/ec-adding-elastic-plugins.md). +* [Enable one of the official plugins already available in {{ecloud}}](elasticsearch://reference/elasticsearch-plugins/plugin-management.md). * [Upload a custom plugin and then enable it per deployment](upload-custom-plugins-bundles.md). Custom plugins can include the official {{es}} plugins not provided with {{ecloud}}, any of the community-sourced plugins, or [plugins that you write yourself](elasticsearch://extend/index.md). Uploading custom plugins is available only to Gold, Platinum, and Enterprise subscriptions. For more information, check [Upload custom plugins and bundles](upload-custom-plugins-bundles.md). 
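As a companion to the `kubectl describe crd` commands in the CRD quickstarts above, the following read-only commands offer another quick way to inspect what the ECK operator installed. A minimal sketch, assuming a standard ECK installation:

```sh
# List the CRDs registered by ECK; they all live in the k8s.elastic.co API group.
kubectl get crds | grep k8s.elastic.co

# Print the documented fields of the Elasticsearch resource spec.
kubectl explain elasticsearch.spec
```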
diff --git a/deploy-manage/deploy/elastic-cloud/regions.md b/deploy-manage/deploy/elastic-cloud/regions.md index fd72694a02..7be5a73b52 100644 --- a/deploy-manage/deploy/elastic-cloud/regions.md +++ b/deploy-manage/deploy/elastic-cloud/regions.md @@ -12,7 +12,7 @@ A region is the geographic area where the data center of the cloud provider that Elastic Cloud Serverless handles all hosting details for you. You are unable to change the region after you create a project. ::::{note} -Currently, a limited number of Amazon Web Services (AWS) and Microsoft Azure regions are available. More regions for AWS and Azure, as well as Google Cloud Platform (GCP), will be added in the future. +Currently, a limited number of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) regions are available. More regions for AWS, Azure, and GCP will be added in the future. :::: @@ -39,4 +39,16 @@ The following Azure regions are currently available: | Region | Name | | :--- | :--- | -| eastus | East US | \ No newline at end of file +| eastus | East US | + +## Google Cloud Platform (GCP) regions [regions-gcp-regions] + +```yaml {applies_to} serverless: preview ``` + +The following GCP regions are currently available: + +| Region | Name | | :--- | :--- | +| us-central1 | Iowa | \ No newline at end of file diff --git a/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md b/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md index 7b1e195490..93e7d5f1ab 100644 --- a/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md +++ b/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md @@ -27,7 +27,7 @@ If you’re using {{agent}}, do not deploy {{filebeat}} for log collection. Inst 2. Identify which logs you want to monitor. - The {{filebeat}} {{es}} module can handle [audit logs](../logging-configuration/logfile-audit-output.md), [deprecation logs](../logging-configuration/elasticsearch-log4j-configuration-self-managed.md#deprecation-logging), [gc logs](elasticsearch://reference/elasticsearch/jvm-settings.md#gc-logging), [server logs](../logging-configuration/elasticsearch-log4j-configuration-self-managed.md), and [slow logs](elasticsearch://reference/elasticsearch/index-settings/slow-log.md). For more information about the location of your {{es}} logs, see the [path.logs](../../deploy/self-managed/important-settings-configuration.md#path-settings) setting. + The {{filebeat}} {{es}} module can handle [audit logs](../../security/logging-configuration/logfile-audit-output.md), [deprecation logs](../logging-configuration/elasticsearch-log4j-configuration-self-managed.md#deprecation-logging), [gc logs](elasticsearch://reference/elasticsearch/jvm-settings.md#gc-logging), [server logs](../logging-configuration/elasticsearch-log4j-configuration-self-managed.md), and [slow logs](elasticsearch://reference/elasticsearch/index-settings/slow-log.md). For more information about the location of your {{es}} logs, see the [path.logs](../../deploy/self-managed/important-settings-configuration.md#path-settings) setting. ::::{important} If there are both structured (`*.json`) and unstructured (plain text) versions of the logs, you must use the structured logs. Otherwise, they might not appear in the appropriate context in {{kib}}.
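To make the {{filebeat}} {{es}} module setup above concrete, here is a hedged sketch of a `modules.d/elasticsearch.yml` configuration. The paths assume the default RPM/DEB layout under `/var/log/elasticsearch`; adjust them to your `path.logs` setting, and note that the structured (`*.json`) files are used throughout, per the important note above:

```yaml
# Enable the module first with: filebeat modules enable elasticsearch
- module: elasticsearch
  server:
    enabled: true
    var.paths: ["/var/log/elasticsearch/*_server.json"]
  audit:
    enabled: true
    var.paths: ["/var/log/elasticsearch/*_audit.json"]
  slowlog:
    enabled: true
    var.paths:
      - /var/log/elasticsearch/*_index_search_slowlog.json
      - /var/log/elasticsearch/*_index_indexing_slowlog.json
```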
diff --git a/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md b/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md index 247ce65afe..af3d682d76 100644 --- a/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md @@ -181,7 +181,7 @@ When shipping logs to a monitoring deployment there are more logging features av #### For {{es}}: [ece-extra-logging-features-elasticsearch] -* [Audit logging](../logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment +* [Audit logging](../../security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment * [Slow query and index logging](elasticsearch://reference/elasticsearch/index-settings/slow-log.md) - helps find and debug slow queries and indexing * Verbose logging - helps debug stack issues by increasing component logs @@ -190,7 +190,7 @@ After you’ve enabled log delivery on your deployment, you can [add the Elastic #### For Kibana: [ece-extra-logging-features-kibana] -* [Audit logging](../logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment +* [Audit logging](../../security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment After you’ve enabled log delivery on your deployment, you can [add the Kibana user settings](../../deploy/cloud-enterprise/edit-stack-settings.md) to enable this feature. diff --git a/deploy-manage/production-guidance/optimize-performance/size-shards.md b/deploy-manage/production-guidance/optimize-performance/size-shards.md index aa7b654ec9..5e3711ddb1 100644 --- a/deploy-manage/production-guidance/optimize-performance/size-shards.md +++ b/deploy-manage/production-guidance/optimize-performance/size-shards.md @@ -5,8 +5,33 @@ mapped_pages: # Size your shards [size-your-shards] -Each index in {{es}} is divided into one or more shards, each of which may be replicated across multiple nodes to protect against hardware failures. If you are using [Data streams](../../../manage-data/data-store/data-streams.md) then each data stream is backed by a sequence of indices. There is a limit to the amount of data you can store on a single node so you can increase the capacity of your cluster by adding nodes and increasing the number of indices and shards to match. However, each index and shard has some overhead and if you divide your data across too many shards then the overhead can become overwhelming. A cluster with too many indices or shards is said to suffer from *oversharding*. An oversharded cluster will be less efficient at responding to searches and in extreme cases it may even become unstable. +## What is a shard? [what-is-a-shard] +A shard is a basic unit of storage in {{es}}. Every index is divided into one or more shards to help distribute data and workload across nodes in a cluster. This division allows {{es}} to handle large datasets and perform operations like searches and indexing efficiently. For more detailed information on shards, see [this page](/deploy-manage/distributed-architecture/clusters-nodes-shards.md). + +## General guidelines [sizing-shard-guidelines] + +Balancing the number and size of your shards is important for the performance and stability of an {{es}} cluster: + +* Too many shards can degrade search performance and make the cluster unstable. This is referred to as _oversharding_. 
+* Very large shards can slow down search operations and prolong recovery times after failures. + +To avoid either of these problems, follow these guidelines: + +### General sizing guidelines + +* Aim for shard sizes between 10GB and 50GB +* Keep the number of documents on each shard below 200 million + +### Shard distribution guidelines + +To ensure that each node is working optimally, distribute shards evenly across nodes. Uneven distribution can cause some nodes to work harder than others, leading to performance degradation and instability. + +While {{es}} automatically balances shards, you need to configure indices with an appropriate number of shards and replicas to allow for even distribution across nodes. + +If you are using [data streams](/manage-data/data-store/data-streams.md), each data stream is backed by a sequence of indices, each index potentially having multiple shards. + +Beyond these general guidelines, it's best to develop a tailored [sharding strategy](#create-a-sharding-strategy) that considers your specific infrastructure, use case, and performance expectations. ## Create a sharding strategy [create-a-sharding-strategy] diff --git a/deploy-manage/security.md b/deploy-manage/security.md index b085b550c1..bd7f9293e4 100644 --- a/deploy-manage/security.md +++ b/deploy-manage/security.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: all + serverless: ga mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/security-files.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-cluster.html @@ -12,8 +15,6 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-faq-technical.html --- -# Security - % SR: include this info somewhere in this section % {{ech}} doesn't support custom SSL certificates, which means that a custom CNAME for an {{ech}} endpoint such as *mycluster.mycompanyname.com* also is not supported. % @@ -22,7 +23,7 @@ mapped_urls: % encryption at rest (EAR) is enabled in {{ech}} by default. We support EAR for both the data stored in your clusters and the snapshots we take for backup, on all cloud platforms and across all regions. % You can also bring your own key (BYOK) to encrypt your Elastic Cloud deployment data and snapshots. For more information, check [Encrypt your deployment with a customer-managed encryption key](../../../deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md). -Note that the encryption happens at the file system level. +% Note that the encryption happens at the file system level.
% What needs to be done: Refine @@ -54,15 +55,178 @@ $$$preserving-data-integrity$$$ $$$maintaining-audit-trail$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: - -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md) -* [/raw-migrated-files/kibana/kibana/xpack-security.md](/raw-migrated-files/kibana/kibana/xpack-security.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-securing-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-securing-stack.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-securing-ece.md](/raw-migrated-files/cloud/cloud-enterprise/ece-securing-ece.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-security.md](/raw-migrated-files/cloud/cloud-heroku/ech-security.md) -* [/raw-migrated-files/kibana/kibana/using-kibana-with-security.md](/raw-migrated-files/kibana/kibana/using-kibana-with-security.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/security-limitations.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/security-limitations.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md) -* [/raw-migrated-files/cloud/cloud/ec-faq-technical.md](/raw-migrated-files/cloud/cloud/ec-faq-technical.md) \ No newline at end of file +:::{warning} +**This page is a work in progress.** +::: + + +% The documentation team is working to combine content pulled from the following pages: + +% * [/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md) +% * [/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md) +% * [/raw-migrated-files/kibana/kibana/xpack-security.md](/raw-migrated-files/kibana/kibana/xpack-security.md) +% * [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-securing-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-securing-stack.md) +% * [/raw-migrated-files/cloud/cloud-enterprise/ece-securing-ece.md](/raw-migrated-files/cloud/cloud-enterprise/ece-securing-ece.md) +% * [/raw-migrated-files/cloud/cloud-heroku/ech-security.md](/raw-migrated-files/cloud/cloud-heroku/ech-security.md) +% * [/raw-migrated-files/kibana/kibana/using-kibana-with-security.md](/raw-migrated-files/kibana/kibana/using-kibana-with-security.md) +% * [/raw-migrated-files/elasticsearch/elasticsearch-reference/security-limitations.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/security-limitations.md) +% * [/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md) +% * [/raw-migrated-files/cloud/cloud/ec-faq-technical.md](/raw-migrated-files/cloud/cloud/ec-faq-technical.md) + +# Security + +This overview page helps you understand Elastic's security capabilities across different deployment types. 
You'll find: + +- Key security features for protecting your Elastic deployment +- Security capabilities specific to each deployment type +- Comparison tables showing feature availability and configurability by deployment type +- Links to detailed implementation guides + +## Security overview + +An Elastic implementation comprises many moving parts: {{es}} nodes forming the cluster, {{kib}} instances, additional stack components such as Logstash and Beats, and various clients and integrations communicating with your deployment. + +To keep your data secured, Elastic offers comprehensive security features that: +- Prevent unauthorized access to your deployment +- Encrypt communications between components +- Protect data at rest +- Secure sensitive settings and saved objects + +:::{note} +The availability and configurability of security features vary by deployment type. Refer to [Security by deployment type](#security-features-by-deployment-type) for a comparison table. +::: + +## Security topics + +The documentation is organized into three main areas. + +On every page, you'll see deployment type indicators that show which content applies to specific deployment types. Focus on sections tagged with your deployment type and look for subsections specifically addressing your deployment model. + +### 1. Secure your orchestrator + +The [security of your orchestrator](security/secure-hosting-environment.md) forms the foundation of your overall security posture. This section covers environment-specific security controls: + +- [**Elastic Cloud Hosted and Serverless**](security/secure-your-elastic-cloud-organization.md) +- [**Elastic Cloud Enterprise**](security/secure-your-elastic-cloud-enterprise-installation.md) +- [**Elastic Cloud on Kubernetes**](security/secure-your-eck-installation.md) + +:::{note} +There is no orchestration layer for self-managed deployments because you directly control the host environment. Refer to [](security/manually-configure-security-in-self-managed-cluster.md) to learn more about securing self-managed installations. +::: + +### 2. Secure your deployments and clusters + +[Secure your deployments](security/secure-your-cluster-deployment.md) with features available across all deployment types: + +- [**Traffic filtering**](security/traffic-filtering.md): IP filtering, private links, and static IPs +- [**Secure communications**](security/secure-cluster-communications.md): TLS configuration, certificates management +- [**Data protection**](security/data-security.md): Encryption at rest, secure settings, saved objects +- [**Security event audit logging**](security/logging-configuration/security-event-audit-logging.md): {{es}} and {{kib}} audit logs +- [**Session management**](security/kibana-session-management.md): Kibana session controls +- [**FIPS 140-2 compliance**](security/fips-140-2.md): Federal security standards + +### 3. 
Secure your clients and integrations + +[Secure your clients and integrations](security/secure-clients-integrations.md) to ensure secure communication between your applications and Elastic: + +- [**Client security**](security/httprest-clients-security.md): Best practices for securely connecting applications to {{es}} +- **Integration security**: Secure configuration for Beats, Logstash, and other integrations + +## Security features by deployment type + +Security feature availability varies by deployment type, with each feature having one of the following statuses: + +| **Status** | **Description** | +|--------|-------------| +| **Managed** | Handled automatically by Elastic with no user configuration needed | +| **Configurable** | Built-in feature that needs your configuration (like IP filters or passwords) | +| **Self-managed** | Infrastructure-level security you implement and maintain | +| **N/A** | Not available for this deployment type | + +Select your deployment type below to see what's available and how implementation responsibilities are distributed: + +::::{tab-set} +:group: deployment-type + +:::{tab-item} Elastic Cloud Hosted +:sync: cloud-hosted + +| **Security Category** | **Security Feature** | **Status** | **Description** | +|------------------|------------|--------------|-------------| +| **Communication** | TLS (HTTP Layer) | Managed | Automatically configured by Elastic | +| | TLS (Transport Layer) | Managed | Automatically configured by Elastic | +| **Network** | IP traffic filtering | Configurable | Configure IP-based access restrictions | +| | Private link | Configurable | Establish secure VPC connection | +| | Static IPs | Configurable | Enable fixed IP addresses | +| **Data** | Encryption at rest | Managed | Automatically encrypted by Elastic | +| | Bring your own encryption key | Configurable | Implement customer-provided keys | +| | Keystore security | Managed | Automatically protected by Elastic | +| | Saved object encryption | Managed | Automatically encrypted by Elastic | +| **User Session** | Kibana Sessions | Configurable | Customize session parameters | + +::: + +:::{tab-item} Serverless +:sync: serverless + +| **Security Category** | **Security Feature** | **Status** | **Description** | +|------------------|------------|--------------|-------------| +| **Communication** | TLS (HTTP Layer) | Managed | Automatically configured by Elastic | +| | TLS (Transport Layer) | Managed | Automatically configured by Elastic | +| **Network** | IP traffic filtering | Configurable | Configure IP-based access restrictions | +| | Private link | N/A | X | +| | Static IPs | Configurable | Enable fixed IP addresses | +| **Data** | Encryption at rest | Managed | Automatically encrypted by Elastic | +| | Bring your own encryption key | N/A | X | +| | Keystore security | Managed | Automatically protected by Elastic | +| | Saved object encryption | Managed | Automatically encrypted by Elastic | +| **User Session** | Kibana Sessions | Managed | Automatically configured by Elastic | + +::: + +:::{tab-item} ECE/ECK +:sync: ece-eck + +| **Security Category** | **Security Feature** | **Status** | **Description** | +|------------------|------------|--------------|-------------| +| **Communication** | TLS (HTTP Layer) | Configurable | Configure custom certificates | +| | TLS (Transport Layer) | Managed | Automatically configured by Elastic | +| **Network** | IP traffic filtering | Configurable | Configure IP-based access restrictions | +| | Private link | N/A | X | +| | Static IPs | N/A | 
X | +| **Data** | Encryption at rest | Self-managed | Implement at infrastructure level | +| | Bring your own encryption key | N/A | X | +| | Keystore security | Configurable | Configure secure settings storage | +| | Saved object encryption | Configurable | Enable encryption for saved objects | +| **User Session** | Kibana Sessions | Configurable | Customize session parameters | + +::: + +:::{tab-item} Self-managed +:sync: self-managed + +| **Security Category** | **Security Feature** | **Status** | **Description** | +|------------------|------------|--------------|-------------| +| **Communication** | TLS (HTTP Layer) | Self-managed | Implement and maintain certificates | +| | TLS (Transport Layer) | Self-managed | Implement and maintain certificates | +| **Network** | IP traffic filtering | Configurable | Configure IP-based access restrictions | +| | Private link | N/A | X | +| | Static IPs | N/A | X | +| **Data** | Encryption at rest | Self-managed | Implement at infrastructure level | +| | Bring your own encryption key | N/A | X | +| | Keystore security | Configurable | Configure secure settings storage | +| | Saved object encryption | Configurable | Enable encryption for saved objects | +| **User Session** | Kibana Sessions | Configurable | Customize session parameters | + +::: + +:::: + +## Next steps + +Refer to the following sections for detailed instructions about securing your hosting environment: + +* [Elastic Cloud Hosted and Serverless security setup](/deploy-manage/security/secure-your-elastic-cloud-organization.md) +* [Elastic Cloud Enterprise (ECE) security setup](/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md) +* [Elastic Cloud on Kubernetes (ECK) security setup](/deploy-manage/security/secure-your-eck-installation.md) +* [Self-managed cluster security setup](/deploy-manage/security/manually-configure-security-in-self-managed-cluster.md) \ No newline at end of file diff --git a/deploy-manage/security/data-security.md b/deploy-manage/security/data-security.md new file mode 100644 index 0000000000..1226f922c7 --- /dev/null +++ b/deploy-manage/security/data-security.md @@ -0,0 +1,5 @@ +# Secure your data + +:::{warning} +**This page is a work in progress.** +::: diff --git a/deploy-manage/security/fips-140-2.md b/deploy-manage/security/fips-140-2.md index def22c70c7..f2b71557a7 100644 --- a/deploy-manage/security/fips-140-2.md +++ b/deploy-manage/security/fips-140-2.md @@ -4,7 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/kibana/current/xpack-security-fips-140-2.html --- -# FIPS 140-2 +# FIPS 140-2 compliance % What needs to be done: Refine diff --git a/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md b/deploy-manage/security/install-stack-demo-secure.md similarity index 91% rename from raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md rename to deploy-manage/security/install-stack-demo-secure.md index 532be43c13..f75b41bf8f 100644 --- a/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md +++ b/deploy-manage/security/install-stack-demo-secure.md @@ -1,29 +1,35 @@ -# Tutorial 2: Securing a self-managed {{stack}} [install-stack-demo-secure] +--- +applies_to: + deployment: + self: ga +--- -This tutorial is a follow-on to [Tutorial 1: Installing a self-managed {{stack}}](../../../deploy-manage/deploy/self-managed/installing-elasticsearch.md). 
The first tutorial describes how to configure a multi-node {{es}} cluster and then set up {{kib}}, followed by {{fleet-server}} and {{agent}}. In a production environment, it’s recommended after completing the {{kib}} setup to proceed directly to this tutorial to configure your SSL certificates. These steps guide you through that process, and then describe how to configure {{fleet-server}} and {{agent}} with the certificates in place. +# Tutorial: Securing a self-managed {{stack}} [install-stack-demo-secure] + +This tutorial is a follow-on to [installing a self-managed {{stack}}](/deploy-manage/deploy/self-managed.md) with a multi-node {{es}} cluster, {{kib}}, {{fleet-server}} and {{agent}}. In a production environment, it’s recommended after completing the {{kib}} setup to proceed directly to this tutorial to configure your SSL certificates. These steps guide you through that process, and then describe how to configure {{fleet-server}} and {{agent}} with the certificates in place. **Securing the {{stack}}** -Beginning with Elastic 8.0, security is enabled in the {{stack}} by default, meaning that traffic between {{es}} nodes and between {{kib}} and {{es}} is SSL-encrypted. While this is suitable for testing non-production viability of the Elastic platform, most production networks have requirements for the use of trusted CA-signed certificates. These steps demonstrate how to update the out-of-the-box self-signed certificates with your own trusted CA-signed certificates. +Since {{stack}} 8.0, security is enabled by default, meaning that traffic between {{es}} nodes and between {{kib}} and {{es}} is SSL-encrypted. While this is suitable for testing non-production viability of the Elastic platform, most production networks have requirements for the use of trusted CA-signed certificates. These steps demonstrate how to update the out-of-the-box self-signed certificates with your own trusted CA-signed certificates. For traffic to be encrypted between {{es}} cluster nodes and between {{kib}} and {{es}}, SSL certificates must be created for the transport ({{es}} inter-node communication) and HTTP (for the {{es}} REST API) layers. Similarly, when setting up {{fleet-server}} you’ll generate and configure a new certificate bundle, and then {{elastic-agent}} uses the generated certificates to communicate with both {{fleet-server}} and {{es}}. 
The process to set things up is as follows: -* [Prerequisites and assumptions](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-prereqs) -* [Step 1: Generate a new self-signed CA certificate](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-ca) -* [Step 2: Generate a new certificate for the transport layer](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-transport) -* [Step 3: Generate new certificate(s) for the HTTP layer](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-http) -* [Step 4: Configure security on additional {{es}} nodes](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-second-node) -* [Step 5: Generate server-side and client-side certificates for {{kib}}](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-kib-es) -* [Step 6: Install {{fleet}} with SSL certificates configured](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-fleet) -* [Step 7: Install {{agent}}](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-agent) -* [Step 8: View your system data](../../../deploy-manage/security/secure-your-cluster-deployment.md#install-stack-demo-secure-view-data) +* [Prerequisites and assumptions](secure-your-cluster-deployment.md#install-stack-demo-secure-prereqs) +* [Step 1: Generate a new self-signed CA certificate](secure-your-cluster-deployment.md#install-stack-demo-secure-ca) +* [Step 2: Generate a new certificate for the transport layer](secure-your-cluster-deployment.md#install-stack-demo-secure-transport) +* [Step 3: Generate new certificate(s) for the HTTP layer](secure-your-cluster-deployment.md#install-stack-demo-secure-http) +* [Step 4: Configure security on additional {{es}} nodes](secure-your-cluster-deployment.md#install-stack-demo-secure-second-node) +* [Step 5: Generate server-side and client-side certificates for {{kib}}](secure-your-cluster-deployment.md#install-stack-demo-secure-kib-es) +* [Step 6: Install {{fleet}} with SSL certificates configured](secure-your-cluster-deployment.md#install-stack-demo-secure-fleet) +* [Step 7: Install {{agent}}](secure-your-cluster-deployment.md#install-stack-demo-secure-agent) +* [Step 8: View your system data](secure-your-cluster-deployment.md#install-stack-demo-secure-view-data) It should take between one and two hours to complete these steps. ## Prerequisites and assumptions [install-stack-demo-secure-prereqs] -Before starting, you’ll need to have set up an on-premises {{es}} cluster with {{kib}}, following the steps in [Tutorial 1: Installing a self-managed {{stack}}](../../../deploy-manage/deploy/self-managed/installing-elasticsearch.md). +Before starting, you’ll need to have set up an on-premises {{es}} cluster with {{kib}}, following the steps for [installing a self-managed {{stack}}](/deploy-manage/deploy/self-managed.md). The examples in this guide use RPM packages to install the {{stack}} components on hosts running Red Hat Enterprise Linux 8. The steps for other install methods and operating systems are similar, and can be found in the documentation linked from each section. @@ -40,7 +46,7 @@ In a production environment you would typically use the CA certificate from your sudo systemctl stop elasticsearch.service ``` -2. 
Generate a CA certificate using the provided certificate utility, `elasticsearch-certutil`. Note that the location of the utility depends on the installation method you used to install {{es}}. Refer to [elasticsearch-certutil](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) for the command details and to [Update security certificates with a different CA](../../../deploy-manage/security/different-ca.md) for details about the procedure as a whole. +2. Generate a CA certificate using the provided certificate utility, `elasticsearch-certutil`. Note that the location of the utility depends on the installation method you used to install {{es}}. Refer to [elasticsearch-certutil](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) for the command details and to [Update security certificates with a different CA](different-ca.md) for details about the procedure as a whole. Run the following command. When prompted, specify a unique name for the output file, such as `elastic-stack-ca.zip`: @@ -77,7 +83,7 @@ In a production environment you would typically use the CA certificate from your 6. From the `/etc/elasticsearch/certs/ca/` directory, import the newly created CA certificate into the {{es}} truststore. This step ensures that your cluster trusts the new CA certificate. ::::{note} - On a new installation a new keystore and truststore are created automatically. If you’re running these steps on an existing {{es}} installation and you know the password to the keystore and the truststore, follow the instructions in [Update security certificates with a different CA](../../../deploy-manage/security/different-ca.md) to import the CA certificate. + On a new installation a new keystore and truststore are created automatically. If you’re running these steps on an existing {{es}} installation and you know the password to the keystore and the truststore, follow the instructions in [Update security certificates with a different CA](different-ca.md) to import the CA certificate. :::: @@ -131,7 +137,7 @@ This guide assumes the use of self-signed certificates, but the process to impor ::::{important} If you’re running these steps on a production cluster that already contains data: - * In a cluster with multiple {{es}} nodes, before proceeding you first need to perform a [Rolling restart](../../../deploy-manage/maintenance/start-stop-services/full-cluster-restart-rolling-restart-procedures.md#restart-cluster-rolling) beginning with the node where you’re updating the keystore. Stop at the `Perform any needed changes` step, and then proceed to the next step in this guide. + * In a cluster with multiple {{es}} nodes, before proceeding you first need to perform a [Rolling restart](../maintenance/start-stop-services/full-cluster-restart-rolling-restart-procedures.md#restart-cluster-rolling) beginning with the node where you’re updating the keystore. Stop at the `Perform any needed changes` step, and then proceed to the next step in this guide. * In a single node cluster, always stop {{es}} before proceeding. :::: @@ -227,7 +233,7 @@ Now that communication between {{es}} nodes (the transport layer) has been secur ``` 2. When prompted, confirm that the settings are correct. - 3. Add the network IP address that clients can use to connect to the first {{es}} node. This is the same value that’s described in Step 2 of [Tutorial 1: Installing a self-managed {{stack}}](../../../deploy-manage/deploy/self-managed/installing-elasticsearch.md), for example `10.128.0.84`: + 3. 
Add the network IP address that clients can use to connect to the first {{es}} node. For example `10.128.0.84`: ```shell 10.128.0.84 @@ -506,7 +512,7 @@ Now that the security is configured for the first {{es}} node, some steps need t ## Step 5: Generate server-side and client-side certificates for {{kib}} [install-stack-demo-secure-kib-es] -Now that the transport and HTTP layers are configured with encryption using the new certificates, there are two more tasks that must be accomplished for end-to-end connectivity to {{es}}: Set up certificates for encryption between {{kib}} and {{es}}, and between the client browser and {{kib}}. For additional details about any of these steps, refer to [Mutual TLS authentication between {{kib}} and {{es}}](../../../deploy-manage/security/secure-cluster-communications.md) and [Encrypt traffic between your browser and {{kib}}](../../../deploy-manage/security/set-up-basic-security-plus-https.md#encrypt-kibana-browser). +Now that the transport and HTTP layers are configured with encryption using the new certificates, there are two more tasks that must be accomplished for end-to-end connectivity to {{es}}: Set up certificates for encryption between {{kib}} and {{es}}, and between the client browser and {{kib}}. For additional details about any of these steps, refer to [Mutual TLS authentication between {{kib}} and {{es}}](secure-cluster-communications.md) and [Encrypt traffic between your browser and {{kib}}](set-up-basic-security-plus-https.md#encrypt-kibana-browser). 1. In Step 3, when you generated a new certificate for the HTTP layer, the process created an archive `elasticsearch-ssl-http.zip`. @@ -617,7 +623,7 @@ Now that the transport and HTTP layers are configured with encryption using the Open a web browser to the external IP address of the Kibana host machine: `https://:5601`. Note that the URL should use the `https` and not the `http` protocol. -15. Log in using the `elastic` user and password that you configured in Step 1 of [Tutorial 1: Installing a self-managed {{stack}}](../../../deploy-manage/deploy/self-managed/installing-elasticsearch.md). +15. Log in using the `elastic` user and password that you configured when [installing your self-managed {{stack}}](/deploy-manage/deploy/self-managed.md). Congratulations! You’ve successfully updated the SSL certificates between {{es}} and {{kib}}. @@ -874,7 +880,7 @@ View your system metrics data: 2. In the query field, search for `Metrics System`. 3. Select the `[Metrics System] Host overview` link. The {{kib}} Dashboard opens with visualizations of host metrics including CPU usage, memory usage, running processes, and more. - :::{image} ../../../images/elastic-stack-install-stack-metrics-dashboard.png + :::{image} /images/elastic-stack-install-stack-metrics-dashboard.png :alt: The System metrics host overview showing CPU usage, memory usage, and other visualizations ::: @@ -884,6 +890,6 @@ Congratulations! You’ve successfully configured security for {{es}}, {{kib}}, ## What’s next? [_whats_next] -* Do you have data ready to ingest into your newly set up {{stack}}? Learn how to [add data to Elasticsearch](../../../manage-data/ingest.md). +* Do you have data ready to ingest into your newly set up {{stack}}? Learn how to [add data to Elasticsearch](../../manage-data/ingest.md). * Use [Elastic {{observability}}](https://www.elastic.co/observability) to unify your logs, infrastructure metrics, uptime, and application performance data. * Want to protect your endpoints from security threats? 
Try [{{elastic-sec}}](https://www.elastic.co/security). Adding endpoint protection is just another integration that you add to the agent policy! diff --git a/deploy-manage/monitor/logging-configuration/auditing-search-queries.md b/deploy-manage/security/logging-configuration/auditing-search-queries.md similarity index 100% rename from deploy-manage/monitor/logging-configuration/auditing-search-queries.md rename to deploy-manage/security/logging-configuration/auditing-search-queries.md diff --git a/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md b/deploy-manage/security/logging-configuration/configuring-audit-logs.md similarity index 96% rename from deploy-manage/monitor/logging-configuration/configuring-audit-logs.md rename to deploy-manage/security/logging-configuration/configuring-audit-logs.md index d4e13dfe5b..4286c13bb8 100644 --- a/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md +++ b/deploy-manage/security/logging-configuration/configuring-audit-logs.md @@ -26,7 +26,7 @@ When auditing security events, a single client request might generate multiple a For a complete description of event details and format, refer to the following resources: * [{{es}} audit events details and schema](elasticsearch://reference/elasticsearch/elasticsearch-audit-events.md) - * [{{es}} log entry output format](/deploy-manage/monitor/logging-configuration/logfile-audit-output.md#audit-log-entry-format) + * [{{es}} log entry output format](./logfile-audit-output.md#audit-log-entry-format) ### Kibana auditing configuration diff --git a/deploy-manage/monitor/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md b/deploy-manage/security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md similarity index 100% rename from deploy-manage/monitor/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md rename to deploy-manage/security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md diff --git a/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md b/deploy-manage/security/logging-configuration/enabling-audit-logs.md similarity index 98% rename from deploy-manage/monitor/logging-configuration/enabling-audit-logs.md rename to deploy-manage/security/logging-configuration/enabling-audit-logs.md index 31e889c6ea..4eeb50af32 100644 --- a/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md +++ b/deploy-manage/security/logging-configuration/enabling-audit-logs.md @@ -25,10 +25,10 @@ You can log security-related events such as authentication failures and refused This section describes how to enable and configure audit logging in both {{es}} and {{kib}} for all supported deployment types, including self-managed clusters, {{ech}}, {{ece}} (ECE), and {{eck}} (ECK). ::::{important} -In orchestrated deployments, audit logs must be shipped to a monitoring deployment; otherwise, they remain at container level and won't be accessible to users. For details on configuring log forwarding in orchestrated environments, refer to [logging configuration](../logging-configuration.md). +In orchestrated deployments, audit logs must be shipped to a monitoring deployment; otherwise, they remain at container level and won't be accessible to users. For details on configuring log forwarding in orchestrated environments, refer to [logging configuration](/deploy-manage/monitor/logging-configuration.md). 
:::: -When audit logging is enabled, security events are persisted to a dedicated `<clustername>_audit.json` file on the host’s file system, on every cluster node. For more information, refer to [{{es}} logfile audit output](logfile-audit-output.md). +When audit logging is enabled, security events are persisted to a dedicated `<clustername>_audit.json` file on the host’s file system, on every cluster node. For more information, refer to [{{es}} logfile audit output](./logfile-audit-output.md). ## Enable audit logging [enable-audit-logging-procedure] diff --git a/deploy-manage/monitor/logging-configuration/logfile-audit-events-ignore-policies.md b/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md similarity index 100% rename from deploy-manage/monitor/logging-configuration/logfile-audit-events-ignore-policies.md rename to deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md diff --git a/deploy-manage/monitor/logging-configuration/logfile-audit-output.md b/deploy-manage/security/logging-configuration/logfile-audit-output.md similarity index 95% rename from deploy-manage/monitor/logging-configuration/logfile-audit-output.md rename to deploy-manage/security/logging-configuration/logfile-audit-output.md index 87e56c79b2..f7c8b6346b 100644 --- a/deploy-manage/monitor/logging-configuration/logfile-audit-output.md +++ b/deploy-manage/security/logging-configuration/logfile-audit-output.md @@ -31,6 +31,6 @@ The audit events are formatted as JSON documents, and each event is printed on a There are however a few attributes that are exceptions to the above format. The `put`, `delete`, `change`, `create` and `invalidate` attributes, which are only present for events with the `event.type: "security_config_change"` attribute, contain the **nested JSON** representation of the security change taking effect. The contents of the security config change are hence not displayed as top-level dot-named fields in the audit event document. That’s because the fields are specific to the particular kind of security change and do not show up in any other audit events. The benefits of a columnar format are therefore much more limited; the space-saving benefits of the nested structure is the favoured trade-off in this case. -When the `request.body` attribute is present (see [Auditing search queries](auditing-search-queries.md)), it contains a string value containing the full HTTP request body, escaped as per the JSON RFC 4627. +When the `request.body` attribute is present (see [Auditing search queries](./auditing-search-queries.md)), it contains a string value containing the full HTTP request body, escaped as per the JSON RFC 4627. Refer to [audit event types](elasticsearch://reference/elasticsearch/elasticsearch-audit-events.md) for a complete list of fields, as well as examples, for each entry type.
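As a concrete counterpart to the enablement steps above, here is a minimal sketch for a self-managed node. The include list is illustrative only; the full set of event types is in the audit events reference linked above:

```yaml
# elasticsearch.yml: audit settings are static, so restart the node after editing.
xpack.security.audit.enabled: true

# Optional: limit which events are written to <clustername>_audit.json.
xpack.security.audit.logfile.events.include:
  - authentication_failed
  - access_denied
  - security_config_change
```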
diff --git a/deploy-manage/monitor/logging-configuration/security-event-audit-logging.md b/deploy-manage/security/logging-configuration/security-event-audit-logging.md similarity index 100% rename from deploy-manage/monitor/logging-configuration/security-event-audit-logging.md rename to deploy-manage/security/logging-configuration/security-event-audit-logging.md diff --git a/deploy-manage/security/manually-configure-security-in-self-managed-cluster.md b/deploy-manage/security/manually-configure-security-in-self-managed-cluster.md index 6a02f33975..43cc3ece51 100644 --- a/deploy-manage/security/manually-configure-security-in-self-managed-cluster.md +++ b/deploy-manage/security/manually-configure-security-in-self-managed-cluster.md @@ -1,11 +1,19 @@ --- +navigation_title: Self-managed +applies_to: + deployment: + self: ga mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/manually-configure-security.html --- # Manually configure security in a self-managed cluster [manually-configure-security] -Security needs vary depending on whether you’re developing locally on your laptop or securing all communications in a production environment. Regardless of where you’re deploying the {{stack}} ("ELK"), running a secure cluster is incredibly important to protect your data. That’s why security is [enabled and configured by default](../deploy/self-managed/installing-elasticsearch.md) in {{es}} 8.0 and later. +:::{note} +This page describes important aspects to consider and common end-to-end scenarios for securing your self-managed {{stack}}. For a more granular view of the available security options for your clusters and nodes, refer to [](secure-your-cluster-deployment.md). +::: + +Security needs vary depending on whether you’re developing locally on your laptop or securing all communications in a production environment. Regardless of where you’re deploying the {{stack}} ("ELK"), running a secure cluster is incredibly important to protect your data. That’s why security is [enabled and configured by default](../deploy/self-managed/installing-elasticsearch.md) since {{es}} 8.0. If you want to enable security on an existing, unsecured cluster, use your own Certificate Authority (CA), or would rather manually configure security, the following scenarios provide steps for configuring TLS on the transport layer, plus securing HTTPS traffic if you want it. @@ -15,8 +23,13 @@ If you configure security manually *before* starting your {{es}} nodes, the auto :alt: Elastic Security layers ::: +## Common security scenarios + +Even with security enabled, never expose {{es}} to public internet traffic. Using an application to sanitize requests to {{es}} still poses risks, such as a malicious user writing [`_search`](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-search) requests that could overwhelm an {{es}} cluster and bring it down. Keep {{es}} as isolated as possible, preferably behind a firewall and a VPN. Any internet-facing applications should run pre-canned aggregations, or not run aggregations at all. + +While you absolutely shouldn’t expose {{es}} directly to the internet, you also shouldn’t expose {{es}} directly to users. Instead, use an intermediary application to make requests on behalf of users. This implementation allows you to track user behaviors, such as can submit requests, and to which specific nodes in the cluster. 
For example, you can implement an application that accepts a search term from a user and funnels it through a [`simple_query_string`](elasticsearch://reference/query-languages/query-dsl-simple-query-string-query.md) query. -## Minimal security ({{es}} Development) [security-minimal-overview] +### Minimal security ({{es}} Development) [security-minimal-overview] If you’ve been working with {{es}} and want to enable security on your existing, unsecured cluster, start here. You’ll set passwords for the built-in users to prevent unauthorized access to your local cluster, and also configure password authentication for {{kib}}. @@ -28,7 +41,7 @@ The minimal security scenario is not sufficient for [production mode](../deploy/ [Set up minimal security](set-up-minimal-security.md) -## Basic security ({{es}} + {{kib}}) [security-basic-overview] +### Basic security ({{es}} + {{kib}}) [security-basic-overview] This scenario configures TLS for communication between nodes. This security layer requires that nodes verify security certificates, which prevents unauthorized nodes from joining your {{es}} cluster. @@ -37,7 +50,7 @@ Your external HTTP traffic between {{es}} and {{kib}} won’t be encrypted, but [Set up basic security](secure-cluster-communications.md) -## Basic security plus secured HTTPS traffic ({{stack}}) [security-basic-https-overview] +### Basic security plus secured HTTPS traffic ({{stack}}) [security-basic-https-overview] This scenario builds on the one for basic security and secures all HTTP traffic with TLS. In addition to configuring TLS on the transport interface of your {{es}} cluster, you configure TLS on the HTTP interface for both {{es}} and {{kib}}. diff --git a/deploy-manage/security/secure-clients-integrations.md b/deploy-manage/security/secure-clients-integrations.md index e63862420f..59bf4b700f 100644 --- a/deploy-manage/security/secure-clients-integrations.md +++ b/deploy-manage/security/secure-clients-integrations.md @@ -5,6 +5,11 @@ mapped_pages: # Secure clients and integrations [security-clients-integrations] +:::{warning} +**This page is a work in progress.** +::: + + You will need to update the configuration for several [clients](httprest-clients-security.md) to work with a secured {{es}} cluster. The {{es}} {{security-features}} enable you to secure your {{es}} cluster. But {{es}} itself is only one product within the {{stack}}. It is often the case that other products in the {{stack}} are connected to the cluster and therefore need to be secured as well, or at least communicate with the cluster in a secured way: diff --git a/deploy-manage/security/secure-endpoints.md b/deploy-manage/security/secure-endpoints.md deleted file mode 100644 index 6e89c619de..0000000000 --- a/deploy-manage/security/secure-endpoints.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/elasticsearch/reference/current/es-security-principles.html ---- - -# Secure endpoints [es-security-principles] - -Protecting your {{es}} cluster and the data it contains is of utmost importance. Implementing a defense in depth strategy provides multiple layers of security to help safeguard your system. The following principles provide a foundation for running {{es}} in a secure manner that helps to mitigate attacks on your system at multiple levels. - - -## Run {{es}} with security enabled [security-run-with-security] - -Never run an {{es}} cluster without security enabled. This principle cannot be overstated. 
Running {{es}} without security leaves your cluster exposed to anyone who can send network traffic to {{es}}, permitting these individuals to download, modify, or delete any data in your cluster. [Start the {{stack}} with security enabled](../deploy/self-managed/installing-elasticsearch.md) or [manually configure security](manually-configure-security-in-self-managed-cluster.md) to prevent unauthorized access to your clusters and ensure that internode communication is secure. - - -## Run {{es}} with a dedicated non-root user [security-not-root-user] - -Never try to run {{es}} as the `root` user, which would invalidate any defense strategy and permit a malicious user to do **anything** on your server. You must create a dedicated, unprivileged user to run {{es}}. By default, the `rpm`, `deb`, `docker`, and Windows packages of {{es}} contain an `elasticsearch` user with this scope. - - -## Protect {{es}} from public internet traffic [security-protect-cluster-traffic] - -Even with security enabled, never expose {{es}} to public internet traffic. Using an application to sanitize requests to {{es}} still poses risks, such as a malicious user writing [`_search`](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-search) requests that could overwhelm an {{es}} cluster and bring it down. Keep {{es}} as isolated as possible, preferably behind a firewall and a VPN. Any internet-facing applications should run pre-canned aggregations, or not run aggregations at all. - -While you absolutely shouldn’t expose {{es}} directly to the internet, you also shouldn’t expose {{es}} directly to users. Instead, use an intermediary application to make requests on behalf of users. This implementation allows you to track user behaviors, such as can submit requests, and to which specific nodes in the cluster. For example, you can implement an application that accepts a search term from a user and funnels it through a [`simple_query_string`](elasticsearch://reference/query-languages/query-dsl-simple-query-string-query.md) query. - - -## Implement role based access control [security-create-appropriate-users] - -[Define roles](../users-roles/cluster-or-deployment-auth/defining-roles.md) for your users and [assign appropriate privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to ensure that users have access only to the resources that they need. This process determines whether the user behind an incoming request is allowed to run that request. - diff --git a/deploy-manage/security/secure-hosting-environment.md b/deploy-manage/security/secure-hosting-environment.md new file mode 100644 index 0000000000..2800715d9e --- /dev/null +++ b/deploy-manage/security/secure-hosting-environment.md @@ -0,0 +1,21 @@ +--- +applies_to: + deployment: all + serverless: ga +--- + +# Secure your orchestrator + +Whether you're running Elastic on {{ecloud}} or using an {{ece}} or {{eck}} orchestrator, it is critical that you secure the layer responsible for deploying and hosting your Elastic products. + +This section covers security measures specific to: + +- [{{ecloud}}](secure-your-elastic-cloud-organization.md) +- [{{ece}}](secure-your-elastic-cloud-enterprise-installation.md) +- [{{eck}}](secure-your-eck-installation.md) + +:::{note} +There is no orchestration layer for self-managed installations, but you can find a summary of your security options in [](manually-configure-security-in-self-managed-cluster.md). 
+::: + +Learn how to manage security certificates, configure TLS versions, and implement additional security controls at the environment level. diff --git a/deploy-manage/security/secure-your-cluster-deployment.md b/deploy-manage/security/secure-your-cluster-deployment.md index 6e487f539c..c8aa6d60c0 100644 --- a/deploy-manage/security/secure-your-cluster-deployment.md +++ b/deploy-manage/security/secure-your-cluster-deployment.md @@ -45,4 +45,41 @@ $$$security-configure-settings$$$ * [/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md) * [/raw-migrated-files/kibana/kibana/using-kibana-with-security.md](/raw-migrated-files/kibana/kibana/using-kibana-with-security.md) -* [/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md](/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md) \ No newline at end of file +* [/raw-migrated-files/stack-docs/elastic-stack/install-stack-demo-secure.md](/deploy-manage/security/install-stack-demo-secure.md) + +Protecting your {{es}} cluster and the data it contains is of utmost importance. Implementing a defense in depth strategy provides multiple layers of security to help safeguard your system. + +:::{important} +Never run an {{es}} cluster without security enabled. This principle cannot be overstated. Running {{es}} without security leaves your cluster exposed to anyone who can send network traffic to {{es}}, permitting these individuals to download, modify, or delete any data in your cluster. +::: + +To secure your clusters and deployments, consider the following: + +## Network access + +Control which systems can access your Elastic deployments and clusters through traffic filtering and network controls: + +- **IP traffic filtering**: Restrict access based on IP addresses or CIDR ranges. +- **Private link filters**: Secure connectivity through AWS PrivateLink, Azure Private Link, or GCP Private Service Connect. +- **Static IPs**: Use static IP addresses for predictable firewall rules. + + +## Cluster communication + +- **HTTP and HTTPs** +- **TLS certificates and keys** + + +## Data and objects security + +- **Bring your own encryption key** +- **Elasticsearch keystore** +- **Kibana saved objects** + +## User roles and sessions + +[Define roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) for your users and [assign appropriate privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to ensure that users have access only to the resources that they need. This process determines whether the user behind an incoming request is allowed to run that request. + +:::{important} +Never try to run {{es}} as the `root` user, which would invalidate any defense strategy and permit a malicious user to do **anything** on your server. You must create a dedicated, unprivileged user to run {{es}}. By default, the `rpm`, `deb`, `docker`, and Windows packages of {{es}} contain an `elasticsearch` user with this scope. 
+::: diff --git a/deploy-manage/security/secure-your-eck-installation.md b/deploy-manage/security/secure-your-eck-installation.md new file mode 100644 index 0000000000..27d889fc93 --- /dev/null +++ b/deploy-manage/security/secure-your-eck-installation.md @@ -0,0 +1,23 @@ +--- +navigation_title: "{{eck}}" +applies_to: + deployment: + eck: ga +--- + +# Secure your {{eck}} orchestrator [eck-securing-considerations] + +This section covers security settings for your {{eck}} orchestrator. + +**Orchestrator-level security** + +- [Restrict cross-namespace resources associations](/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md) +- [Isolate pods with network policies](/deploy-manage/deploy/cloud-on-k8s/network-policies.md) +- [Secure the metrics endpoint](/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md) + +Also refer to [](/deploy-manage/deploy/cloud-on-k8s/configure.md) for more information about configuring {{eck}}. + +**Additional deployment-level security settings** + +Additional security settings are available for you to configure individually for each deployment orchestrated using {{eck}}. Refer to [](secure-your-cluster-deployment.md) for more information. + diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md index 227b8c4f44..ba9337fd2d 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md @@ -1,25 +1,38 @@ --- +navigation_title: "{{ece}}" +applies_to: + deployment: + ece: ga mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-securing-considerations.html --- -# Secure your Elastic Cloud Enterprise installation [ece-securing-considerations] +# Secure your Elastic Cloud Enterprise orchestrator [ece-securing-considerations] -Elastic Cloud Enterprise can run on shared and less secure environments, but you should be aware of some limitations when deploying our product. +This section covers security settings for your {{ece}} orchestrator. +**Orchestrator-level security** -### Users with admin privileges [ece_users_with_admin_privileges] +- [**Enforcing SELinux with RHEL/Podman installations**](secure-your-elastic-cloud-enterprise-installation/migrate-ece-on-podman-hosts-to-selinux-enforce.md): SELinux (Security-Enhanced Linux) is a security module that enforces mandatory access controls, helping to protect systems from unauthorized access and privilege escalation. +- [**TLS certificates**](secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md): Apply security controls to network communications. With {{ece}}, you manage proxy certificates for the HTTP layer. The transport layer is managed by ECE. +- [**Platform role-based access control**](/deploy-manage/users-roles/cloud-enterprise-orchestrator.md): Define the roles of users who have access to your organization and its resources. Note that you can also [manage non-cloud users and roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md). +- [**Authentication providers**](/deploy-manage/users-roles/cloud-enterprise-orchestrator.md): Integrate with external authentication providers, including Active Directory, LDAP, and SAML. -In Elastic Cloud Enterprise 3.8.1, every user who can manage your installation through the Cloud UI or the RESTful API is a user with admin privileges. 
This includes both the `admin` user and the `readonly` user that get created when you install ECE on your first host. Initially, only the `admin` user has the required privileges to make changes to resources on ECE. -[Role-based access control](../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for Elastic Cloud Enterprise allows you to connect multiple users or user groups to the platform. +**Additional deployment-level security settings** -All Elasticsearch clusters come with X-Pack security features and support role-based access control. To learn more, check [Secure Your Clusters](../users-roles/cluster-or-deployment-auth.md). +Additional security settings are available for you to configure individually for each deployment orchestrated using {{ece}}. Refer to [](secure-your-cluster-deployment.md) for more information. -### Clusters share the same resources [ece_clusters_share_the_same_resources] +## Notes about {{ece}} security -The Elasticsearch clusters you create on Elastic Cloud Enterprise share the same resources. It is currently not possible to run a specific cluster on entirely dedicated hardware not shared by other clusters. +### Users with admin privileges [ece_users_with_admin_privileges] + +In Elastic Cloud Enterprise, every user who can manage your installation through the Cloud UI or the RESTful API is a user with admin privileges. This includes both the `admin` user and the `readonly` user that get created when you install ECE on your first host. Initially, only the `admin` user has the required privileges to make changes to resources on ECE. + +[Role-based access control](../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for Elastic Cloud Enterprise allows you to connect multiple users or user groups to the platform. + +All Elasticsearch clusters come with X-Pack security features and support role-based access control. To learn more, check [Secure Your Clusters](../users-roles/cluster-or-deployment-auth.md). ### Encryption [ece_encryption] @@ -28,14 +41,12 @@ Elastic Cloud Enterprise does not implement encryption at rest out of the box. T Configuring dm-crypt or similar technologies is outside the scope of the Elastic Cloud Enterprise documentation, and issues related to disk encryption are outside the scope of support. -Elastic Cloud Enterprise provides full encryption of all network traffic by default when using Elasticsearch 6.0 or higher. +Elastic Cloud Enterprise provides full encryption of all network traffic by default. -TLS is supported when interacting with the RESTful API of Elastic Cloud Enterprise and for the proxy layer that routes user requests to clusters of all versions. Internally, our administrative services also ensure transport-level encryption. +TLS is supported when interacting with the [RESTful API of Elastic Cloud Enterprise](https://www.elastic.co/docs/api/doc/cloud-enterprise/) and for the proxy layer that routes user requests to clusters of all versions. Internally, our administrative services also ensure transport-level encryption. -In Elasticsearch versions lower than 6.0, traffic between nodes in a cluster and between proxies and the clusters is *not* encrypted. - -## Attack vectors versus separation of roles [ece-securing-vectors] +### Attack vectors versus separation of roles [ece-securing-vectors] As covered in [Separation of Roles](../deploy/cloud-enterprise/ece-roles.md), it is important to not mix certain roles in a production environment. 
@@ -45,3 +56,10 @@ Elastic Cloud Enterprise is designed to ensure that an allocator has access only Security comes in layers, and running separate services on separate infrastructure is the last layer of defense, on top of other security features like the JVM security manager, system call filtering, and running nodes in isolated containers with no shared secrets. + +### Hardware isolation +$$$ece_clusters_share_the_same_resources$$$ + +The Elasticsearch clusters you create on Elastic Cloud Enterprise share the same resources. It is currently not possible to run a specific cluster on entirely dedicated hardware not shared by other clusters. + + diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md index 04565dc9cc..e9f7ccc8dd 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ece: ga mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-allow-x509-sha1.html --- diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md index 92af919aca..c0b44658d8 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ece: ga mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-tls-version.html --- diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md index d2cf158ae3..a9c90eede8 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ece: ga mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-certificates.html --- diff --git a/deploy-manage/deploy/cloud-enterprise/migrate-ece-on-podman-hosts-to-selinux-enforce.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/migrate-ece-on-podman-hosts-to-selinux-enforce.md similarity index 100% rename from deploy-manage/deploy/cloud-enterprise/migrate-ece-on-podman-hosts-to-selinux-enforce.md rename to deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/migrate-ece-on-podman-hosts-to-selinux-enforce.md diff --git a/deploy-manage/security/secure-your-elastic-cloud-organization.md b/deploy-manage/security/secure-your-elastic-cloud-organization.md new file mode 100644 index 0000000000..50f8e53353 --- /dev/null +++ b/deploy-manage/security/secure-your-elastic-cloud-organization.md @@ -0,0 +1,37 @@ +--- +navigation_title: "{{ecloud}}" +applies_to: + deployment: + ess: ga + serverless: ga +--- + +# Secure your Elastic Cloud organization 
[ec-securing-considerations] + +This section covers security settings for your {{ecloud}} organization, the platform for managing {{ech}} deployments and serverless projects. + +**Managed by Elastic** + +As a managed service, Elastic automatically handles a [number of security features](https://www.elastic.co/cloud/security#details) with no configuration required: + +- **TLS encrypted communication** is provided in the default configuration. Elasticsearch nodes communicate using TLS. +- **Encryption at rest**. By default, all of your {{ecloud}} resources are encrypted at rest. Note that you can choose to encrypt your {{ech}} deployments [using your own encryption key](/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md). +- **Cluster isolation**. Elasticsearch nodes run in isolated containers, configured according to the principle of least privilege, and with restrictions on system calls and allowed root operations. + +**Additional organization-level security settings** + +To reinforce the security of your organization, consider implementing the following measures: + +- [**Organization-level SSO**](/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md). Note that for {{ech}} deployments, you can also configure SSO at the [deployment level](/deploy-manage/users-roles/cluster-or-deployment-auth.md). +- [**Cloud role-based access control**](/deploy-manage/users-roles/cloud-organization/manage-users.md): Define the roles of users who have access to your organization and its resources. Note that for {{ech}} deployments, you can also [manage non-cloud users and roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md). +- [**Cloud API keys**](/deploy-manage/api-keys/elastic-cloud-api-keys.md): Manage API keys used for programmatic access to [{{ecloud}}](https://www.elastic.co/docs/api/doc/cloud/) and [{{ecloud}} serverless](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless/) APIs. + + + +**Additional deployment-level security settings** + +While serverless projects are fully managed and secured by Elastic, additional security settings are available for you to configure individually for your {{ech}} deployments. Refer to [](secure-your-cluster-deployment.md) for more information. + + +In addition, multifactor authentication is mandatory for all {{ecloud}} users belonging to your organization. For more information on the available authentication methods, refer to [](/cloud-account/multifactor-authentication.md). 
+ diff --git a/deploy-manage/security/set-up-basic-security-plus-https.md b/deploy-manage/security/set-up-basic-security-plus-https.md index b77f32be2a..1781e4544c 100644 --- a/deploy-manage/security/set-up-basic-security-plus-https.md +++ b/deploy-manage/security/set-up-basic-security-plus-https.md @@ -1,5 +1,8 @@ --- navigation_title: "Set up basic security plus HTTPS" +applies_to: + deployment: + self: ga mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html --- diff --git a/deploy-manage/security/set-up-basic-security.md b/deploy-manage/security/set-up-basic-security.md index 173d367d96..53ed3b8424 100644 --- a/deploy-manage/security/set-up-basic-security.md +++ b/deploy-manage/security/set-up-basic-security.md @@ -1,5 +1,8 @@ --- navigation_title: "Set up basic security" +applies_to: + deployment: + self: ga mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html --- @@ -25,7 +28,7 @@ Transport Layer Security (TLS) is the name of an industry standard protocol for Transport Protocol is the name of the protocol that {{es}} nodes use to communicate with one another. This name is specific to {{es}} and distinguishes the transport port (default `9300`) from the HTTP port (default `9200`). Nodes communicate with one another using the transport port, and REST clients communicate with {{es}} using the HTTP port. -Although the word *transport* appears in both contexts, they mean different things. It’s possible to apply TLS to both the {{es}} transport port and the HTTP port. We know that these overlapping terms can be confusing, so to clarify, in this scenario we’re applying TLS to the {{es}} transport port. In [the next scenario](secure-http-communications.md), we’ll apply TLS to the {{es}} HTTP port. +Although the word *transport* appears in both contexts, they mean different things. It’s possible to apply TLS to both the {{es}} transport port and the HTTP port. We know that these overlapping terms can be confusing, so to clarify, in this scenario we’re applying TLS to the {{es}} transport port. In [](secure-http-communications.md), we’ll apply TLS to the {{es}} HTTP port. :::: diff --git a/deploy-manage/security/set-up-minimal-security.md b/deploy-manage/security/set-up-minimal-security.md index a3dd85c8a6..856bebdb1e 100644 --- a/deploy-manage/security/set-up-minimal-security.md +++ b/deploy-manage/security/set-up-minimal-security.md @@ -1,5 +1,8 @@ --- navigation_title: "Set up minimal security" +applies_to: + deployment: + self: ga mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/security-minimal-setup.html --- diff --git a/deploy-manage/security/traffic-filtering.md b/deploy-manage/security/traffic-filtering.md index 58c1312a10..3eb12320d8 100644 --- a/deploy-manage/security/traffic-filtering.md +++ b/deploy-manage/security/traffic-filtering.md @@ -5,22 +5,15 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-heroku/current/ech-traffic-filtering-deployment-configuration.html --- -# Traffic filtering +# Secure network access -% What needs to be done: Refine +:::{warning} +**This page is a work in progress.** +::: -% GitHub issue: https://github.com/elastic/docs-projects/issues/346 -% Use migrated content from existing pages that map to this page: +Never expose {{es}} to unwanted internet traffic. 
Using an application to sanitize requests to {{es}} still poses risks, such as a malicious user writing [`_search`](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-search) requests that could overwhelm an {{es}} cluster and bring it down. Depending on your environment, consider the following: -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md -% - [ ] ./raw-migrated-files/cloud/cloud/ec-traffic-filtering-deployment-configuration.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-deployment-configuration.md - -⚠️ **This page is a work in progress.** ⚠️ - -The documentation team is working to combine content pulled from the following pages: - -* [/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md](/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md) -* [/raw-migrated-files/cloud/cloud/ec-traffic-filtering-deployment-configuration.md](/raw-migrated-files/cloud/cloud/ec-traffic-filtering-deployment-configuration.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-deployment-configuration.md](/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-deployment-configuration.md) \ No newline at end of file +- **IP traffic filtering**: Restrict access based on IP addresses or CIDR ranges. +- **Private link filters**: Secure connectivity through AWS PrivateLink, Azure Private Link, or GCP Private Service Connect. +- **Elastic Cloud static IPs**: Use static IP addresses for predictable firewall rules. diff --git a/deploy-manage/toc.yml b/deploy-manage/toc.yml index f8cb841b54..a54e7f7b73 100644 --- a/deploy-manage/toc.yml +++ b/deploy-manage/toc.yml @@ -156,7 +156,6 @@ toc: - file: deploy/cloud-enterprise/change-allocator-disconnect-timeout.md - file: deploy/cloud-enterprise/manage-elastic-stack-versions.md - file: deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md - - file: deploy/cloud-enterprise/migrate-ece-on-podman-hosts-to-selinux-enforce.md - file: deploy/cloud-enterprise/working-with-deployments.md children: - file: deploy/cloud-enterprise/deployment-templates.md @@ -474,53 +473,69 @@ toc: - file: autoscaling/autoscaling-deciders.md - file: autoscaling/trained-model-autoscaling.md - file: security.md - children: - - file: security/secure-your-elastic-cloud-enterprise-installation.md + children: + - file: security/secure-hosting-environment.md children: - - file: security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md - - file: security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md - - file: security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md + - file: security/secure-your-elastic-cloud-organization.md + - file: security/secure-your-elastic-cloud-enterprise-installation.md + children: + - file: security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md + - file: security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md + - file: security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md + - file: security/secure-your-elastic-cloud-enterprise-installation/migrate-ece-on-podman-hosts-to-selinux-enforce.md + - file: security/secure-your-eck-installation.md + - file: security/manually-configure-security-in-self-managed-cluster.md + children: + - file: 
security/set-up-minimal-security.md + - file: security/set-up-basic-security.md + - file: security/set-up-basic-security-plus-https.md + - file: security/install-stack-demo-secure.md - file: security/secure-your-cluster-deployment.md children: - - file: security/secure-endpoints.md + - file: security/traffic-filtering.md children: - - file: security/secure-http-communications.md - - file: security/traffic-filtering.md + - file: security/ip-traffic-filtering.md children: - - file: security/ip-traffic-filtering.md - - file: security/private-link-traffic-filters.md - children: - - file: security/aws-privatelink-traffic-filters.md - - file: security/azure-private-link-traffic-filters.md - - file: security/gcp-private-service-connect-traffic-filters.md - - file: security/claim-traffic-filter-link-id-ownership-through-api.md - file: security/manage-traffic-filtering-through-api.md + - file: security/private-link-traffic-filters.md + children: + - file: security/aws-privatelink-traffic-filters.md + - file: security/azure-private-link-traffic-filters.md + - file: security/gcp-private-service-connect-traffic-filters.md + - file: security/claim-traffic-filter-link-id-ownership-through-api.md - file: security/elastic-cloud-static-ips.md - - file: security/kibana-session-management.md - file: security/secure-cluster-communications.md children: + - file: security/secure-http-communications.md - file: security/security-certificates-keys.md children: - file: security/updating-certificates.md children: - file: security/same-ca.md - file: security/different-ca.md - - file: security/secure-clients-integrations.md - children: - - file: security/httprest-clients-security.md - - file: security/encrypt-deployment.md + - file: security/supported-ssltls-versions-by-jdk-version.md + - file: security/data-security.md children: - - file: security/encrypt-deployment-with-customer-managed-encryption-key.md - - file: security/secure-settings.md - - file: security/secure-saved-objects.md - - file: security/manually-configure-security-in-self-managed-cluster.md + - file: security/encrypt-deployment.md + children: + - file: security/encrypt-deployment-with-customer-managed-encryption-key.md + - file: security/enabling-cipher-suites-for-stronger-encryption.md + - file: security/secure-settings.md + - file: security/secure-saved-objects.md + - file: security/logging-configuration/security-event-audit-logging.md children: - - file: security/set-up-minimal-security.md - - file: security/set-up-basic-security.md - - file: security/set-up-basic-security-plus-https.md - - file: security/enabling-cipher-suites-for-stronger-encryption.md - - file: security/supported-ssltls-versions-by-jdk-version.md + - file: security/logging-configuration/enabling-audit-logs.md + - file: security/logging-configuration/configuring-audit-logs.md + children: + - file: security/logging-configuration/logfile-audit-events-ignore-policies.md + - file: security/logging-configuration/logfile-audit-output.md + - file: security/logging-configuration/auditing-search-queries.md + - file: security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md + - file: security/kibana-session-management.md - file: security/fips-140-2.md + - file: security/secure-clients-integrations.md + children: + - file: security/httprest-clients-security.md - file: users-roles.md children: - file: users-roles/cloud-organization.md @@ -736,15 +751,6 @@ toc: children: - file: monitor/logging-configuration/kibana-log-settings-examples.md - file: 
monitor/logging-configuration/kibana-logging-cli-configuration.md - - file: monitor/logging-configuration/security-event-audit-logging.md - children: - - file: monitor/logging-configuration/enabling-audit-logs.md - - file: monitor/logging-configuration/configuring-audit-logs.md - children: - - file: monitor/logging-configuration/logfile-audit-events-ignore-policies.md - - file: monitor/logging-configuration/logfile-audit-output.md - - file: monitor/logging-configuration/auditing-search-queries.md - - file: monitor/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md - file: cloud-organization.md children: - file: cloud-organization/billing.md @@ -803,6 +809,8 @@ toc: - file: upgrade/orchestrator.md children: - file: upgrade/orchestrator/upgrade-cloud-enterprise.md + children: + - file: upgrade/orchestrator/re-running-the-ece-upgrade.md - file: upgrade/orchestrator/upgrade-cloud-on-k8s.md - file: upgrade/deployment-or-cluster.md children: diff --git a/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md b/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md index 69465a4296..9a22b41288 100644 --- a/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md @@ -65,7 +65,7 @@ How you create the AWS S3 bucket depends on what version of Elasticsearch you ar * For version 7.x: 1. Using the Minio browser or an S3 client application, create an S3 bucket to store your snapshots. - 2. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md) and [add the S3 repository plugin](elasticsearch://reference/elasticsearch-plugins/cloud-enterprise/ece-add-plugins.md) to your cluster. + 2. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md) and [add the S3 repository plugin](elasticsearch://reference/elasticsearch-plugins/plugin-management.md) to your cluster. * For versions 8.0 and later, {{es}} has built-in support for AWS S3 repositories; no repository plugin is needed. Use the Minio browser or an S3 client application to create an S3 bucket to store your snapshots. diff --git a/deploy-manage/upgrade.md b/deploy-manage/upgrade.md index c871b26ecd..fb72a19bd1 100644 --- a/deploy-manage/upgrade.md +++ b/deploy-manage/upgrade.md @@ -48,7 +48,7 @@ It is very important to map all the components that are being used on the {{stac * External services (Kafka, etc.) :::{tip} -When you do your inventory, you can [enable audit logging](/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. +When you do your inventory, you can [enable audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. ::: **Test your development environment** diff --git a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md index c4b296173e..66884d9959 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md +++ b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md @@ -1 +1,202 @@ -# Upgrade {{es}} \ No newline at end of file +--- +applies_to: + deployment: + self: all +--- + +# Upgrade {{es}} [upgrading-elasticsearch] + +An {{es}} cluster can be upgraded one node at a time so upgrading does not interrupt service. 
Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported, as shards cannot be replicated from upgraded nodes to nodes running the older version. + +Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/prepare-to-upgrade.md). When performing a [rolling upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md#rolling-upgrades): + +1. Upgrade the data nodes first, tier-by-tier, starting with the frozen tier, then the cold tier, then the warm tier, then the hot tier, and finally any other data nodes which are not in a tier. Complete the upgrade for all nodes in each data tier before moving to the next. This ensures {{ilm-init}} can continue to move data through the tiers during the upgrade. You can get the list of nodes in a specific tier with a `GET /_nodes` request, for example: `GET /_nodes/data_frozen:true/_none`. +2. Upgrade all remaining nodes that are neither master-eligible nor data nodes. This includes dedicated ML nodes, dedicated ingest nodes, and dedicated coordinating nodes. +3. Upgrade the master-eligible nodes last. You can retrieve a list of these nodes with `GET /_nodes/master:true/_none`. + +This order ensures that all nodes can join the cluster during the upgrade. Upgraded nodes can join a cluster with an older master, but older nodes cannot always join a cluster with a upgraded master. + +To upgrade a cluster: + +1. **Disable shard allocation**. + + When you shut down a data node, the allocation process waits for `index.unassigned.node_left.delayed_timeout` (by default, one minute) before starting to replicate the shards on that node to other nodes in the cluster, which can involve a lot of I/O. Since the node is shortly going to be restarted, this I/O is unnecessary. You can avoid racing the clock by [disabling allocation](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-allocation-enable) of replicas before shutting down [data nodes](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md#data-node): + + ```console + PUT _cluster/settings + { + "persistent": { + "cluster.routing.allocation.enable": "primaries" + } + } + ``` + +2. **Stop non-essential indexing and perform a flush.** (Optional) + + While you can continue indexing during the upgrade, shard recovery is much faster if you temporarily stop non-essential indexing and perform a [flush](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-flush). + + ```console + POST /_flush + ``` + +3. **Temporarily stop the tasks associated with active {{ml}} jobs and {{dfeeds}}.** (Optional) + + It is possible to leave your {{ml}} jobs running during the upgrade, but it puts increased load on the cluster. When you shut down a {{ml}} node, its jobs automatically move to another node and restore the model states. + + ::::{note} + Any {{ml}} indices created before 8.x must be reindexed before upgrading, which you can initiate from the **Upgrade Assistant** in 8.18. 
For more information, refer to [Anomaly detection results migration] + :::: + + + * Temporarily halt the tasks associated with your {{ml}} jobs and {{dfeeds}} and prevent new jobs from opening by using the [set upgrade mode API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-set-upgrade-mode): + + ```console + POST _ml/set_upgrade_mode?enabled=true + ``` + + When you disable upgrade mode, the jobs resume using the last model state that was automatically saved. This option avoids the overhead of managing active jobs during the upgrade and is faster than explicitly stopping {{dfeeds}} and closing jobs. + + * [Stop all {{dfeeds}} and close all jobs](../../../explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-close-job). This option saves the model state at the time of closure. When you reopen the jobs after the upgrade, they use the exact same model. However, saving the latest model state takes longer than using upgrade mode, especially if you have a lot of jobs or jobs with large model states. + +4. $$$upgrade-node$$$ **Shut down a single node**. + + To shut down a single node depends on what is currently used to run {{es}}. For example, if using `systemd` or SysV `init` run the commands below. + + * If you are running {{es}} with `systemd`: + + ```sh + sudo systemctl stop elasticsearch.service + ``` + + * If you are running {{es}} with SysV `init`: + + ```sh + sudo -i service elasticsearch stop + ``` + +5. **Upgrade the node you shut down.** + + To upgrade using a [Debian](../../../deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md) or [RPM](../../../deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md) package: + + * Use `rpm` or `dpkg` to install the new package. All files are installed in the appropriate location for the operating system and {{es}} config files are not overwritten. + + To upgrade using a zip or compressed tarball: + + 1. Extract the zip or tarball to a **new** directory. This is critical if you are not using external `config` and `data` directories. + 2. Set the `ES_PATH_CONF` environment variable to specify the location of your external `config` directory and `jvm.options` file. If you are not using an external `config` directory, copy your old configuration over to the new installation. + 3. Set `path.data` in `config/elasticsearch.yml` to point to your external data directory. If you are not using an external `data` directory, copy your old data directory over to the new installation.
+ + ::::{important} + If you use {{monitor-features}}, re-use the data directory when you upgrade {{es}}. Monitoring identifies unique {{es}} nodes by using the persistent UUID, which is stored in the data directory. + :::: + + 4. Set `path.logs` in `config/elasticsearch.yml` to point to the location where you want to store your logs. If you do not specify this setting, logs are stored in the directory you extracted the archive to. + + ::::{tip} + When you extract the zip or tarball packages, the `elasticsearch-{{bare_version}}` directory contains the {{es}} `config`, `data`, and `logs` directories. + + We recommend moving these directories out of the {{es}} directory so that there is no chance of deleting them when you upgrade {{es}}. To specify the new locations, use the `ES_PATH_CONF` environment variable and the `path.data` and `path.logs` settings. For more information, see [Important {{es}} configuration](../../../deploy-manage/deploy/self-managed/important-settings-configuration.md). + + The Debian and RPM packages place these directories in the appropriate place for each operating system. In production, we recommend using the deb or rpm package. + + :::: + + + $$$rolling-upgrades-bootstrapping$$$ + Leave `cluster.initial_master_nodes` unset when performing a rolling upgrade. Each upgraded node is joining an existing cluster so there is no need for [cluster bootstrapping](../../../deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md). You must configure [either `discovery.seed_hosts` or `discovery.seed_providers`](../../../deploy-manage/deploy/self-managed/important-settings-configuration.md#discovery-settings) on every node. + +6. **Upgrade any plugins.** + + Use the `elasticsearch-plugin` script to install the upgraded version of each installed {{es}} plugin. All plugins must be upgraded when you upgrade a node. + +7. **Start the upgraded node.** + + Start the newly-upgraded node and confirm that it joins the cluster by checking the log file or by submitting a `_cat/nodes` request: + + ```console + GET _cat/nodes + ``` + +8. **Reenable shard allocation.** + + For data nodes, once the node has joined the cluster, remove the `cluster.routing.allocation.enable` setting to enable shard allocation and start using the node: + + ```console + PUT _cluster/settings + { + "persistent": { + "cluster.routing.allocation.enable": null + } + } + ``` + +9. **Wait for the node to recover.** + + Before upgrading the next node, wait for the cluster to finish shard allocation. You can check progress by submitting a `_cat/health` request: + + ```console + GET _cat/health?v=true + ``` + + Wait for the `status` column to switch to `green`. Once the node is `green`, all primary and replica shards have been allocated. + + ::::{important} + During a rolling upgrade, primary shards assigned to a node running the new version cannot have their replicas assigned to a node with the old version. The new version might have a different data format that is not understood by the old version. + + If it is not possible to assign the replica shards to another node (there is only one upgraded node in the cluster), the replica shards remain unassigned and status stays `yellow`. + + In this case, you can proceed once there are no initializing or relocating shards (check the `init` and `relo` columns). + + As soon as another node is upgraded, the replicas can be assigned and the status will change to `green`. 
+ + :::: + + + Shards that were not flushed might take longer to recover. You can monitor the recovery status of individual shards by submitting a `_cat/recovery` request: + + ```console + GET _cat/recovery + ``` + + If you stopped indexing, it is safe to resume indexing as soon as recovery completes. + +10. **Repeat**. + + When the node has recovered and the cluster is stable, repeat these steps for each node that needs to be updated. You can monitor the health of the cluster with a `_cat/health` request: + + ```console + GET /_cat/health?v=true + ``` + + And check which nodes have been upgraded with a `_cat/nodes` request: + + ```console + GET /_cat/nodes?h=ip,name,version&v=true + ``` + +11. **Restart machine learning jobs.** + + If you temporarily halted the tasks associated with your {{ml}} jobs, use the set upgrade mode API to return them to active states: + + ```console + POST _ml/set_upgrade_mode?enabled=false + ``` + + If you closed all {{ml}} jobs before the upgrade, open the jobs and start the datafeeds from {{kib}} or with the [open jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-open-job) and [start datafeed](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-datafeed) APIs. + + + +## Rolling upgrades [rolling-upgrades] + +During a rolling upgrade, the cluster continues to operate normally. However, any new functionality is disabled or operates in a backward compatible mode until all nodes in the cluster are upgraded. New functionality becomes operational once the upgrade is complete and all nodes are running the new version. Once that has happened, there’s no way to return to operating in a backward compatible mode. Nodes running the previous version will not be allowed to join the fully-updated cluster. + +In the unlikely case of a network malfunction during the upgrade process that isolates all remaining old nodes from the cluster, you must take the old nodes offline and upgrade them to enable them to join the cluster. + +If you stop half or more of the master-eligible nodes all at once during the upgrade the cluster will become unavailable. You must upgrade and restart all of the stopped master-eligible nodes to allow the cluster to re-form. It might also be necessary to upgrade all other nodes running the old version to enable them to join the re-formed cluster. + +Similarly, if you run a testing/development environment with a single master node it should be upgraded last. Restarting a single master node forces the cluster to be reformed. The new cluster will initially only have the upgraded master node and will thus reject the older nodes when they re-join the cluster. Nodes that have already been upgraded will successfully re-join the upgraded master. + + +## Archived settings [archived-settings] + +If you upgrade an {{es}} cluster that uses deprecated cluster or index settings that are not used in the target version, they are archived. We recommend you remove any archived settings after upgrading. For more information, see [Archived settings](../../../deploy-manage/upgrade/deployment-or-cluster/archived-settings.md). 
diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana.md b/deploy-manage/upgrade/deployment-or-cluster/kibana.md index 94bffee473..9b512caabe 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/kibana.md +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana.md @@ -1 +1,75 @@ -# Upgrade {{kib}} \ No newline at end of file +--- +applies_to: + deployment: + self: all +--- + +# Upgrade {{kib}} [upgrade-kibana] + +When you upgrade {{kib}}, you also upgrade the {{observability}} and {{elastic-sec}} solutions, which use {{kib}} as their main interface. + +::::{warning} +{{kib}} automatically runs saved object migrations when required. To roll back to an earlier version in case of an upgrade failure, you **must** have a [backup snapshot](../../tools/snapshot-and-restore.md) that includes the `kibana` feature state. Snapshots include this feature state by default. + +For more information, refer to [Migrate saved objects](../internal-upgrade-processes/saved-object-migrations.md). + +:::: + + +## Upgrading multiple {{kib}} instances [_upgrading_multiple_kib_instances] + +When upgrading several {{kib}} instances connected to the same {{es}} cluster, ensure that all outdated instances are shut down before starting the upgrade. + +Rolling upgrades are unsupported in {{kib}}. However, when outdated instances are shut down, you can start all upgraded instances in parallel, which allows all instances to participate in the upgrade migration in parallel. + +For large deployments with more than 10 {{kib}} instances, and more than 10,000 saved objects, you can reduce the upgrade downtime by bringing up a single {{kib}} instance and waiting for it to complete the upgrade migration before bringing up the remaining instances. + +## Preparing for migration [preventing-migration-failures] + +Before you start, ensure you [take the upgrade preparation steps](../prepare-to-upgrade.md). Then, take these extra steps to ensure you are ready for migration. + + +### Ensure your {{es}} cluster is healthy [_ensure_your_es_cluster_is_healthy] + +Problems with your {{es}} cluster can prevent {{kib}} upgrades from succeeding. + +During the upgrade process, {{kib}} creates new indices into which updated documents are written. If a cluster is approaching the low watermark, there’s a high risk of {{kib}} not being able to create these. Reading, transforming and writing updated documents can be memory intensive, using more available heap than during routine operation. You must make sure that enough heap is available to prevent requests from timing out or throwing errors from circuit breaker exceptions. You should also ensure that all shards are replicated and assigned. + +A healthy cluster has: + +* Enough free disk space, at least twice the amount of storage taken up by the `.kibana` and `.kibana_task_manager` indices +* Sufficient heap size +* A "green" cluster status + + +### Ensure that all {{kib}} instances are the same [_ensure_that_all_kib_instances_are_the_same] + +When you perform an upgrade migration of different {{kib}} versions, the migration can fail. Ensure that all {{kib}} instances are running the same version, configuration, and plugins. + +## Perform the upgrade [perform-kibana-upgrade] + +To upgrade {{kib}}: + +1. Shut down all {{kib}} instances. {{kib}} does not support rolling upgrades. **Upgrading while older {{kib}} instances are running can cause data loss or upgrade failures.** +2. To install the `deb` or `rpm` package: + + a. Use `rpm` or `dpkg`. 
This installs all files in their proper locations and will not overwrite the config files. + b. Upgrade any plugins by removing the existing plugin and reinstalling the appropriate version using the `kibana-plugin` script. For more information, see [{{kib}} plugins](kibana://reference/kibana-plugins.md). + +3. To install from a `zip` or `tar.gz` archive: + + a. **Extract the archive to a new directory** to be sure that you don’t overwrite the `config` or `data` directories. + b. Copy the files from the `config` directory from your old installation to your new installation. + c. Copy the files from the `data` directory from your old installation to your new installation. + + ::::{important} + If you use {{monitor-features}}, you must re-use the data directory when you upgrade {{kib}}. Otherwise, the {{kib}} instance is assigned a new persistent UUID and becomes a new instance in the monitoring data. + :::: + + d. Install the appropriate versions of all your plugins for your new installation using the `kibana-plugin` script. For more information, see [{{kib}} plugins](kibana://reference/kibana-plugins.md). + +4. Start {{kib}}. + +::::{note} +{{kib}} 8.0.0 and later uses a new logging system, so the log formats have changed. For additional information, see [Logging configuration changes](kibana://extend/logging-config-changes.md). +:::: diff --git a/deploy-manage/upgrade/orchestrator/re-running-the-ece-upgrade.md b/deploy-manage/upgrade/orchestrator/re-running-the-ece-upgrade.md new file mode 100644 index 0000000000..b0e3e339ad --- /dev/null +++ b/deploy-manage/upgrade/orchestrator/re-running-the-ece-upgrade.md @@ -0,0 +1,9 @@ +# Re-running the ECE upgrade [re-running-ece-upgrade] + +By default, re-running the `elastic-cloud-enterprise.sh upgrade` command resumes the previous ECE upgrade process. However, if your previous ECE upgrade attempt got stuck (for example, due to infrastructure problems, a host going offline, or similar) and re-attempting the upgrade still results in the upgrade process being blocked, run `elastic-cloud-enterprise.sh upgrade --force-upgrade` to ensure that any existing upgrade state gets cleared before starting the new ECE upgrade process. + +The `--force-upgrade` parameter is also helpful in situations where the Elastic Cloud Enterprise platform is already upgraded to the desired target version but there are containers still running at the old version. These rare situations can also be caused by infrastructure issues, for example, runners temporarily going offline and not being "seen" by the upgrade process. Running `elastic-cloud-enterprise.sh upgrade --force-upgrade` with the same target version makes the Elastic Cloud Enterprise upgrader perform the upgrade procedure anyway, thereby covering any containers that failed to upgrade previously. + +If the Elastic Cloud Enterprise platform was upgraded successfully and yet one or more system deployments were not upgraded to a higher Elastic Stack version during the very last phase of the Elastic Cloud Enterprise upgrade, we recommend running the Elastic Cloud Enterprise upgrader again without the `--force-upgrade` parameter. The Elastic Cloud Enterprise upgrader will recognize that the platform is already at the desired target version and will apply upgrade plans to system deployments. + +Refer to [](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md) for details on the system deployment versions associated with each {{ece}} version. 
\ No newline at end of file
diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md
index 093990370e..9acd384693 100644
--- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md
+++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md
@@ -4,20 +4,135 @@ mapped_urls:
   - https://www.elastic.co/guide/en/cloud-enterprise/current/ece_re_running_the_ece_upgrade.html
 ---
-# Upgrade Elastic Cloud Enterprise
+% The upgrade procedure is expected to change with the ECE 3.8.0 release. This document is currently a temporary draft, pending refinement.
+
+# Upgrade Elastic Cloud Enterprise [ece-upgrade]
+
+Periodically, you might need to upgrade an Elastic Cloud Enterprise installation as new versions with additional features become available. The upgrade process updates all hosts that are part of an Elastic Cloud Enterprise installation to the latest version of ECE, with little or no downtime for managed deployments. To upgrade deployments to Elastic Stack 9.x, the minimum required version of ECE is 3.0.0.
+
+Before initiating the ECE upgrade process, consult the [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) to make sure that the operating system (OS) and Docker or Podman versions you run are compatible with the ECE version you're about to upgrade to. We recommend that Docker, Podman, and the operating system be at the target version before starting the ECE upgrade procedure.
+
+:::{note}
+During the upgrade window, there might be a short period during which you run a combination of versions that is not explicitly supported. For example, if you are on ECE 3.5 with Docker version 20.10 on Ubuntu 20.04, and plan to upgrade to ECE 3.7 on the same OS, you need to upgrade Docker to version 24.0 first. In this case, and only during your upgrade window, we support the mixed OS/Docker versions. In general, this won't be a problem. However, should anything become a blocker for the upgrade, [reach out to support for help](/troubleshoot/deployments/cloud-enterprise/ask-for-help.md).
+:::
+
+## The upgrade version matrix [ece-upgrade-version-matrix]
+
+The following table shows the recommended upgrade paths from older Elastic Cloud Enterprise versions to 4.0.0.
+
+| Upgrade from | Recommended upgrade path to 4.0 |
+| --- | --- |
+| Any 3.x version | 1. Upgrade to 3.8.0<br>2. Upgrade to 4.0.0 |
+| 2.13 | 1. Upgrade to 3.8.0<br>2. Upgrade to 4.0.0 |
+| 2.5-2.12 | 1. Upgrade to 2.13.4<br>2. Upgrade to 3.8.0<br>3. Upgrade to 4.0.0 |
+| 2.0-2.4 | 1. Upgrade to 2.5.1<br>2. Upgrade to 2.13.4<br>3. Upgrade to 3.8.0<br>4. Upgrade to 4.0.0 |
+
+If you have to upgrade through any of the intermediate versions, follow the upgrade instructions of the relevant release before upgrading to 4.0.0:
+- [ECE 2.5 Upgrade](https://www.elastic.co/guide/en/cloud-enterprise/2.5/ece-upgrade.html)
+- [ECE 2.13 Upgrade](https://www.elastic.co/guide/en/cloud-enterprise/2.13/ece-upgrade.html)
+- [ECE 3.8 Upgrade](https://www.elastic.co/guide/en/cloud-enterprise/3.8/ece-upgrade.html)
+
+## The upgrade process [ece-upgrade-overview]
+
+Upgrading Elastic Cloud Enterprise works by replacing the [containers](/reference/glossary/index.md#glossary-container) that ECE itself requires to run on each host. Upgrading ECE does not touch any of the containers that run Elasticsearch clusters and Kibana instances. Each container that needs to be upgraded is renamed and stopped, followed by the creation of a new container with an upgraded instance of the ECE software and its dependencies. When the upgrade process has completed successfully, it cleans up after itself and removes the old containers.
+
+The upgrade process creates a `frc-upgraders-monitor` container on the host where you initiate the process. This container performs the following actions:
+
+* Backs up the ZooKeeper transaction log to `HOST_STORAGE_PATH/RUNNER_ID/services/zookeeper/data/backup`, where `HOST_STORAGE_PATH` and `RUNNER_ID` are specific to an ECE installation.
+* Configures Elastic Cloud Enterprise to perform the individual container upgrades by creating a `frc-upgraders-upgrader` container on each host that is part of the installation.
+* Monitors the upgrade process to ensure that all `frc-upgraders-upgrader` containers perform their part of the upgrade as expected and report their status.
+* After all hosts have been upgraded successfully, cleans up temporary artifacts created during the upgrade process and removes the old containers.
+
+The entire process is designed to be failsafe. Containers get upgraded sequentially, and the upgrade status of each container is tracked. The upgrade process also monitors that each new container is viable and continues to run as expected. If there is an issue with any part of the upgrade, [the entire process is rolled back](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md#ece-troubleshooting-upgrade).
+
+## Recommendations [ece-upgrade-recommendations]
+
+Before starting the upgrade process, check which of the following recommendations apply to your setup:
+
+* We strongly recommend that you routinely update your ECE installation to the most current version so that any bugs and security issues are fixed promptly. If you need to upgrade but are currently experiencing any issues with your platform, note that as long as ZooKeeper is running and healthy you should be able to upgrade. You can use the [get runners API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-get-runners) to easily verify the health of the runners on the [ECE allocators](../../../deploy-manage/deploy/cloud-enterprise/ece-architecture.md#ece-architecture-allocators), as shown in the example after this list. That is, healthy system deployments are not required to perform an upgrade successfully.
+* Before upgrading to Elastic Cloud Enterprise 4.0, refer to the [lists of removals](/release-notes/breaking-changes/index.md) to find out about features and API endpoints that are no longer supported.
+* We strongly recommend that you do not perform certain actions during the upgrade process, such as:
+
+  * Creating or changing Elasticsearch clusters and Kibana instances
+  * Adding new hosts to an installation or removing existing hosts
+
+* As a precaution, we recommend taking snapshots of your Elasticsearch clusters.
+* We recommend that you take a backup snapshot of the `security` [system deployment](../../../deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md). This cluster stores [role-based access control configurations](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md), and a snapshot allows you to restore them if the upgrade fails.
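+For example, a quick health check of the runners through the API might look like the following sketch. It assumes the same `admin` credentials and coordinator host placeholders used in the license example later on this page:
+
+```sh
+# List all runners and inspect the health fields in the response
+curl -X GET -u admin:PASSWORD -k https://COORDINATOR_HOST:12443/api/v1/platform/infrastructure/runners
+```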
+
+## Requirements [ece-upgrade-prereqs]
+
+Before starting the upgrade process, verify that your setup meets the following requirements:
+
+- **XFS with quotas enabled on all allocators.** You must use XFS and have quotas enabled on all allocators, otherwise disk usage won't display correctly. To enable XFS quotas, modify the entry for the XFS volume in the `/etc/fstab` file to add `pquota` and `prjquota`. The default filesystem path used by Elastic Cloud Enterprise is `/mnt/data`.
+- **Supported Docker / Podman version.** Make sure that you're running a supported Docker or Podman version for the version of ECE that you are going to upgrade to. An overview of compatible versions can be found in the [support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise), and install instructions are available under [Installing Elastic Cloud Enterprise](../../../deploy-manage/deploy/cloud-enterprise/install.md).
+- **Required user, roles, and groups.** To run the script to upgrade Elastic Cloud Enterprise, log in as the user used to run Elastic Cloud Enterprise (by default called `elastic` with UID/GID 1000). Initiate the upgrade process by running the `elastic-cloud-enterprise.sh` script with the `upgrade` action on a single host. The host that the script is run on must hold the director role. You do not need to run the script on additional hosts.
+- **Available disk space.** Each host in the Elastic Cloud Enterprise installation must have at least 5 GB of disk space available to ensure that the upgrade process can complete successfully.
+- **Proxies and load balancing.** To avoid any downtime for Elastic Cloud Enterprise, the installation must include more than one proxy and must use a load balancer as recommended. If only a single proxy is configured or if the installation is not using a load balancer, some downtime is expected when the containers on the proxies are upgraded. Each container upgrade typically takes five to ten seconds, multiplied by the number of containers on a typical host.
+- **For *offline* or *air-gapped* installations.** Additional steps are required to upgrade Elastic Cloud Enterprise. After downloading the installation script for the new version, pull and load the required container images and push them to a private Docker registry. To learn more about pulling and loading Docker images, check [Install ECE offline](../../../deploy-manage/deploy/cloud-enterprise/air-gapped-install.md).
+- **Security cluster zone count.** Due to internal limitations in ECE, the built-in security cluster cannot be scaled to two zones during the ECE upgrade procedure. If the zone count is set to two zones, scale the cluster to one or three zones before upgrading ECE.
+- **[Verify if you can upgrade directly](#ece-upgrade-version-matrix).** When upgrading to ECE 4.0 or a higher version:
+  - You need to first upgrade to ECE 3.8.0 or later. Refer to the ECE version 3.8.0 upgrade instructions for details.
+
+    :::{warning}
+    Don't manually upgrade your system deployments if you are on ECE version 2.7.0 or a later version, as doing so can cause issues and you may lose access to the Cloud UI. The only exception to this rule is when you're upgrading to ECE 3.6.0 and your system deployments are at a version lower than 7.17.0.
+    :::
+
+  - Ensure that your system deployments are at their [expected versions](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md). Since ECE 3.6.1, the upgrade process automatically upgrades system deployments to the required version. If the Elastic Cloud Enterprise platform was upgraded successfully and yet one or more system deployments were not upgraded to [their expected version](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md) during the very last phase of the upgrade, you can re-run the `elastic-cloud-enterprise.sh upgrade --cloud-enterprise-version <version>` command to retry the system deployment upgrade only.
+  - Check that your deployments are running on Elastic Stack version 7.0.0 or above.
+- **Use the same installation flags.** Before running the upgrade command, ensure that you include the same installation flags that were used during the initial setup. Some deployment configurations, such as those using Podman or SELinux, require specific flags to be passed again during the upgrade. Failure to do so may result in compatibility errors.
+
+## Certificate rotation [ece-upgrade-certificates]
+
+If your ECE installation is still using the default, auto-generated certificates, we recommend performing one of the following steps to avoid trust errors related to the proxy server certificate after the upgrade. The proxy server certificate is used when connecting to Kibana and Elasticsearch clusters. During the upgrade, the ECE certificate authority generates a new certificate. As with any server certificate rotation, you must add an exception for the new proxy server certificate, unless the certificate authority is present in the trust store of the system or browser. You can perform any of these steps before or after the upgrade:
+
+- Recommended: [Add your organization's own certificate](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md) to Elastic Cloud Enterprise. The upgrade process ensures that the certificates you add do not change, which avoids the trust errors.
+- Add the default CA certificate to the trust store of your system or of your browser. Only the server certificate changes during the upgrade; the CA certificate remains the same. Adding the CA certificate to your trust store alone is sufficient to avoid the trust errors.
+- Apply a valid license. You must have an `Enterprise resource unit`-compatible license applied before upgrading to ECE 2.7 or later. The most reliable way to check if your license is compatible is to use the Elastic Cloud Enterprise API and check the value of the license version field:
+
+  ```sh
+  curl -X GET -u admin:PASSWORD -k https://COORDINATOR_HOST:12443/api/v1/platform/license
+  {
+    "license": {
+      "version": 4,
+      // other fields
+    }
+  }
+  ```
+
+If the license version is not 4 or higher, you must request an updated license from [Elastic Support](../../../troubleshoot/deployments/cloud-enterprise/ask-for-help.md).
Once you receive your new license, make sure Elastic Cloud Enterprise is upgraded to at least version 2.5.0, and then upload the new license on the **Settings** page under the **Platform** menu.
+
+In versions 2.6 to 2.10 inclusive, some or all platform certificates are generated with a 398-day expiration. Installations that ran on these versions, even temporarily, must have their certificates rotated manually before expiry. For details, check our [KB article](https://support.elastic.co/knowledge/ECE-Certificate-Rotation).
+
+## Perform the upgrade [ece-upgrade-steps]
+
+To upgrade an Elastic Cloud Enterprise installation, download the latest installation script. Log in as the user used to run Elastic Cloud Enterprise (by default called `elastic` with UID/GID 1000), and run the script with the `upgrade` action on a single host that holds the director role:
+
+::::{important}
+* If your ECE installation was set up using **Podman** instead of Docker, append the `--podman` flag when running the upgrade command.
+* If your installation uses **SELinux**, append the `--selinux` flag when running the upgrade command.
+* If you configured a **custom Docker registry** during installation using the `--docker-registry` or `--ece-docker-repository` parameters, include the same parameters when running the upgrade.
+::::
+
+```sh
+bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) upgrade
+```
+
+You can follow along while each container for Elastic Cloud Enterprise is upgraded on the hosts that are part of the installation.
+
+## Upgrade to a specific version
+
+By default, ECE upgrades to the most current available version. If you want to upgrade to a specific ECE version, use the `--cloud-enterprise-version` option:
+
+```sh
+bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) upgrade --user admin --pass $PASSWORD --cloud-enterprise-version 4.0.0
+```
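+For example, if your installation was originally set up on Podman hosts with SELinux and a custom registry, a sketch of the upgrade command carrying the same flags could look like this; the registry value is a placeholder:
+
+```sh
+bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) upgrade \
+  --podman --selinux \
+  --ece-docker-repository my.private.registry/cloud-assets
+```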
-% What needs to be done: Refine
-% GitHub issue: ?
-% Use migrated content from existing pages that map to this page:
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-upgrade.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece_re_running_the_ece_upgrade.md
-⚠️ **This page is a work in progress.** ⚠️
-The documentation team is working to combine content pulled from the following pages:
-* [/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade.md](/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade.md)
-* [/raw-migrated-files/cloud/cloud-enterprise/ece_re_running_the_ece_upgrade.md](/raw-migrated-files/cloud/cloud-enterprise/ece_re_running_the_ece_upgrade.md)
\ No newline at end of file
diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md
index 66195cb802..209a47ab17 100644
--- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md
+++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md
@@ -10,13 +10,13 @@
 This page provides instructions on how to upgrade the ECK operator.
 For upgrades of Elastic Stack applications like Elasticsearch or Kibana, check [Upgrade the Elastic Stack version](../deployment-or-cluster.md).
-## Before you upgrade to ECK 2.16.1 [k8s-ga-upgrade]
+## Before you upgrade to ECK 3.0.0 [k8s-ga-upgrade]
-The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all Elasticsearch and Kibana pods. This [list](#k8s-beta-to-ga-rolling-restart) details the affected target versions that will cause a rolling restart. If you have a large Elasticsearch cluster or multiple Elastic Stack deployments, the rolling restart could cause a performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and Elasticsearch clusters. Check for more information on how to [control the rolling restarts during the upgrade](#k8s-beta-to-ga-rolling-restart).
+The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all Elasticsearch and Kibana pods. This [list](#k8s-beta-to-ga-rolling-restart) details the affected target versions that will cause a rolling restart. If you have a large Elasticsearch cluster or multiple Elastic Stack deployments, the rolling restart could cause performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and Elasticsearch clusters. For more details on controlling rolling restarts during the upgrade, refer to [Control rolling restarts](#k8s-beta-to-ga-rolling-restart).
-Before upgrading, refer to the [release notes](https://www.elastic.co/guide/en/cloud-on-k8s/current/release-notes-2.16.1.html) to make sure that the release does not contain any breaking changes that could affect you. The [release highlights document](https://www.elastic.co/guide/en/cloud-on-k8s/current/release-highlights-2.16.1.html) provides more details and possible workarounds for any breaking changes or known issues in each release.
+Before upgrading, refer to the [release notes](cloud-on-k8s://release-notes/index.md) to make sure that the release does not contain any breaking changes that could affect you. The [release highlights document](cloud-on-k8s://release-notes/index.md) provides more details and possible workarounds for any breaking changes or known issues in each release.
-Note that the release notes and highlights only list the changes since the last release. If during the upgrade you skip any intermediate versions and go for example from 1.0.0 directly to 2.16.1, review the release notes and highlights of each of the skipped releases to understand all the breaking changes you might encounter during and after the upgrade.
+Note that the release notes and highlights only list the changes since the last release. If during the upgrade you skip any intermediate versions and go, for example, from 1.0.0 directly to 3.0.0, review the release notes and highlights of each of the skipped releases to understand all the breaking changes you might encounter during and after the upgrade.
 ::::{warning}
 When upgrading always ensure that the version of the CRDs installed in the cluster matches the version of the operator. If you are using Helm, the CRDs are upgraded automatically as part of the Helm chart. If you are using the YAML manifests, you must upgrade the CRDs manually. Running differing versions of the CRDs and the operator is not a supported configuration and can lead to unexpected behavior.
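One way to check which CRD and operator versions are currently installed is sketched below. It assumes `kubectl` access to the cluster and the default `elastic-system` namespace; the exact label names can vary between ECK releases:

```shell
# Inspect the labels on one of the ECK CRDs; recent releases carry a version label
kubectl get crd elasticsearches.elasticsearch.k8s.elastic.co --show-labels

# Check the operator image tag (the operator runs as a StatefulSet in the default install)
kubectl get statefulset elastic-operator -n elastic-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```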
@@ -32,14 +32,14 @@ When upgrading always ensure that the version of the CRDs installed in the clust
 Release 1.7.0 moved the [CustomResourceDefinitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD) used by ECK to the v1 version. If you upgrade from a previous version of ECK, the new version of the CRDs replaces the existing CRDs. If you cannot remove the current ECK installation because you have production workloads that must not be deleted, the following approach is recommended.
 ```shell
-kubectl replace -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
+kubectl replace -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml
 ```
 ::::{note}
 If you skipped a release in which new CRDs where introduced, you will get an error message similar to `Error from server (NotFound): error when replacing "config/crds.yaml": customresourcedefinitions.apiextensions.k8s.io ... not found`. To add the missing CRDs run
 ```shell
-kubectl create -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
+kubectl create -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml
 ```
 ::::
@@ -48,9 +48,11 @@ kubectl create -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml
 Then upgrade the remaining objects with the operator manifest:
 ```shell
-kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml
+kubectl apply -f https://download.elastic.co/downloads/eck/3.0.0/operator.yaml
 ```
+If you are using Helm, force-upgrade the CRD chart to move to the v1 CRDs:
+
 ```shell
 helm upgrade elastic-operator elastic/eck-operator-crds -n elastic-system --force
 ```
@@ -71,13 +73,17 @@ Operator Lifecycle Manager (OLM) and OpenShift OperatorHub users that run with a
 ### Upgrading from ECK 2.0 or later [k8s_upgrading_from_eck_2_0_or_later]
-There are no special instructions to follow if you upgrade from any 2.x version to 2.16.1. Use the upgrade method applicable to your installation method of choice.
+There are no special instructions to follow if you upgrade from any 2.x version to 3.0.0. Use the upgrade method applicable to your installation method of choice.
+
+If you are using our YAML manifests:
 ```shell
-kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
-kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml
+kubectl apply -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml
+kubectl apply -f https://download.elastic.co/downloads/eck/3.0.0/operator.yaml
 ```
+
+If you are using Helm:
 ```shell
 helm upgrade elastic-operator elastic/eck-operator -n elastic-system
 ```
@@ -103,6 +109,8 @@ If you have a very large Elasticsearch cluster or multiple Elastic Stack deploym
 Once a resource is excluded from being managed by ECK, you will not be able to add/remove nodes, upgrade Stack version, or perform other [orchestration tasks](../../deploy/cloud-on-k8s/configure-deployments.md) by updating the resource manifest. You must remember to remove the exclusion to ensure that your Elastic Stack deployment is continually monitored and managed by the operator.
:::: +Exclude Elastic resources from being managed by the operator: + ```shell ANNOTATION='eck.k8s.elastic.co/managed=false' diff --git a/deploy-manage/users-roles.md b/deploy-manage/users-roles.md index 8059b7bd7e..3bac1bef76 100644 --- a/deploy-manage/users-roles.md +++ b/deploy-manage/users-roles.md @@ -21,8 +21,8 @@ The methods that you use to authenticate users and control access depends on the Preventing unauthorized access is only one element of a complete security strategy. To secure your Elastic environment, you can also do the following: * Restrict the nodes and clients that can connect to the cluster using [traffic filters](/deploy-manage/security/traffic-filtering.md). -* Take steps to maintain your data integrity and confidentiality by [encrypting HTTP and inter-node communications](/deploy-manage/security/secure-endpoints.md), as well as [encrypting your data at rest](/deploy-manage/security/encrypt-deployment.md). -* Maintain an [audit trail](/deploy-manage/monitor/logging-configuration/security-event-audit-logging.md) for security-related events. +* Take steps to maintain your data integrity and confidentiality by [encrypting HTTP and inter-node communications](/deploy-manage/security/secure-cluster-communications.md), as well as [encrypting your data at rest](/deploy-manage/security/encrypt-deployment.md). +* Maintain an [audit trail](/deploy-manage/security/logging-configuration/security-event-audit-logging.md) for security-related events. * Control access to dashboards and other saved objects in your UI using [{{kib}} spaces](/deploy-manage/manage-spaces.md). * Connect your cluster to a [remote cluster](/deploy-manage/remote-clusters.md) to enable cross-cluster replication and search. * Manage [API keys](/deploy-manage/api-keys.md) used for programmatic access to Elastic. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md b/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md index 36f2d2ee70..dd9ce8c007 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md @@ -20,5 +20,5 @@ The {{stack-security-features}} use eight *internal* users (`_system`, `_xpack`, These users are only used by requests that originate from within the cluster. For this reason, they cannot be used to authenticate against the API and there is no password to manage or reset. -From time-to-time you may find a reference to one of these users inside your logs, including [audit logs](../../monitor/logging-configuration/enabling-audit-logs.md). +From time-to-time you may find a reference to one of these users inside your logs, including [audit logs](../../security/logging-configuration/enabling-audit-logs.md). diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md index 18421981a2..9050b51952 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md @@ -166,7 +166,7 @@ For detailed information of available realm settings, see [Kerberos realm settin ::::{tab-item} ECH and ECE -1. Create a [custom bundle](elasticsearch://reference/elasticsearch-plugins/cloud-enterprise/ece-add-plugins.md) that contains your `krb5.conf` and `keytab` files, and add it to your cluster. +1. 
Create a [custom bundle](elasticsearch://reference/elasticsearch-plugins/plugin-management.md) that contains your `krb5.conf` and `keytab` files, and add it to your cluster. :::{tip} You should use these exact filenames for {{ecloud}} to recognize the file in the bundle. diff --git a/docset.yml b/docset.yml index 6d80ca5ac7..cd506a293a 100644 --- a/docset.yml +++ b/docset.yml @@ -11,6 +11,7 @@ cross_links: - apm-agent-dotnet - apm-agent-go - apm-agent-ios + - apm-agent-rum-js - apm-aws-lambda - apm-k8s-attacher - asciidocalypse @@ -63,146 +64,15 @@ toc: - hidden: 404.md subs: - ref: "https://www.elastic.co/guide/en/elasticsearch/reference/current" - ref-bare: "https://www.elastic.co/guide/en/elasticsearch/reference" - ref-8x: "https://www.elastic.co/guide/en/elasticsearch/reference/8.1" - ref-80: "https://www.elastic.co/guide/en/elasticsearch/reference/8.0" - ref-7x: "https://www.elastic.co/guide/en/elasticsearch/reference/7.17" - ref-70: "https://www.elastic.co/guide/en/elasticsearch/reference/7.0" - ref-60: "https://www.elastic.co/guide/en/elasticsearch/reference/6.0" - ref-64: "https://www.elastic.co/guide/en/elasticsearch/reference/6.4" - xpack-ref: "https://www.elastic.co/guide/en/x-pack/6.2" - logstash-ref: "https://www.elastic.co/guide/en/logstash/current" - kibana-ref: "https://www.elastic.co/guide/en/kibana/current" - kibana-ref-all: "https://www.elastic.co/guide/en/kibana" - beats-ref-root: "https://www.elastic.co/guide/en/beats" - beats-ref: "https://www.elastic.co/guide/en/beats/libbeat/current" - beats-ref-60: "https://www.elastic.co/guide/en/beats/libbeat/6.0" - beats-ref-63: "https://www.elastic.co/guide/en/beats/libbeat/6.3" - beats-devguide: "https://www.elastic.co/guide/en/beats/devguide/current" - auditbeat-ref: "https://www.elastic.co/guide/en/beats/auditbeat/current" - packetbeat-ref: "https://www.elastic.co/guide/en/beats/packetbeat/current" - metricbeat-ref: "https://www.elastic.co/guide/en/beats/metricbeat/current" filebeat-ref: "https://www.elastic.co/guide/en/beats/filebeat/current" - functionbeat-ref: "https://www.elastic.co/guide/en/beats/functionbeat/current" - winlogbeat-ref: "https://www.elastic.co/guide/en/beats/winlogbeat/current" - heartbeat-ref: "https://www.elastic.co/guide/en/beats/heartbeat/current" - journalbeat-ref: "https://www.elastic.co/guide/en/beats/journalbeat/current" - ingest-guide: "https://www.elastic.co/guide/en/ingest/current" - fleet-guide: "https://www.elastic.co/guide/en/fleet/current" - apm-guide-ref: "https://www.elastic.co/guide/en/apm/guide/current" - apm-guide-7x: "https://www.elastic.co/guide/en/apm/guide/7.17" - apm-app-ref: "https://www.elastic.co/guide/en/kibana/current" - apm-agents-ref: "https://www.elastic.co/guide/en/apm/agent" - apm-android-ref: "https://www.elastic.co/guide/en/apm/agent/android/current" - apm-py-ref: "https://www.elastic.co/guide/en/apm/agent/python/current" - apm-py-ref-3x: "https://www.elastic.co/guide/en/apm/agent/python/3.x" - apm-node-ref-index: "https://www.elastic.co/guide/en/apm/agent/nodejs" - apm-node-ref: "https://www.elastic.co/guide/en/apm/agent/nodejs/current" - apm-node-ref-1x: "https://www.elastic.co/guide/en/apm/agent/nodejs/1.x" - apm-rum-ref: "https://www.elastic.co/guide/en/apm/agent/rum-js/current" - apm-ruby-ref: "https://www.elastic.co/guide/en/apm/agent/ruby/current" - apm-java-ref: "https://www.elastic.co/guide/en/apm/agent/java/current" - apm-go-ref: "https://www.elastic.co/guide/en/apm/agent/go/current" - apm-dotnet-ref: 
"https://www.elastic.co/guide/en/apm/agent/dotnet/current" - apm-php-ref: "https://www.elastic.co/guide/en/apm/agent/php/current" - apm-ios-ref: "https://www.elastic.co/guide/en/apm/agent/swift/current" - apm-lambda-ref: "https://www.elastic.co/guide/en/apm/lambda/current" - apm-attacher-ref: "https://www.elastic.co/guide/en/apm/attacher/current" - docker-logging-ref: "https://www.elastic.co/guide/en/beats/loggingplugin/current" - esf-ref: "https://www.elastic.co/guide/en/esf/current" - kinesis-firehose-ref: "https://www.elastic.co/guide/en/kinesis/{{kinesis_version}}" - estc-welcome-current: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current" - estc-welcome: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current" - estc-welcome-all: "https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions" - hadoop-ref: "https://www.elastic.co/guide/en/elasticsearch/hadoop/current" - stack-ref: "https://www.elastic.co/guide/en/elastic-stack/current" - stack-ref-67: "https://www.elastic.co/guide/en/elastic-stack/6.7" - stack-ref-68: "https://www.elastic.co/guide/en/elastic-stack/6.8" - stack-ref-70: "https://www.elastic.co/guide/en/elastic-stack/7.0" - stack-ref-80: "https://www.elastic.co/guide/en/elastic-stack/8.0" - stack-ov: "https://www.elastic.co/guide/en/elastic-stack-overview/current" - stack-gs: "https://www.elastic.co/guide/en/elastic-stack-get-started/current" - stack-gs-current: "https://www.elastic.co/guide/en/elastic-stack-get-started/current" - javaclient: "https://www.elastic.co/guide/en/elasticsearch/client/java-api/current" - java-api-client: "https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current" - java-rest: "https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current" - jsclient: "https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current" - jsclient-current: "https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current" - es-ruby-client: "https://www.elastic.co/guide/en/elasticsearch/client/ruby-api/current" - es-dotnet-client: "https://www.elastic.co/guide/en/elasticsearch/client/net-api/current" - es-php-client: "https://www.elastic.co/guide/en/elasticsearch/client/php-api/current" - es-python-client: "https://www.elastic.co/guide/en/elasticsearch/client/python-api/current" defguide: "https://www.elastic.co/guide/en/elasticsearch/guide/2.x" - painless: "https://www.elastic.co/guide/en/elasticsearch/painless/current" - plugins: "https://www.elastic.co/guide/en/elasticsearch/plugins/current" - plugins-8x: "https://www.elastic.co/guide/en/elasticsearch/plugins/8.1" - plugins-7x: "https://www.elastic.co/guide/en/elasticsearch/plugins/7.17" - plugins-6x: "https://www.elastic.co/guide/en/elasticsearch/plugins/6.8" - glossary: "https://www.elastic.co/guide/en/elastic-stack-glossary/current" - upgrade_guide: "https://www.elastic.co/products/upgrade_guide" - blog-ref: "https://www.elastic.co/blog/" - curator-ref: "https://www.elastic.co/guide/en/elasticsearch/client/curator/current" - curator-ref-current: "https://www.elastic.co/guide/en/elasticsearch/client/curator/current" - metrics-ref: "https://www.elastic.co/guide/en/metrics/current" - metrics-guide: "https://www.elastic.co/guide/en/metrics/guide/current" - logs-ref: "https://www.elastic.co/guide/en/logs/current" - logs-guide: "https://www.elastic.co/guide/en/logs/guide/current" - uptime-guide: "https://www.elastic.co/guide/en/uptime/current" - 
observability-guide: "https://www.elastic.co/guide/en/observability/current" - observability-guide-all: "https://www.elastic.co/guide/en/observability" - siem-guide: "https://www.elastic.co/guide/en/siem/guide/current" - security-guide: "https://www.elastic.co/guide/en/security/current" security-guide-all: "https://www.elastic.co/guide/en/security" - endpoint-guide: "https://www.elastic.co/guide/en/endpoint/current" sql-odbc: "https://www.elastic.co/guide/en/elasticsearch/sql-odbc/current" - ecs-ref: "https://www.elastic.co/guide/en/ecs/current" - ecs-logging-ref: "https://www.elastic.co/guide/en/ecs-logging/overview/current" - ecs-logging-go-logrus-ref: "https://www.elastic.co/guide/en/ecs-logging/go-logrus/current" - ecs-logging-go-zap-ref: "https://www.elastic.co/guide/en/ecs-logging/go-zap/current" - ecs-logging-go-zerolog-ref: "https://www.elastic.co/guide/en/ecs-logging/go-zap/current" - ecs-logging-java-ref: "https://www.elastic.co/guide/en/ecs-logging/java/current" - ecs-logging-dotnet-ref: "https://www.elastic.co/guide/en/ecs-logging/dotnet/current" - ecs-logging-nodejs-ref: "https://www.elastic.co/guide/en/ecs-logging/nodejs/current" - ecs-logging-php-ref: "https://www.elastic.co/guide/en/ecs-logging/php/current" - ecs-logging-python-ref: "https://www.elastic.co/guide/en/ecs-logging/python/current" - ecs-logging-ruby-ref: "https://www.elastic.co/guide/en/ecs-logging/ruby/current" ml-docs: "https://www.elastic.co/guide/en/machine-learning/current" eland-docs: "https://www.elastic.co/guide/en/elasticsearch/client/eland/current" - eql-ref: "https://eql.readthedocs.io/en/latest/query-guide" subscriptions: "https://www.elastic.co/subscriptions" extendtrial: "https://www.elastic.co/trialextension" - wikipedia: "https://en.wikipedia.org/wiki" - forum: "https://discuss.elastic.co/" - xpack-forum: "https://discuss.elastic.co/c/50-x-pack" - security-forum: "https://discuss.elastic.co/c/x-pack/shield" - watcher-forum: "https://discuss.elastic.co/c/x-pack/watcher" - monitoring-forum: "https://discuss.elastic.co/c/x-pack/marvel" - graph-forum: "https://discuss.elastic.co/c/x-pack/graph" - apm-forum: "https://discuss.elastic.co/c/apm" - enterprise-search-ref: "https://www.elastic.co/guide/en/enterprise-search/current" - app-search-ref: "https://www.elastic.co/guide/en/app-search/current" - workplace-search-ref: "https://www.elastic.co/guide/en/workplace-search/current" - enterprise-search-node-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/enterprise-search-node/current" - enterprise-search-php-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/php/current" - enterprise-search-python-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/python/current" - enterprise-search-ruby-ref: "https://www.elastic.co/guide/en/enterprise-search-clients/ruby/current" - elastic-maps-service: "https://maps.elastic.co" - integrations-docs: "https://docs.elastic.co/en/integrations" - integrations-devguide: "https://www.elastic.co/guide/en/integrations-developer/current" - time-units: "https://www.elastic.co/guide/en/elasticsearch/reference/current/api-conventions.html#time-units" - byte-units: "https://www.elastic.co/guide/en/elasticsearch/reference/current/api-conventions.html#byte-units" - apm-py-ref-v: "https://www.elastic.co/guide/en/apm/agent/python/current" - apm-node-ref-v: "https://www.elastic.co/guide/en/apm/agent/nodejs/current" - apm-rum-ref-v: "https://www.elastic.co/guide/en/apm/agent/rum-js/current" - apm-ruby-ref-v: 
"https://www.elastic.co/guide/en/apm/agent/ruby/current" - apm-java-ref-v: "https://www.elastic.co/guide/en/apm/agent/java/current" - apm-go-ref-v: "https://www.elastic.co/guide/en/apm/agent/go/current" - apm-ios-ref-v: "https://www.elastic.co/guide/en/apm/agent/swift/current" - apm-dotnet-ref-v: "https://www.elastic.co/guide/en/apm/agent/dotnet/current" - apm-php-ref-v: "https://www.elastic.co/guide/en/apm/agent/php/current" ecloud: "Elastic Cloud" - esf: "Elastic Serverless Forwarder" ess: "Elasticsearch Service" ech: "Elastic Cloud Hosted" ece: "Elastic Cloud Enterprise" @@ -221,8 +91,6 @@ subs: ess-product: "https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body" ess-console: "https://cloud.elastic.co?page=docs&placement=docs-body" ess-deployments: "https://cloud.elastic.co/deployments?page=docs&placement=docs-body" - ece-ref: "https://www.elastic.co/guide/en/cloud-enterprise/current" - eck-ref: "https://www.elastic.co/guide/en/cloud-on-k8s/current" ess-leadin: "You can run Elasticsearch on your own hardware or use our hosted Elasticsearch Service that is available on AWS, GCP, and Azure. https://cloud.elastic.co/registration{ess-utm-params}[Try the Elasticsearch Service for free]." ess-leadin-short: "Our hosted Elasticsearch Service is available on AWS, GCP, and Azure, and you can https://cloud.elastic.co/registration{ess-utm-params}[try it for free]." ess-icon: "image:https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg[link=\"https://cloud.elastic.co/registration{ess-utm-params}\", title=\"Supported on Elasticsearch Service\"]" @@ -234,7 +102,6 @@ subs: api-ece: "https://www.elastic.co/docs/api/doc/cloud-enterprise" api-kibana-serverless: "https://www.elastic.co/docs/api/doc/serverless" es-feature-flag: "This feature is in development and not yet available for use. This documentation is provided for informational purposes only." 
- es-ref-dir: "'{{elasticsearch-root}}/docs/reference'" apm-app: "APM app" uptime-app: "Uptime app" synthetics-app: "Synthetics app" @@ -528,16 +395,6 @@ subs: apm-issue: "https://github.com/elastic/apm-server/issues/" apm-pull: "https://github.com/elastic/apm-server/pull/" kibana-blob: "https://github.com/elastic/kibana/blob/current/" - apm-get-started-ref: "https://www.elastic.co/guide/en/apm/get-started/current" - apm-server-ref: "https://www.elastic.co/guide/en/apm/server/current" - apm-server-ref-v: "https://www.elastic.co/guide/en/apm/server/current" - apm-server-ref-m: "https://www.elastic.co/guide/en/apm/server/master" - apm-server-ref-62: "https://www.elastic.co/guide/en/apm/server/6.2" - apm-server-ref-64: "https://www.elastic.co/guide/en/apm/server/6.4" - apm-server-ref-70: "https://www.elastic.co/guide/en/apm/server/7.0" - apm-overview-ref-v: "https://www.elastic.co/guide/en/apm/get-started/current" - apm-overview-ref-70: "https://www.elastic.co/guide/en/apm/get-started/7.0" - apm-overview-ref-m: "https://www.elastic.co/guide/en/apm/get-started/master" infra-guide: "https://www.elastic.co/guide/en/infrastructure/guide/current" a-data-source: "a data view" icon-bug: "pass:[]" diff --git a/explore-analyze/elastic-inference/inference-api/alibabacloud-ai-search-inference-integration.md b/explore-analyze/elastic-inference/inference-api/alibabacloud-ai-search-inference-integration.md index 9e6602661a..080d8fb202 100644 --- a/explore-analyze/elastic-inference/inference-api/alibabacloud-ai-search-inference-integration.md +++ b/explore-analyze/elastic-inference/inference-api/alibabacloud-ai-search-inference-integration.md @@ -76,7 +76,11 @@ Creates an {{infer}} endpoint to perform an {{infer}} task with the `alibabaclou * `ops-qwen-turbo` * `qwen-turbo` * `qwen-plus` - * `qwen-max` ÷ `qwen-max-longcontext` + * `qwen-max` + * `deepseek-r1` + * `deepseek-v3` + * `deepseek-r1-distill-qwen-7b` + * `deepseek-r1-distill-qwen-14b` For the supported `completion` service_ids, refer to the [documentation](https://help.aliyun.com/zh/open-search/search-platform/developer-reference/text-generation-api-details). diff --git a/explore-analyze/elastic-inference/inference-api/huggingface-inference-integration.md b/explore-analyze/elastic-inference/inference-api/huggingface-inference-integration.md index ced0ccd53f..ed7aeb215f 100644 --- a/explore-analyze/elastic-inference/inference-api/huggingface-inference-integration.md +++ b/explore-analyze/elastic-inference/inference-api/huggingface-inference-integration.md @@ -15,12 +15,10 @@ Refer to the [{{infer-cap}} APIs](https://www.elastic.co/docs/api/doc/elasticsea Creates an {{infer}} endpoint to perform an {{infer}} task with the `hugging_face` service. - ## {{api-request-title}} [infer-service-hugging-face-api-request] `PUT /_inference//` - ## {{api-path-parms-title}} [infer-service-hugging-face-api-path-params] `` @@ -33,8 +31,6 @@ Creates an {{infer}} endpoint to perform an {{infer}} task with the `hugging_fac * `text_embedding`. - - ## {{api-request-body-title}} [infer-service-hugging-face-api-request-body] `chunking_settings` @@ -52,7 +48,6 @@ Creates an {{infer}} endpoint to perform an {{infer}} task with the `hugging_fac `strategy` : (Optional, string) Specifies the chunking strategy. It could be either `sentence` or `word`. - `service` : (Required, string) The type of service supported for the specified task type. In this case, `hugging_face`. 
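For illustration, a request that creates a `text_embedding` endpoint with this service might look like the following sketch; the endpoint name, access token, and endpoint URL are placeholders for your own values:

```console
PUT _inference/text_embedding/hugging-face-embeddings
{
  "service": "hugging_face",
  "service_settings": {
    "api_key": "<access-token>",
    "url": "<hosted-endpoint-url>"
  }
}
```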
@@ -63,7 +58,6 @@ Creates an {{infer}} endpoint to perform an {{infer}} task with the `hugging_fac `api_key` : (Required, string) A valid access token of your Hugging Face account. You can find your Hugging Face access tokens or you can create a new one [on the settings page](https://huggingface.co/settings/tokens). - ::::{important} You need to provide the API key only once, during the {{infer}} model creation. The [Get {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-get) does not retrieve your API key. After creating the {{infer}} model, you cannot change the associated API key. If you want to use a different API key, delete the {{infer}} model and recreate it with the same name and the updated API key. :::: diff --git a/explore-analyze/elastic-inference/inference-api/openai-inference-integration.md b/explore-analyze/elastic-inference/inference-api/openai-inference-integration.md deleted file mode 100644 index 712922678d..0000000000 --- a/explore-analyze/elastic-inference/inference-api/openai-inference-integration.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -navigation_title: "OpenAI" -mapped_pages: - - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-openai.html -applies_to: - stack: - serverless: ---- - -# OpenAI inference integration [infer-service-openai] - -:::{tip} Inference API reference -Refer to the [{{infer-cap}} APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference) for further information. -::: - -Creates an {{infer}} endpoint to perform an {{infer}} task with the `openai` service. - - -## {{api-request-title}} [infer-service-openai-api-request] - -`PUT /_inference//` - - -## {{api-path-parms-title}} [infer-service-openai-api-path-params] - -`` -: (Required, string) The unique identifier of the {{infer}} endpoint. - -`` -: (Required, string) The type of the {{infer}} task that the model will perform. - - Available task types: - - * `chat_completion`, - * `completion`, - * `text_embedding`. - - -::::{note} -The `chat_completion` task type only supports streaming and only through the `_stream` API. - -For more information on how to use the `chat_completion` task type, please refer to the [chat completion documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/chat-completion-inference-api.html). - -:::: - - - -## {{api-request-body-title}} [infer-service-openai-api-request-body] - -`chunking_settings` -: (Optional, object) Chunking configuration object. Refer to [Configuring chunking](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference) to learn more about chunking. - - `max_chunk_size` - : (Optional, integer) Specifies the maximum size of a chunk in words. Defaults to `250`. This value cannot be higher than `300` or lower than `20` (for `sentence` strategy) or `10` (for `word` strategy). - - `overlap` - : (Optional, integer) Only for `word` chunking strategy. Specifies the number of overlapping words for chunks. Defaults to `100`. This value cannot be higher than the half of `max_chunk_size`. - - `sentence_overlap` - : (Optional, integer) Only for `sentence` chunking strategy. Specifies the numnber of overlapping sentences for chunks. It can be either `1` or `0`. Defaults to `1`. - - `strategy` - : (Optional, string) Specifies the chunking strategy. It could be either `sentence` or `word`. - - -`service` -: (Required, string) The type of service supported for the specified task type. In this case, `openai`. 
- -`service_settings` -: (Required, object) Settings used to install the {{infer}} model. - - These settings are specific to the `openai` service. - - `api_key` - : (Required, string) A valid API key of your OpenAI account. You can find your OpenAI API keys in your OpenAI account under the [API keys section](https://platform.openai.com/api-keys). - - ::::{important} - You need to provide the API key only once, during the {{infer}} model creation. The [Get {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-get) does not retrieve your API key. After creating the {{infer}} model, you cannot change the associated API key. If you want to use a different API key, delete the {{infer}} model and recreate it with the same name and the updated API key. - :::: - - - `dimensions` - : (Optional, integer) The number of dimensions the resulting output embeddings should have. Only supported in `text-embedding-3` and later models. If not set the OpenAI defined default for the model is used. - - `model_id` - : (Required, string) The name of the model to use for the {{infer}} task. Refer to the [OpenAI documentation](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) for the list of available text embedding models. - - `organization_id` - : (Optional, string) The unique identifier of your organization. You can find the Organization ID in your OpenAI account under [**Settings** > **Organizations**](https://platform.openai.com/account/organization). - - `url` - : (Optional, string) The URL endpoint to use for the requests. Can be changed for testing purposes. Defaults to `https://api.openai.com/v1/embeddings`. - - `rate_limit` - : (Optional, object) The `openai` service sets a default number of requests allowed per minute depending on the task type. For `text_embedding` it is set to `3000`. For `completion` it is set to `500`. This helps to minimize the number of rate limit errors returned from OpenAI. To modify this, set the `requests_per_minute` setting of this object in your service settings: - - ```text - "rate_limit": { - "requests_per_minute": <> - } - ``` - - More information about the rate limits for OpenAI can be found in your [Account limits](https://platform.openai.com/account/limits). - - -`task_settings` -: (Optional, object) Settings to configure the {{infer}} task. These settings are specific to the `` you specified. - - ::::{dropdown} `task_settings` for the `completion` task type - `user` - : (Optional, string) Specifies the user issuing the request, which can be used for abuse detection. - - :::: - - - ::::{dropdown} `task_settings` for the `text_embedding` task type - `user` - : (optional, string) Specifies the user issuing the request, which can be used for abuse detection. - - :::: - - - -## OpenAI service example [inference-example-openai] - -The following example shows how to create an {{infer}} endpoint called `openai-embeddings` to perform a `text_embedding` task type. The embeddings created by requests to this endpoint will have 128 dimensions. - -```console -PUT _inference/text_embedding/openai-embeddings -{ - "service": "openai", - "service_settings": { - "api_key": "", - "model_id": "text-embedding-3-small", - "dimensions": 128 - } -} -``` - -The next example shows how to create an {{infer}} endpoint called `openai-completion` to perform a `completion` task type. 
- -```console -PUT _inference/completion/openai-completion -{ - "service": "openai", - "service_settings": { - "api_key": "", - "model_id": "gpt-3.5-turbo" - } -} -``` diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-inference.md b/explore-analyze/machine-learning/nlp/ml-nlp-inference.md index a4a325318e..e6757a4f49 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-inference.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-inference.md @@ -117,7 +117,7 @@ PUT ner-test ``` ::::{tip} -To use the `annotated_text` data type in this example, you must install the [mapper annotated text plugin](elasticsearch://reference/elasticsearch-plugins/mapper-annotated-text.md). For more installation details, refer to [Add plugins provided with {{ech}}](elasticsearch://reference/elasticsearch-plugins/cloud/ec-adding-elastic-plugins.md). +To use the `annotated_text` data type in this example, you must install the [mapper annotated text plugin](elasticsearch://reference/elasticsearch-plugins/mapper-annotated-text.md). For more installation details, refer to [Add plugins provided with {{ech}}](elasticsearch://reference/elasticsearch-plugins/plugin-management.md). :::: You can then use the new pipeline to index some documents. For example, use a bulk indexing request with the `pipeline` query parameter for your NER pipeline: diff --git a/explore-analyze/toc.yml b/explore-analyze/toc.yml index 41abc68da3..2c3e9340ff 100644 --- a/explore-analyze/toc.yml +++ b/explore-analyze/toc.yml @@ -136,7 +136,6 @@ toc: - file: elastic-inference/inference-api/huggingface-inference-integration.md - file: elastic-inference/inference-api/jinaai-inference-integration.md - file: elastic-inference/inference-api/mistral-inference-integration.md - - file: elastic-inference/inference-api/openai-inference-integration.md - file: machine-learning.md children: - file: machine-learning/setting-up-machine-learning.md diff --git a/images/security-sec-admin-user.png b/images/security-sec-admin-user.png deleted file mode 100644 index 18a8c87a5e..0000000000 Binary files a/images/security-sec-admin-user.png and /dev/null differ diff --git a/manage-data/data-store/data-streams/logs-data-stream.md b/manage-data/data-store/data-streams/logs-data-stream.md index cccf0513f5..5d152030be 100644 --- a/manage-data/data-store/data-streams/logs-data-stream.md +++ b/manage-data/data-store/data-streams/logs-data-stream.md @@ -96,7 +96,7 @@ On existing data streams, `logsdb` mode is applied on [rollover](../data-streams ### Optimized routing on sort fields [logsdb-sort-routing] -To reduce the storage footprint of `logsdb` indexes, you can enable routing optimizations. A routing optimization uses the fields in the sort configuration (except for `@timestamp`) to route documents to shards. +If you have the required [subscription](https://www.elastic.co/subscriptions), you can enable routing optimizations to reduce the storage footprint of `logsdb` indexes. A routing optimization uses the fields in the sort configuration (except for `@timestamp`) to route documents to shards. In benchmarks, routing optimizations reduced storage requirements by 20% compared to the default `logsdb` configuration, with a negligible penalty to ingestion performance (1-4%). Routing optimizations can benefit data streams that are expected to grow substantially over time. Exact results depend on the sort configuration and the nature of the logged data. 
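As a sketch, enabling the optimization through an index template could look like the following; the template name and index pattern are placeholders, and it assumes the `index.logsdb.route_on_sort_fields` index setting is available at your subscription level:

```console
PUT _index_template/logs-myapp-template
{
  "index_patterns": ["logs-myapp-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.mode": "logsdb",
      "index.logsdb.route_on_sort_fields": true
    }
  }
}
```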
diff --git a/manage-data/data-store/templates.md b/manage-data/data-store/templates.md
index 1f57cf7754..d06b40eb33 100644
--- a/manage-data/data-store/templates.md
+++ b/manage-data/data-store/templates.md
@@ -45,6 +45,7 @@ If you use {{fleet}} or {{agent}}, assign your index templates a priority lower
 * Use a non-overlapping index pattern.
 * Assign templates with an overlapping pattern a `priority` higher than `500`. For example, if you don’t use {{fleet}} or {{agent}} and want to create a template for the `logs-*` index pattern, assign your template a priority of `500`. This ensures your template is applied instead of the built-in template for `logs-*-*`.
 * To avoid naming collisions with built-in and Fleet-managed index templates, avoid using `@` as part of the name of your own index templates.
+* Beginning in {{stack}} version 9.1, {{fleet}} uses indices named `fleet-synced-integrations*` for an internal feature. To prevent collisions with these built-in indices, avoid using this name for your own indices.
 ::::
diff --git a/manage-data/ingest/ingesting-data-from-applications.md b/manage-data/ingest/ingesting-data-from-applications.md
index 2025e0a9be..39e58486d5 100644
--- a/manage-data/ingest/ingesting-data-from-applications.md
+++ b/manage-data/ingest/ingesting-data-from-applications.md
@@ -2,22 +2,12 @@ mapped_urls:
   - https://www.elastic.co/guide/en/cloud/current/ec-ingest-guides.html
   - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-ingest-guides.html
+applies_to:
+  stack: ga
 ---
 # Ingesting data from applications
-% What needs to be done: Refine
-
-% GitHub issue: docs-projects#387
-
-% Scope notes: The ESS and ECE versions should be identical, but they'll still need to be combined.
-
-% Use migrated content from existing pages that map to this page:
-
-% - [ ] ./raw-migrated-files/cloud/cloud/ec-ingest-guides.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-ingest-guides.md
-
-
 The following tutorials demonstrate how you can use the Elasticsearch language clients to ingest data from an application into a deployment in {{ech}} or {{ece}}.
 [Ingest data with Node.js](ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md)
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md
index 34bbb33260..43ab84c861 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md
@@ -2,61 +2,12 @@ mapped_urls:
   - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-search-use-cases-beats-logstash.html
   - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-search-use-cases-beats-logstash.html
+applies_to:
+  stack: ga
 ---
 # Ingest data from Beats with Logstash as a proxy
-% What needs to be done: Refine
-
-% Scope notes: Merge ESS and ECE versions (should be pretty much identical)
-
-% Use migrated content from existing pages that map to this page:
-
-% - [ ] ./raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-beats-logstash.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md
-
-% Internal links rely on the following IDs being on this page (e.g.
as a heading ID, paragraph ID, etc): - -$$$ec-beats-logstash-connect-securely$$$ - -$$$ec-beats-logstash-elasticsearch$$$ - -$$$ec-beats-logstash-filebeat$$$ - -$$$ec-beats-logstash-listen$$$ - -$$$ec-beats-logstash-logstash$$$ - -$$$ec-beats-logstash-metricbeat-send$$$ - -$$$ec-beats-logstash-metricbeat$$$ - -$$$ec-beats-logstash-stdout$$$ - -$$$ec-beats-logstash-trial$$$ - -$$$ec-beats-logstash-view-kibana$$$ - -$$$ece-beats-logstash-connect-securely$$$ - -$$$ece-beats-logstash-deployment$$$ - -$$$ece-beats-logstash-elasticsearch$$$ - -$$$ece-beats-logstash-filebeat$$$ - -$$$ece-beats-logstash-listen$$$ - -$$$ece-beats-logstash-logstash$$$ - -$$$ece-beats-logstash-metricbeat-send$$$ - -$$$ece-beats-logstash-metricbeat$$$ - -$$$ece-beats-logstash-stdout$$$ - -$$$ece-beats-logstash-view-kibana$$$ - This guide explains how to ingest data from Filebeat and Metricbeat to {{ls}} as an intermediary, and then send that data to your {{ech}} or {{ece}} deployment. Using {{ls}} as a proxy limits your Elastic stack traffic through a single, external-facing firewall exception or rule. Consider the following features of this type of setup: * You can send multiple instances of Beats data through your local network’s demilitarized zone (DMZ) to {{ls}}. {{ls}} then acts as a proxy through your firewall to send the Beats data to your deployment, as shown in the following diagram: diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md index 0dd8b20802..b461ac67af 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md @@ -2,44 +2,12 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-search-use-cases-db-logstash.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-search-use-cases-db-logstash.html +applies_to: + stack: ga --- # Ingest data from a relational database -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$ec-db-logstash-connect-securely$$$ - -$$$ec-db-logstash-database-structure$$$ - -$$$ec-db-logstash-database$$$ - -$$$ec-db-logstash-driver$$$ - -$$$ec-db-logstash-output$$$ - -$$$ec-db-logstash-pipeline$$$ - -$$$ec-db-logstash-prerequisites$$$ - -$$$ec-db-logstash-trial$$$ - -$$$ece-db-logstash-connect-securely$$$ - -$$$ece-db-logstash-database-structure$$$ - -$$$ece-db-logstash-database$$$ - -$$$ece-db-logstash-deployment$$$ - -$$$ece-db-logstash-driver$$$ - -$$$ece-db-logstash-output$$$ - -$$$ece-db-logstash-pipeline$$$ - -$$$ece-db-logstash-prerequisites$$$ - This guide explains how to ingest data from a relational database into {{ecloud}} through [{{ls}}](logstash://reference/index.md), using the Logstash [JDBC input plugin](logstash://reference/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an {{ech}} or {{ece}} deployment. The code and methods presented here have been tested with MySQL. They should work with other relational databases. 
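A minimal pipeline for this pattern might look like the following sketch; the driver path, connection details, table, and tracking column are placeholders for your own database:

```txt
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-j.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/es_db"
    jdbc_user => "elastic"
    jdbc_password => "changeme"
    schedule => "*/5 * * * *"
    statement => "SELECT * FROM es_table WHERE modification_time > :sql_last_value"
    use_column_value => true
    tracking_column => "modification_time"
    tracking_column_type => "timestamp"
  }
}
output {
  elasticsearch {
    cloud_id => "<deployment-name>:<cloud-id>"
    api_key => "<id>:<api-key>"
    index => "rdbms_idx"
  }
}
```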
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md index 6516553f50..7c1ed9b29d 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md @@ -2,6 +2,8 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-node-js.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-node-js.html +applies_to: + stack: ga --- # Ingest data with Node.js @@ -290,7 +292,7 @@ const client = new Client({ }) ``` -Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). ### Best practices [ec_best_practices] diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md index 6d9024495e..5cfa8bd831 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md @@ -2,6 +2,8 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-python.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-python.html +applies_to: + stack: ga --- # Ingest data with Python @@ -351,7 +353,7 @@ es = Elasticsearch( ) ``` -Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. 
If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](elasticsearch-py://reference/examples.md). diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md index c8d617eb23..463a384565 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md @@ -2,53 +2,12 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-search-use-cases-node-logs.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-search-use-cases-node-logs.html +applies_to: + stack: ga --- # Ingest logs from a Node.js web application using Filebeat -% What needs to be done: Refine - -% Scope notes: Merge ESS and ECE versions (should be pretty much identical) - -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-node-logs.md -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$ec-node-logs-connect-securely$$$ - -$$$ec-node-logs-create-request-script$$$ - -$$$ec-node-logs-create-server-script$$$ - -$$$ec-node-logs-filebeat$$$ - -$$$ec-node-logs-prerequisites$$$ - -$$$ec-node-logs-send-ess$$$ - -$$$ec-node-logs-trial$$$ - -$$$ec-node-logs-view-kibana$$$ - -$$$ece-node-logs-connect-securely$$$ - -$$$ece-node-logs-create-request-script$$$ - -$$$ece-node-logs-create-server-script$$$ - -$$$ece-node-logs-deployment$$$ - -$$$ece-node-logs-filebeat$$$ - -$$$ece-node-logs-prerequisites$$$ - -$$$ece-node-logs-send-ess$$$ - -$$$ece-node-logs-view-kibana$$$ - This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an {{ech}} or {{ece}} deployment. 
You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](ecs-logging://reference/intro.md#_get_started).

*Time required: 1.5 hours*

diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
index c1d846580b..4e866b95ba 100644
--- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
+++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md
@@ -2,37 +2,12 @@ mapped_urls:
  - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-search-use-cases-python-logs.html
  - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-getting-started-search-use-cases-python-logs.html
+applies_to:
+  stack: ga
---

# Ingest logs from a Python application using Filebeat

-% What needs to be done: Refine
-
-% Scope notes: Merge ESS and ECE versions (should be pretty much identical)
-
-% Use migrated content from existing pages that map to this page:
-
-% - [ ] ./raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$ec-python-logs-create-script$$$
-
-$$$ec-python-logs-filebeat$$$
-
-$$$ec-python-logs-send-ess$$$
-
-$$$ec-python-logs-view-kibana$$$
-
-$$$ece-python-logs-create-script$$$
-
-$$$ece-python-logs-filebeat$$$
-
-$$$ece-python-logs-send-ess$$$
-
-$$$ece-python-logs-view-kibana$$$
-
This guide demonstrates how to ingest logs from a Python application and deliver them securely into an {{ech}} deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](ecs-logging://reference/intro.md).

*Time required: 1 hour*

diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-node-js.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-node-js.md
deleted file mode 100644
index 246dba52e0..0000000000
--- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-node-js.md
+++ /dev/null
@@ -1,298 +0,0 @@
-# Ingest data with Node.js on Elastic Cloud Enterprise [ece-getting-started-node-js]
-
-This guide tells you how to get started with:
-
-* Securely connecting to Elastic Cloud Enterprise with Node.js
-* Ingesting data into your deployment from your application
-* Searching and modifying your data on Elastic Cloud Enterprise
-
-If you are a Node.js application programmer who is new to the Elastic Stack, this content helps you get started more easily.
-
-*Time required: 45 minutes*
-
-
-## Create a deployment [ece_create_a_deployment]
-
-1. 
Log into the Elastic Cloud Enterprise admin console.
-2. Select **Create deployment**.
-3. Give your deployment a name. You can leave all other settings at their default values.
-4. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on.
-5. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**.
-
-
-## Set up your application [ece_set_up_your_application]
-
-These steps are applicable to your existing application. If you don’t have one, use the example included here to create one.
-
-
-### Create the npm `package.json` file [ece_create_the_npm_package_json_file]
-
-```sh
-npm init
-```
-
-
-### Get the `elasticsearch` and `config` packages [ece_get_the_elasticsearch_and_config_packages]
-
-```sh
-npm install @elastic/elasticsearch
-npm install config
-```
-
-::::{note}
-The `config` package is not required if you have your own method to keep your configuration details private.
-::::
-
-
-
-### Create a configuration file [ece_create_a_configuration_file]
-
-```sh
-mkdir config
-vi config/default.json
-```
-
-The example here shows what the `config` package expects. You need to update `config/default.json` with your deployment details in the following sections:
-
-```json
-{
-  "elastic": {
-    "cloudID": "DEPLOYMENT_NAME:CLOUD_ID_DETAILS", <1>
-    "username": "elastic",
-    "password": "LONGPASSWORD",
-    "ca": "/path/to/your/elastic-ece-ca-cert.pem" <2>
-  }
-}
-```
-
-1. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
-2. This line is only used when you have a self-signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM-formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. Retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this example, the file is named `elastic-ece-ca-cert.pem`.
-
-
-
-## About connecting securely [ece_about_connecting_securely]
-
-When connecting to Elastic Cloud Enterprise, use a Cloud ID to specify the connection details. You must pass the Cloud ID that is found in {{kib}} or the cloud console.
-
-To connect to, stream data to, and issue queries with Elastic Cloud Enterprise, you need to think about authentication. Two authentication mechanisms are supported: *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments.
-
-
-### Basic authentication [ece_basic_authentication]
-
-For basic authentication, use the same deployment credentials (`username` and `password` parameters) and Cloud ID you copied down earlier when you created your deployment. (If you did not save the password, you can [reset the password](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).)
-
-
-## Create a sample application [ece_create_a_sample_application]
-
-The sample application connects to {{es}}, creates an index, inserts some records, performs a search, and updates a record. 
-
-Read the configuration created earlier, and connect to {{es}}:
-
-```javascript
-const { Client } = require('@elastic/elasticsearch')
-const config = require('config');
-const elasticConfig = config.get('elastic');
-const fs = require('fs'); <1>
-
-const client = new Client({
-  cloud: {
-    id: elasticConfig.cloudID
-  },
-  auth: {
-    username: elasticConfig.username,
-    password: elasticConfig.password
-  },
-  ssl: { <2>
-    ca: fs.readFileSync(elasticConfig.ca),
-    rejectUnauthorized: false
-  }
-})
-```
-
-1. If your Elastic Cloud Enterprise deployment is using a self-signed TLS certificate, you need to read in the root certificate that you downloaded earlier. Add a line to require the `fs` package,
-2. and add the `ssl` stanza to the `client`.
-
-
-Now confirm that you are connected to the deployment by returning some information about the deployment:
-
-```javascript
-client.info()
-  .then(response => console.log(response))
-  .catch(error => console.error(error))
-```
-
-
-## Ingest data [ece_ingest_data]
-
-After connecting to your deployment, you are ready to index and search data. Let’s create a new index, insert some quotes from our favorite characters, and refresh the index so that it is ready to be searched. A refresh makes all operations performed on an index since the last refresh available for search.
-
-```javascript
-async function run() {
-  await client.index({
-    index: 'game-of-thrones',
-    body: {
-      character: 'Ned Stark',
-      quote: 'Winter is coming.'
-    }
-  })
-
-  await client.index({
-    index: 'game-of-thrones',
-    body: {
-      character: 'Daenerys Targaryen',
-      quote: 'I am the blood of the dragon.'
-    }
-  })
-
-  await client.index({
-    index: 'game-of-thrones',
-    body: {
-      character: 'Tyrion Lannister',
-      quote: 'A mind needs books like a sword needs whetstone.'
-    }
-  })
-
-  await client.indices.refresh({index: 'game-of-thrones'})
-}
-
-run().catch(console.log)
-```
-
-When using the [client.index](elasticsearch-js://reference/api-reference.md#_index) API, the request automatically creates the `game-of-thrones` index if it doesn’t already exist, as well as document IDs for each indexed document if they are not explicitly specified.
-
-
-## Search and modify data [ece_search_and_modify_data]
-
-After creating a new index and ingesting some data, you are now ready to search. Let’s find what characters have said things about `winter`:
-
-```javascript
-async function read() {
-  const { body } = await client.search({
-    index: 'game-of-thrones',
-    body: {
-      query: {
-        match: { quote: 'winter' }
-      }
-    }
-  })
-  console.log(body.hits.hits)
-}
-
-read().catch(console.log)
-```
-
-The search request returns the content of documents containing `'winter'` in the `quote` field, including document IDs that were automatically generated. You can make updates to specific documents using document IDs. Let’s add a birthplace for our character:
-
-```javascript
-async function update() {
-  await client.update({
-    index: 'game-of-thrones',
-    id: 'DOCUMENT_ID', <1>
-    body: {
-      script: {
-        source: "ctx._source.birthplace = 'Winterfell'"
-      }
-    }
-  })
-  const { body } = await client.get({
-    index: 'game-of-thrones',
-    id: 'DOCUMENT_ID'
-  })
-
-  console.log(body)
-}
-
-update().catch(console.log)
-```
-
-1. Replace `DOCUMENT_ID` with the `_id` of the document you want to update, for example the ID returned for Ned Stark by the previous search.
-
-This [more comprehensive list of API examples](elasticsearch-js://reference/examples.md) includes bulk operations, checking the existence of documents, updating by query, deleting, scrolling, and SQL queries. To learn more, check the complete [API reference](elasticsearch-js://reference/api-reference.md).
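-Beyond single-document calls, the client also ships helpers for batching. The following is a minimal sketch of the bulk helper (`client.helpers.bulk`, available in recent versions of the client), reusing the `client` instance created earlier; the documents shown are illustrative only:
-
-```javascript
-async function bulkIngest () {
-  const documents = [
-    { character: 'Arya Stark', quote: 'A girl has no name.' },
-    { character: 'Brienne of Tarth', quote: 'I am no lady.' }
-  ]
-
-  // The helper batches the documents into bulk requests behind the scenes.
-  const result = await client.helpers.bulk({
-    datasource: documents,
-    onDocument (doc) {
-      // Send every document to the same index; IDs are generated automatically.
-      return { index: { _index: 'game-of-thrones' } }
-    }
-  })
-
-  console.log(result) // summary counts, including successful and failed operations
-}
-
-bulkIngest().catch(console.log)
-```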
- - -## Switch to API key authentication [ece_switch_to_api_key_authentication] - -To get started, authentication to {{es}} used the `elastic` superuser and password, but an API key is much safer and a best practice for production. - -In the example that follows, an API key is created with the cluster `monitor` privilege which gives read-only access for determining the cluster state. Some additional privileges also allow `create_index`, `write`, `read`, and `manage` operations for the specified index. The index `manage` privilege is added to enable index refreshes. - -The `security.createApiKey` function returns an `id` and `api_key` value which can then be concatenated and encoded in `base64`: - -```javascript -async function generateApiKeys (opts) { - const { body } = await client.security.createApiKey({ - body: { - name: 'nodejs_example', - role_descriptors: { - 'nodejs_example_writer': { - 'cluster': ['monitor'], - 'index': [ - { - 'names': ['game-of-thrones'], - 'privileges': ['create_index', 'write', 'read', 'manage'] - } - ] - } - } - } - }) - - return Buffer.from(`${body.id}:${body.api_key}`).toString('base64') -} - -generateApiKeys() - .then(console.log) - .catch(err => { - console.error(err) - process.exit(1) -}) -``` - -The `base64` encoded output is as follows and is ready to be added to the configuration file: - -```text -API_KEY_DETAILS -``` - -Edit the `config/default.json` configuration file you created earlier and add this API key: - -```json -{ - "elastic-cloud": { - "cloudID": "DEPLOYMENT_NAME:CLOUD_ID_DETAILS", - "username": "elastic", - "password": "LONGPASSWORD", - "apiKey": "API_KEY_DETAILS" - } -} -``` - -Now the API key can be used in place of the username and password. The client connection becomes: - -```javascript -const elasticConfig = config.get('elastic-cloud'); - -const client = new Client({ - cloud: { - id: elasticConfig.cloudID - }, - auth: { - apiKey: elasticConfig.apiKey - } -}) -``` - -Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). - -### Best practices [ece_best_practices] - -Security -: When connecting to Elastic Cloud Enterprise, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. - - Do not enable sniffing when using Elastic Cloud Enterprise, since the nodes are behind a load balancer. Elastic Cloud Enterprise takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. 
-
-
-Schema
-: When the example code was run, an index mapping was created automatically. The field types were selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you are designing the schema for your production use cases.
-
-Ingest
-: For more advanced scenarios, this [bulk ingestion](elasticsearch-js://reference/bulk_examples.md) reference gives an example of the `bulk` API that makes it possible to perform multiple operations in a single call. This bulk example also explicitly specifies document IDs. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually.
-
diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-python.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-python.md
deleted file mode 100644
index 0b580691b8..0000000000
--- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-python.md
+++ /dev/null
@@ -1,361 +0,0 @@
-# Ingest data with Python on Elastic Cloud Enterprise [ece-getting-started-python]
-
-This guide tells you how to get started with:
-
-* Securely connecting to Elastic Cloud Enterprise with Python
-* Ingesting data into your deployment from your application
-* Searching and modifying your data on Elastic Cloud Enterprise
-
-If you are a Python application programmer who is new to the Elastic Stack, this content can help you get started more easily.
-
-*Time required: 45 minutes*
-
-
-## Prerequisites [ece_prerequisites]
-
-These steps are applicable to your existing application. If you don’t have one, you can use the example included here to create one.
-
-
-### Get the `elasticsearch` packages [ece_get_the_elasticsearch_packages]
-
-```sh
-python -m pip install elasticsearch
-python -m pip install elasticsearch-async
-```
-
-
-### Create the `requirements.txt` file [ece_create_the_setup_py_file]
-
-```sh
-# Elasticsearch 7.x
-elasticsearch>=7.0.0,<8.0.0
-```
-
-
-## Create a deployment [ece_create_a_deployment_2]
-
-1. Log into the Elastic Cloud Enterprise admin console.
-2. Select **Create deployment**.
-3. Give your deployment a name. You can leave all other settings at their default values.
-4. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on.
-5. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**.
-
-
-## Connect securely [ece_connect_securely]
-
-When connecting to Elastic Cloud Enterprise, you need to use your Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
-
-To connect to, stream data to, and issue queries with Elastic Cloud Enterprise, you need to think about authentication. Two authentication mechanisms are supported: *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. 
-
-
-### Basic authentication [ece_basic_authentication_2]
-
-For basic authentication, use the same deployment credentials (`username` and `password` parameters) and Cloud ID you copied down earlier. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. (If you did not save the password, you can [reset the password](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).)
-
-You first need to create and edit an `example.ini` file with your deployment details:
-
-```sh
-[ELASTIC]
-cloud_id = DEPLOYMENT_NAME:CLOUD_ID_DETAILS
-user = elastic
-password = LONGPASSWORD
-ca = /path/to/your/elastic-ece-ca-cert.pem <1>
-```
-
-1. This line is only used when you have a self-signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM-formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. You can retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this example, the file is named `elastic-ece-ca-cert.pem`.
-
-
-The following examples are to be typed into the Python interpreter in interactive mode. The prompts have been removed to make it easier for you to copy the samples; the output from the interpreter is shown unmodified.
-
-
-### Import libraries and read in the configuration [ece_import_libraries_and_read_in_the_configuration]
-
-```python
-❯ python3
-Python 3.9.6 (default, Jun 29 2021, 05:25:02)
-[Clang 12.0.5 (clang-1205.0.22.9)] on darwin
-Type "help", "copyright", "credits" or "license" for more information.
-
-from elasticsearch import Elasticsearch, helpers
-import configparser
-from ssl import create_default_context
-
-config = configparser.ConfigParser()
-config.read('example.ini')
-```
-
-
-#### Output [ece_output]
-
-```python
-['example.ini']
->>>
-```
-
-
-### Instantiate the {{es}} connection [ece_instantiate_the_es_connection]
-
-```python
-# Build an SSL context that trusts the (possibly self-signed) ECE proxy CA,
-# and pass it to the client so certificate verification uses that CA.
-context = create_default_context(cafile=config['ELASTIC']['ca'])
-es = Elasticsearch(
-    cloud_id=config['ELASTIC']['cloud_id'],
-    ssl_context=context,
-    http_auth=(config['ELASTIC']['user'], config['ELASTIC']['password'])
-)
-```
-
-You can now confirm that you have connected to the deployment by returning some information about the deployment:
-
-```python
-es.info()
-```
-
-
-#### Output [ece_output_2]
-
-```python
-{'name': 'instance-0000000000',
- 'cluster_name': '747ab208fb70403dbe3155af102aef56',
- 'cluster_uuid': 'IpgjkPkVQ5efJY-M9ilG7g',
- 'version': {'number': '7.15.0', 'build_flavor': 'default', 'build_type': 'docker', 'build_hash': '79d65f6e357953a5b3cbcc5e2c7c21073d89aa29', 'build_date': '2021-09-16T03:05:29.143308416Z', 'build_snapshot': False, 'lucene_version': '8.9.0', 'minimum_wire_compatibility_version': '6.8.0', 'minimum_index_compatibility_version': '6.0.0-beta1'},
- 'tagline': 'You Know, for Search'}
-```
-
-
-## Ingest data [ece_ingest_data_2]
-
-After connecting to your deployment, you are ready to index and search data. Let’s create a new index, insert some quotes from our favorite characters, and then refresh the index so that it is ready to be searched. 
A refresh makes all operations performed on an index since the last refresh available for search. - - -### Index a document [ece_index_a_document] - -```python -es.index( - index='lord-of-the-rings', - document={ - 'character': 'Aragon', - 'quote': 'It is not this day.' - }) -``` - - -#### Output [ece_output_3] - -```python -{'_index': 'lord-of-the-rings', - '_type': '_doc', - '_id': 'IanWEnwBg_mH2XweqDqg', - '_version': 1, - 'result': 'created', - '_shards': {'total': 2, 'successful': 1, 'failed': 0}, '_seq_no': 34, '_primary_term': 1} -``` - - -### Index another record [ece_index_another_record] - -```python -es.index( - index='lord-of-the-rings', - document={ - 'character': 'Gandalf', - 'quote': 'A wizard is never late, nor is he early.' - }) -``` - - -#### Output [ece_output_4] - -```python -{'_index': 'lord-of-the-rings', - '_type': '_doc', - '_id': 'IqnWEnwBg_mH2Xwezjpj', - '_version': 1, - 'result': 'created', - '_shards': {'total': 2, 'successful': 1, 'failed': 0}, '_seq_no': 35, '_primary_term': 1} -``` - - -### Index a third record [ece_index_a_third_record] - -```python -es.index( - index='lord-of-the-rings', - document={ - 'character': 'Frodo Baggins', - 'quote': 'You are late' - }) -``` - - -#### Output [ece_output_5] - -```python -{'_index': 'lord-of-the-rings', - '_type': '_doc', - '_id': 'I6nWEnwBg_mH2Xwe_Tre', - '_version': 1, - 'result': 'created', - '_shards': {'total': 2, 'successful': 1, 'failed': 0}, '_seq_no': 36, '_primary_term': 1} -``` - - -### Refresh the index [ece_refresh_the_index] - -```python -es.indices.refresh(index='lord-of-the-rings') -``` - - -#### Output [ece_output_6] - -```python -{'_shards': {'total': 2, 'successful': 1, 'failed': 0}} -``` - -When using the `es.index` API, the request automatically creates the `lord-of-the-rings` index, if it doesn’t exist already, as well as document IDs for each indexed document if they are not explicitly specified. - - -## Search and modify data [ece_search_and_modify_data_2] - -After creating a new index and ingesting some data, you are now ready to search. Let’s find what different characters have said things about being `late`: - -```python -result = es.search( - index='lord-of-the-rings', - query={ - 'match': {'quote': 'late'} - } - ) - -result['hits']['hits'] -``` - - -### Output [ece_output_7] - -```python -[{'_index': 'lord-of-the-rings', - '_type': '_doc', - '_id': '2EkAzngB_pyHD3p65UMt', - '_score': 0.5820575, - '_source': {'character': 'Frodo Baggins', 'quote': 'You are late'}}, - {'_index': 'lord-of-the-rings', - '_type': '_doc', - '_id': '10kAzngB_pyHD3p65EPR', - '_score': 0.37883914, - '_source': {'character': 'Gandalf', - 'quote': 'A wizard is never late, nor is he early.'}}] -``` - -The search request returns content of documents containing `late` in the quote field, including document IDs that were automatically generated. - -You can make updates to specific documents using document IDs. Let’s add a birthplace for our character: - -```python -es.update( - index='lord-of-the-rings', - id='2EkAzngB_pyHD3p65UMt', <1> - doc={'birthplace': 'The Shire'} - ) -``` - -1. This update example uses the field `id` to identify the document to update. Copy the `id` from the document related to `Frodo Baggins` when you update and add the `birthplace`. 
- - - -### Output [ece_output_8] - -```python -es.get(index='lord-of-the-rings', id='2EkAzngB_pyHD3p65UMt') -{'_index': 'lord-of-the-rings', - '_type': '_doc', - '_id': '2EkAzngB_pyHD3p65UMt', - '_version': 2, - '_seq_no': 3, - '_primary_term': 1, - 'found': True, - '_source': {'character': 'Frodo Baggins', - 'quote': 'You are late', - 'birthplace': 'The Shire'}} -``` - -For frequently used API calls with the Python client, check [Examples](elasticsearch-py://reference/examples.md). - - -## Switch to API key authentication [ece_switch_to_api_key_authentication_2] - -To get started, authentication to Elasticsearch used the `elastic` superuser and password, but an API key is much safer and a best practice for production. - -In the example that follows, an API key is created with the cluster `monitor` privilege which gives read-only access for determining the cluster state. Some additional privileges also allow `create_index`, `write`, `read`, and `manage` operations for the specified index. The index `manage` privilege is added to enable index refreshes. - -The easiest way to create this key is in the API console for your deployment. Select the deployment name and go to **☰** > **Management** > **Dev Tools**: - -```json -POST /_security/api_key -{ - "name": "python_example", - "role_descriptors": { - "python_read_write": { - "cluster": ["monitor"], - "index": [ - { - "names": ["test-index"], - "privileges": ["create_index", "write", "read", "manage"] - } - ] - } - } -} -``` - - -### The output is: [ece_the_output_is] - -```json -{ - "id" : "API_KEY_ID", - "name" : "python_example", - "api_key" : "API_KEY_DETAILS" -} -``` - -Edit the `example.ini` file you created earlier and add the `id` and `api_key` you just created. You should also remove the lines for `user` and `password` you added earlier after you have tested the `api_key`, and consider changing the `elastic` password using the Cloud UI. - -```sh -[DEFAULT] -cloud_id = DEPLOYMENT_NAME:CLOUD_ID_DETAILS -apikey_id = API_KEY_ID -apikey_key = API_KEY_DETAILS -``` - -You can now use the API key in place of a username and password. The client connection becomes: - -```python -es = Elasticsearch( - cloud_id=config['DEFAULT']['cloud_id'], - api_key=(config['DEFAULT']['apikey_id'], config['DEFAULT']['apikey_key']), -) -``` - -Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). - -For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](elasticsearch-py://reference/examples.md). - - -### Best practices [ece_best_practices_2] - -Security -: When connecting to Elastic Cloud Enterprise, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. 
Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. - - Do not enable sniffing when using Elastic Cloud Enterprise, since the nodes are behind a load balancer. Elastic Cloud Enterprise takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. - - -Schema -: When the example code is run, an index mapping is created automatically. The field types are selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you design the schema for your production use cases. - -Ingest -: For more advanced scenarios, [Bulk helpers](elasticsearch-py://reference/client-helpers.md#bulk-helpers) gives examples for the `bulk` API that makes it possible to perform multiple operations in a single call. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md deleted file mode 100644 index fa585aa751..0000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md +++ /dev/null @@ -1,514 +0,0 @@ -# Ingest data from Beats to Elastic Cloud Enterprise with {{ls}} as a proxy [ece-getting-started-search-use-cases-beats-logstash] - -This guide explains how to ingest data from Filebeat and Metricbeat to {{ls}} as an intermediary, and then send that data to Elastic Cloud Enterprise. Using {{ls}} as a proxy limits your Elastic stack traffic through a single, external-facing firewall exception or rule. Consider the following features of this type of setup: - -* You can send multiple instances of Beats data through your local network’s demilitarized zone (DMZ) to {{ls}}. {{ls}} then acts as a proxy through your firewall to send the Beats data to Elastic Cloud Enterprise, as shown in the following diagram: - - ![A diagram showing data from multiple Beats into Logstash](../../../images/cloud-enterprise-ec-logstash-beats-dataflow.png "") - -* This proxying reduces the firewall exceptions or rules necessary for Beats to communicate with Elastic Cloud Enterprise. It’s common to have many Beats dispersed across a network, each installed close to the data that it monitors, and each Beat individually communicating with an Elastic Cloud Enterprise deployment. Multiple Beats support multiple servers. Rather than configure each Beat to send its data directly to Elastic Cloud Enterprise, you can use {{ls}} to proxy this traffic through one firewall exception or rule. -* This setup is not suitable in simple scenarios when there is only one or a couple of Beats in use. {{ls}} makes the most sense for proxying when there are many Beats. - -The configuration in this example makes use of the System module, available for both Filebeat and Metricbeat. 
Filebeat’s System sends server system log details (that is, login success/failures, sudo *superuser do* command usage, and other key usage details). Metricbeat’s System module sends memory, CPU, disk, and other server usage metrics. - -In the following sections you are going to learn how to: - -1. [Create a deployment](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-deployment) -2. [Connect securely](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-connect-securely) -3. [Set up {{ls}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-logstash) -4. [Set up Metricbeat](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-metricbeat) -5. [Configure Metricbeat to send data to {{ls}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-metricbeat-send) -6. [Set up Filebeat](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-filebeat) -7. [Configure {{ls}} to listen for Beats](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-listen) -8. [Output {{ls}} data to stdout](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-stdout) -9. [Output {{ls}} data to {{es}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-elasticsearch) -10. [View data in Kibana](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ece-beats-logstash-view-kibana) - -*Time required: 1 hour* - - -## Create a deployment [ece-beats-logstash-deployment] - -1. Log into the Elastic Cloud Enterprise admin console. -2. Select **Create deployment**. -3. Give your deployment a name. You can leave all other settings at their default values. -4. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. -5. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. - - -## Connect securely [ece-beats-logstash-connect-securely] - -When connecting to Elastic Cloud Enterprise you can use a Cloud ID to specify the connection details. You must pass the Cloud ID that you can find in the cloud console. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - -To connect to, stream data to, and issue queries with Elastic Cloud Enterprise, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. 
API keys are safer and preferred for production environments. - - -## Set up {{ls}} [ece-beats-logstash-logstash] - -[Download](https://www.elastic.co/downloads/logstash) and unpack {{ls}} on the local machine that hosts Beats or another machine granted access to the Beats machines. - - -## Set up Metricbeat [ece-beats-logstash-metricbeat] - -Now that {{ls}} is downloaded and your Elastic Cloud Enterprise deployment is set up, you can configure Metricbeat to send operational data to {{ls}}. - -Install Metricbeat as close as possible to the service that you want to monitor. For example, if you have four servers with MySQL running, we recommend that you run Metricbeat on each server. This allows Metricbeat to access your service from *localhost*. This setup does not cause any additional network traffic and enables Metricbeat to collect metrics even in the event of network problems. Metrics from multiple Metricbeat instances are combined on the {{ls}} server. - -If you have multiple servers with metrics data, repeat the following steps to configure Metricbeat on each server. - -**Download Metricbeat** - -[Download Metricbeat](https://www.elastic.co/downloads/beats/metricbeat) and unpack it on the local server from which you want to collect data. - -**About Metricbeat modules** - -Metricbeat has [many modules](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-modules.md) available that collect common metrics. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/metricbeat/configuration-metricbeat.md) as needed. For this example we’re using Metricbeat’s default configuration, which has the [System module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-system.md) enabled. The System module allows you to monitor servers with the default set of metrics: *cpu*, *load*, *memory*, *network*, *process*, *process_summary*, *socket_summary*, *filesystem*, *fsstat*, and *uptime*. - -**Load the Metricbeat Kibana dashboards** - -Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the data view (formerly *index pattern*) *metricbeat-**, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the Elastic Cloud Enterprise deployment. - -::::{note} -Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been renamed to *data views*. To learn more, check the Kibana [What’s new in 8.0](https://www.elastic.co/guide/en/kibana/8.0/whats-new.html#index-pattern-rename) page. -:::: - - -1. Open a command line instance and then go to */metricbeat-/* -2. Run the following command: - -```txt -sudo ./metricbeat setup \ - -E cloud.id= \ <1> - -E cloud.auth=: <2> - -E setup.kibana.ssl.certificate_authorities=['/path/to/the/elastic-ece-ca-cert.pem'] \ <3> - -E setup.kibana.ssl.verification_mode='certificate' \ <3> - -E output.elasticsearch.ssl.certificate_authorities=['/path/to/the/elastic-ece-ca-cert.pem'] \ <3> - -E output.elasticsearch.ssl.verification_mode='certificate' <3> -``` - -1. Specify the Cloud ID of your Elastic Cloud Enterprise deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -2. Specify the username and password provided to you when creating the deployment. 
Make sure to keep the colon between the username and the password.
-3. The four lines related to `ssl` are only used when you have a self-signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM-formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. You can retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this command example, the file is named `elastic-ece-ca-cert.pem`.
-
-::::{important}
-Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the `metricbeat.yml` file.
-
-You might encounter similar permissions hurdles as you work through multiple sections of this document. These permission requirements are there for a good reason: they are a security safeguard to prevent unauthorized access and modification of key Elastic files.
-
-If this isn’t a production environment and you want a fast pass with fewer permissions hassles, you can disable strict permission checks from the command line by using `--strict.perms=false` when executing Beats (for example, `./metricbeat --strict.perms=false`).
-
-Depending on your system, you may also find that some commands need to be run as root, by prefixing `sudo` to the command.
-
-::::
-
-
-
-Your results should be similar to the following:
-
-```txt
-Index setup finished.
-Loading dashboards (Kibana must be running and reachable)
-Loaded dashboards
-```
-
-
-## Configure Metricbeat to send data to {{ls}} [ece-beats-logstash-metricbeat-send]
-
-1. In the directory where Metricbeat is installed, open the *metricbeat.yml* configuration file for editing.
-2. Scroll down to the *Elasticsearch Output* section. Place a comment pound sign (*#*) in front of *output.elasticsearch* and {{es}} *hosts*.
-3. Scroll down to the *Logstash Output* section. Remove the comment pound sign (*#*) from in front of *output.logstash* and *hosts*, as follows:
-
-```txt
-# ---------------- Logstash Output -----------------
-output.logstash:
-  # The Logstash hosts
-  hosts: ["localhost:5044"] <1>
-```
-
-1. Replace `localhost` and the port number with the hostname and port number where Logstash is listening.
-
-
-
-## Set up Filebeat [ece-beats-logstash-filebeat]
-
-The next step is to configure Filebeat to send operational data to Logstash. As with Metricbeat, install Filebeat as close as possible to the service that you want to monitor.
-
-**Download Filebeat**
-
-[Download Filebeat](https://www.elastic.co/downloads/beats/filebeat) and unpack it on the local server from which you want to collect data.
-
-**Enable the Filebeat system module**
-
-Filebeat has [many modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md) available that collect common log types. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/filebeat/configuration-filebeat-modules.md) as needed. For this example, we’re using Filebeat’s [System module](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-system.md). 
This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the */var/log/* folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions. - -1. Go to */filebeat-/modules.d/* where ** is the directory where Filebeat is installed. -2. Filebeat requires at least one fileset to be enabled. In file */filebeat-/modules.d/system.yml.disabled*, under both `syslog` and `auth` set `enabled` to `true`: - -```txt -- module: system - # Syslog - syslog: - enabled: true - - # Authorization logs - auth: - enabled: true -``` - -From the */filebeat-* directory, run the `filebeat modules` command as shown: - -```txt -./filebeat modules enable system -``` - -The system module is now enabled in Filebeat and it will be used the next time Filebeat starts. - -**Load the Filebeat Kibana dashboards** - -Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view *filebeat-**, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet. - -1. Open a command line instance and then go to */filebeat-/* -2. Run the following command: - -```txt -sudo ./filebeat setup \ - -E cloud.id= \ <1> - -E cloud.auth=: <2> - -E setup.kibana.ssl.certificate_authorities=['/path/to/the/elastic-ece-ca-cert.pem'] \ <3> - -E setup.kibana.ssl.verification_mode='certificate' \ <3> - -E output.elasticsearch.ssl.certificate_authorities=['/path/to/the/elastic-ece-ca-cert.pem'] \ <3> - -E output.elasticsearch.ssl.verification_mode='certificate' <3> -``` - -1. Specify the Cloud ID of your Elastic Cloud Enterprise deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **. -3. The four lines related to `ssl` are only needed if you are using self-signed certificates.::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the filebeat.yml. -:::: - - - - -Your results should be similar to the following: - -```txt -Index setup finished. -Loading dashboards (Kibana must be running and reachable) -Loaded dashboards -Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead. -See more: /explore-analyze/machine-learning.md -Loaded machine learning job configurations -Loaded Ingest pipelines -``` - -1. Exit the CLI. - -The data views for *filebeat-** and *metricbeat-** are now available in {{es}}. To verify: - -1. [Login to Kibana](../../../deploy-manage/deploy/cloud-enterprise/create-deployment.md). -2. Open the Kibana main menu and select **Management** and go to **Kibana** > **Data views**. -3. In the search bar, search for *data views*. -4. In the search results, choose *Kibana / Data Views Management*. - -**Finish configuring Filebeat** - -1. 
In the Filebeat installation directory, open the *filebeat.yml* configuration file for editing.
-2. Scroll down to the *Outputs* section. Place a comment pound sign (*#*) in front of *output.elasticsearch* and {{es}} *hosts*.
-3. Scroll down to the *Logstash Output* section. Remove the comment pound sign (*#*) from in front of *output.logstash* and *hosts* as follows:
-
-```txt
-# ---------------- Logstash Output -----------------
-output.logstash:
-  # The Logstash hosts
-  hosts: ["localhost:5044"] <1>
-```
-
-1. Replace `localhost` and the port number with the hostname and port number where Logstash is listening.
-
-
-
-## Configure {{ls}} to listen for Beats [ece-beats-logstash-listen]
-
-Now that Filebeat and Metricbeat are set up, let’s configure a {{ls}} pipeline to input data from Beats and send results to the standard output. This enables you to verify the data output before sending it for indexing in {{es}}.
-
-1. In the directory where {{ls}} is installed, create a new text file named *beats.conf*.
-2. Copy and paste the following code into the new text file. This code creates a {{ls}} pipeline that listens for connections from Beats on port 5044 and writes to standard out (typically to your terminal) with formatting provided by the {{ls}} rubydebug output plugin.
-
-    ```txt
-    input {
-      beats{port => 5044} <1>
-    }
-    output {
-      stdout{codec => rubydebug} <2>
-    }
-    ```
-
-    1. {{ls}} listens for Beats input on the default port of 5044. Only one line is needed to do this. {{ls}} can handle input from many Beats of the same and also of varying types (Metricbeat, Filebeat, and others).
-    2. This sends output to the standard output, which displays through your command line interface. This plugin enables you to verify the data before you send it to {{es}}, in a later step.
-
-3. Save the new *beats.conf* file in your Logstash folder. To learn more about the file format and options, check [{{ls}} Configuration Examples](logstash://reference/config-examples.md).
-
-
-## Output {{ls}} data to stdout [ece-beats-logstash-stdout]
-
-Now, let’s try out the {{ls}} pipeline with the Metricbeat and Filebeat configurations from the prior steps. Each Beat sends data into a {{ls}} pipeline, and the results display on the standard output where you can verify that everything looks correct.
-
-**Test Metricbeat to stdout**
-
-1. Open a command line interface instance. Go to the directory where {{ls}} is installed and start {{ls}} by running the following command:
-
-    ```txt
-    bin/logstash -f beats.conf
-    ```
-
-2. Open a second command line interface instance. Go to the directory where Metricbeat is installed and start Metricbeat by running the following command:
-
-    ```txt
-    ./metricbeat -c metricbeat.yml
-    ```
-
-3. Switch back to your first command line interface instance with {{ls}}. Now, Metricbeat events are input into {{ls}} and the output data is directed to the standard output. 
Your results should be similar to the following: - - ```txt - "tags" => [ - [0] "beats_input_raw_event" - ], - "agent" => { - "type" => "metricbeat", - "name" => "john-VirtualBox", - "version" => "8.13.1", - "ephemeral_id" => "1e69064c-d49f-4ec0-8414-9ab79b6f27a4", - "id" => "1b6c39e8-025f-4310-bcf1-818930a411d4", - "hostname" => "john-VirtualBox" - }, - "service" => { - "type" => "system" - }, - "event" => { - "duration" => 39833, - "module" => "system", - "dataset" => "system.cpu" - }, - "@timestamp" => 2021-04-21T17:06:05.231Z, - "metricset" => { - "name" => "cpu", - "period" => 10000 - }, - "@version" => "1","host" => { - "id" => "939972095cf1459c8b22cc608eff85da", - "ip" => [ - [0] "10.0.2.15", - [1] "fe80::3700:763c:4ba3:e48c" - ], - "name" => "john-VirtualBox","mac" => [ - [0] "08:00:27:a3:c7:a9" - ], - "os" => { - "type" => "linux", - ``` - -4. Switch back to the Metricbeat command line instance. Enter *CTRL + C* to shut down Metricbeat, and then exit the CLI. -5. Switch back to the {{ls}} command line instance. Enter *CTRL + C* to shut down {{ls}}, and then exit the CLI. - -**Test Filebeat to stdout** - -1. Open a command line interface instance. Go to */logstash-/*, where is the directory where {{ls}} is installed, and start {{ls}} by running the following command: - - ```txt - bin/logstash -f beats.conf - ``` - -2. Open a second command line interface instance. Go to */filebeat-/*, where is the directory where Filebeat is installed, and start Filebeat by running the following command: - - ```txt - ./filebeat -c filebeat.yml - ``` - -3. Switch back to your first command line interface instance with {{ls}}. Now, Filebeat events are input into {{ls}} and the output data is directed to the standard output. Your results should be similar to the following: - - ```txt - { - "service" => { - "type" => "system" - }, - "event" => { - "timezone" => "-04:00", - "dataset" => "system.syslog", - "module" => "system" - }, - "fileset" => { - "name" => "syslog" - }, - "agent" => { - "id" => "113dc127-21fa-4ebb-ab86-8a151d6a23a6", - "type" => "filebeat", - "version" => "8.13.1", - "hostname" => "john-VirtualBox", - "ephemeral_id" => "1058ad74-8494-4a5e-9f48-ad7c5b9da915", - "name" => "john-VirtualBox" - }, - "@timestamp" => 2021-04-28T15:33:41.727Z, - "input" => { - "type" => "log" - }, - "ecs" => { - "version" => "1.8.0" - }, - "@version" => "1", - "log" => { - "offset" => 73281, - "file" => { - "path" => "/var/log/syslog" - } - }, - ``` - -4. Review the {{ls}} output results to make sure your data looks correct. Enter *CTRL + C* to shut down {{ls}}. -5. Switch back to the Filebeats CLI. Enter *CTRL + C* to shut down Filebeat. - - -## Output {{ls}} data to {{es}} [ece-beats-logstash-elasticsearch] - -In this section, you configure {{ls}} to send the Metricbeat and Filebeat data to {{es}}. You modify the *beats.conf* created earlier, and specify the output credentials needed for our Elastic Cloud Enterprise deployment. Then, you start {{ls}} to send the Beats data into {{es}}. - -1. In your */logstash-/* folder, open *beats.conf* for editing. -2. Replace the *output {}* section of the JSON with the following code: - - ```txt - output { - elasticsearch { - index => "%{[@metadata][beat]}-%{[@metadata][version]}" - ilm_enabled => true - cloud_id => ":" <1> - cloud_auth => "elastic:" <2> - ssl => true - cacert => '/path/to/the/elastic-ece-ca-cert.pem' <3> - # api_key => "" - } - } - ``` - - 1. Use the Cloud ID of your Elastic Cloud Enterprise deployment. 
You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
    2. The default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data, as this is a superuser. We recommend using a user with reduced permissions, or an API key with permissions specific to the indices or data streams that will be written to. Check [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) for information on the writer role and API keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) documentation.
    3. The cacert line is only needed if you are using a self-signed certificate.

    Following are some additional details about the configuration file settings:

    * *index*: We specify the name of the {{es}} index with which to associate the Beats output.

        * *%{[@metadata][beat]}* sets the first part of the index name to the value of the Beat metadata field.
        * *%{[@metadata][version]}* sets the second part of the index name to the Beat version.

      If you use Metricbeat version 8.13.1, the index created in {{es}} is named *metricbeat-8.13.1*. Similarly, using the 8.13.1 version of Filebeat, the {{es}} index is named *filebeat-8.13.1*.

    * *cloud_id*: This is the ID that uniquely identifies your Elastic Cloud Enterprise deployment.
    * *ssl*: This should be set to `true` so that Secure Socket Layer (SSL) certificates are used for secure communication between {{ls}} and your Elastic Cloud Enterprise deployment.
    * *ilm_enabled*: Enables and disables Elastic Cloud Enterprise [index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management.md).
    * *api_key*: If you choose to use an API key to authenticate (as discussed in the next step), you can provide it here.

3. **Optional**: For additional security, you can generate an {{es}} API key through the Elastic Cloud Enterprise console and configure {{ls}} to use the new key to connect securely to Elastic Cloud Enterprise.

    1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md).
    2. Select the deployment and go to **☰** > **Management** > **Dev Tools**.
    3. Enter the following:

        ```json
        POST /_security/api_key
        {
          "name": "logstash-apikey",
          "role_descriptors": {
            "logstash_read_write": {
              "cluster": ["manage_index_templates", "monitor"],
              "index": [
                {
                  "names": ["logstash-*","metricbeat-*","filebeat-*"],
                  "privileges": ["create_index", "write", "read", "manage"]
                }
              ]
            }
          }
        }
        ```

        This creates an API key with the cluster `monitor` privilege, which gives read-only access for determining the cluster state, and `manage_index_templates`, which allows all operations on index templates. Some additional privileges also allow `create_index`, `write`, and `manage` operations for the specified index. The index `manage` privilege is added to enable index refreshes.

    4. Click **▶**. The output should be similar to the following:

        ```json
        {
          "api_key": "aB1cdeF-GJI23jble4NOH4",
          "id": "2GBe63fBcxgJAetmgZeh",
          "name": "logstash_api_key"
        }
        ```

    5. Enter your new `api_key` value into the {{ls}} `beats.conf` file, in the format `:`. If your results were as shown in this example, you would enter `2GBe63fBcxgJAetmgZeh:aB1cdeF-GJI23jble4NOH4`. Remember to remove the pound (`#`) sign to uncomment the line, and comment out the `username` and `password` lines:

        ```txt
        output {
          elasticsearch {
            index => "%{[@metadata][beat]}-%{[@metadata][version]}"
            cloud_id => ""
            ssl => true
            ilm_enabled => true
            api_key => "2GBe63fBcxgJAetmgZeh:aB1cdeF-GJI23jble4NOH4"
            # user => ""
            # password => ""
          }
        }
        ```

4. Open a command line interface instance, go to your {{ls}} installation path, and start {{ls}}:

    ```txt
    bin/logstash -f beats.conf
    ```

5. Open a second command line interface instance, go to your Metricbeat installation path, and start Metricbeat:

    ```txt
    ./metricbeat -c metricbeat.yml
    ```

6. Open a third command line interface instance, go to your Filebeat installation path, and start Filebeat:

    ```txt
    ./filebeat -c filebeat.yml
    ```

7. {{ls}} now outputs the Filebeat and Metricbeat data to your Elastic Cloud Enterprise instance.

::::{note}
In this guide, you manually launch each of the Elastic Stack applications through the command line interface. In production, you may prefer to configure {{ls}}, Metricbeat, and Filebeat to run as system services. Check the following pages for the steps to configure each application to run as a service:

* [Running {{ls}} as a service on Debian or RPM](logstash://reference/running-logstash.md)
* [Metricbeat and systemd](asciidocalypse://docs/beats/docs/reference/metricbeat/running-with-systemd.md)
* [Start Filebeat](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-starting.md)
::::


## View data in Kibana [ece-beats-logstash-view-kibana]

In this section, you log into Elastic Cloud Enterprise, open Kibana, and view the Kibana dashboards populated with your Metricbeat and Filebeat data.

**View the Metricbeat dashboard**

1. [Log in to Kibana](../../../deploy-manage/deploy/cloud-enterprise/create-deployment.md).
2. Open the Kibana main menu and select **Analytics**, then **Dashboard**.
3. In the search box, search for *metricbeat system*. The search results show several dashboards available for you to explore.
4. In the search results, choose *[Metricbeat System] Overview ECS*. A Metricbeat dashboard opens:

![A screencapture of the Kibana dashboard named Metricbeat System Overview ECS](../../../images/cloud-enterprise-ec-logstash-beats-metricbeat-dashboard.png "")

**View the Filebeat dashboard**

1. Open the Kibana main menu and select **Analytics**, then **Dashboard**.
2. In the search box, search for *filebeat system*.
3. In the search results, choose *[Filebeat System] Syslog dashboard ECS*. A Filebeat dashboard opens, displaying your Filebeat data:

![A screencapture of the Kibana dashboard named Filebeat System ECS](../../../images/cloud-enterprise-ec-logstash-beats-filebeat-dashboard.png "")

Now, you should have a good understanding of how to configure {{ls}} to ingest data from multiple Beats. You have the basics needed to begin experimenting with your own combination of Beats and modules.
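If you prefer to confirm the new indices from the command line, you can also query the deployment directly. The following is a minimal sketch using curl; the endpoint URL, user, and password are placeholders for your own deployment values, and the CA file is only needed when your ECE proxy uses a self-signed certificate:

```sh
# List the indices created by the Beats, along with their document counts.
# Replace the URL, credentials, and CA path with your own values.
curl --cacert /path/to/the/elastic-ece-ca-cert.pem \
  -u elastic:your_password \
  "https://your-deployment-url:9243/_cat/indices/metricbeat-*,filebeat-*?v"
```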
- diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md deleted file mode 100644 index 19eb056ee6..0000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md +++ /dev/null @@ -1,398 +0,0 @@ -# Ingest data from a relational database into Elastic Cloud Enterprise [ece-getting-started-search-use-cases-db-logstash] - -This guide explains how to ingest data from a relational database into Elastic Cloud Enterprise through [Logstash](logstash://reference/index.md), using the Logstash [JDBC input plugin](logstash://reference/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an Elastic Cloud Enterprise deployment. - -The code and methods presented here have been tested with MySQL. They should work with other relational databases. - -The Logstash Java Database Connectivity (JDBC) input plugin enables you to pull in data from many popular relational databases including MySQL and Postgres. Conceptually, the JDBC input plugin runs a loop that periodically polls the relational database for records that were inserted or modified since the last iteration of this loop. - -This document presents: - -1. [Prerequisites](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-prerequisites) -2. [Create a deployment](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-deployment) -3. [Connect securely](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-connect-securely) -4. [Get the MySQL JDBC driver](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-driver) -5. [Prepare a source MySQL database](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-database) -6. [Consider the database structure and design](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-database-structure) -7. [Configure a Logstash pipeline with the JDBC input plugin](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-pipeline) -8. [Output to Elasticsearch](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-output) - -*Time required: 2 hours* - - -## Prerequisites [ece-db-logstash-prerequisites] - -For this tutorial you need a source MySQL instance for Logstash to read from. A free version of MySQL is available from the [MySQL Community Server section](https://dev.mysql.com/downloads/mysql/) of the MySQL Community Downloads site. - - -## Create a deployment [ece-db-logstash-deployment] - -1. Log into the Elastic Cloud Enterprise admin console. -2. Select **Create deployment**. -3. Give your deployment a name. You can leave all other settings at their default values. -4. 
Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. -5. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. - - -## Connect securely [ece-db-logstash-connect-securely] - -When connecting to Elastic Cloud Enterprise you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - -To connect to, stream data to, and issue queries with Elastic Cloud Enterprise, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. - -1. [Download](https://www.elastic.co/downloads/logstash) and unpack Logstash on the local machine that hosts MySQL or another machine granted access to the MySQL machine. - - -## Get the MySQL JDBC driver [ece-db-logstash-driver] - -The Logstash JDBC input plugin does not include any database connection drivers. You need a JDBC driver for your relational database for the steps in the later section [Configure a Logstash pipeline with the JDBC input plugin](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-pipeline). - -1. Download and unpack the JDBC driver for MySQL from the [Connector/J section](https://dev.mysql.com/downloads/connector/j/) of the MySQL Community Downloads site. -2. Make a note of the driver’s location as it’s needed in the steps that follow. - - -## Prepare a source MySQL database [ece-db-logstash-database] - -Let’s look at a simple database from which you’ll import data and send it to Elastic Cloud Enterprise. This example uses a MySQL database with timestamped records. The timestamps enable you to determine easily what’s changed in the database since the most recent data transfer to Elastic Cloud Enterprise. - - -### Consider the database structure and design [ece-db-logstash-database-structure] - -For this example, let’s create a new database *es_db* with table *es_table*, as the source of our Elasticsearch data. - -1. Run the following SQL statement to generate a new MySQL database with a three column table: - - ```txt - CREATE DATABASE es_db; - USE es_db; - DROP TABLE IF EXISTS es_table; - CREATE TABLE es_table ( - id BIGINT(20) UNSIGNED NOT NULL, - PRIMARY KEY (id), - UNIQUE KEY unique_id (id), - client_name VARCHAR(32) NOT NULL, - modification_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP - ); - ``` - - Let’s explore the key concepts in this SQL snippet: - - es_table - : The name of the table that stores the data. - - id - : The unique identifier for records. *id* is defined as both a PRIMARY KEY and UNIQUE KEY to guarantee that each *id* appears only once in the current table. This is translated to *_id* for updating or inserting the document into Elasticsearch. - - client_name - : The data that will ultimately be ingested into Elasticsearch. For simplicity, this example includes only a single data field. - - modification_time - : The timestamp of when the record was inserted or last updated. 
Further in, you can use this timestamp to determine what has changed since the last data transfer into Elasticsearch. - -2. Consider how to handle deletions and how to notify Elasticsearch about them. Often, deleting a record results in its immediate removal from the MySQL database. There’s no record of that deletion. The change isn’t detected by Logstash, so that record remains in Elasticsearch. - - There are two possible ways to address this: - - * You can use "soft deletes" in your source database. Essentially, a record is first marked for deletion through a boolean flag. Other programs that are currently using your source database would have to filter out "soft deletes" in their queries. The "soft deletes" are sent over to Elasticsearch, where they can be processed. After that, your source database and Elasticsearch must both remove these "soft deletes." - * You can periodically clear the Elasticsearch indices that are based off of the database, and then refresh Elasticsearch with a fresh ingest of the contents of the database. - -3. Log in to your MySQL server and add three records to your new database: - - ```txt - use es_db - INSERT INTO es_table (id, client_name) - VALUES (1,"Targaryen"), - (2,"Lannister"), - (3,"Stark"); - ``` - -4. Verify your data with a SQL statement: - - ```txt - select * from es_table; - ``` - - The output should look similar to the following: - - ```txt - +----+-------------+---------------------+ - | id | client_name | modification_time | - +----+-------------+---------------------+ - | 1 | Targaryen | 2021-04-21 12:17:16 | - | 2 | Lannister | 2021-04-21 12:17:16 | - | 3 | Stark | 2021-04-21 12:17:16 | - +----+-------------+---------------------+ - ``` - - Now, let’s go back to Logstash and configure it to ingest this data. - - - -## Configure a Logstash pipeline with the JDBC input plugin [ece-db-logstash-pipeline] - -Let’s set up a sample Logstash input pipeline to ingest data from your new JDBC Plugin and MySQL database. Beyond MySQL, you can input data from any database that supports JDBC. - -1. In `/logstash-7.12.0/`, create a new text file named `jdbc.conf`. -2. Copy and paste the following code into this new text file. This code creates a Logstash pipeline through a JDBC plugin. - - ```txt - input { - jdbc { - jdbc_driver_library => "/mysql-connector-java-.jar" <1> - jdbc_driver_class => "com.mysql.jdbc.Driver" - jdbc_connection_string => "jdbc:mysql://:3306/es_db" <2> - jdbc_user => "" <3> - jdbc_password => "" <3> - jdbc_paging_enabled => true - tracking_column => "unix_ts_in_secs" - use_column_value => true - tracking_column_type => "numeric" - schedule => "*/5 * * * * *" - statement => "SELECT *, UNIX_TIMESTAMP(modification_time) AS unix_ts_in_secs FROM es_table WHERE (UNIX_TIMESTAMP(modification_time) > :sql_last_value AND modification_time < NOW()) ORDER BY modification_time ASC" - } - } - filter { - mutate { - copy => { "id" => "[@metadata][_id]"} - remove_field => ["id", "@version", "unix_ts_in_secs"] - } - } - output { - stdout { codec => "rubydebug"} - } - ``` - - 1. Specify the full path to your local JDBC driver .jar file (including version number). For example: `jdbc_driver_library => "/usr/share/mysql/mysql-connector-java-8.0.24.jar"` - 2. Provide the IP address or hostname and the port of your MySQL host. For example, `jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/es_db"` - 3. Provide your MySQL credentials. The username and password must both be enclosed in quotation marks. 
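Before moving on, it can help to confirm that the pieces the pipeline depends on are in place. A small sketch of two sanity checks; the driver path is the example used in the callout above, and `es_user` is a placeholder for your MySQL user:

```sh
# Confirm the JDBC driver .jar exists where jdbc.conf expects it (example path).
ls -l /usr/share/mysql/mysql-connector-java-8.0.24.jar

# Confirm MySQL is reachable and the table contains the three rows inserted earlier.
mysql -u es_user -p -e "SELECT COUNT(*) FROM es_db.es_table;"
```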
::::{note}
If you are using MariaDB (a popular open source community fork of MySQL), there are a couple of things that you need to do differently:

In place of the MySQL JDBC driver, download and unpack the [JDBC driver for MariaDB](https://downloads.mariadb.org/connector-java/).

Substitute the following lines in the `jdbc.conf` code, including the `ANSI_QUOTES` snippet in the last line:

```txt
jdbc_driver_library => "/mariadb-java-client-.jar"
jdbc_driver_class => "org.mariadb.jdbc.Driver"
jdbc_connection_string => "jdbc:mariadb://:3306/es_db?sessionVariables=sql_mode=ANSI_QUOTES"
```
::::

Following are some additional details about the Logstash pipeline code:

jdbc_driver_library
: The Logstash JDBC plugin does not come packaged with JDBC driver libraries. The JDBC driver library must be passed explicitly into the plugin using the `jdbc_driver_library` configuration option.

tracking_column
: This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](logstash://reference/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`.

unix_ts_in_secs
: The field generated by the SELECT statement, which contains the `modification_time` as a standard [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) (seconds since the epoch). The field is referenced by the `tracking_column` setting. A Unix timestamp is used for tracking progress rather than a normal timestamp, as a normal timestamp may cause errors due to the complexity of correctly converting back and forth between UTC and the local timezone.

sql_last_value
: This is a [built-in parameter](logstash://reference/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we're not resending data that is already stored in Elasticsearch.

schedule
: This uses cron syntax to specify how often Logstash should poll MySQL for changes. The specification `*/5 * * * * *` tells Logstash to contact MySQL every 5 seconds. The scheduling syntax is powered by [rufus-scheduler](https://github.com/jmettraux/rufus-scheduler) and is cron-like with some extensions specific to Rufus (for example, timezone support).

modification_time < NOW()
: This portion of the SELECT is explained in detail in the next section.

filter
: In this section, the value `id` is copied from the MySQL record into a metadata field called `_id`, which is later referenced in the output to ensure that each document is written into Elasticsearch with the correct `_id` value. Using a metadata field ensures that this temporary value does not cause a new field to be created.
The `id`, `@version`, and `unix_ts_in_secs` fields are also removed from the document, since they don’t need to be written to Elasticsearch. - - output - : This section specifies that each document should be written to the standard output using the rubydebug output to help with debugging. - -3. Launch Logstash with your new JDBC configuration file: - - ```txt - bin/logstash -f jdbc.conf - ``` - - Logstash outputs your MySQL data through standard output (`stdout`), your command line interface. The results for the initial data load should look similar to the following: - - ```txt - [INFO ] 2021-04-21 12:32:32.816 [Ruby-0-Thread-15: :1] jdbc - (0.009082s) SELECT * FROM (SELECT *, UNIX_TIMESTAMP(modification_time) AS unix_ts_in_secs FROM es_table WHERE (UNIX_TIMESTAMP(modification_time) > 0 AND modification_time < NOW()) ORDER BY modification_time ASC) AS 't1' LIMIT 100000 OFFSET 0 - { - "client_name" => "Targaryen", - "modification_time" => 2021-04-21T12:17:16.000Z, - "@timestamp" => 2021-04-21T12:17:16.923Z - } - { - "client_name" => "Lannister", - "modification_time" => 2021-04-21T12:17:16.000Z, - "@timestamp" => 2021-04-21T12:17:16.961Z - } - { - "client_name" => "Stark", - "modification_time" => 2021-04-21T12:17:16.000Z, - "@timestamp" => 2021-04-21T12:17:16.963Z - } - ``` - - The Logstash results periodically display SQL SELECT statements, even when there’s nothing new or modified in the MySQL database: - - ```txt - [INFO ] 2021-04-21 12:33:30.407 [Ruby-0-Thread-15: :1] jdbc - (0.002835s) SELECT count(*) AS 'count' FROM (SELECT *, UNIX_TIMESTAMP(modification_time) AS unix_ts_in_secs FROM es_table WHERE (UNIX_TIMESTAMP(modification_time) > 1618935436 AND modification_time < NOW()) ORDER BY modification_time ASC) AS 't1' LIMIT 1 - ``` - -4. Open your MySQL console. Let’s insert another record into that database using the following SQL statement: - - ```txt - use es_db - INSERT INTO es_table (id, client_name) - VALUES (4,"Baratheon"); - ``` - - Switch back to your Logstash console. Logstash detects the new record and the console displays results similar to the following: - - ```txt - [INFO ] 2021-04-21 12:37:05.303 [Ruby-0-Thread-15: :1] jdbc - (0.001205s) SELECT * FROM (SELECT *, UNIX_TIMESTAMP(modification_time) AS unix_ts_in_secs FROM es_table WHERE (UNIX_TIMESTAMP(modification_time) > 1618935436 AND modification_time < NOW()) ORDER BY modification_time ASC) AS 't1' LIMIT 100000 OFFSET 0 - { - "client_name" => "Baratheon", - "modification_time" => 2021-04-21T12:37:01.000Z, - "@timestamp" => 2021-04-21T12:37:05.312Z - } - ``` - -5. Review the Logstash output results to make sure your data looks correct. Use `CTRL + C` to shut down Logstash. - - -## Output to Elasticsearch [ece-db-logstash-output] - -In this section, we configure Logstash to send the MySQL data to Elasticsearch. We modify the configuration file created in the section [Configure a Logstash pipeline with the JDBC input plugin](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ece-db-logstash-pipeline) so that data is output directly to Elasticsearch. We start Logstash to send the data, and then log into Elastic Cloud Enterprise to verify the data in Kibana. - -1. Open the `jdbc.conf` file in the Logstash folder for editing. -2. 
Update the output section with the one that follows: - - ```txt - output { - elasticsearch { - index => "rdbms_idx" - ilm_enabled => false - cloud_id => ":" <1> - cloud_auth => "elastic:" <2> - ssl => true - cacert => '/path/to/the/elastic-ece-ca-cert.pem' <3> - # api_key => "" - } - } - ``` - - 1. Use the Cloud ID of your Elastic Cloud Enterprise deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check [Configuring security in Logstash](logstash://reference/secure-connection.md) for information on roles and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](logstash://reference/secure-connection.md) documentation. - 3. This line is only used when you have a self signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. You can retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this example, the file is named `elastic-ece-ca-cert.pem`. - - - Following are some additional details about the configuration file settings: - - index - : The name of the Elasticsearch index, `rdbms_idx`, to associate the documents. - - api_key - : If you choose to use an API key to authenticate (as discussed in the next step), you can provide it here. - -3. **Optional**: For additional security, you can generate an Elasticsearch API key through the Elastic Cloud Enterprise console and configure Logstash to use the new key to connect securely to Elastic Cloud Enterprise. - - 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). - 2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. - 3. Enter the following: - - ```json - POST /_security/api_key - { - "name": "logstash-apikey", - "role_descriptors": { - "logstash_read_write": { - "cluster": ["manage_index_templates", "monitor"], - "index": [ - { - "names": ["logstash-*","rdbms_idx"], - "privileges": ["create_index", "write", "read", "manage"] - } - ] - } - } - } - ``` - - This creates an API key with the cluster `monitor` privilege which gives read-only access for determining the cluster state, and `manage_index_templates` allows all operations on index templates. Some additional privileges also allow `create_index`, `write`, and `manage` operations for the specified index. The index `manage` privilege is added to enable index refreshes. - - 4. Click **▶**. The output should be similar to the following: - - ```json - { - "api_key": "tV1dnfF-GHI59ykgv4N0U3", - "id": "2TBR42gBabmINotmvZjv", - "name": "logstash_api_key" - } - ``` - - 5. 
Enter your new `api_key` value into the Logstash `jdbc.conf` file, in the format `:`. If your results were as shown in this example, you would enter `2TBR42gBabmINotmvZjv:tV1dnfF-GHI59ykgv4N0U3`. Remember to remove the pound (`#`) sign to uncomment the line, and comment out the `username` and `password` lines:

    ```txt
    output {
      elasticsearch {
        index => "rdbms_idx"
        cloud_id => ""
        ssl => true
        ilm_enabled => false
        api_key => "2TBR42gBabmINotmvZjv:tV1dnfF-GHI59ykgv4N0U3"
        # user => ""
        # password => ""
      }
    }
    ```

4. At this point, if you simply restart Logstash as is with your new output, then no MySQL data is sent to your Elasticsearch index.

    Why? Logstash retains the previous `sql_last_value` timestamp and sees that no new changes have occurred in the MySQL database since that time. Therefore, based on the SQL query that we configured, there's no new data for Logstash to retrieve.

    Solution: Add `clean_run => true` as a new line in the JDBC input section of the `jdbc.conf` file. When set to `true`, this parameter resets `sql_last_value` back to zero.

    ```txt
    input {
      jdbc {
        ...
        clean_run => true
        ...
      }
    }
    ```

    After running Logstash once with `clean_run` set to `true`, you can remove the `clean_run` line, unless you prefer the reset behavior to happen again at each restart of Logstash.

5. Open a command line interface instance, go to your Logstash installation path, and start Logstash:

    ```txt
    bin/logstash -f jdbc.conf
    ```

6. Logstash outputs the MySQL data to your Elastic Cloud Enterprise deployment. Let's take a look in Kibana and verify the data:

    1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md).
    2. Select the deployment and go to **☰** > **Management** > **Dev Tools**.
    3. Copy and paste the following API GET request into the Console pane, and then click **▶**. This queries all records in the new `rdbms_idx` index.

        ```txt
        GET rdbms_idx/_search
        {
          "query": {
            "match_all": {}
          }
        }
        ```

    4. The Results pane lists the `client_name` records originating from your MySQL database, similar to the following example:

        ![A picture showing query results with three records](../../../images/cloud-enterprise-ec-logstash-db-results-scenarios.png "")

Now, you should have a good understanding of how to configure Logstash to ingest data from your relational database through the JDBC input plugin. You also know the design considerations for tracking records that are new, modified, or deleted. You should have the basics needed to begin experimenting with your own database and Elasticsearch.

diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md
deleted file mode 100644
index 42886a9b2a..0000000000
--- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md
+++ /dev/null
@@ -1,526 +0,0 @@

# Ingest logs from a Node.js web application using Filebeat [ece-getting-started-search-use-cases-node-logs]

This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an Elastic Cloud Enterprise deployment. You'll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you'll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server.
While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](ecs-logging://reference/intro.md#_get_started). - -This guide presents: - -1. [Prerequisites](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-prerequisites) -2. [Create a deployment](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-deployment) -3. [Connect securely](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-connect-securely) -4. [Create a Node.js web application with logging](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-create-server-script) -5. [Create a Node.js HTTP request application](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-create-request-script) -6. [Set up Filebeat](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-filebeat) -7. [Send the Node.js logs to Elasticsearch](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-send-ess) -8. [Create log visualizations in Kibana](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ece-node-logs-view-kibana) - -*Time required: 1.5 hours* - - -## Prerequisites [ece-node-logs-prerequisites] - -To complete these steps you need the following applications installed on your system: - -* [Node.js](https://nodejs.org/) - You will set up a simple Node.js web server and client application. Check the Node.js download page for installation instructions. - -::::{tip} -For the three following packages, you can create a working directory to install the packages using the Node package manager (NPM). Then, you can run your Node.js webserver and client from the same directory so that it can use the packages. Alternatively, you can also install the Node packages globally by running the Node package install commands with the `-g` option. Refer to the NPM [package installation instructions](https://docs.npmjs.com/downloading-and-installing-packages-globally) for details. -:::: - - -* [winston](https://www.npmjs.com/package/winston) - This is a popular logging package for Node.js. Create a new, local directory and run the following command to install winston in it: - - ```sh - npm install winston - ``` - -* The [Elastic Common Schema (ECS) formatter](ecs-logging-nodejs://reference/winston.md) for the Node.js winston logger - This plugin formats your Node.js logs into an ECS structured JSON format ideally suited for ingestion into Elasticsearch. To install the ECS winston logger, run the following command in your working directory so that the package is installed in the same location as the winston package: - - ```sh - npm install @elastic/ecs-winston-format - ``` - -* [Got](https://www.npmjs.com/package/got) - Got is a "Human-friendly and powerful HTTP request library for Node.js." - This plugin can be used to query the sample web server used in the tutorial. 
To install the Got package, run the following command in your working directory: - - ```sh - npm install got - ``` - - - -## Create a deployment [ece-node-logs-deployment] - -1. Log into the Elastic Cloud Enterprise admin console. -2. Select **Create deployment**. -3. Give your deployment a name. You can leave all other settings at their default values. -4. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. -5. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. - - -## Connect securely [ece-node-logs-connect-securely] - -When connecting to Elastic Cloud Enterprise you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - -To connect to, stream data to, and issue queries with Elastic Cloud Enterprise, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. - - -## Create a Node.js web application with logging [ece-node-logs-create-server-script] - -Next, create a basic Node.js script that runs a web server and logs HTTP requests. - -1. In the same local directory where you installed the winston and ECS formatter packages, create a new file *webserver.js* and save it with these contents: - - ```javascript - const http = require('http') - const winston = require('winston') - const ecsFormat = require('@elastic/ecs-winston-format') - - const logger = winston.createLogger({ - level: 'debug', - format: ecsFormat({ convertReqRes: true }), - transports: [ - //new winston.transports.Console(), - new winston.transports.File({ - //path to log file - filename: 'logs/log.json', - level: 'debug' - }) - ] - }) - - const server = http.createServer(handler) - server.listen(3000, () => { - logger.info('listening at http://localhost:3000') - }) - - function handler (req, res) { - res.setHeader('Foo', 'Bar') - res.end('ok') - logger.info('handled request', { req, res }) - } - ``` - - This Node.js script runs a web server at `http://localhost:3000` and uses the winston logger to send logging events, based on HTTP requests, to the file `log.json`. - -2. Try a test run of the Node.js script: - - ```sh - node webserver.js - ``` - -3. With the script running, open a web browser to `http://localhost:3000` and there should be a simple `ok` message. -4. In the directory where you created `webserver.js`, you should now find a newly created `log.json` file. Open the file and check the contents. There should be one log entry indicating that Node.js is listening on the localhost port, and another entry for the HTTP request from when you opened `localhost` in your browser. - - Leave `webserver.js` running for now and we’ll send it some HTTP requests. - - - -## Create a Node.js HTTP request application [ece-node-logs-create-request-script] - -In this step, you’ll create a Node.js application that sends HTTP requests to your web server. - -1. 
In your working directory, create a file `webrequests.js` and save it with these contents: - - ```javascript - const got = require('got'); - - const addresses = [ - 'aardvark@the.zoo', - 'crocodile@the.zoo', - 'elephant@the.zoo', - 'emu@the.zoo', - 'hippopotamus@the.zoo', - 'llama@the.zoo', - 'octopus@the.zoo', - 'otter@the.zoo', - 'panda@the.zoo', - 'pangolin@the.zoo', - 'tortoise@the.zoo', - 'walrus@the.zoo' - ]; - - const method = [ - 'get', - 'put', - 'post' - ]; - - async function sleep(millis) { - return new Promise(resolve => setTimeout(resolve, millis)); - } - - (async () => { - while (true) { - var type = Math.floor(Math.random() * method.length); - var email = Math.floor(Math.random() * addresses.length); - var sleeping = Math.floor(Math.random() * 9) + 1; - - switch (method[type]) { - case 'get': - try { - const response = await got.get('http://localhost:3000/', { - headers: { - from: addresses[email] - } - }).json(); - console.log(response.body); - } catch (error) { - console.log(error.response.body); - } - break; // end case 'get' - case 'put': - try { - const response = await got.put('http://localhost:3000/', { - headers: { - from: addresses[email] - } - }).json(); - console.log(response.body); - } catch (error) { - console.log(error.response.body); - } - break; // end case 'put' - case 'post': - try { - const { - data - } = await got.post('http://localhost:3000/', { - headers: { - from: addresses[email] - } - }).json(); - console.log(data); - } catch (error) { - console.log(error.response.body); - } - break; // end case 'post' - } // end switch on method - await sleep(sleeping * 1000); - } - })(); - ``` - - This Node.js app generates HTTP requests with a random method of type `GET`, `POST`, or `PUT`, and a random `from` request header using various pretend email addresses. The requests are sent at random intervals between 1 and 10 seconds. - - The [Got package](https://www.npmjs.com/package/got) is used to send the requests, and they are directed to your web server at `http://localhost:3000`. To learn about sending custom headers such as the `from` field used in this example, check [headers](https://github.com/sindresorhus/got/blob/0fb6ec60d299fd9b48966608a4c3f201746d821c/documentation/2-options.md#headers) in the Got documentation. - -2. In a new terminal window, give the Node.js script a trial run: - - ```sh - node webrequests.js - ``` - -3. After the script has run for about 30 seconds, enter *CTRL + C* to stop it. Have a look at your Node.js `logs/log.json` file. It should contain some entries like this one: - - ```json - {"@timestamp":"2021-09-09T18:42:20.799Z","log.level":"info","message":"handled request","ecs":{"version":"1.6.0"},"http":{"version":"1.1","request":{"method":"POST","headers":{"user-agent":"got (https://github.com/sindresorhus/got)","from":"octopus@the.zoo","accept":"application/json","accept-encoding":"gzip, deflate, br","host":"localhost:3000","connection":"close","content-length":"0"},"body":{"bytes":0}},"response":{"status_code":200,"headers":{"foo":"Bar"}}},"url":{"path":"/","full":"http://localhost:3000/"},"client":{"address":"::ffff:127.0.0.1","ip":"::ffff:127.0.0.1","port":49930},"user_agent":{"original":"got (https://github.com/sindresorhus/got)"}} - ``` - - Each log entry contains details of the HTTP request. In particular, in this example you can find the timestamp of the request, a request method of type `PUT`, and a request `from` header with the email address `octopus@the.zoo`. 
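If you have `jq` installed, a quick way to scan the entries is to pull out a few fields per line. This is just a sketch; the field paths assume the layout shown in the sample entry above:

```sh
# Print the timestamp, request method, and "from" header of each logged request.
jq -r '[."@timestamp", .http.request.method, .http.request.headers.from] | @tsv' logs/log.json
```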
Your example will likely be a bit different since the request type and the email address are generated randomly.

    Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsable format becomes increasingly important as the volume and type of data captured in your logs expands over time.

4. After confirming that both `webserver.js` and `webrequests.js` run as expected, enter *CTRL + C* to stop the Node.js script, and also delete `log.json`.


## Set up Filebeat [ece-node-logs-filebeat]

Filebeat offers a straightforward, easy-to-configure way to monitor your Node.js log files and port the log data into Elastic Cloud Enterprise.

**Get Filebeat**

[Download Filebeat](https://www.elastic.co/downloads/beats/filebeat) and unpack it on the local server from which you want to collect data.

**Configure Filebeat to access Elastic Cloud Enterprise**

In */filebeat-/* (where ** is the directory where Filebeat is installed and ** is the Filebeat version number), open the *filebeat.yml* configuration file for editing.

```txt
# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
cloud.id: my-deployment:yTMtd5VzdKEuP2NwPbNsb3VkLtKzLmldJDcyMzUyNjBhZGP7MjQ4OTZiNTIxZTQyOPY2C2NeOGQwJGQ2YWQ4M5FhNjIyYjQ9ODZhYWNjKDdlX2Yz4ELhRYJ7 <1>

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `:`.
cloud.auth: elastic:591KhtuAgTP46by9C4EmhGuk <2>

setup.kibana:
  ssl.certificate_authorities: ["/path/to/your/elastic-ece-ca-cert.pem"] <3>

output.elasticsearch:
  ssl.certificate_authorities: ["/path/to/your/elastic-ece-ca-cert.pem"] <3>
#
```

1. Uncomment the `cloud.id` line and add the deployment's Cloud ID. You can include or omit the *:* prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
2. Uncomment the `cloud.auth` line and add the username and password for your deployment that you recorded when you created your deployment. The format is *:*, for example *elastic:57ugj782kvkwmSKg8uVe*.
3. The two lines related to `ssl.certificate_authorities` are used only when you have a self-signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM-formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. You can retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this example, the file is named `elastic-ece-ca-cert.pem`.

**Configure Filebeat inputs**

Filebeat has several ways to collect logs. For this example, you'll configure log collection manually.

In the *filebeat.inputs* section of *filebeat.yml*, set *enabled:* to *true*, and set *paths:* to the location of your web server log file.
In this example, set the same directory where you saved *webserver.js*: - -```txt -filebeat.inputs: - -# Each - is an input. Most options can be set at the input level, so -# you can use different inputs for various configurations. -# Below are the input specific configurations. - -- type: log - - # Change to true to enable this input configuration. - enabled: true - - # Paths that should be crawled and fetched. Glob based paths. - paths: - - /path/to/logs/log.json -``` - -::::{tip} -You can specify a wildcard (***) character to indicate that all log files in the specified directory should be read. You can also use a wildcard to read logs from multiple directories. For example `/var/log/*/*.log`. -:::: - - -**Add the JSON input options** - -Filebeat’s input configuration options include several settings for decoding JSON messages. Log files are decoded line by line, so it’s important that they contain one JSON object per line. - -For this example, Filebeat uses the following four decoding options. - -```txt - json.keys_under_root: true - json.overwrite_keys: true - json.add_error_key: true - json.expand_keys: true -``` - -To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference. - -Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this: - -```yaml -# ============================== Filebeat inputs =============================== - -filebeat.inputs: - -# Each - is an input. Most options can be set at the input level, so -# you can use different inputs for various configurations. -# Below are the input specific configurations. - -- type: log - - # Change to true to enable this input configuration. - enabled: true - - # Paths that should be crawled and fetched. Glob based paths. - paths: - - /path/to/logs/log.json - json.keys_under_root: true - json.overwrite_keys: true - json.add_error_key: true - json.expand_keys: true -``` - -**Finish setting up Filebeat** - -Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets, run the following from the Filebeat installation directory: - -```txt -./filebeat setup -e -``` - -::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. -:::: - - -The setup process takes a couple of minutes. If everything goes successfully you should get a confirmation message: - -```txt -Loaded Ingest pipelines -``` - -The Filebeat data view is now available in Elasticsearch. To verify: - -1. [Login to Kibana](../../../deploy-manage/deploy/cloud-enterprise/create-deployment.md). -2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**. -3. In the search bar, search for *filebeat*. You should get *filebeat-** in the search results. 
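Before launching Filebeat, you can also ask it to validate your changes. A short check using Filebeat's built-in test commands, run from the Filebeat installation directory:

```sh
# Verify that filebeat.yml parses cleanly.
./filebeat test config -c filebeat.yml

# Verify that Filebeat can connect to the configured Elasticsearch output.
./filebeat test output -c filebeat.yml
```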
- -**Optional: Use an API key to authenticate** - -For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the Elastic Cloud Enterprise console, and then configure Filebeat to use the new key to connect securely to the Elastic Cloud Enterprise deployment. - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. -3. Enter the following request: - - ```json - POST /_security/api_key - { - "name": "filebeat-api-key", - "role_descriptors": { - "logstash_read_write": { - "cluster": ["manage_index_templates", "monitor"], - "index": [ - { - "names": ["filebeat-*"], - "privileges": ["create_index", "write", "read", "manage"] - } - ] - } - } - } - ``` - - This creates an API key with the cluster `monitor` privilege which gives read-only access for determining the cluster state, and `manage_index_templates` which allows all operations on index templates. Some additional privileges also allow `create_index`, `write`, and `manage` operations for the specified index. The index `manage` privilege is added to enable index refreshes. - -4. Click **▶**. The output should be similar to the following: - - ```json - { - "api_key": "tV1dnfF-GHI59ykgv4N0U3", - "id": "2TBR42gBabmINotmvZjv", - "name": "filebeat-api-key" - } - ``` - -5. Add your API key information to the *Elasticsearch Output* section of *filebeat.yml*, just below *output.elasticsearch:*. Use the format `:`. If your results are as shown in this example, enter `2TBR42gBabmINotmvZjv:tV1dnfF-GHI59ykgv4N0U3`. -6. Add a pound (`#`) sign to comment out the *cloud.auth: elastic:* line, since Filebeat will use the API key instead of the deployment username and password to authenticate. - - ```txt - # =============================== Elastic Cloud ================================ - - # These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/). - - # The cloud.id setting overwrites the `output.elasticsearch.hosts` and - # `setup.kibana.host` options. - # You can find the `cloud.id` in the Elastic Cloud web UI. - cloud.id: my-deployment:yTMtd5VzdKEuP2NwPbNsb3VkLtKzLmldJDcyMzUyNjBhZGP7MjQ4OTZiNTIxZTQyOPY2C2NeOGQwJGQ2YWQ4M5FhNjIyYjQ9ODZhYWNjKDdlX2Yz4ELhRYJ7 - - # The cloud.auth setting overwrites the `output.elasticsearch.username` and - # `output.elasticsearch.password` settings. The format is `:`. - #cloud.auth: elastic:591KhtuAgTP46by9C4EmhGuk - - # ================================== Outputs =================================== - - # Configure what output to use when sending the data collected by the beat. - - # ---------------------------- Elasticsearch Output ---------------------------- - output.elasticsearch: - # Array of hosts to connect to. - api_key: "2TBR42gBabmINotmvZjv:tV1dnfF-GHI59ykgv4N0U3" - ``` - - - -## Send the Node.js logs to Elasticsearch [ece-node-logs-send-ess] - -It’s time to send some log data into {{es}}! - -**Launch Filebeat and webserver.js** - -Launch Filebeat by running the following from the Filebeat installation directory: - -```txt -./filebeat -e -c filebeat.yml -``` - -In this command: - -* The *-e* flag sends output to the standard error instead of the configured log output. -* The *-c* flag specifies the path to the Filebeat config file. 
::::{note}
Just in case the command doesn't work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*.
::::

Filebeat should now be running and monitoring the contents of *log.json*, which actually doesn't exist yet. So, let's create it. Open a new terminal instance and run the *webserver.js* Node.js script:

```sh
node webserver.js
```

Next, run the Node.js `webrequests.js` script to send random requests to the Node.js web server.

```sh
node webrequests.js
```

Let the script run for a few minutes and maybe brew up a quick coffee or tea ☕. After that, make sure that the *log.json* file is generated as expected and is populated with several log entries.

**Verify the log entries in Elastic Cloud Enterprise**

The next step is to confirm that the log data has successfully found its way into Elastic Cloud Enterprise.

1. [Log in to Kibana](../../../deploy-manage/deploy/cloud-enterprise/create-deployment.md).
2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**.
3. In the search bar, search for *filebeat*. You should get *filebeat-** in the search results.
4. Select *filebeat-**.

The Filebeat data view shows a list of fields and their details.


## Create log visualizations in Kibana [ece-node-logs-view-kibana]

Now it's time to create visualizations based on the application log data.

1. Open the Kibana main menu and select **Dashboard**, then **Create dashboard**.
2. Select **Create visualization**. The [Lens](../../../explore-analyze/visualize/lens.md) visualization editor opens.
3. In the data view dropdown box, select **filebeat-***, if it isn't already selected.
4. In the **CHART TYPE** dropdown box, select **Bar vertical stacked**, if it isn't already selected.
5. Check that the [time filter](../../../explore-analyze/query-filter/filtering.md) is set to **Last 15 minutes**.
6. From the **Available fields** list, drag and drop the **@timestamp** field onto the visualization builder.
7. Drag and drop the **http.request.method** field onto the visualization builder.
8. A stacked bar chart now shows the relative frequency of each of the three request methods used in our example, measured over time.

    ![A screen capture of the Kibana "Bar vertical stacked" visualization with several bars. The X axis shows "Count of records" and the Y axis shows "@timestamp per 30 seconds". Each bar is divided into three HTTP request methods: GET](../../../images/cloud-enterprise-ec-node-logs-methods.png "")

9. Select **Save and return** to add this visualization to your dashboard.

Let's create a second visualization.

1. Select **Create visualization**.
2. Again, make sure that **CHART TYPE** is set to **Bar vertical stacked**.
3. From the **Available fields** list, drag and drop the **@timestamp** field onto the visualization builder.
4. Drag and drop the **http.request.headers.from** field onto the visualization builder.
5. In the chart settings area, under **Break down by**, select **Top values of http.request.headers.from** and set **Number of values** to *12*. In this example there are twelve different email addresses used in the HTTP *from* header, so this parameter sets all of them to appear in the chart legend.
6. Select **Refresh**.
A stacked bar chart now shows the relative frequency of each of the HTTP *from* headers over time. - - ![A screen capture of the visualization builder](../../../images/cloud-enterprise-ec-node-logs-content.png "") - -7. Select **Save and return** to add this visualization to your dashboard. - -And now for the final visualization. - -1. Select **Create visualization**. -2. In the **CHART TYPE** dropdown box, select **Donut**. -3. From the list of available fields, drag and drop the **http.request.method** field onto the visualization builder. A donut chart appears. - - ![A screen capture of a donut chart divided into three sections](../../../images/cloud-enterprise-ec-node-logs-donut.png "") - -4. Select **Save and return** to add this visualization to your dashboard. -5. Select **Save** and add a title to save your new dashboard. - -You now have a Kibana dashboard with three visualizations: a stacked bar chart showing the frequency of each HTTP request method over time, another stacked bar chart showing the frequency of various HTTP *from* headers over time, and a donut chart showing the relative frequency of each HTTP request method type. - -You can add titles to the visualizations, resize and position them as you like, and then save your changes. - -**View log data updates in real time** - -1. Select **Refresh** on the Kibana dashboard. Since the application `webrequests.js` continues to run and send HTTP requests to the Node.js server, `webserver.js` continues to generate log data, and your Kibana visualizations update with that data with each page refresh. - - ![A screen capture of the completed Kibana dashboard](../../../images/cloud-enterprise-ec-node-logs-final-dashboard.png "") - -2. As your final step, remember to stop Filebeat, the Node.js web server, and the client. Enter *CTRL + C* in the terminal window for each application to stop them. - -You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an Elastic Cloud Enterprise deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-ingest-data.html#ece-ingest-methods) to learn all about working in Elastic Cloud Enterprise. - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md deleted file mode 100644 index 5072efa781..0000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md +++ /dev/null @@ -1,417 +0,0 @@ -# Ingest logs from a Python application using Filebeat [ece-getting-started-search-use-cases-python-logs] - -This guide demonstrates how to ingest logs from a Python application and deliver them securely into an Elastic Cloud Enterprise deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](ecs-logging://reference/intro.md). 
-
-You are going to learn how to:
-
-1. [Create a Python script with logging](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md#ece-python-logs-create-script)
-2. [Set up Filebeat](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md#ece-python-logs-filebeat)
-3. [Send the Python logs to {{es}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md#ece-python-logs-send-ess)
-4. [Create log visualizations in Kibana](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md#ece-python-logs-view-kibana)
-
-*Time required: 1 hour*
-
-
-## Prerequisites [ece_prerequisites_2]
-
-To complete these steps you need to have [Python](https://www.python.org/) installed on your system as well as the [Elastic Common Schema (ECS) logger](ecs-logging-python://reference/installation.md) for the Python logging library.
-
-To install *ecs-logging-python*, run:
-
-```sh
-python -m pip install ecs-logging
-```
-
-
-## Create a deployment [ece_create_a_deployment_3]
-
-1. Log in to the Elastic Cloud Enterprise admin console.
-2. Select **Create deployment**.
-3. Give your deployment a name. You can leave all other settings at their default values.
-4. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on.
-5. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage, click **I’d like to do something else**.
-
-
-## Connect securely [ece_connect_securely_2]
-
-When connecting to Elastic Cloud Enterprise you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
-
-To connect to, stream data to, and issue queries with Elastic Cloud Enterprise, you need to think about authentication. Two authentication mechanisms are supported: *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments.
-
-
-## Create a Python script with logging [ece-python-logs-create-script]
-
-In this step, you’ll create a Python script that generates logs in JSON format, using Python’s standard logging module.
-
-1. In a local directory, create a new file *elvis.py* and save it with these contents:
-
-    ```python
-    #!/usr/bin/python
-
-    import logging
-    import ecs_logging
-    import time
-    from random import randint
-
-    logger = logging.getLogger("app")
-    logger.setLevel(logging.DEBUG)
-    handler = logging.FileHandler('elvis.json')
-    handler.setFormatter(ecs_logging.StdlibFormatter())
-    logger.addHandler(handler)
-
-    print("Generating log entries...")
-
-    messages = [
-        "Elvis has left the building.",
-        "Elvis has left the oven on.",
-        "Elvis has two left feet.",
-        "Elvis was left out in the cold.",
-        "Elvis was left holding the baby.",
-        "Elvis left the cake out in the rain.",
-        "Elvis came out of left field.",
-        "Elvis exited stage left.",
-        "Elvis took a left turn.",
-        "Elvis left no stone unturned.",
-        "Elvis picked up where he left off.",
-        "Elvis's train has left the station."
-    ]
-
-    while True:
-        # Pick a message index; values above 11 are folded back to 0 so that
-        # "Elvis has left the building." is the most probable log event.
-        random1 = randint(0,15)
-        random2 = randint(1,10)
-        if random1 > 11:
-            random1 = 0
-        if random1 <= 4:
-            logger.info(messages[random1], extra={"http.request.body.content": messages[random1]})
-        elif random1 >= 5 and random1 <= 8:
-            logger.warning(messages[random1], extra={"http.request.body.content": messages[random1]})
-        elif random1 >= 9 and random1 <= 10:
-            logger.error(messages[random1], extra={"http.request.body.content": messages[random1]})
-        else:
-            logger.critical(messages[random1], extra={"http.request.body.content": messages[random1]})
-        time.sleep(random2)
-    ```
-
-    This Python script randomly generates one of twelve log messages, continuously, at a random interval of between 1 and 10 seconds. The log messages are written to the file `elvis.json`, each with a timestamp, a log level of *info*, *warning*, *error*, or *critical*, and other data. Just to add some variance to the log data, the *info* message *Elvis has left the building* is set to be the most probable log event.
-
-    For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.html).
-
-    Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsable format becomes increasingly important as the volume and type of data captured in your logs expands over time.
-
-    In addition to the standard fields included for each log entry, there is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs-field-reference.md) for the full list of available fields.
-
-2. Let’s give the Python script a test run. Open a terminal instance in the location where you saved *elvis.py* and run the following:
-
-    ```sh
-    python elvis.py
-    ```
-
-    After the script has run for about 15 seconds, enter *CTRL + C* to stop it. Have a look at the newly generated *elvis.json*. It should contain one or more entries like this one:
-
-    ```json
-    {"@timestamp":"2021-06-16T02:19:34.687Z","log.level":"info","message":"Elvis has left the building.","ecs":{"version":"1.6.0"},"http":{"request":{"body":{"content":"Elvis has left the building."}}},"log":{"logger":"app","origin":{"file":{"line":39,"name":"elvis.py"},"function":""},"original":"Elvis has left the building."},"process":{"name":"MainProcess","pid":3044,"thread":{"id":4444857792,"name":"MainThread"}}}
-    ```
-
-3. After confirming that *elvis.py* runs as expected, you can delete *elvis.json*.
-
-
-## Set up Filebeat [ece-python-logs-filebeat]
-
-Filebeat offers a straightforward, easy-to-configure way to monitor your Python log files and port the log data into Elastic Cloud Enterprise.
-
-**Get Filebeat**
-
-[Download Filebeat](https://www.elastic.co/downloads/beats/filebeat) and unpack it on the local server from which you want to collect data.
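-
-If you prefer to work entirely from the terminal, the download and unpack steps look roughly like the following. This is a minimal sketch for Linux x86_64; the version number is a placeholder, so substitute the release that matches your deployment and the package for your operating system:
-
-```sh
-# Download and unpack Filebeat (replace 8.17.0 with your Filebeat version)
-curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.17.0-linux-x86_64.tar.gz
-tar xzvf filebeat-8.17.0-linux-x86_64.tar.gz
-cd filebeat-8.17.0-linux-x86_64
-```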
-
-**Configure Filebeat to access Elastic Cloud Enterprise**
-
-In *<localpath>/filebeat-<version>/* (where *<localpath>* is the directory where Filebeat is installed and *<version>* is the Filebeat version number), open the *filebeat.yml* configuration file for editing.
-
-```txt
-# =============================== Elastic Cloud ================================
-
-# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
-
-# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
-# `setup.kibana.host` options.
-# You can find the `cloud.id` in the Elastic Cloud web UI.
-cloud.id: my-deployment:long-hash <1>
-
-# The cloud.auth setting overwrites the `output.elasticsearch.username` and
-# `output.elasticsearch.password` settings. The format is `<user>:<password>`.
-cloud.auth: elastic:password <2>
-
-setup.kibana:
-  ssl.certificate_authorities: ["/path/to/your/elastic-ece-ca-cert.pem"] <3>
-
-output.elasticsearch:
-  ssl.certificate_authorities: ["/path/to/your/elastic-ece-ca-cert.pem"] <3>
-#
-```
-
-1. Uncomment the `cloud.id` line and add the deployment’s Cloud ID. You can include or omit the *<deployment name>:* prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
-2. Uncomment the `cloud.auth` line and add the username and password for your deployment that you recorded when you created your deployment. The format is *<user>:<password>*, for example *elastic:57ugj782kvkwmSKg8uVe*.
-3. The two lines related to `ssl.certificate_authorities` are used only when you have a self-signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM-formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. You can retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this example, the file is named `elastic-ece-ca-cert.pem`.
-
-
-**Configure Filebeat inputs**
-
-Filebeat has several ways to collect logs. For this example, you’ll configure log collection manually.
-
-In the *filebeat.inputs* section of *filebeat.yml*, set *enabled:* to *true*, and set *paths:* to the location of your log file or files. In this example, set the same directory where you saved *elvis.py*:
-
-```txt
-filebeat.inputs:
-
-# Each - is an input. Most options can be set at the input level, so
-# you can use different inputs for various configurations.
-# Below are the input specific configurations.
-
-- type: log
-
-  # Change to true to enable this input configuration.
-  enabled: true
-
-  # Paths that should be crawled and fetched. Glob based paths.
-  paths:
-    - /path/to/log/files/*.json
-```
-
-You can specify a wildcard (***) character to indicate that all log files in the specified directory should be read. You can also use a wildcard to read logs from multiple directories. For example `/var/log/*/*.log`.
-
-**Add the JSON input options**
-
-Filebeat’s input configuration options include several settings for decoding JSON messages. Log files are decoded line by line, so it’s important that they contain one JSON object per line.
-
-For this example, Filebeat uses the following four decoding options.
-
-```txt
-  json.keys_under_root: true
-  json.overwrite_keys: true
-  json.add_error_key: true
-  json.expand_keys: true
-```
-
-To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference.
-
-Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this:
-
-```yaml
-# ============================== Filebeat inputs ===============================
-
-filebeat.inputs:
-
-# Each - is an input. Most options can be set at the input level, so
-# you can use different inputs for various configurations.
-# Below are the input specific configurations.
-
-- type: log
-
-  # Change to true to enable this input configuration.
-  enabled: true
-
-  # Paths that should be crawled and fetched. Glob based paths.
-  paths:
-    - /path/to/log/files/*.json
-  json.keys_under_root: true
-  json.overwrite_keys: true
-  json.add_error_key: true
-  json.expand_keys: true
-```
-
-**Finish setting up Filebeat**
-
-Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets, run the following from the Filebeat installation directory:
-
-```txt
-./filebeat setup -e
-```
-
-::::{important}
-Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of `filebeat.yml`. You can also try running the command as *root*: *sudo ./filebeat setup -e*, or you can disable strict permission checks by running the command with the `--strict.perms=false` option.
-::::
-
-
-The setup process takes a couple of minutes. If everything goes successfully you should get a confirmation message:
-
-```txt
-Loaded Ingest pipelines
-```
-
-The Filebeat data view (formerly *index pattern*) is now available in Elasticsearch. To verify:
-
-::::{note}
-Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been renamed to *data views*. To learn more, check the Kibana [What’s new in 8.0](https://www.elastic.co/guide/en/kibana/8.0/whats-new.html#index-pattern-rename) page.
-::::
-
-
-1. [Log in to Kibana](../../../deploy-manage/deploy/cloud-enterprise/create-deployment.md).
-2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**.
-3. In the search bar, search for *filebeat*. You should get *filebeat-** in the search results.
-
-**Optional: Use an API key to authenticate**
-
-For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the Cloud UI, and then configure Filebeat to use the new key to connect securely to the Elastic Cloud Enterprise deployment.
-
-1. [Log in to the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md).
-2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**.
-3. Enter the following request:
-
-    ```json
-    POST /_security/api_key
-    {
-      "name": "filebeat-api-key",
-      "role_descriptors": {
-        "logstash_read_write": {
-          "cluster": ["manage_index_templates", "monitor"],
-          "index": [
-            {
-              "names": ["filebeat-*"],
-              "privileges": ["create_index", "write", "read", "manage"]
-            }
-          ]
-        }
-      }
-    }
-    ```
-
-    This creates an API key with the cluster `monitor` privilege, which gives read-only access for determining the cluster state, and `manage_index_templates`, which allows all operations on index templates. Some additional privileges also allow `create_index`, `write`, and `manage` operations for the specified index. The index `manage` privilege is added to enable index refreshes.
-
-4. Click **▶**. The output should be similar to the following:
-
-    ```json
-    {
-      "api_key": "tV1dnfF-GHI59ykgv4N0U3",
-      "id": "2TBR42gBabmINotmvZjv",
-      "name": "filebeat-api-key"
-    }
-    ```
-
-5. Add your API key information to the *Elasticsearch Output* section of `filebeat.yml`, just below *output.elasticsearch:*. Use the format `<id>:<api_key>`. If your results are as shown in this example, enter `2TBR42gBabmINotmvZjv:tV1dnfF-GHI59ykgv4N0U3`.
-6. Add a pound (`#`) sign to comment out the *cloud.auth: elastic:<password>* line, since Filebeat will use the API key instead of the deployment username and password to authenticate.
-
-    ```txt
-    # =============================== Elastic Cloud ================================
-
-    # These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
-
-    # The cloud.id setting overwrites the `output.elasticsearch.hosts` and
-    # `setup.kibana.host` options.
-    # You can find the `cloud.id` in the Elastic Cloud web UI.
-    cloud.id: my-deployment:yTMtd5VzdKEuP2NwPbNsb3VkLtKzLmldJDcyMzUyNjBhZGP7MjQ4OTZiNTIxZTQyOPY2C2NeOGQwJGQ2YWQ4M5FhNjIyYjQ9ODZhYWNjKDdlX2Yz4ELhRYJ7
-
-    # The cloud.auth setting overwrites the `output.elasticsearch.username` and
-    # `output.elasticsearch.password` settings. The format is `<user>:<password>`.
-    #cloud.auth: elastic:591KhtuAgTP46by9C4EmhGuk
-
-    # ================================== Outputs ===================================
-
-    # Configure what output to use when sending the data collected by the beat.
-
-    # ---------------------------- Elasticsearch Output ----------------------------
-    output.elasticsearch:
-      # Array of hosts to connect to.
-      api_key: "2TBR42gBabmINotmvZjv:tV1dnfF-GHI59ykgv4N0U3"
-    ```
-
-
-
-## Send the Python logs to Elasticsearch [ece-python-logs-send-ess]
-
-It’s time to send some log data into {{es}}!
-
-**Launch Filebeat and elvis.py**
-
-Launch Filebeat by running the following from the Filebeat installation directory:
-
-```txt
-./filebeat -e -c filebeat.yml
-```
-
-In this command:
-
-* The *-e* flag sends output to standard error instead of the configured log output.
-* The *-c* flag specifies the path to the Filebeat config file.
-
-::::{note}
-If the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*.
-::::
-
-
-Filebeat should now be running and monitoring the contents of *elvis.json*, which doesn’t exist yet. So, let’s create it. Open a new terminal instance and run the *elvis.py* Python script:
-
-```sh
-python elvis.py
-```
-
-Let the script run for a few minutes and maybe brew up a quick coffee or tea ☕.
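-
-While the script runs, you can optionally follow the log file from another terminal to watch new ECS-formatted entries arrive. A minimal check, assuming you are in the directory where *elvis.json* is written:
-
-```sh
-# Print each new JSON log entry as elvis.py appends it
-tail -f elvis.json
-```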
-After that, make sure that the *elvis.json* file is generated as expected and is populated with several log entries.
-
-**Verify the log entries in Elastic Cloud Enterprise**
-
-The next step is to confirm that the log data has successfully found its way into Elastic Cloud Enterprise.
-
-1. [Log in to Kibana](../../../deploy-manage/deploy/cloud-enterprise/create-deployment.md).
-2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**.
-3. In the search bar, search for *filebeat*. You should get *filebeat-** in the search results.
-4. Select *filebeat-**.
-
-The filebeat data view shows a list of fields and their details.
-
-
-## Create log visualizations in Kibana [ece-python-logs-view-kibana]
-
-Now it’s time to create visualizations based on the Python application log data.
-
-1. Open the Kibana main menu and select **Dashboard**, then **Create dashboard**.
-2. Select **Create visualization**. The [Lens](../../../explore-analyze/visualize/lens.md) visualization editor opens.
-3. In the data view dropdown box, select **filebeat-***, if it isn’t already selected.
-4. In the **Visualization type** dropdown, select **Bar vertical stacked**, if it isn’t already selected.
-5. Check that the [time filter](../../../explore-analyze/query-filter/filtering.md) is set to **Last 15 minutes**.
-6. From the **Available fields** list, drag and drop the **@timestamp** field onto the visualization builder.
-7. Drag and drop the **log.level** field onto the visualization builder.
-8. In the chart settings area, under **Break down by**, select **Top values of log.level** and set **Number of values** to *4*. Since there are four log severity levels, this parameter sets all of them to appear in the chart legend.
-9. Select **Refresh**. A stacked bar chart now shows the relative frequency of each of the four log severity levels over time.
-
-    ![A screen capture of the Kibana "Bar vertical stacked" visualization with several bars. The X axis shows "Count of records" and the Y axis shows "@timestamp per 30 seconds". Each bar is divided into the four log severity levels.](../../../images/cloud-enterprise-ec-python-logs-levels.png "")
-
-10. Select **Save and return** to add this visualization to your dashboard.
-
-Let’s create a second visualization.
-
-1. Select **Create visualization**.
-2. Again, make sure that the **Visualization type** dropdown is set to **Bar vertical stacked**.
-3. From the **Available fields** list, drag and drop the **@timestamp** field onto the visualization builder.
-4. Drag and drop the **http.request.body.content** field onto the visualization builder.
-5. In the chart settings area, under **Break down by**, select **Top values of http.request.body.content** and set **Number of values** to *12*. Since there are twelve different log messages, this parameter sets all of them to appear in the chart legend.
-6. Select **Refresh**. A stacked bar chart now shows the relative frequency of each of the log messages over time.
-
-    ![A screen capture of the visualization builder](../../../images/cloud-enterprise-ec-python-logs-content.png "")
-
-7. Select **Save and return** to add this visualization to your dashboard.
-
-And now for the final visualization.
-
-1. Select **Create visualization**.
-2. In the **Visualization type** dropdown, select **Donut**.
-3. From the list of available fields, drag and drop the **log.level** field onto the visualization builder. A donut chart appears.
- - ![A screen capture of a donut chart divided into four sections](../../../images/cloud-enterprise-ec-python-logs-donut.png "") - -4. Select **Save and return** to add this visualization to your dashboard. -5. Select **Save** and add a title to save your new dashboard. - -You now have a Kibana dashboard with three visualizations: a stacked bar chart showing the frequency of each log severity level over time, another stacked bar chart showing the frequency of various message strings over time (from the added *http.request.body.content* parameter), and a donut chart showing the relative frequency of each log severity type. - -You can add titles to the visualizations, resize and position them as you like, and then save your changes. - -**View log data updates in real time** - -1. Select **Refresh** on the Kibana dashboard. Since *elvis.py* continues to run and generate log data, your Kibana visualizations update with each refresh. - - ![A screen capture of the completed Kibana dashboard](../../../images/cloud-enterprise-ec-python-logs-final-dashboard.png "") - -2. As your final step, remember to stop Filebeat and the Python script. Enter *CTRL + C* in both your Filebeat terminal and in your `elvis.py` terminal. - -You now know how to monitor log files from a Python application, deliver the log event data securely into an Elastic Cloud Enterprise deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-ingest-data.html#ece-ingest-methods) to learn all about working in Elastic Cloud Enterprise. - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md b/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md index 6a8df2a487..d3c48bc78e 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md @@ -172,7 +172,7 @@ When shipping logs to a monitoring deployment there are more logging features av #### For {{es}}: [ech-extra-logging-features-elasticsearch] -* [Audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment +* [Audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment * [Slow query and index logging](elasticsearch://reference/elasticsearch/index-settings/slow-log.md) - helps find and debug slow queries and indexing * Verbose logging - helps debug stack issues by increasing component logs @@ -181,7 +181,7 @@ After you’ve enabled log delivery on your deployment, you can [add the Elastic #### For Kibana: [ech-extra-logging-features-kibana] -* [Audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment +* [Audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment After you’ve enabled log delivery on your deployment, you can [add the Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to enable this feature. 
diff --git a/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md b/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md index fb308686a5..1ec417f0d2 100644 --- a/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md +++ b/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md @@ -172,7 +172,7 @@ When shipping logs to a monitoring deployment there are more logging features av #### For {{es}}: [ec-extra-logging-features-elasticsearch] -* [Audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment +* [Audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment * [Slow query and index logging](elasticsearch://reference/elasticsearch/index-settings/slow-log.md) - helps find and debug slow queries and indexing * Verbose logging - helps debug stack issues by increasing component logs @@ -181,7 +181,7 @@ After you’ve enabled log delivery on your deployment, you can [add the Elastic #### For Kibana: [ec-extra-logging-features-kibana] -* [Audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment +* [Audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment After you’ve enabled log delivery on your deployment, you can [add the Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to enable this feature. diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md b/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md index 528cc9e8a6..af81e68920 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md @@ -272,7 +272,7 @@ const client = new Client({ }) ``` -Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). 
### Best practices [ec_best_practices] diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-python.md b/raw-migrated-files/cloud/cloud/ec-getting-started-python.md index 4f6f6967b0..2da1771fcc 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-python.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-python.md @@ -333,7 +333,7 @@ es = Elasticsearch( ) ``` -Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](elasticsearch-py://reference/examples.md). diff --git a/raw-migrated-files/docs-content/serverless/ai-assistant-knowledge-base.md b/raw-migrated-files/docs-content/serverless/ai-assistant-knowledge-base.md deleted file mode 100644 index 0d5fadaf30..0000000000 --- a/raw-migrated-files/docs-content/serverless/ai-assistant-knowledge-base.md +++ /dev/null @@ -1,151 +0,0 @@ -# AI Assistant Knowledge Base [ai-assistant-knowledge-base] - -AI Assistant’s Knowledge Base feature enables AI Assistant to recall specific documents and other specified information. This information, which can include everything from the location of your datacenters to the latest threat research, provides additional context that can improve the quality of AI Assistant’s responses to your queries. This topic describes how to enable and add information to Knowledge Base. - -::::{note} -When you upgrade from {{elastic-sec}} version 8.15 to a newer version, information previously stored by AI Assistant will be lost. -:::: - - -::::{admonition} Requirements -* To use Knowledge Base, you need the `Elastic AI Assistant: All` privilege. To edit global Knowledge Base entries (information that will affect the AI Assistant experience for other users in the {{kib}} space), you need the `Allow Changes to Global Entries` privilege. -* You must [enable machine learning](../../../solutions/security/advanced-entity-analytics/machine-learning-job-rule-requirements.md) with a minimum ML node size of 4 GB. 
-
-::::
-
-
-
-## Role-based access control (RBAC) for Knowledge Base [knowledge-base-rbac]
-
-The `Elastic AI Assistant: All` role privilege allows you to use AI Assistant and access its settings. It has two sub-privileges: `Field Selection and Anonymization`, which allows you to customize which alert fields are sent to AI Assistant and Attack Discovery, and `Knowledge Base`, which allows you to edit and create new Knowledge Base entries.
-
-:::{image} ../../../images/serverless-knowledge-base-rbac.png
-:alt: Knowledge Base's RBAC settings
-:::
-
-
-## Enable Knowledge Base [enable-knowledge-base]
-
-There are two ways to enable Knowledge Base.
-
-::::{note}
-You must individually enable Knowledge Base for each {{kib}} space where you want to use it.
-::::
-
-
-
-### Option 1: Enable Knowledge Base from an AI Assistant conversation [_option_1_enable_knowledge_base_from_an_ai_assistant_conversation]
-
-Open a conversation with AI Assistant, select a large language model, then click **Setup Knowledge Base**. If the button doesn’t appear, Knowledge Base is already enabled.
-
-:::{image} ../../../images/serverless-knowledge-base-assistant-setup-button.png
-:alt: An AI Assistant conversation showing the Setup Knowledge Base button
-:::
-
-Knowledge Base setup may take several minutes. It will continue in the background if you close the conversation. After setup is complete, you can access Knowledge Base settings from AI Assistant’s conversation settings menu (open the menu by clicking the three dots button next to the model selection dropdown).
-
-:::{image} ../../../images/serverless-knowledge-base-assistant-menu-dropdown.png
-:alt: AI Assistant's dropdown menu with the Knowledge Base option highlighted
-:::
-
-
-### Option 2: Enable Knowledge Base from the Security AI settings [_option_2_enable_knowledge_base_from_the_security_ai_settings]
-
-1. To open Security AI settings, use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to find "AI Assistant for Security."
-2. On the **Knowledge Base** tab, click **Setup Knowledge Base**. If the button doesn’t appear, Knowledge Base is already enabled.
-
-:::{image} ../../../images/serverless-knowledge-base-assistant-settings-kb-tab.png
-:alt: AI Assistant's settings menu open to the Knowledge Base tab
-:::
-
-
-## Knowledge Base for alerts [rag-for-alerts]
-
-When Knowledge Base is enabled, AI Assistant receives `open` or `acknowledged` alerts from your environment from the last 24 hours. It uses these as context for each of your prompts. This enables it to answer questions about multiple alerts in your environment rather than just about individual alerts you choose to send it. It receives alerts ordered by risk score, then by the most recently generated. Building block alerts are excluded.
-
-To enable Knowledge Base for alerts:
-
-1. Ensure that Knowledge Base is [enabled](../../../solutions/security/ai/ai-assistant-knowledge-base.md#enable-knowledge-base).
-2. Use the slider on the Security AI settings' **Knowledge Base** tab to select the number of alerts to send to AI Assistant. Click **Save**.
-
-::::{note}
-Including a large number of alerts may cause your request to exceed the maximum token length of your third-party generative AI provider. If this happens, try selecting a lower number of alerts to send.
-::::
-
-
-
-## Add knowledge [knowledge-base-add-knowledge]
-
-To view all Knowledge Base entries, go to the Security AI settings and select the **Knowledge Base** tab.
-You can add individual documents or entire indices containing multiple documents. Each entry in the Knowledge Base (a document or index) has a **Sharing** setting of `private` or `global`. Private entries apply to the current user only and do not affect other users in the {{kib}} space, whereas global entries affect all users. Each entry can also have a `Required knowledge` setting, which means it will be included as context for every message sent to AI Assistant.
-
-::::{note}
-When you enable Knowledge Base, it comes pre-populated with articles from [Elastic Security Labs](https://www.elastic.co/security-labs), current through September 30, 2024, which allows AI Assistant to leverage Elastic’s security research during your conversations. This enables it to answer questions such as, “Are there any new tactics used against Windows hosts that I should be aware of when investigating my alerts?”
-::::
-
-
-
-### Add an individual document [knowledge-base-add-knowledge-document]
-
-Add an individual document to Knowledge Base when you want AI Assistant to remember a specific piece of information.
-
-1. To open Security AI settings, use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to find "AI Assistant for Security." Select the **Knowledge Base** tab.
-2. Click **New → Document** and give it a name.
-3. Under **Sharing**, select whether this knowledge should be **Global** or **Private**.
-4. In the **Markdown text** field, enter the information you want AI Assistant to remember.
-5. If it should be **Required knowledge**, select the option. Otherwise, leave it blank.
-
-Alternatively, you can simply send a message to AI Assistant that instructs it to "Remember" the information. For example, "Remember that I changed my password today, October 24, 2024", or "Remember we always use the Threat Hunting Timeline template when investigating potential threats". Entries created in this way are private to you. By default, they are not required knowledge, but you can make them required by instructing AI Assistant to "Always remember", for example "Always remember to address me as madam", or "Always remember that our primary data center is located in Austin, Texas".
-
-*Video: adding a document to Knowledge Base from the settings menu.*
-
-### Add an index [knowledge-base-add-knowledge-index]
-
-Add an index as a knowledge source when you want new information added to that index to automatically inform AI Assistant’s responses. Common security examples include asset inventories, network configuration information, on-call matrices, threat intelligence reports, and vulnerability scans.
-
-::::{important}
-Indices added to Knowledge Base must have at least one field mapped as [semantic text](elasticsearch://reference/elasticsearch/mapping-reference/semantic-text.md).
-::::
-
-
-1. To open Security AI settings, use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to find "AI Assistant for Security." Select the **Knowledge Base** tab.
-2. Click **New → Index**.
-3. Name the knowledge source.
-4. Under **Sharing**, select whether this knowledge should be **Global** or **Private**.
-5. Under **Index**, enter the name of the index you want to use as a knowledge source.
-6. Under **Field**, enter the names of one or more semantic text fields within the index.
-7. Under **Data Description**, describe when this information should be used by AI Assistant.
-8. Under **Query Instruction**, describe how AI Assistant should query this index to retrieve relevant information.
-9. Under **Output Fields**, list the fields which should be sent to AI Assistant. If none are listed, all fields will be sent.
-
-:::{image} ../../../images/serverless-knowledge-base-add-index-config.png
-:alt: Knowledge base's Edit index entry menu
-:::
-
-*Video: adding an index to Knowledge Base.*
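-
-For reference, preparing an index that meets the semantic text requirement can look roughly like the following. This is a hypothetical sketch: the index name, field names, endpoint URL, and API key are placeholders, and the `semantic_text` field uses your deployment’s default inference endpoint unless you configure one explicitly:
-
-```sh
-# Hypothetical example: create an index with a semantic_text field so that it
-# can be added to Knowledge Base as a knowledge source.
-curl -X PUT "https://<your-elasticsearch-endpoint>/threat-intel-reports" \
-  -H "Authorization: ApiKey <your-api-key>" \
-  -H "Content-Type: application/json" \
-  -d '{
-    "mappings": {
-      "properties": {
-        "report_title": { "type": "text" },
-        "content": { "type": "semantic_text" }
-      }
-    }
-  }'
-```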
diff --git a/raw-migrated-files/docs-content/serverless/attack-discovery.md b/raw-migrated-files/docs-content/serverless/attack-discovery.md
deleted file mode 100644
index 4768bc5785..0000000000
--- a/raw-migrated-files/docs-content/serverless/attack-discovery.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# Attack Discovery [attack-discovery]
-
-::::{warning}
-This feature is in technical preview. It may change in the future, and you should exercise caution when using it in production environments. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of GA features.
-::::
-
-
-Attack Discovery leverages large language models (LLMs) to analyze alerts in your environment and identify threats. Each "discovery" represents a potential attack and describes relationships among multiple alerts to tell you which users and hosts are involved, how alerts correspond to the MITRE ATT&CK matrix, and which threat actor might be responsible. This can help make the most of each security analyst’s time, fight alert fatigue, and reduce your mean time to respond.
-
-*Video: Attack Discovery demo.*
-
-This page describes:
-
-* [How to generate discoveries](../../../solutions/security/ai/attack-discovery.md#attack-discovery-generate-discoveries)
-* [What information each discovery includes](../../../solutions/security/ai/attack-discovery.md#attack-discovery-what-info)
-* [How you can interact with discoveries to enhance {{elastic-sec}} workflows](../../../solutions/security/ai/attack-discovery.md#attack-discovery-workflows)
-
-
-## Role-based access control (RBAC) for Attack Discovery [attack-discovery-rbac]
-
-The `Attack Discovery: All` privilege allows you to use Attack Discovery.
-
-:::{image} ../../../images/serverless-attck-disc-rbac.png
-:alt: Attack Discovery's RBAC settings
-:::
-
-
-## Generate discoveries [attack-discovery-generate-discoveries]
-
-When you access Attack Discovery for the first time, you’ll need to select an LLM connector before you can analyze alerts. Attack Discovery uses the same LLM connectors as [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md). To get started:
-
-1. Open the **Attack Discovery** page from {{elastic-sec}}'s navigation menu.
-2. Select an existing connector from the dropdown menu, or add a new one.
-
-    ::::{admonition} Recommended models
-    While Attack Discovery is compatible with many different models, refer to the [Large language model performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md) to see which models perform best.
-
-    ::::
-
-
-    :::{image} ../../../images/serverless-attck-disc-select-model-empty.png
-    :alt: attck disc select model empty
-    :::
-
-3. Once you’ve selected a connector, click **Generate** to start the analysis.
-
-It may take from a few seconds up to several minutes to generate discoveries, depending on the number of alerts and the model you selected.
-
-::::{important}
-By default, Attack Discovery analyzes up to 100 alerts within the selected time range, but you can expand this up to 500 by clicking the settings icon (![Settings icon](../../../images/serverless-gear.svg "")) next to the model selection menu and adjusting the **Alerts** slider. Note that sending more alerts than your chosen LLM can handle may result in an error.
-::::
-
-
-:::{image} ../../../images/serverless-attck-disc-alerts-number-menu.png
-:alt: Attack Discovery's settings menu
-:::
-
-::::{important}
-Attack Discovery uses the same data anonymization settings as [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md). To configure which alert fields are sent to the LLM and which of those fields are obfuscated, use the Elastic AI Assistant settings. Consider the privacy policies of third-party LLMs before sending them sensitive data.
-::::
-
-
-Once the analysis is complete, any threats it identifies will appear as discoveries. Click each one’s title to expand or collapse it. Click **Generate** at any time to start the Attack Discovery process again with the most current alerts.
-
-
-## What information does each discovery include? [attack-discovery-what-info]
-
-Each discovery includes the following information describing the potential threat, generated by the connected LLM:
-
-1. A descriptive title and a summary of the potential threat.
-2. The number of associated alerts and which parts of the [MITRE ATT&CK matrix](https://attack.mitre.org/) they correspond to.
-3. The implicated entities (users and hosts), and what suspicious activity was observed for each.
- -:::{image} ../../../images/serverless-attck-disc-example-disc.png -:alt: Attack Discovery detail view -::: - - -## Incorporate discoveries with other workflows [attack-discovery-workflows] - -There are several ways you can incorporate discoveries into your {{elastic-sec}} workflows: - -* Click an entity’s name to open the user or host details flyout and view more details that may be relevant to your investigation. -* Hover over an entity’s name to either add the entity to Timeline (![Add to timeline icon](../../../images/serverless-timelineWithArrow.svg "")) or copy its field name and value to the clipboard (![Copy to clipboard icon](../../../images/serverless-copyClipboard.svg "")). -* Click **Take action**, then select **Add to new case** or **Add to existing case** to add a discovery to a [case](../../../solutions/security/investigate/cases.md). This makes it easy to share the information with your team and other stakeholders. -* Click **Investigate in timeline** to explore the discovery in [Timeline](../../../solutions/security/investigate/timeline.md). -* Click **View in AI Assistant** to attach the discovery to a conversation with AI Assistant. You can then ask follow-up questions about the discovery or associated alerts. - -:::{image} ../../../images/serverless-add-discovery-to-assistant.gif -:alt: Attack Discovery view in AI Assistant -::: diff --git a/raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md b/raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md deleted file mode 100644 index d4fa16bb94..0000000000 --- a/raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md +++ /dev/null @@ -1,213 +0,0 @@ -# Connect to your own local LLM [connect-to-byo-llm] - -This page provides instructions for setting up a connector to a large language model (LLM) of your choice using LM Studio. This allows you to use your chosen model within {{elastic-sec}}. You’ll first need to set up a reverse proxy to communicate with {{elastic-sec}}, then set up LM Studio on a server, and finally configure the connector in your Elastic deployment. [Learn more about the benefits of using a local LLM](https://www.elastic.co/blog/ai-assistant-locally-hosted-models). - -This example uses a single server hosted in GCP to run the following components: - -* LM Studio with the [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) model -* A reverse proxy using Nginx to authenticate to Elastic Cloud - -:::{image} ../../../images/serverless-lms-studio-arch-diagram.png -:alt: Architecture diagram for this guide -::: - -::::{note} -For testing, you can use alternatives to Nginx such as [Azure Dev Tunnels](https://learn.microsoft.com/en-us/azure/developer/dev-tunnels/overview) or [Ngrok](https://ngrok.com/), but using Nginx makes it easy to collect additional telemetry and monitor its status by using Elastic’s native Nginx integration. While this example uses cloud infrastructure, it could also be replicated locally without an internet connection. -:::: - - -::::{note} -For information about the performance of open-source models on tasks within {{elastic-sec}}, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md). -:::: - - - -## Configure your reverse proxy [_configure_your_reverse_proxy] - -::::{note} -If your Elastic instance is on the same host as LM Studio, you can skip this step. 
Also, check out our [blog post](https://www.elastic.co/blog/herding-llama-3-1-with-elastic-and-lm-studio) that walks through the whole process of setting up a single-host implementation.
-::::
-
-
-You need to set up a reverse proxy to enable communication between LM Studio and Elastic. For more complete instructions, refer to a guide such as [this one](https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04).
-
-The following is an example Nginx configuration file:
-
-```txt
-server {
-    listen 80;
-    listen [::]:80;
-    server_name <yourdomainname>;
-    server_tokens off;
-    add_header x-xss-protection "1; mode=block" always;
-    add_header x-frame-options "SAMEORIGIN" always;
-    add_header X-Content-Type-Options "nosniff" always;
-    return 301 https://$server_name$request_uri;
-}
-
-server {
-
-    listen 443 ssl http2;
-    listen [::]:443 ssl http2;
-    server_name <yourdomainname>;
-    server_tokens off;
-    ssl_certificate /etc/letsencrypt/live/<yourdomainname>/fullchain.pem;
-    ssl_certificate_key /etc/letsencrypt/live/<yourdomainname>/privkey.pem;
-    ssl_session_timeout 1d;
-    ssl_session_cache shared:SSL:50m;
-    ssl_session_tickets on;
-    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
-    ssl_protocols TLSv1.3 TLSv1.2;
-    ssl_prefer_server_ciphers on;
-    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
-    add_header x-xss-protection "1; mode=block" always;
-    add_header x-frame-options "SAMEORIGIN" always;
-    add_header X-Content-Type-Options "nosniff" always;
-    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
-    ssl_stapling on;
-    ssl_stapling_verify on;
-    ssl_trusted_certificate /etc/letsencrypt/live/<yourdomainname>/fullchain.pem;
-    resolver 1.1.1.1;
-    location / {
-
-        if ($http_authorization != "Bearer <secret token>") {
-            return 401;
-        }
-
-        proxy_pass http://localhost:1234/;
-    }
-
-}
-```
-
-::::{important}
-If using the example configuration file above, you must replace several values:
-
-* Replace `<secret token>` with your actual token, and keep it safe since you’ll need it to set up the {{elastic-sec}} connector.
-* Replace `<yourdomainname>` with your actual domain name.
-* Update the `proxy_pass` value at the bottom of the configuration if you decide to change the port number in LM Studio to something other than 1234.
-
-::::
-
-
-
-### (Optional) Set up performance monitoring for your reverse proxy [_optional_set_up_performance_monitoring_for_your_reverse_proxy]
-
-You can use Elastic’s [Nginx integration](https://docs.elastic.co/en/integrations/nginx) to monitor performance and populate monitoring dashboards in the {{security-app}}.
-
-
-## Configure LM Studio and download a model [_configure_lm_studio_and_download_a_model]
-
-First, install [LM Studio](https://lmstudio.ai/). LM Studio supports the OpenAI SDK, which makes it compatible with Elastic’s OpenAI connector, allowing you to connect to any model available in the LM Studio marketplace.
-
-You must launch the application using its GUI before doing so using the CLI. For example, use Chrome RDP with an [X Window System](https://cloud.google.com/architecture/chrome-desktop-remote-on-compute-engine). After you’ve opened the application the first time using the GUI, you can start it by using `sudo lms server start` in the CLI.
-
-Once you’ve launched LM Studio:
-
-1. Go to LM Studio’s Search window.
-2. Search for an LLM (for example, `Mistral-Nemo-Instruct-2407`). Your chosen model must include `instruct` in its name in order to work with Elastic.
-3. After you find a model, view download options and select a recommended version (green). For best performance, select one with the thumbs-up icon that indicates good performance on your hardware.
-4. Download one or more models.
-
-::::{important}
-For security reasons, before downloading a model, verify that it is from a trusted source. It can be helpful to review community feedback on the model (for example using a site like Hugging Face).
-::::
-
-
-:::{image} ../../../images/serverless-lms-model-select.png
-:alt: The LM Studio model selection interface
-:::
-
-In this example we used [`mistralai/Mistral-Nemo-Instruct-2407`](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407). It has 12B total parameters, a 128,000 token context window, and uses GGUF [quantization](https://huggingface.co/docs/transformers/main/en/quantization/overview). For more information about model names and format information, refer to the following table.
-
-| Model Name | Parameter Size | Tokens/Context Window | Quantization Format |
-| --- | --- | --- | --- |
-| Name of model, sometimes with a version number. | LLMs are often compared by their number of parameters — higher numbers mean more powerful models. | Tokens are small chunks of input information. Tokens do not necessarily correspond to characters. You can use [Tokenizer](https://platform.openai.com/tokenizer) to see how many tokens a given prompt might contain. | Quantization reduces overall parameters and helps the model to run faster, but reduces accuracy. |
-| Examples: Llama, Mistral, Phi-3, Falcon. | The number of parameters is a measure of the size and the complexity of the model. The more parameters a model has, the more data it can process, learn from, generate, and predict. | The context window defines how much information the model can process at once. If the number of input tokens exceeds this limit, input gets truncated. | Specific formats for quantization vary; most models now support GPU rather than CPU offloading. |
-
-
-## Load a model in LM Studio [_load_a_model_in_lm_studio]
-
-After downloading a model, load it in LM Studio using the GUI or LM Studio’s [CLI tool](https://lmstudio.ai/blog/lms).
-
-
-### Option 1: load a model using the CLI (Recommended) [_option_1_load_a_model_using_the_cli_recommended]
-
-It is a best practice to download models from the marketplace using the GUI, and then load or unload them using the CLI. The GUI allows you to search for models, whereas the CLI allows you to use `lms get` to search for models. The CLI provides a good interface for loading and unloading.
-
-Once you’ve downloaded a model, use the following commands in your CLI:
-
-1. Verify LM Studio is installed: `lms`
-2. Check LM Studio’s status: `lms status`
-3. List all downloaded models: `lms ls`
-4. Load a model: `lms load`.
-
-:::{image} ../../../images/serverless-lms-cli-welcome.png
-:alt: The CLI interface during execution of initial LM Studio commands
-:::
-
-After the model loads, you should see a `Model loaded successfully` message in the CLI. Select a model using the arrow and **Enter** keys.
-
-:::{image} ../../../images/serverless-lms-studio-model-loaded-msg.png
-:alt: The CLI message that appears after a model loads
-:::
-
-To verify which model is loaded, use the `lms ps` command.
-
-:::{image} ../../../images/serverless-lms-ps-command.png
-:alt: The CLI message that appears after running lms ps
-:::
-
-If your model uses NVIDIA drivers, you can check the GPU performance with the `sudo nvidia-smi` command.
-
-
-### Option 2: load a model using the GUI [_option_2_load_a_model_using_the_gui]
-
-You can also load a model using LM Studio’s GUI. You can change the **port** setting, which is referenced in the Nginx configuration file. Note that in this example the **GPU offload** was set to **Max**.
-
-*Video: loading a model using LM Studio’s GUI.*
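-
-Before configuring the connector, you can optionally confirm that the reverse proxy and the loaded model respond end to end. The following is a minimal smoke test using curl, where `<yourdomainname>` and `<secret token>` are the placeholders from your Nginx configuration and `local-model` matches the default model name used in the connector setup below:
-
-```sh
-# Send one chat completion request through the reverse proxy to LM Studio's
-# OpenAI-compatible API; a JSON response indicates the path is working.
-curl https://<yourdomainname>/v1/chat/completions \
-  -H "Authorization: Bearer <secret token>" \
-  -H "Content-Type: application/json" \
-  -d '{
-    "model": "local-model",
-    "messages": [{ "role": "user", "content": "Reply with the word ready." }]
-  }'
-```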
-
-## (Optional) Collect logs using Elastic’s Custom Logs integration [_optional_collect_logs_using_elastics_custom_logs_integration]
-
-You can monitor the performance of the host running LM Studio using Elastic’s [Custom Logs integration](https://docs.elastic.co/en/integrations/log). This can also help with troubleshooting. Note that the default path for LM Studio logs is `/tmp/lmstudio-server-log.txt`, as in the following screenshot:
-
-:::{image} ../../../images/serverless-lms-custom-logs-config.png
-:alt: The configuration window for the custom logs integration
-:::
-
-
-## Configure the connector in your Elastic deployment [_configure_the_connector_in_your_elastic_deployment]
-
-Finally, configure the connector:
-
-1. Log in to your Elastic deployment.
-2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **OpenAI**. The OpenAI connector enables this use case because LM Studio uses the OpenAI SDK.
-3. Name your connector to help keep track of the model version you are using.
-4. Under **Select an OpenAI provider**, select **Other (OpenAI Compatible Service)**.
-5. Under **URL**, enter the domain name specified in your Nginx configuration file, followed by `/v1/chat/completions`.
-6. Under **Default model**, enter `local-model`.
-7. Under **API key**, enter the secret token specified in your Nginx configuration file.
-8. Click **Save**.
-
-:::{image} ../../../images/serverless-lms-edit-connector.png
-:alt: The Edit connector page in the {security-app}
-:::
-
-Setup is now complete. You can use the model you’ve loaded in LM Studio to power Elastic’s generative AI features. You can test a variety of models as you interact with AI Assistant to see what works best without having to update your connector.
-
-::::{note}
-While local models work well for [AI Assistant](../../../solutions/security/ai/ai-assistant.md), we recommend you use one of [these models](../../../solutions/security/ai/large-language-model-performance-matrix.md) for interacting with [Attack discovery](../../../solutions/security/ai/attack-discovery.md). As local models become more performant over time, this is likely to change.
-::::
diff --git a/raw-migrated-files/docs-content/serverless/detections-logsdb-index-mode-impact.md b/raw-migrated-files/docs-content/serverless/detections-logsdb-index-mode-impact.md
deleted file mode 100644
index ebb99b4ceb..0000000000
--- a/raw-migrated-files/docs-content/serverless/detections-logsdb-index-mode-impact.md
+++ /dev/null
@@ -1,58 +0,0 @@
-# Using logsdb index mode with {{sec-serverless}} [detections-logsdb-index-mode-impact]
-
-Logsdb is enabled by default for {{serverless-full}}. This topic explains the impact of using logsdb index mode with {{sec-serverless}}.
-
-With logsdb index mode, the original `_source` field is not stored in the index but can be reconstructed using [synthetic `_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source).
-
-When the `_source` is reconstructed, [modifications](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source-modifications) are possible. Therefore, there could be a mismatch between users’ expectations and how fields are formatted.
-
-Continue reading to find out how this affects specific {{sec-serverless}} components.
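-
-To see the kind of mismatch this can cause, you can compare a document as indexed with what a search returns from a logsdb index. The following is a minimal sketch, assuming a local or stateful test cluster where `index.mode` can be set explicitly (on {{serverless-full}} it is already the default) and where security is disabled for brevity:
-
-```sh
-# Create a test index that uses logsdb index mode
-curl -X PUT "http://localhost:9200/logsdb-test" \
-  -H "Content-Type: application/json" \
-  -d '{ "settings": { "index.mode": "logsdb" } }'
-
-# Index a document using dotted field names
-curl -X POST "http://localhost:9200/logsdb-test/_doc?refresh=true" \
-  -H "Content-Type: application/json" \
-  -d '{ "@timestamp": "2025-01-01T00:00:00Z", "agent.name": "test-agent" }'
-
-# The reconstructed _source comes back in the nested form,
-# for example {"agent": {"name": "test-agent"}}
-curl "http://localhost:9200/logsdb-test/_search?pretty"
-```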
- - -## Alerts [logsdb-alerts] - -When alerts are generated, the `_source` event is copied into the alert to retain the original data. When the logsdb index mode is applied, the `_source` event stored in the alert is reconstructed using synthetic `_source`. - -If you’re switching to use logsdb index mode, the `_source` field stored in the alert might look different in certain situations: - -* [Arrays can be reconstructed differently or deduplicated](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source-modifications-leaf-arrays) -* [Field names](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source-modifications-field-names) -* `geo_point` data fields (refer to [Representation of ranges](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source-modifications-ranges) and [Reduced precision of `geo_point` values](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source-precision-loss-for-point-types) for more information) - -Alerts generated by the following rule types could be affected: - -* Custom query -* Event correlation (non-sequence only) -* Non-aggregate rule types (for example, {{esql}} rules that use non-aggregating queries) - -Alerts that are generated by threshold, {{ml}}, and event correlation sequence rules are not affected since they do not contain copies of the original source. - - -## Rule actions [logsdb-rule-actions] - -While we do not recommend using `_source` for actions, in cases where the action relies on the `_source`, the same limitations and changes apply. - -If you send alert notifications by enabling [actions](../../../explore-analyze/alerts-cases/alerts.md#rules-actions) to the external systems that have workflows or automations based on fields formatted from the original source, they may be affected. In particular, this can happen when the fields used are arrays of objects. - -We recommend checking and adjusting the rule actions using `_source` before switching to logsdb index mode. - - -## Runtime fields [logsdb-runtime-fields] - -Runtime fields that reference `_source` may be affected. Some runtime fields might not work and need to be adjusted. For example, if an event was indexed with the value of `agent.name` in the dot-notation form, it will be returned in the nested form and might not work. 
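For orientation, a runtime field of the kind discussed here is defined with a Painless script that emits a value per document. The following is a minimal sketch of a mapping-level definition; the index and field names are illustrative, and it uses the same `field` accessor that the working examples below rely on:

```console
PUT my-index/_mapping
{
  "runtime": {
    "agent_name_label": {
      "type": "keyword",
      "script": {
        "source": "emit(field('agent.name').get('unknown'))"
      }
    }
  }
}
```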
- -The following is an example of accessing `_source` that works with the logsdb index mode enabled: - -```console -"source": """ emit(params._source.agent.name + "_____" + doc['agent.name'].value ); """ -"source": """ emit(params._source['agent']['name'] + "_____" + doc['agent.name'].value ); """ -"source": """ emit(field('agent.name').get(null) + "_____" + doc['agent.name'].value ); """ -"source": """ emit($('agent.name', null) + "_____" + doc['agent.name'].value ); """ -``` - -The following will not work with synthetic source (logsdb index mode enabled): - -```console -"source": """ emit(params._source['agent.name'] + "_____" + doc['agent.name'].value ); """ -``` - diff --git a/raw-migrated-files/docs-content/serverless/security-about-rules.md b/raw-migrated-files/docs-content/serverless/security-about-rules.md deleted file mode 100644 index f062e5c869..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-about-rules.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -navigation_title: "Rules" ---- - -# About detection rules [security-about-rules] - - -Rules run periodically and search for source events, matches, sequences, or {{ml}} job anomaly results that meet their criteria. When a rule’s criteria are met, a detection alert is created. - - -## Rule types [rule-types] - -You can create the following types of rules: - -* [**Custom query**](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-custom-rule): Query-based rule, which searches the defined indices and creates an alert when one or more documents match the rule’s query. -* [**Machine learning**](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-ml-rule): {{ml-cap}} rule, which creates an alert when a {{ml}} job discovers an anomaly above the defined threshold (see [Detect anomalies](../../../solutions/security/advanced-entity-analytics/anomaly-detection.md)). - - For {{ml}} rules, the associated {{ml}} job must be running. If the {{ml}} job isn’t running, the rule will: - - * Run and create alerts if existing anomaly results with scores above the defined threshold are discovered. - * Issue an error stating the {{ml}} job was not running when the rule executed. - -* [**Threshold**](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-threshold-rule): Searches the defined indices and creates a detections alert when the number of times the specified field’s value is present and meets the threshold during a single execution. When multiple values meet the threshold, an alert is generated for each value. - - For example, if the threshold `field` is `source.ip` and its `value` is `10`, an alert is generated for every source IP address that appears in at least 10 of the rule’s search results. - -* [**Event correlation**](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-eql-rule): Searches the defined indices and creates an alert when results match an [Event Query Language (EQL)](../../../explore-analyze/query-filter/languages/eql.md) query. -* [**Indicator match**](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-indicator-rule): Creates an alert when {{elastic-sec}} index field values match field values defined in the specified indicator index patterns. For example, you can create an indicator index for IP addresses and use this index to create an alert whenever an event’s `destination.ip` equals a value in the index. 
    Indicator index field mappings should be [ECS-compliant](https://www.elastic.co/guide/en/ecs/current). For information on creating {{es}} indices and field types, see [Index some documents](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-general-purpose.html#gp-gs-add-data), [Create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create), and [Field data types](elasticsearch://reference/elasticsearch/mapping-reference/field-data-types.md). If you have indicators in a standard file format, such as CSV or JSON, you can also use the Machine Learning Data Visualizer to import your indicators into an indicator index. See [Explore the data in {{kib}}](../../../explore-analyze/machine-learning/anomaly-detection/ml-getting-started.md#sample-data-visualizer) and use the **Import Data** option to import your indicators.

    ::::{tip}
    You can also use value lists as the indicator match index. See [Use value lists with indicator match rules](../../../solutions/security/detect-and-alert/create-detection-rule.md#indicator-value-lists) at the end of this topic for more information.

    ::::

* [**New terms**](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-new-terms-rule): Generates an alert for each new term detected in source documents within a specified time range. You can also detect a combination of up to three new terms (for example, a `host.ip` and `host.id` that have never been observed together before).
* [**{{esql}}**](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-esql-rule): Searches the defined indices and creates an alert when results match an [{{esql}} query](../../../explore-analyze/query-filter/languages/esql.md).

:::{image} ../../../images/serverless--detections-all-rules.png
:alt: Shows the Rules page
:screenshot:
:::


## Data views and index patterns [views-index-patterns]

When you create a rule, you must either specify the {{es}} index patterns for which you’d like the rule to run, or select a [data view field](../../../solutions/security/get-started/data-views-elastic-security.md) as the data source. If you select a data view, you can select [runtime fields](../../../solutions/security/get-started/create-runtime-fields-in-elastic-security.md) associated with that data view to create a query for the rule (with the exception of {{ml}} rules, which do not use queries).

::::{note}
To access data views, you need either the appropriate [predefined Security user role](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles) or a [custom role](../../../deploy-manage/users-roles/cloud-organization/user-roles.md) with the right privileges.

::::


## Notifications [about-notifications]

For both prebuilt and custom rules, you can send notifications when alerts are created. Notifications can be sent via {{jira}}, Microsoft Teams, PagerDuty, Slack, and others, and can be configured when you create or edit a rule.


## Authorization [alerting-authorization-model]

Rules, including all background detection and the actions they generate, are authorized using an [API key](../../../deploy-manage/api-keys/serverless-project-api-keys.md) associated with the last user to edit the rule. Upon creating or modifying a rule, an API key is generated for that user, capturing a snapshot of their privileges.
The API key is then used to run all background tasks associated with the rule including detection checks and executing actions. - -::::{important} -If a rule requires certain privileges to run, such as index privileges, keep in mind that if a user without those privileges updates the rule, the rule will no longer function. - -:::: - - - -## Exceptions [about-exceptions] - -When modifying rules or managing detection alerts, you can [add exceptions](../../../solutions/security/detect-and-alert/add-manage-exceptions.md) that prevent a rule from generating alerts even when its criteria are met. This is useful for reducing noise, such as preventing alerts from trusted processes and internal IP addresses. - -::::{note} -You can add exceptions to custom query, machine learning, event correlation, and indicator match rule types. - -:::: diff --git a/raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md b/raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md deleted file mode 100644 index 01bfe51013..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md +++ /dev/null @@ -1,11 +0,0 @@ -# Generate, customize, and learn about {{esql}} queries [security-ai-assistant-esql-queries] - -Elastic AI Assistant can help you learn about and leverage the Elasticsearch Query Language ({{esql}}) in many ways, including: - -* **Education and training**: AI Assistant can serve as a powerful {{esql}} learning tool. Ask it for examples, explanations of complex queries, and best practices. -* **Writing new queries**: Prompt AI Assistant to provide a query that accomplishes a particular task, and it will generate a query matching your description. For example: "Write a query to identify documents with `curl.exe` usage and calculate the sum of `destination.bytes`" or "What query would return all user logins to [a host] in the last six hours?" -* **Providing feedback to optimize existing queries**: Send AI Assistant a query you want to work on and ask it for improvements, refactoring, a general assessment, or to optimize the query’s performance with large data sets. -* **Customizing queries for your environment**: Since each environment is unique, you may need to customize queries that you used in other contexts. AI Assistant can suggest necessary modifications based on contextual information you provide. -* **Troubleshooting**: Having trouble with a query or getting unexpected results? Ask AI Assistant to help you troubleshoot. - -In these ways and others, AI Assistant can enable you to make use of {{esql}}'s advanced search capabilities to accomplish goals across {{elastic-sec}}. diff --git a/raw-migrated-files/docs-content/serverless/security-ai-assistant.md b/raw-migrated-files/docs-content/serverless/security-ai-assistant.md deleted file mode 100644 index 505664a013..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-ai-assistant.md +++ /dev/null @@ -1,171 +0,0 @@ -# Elastic AI Assistant [security-ai-assistant] - -The Elastic AI Assistant utilizes generative AI to bolster your cybersecurity operations team. It allows users to interact with {{elastic-sec}} for tasks such as alert investigation, incident response, and query generation or conversion using natural language and much more. 
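For example, given a prompt like "Write a query to identify documents with `curl.exe` usage and calculate the sum of `destination.bytes`", the assistant might produce an {{esql}} query along the following lines. Treat this as an illustrative sketch rather than canonical output: responses vary by model, and the `logs-*` index pattern is an assumption.

```console
POST /_query
{
  "query": """
    FROM logs-*
    | WHERE process.name == "curl.exe"
    | STATS total_bytes = SUM(destination.bytes) BY host.name
  """
}
```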
- -:::{image} ../../../images/serverless-assistant-basic-view.png -:alt: Image of AI Assistant chat window -:screenshot: -::: - -::::{important} -Elastic AI Assistant is designed to enhance your analysis with smart dialogues. Its capabilities are still developing. Users should exercise caution as the quality of its responses might vary. Your insights and feedback will help us improve this feature. Always cross-verify AI-generated advice for accuracy. - -:::: - - -::::{admonition} Requirements -:class: note - -* This feature requires the Security Analytics Complete [project feature](../../../deploy-manage/deploy/elastic-cloud/project-settings.md). -* You need a [generative AI connector](../../../solutions/security/ai/set-up-connectors-for-large-language-models-llm.md), which AI Assistant uses to generate responses. - -:::: - - - -## Your data and AI Assistant [data-information] - -Elastic does not store or examine prompts or results used by AI Assistant, or use this data for model training. This includes anything you send the model, such as alert or event data, detection rule configurations, queries, and prompts. However, any data you provide to AI Assistant will be processed by the third-party large language model (LLM) provider you connected as part of AI Assistant setup. - -Elastic does not control third-party tools, and assumes no responsibility or liability for their content, operation, or use, nor for any loss or damage that may arise from your using such tools. Please exercise caution when using AI tools with personal, sensitive, or confidential information. Any data you submit may be used by the provider for AI training or other purposes. There is no guarantee that the provider will keep any information you provide secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use. - -::::{note} -Elastic can automatically anonymize event data that you provide to AI Assistant as context. To learn more, refer to [Configure AI Assistant](../../../solutions/security/ai/ai-assistant.md#configure-ai-assistant). - -:::: - - - -## Set up AI Assistant [set-up-ai-assistant] - -You must create a generative AI connector before you can use AI Assistant. AI Assistant can connect to multiple large language model (LLM) providers so you can select the best model for your needs. To set up a connector, refer to [LLM connector setup guides](../../../solutions/security/ai/set-up-connectors-for-large-language-models-llm.md). - -::::{admonition} Recommended models -:class: note - -While AI Assistant is compatible with many different models, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md) to select models that perform well with your desired use cases. - -:::: - - - -## Start chatting [start-chatting] - -To open AI Assistant, select the **AI Assistant** button in the top toolbar from anywhere in the {{security-app}}. You can also use the keyboard shortcut **Cmd + ;** (or **Ctrl + ;** on Windows). - -:::{image} ../../../images/serverless-ai-assistant-button.png -:alt: AI Assistant button -:screenshot: -::: - -This opens the **Welcome** chat interface, where you can ask general questions about {{elastic-sec}}. - -You can also chat with AI Assistant from several particular pages in {{elastic-sec}} where you can easily send context-specific data and prompts to AI Assistant. 
- -* [Alert details](../../../solutions/security/detect-and-alert/view-detection-alert-details.md) or Event details flyout: Click **Chat** while viewing the details of an alert or event. -* [Rules page](../../../solutions/security/detect-and-alert/manage-detection-rules.md): Use AI Assistant to help create or correct rule queries. -* [Data Quality dashboard](../../../solutions/security/dashboards/data-quality-dashboard.md): Select the **Incompatible fields** tab, then click **Chat**. (This is only available for fields marked red, indicating they’re incompatible). -* [Timeline](../../../solutions/security/investigate/timeline.md): Select the **Security Assistant** tab. - -::::{note} -Each user’s chat history (up to the 99 most recent conversations) and custom Quick Prompts are automatically saved, so you can leave {{elastic-sec}} and return to a conversation later. Chat history appears to the left of the AI Assistant chat window, and on the **Conversations** tab of the **AI Assistant settings** menu. To access the settings menu, use the global search field to search for "AI Assistant for Security". -:::: - - - -## Interact with AI Assistant [interact-with-assistant] - -Use these features to adjust and act on your conversations with AI Assistant: - -* (Optional) Select a *System Prompt* at the beginning of a conversation by using the **Select Prompt** menu. System Prompts provide context to the model, informing its response. To create a System Prompt, open the System Prompts dropdown menu and click **+ Add new System Prompt…​**. -* (Optional) Select a *Quick Prompt* at the bottom of the chat window to get help writing a prompt for a specific purpose, such as summarizing an alert or converting a query from a legacy SIEM to {{elastic-sec}}. - - :::{image} ../../../images/serverless-quick-prompts.png - :alt: Quick Prompts highlighted below a conversation - :screenshot: - ::: - -* System Prompts and Quick Prompts can also be configured from the corresponding tabs on the **Security AI settings** page. - - :::{image} ../../../images/serverless-assistant-settings-system-prompts.png - :alt: The Security AI settings menu's System Prompts tab - :screenshot: - ::: - -* Quick Prompt availability varies based on context—for example, the **Alert summarization** Quick Prompt appears when you open AI Assistant while viewing an alert. To customize existing Quick Prompts and create new ones, click **Add Quick Prompt**. -* In an active conversation, you can use the inline actions that appear on messages to incorporate AI Assistant’s responses into your workflows: - - * **Add note to timeline** (![Comment](../../../images/serverless-editorComment.svg "")): Add the selected text to your currently active Timeline as a note. - * **Add to existing case** (![Add to case](../../../images/serverless-casesApp.svg "")): Add a comment to an existing case using the selected text. - * **Copy to clipboard** (![Copy to clipboard](../../../images/serverless-copyClipboard.svg "")): Copy the text to clipboard to paste elsewhere. Also helpful for resubmitting a previous prompt. - * **Add to timeline** (![Timeline](../../../images/serverless-timeline.svg "")): Add a filter or query to Timeline using the text. This button appears for particular queries in AI Assistant’s responses. - - Be sure to specify which language you’d like AI Assistant to use when writing a query. For example: "Can you generate an Event Query Language query to find four failed logins followed by a successful login?" 
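To make that concrete, the failed-logins prompt above might yield an EQL sequence query shaped roughly like the following sketch. It assumes ECS-style authentication events under a `logs-*` pattern and is not canonical AI Assistant output:

```console
GET logs-*/_eql/search
{
  "query": """
    sequence by user.name with maxspan=30m
      [ authentication where event.outcome == "failure" ]
      [ authentication where event.outcome == "failure" ]
      [ authentication where event.outcome == "failure" ]
      [ authentication where event.outcome == "failure" ]
      [ authentication where event.outcome == "success" ]
  """
}
```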
::::{tip}
AI Assistant can remember particular information you tell it to remember. For example, you could tell it: "When answering any question about srv-win-s1-rsa or an alert that references it, mention that this host is in the New York data center". This will cause it to remember the detail you highlighted.

::::


## Configure AI Assistant [configure-ai-assistant]

The **Security AI settings** page allows you to configure AI Assistant. To access it, use the global search field to search for "AI Assistant for Security".

It has the following tabs:

* **Conversations:** When you open AI Assistant from certain pages, such as **Timeline** or **Alerts**, it defaults to the relevant conversation type. For each conversation type, choose the default System Prompt, the default connector, and the default model (if applicable). The **Streaming** setting controls whether AI Assistant’s responses appear word-by-word (streamed), or as a complete block of text. Streaming is currently only available for OpenAI models.
* **Connectors:** Manage all LLM connectors.
* **System Prompts:** Edit existing System Prompts or create new ones. To create a new System Prompt, type a unique name in the **Name** field, then press **enter**. Under **Prompt**, enter or update the System Prompt’s text. Under **Contexts**, select where the System Prompt should appear.
* **Quick Prompts:** Modify existing Quick Prompts or create new ones. To create a new Quick Prompt, type a unique name in the **Name** field, then press **enter**. Under **Prompt**, enter or update the Quick Prompt’s text.
* **Anonymization:** Select fields to include as plaintext, to obfuscate, and to not send when you provide events to AI Assistant as context. [Learn more](../../../solutions/security/ai/ai-assistant.md#ai-assistant-anonymization).
* **Knowledge Base:** Provide additional context to AI Assistant. [Learn more](../../../solutions/security/ai/ai-assistant-knowledge-base.md).


### Anonymization [ai-assistant-anonymization]

::::{admonition} Requirements
To modify Anonymization settings, you need the **Elastic AI Assistant: All** privilege, with **Customize sub-feature privileges** enabled.

::::


The **Anonymization** tab of the Security AI settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated.

::::{note}
You can access anonymization settings directly from the **Attack Discovery** page by clicking the settings (![Settings icon](../../../images/serverless-gear.svg "")) button next to the model selection dropdown menu.
::::


:::{image} ../../../images/serverless-assistant-anonymization-menu.png
:alt: AI Assistant's settings menu
:screenshot:
:::

The **Show anonymized** toggle controls whether you see the obfuscated or plaintext versions of the fields you sent to AI Assistant. It doesn’t control what gets obfuscated — that’s determined by the anonymization settings. It also doesn’t affect how event fields appear *before* being sent to AI Assistant. Instead, it controls how fields that were already sent and obfuscated appear to you.

When you include a particular event as context, such as an alert from the Alerts page, you can adjust anonymization behavior for the specific event.
Be sure the anonymization behavior meets your specifications before sending a message with the event attached.


### Knowledge base [ai-assistant-page-knowledge-base]

::::{warning}
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
::::


The **Knowledge Base** tab of the **Security AI settings** page allows you to enable AI Assistant to remember specified information, and use it as context to improve response quality. To learn more, refer to [AI Assistant Knowledge Base](../../../solutions/security/ai/ai-assistant-knowledge-base.md).


### Get the most from your queries [rag-for-esql]

Elastic AI Assistant allows you to take full advantage of the {{elastic-sec}} platform to improve your security operations. It can help you write an {{esql}} query for a particular use case, or answer general questions about how to use the platform. Its ability to assist you depends on the specificity and detail of your questions. The more context and detail you provide, the more tailored and useful its responses will be.

To maximize its usefulness, consider using more detailed prompts or asking for additional information. For instance, after asking for an ES|QL query example, you could ask a follow-up question like, "Could you give me some other examples?" You can also ask for clarification or further exposition, for example "Please provide comments explaining the query you just gave."

In addition to practical advice, AI Assistant can offer conceptual advice, tips, and best practices for enhancing your security measures. You can ask it, for example:

* "How do I set up a machine learning job in Elastic Security to detect anomalies in network traffic volume over time?"
* "I need to monitor for unusual file creation patterns that could indicate ransomware activity. How would I construct this query using EQL?"

diff --git a/raw-migrated-files/docs-content/serverless/security-ai-for-security.md b/raw-migrated-files/docs-content/serverless/security-ai-for-security.md deleted file mode 100644 index f5d8af4380..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-ai-for-security.md +++ /dev/null @@ -1,8 +0,0 @@

# AI for Security [security-ai-for-security]

You can use {{elastic-sec}}’s built-in AI tools to speed up your work and augment your team’s capabilities. The pages in this section describe [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md), which answers questions and enhances your workflows throughout Elastic Security, and [*Attack Discovery*](../../../solutions/security/ai/attack-discovery.md), which speeds up the triage process by finding patterns and identifying attacks spanning multiple alerts.

diff --git a/raw-migrated-files/docs-content/serverless/security-ai-use-cases.md b/raw-migrated-files/docs-content/serverless/security-ai-use-cases.md deleted file mode 100644 index 8cd24a7d5f..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-ai-use-cases.md +++ /dev/null @@ -1,11 +0,0 @@

# Use cases [security-ai-use-cases]

The guides in this section describe use cases for AI Assistant and Attack discovery. Refer to them for examples of each tool’s individual capabilities, and of what they can do together.
* [Identify, investigate, and document threats](../../../solutions/security/ai/identify-investigate-document-threats.md)
* [Triage alerts](../../../solutions/security/ai/triage-alerts.md)
* [Generate, customize, and learn about {{esql}} queries](../../../solutions/security/ai/generate-customize-learn-about-esorql-queries.md)

diff --git a/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md b/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md deleted file mode 100644 index 7d51d50b88..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md +++ /dev/null @@ -1,69 +0,0 @@

# Identify, investigate, and document threats [security-ai-usecase-incident-reporting]

Together, [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md) and [Attack Discovery](../../../solutions/security/ai/attack-discovery.md) can help you identify and mitigate threats, investigate incidents, and generate incident reports in various languages so you can monitor and protect your environment.

In this guide, you’ll learn how to:

* [Use Attack discovery to identify threats](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-use-attack-discovery-to-identify-threats)
* [Use AI Assistant to analyze a threat](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-use-ai-assistant-to-analyze-a-threat)
* [Create a case using AI Assistant](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-create-a-case-using-ai-assistant)
* [Translate incident information to a different human language using AI Assistant](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-translate)


## Use Attack discovery to identify threats [use-case-incident-reporting-use-attack-discovery-to-identify-threats]

Attack Discovery can detect a wide range of threats by finding relationships among alerts that may indicate a coordinated attack. This enables you to comprehend how threats move through and affect your systems. Attack Discovery generates a detailed summary of each potential threat, which can serve as the basis for further analysis. Learn how to [get started with Attack Discovery](../../../solutions/security/ai/attack-discovery.md).

:::{image} ../../../images/serverless-attck-disc-11-alerts-disc.png
:alt: An Attack discovery card showing an attack with 11 related alerts
:screenshot:
:::

In the example above, Attack discovery found connections among 11 alerts, and used them to identify and describe an attack chain.

After Attack discovery outlines your threat landscape, use Elastic AI Assistant to quickly analyze a threat in detail.


## Use AI Assistant to analyze a threat [use-case-incident-reporting-use-ai-assistant-to-analyze-a-threat]

From a discovery on the Attack discovery page, click **View in AI Assistant** to start a chat that includes the discovery as context.

:::{image} ../../../images/serverless-attck-disc-remediate-threat.gif
:alt: A dialogue with AI Assistant that has the attack discovery as context
:screenshot:
:::

AI Assistant can quickly compile essential data and provide suggestions to help you generate an incident report and plan an effective response.
You can ask it to provide relevant data or answer questions, such as “How can I remediate this threat?” or “What {{esql}} query would isolate actions taken by this user?” - -:::{image} ../../../images/serverless-attck-disc-esql-query-gen-example.png -:alt: An AI Assistant dialogue in which the user asks for a purpose-built ES|QL query -:screenshot: -::: - -The image above shows an {{esql}} query generated by AI Assistant in response to a user prompt. Learn more about [using AI Assistant for ES|QL](../../../solutions/security/ai/generate-customize-learn-about-esorql-queries.md). - -At any point in a conversation with AI Assistant, you can add data, narrative summaries, and other information from its responses to {{elastic-sec}}'s case management system to generate incident reports. - - -## Generate reports [use-case-incident-reporting-create-a-case-using-ai-assistant] - -From the AI Assistant dialog window, click **Add to case** (![Add data](../../../images/serverless-addDataApp.svg "")) next to a message to add the information in that message to a [case](../../../solutions/security/investigate/cases.md). Cases help centralize relevant details in one place for easy sharing with stakeholders. - -If you add a message that contains a discovery to a case, AI Assistant automatically adds the attack summary and all associated alerts to the case. You can also add AI Assistant messages that contain remediation steps and relevant data to the case. - - -## Translate incident information to a different human language using AI Assistant [use-case-incident-reporting-translate] - -:::{image} ../../../images/serverless-attck-disc-translate-japanese.png -:alt: An AI Assistant dialogue in which the assistant translates from English to Japanese -:screenshot: -::: - -AI Assistant can translate its findings into other human languages, helping to enable collaboration among global security teams, and making it easier to operate within multilingual organizations. - -After AI Assistant provides information in one language, you can ask it to translate its responses. For example, if it provides remediation steps for an incident, you can instruct it to “Translate these remediation steps into Japanese.” You can then add the translated output to a case. This can help team members receive the same information and insights regardless of their primary language. - -::::{note} -In our internal testing, AI Assistant translations preserved the accuracy of the original content. However, all LLMs can make mistakes, so use caution. - -:::: diff --git a/raw-migrated-files/docs-content/serverless/security-alert-suppression.md b/raw-migrated-files/docs-content/serverless/security-alert-suppression.md deleted file mode 100644 index 750dab3685..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-alert-suppression.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -navigation_title: "Suppress alerts" ---- - -# Suppress detection alerts [security-alert-suppression] - - -::::{admonition} Requirements and notice -:class: important - -* {{ml-cap}} rules have [additional requirements](../../../solutions/security/advanced-entity-analytics/machine-learning-job-rule-requirements.md) for alert suppression. -* Alert suppression is in technical preview for event correlation rules. The functionality may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. 
::::


Alert suppression allows you to reduce the number of repeated or duplicate detection alerts created by these detection rule types:

* [Custom query](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-custom-rule)
* [Threshold](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-threshold-rule)
* [Indicator match](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-indicator-rule)
* [Event correlation](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-eql-rule)
* [New terms](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-new-terms-rule)
* [ES|QL](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-esql-rule)
* [Machine learning](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-ml-rule)

Normally, when a rule meets its criteria repeatedly, it creates multiple alerts, one for each time the rule’s criteria are met. When alert suppression is configured, duplicate qualifying events are grouped, and only one alert is created for each group. Depending on the rule type, you can configure alert suppression to create alerts each time the rule runs, or once within a specified time window. You can also specify multiple fields to group events by unique combinations of values.

The {{security-app}} displays several indicators in the Alerts table and the alert details flyout when a detection alert is created with alert suppression enabled. You can view the original events associated with suppressed alerts by investigating the alert in Timeline.

::::{note}
Alert suppression is not available for Elastic prebuilt rules. However, if you want to suppress alerts for a prebuilt rule, you can duplicate it, then configure alert suppression on the duplicated rule.

::::


## Configure alert suppression [security-alert-suppression-configure-alert-suppression]

You can configure alert suppression when you create or edit a supported rule type. Refer to documentation for creating [custom query](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-custom-rule), [threshold](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-threshold-rule), [indicator match](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-indicator-rule), [event correlation](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-eql-rule), [new terms](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-new-terms-rule), [ES|QL](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-esql-rule), or [machine learning](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-ml-rule) rules for detailed instructions.

1. When configuring the rule type (the **Define rule** step for a new rule, or the **Definition** tab for an existing rule), specify how you want to group events for alert suppression:

   * **Custom query, indicator match, event correlation, new terms, {{esql}}, or {{ml}} rules:** In **Suppress alerts by**, enter 1-3 field names to group events by the fields' values.
   * **Threshold rule:** In **Group by**, enter up to 3 field names to group events by the fields' values, or leave the setting empty to group all qualifying events together.
   ::::{note}
   If you specify a field with multiple values, alerts with that field are handled as follows:

   * **Custom query or threshold rules:** Alerts are grouped by each unique value. For example, if you suppress alerts by `destination.ip` of `[127.0.0.1, 127.0.0.2, 127.0.0.3]`, alerts will be suppressed separately for each value of `127.0.0.1`, `127.0.0.2`, and `127.0.0.3`.
   * **Indicator match, event correlation (non-sequence queries only), new terms, {{esql}}, or {{ml}} rules:** Alerts with the specified field name and identical array values are grouped together. For example, if you suppress alerts by `destination.ip` of `[127.0.0.1, 127.0.0.2, 127.0.0.3]`, alerts with the entire array are grouped and only one alert is created for the group.
   * **Event correlation (sequence queries only) rules:** If the suppression field is an array of values, the suppressed alert will only suppress values that are an exact match. The values must be equivalent and be in the same position. For example, if you configure suppression on the field `myips` and one sequence alert has `[1.1.1.1, 0.0.0.0]` and another sequence alert has `[1.1.1.1, 192.168.0.1]`, neither of those alerts will be suppressed, despite sharing an array element.

   ::::

2. If available, select how often to create alerts for duplicate events:

   ::::{admonition} NOTE
   :class: note

   Both options are available for custom query, indicator match, event correlation, new terms, {{esql}}, and {{ml}} rules. Threshold rules only have the **Per time period** option.

   ::::


   * **Per rule execution**: Create an alert each time the rule runs and an event meets its criteria.
   * **Per time period**: Create one alert for all qualifying events that occur within a specified time window, beginning from when an event first meets the rule criteria and creates the alert.

     For example, if a rule runs every 5 minutes but you don’t need alerts that frequently, you can set the suppression time period to a longer time, such as 1 hour. If the rule meets its criteria, it creates an alert at that time, and for the next hour, it’ll suppress any subsequent qualifying events.

   ![Alert suppression options](../../../images/serverless--detections-alert-suppression-options.png "")

3. Under **If a suppression field is missing**, choose how to handle events with missing suppression fields (events in which one or more of the **Suppress alerts by** fields don’t exist):

   ::::{admonition} NOTE
   :class: note

   These options are not available for threshold rules.

   ::::


   * **Suppress and group alerts for events with missing fields**: Create one alert for each group of events with missing fields. Missing fields get a `null` value, which is used to group and suppress alerts.
   * **Do not suppress alerts for events with missing fields**: Create a separate alert for each matching event. This basically falls back to normal alert creation for events with missing suppression fields.

4. Configure other rule settings, then save and enable the rule.

::::{admonition} Tips
:class: tip

* Use the **Rule preview** before saving the rule to visualize how alert suppression will affect the alerts created, based on historical data.
* If a rule times out while suppression is turned on, try shortening the rule’s [look-back](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule) time or turn off suppression to improve the rule’s performance.
- -:::: - - - -## Confirm suppressed alerts [security-alert-suppression-confirm-suppressed-alerts] - -The {{security-app}} displays several indicators of whether a detection alert was created with alert suppression enabled, and how many duplicate alerts were suppressed. - -::::{important} -After an alert is moved to the `Closed` status, it will no longer suppress new alerts. To prevent interruptions or unexpected changes in suppression, avoid closing alerts before the suppression interval ends. - -:::: - - -* **Alerts** table — Icon in the **Rule** column. Hover to display the number of suppressed alerts: - - ![Suppressed alerts icon and tooltip in Alerts table](../../../images/serverless--detections-suppressed-alerts-table.png "") - -* **Alerts** table — Column for suppressed alerts count. Select **Fields** to open the fields browser, then add `kibana.alert.suppression.docs_count` to the table. - - ![Suppressed alerts count field column in Alerts table](../../../images/serverless--detections-suppressed-alerts-table-column.png "") - -* Alert details flyout — **Insights** section: - - ![Suppressed alerts Insights section in alert details flyout](../../../images/serverless--detections-suppressed-alerts-details.png "") - - - -## Investigate events for suppressed alerts [security-alert-suppression-investigate-events-for-suppressed-alerts] - -With alert suppression, detection alerts aren’t created for the grouped source events, but you can still retrieve the events for further analysis or investigation. Do one of the following to open Timeline with the original events associated with both the created alert and the suppressed alerts: - -* **Alerts** table — Select **Investigate in timeline** in the **Actions** column. - - ![Investigate in timeline button](../../../images/serverless--detections-timeline-button.png "") - -* Alert details flyout — Select **Take action** → **Investigate in timeline**. - - -## Alert suppression limit by rule type [security-alert-suppression-alert-suppression-limit-by-rule-type] - -Some rule types have a maximum number of alerts that can be suppressed (custom query rules don’t have a suppression limit): - -* **Threshold, event correlation, {{esql}}, and {{ml}}:** The maximum number is the value you choose for the rule’s **Max alerts per run** [advanced setting](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params), which is `100` by default. -* **Indicator match and new terms:** The maximum number is five times the value you choose for the rule’s **Max alerts per run** [advanced setting](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params). The default value is `100`, which means the default maximum limit for indicator match rules and new terms rules is `500`. diff --git a/raw-migrated-files/docs-content/serverless/security-alerts-manage.md b/raw-migrated-files/docs-content/serverless/security-alerts-manage.md deleted file mode 100644 index 403c006cca..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-alerts-manage.md +++ /dev/null @@ -1,289 +0,0 @@ ---- -navigation_title: "Alerts" ---- - -# Manage detection alerts [security-alerts-manage] - - -The Alerts page displays all detection alerts. 
:::{image} ../../../images/serverless--detections-alert-page.png
:alt: Alerts page overview
:screenshot:
:::


## View and filter detection alerts [detection-view-and-filter-alerts]

The Alerts page offers various ways for you to organize and triage detection alerts as you investigate suspicious events. You can:

* View an alert’s details. Click the **View details** button from the Alerts table to open the alert details flyout. Learn more at [View detection alert details](../../../solutions/security/detect-and-alert/view-detection-alert-details.md).

  ![View details button](../../../images/serverless--detections-view-alert-details.png "")

* View the rule that created an alert. Click a name in the **Rule** column to open the rule’s details.
* View the details of the host and user associated with the alert. In the Alerts table, click a host name to open the [host details flyout](/solutions/security/explore/hosts-page.md#host-details-flyout), or a user name to open the [user details flyout](/solutions/security/explore/users-page.md#user-details-flyout).
* Filter for a specific rule in the KQL bar (for example, `kibana.alert.rule.name :"SSH (Secure Shell) from the Internet"`). KQL autocomplete is available for `.alerts-security.alerts-*` indices.
* Use the date and time filter to define a specific time range. By default, this filter is set to search the last 24 hours.
* Use the drop-down filter controls to filter alerts by up to four fields. By default, you can filter alerts by **Status**, **Severity**, **User**, and **Host**, and you can [edit the controls](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#drop-down-filter-controls) to use other fields.
* Visualize and group alerts by specific fields in the visualization section. Use the buttons on the left to select a view type (**Summary**, **Trend**, **Counts**, or **Treemap**), and use the menus on the right to select the ECS fields used for grouping alerts. Refer to [Visualize detection alerts](../../../solutions/security/detect-and-alert/visualize-detection-alerts.md) for more on each view type.
* Hover over a value to display available [inline actions](../../../solutions/security/get-started/elastic-security-ui.md#inline-actions), such as **Filter In**, **Filter Out**, and **Add to timeline**. Click the expand icon for more options, including **Show top *x*** and **Copy to Clipboard**. The available options vary based on the type of data.

  ![Inline additional actions menu](../../../images/serverless--detections-inline-actions-menu.png "")

* Filter alert results to include building block alerts or to only show alerts from indicator match rules by selecting the **Additional filters** drop-down. By default, [building block alerts](../../../solutions/security/detect-and-alert/about-building-block-rules.md) are excluded from the Overview and Alerts pages. You can choose to include building block alerts on the Alerts page, which expands the number of alerts.

  :::{image} ../../../images/serverless--detections-additional-filters.png
  :alt: Alerts table with Additional filters menu highlighted
  :screenshot:
  :::

  ::::{note}
  When updating alert results to include building block alerts, the Security app searches the `.alerts-security.alerts-` index for the `kibana.alert.building_block_type` field. When looking for alerts created from indicator match rules, the app searches the same index for `kibana.alert.rule.type:'threat_match'`.
  ::::

* View detection alerts generated by a specific rule. Go to **Rules** → **Detection rules (SIEM)**, then select a rule name in the table. The rule details page displays a comprehensive view of the rule’s settings, and the Alerts table under the Trend histogram displays the alerts associated with the rule, including alerts from any previous or deleted revision of that rule.


## Edit drop-down filter controls [drop-down-filter-controls]

By default, the drop-down controls on the Alerts page filter alerts by **Status**, **Severity**, **User**, and **Host**. You can edit them to filter by different fields, as well as remove, add, and reorder them if you prefer a different order.

:::{image} ../../../images/serverless--detections-alert-page-dropdown-controls.png
:alt: Alerts page with drop-down controls highlighted
:screenshot:
:::

::::{note}
* You can have a maximum of four controls on the Alerts page.
* You can’t remove the **Status** control.
* If you make any changes to the controls, you *must* save the pending changes for them to persist.
* Saved changes are stored in your browser’s local storage, not your [user profile](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md). If you clear your browser’s storage or log into your user profile from a different browser, you will lose your customizations.

::::


1. Click the three-dot icon next to the controls (![More actions](../../../images/serverless-boxesHorizontal.svg "")), then select **Edit Controls**.
2. Do any of the following:

   * To reorder controls, click and drag a control by its handle (![Reorder](../../../images/serverless-grabHorizontal.svg "")).
   * To remove a control, hover over it and select **Remove control** (![Remove](../../../images/serverless-cross.svg "")).
   * To edit a control, hover over it and select **Edit control** (![Edit](../../../images/serverless-pencil.svg "")).
   * To add a new control, click **Add Controls** (![Add](../../../images/serverless-plusInCircle.svg "")). If you already have four controls, you must first remove one to make room for the new one.

3. If you’re editing or adding a control, do the following in the configuration flyout that opens:

   1. In the **Field** list, select the field for the filter. The **Control type** is automatically applied to the field you selected.
   2. Enter a **Label** to identify the control.
   3. Click **Save and close**.

4. Click **Save pending changes** (![Save](../../../images/serverless-save.svg "")).


## Group alerts [group-alerts]

You can group alerts by rule name, user name, host name, source IP address, or any other field. Select **Group alerts by**, then select an option or **Custom field** to specify a different field.

Select up to three fields for grouping alerts. The groups will nest in the order you selected them, and the nesting order is displayed above the table next to **Group alerts by**.

:::{image} ../../../images/serverless--detections-group-alerts.png
:alt: Alerts table with Group alerts by drop-down
:screenshot:
:::

Each group displays information such as the alerts' severity and how many users, hosts, and alerts are in the group. The information displayed varies depending on the selected fields.

To interact with grouped alerts:

* Select the **Take actions** menu to perform a bulk action on all alerts in a group, such as [changing their status](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#detection-alert-status).
-* Click a group’s name or the expand icon (![Next](../../../images/serverless-arrowRight.svg "")) to display alerts within that group. You can filter and customize this view like any other alerts table. - - :::{image} ../../../images/serverless--detections-group-alerts-expand.png - :alt: Expanded alert group with alerts table - :screenshot: - ::: - - - -## Customize the Alerts table [customize-the-alerts-table] - -Use the toolbar buttons in the upper-left of the Alerts table to customize the columns you want displayed: - -* **Columns**: Reorder the columns. -* **Sort fields *x***: Sort the table by one or more columns. -* **Fields**: Select the fields to display in the table. You can also add [runtime fields](../../../solutions/security/get-started/create-runtime-fields-in-elastic-security.md) to detection alerts and display them in the Alerts table. - -Click the **Full screen** button in the upper-right to view the table in full-screen mode. - -:::{image} ../../../images/serverless--detections-alert-table-toolbar-buttons.png -:alt: Alerts table with toolbar buttons highlighted -:screenshot: -::: - -Use the view options drop-down in the upper-right of the Alerts table to control how alerts are displayed: - -* **Grid view**: Displays alerts in a traditional table view with columns for each field -* **Event rendered view**: Display alerts in a descriptive event flow that includes relevant details and context about the event. - -:::{image} ../../../images/serverless--detections-event-rendered-view.png -:alt: Alerts table with the Event rendered view enabled -:screenshot: -::: - -::::{tip} -When using grid view, you can view alert-rendered reason statements and event renderings for specific alerts by clicking the expand icon in the **Reason** column. Some events do not have event renderings. 
::::


## Take actions on an alert [alert-actions]

From the Alerts table or the alert details flyout, you can:

* [Add detection alerts to cases](../../../solutions/security/detect-and-alert/add-detection-alerts-to-cases.md)
* [Change an alert’s status](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#detection-alert-status)
* [Add a rule exception from an alert](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#add-exception-from-alerts)
* [Apply and filter alert tags](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#apply-alert-tags)
* [Assign users to alerts](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#assign-users-to-alerts)
* [Filter assigned alerts](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#filter-assigned-alerts)
* [Add an endpoint exception from an alert](../../../solutions/security/detect-and-alert/add-manage-exceptions.md#endpoint-rule-exceptions)
* [Isolate an alert’s host](../../../solutions/security/endpoint-response-actions/isolate-host.md)
* [Perform response actions on an alert’s host](../../../solutions/security/endpoint-response-actions.md) (Alert details flyout only)
* [Run Osquery against an alert](../../../solutions/security/investigate/run-osquery-from-alerts.md)
* [View alerts in Timeline](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#signals-to-timelines)
* [Visually analyze an alert’s process relationships](../../../solutions/security/investigate/visual-event-analyzer.md)
* [Add notes to alerts](../../../solutions/security/investigate/notes.md#notes-alerts-events)


### Change an alert’s status [detection-alert-status]

You can set an alert’s status to indicate whether it needs to be investigated (**Open**), is under active investigation (**Acknowledged**), or has been resolved (**Closed**). By default, the Alerts page displays open alerts. To filter alerts that are **Acknowledged** or **Closed**, use the **Status** drop-down filter at the top of the Alerts page.

To change an alert’s status, do one of the following:

* In the Alerts table, click **More actions** (**…**) in the alert’s row, then select a status.
* In the Alerts table, select the alerts you want to change, click **Selected *x* alerts** at the upper-left above the table, and then select a status.

  ![Bulk action menu with multiple alerts selected](../../../images/serverless--detections-alert-change-status.png "")

* [beta] To bulk-change the status of [grouped alerts](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#group-alerts), select the **Take actions** menu for the group, then select a status.
* In an alert’s details flyout, click **Take action** and select a status.


### Apply and filter alert tags [apply-alert-tags]

Use alert tags to organize related alerts into categories that you can filter and group. For example, use the `False Positive` alert tag to label a group of alerts as false positives. Then, search for them by entering the `kibana.alert.workflow_tags : "False Positive"` query into the KQL bar. Alternatively, use the Alert table’s [drop-down filters](../../../solutions/security/detect-and-alert/manage-detection-alerts.md#drop-down-filter-controls) to filter for tagged alerts.

::::{note}
You can manage alert tag options by updating the `securitySolution:alertTags` advanced setting.
Refer to [Manage alert tag options](../../../solutions/security/get-started/configure-advanced-settings.md#manage-alert-tags) for more information. - -:::: - - -::::{tip} -To display alert tags in the Alerts table, click **Fields** and add the `kibana.alert.workflow_tags` field. - -:::: - - -To apply or remove alert tags on individual alerts, do one of the following: - -* In the Alerts table, click **More actions** (**…​***) in an alert’s row, then click ***Apply alert tags**. Select or unselect tags, then click **Apply tags**. -* In an alert’s details flyout, click **Take action → Apply alert tags**. Select or unselect tags, then click **Apply tags**. - -To apply or remove alert tags on multiple alerts, select the alerts you want to change, then click **Selected *x* alerts** at the upper-left above the table. Click **Apply alert tags**, select or unselect tags, then click **Apply tags**. - -:::{image} ../../../images/serverless--detections-bulk-apply-alert-tag.png -:alt: Bulk action menu with multiple alerts selected -:screenshot: -::: - - -### Assign users to alerts [assign-users-to-alerts] - -Assign users to alerts that you want them to investigate, and manage alert assignees throughout an alert’s lifecycle. - -::::{admonition} Requirements -:class: note - -All [Security roles](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles), except for the Viewer role, can assign and unassign users to alerts. - -:::: - - -::::{important} -Users are not notified when they’ve been assigned to, or unassigned from, alerts. - -:::: - - -| Action | Instructions | -| --- | --- | -| Assign users to an alert | Choose one of the following:

* **Alerts table** - Click **More actions** (**…​**) in an alert’s row, then click **Assign alert**. Select users, then click **Apply**.
* **Alert details flyout** - Click **Take action → Assign alert**. Alternatively, click the **Assign alert** icon (![Assign alert](../../../images/serverless-plusInCircle.svg "")) at the top of the alert details flyout, select users, then click **Apply**.
| -| Unassign all users from an alert | Choose one of the following:

* **Alerts table** - Click **More actions** (**…​**) in an alert’s row, then click **Unassign alert**.
* **Alert details flyout** - Click **Take action → Unassign alert**.
| -| Assign users to multiple alerts | From the Alerts table, select the alerts you want to change. Click **Selected *x* alerts** at the upper-left above the table, then click **Assign alert**. Select users, then click **Apply**.

::::{note}
Users who are assigned to only some of the selected alerts appear as unassigned in the selection list. Selecting those users assigns them to any selected alerts they aren’t already assigned to.

::::

| -| Unassign users from multiple alerts | From the Alerts table, select the alerts you want to change and click **Selected *x* alerts** at the upper-left above the table. Click **Unassign alert** to remove users from the selected alerts. | - Show users that have been assigned to alerts by adding the **Assignees** column to the Alerts table (**Fields** → `kibana.alert.workflow_assignee_ids`). Up to four assigned users can appear in the **Assignees** column. If an alert is assigned to five or more users, a number appears instead. - -:::{image} ../../../images/serverless--detections-alert-assigned-alerts.png -:alt: Alert assignees in the Alerts table -:screenshot: -::: - -Assigned users are automatically displayed in the alert details flyout. Up to two assigned users can be shown in the flyout. If an alert is assigned to three or more users, a numbered badge displays instead. - -:::{image} ../../../images/serverless--detections-alert-flyout-assignees.png -:alt: Alert assignees in the alert details flyout -:screenshot: -::: - - -### Filter assigned alerts [filter-assigned-alerts] - -Click the **Assignees** filter above the Alerts table, then select the users you want to filter by. - -:::{image} ../../../images/serverless--detections-alert-filter-assigned-alerts.png -:alt: Filtering assigned alerts -:screenshot: -::: - - -### Add a rule exception from an alert [add-exception-from-alerts] - -You can add exceptions to the rule that generated an alert directly from the Alerts table. Exceptions prevent a rule from generating alerts even when its criteria are met. - -To add an exception, click the **More actions** menu (**…​**) in the Alerts table, then select **Add exception**. Alternatively, select **Take action** → **Add rule exception** in the alert details flyout. - -For information about exceptions and how to use them, refer to [Add and manage exceptions](../../../solutions/security/detect-and-alert/add-manage-exceptions.md). - - -### View alerts in Timeline [signals-to-timelines] - -* To view a single alert in Timeline, click the **Investigate in timeline** button in the Alerts table. Alternatively, select **Take action** → **Investigate in timeline** in the alert details flyout. - - ![Investigate in timeline button](../../../images/serverless--detections-timeline-button.png "") - -* To view multiple alerts in Timeline (up to 2,000), select the checkboxes next to the alerts, then click **Selected *x* alerts** → **Investigate in timeline**. - - ![Bulk add alerts to timeline button](../../../images/serverless--detections-bulk-add-alerts-to-timeline.png "") - - -::::{tip} -When you send an alert generated by a [threshold rule](../../../solutions/security/detect-and-alert/create-detection-rule.md) to Timeline, all matching events are listed in the Timeline, even ones that did not reach the threshold value. For example, if you have an alert generated by a threshold rule that detects 10 failed login attempts, when you send that alert to Timeline, all failed login attempts detected by the rule are listed. - -:::: - - -If the rule that generated the alert uses a Timeline template, the dropzone query values defined in the template are replaced with their corresponding alert values when you investigate the alert in Timeline. - -**Example** - -This Timeline template uses the `host.name: "{host.name}"` dropzone filter in the rule. When alerts generated by the rule are investigated in Timeline, the `{host.name}` value is replaced with the alert’s `host.name` value.
If the alerts’s `host.name` value is `Windows-ArsenalFC`, the Timeline dropzone query is `host.name: "Windows-ArsenalFC"`. - -::::{note} -Refer to [Investigate events in Timeline](../../../solutions/security/investigate/timeline.md) for information on creating Timelines and Timeline templates. For information on how to add Timeline templates to rules, refer to [Create a detection rule](../../../solutions/security/detect-and-alert/create-detection-rule.md). - -:::: diff --git a/raw-migrated-files/docs-content/serverless/security-building-block-rules.md b/raw-migrated-files/docs-content/serverless/security-building-block-rules.md deleted file mode 100644 index 30e94b21a3..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-building-block-rules.md +++ /dev/null @@ -1,29 +0,0 @@ -# Use building block rules [security-building-block-rules] - -Create building block rules when you do not want to see their generated alerts in the UI. This is useful when you want: - -* A record of low-risk alerts without producing noise in the Alerts table. -* Rules that execute on the alert indices (`.alerts-security.alerts-`). You can then use building block rules to create hidden alerts that act as a basis for an *ordinary* rule to generate visible alerts. - - -## Set up rules that run on alert indices [security-building-block-rules-set-up-rules-that-run-on-alert-indices] - -To create a rule that searches alert indices, select **Index Patterns** as the rule’s **Source** and enter the index pattern for alert indices (`.alerts-security.alerts-*`): - -:::{image} ../../../images/serverless--detections-alert-indices-ui.png -:alt: detections alert indices ui -:screenshot: -::: - - -## View building block alerts in the UI [security-building-block-rules-view-building-block-alerts-in-the-ui] - -By default, building block alerts are excluded from the Overview and Alerts pages. You can choose to include building block alerts on the Alerts page, which expands the number of alerts. - -1. Go to **Alerts**. -2. In the Alerts table, select **Additional filters** → **Include building block alerts**, located on the far-right. - -::::{note} -On a building block rule details page, the rule’s alerts are displayed (by default, **Include building block alerts** is selected). - -:::: diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md b/raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md deleted file mode 100644 index 295a6da198..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md +++ /dev/null @@ -1,99 +0,0 @@ -# Connect to Azure OpenAI [security-connect-to-azure-openai] - -This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure Azure, then configure the connector in {{kib}}. - - -## Configure Azure [security-connect-to-azure-openai-configure-azure] - - -### Configure a deployment [security-connect-to-azure-openai-configure-a-deployment] - -First, set up an Azure OpenAI deployment: - -1. Log in to the Azure console and search for Azure OpenAI. -2. In **Azure AI services**, select **Create**. -3. For the **Project Details**, select your subscription and resource group. If you don’t have a resource group, select **Create new** to make one. -4. 
For **Instance Details**, select the desired region and specify a name, such as `example-deployment-openai`. -5. Select the **Standard** pricing tier, then click **Next**. -6. Configure your network settings, click **Next**, optionally add tags, then click **Next**. -7. Review your deployment settings, then click **Create**. When complete, select **Go to resource**. - -The following video demonstrates these steps. - - - -### Configure keys [security-connect-to-azure-openai-configure-keys] - -Next, create access keys for the deployment: - -1. From within your Azure OpenAI deployment, select **Click here to manage keys**. -2. Store your keys in a secure location. - -The following video demonstrates these steps. - - - -### Configure a model [security-connect-to-azure-openai-configure-a-model] - -Now, set up the Azure OpenAI model: - -1. From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**. -2. On the **Deployments** page, select **Create new deployment**. -3. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`. -4. Set the model version to "Auto-update to default". -5. Under **Deployment type**, select **Standard**. -6. Name your deployment. -7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits. -8. Click **Create**. - -::::{important} -The models available to you will depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md). - -:::: - - -The following video demonstrates these steps. - - - -## Configure Elastic AI Assistant [security-connect-to-azure-openai-configure-elastic-ai-assistant] - -Finally, configure the connector in {{kib}}: - -1. Log in to {{kib}}. -2. Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **OpenAI**. -3. Give your connector a name to help you keep track of different models, such as `Azure OpenAI (GPT-4 Turbo v. 0125)`. -4. For **Select an OpenAI provider**, choose **Azure OpenAI**. -5. Update the **URL** field. We recommend doing the following: - - * Navigate to your deployment in Azure AI Studio and select **Open in Playground**. The **Chat playground** screen displays. - * Select **View code**, then from the drop-down, change the **Sample code** to `Curl`. - * Highlight and copy the URL without the quotes, then paste it into the **URL** field in {{kib}}. - * (Optional) Alternatively, refer to the [API documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference) to learn how to create the URL manually. - -6. Under **API key**, enter one of your API keys. -7. Click **Save & test**, then click **Run**. - -The following video demonstrates these steps. 
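The UI steps above can also be scripted. As a hedged sketch only: {{kib}} exposes a create-connector endpoint (`POST <kibana-url>/api/actions/connector`, sent with a `kbn-xsrf: true` header), and the OpenAI connector type uses the `.gen-ai` type ID. The resource, deployment, API version, and key below are placeholders; replace the `apiUrl` with the URL you copied from the Azure playground:

```json
{
  "name": "Azure OpenAI (GPT-4o)",
  "connector_type_id": ".gen-ai",
  "config": {
    "apiProvider": "Azure OpenAI",
    "apiUrl": "https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/chat/completions?api-version=<api-version>"
  },
  "secrets": {
    "apiKey": "<your-api-key>"
  }
}
```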
- - diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md b/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md deleted file mode 100644 index 8f3b15d1a8..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md +++ /dev/null @@ -1,146 +0,0 @@ -# Connect to Amazon Bedrock [security-connect-to-bedrock] - -This page provides step-by-step instructions for setting up an Amazon Bedrock connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure AWS, then configure the connector in {{kib}}. - -::::{note} -Only Amazon Bedrock’s `Anthropic` models are supported: `Claude` and `Claude instant`. - -:::: - - - -## Configure AWS [security-connect-to-bedrock-configure-aws] - - -### Configure an IAM policy [security-connect-to-bedrock-configure-an-iam-policy] - -First, configure an IAM policy with the necessary permissions: - -1. Log into the AWS console and search for Identity and Access Management (IAM). -2. From the **IAM** menu, select **Policies** → **Create policy**. -3. To provide the necessary permissions, paste the following JSON into the **Specify permissions** menu. - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VisualEditor0", - "Effect": "Allow", - "Action": [ - "bedrock:InvokeModel", - "bedrock:InvokeModelWithResponseStream" - ], - "Resource": "*" - } - ] -} -``` - -::::{note} -These are the minimum required permissions. IAM policies with additional permissions are also supported. - -:::: - - -1. Click **Next**. Name your policy. - -The following video demonstrates these steps. - - - -### Configure an IAM User [security-connect-to-bedrock-configure-an-iam-user] - -Next, assign the policy you just created to a new user: - -1. Return to the **IAM** menu. Select **Users** from the navigation menu, then click **Create User**. -2. Name the user, then click **Next**. -3. Select **Attach policies directly**. -4. In the **Permissions policies** field, search for the policy you created earlier, select it, and click **Next**. -5. Review the configuration then click **Create user**. - -The following video demonstrates these steps. - - - -### Create an access key [security-connect-to-bedrock-create-an-access-key] - -Create the access keys that will authenticate your Elastic connector: - -1. Return to the **IAM** menu. Select **Users** from the navigation menu. -2. Search for the user you just created, and click its name. -3. Go to the **Security credentials** tab. -4. Under **Access keys**, click **Create access key**. -5. Select **Third-party service**, check the box under **Confirmation***, click ***Next**, then click **Create access key**. -6. Click **Download .csv file** to download the key. Store it securely. - -The following video demonstrates these steps. - - - -### Enable model access [security-connect-to-bedrock-enable-model-access] - -Make sure the supported Amazon Bedrock LLMs are enabled: - -1. Search the AWS console for Amazon Bedrock. -2. From the Amazon Bedrock page, click **Get started**. -3. Select **Model access** from the left navigation menu, then click **Manage model access**. -4. Check the boxes for **Claude** and/or **Claude Instant**, depending which model or models you plan to use. -5. Click **Save changes**. - -The following video demonstrates these steps. 
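The next section walks through creating the connector in the {{kib}} UI. If you would rather script that part, a minimal sketch of a request body for {{kib}}’s create-connector endpoint (`POST <kibana-url>/api/actions/connector`, with a `kbn-xsrf: true` header) follows; it assumes the `.bedrock` connector type ID, uses the Claude 3 Sonnet model ID mentioned below as an example, and leaves the AWS keys as placeholders:

```json
{
  "name": "Amazon Bedrock (Claude 3 Sonnet)",
  "connector_type_id": ".bedrock",
  "config": {
    "apiUrl": "https://bedrock-runtime.us-east-1.amazonaws.com",
    "defaultModel": "anthropic.claude-3-sonnet-20240229-v1:0"
  },
  "secrets": {
    "accessKey": "<your-access-key-id>",
    "secret": "<your-secret-access-key>"
  }
}
```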
- - - -## Configure the Amazon Bedrock connector [security-connect-to-bedrock-configure-the-amazon-bedrock-connector] - -Finally, configure the connector in {{kib}}: - -1. Log in to {{kib}}. -2. Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **Amazon Bedrock**. -3. Name your connector. -4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`. -5. (Optional) Add one of the following strings if you want to use a model other than the default: - - * For Haiku: `anthropic.claude-3-haiku-20240307-v1:0` - * For Sonnet: `anthropic.claude-3-sonnet-20240229-v1:0` - * For Opus: `anthropic.claude-3-opus-20240229-v1:0` - -6. Enter the **Access Key** and **Secret** that you generated earlier, then click **Save**. - -Your LLM connector is now configured. For more information on using Elastic AI Assistant, refer to [AI Assistant](https://docs.elastic.co/security/ai-assistant). - -::::{important} -If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`. - -:::: - - -The following video demonstrates these steps. - - diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md b/raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md deleted file mode 100644 index fd4af758ca..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md +++ /dev/null @@ -1,98 +0,0 @@ -# Connect to Google Vertex AI [security-connect-to-google-vertex] - -This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate an API key, and finally configure the connector in your {{elastic-sec}} project. - -::::{important} -Before continuing, you should have an active project in one of Google Vertex AI’s [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability). - -:::: - - - -## Enable the Vertex AI API [security-connect-to-google-vertex-enable-the-vertex-ai-api] - -1. Log in to the GCP console and navigate to **Vertex AI → Vertex AI Studio → Overview**. -2. If you’re new to Vertex AI, the **Get started with Vertex AI Studio** popup appears. Click **Vertex AI API**, then click **ENABLE**. - -The following video demonstrates these steps. - - - -::::{note} -For more information about enabling the Vertex AI API, refer to [Google’s documentation](https://cloud.google.com/vertex-ai/docs/start/cloud-environment). - -:::: - - - -## Create a Vertex AI service account [security-connect-to-google-vertex-create-a-vertex-ai-service-account] - -1. In the GCP console, navigate to **APIs & Services → Library**. -2. Search for **Vertex AI API**, select it, and click **MANAGE**. -3. In the left menu, navigate to **Credentials** then click **+ CREATE CREDENTIALS** and select **Service account**. 
-4. Name the new service account, then click **CREATE AND CONTINUE**. -5. Under **Select a role**, select **Vertex AI User**, then click **CONTINUE**. -6. Click **Done**. - -The following video demonstrates these steps. - - - - -## Generate an API key [security-connect-to-google-vertex-generate-an-api-key] - -1. Return to Vertex AI’s **Credentials** menu and click **Manage service accounts**. -2. Search for the service account you just created, select it, then click the link that appears under **Email**. -3. Go to the **KEYS** tab, click **ADD KEY**, then select **Create new key**. -4. Select **JSON**, then click **CREATE** to download the key. Keep it somewhere secure. - -The following video demonstrates these steps. - - - - -## Configure the Google Gemini connector [security-connect-to-google-vertex-configure-the-google-gemini-connector] - -Finally, configure the connector in {{kib}}: - -1. Log in to {{kib}}. -2. Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **Google Gemini**. -3. Name your connector to help keep track of the model version you are using. -4. Under **URL**, enter the URL for your region. -5. Enter your **GCP Region** and **GCP Project ID**. -6. Under **Default model**, specify either `gemini-1.5.pro` or `gemini-1.5-flash`. [Learn more about the models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models). -7. Under **Authentication**, enter your API key. -8. Click **Save**. - -The following video demonstrates these steps. - - - diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-openai.md b/raw-migrated-files/docs-content/serverless/security-connect-to-openai.md deleted file mode 100644 index 67a31b9de2..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-connect-to-openai.md +++ /dev/null @@ -1,58 +0,0 @@ -# Connect to OpenAI [security-connect-to-openai] - -This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI’s large language models (LLMs) within {{kib}}. You’ll first need to create an OpenAI API key, then configure the connector in {{kib}}. - - -## Configure OpenAI [security-connect-to-openai-configure-openai] - - -### Select a model [security-connect-to-openai-select-a-model] - -Before creating an API key, you must choose a model. Refer to the [OpenAI docs](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) to select a model. Take note of the specific model name (for example `gpt-4-turbo`); you’ll need it when configuring {{kib}}. - -::::{note} -`GPT-4o` offers increased performance over previous versions. For more information on how different models perform for different tasks, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md). - -:::: - - - -### Create an API key [security-connect-to-openai-create-an-api-key] - -To generate an API key: - -1. Log in to the OpenAI platform and navigate to **API keys**. -2. Select **Create new secret key**. -3. Name your key, select an OpenAI project, and set the desired permissions. -4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen. - -The following video demonstrates these steps. - - - -## Configure the OpenAI connector [security-connect-to-openai-configure-the-openai-connector] - -Finally, configure the connector in {{kib}}: - -1. Log in to {{kib}}. -2. 
Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **OpenAI**. -3. Provide a name for your connector, such as `OpenAI (GPT-4 Turbo Preview)`, to help keep track of the model and version you are using. -4. Under **Select an OpenAI provider**, choose **OpenAI**. -5. The **URL** field can be left as default. -6. Under **Default model**, specify which [model](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) you want to use. -7. Paste the API key that you created into the corresponding field. -8. Click **Save**. - -The following video demonstrates these steps. - - diff --git a/raw-migrated-files/docs-content/serverless/security-detections-requirements.md b/raw-migrated-files/docs-content/serverless/security-detections-requirements.md deleted file mode 100644 index 09bc54003a..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-detections-requirements.md +++ /dev/null @@ -1,40 +0,0 @@ -# Detections requirements [security-detections-requirements] - -To use the [Detections feature](../../../solutions/security/detect-and-alert.md), you first need to configure a few settings. You also need the appropriate role to send [notifications](../../../solutions/security/detect-and-alert/create-detection-rule.md) when detection alerts are generated. - -Additionally, there are some [advanced settings](../../../solutions/security/detect-and-alert/detections-requirements.md) used to configure [value list](../../../solutions/security/detect-and-alert/create-manage-value-lists.md) upload limits. - - -## Enable and access detections [enable-detections-ui] - -To use the Detections feature, it must be enabled and you must have either the appropriate [predefined Security user role](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles) or a [custom role](../../../deploy-manage/users-roles/cloud-organization/user-roles.md) with privileges to access rules and alerts. If your role doesn’t have the privileges needed to enable this feature, you can request someone who has these privileges to visit your {{kib}} space, which will turn it on for you. - -::::{note} -For instructions about using {{ml}} jobs and rules, refer to [Machine learning job and rule requirements](../../../solutions/security/advanced-entity-analytics/machine-learning-job-rule-requirements.md). - -:::: - - - -### Custom role privileges [security-detections-requirements-custom-role-privileges] - -The following table describes the required custom role privileges to access the Detections feature, including rules and alerts. For more information on {{kib}} privileges, refer to [Custom roles](../../../deploy-manage/users-roles/cloud-organization/user-roles.md). - -| Action | Cluster Privilege | Index Privileges | {{kib}} Privileges | -| --- | --- | --- | --- | -| Enable detections in your space | `manage` | `manage`, `write`, `read`, and `view_index_metadata` for these system indices and data streams, where `` is the space name:

* `.alerts-security.alerts-<space-ID>`
* `.lists-<space-ID>`
* `.items-<space-ID>`
| `All` for the `Security` feature | -| Enable detections in all spaces

**NOTE:** To turn on detections, visit the Rules and Alerts pages for each space.
| `manage` | `manage`, `write`, `read`, and `view_index_metadata` for these system indices and data streams:

* `.alerts-security.alerts-<space-ID>`
* `.lists-<space-ID>`
* `.items-<space-ID>`
| `All` for the `Security` feature | -| Preview rules | N/A | `read` for these indices:

* `.preview.alerts-security.alerts-<space-ID>`
* `.internal.preview.alerts-security.alerts-<space-ID>-*`
| `All` for the `Security` feature | -| Manage rules | N/A | `manage`, `write`, `read`, and `view_index_metadata` for these system indices and data streams, where `<space-ID>` is the space name:

* `.alerts-security.alerts-<space-ID>`
* `.lists-<space-ID>`
* `.items-<space-ID>`
| `All` for the `Security` feature

**NOTE:** You need additional `Actions and Connectors` feature privileges (**Management → Actions and Connectors**) to manage rules with actions and connectors:

* To provide full access to rule actions and connectors, give your role `All` privileges. With `Read` privileges, you can edit rule actions, but have limited ability to manage connectors. For example, `Read` privileges allow you to add or remove an existing connector from a rule, but do not allow you to create a new connector.
* To import rules with actions, you need at least `Read` privileges for the `Actions and Connectors` feature. To overwrite or add new connectors, you need `All` privileges for the `Actions and Connectors` feature. To import rules without actions, you don’t need `Actions and Connectors` privileges.
| -| Manage alerts

**NOTE:** Allows you to manage alerts, but not modify rules.
| N/A | `maintenance`, `write`, `read`, and `view_index_metadata` for these system indices and data streams, where `<space-ID>` is the space name:

* `.alerts-security.alerts-<space-ID>`
* `.internal.alerts-security.alerts-<space-ID>-*`
* `.lists-<space-ID>`
* `.items-<space-ID>`
| `Read` for the `Security` feature | -| Create the `.lists` and `.items` data streams in your space

**NOTE:** To initiate the process that creates the data streams, you must visit the Rules page for each appropriate space.
| `manage` | `manage`, `write`, `read`, and `view_index_metadata` for these data streams, where `<space-ID>` is the space name:

* `.lists-<space-ID>`
* `.items-<space-ID>`
| `All` for the `Security` and `Saved Objects Management` features | - - -### Authorization [alerting-auth-model] - -Rules, including all background detection and the actions they generate, are authorized using an [API key](../../../deploy-manage/api-keys/elasticsearch-api-keys.md) associated with the last user to edit the rule. Upon creating or modifying a rule, an API key is generated for that user, capturing a snapshot of their privileges. The API key is then used to run all background tasks associated with the rule including detection checks and executing actions. - -::::{important} -If a rule requires certain privileges to run, such as index privileges, keep in mind that if a user without those privileges updates the rule, the rule will no longer function. - -:::: diff --git a/raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md b/raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md deleted file mode 100644 index 8265d28e0f..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -navigation_title: "Use Elastic prebuilt rules" ---- - -# Install and manage Elastic prebuilt rules [security-prebuilt-rules-management] - - -Follow these guidelines to start using the {{security-app}}'s [prebuilt rules](security-docs://reference/prebuilt-rules/index.md), keep them updated, and make sure they have the data needed to run successfully. - -* [Install and enable Elastic prebuilt rules](../../../solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#load-prebuilt-rules) -* [Prebuilt rule tags](../../../solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#prebuilt-rule-tags) -* [Select and duplicate all prebuilt rules](../../../solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#select-all-prebuilt-rules) -* [Update Elastic prebuilt rules](../../../solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#update-prebuilt-rules) -* [Confirm rule prerequisites](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) - -::::{note} -* Most prebuilt rules don’t start running by default. You can use the **Install and enable** option to start running rules as you install them, or first install the rules, then enable them manually. After installation, only a few prebuilt rules will be enabled by default, such as the Endpoint Security rule. -* You can’t modify most settings on Elastic prebuilt rules. You can only edit [rule actions](../../../solutions/security/detect-and-alert/create-detection-rule.md) and [add exceptions](../../../solutions/security/detect-and-alert/add-manage-exceptions.md). If you want to modify other settings on a prebuilt rule, you must first duplicate it, then make your changes to the duplicated rule. However, your customized rule is entirely separate from the original prebuilt rule, and will not get updates from Elastic if the prebuilt rule is updated. - -:::: - - - -## Install and enable Elastic prebuilt rules [load-prebuilt-rules] - -1. Go to **Rules** → **Detection rules (SIEM)**. The badge next to **Add Elastic rules** shows the number of prebuilt rules available for installation. - - :::{image} ../../../images/serverless--detections-prebuilt-rules-add-badge.png - :alt: The Add Elastic Rules page - :screenshot: - ::: - -2. Click **Add Elastic rules**. 
- - ::::{tip} - To examine the details of a rule before you install it, select the rule name. This opens the rule details flyout. - - :::: - -3. Do one of the following: - - * Install all available rules: Click **Install all** at the top of the page. (This doesn’t enable the rules; you still need to do that manually.) - * Install a single rule: In the rules table, either click **Install** to install a rule without enabling it, or click ![Vertical boxes button](../../../images/serverless-boxesVertical.svg "") → **Install and enable** to start running the rule once it’s installed. - * Install multiple rules: Select the rules, and then at the top of the page either click **Install *x* selected rule(s)** to install without enabling the rules, or click ![Vertical boxes button](../../../images/serverless-boxesVertical.svg "") → **Install and enable** to install and start running the rules. - - ::::{tip} - Use the search bar and **Tags** filter to find the rules you want to install. For example, filter by `OS: Windows` if your environment only includes Windows endpoints. For more on tag categories, refer to [Prebuilt rule tags](../../../solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#prebuilt-rule-tags). - - :::: - - - :::{image} ../../../images/serverless--detections-prebuilt-rules-add.png - :alt: The Add Elastic Rules page - :screenshot: - ::: - -4. For any rules you haven’t already enabled, go back to the **Rules** page, search or filter for the rules you want to run, and do either of the following: - - * Enable a single rule: Turn on the rule’s **Enabled** switch. - * Enable multiple rules: Select the rules, then click **Bulk actions** → **Enable**. - - -Once you enable a rule, it starts running on its configured schedule. To confirm that it’s running successfully, check its **Last response** status in the rules table, or open the rule’s details page and check the [**Execution results**](../../../solutions/security/detect-and-alert/monitor-rule-executions.md#rule-execution-logs) tab. - - -## Prebuilt rule tags [prebuilt-rule-tags] - -Each prebuilt rule includes several tags identifying the rule’s purpose, detection method, associated resources, and other information to help categorize your rules. These tags are category-value pairs; for example, `OS: Windows` indicates rules designed for Windows endpoints. Categories include: - -* `Data Source`: The application, cloud provider, data shipper, or Elastic integration providing data for the rule. -* `Domain`: A general category of data source types (such as cloud, endpoint, or network). -* `OS`: The host operating system, which could be considered another data source type. -* `Resources`: Additional rule resources such as investigation guides. -* `Rule Type`: Identifies if the rule depends on specialized resources (such as machine learning jobs or threat intelligence indicators), or if it’s a higher-order rule built from other rules' alerts. -* `Tactic`: MITRE ATT&CK tactics that the rule addresses. -* `Threat`: Specific threats the rule detects (such as Cobalt Strike or BPFDoor). -* `Use Case`: The type of activity the rule detects and its purpose. Use cases include: - - * `Active Directory Monitoring`: Detects changes related to Active Directory. - * `Asset Visibility`: Detects changes to specified asset types. - * `Configuration Audit`: Detects undesirable configuration changes. - * `Guided Onboarding`: Example rule, used for {{elastic-sec}}'s guided onboarding tour. 
- * `Identity and Access Audit`: Detects activity related to identity and access management (IAM). - * `Log Auditing`: Detects activity on log configurations or storage. - * `Network Security Monitoring`: Detects network security configuration activity. - * `Threat Detection`: Detects threats. - * `Vulnerability`: Detects exploitation of specific vulnerabilities. - - - -## Select and duplicate all prebuilt rules [select-all-prebuilt-rules] - -1. Go to **Rules** → **Detection rules (SIEM)**, then select the **Elastic rules** filter. -2. Click **Select all *x* rules** above the rules table. -3. Click **Bulk actions** → **Duplicate**. -4. Select whether to duplicate the rules' exceptions, then click **Duplicate**. - -You can then modify the duplicated rules and, if required, delete the prebuilt ones. However, your customized rules are entirely separate from the original prebuilt rules, and will not get updates from Elastic if the prebuilt rules are updated. - - -## Update Elastic prebuilt rules [update-prebuilt-rules] - -Elastic regularly updates prebuilt rules to optimize their performance and ensure they detect the latest threats and techniques. When updated versions are available for your installed prebuilt rules, the **Rule Updates** tab appears on the **Rules** page, allowing you to update your installed rules with the latest versions. - -1. Go to **Rules** → **Detection rules (SIEM)**, then select the **Rule Updates** tab. - - ::::{note} - The **Rule Updates** tab doesn’t appear if all your installed prebuilt rules are up to date. - - :::: - - - :::{image} ../../../images/serverless--detections-prebuilt-rules-update.png - :alt: The Rule Updates tab on the Rules page - :screenshot: - ::: - -2. (Optional) To examine the details of a rule’s latest version before you update it, select the rule name. This opens the rule details flyout. - - Select the **Updates** tab to view rule changes field by field, or the **JSON view*** tab to view changes for the entire rule in JSON format. Both tabs display side-by-side comparisons of the ***Current rule** (what you currently have installed) and the **Elastic update** version (what you can choose to install). Deleted characters are highlighted in red; added characters are highlighted in green. - - To accept the changes and install the updated version, select **Update**. - - :::{image} ../../../images/serverless-prebuilt-rules-update-diff.png - :alt: Prebuilt rule comparison - :screenshot: - ::: - -3. Do one of the following to update prebuilt rules on the **Rules** page: - - * Update all available rules: Click **Update all**. - * Update a single rule: Click **Update rule** for that rule. - * Update multiple rules: Select the rules and click **Update *x* selected rule(s)**. - - ::::{tip} - Use the search bar and **Tags** filter to find the rules you want to update. For example, filter by `OS: Windows` if your environment only includes Windows endpoints. For more on tag categories, refer to [Prebuilt rule tags](../../../solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#prebuilt-rule-tags). 
- - :::: diff --git a/raw-migrated-files/docs-content/serverless/security-query-alert-indices.md b/raw-migrated-files/docs-content/serverless/security-query-alert-indices.md deleted file mode 100644 index 777ceacdde..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-query-alert-indices.md +++ /dev/null @@ -1,14 +0,0 @@ -# Query alert indices [security-query-alert-indices] - -This page explains how you should query alert indices, for example, when building rule queries, custom dashboards, or visualizations. For more information about alert event field definitions, review the [Alert schema](/reference/security/fields-and-object-schemas/alert-schema.md). - - -## Alert index aliases [security-query-alert-indices-alert-index-aliases] - -We recommend querying the `.alerts-security.alerts-` index alias. You should not include a dash or wildcard after the space ID. To query all spaces, use the following syntax: `.alerts-security.alerts-*`. - - -## Alert indices [security-query-alert-indices-alert-indices] - -For additional context, alert events are stored in hidden {{es}} indices. We do not recommend querying them directly. The naming convention for these indices and their aliases is `.internal.alerts-security.alerts--NNNNNN`, where `NNNNNN` is a number that increases over time, starting from 000001. - diff --git a/raw-migrated-files/docs-content/serverless/security-reduce-notifications-alerts.md b/raw-migrated-files/docs-content/serverless/security-reduce-notifications-alerts.md deleted file mode 100644 index aab8f5f5b4..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-reduce-notifications-alerts.md +++ /dev/null @@ -1,10 +0,0 @@ -# Reduce notifications and alerts [security-reduce-notifications-alerts] - -{{elastic-sec}} offers several features to help reduce the number of notifications and alerts generated by your detection rules. This table provides a general comparison of these features, with links for more details: - -| | | -| --- | --- | -| [Rule action snoozing](../../../solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) | **Stops a specific rule’s notification actions from running**.

Use to avoid unnecessary notifications from a specific rule. The rule continues to run and generate alerts during the snooze period, but its [notification actions](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-response-action) don’t run.
| -| [Maintenance window](../../../explore-analyze/alerts-cases/alerts/maintenance-windows.md) | **Prevents all rules' notification actions from running**.

Use to avoid false alarms and unnecessary notifications during planned outages. All rules continue to run and generate alerts during the maintenance window, but their [notification actions](../../../solutions/security/detect-and-alert/create-detection-rule.md) don’t run.
| -| [Alert suppression](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) | **Reduces repeated or duplicate alerts**.

Use to reduce the number of alerts created when a rule meets its criteria repeatedly. Duplicate qualifying events are grouped, and only one alert is created for each group.
| -| [Rule exception](../../../solutions/security/detect-and-alert/rule-exceptions.md) | **Prevents a rule from creating alerts under specific conditions**.

Use to reduce false positive alerts by preventing trusted processes and network activity from generating unnecessary alerts. You can configure an exception to be used by a single rule or shared among multiple rules, but exceptions typically don’t affect *all* rules.
| diff --git a/raw-migrated-files/docs-content/serverless/security-rules-coverage.md b/raw-migrated-files/docs-content/serverless/security-rules-coverage.md deleted file mode 100644 index 6ed97c1dd0..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-rules-coverage.md +++ /dev/null @@ -1,55 +0,0 @@ -# MITRE ATT&CK® coverage [security-rules-coverage] - -The **MITRE ATT&CK® coverage** page (**Rules** → **MITRE ATT&CK® Coverage**) shows which [MITRE ATT&CK®](https://attack.mitre.org) adversary tactics and techniques are covered by your installed and enabled detection rules. This includes both Elastic prebuilt rules and custom rules. - -Mirroring the MITRE ATT&CK® framework, columns represent major tactics, and cells within each column represent a tactic’s related techniques. Cells are darker when a technique has more rules matching the current filters, as indicated in the **Legend** at the top. - -::::{note} -This page only includes the detection rules you currently have installed, and only rules that are mapped to MITRE ATT&CK®. The coverage page maps detections to the following [MITRE ATT&CK® version](https://attack.mitre.org/resources/updates/updates-april-2024) used by {{elastic-sec}}: `v15.1`. Elastic prebuilt rules that aren’t installed and custom rules that are either unmapped or mapped to a deprecated tactic or technique will not appear on the coverage map. - -You can map custom rules to tactics in **Advanced settings** when creating or editing a rule. - -:::: - - -:::{image} ../../../images/serverless--detections-rules-coverage.png -:alt: MITRE ATT&CK® coverage page -:screenshot: -::: - - -## Filter rules [security-rules-coverage-filter-rules] - -Use the drop-down filters at the top of the page to control which of your installed detection rules are included in calculating coverage. - -* **Installed rule status**: Select to include **Enabled rules**, **Disabled rules**, or both. -* **Installed rule type**: Select to include **Elastic rules** (prebuilt rules), **Custom rules** (user-created rules), or both. - -You can also search for a tactic or technique name, technique number, or rule name in the search bar. The search bar acts as a filter for the coverage grid: only rules matching the search term will be included. - -::::{note} -Searches for tactics and techniques must match exactly, are case sensitive, and do *not* support wildcards. - -:::: - - - -## Expand and collapse cells [security-rules-coverage-expand-and-collapse-cells] - -Click **Collapse cells** or **Expand cells** to change how much information the cells display. Cells always include the technique’s name and the number of sub-techniques covered by enabled rules. Expand the cells to also display counts of disabled and enabled rules for each technique. - -::::{note} -The counts inside cells are affected by how you filter the page. For example, if you filter the **Installed rule status** to only include **Enabled rules**, then all disabled rule counts will be 0 because disabled rules are filtered out. - -:::: - - - -## Enable rules [security-rules-coverage-enable-rules] - -You can quickly enable all the rules for a specific technique that you’ve installed, but not enabled. Click the technique’s cell, then click **Enable all disabled** in the popup that appears. 
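For custom rules, the tactic and technique information this page reads comes from the rule’s MITRE ATT&CK® mapping, which corresponds to the `threat` field on rules created through the detection rules API. As a hedged sketch, one mapping entry might look like the following, using Execution (TA0002) and Command and Scripting Interpreter (T1059) purely as example values:

```json
{
  "threat": [
    {
      "framework": "MITRE ATT&CK",
      "tactic": {
        "id": "TA0002",
        "name": "Execution",
        "reference": "https://attack.mitre.org/tactics/TA0002/"
      },
      "technique": [
        {
          "id": "T1059",
          "name": "Command and Scripting Interpreter",
          "reference": "https://attack.mitre.org/techniques/T1059/"
        }
      ]
    }
  ]
}
```

Rules without such an entry, or whose entry points at a deprecated tactic or technique, won’t appear on the coverage map.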
- - -## Learn more about techniques and sub-techniques [security-rules-coverage-learn-more-about-techniques-and-sub-techniques] - -For more information on a specific technique and its sub-techniques, click the technique’s cell, then click the title in the popup that appears. This opens a new browser tab with the technique’s MITRE ATT&CK® documentation. diff --git a/raw-migrated-files/docs-content/serverless/security-rules-create.md b/raw-migrated-files/docs-content/serverless/security-rules-create.md deleted file mode 100644 index 8a802820b6..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-rules-create.md +++ /dev/null @@ -1,861 +0,0 @@ -# Create a detection rule [security-rules-create] - -To create a new detection rule, follow these steps: - -1. Define the [**rule type**](../../../solutions/security/detect-and-alert/about-detection-rules.md#rule-types). The configuration for this step varies depending on the rule type. -2. Configure basic rule settings. -3. Configure advanced rule settings (optional). -4. Set the rule’s schedule. -5. Set up rule actions (optional). -6. Set up response actions (optional). - -::::{admonition} Requirements -:class: note - -* To create detection rules, you must have access to data views, which requires the appropriate user role. -* You’ll also need permissions to enable and view detections, manage rules, manage alerts, and preview rules. These permissions depend on the user role. Refer to [Detections requirements](../../../solutions/security/detect-and-alert/detections-requirements.md) for more information. - -:::: - - -::::{tip} -At any step, you can [preview the rule](../../../solutions/security/detect-and-alert/create-detection-rule.md#preview-rules) before saving it to see what kind of results you can expect. - -:::: - - - -## Create a custom query rule [create-custom-rule] - -1. Go to **Rules** → **Detection rules (SIEM)*** → ***Create new rule**. The **Create new rule** page displays. -2. To create a rule based on a KQL or Lucene query, select **Custom query**, then: - - 1. Define which {{es}} indices or data view the rule searches for alerts. - 2. Use the filter and query fields to create the criteria used for detecting alerts. - - The following example (based on the prebuilt rule Volume Shadow Copy Deleted or Resized via VssAdmin) detects when the `vssadmin delete shadows` Windows command is executed: - - * **Index patterns**: `winlogbeat-*` - - Winlogbeat ships Windows event logs to {{elastic-sec}}. - - * **Custom query**: `event.action:"Process Create (rule: ProcessCreate)" and process.name:"vssadmin.exe" and process.args:("delete" and "shadows")` - - Searches the `winlogbeat-*` indices for `vssadmin.exe` executions with the `delete` and `shadow` arguments, which are used to delete a volume’s shadow copies. - - :::{image} ../../../images/serverless--detections-rule-query-example.png - :alt: Rule query example - :screenshot: - ::: - - 3. You can use saved queries and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - When you use a saved query, the **Load saved query "*query name*" dynamically on each rule execution** check box appears: - - * Select this to use the saved query every time the rule runs. This links the rule to the saved query, and you won’t be able to modify the rule’s **Custom query** field or filters because the rule will only use settings from the saved query. To make changes, modify the saved query itself. 
- * Deselect this to load the saved query as a one-time way of populating the rule’s **Custom query** field and filters. This copies the settings from the saved query to the rule, so you can then further adjust the rule’s query and filters as needed. If the saved query is later changed, the rule will not inherit those changes. - -3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. - 2. Enter the field’s data type. - -5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -6. Click **Continue** to [configure basic rule settings](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - -## Create a machine learning rule [create-ml-rule] - -::::{important} -To create or edit {{ml}} rules, you need an appropriate user role. Additionally, the selected {{ml}} job must be running for the rule to function correctly. - -:::: - - -1. Go to **Rules** → **Detection rules (SIEM)*** → ***Create new rule**. The **Create new rule** page displays. -2. To create a rule based on a {{ml}} anomaly threshold, select **Machine Learning**, then select: - - 1. The required {{ml}} jobs. - - ::::{note} - If a required job isn’t currently running, it will automatically start when you finish configuring and enable the rule. - - :::: - - 2. The anomaly score threshold above which alerts are created. - -3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. - - ::::{note} - Because {{ml}} rules generate alerts from anomalies, which don’t contain source event fields, you can only use anomaly fields when configuring alert suppression. - - :::: - -4. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). 
This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -5. Click **Continue** to [configure basic rule settings](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - -## Create a threshold rule [create-threshold-rule] - -1. Go to **Rules** → **Detection rules (SIEM)*** → ***Create new rule**. The **Create new rule** page displays. -2. To create a rule based on a source event field threshold, select **Threshold**, then: - - 1. Define which {{es}} indices the rule analyzes for alerts. - 2. Use the filter and query fields to create the criteria used for detecting alerts. - - ::::{note} - You can use saved queries and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - :::: - - 3. Use the **Group by** and **Threshold** fields to determine which source event field is used as a threshold and the threshold’s value. - 4. Use the **Count** field to limit alerts by cardinality of a certain field. - - For example, if **Group by** is `source.ip, destination.ip` and its **Threshold** is `10`, an alert is generated for every pair of source and destination IP addresses that appear in at least 10 of the rule’s search results. - - You can also leave the **Group by** field undefined. The rule then creates an alert when the number of search results is equal to or greater than the threshold value. If you set **Count** to limit the results by `process.name` >= 2, an alert will only be generated for source/destination IP pairs that appear with at least 2 unique process names across all events. - - ::::{important} - Alerts created by threshold rules are synthetic alerts that do not resemble the source documents. The alert itself only contains data about the fields that were aggregated over (the **Group by** fields). Other fields are omitted, because they can vary across all source documents that were counted toward the threshold. Additionally, you can reference the actual count of documents that exceeded the threshold from the `kibana.alert.threshold_result.count` field. - - :::: - -3. (Optional) Select **Suppress alerts** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. - 2. Enter the field’s data type. - -5. 
(Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -6. Click **Continue** to [configure basic rule settings](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - -## Create an event correlation rule [create-eql-rule] - -1. Go to **Rules** → **Detection rules (SIEM)*** → ***Create new rule**. The **Create new rule** page displays. -2. To create an event correlation rule using EQL, select **Event Correlation**, then: - - 1. Define which {{es}} indices or data view the rule searches when querying for events. - 2. Write an [EQL query](elasticsearch://reference/query-languages/eql-syntax.md) that searches for matching events or a series of matching events. - - ::::{tip} - To find events that are missing in a sequence, use the [missing events](elasticsearch://reference/query-languages/eql-syntax.md#eql-missing-events) syntax. - - :::: - - - For example, the following rule detects when `msxsl.exe` makes an outbound network connection: - - * **Index patterns**: `winlogbeat-*` - - Winlogbeat ships Windows events to {{elastic-sec}}. - - * **EQL query**: - - ```eql - sequence by process.entity_id - [process - where event.type in ("start", "process_started") - and process.name == "msxsl.exe"] - [network - where event.type == "connection" - and process.name == "msxsl.exe" - and network.direction == "outgoing"] - ``` - - Searches the `winlogbeat-*` indices for sequences of a `msxsl.exe` process start event followed by an outbound network connection event that was started by the `msxsl.exe` process. - - :::{image} ../../../images/serverless--detections-eql-rule-query-example.png - :alt: detections eql rule query example - :screenshot: - ::: - - ::::{note} - For sequence events, the {{security-app}} generates a single alert when all events listed in the sequence are detected. To see the matched sequence events in more detail, you can view the alert in the Timeline, and, if all events came from the same process, open the alert in Analyze Event view. - - :::: - -3. (Optional) Click the EQL settings icon (![EQL settings](../../../images/serverless-controlsVertical.svg "")) to configure additional fields used by [EQL search](../../../explore-analyze/query-filter/languages/eql.md#specify-a-timestamp-or-event-category-field): - - * **Event category field**: Contains the event classification, such as `process`, `file`, or `network`. This field is typically mapped as a field type in the [keyword family](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md). Defaults to the `event.category` ECS field. 
- * **Tiebreaker field**: Sets a secondary field for sorting events (in ascending, lexicographic order) if they have the same timestamp. - * **Timestamp field**: Contains the event timestamp used for sorting a sequence of events. This is different from the **Timestamp override** advanced setting, which is used for querying events within a range. Defaults to the `@timestamp` ECS field. - -4. [preview] (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -5. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. - 2. Enter the field’s data type. - -6. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -7. Click **Continue** to [configure basic rule settings](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - -## Create an indicator match rule [create-indicator-rule] - -::::{note} -{{elastic-sec}} provides limited support for indicator match rules. See [Limited support for indicator match rules](../../../solutions/security/detect-and-alert.md#support-indicator-rules) for more information. - -:::: - - -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -2. To create a rule that searches for events whose specified field value matches the specified indicator field value in the indicator index patterns, select **Indicator Match**, then fill in the following fields: - - 1. **Source**: The individual index patterns or data view that specifies what data to search. - 2. **Custom query**: The query and filters used to retrieve the required results from the {{elastic-sec}} event indices. For example, if you want to match documents that only contain a `destination.ip` address field, add `destination.ip : *`. - - ::::{tip} - If you want the rule to check every field in the indices, use this wildcard expression: `*:*`. - - :::: - - - ::::{note} - You can use saved queries and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - :::: - - 3. **Indicator index patterns**: The indicator index patterns containing field values for which you want to generate alerts. 
This field is automatically populated with indices specified in the `securitySolution:defaultThreatIndex` advanced setting. For more information, see [Update default Elastic Security threat intelligence indices](../../../solutions/security/get-started/configure-advanced-settings.md#update-threat-intel-indices). - - ::::{important} - Data in indicator indices must be [ECS compatible](/reference/security/fields-and-object-schemas/siem-field-reference.md), and so it must contain a `@timestamp` field. - - :::: - - 4. **Indicator index query**: The query and filters used to filter the fields from the indicator index patterns. The default query `@timestamp > "now-30d/d"` searches specified indicator indices for indicators ingested during the past 30 days and rounds the start time down to the nearest day (resolves to UTC `00:00:00`). - 5. **Indicator mapping**: Compares the values of the specified event and indicator fields, and generates an alert if the values are identical. - - ::::{note} - Only single-value fields are supported. - - :::: - - - To define which field values are compared from the indices, add the following: - - * **Field**: The field used for comparing values in the {{elastic-sec}} event indices. - * **Indicator index field**: The field used for comparing values in the indicator indices. - - 6. You can add `AND` and `OR` clauses to define when alerts are generated. - - For example, to create a rule that generates alerts when `host.name` **and** `destination.ip` field values in the `logs-*` or `packetbeat-*` {{elastic-sec}} indices are identical to the corresponding field values in the `mock-threat-list` indicator index, enter the rule parameters seen in the following image: - - :::{image} ../../../images/serverless--detections-indicator-rule-example.png - :alt: Indicator match rule settings - :screenshot: - ::: - - ::::{tip} - Before you create rules, create [Timeline templates](../../../solutions/security/investigate/timeline-templates.md) so you can select them under **Timeline template** at the end of the **Define rule** section. When alerts generated by the rule are investigated in the Timeline, Timeline query values are replaced with their corresponding alert field values. - - :::: - -3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. - 2. Enter the field’s data type. - -5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. 
Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -6. Click **Continue** to [configure basic rule settings](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - -### Use value lists with indicator match rules [indicator-value-lists] - -While there are numerous ways you can add data into indicator indices, you can use value lists as the indicator match index in an indicator match rule. Take the following scenario, for example: - -You uploaded a value list of known ransomware domains, and you want to be notified if any of those domains matches a value contained in a domain field in your security event index pattern. - -1. Upload a value list of indicators. -2. Create an indicator match rule and fill in the following fields: - - 1. **Index patterns**: The Elastic Security event indices on which the rule runs. - 2. **Custom query**: The query and filters used to retrieve the required results from the Elastic Security event indices (e.g., `host.domain :*`). - 3. **Indicator index patterns**: Value lists are stored in a hidden index called `.items-`. Enter the name of the {{kib}} space in which this rule will run in this field. - 4. **Indicator index query**: Enter the value `list_id :`, followed by the name of the value list you want to use as your indicator index (uploaded in Step 1 above). - 5. **Indicator mapping** - - * **Field**: Enter the field from the Elastic Security event indices to be used for comparing values. - * **Indicator index field**: Enter the type of value list you created (i.e., `keyword`, `text`, or `IP`). - - ::::{tip} - If you don’t remember this information, go to **Rules** → **Detection rules (SIEM)** → **Manage value lists**. Locate the appropriate value list and note the field in the corresponding `Type` column. (Examples include keyword, text, and IP.) - - :::: - - -:::{image} ../../../images/serverless--detections-indicator_value_list.png -:alt: detections indicator value list -:screenshot: -::: - - -## Create a new terms rule [create-new-terms-rule] - -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page displays. -2. To create a rule that searches for each new term detected in source documents, select **New Terms**, then: - - 1. Specify what data to search by entering individual {{es}} index patterns or selecting an existing data view. - 2. Use the filter and query fields to create the criteria used for detecting alerts. - - ::::{note} - You can use saved queries and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. - - :::: - - 3. Use the **Fields** menu to select a field to check for new terms. You can also select up to three fields to detect a combination of new terms (for example, a `host.ip` and `host.id` that have never been observed together before). - - ::::{important} - When checking multiple fields, each unique combination of values from those fields is evaluated separately. For example, a document with `host.name: ["host-1", "host-2", "host-3"]` and `user.name: ["user-1", "user-2", "user-3"]` has 9 (3x3) unique combinations of `host.name` and `user.name`. 
A document with 11 values in `host.name` and 10 values in `user.name` has 110 (11x10) unique combinations. The new terms rule only evaluates 100 unique combinations per document, so selecting fields with large arrays of values might cause incorrect results. - - :::: - - 4. Use the **History Window Size** menu to specify the time range to search in minutes, hours, or days to determine if a term is new. The history window size must be larger than the rule interval plus additional look-back time, because the rule will look for terms where the only time(s) the term appears within the history window is *also* within the rule interval and additional look-back time. - - For example, if a rule has an interval of 5 minutes, no additional look-back time, and a history window size of 7 days, a term will be considered new only if the time it appears within the last 7 days is also within the last 5 minutes. Configure the rule interval and additional look-back time when you [set the rule’s schedule](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule). - -3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. - 2. Enter the field’s data type. - -5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -6. Click **Continue** to [configure basic rule settings](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - -## Create an {{esql}} rule [create-esql-rule] - -Use [{{esql}}](../../../explore-analyze/query-filter/languages/esql.md) to query your source events and aggregate event data. Query results are returned in a table with rows and columns. Each row becomes an alert. - -To create an {{esql}} rule: - -1. Go to **Rules** → **Detection rules (SIEM)** → **Create new rule**. The **Create new rule** page appears. -2. Select **{{esql}}**, then write a query. 
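-
-    For example, here is a minimal sketch of a non-aggregating query that returns recent failed authentications. The `logs-*` index pattern and the field values are illustrative, assuming ECS-mapped events; the `METADATA` fields enable alert deduplication, as described in the sections below:
-
-    ```esql
-    FROM logs-* METADATA _id, _index, _version
-    | WHERE event.category == "authentication" AND event.outcome == "failure"
-    | LIMIT 100
-    ```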
- - ::::{note} - Refer to the sections below to learn more about [{{esql}} query types](../../../solutions/security/detect-and-alert/create-detection-rule.md#esql-rule-query-types), [query design considerations](../../../solutions/security/detect-and-alert/create-detection-rule.md#esql-query-design), and [rule limitations](../../../solutions/security/detect-and-alert/create-detection-rule.md#esql-rule-limitations). - - :::: - - - ::::{tip} - Click the help icon (![Click the ES|QL help icon](../../../images/serverless-iInCircle.svg "")) to open the in-product reference documentation for all {{esql}} commands and functions. - - :::: - -3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. - - 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. - 2. Enter the field’s data type. - -5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -6. Click **Continue** to [configure basic rule settings](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - -### {{esql}} query types [esql-rule-query-types] - -{{esql}} rule queries are loosely categorized into two types: aggregating and non-aggregating. - - -#### Aggregating query [esql-agg-query] - -Aggregating queries use [`STATS...BY`](elasticsearch://reference/query-languages/esql/esql-functions-operators.md#esql-agg-functions) functions to aggregate source event data. Alerts generated by a rule with an aggregating query only contain the fields that the {{esql}} query returns and any new fields that the query creates. - -::::{note} -A *new field* is a field that doesn’t exist in the query’s source index and is instead created when the rule runs. You can access new fields in the details of any alerts that are generated by the rule. For example, if you use the `STATS...BY` function to create a column with aggregated values, the column is created when the rule runs and is added as a new field to any alerts that are generated by the rule. 
- -:::: - - -Here is an example aggregating query: - -```esql -FROM logs-* -| STATS host_count = COUNT(host.name) BY host.name -| SORT host_count DESC -| WHERE host_count > 20 -``` - -* This query starts by searching logs from indices that match the pattern `logs-*`. -* The query then aggregates the count of events by `host.name`. -* Next, it sorts the result by `host_count` in descending order. -* Then, it keeps only the rows whose `host_count` value is greater than 20, that is, hosts that appear in more than 20 events during the specified rule interval. - -::::{note} -Rules that use aggregating queries might create duplicate alerts. This can happen when events that occur in the additional look-back time are aggregated both in the current rule execution and in a previous rule execution. - -:::: - - - -#### Non-aggregating query [esql-non-agg-query] - -Non-aggregating queries don’t use `STATS...BY` functions and don’t aggregate source event data. Alerts generated by a non-aggregating query contain source event fields that the query returns, new fields the query creates, and all other fields in the source event document. - -::::{note} -A *new field* is a field that doesn’t exist in the query’s source index and is instead created when the rule runs. You can access new fields in the details of any alerts that are generated by the rule. For example, if you use the [`EVAL`](elasticsearch://reference/query-languages/esql/esql-commands.md#esql-eval) command to append new columns with calculated values, the columns are created when the rule runs and are added as new fields to any alerts generated by the rule. - -:::: - - -Here is an example non-aggregating query: - -```esql -FROM logs-* METADATA _id, _index, _version -| WHERE event.category == "process" AND event.id == "8a4f500d" -| LIMIT 10 -``` - -* This query starts by querying logs from indices that match the pattern `logs-*`. The `METADATA _id, _index, _version` operator allows [alert deduplication](../../../solutions/security/detect-and-alert/create-detection-rule.md#esql-non-agg-query-dedupe). -* Next, the query filters events where the `event.category` is a process and the `event.id` is `8a4f500d`. -* Then, it limits the output to the top 10 results. - - -#### Turn on alert deduplication for rules using non-aggregating queries [esql-non-agg-query-dedupe] - -To deduplicate alerts, a query needs access to the `_id`, `_index`, and `_version` metadata fields of the queried source event documents. You can allow this by adding the `METADATA _id, _index, _version` operator after the `FROM` source command, for example: - -```esql -FROM logs-* METADATA _id, _index, _version -| WHERE event.category == "process" AND event.id == "8a4f500d" -| LIMIT 10 -``` - -When those metadata fields are provided, unique alert IDs are created for each alert generated by the query. - -When developing the query, make sure you don’t [`DROP`](elasticsearch://reference/query-languages/esql/esql-commands.md#esql-drop) or filter out the `_id`, `_index`, or `_version` metadata fields. - -Here is an example of a query that fails to deduplicate alerts. 
It uses the `DROP` command to omit the `_id` property from the results table: - -```esql -FROM logs-* METADATA _id, _index, _version -| WHERE event.category == "process" AND event.id == "8a4f500d" -| DROP _id -| LIMIT 10 -``` - -Here is another example of an invalid query that uses the `KEEP` command to only return `event.*` fields in the results table: - -```esql -FROM logs-* METADATA _id, _index, _version -| WHERE event.category == "process" AND event.id == "8a4f500d" -| KEEP event.* -| LIMIT 10 -``` - - -### Query design considerations [esql-query-design] - -When writing your query, consider the following: - -* The [`LIMIT`](elasticsearch://reference/query-languages/esql/esql-commands.md#esql-limit) command specifies the maximum number of rows an {{esql}} query returns and the maximum number of alerts created per rule run. Similarly, a detection rule’s **Max alerts per run** setting specifies the maximum number of alerts it can create every time it runs. - - If the `LIMIT` value and **Max alerts per run** value are different, the rule uses the lower value to determine the maximum number of alerts the rule generates. - -* When writing an aggregating query, use the [`STATS...BY`](elasticsearch://reference/query-languages/esql/esql-commands.md#esql-stats-by) command with fields that you want to search and filter for after alerts are created. For example, using the `host.name`, `user.name`, `process.name` fields with the `BY` operator of the `STATS...BY` command returns these fields in alert documents, and allows you to search and filter for them from the Alerts table. -* When configuring alert suppression on a non-aggregating query, we recommend sorting results by ascending `@timestamp` order. Doing so ensures that alerts are properly suppressed, especially if the number of alerts generated is higher than the **Max alerts per run** value. - - -### {{esql}} rule limitations [esql-rule-limitations] - -If your {{esql}} query creates new fields that aren’t part of the ECS schema, they aren’t mapped to the alerts index, so you can’t search for or filter them in the Alerts table. As a workaround, create [runtime fields](../../../solutions/security/get-started/create-runtime-fields-in-elastic-security.md). - - -### Highlight fields returned by the {{esql}} rule query [custom-highlighted-esql-fields] - -When configuring an {{esql}} rule’s **[Custom highlighted fields](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params)**, you can specify any fields that the rule’s aggregating or non-aggregating query returns. This can help ensure that returned fields are visible in the alert details flyout while you’re investigating alerts. - - -## Configure basic rule settings [rule-ui-basic-params] - -1. In the **About rule** pane, fill in the following fields: - - 1. **Name**: The rule’s name. - 2. **Description**: A description of what the rule does. - 3. **Default severity**: Select the severity level of alerts created by the rule: - - * **Low**: Alerts that are of interest but generally are not considered to be security incidents. Sometimes a combination of low severity alerts can indicate suspicious activity. - * **Medium**: Alerts that require investigation. - * **High**: Alerts that require an immediate investigation. - * **Critical**: Alerts that indicate it is highly likely a security incident has occurred. - - 4. **Severity override** (optional): Select to use source event values to override the **Default severity** in generated alerts. 
When selected, a UI component is displayed where you can map the source event field values to severity levels. The following example shows how to map severity levels to `host.name` values: - - :::{image} ../../../images/serverless--detections-severity-mapping-ui.png - :alt: detections severity mapping ui - :screenshot: - ::: - - ::::{note} - For threshold rules, not all source event values can be used for overrides; only the fields that were aggregated over (the `Group by` fields) will contain data. Also note that overrides are not supported for event correlation rules. - - :::: - - 5. **Default risk score**: A numerical value between 0 and 100 that indicates the risk of events detected by the rule. This setting changes to a default value when you change the **Severity** level, but you can adjust the risk score as needed. General guidelines are: - - * `0` - `21` represents low severity. - * `22` - `47` represents medium severity. - * `48` - `73` represents high severity. - * `74` - `100` represents critical severity. - - 6. **Risk score override** (optional): Select to use a source event value to override the **Default risk score** in generated alerts. When selected, a UI component is displayed to select the source field used for the risk score. For example, if you want to use the source event’s risk score in alerts: - - :::{image} ../../../images/serverless--detections-risk-source-field-ui.png - :alt: detections risk source field ui - :screenshot: - ::: - - ::::{note} - For threshold rules, not all source event values can be used for overrides; only the fields that were aggregated over (the `Group by` fields) will contain data. - - :::: - - 7. **Tags** (optional): Words and phrases used to categorize, filter, and search the rule. - -2. Continue with **one** of the following: - - * [Configure advanced rule settings (optional)](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params) - * [Set the rule’s schedule](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule) - - - -## Configure advanced rule settings (optional) [rule-ui-advanced-params] - -1. Click **Advanced settings** and fill in the following fields where applicable: - - 1. **Reference URLs** (optional): References to information that is relevant to the rule. For example, links to background information. - 2. **False positive examples** (optional): List of common scenarios that may produce false-positive alerts. - 3. **MITRE ATT&CK™ threats** (optional): Add relevant [MITRE](https://attack.mitre.org/) framework tactics, techniques, and subtechniques. - 4. **Custom highlighted fields** (optional): Specify highlighted fields for unique alert investigation flows. You can choose any fields that are available in the index patterns or data view you selected for the rule’s data source. - - After you create the rule, you can find all custom highlighted fields in the About section of the rule details page. If the rule has alerts, you can find custom highlighted fields in the [Highlighted fields](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#investigation-section) section of the alert details flyout. - - 5. **Setup guide** (optional): Instructions on rule prerequisites such as required integrations, configuration steps, and anything else needed for the rule to work correctly. - 6. **Investigation guide** (optional): Information for analysts investigating alerts created by the rule. 
You can also add action buttons to [run Osquery](../../../solutions/security/investigate/run-osquery-from-investigation-guides.md) or [launch Timeline investigations](../../../solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md) using alert data. - 7. **Author** (optional): The rule’s authors. - 8. **License** (optional): The rule’s license. - 9. **Elastic endpoint exceptions** (optional): Adds all [{{elastic-endpoint}} exceptions](../../../solutions/security/detect-and-alert/add-manage-exceptions.md#endpoint-rule-exceptions) to this rule. - - ::::{note} - If you select this option, you can add {{elastic-endpoint}} exceptions on the Rule details page. Additionally, all future exceptions added to [endpoint protection rules](../../../solutions/security/manage-elastic-defend/endpoint-protection-rules.md) will also affect this rule. - - :::: - - 10. **Building block** (optional): Select to create a building-block rule. By default, alerts generated from a building-block rule are not displayed in the UI. See [Use building block rules](../../../solutions/security/detect-and-alert/about-building-block-rules.md) for more information. - 11. **Max alerts per run** (optional): Specify the maximum number of alerts the rule can create each time it runs. Default is 100. - 12. **Indicator prefix override**: Define the location of indicator data within the structure of indicator documents. When the indicator match rule executes, it queries specified indicator indices and references this setting to locate fields with indicator data. This data is used to enrich indicator match alerts with metadata about matched threat indicators. The default value for this setting is `threat.indicator`. - - ::::{important} - If your threat indicator data is at a different location, update this setting accordingly to ensure alert enrichment can still be performed. - - :::: - - 13. **Rule name override** (optional): Select a source event field to use as the rule name in the UI (Alerts table). This is useful for exposing, at a glance, more information about an alert. For example, if the rule generates alerts from Suricata, selecting `event.action` lets you see what action (Suricata category) caused the event directly in the Alerts table. - - ::::{note} - For threshold rules, not all source event values can be used for overrides; only the fields that were aggregated over (the `Group by` fields) will contain data. - - :::: - - 14. **Timestamp override** (optional): Select a source event timestamp field. When selected, the rule’s query uses the selected field, instead of the default `@timestamp` field, to search for alerts. This can help reduce missing alerts due to network or server outages. Specifically, if your ingest pipeline adds a timestamp when events are sent to {{es}}, this can prevent missing alerts from ingestion delays. - - If the selected field is unavailable, the rule query will use the `@timestamp` field instead. If you don’t want to use the `@timestamp` field because you know your data source has an inaccurate `@timestamp` value, we recommend selecting the **Do not use @timestamp as a fallback timestamp field** option instead. This will ensure that the rule query ignores the `@timestamp` field entirely. 
- - ::::{tip} - The [Microsoft](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-microsoft.md) and [Google Workspace](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-google_workspace.md) {{filebeat}} modules have an `event.ingested` timestamp field that can be used instead of the default `@timestamp` field. - - :::: - -2. Click **Continue**. The **Schedule rule** pane is displayed. - - :::{image} ../../../images/serverless--detections-schedule-rule.png - :alt: detections schedule rule - :screenshot: - ::: - -3. Continue with [setting the rule’s schedule](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule). - - -## Set the rule’s schedule [rule-schedule] - -1. Select how often the rule runs. -2. Optionally, add `Additional look-back time` to the rule. When defined, the rule searches indices over its interval plus the additional look-back time. - - For example, if you set a rule to run every 5 minutes with an additional look-back time of 1 minute, the rule runs every 5 minutes but analyzes the documents added to indices during the last 6 minutes. - - ::::{important} - It is recommended to set the `Additional look-back time` to at least 1 minute. This ensures there are no missing alerts when a rule does not run exactly at its scheduled time. - - {{elastic-sec}} prevents duplication. Any duplicate alerts that are discovered during the `Additional look-back time` are *not* created. - - :::: - -3. Click **Continue**. The **Rule actions** pane is displayed. -4. Do either of the following: - - * Continue on to [setting up alert notifications](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-notifications) and [Response Actions](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-response-action) (optional). - * Create the rule (with or without activation). - - - -## Set up rule actions (optional) [rule-notifications] - -Use actions to set up notifications sent via other systems when alerts are generated. - -::::{note} -To use actions for alert notifications, you need the appropriate user role. For more information, see [Cases requirements](../../../solutions/security/investigate/cases-requirements.md). - -:::: - - -1. Select a connector type to determine how notifications are sent. For example, if you select the {{jira}} connector, notifications are sent to your {{jira}} system. - - ::::{note} - Each action type requires a connector. Connectors store the information required to send the notification from the external system. You can configure connectors while creating the rule or in **Project settings** → **Stack Management** → **{{connectors-ui}}**. For more information, see [Action and connector types](../../../deploy-manage/manage-connectors.md). - - Some connectors that perform actions require less configuration. For example, you do not need to set the action frequency or variables for the [Cases connector](kibana://reference/connectors-kibana/cases-action-type.md). - - :::: - -2. After you select a connector, set its action frequency to define when notifications are sent: - - * **Summary of alerts**: Select this option to get a report that summarizes generated alerts, which you can review at your convenience. Alert summaries will be sent at the specified time intervals. - - ::::{note} - When setting a custom notification frequency, do not choose a time that is shorter than the rule’s execution schedule. 
- - :::: - - * **For each alert**: Select this option to ensure notifications are sent every time new alerts are generated. - -3. (Optional) Specify additional conditions that need to be met for notifications to be sent. Click the toggle to enable a setting, then add the required details: - - * **If alert matches query**: Enter a KQL query that defines field-value pairs or query conditions that must be met for notifications to be sent. The query only searches alert documents in the indices specified for the rule. - * **If alert is generated during timeframe**: Set timeframe details. Notifications are only sent if alerts are generated within the timeframe you define. - -4. Complete the required connector type fields. Here is an example with {{jira}}: - - :::{image} ../../../images/serverless--detections-selected-action-type.png - :alt: detections selected action type - :screenshot: - ::: - -5. Use the default notification message or customize it. You can add more context to the message by clicking the icon above the message text box and selecting from a list of available [alert notification variables](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-action-variables). -6. Create the rule with or without activation. - - ::::{note} - When you activate a rule, it is queued, and its schedule is determined by its initial run time. For example, if you activate a rule that runs every 5 minutes at 14:03 but it does not run until 14:04, it will run again at 14:09. - - :::: - - -::::{important} -After you activate a rule, you can check if it is running as expected using the [Monitoring tab](../../../troubleshoot/security/detection-rules.md) on the Rules page. If you see values in the `Gap` column, you can [Troubleshoot missing alerts](../../../troubleshoot/security/detection-rules.md#troubleshoot-signals). - -When a rule fails to run, the {{security-app}} tries to rerun it at its next scheduled run time. - -:::: - - - -### Alert notification placeholders [rule-action-variables] - -You can use [mustache syntax](http://mustache.github.io/) to add variables to notification messages. The action frequency you choose determines the variables you can select from. - -The following variables can be passed for all rules: - -::::{note} -Refer to [Action frequency: Summary of alerts](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md#alert-summary-action-variables) to learn about additional variables that can be passed if the rule’s action frequency is **Summary of alerts**. 
- -:::: - - -* `{{context.alerts}}`: Array of detected alerts -* `{{{context.results_link}}}`: URL to the alerts -* `{{context.rule.anomaly_threshold}}`: Anomaly threshold score above which alerts are generated ({{ml}} rules only) -* `{{context.rule.description}}`: Rule description -* `{{context.rule.false_positives}}`: Rule false positives -* `{{context.rule.filters}}`: Rule filters (query rules only) -* `{{context.rule.id}}`: Unique rule ID returned after creating the rule -* `{{context.rule.index}}`: Indices rule runs on (query rules only) -* `{{context.rule.language}}`: Rule query language (query rules only) -* `{{context.rule.machine_learning_job_id}}`: ID of associated {{ml}} job ({{ml}} rules only) -* `{{context.rule.max_signals}}`: Maximum allowed number of alerts per rule execution -* `{{context.rule.name}}`: Rule name -* `{{context.rule.query}}`: Rule query (query rules only) -* `{{context.rule.references}}`: Rule references -* `{{context.rule.risk_score}}`: Default rule risk score - - ::::{note} - This placeholder contains the rule’s default values even when the **Risk score override** option is used. - - :::: - -* `{{context.rule.rule_id}}`: Generated or user-defined rule ID that can be used as an identifier across systems -* `{{context.rule.saved_id}}`: Saved search ID -* `{{context.rule.severity}}`: Default rule severity - - ::::{note} - This placeholder contains the rule’s default values even when the **Severity override** option is used. - - :::: - -* `{{context.rule.threat}}`: Rule threat framework -* `{{context.rule.threshold}}`: Rule threshold values (threshold rules only) -* `{{context.rule.timeline_id}}`: Associated Timeline ID -* `{{context.rule.timeline_title}}`: Associated Timeline name -* `{{context.rule.type}}`: Rule type -* `{{context.rule.version}}`: Rule version -* `{{date}}`: Date the rule scheduled the action -* `{{kibanaBaseUrl}}`: Configured `server.publicBaseUrl` value, or empty string if not configured -* `{{rule.id}}`: ID of the rule -* `{{rule.name}}`: Name of the rule -* `{{rule.spaceId}}`: Space ID of the rule -* `{{rule.tags}}`: Tags of the rule -* `{{rule.type}}`: Type of rule -* `{{state.signals_count}}`: Number of alerts detected - -The following variables can only be passed if the rule’s action frequency is for each alert: - -* `{{alert.actionGroup}}`: Action group of the alert that scheduled actions for the rule -* `{{alert.actionGroupName}}`: Human-readable name of the action group of the alert that scheduled actions for the rule -* `{{alert.actionSubgroup}}`: Action subgroup of the alert that scheduled actions for the rule -* `{{alert.id}}`: ID of the alert that scheduled actions for the rule -* `{{alert.flapping}}`: A flag on the alert that indicates whether the alert status is changing repeatedly - - -#### Alert placeholder examples [placeholder-examples] - -To understand which fields to parse, see the [Detections API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-detections-api) to view the JSON representation of rules. 
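-
-For example, a simple notification message might combine several of the variables above. This is a hypothetical template; adjust the wording and fields to fit your connector:
-
-```json
-Rule "{{context.rule.name}}" created {{state.signals_count}} alerts (severity: {{context.rule.severity}}).
-View alerts: {{{context.results_link}}}
-```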
- -Example using `{{context.rule.filters}}` to output a list of filters: - -```json -{{#context.rule.filters}} -{{^meta.disabled}}{{meta.key}} {{#meta.negate}}NOT {{/meta.negate}}{{meta.type}} {{^exists}}{{meta.value}}{{meta.params.query}}{{/exists}}{{/meta.disabled}} -{{/context.rule.filters}} -``` - -Example using `{{context.alerts}}` as an array, which contains each alert generated since the last time the action was executed: - -```json -{{#context.alerts}} -Detection alert for user: {{user.name}} -{{/context.alerts}} -``` - -Example using the mustache "current element" notation `{{.}}` to output all the rule references in the `signal.rule.references` array: - -```json -{{#signal.rule.references}} {{.}} {{/signal.rule.references}} -``` - - -### Set up response actions (optional) [rule-response-action] - -Use response actions to set up additional functionality that will run whenever a rule executes: - -* **Osquery**: Include live Osquery queries with a custom query rule. When an alert is generated, Osquery automatically collects data on the system related to the alert. Refer to [Add Osquery Response Actions](../../../solutions/security/investigate/add-osquery-response-actions.md) to learn more. -* **{{elastic-defend}}**: Automatically run response actions on an endpoint when rule conditions are met. For example, you can automatically isolate a host or terminate a process when specific activities or events are detected on the host. Refer to [Automated response actions](../../../solutions/security/endpoint-response-actions/automated-response-actions.md) to learn more. - -::::{important} -Host isolation involves quarantining a host from the network to prevent further spread of threats and limit potential damage. Be aware that automatic host isolation can cause unintended consequences, such as disrupting legitimate user activities or blocking critical business processes. - -:::: - - - -## Preview your rule (optional) [preview-rules] - -You can preview any custom or prebuilt rule to find out how noisy it will be. For a custom rule, you can then adjust the rule’s query or other settings. - -::::{note} -To preview rules, you must have the appropriate user role. Refer to [Detections requirements](../../../solutions/security/detect-and-alert/detections-requirements.md) for more information. - -:::: - - -Click the **Rule preview** button while creating or editing a rule. The preview opens in a side panel, showing a histogram and table with the alerts you can expect, based on the defined rule settings and past events in your indices. - -:::{image} ../../../images/serverless--detections-preview-rule.png -:alt: Rule preview -:screenshot: -::: - -The preview also includes the effects of rule exceptions and override fields. In the histogram, alerts are stacked by `event.category` (or `host.name` for machine learning rules), and alerts with multiple values are counted more than once. - -To interact with the rule preview: - -* Use the date and time picker to define the preview’s time range. - - ::::{tip} - Avoid setting long time ranges with short rule intervals, or the rule preview might time out. - - :::: - -* Click **Refresh** to update the preview. - - * When you edit the rule’s settings or the preview’s time range, the button changes from blue to green to indicate that the rule has been edited since the last preview. - * For a relative time range (such as `Last 1 hour`), refresh the preview to check for the latest results. (Previews don’t automatically refresh with new incoming data.) 
- -* Click the **View details** icon (![View details](../../../images/serverless-expand.svg "")) in the alerts table to view the details of a particular alert. -* To resize the preview, hover between the rule settings and preview, then click and drag the border. You can also click the border, then the collapse icon (![Collapse menu](../../../images/serverless-menuRight.svg "")) to collapse and expand the preview. -* To close the preview, click the **Rule preview** button again. - - -### View your rule’s {{es}} queries (optional) [view-rule-es-queries] - -::::{note} -This option is only offered for {{esql}} and event correlation rules. - -:::: - - -When previewing a rule, you can also learn about its {{es}} queries, which are submitted when the rule runs. This information can help you identify and troubleshoot potential rule issues. You can also use it to confirm that your rule is retrieving the expected data. - -To learn more about your rule’s {{es}} queries, preview its results and do the following: - -1. Select the **Show {{es}} requests, ran during rule executions** option below the preview’s date and time picker. The **Preview logged results** section displays under the histogram and alerts table. -2. Click the **Preview logged results** section to expand it. Within the section, each rule execution is shown on an individual row. -3. Expand each row to learn more about the {{es}} queries that the rule submits each time it executes. The following details are provided: - - * When it started, and how long it took to complete - * A brief explanation of what the {{es}} queries do - * The actual {{es}} queries that the rule submits to indices containing events that are used during the rule execution - - ::::{tip} - Run the queries in [Console](https://www.elastic.co/guide/en/serverless/current/devtools-run-api-requests-in-the-console.html) to determine if your rule is retrieving the expected data. For example, to test your rule’s exceptions, run the rule’s {{es}} queries, which will also contain exceptions added to the rule. If your rule’s exceptions are working as intended, the query will not return events that should be ignored. - - :::: diff --git a/raw-migrated-files/docs-content/serverless/security-rules-ui-management.md b/raw-migrated-files/docs-content/serverless/security-rules-ui-management.md deleted file mode 100644 index 47a1aaeec1..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-rules-ui-management.md +++ /dev/null @@ -1,250 +0,0 @@ -# Manage detection rules [security-rules-ui-management] - -The Rules page allows you to view and manage all prebuilt and custom detection rules. 
- -:::{image} ../../../images/serverless--detections-all-rules.png -:alt: The Rules page -:screenshot: -::: - -On the Rules page, you can: - -* [Sort and filter the rules list](../../../solutions/security/detect-and-alert/manage-detection-rules.md#sort-filter-rules) -* [Check the current status of rules](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-status) -* [Modify existing rules settings](../../../solutions/security/detect-and-alert/manage-detection-rules.md#edit-rules-settings) -* [Manage rules](../../../solutions/security/detect-and-alert/manage-detection-rules.md#manage-rules-ui) -* [Manually run rules](../../../solutions/security/detect-and-alert/manage-detection-rules.md#manually-run-rules) -* [Snooze rule actions](../../../solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) -* [Export and import rules](../../../solutions/security/detect-and-alert/manage-detection-rules.md#import-export-rules-ui) -* [Confirm rule prerequisites](../../../solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) -* [Troubleshoot missing alerts](../../../troubleshoot/security/detection-rules.md#troubleshoot-signals) - - -## Sort and filter the rules list [sort-filter-rules] - -To sort the rules list, click any column header. To sort in descending order, click the column header again. - -To filter the rules list, enter a search term in the search bar and press **Return**: - -* Rule name — Enter a word or phrase from a rule’s name. -* Index pattern — Enter an index pattern (such as `filebeat-*`) to display all rules that use it. -* MITRE ATT&CK tactic or technique — Enter a MITRE ATT&CK tactic or technique name (such as `Defense Evasion`) or ID (such as `TA0005`) to display all associated rules. - -::::{note} -Searches for index patterns and MITRE ATT&CK tactics and techniques must match exactly, are case sensitive, and do *not* support wildcards. For example, to find rules using the `filebeat-*` index pattern, the search term `filebeat-*` is valid, but `filebeat` and `file*` are not because they don’t exactly match the index pattern. Likewise, the MITRE ATT&CK tactic `Defense Evasion` is valid, but `Defense`, `defense evasion`, and `Defense*` are not. - -:::: - - -You can also filter the rules list by selecting the **Tags**, **Last response**, **Elastic rules**, **Custom rules**, **Enabled rules**, and **Disabled rules** filters next to the search bar. - -The rules list retains your sorting and filtering settings when you navigate away and return to the page. These settings are also preserved when you copy the page’s URL and paste it into another browser. Select **Clear filters** above the table to revert to the default view. - - -## Check the current status of rules [rule-status] - -The **Last response** column displays the current status of each rule, based on the most recent attempt to run the rule: - -* **Succeeded**: The rule completed its defined search. This doesn’t necessarily mean it generated an alert, just that it ran without error. -* **Failed**: The rule encountered an error that prevented it from running. For example, a {{ml}} rule whose corresponding {{ml}} job wasn’t running. -* **Warning**: Nothing prevented the rule from running, but it might have returned unexpected results. For example, a custom query rule tried to search an index pattern that couldn’t be found in {{es}}. 
- - For {{ml}} rules, an indicator icon (![Error](../../../images/serverless-warning.svg "")) also appears in this column if a required {{ml}} job isn’t running. Click the icon to list the affected jobs, then click **Visit rule details page to investigate** to open the rule’s details page, where you can start the {{ml}} job. - - -## Modify existing rules settings [edit-rules-settings] - -You can edit an existing rule’s settings, and can bulk edit settings for multiple rules at once. - -::::{note} -For prebuilt Elastic rules, you can’t modify most settings. You can only edit [rule actions](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule) and [add exceptions](../../../solutions/security/detect-and-alert/add-manage-exceptions.md). If you try to bulk edit with both prebuilt and custom rules selected, the action will affect only the rules that can be modified. - -Similarly, rules will be skipped if they can’t be modified by a bulk edit. For example, if you try to apply a tag to rules that already have that tag, or apply an index pattern to rules that use data views. - -:::: - - -1. Go to **Rules** → **Detection rules (SIEM)**. -2. Do one of the following: - - * **Edit a single rule**: Select the **All actions** menu (![Actions menu](../../../images/serverless-boxesHorizontal.svg "")) on a rule, then select **Edit rule settings**. The **Edit rule settings** view opens, where you can modify the [rule’s settings](../../../solutions/security/detect-and-alert/create-detection-rule.md). - * **Bulk edit multiple rules**: Select the rules you want to edit, then select an action from the **Bulk actions** menu: - - * **Index patterns**: Add or delete the index patterns used by all selected rules. - * **Tags**: Add or delete tags on all selected rules. - * **Custom highlighted fields**: Add custom highlighted fields on all selected rules. You can choose any fields that are available in the [default {{elastic-sec}} indices](../../../solutions/security/get-started/configure-advanced-settings.md#update-sec-indices), or enter field names from other indices. To overwrite a rule’s current set of custom highlighted fields, select the **Overwrite all selected rules' custom highlighted fields** option, then click **Save**. - * **Add rule actions**: Add [rule actions](../../../solutions/security/detect-and-alert/create-detection-rule.md) on all selected rules. If you add multiple actions, you can specify an action frequency for each of them. To overwrite the frequency of existing actions, select the option to **Overwrite all selected rules actions**. - - ::::{note} - Rule actions won’t run during a [maintenance window](../../../explore-analyze/alerts-cases/alerts/maintenance-windows.md). They’ll resume running after the maintenance window ends. - - :::: - - * **Update rule schedules**: Update the [schedules](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule) and look-back times on all selected rules. - * **Apply Timeline template**: Apply a specified [Timeline template](../../../solutions/security/investigate/timeline-templates.md) to the selected rules. You can also choose **None** to remove Timeline templates from the selected rules. - -3. On the page or flyout that opens, update the rule settings and actions. - - ::::{tip} - To [snooze](../../../solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) rule actions, go to the **Actions** tab and click the bell icon. - - :::: - -4. 
If available, select **Overwrite all selected *x*** to overwrite the settings on the rules. For example, if you’re adding tags to multiple rules, selecting **Overwrite all selected rules tags** removes all the rules' original tags and replaces them with the tags you specify. -5. Click **Save**. - - -## Manage rules [manage-rules-ui] - -You can duplicate, enable, disable, delete, and snooze actions for rules: - -::::{note} -When duplicating a rule with exceptions, you can choose to duplicate the rule and its exceptions (active and expired), the rule and active exceptions only, or only the rule. If you duplicate the rule and its exceptions, copies of the exceptions are created and added to the duplicated rule’s [default rule list](../../../solutions/security/detect-and-alert/rule-exceptions.md). If the original rule used exceptions from a shared exception list, the duplicated rule will reference the same shared exception list. - -:::: - - -1. Go to **Rules** → **Detection rules (SIEM)**. -2. Do one of the following: - - * Select the **All actions** menu (![Actions menu](../../../images/serverless-boxesHorizontal.svg "")) on a rule, then select an action. - * Select all the rules you want to modify, then select an action from the **Bulk actions** menu. - * To enable or disable a single rule, switch on the rule’s **Enabled** toggle. - * To [snooze](../../../solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) actions for rules, click the bell icon. - - - -## Run rules manually [manually-run-rules] - -[beta] - -Manually run enabled rules for a specified period of time for testing purposes or additional rule coverage. - -::::{important} -Before manually running rules, make sure you properly understand and plan for rule dependencies. Incorrect scheduling can lead to inconsistent rule results. - -:::: - - -1. Navigate to the detection rules page, and do one of the following: - - * Select the **All actions** menu (![Actions menu](../../../images/serverless-boxesHorizontal.svg "")) on a rule, then select **Manual run**. - * Select all the rules you want to manually run, select the **Bulk actions** menu, then select **Manual run**. - -2. Specify when the manual run starts and ends. The default selection is the current day starting three hours in the past. The rule will search for events during the selected time range. -3. Click **Run** to manually run the rule. - - ::::{note} - Manual runs can produce multiple rule executions. This is determined by the manual run’s time range and the rule’s execution schedule. - - :::: - - -The manual run’s details are shown in the [Manual runs](../../../solutions/security/detect-and-alert/monitor-rule-executions.md#manual-runs-table) table on the **Execution results** tab. Changes you make to the manual run or rule settings will display in the Manual runs table after the current run completes. - -::::{note} -Be mindful of the following: - -* Rule actions are not activated during manual runs. -* Except for threshold rules, duplicate alerts aren’t created if you manually run a rule during a time range that was already covered by a scheduled run. -* Manual runs are executed with low priority and limited concurrency, meaning they might take longer to complete. This can be especially apparent for rules requiring multiple executions. - -:::: - - - -## Snooze rule actions [snooze-rule-actions] - -Instead of turning rules off to stop alert notifications, you can snooze rule actions for a specified time period. 
When you snooze rule actions, the rule continues to run on its defined schedule, but won’t perform any actions or send alert notifications. - -You can snooze notifications temporarily or indefinitely. When actions are snoozed, you can cancel or change the duration of the snoozed state. You can also schedule and manage recurring downtime for actions. - -You can snooze rule notifications from the **Installed Rules** tab, the rule details page, or the **Actions** tab when editing a rule. - -:::{image} ../../../images/serverless--detections-rule-snoozing.png -:alt: Rules snooze options -:screenshot: -::: - - -## Export and import rules [import-export-rules-ui] - -You can export custom detection rules to an `.ndjson` file, which you can then import into another {{elastic-sec}} environment. - -::::{note} -You cannot export Elastic prebuilt rules, but you can duplicate a prebuilt rule, then export the duplicated rule. - -If you try to export with both prebuilt and custom rules selected, only the custom rules are exported. - -:::: - - -The `.ndjson` file also includes any actions, connectors, and exception lists related to the exported rules. However, other configuration items require additional handling when exporting and importing rules: - -* **Data views**: For rules that use a {{kib}} data view as a data source, the exported file contains the associated `data_view_id`, but does *not* include any other data view configuration. To export/import between {{kib}} spaces, first use the [Saved Objects](../../../explore-analyze/find-and-organize.md) UI (**Project settings** → **Stack Management** → **Saved Objects**) to share the data view with the destination space. - -To import into a different {{stack}} deployment, the destination cluster must include a data view with a matching data view ID (configured in the [data view’s advanced settings](../../../explore-analyze/find-and-organize/data-views.md)). Alternatively, after importing, you can manually reconfigure the rule to use an appropriate data view in the destination system. - -* **Actions and connectors**: Rule actions and connectors are included in the exported file, but sensitive information about the connector (such as authentication credentials) *is not* included. You must re-add missing connector details after importing detection rules. - - ::::{tip} - You can also use the [Saved Objects](../../../explore-analyze/find-and-organize.md) UI (**Project settings** → **Stack Management** → **Saved Objects**) to export and import necessary connectors before importing detection rules. - - :::: - -* **Value lists**: Any value lists used for rule exceptions are *not* included in rule exports or imports. Use the [Manage value lists](../../../solutions/security/detect-and-alert/create-manage-value-lists.md#manage-value-lists) UI (**Rules** → **Detection rules (SIEM)** → **Manage value lists**) to export and import value lists separately. - -To export and import detection rules: - -1. Go to **Rules** → **Detection rules (SIEM)**. -2. To export rules: - - 1. In the rules table, select the rules you want to export. - 2. Select **Bulk actions** → **Export**, then save the exported file. - -3. To import rules: - - ::::{note} - To import rules with and without actions, and to manage rule connectors, you must have the appropriate user role. Refer to [Enable and access detections](../../../solutions/security/detect-and-alert/detections-requirements.md#enable-detections-ui) for more information. - - :::: - - - 1. Click **Import rules**. - 2. 
Drag and drop the file that contains the detection rules.
-
-        ::::{note}
-        Imported rules must be in an `.ndjson` file.
-
-        ::::
-
-    3. (Optional) Select **Overwrite existing detection rules with conflicting "rule_id"** to update existing rules if they match the `rule_id` value of any rules in the import file. Configuration data included with the rules, such as actions, is also overwritten.
-    4. (Optional) Select **Overwrite existing exception lists with conflicting "list_id"** to replace existing exception lists with exception lists from the import file if they have a matching `list_id` value.
-    5. (Optional) Select **Overwrite existing connectors with conflicting action "id"** to update existing connectors if they match the `action id` value of any rule actions in the import file. Configuration data included with the actions is also overwritten.
-    6. Click **Import rule**.
-    7. (Optional) If a connector is missing sensitive information after the import, a warning displays and you’re prompted to fix the connector. In the warning, click **Go to connector**. On the Connectors page, find the connector that needs to be updated, click **Fix**, then add the necessary details.
-
-
-
-## Confirm rule prerequisites [rule-prerequisites]
-
-Many detection rules are designed to work with specific [Elastic integrations](https://docs.elastic.co/en/integrations) and data fields. These prerequisites are identified in **Related integrations** and **Required fields** on a rule’s details page (**Rules** → **Detection rules (SIEM)**, then click a rule’s name). **Related integrations** also displays each integration’s installation status and includes links for installing and configuring the listed integrations.
-
-Additionally, the **Setup guide** section provides guidance on setting up the rule’s requirements.
-
-:::{image} ../../../images/serverless--detections-rule-details-prerequisites.png
-:alt: Rule details page with Related integrations
-:screenshot:
-:::
-
-You can also check rules' related integrations in the **Installed Rules** and **Rule Monitoring** tables. Click the **integrations** badge to display the related integrations in a popup.
-
-:::{image} ../../../images/serverless--detections-rules-table-related-integrations.png
-:alt: Rules table with related integrations popup
-:screenshot:
-:::
-
-::::{tip}
-You can hide the **integrations** badge in the rules tables by turning off the `securitySolution:showRelatedIntegrations` advanced setting.
-
-::::
diff --git a/raw-migrated-files/docs-content/serverless/security-signals-to-cases.md b/raw-migrated-files/docs-content/serverless/security-signals-to-cases.md
deleted file mode 100644
index 292e7fcca0..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-signals-to-cases.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-navigation_title: "Add alerts to cases"
----
-
-# Add detection alerts to cases [security-signals-to-cases]
-
-
-From the Alerts table, you can attach one or more alerts to a [new case](../../../solutions/security/detect-and-alert/add-detection-alerts-to-cases.md#signals-to-new-cases) or [an existing one](../../../solutions/security/detect-and-alert/add-detection-alerts-to-cases.md#signals-to-existing-cases). Alerts from any rule type can be added to a case.
-
-::::{note}
-* After you add an alert to a case, you can remove it from the case activity under the alert summary or by using the [cases API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-cases).
-* Each case can have a maximum of 1,000 alerts. 
- -:::: - - -:::{image} ../../../images/serverless--detections-add-alert-to-case.gif -:alt: Animation of adding an alert to a case -:screenshot: -::: - - -## Add alerts to a new case [signals-to-new-cases] - -To add alerts to a new case: - -1. Do one of the following: - - * To add a single alert to a case, select the **More actions** menu (*…​*) in the Alerts table or **Take action** in the alert details flyout, then select **Add to a new case**. - * To add multiple alerts, select the alerts, then select **Add to a new case** from the **Bulk actions** menu. - -2. Give the case a name, assign a severity level, and provide a description. You can use [Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) syntax in the case description. - - ::::{note} - If you do not assign your case a severity level, it will be assigned **Low** by default. - - :::: - -3. Optionally, add a category, assignees and relevant tags. You can add users only if they meet the necessary [prerequisites](../../../solutions/security/investigate/cases-requirements.md). -4. Specify whether you want to sync the status of associated alerts. It is enabled by default; however, you can toggle this setting on or off at any time. If it remains enabled, the alert’s status updates whenever the case’s status is modified. -5. Select a connector. If you’ve previously added one, that connector displays as the default selection. Otherwise, the default setting is `No connector selected`. -6. Click **Create case** after you’ve completed all of the required fields. A confirmation message is displayed with an option to view the new case. Click the link in the notification or go to the Cases page to view the case. - - ![Create case flyout with sample data filled in](../../../images/serverless--detections-add-alert-to-new-case.png "") - - - -## Add alerts to an existing case [signals-to-existing-cases] - -To add alerts to an existing case: - -1. Do one of the following: - - * To add a single alert to a case, select the **More actions** menu (*…​*) in the Alerts table or **Take action** in the alert details flyout, then select **Add to existing case**. - * To add multiple alerts, select the alerts, then select **Add to an existing case** from the **Bulk actions** menu. - -2. From the **Select case** dialog box, select the case to which you want to attach the alert. A confirmation message is displayed with an option to view the updated case. Click the link in the notification or go to the Cases page to view the case’s details. - - ::::{note} - If you attach the alert to a case that has been configured to sync its status with associated alerts, the alert’s status updates any time the case’s status is modified. 
- - :::: - - - :::{image} ../../../images/serverless--detections-add-alert-to-existing-case.png - :alt: Select case dialog listing existing cases - :screenshot: - ::: diff --git a/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md b/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md deleted file mode 100644 index 9041a6d1e9..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md +++ /dev/null @@ -1,49 +0,0 @@ -# Triage alerts [security-triage-alerts-with-elastic-ai-assistant] - -Elastic AI Assistant can help you enhance and streamline your alert triage workflows by assessing multiple recent alerts in your environment, and helping you interpret an alert and its context. - -When you view an alert in {{elastic-sec}}, details such as related documents, hosts, and users appear alongside a synopsis of the events that triggered the alert. This data provides a starting point for understanding a potential threat. AI Assistant can answer questions about this data and offer insights and actionable recommendations to remediate the issue. - -To enable AI Assistant to answer questions about alerts, you need to provide alert data as context for your prompts. You can either provide multiple alerts using the [Knowledge Base](../../../solutions/security/ai/ai-assistant-knowledge-base.md) feature, or provide individual alerts directly. - - -## Use AI Assistant to triage multiple alerts [ai-assistant-triage-alerts-knowledge-base] - -Enable the [Knowledge Base](../../../solutions/security/ai/ai-assistant-knowledge-base.md) **Alerts** setting to send AI Assistant data for up to 500 alerts as context for each of your prompts. Use the slider on the Security AI settings' **Knowledge Base** tab to select the number of alerts to send to AI Assistant. - -For more information, refer to [Knowledge Base](../../../solutions/security/ai/ai-assistant-knowledge-base.md). - - -## Use AI Assistant to triage a specific alert [use-ai-assistant-to-triage-an-alert] - -Once you have chosen an alert to investigate: - -1. Click its **View details** button from the Alerts table. -2. In the alert details flyout, click **Chat** to launch the AI assistant. Data related to the selected alert is automatically added to the prompt. -3. Click **Alert (from summary)** to view which alert fields will be shared with AI Assistant. - - ::::{note} - For more information about selecting which fields to send, and to learn about anonymizing your data, refer to [AI Assistant](../../../solutions/security/ai/ai-assistant.md). - :::: - -4. (Optional) Click a quick prompt to use it as a starting point for your query, for example **Alert summarization**. Improve the quality of AI Assistant’s response by customizing the prompt and adding detail. - - Once you’ve submitted your query, AI Assistant will process the information and provide a detailed response. Depending on your prompt and the alert data that you included, its response can include a thorough analysis of the alert that highlights key elements such as the nature of the potential threat, potential impact, and suggested response actions. - -5. (Optional) Ask AI Assistant follow-up questions, provide additional information for further analysis, and request clarification. The response is not a static report. 
-
-
-## Generate triage reports [generate-triage-reports]
-
-Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once the AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as:
-
-* “Generate a detailed report about this incident, including timeline, impact analysis, and response actions. Also, include a diagram of events.”
-* “Generate a summary of this incident/alert and include diagrams of events.”
-* “Provide more details on the mitigation strategies used.”
-
-After you review the report, click **Add to existing case** at the top of AI Assistant’s response. This allows you to save a record of the report and make it available to your team.
-
-:::{image} ../../../images/serverless-ai-triage-add-to-case.png
-:alt: An AI Assistant dialogue with the add to existing case button highlighted
-:screenshot:
-:::
diff --git a/raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md b/raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md
deleted file mode 100644
index f93ed327ec..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md
+++ /dev/null
@@ -1,143 +0,0 @@
-# Tune detection rules [security-tune-detection-signals]
-
-Using the {{security-app}}, you can tune prebuilt and custom detection rules to optimize alert generation. To reduce noise, you can:
-
-* Add [exceptions](../../../solutions/security/detect-and-alert/add-manage-exceptions.md) to detection rules.
-
-    ::::{tip}
-    Using exceptions is recommended as this ensures excluded source event values persist even after prebuilt rules are updated.
-
-    ::::
-
-* Disable detection rules that rarely produce actionable alerts because they match expected local behavior, workflows, or policy exceptions.
-* [Clone and modify](../../../solutions/security/detect-and-alert/manage-detection-rules.md#manage-rules-ui) detection rule queries so they are aligned with local policy exceptions. This reduces noise while retaining actionable alerts.
-* Clone and modify detection rule risk scores, and use branching logic to map higher risk scores to higher priority workflows.
-* Enable [alert suppression](../../../solutions/security/detect-and-alert/suppress-detection-alerts.md) for custom query rules to reduce the number of repeated or duplicate alerts.
-
-For details about tuning rules for specific categories:
-
-* [Tune rules detecting authorized processes](../../../solutions/security/detect-and-alert/tune-detection-rules.md#tune-authorized-processes)
-* [Tune Windows child process and PowerShell rules](../../../solutions/security/detect-and-alert/tune-detection-rules.md#tune-windows-rules)
-* [Tune network rules](../../../solutions/security/detect-and-alert/tune-detection-rules.md#tune-network-rules)
-* [Tune indicator match rules](../../../solutions/security/detect-and-alert/tune-detection-rules.md#tune-indicator-rules)
-
-
-## Filter out uncommon application alerts [filter-rule-process]
-
-Organizations frequently use uncommon and in-house applications. Occasionally, these can trigger unwanted alerts. To stop a rule from matching on an application, add an exception for the required application. 
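If you manage exceptions as code, the same kind of exclusion can also be created through Kibana's exception list items endpoint. The following is a minimal sketch, assuming an exception list with the ID `my-rule-exceptions` already exists and is attached to the rule; the host and API key placeholders are illustrative, not values from this page.

```sh
# Sketch: add a process.name exception item to an existing exception list.
# KIBANA_URL, API_KEY, and the list_id value are placeholders/assumptions;
# the target exception list must already exist and be attached to the rule.
curl -X POST "${KIBANA_URL}/api/exception_lists/items" \
  -H "kbn-xsrf: true" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "list_id": "my-rule-exceptions",
    "namespace_type": "single",
    "type": "simple",
    "name": "Exclude in-house build tool",
    "description": "Suppress alerts for the myautomatedbuild application",
    "entries": [
      { "field": "process.name", "operator": "included", "type": "match", "value": "myautomatedbuild" }
    ]
  }'
```

The UI walkthrough that follows achieves the same result for a single prebuilt rule.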
-
-For example, to prevent the **Unusual Process Execution Path - Alternate Data Stream** rule from producing alerts for an in-house application named `myautomatedbuild`:
-
-1. Go to **Rules** → **Detection rules (SIEM)**.
-2. Search for and then click on the **Unusual Process Execution Path - Alternate Data Stream** rule.
-
-    The **Unusual Process Execution Path - Alternate Data Stream** rule details page is displayed.
-
-    :::{image} ../../../images/serverless--detections-prebuilt-rules-rule-details-page.png
-    :alt: Rule details page
-    :screenshot:
-    :::
-
-3. Select the **Rule exceptions** tab, then click **Add rule exception**.
-4. Fill in these options:
-
-    * **Field**: `process.name`
-    * **Operator**: `is`
-    * **Value**: `myautomatedbuild`
-
-    :::{image} ../../../images/serverless--detections-prebuilt-rules-process-exception.png
-    :alt: Add Rule Exception UI
-    :screenshot:
-    :::
-
-5. Click **Add rule exception**.
-
-
-## Tune rules detecting authorized processes [tune-authorized-processes]
-
-Authorized security testing, system tools, management frameworks, and administrative activity may trigger detection rules. These legitimate activities include:
-
-* Authorized security research.
-* System and software management processes running scripts, including scripts that start child processes.
-* Administrative and management frameworks that create users, schedule tasks, make `psexec` connections, and run WMI commands.
-* Legitimate scripts using the `whoami` command.
-* Applications that work with file shares, such as backup programs, and use the server message block (SMB) protocol.
-
-To reduce noise for authorized activity, you can do any of these:
-
-* Add an exception to the rules that exclude specific servers, such as the relevant host names, agent names, or other common identifiers. For example, `host.name is `.
-* Add an exception to the rules that [exclude specific processes](../../../solutions/security/detect-and-alert/tune-detection-rules.md#filter-rule-process). For example, `process.name is `.
-* Add an exception to the rules that exclude a common user. For example, `user.name is `.
-
-Another useful technique is to assign lower risk scores to rules triggered by authorized activity. This enables detections while keeping the resulting alerts out of high-priority workflows. Use these steps:
-
-1. Before adding exceptions, duplicate the prebuilt rule.
-2. Add an exception to the original prebuilt rule that excludes the relevant user or process name (`user.name is ` or `process.name is "process-name"`).
-3. Edit the duplicated rule as follows:
-
-    * Lower the `Risk score` (**Edit rule settings** → **About** tab).
-    * Add an exception so the rule only matches the user or process name excluded in the original prebuilt rule (`user.name is not ` or `process.name is not `).
-
-    :::{image} ../../../images/serverless--detections-prebuilt-rules-process-specific-exception.png
-    :alt: Example of is not exception in the Add Rule Exception UI
-    :screenshot:
-    :::
-
-4. Click **Add rule exception**.
-
-
-## Tune Windows child process and PowerShell rules [tune-windows-rules]
-
-Normal user activity may sometimes trigger one or more of these rules:
-
-* **Suspicious MS Office Child Process**
-* **Suspicious MS Outlook Child Process**
-* **System Shells via Services**
-* **Unusual Parent-Child Relationship**
-* **Windows Script Executing PowerShell**
-
-While all rules can be adjusted as needed, use care when adding exceptions to these rules. 
Exceptions could result in an undetected client-side execution, or a persistence or malware threat going unnoticed.
-
-Examples of when these rules may create noise include:
-
-* Receiving and opening email-attached Microsoft Office files, which include active content such as macros or scripts, from a trusted third-party source.
-* Authorized technical support personnel who provide remote workers with scripts to gather troubleshooting information.
-
-In these cases, exceptions can be added to the rules using the relevant `process.name`, `user.name`, and `host.name` conditions. Additionally, you can create duplicate rules with lower risk scores.
-
-
-## Tune network rules [tune-network-rules]
-
-The definition of normal network behavior varies widely across different organizations. Different networks conform to different security policies, standards, and regulations. When normal network activity triggers alerts, network rules can be disabled or modified. For example:
-
-* To exclude a specific source, add a `source.ip` exception with the relevant IP address, and a `destination.port` exception with the relevant port number (`source.ip is 196.1.0.12` and `destination.port is 445`).
-* To exclude source network traffic for an entire subnet, add a `source.ip` exception with the relevant CIDR notation (`source.ip is 192.168.0.0/16`).
-* To exclude a destination IP for a specific destination port, add a `destination.ip` exception with the IP address, and a `destination.port` exception with the port number (`destination.ip is 38.160.150.31` and `destination.port is 445`).
-* To exclude a destination subnet for a specific destination port, add a `destination.ip` exception using CIDR notation, and a `destination.port` exception with the port number (`destination.ip is 172.16.0.0/12` and `destination.port is 445`).
-
-
-## Tune indicator match rules [tune-indicator-rules]
-
-Take the following steps to tune indicator match rules:
-
-* Specify a detailed query as part of the indicator index query. Results of the indicator index query are used by the detection engine to query the indices specified in your rule definition’s index pattern. Using no query or the wildcard `*` query may result in your rule executing very large queries.
-* Limit your rule’s additional look-back time to as short a duration as possible, and no more than 24 hours.
-* Avoid cluster performance issues by scheduling your rule to run in one-hour intervals or longer. For example, avoid scheduling an indicator match rule to check for indicators every five minutes.
-
-::::{note}
-{{elastic-sec}} provides limited support for indicator match rules. Visit [support limitations](../../../solutions/security/detect-and-alert.md#support-indicator-rules) for more information.
-
-::::
-
-
-
-### Noise from common cloud-based network traffic [security-tune-detection-signals-noise-from-common-cloud-based-network-traffic]
-
-In cloud-based organizations, remote workers sometimes access services over the internet. 
The security policies of home networks probably differ from the security policies of managed corporate networks, and these rules might need tuning to reduce noise from legitimate administrative activities: - -* **RDP (Remote Desktop Protocol) from the Internet** - -::::{tip} -If your organization is widely distributed and the workforce travels a lot, use the `windows_anomalous_user_name_ecs`, `linux_anomalous_user_name_ecs`, and `suspicious_login_activity_ecs` [{{ml}}](../../../solutions/security/advanced-entity-analytics/anomaly-detection.md) jobs to detect suspicious authentication activity. - -:::: diff --git a/raw-migrated-files/docs-content/serverless/security-view-alert-details.md b/raw-migrated-files/docs-content/serverless/security-view-alert-details.md deleted file mode 100644 index 8d6fadd92e..0000000000 --- a/raw-migrated-files/docs-content/serverless/security-view-alert-details.md +++ /dev/null @@ -1,354 +0,0 @@ ---- -navigation_title: "View alert details" ---- - -# View detection alert details [security-view-alert-details] - - -To learn more about an alert, click the **View details** button from the Alerts table. This opens the alert details flyout, which helps you understand and manage the alert. - -:::{image} ../../../images/serverless--detections-open-alert-details-flyout.gif -:alt: Expandable flyout -:screenshot: -::: - -Use the alert details flyout to begin an investigation, open a case, or plan a response. Click **Take action** at the bottom of the flyout to find more options for interacting with the alert. - - -## Alert details flyout UI [alert-details-flyout-ui] - -The alert details flyout has a right panel, a preview panel, and a left panel. Each panel provides a different perspective of the alert. - - -### Right panel [right-panel] - -The right panel provides an overview of the alert. Expand any of the collapsed sections to learn more about the alert. You can also hover over fields on the **Overview** and **Table** tabs to display available [inline actions](../../../solutions/security/get-started/elastic-security-ui.md#inline-actions). - -:::{image} ../../../images/serverless--detections-alert-details-flyout-right-panel.png -:alt: Right panel of the alert details flyout -:screenshot: -::: - -From the right panel, you can also: - -* Click **Expand details** to open the [left panel](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#left-panel), which shows more information about sections in the right panel. -* Click the **Chat** icon (![AI assistant chat icon](../../../images/serverless--detections-ai-assistant-chat.png "")) to access the [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md). -* Click the **Share alert** icon (![Share alert](../../../images/serverless-share.svg "")) to get a shareable alert URL. We *do not* recommend copying the URL from your browser’s address bar, which can lead to inconsistent results if you’ve set up filters or relative time ranges for the Alerts page. - - ::::{note} - If you’ve configured the [`server.publicBaseUrl`](../../../deploy-manage/deploy/self-managed/configure.md#server-publicBaseUrl) setting in the `kibana.yml` file, the shareable URL is also in the `kibana.alert.url` field. You can find the field by searching for `kibana.alert.url` on the **Table** tab. - - :::: - - - ::::{important} - If you’ve enabled grouping on the Alerts page, the alert details flyout won’t open until you expand a collapsed group and select an individual alert. 
-
-    ::::
-
-* Click the **Flyout settings** icon (![Flyout settings icon](../../../images/serverless--detections-flyout-settings.png "")) to access options for displaying the alert details flyout. The **Overlay** option (which displays the flyout over the Alerts table) is selected by default. Select **Push** to display the flyout to the side of the table instead. In either display, you can resize the flyout panels to your liking. Clicking **Reset size** reverts the flyout to its default dimensions.
-* Find basic details about the alert, such as the:
-
-    * Associated rule
-    * Alert status
-    * Date and time the alert was created
-    * Alert severity and risk score (these are inherited from the rule that generated the alert)
-    * Users assigned to the alert (click the ![Assign alert](../../../images/serverless-plusInCircle.svg "") icon to assign more users)
-    * Notes attached to the alert (click the ![Add note](../../../images/serverless-plusInCircle.svg "") icon to create a new note)
-
-* Click the **Table** or **JSON** tabs to display the alert details in table or JSON format. In table format, alert details are displayed as field-value pairs.
-
-
-### Preview panel [preview-panel]
-
-Some areas in the flyout provide previews when you click on them. For example, clicking **Show rule summary** in the rule description displays a preview of the rule’s details. To close the preview, click **Back** or **x**.
-
-:::{image} ../../../images/serverless--detections-alert-details-flyout-preview-panel.gif
-:alt: Preview panel of the alert details flyout
-:screenshot:
-:::
-
-
-### Left panel [left-panel]
-
-The left panel provides an expanded view of what’s shown in the right panel. To open the left panel, do one of the following:
-
-* Click **Expand details** at the top of the right panel.
-
-    ![Expand details button at the top of the alert details flyout](../../../images/serverless--detections-expand-details-button.png "")
-
-* Click one of the section titles on the **Overview** tab within the right panel.
-
-    ![Left panel of the alert details flyout](../../../images/serverless--detections-alert-details-flyout-left-panel.png "")
-
-
-
-## About [about-section]
-
-The About section is located on the **Overview** tab in the right panel. It provides a brief description of the rule that’s related to the alert and an explanation of what generated the alert.
-
-:::{image} ../../../images/serverless--detections-about-section-rp.png
-:alt: About section of the Overview tab
-:screenshot:
-:::
-
-The About section has the following information:
-
-* **Rule description**: Describes the rule’s purpose or detection goals. Click **Show rule summary** to display a preview of the rule’s details. From the preview, click **Show rule details** to view the rule’s details page.
-* **Alert reason**: Describes the source event that generated the alert. Event details are displayed in plain text and ordered logically to provide context for the alert. Click **Show full reason** to display the alert reason in the event rendered format within the [preview panel](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#preview-panel).
-
-    ::::{note}
-    The event renderer only displays if an event renderer exists for the alert type. Fields are interactive; hover over them to access the available actions.
-
-    ::::
-
-* **Last alert status change**: Shows the last time the alert’s status was changed, along with the user who changed it. 
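Everything summarized in these sections comes from a single alert document in {{es}}, which is what the flyout's **JSON** tab renders. The following is a minimal sketch of fetching such a document directly, assuming the default-space alerts index; the `_id` value is an illustrative placeholder.

```console
# Sketch: fetch one alert document by ID from the default-space alerts index.
# The index name and _id value are assumptions for illustration only.
GET .alerts-security.alerts-default/_search
{
  "query": {
    "ids": { "values": ["d7a1e0f3c9b24a6e8f5012ab34cd56ef"] }
  }
}
```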
-
-
-## Investigation [investigation-section]
-
-The Investigation section is located on the **Overview** tab in the right panel. It offers a couple of ways to begin investigating the alert.
-
-:::{image} ../../../images/serverless--detections-investigation-section-rp.png
-:alt: Investigation section of the Overview tab
-:screenshot:
-:::
-
-The Investigation section provides the following information:
-
-* **Investigation guide**: The **Show investigation guide** button displays if the rule associated with the alert has an investigation guide. Click the button to open the expanded Investigation view in the left panel.
-
-    ::::{tip}
-    Add an [investigation guide](../../../solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md#add-ig-actions-rule) to a rule when creating a new custom rule or modifying an existing custom rule’s settings.
-
-    ::::
-
-* **Highlighted fields**: Shows relevant fields for the alert and any [custom highlighted fields](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params) you added to the rule. Custom highlighted fields with values are added to this section. Those without values aren’t added.
-
-
-## Visualizations [visualizations-section]
-
-The Visualizations section is located on the **Overview** tab in the right panel. It offers a glimpse of the processes that led up to the alert and occurred after it.
-
-:::{image} ../../../images/serverless--detections-visualizations-section-rp.png
-:alt: Visualizations section of the Overview tab
-:screenshot:
-:::
-
-Click **Visualizations** to display the following previews:
-
-* **Session view preview**: Shows a preview of [Session View](../../../solutions/security/investigate/session-view.md) data. Click **Session viewer preview** to open the **Session View** tab in Timeline.
-* **Analyzer preview**: Shows a preview of the [visual analyzer graph](../../../solutions/security/investigate/visual-event-analyzer.md). The preview displays up to three levels of the analyzed event’s ancestors and up to three levels of the event’s descendants and children. The ellipsis symbol (**`...`**) indicates the event has more ancestors and descendants to examine. Click **Analyzer preview** to open the **Event Analyzer** tab in Timeline.
-
-
-### Expanded visualizations view [expanded-visualizations-view]
-
-::::{warning}
-This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
-::::
-
-
-::::{admonition} Requirements
-:class: note
-
-To use the **Visualize** tab, you must turn on the `securitySolution:enableVisualizationsInFlyout` [advanced setting](../../../solutions/security/get-started/configure-advanced-settings.md#visualizations-in-flyout).
-
-::::
-
-
-The **Visualize** tab allows you to maintain the context of the Alerts table, while providing a more detailed view of alerts that you’re investigating in the event analyzer or Session View. To open the tab, click **Session view preview** or **Analyzer preview** from the right panel.
-
-:::{image} ../../../images/serverless--detections-visualize-tab-lp.png
-:alt: Expanded view of visualization details
-:screenshot:
-:::
-
-As you examine the alert’s related processes, you can also preview the alerts and events that are associated with those processes. 
Then, if you want to learn more about a particular alert or event, you can click **Show full alert details** to open the full details flyout. - -:::{image} ../../../images/serverless--detections-visualize-tab-lp-alert-details.gif -:alt: Examine alert details from event analyzer -:screenshot: -::: - - -## Insights [insights-section] - -The Insights section is located on the **Overview** tab in the right panel. It offers different perspectives from which you can assess the alert. Click **Insights** to display overviews for [related entities](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#entities-overview), [threat intelligence](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#threat-intelligence-overview), [correlated data](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#correlations-overview), and [host and user prevalence](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#prevalence-overview). - -:::{image} ../../../images/serverless--detections-insights-section-rp.png -:alt: Insights section of the Overview tab -:screenshot: -::: - - -### Entities [entities-overview] - -The Entities overview provides high-level details about the user and host that are related to the alert. Host and user risk classifications are also available if you have the Security Analytics Complete [project feature](../../../deploy-manage/deploy/elastic-cloud/project-settings.md). - -:::{image} ../../../images/serverless--detections-entities-overview.png -:alt: Overview of the entity details section in the right panel -:screenshot: -::: - - -#### Expanded entities view [expanded-entities-view] - -From the right panel, click **Entities** to open a detailed view of the host and user associated with the alert. The expanded view also includes risk scores and classifications (if you have the Security Analytics Complete [project feature](../../../deploy-manage/deploy/elastic-cloud/project-settings.md)) and activity on related hosts and users. - -:::{image} ../../../images/serverless--detections-expanded-entities-view.png -:alt: Expanded view of entity details -:screenshot: -::: - - -### Threat intelligence [threat-intelligence-overview] - -The Threat intelligence overview shows matched indicators, which provide threat intelligence relevant to the alert. - -:::{image} ../../../images/serverless--detections-threat-intelligence-overview.png -:alt: Overview of threat intelligence on the alert -:screenshot: -::: - -The Threat intelligence overview provides the following information: - -* **Threat match detected**: Only available when examining an alert generated from an [indicator match](../../../solutions/security/detect-and-alert/create-detection-rule.md#create-indicator-rule) rule. Shows the number of matched indicators that are present in the alert document. Shows zero if there are no matched indicators or you’re examining an alert generated by another type of rule. -* **Fields enriched with threat intelligence**: Shows the number of matched indicators that are present on an alert that *wasn’t* generated from an indicator match rule. If none exist, the total number of matched indicators is zero. - - -#### Expanded threat intelligence view [expanded-threat-intel-view] - -From the right panel, click **Threat intelligence** to open the expanded Threat intelligence view within the left panel. 
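Before relying on this view, you may want to confirm that indicator documents actually exist in your threat intelligence indices. The following is a minimal sketch, assuming the common `logs-ti_*` index pattern; substitute whatever the advanced setting described in the note below lists for your environment.

```console
# Sketch: spot-check the most recent indicator document.
# The logs-ti_* pattern is an assumption; check securitySolution:defaultThreatIndex.
GET logs-ti_*/_search
{
  "size": 1,
  "sort": [{ "@timestamp": "desc" }],
  "_source": ["threat.indicator.type", "threat.indicator.provider", "@timestamp"]
}
```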
- -::::{note} -The expanded threat intelligence view queries indices specified in the `securitySolution:defaultThreatIndex` advanced setting. Refer to [Update default Elastic Security threat intelligence indices](../../../solutions/security/get-started/configure-advanced-settings.md#update-threat-intel-indices) to learn more about threat intelligence indices. - -:::: - - -:::{image} ../../../images/serverless--detections-expanded-threat-intelligence-view.png -:alt: Expanded view of threat intelligence on the alert -:screenshot: -::: - -The expanded Threat intelligence view shows individual indicators within the alert document. You can expand and collapse indicator details by clicking the arrow button at the end of the indicator label. Each indicator is labeled with values from the `matched.field` and `matched.atomic` fields and displays the threat intelligence provider. - -Matched threats are organized into two sections, described below. Within each section, matched threats are shown in reverse chronological order, with the most recent at the top. All mapped fields are displayed for each matched threat. - -**Threat match detected** - -The Threat match detected section is only populated with indicator match details if you’re examining an alert that was generated from an indicator match rule. Indicator matches occur when alert field values match with threat intelligence data you’ve ingested. - -**Fields enriched with threat intelligence** - -Threat intelligence can also be found on alerts that weren’t generated from indicator match rules. To find this information, {{elastic-sec}} queries alert documents from the past 30 days and searches for fields that contain known threat intelligence. If any are found, they’re logged in this section. - -::::{tip} -Use the date time picker to modify the query time frame, which looks at the past 30 days by default. You can also click the **Inspect** button to examine the query that the Fields enriched with threat intelligence section uses. - -:::: - - -When searching for threat intelligence, {{elastic-sec}} queries the alert document for the following fields: - -* `file.hash.md5`: The MD5 hash -* `file.hash.sha1`: The SHA1 hash -* `file.hash.sha256`: The SHA256 hash -* `file.pe.imphash`: Imports in a PE file -* `file.elf.telfhash`: Imports in an ELF file -* `file.hash.ssdeep`: The SSDEEP hash -* `source.ip`: The IP address of the source (IPv4 or IPv6) -* `destination.ip`: The event’s destination IP address -* `url.full`: The full URL of the event source -* `registry.path`: The full registry path, including the hive, key, and value - - -### Correlations [correlations-overview] - -The Correlations overview shows how an alert is related to other alerts and offers ways to investigate related alerts. Use this information to quickly find patterns between alerts and then take action. - -:::{image} ../../../images/serverless--detections-correlations-overview.png -:alt: Overview of available correlation data -:screenshot: -::: - -The Correlations overview provides the following information: - -* **Suppressed alerts**: Indicates that the alert was created with alert suppression, and shows how many duplicate alerts were suppressed. This information only appears if alert suppression is enabled for the rule. -* **Alerts related by source event**: Shows the number of alerts that were created by the same source event. -* **Cases related to the alert**: Shows the number of cases to which the alert has been added. 
-* **Alerts related by session ID**: Shows the number of alerts generated by the same session.
-* **Alerts related by process ancestry**: Shows the number of alerts that are related by process events on the same linear branch.
-
-
-#### Expanded correlations view [expanded-correlations-view]
-
-From the right panel, click **Correlations** to open the expanded Correlations view within the left panel.
-
-:::{image} ../../../images/serverless--detections-expanded-correlations-view.png
-:alt: Expanded view of correlation data
-:screenshot:
-:::
-
-In the expanded view, correlation data is organized into several tables:
-
-* **Suppressed alerts**: [preview] Shows how many duplicate alerts were suppressed. This information only appears if alert suppression is enabled for the rule.
-* **Related cases**: Shows cases to which the alert has been added. Click a case’s name to open its details.
-* **Alerts related by source event**: Shows alerts created by the same source event. This can help you find alerts with a shared origin and provide more context about the source event. Click the **Investigate in timeline** button to examine related alerts in Timeline.
-* **Alerts related by session**: Shows alerts generated during the same [session](../../../solutions/security/investigate/session-view.md). These alerts share the same session ID, which is a unique ID for tracking a given Linux session. To use this feature, you must enable the **Collect session data** setting in your {{elastic-defend}} integration policy. Refer to [Enable Session View data](../../../solutions/security/investigate/session-view.md#enable-session-view) for more information.
-* **Alerts related by ancestry**: Shows alerts that are related by process events on the same linear branch. Note that alerts generated from processes on child or related branches are not shown. To further examine alerts, click **Investigate in timeline**.
-
-
-### Prevalence [prevalence-overview]
-
-The Prevalence overview shows whether data from the alert was frequently observed on other host events from the last 30 days. Prevalence calculations use values from the alert’s highlighted fields. Highlighted field values that are observed on less than 10% of hosts in your environment are considered uncommon (not prevalent) and are listed individually in the Prevalence overview. Highlighted field values that are observed on more than 10% of hosts in your environment are considered common (prevalent) and are described as frequently observed in the Prevalence overview.
-
-
-#### Expanded prevalence view [expanded-prevalence-view]
-
-From the right panel, click **Prevalence** to open the expanded Prevalence view within the left panel. Examine the table to understand the alert’s relationship with other alerts, events, users, and hosts.
-
-::::{tip}
-Update the date time picker for the table to show data from a different time range.
-
-::::
-
-
-:::{image} ../../../images/serverless--detections-expanded-prevalence-view.png
-:alt: Expanded view of prevalence data
-:screenshot:
-:::
-
-The expanded Prevalence view provides the following details:
-
-* **Field**: Shows [highlighted fields](../../../solutions/security/detect-and-alert/view-detection-alert-details.md#investigation-section) for the alert and any custom highlighted fields that were added to the alert’s rule.
-* **Value**: Shows values for highlighted fields and any custom highlighted fields that were added to the alert’s rule. 
-* **Alert count**: Shows the total number of alert documents that have identical highlighted field values, including the alert you’re currently examining. For example, if the `host.name` field has an alert count of 5, that means there are five total alerts with the same `host.name` value. The Alert count column only retrieves documents that contain the [`event.kind:signal`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair.
-* **Document count**: Shows the total number of event documents that have identical field values. A dash (`——`) displays if there are no event documents that match the field value. The Document count column only retrieves documents that don’t contain the [`event.kind:signal`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair.
-* **Host prevalence**: Shows the percentage of unique hosts that have identical field values. Host prevalence for highlighted fields is calculated by taking the number of unique hosts with identical highlighted field values and dividing that number by the total number of unique hosts in your environment.
-* **User prevalence**: Shows the percentage of unique users that have identical highlighted field values. User prevalence for highlighted fields is calculated by taking the number of unique users with identical field values and dividing that number by the total number of unique users in your environment.
-
-
-## Response [response-overview]
-
-The **Response** section is located on the **Overview** tab in the right panel. It shows [response actions](../../../solutions/security/detect-and-alert/create-detection-rule.md) that were added to the rule associated with the alert. Click **Response** to display the response action’s results in the left panel.
-
-:::{image} ../../../images/serverless--detections-response-action-rp.png
-:alt: Response section of the Overview tab
-:screenshot:
-:::
-
-
-## Notes [expanded-notes-view]
-
-The **Notes** tab (located in the left panel) shows all notes attached to the alert, in addition to the user who created them and when they were created. When you add a new note, the alert’s summary also updates and shows how many notes are attached to the alert.
-
-::::{tip}
-Go to the **Notes** [page](../../../solutions/security/investigate/notes.md#manage-notes) to find notes that were added to other alerts.
-
-::::
-
-
-:::{image} ../../../images/serverless--detections-notes-tab-lp.png
-:alt: Notes tab in the left panel
-:::
diff --git a/raw-migrated-files/docs-content/serverless/security-visualize-alerts.md b/raw-migrated-files/docs-content/serverless/security-visualize-alerts.md
deleted file mode 100644
index a6759a388f..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-visualize-alerts.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-navigation_title: "Visualize alerts"
----
-
-# Visualize detection alerts [security-visualize-alerts]
-
-
-Visualize and group detection alerts by specific parameters in the visualization section of the Alerts page.
-
-:::{image} ../../../images/serverless--detections-alert-page-visualizations.png
-:alt: Alerts page with visualizations section highlighted
-:screenshot:
-:::
-
-Use the left buttons to select a view type (**Summary**, **Trend**, **Counts**, or **Treemap**), and use the right menus to select the ECS fields to use for grouping:
-
-* **Top alerts by** or **Group by**: Primary field for grouping alerts. 
-* **Group by top** (if available): Secondary field for further subdividing grouped alerts.
-
-For example, you can group first by rule name (`Group by: kibana.alert.rule.name`), then by host name (`Group by top: host.name`) to visualize which detection rules generated alerts, and which hosts triggered each of those rules. For groupings with a lot of unique values, the top 1,000 results are displayed.
-
-::::{note}
-Some view types don’t have the **Group by top** option. You can also leave **Group by top** blank to group by only the primary field in **Group by**.
-
-::::
-
-
-To reset a view to default settings, hover over it and click the options menu (![More actions](../../../images/serverless-boxesHorizontal.svg "")) that appears, then select **Reset group by fields**.
-
-::::{tip}
-The options menu also lets you inspect the visualization’s queries. For the trend and counts views, you can add the visualization to a new or existing case, or open it in Lens.
-
-::::
-
-
-Click the collapse icon (![Markdown](../../../images/serverless-arrowDown.svg "")) to minimize the visualization section and display a summary of key information instead.
-
-:::{image} ../../../images/serverless--detections-alert-page-viz-collapsed.png
-:alt: Alerts page with visualizations section collapsed
-:screenshot:
-:::
-
-
-## Summary [security-visualize-alerts-summary]
-
-On the Alerts page, the summary visualization displays by default and shows how alerts are distributed across these indicators:
-
-* **Severity levels**: How many alerts are in each severity level.
-* **Alerts by name**: How many alerts each detection rule created.
-* **Top alerts by**: Percentage of alerts with a specified field value: `host.name` (default), `user.name`, `source.ip`, or `destination.ip`.
-
-You can hover and click on elements within the summary — such as severity levels, rule names, and host names — to add filters with those values to the Alerts page.
-
-:::{image} ../../../images/serverless--detections-alerts-viz-summary.png
-:alt: Summary visualization for alerts
-:screenshot:
-:::
-
-
-## Trend [security-visualize-alerts-trend]
-
-The trend view shows the occurrence of alerts over time. By default, it groups alerts by detection rule name (`kibana.alert.rule.name`).
-
-::::{note}
-The **Group by top** menu is unavailable for the trend view.
-
-::::
-
-
-:::{image} ../../../images/serverless--detections-alerts-viz-trend.png
-:alt: Trend visualization for alerts
-:screenshot:
-:::
-
-
-## Counts [security-visualize-alerts-counts]
-
-The counts view shows the count of alerts in each group. By default, it groups alerts first by detection rule name (`kibana.alert.rule.name`), then by host name (`host.name`).
-
-:::{image} ../../../images/serverless--detections-alerts-viz-counts.png
-:alt: Counts visualization for alerts
-:screenshot:
-:::
-
-
-## Treemap [security-visualize-alerts-treemap]
-
-The treemap view shows the distribution of alerts as nested, proportionally sized tiles. This view can help you quickly pinpoint the most prevalent and critical alerts. 
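Conceptually, the counts and treemap groupings correspond to nested terms aggregations over the alert indices. The following is a minimal sketch of the same rule-by-host breakdown, assuming the default-space alerts index; the bucket sizes are illustrative.

```console
# Sketch: count alerts by rule name, then by host name, mirroring the
# default counts/treemap grouping. Index name and sizes are assumptions.
GET .alerts-security.alerts-default/_search
{
  "size": 0,
  "aggs": {
    "by_rule": {
      "terms": { "field": "kibana.alert.rule.name", "size": 10 },
      "aggs": {
        "by_host": { "terms": { "field": "host.name", "size": 5 } }
      }
    }
  }
}
```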
-
-:::{image} ../../../images/serverless--detections-alerts-viz-treemap.png
-:alt: Treemap visualization for alerts
-:screenshot:
-:::
-
-Larger tiles represent more frequent alerts, and each tile’s color is based on the alerts' risk score:
-
-* **Green**: Low risk (`0` - `46`)
-* **Yellow**: Medium risk (`47` - `72`)
-* **Orange**: High risk (`73` - `98`)
-* **Red**: Critical risk (`99` - `100`)
-
-By default, the treemap groups alerts first by detection rule name (`kibana.alert.rule.name`), then by host name (`host.name`). This shows which rules generated the most alerts, and which hosts were responsible.
-
-::::{note}
-Depending on the number of alerts, some tiles and text might be very small. Hover over the treemap to display information in a tooltip.
-
-::::
-
-
-You can click on the treemap to narrow down the alerts displayed in both the treemap and the alerts table below. Click the label above a group to display the alerts in that group, or click an individual tile to display the alerts related to that tile. This adds filters under the KQL search bar, which you can edit or remove to further customize the view.
-
-:::{image} ../../../images/serverless--detections-treemap-click.gif
-:alt: Animation of clicking the treemap
-:screenshot:
-:::
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md
index fd450af95e..bd2e70731e 100644
--- a/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md
+++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-cluster.md
@@ -42,5 +42,5 @@ See [Configure security for the {{stack}}](../../../deploy-manage/security/secur
Keeping a system secure takes vigilance. By using {{stack-security-features}} to maintain an audit trail, you can easily see who is accessing your cluster and what they’re doing. You can configure the audit level, which accounts for the type of events that are logged. These events include failed authentication attempts, user access denied, node connection denied, and more. By analyzing access patterns and failed attempts to access your cluster, you can gain insights into attempted attacks and data breaches. Keeping an auditable log of the activity in your cluster can also help diagnose operational issues.

-See [Enable audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md).
+See [Enable audit logging](../../../deploy-manage/security/logging-configuration/enabling-audit-logs.md).
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md
index b175b11e12..df451f0362 100644
--- a/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md
+++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/security-files.md
@@ -6,7 +6,7 @@ The {{es}} {{security-features}} use the following files:
* `ES_PATH_CONF/elasticsearch-users` defines the users and their hashed passwords for the `file` realm. See [File-based user authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md).
* `ES_PATH_CONF/elasticsearch-users_roles` defines the user roles assignment for the `file` realm. See [File-based user authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md).
* `ES_PATH_CONF/role_mapping.yml` defines the role assignments for a Distinguished Name (DN) to a role. 
This allows for LDAP and Active Directory groups and users and PKI users to be mapped to roles. See [Mapping users and groups to roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md). -* `ES_PATH_CONF/log4j2.properties` contains audit information. See [Logfile audit output](../../../deploy-manage/monitor/logging-configuration/logfile-audit-output.md). +* `ES_PATH_CONF/log4j2.properties` contains audit information. See [Logfile audit output](../../../deploy-manage/security/logging-configuration/logfile-audit-output.md). ::::{important} :name: security-files-location diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 383d1c2ee1..2de3048a01 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -33,12 +33,6 @@ toc: - file: cloud/cloud-enterprise/ece-create-deployment.md - file: cloud/cloud-enterprise/ece-delete-deployment.md - file: cloud/cloud-enterprise/ece-find.md - - file: cloud/cloud-enterprise/ece-getting-started-node-js.md - - file: cloud/cloud-enterprise/ece-getting-started-python.md - - file: cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md - - file: cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md - - file: cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md - - file: cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md - file: cloud/cloud-enterprise/ece-ingest-guides.md - file: cloud/cloud-enterprise/ece-integrations-server-api-example.md - file: cloud/cloud-enterprise/ece-manage-apm-settings.md @@ -150,10 +144,6 @@ toc: - file: docs-content/serverless/index.md children: - file: docs-content/serverless/intro.md - - file: docs-content/serverless/ai-assistant-knowledge-base.md - - file: docs-content/serverless/attack-discovery.md - - file: docs-content/serverless/connect-to-byo-llm.md - - file: docs-content/serverless/detections-logsdb-index-mode-impact.md - file: docs-content/serverless/elasticsearch-differences.md - file: docs-content/serverless/elasticsearch-explore-your-data.md - file: docs-content/serverless/elasticsearch-http-apis.md @@ -171,33 +161,8 @@ toc: - file: docs-content/serverless/project-setting-data.md - file: docs-content/serverless/project-settings-alerts.md - file: docs-content/serverless/project-settings-content.md - - file: docs-content/serverless/security-about-rules.md - - file: docs-content/serverless/security-ai-assistant-esql-queries.md - - file: docs-content/serverless/security-ai-assistant.md - - file: docs-content/serverless/security-ai-for-security.md - - file: docs-content/serverless/security-ai-use-cases.md - - file: docs-content/serverless/security-ai-usecase-incident-reporting.md - - file: docs-content/serverless/security-alert-suppression.md - - file: docs-content/serverless/security-alerts-manage.md - file: docs-content/serverless/security-automatic-import.md - - file: docs-content/serverless/security-building-block-rules.md - - file: docs-content/serverless/security-connect-to-azure-openai.md - - file: docs-content/serverless/security-connect-to-bedrock.md - - file: docs-content/serverless/security-connect-to-google-vertex.md - - file: docs-content/serverless/security-connect-to-openai.md - file: docs-content/serverless/security-detection-engine-overview.md - - file: docs-content/serverless/security-detections-requirements.md - - file: docs-content/serverless/security-prebuilt-rules-management.md - - file: 
docs-content/serverless/security-query-alert-indices.md - - file: docs-content/serverless/security-reduce-notifications-alerts.md - - file: docs-content/serverless/security-rules-coverage.md - - file: docs-content/serverless/security-rules-create.md - - file: docs-content/serverless/security-rules-ui-management.md - - file: docs-content/serverless/security-signals-to-cases.md - - file: docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md - - file: docs-content/serverless/security-tune-detection-signals.md - - file: docs-content/serverless/security-view-alert-details.md - - file: docs-content/serverless/security-visualize-alerts.md - file: docs-content/serverless/security-vuln-management-faq.md - file: docs-content/serverless/spaces.md - file: docs-content/serverless/what-is-observability-serverless.md @@ -290,7 +255,6 @@ toc: - file: stack-docs/elastic-stack/index.md children: - file: stack-docs/elastic-stack/air-gapped-install.md - - file: stack-docs/elastic-stack/install-stack-demo-secure.md - file: stack-docs/elastic-stack/installing-stack-demo-self.md - file: stack-docs/elastic-stack/overview.md - file: stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md diff --git a/redirects.yml b/redirects.yml index 9f65b82505..8fd77cc2c4 100644 --- a/redirects.yml +++ b/redirects.yml @@ -18,7 +18,12 @@ redirects: anchors: 'spaces-control-feature-visibility': 'deploy-manage/deploy/cloud-enterprise/deploy-large-installation-cloud.md': '!deploy-manage/deploy/cloud-enterprise/deploy-large-installation.md' - + ## audit logging movement to security section + 'deploy-manage/monitor/logging-configuration/configuring-audit-logs.md': 'deploy-manage/security/logging-configuration/configuring-audit-logs.md' + 'deploy-manage/monitor/logging-configuration/enabling-audit-logs.md': 'deploy-manage/security/logging-configuration/enabling-audit-logs.md' + 'deploy-manage/monitor/logging-configuration/logfile-audit-events-ignore-policies.md': 'deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md' + 'deploy-manage/monitor/logging-configuration/auditing-search-queries.md': 'deploy-manage/security/logging-configuration/auditing-search-queries.md' + 'deploy-manage/monitor/logging-configuration/logfile-audit-output.md': 'deploy-manage/security/logging-configuration/logfile-audit-output.md' ## explore-analyze 'explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md': '!deploy-manage/autoscaling/trained-model-autoscaling.md' diff --git a/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md b/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md index fbf67c32bb..6383bd5202 100644 --- a/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md +++ b/reference/data-analysis/machine-learning/ootb-ml-jobs-siem.md @@ -7,7 +7,6 @@ mapped_pages: These {{anomaly-jobs}} automatically detect file system and network anomalies on your hosts. They appear in the **Anomaly Detection** interface of the {{security-app}} in {{kib}} when you have data that matches their configuration. For more information, refer to [Anomaly detection with machine learning](/solutions/security/advanced-entity-analytics/anomaly-detection.md). - ## Security: Authentication [security-authentication] Detect anomalous activity in your ECS-compatible authentication logs. @@ -26,7 +25,6 @@ By default, when you create these job in the {{security-app}}, it uses a {{data- | auth_rare_user | Looks for an unusual user name in the authentication logs. 
An unusual user name is one way of detecting credentialed access by means of a new or dormant user account. A normally inactive user account (for example, one belonging to a user who has left the organization) that suddenly becomes active may indicate credentialed access using a compromised account password. Threat actors will sometimes also create new users as a means of persisting in a compromised web application. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_user.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_user.json) | | suspicious_login_activity | Detects an unusually high number of authentication attempts. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/suspicious_login_activity.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_suspicious_login_activity.json) | - ## Security: CloudTrail [security-cloudtrail-jobs] Detect suspicious activity recorded in your CloudTrail logs. @@ -41,21 +39,19 @@ In the {{ml-app}} app, these configurations are available only when data exists | rare_method_for_a_country | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a geolocation (country) that is unusual. This can be the result of compromised credentials or keys. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_country.json) | | rare_method_for_a_username | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a user context that does not normally call the method. This can be the result of compromised credentials or keys as someone uses a valid account to persist, move laterally, or exfiltrate data. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_username.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_username.json) | - ## Security: Host [security-host-jobs] Anomaly detection jobs for host-based threat hunting and detection. In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. -To access the host traffic anomalies dashboard in Kibana, go to: `Security -> Dashboards -> Host Traffic Anomalies`. +To access the host traffic anomalies dashboard in Kibana, install the `Host Traffic Anomalies` integration by navigating to `Management -> Integrations`. Follow the instructions on the integration's `Overview` page to complete the installation. 
Once the dashboard is successfully installed and configured, you can find it under `Security -> Dashboards -> Host Traffic Anomalies`. | Name | Description | Job (JSON) | Datafeed | | --- | --- | --- | --- | | high_count_events_for_a_host_name | Looks for a sudden spike in host based traffic. This can be due to a range of security issues, such as a compromised system, DDoS attacks, malware infections, privilege escalation, or data exfiltration. | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/high_count_events_for_a_host_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_high_count_events_for_a_host_name.json) | | low_count_events_for_a_host_name | Looks for a sudden drop in host based traffic. This can be due to a range of security issues, such as a compromised system, a failed service, or a network misconfiguration. | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/low_count_events_for_a_host_name.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_low_count_events_for_a_host_name.json) | - ## Security: Linux [security-linux-jobs] Anomaly detection jobs for Linux host-based threat hunting and detection. @@ -79,7 +75,6 @@ In the {{ml-app}} app, these configurations are available only when data exists | v3_linux_system_user_discovery | Looks for commands related to system user or owner discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system owner or user discovery to identify currently active or primary users of a system. This may be a precursor to additional discovery, credential dumping, or privilege elevation activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_user_discovery.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_user_discovery.json) | | v3_rare_process_by_host_linux | Looks for processes that are unusual to a particular Linux host. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_rare_process_by_host_linux.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_rare_process_by_host_linux.json) | - ## Security: Network [security-network-jobs] Detect anomalous network activity in your ECS-compatible network logs. @@ -95,7 +90,6 @@ By default, when you create these jobs in the {{security-app}}, it uses a {{data | high_count_network_events | Looks for an unusually large spike in network traffic. Such a burst of traffic, if not caused by a surge in business activity, can be due to suspicious or malicious activity. Large-scale data exfiltration may produce a burst of network traffic; this could also be due to unusually large amounts of reconnaissance or enumeration traffic. 
Denial-of-service attacks or traffic floods may also produce such a surge in traffic. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_network_events.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_network_events.json) | | rare_destination_country | Looks for an unusual destination country name in the network logs. This can be due to initial access, persistence, command-and-control, or exfiltration activity. For example, when a user clicks on a link in a phishing email or opens a malicious document, a request may be sent to download and run a payload from a server in a country which does not normally appear in network traffic or business work-flows. Malware instances and persistence mechanisms may communicate with command-and-control (C2) infrastructure in their country of origin, which may be an unusual destination country for the source network. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/rare_destination_country.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_rare_destination_country.json) | - ## Security: {{packetbeat}} [security-packetbeat-jobs] Detect suspicious network activity in {{packetbeat}} data. @@ -110,7 +104,6 @@ In the {{ml-app}} app, these configurations are available only when data exists | packetbeat_rare_urls | Looks for unusual web browsing URL activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_urls.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_urls.json) | | packetbeat_rare_user_agent | Looks for unusual HTTP user agent activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_user_agent.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_user_agent.json) | - ## Security: Windows [security-windows-jobs] Anomaly detection jobs for Windows host-based threat hunting and detection. @@ -134,23 +127,22 @@ If there are additional requirements such as installing the Windows System Monit | v3_windows_rare_user_runas_event | Unusual user context switches can be due to privilege escalation. | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_runas_event.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_runas_event.json) | | v3_windows_rare_user_type10_remote_login | Unusual RDP (remote desktop protocol) user logins can indicate account takeover or credentialed access. 
| [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_type10_remote_login.json) | [code](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_type10_remote_login.json) | - ## Security: Elastic Integrations [security-integrations-jobs] [Elastic Integrations](kibana://reference/index.md) are a streamlined way to add Elastic assets to your environment, such as data ingestion, {{transforms}}, and in this case, {{ml}} capabilities for Security. The following Integrations use {{ml}} to analyze patterns of user and entity behavior, and help detect and alert when there is related suspicious activity in your environment. -* [Data Exfiltration Detection](integration-docs://reference/ded.md) -* [Domain Generation Algorithm Detection](integration-docs://reference/dga.md) -* [Lateral Movement Detection](integration-docs://reference/lmd.md) -* [Living off the Land Attack Detection](integration-docs://reference/problemchild.md) +* [Data Exfiltration Detection](integration-docs://reference/ded/index.md) +* [Domain Generation Algorithm Detection](integration-docs://reference/dga/index.md) +* [Lateral Movement Detection](integration-docs://reference/lmd/index.md) +* [Living off the Land Attack Detection](integration-docs://reference/problemchild/index.md) **Domain Generation Algorithm (DGA) Detection** {{ml-cap}} solution package to detect domain generation algorithm (DGA) activity in your network data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. -To download, refer to the [documentation](integration-docs://reference/dga.md). +To download, refer to the [documentation](integration-docs://reference/dga/index.md). | Name | Description | | --- | --- | @@ -162,7 +154,7 @@ The job configurations and datafeeds can be found [here](https://github.com/elas {{ml-cap}} solution package to detect Living off the Land (LotL) attacks in your environment. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. (Also known as ProblemChild). -To download, refer to the [documentation](integration-docs://reference/problemchild.md). +To download, refer to the [documentation](integration-docs://reference/problemchild/index.md). | Name | Description | | --- | --- | @@ -179,7 +171,7 @@ The job configurations and datafeeds can be found [here](https://github.com/elas {{ml-cap}} package to detect data exfiltration in your network and file data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. -To download, refer to the [documentation](integration-docs://reference/ded.md). +To download, refer to the [documentation](integration-docs://reference/ded/index.md). | Name | Description | | --- | --- | @@ -197,7 +189,7 @@ The job configurations and datafeeds can be found [here](https://github.com/elas {{ml-cap}} package to detect lateral movement based on file transfer activity and Windows RDP events. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. -To download, refer to the [documentation](integration-docs://reference/lmd.md). +To download, refer to the [documentation](integration-docs://reference/lmd/index.md). 
| Name | Description | | --- | --- | @@ -214,4 +206,3 @@ To download, refer to the [documentation](integration-docs://reference/lmd.md). | lmd_high_mean_rdp_process_args | Detects unusually high number of process arguments in an RDP session. | The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/lmd/kibana/ml_module/lmd-ml.json). - diff --git a/reference/data-analysis/observability/index.md b/reference/data-analysis/observability/index.md index 5f4d4483e9..e27fc8f1ab 100644 --- a/reference/data-analysis/observability/index.md +++ b/reference/data-analysis/observability/index.md @@ -1,6 +1,7 @@ --- mapped_pages: - https://www.elastic.co/guide/en/observability/current/metrics-reference.html + - https://www.elastic.co/guide/en/serverless/current/observability-metrics-reference.html --- # Metrics reference [metrics-reference] diff --git a/reference/data-analysis/observability/metrics-reference-serverless.md b/reference/data-analysis/observability/metrics-reference-serverless.md deleted file mode 100644 index 70b5380366..0000000000 --- a/reference/data-analysis/observability/metrics-reference-serverless.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/serverless/current/observability-metrics-reference.html ---- - -# Metrics reference [observability-metrics-reference] - -Learn about the key metrics displayed in the Infrastructure UI and how they are calculated. - -* [Host metrics](/reference/data-analysis/observability/observability-host-metrics-serverless.md) -* [Kubernetes pod metrics](/reference/data-analysis/observability/observability-kubernetes-pod-metrics-serverless.md) -* [Container metrics](/reference/data-analysis/observability/observability-container-metrics-serverless.md) -* [AWS metrics](/reference/data-analysis/observability/observability-aws-metrics-serverless.md) - - - - - diff --git a/reference/index.md b/reference/index.md index 9ef2cb6601..7cddeb96dc 100644 --- a/reference/index.md +++ b/reference/index.md @@ -13,5 +13,5 @@ Explore the reference documentation for Elastic APIs. | {{kib}} | * [{{kib}}](https://www.elastic.co/docs/api/doc/kibana)
* [{{kib}} Serverless](https://www.elastic.co/docs/api/doc/serverless)
* [{{fleet}}](/reference/ingestion-tools/fleet/fleet-api-docs.md)
* [{{observability}} Serverless SLOs](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-slo)
* [{{elastic-sec}}](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-ai-assistant-api)
* [{{elastic-sec}} Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-security-ai-assistant-api)
| | {{ls}} | * [Monitoring {{ls}}](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html)
| | APM | * [APM](/solutions/observability/apps/apm-server-api.md)
* [APM Serverless](https://www.elastic.co/docs/api/doc/serverless/group/endpoint-apm-agent-configuration)
* [Observability intake Serverless](https://www.elastic.co/docs/api/doc/observability-serverless)
| -| {{ecloud}} | * [{{ech}}](https://www.elastic.co/docs/api/doc/cloud)
* [{{ecloud}} Serverless](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless)
* [{{ece}}](https://www.elastic.co/docs/api/doc/cloud-enterprise)
* [{{eck}}](asciidocalypse://docs/cloud-on-k8s/docs/reference/k8s-api-reference.md)
| +| {{ecloud}} | * [{{ech}}](https://www.elastic.co/docs/api/doc/cloud)
* [{{ecloud}} Serverless](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless)
* [{{ece}}](https://www.elastic.co/docs/api/doc/cloud-enterprise)
* [{{eck}}](cloud-on-k8s://reference/api-docs.md)
| diff --git a/reference/ingestion-tools/fleet/data-streams-scenario1.md index 9018035f8b..2e503d9fac 100644 --- a/reference/ingestion-tools/fleet/data-streams-scenario1.md +++ b/reference/ingestion-tools/fleet/data-streams-scenario1.md @@ -73,7 +73,7 @@ To confirm that the index template is using the `logs@custom` component template 2. Search for `logs@` and select the `logs@custom` component template. 3. The **Summary** shows the list of all data streams that use the component template, and the **Settings** view shows your newly configured ILM policy. -New ILM policies only take effect when new indices are created, so you either must wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of each data stream using the {{ref}}/indices-rollover-index.html[{{es}} rollover API. +New ILM policies only take effect when new indices are created, so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of each data stream using the {{es}} [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover). For example: diff --git a/reference/ingestion-tools/fleet/data-streams-scenario2.md index c7841f4e63..eeb63eb170 100644 --- a/reference/ingestion-tools/fleet/data-streams-scenario2.md +++ b/reference/ingestion-tools/fleet/data-streams-scenario2.md @@ -69,7 +69,7 @@ To confirm that the index template is using the `logs@custom` component template 2. Search for `system` and select the `logs-system.auth@custom` component template. 3. The **Summary** shows the list of all data streams that use the component template, and the **Settings** view shows your newly configured ILM policy. -New ILM policies only take effect when new indices are created, so you either must wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of the data stream using the {{ref}}/indices-rollover-index.html[{{es}} rollover API: +New ILM policies only take effect when new indices are created, so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of the data stream using the {{es}} [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover): ```bash POST /logs-system.auth/_rollover/ diff --git a/reference/ingestion-tools/fleet/enrollment-handling-containerized-agent.md b/reference/ingestion-tools/fleet/enrollment-handling-containerized-agent.md new file mode 100644 index 0000000000..c6996057cd --- /dev/null +++ b/reference/ingestion-tools/fleet/enrollment-handling-containerized-agent.md @@ -0,0 +1,15 @@ +# Enrollment handling for containerized agents + +For {{fleet}}-managed {{agents}} that run in a containerized environment (including Docker, Kubernetes, and others), enrollment handling is managed as follows: + +* **Enrollment Verification:** {{agent}} checks the stored enrollment conditions within its container environment and re-enrolls only when necessary. +* **Unenrollment Handling:** If an {{agent}} is unenrolled via the {{fleet}} UI but still references a valid enrollment token provided through environment variables, it will automatically re-enroll on the next container restart.
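+For illustration, here is a minimal sketch of how an enrollment token is typically supplied through environment variables when starting a containerized agent (the image tag, {{fleet-server}} URL, and token are placeholders, not values from this documentation):
+
+```sh
+# Because FLEET_ENROLLMENT_TOKEN stays in the container's environment,
+# restarting this container re-enrolls the agent if the token is still valid,
+# even after an unenrollment through the {{fleet}} UI.
+docker run --rm \
+  --env FLEET_ENROLL=1 \
+  --env FLEET_URL=<fleet-server-url> \
+  --env FLEET_ENROLLMENT_TOKEN=<enrollment-token> \
+  docker.elastic.co/elastic-agent/elastic-agent:<version>
+```
+
+To prevent unintended re-enrollment, revoke the enrollment token (or remove it from the container configuration) in addition to unenrolling the agent in the {{fleet}} UI.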
+ +In versions earlier than 8.18, an unenrolled agent remains unenrolled and does not re-enroll, even if a valid enrollment token is still available. + + + + + + + diff --git a/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md index be3919e9c4..73046b4c56 100644 --- a/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md +++ b/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md @@ -23,6 +23,10 @@ To learn how to run {{agent}}s in a containerized environment, see: * [Using a custom ingest pipeline with the {{k8s}} Integration](/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md) * [Run {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md) — for {{eck}} users +::::{note} +Enrollment handling for {{agent}} in a containerized environment has some special nuances. +For details, refer to [Enrollment handling for containerized agents](./enrollment-handling-containerized-agent.md). +:::: diff --git a/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md index b73f9d4262..0eb0f11927 100644 --- a/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md +++ b/reference/ingestion-tools/fleet/migrate-auditbeat-to-agent.md @@ -32,7 +32,7 @@ The following table describes the integrations you can use instead of {{auditbea | [System.package](asciidocalypse://docs/beats/docs/reference/auditbeat/auditbeat-dataset-system-package.md) dataset | [System Audit](asciidocalypse://docs/integration-docs/docs/reference/system_audit.md) integration | This integration is a direct replacement of the System Package dataset. Starting in {{stack}} 8.7, you can port rules and configuration settings to this integration. This integration currently schedules collection of information such as:

* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)
* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)
* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)
| | [Osquery](asciidocalypse://docs/integration-docs/docs/reference/osquery.md) or [Osquery Manager](asciidocalypse://docs/integration-docs/docs/reference/osquery_manager.md) integration | Schedule collection of information like:

* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)
* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)
* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)
* [apps](https://www.osquery.io/schema/5.1.0/#apps) (MacOS)
* [programs](https://www.osquery.io/schema/5.1.0/#programs) (Windows)
* [npm_packages](https://www.osquery.io/schema/5.1.0/#npm_packages)
* [atom_packages](https://www.osquery.io/schema/5.1.0/#atom_packages)
* [chocolatey_packages](https://www.osquery.io/schema/5.1.0/#chocolatey_packages)
* [portage_packages](https://www.osquery.io/schema/5.1.0/#portage_packages)
* [python_packages](https://www.osquery.io/schema/5.1.0/#python_packages)
| | [System.process](asciidocalypse://docs/beats/docs/reference/auditbeat/auditbeat-dataset-system-process.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because out of the box it reports events for every process in [ECS](asciidocalypse://docs/integration-docs/docs/reference/index.md) format and has excellent integration in [Kibana](/get-started/the-stack.md). | -| [Custom Windows event log](asciidocalypse://docs/integration-docs/docs/reference/winlog.md) and{{integrations-docs}}/windows#sysmonoperational[Sysmon] integrations | Provide process data. | +| [Custom Windows event log](asciidocalypse://docs/integration-docs/docs/reference/winlog.md) and [Sysmon](asciidocalypse://docs/integration-docs/docs/reference/sysmon_linux.md) integrations | Provide process data. | | [Osquery](asciidocalypse://docs/integration-docs/docs/reference/osquery.md) or [Osquery Manager](asciidocalypse://docs/integration-docs/docs/reference/osquery_manager.md) integration | Collect data from the [process](https://www.osquery.io/schema/5.1.0/#process) table on some OSes without polling. | | [System.socket](asciidocalypse://docs/beats/docs/reference/auditbeat/auditbeat-dataset-system-socket.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because it supports monitoring network connections on Linux, Windows, and MacOS. Includes process and user metadata. Currently does not do flow accounting (byte and packet counts) or domain name enrichment (but does collect DNS queries separately). | | [Osquery](asciidocalypse://docs/integration-docs/docs/reference/osquery.md) or [Osquery Manager](asciidocalypse://docs/integration-docs/docs/reference/osquery_manager.md) integration | Monitor socket events via the [socket_events](https://www.osquery.io/schema/5.1.0/#socket_events) table for Linux and MacOS. | diff --git a/reference/ingestion-tools/fleet/unenroll-elastic-agent.md index af23b47d2e..2103cec1e7 100644 --- a/reference/ingestion-tools/fleet/unenroll-elastic-agent.md +++ b/reference/ingestion-tools/fleet/unenroll-elastic-agent.md @@ -20,4 +20,7 @@ Unenrolled agents will continue to run, but will not be able to send data. They If unenrollment hangs, select **Force unenroll** to invalidate all API keys related to the agent and change the status to `inactive` so that the agent no longer appears in {{fleet}}. :::: - +::::{note} +Enrollment handling for {{agent}} in a containerized environment has some special nuances. +For details, refer to [Enrollment handling for containerized agents](./enrollment-handling-containerized-agent.md). +:::: \ No newline at end of file diff --git a/reference/security/prebuilt-anomaly-detection-jobs.md index 07fec7faaf..63cc440bf0 100644 --- a/reference/security/prebuilt-anomaly-detection-jobs.md +++ b/reference/security/prebuilt-anomaly-detection-jobs.md @@ -141,16 +141,16 @@ If there are additional requirements such as installing the Windows System Monit The following Integrations use {{ml}} to analyze patterns of user and entity behavior, and help detect and alert when there is related suspicious activity in your environment. 
-* [Data Exfiltration Detection](integration-docs://reference/ded.md) -* [Domain Generation Algorithm Detection](integration-docs://reference/dga.md) -* [Lateral Movement Detection](integration-docs://reference/lmd.md) -* [Living off the Land Attack Detection](integration-docs://reference/problemchild.md) +* [Data Exfiltration Detection](integration-docs://reference/ded/index.md) +* [Domain Generation Algorithm Detection](integration-docs://reference/dga/index.md) +* [Lateral Movement Detection](integration-docs://reference/lmd/index.md) +* [Living off the Land Attack Detection](integration-docs://reference/problemchild/index.md) **Domain Generation Algorithm (DGA) Detection** {{ml-cap}} solution package to detect domain generation algorithm (DGA) activity in your network data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. -To download, refer to the [documentation](integration-docs://reference/dga.md). +To download, refer to the [documentation](integration-docs://reference/dga/index.md). | Name | Description | | --- | --- | @@ -162,7 +162,7 @@ The job configurations and datafeeds can be found [here](https://github.com/elas {{ml-cap}} solution package to detect Living off the Land (LotL) attacks in your environment. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. (Also known as ProblemChild). -To download, refer to the [documentation](integration-docs://reference/problemchild.md). +To download, refer to the [documentation](integration-docs://reference/problemchild/index.md). | Name | Description | | --- | --- | @@ -179,7 +179,7 @@ The job configurations and datafeeds can be found [here](https://github.com/elas {{ml-cap}} package to detect data exfiltration in your network and file data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. -To download, refer to the [documentation](integration-docs://reference/ded.md). +To download, refer to the [documentation](integration-docs://reference/ded/index.md). | Name | Description | | --- | --- | @@ -197,7 +197,7 @@ The job configurations and datafeeds can be found [here](https://github.com/elas {{ml-cap}} package to detect lateral movement based on file transfer activity and Windows RDP events. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. -To download, refer to the [documentation](integration-docs://reference/lmd.md). +To download, refer to the [documentation](integration-docs://reference/lmd/index.md). 
| Name | Description | | --- | --- | diff --git a/reference/toc.yml b/reference/toc.yml index 63067ec16f..e527651363 100644 --- a/reference/toc.yml +++ b/reference/toc.yml @@ -104,6 +104,7 @@ toc: - file: ingestion-tools/fleet/monitor-elastic-agent.md - file: ingestion-tools/fleet/agent-health-status.md - file: ingestion-tools/fleet/filter-agent-list-by-tags.md + - file: ingestion-tools/fleet/enrollment-handling-containerized-agent.md - file: ingestion-tools/fleet/agent-policy.md children: - file: ingestion-tools/fleet/create-policy-no-ui.md @@ -255,7 +256,6 @@ toc: - file: data-analysis/machine-learning/ml-sum-functions.md - file: data-analysis/machine-learning/ml-time-functions.md - file: data-analysis/observability/index.md - - file: data-analysis/observability/metrics-reference-serverless.md children: - file: data-analysis/observability/observability-host-metrics-serverless.md - file: data-analysis/observability/observability-container-metrics-serverless.md diff --git a/release-notes/breaking-changes/elastic-apm.md b/release-notes/breaking-changes/elastic-apm.md index ffb5c9bc88..5fd06672ad 100644 --- a/release-notes/breaking-changes/elastic-apm.md +++ b/release-notes/breaking-changes/elastic-apm.md @@ -17,12 +17,19 @@ To learn how to upgrade, check out . % **Action**
Steps for mitigating deprecation impact. % :::: -% ## 9.0.0 [elastic-apm-900-breaking-changes] -% **Release date:** March 25, 2025 +## 9.0.0 [elastic-apm-9-0-0-breaking-changes] +**Release date:** March 25, 2025 % ::::{dropdown} Title of breaking change % Description of the breaking change. % For more information, check [PR #](PR link). % **Impact**
Impact of the breaking change. % **Action**
Steps for mitigating deprecation impact. -% :::: \ No newline at end of file +% :::: + +::::{dropdown} Change server information endpoint "/" to only accept GET and HEAD requests +This change will surface any agent misconfiguration that causes data to be sent to `/` instead of the correct endpoint (for example, `/v1/traces` for OTLP/HTTP). +For more information, check [PR #15976](https://github.com/elastic/apm-server/pull/15976). +**Impact**
Requests to `/` using any method other than `GET` or `HEAD` will return HTTP 405 Method Not Allowed. +**Action**
Update any existing usage, for example, update `POST /` to `GET /`. +:::: diff --git a/solutions/observability.md index eff34fb37b..6a747a5487 100644 --- a/solutions/observability.md +++ b/solutions/observability.md @@ -3,8 +3,8 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/what-is-observability-serverless.html - https://www.elastic.co/guide/en/observability/current/index.html applies_to: - stack: ga - serverless: ga + stack: + serverless: --- # Observability diff --git a/solutions/observability/apps/apm-agent-central-configuration.md index df3b37fe4a..ffdaae2e53 100644 --- a/solutions/observability/apps/apm-agent-central-configuration.md +++ b/solutions/observability/apps/apm-agent-central-configuration.md @@ -59,7 +59,7 @@ Ruby agent : [Configuration reference](asciidocalypse://docs/apm-agent-ruby/docs/reference/configuration.md) Real User Monitoring (RUM) agent -: [Configuration reference](asciidocalypse://docs/apm-agent-rum-js/docs/reference/configuration.md) +: [Configuration reference](apm-agent-rum-js://reference/configuration.md) ## APM Server configuration [_apm_server_configuration] diff --git a/solutions/observability/apps/apm-server-binary.md index daf19e4ec3..a85b4b2c4b 100644 --- a/solutions/observability/apps/apm-server-binary.md +++ b/solutions/observability/apps/apm-server-binary.md @@ -749,8 +749,8 @@ const apm = initApm({ **Learn more in the agent reference** -* [Supported technologies](asciidocalypse://docs/apm-agent-rum-js/docs/reference/supported-technologies.md) -* [Advanced configuration](asciidocalypse://docs/apm-agent-rum-js/docs/reference/configuration.md) +* [Supported technologies](apm-agent-rum-js://reference/supported-technologies.md) +* [Advanced configuration](apm-agent-rum-js://reference/configuration.md) :::::: ::::::{tab-item} OpenTelemetry diff --git a/solutions/observability/apps/apm-server-information-api.md index 8e496fa15d..8987bdb800 100644 --- a/solutions/observability/apps/apm-server-information-api.md +++ b/solutions/observability/apps/apm-server-information-api.md @@ -16,11 +16,11 @@ This is the server information endpoint: http(s)://{hostname}:{port}/ ``` -Sending an `HTTP GET` or `HTTP POST` request to the server information endpoint will return an HTTP 200, indicating that the server is up. +Sending an `HTTP GET` request to the server information endpoint will return an HTTP 200, indicating that the server is up. To configure authenticated access to the APM server, the instructions at [APM API key](api-keys.md) or [APM Secret Token](secret-token.md) must be followed to configure the correct permissions for APM access. -If an [API keys](api-keys.md) or a [Secret token](secret-token.md) is passed along with the `HTTP GET` or `HTTP POST` request, in addition to an HTTP 200, the response payload will include some information about the APM server. +If an [API key](api-keys.md) or a [secret token](secret-token.md) is passed along with the `HTTP GET` request, in addition to an HTTP 200, the response payload will include some information about the APM server. 
### Example: GET, without credentials [apm-api-info-example-get-without-credentials] @@ -48,12 +48,12 @@ curl --verbose -X GET http://127.0.0.1:8200 ``` -### Example: POST, with secret token [apm-api-info-example-post-with-secret-token] +### Example: GET, with secret token [apm-api-info-example-get-with-secret-token] -Example APM Server information request with POST, with a [Secret token](secret-token.md): +Example APM Server information request with GET, with a [Secret token](secret-token.md): ```sh -curl -X POST http://127.0.0.1:8200/ \ +curl -X GET http://127.0.0.1:8200/ \ -H "Authorization: Bearer secret_token" { diff --git a/solutions/observability/apps/built-in-data-filters.md b/solutions/observability/apps/built-in-data-filters.md index cc83446eb7..474dff704c 100644 --- a/solutions/observability/apps/built-in-data-filters.md +++ b/solutions/observability/apps/built-in-data-filters.md @@ -71,7 +71,7 @@ This setting only prevents APM Server from capturing already ingested personal d ## Real user monitoring data [apm-filters-real-user-data] -Protecting user data is important. For that reason, individual RUM instrumentations can be disabled in the RUM agent with the [`disableInstrumentations`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/configuration.md#disable-instrumentations) configuration variable. Disabled instrumentations produce no spans or transactions. +Protecting user data is important. For that reason, individual RUM instrumentations can be disabled in the RUM agent with the [`disableInstrumentations`](apm-agent-rum-js://reference/configuration.md#disable-instrumentations) configuration variable. Disabled instrumentations produce no spans or transactions. | Disable | Configuration value | | --- | --- | diff --git a/solutions/observability/apps/configure-real-user-monitoring-rum.md b/solutions/observability/apps/configure-real-user-monitoring-rum.md index 2e1cacf2fe..7932500ab8 100644 --- a/solutions/observability/apps/configure-real-user-monitoring-rum.md +++ b/solutions/observability/apps/configure-real-user-monitoring-rum.md @@ -19,7 +19,7 @@ Most options in this section are supported by all APM Server deployment methods. :::: -The [Real User Monitoring (RUM) agent](asciidocalypse://docs/apm-agent-rum-js/docs/reference/index.md) captures user interactions with clients such as web browsers. These interactions are sent as events to the APM Server. Because the RUM agent runs on the client side, the connection between agent and server is unauthenticated. As a security precaution, RUM is therefore disabled by default. +The [Real User Monitoring (RUM) agent](apm-agent-rum-js://reference/index.md) captures user interactions with clients such as web browsers. These interactions are sent as events to the APM Server. Because the RUM agent runs on the client side, the connection between agent and server is unauthenticated. As a security precaution, RUM is therefore disabled by default. :::::::{tab-set} @@ -55,7 +55,7 @@ Configure and customize Fleet-managed APM settings directly in {{kib}}: :::::: ::::::: -In addition, if APM Server is deployed in an origin different than the page’s origin, you will need to configure [Cross-Origin Resource Sharing (CORS)](asciidocalypse://docs/apm-agent-rum-js/docs/reference/configuring-cors.md) in the Agent. +In addition, if APM Server is deployed in an origin different than the page’s origin, you will need to configure [Cross-Origin Resource Sharing (CORS)](apm-agent-rum-js://reference/configuring-cors.md) in the Agent. 
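+For orientation, a minimal sketch of what enabling RUM looks like for the APM Server binary in `apm-server.yml` (the origin value is a placeholder; `allow_origins` complements, rather than replaces, the agent-side CORS configuration linked above):
+
+```yaml
+apm-server:
+  rum:
+    # RUM is disabled by default because RUM traffic is unauthenticated.
+    enabled: true
+    # Limit cross-origin requests to the origins that serve your pages.
+    allow_origins: ["https://www.example.com"]
+```
+
+{{fleet}}-managed APM Server exposes the equivalent options in the APM integration policy in {{kib}}, as described in the tabs above.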
## Configuration reference [apm-enable-rum-support] diff --git a/solutions/observability/apps/filter-application-data.md b/solutions/observability/apps/filter-application-data.md index dc9b737331..f511509b74 100644 --- a/solutions/observability/apps/filter-application-data.md +++ b/solutions/observability/apps/filter-application-data.md @@ -42,4 +42,4 @@ Service environments are defined when configuring your APM agents. It’s vital * **PHP:** [`environment`](asciidocalypse://docs/apm-agent-php/docs/reference/ingestion-tools/apm-agent-php/configuration-reference.md#config-environment) * **Python:** [`environment`](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/configuration.md#config-environment) * **Ruby:** [`environment`](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/configuration.md#config-environment) -* **Real User Monitoring (Elastic Stack only):** [`environment`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/ingestion-tools/apm-agent-rum-js/configuration.md#environment) \ No newline at end of file +* **Real User Monitoring (Elastic Stack only):** [`environment`](apm-agent-rum-js://reference/configuration.md#environment) \ No newline at end of file diff --git a/solutions/observability/apps/fleet-managed-apm-server.md b/solutions/observability/apps/fleet-managed-apm-server.md index 116f54cfa5..e76a4be7ad 100644 --- a/solutions/observability/apps/fleet-managed-apm-server.md +++ b/solutions/observability/apps/fleet-managed-apm-server.md @@ -794,8 +794,8 @@ const apm = initApm({ **Learn more in the agent reference** -* [Supported technologies](asciidocalypse://docs/apm-agent-rum-js/docs/reference/supported-technologies.md) -* [Advanced configuration](asciidocalypse://docs/apm-agent-rum-js/docs/reference/configuration.md) +* [Supported technologies](apm-agent-rum-js://reference/supported-technologies.md) +* [Advanced configuration](apm-agent-rum-js://reference/configuration.md) :::::: ::::::{tab-item} OpenTelemetry diff --git a/solutions/observability/apps/metadata.md b/solutions/observability/apps/metadata.md index 5a77fe72d3..a95b0f1929 100644 --- a/solutions/observability/apps/metadata.md +++ b/solutions/observability/apps/metadata.md @@ -34,7 +34,7 @@ Avoid defining too many user-specified labels. 
Defining too many unique fields i * PHP: [`Transaction` `setLabel`](asciidocalypse://docs/apm-agent-php/docs/reference/public-api.md#api-transaction-interface-set-label) | [`Span` `setLabel`](asciidocalypse://docs/apm-agent-php/docs/reference/public-api.md#api-span-interface-set-label) * Python: [`elasticapm.label()`](asciidocalypse://docs/apm-agent-python/docs/reference/api-reference.md#api-label) * Ruby: [`set_label`](asciidocalypse://docs/apm-agent-ruby/docs/reference/api-reference.md#api-agent-set-label) -* Rum: [`addLabels`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/agent-api.md#apm-add-labels) +* Rum: [`addLabels`](apm-agent-rum-js://reference/agent-api.md#apm-add-labels) ## Custom context [apm-data-model-custom] @@ -64,7 +64,7 @@ Setting a circular object, a large object, or a non JSON serializable object can * PHP: *coming soon* * Python: [`set_custom_context`](asciidocalypse://docs/apm-agent-python/docs/reference/api-reference.md#api-set-custom-context) * Ruby: [`set_custom_context`](asciidocalypse://docs/apm-agent-ruby/docs/reference/api-reference.md#api-agent-set-custom-context) -* Rum: [`setCustomContext`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/agent-api.md#apm-set-custom-context) +* Rum: [`setCustomContext`](apm-agent-rum-js://reference/agent-api.md#apm-set-custom-context) ## User context [apm-data-model-user] @@ -87,5 +87,5 @@ User context adds **indexed** user information to transactions and errors. Index * PHP: *coming soon* * Python: [`set_user_context`](asciidocalypse://docs/apm-agent-python/docs/reference/api-reference.md#api-set-user-context) * Ruby: [`set_user`](asciidocalypse://docs/apm-agent-ruby/docs/reference/api-reference.md#api-agent-set-user) -* Rum: [`setUserContext`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/agent-api.md#apm-set-user-context) +* Rum: [`setUserContext`](apm-agent-rum-js://reference/agent-api.md#apm-set-user-context) diff --git a/solutions/observability/apps/tail-based-sampling.md b/solutions/observability/apps/tail-based-sampling.md index 6051a0f6e8..09d0d48915 100644 --- a/solutions/observability/apps/tail-based-sampling.md +++ b/solutions/observability/apps/tail-based-sampling.md @@ -5,7 +5,7 @@ applies_to: stack: all --- -# Tail-based sampling [apm-tail-based-samling-config] +# Tail-based sampling [apm-tail-based-sampling-config] ::::{note} ![supported deployment methods](../../../images/observability-binary-yes-fm-yes.svg "") diff --git a/solutions/observability/apps/traces.md b/solutions/observability/apps/traces.md index fa3108a40d..ae868e8ed2 100644 --- a/solutions/observability/apps/traces.md +++ b/solutions/observability/apps/traces.md @@ -429,4 +429,4 @@ end Some additional setup may be required to correlate requests correctly with the Real User Monitoring (RUM) agent. -See the [RUM distributed tracing guide](asciidocalypse://docs/apm-agent-rum-js/docs/reference/distributed-tracing.md) for information on enabling cross-origin requests, setting up server configuration, and working with dynamically-generated HTML. \ No newline at end of file +See the [RUM distributed tracing guide](apm-agent-rum-js://reference/distributed-tracing.md) for information on enabling cross-origin requests, setting up server configuration, and working with dynamically-generated HTML. 
\ No newline at end of file diff --git a/solutions/observability/cloud.md b/solutions/observability/cloud.md index 7c51b8a82f..ac88fb65b2 100644 --- a/solutions/observability/cloud.md +++ b/solutions/observability/cloud.md @@ -2,6 +2,8 @@ navigation_title: "Cloud" mapped_pages: - https://www.elastic.co/guide/en/observability/current/cloud-monitoring.html +applies_to: + stack: --- diff --git a/solutions/observability/incident-management.md b/solutions/observability/incident-management.md index 0b77224d5e..ed6ce0a010 100644 --- a/solutions/observability/incident-management.md +++ b/solutions/observability/incident-management.md @@ -2,6 +2,9 @@ mapped_urls: - https://www.elastic.co/guide/en/observability/current/incident-management.html - https://www.elastic.co/guide/en/serverless/current/incident-management.html +applies_to: + stack: + serverless: --- # Incident management [incident-management] diff --git a/solutions/observability/infra-and-hosts.md b/solutions/observability/infra-and-hosts.md index 45adedd303..86c92f5493 100644 --- a/solutions/observability/infra-and-hosts.md +++ b/solutions/observability/infra-and-hosts.md @@ -4,6 +4,9 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/infrastructure-and-host-monitoring-intro.html navigation_title: "Infrastructure and hosts" +applies_to: + stack: + serverless: --- # Infrastructure and host monitoring [infrastructure-and-host-monitoring-intro] diff --git a/solutions/observability/infra-and-hosts/analyze-compare-hosts.md b/solutions/observability/infra-and-hosts/analyze-compare-hosts.md index e3e7d8bfbc..007b58a212 100644 --- a/solutions/observability/infra-and-hosts/analyze-compare-hosts.md +++ b/solutions/observability/infra-and-hosts/analyze-compare-hosts.md @@ -2,6 +2,9 @@ mapped_urls: - https://www.elastic.co/guide/en/observability/current/analyze-hosts.html - https://www.elastic.co/guide/en/serverless/current/observability-analyze-hosts.html +applies_to: + stack: + serverless: --- # Analyze and compare hosts [observability-analyze-hosts] @@ -16,7 +19,9 @@ The **Hosts** page provides a metrics-driven view of your infrastructure backed * View historical data to rule out false alerts and identify root causes. * Filter and search the data to focus on the hosts you care about the most. -To open **Hosts**, find **Infrastructure** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +To access the **Hosts** page in: +- **Elastic Stack,** find **Infrastructure** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +- **Serverless,** go to **Infrastructure → Hosts** in your Elastic Observability Serverless project. 
:::{image} ../../../images/serverless-hosts.png :alt: Screenshot of the Hosts page diff --git a/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md b/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md index 0900dd6386..fe46dc0dd2 100644 --- a/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md +++ b/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md @@ -2,6 +2,9 @@ mapped_urls: - https://www.elastic.co/guide/en/observability/current/monitor-infrastructure-and-hosts.html - https://www.elastic.co/guide/en/serverless/current/observability-infrastructure-monitoring.html +applies_to: + stack: + serverless: --- # Analyze infrastructure and host metrics [observability-infrastructure-monitoring] diff --git a/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md b/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md index 3106a2ac6b..4fcee741ce 100644 --- a/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md +++ b/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md @@ -1,6 +1,8 @@ --- mapped_pages: - https://www.elastic.co/guide/en/observability/current/explore-metrics.html +applies_to: + stack: --- # Explore infrastructure metrics over time [explore-metrics] diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md index a506c5c2ca..b4b3a129de 100644 --- a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md +++ b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md @@ -1,6 +1,8 @@ --- mapped_pages: - https://www.elastic.co/guide/en/observability/current/monitor-kubernetes.html +applies_to: + stack: --- # Tutorial: Observe your Kubernetes deployments [monitor-kubernetes] diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md index 0e6c73eab8..96797707b6 100644 --- a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md +++ b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md @@ -1,6 +1,8 @@ --- mapped_pages: - https://www.elastic.co/guide/en/observability/current/monitor-nginx.html +applies_to: + stack: --- # Tutorial: Observe your nginx instances [monitor-nginx] diff --git a/solutions/observability/infra-and-hosts/universal-profiling.md b/solutions/observability/infra-and-hosts/universal-profiling.md index 3c52f87541..93315c3c67 100644 --- a/solutions/observability/infra-and-hosts/universal-profiling.md +++ b/solutions/observability/infra-and-hosts/universal-profiling.md @@ -1,6 +1,8 @@ --- mapped_pages: - https://www.elastic.co/guide/en/observability/current/universal-profiling.html +applies_to: + stack: --- # Universal Profiling [universal-profiling] diff --git a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md index 95727b6bdb..c2c15a42d0 100644 --- a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md +++ b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md @@ -2,13 +2,18 @@ mapped_urls: - 
https://www.elastic.co/guide/en/observability/current/view-infrastructure-metrics.html - https://www.elastic.co/guide/en/serverless/current/observability-view-infrastructure-metrics.html +applies_to: + stack: + serverless: --- # View infrastructure metrics by resource type [observability-view-infrastructure-metrics] The **Infrastructure Inventory** page provides a metrics-driven view of your entire infrastructure grouped by the resources you are monitoring. All monitored resources emitting a core set of infrastructure metrics are displayed to give you a quick view of the overall health of your infrastructure. -To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +To open the **Infrastructure inventory** page in: +- **Elastic Stack,** find **Infrastructure** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +- **Serverless,** go to **Infrastructure inventory** in your Observability Serverless project. :::{image} ../../../images/observability-metrics-app.png :alt: Infrastructure UI in {kib} diff --git a/solutions/security/advanced-entity-analytics/anomaly-detection.md b/solutions/security/advanced-entity-analytics/anomaly-detection.md index 0d7b6fd082..38442e295a 100644 --- a/solutions/security/advanced-entity-analytics/anomaly-detection.md +++ b/solutions/security/advanced-entity-analytics/anomaly-detection.md @@ -26,7 +26,7 @@ If you have the appropriate role, you can use the **ML job settings** interface You can also check the status of {{ml}} detection rules, and start or stop their associated {{ml}} jobs: -* On the **Rules** page, the **Last response** column displays the rule’s current [status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-status). An indicator icon (![Error icon from rules table](../../../images/security-rules-table-error-icon.png "")) also appears if a required {{ml}} job isn’t running. Click the icon to list the affected jobs, then click **Visit rule details page to investigate** to open the rule’s details page. +* On the **Rules** page, the **Last response** column displays the rule’s current [status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-status). An indicator icon (![Error icon from rules table](../../../images/security-rules-table-error-icon.png "title =20x20")) also appears if a required {{ml}} job isn’t running. Click the icon to list the affected jobs, then click **Visit rule details page to investigate** to open the rule’s details page. :::{image} ../../../images/security-rules-table-ml-job-error.png :alt: Rules table {{ml}} job error diff --git a/solutions/security/ai/ai-assistant-knowledge-base.md b/solutions/security/ai/ai-assistant-knowledge-base.md index c5936e7538..a9fe3a84ae 100644 --- a/solutions/security/ai/ai-assistant-knowledge-base.md +++ b/solutions/security/ai/ai-assistant-knowledge-base.md @@ -6,28 +6,16 @@ mapped_urls: # AI Assistant Knowledge Base -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/ai-assistant-knowledge-base.md -% - [ ] ./raw-migrated-files/docs-content/serverless/ai-assistant-knowledge-base.md - -% Internal links rely on the following IDs being on this page (e.g. 
as a heading ID, paragraph ID, etc): - -$$$enable-knowledge-base$$$ - -$$$knowledge-base-add-knowledge-index$$$ - AI Assistant’s Knowledge Base feature enables AI Assistant to recall specific documents and other specified information. This information, which can include everything from the location of your datacenters to the latest threat research, provides additional context that can improve the quality of AI Assistant’s responses to your queries. This topic describes how to enable and add information to Knowledge Base. ::::{note} -When you upgrade from {{elastic-sec}} version 8.15 to a newer version, information previously stored by AI Assistant will be lost. +{{stack}} users: when you upgrade from {{elastic-sec}} version 8.15 to a newer version, information previously stored by AI Assistant will be lost. :::: ::::{admonition} Requirements -* To use Knowledge Base, you need the `Elastic AI Assistant: All` privilege. To edit global Knowledge Base entries (information that will affect the AI Assistant experience for other users in the {{kib}} space), you need the `Allow Changes to Global Entries` privilege. +* To use Knowledge Base, you need the `Elastic AI Assistant: All` privilege. +* To edit global Knowledge Base entries (information that will affect the AI Assistant experience for other users in the {{kib}} space), you need the `Allow Changes to Global Entries` privilege. * You must [enable machine learning](/solutions/security/advanced-entity-analytics/machine-learning-job-rule-requirements.md) with a minimum ML node size of 4 GB. :::: @@ -114,20 +102,9 @@ Add an individual document to Knowledge Base when you want AI Assistant to remem 5. In the **Markdown text** field, enter the information you want AI Assistant to remember. 6. If it should be **Required knowledge**, select the option. Otherwise, leave it blank. Alternatively, you can simply send a message to AI Assistant that instructs it to "Remember" the information. For example, "Remember that I changed my password today, October 24, 2024", or "Remember we always use the Threat Hunting Timeline template when investigating potential threats". Entries created in this way are private to you. By default they are not required knowledge, but you can make them required by instructing AI Assistant to "Always remember", for example "Always remember to address me as madam", or "Always remember that our primary data center is located in Austin, Texas". -Refer to the following video for an example of adding a document to Knowledge Base from the settings menu. - -::::{admonition} - - -
-:::: +Refer to the following video for an example of adding a document to Knowledge Base from the settings menu (click to play video). + +[![Add knowledge document video](https://play.vidyard.com/rQsTujEfikpx3vv1vrbfde.jpg)](https://videos.elastic.co/watch/rQsTujEfikpx3vv1vrbfde?) @@ -154,20 +131,10 @@ Indices added to Knowledge Base must have at least one field mapped as [semantic :alt: Knowledge base's Edit index entry menu ::: -Refer to the following video for an example of adding an index to Knowledge Base. - -::::{admonition} - - -
-::::
+Refer to the following video for an example of adding an index to Knowledge Base (click to play video).
+
+
+[![Add knowledge index video](https://play.vidyard.com/Q5CjXMN4R2GYLGLUy5P177.jpg)](https://videos.elastic.co/watch/Q5CjXMN4R2GYLGLUy5P177?)


@@ -185,23 +152,22 @@ First, you’ll need to set up a web crawler to add the desired data to an index

 1. From the **Search** section of {{kib}}, find **Web crawlers** in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
 2. Click **New web crawler**.

-    1. Under **Index name**, name the index where the data from your new web crawler will be stored, for example `threat_intelligence_feed_1`. Click **Create index**.
-    2. Under **Domain URL**, enter the URL where the web crawler should collect data. Click **Validate Domain** to test it, then **Add domain**.
+    * Under **Index name**, name the index where the data from your new web crawler will be stored, for example `threat_intelligence_feed_1`. Click **Create index**.
+    * Under **Domain URL**, enter the URL where the web crawler should collect data. Click **Validate Domain** to test it, then **Add domain**.

 3. The previous step opens a page with the details of your new index. Go to its **Mappings** tab, then click **Add field**.

-    ::::{note}
-    Remember, each index added to Knowledge Base must have at least one semantic text field.
-    ::::
-
+    ::::{note}
+    Remember, each index added to Knowledge Base must have at least one semantic text field.
+    ::::

-    1. Under **Field type**, select `Semantic text`. Under **Select an inference endpoint***, select `elastic-security-ai-assistant-elser2`. Click ***Add field**, then **Save mapping**.
+    * Under **Field type**, select `Semantic text`. Under **Select an inference endpoint**, select `elastic-security-ai-assistant-elser2`. Click **Add field**, then **Save mapping**.

 4. Go to the **Scheduling** tab. Enable the **Enable recurring crawls with the following schedule** setting, and define your desired schedule.
-5. Go to the **Manage Domains** tab. Select the domain associated with your new web crawler, then go the its **Crawl rules** tab and click **Add crawl rule**. For more information, refer to [Web crawler content extraction rules](https://www.elastic.co/guide/en/enterprise-search/current/crawler-extraction-rules.html).
+5. Go to the **Manage Domains** tab. Select the domain associated with your new web crawler, then go to its **Crawl rules** tab and click **Add crawl rule**. For more information, refer to [Web crawler content extraction rules](https://www.elastic.co/guide/en/enterprise-search/current/crawler-extraction-rules.html).

-    1. Click **Add crawl rule** again. Under **Policy***, select `Disallow`. Under ***Rule***, select `Regex`. Under ***Path pattern**, enter `.*`. Click **Save**.
-    2. Under **Policy**, select `Allow`. Under **Rule***, select `Contains`. Under ***Path pattern**, enter your path pattern, for example `threat-intelligence`. Click **Save**. Make sure this rule appears below the rule created in the previous step on the list.
+    1. Click **Add crawl rule** again. Under **Policy**, select `Disallow`. Under **Rule**, select `Regex`. Under **Path pattern**, enter `.*`. Click **Save**.
+    2. Under **Policy**, select `Allow`. Under **Rule**, select `Contains`. Under **Path pattern**, enter your path pattern, for example `threat-intelligence`. Click **Save**. Make sure this rule appears below the rule created in the previous step on the list.
    3. Click **Crawl**, then **Crawl all domains on this index**. A success message appears. The crawl process will take longer for larger data sources. Once it finishes, your new web crawler’s index will contain documents provided by the crawler.

 6. 
Finally, follow the instructions to [add an index to Knowledge Base](/solutions/security/ai/ai-assistant-knowledge-base.md#knowledge-base-add-knowledge-index). Add the index that contains the data from your new web crawler (`threat_intelligence_feed_1` in this example). @@ -210,15 +176,4 @@ Your new threat intelligence data is now included in Knowledge Base and can info Refer to the following video for an example of creating a web crawler to ingest threat intelligence data and adding it to Knowledge Base. -::::{admonition} - - -
-:::: +[![Add knowledge via web crawler video](https://play.vidyard.com/eYo1e1ZRwT2mjfM7Yr9MuZ.jpg)](https://videos.elastic.co/watch/eYo1e1ZRwT2mjfM7Yr9MuZ?) \ No newline at end of file diff --git a/solutions/security/ai/ai-assistant.md b/solutions/security/ai/ai-assistant.md index be200bd104..1149a7fada 100644 --- a/solutions/security/ai/ai-assistant.md +++ b/solutions/security/ai/ai-assistant.md @@ -6,19 +6,6 @@ mapped_urls: # AI Assistant -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/security-assistant.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-ai-assistant.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$configure-ai-assistant$$$ - -$$$ai-assistant-anonymization$$$ - The Elastic AI Assistant utilizes generative AI to bolster your cybersecurity operations team. It allows users to interact with {{elastic-sec}} for tasks such as alert investigation, incident response, and query generation or conversion using natural language and much more. :::{image} ../../../images/security-assistant-basic-view.png @@ -32,11 +19,12 @@ The Elastic AI Assistant is designed to enhance your analysis with smart dialogu ::::{admonition} Requirements -* The Elastic AI Assistant and Generative AI connector are available in {{stack}} versions 8.8.1 and later. The Generative AI connector is renamed to OpenAI connector in 8.11.0. -* This feature requires an [Enterprise subscription](https://www.elastic.co/pricing). -* To use AI Assistant, you need at least the **Elastic AI Assistant : All** and **Actions and Connectors : Read** [privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md). -* To set up AI Assistant, you need the **Actions and Connectors : All** [privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md). -* You need a [generative AI connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md), which AI Assistant uses to generate responses. +* {{stack}} users: {{stack}} version 8.8.1 or later. Also note the Generative AI connector was renamed to OpenAI connector in 8.11.0. +* {{stack}} users: an [Enterprise subscription](https://www.elastic.co/pricing). +* {{serverless-short}} users: a [Security Analytics Complete subscription](/deploy-manage/deploy/elastic-cloud/project-settings.md). +* To use AI Assistant, the **Elastic AI Assistant : All** and **Actions and Connectors : Read** [privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md). +* To set up AI Assistant, the **Actions and Connectors : All** [privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md). +* A [generative AI connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md), which AI Assistant uses to generate responses. :::: @@ -81,7 +69,6 @@ You can also chat with AI Assistant from several particular pages in {{elastic-s * [Alert details](/solutions/security/detect-and-alert/view-detection-alert-details.md) or Event details flyout: Click **Chat** while viewing the details of an alert or event. * [Rules page](/solutions/security/detect-and-alert/manage-detection-rules.md): Use AI Assistant to help create or correct rule queries. 
* [Data Quality dashboard](/solutions/security/dashboards/data-quality-dashboard.md): Select the **Incompatible fields** tab, then click **Chat**. (This is only available for fields marked red, indicating they’re incompatible). -* [Timeline](/solutions/security/investigate/timeline.md): Select the **Security Assistant** tab. ::::{note} Each user’s chat history (up to the 99 most recent conversations) and custom Quick Prompts are automatically saved, so you can leave {{elastic-sec}} and return to a conversation later. Chat history appears to the left of the AI Assistant chat window, and on the **Conversations** tab of the **AI Assistant settings** menu. To access the settings menu, use the global search field to search for "AI Assistant for Security". @@ -110,10 +97,10 @@ Use these features to adjust and act on your conversations with AI Assistant: * Quick Prompt availability varies based on context—for example, the **Alert summarization** Quick Prompt appears when you open AI Assistant while viewing an alert. To customize existing Quick Prompts and create new ones, click **Add Quick Prompt**. * In an active conversation, you can use the inline actions that appear on messages to incorporate AI Assistant’s responses into your workflows: - * **Add note to timeline** (![Add note icon](../../../images/security-icon-add-note.png "")): Add the selected text to your currently active Timeline as a note. - * **Add to existing case** (![Add to case icon](../../../images/security-icon-add-to-case.png "")): Add a comment to an existing case using the selected text. - * **Copy to clipboard** (![Copy to clipboard icon](../../../images/security-icon-copy.png "")): Copy the text to clipboard to paste elsewhere. Also helpful for resubmitting a previous prompt. - * **Add to timeline** (![Add to timeline icon](../../../images/security-icon-add-to-timeline.png "")): Add a filter or query to Timeline using the text. This button appears for particular queries in AI Assistant’s responses. + * **Add note to timeline** (![Add note icon](../../../images/security-icon-add-note.png "title =20x20")): Add the selected text to your currently active Timeline as a note. + * **Add to existing case** (![Add to case icon](../../../images/security-icon-add-to-case.png "title =20x20")): Add a comment to an existing case using the selected text. + * **Copy to clipboard** (![Copy to clipboard icon](../../../images/security-icon-copy.png "title =20x20")): Copy the text to clipboard to paste elsewhere. Also helpful for resubmitting a previous prompt. + * **Add to timeline** (![Add to timeline icon](../../../images/security-icon-add-to-timeline.png "title =20x20")): Add a filter or query to Timeline using the text. This button appears for particular queries in AI Assistant’s responses. Be sure to specify which language you’d like AI Assistant to use when writing a query. For example: "Can you generate an Event Query Language query to find four failed logins followed by a successful login?" @@ -130,7 +117,7 @@ The **Security AI settings** page allows you to configure AI Assistant. To acces It has the following tabs: -* **Conversations:** When you open AI Assistant from certain pages, such as **Timeline*** or ***Alerts**, it defaults to the relevant conversation type. For each conversation type, choose the default System Prompt, the default connector, and the default model (if applicable). The **Streaming** setting controls whether AI Assistant’s responses appear word-by-word (streamed), or as a complete block of text. 
Streaming is currently only available for OpenAI models.
+* **Conversations:** When you open AI Assistant from certain pages, such as **Alerts**, it defaults to the relevant conversation type. For each conversation type, choose the default System Prompt, the default connector, and the default model (if applicable). The **Streaming** setting controls whether AI Assistant’s responses appear word-by-word (streamed), or as a complete block of text. Streaming is currently only available for OpenAI models.
 * **Connectors:** Manage all LLM connectors.
 * **System Prompts:** Edit existing System Prompts or create new ones. To create a new System Prompt, type a unique name in the **Name** field, then press **enter**. Under **Prompt**, enter or update the System Prompt’s text. Under **Contexts**, select where the System Prompt should appear.
 * **Quick Prompts:** Modify existing Quick Prompts or create new ones. To create a new Quick Prompt, type a unique name in the **Name** field, then press **enter**. Under **Prompt**, enter or update the Quick Prompt’s text.
@@ -149,7 +136,7 @@ To modify Anonymization settings, you need the **Elastic AI Assistant: All** pri

-The **Anonymization** tab of the Security AI settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed*** toggled on are included in events provided to AI Assistant. ***Allowed*** fields with ***Anonymized** set to **Yes** are included, but with their values obfuscated.
+The **Anonymization** tab of the Security AI settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated.

 ::::{note}
-You can access anonymization settings directly from the **Attack Discovery** page by clicking the settings (![Settings icon](../../../images/security-icon-settings.png "")) button next to the model selection dropdown menu.
+You can access anonymization settings directly from the **Attack Discovery** page by clicking the settings (![Settings icon](../../../images/security-icon-settings.png "title =20x20")) button next to the model selection dropdown menu.
 ::::

diff --git a/solutions/security/ai/attack-discovery.md b/solutions/security/ai/attack-discovery.md
index 59d92420db..6be4a4ef8c 100644
--- a/solutions/security/ai/attack-discovery.md
+++ b/solutions/security/ai/attack-discovery.md
@@ -6,21 +6,6 @@ mapped_urls:

 # Attack Discovery

-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/attack-discovery.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/attack-discovery.md
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$attack-discovery-generate-discoveries$$$
-
-$$$attack-discovery-what-info$$$
-
-$$$attack-discovery-workflows$$$
-
 ::::{warning}
 This feature is in technical preview. It may change in the future, and you should exercise caution when using it in production environments. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of GA features.
 ::::
@@ -28,20 +13,9 @@ This feature is in technical preview. It may change in the future, and you shoul
 Attack Discovery leverages large language models (LLMs) to analyze alerts in your environment and identify threats. Each "discovery" represents a potential attack and describes relationships among multiple alerts to tell you which users and hosts are involved, how alerts correspond to the MITRE ATT&CK matrix, and which threat actor might be responsible. 
This can help make the most of each security analyst’s time, fight alert fatigue, and reduce your mean time to respond. -For a demo, refer to the following video. - -::::{admonition} - - -
-:::: +For a demo, refer to the following video (click to view). + +[![Attack Discovery video](https://play.vidyard.com/eT92arEbpRddmSM4JeyzdX.jpg)](https://videos.elastic.co/watch/eT92arEbpRddmSM4JeyzdX?) This page describes: @@ -55,22 +29,19 @@ This page describes: The `Attack Discovery: All` privilege allows you to use Attack Discovery. -:::{image} ../../../images/security-attck-disc-rbac.png -:alt: Attack Discovery's RBAC settings -::: - +![attack-discovery-rbac](../../../images/security-attck-disc-rbac.png) ## Generate discoveries [attack-discovery-generate-discoveries] -When you access Attack Discovery for the first time, you’ll need to select an LLM connector before you can analyze alerts. Attack Discovery uses the same LLM connectors as [*AI Assistant*](/solutions/security/ai/ai-assistant.md). To get started: +When you access Attack Discovery for the first time, you’ll need to select an LLM connector before you can analyze alerts. Attack Discovery uses the same LLM connectors as [AI Assistant](/solutions/security/ai/ai-assistant.md). To get started: 1. Click the **Attack Discovery** page from {{elastic-sec}}'s navigation menu. 2. Select an existing connector from the dropdown menu, or add a new one. - ::::{admonition} Recommended models - While Attack Discovery is compatible with many different models, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md) to see which models perform best. + :::{admonition} Recommended models + While Attack Discovery is compatible with many different models, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md) to see which models perform best. - :::: + ::: :::{image} ../../../images/security-attck-disc-select-model-empty.png @@ -82,12 +53,13 @@ When you access Attack Discovery for the first time, you’ll need to select an It may take from a few seconds up to several minutes to generate discoveries, depending on the number of alerts and the model you selected. ::::{important} -By default, Attack Discovery analyzes up to 100 alerts within this timeframe, but you can expand this up to 500 by clicking the settings icon (![Settings icon](../../../images/security-icon-settings.png "")) next to the model selection menu and adjusting the **Alerts** slider. Note that sending more alerts than your chosen LLM can handle may result in an error. +By default, Attack Discovery analyzes up to 100 alerts within this timeframe, but you can expand this up to 500 by clicking the settings icon (![Settings icon](../../../images/security-icon-settings.png "title=20px")) next to the model selection menu and adjusting the **Alerts** slider. Note that sending more alerts than your chosen LLM can handle may result in an error. :::: :::{image} ../../../images/security-attck-disc-alerts-number-menu.png :alt: Attack Discovery's settings menu +:width: 600px ::: ::::{important} @@ -116,7 +88,7 @@ Each discovery includes the following information describing the potential threa There are several ways you can incorporate discoveries into your {{elastic-sec}} workflows: * Click an entity’s name to open the user or host details flyout and view more details that may be relevant to your investigation. 
-* Hover over an entity’s name to either add the entity to Timeline (![Add to timeline icon](../../../images/security-icon-add-to-timeline.png "")) or copy its field name and value to the clipboard (![Copy to clipboard icon](../../../images/security-icon-copy.png "")).
+* Hover over an entity’s name to either add the entity to Timeline (![Add to timeline icon](../../../images/security-icon-add-to-timeline.png "title =20x20")) or copy its field name and value to the clipboard (![Copy to clipboard icon](../../../images/security-icon-copy.png "title =20x20")).
 * Click **Take action**, then select **Add to new case** or **Add to existing case** to add a discovery to a [case](/solutions/security/investigate/cases.md). This makes it easy to share the information with your team and other stakeholders.
 * Click **Investigate in timeline** to explore the discovery in [Timeline](/solutions/security/investigate/timeline.md).
 * Click **View in AI Assistant** to attach the discovery to a conversation with AI Assistant. You can then ask follow-up questions about the discovery or associated alerts.

diff --git a/solutions/security/ai/connect-to-amazon-bedrock.md b/solutions/security/ai/connect-to-amazon-bedrock.md
index 543f3c0888..cf5c7c8a6a 100644
--- a/solutions/security/ai/connect-to-amazon-bedrock.md
+++ b/solutions/security/ai/connect-to-amazon-bedrock.md
@@ -6,13 +6,6 @@ mapped_urls:

 # Connect to Amazon Bedrock

-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-connect-to-bedrock.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md
-
 This page provides step-by-step instructions for setting up an Amazon Bedrock connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure AWS, then configure the connector in {{kib}}.

 ::::{note}
@@ -55,20 +48,9 @@ First, configure an IAM policy with the necessary permissions:

 4. Click **Next**. Name your policy.

-The following video demonstrates these steps.
+The following video demonstrates these steps (click to watch).

-::::{admonition}
-
-
-
-::::
+[![bedrock-iam-policy-video](https://play.vidyard.com/ek6NpHaj6u4keZyEjPWXcT.jpg)](https://videos.elastic.co/watch/ek6NpHaj6u4keZyEjPWXcT?)


@@ -82,20 +64,9 @@ Next, assign the policy you just created to a new user:

 4. In the **Permissions policies** field, search for the policy you created earlier, select it, and click **Next**.
 5. Review the configuration then click **Create user**.

-The following video demonstrates these steps.
+The following video demonstrates these steps (click to watch).

-::::{admonition}
-
-
-
-::::
+[![bedrock-iam-video](https://play.vidyard.com/5BQb2P818SMddRo6gA79hd.jpg)](https://videos.elastic.co/watch/5BQb2P818SMddRo6gA79hd?)


@@ -112,19 +83,9 @@ Create the access keys that will authenticate your Elastic connector:

-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+[![bedrock-accesskey-video](https://play.vidyard.com/8oXgP1fbaQCqjWUgncF9at.jpg)](https://videos.elastic.co/watch/8oXgP1fbaQCqjWUgncF9at?)


 ### Enable model access [_enable_model_access]

@@ -137,21 +98,9 @@ Make sure the supported Amazon Bedrock LLMs are enabled:

-4. Check the boxes for **Claude** and/or **Claude Instant**, depending which model or models you plan to use.
+4. Check the boxes for **Claude** and/or **Claude Instant**, depending on which model or models you plan to use.
 5. Click **Save changes**.

-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-:::: +The following video demonstrates these steps (click to watch). +[![bedrock-model-video](https://play.vidyard.com/Z7zpHq4N9uvUxegBUMbXDj.jpg)](https://videos.elastic.co/watch/Z7zpHq4N9uvUxegBUMbXDj?) ## Configure the Amazon Bedrock connector [_configure_the_amazon_bedrock_connector] @@ -159,14 +108,14 @@ The following video demonstrates these steps. Finally, configure the connector in {{kib}}: 1. Log in to {{kib}}. -2. . Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **Amazon Bedrock**. +2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **Amazon Bedrock**. 3. Name your connector. 4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`. 5. (Optional) Add one of the following strings if you want to use a model other than the default: - 1. For Haiku: `anthropic.claude-3-haiku-20240307-v1:0` - 2. For Sonnet: `anthropic.claude-3-sonnet-20240229-v1:0` - 3. For Opus: `anthropic.claude-3-opus-20240229-v1:0` + * For Haiku: `anthropic.claude-3-haiku-20240307-v1:0` + * For Sonnet: `anthropic.claude-3-sonnet-20240229-v1:0` + * For Opus: `anthropic.claude-3-opus-20240229-v1:0` 6. Enter the **Access Key** and **Secret** that you generated earlier, then click **Save**. @@ -178,17 +127,6 @@ If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/l :::: -The following video demonstrates these steps. +The following video demonstrates these steps (click to watch). -::::{admonition} - - -
-:::: +[![bedrock-configure-model-video](https://play.vidyard.com/QJe4RcTJbp6S6m9CkReEXs.jpg)](https://videos.elastic.co/watch/QJe4RcTJbp6S6m9CkReEXs?) \ No newline at end of file diff --git a/solutions/security/ai/connect-to-azure-openai.md b/solutions/security/ai/connect-to-azure-openai.md index 892ec43960..e68a6f6c91 100644 --- a/solutions/security/ai/connect-to-azure-openai.md +++ b/solutions/security/ai/connect-to-azure-openai.md @@ -6,13 +6,6 @@ mapped_urls: # Connect to Azure OpenAI -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/assistant-connect-to-azure-openai.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md - This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure Azure, then configure the connector in {{kib}}. @@ -31,22 +24,9 @@ First, set up an Azure OpenAI deployment: 6. Configure your network settings, click **Next**, optionally add tags, then click **Next**. 7. Review your deployment settings, then click **Create**. When complete, select **Go to resource**. -The following video demonstrates these steps. - -::::{admonition} - - -
-:::: - +The following video demonstrates these steps (click to watch). +[![azure-openai-configure-deployment-video](https://play.vidyard.com/7NEa5VkVJ67RHWBuK8qMXA.jpg)](https://videos.elastic.co/watch/7NEa5VkVJ67RHWBuK8qMXA?) ### Configure keys [_configure_keys] @@ -55,21 +35,9 @@ Next, create access keys for the deployment: 1. From within your Azure OpenAI deployment, select **Click here to manage keys**. 2. Store your keys in a secure location. -The following video demonstrates these steps. - -::::{admonition} - - -
-:::: +The following video demonstrates these steps (click to watch). +[![azure-openai-configure-keys-video](https://play.vidyard.com/cQXw96XjaeF4RiB3V4EyTT.jpg)](https://videos.elastic.co/watch/cQXw96XjaeF4RiB3V4EyTT?) ### Configure a model [_configure_a_model] @@ -81,29 +49,18 @@ Now, set up the Azure OpenAI model: 3. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`. 4. Set the model version to "Auto-update to default". - ::::{important} - The models available to you depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md). - :::: + :::{important} + The models available to you depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md). + ::: 5. Under **Deployment type**, select **Standard**. 6. Name your deployment. 7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits. 8. Click **Create**. -The following video demonstrates these steps. +The following video demonstrates these steps (click to watch). -::::{admonition} - - -
-:::: +[![azure-openai-configure-model-video](https://play.vidyard.com/PdadFyV1p1DbWRyCr95whT.jpg)](https://videos.elastic.co/watch/PdadFyV1p1DbWRyCr95whT?) @@ -125,17 +82,6 @@ Finally, configure the connector in {{kib}}: 6. Under **API key**, enter one of your API keys. 7. Click **Save & test**, then click **Run**. -Your LLM connector is now configured. The following video demonstrates these steps. - -::::{admonition} - - -
-::::
+Your LLM connector is now configured. The following video demonstrates these steps (click to watch).
+
+[![azure-openai-configure-connector-video](https://play.vidyard.com/RQZVcnXHokC3RcV6ZB2pmF.jpg)](https://videos.elastic.co/watch/RQZVcnXHokC3RcV6ZB2pmF?)
\ No newline at end of file
diff --git a/solutions/security/ai/connect-to-openai.md b/solutions/security/ai/connect-to-openai.md
index b8acafb1ba..2434b76867 100644
--- a/solutions/security/ai/connect-to-openai.md
+++ b/solutions/security/ai/connect-to-openai.md
@@ -6,13 +6,6 @@ mapped_urls:

 # Connect to OpenAI

-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-connect-to-openai.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-openai.md
-
 This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI’s large language models (LLMs) within {{kib}}.

 You’ll first need to create an OpenAI API key, then configure the connector in {{kib}}.

@@ -38,21 +31,9 @@ To generate an API key:

 3. Name your key, select an OpenAI project, and set the desired permissions.
 4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen.

-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-:::: +The following video demonstrates these steps (click to watch). +[![openai-apikey-video](https://play.vidyard.com/vbD7fGBGgyxK4TRbipeacL.jpg)](https://videos.elastic.co/watch/vbD7fGBGgyxK4TRbipeacL?) ## Configure the OpenAI connector [_configure_the_openai_connector] @@ -68,17 +49,6 @@ To integrate with {{kib}}: 7. Paste the API key that you created into the corresponding field. 8. Click **Save**. -The following video demonstrates these steps. - -::::{admonition} - - -
-:::: +The following video demonstrates these steps (click to watch). + +[![openai-configure-connector-video](https://play.vidyard.com/BGaQ73KBJCzeqWoxXkQvy9.jpg)](https://videos.elastic.co/watch/BGaQ73KBJCzeqWoxXkQvy9?) \ No newline at end of file diff --git a/solutions/security/ai/connect-to-own-local-llm.md b/solutions/security/ai/connect-to-own-local-llm.md index a6196d835c..5f235b29ad 100644 --- a/solutions/security/ai/connect-to-own-local-llm.md +++ b/solutions/security/ai/connect-to-own-local-llm.md @@ -6,13 +6,6 @@ mapped_urls: # Connect to your own local LLM -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/connect-to-byo-llm.md -% - [ ] ./raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md - This page provides instructions for setting up a connector to a large language model (LLM) of your choice using LM Studio. This allows you to use your chosen model within {{elastic-sec}}. You’ll first need to set up a reverse proxy to communicate with {{elastic-sec}}, then set up LM Studio on a server, and finally configure the connector in your Elastic deployment. [Learn more about the benefits of using a local LLM](https://www.elastic.co/blog/ai-assistant-locally-hosted-models). This example uses a single server hosted in GCP to run the following components: @@ -46,7 +39,7 @@ You need to set up a reverse proxy to enable communication between LM Studio and The following is an example Nginx configuration file: -```txt +```nginx server { listen 80; listen [::]:80; @@ -176,21 +169,9 @@ If your model uses NVIDIA drivers, you can check the GPU performance with the `s ### Option 2: load a model using the GUI [_option_2_load_a_model_using_the_gui] -Refer to the following video to see how to load a model using LM Studio’s GUI. You can change the **port** setting, which is referenced in the Nginx configuration file. Note that the **GPU offload** was set to **Max**. - -::::{admonition} - - -
-::::
+Refer to the following video to see how to load a model using LM Studio’s GUI (click to watch). You can change the **port** setting, which is referenced in the Nginx configuration file. Note that the **GPU offload** was set to **Max**.
+[![byollm-load-model-gui-video](https://play.vidyard.com/c4AxH8d9tWMnwNp5J6bcfX.jpg)](https://videos.elastic.co/watch/c4AxH8d9tWMnwNp5J6bcfX?)


 ## (Optional) Collect logs using Elastic’s Custom Logs integration [_optional_collect_logs_using_elastics_custom_logs_integration]

diff --git a/solutions/security/ai/generate-customize-learn-about-esorql-queries.md b/solutions/security/ai/generate-customize-learn-about-esorql-queries.md
index d291b563d4..cb106d4139 100644
--- a/solutions/security/ai/generate-customize-learn-about-esorql-queries.md
+++ b/solutions/security/ai/generate-customize-learn-about-esorql-queries.md
@@ -6,13 +6,6 @@ mapped_urls:

 # Generate, customize, and learn about ES|QL queries

-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/esql-queries-assistant.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md
-
 Elastic AI Assistant can help you learn about and leverage the Elasticsearch Query Language ({{esql}}) in many ways, including:

 * **Education and training**: AI Assistant can serve as a powerful {{esql}} learning tool. Ask it for examples, explanations of complex queries, and best practices.

diff --git a/solutions/security/ai/identify-investigate-document-threats.md b/solutions/security/ai/identify-investigate-document-threats.md
index 065faf29c4..e0ce7d931b 100644
--- a/solutions/security/ai/identify-investigate-document-threats.md
+++ b/solutions/security/ai/identify-investigate-document-threats.md
@@ -6,23 +6,6 @@ mapped_urls:

 # Identify, investigate, and document threats

-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/attack-discovery-ai-assistant-incident-reporting.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$use-case-incident-reporting-create-a-case-using-ai-assistant$$$
-
-$$$use-case-incident-reporting-translate$$$
-
-$$$use-case-incident-reporting-use-ai-assistant-to-analyze-a-threat$$$
-
-$$$use-case-incident-reporting-use-attack-discovery-to-identify-threats$$$
-
 Together, [Elastic AI Assistant](/solutions/security/ai/ai-assistant.md) and [Attack discovery](/solutions/security/ai/attack-discovery.md) can help you identify and mitigate threats, investigate incidents, and generate incident reports in various languages so you can monitor and protect your environment.

 In this guide, you’ll learn how to:

@@ -67,7 +50,7 @@ At any point in a conversation with AI Assistant, you can add data, narrative su

 ## Generate reports [use-case-incident-reporting-create-a-case-using-ai-assistant]

-From the AI Assistant dialog window, click **Add to case** (![Add to case icon](../../../images/security-icon-add-to-case.png "")) next to a message to add the information in that message to a [case](/solutions/security/investigate/cases.md). Cases help centralize relevant details in one place for easy sharing with stakeholders. 
+From the AI Assistant dialog window, click **Add to case** (![Add to case icon](../../../images/security-icon-add-to-case.png "title =20x20")) next to a message to add the information in that message to a [case](/solutions/security/investigate/cases.md). Cases help centralize relevant details in one place for easy sharing with stakeholders. If you add a message that contains a discovery to a case, AI Assistant automatically adds the attack summary and all associated alerts to the case. You can also add AI Assistant messages that contain remediation steps and relevant data to the case. diff --git a/solutions/security/ai/large-language-model-performance-matrix.md b/solutions/security/ai/large-language-model-performance-matrix.md index 1ab04f1470..9e95b9860c 100644 --- a/solutions/security/ai/large-language-model-performance-matrix.md +++ b/solutions/security/ai/large-language-model-performance-matrix.md @@ -6,13 +6,6 @@ mapped_urls: # Large language model performance matrix -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/llm-performance-matrix.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-llm-performance-matrix.md - This page describes the performance of various large language models (LLMs) for different use cases in {{elastic-sec}}, based on our internal testing. To learn more about these use cases, refer to [Attack discovery](/solutions/security/ai/attack-discovery.md) or [AI Assistant](/solutions/security/ai/ai-assistant.md). ::::{note} diff --git a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md index d66d9fbb60..1805aa9eaa 100644 --- a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md +++ b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md @@ -6,13 +6,6 @@ mapped_urls: # Set up connectors for large language models (LLM) -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/llm-connector-guides.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-llm-connector-guides.md - This section contains instructions for setting up connectors for LLMs so you can use [Elastic AI Assistant](/solutions/security/ai/ai-assistant.md) and [Attack discovery](/solutions/security/ai/attack-discovery.md). Setup guides are available for the following LLM providers: diff --git a/solutions/security/ai/triage-alerts.md b/solutions/security/ai/triage-alerts.md index c1e532aa8f..84cd757f6e 100644 --- a/solutions/security/ai/triage-alerts.md +++ b/solutions/security/ai/triage-alerts.md @@ -6,13 +6,6 @@ mapped_urls: # Triage alerts -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/assistant-triage.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md - Elastic AI Assistant can help you enhance and streamline your alert triage workflows by assessing multiple recent alerts in your environment, and helping you interpret an alert and its context. When you view an alert in {{elastic-sec}}, details such as related documents, hosts, and users appear alongside a synopsis of the events that triggered the alert. 
This data provides a starting point for understanding a potential threat. AI Assistant can answer questions about this data and offer insights and actionable recommendations to remediate the issue. @@ -35,20 +28,20 @@ Once you have chosen an alert to investigate: 2. In the alert details flyout, click **Chat** to launch the AI assistant. Data related to the selected alert is automatically added to the prompt. 3. Click **Alert (from summary)** to view which alert fields will be shared with AI Assistant. - ::::{note} - For more information about selecting which fields to send, and to learn about anonymizing your data, refer to [AI Assistant](/solutions/security/ai/ai-assistant.md). - :::: + :::{note} + For more information about selecting which fields to send, and to learn about anonymizing your data, refer to [AI Assistant](/solutions/security/ai/ai-assistant.md). + ::: 4. (Optional) Click a quick prompt to use it as a starting point for your query, for example **Alert summarization**. Improve the quality of AI Assistant’s response by customizing the prompt and adding detail. - Once you’ve submitted your query, AI Assistant will process the information and provide a detailed response. Depending on your prompt and the alert data that you included, its response can include a thorough analysis of the alert that highlights key elements such as the nature of the potential threat, potential impact, and suggested response actions. + Once you’ve submitted your query, AI Assistant will process the information and provide a detailed response. Depending on your prompt and the alert data that you included, its response can include a thorough analysis of the alert that highlights key elements such as the nature of the potential threat, potential impact, and suggested response actions. 5. (Optional) Ask AI Assistant follow-up questions, provide additional information for further analysis, and request clarification. The response is not a static report. ## Generate triage reports [ai-triage-reportgen] -Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once the AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as: +Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as: * “Generate a detailed report about this incident including timeline, impact analysis, and response actions. 
Also, include a diagram of events.”
 * “Generate a summary of this incident/alert and include diagrams of events.”

diff --git a/solutions/security/ai/use-cases.md b/solutions/security/ai/use-cases.md
index 8b21da9838..6198771029 100644
--- a/solutions/security/ai/use-cases.md
+++ b/solutions/security/ai/use-cases.md
@@ -6,13 +6,6 @@ mapped_urls:

 # Use cases

-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-use-cases.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-ai-use-cases.md
-
 The guides in this section describe use cases for AI Assistant and Attack discovery. Refer to them for examples of each tool’s individual capabilities and of what they can do together.

 * [Triage alerts](/solutions/security/ai/triage-alerts.md)

diff --git a/solutions/security/cloud/get-started-with-cspm-for-aws.md b/solutions/security/cloud/get-started-with-cspm-for-aws.md
index 3df4e07dc3..807066e470 100644
--- a/solutions/security/cloud/get-started-with-cspm-for-aws.md
+++ b/solutions/security/cloud/get-started-with-cspm-for-aws.md
@@ -293,7 +293,7 @@ You can use the AWS CLI to generate temporary credentials. For example, you coul
 sts get-session-token --serial-number arn:aws:iam::1234:mfa/your-email@example.com --duration-seconds 129600 --token-code 123456
 ```

-The output from this command includes the following fields, which you should provide when configuring the KSPM integration:
+The output from this command includes the following fields, which you should provide when configuring the CSPM integration:

 * `Access key ID`: The first part of the access key.
 * `Secret Access Key`: The second part of the access key.

diff --git a/solutions/security/dashboards/detection-rule-monitoring-dashboard.md b/solutions/security/dashboards/detection-rule-monitoring-dashboard.md
index fa2b65117c..f228a45de7 100644
--- a/solutions/security/dashboards/detection-rule-monitoring-dashboard.md
+++ b/solutions/security/dashboards/detection-rule-monitoring-dashboard.md
@@ -43,7 +43,7 @@ The following visualizations are included:

 ## Visualization panel actions [rule-visualization-actions]

-Open a panel’s options menu (![Options menu](../../../images/security-three-dot-icon.png "")) customize the panel or use its data for further analysis and investigation:
+Open a panel’s options menu (![Options menu](../../../images/security-three-dot-icon.png "title =20x20")) to customize the panel or use its data for further analysis and investigation:

 * **Edit panel settings**: Customize the panel’s display settings. Options vary by visualization type.
 * **Inspect**: Examine the panel’s underlying data and queries.

diff --git a/solutions/security/dashboards/entity-analytics-dashboard.md b/solutions/security/dashboards/entity-analytics-dashboard.md
index 667e6d5b0b..267eec6662 100644
--- a/solutions/security/dashboards/entity-analytics-dashboard.md
+++ b/solutions/security/dashboards/entity-analytics-dashboard.md
@@ -54,7 +54,7 @@ Interact with the table to filter data, view more details, and take action:

 * Click a user name link to open the user details flyout.
 * Hover over a user name link to display inline actions: **Add to timeline**, which adds the selected value to Timeline, and **Copy to Clipboard**, which copies the user name value for you to paste later.
 * Click **View all** in the upper-right to display all user risk information on the Users page. 
-* Click the number link in the **Alerts** column to view the alerts on the Alerts page. Hover over the number and select **Investigate in timeline** (![Investigate in timeline icon](../../../images/security-timeline-button-osquery.png "")) to launch Timeline with a query that includes the associated user name value. +* Click the number link in the **Alerts** column to view the alerts on the Alerts page. Hover over the number and select **Investigate in timeline** (![Investigate in timeline icon](../../../images/security-timeline-button-osquery.png "title =20x20")) to launch Timeline with a query that includes the associated user name value. For more information about user risk scores, refer to [Entity risk scoring](/solutions/security/advanced-entity-analytics/entity-risk-scoring.md). @@ -79,7 +79,7 @@ Interact with the table to filter data, view more details, and take action: * Click a host name link to open the host details flyout. * Hover over a host name link to display inline actions: **Add to timeline**, which adds the selected value to Timeline, and **Copy to Clipboard**, which copies the host name value for you to paste later. * Click **View all** in the upper-right to display all host risk information on the Hosts page. -* Click the number link in the **Alerts** column to view the alerts on the Alerts page. Hover over the number and select **Investigate in timeline** (![Investigate in timeline icon](../../../images/security-timeline-button-osquery.png "")) to launch Timeline with a query that includes the associated host name value. +* Click the number link in the **Alerts** column to view the alerts on the Alerts page. Hover over the number and select **Investigate in timeline** (![Investigate in timeline icon](../../../images/security-timeline-button-osquery.png "title =20x20")) to launch Timeline with a query that includes the associated host name value. For more information about host risk scores, refer to [Entity risk scoring](/solutions/security/advanced-entity-analytics/entity-risk-scoring.md). @@ -120,7 +120,7 @@ Interact with the table to filter data and view more details: * Select the **Risk level** dropdown to filter the table by the selected user or host risk level. * Select the **Criticality** dropdown to filter the table by the selected asset criticality level. * Select the **Source** dropdown to filter the table by the data source. -* Click the **View details** icon (![View details icon](../../../images/security-view-details-icon.png "")) to open the entity details flyout. +* Click the **View details** icon (![View details icon](../../../images/security-view-details-icon.png "title =20x20")) to open the entity details flyout. ## Anomalies [entity-anomalies] diff --git a/solutions/security/dashboards/overview-dashboard.md b/solutions/security/dashboards/overview-dashboard.md index bfea217a22..b444253010 100644 --- a/solutions/security/dashboards/overview-dashboard.md +++ b/solutions/security/dashboards/overview-dashboard.md @@ -31,7 +31,7 @@ The **Security news** section provides the latest {{elastic-sec}} news to help y Time-based histograms show the number of detections, alerts, and events that have occurred within the selected time range. To focus on a particular time, click and drag to select a time range, or choose a preset value. The **Stack by** menu lets you select which field is used to organize the data. For example, in the Alert trend histogram, stack by `kibana.alert.rule.name` to display alert counts by rule name within the specified time frame. 
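+
+To picture the data behind the Alert trend histogram, the "stack by rule name" view corresponds roughly to a terms aggregation over the alert indices. The following Dev Tools sketch is illustrative only; it assumes the default `.alerts-security.alerts-*` index pattern and the `kibana.alert.rule.name` field, so adjust both for your environment:
+
+```console
+# Hypothetical request: count alerts per rule, mirroring the "stack by" breakdown
+GET .alerts-security.alerts-*/_search
+{
+  "size": 0,
+  "aggs": {
+    "alerts_by_rule": {
+      "terms": { "field": "kibana.alert.rule.name" }
+    }
+  }
+}
+```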
-Hover over histograms, graphs, and tables to display an **Inspect** button (![Inspect icon](../../../images/security-inspect-icon.png "")) or options menu (![Three-dot menu icon](../../../images/security-three-dot-icon.png "")). Click to inspect the visualization’s {{es}} queries, add it to a new or existing case, or open it in Lens for customization. +Hover over histograms, graphs, and tables to display an **Inspect** button (![Inspect icon](../../../images/security-inspect-icon.png "title =20x20")) or options menu (![Three-dot menu icon](../../../images/security-three-dot-icon.png "title =20x20")). Click to inspect the visualization’s {{es}} queries, add it to a new or existing case, or open it in Lens for customization. ## Host and network events [_host_and_network_events] diff --git a/solutions/security/detect-and-alert/about-building-block-rules.md b/solutions/security/detect-and-alert/about-building-block-rules.md index 44cc26b351..ac42acf369 100644 --- a/solutions/security/detect-and-alert/about-building-block-rules.md +++ b/solutions/security/detect-and-alert/about-building-block-rules.md @@ -4,14 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-building-block-rules.html --- -# About building block rules - -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/building-block-rule.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-building-block-rules.md +# About building block rules [security-building-block-rules] Create building block rules when you do not want to see their generated alerts in the UI. This is useful when you want: @@ -19,7 +12,7 @@ Create building block rules when you do not want to see their generated alerts i * Rules that execute on the alert indices (`.alerts-security.alerts-`). You can then use building block rules to create hidden alerts that act as a basis for an *ordinary* rule to generate visible alerts. -## Set up rules that run on alert indices [_set_up_rules_that_run_on_alert_indices] +## Set up rules that run on alert indices [security-building-block-rules-set-up-rules-that-run-on-alert-indices] To create a rule that searches alert indices, select **Index Patterns** as the rule’s **Source** and enter the index pattern for alert indices (`.alerts-security.alerts-*`): @@ -29,7 +22,7 @@ To create a rule that searches alert indices, select **Index Patterns** as the r ::: -## View building block alerts in the UI [_view_building_block_alerts_in_the_ui] +## View building block alerts in the UI [security-building-block-rules-view-building-block-alerts-in-the-ui] By default, building block alerts are excluded from the Overview and Alerts pages. You can choose to include building block alerts on the Alerts page, which expands the number of alerts. 
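+
+Although hidden from the UI by default, building block alerts remain queryable. As a sketch, the following Dev Tools request surfaces them directly from the alert indices; it assumes the default `.alerts-security.alerts-*` pattern and the `kibana.alert.building_block_type` field that building block rules populate:
+
+```console
+# Hypothetical request: list alerts generated by building block rules
+GET .alerts-security.alerts-*/_search
+{
+  "query": {
+    "exists": { "field": "kibana.alert.building_block_type" }
+  }
+}
+```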
diff --git a/solutions/security/detect-and-alert/about-detection-rules.md b/solutions/security/detect-and-alert/about-detection-rules.md index 3253620f99..5d18077b4c 100644 --- a/solutions/security/detect-and-alert/about-detection-rules.md +++ b/solutions/security/detect-and-alert/about-detection-rules.md @@ -2,20 +2,10 @@ mapped_urls: - https://www.elastic.co/guide/en/security/current/about-rules.html - https://www.elastic.co/guide/en/serverless/current/security-about-rules.html ---- - -# About detection rules - -% What needs to be done: Align serverless/stateful -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/about-rules.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-about-rules.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +--- -$$$rule-types$$$ +# About detection rules [security-about-rules] Rules run periodically and search for source events, matches, sequences, or {{ml}} job anomaly results that meet their criteria. When a rule’s criteria are met, a detection alert is created. @@ -25,7 +15,7 @@ Rules run periodically and search for source events, matches, sequences, or {{ml You can create the following types of rules: * [**Custom query**](/solutions/security/detect-and-alert/create-detection-rule.md#create-custom-rule): Query-based rule, which searches the defined indices and creates an alert when one or more documents match the rule’s query. -* [**Machine learning](/solutions/security/detect-and-alert/create-detection-rule.md#create-ml-rule)**: {{ml-cap}} rule, which creates an alert when a {{ml}} job discovers an anomaly above the defined threshold (see [Anomaly detection](/solutions/security/advanced-entity-analytics/anomaly-detection.md)). +* [**Machine learning**](/solutions/security/detect-and-alert/create-detection-rule.md#create-ml-rule): {{ml-cap}} rule, which creates an alert when a {{ml}} job discovers an anomaly above the defined threshold (see [Anomaly detection](/solutions/security/advanced-entity-analytics/anomaly-detection.md)). For {{ml}} rules, the associated {{ml}} job must be running. If the {{ml}} job isn’t running, the rule will: @@ -44,7 +34,7 @@ You can create the following types of rules: :::: * [**New terms**](/solutions/security/detect-and-alert/create-detection-rule.md#create-new-terms-rule): Generates an alert for each new term detected in source documents within a specified time range. You can also detect a combination of up to three new terms (for example, a `host.ip` and `host.id` that have never been observed together before). -* [**ES|QL**](/solutions/security/detect-and-alert/create-detection-rule.md#create-esql-rule): Searches the defined indices and creates an alert when results match an [Elasticsearch Query Language (ES|QL)](/explore-analyze/query-filter/languages/esql.md) query. +* [**ES|QL**](/solutions/security/detect-and-alert/create-detection-rule.md#create-esql-rule): Searches the defined indices and creates an alert when results match an [Elasticsearch Query Language {{esql}}](/explore-analyze/query-filter/languages/esql.md) query. ::::{note} {{esql}} is enabled by default in {{kib}}. It can be disabled using the `enableESQL` setting from the [Advanced Settings](kibana://reference/advanced-settings.md). This will hide the {{esql}} user interface from various applications. However, users will be able to access existing {{esql}} artifacts like saved searches and visualizations. 
@@ -62,7 +52,7 @@ You can create the following types of rules:

-When you create a rule, you must either specify the {{es}} index pattens for which you’d like the rule to run, or select a [data view field](/solutions/security/get-started/data-views-elastic-security.md) as the data source. If you select a data view, you can select [runtime fields](/solutions/security/get-started/create-runtime-fields-in-elastic-security.md) associated with that data view to create a query for the rule (with the exception of {{ml}} rules, which do not use queries).
+When you create a rule, you must either specify the {{es}} index patterns for which you’d like the rule to run, or select a [data view field](/solutions/security/get-started/data-views-elastic-security.md) as the data source. If you select a data view, you can select [runtime fields](/solutions/security/get-started/create-runtime-fields-in-elastic-security.md) associated with that data view to create a query for the rule (with the exception of {{ml}} rules, which do not use queries).

 ::::{note}
-To access data views, ensure you have the [required permissions](/explore-analyze/find-and-organize/data-views.md#data-views-read-only-access).
+To access data views in {{stack}}, you must have the [required permissions](/explore-analyze/find-and-organize/data-views.md#data-views-read-only-access). To access them in {{serverless-short}}, you must have the appropriate [predefined Security user role](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles) or a [custom role](../../../deploy-manage/users-roles/cloud-organization/user-roles.md) with the right privileges.
 ::::

diff --git a/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md b/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md
index 261513b345..f6ff535e6f 100644
--- a/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md
+++ b/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md
@@ -4,20 +4,7 @@ mapped_urls:
   - https://www.elastic.co/guide/en/serverless/current/security-signals-to-cases.html
 ---

-# Add detection alerts to cases
-
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/signals-to-cases.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-signals-to-cases.md
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$signals-to-existing-cases$$$
-
-$$$signals-to-new-cases$$$
+# Add detection alerts to cases [security-signals-to-cases]

 From the Alerts table, you can attach one or more alerts to a [new case](/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md#signals-to-new-cases) or [an existing one](/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md#signals-to-existing-cases). Alerts from any rule type can be added to a case.

diff --git a/solutions/security/detect-and-alert/add-manage-exceptions.md b/solutions/security/detect-and-alert/add-manage-exceptions.md
index 2f7762e1c7..726353a42a 100644
--- a/solutions/security/detect-and-alert/add-manage-exceptions.md
+++ b/solutions/security/detect-and-alert/add-manage-exceptions.md
@@ -199,7 +199,7 @@ Additionally, to add an Endpoint exception to an endpoint protection rule, there

     ::::{note}
-    * Fields with conflicts are marked with a warning icon (![Field conflict warning icon](../../../images/security-field-warning-icon.png "")). Using these fields might cause unexpected exceptions behavior. For more information, refer to [Troubleshooting type conflicts and unmapped fields](../../../troubleshoot/security/detection-rules.md#rule-exceptions-field-conflicts).
+    * Fields with conflicts are marked with a warning icon (![Field conflict warning icon](../../../images/security-field-warning-icon.png "title =20x20")). Using these fields might cause unexpected exceptions behavior. 
For more information, refer to [Troubleshooting type conflicts and unmapped fields](../../../troubleshoot/security/detection-rules.md#rule-exceptions-field-conflicts). * The `is one of` and `is not one of` operators support identical, case-sensitive values. For example, if you want to match the values `Windows` and `windows`, add both values to the **Value** field. :::: diff --git a/solutions/security/detect-and-alert/create-detection-rule.md b/solutions/security/detect-and-alert/create-detection-rule.md index a37dee0ca8..106e65d0ae 100644 --- a/solutions/security/detect-and-alert/create-detection-rule.md +++ b/solutions/security/detect-and-alert/create-detection-rule.md @@ -4,54 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-rules-create.html --- -# Create a detection rule - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/rules-ui-create.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-rules-create.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$rule-ui-advanced-params$$$ - -$$$preview-rules$$$ - -$$$create-esql-rule$$$ - -$$$create-custom-rule$$$ - -$$$create-eql-rule$$$ - -$$$create-indicator-rule$$$ - -$$$create-ml-rule$$$ - -$$$create-new-terms-rule$$$ - -$$$create-threshold-rule$$$ - -$$$esql-non-agg-query-dedupe$$$ - -$$$esql-query-design$$$ - -$$$esql-rule-limitations$$$ - -$$$esql-rule-query-types$$$ - -$$$indicator-value-lists$$$ - -$$$rule-action-variables$$$ - -$$$rule-notifications$$$ - -$$$rule-response-action$$$ - -$$$rule-schedule$$$ - -$$$rule-ui-basic-params$$$ +# Create a detection rule [security-rules-create] To create a new detection rule, follow these steps: @@ -63,15 +16,17 @@ To create a new detection rule, follow these steps: 6. Set up response actions (optional). ::::{admonition} Requirements -* To create detection rules, you must have access to data views, which requires the [{{kib}} privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) `Data View Management`. -* You’ll also need permissions to enable and view detections, manage rules, manage alerts, and preview rules. These permissions depend on the user role. Refer to [*Detections requirements*](/solutions/security/detect-and-alert/detections-requirements.md) for more information. +To create detection rules, you must have: + +* Access to data views, which requires the `Data View Management` [{{kib}} privilege](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) in {{stack}} or the appropriate [user role](/deploy-manage/users-roles/cloud-organization/user-roles.md) in {{serverless-short}}. +* Permissions to enable and view detections, manage rules, manage alerts, and preview rules. These permissions depend on the user role. Refer to [Detections requirements](/solutions/security/detect-and-alert/detections-requirements.md) for more information. :::: ::::{tip} * At any step, you can [preview the rule](/solutions/security/detect-and-alert/create-detection-rule.md#preview-rules) before saving it to see what kind of results you can expect. 
-* To ensure rules don’t search cold and frozen data when executing, either configure the `excludedDataTiersForRuleExecution` [advanced setting](/solutions/security/get-started/configure-advanced-settings.md#exclude-cold-frozen-data-rule-executions) (which applies to all rules in a space), or add a [Query DSL filter](/solutions/security/detect-and-alert/exclude-cold-frozen-data-from-individual-rules.md) to individual rules. +* To ensure rules don’t search cold and frozen data when executing, either configure the `excludedDataTiersForRuleExecution` [advanced setting](/solutions/security/get-started/configure-advanced-settings.md#exclude-cold-frozen-data-rule-executions) (which applies to all rules in a space), or add a [Query DSL filter](/solutions/security/detect-and-alert/exclude-cold-frozen-data-from-individual-rules.md) to individual rules. These options are only available if you're on the {{stack}}. :::: @@ -80,47 +35,10 @@ To create a new detection rule, follow these steps: Additional configuration is required for detection rules using cross-cluster search. Refer to [Cross-cluster search and detection rules](/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md). :::: - - -## Create a machine learning rule [create-ml-rule] - -::::{important} -To create or edit {{ml}} rules, you must have the [appropriate license](https://www.elastic.co/subscriptions) or use a [cloud deployment](https://cloud.elastic.co/registration?page=docs&placement=docs-body). Additionally, you must have the [`machine_learning_admin`](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) user role, and the selected {{ml}} job must be running for the rule to function correctly. - -:::: - - -1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Click **Create new rule**. -3. To create a rule based on a {{ml}} anomaly threshold, select **Machine Learning** on the **Create new rule** page, then select: - - 1. The required {{ml}} jobs. - - ::::{note} - If a required job isn’t currently running, it will automatically start when you finish configuring and enable the rule. - :::: - - 2. The anomaly score threshold above which alerts are created. - -4. (Optional, [Platinum or higher subscription](https://www.elastic.co/pricing) required) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. - - ::::{note} - Because {{ml}} rules generate alerts from anomalies, which don’t contain source event fields, you can only use anomaly fields when configuring alert suppression. - :::: - -5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. - - 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org). For version ranges, you must use tilde or caret syntax. 
For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. - -6. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). - - ## Create a custom query rule [create-custom-rule] -1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Click **Create new rule**. -3. To create a rule based on a KQL or Lucene query, select **Custom query** on the **Create new rule** page, then: +1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then click **Create new rule**. +2. To create a rule based on a KQL or Lucene query, select **Custom query** on the **Create new rule** page, then: 1. Define which {{es}} indices or data view the rule searches for alerts. 2. Use the filter and query fields to create the criteria used for detecting alerts. @@ -140,38 +58,72 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// :screenshot: ::: - 3. You can use {{kib}} saved queries (![Saved query menu](../../../images/security-saved-query-menu.png "")) and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. + 3. You can use {{kib}} saved queries (![Saved query menu](../../../images/security-saved-query-menu.png "title =20x20")) and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. When you use a saved query, the **Load saved query "*query name*" dynamically on each rule execution** check box appears: * Select this to use the saved query every time the rule runs. This links the rule to the saved query, and you won’t be able to modify the rule’s **Custom query** field or filters because the rule will only use settings from the saved query. To make changes, modify the saved query itself. * Deselect this to load the saved query as a one-time way of populating the rule’s **Custom query** field and filters. This copies the settings from the saved query to the rule, so you can then further adjust the rule’s query and filters as needed. If the saved query is later changed, the rule will not inherit those changes. -4. (Optional, [Platinum or higher subscription](https://www.elastic.co/pricing) required) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -5. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. +3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. +4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. 
You can also start typing a field’s name to find it faster, or type in an entirely new custom field. 2. Enter the field’s data type. -6. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. +5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. + 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. -7. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). +6. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). + +## Create a machine learning rule [create-ml-rule] + +::::{admonition} Requirements +To create or edit {{ml}} rules, you need: +* The appropriate [{{stack}} subscription](https://www.elastic.co/pricing) or [{{serverless-short}} project tier](../../../deploy-manage/deploy/elastic-cloud/project-settings.md). +* The [`machine_learning_admin`](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) role in {{stack}} or the appropriate [user role](/deploy-manage/users-roles/cloud-organization/user-roles.md) in {{serverless-short}}. +* The selected {{ml}} job to be running for the rule to function correctly. + +:::: + + +1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then click **Create new rule**. +2. To create a rule based on a {{ml}} anomaly threshold, select **Machine Learning** on the **Create new rule** page, then select: + + 1. The required {{ml}} jobs. + + ::::{note} + If a required job isn’t currently running, it will automatically start when you finish configuring and enable the rule. + :::: + + 2. The anomaly score threshold above which alerts are created. + +3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information.
+ + ::::{note} + Because {{ml}} rules generate alerts from anomalies, which don’t contain source event fields, you can only use anomaly fields when configuring alert suppression. + :::: + +4. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. + + 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. + 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. + +5. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). ## Create a threshold rule [create-threshold-rule] -1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Click **Create new rule**. -3. To create a rule based on a source event field threshold, select **Threshold** on the **Create new rule** page, then: +1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then click **Create new rule**. +2. To create a rule based on a source event field threshold, select **Threshold**, then: 1. Define which {{es}} indices the rule analyzes for alerts. 2. Use the filter and query fields to create the criteria used for detecting alerts. ::::{note} - You can use {{kib}} saved queries (![Saved query menu](../../../images/security-saved-query-menu.png "")) and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. + You can use {{kib}} saved queries (![Saved query menu](../../../images/security-saved-query-menu.png "title =20x20")) and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. :::: 3. Use the **Group by** and **Threshold** fields to determine which source event field is used as a threshold and the threshold’s value. @@ -190,26 +142,24 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// Alerts created by threshold rules are synthetic alerts that do not resemble the source documents. The alert itself only contains data about the fields that were aggregated over (the **Group by** fields). Other fields are omitted, because they can vary across all source documents that were counted toward the threshold. Additionally, you can reference the actual count of documents that exceeded the threshold from the `kibana.alert.threshold_result.count` field. :::: -4. (Optional, [Platinum or higher subscription](https://www.elastic.co/pricing) required) Select **Suppress alerts** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -5. (Optional) Create a list of **Required fields** that the rule needs to function. 
This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. +3. (Optional) Select **Suppress alerts** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. +4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. 2. Enter the field’s data type. -6. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. +5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. -7. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). +6. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). ## Create an event correlation rule [create-eql-rule] -1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Click **Create new rule**. -3. To create an event correlation rule using EQL, select **Event Correlation** on the **Create new rule** page, then: -4. To create an event correlation rule using EQL, select **Event Correlation**, then: +1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then click **Create new rule**. +2. To create an event correlation rule using EQL, select **Event Correlation** on the **Create new rule** page, then: 1. Define which {{es}} indices or data view the rule searches when querying for events. 2. Write an [EQL query](elasticsearch://reference/query-languages/eql-syntax.md) that searches for matching events or a series of matching events. 
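As an illustration of the kind of query this step calls for, here is a minimal EQL sequence sketch (the field names follow ECS; the 30-second window and file extension are hypothetical choices) that correlates an executable file being created with a process later starting from that same path on the same host:

```eql
sequence by host.id with maxspan=30s
  [file where event.type == "creation" and file.extension == "exe"] by file.path
  [process where event.type == "start"] by process.executable
```

The `by host.id` join key scopes the sequence to a single host, and the per-event `by` fields require the created file path to match the executed path.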
@@ -249,24 +199,24 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// For sequence events, the {{security-app}} generates a single alert when all events listed in the sequence are detected. To see the matched sequence events in more detail, you can view the alert in the Timeline, and, if all events came from the same process, open the alert in Analyze Event view. :::: -5. (Optional) Click the EQL settings icon (![EQL settings icon](../../../images/security-eql-settings-icon.png "")) to configure additional fields used by [EQL search](/explore-analyze/query-filter/languages/eql.md#specify-a-timestamp-or-event-category-field): +3. (Optional) Click the EQL settings icon (![EQL settings icon](../../../images/security-eql-settings-icon.png "title =20x20")) to configure additional fields used by [EQL search](/explore-analyze/query-filter/languages/eql.md#specify-a-timestamp-or-event-category-field): * **Event category field**: Contains the event classification, such as `process`, `file`, or `network`. This field is typically mapped as a field type in the [keyword family](elasticsearch://reference/elasticsearch/mapping-reference/keyword.md). Defaults to the `event.category` ECS field. * **Tiebreaker field**: Sets a secondary field for sorting events (in ascending, lexicographic order) if they have the same timestamp. * **Timestamp field**: Contains the event timestamp used for sorting a sequence of events. This is different from the **Timestamp override** advanced setting, which is used for querying events within a range. Defaults to the `@timestamp` ECS field. -6. Optional, [Platinum or higher subscription](https://www.elastic.co/pricing) required) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -7. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. +4. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. +5. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. 2. Enter the field’s data type. -8. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. +6. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). 
This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. + 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. -9. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). +7. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). ## Create an indicator match rule [create-indicator-rule] @@ -276,9 +226,8 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// :::: -1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Click **Create new rule**. -3. To create a rule that searches for events whose specified field value matches the specified indicator field value in the indicator index patterns, select **Indicator Match** on the **Create new rule** page, then fill in the following fields: +1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then click **Create new rule**. +2. To create a rule that searches for events whose specified field value matches the specified indicator field value in the indicator index patterns, select **Indicator Match**, then fill in the following fields: 1. **Source**: The individual index patterns or data view that specifies what data to search. 2. **Custom query**: The query and filters used to retrieve the required results from the {{elastic-sec}} event indices. For example, if you want to match documents that only contain a `destination.ip` address field, add `destination.ip : *`. @@ -289,7 +238,7 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// ::::{note} - You can use {{kib}} saved queries (![Saved query menu](../../../images/security-saved-query-menu.png "")) and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. + You can use saved queries and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. :::: 3. **Indicator index patterns**: The indicator index patterns containing field values for which you want to generate alerts. This field is automatically populated with indices specified in the `securitySolution:defaultThreatIndex` advanced setting. 
For more information, see [Update default Elastic Security threat intelligence indices](/solutions/security/get-started/configure-advanced-settings.md#update-threat-intel-indices). @@ -306,7 +255,7 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// :::: - To define which field values are compared from the indices add the following: + To define which field values are compared from the indices, add the following: * **Field**: The field used for comparing values in the {{elastic-sec}} event indices. * **Indicator index field**: The field used for comparing values in the indicator indices. @@ -321,21 +270,21 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// ::: ::::{tip} - Before you create rules, create [Timeline templates](/solutions/security/investigate/timeline.md) so they can be selected here. When alerts generated by the rule are investigated in the Timeline, Timeline query values are replaced with their corresponding alert field values. + Before you create rules, create [Timeline templates](/solutions/security/investigate/timeline.md) so you can select them under **Timeline template** at the end of the **Define rule** section. When alerts generated by the rule are investigated in the Timeline, Timeline query values are replaced with their corresponding alert field values. :::: -4. (Optional, [Platinum or higher subscription](https://www.elastic.co/pricing) required) Select **Suppress alerts** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -5. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. +3. (Optional) Select **Suppress alerts** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. +4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. 2. Enter the field’s data type. -6. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. +5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. 
Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. + 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. -7. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). +6. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). ### Use value lists with indicator match rules [indicator-value-lists] @@ -369,15 +318,14 @@ You uploaded a value list of known ransomware domains, and you want to be notifi ## Create a new terms rule [create-new-terms-rule] -1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Click **Create new rule**. -3. To create a rule that searches for each new term detected in source documents, select **New Terms** on the **Create new rule** page, then: +1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then click **Create new rule**. +2. To create a rule that searches for each new term detected in source documents, select **New Terms** on the **Create new rule** page, then: 1. Specify what data to search by entering individual {{es}} index patterns or selecting an existing data view. 2. Use the filter and query fields to create the criteria used for detecting alerts. ::::{note} - You can use {{kib}} saved queries (![Saved query menu](../../../images/security-saved-query-menu.png "")) and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. + You can use saved queries and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. :::: 3. Use the **Fields** menu to select a field to check for new terms. You can also select up to three fields to detect a combination of new terms (for example, a `host.ip` and `host.id` that have never been observed together before). @@ -390,18 +338,18 @@ You uploaded a value list of known ransomware domains, and you want to be notifi For example, if a rule has an interval of 5 minutes, no additional look-back time, and a history window size of 7 days, a term will be considered new only if the time it appears within the last 7 days is also within the last 5 minutes. Configure the rule interval and additional look-back time when you [set the rule’s schedule](/solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule). -4. (Optional, [Platinum or higher subscription](https://www.elastic.co/pricing) required) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -5. (Optional) Create a list of **Required fields** that the rule needs to function. 
This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. +3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. +4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. 2. Enter the field’s data type. -6. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. +5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. -7. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). +6. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). ## Create an {{esql}} rule [create-esql-rule] @@ -410,9 +358,8 @@ Use [{{esql}}](/explore-analyze/query-filter/languages/esql.md) to query your so To create an {{esql}} rule: -1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Click **Create new rule**. -3. Select **{{esql}}**, then write a query. +1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then click **Create new rule**. +2. Select **{{esql}}**, then write a query. ::::{note} Refer to the sections below to learn more about [{{esql}} query types](/solutions/security/detect-and-alert/create-detection-rule.md#esql-rule-query-types), [query design considerations](/solutions/security/detect-and-alert/create-detection-rule.md#esql-query-design), and [rule limitations](/solutions/security/detect-and-alert/create-detection-rule.md#esql-rule-limitations). 
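For example, an aggregating {{esql}} rule query (one that uses `STATS ... BY`, as described under query types below) might look like the following sketch, where the index pattern and threshold are hypothetical:

```esql
FROM logs-*
| WHERE event.category == "authentication" AND event.outcome == "failure"
| STATS failures = COUNT(*) BY host.name, user.name
| WHERE failures > 20
```

Each row the query returns (here, a host and user pair with more than 20 failed authentications in the rule’s look-back window) becomes an alert.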
@@ -420,21 +367,21 @@ To create an {{esql}} rule: ::::{tip} - Click the help icon (![Click the ES|QL help icon](../../../images/security-esql-help-ref-button.png "")) to open the in-product reference documentation for all {{esql}} commands and functions. + Click the help icon (![Click the ES|QL help icon](../../../images/security-esql-help-ref-button.png "title =20x20")) to open the in-product reference documentation for all {{esql}} commands and functions. :::: -4. (Optional, [Platinum or higher subscription](https://www.elastic.co/pricing) required) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. -5. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. +3. (Optional) Use **Suppress alerts by** to reduce the number of repeated or duplicate alerts created by the rule. Refer to [Suppress detection alerts](/solutions/security/detect-and-alert/suppress-detection-alerts.md) for more information. +4. (Optional) Create a list of **Required fields** that the rule needs to function. This list is informational only, to help users understand the rule; it doesn’t affect how the rule actually runs. 1. Click **Add required field**, then select a field from the index patterns or data view you specified for the rule. You can also start typing a field’s name to find it faster, or type in an entirely new custom field. 2. Enter the field’s data type. -6. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. +5. (Optional) Add **Related integrations** to associate the rule with one or more [Elastic integrations](https://docs.elastic.co/en/integrations). This indicates the rule’s dependency on specific integrations and the data they generate, and allows users to confirm each integration’s [installation status](/solutions/security/detect-and-alert/manage-detection-rules.md#rule-prerequisites) when viewing the rule. 1. Click **Add integration**, then select an integration from the list. You can also start typing an integration’s name to find it faster. - 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. + 2. Enter the version of the integration you want to associate with the rule, using [semantic versioning](https://semver.org/). For version ranges, you must use tilde or caret syntax. For example, `~1.2.3` is from 1.2.3 to any patch version less than 1.3.0, and `^1.2.3` is from 1.2.3 to any minor and patch version less than 2.0.0. -7. Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). +6. 
Click **Continue** to [configure basic rule settings](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-basic-params). ### {{esql}} query types [esql-rule-query-types] @@ -540,7 +487,7 @@ When writing your query, consider the following: ### {{esql}} rule limitations [esql-rule-limitations] -If your {{esql}} query creates new fields that aren’t part of the ECS schema, they aren’t mapped to the alerts index so you can’t search for or filter them in the Alerts table. As a workaround, create [runtime fields](/solutions/security/get-started/create-runtime-fields-in-elastic-security.md). +If your {{esql}} query creates new fields that aren’t part of the ECS schema, they aren’t mapped to the alerts index, so you can’t search for or filter them in the Alerts table. As a workaround, create [runtime fields](/solutions/security/get-started/create-runtime-fields-in-elastic-security.md). ### Highlight fields returned by the {{esql}} rule query [custom-highlighted-esql-fields] @@ -606,7 +553,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](/solutions/s 1. **Reference URLs** (optional): References to information that is relevant to the rule. For example, links to background information. 2. **False positive examples** (optional): List of common scenarios that may produce false-positive alerts. 3. **MITRE ATT&CKTM threats** (optional): Add relevant [MITRE](https://attack.mitre.org/) framework tactics, techniques, and subtechniques. - 4. **Custom highlighted fields** (optional): Specify one or more highlighted fields for unique alert investigation flows. You can choose any fields that are available in the indices you selected for the rule’s data source. + 4. **Custom highlighted fields** (optional): Specify highlighted fields for unique alert investigation flows. You can choose any fields that are available in the indices you selected for the rule’s data source. After you create the rule, you can find all custom highlighted fields in the About section of the rule details page. If the rule has alerts, you can find custom highlighted fields in the [Highlighted fields](/solutions/security/detect-and-alert/view-detection-alert-details.md#investigation-section) section of the alert details flyout. @@ -620,7 +567,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](/solutions/s If you select this option, you can add {{elastic-endpoint}} exceptions on the Rule details page. Additionally, all future exceptions added to [endpoint protection rules](/solutions/security/manage-elastic-defend/endpoint-protection-rules.md) will also affect this rule. :::: - 10. **Building block** (optional): Select to create a building-block rule. By default, alerts generated from a building-block rule are not displayed in the UI. See [*About building block rules*](/solutions/security/detect-and-alert/about-building-block-rules.md) for more information. + 10. **Building block** (optional): Select to create a building-block rule. By default, alerts generated from a building-block rule are not displayed in the UI. See [About building block rules](/solutions/security/detect-and-alert/about-building-block-rules.md) for more information. 11. **Max alerts per run** (optional): Specify the maximum number of alerts the rule can create each time it runs. Default is 100. 
::::{note} @@ -681,17 +628,17 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](/solutions/s ## Set up rule actions (optional) [rule-notifications] -Use {{kib}} actions to set up notifications sent via other systems when alerts are generated. +Use actions to set up notifications sent via other systems when alerts are generated. ::::{note} -To use {{kib}} actions for alert notifications, you need the [appropriate license](https://www.elastic.co/subscriptions) and your role needs **All** privileges for the **Action and Connectors** feature. For more information, see [Cases requirements](/solutions/security/investigate/cases-requirements.md). +To use actions for alert notifications, you need the [appropriate license](https://www.elastic.co/subscriptions). For more information, see [Cases requirements](/solutions/security/investigate/cases-requirements.md). :::: 1. Select a connector type to determine how notifications are sent. For example, if you select the {{jira}} connector, notifications are sent to your {{jira}} system. ::::{note} - Each action type requires a connector. Connectors store the information required to send the notification from the external system. You can configure connectors while creating the rule or in **{{stack-manage-app}}** → **{{connectors-ui}}**. For more information, see [Action and connector types](/deploy-manage/manage-connectors.md). + Each action type requires a connector. Connectors store the information required to send the notification from the external system. You can configure connectors while creating the rule or from the **{{connectors-ui}}** page. For more information, refer to [Action and connector types](/deploy-manage/manage-connectors.md). Some connectors that perform actions require less configuration. For example, you do not need to set the action frequency or variables for the [Cases connector](kibana://reference/connectors-kibana/cases-action-type.md) @@ -801,7 +748,7 @@ The following variables can only be passed if the rule’s action frequency is f #### Alert placeholder examples [placeholder-examples] -To understand which fields to parse, see the [*Detections API*](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-detections-api) to view the JSON representation of rules. +To understand which fields to parse, see the [Detections API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-security-detections-api) to view the JSON representation of rules. Example using `{{context.rule.filters}}` to output a list of filters: @@ -831,7 +778,7 @@ Example using the mustache "current element" notation `{{.}}` to output all the Use response actions to set up additional functionality that will run whenever a rule executes: * **Osquery**: Include live Osquery queries with a custom query rule. When an alert is generated, Osquery automatically collects data on the system related to the alert. Refer to [Add Osquery Response Actions](/solutions/security/investigate/add-osquery-response-actions.md) to learn more. -* **{{elastic-defend}}**: Automatically run response actions on an endpoint when rule conditions are met. For example, you can automatically isolate a host or terminate a process when specific activities or events are detected on the host. Refer to [*Automated response actions*](/solutions/security/endpoint-response-actions/automated-response-actions.md) to learn more. +* **{{elastic-defend}}**: Automatically run response actions on an endpoint when rule conditions are met. 
For example, you can automatically isolate a host or terminate a process when specific activities or events are detected on the host. Refer to [Automated response actions](/solutions/security/endpoint-response-actions/automated-response-actions.md) to learn more. ::::{important} Host isolation involves quarantining a host from the network to prevent further spread of threats and limit potential damage. Be aware that automatic host isolation can cause unintended consequences, such as disrupting legitimate user activities or blocking critical business processes. @@ -844,7 +791,7 @@ Host isolation involves quarantining a host from the network to prevent further You can preview any custom or prebuilt rule to find out how noisy it will be. For a custom rule, you can then adjust the rule’s query or other settings. ::::{note} -To preview rules, you need the `read` privilege for the `.preview.alerts-security.alerts-` and `.internal.preview.alerts-security.alerts--*` indices, plus `All` privileges for the Security feature. Refer to [*Detections requirements*](/solutions/security/detect-and-alert/detections-requirements.md) for more information. +To preview rules, you must have the appropriate user role. Refer to [Detections requirements](/solutions/security/detect-and-alert/detections-requirements.md) for more information. :::: @@ -867,11 +814,11 @@ To interact with the rule preview: * Click **Refresh** to update the preview. - * When you edit the rule’s settings or the preview’s time range, the button changes from blue (![Blue circular refresh icon](../../../images/security-rule-preview-refresh-circle.png "")) to green (![Green right-pointing arrow refresh icon](../../../images/security-rule-preview-refresh-arrow.png "")) to indicate that the rule has been edited since the last preview. + * When you edit the rule’s settings or the preview’s time range, the button changes from blue (![Blue circular refresh icon](../../../images/security-rule-preview-refresh-circle.png "title =20x20")) to green (![Green right-pointing arrow refresh icon](../../../images/security-rule-preview-refresh-arrow.png "title =20x20")) to indicate that the rule has been edited since the last preview. * For a relative time range (such as `Last 1 hour`), refresh the preview to check for the latest results. (Previews don’t automatically refresh with new incoming data.) -* Click the **View details** icon (![View details icon](../../../images/security-view-details-icon.png "")) in the alerts table to view the details of a particular alert. -* To resize the preview, hover between the rule settings and preview, then click and drag the border. You can also click the border, then the collapse icon (![Collapse icon](../../../images/security-collapse-right-icon.png "")) to collapse and expand the preview. +* Click the **View details** icon (![View details icon](../../../images/security-view-details-icon.png "title =20x20")) in the alerts table to view the details of a particular alert. +* To resize the preview, hover between the rule settings and preview, then click and drag the border. You can also click the border, then the collapse icon (![Collapse icon](../../../images/security-collapse-right-icon.png "title =20x20")) to collapse and expand the preview. * To close the preview, click the **Rule preview** button again. 
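To make the Osquery response action described above concrete, here is a hypothetical live query that a rule could attach, collecting the listening ports (and the processes that own them) on each host that generates an alert:

```sql
-- List processes bound to listening ports on the alerting host
SELECT p.pid, p.name, lp.port, lp.address
FROM listening_ports AS lp
JOIN processes AS p ON lp.pid = p.pid
WHERE lp.port != 0;
```

The query results are attached to the alert, so the collected host state is available during triage.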
diff --git a/solutions/security/detect-and-alert/create-manage-value-lists.md b/solutions/security/detect-and-alert/create-manage-value-lists.md index e4384b0f9d..cde26bd809 100644 --- a/solutions/security/detect-and-alert/create-manage-value-lists.md +++ b/solutions/security/detect-and-alert/create-manage-value-lists.md @@ -93,8 +93,8 @@ You can also edit value lists while creating and managing exceptions that use va 2. Click **Manage value lists**. The **Manage value lists** window opens. 3. From the **Value lists** table, you can: - 1. Click the **Export value list** button (![Export button from Manage value lists window](../../../images/security-export-value-list.png "")) to export the value list. - 2. Click the **Remove value list** button (![Remove button from Manage value lists window](../../../images/security-remove-value-list.png "")) to delete the value list. + 1. Click the **Export value list** button (![Export button from Manage value lists window](../../../images/security-export-value-list.png "title =20x20")) to export the value list. + 2. Click the **Remove value list** button (![Remove button from Manage value lists window](../../../images/security-remove-value-list.png "title =20x20")) to delete the value list. :::{image} ../../../images/security-manage-value-list.png :alt: Import value list flyout with action buttons highlighted diff --git a/solutions/security/detect-and-alert/detections-requirements.md b/solutions/security/detect-and-alert/detections-requirements.md index fa9026d329..2d0760453d 100644 --- a/solutions/security/detect-and-alert/detections-requirements.md +++ b/solutions/security/detect-and-alert/detections-requirements.md @@ -6,33 +6,18 @@ mapped_urls: # Detections requirements -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/detections-permissions-section.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-detections-requirements.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$enable-detections-ui$$$ - -$$$adv-list-settings$$$ - -$$$detections-on-prem-requirements$$$ - -To use the [Detections feature](/solutions/security/detect-and-alert.md), you first need to configure a few settings. You also need the [appropriate license](https://www.elastic.co/subscriptions) to send [notifications](/solutions/security/detect-and-alert/create-detection-rule.md#rule-notifications) when detection alerts are generated. +To use the [Detections feature](/solutions/security/detect-and-alert.md), you first need to configure a few settings. You also need the appropriate [{{stack}} subscription](https://www.elastic.co/pricing) or [{{serverless-short}} project tier](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) to send [notifications](/solutions/security/detect-and-alert/create-detection-rule.md#rule-notifications) when detection alerts are generated. Additionally, there are some [advanced settings](/solutions/security/detect-and-alert/detections-requirements.md#adv-list-settings) used to configure {{kib}} [value list](/solutions/security/detect-and-alert/create-manage-value-lists.md) upload limits. ::::{important} Several steps are **only** required for **self-managed** {{stack}} deployments. 
If you’re using an Elastic Cloud deployment, you only need to [enable detections](/solutions/security/detect-and-alert/detections-requirements.md#enable-detections-ui). :::: - -Additionally, there are some [advanced settings](/solutions/security/detect-and-alert/detections-requirements.md#adv-list-settings) used to configure {{kib}} [value list](/solutions/security/detect-and-alert/create-manage-value-lists.md) upload limits. - - ## Configure self-managed {{stack}} deployments [detections-on-prem-requirements] +```yaml {applies_to} +stack: +``` + These steps are only required for **self-managed** deployments: * HTTPS must be configured for communication between [{{es}} and {{kib}}](/deploy-manage/security/set-up-basic-security-plus-https.md#encrypt-kibana-http). @@ -57,11 +42,7 @@ For instructions about using {{ml}} jobs and rules, refer to [Machine learning j :::: -::::{important} -In {{stack}} version 8.0.0, the `.siem-signals-` index was renamed to `.alerts-security.alerts-`. Detection alert indices are created for each {{kib}} space. For the default space, the alerts index is named `.alerts-security.alerts-default`. If you’re upgrading to 8.0.0 or later, users should have privileges for the `.alerts-security.alerts-` AND `.siem-signals-` indices. If you’re newly installing the {{stack}}, then users do not need privileges for the `.siem-signals-` index. -:::: - - +### Custom role privileges [security-detections-requirements-custom-role-privileges] The following table describes the required privileges to access the Detections feature, including rules and alerts. For more information on {{kib}} privileges, refer to [Feature access based on user privileges](/deploy-manage/manage-spaces.md#spaces-control-user-access). | Action | Cluster Privileges | Index Privileges | Kibana Privileges | @@ -73,16 +54,13 @@ The following table describes the required privileges to access the Detections f | Manage alerts
**NOTE**: Allows you to manage alerts, but not modify rules. | N/A | `maintenance`, `write`, `read`, and `view_index_metadata` for these system indices and data streams, where `<space-id>` is the space name:

* `.alerts-security.alerts-<space-id>`
* `.internal.alerts-security.alerts-<space-id>-*`
* `.siem-signals-<space-id>`¹
* `.lists-<space-id>`
* `.items-<space-id>`

¹ **NOTE**: If you’re upgrading to {{stack}} 8.0.0 or later, users should have privileges for the `.alerts-security.alerts-<space-id>` AND `.siem-signals-<space-id>` indices. If you’re newly installing the {{stack}}, then users do not need privileges for the `.siem-signals-<space-id>` index.
| `Read` for the `Security` feature | | Create the `.lists` and `.items` data streams in your space
**NOTE**: To initiate the process that creates the data streams, you must visit the Rules page for each appropriate space. | `manage` | `manage`, `write`, `read`, and `view_index_metadata` for these data streams, where `<space-id>` is the space name:

* `.lists-<space-id>`
* `.items-<space-id>`
| `All` for the `Security` and `Saved Objects Management` features | -Here is an example of a user who has the Detections feature enabled in all {{kib}} spaces: - -:::{image} ../../../images/security-sec-admin-user.png -:alt: Shows user with the Detections feature enabled in all Kibana spaces -:screenshot: -::: - ### Authorization [alerting-auth-model] +```yaml {applies_to} +stack: +``` + Rules, including all background detection and the actions they generate, are authorized using an [API key](/deploy-manage/api-keys/elasticsearch-api-keys.md) associated with the last user to edit the rule. Upon creating or modifying a rule, an API key is generated for that user, capturing a snapshot of their privileges. The API key is then used to run all background tasks associated with the rule including detection checks and executing actions. ::::{important} @@ -94,6 +72,10 @@ If a rule requires certain privileges to run, such as index privileges, keep in ## Configure list upload limits [adv-list-settings] +```yaml {applies_to} +stack: +``` + You can set limits to the number of bytes and the buffer size used to upload [value lists](/solutions/security/detect-and-alert/create-manage-value-lists.md) to {{elastic-sec}}. To set the value: @@ -105,9 +87,4 @@ To set the value: For example, on a Kibana instance with 2 gigabytes of RAM, you can set this value up to 20000000 (20 megabytes). - * `xpack.lists.importBufferSize`: Sets the buffer size used for uploading {{elastic-sec}} value lists (default `1000`). Change the value if you’re experiencing slow upload speeds or larger than wanted memory usage when uploading value lists. Set to a higher value to increase throughput at the expense of using more Kibana memory, or a lower value to decrease throughput and reduce memory usage. - - -::::{note} -For information on how to configure Elastic Cloud deployments, refer to [Add Kibana user settings](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). -:::: + * `xpack.lists.importBufferSize`: Sets the buffer size used for uploading {{elastic-sec}} value lists (default `1000`). Change the value if you’re experiencing slow upload speeds or larger than wanted memory usage when uploading value lists. Set to a higher value to increase throughput at the expense of using more Kibana memory, or a lower value to decrease throughput and reduce memory usage. \ No newline at end of file diff --git a/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md b/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md index 92fba203ae..fac1f74a10 100644 --- a/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md +++ b/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md @@ -2,26 +2,12 @@ mapped_urls: - https://www.elastic.co/guide/en/security/current/prebuilt-rules-management.html - https://www.elastic.co/guide/en/serverless/current/security-prebuilt-rules-management.html ---- - -# Install and manage Elastic prebuilt rules - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: -% - [x] ./raw-migrated-files/security-docs/security/prebuilt-rules-management.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md +navigation_title: "Use Elastic prebuilt rules" -% Internal links rely on the following IDs being on this page (e.g. 
as a heading ID, paragraph ID, etc): - -$$$load-prebuilt-rules$$$ - -$$$prebuilt-rule-tags$$$ - -$$$select-all-prebuilt-rules$$$ +--- -$$$update-prebuilt-rules$$$ +# Install and manage Elastic prebuilt rules [security-prebuilt-rules-management] Follow these guidelines to start using the {{security-app}}'s [prebuilt rules](security-docs://reference/prebuilt-rules/index.md), keep them updated, and make sure they have the data needed to run successfully. @@ -34,7 +20,7 @@ Follow these guidelines to start using the {{security-app}}'s [prebuilt rules](s ::::{note} * Most prebuilt rules don’t start running by default. You can use the **Install and enable** option to start running rules as you install them, or first install the rules, then enable them manually. After installation, only a few prebuilt rules will be enabled by default, such as the Endpoint Security rule. * You can’t modify most settings on Elastic prebuilt rules. You can only edit [rule actions](/solutions/security/detect-and-alert/create-detection-rule.md#rule-notifications) and [add exceptions](/solutions/security/detect-and-alert/add-manage-exceptions.md). If you want to modify other settings on a prebuilt rule, you must first duplicate it, then make your changes to the duplicated rule. However, your customized rule is entirely separate from the original prebuilt rule, and will not get updates from Elastic if the prebuilt rule is updated. -* Automatic updates of Elastic prebuilt rules are supported for the current {{elastic-sec}} version and the latest three previous minor releases. For example, if you’re on {{elastic-sec}} 8.10, you’ll be able to use the Rules UI to update your prebuilt rules until {{elastic-sec}} 8.14 is released. After that point, you can still manually download and install updated prebuilt rules, but you must upgrade to the latest {{elastic-sec}} version to receive automatic updates. +* On {{stack}}, automatic updates of Elastic prebuilt rules are supported for the current {{elastic-sec}} version and the latest three previous minor releases. For example, if you’re on {{elastic-sec}} 9.0, you’ll be able to use the Rules UI to update your prebuilt rules until {{elastic-sec}} 9.4 is released. After that point, you can still manually download and install updated prebuilt rules, but you must upgrade to the latest {{elastic-sec}} version to receive automatic updates. :::: diff --git a/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md b/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md index 05a3da6375..a4fd7ad57a 100644 --- a/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md +++ b/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md @@ -4,28 +4,10 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-interactive-investigation-guides.html --- -# Launch Timeline from investigation guides - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/interactive-investigation-guides.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-interactive-investigation-guides.md - -% Internal links rely on the following IDs being on this page (e.g. 
as a heading ID, paragraph ID, etc): - -$$$add-ig-actions-rule$$$ - -$$$query-button-syntax$$$ +# Launch Timeline from investigation guides [security-interactive-investigation-guides] Detection rule investigation guides suggest steps for triaging, analyzing, and responding to potential security issues. For custom rules, you can create an interactive investigation guide that includes buttons for launching runtime queries in [Timeline](/solutions/security/investigate/timeline.md), using alert data and hard-coded literal values. This allows you to start detailed Timeline investigations directly from an alert using relevant data. -::::{important} -Interactive investigation guides are compatible between {{stack}} versions 8.7.0 and later. Query buttons created in 8.6.x use different syntax and won’t render correctly in later versions, and vice versa. -:::: - - :::{image} ../../../images/security-ig-alert-flyout.png :alt: Alert details flyout with interactive investigation guide :screenshot: @@ -62,7 +44,7 @@ You can configure an interactive investigation guide when you [create a new rule :screenshot: ::: -2. Place the editor cursor where you want to add the query button in the investigation guide, then select the Investigate icon (![Investigate icon](../../../images/security-ig-investigate-icon.png "")) in the toolbar. The **Add investigation query** builder form appears. +2. Place the editor cursor where you want to add the query button in the investigation guide, then select the Investigate icon (![Investigate icon](../../../images/security-ig-investigate-icon.png "title =20x20")) in the toolbar. The **Add investigation query** builder form appears. :::{image} ../../../images/security-ig-investigation-query-builder.png :alt: Add investigation guide UI @@ -106,7 +88,7 @@ The following syntax defines a query button in an interactive investigation guid | `relativeFrom`, `relativeTo` | (Optional) The start and end, respectively, of the relative time range for the query. Times are relative to the alert’s creation time, represented as `now` in [date math](elasticsearch://reference/elasticsearch/rest-apis/common-options.md#date-math) format. For example, selecting **Last 15 minutes** in the query builder form creates the syntax `"relativeFrom": "now-15m", "relativeTo": "now"`. | ::::{note} -Some characters must be escaped with a backslash, such as `\"` for a quotation mark and `\\` for a literal backslash. Divide Windows paths with double backslashes (for example, `C:\\Windows\\explorer.exe`), and paths that already include double backslashes might require four backslashes for each divider. A clickable error icon (![Error icon](../../../images/security-ig-error-icon.png "")) displays below the Markdown editor if there are any syntax errors. +Some characters must be escaped with a backslash, such as `\"` for a quotation mark and `\\` for a literal backslash. Divide Windows paths with double backslashes (for example, `C:\\Windows\\explorer.exe`), and paths that already include double backslashes might require four backslashes for each divider. A clickable error icon (![Error icon](../../../images/security-ig-error-icon.png "title =20x20")) displays below the Markdown editor if there are any syntax errors. 
:::: diff --git a/solutions/security/detect-and-alert/manage-detection-alerts.md b/solutions/security/detect-and-alert/manage-detection-alerts.md index f451410e4c..bf3bae0060 100644 --- a/solutions/security/detect-and-alert/manage-detection-alerts.md +++ b/solutions/security/detect-and-alert/manage-detection-alerts.md @@ -4,34 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-alerts-manage.html --- -# Manage detection alerts - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/alerts-ui-manage.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-alerts-manage.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$add-exception-from-alerts$$$ - -$$$apply-alert-tags$$$ - -$$$assign-users-to-alerts$$$ - -$$$customize-the-alerts-table$$$ - -$$$detection-alert-status$$$ - -$$$drop-down-filter-controls$$$ - -$$$filter-assigned-alerts$$$ - -$$$group-alerts$$$ - -$$$signals-to-timelines$$$ +# Manage detection alerts [security-alerts-manage] The Alerts page displays all detection alerts. From the Alerts page, you can filter alerts, view alerting trends, change the status of alerts, add alerts to cases, and start investigating and analyzing alerts. @@ -98,13 +71,13 @@ By default, the drop-down controls on the Alerts page filter alerts by **Status* :::: -1. Click the three-dot icon next to the controls (![Three-dot icon](../../../images/security-three-dot-icon-blue.png "")), then select **Edit Controls**. +1. Click the three-dot icon next to the controls (![More actions](../../../images/security-three-dot-icon-blue.png "title =20x20")), then select **Edit Controls**. 2. Do any of the following: - * To reorder controls, click and drag a control by its handle (![Control handle](../../../images/security-handle-icon.png "")). - * To remove a control, hover over it and select **Remove control** (![Red X icon](../../../images/security-red-x-icon.png "")). - * To edit a control, hover over it and select **Edit control** (![Edit icon](../../../images/security-pencil-icon.png "")). - * To add a new control, click **Add Controls** (![Add icon](../../../images/security-add-icon-blue.png "")). If you already have four controls, you must first remove one to make room for the new one. + * To reorder controls, click and drag a control by its handle (![Reorder](../../../images/security-handle-icon.png "title =20x20")). + * To remove a control, hover over it and select **Remove control** (![Remove](../../../images/security-red-x-icon.png "title =20x20")). + * To edit a control, hover over it and select **Edit control** (![Edit](../../../images/security-pencil-icon.png "title =20x20")). + * To add a new control, click **Add Controls** (![Add](../../../images/security-add-icon-blue.png "title =20x20")). If you already have four controls, you must first remove one to make room for the new one. 3. If you’re editing or adding a control, do the following in the configuration flyout that opens: @@ -112,7 +85,7 @@ By default, the drop-down controls on the Alerts page filter alerts by **Status* 2. Enter a **Label** to identify the control. 3. Click **Save and close**. -4. Click **Save pending changes** (![Save icon](../../../images/security-save-icon-blue.png "")). +4. Click **Save pending changes** (![Save](../../../images/security-save-icon-blue.png "title =20x20")). 
## Group alerts [group-alerts] @@ -131,7 +104,7 @@ Each group displays information such as the alerts' severity and how many users, To interact with grouped alerts: * Select the **Take actions** menu to perform a bulk action on all alerts in a group, such as [changing their status](/solutions/security/detect-and-alert/manage-detection-alerts.md#detection-alert-status). -* Click a group’s name or the expand icon (![Grouped alerts expand icon](../../../images/security-expand-icon-vertical-right.png "")) to display alerts within that group. You can filter and customize this view like any other alerts table. +* Click a group’s name or the expand icon (![Grouped alerts expand](../../../images/security-expand-icon-vertical-right.png "title =20x20")) to display alerts within that group. You can filter and customize this view like any other alerts table. :::{image} ../../../images/security-group-alerts-expand.png :alt: Expanded alert group with alerts table @@ -204,7 +177,11 @@ To change an alert’s status, do one of the following: :screenshot: ::: -* [beta] To bulk-change the status of [grouped alerts](/solutions/security/detect-and-alert/manage-detection-alerts.md#group-alerts), select the **Take actions** menu for the group, then select a status. +* To bulk-change the status of [grouped alerts](/solutions/security/detect-and-alert/manage-detection-alerts.md#group-alerts), select the **Take actions** menu for the group, then select a status. + + ::::{warning} + This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. + :::: * In an alert’s details flyout, click **Take action** and select a status. @@ -246,9 +223,9 @@ Users are not notified when they’ve been assigned to, or unassigned from, aler | Action | Instructions | | --- | --- | -| Assign users to an alert | Choose one of the following:

* **Alerts table** - Click **More actions*** (***…​***) in an alert’s row, then click ***Assign alert***. Select users, then click ***Apply***.
* ***Alert details flyout*** - Click ***Take action → Assign alert***. Alternatively, click the ***Assign alert** icon at the top of the alert details flyout, select users, then click **Apply**.
| -| Unassign all users from an alert | Choose one of the following:

* **Alerts table** - Click **More actions*** (***…​***) in an alert’s row, then click ***Unassign alert***.
* ***Alert details flyout** - Click **Take action → Unassign alert**.
| -| Assign users to multiple alerts | From the Alerts table, select the alerts you want to change. Click **Selected *x* alerts** at the upper-left above the table, then click **Assign alert**. Select users, then click **Apply**.

::::{note}
Users assigned to some of the selected alerts will be displayed as unassigned in the selection list. Selecting said users will assign them to all alerts they haven’t been assigned to yet.
::::

| +| Assign users to an alert | Choose one of the following:

- **Alerts table**: Click **More actions** (**…​**) in an alert’s row, then click **Assign alert**. Select users, then click **Apply**.
- **Alert details flyout**: Click **Take action → Assign alert**. Alternatively, click the **Assign alert** icon at the top of the alert details flyout, select users, then click **Apply**.
| +| Unassign all users from an alert | Choose one of the following:

- **Alerts table**: Click **More actions** (**…​**) in an alert’s row, then click **Unassign alert**.
- **Alert details flyout**: Click **Take action → Unassign alert**.
| +| Assign users to multiple alerts | From the Alerts table, select the alerts you want to change. Click **Selected *x* alerts** at the upper-left above the table, then click **Assign alert**. Select users, then click **Apply**.

**Note**: Users assigned to only some of the selected alerts are displayed as unassigned in the selection list. Selecting these users will assign them to all the selected alerts they haven’t been assigned to yet.

| | Unassign users from multiple alerts | From the Alerts table, select the alerts you want to change and click **Selected *x* alerts** at the upper-left above the table. Click **Unassign alert** to remove users from the alert. | Show users that have been assigned to alerts by adding the **Assignees** column to the Alerts table (**Fields** → `kibana.alert.workflow_assignee_ids`). Up to four assigned users can appear in the **Assignees** column. If an alert is assigned to five or more users, a number appears instead. @@ -313,5 +290,5 @@ Suppose the rule that generated the alert uses a Timeline template. In this case This Timeline template uses the `host.name: "{host.name}"` dropzone filter in the rule. When alerts generated by the rule are investigated in Timeline, the `{host.name}` value is replaced with the alert’s `host.name` value. If the alerts’s `host.name` value is `Windows-ArsenalFC`, the Timeline dropzone query is `host.name: "Windows-ArsenalFC"`. ::::{note} -Refer to [*Timeline*](/solutions/security/investigate/timeline.md) for information on creating Timelines and Timeline templates. For information on how to add Timeline templates to rules, refer to [*Create a detection rule*](/solutions/security/detect-and-alert/create-detection-rule.md). +Refer to [Timeline](/solutions/security/investigate/timeline.md) for information on creating Timelines and Timeline templates. For information on how to add Timeline templates to rules, refer to [Create a detection rule](/solutions/security/detect-and-alert/create-detection-rule.md). :::: diff --git a/solutions/security/detect-and-alert/manage-detection-rules.md b/solutions/security/detect-and-alert/manage-detection-rules.md index bfc8d04cf3..437f7d55d7 100644 --- a/solutions/security/detect-and-alert/manage-detection-rules.md +++ b/solutions/security/detect-and-alert/manage-detection-rules.md @@ -4,32 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-rules-ui-management.html --- -# Manage detection rules - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/rules-ui-management.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-rules-ui-management.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$sort-filter-rules$$$ - -$$$manually-run-rules$$$ - -$$$import-export-rules-ui$$$ - -$$$edit-rules-settings$$$ - -$$$manage-rules-ui$$$ - -$$$rule-prerequisites$$$ - -$$$rule-status$$$ - -$$$snooze-rule-actions$$$ +# Manage detection rules [security-rules-ui-management] The Rules page allows you to view and manage all prebuilt and custom detection rules. @@ -79,7 +54,7 @@ The **Last response** column displays the current status of each rule, based on * **Failed**: The rule encountered an error that prevented it from running. For example, a {{ml}} rule whose corresponding {{ml}} job wasn’t running. * **Warning**: Nothing prevented the rule from running, but it might have returned unexpected results. For example, a custom query rule tried to search an index pattern that couldn’t be found in {{es}}. -For {{ml}} rules, an indicator icon (![Error icon from rules table](../../../images/security-rules-table-error-icon.png "")) also appears in this column if a required {{ml}} job isn’t running. 
Click the icon to list the affected jobs, then click **Visit rule details page to investigate** to open the rule’s details page, where you can start the {{ml}} job. +For {{ml}} rules, an indicator icon (![Error icon from rules table](../../../images/security-rules-table-error-icon.png "title =20x20")) also appears in this column if a required {{ml}} job isn’t running. Click the icon to list the affected jobs, then click **Visit rule details page to investigate** to open the rule’s details page, where you can start the {{ml}} job. ## Modify existing rules settings [edit-rules-settings] @@ -97,8 +72,8 @@ Similarly, rules will be skipped if they can’t be modified by a bulk edit. For 1. Find **Detection rules (SIEM)** in the navigation menu or by using the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Do one of the following: - * Edit a single rule: Select the **All actions** menu (**…​**) on a rule, then select **Edit rule settings**. The **Edit rule settings** view opens, where you can modify the [rule’s settings](/solutions/security/detect-and-alert/create-detection-rule.md). - * Bulk edit multiple rules: Select the rules you want to edit, then select an action from the **Bulk actions** menu: + * **Edit a single rule**: Select the **All actions** menu (**…​**) on a rule, then select **Edit rule settings**. The **Edit rule settings** view opens, where you can modify the [rule’s settings](/solutions/security/detect-and-alert/create-detection-rule.md). + * **Bulk edit multiple rules**: Select the rules you want to edit, then select an action from the **Bulk actions** menu: * **Index patterns**: Add or delete the index patterns used by all selected rules. * **Tags**: Add or delete tags on all selected rules. @@ -106,7 +81,7 @@ Similarly, rules will be skipped if they can’t be modified by a bulk edit. For * **Add rule actions**: Add [rule actions](/solutions/security/detect-and-alert/create-detection-rule.md#rule-notifications) on all selected rules. If you add multiple actions, you can specify an action frequency for each of them. To overwrite the frequency of existing actions select the option to **Overwrite all selected rules actions**. ::::{important} - After upgrading to 8.8 or later, frequency settings for rule actions created in 8.7 or earlier are moved from the rule level to the action level. The action schedules remain the same and will continue to run on their previously specified frequency (`On each rule execution`, `Hourly`, `Daily`, or `Weekly`). + After upgrading to {{stack}} 8.8 or later, frequency settings for rule actions created in 8.7 or earlier are moved from the rule level to the action level. The action schedules remain the same and will continue to run on their previously specified frequency (`On each rule execution`, `Hourly`, `Daily`, or `Weekly`). :::: @@ -118,7 +93,7 @@ Similarly, rules will be skipped if they can’t be modified by a bulk edit. For * **Update rule schedules**: Update the [schedules](/solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule) and look-back times on all selected rules. * **Apply Timeline template**: Apply a specified [Timeline template](/solutions/security/investigate/timeline-templates.md) to the selected rules. You can also choose **None** to remove Timeline templates from the selected rules. -3. On the flyout that opens, update the rule settings and actions. +3. On the page or flyout that opens, update the rule settings and actions. 
::::{tip} To [snooze](/solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) rule actions, go to the **Actions** tab and click the bell icon. @@ -147,7 +122,7 @@ When duplicating a rule with exceptions, you can choose to duplicate the rule an -## Run rules manually [manually-run-rules] +## Run rules manually [manually-run-rules] ::::{warning} This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. @@ -223,7 +198,7 @@ The `.ndjson` file also includes any actions, connectors, and exception lists re * **Actions and connectors**: Rule actions and connectors are included in the exported file, but sensitive information about the connector (such as authentication credentials) *is not* included. You must re-add missing connector details after importing detection rules. ::::{tip} - You can also use {{kib}}'s [Saved Objects](/explore-analyze/find-and-organize/saved-objects.md#saved-objects-export) UI to export and import necessary connectors before importing detection rules. + You can also use the [Saved Objects](/explore-analyze/find-and-organize/saved-objects.md#saved-objects-export) UI to export and import necessary connectors before importing detection rules. :::: * **Value lists**: Any value lists used for rule exceptions are *not* included in rule exports or imports. Use the [Manage value lists](/solutions/security/detect-and-alert/create-manage-value-lists.md#edit-value-lists) UI to export and import value lists separately. @@ -239,7 +214,7 @@ To export and import detection rules: 3. To import rules: ::::{note} - To import rules with actions, you need at least `Read` privileges for the `Action and Connectors` feature. To overwrite or add new connectors, you need `All` privileges for the `Actions and Connectors` feature. To import rules without actions, you don’t need `Actions and Connectors` privileges. Refer to [Enable and access detections](/solutions/security/detect-and-alert/detections-requirements.md#enable-detections-ui) for more information. + To import rules with or without actions, and to manage rule connectors, you must have the appropriate user role. Refer to [Enable and access detections](detections-requirements.md#enable-detections-ui) for more information. :::: @@ -277,5 +252,5 @@ You can also check rules' related integrations in the **Installed Rules** and ** ::: ::::{tip} -You can hide the **integrations** badge in the rules tables. To do this, turn off `securitySolution:showRelatedIntegrations` [advanced setting](/solutions/security/get-started/configure-advanced-settings.md#show-related-integrations). +You can hide the **integrations** badge in the rules tables by turning off the `securitySolution:showRelatedIntegrations` [advanced setting](/solutions/security/get-started/configure-advanced-settings.md#show-related-integrations). 
:::: diff --git a/solutions/security/detect-and-alert/mitre-attandckr-coverage.md b/solutions/security/detect-and-alert/mitre-attandckr-coverage.md index f261e1dbb1..7d1de15f36 100644 --- a/solutions/security/detect-and-alert/mitre-attandckr-coverage.md +++ b/solutions/security/detect-and-alert/mitre-attandckr-coverage.md @@ -4,14 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-rules-coverage.html --- -# MITRE ATT&CK® coverage - -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/rules-coverage.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-rules-coverage.md +# MITRE ATT&CK® coverage [security-rules-coverage] The **MITRE ATT&CK® coverage** page shows which [MITRE ATT&CK®](https://attack.mitre.org) adversary tactics and techniques are covered by your installed and enabled detection rules. This includes both Elastic prebuilt rules and custom rules. @@ -33,7 +26,7 @@ You can map custom rules to tactics in **Advanced settings** when creating or ed ::: -## Filter rules [_filter_rules] +## Filter rules [security-rules-coverage-filter-rules] Use the drop-down filters at the top of the page to control which of your installed detection rules are included in calculating coverage. @@ -48,7 +41,7 @@ Searches for tactics and techniques must match exactly, are case sensitive, and -## Expand and collapse cells [_expand_and_collapse_cells] +## Expand and collapse cells [security-rules-coverage-expand-and-collapse-cells] Click **Collapse cells** or **Expand cells** to change how much information the cells display. Cells always include the technique’s name and the number of sub-techniques covered by enabled rules. Expand the cells to also display counts of disabled and enabled rules for each technique. @@ -58,11 +51,11 @@ The counts inside cells are affected by how you filter the page. For example, if -## Enable rules [_enable_rules] +## Enable rules [security-rules-coverage-enable-rules] You can quickly enable all the rules for a specific technique that you’ve installed, but not enabled. Click the technique’s cell, then click **Enable all disabled** in the popup that appears. -## Learn more about techniques and sub-techniques [_learn_more_about_techniques_and_sub_techniques] +## Learn more about techniques and sub-techniques [security-rules-coverage-learn-more-about-techniques-and-sub-techniques] For more information on a specific technique and its sub-techniques, click the technique’s cell, then click the title in the popup that appears. This opens a new browser tab with the technique’s MITRE ATT&CK® documentation. diff --git a/solutions/security/detect-and-alert/monitor-rule-executions.md b/solutions/security/detect-and-alert/monitor-rule-executions.md index 68c79de681..e931fd453b 100644 --- a/solutions/security/detect-and-alert/monitor-rule-executions.md +++ b/solutions/security/detect-and-alert/monitor-rule-executions.md @@ -63,7 +63,7 @@ Use these controls to filter what’s included in the logs table: * The date and time picker sets the time range of rule executions included in the table. This is separate from the global date and time picker at the top of the rule details page. * The **Source event time range** button toggles the display of data pertaining to the time range of manual runs. * The **Show metrics columns** toggle includes more or less data in the table, pertaining to the timing of each rule execution. 
-* The **Actions** column allows you to show alerts generated from a given rule execution. Click the filter icon (![Filter icon](../../../images/security-filter-icon.png "")) to create a global search filter based on the rule execution’s ID value. This replaces any previously applied filters, changes the global date and time range to 24 hours before and after the rule execution, and displays a confirmation notification. You can revert this action by clicking **Restore previous filters** in the notification. +* The **Actions** column allows you to show alerts generated from a given rule execution. Click the filter icon (![Filter icon](../../../images/security-filter-icon.png "title =20x20")) to create a global search filter based on the rule execution’s ID value. This replaces any previously applied filters, changes the global date and time range to 24 hours before and after the rule execution, and displays a confirmation notification. You can revert this action by clicking **Restore previous filters** in the notification. ### Manual runs table [manual-runs-table] diff --git a/solutions/security/detect-and-alert/query-alert-indices.md b/solutions/security/detect-and-alert/query-alert-indices.md index 4e6bb7cb29..ab6c384123 100644 --- a/solutions/security/detect-and-alert/query-alert-indices.md +++ b/solutions/security/detect-and-alert/query-alert-indices.md @@ -4,14 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-query-alert-indices.html --- -# Query alert indices - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/query-alert-indices.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-query-alert-indices.md +# Query alert indices [security-query-alert-indices] This page explains how you should query alert indices, for example, when building rule queries, custom dashboards, or visualizations. For more information about alert event field definitions, review the [Alert schema](/reference/security/fields-and-object-schemas/alert-schema.md). @@ -20,16 +13,10 @@ This page explains how you should query alert indices, for example, when buildin -We recommend querying the following index aliases: -* If you’re using version 8.0.0 or later: `.alerts-security.alerts-` This alias includes the legacy (before 8.0.0) and the new alert indices. -* If you’re using a version before 8.0.0: `.siem-signals-` Queries on this alias will function regardless of your {{stack}} version but will not follow the newer `.alerts` naming convention and may be deprecated in future releases. - -Regardless of which alias you query, you should not include a dash or wildcard after the space ID. To query all spaces, use the following syntax: `.alerts-security.alerts-*` or `.siem-signals-*`. +We recommend querying the `.alerts-security.alerts-<space-ID>` index alias. You should not include a dash or wildcard after the space ID. To query all spaces, use the following syntax: `.alerts-security.alerts-*`. ## Alert indices [_alert_indices] -For additional context, alert events are stored in hidden {{es}} indices. We do not recommend querying them directly. The naming conventions for these indices and their aliases differ depending on which version of {{stack}} you’re running: - -* **8.0.0 or later:** `.internal.alerts-security.alerts--NNNNNN`, where `NNNNNN` is a number that increases over time, starting from 000001.
-* **Before 8.0.0:** `.siem-signals--NNNNNN`, where `NNNNNN` is a number that increases over time, starting from 000001. +For additional context, alert events are stored in hidden {{es}} indices. We do not recommend querying them directly. The naming convention for these indices and their aliases is `.internal.alerts-security.alerts-<space-ID>-NNNNNN`, where `NNNNNN` is a number that increases over time, starting from 000001. diff --git a/solutions/security/detect-and-alert/reduce-notifications-alerts.md b/solutions/security/detect-and-alert/reduce-notifications-alerts.md index 4454862e5d..0e8614bd6b 100644 --- a/solutions/security/detect-and-alert/reduce-notifications-alerts.md +++ b/solutions/security/detect-and-alert/reduce-notifications-alerts.md @@ -4,20 +4,13 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-reduce-notifications-alerts.html --- -# Reduce notifications and alerts - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/reduce-notifications-alerts.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-reduce-notifications-alerts.md +# Reduce notifications and alerts [security-reduce-notifications-alerts] {{elastic-sec}} offers several features to help reduce the number of notifications and alerts generated by your detection rules. This table provides a general comparison of these features, with links for more details: | | | | --- | --- | | [Rule action snoozing](/solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) | **Stops a specific rule’s notification actions from running**.

Use to avoid unnecessary notifications from a specific rule. The rule continues to run and generate alerts during the snooze period, but its [notification actions](/solutions/security/detect-and-alert/create-detection-rule.md#rule-response-action) don’t run.
| -| [Maintenance window](/explore-analyze/alerts-cases/alerts/maintenance-windows.md) | **Prevents all rules' notification actions from running**.

Use to avoid false alarms and unnecessary notifications during planned outages. All rules continue to run and generate alerts during the maintenance window, but their [notification actions](/solutions/security/detect-and-alert/create-detection-rule.md#rule-notifications) don’t run.

::::{note}
Maintenance windows are a {{kib}} feature, configured outside of the {{security-app}} in **Stack Management**.
::::

| +| [Maintenance window](/explore-analyze/alerts-cases/alerts/maintenance-windows.md) | **Prevents all rules' notification actions from running**.

Use to avoid false alarms and unnecessary notifications during planned outages. All rules continue to run and generate alerts during the maintenance window, but their [notification actions](/solutions/security/detect-and-alert/create-detection-rule.md#rule-notifications) don’t run.

**Note**: Maintenance windows are a {{kib}} feature, configured outside of the {{security-app}} in **Stack Management**.

| | [Alert suppression](/solutions/security/detect-and-alert/suppress-detection-alerts.md) | **Reduces repeated or duplicate alerts**.

Use to reduce the number of alerts created when a rule meets its criteria repeatedly. Duplicate qualifying events are grouped, and only one alert is created for each group.
| | [Rule exception](/solutions/security/detect-and-alert/rule-exceptions.md) | **Prevents a rule from creating alerts under specific conditions**.

Use to reduce false positive alerts by preventing trusted processes and network activity from generating unnecessary alerts. You can configure an exception to be used by a single rule or shared among multiple rules, but they typically don’t affect *all* rules.
| diff --git a/solutions/security/detect-and-alert/suppress-detection-alerts.md b/solutions/security/detect-and-alert/suppress-detection-alerts.md index 4db14ff8a3..9f41dcb0d0 100644 --- a/solutions/security/detect-and-alert/suppress-detection-alerts.md +++ b/solutions/security/detect-and-alert/suppress-detection-alerts.md @@ -4,22 +4,16 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-alert-suppression.html --- -# Suppress detection alerts +# Suppress detection alerts [security-alert-suppression] -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/alert-suppression.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-alert-suppression.md ::::{admonition} Requirements and notices -* Alert suppression requires a [Platinum or higher subscription](https://www.elastic.co/pricing). +* Alert suppression requires a [Platinum or higher subscription](https://www.elastic.co/pricing) in {{stack}}, or the appropriate [{{serverless-short}} project tier](../../../deploy-manage/deploy/elastic-cloud/project-settings.md). * {{ml-cap}} rules have [additional requirements](/solutions/security/advanced-entity-analytics/machine-learning-job-rule-requirements.md) for alert suppression. +* This functionality is in technical preview for event correlation rules only and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. :::: - Alert suppression allows you to reduce the number of repeated or duplicate detection alerts created by these detection rule types: * [Custom query](/solutions/security/detect-and-alert/create-detection-rule.md#create-custom-rule) @@ -39,7 +33,7 @@ Alert suppression is not available for Elastic prebuilt rules. However, if you w :::: -## Configure alert suppression [_configure_alert_suppression] +## Configure alert suppression [security-alert-suppression-configure-alert-suppression] You can configure alert suppression when you create or edit a supported rule type. Refer to documentation for creating [custom query](/solutions/security/detect-and-alert/create-detection-rule.md#create-custom-rule), [threshold](/solutions/security/detect-and-alert/create-detection-rule.md#create-threshold-rule), [event correlation](/solutions/security/detect-and-alert/create-detection-rule.md#create-eql-rule), [new terms](/solutions/security/detect-and-alert/create-detection-rule.md#create-new-terms-rule), [{{esql}}](/solutions/security/detect-and-alert/create-detection-rule.md#create-esql-rule), or [{{ml}}](/solutions/security/detect-and-alert/create-detection-rule.md#create-ml-rule) rules for detailed instructions. @@ -51,7 +45,7 @@ You can configure alert suppression when you create or edit a supported rule typ ::::{note} If you specify a field with multiple values, alerts with that field are handled as follows: - * **Custom query or threshold rules:** A group of alerts is created for each value. For example, if you suppress alerts by `destination.ip` of `[127.0.0.1, 127.0.0.2, 127.0.0.3]`, alerts will be suppressed separately for each value of `127.0.0.1`, `127.0.0.2`, and `127.0.0.3`. + * **Custom query or threshold rules:** Alerts are grouped by each unique value.
For example, if you suppress alerts by `destination.ip` of `[127.0.0.1, 127.0.0.2, 127.0.0.3]`, alerts will be suppressed separately for each value of `127.0.0.1`, `127.0.0.2`, and `127.0.0.3`. * **Indicator match, event correlation (non-sequence queries only), new terms, {{esql}}, or {{ml}} rules:** Alerts with the specified field name and identical array values are grouped together. For example, if you suppress alerts by `destination.ip` of `[127.0.0.1, 127.0.0.2, 127.0.0.3]`, alerts with the entire array are grouped and only one alert is created for the group. * **Event correlation (sequence queries only) rules:** If the specified field contains an array of values, suppression only happens if the field’s values are an exact match and in the same order. For example, if you specify the field `myips` and one sequence alert has [1.1.1.1, 0.0.0.0] and another sequence alert has [1.1.1.1, 192.168.0.1], neither of those alerts will be suppressed, despite sharing an array element. @@ -64,7 +58,7 @@ You can configure alert suppression when you create or edit a supported rule typ :::: - * **Per rule execution**: Create an alert each time the rule runs and meets its criteria. + * **Per rule execution**: Create an alert each time the rule runs and an event meets its criteria. * **Per time period**: Create one alert for all qualifying events that occur within a specified time window, beginning from when an event first meets the rule criteria and creates the alert. For example, if a rule runs every 5 minutes but you don’t need alerts that frequently, you can set the suppression time period to a longer time, such as 1 hour. If the rule meets its criteria, it creates an alert at that time, and for the next hour, it’ll suppress any subsequent qualifying events. @@ -93,7 +87,7 @@ You can configure alert suppression when you create or edit a supported rule typ -## Confirm suppressed alerts [_confirm_suppressed_alerts] +## Confirm suppressed alerts [security-alert-suppression-confirm-suppressed-alerts] The {{security-app}} displays several indicators of whether a detection alert was created with alert suppression enabled, and how many duplicate alerts were suppressed. @@ -125,7 +119,7 @@ After an alert is moved to the `Closed` status, it will no longer suppress new a -## Investigate events for suppressed alerts [_investigate_events_for_suppressed_alerts] +## Investigate events for suppressed alerts [security-alert-suppression-investigate-events-for-suppressed-alerts] With alert suppression, detection alerts aren’t created for the grouped source events, but you can still retrieve the events for further analysis or investigation. Do one of the following to open Timeline with the original events associated with both the created alert and the suppressed alerts: @@ -139,9 +133,9 @@ With alert suppression, detection alerts aren’t created for the grouped source * Alert details flyout — Select **Take action** → **Investigate in timeline**. 
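For reference, alert suppression can also be configured through the detection rules API when creating or updating a rule, rather than in the rule editing UI. The following is a minimal, illustrative sketch that assumes a custom query rule and the `alert_suppression` object accepted by the create rule endpoint; the rule name, index patterns, query, and suppression values are placeholders, so check the detection rules API reference for the exact schema in your version.

```console
POST kbn:/api/detection_engine/rules
{
  "name": "Example rule with alert suppression",
  "description": "Creates one alert per destination.ip per hour instead of one alert per qualifying event.",
  "type": "query",
  "index": ["logs-*"],
  "query": "destination.ip : *",
  "risk_score": 21,
  "severity": "low",
  "alert_suppression": {
    "group_by": ["destination.ip"],          // suppress by up to three fields
    "duration": { "value": 1, "unit": "h" }, // omit to suppress per rule execution instead
    "missing_fields_strategy": "suppress"    // or "doNotSuppress"
  }
}
```

In this sketch, qualifying events that share a `destination.ip` value are grouped for one hour and create a single alert; with `doNotSuppress`, events missing the suppression field would each create a separate alert instead of being grouped together.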
-## Alert suppression limit by rule type [_alert_suppression_limit_by_rule_type] +## Alert suppression limit by rule type [security-alert-suppression-alert-suppression-limit-by-rule-type] Some rule types have a maximum number of alerts that can be suppressed (custom query rules don’t have a suppression limit): * **Threshold, event correlation, {{esql}}, and {{ml}}:** The maximum number of alerts is the value you choose for the rule’s **Max alerts per run** [advanced setting](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params), which is `100` by default. -* **Indicator match and new terms:** The maximum number is five times the value you choose for the rule’s **Max alerts per run** [advanced setting](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params). The default value is `100`, which means the default maximum limit for indicator match rules and new term rules is `500`. +* **Indicator match and new terms:** The maximum number is five times the value you choose for the rule’s **Max alerts per run** [advanced setting](/solutions/security/detect-and-alert/create-detection-rule.md#rule-ui-advanced-params). The default value is `100`, which means the default maximum limit for indicator match rules and new terms rules is `500`. diff --git a/solutions/security/detect-and-alert/tune-detection-rules.md b/solutions/security/detect-and-alert/tune-detection-rules.md index 7ff21a33b2..0467468241 100644 --- a/solutions/security/detect-and-alert/tune-detection-rules.md +++ b/solutions/security/detect-and-alert/tune-detection-rules.md @@ -4,26 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-tune-detection-signals.html --- -# Tune detection rules - -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/tuning-detection-signals.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$tune-indicator-rules$$$ - -$$$filter-rule-process$$$ - -$$$tune-authorized-processes$$$ - -$$$tune-network-rules$$$ - -$$$tune-windows-rules$$$ +# Tune detection rules [security-tune-detection-signals] Using the {{security-app}}, you can tune prebuilt and custom detection rules to optimize alert generation. To reduce noise, you can: @@ -154,7 +135,7 @@ Take the following steps to tune indicator match rules: -### Noise from common cloud-based network traffic [_noise_from_common_cloud_based_network_traffic] +### Noise from common cloud-based network traffic [security-tune-detection-signals-noise-from-common-cloud-based-network-traffic] In cloud-based organizations, remote workers sometimes access services over the internet. 
The security policies of home networks probably differ from the security policies of managed corporate networks, and these rules might need tuning to reduce noise from legitimate administrative activities: diff --git a/solutions/security/detect-and-alert/using-logsdb-index-mode-with-elastic-security.md b/solutions/security/detect-and-alert/using-logsdb-index-mode-with-elastic-security.md index 95fe8d2d2f..970179ab38 100644 --- a/solutions/security/detect-and-alert/using-logsdb-index-mode-with-elastic-security.md +++ b/solutions/security/detect-and-alert/using-logsdb-index-mode-with-elastic-security.md @@ -4,21 +4,14 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/detections-logsdb-index-mode-impact.html --- -# Using logsdb index mode with Elastic Security - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/detections-logsdb-index-mode-impact.md -% - [ ] ./raw-migrated-files/docs-content/serverless/detections-logsdb-index-mode-impact.md +# Using logsdb index mode with Elastic Security [detections-logsdb-index-mode-impact] ::::{note} To use the [synthetic `_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source) feature, you must have the appropriate subscription. Refer to the subscription page for [Elastic Cloud](https://www.elastic.co/subscriptions/cloud) and [Elastic Stack/self-managed](https://www.elastic.co/subscriptions) for the breakdown of available features and their associated subscription tiers. :::: -This topic explains the impact of using logsdb index mode with {{elastic-sec}}. +This topic explains the impact of using logsdb index mode with {{elastic-sec}} and {{sec-serverless}}. With logsdb index mode, the original `_source` field is not stored in the index but can be reconstructed using [synthetic `_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source). @@ -26,7 +19,9 @@ When the `_source` is reconstructed, [modifications](elasticsearch://reference/e Continue reading to find out how this affects specific {{elastic-sec}} components. -::::{note} +::::{note} +The following statement applies to {{stack}} users only: + Logsdb is not recommended for {{elastic-sec}} at this time. Users must fully understand and accept the documented changes to detection alert documents (see below), and ensure their deployment has excess hot data tier CPU resource capacity before enabling logsdb mode, as logsdb mode requires additional CPU resources during the ingest/indexing process. Enabling logsdb without sufficient hot data tier CPU may result in data ingestion backups and/or security detection rule timeouts and errors. 
:::: diff --git a/solutions/security/detect-and-alert/view-detection-alert-details.md b/solutions/security/detect-and-alert/view-detection-alert-details.md index 30cf36cf83..365c8c5e3b 100644 --- a/solutions/security/detect-and-alert/view-detection-alert-details.md +++ b/solutions/security/detect-and-alert/view-detection-alert-details.md @@ -4,34 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-view-alert-details.html --- -# View detection alert details - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/view-alert-details.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-view-alert-details.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$correlations-overview$$$ - -$$$entities-overview$$$ - -$$$expanded-visualizations-view$$$ - -$$$insights-section$$$ - -$$$investigation-section$$$ - -$$$left-panel$$$ - -$$$prevalence-overview$$$ - -$$$preview-panel$$$ - -$$$threat-intelligence-overview$$$ +# View detection alert details [security-view-alert-details] To learn more about an alert, click the **View details** button from the Alerts table. This opens the alert details flyout, which helps you understand and manage the alert. @@ -60,10 +33,11 @@ The right panel provides an overview of the alert. Expand any of the collapsed s From the right panel, you can also: * Click **Expand details** to open the [left panel](/solutions/security/detect-and-alert/view-detection-alert-details.md#left-panel), which shows more information about sections in the right panel. -* Click the **Chat** icon (![AI assistant chat icon](../../../images/security-ai-assistant-chat.png "")) to access the [*AI Assistant*](/solutions/security/ai/ai-assistant.md). -* Click the **Share alert** icon (![Share alert icon](../../../images/security-share-alert.png "")) to get a shareable alert URL. We *do not* recommend copying the URL from your browser’s address bar, which can lead to inconsistent results if you’ve set up filters or relative time ranges for the Alerts page. +* Click the **Chat** icon (![AI assistant chat icon](../../../images/security-ai-assistant-chat.png "title =20x20")) to access the [AI Assistant](/solutions/security/ai/ai-assistant.md). +* Click the **Share alert** icon (![Share alert icon](../../../images/security-share-alert.png "title =20x20")) to get a shareable alert URL. We *do not* recommend copying the URL from your browser’s address bar, which can lead to inconsistent results if you’ve set up filters or relative time ranges for the Alerts page. ::::{note} + For {{stack}} users only: If you’ve configured the [`server.publicBaseUrl`](/deploy-manage/deploy/self-managed/configure.md#server-publicBaseUrl) setting in the `kibana.yml` file, the shareable URL is also in the `kibana.alert.url` field. You can find the field by searching for `kibana.alert.url` on the **Table** tab. :::: @@ -72,14 +46,14 @@ From the right panel, you can also: If you’ve enabled grouping on the Alerts page, the alert details flyout won’t open until you expand a collapsed group and select an individual alert. :::: -* Click the **Flyout settings** icon (![Flyout settings icon](../../../images/security-flyout-settings.png "")) to access options for displaying the alert details flyout. The **Overlay*** option (which displays the flyout over the Alerts table) is selected by default. 
Select ***Push** to display the flyout to the side of the table instead. In either display, you can resize the flyout panels to your liking. Clicking **Reset size** reverts the flyout to its default dimensions. +* Click the **Flyout settings** icon (![Flyout settings icon](../../../images/security-flyout-settings.png "title =20x20")) to access options for displaying the alert details flyout. The **Overlay** option (which displays the flyout over the Alerts table) is selected by default. Select **Push** to display the flyout to the side of the table instead. In either display, you can resize the flyout panels to your liking. Clicking **Reset size** reverts the flyout to its default dimensions. * Find basic details about the alert, such as the: * Associated rule * Alert status and when the alert was created * Alert severity and risk score (these are inherited from the rule that generated the alert) - * Users assigned to the alert (click the **Assign alert** ![Assign alert](../../../images/security-assign-alert.png "") icon to assign more users) - * Notes attached to the alert (click the **Add note** ![Add note](../../../images/security-add-note-icon.png "") icon to create a new note) + * Users assigned to the alert (click the **Assign alert** ![Assign alert](../../../images/security-assign-alert.png "title =20x20") icon to assign more users) + * Notes attached to the alert (click the **Add note** ![Add note](../../../images/security-add-note-icon.png "title =20x20") icon to create a new note) * Click the **Table** or **JSON** tabs to display the alert details in table or JSON format. In table format, alert details are displayed as field-value pairs. @@ -166,8 +140,8 @@ The Visualizations section is located on the **Overview** tab in the right panel Click **Visualizations** to display the following previews: -* **Session viewer preview**: Shows a preview of [Session View](/solutions/security/investigate/session-view.md) data. Click **Session viewer preview** to open the **Session View** tab in Timeline. -* **Analyzer preview**: Shows a preview of the [visual analyzer graph](/solutions/security/investigate/visual-event-analyzer.md). The preview displays up to three levels of the analyzed event’s ancestors and up to three levels of the event’s descendants and children. The ellipses symbol (**`...`***) indicates the event has more ancestors and descendants to examine. Click ***Analyzer preview** to open the **Event Analyzer** tab in Timeline. +* **Session view preview**: Shows a preview of [Session View](/solutions/security/investigate/session-view.md) data. Click **Session view preview** to open the **Session View** tab in Timeline. +* **Analyzer preview**: Shows a preview of the [visual analyzer graph](/solutions/security/investigate/visual-event-analyzer.md). The preview displays up to three levels of the analyzed event’s ancestors and up to three levels of the event’s descendants and children. The ellipses symbol (**`...`**) indicates the event has more ancestors and descendants to examine. Click **Analyzer preview** to open the **Event Analyzer** tab in Timeline. ### Expanded visualizations view [expanded-visualizations-view] @@ -210,7 +184,7 @@ The Insights section is located on the **Overview** tab in the right panel. It o ### Entities [entities-overview] -The Entities overview provides high-level details about the user and host that are related to the alert. Host and user risk classifications are also available with a [Platinum subscription](https://www.elastic.co/pricing) or higher.
+The Entities overview provides high-level details about the user and host that are related to the alert. Host and user risk classifications are also available with a [Platinum subscription](https://www.elastic.co/pricing) or higher in {{stack}} or the Security Analytics Complete [project feature](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) in {{serverless-short}}. :::{image} ../../../images/security-entities-overview.png :alt: Overview of the entity details section in the right panel @@ -220,7 +194,7 @@ The Entities overview provides high-level details about the user and host that a #### Expanded entities view [expanded-entities-view] -From the right panel, click **Entities** to open a detailed view of the host and user associated with the alert. The expanded view also includes risk scores and classifications (if you have a Platinum subscription or higher) and activity on related hosts and users. +From the right panel, click **Entities** to open a detailed view of the host and user associated with the alert. The expanded view also includes risk scores and classifications, as well as activity on related hosts and users. Access to these features requires a [Platinum subscription](https://www.elastic.co/pricing) or higher in {{stack}} or the Security Analytics Complete [project feature](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) in {{serverless-short}}. :::{image} ../../../images/security-expanded-entities-view.png :alt: Expanded view of entity details @@ -306,7 +280,7 @@ The Correlations overview provides the following information: * **Alerts related by process ancestry**: Shows the number of alerts that are related by process events on the same linear branch. ::::{note} - To access data about alerts related by process ancestry, you must have a [Platinum or higher subscription](https://www.elastic.co/pricing). + To access data about alerts related by process ancestry, you must have a [Platinum or higher subscription](https://www.elastic.co/pricing) in {{stack}} or the appropriate [{{serverless-short}} project tier](../../../deploy-manage/deploy/elastic-cloud/project-settings.md). :::: @@ -322,12 +296,7 @@ From the right panel, click **Correlations** to open the expanded Correlations view In the expanded view, correlation data is organized into several tables: -* **Suppressed alerts**: Shows how many duplicate alerts were suppressed. This information only appears if alert suppression is enabled for the rule. - - ::::{warning} - This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. - :::: - +* **Suppressed alerts**: Shows how many duplicate alerts were suppressed. This information only appears if [alert suppression](/solutions/security/detect-and-alert/suppress-detection-alerts.md) is enabled for the rule. * **Related cases**: Shows cases to which the alert has been added. Click a case’s name to open its details. * **Alerts related by source event**: Shows alerts created by the same source event. This can help you find alerts with a shared origin and provide more context about the source event. Click the **Investigate in timeline** button to examine related alerts in Timeline. * **Alerts related by session**: Shows alerts generated during the same [session](/solutions/security/investigate/session-view.md).
These alerts share the same session ID, which is a unique ID for tracking a given Linux session. To use this feature, you must enable the **Collect session data** setting in your {{elastic-defend}} integration policy. Refer to [Enable Session View data](/solutions/security/investigate/session-view.md#enable-session-view) for more information. @@ -360,7 +329,7 @@ The expanded Prevalence view provides the following details: * **Alert count**: Shows the total number of alert documents that have identical highlighted field values, including the alert you’re currently examining. For example, if the `host.name` field has an alert count of 5, that means there are five total alerts with the same `host.name` value. The Alert count column only retrieves documents that contain the [`event.kind:signal`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair. * **Document count**: Shows the total number of event documents that have identical field values. A dash (`——`) displays if there are no event documents that match the field value. The Document count column only retrieves documents that don’t contain the [`event.kind:signal`](asciidocalypse://docs/ecs/docs/reference/ecs-allowed-values-event-kind.md#ecs-event-kind-signal) field-value pair. -The following features require a [Platinum subscription](https://www.elastic.co/pricing) or higher: +The following features require a [Platinum subscription](https://www.elastic.co/pricing) or higher in {{stack}} or the appropriate [{{serverless-short}} project tier](../../../deploy-manage/deploy/elastic-cloud/project-settings.md): * **Host prevalence**: Shows the percentage of unique hosts that have identical field values. Host prevalence for highlighted fields is calculated by taking the number of unique hosts with identical highlighted field values and dividing that number by the total number of unique hosts in your environment. * **User prevalence**: Shows the percentage of unique users that have identical highlighted field values. User prevalence for highlighted fields is calculated by taking the number of unique users with identical field values and dividing that number by the total number of unique users in your environment. diff --git a/solutions/security/detect-and-alert/visualize-detection-alerts.md b/solutions/security/detect-and-alert/visualize-detection-alerts.md index 7f4e09dee5..8fd0c8aec4 100644 --- a/solutions/security/detect-and-alert/visualize-detection-alerts.md +++ b/solutions/security/detect-and-alert/visualize-detection-alerts.md @@ -4,14 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/security-visualize-alerts.html --- -# Visualize detection alerts - -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/visualize-alerts.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-visualize-alerts.md +# Visualize detection alerts [security-visualize-alerts] Visualize and group detection alerts by specific parameters in the visualization section of the Alerts page. @@ -32,14 +25,14 @@ Some view types don’t have the **Group by top** option. You can also leave **G :::: -To reset a view to default settings, hover over it and click the options menu (![Settings menu icon](../../../images/security-three-dot-icon.png "")) that appears, then select **Reset group by fields**.
+To reset a view to default settings, hover over it and click the options menu (![More actions](../../../images/security-three-dot-icon.png "title =20x20")) that appears, then select **Reset group by fields**. ::::{tip} The options menu also lets you inspect the visualization’s queries. For the trend and counts views, you can add the visualization to a new or existing case, or open it in Lens. :::: -Click the collapse icon (![Collapse icon](../../../images/security-collapse-icon-horiz-down.png "")) to minimize the visualization section and display a summary of key information instead. +Click the collapse icon (![Collapse icon](../../../images/security-collapse-icon-horiz-down.png "title =20x20")) to minimize the visualization section and display a summary of key information instead. :::{image} ../../../images/security-alert-page-viz-collapsed.png :alt: Alerts page with visualizations section collapsed @@ -47,7 +40,7 @@ Click the collapse icon (![Collapse icon](../../../images/security-collapse-icon ::: -## Summary [_summary] +## Summary [security-visualize-alerts-summary] On the Alerts page, the summary visualization displays by default and shows how alerts are distributed across these indicators: @@ -63,7 +56,7 @@ You can hover and click on elements within the summary — such as severity leve ::: -## Trend [_trend] +## Trend [security-visualize-alerts-trend] The trend view shows the occurrence of alerts over time. By default, it groups alerts by detection rule name (`kibana.alert.rule.name`). @@ -78,7 +71,7 @@ The **Group by top** menu is unavailable for the trend view. ::: -## Counts [_counts] +## Counts [security-visualize-alerts-counts] The counts view shows the count of alerts in each group. By default, it groups alerts first by detection rule name (`kibana.alert.rule.name`), then by host name (`host.name`). @@ -88,7 +81,7 @@ The counts view shows the count of alerts in each group. By default, it groups a ::: -## Treemap [_treemap] +## Treemap [security-visualize-alerts-treemap] The treemap view shows the distribution of alerts as nested, proportionally-sized tiles. This view can help you quickly pinpoint the most prevalent and critical alerts. diff --git a/solutions/security/endpoint-response-actions.md b/solutions/security/endpoint-response-actions.md index d6062d1305..269a7adeb6 100644 --- a/solutions/security/endpoint-response-actions.md +++ b/solutions/security/endpoint-response-actions.md @@ -292,7 +292,7 @@ You can also get a list of commands in the [Help panel](/solutions/security/endp ## Help panel [help-panel] -Click ![Help icon](../../images/security-help-icon.png "") **Help** in the upper-right to open the **Help** panel, which lists available response action commands and parameters as a reference. +Click ![Help icon](../../images/security-help-icon.png "title =20x20") **Help** in the upper-right to open the **Help** panel, which lists available response action commands and parameters as a reference. ::::{note} This panel displays only the response actions that you have the user role or privileges to perform. @@ -305,7 +305,7 @@ This panel displays only the response actions that you have the user role or pri :screenshot: ::: -You can use this panel to build commands with less typing. Click the add icon (![Add icon](../../images/security-add-command-icon.png "")) to add a command to the input area, enter any additional parameters or a comment, then press **Return** to run the command. +You can use this panel to build commands with less typing. 
Click the add icon (![Add icon](../../images/security-add-command-icon.png "title =20x20")) to add a command to the input area, enter any additional parameters or a comment, then press **Return** to run the command. If the endpoint is running an older version of {{agent}}, some response actions may not be supported, as indicated by an informational icon and tooltip. [Upgrade {{agent}}](/reference/ingestion-tools/fleet/upgrade-elastic-agent.md) on the endpoint to be able to use the latest response actions. diff --git a/solutions/security/endpoint-response-actions/isolate-host.md b/solutions/security/endpoint-response-actions/isolate-host.md index 609c1de3bd..d50902dd46 100644 --- a/solutions/security/endpoint-response-actions/isolate-host.md +++ b/solutions/security/endpoint-response-actions/isolate-host.md @@ -45,7 +45,7 @@ All actions executed on a host are tracked in the host’s response actions hist ::::{dropdown} Isolate a host from a detection alert 1. Open a detection alert: - * From the Alerts table or Timeline: Click **View details** (![View details icon](../../../images/security-view-details-icon.png "")). + * From the Alerts table or Timeline: Click **View details** (![View details icon](../../../images/security-view-details-icon.png "title =20x20")). * From a case with an attached alert: Click **Show alert details** (**>**). 2. Click **Take action → Isolate host**. @@ -116,7 +116,7 @@ After the host is successfully isolated, an **Isolated** status is added to the ::::{dropdown} Release a host from a detection alert 1. Open a detection alert: - * From the Alerts table or Timeline: Click **View details** (![View details icon](../../../images/security-view-details-icon.png "")). + * From the Alerts table or Timeline: Click **View details** (![View details icon](../../../images/security-view-details-icon.png "title =20x20")). * From a case with an attached alert: Click **Show alert details** (**>**). 2. From the alert details flyout, click **Take action → Release host**. diff --git a/solutions/security/get-started/automatic-import.md b/solutions/security/get-started/automatic-import.md index 0e0331febd..26b0b95c72 100644 --- a/solutions/security/get-started/automatic-import.md +++ b/solutions/security/get-started/automatic-import.md @@ -29,9 +29,10 @@ Click [here](https://elastic.navattic.com/automatic-import) to access an interac ::::{admonition} Requirements * A working [LLM connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md). Recommended models: `Claude 3.5 Sonnet`; `GPT-4o`; `Gemini-1.5-pro-002`. -* An [Enterprise](https://www.elastic.co/pricing) subscription. +* {{stack}} users: An [Enterprise](https://www.elastic.co/pricing) subscription. +* {{serverless-short}} users: A [Security Analytics Complete subscription](/deploy-manage/deploy/elastic-cloud/project-settings.md). * A sample of the data you want to import, in a structured or unstructured format (including JSON, NDJSON, and Syslog). -* To import data from a REST API, have its OpenAPI specification (OAS) file ready. +* To import data from a REST API: its OpenAPI specification (OAS) file. :::: @@ -47,32 +48,29 @@ Using Automatic Import allows users to create new third-party data integrations 1. In {{elastic-sec}}, click **Add integrations**. 2. Under **Can’t find an integration?** click **Create new integration**.
- :::{image} ../../../images/security-auto-import-create-new-integration-button.png - :alt: The Integrations page with the Create new integration button highlighted - ::: + :::{image} ../../../images/security-auto-import-create-new-integration-button.png + :alt: The Integrations page with the Create new integration button highlighted + ::: 3. Click **Create integration**. 4. Select an [LLM connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md). -5. Define how your new integration will appear on the Integrations page by providing a **Title**, **Description***, and ***Logo**. Click **Next**. +5. Define how your new integration will appear on the Integrations page by providing a **Title**, **Description**, and **Logo**. Click **Next**. 6. Define your integration’s package name, which will prefix the imported event fields. 7. Define your **Data stream title**, **Data stream description**, and **Data stream name**. These fields appear on the integration’s configuration page to help identify the data stream it writes to. 8. Select your [**Data collection method**](asciidocalypse://docs/beats/docs/reference/filebeat/configuration-filebeat-options.md). This determines how your new integration will ingest the data (for example, from an S3 bucket, an HTTP endpoint, or a file stream). - ::::{admonition} Importing CEL data - :class: note - - If you select **API (CEL input)**, you’ll have the additional option to upload the API’s OAS file here. After you do, the LLM will use it to determine which API endpoints (GET only), query parameters, and data structures to use in the new custom integration. You will then select which API endpoints to consume and your authentication method before uploading your sample data. - - :::: + ::::{admonition} Importing CEL data + :class: note + If you select **API (CEL input)**, you’ll have the additional option to upload the API’s OAS file here. After you do, the LLM will use it to determine which API endpoints (GET only), query parameters, and data structures to use in the new custom integration. You will then select which API endpoints to consume and your authentication method before uploading your sample data. + :::: 9. Upload a sample of your data. Make sure to include all the types of events that you want the new integration to handle. - ::::{admonition} Best practices for sample data - * For JSON and NDJSON samples, each object in your sample should represent an event, and you should avoid deeply nested object structures. - * The more variety in your sample, the more accurate the pipeline will be. Include a wide range of unique log entries instead of just repeating the same type of entry. Automatic Import will select up to 100 different events from your sample to use as the basis for the new integration. - * Ideally, each field name should describe what the field does. - - :::: + ::::{admonition} Best practices for sample data + * For JSON and NDJSON samples, each object in your sample should represent an event, and you should avoid deeply nested object structures. + * The more variety in your sample, the more accurate the pipeline will be. Include a wide range of unique log entries instead of just repeating the same type of entry. Automatic Import will select up to 100 different events from your sample to use as the basis for the new integration. + * Ideally, each field name should describe what the field does. + :::: 10. Click **Analyze logs**, then wait for processing to complete. This may take several minutes. 11. 
After processing is complete, the pipeline’s field mappings appear, including ECS and custom fields. diff --git a/solutions/security/get-started/elastic-security-ui.md b/solutions/security/get-started/elastic-security-ui.md index b95896af9c..de8c3262d0 100644 --- a/solutions/security/get-started/elastic-security-ui.md +++ b/solutions/security/get-started/elastic-security-ui.md @@ -19,22 +19,22 @@ Filter for alerts, events, processes, and other important security data by enter :screenshot: ::: -* To refine your search results, select **Add Filter** (![Add filter icon](../../../images/security-add-filter-icon.png "")), then enter the field, operator (such as `is not` or `is between`), and value for your filter. -* To save the current KQL query and any applied filters, select **Saved query menu** (![Saved query menu icon](../../../images/security-saved-query-menu-icon.png "")), enter a name for the saved query, and select **Save saved query**. +* To refine your search results, select **Add Filter** (![Add filter icon](../../../images/security-add-filter-icon.png "title =20x20")), then enter the field, operator (such as `is not` or `is between`), and value for your filter. +* To save the current KQL query and any applied filters, select **Saved query menu** (![Saved query menu icon](../../../images/security-saved-query-menu-icon.png "title =20x20")), enter a name for the saved query, and select **Save saved query**. ## Navigation menu [navigation-menu-overview] -The navigation menu contains direct links and expandable groups, identified by the group icon (![Group icon](../../../images/security-group-icon.png "")). +The navigation menu contains direct links and expandable groups, identified by the group icon (![Group icon](../../../images/security-group-icon.png "title =20x20")). * Click a top-level link to go directly to its landing page, which contains links and information for related pages. -* Click a group’s icon (![Group icon](../../../images/security-group-icon.png "")) to open its flyout menu, which displays links to related pages within that group. Click a link in the flyout to navigate to its landing page. -* Click the **Collapse side navigation** icon (![Side menu collapse icon](../../../images/security-side-button.png "")) to collapse and expand the main navigation menu. +* Click a group’s icon (![Group icon](../../../images/security-group-icon.png "title =20x20")) to open its flyout menu, which displays links to related pages within that group. Click a link in the flyout to navigate to its landing page. +* Click the **Collapse side navigation** icon (![Side menu collapse icon](../../../images/security-side-button.png "title =20x20")) to collapse and expand the main navigation menu. ## Visualization actions [visualization-actions] -Many {{elastic-sec}} histograms, graphs, and tables display an **Inspect** button (![Inspect icon](../../../images/security-inspect-icon.png "")) when you hover over them. Click to examine the {{es}} queries used to retrieve data throughout the app. +Many {{elastic-sec}} histograms, graphs, and tables display an **Inspect** button (![Inspect icon](../../../images/security-inspect-icon.png "title =20x20")) when you hover over them. Click to examine the {{es}} queries used to retrieve data throughout the app. 
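For example, inspecting an open-alerts histogram might reveal an aggregation request along the following lines. This is an illustrative sketch only; the exact query depends on the visualization and on any filters or time ranges you have applied.

```json
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "term": { "kibana.alert.workflow_status": "open" } }
      ]
    }
  },
  "aggs": {
    "alerts_by_rule": {
      "terms": { "field": "kibana.alert.rule.name", "size": 10 }
    }
  }
}
```

Reviewing the inspected query this way can help you confirm which indices, fields, and aggregations a visualization actually reads from.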
:::{image} ../../../images/security-inspect-icon-context.png :alt: Inspect icon @@ -42,7 +42,7 @@ Many {{elastic-sec}} histograms, graphs, and tables display an **Inspect** butto :screenshot: ::: -Other visualizations display an options menu (![Three-dot menu icon](../../../images/security-three-dot-icon.png "")), which allows you to inspect the visualization’s queries, add it to a new or existing case, or open it in Lens for customization. +Other visualizations display an options menu (![Three-dot menu icon](../../../images/security-three-dot-icon.png "title =20x20")), which allows you to inspect the visualization’s queries, add it to a new or existing case, or open it in Lens for customization. :::{image} ../../../images/security-viz-options-menu-open.png :alt: Options menu opened @@ -61,7 +61,7 @@ Throughout the {{security-app}}, you can hover over many data fields and values :screenshot: ::: -In some visualizations, these actions are available in the legend by clicking a value’s options icon (![Vertical three-dot icon](../../../images/security-three-dot-icon-vertical.png "")). +In some visualizations, these actions are available in the legend by clicking a value’s options icon (![Vertical three-dot icon](../../../images/security-three-dot-icon-vertical.png "title =20x20")). :::{image} ../../../images/security-inline-actions-legend.png :alt: Actions in a visualization legend diff --git a/solutions/security/investigate/add-osquery-response-actions.md b/solutions/security/investigate/add-osquery-response-actions.md index 1b0ef4803c..25bada8059 100644 --- a/solutions/security/investigate/add-osquery-response-actions.md +++ b/solutions/security/investigate/add-osquery-response-actions.md @@ -47,7 +47,7 @@ You can add Osquery Response Actions to new or existing custom query rules. Quer * **Query**: Select a saved query or enter a new one. After you enter the query, you can expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](/solutions/security/investigate/osquery.md#osquery-map-fields) included in the results from the live query (optional). ::::{note} - Overwriting the query’s default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. + Overwriting the query’s default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `86400` (24 hours). :::: diff --git a/solutions/security/investigate/osquery.md b/solutions/security/investigate/osquery.md index 7bae16f149..8bb4029646 100644 --- a/solutions/security/investigate/osquery.md +++ b/solutions/security/investigate/osquery.md @@ -45,7 +45,7 @@ To inspect hosts, run a query against one or more agents or policies, then view * **Query**: Select a saved query or enter a new one in the text box. After you enter the query, you can expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](#osquery-map-fields) included in the results from the live query (optional). ::::{note} - Overwriting the query’s default timeout period allows you to support queries that require more time to complete. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. 
+ Overwriting the query’s default timeout period allows you to support queries that require more time to complete. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `86400` (24 hours). :::: * **Pack**: Select from available query packs. After you select a pack, all of the queries in the pack are displayed. @@ -123,7 +123,7 @@ You can run packs as live queries or schedule packs to run for one or more agent * Click **Add query** and then add a saved query or enter a new query. Each query must include a unique query ID and the interval at which it should run. Optionally, set the minimum Osquery version and platform, specify a timeout period, or [map ECS fields](#osquery-map-fields). When you add a saved query to a pack, this adds a copy of the query. A connection is not maintained between saved queries and packs. ::::{note} - Overwriting the query’s default timeout period allows you to support queries that require more time to complete. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. + Overwriting the query’s default timeout period allows you to support queries that require more time to complete. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `86400` (24 hours). :::: * Upload queries from a `.conf` query pack by dragging the pack to the drop zone under the query table. To explore the community packs that Osquery publishes, click **Example packs**. @@ -161,7 +161,7 @@ Once you save a query, you can only edit it from the **Saved queries** tab: * The unique identifier (required). * A brief description. * The SQL query (required). Osquery supports multi-line queries. - * A timeout period (optional). Increase the query’s default timeout period to support queries that require more time to complete. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. + * A timeout period (optional). Increase the query’s default timeout period to support queries that require more time to complete. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `86400` (24 hours). * The [ECS fields](#osquery-map-fields) to populate when the query is run (optional). These fields are also copied in when you add this query to a pack. * The defaults to set when you add the query to a pack. diff --git a/solutions/security/investigate/run-osquery-from-alerts.md b/solutions/security/investigate/run-osquery-from-alerts.md index c25172fd2a..adcff32cf8 100644 --- a/solutions/security/investigate/run-osquery-from-alerts.md +++ b/solutions/security/investigate/run-osquery-from-alerts.md @@ -35,7 +35,7 @@ To run Osquery from an alert: * **Query**: Select a saved query or enter a new one in the text box. After you enter the query, you can expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](/solutions/security/investigate/osquery.md#osquery-map-fields) included in the results from the live query (optional). ::::{note} - Overwriting the query’s default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. + Overwriting the query’s default timeout period allows you to support queries that take longer to run. 
The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `86400` (24 hours). :::: diff --git a/solutions/security/investigate/run-osquery-from-investigation-guides.md b/solutions/security/investigate/run-osquery-from-investigation-guides.md index a0afb1d902..96d95feaa2 100644 --- a/solutions/security/investigate/run-osquery-from-investigation-guides.md +++ b/solutions/security/investigate/run-osquery-from-investigation-guides.md @@ -52,7 +52,7 @@ You can only add Osquery to investigation guides for custom rules because prebui 3. Expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](/solutions/security/investigate/osquery.md#osquery-map-fields) included in the results from the live query (optional). ::::{note} - Overwriting the query’s default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. + Overwriting the query’s default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `86400` (24 hours). :::: @@ -76,7 +76,7 @@ You can only add Osquery to investigation guides for custom rules because prebui 2. Expand the **Advanced** section to set a timeout period for the query, and view or set [mapped ECS fields](/solutions/security/investigate/osquery.md#osquery-map-fields) included in the results from the live query (optional). ::::{note} - Overwriting the query’s default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `900`. + Overwriting the query’s default timeout period allows you to support queries that take longer to run. The default and minimum supported value for the **Timeout** field is `60`. The maximum supported value is `86400` (24 hours). :::: 6. Click **Submit** to run the query. Query results display in the flyout. 
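The raised `86400`-second ceiling in the sections above is intended for genuinely long-running queries. As an illustrative sketch (this example query is not taken from the product documentation), hashing every file in a large directory is I/O-heavy and can exceed the default `60`-second limit, so you might increase the **Timeout** value before running something like:

```sql
-- Illustrative only: computing SHA-256 hashes for many files is slow
-- and may need a Timeout value well above the default of 60 seconds.
SELECT path, sha256
FROM hash
WHERE path LIKE '/usr/bin/%';
```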
diff --git a/solutions/security/investigate/session-view.md b/solutions/security/investigate/session-view.md index e2ee26323f..c467b67bd1 100644 --- a/solutions/security/investigate/session-view.md +++ b/solutions/security/investigate/session-view.md @@ -74,7 +74,7 @@ The Session View UI has the following features: Session View includes additional badges not pictured above: -* The alert badge for multiple alerts appears when a single event causes alerts of multiple types (![Gear icon](../../../images/security-sess-view-process-alert-icon.png "") for `process` alerts, ![Page icon](../../../images/security-sess-view-file-alert-icon.png "") for `file` alerts, and ![Page icon](../../../images/security-sess-view-network-alert-icon.png "") for `network` alerts): +* The alert badge for multiple alerts appears when a single event causes alerts of multiple types (![Gear icon](../../../images/security-sess-view-process-alert-icon.png "title =20x20") for `process` alerts, ![Page icon](../../../images/security-sess-view-file-alert-icon.png "title =20x20") for `file` alerts, and ![Network icon](../../../images/security-sess-view-network-alert-icon.png "title =20x20") for `network` alerts): :::{image} ../../../images/security-session-view-alert-types-badge.png :alt: The alert badge for a command with all three alert types diff --git a/troubleshoot/elasticsearch/hotspotting.md b/troubleshoot/elasticsearch/hotspotting.md index 50789a00fe..94b70dee7b 100644 --- a/troubleshoot/elasticsearch/hotspotting.md +++ b/troubleshoot/elasticsearch/hotspotting.md @@ -175,5 +175,5 @@ Its response contains a `description` that reports this query: indices[winlogbeat-*,logs-window*], sequence by winlog.computer_name with maxspan=1m\n\n[authentication where host.os.type == "windows" and event.action:"logged-in" and\n event.outcome == "success" and process.name == "svchost.exe" ] by winlog.event_data.TargetLogonId ``` -This lets you know which indices to check (`winlogbeat-*,logs-window*`), as well as the [EQL search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-eql-search) request body. Most likely this is [SIEM related](/solutions/security.md). You can combine this with [audit logging](../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) as needed to trace the request source. +This lets you know which indices to check (`winlogbeat-*,logs-window*`), as well as the [EQL search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-eql-search) request body. Most likely, this is [SIEM-related](/solutions/security.md). You can combine this with [audit logging](../../deploy-manage/security/logging-configuration/enabling-audit-logs.md) as needed to trace the request source. diff --git a/troubleshoot/observability/apm-agent-rum-js/apm-real-user-monitoring-javascript-agent.md b/troubleshoot/observability/apm-agent-rum-js/apm-real-user-monitoring-javascript-agent.md index 4212827643..7be6c7e082 100644 --- a/troubleshoot/observability/apm-agent-rum-js/apm-real-user-monitoring-javascript-agent.md +++ b/troubleshoot/observability/apm-agent-rum-js/apm-real-user-monitoring-javascript-agent.md @@ -80,7 +80,7 @@ The effect of this will remain until the tab or browser is closed. In the unlikely event the agent causes disruptions to a production application, you can disable the agent while you troubleshoot. -To disable the agent, set [`active`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/configuration.md#active) to `false`.
+To disable the agent, set [`active`](apm-agent-rum-js://reference/configuration.md#active) to `false`. ## Get in touch [get-in-touch] diff --git a/troubleshoot/observability/apm/common-problems.md b/troubleshoot/observability/apm/common-problems.md index 8f3c7e0499..6c31c63315 100644 --- a/troubleshoot/observability/apm/common-problems.md +++ b/troubleshoot/observability/apm/common-problems.md @@ -223,9 +223,9 @@ You will see this warning if your results have more than `1000` unique transacti **More information** -While this can happen with any APM agent, it typically occurs with the RUM agent. For more information on how to correctly set `transaction.name` in the RUM agent, see [custom initial page load transaction names](asciidocalypse://docs/apm-agent-rum-js/docs/reference/custom-transaction-name.md). +While this can happen with any APM agent, it typically occurs with the RUM agent. For more information on how to correctly set `transaction.name` in the RUM agent, see [custom initial page load transaction names](apm-agent-rum-js://reference/custom-transaction-name.md). -The RUM agent can also set the `transaction.name` when observing for transaction events. See [`apm.observe()`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/agent-api.md#observe) for more information. +The RUM agent can also set the `transaction.name` when observing for transaction events. See [`apm.observe()`](apm-agent-rum-js://reference/agent-api.md#observe) for more information. If your problem is occurring in a different APM agent, the tips above still apply. See the relevant [Agent API documentation](https://www.elastic.co/guide/en/apm/agent) to adjust how you’re naming your transactions. @@ -266,7 +266,7 @@ As an example, some APM agents store cookie values in `http.request.cookies`. Si stack: all ``` -If the service map is not showing an expected connection between the client and server, it’s likely because you haven’t configured [`distributedTracingOrigins`](asciidocalypse://docs/apm-agent-rum-js/docs/reference/distributed-tracing.md). +If the service map is not showing an expected connection between the client and server, it’s likely because you haven’t configured [`distributedTracingOrigins`](apm-agent-rum-js://reference/distributed-tracing.md). This setting is necessary, for example, for cross-origin requests. If you have a basic web application that provides data via an API on `localhost:4000`, and serves HTML from `localhost:4001`, you’d need to set `distributedTracingOrigins: ['https://localhost:4000']` to ensure the origin is monitored as a part of distributed tracing. In other words, `distributedTracingOrigins` is consulted prior to the APM agent adding the distributed tracing `traceparent` header to each request. diff --git a/troubleshoot/security/detection-rules.md b/troubleshoot/security/detection-rules.md index c2aa5bea54..5096316c8b 100644 --- a/troubleshoot/security/detection-rules.md +++ b/troubleshoot/security/detection-rules.md @@ -87,7 +87,7 @@ Turning off `autocomplete:useTimeRange` could cause performance issues if the da :::::{dropdown} Warning about type conflicts and unmapped fields :name: rule-exceptions-field-conflicts -A warning icon (![Field conflict warning icon](../../images/security-field-warning-icon.png "")) and message appear for fields with [type conflicts](#fields-with-conflicting-types) across multiple indices or fields that are [unmapped](#unmapped-field-conflict). You can learn more about the conflict by clicking the warning message. 
+A warning icon (![Field conflict warning icon](../../images/security-field-warning-icon.png "title =20x20")) and message appear for fields with [type conflicts](#fields-with-conflicting-types) across multiple indices or fields that are [unmapped](#unmapped-field-conflict). You can learn more about the conflict by clicking the warning message. ::::{note} A field can have type conflicts *and* be unmapped in specified indices.
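As a minimal illustration of how such a conflict arises (the index and field names below are hypothetical), the same field mapped as `keyword` in one index and `long` in another produces a type conflict whenever both indices are queried together:

```json
{
  "logs-app-old": {
    "mappings": { "properties": { "status_code": { "type": "keyword" } } }
  },
  "logs-app-new": {
    "mappings": { "properties": { "status_code": { "type": "long" } } }
  }
}
```

A rule or exception that targets `logs-app-*` then sees `status_code` with two competing types, and any index in that pattern that omits the field entirely reports it as unmapped.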