diff --git a/deploy-manage/security/aws-privatelink-traffic-filters.md b/deploy-manage/security/aws-privatelink-traffic-filters.md index 1e13206a7e..4eebda33eb 100644 --- a/deploy-manage/security/aws-privatelink-traffic-filters.md +++ b/deploy-manage/security/aws-privatelink-traffic-filters.md @@ -49,7 +49,7 @@ Transport client is not supported over PrivateLink connections. :::: -AWS PrivateLink establishes a secure connection between two AWS Virtual Private Clouds (VPCs). The VPCs can belong to separate accounts, i.e. a service provider and its service consumers. AWS routes the PrivateLink traffic within the AWS data center and never exposes it to the public internet. In such a configuration, Elastic Cloud is the third-party service provider and the customers are service consumers. +AWS PrivateLink establishes a secure connection between two AWS Virtual Private Clouds (VPCs). The VPCs can belong to separate accounts, i.e. a service provider and its service consumers. AWS routes the PrivateLink traffic within the AWS data center and never exposes it to the public internet. In such a configuration, {{ecloud}} is the third-party service provider and the customers are service consumers. PrivateLink is a connection between a VPC Endpoint and a PrivateLink Service. @@ -94,11 +94,11 @@ PrivateLink Service is set up by Elastic in all supported AWS regions under the :::: -The process of setting up the PrivateLink connection to your clusters is split between AWS (e.g. by using AWS console) and Elastic Cloud UI. These are the high-level steps: +The process of setting up the PrivateLink connection to your clusters is split between AWS (e.g. by using AWS console) and {{ecloud}} UI. These are the high-level steps: -| AWS console | Elastic Cloud | +| AWS console | {{ecloud}} | | --- | --- | -| 1. Create a VPC endpoint using Elastic Cloud service name. | | +| 1. Create a VPC endpoint using {{ecloud}} service name. | | | 2. Create a DNS record pointing to the VPC endpoint. 
| | | | 3. Create a PrivateLink rule set with your VPC endpoint ID. | | | 4. Associate the PrivateLink rule set with your deployments. | @@ -108,7 +108,7 @@ The process of setting up the PrivateLink connection to your clusters is split b ## Ensure your VPC endpoint is in all availability zones supported by {{ecloud}} on the region for the VPC service [ec-aws-vpc-overlapping-azs] ::::{note} -Ensuring that your VPC is in all supported Elastic Cloud availability zones for a particular region avoids potential for a traffic imbalance. That imbalance may saturate some coordinating nodes and underutilize others in the deployment, eventually impacting performance. Enabling all supported Elastic Cloud zones ensures that traffic is balanced optimally. +Ensuring that your VPC is in all supported {{ecloud}} availability zones for a particular region avoids potential for a traffic imbalance. That imbalance may saturate some coordinating nodes and underutilize others in the deployment, eventually impacting performance. Enabling all supported {{ecloud}} zones ensures that traffic is balanced optimally. :::: @@ -164,7 +164,7 @@ The mapping will be different for your region. Our production VPC Service for `u Find out the endpoint of your deployment. You can do that by selecting **Copy endpoint** in the Cloud UI. It looks something like `my-deployment-d53192.es.us-east-1.aws.found.io`. `my-deployment-d53192` is an alias, and `es` is the product you want to access within your deployment. - To access your Elasticsearch cluster over PrivateLink: + To access your {{es}} cluster over PrivateLink: * If you have a [custom endpoint alias](/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md) configured, you can use the custom endpoint URL to connect. * Alternatively, use the following URL structure: @@ -181,7 +181,7 @@ The mapping will be different for your region. 
Our production VPC Service for `u :::: - You can test the AWS console part of the setup with a following curl (substitute the region and Elasticsearch ID with your cluster): + You can test the AWS console part of the setup with the following curl (substitute the region and {{es}} ID with your cluster): ```sh $ curl -v https://my-deployment-d53192.es.vpce.us-east-1.aws.elastic-cloud.com @@ -269,11 +269,11 @@ $ curl -u 'username:password' -v https://my-deployment-d53192.es.vpce.us-east-1. ``` ::::{note} -If you are using AWS PrivateLink together with Fleet, and enrolling the Elastic Agent with a PrivateLink URL, you need to configure Fleet Server to use and propagate the PrivateLink URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of Kibana. Otherwise, Elastic Agent will reset to use a default address instead of the PrivateLink URL. The URL needs to follow this pattern: `https://<Fleet component ID/deployment alias>.fleet.<private hostname>:443`. +If you are using AWS PrivateLink together with Fleet, and enrolling the Elastic Agent with a PrivateLink URL, you need to configure Fleet Server to use and propagate the PrivateLink URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of {{kib}}. Otherwise, Elastic Agent will reset to use a default address instead of the PrivateLink URL. The URL needs to follow this pattern: `https://<Fleet component ID/deployment alias>.fleet.<private hostname>:443`. -Similarly, the Elasticsearch host needs to be updated to propagate the Privatelink URL. The Elasticsearch URL needs to follow this pattern: `https://<Elasticsearch cluster ID/deployment alias>.es.<private hostname>:443`. +Similarly, the {{es}} host needs to be updated to propagate the PrivateLink URL. The {{es}} URL needs to follow this pattern: `https://<{{es}} cluster ID/deployment alias>.es.<private hostname>:443`. -The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` that are needed to enable this configuration in {{kib}} are currently available on-prem only, and not in the [Kibana settings in {{ecloud}}](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md).
+The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` that are needed to enable this configuration in {{kib}} are currently available on-prem only, and not in the [{{kib}} settings in {{ecloud}}](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). :::: diff --git a/deploy-manage/security/azure-private-link-traffic-filters.md b/deploy-manage/security/azure-private-link-traffic-filters.md index 318d2d908b..8aa0cfd968 100644 --- a/deploy-manage/security/azure-private-link-traffic-filters.md +++ b/deploy-manage/security/azure-private-link-traffic-filters.md @@ -54,7 +54,7 @@ Azure Private Link filtering is supported only for Azure regions. :::: -Azure Private Link establishes a secure connection between two Azure VNets. The VNets can belong to separate accounts, for example a service provider and their service consumers. Azure routes the Private Link traffic within the Azure data centers and never exposes it to the public internet. In such a configuration, Elastic Cloud is the third-party service provider and the customers are service consumers. +Azure Private Link establishes a secure connection between two Azure VNets. The VNets can belong to separate accounts, for example a service provider and their service consumers. Azure routes the Private Link traffic within the Azure data centers and never exposes it to the public internet. In such a configuration, {{ecloud}} is the third-party service provider and the customers are service consumers. Private Link is a connection between an Azure Private Endpoint and a Azure Private Link Service. @@ -86,11 +86,11 @@ Private Link Services are set up by Elastic in all supported Azure regions under :::: -The process of setting up the Private link connection to your clusters is split between Azure (e.g. by using Azure portal), Elastic Cloud Support, and Elastic Cloud UI. 
These are the high-level steps: +The process of setting up the Private link connection to your clusters is split between Azure (e.g. by using Azure portal), {{ecloud}} Support, and {{ecloud}} UI. These are the high-level steps: -| Azure portal | Elastic Cloud UI | +| Azure portal | {{ecloud}} UI | | --- | --- | -| 1. Create a private endpoint using Elastic Cloud service alias. | | +| 1. Create a private endpoint using {{ecloud}} service alias. | | | 2. Create a [DNS record pointing to the private endpoint](https://learn.microsoft.com/en-us/azure/dns/private-dns-privatednszone). | | | | 3. Create an Azure Private Link rule set with the private endpoint **Name** and **ID**. | | | 4. Associate the Azure Private Link rule set with your deployments. | @@ -185,13 +185,13 @@ Creating the filter approves the Private Link connection. Let’s test the connection: -1. Find out the Elasticsearch cluster ID of your deployment. You can do that by selecting **Copy cluster id** in the Cloud UI. It looks something like `9c794b7c08fa494b9990fa3f6f74c2f8`. +1. Find out the {{es}} cluster ID of your deployment. You can do that by selecting **Copy cluster id** in the Cloud UI. It looks something like `9c794b7c08fa494b9990fa3f6f74c2f8`. ::::{tip} - The Elasticsearch cluster ID is **different** from the deployment ID, custom alias endpoint, and Cloud ID values that feature prominently in the user console. + The {{es}} cluster ID is **different** from the deployment ID, custom alias endpoint, and Cloud ID values that feature prominently in the user console. :::: -2. To access your Elasticsearch cluster over Private Link: +2. To access your {{es}} cluster over Private Link: * If you have a [custom endpoint alias](/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md) configured, you can use the custom endpoint URL to connect. @@ -209,7 +209,7 @@ Let’s test the connection: `https://6b111580caaa4a9e84b18ec7c600155e.privatelink.eastus2.azure.elastic-cloud.com:9243` -3. 
You can test the Azure portal part of the setup with the following command (substitute the region and Elasticsearch ID with your cluster). +3. You can test the Azure portal part of the setup with the following command (substitute the region and {{es}} ID with your cluster). The output should look like this: @@ -230,7 +230,7 @@ Let’s test the connection: The connection is established, and a valid certificate is presented to the client. The `403 Forbidden` is expected, you haven’t associate the rule set with any deployment yet. -4. In the event that the Private Link connection is not approved by Elastic Cloud, you’ll get an error message like the following. Double check that the filter you’ve created in the previous step uses the right resource name and GUID. +4. In the event that the Private Link connection is not approved by {{ecloud}}, you’ll get an error message like the following. Double check that the filter you’ve created in the previous step uses the right resource name and GUID. ```sh $ curl -v https://6b111580caaa4a9e84b18ec7c600155e.privatelink.eastus2.azure.elastic-cloud.com:9243 @@ -264,7 +264,7 @@ Use the alias you’ve set up as CNAME A record to access your deployment. :::: -For example, if your Elasticsearch ID is `6b111580caaa4a9e84b18ec7c600155e` and it is located in `eastus2` region you can access it under `https://6b111580caaa4a9e84b18ec7c600155e.privatelink.eastus2.azure.elastic-cloud.com:9243`. +For example, if your {{es}} ID is `6b111580caaa4a9e84b18ec7c600155e` and it is located in `eastus2` region you can access it under `https://6b111580caaa4a9e84b18ec7c600155e.privatelink.eastus2.azure.elastic-cloud.com:9243`. 
```sh $ curl -u 'username:password' -v https://6b111580caaa4a9e84b18ec7c600155e.privatelink.eastus2.azure.elastic-cloud.com:9243 @@ -274,9 +274,9 @@ $ curl -u 'username:password' -v https://6b111580caaa4a9e84b18ec7c600155e.priva ``` ::::{note} -If you are using Azure Private Link together with Fleet, and enrolling the Elastic Agent with a Private Link URL, you need to configure Fleet Server to use and propagate the Private Link URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of Kibana. Otherwise, Elastic Agent will reset to use a default address instead of the Private Link URL. The URL needs to follow this pattern: `https://<Fleet component ID/deployment alias>.fleet.<private hostname>:443`. +If you are using Azure Private Link together with Fleet, and enrolling the Elastic Agent with a Private Link URL, you need to configure Fleet Server to use and propagate the Private Link URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of {{kib}}. Otherwise, Elastic Agent will reset to use a default address instead of the Private Link URL. The URL needs to follow this pattern: `https://<Fleet component ID/deployment alias>.fleet.<private hostname>:443`. -Similarly, the Elasticsearch host needs to be updated to propagate the Private Link URL. The Elasticsearch URL needs to follow this pattern: `https://<Elasticsearch cluster ID/deployment alias>.es.<private hostname>:443`. +Similarly, the {{es}} host needs to be updated to propagate the Private Link URL. The {{es}} URL needs to follow this pattern: `https://<{{es}} cluster ID/deployment alias>.es.<private hostname>:443`. :::: @@ -315,7 +315,7 @@ To remove an association through the UI: Azure supports inter-region Private Link as described in the [Azure documentation](https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview). "The Private Link resource can be deployed in a different region than the virtual network and private endpoint." -This means your deployment on Elastic Cloud can be in a different region than the Private Link endpoints or the clients that consume the deployment endpoints. 
+This means your deployment on {{ecloud}} can be in a different region than the Private Link endpoints or the clients that consume the deployment endpoints. :::{image} /images/cloud-ce-azure-inter-region-pl.png :alt: Inter-region Private Link @@ -328,4 +328,4 @@ This means your deployment on Elastic Cloud can be in a different region than th 2. Create a Private Hosted Zone for region 2, and associate it with VNET1 similar to the step [Create a Private Link endpoint and DNS](/deploy-manage/security/azure-private-link-traffic-filters.md#ec-private-link-azure-dns). Note that you are creating these resources in region 1, VNET1. 2. [Create a traffic filter rule set](/deploy-manage/security/azure-private-link-traffic-filters.md#ec-azure-create-traffic-filter-private-link-rule-set) and [Associate the rule set](/deploy-manage/security/aws-privatelink-traffic-filters.md#ec-associate-traffic-filter-private-link-rule-set) through the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), just as you would for any deployment. -3. [Test the connection](/deploy-manage/security/azure-private-link-traffic-filters.md#ec-azure-access-the-deployment-over-private-link) from a VM or client in region 1 to your Private Link endpoint, and it should be able to connect to your Elasticsearch cluster hosted in region 2. +3. [Test the connection](/deploy-manage/security/azure-private-link-traffic-filters.md#ec-azure-access-the-deployment-over-private-link) from a VM or client in region 1 to your Private Link endpoint, and it should be able to connect to your {{es}} cluster hosted in region 2. 
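As a quick aid for the URL pattern used throughout this section, the Private Link endpoint URL can be assembled in a couple of shell lines. This is a minimal sketch using the example cluster ID and region from the text; substitute your own values:

```shell
# Build the Private Link endpoint URL from an Elasticsearch cluster ID and Azure region.
# The ID and region below are the example values used in this section.
CLUSTER_ID="6b111580caaa4a9e84b18ec7c600155e"
REGION="eastus2"
URL="https://${CLUSTER_ID}.privatelink.${REGION}.azure.elastic-cloud.com:9243"
echo "$URL"
```

The same shape applies when a custom endpoint alias is configured; only the leading label changes.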
diff --git a/deploy-manage/security/data-security.md b/deploy-manage/security/data-security.md index 1226f922c7..c70d8f7323 100644 --- a/deploy-manage/security/data-security.md +++ b/deploy-manage/security/data-security.md @@ -1,5 +1,37 @@ -# Secure your data +--- +applies_to: + deployment: + ess: ga + ece: ga + eck: ga + self: ga + serverless: ga +--- -:::{warning} -**This page is a work in progress.** +# Secure data, objects, and settings + +Add another layer of security by defining custom encryption rules for your cluster's data, {{kib}} saved objects, and settings. + +**In {{ecloud}}**: + +{{ech}} deployments and serverless projects are already encrypted at rest by default. This includes their data, objects, and settings. For serverless projects, security is fully managed by Elastic. For {{ech}} deployments, some settings are available for you to customize the default security measures in place: + +- Instead of the default, Elastic-managed encryption, you can choose to use a [customer-managed encryption key](encrypt-deployment-with-customer-managed-encryption-key.md) from one of our supported providers' KMS to encrypt your {{ech}} deployments. +- Store sensitive settings using the [{{es}} keystore](secure-settings.md). + +**In {{ece}}, {{eck}}, and self-managed installations**: + +There is no encryption at rest out of the box for deployments orchestrated using [{{ece}}](secure-your-elastic-cloud-enterprise-installation.md) and [{{eck}}](secure-your-eck-installation.md), and for [self-managed clusters](manually-configure-security-in-self-managed-cluster.md). You must instead configure disk-level encryption on your hosts. + +:::{note} +Configuring dm-crypt or similar technologies is outside the scope of the Elastic documentation, and issues related to disk encryption are outside the scope of support. 
::: + +However, some native features are available for you to protect sensitive data and objects: + +- Store sensitive settings using the [{{es}} or {{kib}} keystores](secure-settings.md). +- Enable [encryption for {{kib}} saved objects](secure-saved-objects.md). +- Customize [{{kib}} session parameters](kibana-session-management.md). + + + diff --git a/deploy-manage/security/ece-traffic-filtering-through-the-api.md b/deploy-manage/security/ece-traffic-filtering-through-the-api.md index 4ef7beebd9..f5ea31f365 100644 --- a/deploy-manage/security/ece-traffic-filtering-through-the-api.md +++ b/deploy-manage/security/ece-traffic-filtering-through-the-api.md @@ -8,7 +8,7 @@ mapped_urls: # Manage traffic filtering through the ECE API [ece-traffic-filtering-through-the-api] -This example demonstrates how to use the Elastic Cloud Enterprise RESTful API to manage different types of traffic filters. We cover the following examples: +This example demonstrates how to use the {{ece}} RESTful API to manage different types of traffic filters. We cover the following examples: * [Create a traffic filter rule set](ece-traffic-filtering-through-the-api.md#ece-create-a-traffic-filter-rule-set) @@ -19,7 +19,7 @@ This example demonstrates how to use the Elastic Cloud Enterprise RESTful API to * [Delete a rule set association with a deployment](ece-traffic-filtering-through-the-api.md#ece-delete-rule-set-association-with-a-deployment) * [Delete a traffic filter rule set](ece-traffic-filtering-through-the-api.md#ece-delete-a-rule-set) -Read through the main [Traffic Filtering](traffic-filtering.md) page to learn about the general concepts behind filtering access to your Elastic Cloud Enterprise deployments. +Read through the main [Traffic Filtering](traffic-filtering.md) page to learn about the general concepts behind filtering access to your {{ece}} deployments. 
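Before walking through the individual API calls below, it can help to see the shape of a rule set request body. The following sketch builds and validates the JSON payload for an IP-based rule set; the field values, coordinator host, port, and credentials are placeholders for your own installation, not literal values to copy:

```shell
# Sketch: JSON payload for creating an IP-based traffic filter rule set.
# Field values are illustrative; adjust name, region, and rules for your setup.
PAYLOAD='{"name":"My IP filter","region":"ece-region","type":"ip","include_by_default":false,"rules":[{"source":"192.168.131.0/24"}]}'
printf '%s' "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload OK"
# The create call would then look roughly like this (placeholder host and key):
# curl -k -X POST -H "Authorization: ApiKey $ECE_API_KEY" -H "Content-Type: application/json" \
#   "https://$COORDINATOR_HOST:12443/api/v1/deployments/traffic-filter/rulesets" -d "$PAYLOAD"
```

Validating the payload locally first avoids round-trips to the API for malformed JSON.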
## Create a traffic filter rule set [ece-create-a-traffic-filter-rule-set] diff --git a/deploy-manage/security/elastic-cloud-static-ips.md b/deploy-manage/security/elastic-cloud-static-ips.md index 066b8c5e9a..814d17006d 100644 --- a/deploy-manage/security/elastic-cloud-static-ips.md +++ b/deploy-manage/security/elastic-cloud-static-ips.md @@ -6,34 +6,34 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-static-ips.html --- -# Elastic Cloud Static IPs [ec-static-ips] +# {{ecloud}} Static IPs [ec-static-ips] {{ecloud}} provides a range of static IP addresses that enable you to allow or deny IP ranges. There are two types of static IP addresses, [ingress](#ec-ingress) and [egress](#ec-egress), and they each have their own set of use cases. In general, static IPs can be used to introduce network controls (for example, firewall rules) for traffic that goes to and from {{ecloud}} deployments over the Internet. Use of static IPs is not applicable to private cloud service provider connections (for example, AWS/Azure PrivateLink, GCP Private Service Connect). It is important to note that static IP addresses are [subject to change](#ec-warning), and not all [cloud provider regions](#ec-regions) are currently fully supported for ingress and egress static IPs. -## Ingress Static IPs: Traffic To Elastic Cloud [ec-ingress] +## Ingress Static IPs: Traffic To {{ecloud}} [ec-ingress] Suitable usage of ingress static IPs to introduce network controls: -* All traffic **towards Elastic Cloud deployments** from the public Internet, your private cloud network over the public Internet, or your on-premises network over the public Internet (e.g. Elasticsearch traffic, Kibana traffic, etc) uses Ingress Static IPs as network destination +* All traffic **towards {{ecloud}} deployments** from the public Internet, your private cloud network over the public Internet, or your on-premises network over the public Internet (e.g. 
{{es}} traffic, {{kib}} traffic, etc.) uses Ingress Static IPs as network destination Not suitable usage of ingress static IPs to introduce network controls: * Traffic over private cloud service provider connections (e.g. AWS Privatelink, GCP Private Service Connect, Azure Private Link) * Traffic to the [Cloud Console](http://cloud.elastic.co) -* Traffic to non Elastic Cloud websites and services hosted by Elastic (e.g. www.elastic.co) +* Traffic to non-{{ecloud}} websites and services hosted by Elastic (e.g. www.elastic.co) -## Egress Static IPs: Traffic From Elastic Cloud [ec-egress] +## Egress Static IPs: Traffic From {{ecloud}} [ec-egress] Suitable usage of egress static IPs to introduce network controls: -* Traffic **from Elastic Cloud deployments** towards the public Internet, your private cloud network over the public Internet, or your on-premises network over the public Internet (e.g. custom Slack alerts, Email alerts, Kibana alerts, etc.) uses Egress Static IPs as network source -* Cross-cluster replication/cross-cluster search traffic **from Elastic Cloud deployments** towards on-premises Elastic Cloud Enterprise deployments protected by on-premises firewalls or Elastic Cloud Enterprise traffic filters +* Traffic **from {{ecloud}} deployments** towards the public Internet, your private cloud network over the public Internet, or your on-premises network over the public Internet (e.g. custom Slack alerts, email alerts, {{kib}} alerts, etc.) uses Egress Static IPs as network source +* Cross-cluster replication/cross-cluster search traffic **from {{ecloud}} deployments** towards on-premises {{ece}} deployments protected by on-premises firewalls or {{ece}} traffic filters Not suitable usage of egress static IPs to introduce network controls: -* Snapshot traffic that stays within the same cloud provider and regional boundaries (e.g. 
an Elastic Cloud deployment hosted in aws-us-east-1 using an S3 bucket also hosted in aws-us-east-1 as a snapshot repository) +* Snapshot traffic that stays within the same cloud provider and regional boundaries (e.g. an {{ecloud}} deployment hosted in aws-us-east-1 using an S3 bucket also hosted in aws-us-east-1 as a snapshot repository) ## Supported Regions [ec-regions] @@ -121,7 +121,7 @@ Not suitable usage of egress static IPs to introduce network controls: ::::{warning} :name: ec-warning -Static IP ranges are subject to change. You will need to update your firewall rules when they change to prevent service disruptions. We will announce changes at least 8 weeks in advance (see [example](https://status.elastic.co/incidents/1xs411x77wgh)). Please subscribe to the [Elastic Cloud Status Page](https://status.elastic.co/) to remain up to date with any changes to the Static IP ranges which you will need to update at your side. +Static IP ranges are subject to change. You will need to update your firewall rules when they change to prevent service disruptions. We will announce changes at least 8 weeks in advance (see [example](https://status.elastic.co/incidents/1xs411x77wgh)). Please subscribe to the [{{ecloud}} Status Page](https://status.elastic.co/) to remain up to date with any changes to the Static IP ranges, which you will need to update on your side. 
:::: diff --git a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md index d1336ab9e1..ad7145abdc 100644 --- a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md +++ b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md @@ -1,9 +1,14 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-encrypt-with-cmek.html --- -# Encrypt your deployment with a customer-managed encryption key [ec-encrypt-with-cmek] +# Use a customer-managed encryption key [ec-encrypt-with-cmek] + +The following information applies to your {{ech}} deployments. By default, Elastic already encrypts your deployment data and snapshots at rest. You can reinforce this mechanism by providing your own encryption key, also known as Bring Your Own Key (BYOK). To do that, you need a customer-managed key that you set up and manage in your cloud provider’s Key Management Service (KMS). @@ -12,7 +17,7 @@ Encryption at rest using customer-managed keys is only available for the Enterpr :::: -Using a customer-managed key allows you to strengthen the security of your deployment data and snapshot data at rest. Note that if you use a custom snapshot repository different from the one provided by Elastic Cloud, these snapshots are not encrypted with your customer-managed key by default. The encryption happens at the file system level. +Using a customer-managed key allows you to strengthen the security of your deployment data and snapshot data at rest. Note that if you use a custom snapshot repository different from the one provided by {{ecloud}}, these snapshots are not encrypted with your customer-managed key by default. The encryption happens at the file system level. 
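The model described here is envelope encryption: deployment data is encrypted with a data encryption key (DEK), and the customer-managed key held in your KMS only wraps (encrypts) that DEK. A rough openssl illustration of the concept follows; the key files and commands are purely illustrative and are not Elastic's actual implementation:

```shell
# Envelope-encryption sketch: a DEK encrypts the data; a KEK (standing in for
# the KMS-held customer-managed key) wraps the DEK. Runs in a throwaway dir.
cd "$(mktemp -d)"
echo "deployment data" > data.txt
openssl rand -hex 32 > dek.key                                  # data encryption key
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.key -in data.txt -out data.enc
openssl rand -hex 32 > kek.key                                  # stand-in for the KMS key
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek.key -in dek.key -out dek.enc
rm dek.key                                                      # keep only the wrapped DEK
echo "stored: data.enc dek.enc"
```

Revoking access to the KEK in the KMS makes the wrapped DEK, and therefore the data, unreadable, which is why deleting or locking the key locks the deployment.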
## How using a customer-managed key helps to improve your data security [ec_how_using_a_customer_managed_key_helps_to_improve_your_data_security] @@ -24,7 +29,7 @@ Using a customer-managed key helps protect against threats related to the manage Using a customer-managed key can help comply with regulations or security requirements, but it is not a complete security solution by itself. There are other types of threats that it does not protect against. -[1] You set up your customer-managed keys and their access in your key management service. When you provide a customer-managed key identifier to Elastic Cloud, we do not access or store the cryptographic material associated with that key. Customer-managed keys are not directly used to encrypt deployment or snapshot data. Elastic Cloud accesses your customer-managed keys to encrypt and decrypt data encryption keys, which, in turn, are used to encrypt the data. +[1] You set up your customer-managed keys and their access in your key management service. When you provide a customer-managed key identifier to {{ecloud}}, we do not access or store the cryptographic material associated with that key. Customer-managed keys are not directly used to encrypt deployment or snapshot data. {{ecloud}} accesses your customer-managed keys to encrypt and decrypt data encryption keys, which, in turn, are used to encrypt the data. When a deployment encrypted with a customer-managed key is deleted or terminated, its data is locked first before being deleted, ensuring a fully secure deletion process. @@ -35,22 +40,22 @@ When a deployment encrypted with a customer-managed key is deleted or terminated ::::::{tab-item} AWS * Have permissions on AWS KMS to [create a symmetric AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) and to configure AWS IAM roles. -* Consider the cloud regions where you need your deployment to live. 
Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. +* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by {{ecloud}}. :::::: ::::::{tab-item} Azure * Have the following permissions on Azure: * Permissions to [create an RSA key](https://learn.microsoft.com/en-us/azure/key-vault/keys/about-keys#key-types-and-protection-methods) in the Azure Key Vault where you want to store your key. - * Membership in the **Application Administrator** role. This is required to create a new service principal for Elastic Cloud in your Azure tenant. + * Membership in the **Application Administrator** role. This is required to create a new service principal for {{ecloud}} in your Azure tenant. * Permissions to [assign roles in your Key Vault using Access control (IAM)](https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide?tabs=azure-cli#prerequisites). This is required to grant the service principal access to your key. * The Azure Key Vault where the RSA key will be stored must have [purge protection](https://learn.microsoft.com/en-us/azure/key-vault/general/soft-delete-overview#purge-protection) enabled to support the encryption of snapshots. -* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. +* Consider the cloud regions where you need your deployment to live. 
Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by {{ecloud}}. :::::: ::::::{tab-item} Google Cloud -* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. +* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by {{ecloud}}. * Have the following permissions in Google Cloud KMS: * Permissions to [create a KMS key](https://cloud.google.com/kms/docs/create-key) on a key ring in the same region as your deployment. If you don’t have a key ring in the same region, or want to store the key in its own key ring, then you also need permissions to [create a key ring](https://cloud.google.com/kms/docs/create-key-ring). @@ -85,13 +90,13 @@ At this time, the following features are not supported: :::::::{tab-set} ::::::{tab-item} AWS -1. Create a symmetric [single-region key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) or [multi-region replica key](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-replicate.html). The key must be available in each region in which you have deployments to encrypt. You can use the same key to encrypt multiple deployments. Later, you will need to provide the Amazon Resource Name (ARN) of that key or key alias to Elastic Cloud. +1. 
Create a symmetric [single-region key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) or [multi-region replica key](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-replicate.html). The key must be available in each region in which you have deployments to encrypt. You can use the same key to encrypt multiple deployments. Later, you will need to provide the Amazon Resource Name (ARN) of that key or key alias to {{ecloud}}. ::::{note} Use an alias ARN instead of the key ARN itself if you plan on doing manual key rotations. When using a key ARN directly, only automatic rotations are supported. :::: -2. Apply a key policy with the settings required by Elastic Cloud to the key created in the previous step: +2. Apply a key policy with the settings required by {{ecloud}} to the key created in the previous step: ```json { @@ -120,8 +125,8 @@ At this time, the following features are not supported: 2. [kms:Encrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.html) - This operation is used to encrypt the data encryption keys generated by the KMS as well as encrypting your snapshots. 3. [kms:GetKeyRotationStatus](https://docs.aws.amazon.com/kms/latest/APIReference/API_GetKeyRotationStatus.html) - This operation is used to determine whether automatic key rotation is enabled. 4. [kms:GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html) - This operation is used to generate a data encryption key along with an encrypted version of it. The system leverages the randomness provided by the KMS to produce the data encryption key and your actual customer-managed key to encrypt the data encryption key. - 5. [kms:DescribeKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_DescribeKey.html) - This operation is used to check whether your key is properly configured for Elastic Cloud. 
In addition, Elastic Cloud uses this to check if a manual key rotation was performed by comparing underlying key IDs associated with an alias. - 6. This condition allows the accounts associated with the Elastic Cloud production infrastructure to access your key. Under typical circumstances, Elastic Cloud will only be accessing your key via two AWS accounts: the account your deployment’s host is in and the account your S3 bucket containing snapshots is in. However, determining these particular account IDs prior to the deployment creation is not possible at the moment. This encompasses all of the possibilities. For more on this, check the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgpaths). + 5. [kms:DescribeKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_DescribeKey.html) - This operation is used to check whether your key is properly configured for {{ecloud}}. In addition, {{ecloud}} uses this to check if a manual key rotation was performed by comparing underlying key IDs associated with an alias. + 6. This condition allows the accounts associated with the {{ecloud}} production infrastructure to access your key. Under typical circumstances, {{ecloud}} will only be accessing your key via two AWS accounts: the account your deployment’s host is in and the account your S3 bucket containing snapshots is in. However, determining these particular account IDs prior to the deployment creation is not possible at the moment. This encompasses all of the possibilities. For more on this, check the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgpaths). 
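The operations and condition described above fit together in a single key policy statement. The following Python sketch assembles a statement of that shape; the `Sid`, the wildcard principal, the action list, and the organization path value are hypothetical placeholders for illustration, so use the exact key policy from this guide (and the organization path it specifies) when configuring your key in AWS.

```python
import json

# Hypothetical placeholder; the real value comes from the policy in this guide.
ELASTIC_ORG_PATH = "o-EXAMPLE/*"

def build_elastic_kms_statement(org_path: str) -> dict:
    """Assemble an illustrative key policy statement for the KMS operations above."""
    return {
        "Sid": "AllowElasticCloudKeyUsage",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": [
            "kms:Decrypt",
            "kms:Encrypt",
            "kms:GetKeyRotationStatus",
            "kms:GenerateDataKey",
            "kms:DescribeKey",
        ],
        "Resource": "*",
        # aws:PrincipalOrgPaths narrows the wildcard principal to accounts
        # under the stated AWS organization path (see the AWS docs link above).
        "Condition": {
            "ForAnyValue:StringLike": {"aws:PrincipalOrgPaths": [org_path]}
        },
    }

policy = {
    "Version": "2012-10-17",
    "Statement": [build_elastic_kms_statement(ELASTIC_ORG_PATH)],
}
print(json.dumps(policy, indent=2))
```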
:::::: ::::::{tab-item} Azure @@ -131,11 +136,11 @@ At this time, the following features are not supported: * `https://example-byok-key-vault.vault.azure.net/keys/test-key` (without version identifier) * `https://example-byok-key-vault.vault.azure.net/keys/test-key/1234` (with version identifier) - Later, you will need to provide this identifier to Elastic Cloud. + Later, you will need to provide this identifier to {{ecloud}}. ::::{tip} -Provide your key identifier without the key version identifier so Elastic Cloud can [rotate the key](#rotate-a-customer-managed-key) on your behalf. +Provide your key identifier without the key version identifier so {{ecloud}} can [rotate the key](#rotate-a-customer-managed-key) on your behalf. :::: :::::: @@ -148,7 +153,7 @@ Provide your key identifier without the key version identifier so Elastic Cloud `projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY_NAME` - Later, you will need to provide this ID to Elastic Cloud. + Later, you will need to provide this ID to {{ecloud}}. :::::: ::::::: @@ -170,9 +175,9 @@ Provide your key identifier without the key version identifier so Elastic Cloud * using the API: * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). - * [Get a valid Elastic Cloud API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. + * [Get a valid {{ecloud}} API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. * Get the ARN of the symmetric AWS KMS key or of its alias. 
Use an alias if you are planning to do manual key rotations as specified in the [AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html). - * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: + * Use these parameters to create a new deployment with the [{{ecloud}} API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: ```bash curl -XPOST \ @@ -201,18 +206,18 @@ The deployment is now created and encrypted using the specified key. Future snap :::::: ::::::{tab-item} Azure -To create a new deployment with a customer-managed key in Azure, you need to perform actions in Elastic Cloud and in your Azure tenant. +To create a new deployment with a customer-managed key in Azure, you need to perform actions in {{ecloud}} and in your Azure tenant. -**Step 1: Create a service principal for Elastic Cloud** +**Step 1: Create a service principal for {{ecloud}}** -1. In Elastic Cloud, retrieve the Azure application ID: +1. In {{ecloud}}, retrieve the Azure application ID: * Select **Create deployment** from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. * In the **Settings**, set the **Cloud provider** to **Azure** and select a region. * Expand the **Advanced settings** and turn on **Use a customer-managed encryption key**. * Copy the **Azure application ID**. -2. Using the ID that you copied, [create a new service principal](https://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-configure-cross-tenant-existing-account?tabs=azure-portal#the-customer-installs-the-service-provider-application-in-the-customer-tenant) for Elastic Cloud in your Azure tenant. The service principal grants Elastic Cloud access to interact with your RSA key. +2. 
Using the ID that you copied, [create a new service principal](https://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-configure-cross-tenant-existing-account?tabs=azure-portal#the-customer-installs-the-service-provider-application-in-the-customer-tenant) for {{ecloud}} in your Azure tenant. The service principal grants {{ecloud}} access to interact with your RSA key. For example, you might use the following Azure CLI command to create the service principal: @@ -245,8 +250,8 @@ After you have created the service principal and granted it the necessary permis * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). - * [Get a valid Elastic Cloud API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. - * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: + * [Get a valid {{ecloud}} API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. + * Use these parameters to create a new deployment with the [{{ecloud}} API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: ```bash curl -XPOST \ @@ -277,12 +282,12 @@ The deployment is now created and encrypted using the specified key. Future snap ::::::{tab-item} Google Cloud **Step 1: Grant service principals access to your key** -Elastic Cloud uses two service principals to encrypt and decrypt data using your key. 
You must grant these services access to your key before you create your deployment. +{{ecloud}} uses two service principals to encrypt and decrypt data using your key. You must grant these services access to your key before you create your deployment. * **Google Cloud Platform cloud storage service agent**: Used for Elastic-managed snapshots stored on Google Cloud Storage. -* **Elastic service account**: Used for all other Elasticsearch data. +* **Elastic service account**: Used for all other {{es}} data. -1. In Elastic Cloud, retrieve the email addresses for the service principals that will be used by Elastic: +1. In {{ecloud}}, retrieve the email addresses for the service principals that will be used by Elastic: * Select **Create deployment** from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. * In the **Settings**, set the **Cloud provider** to **Google Cloud** and select a region. @@ -324,8 +329,8 @@ After you have granted the Elastic principals the necessary roles, you can finis * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). - * [Get a valid Elastic Cloud API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. - * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: + * [Get a valid {{ecloud}} API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. 
+ * Use these parameters to create a new deployment with the [{{ecloud}} API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: ```bash curl -XPOST \ @@ -362,34 +367,34 @@ You can check that your hosted deployment is correctly encrypted with the key yo :::::::{tab-set} ::::::{tab-item} AWS -Elastic Cloud will automatically rotate the keys every 31 days as a security best practice. +{{ecloud}} will automatically rotate the keys every 31 days as a security best practice. -You can also trigger a manual rotation [in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html), which will take effect in Elastic Cloud within 30 minutes. **For manual rotations to work, you must use an alias when creating the deployment. We do not currently support [on-demand rotations](https://docs.aws.amazon.com/kms/latest/APIReference/API_RotateKeyOnDemand.html) but plan on supporting this in the future.** +You can also trigger a manual rotation [in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html), which will take effect in {{ecloud}} within 30 minutes. **For manual rotations to work, you must use an alias when creating the deployment. We do not currently support [on-demand rotations](https://docs.aws.amazon.com/kms/latest/APIReference/API_RotateKeyOnDemand.html) but plan on supporting this in the future.** :::::: ::::::{tab-item} Azure -To rotate your key, you can [update your key version](https://learn.microsoft.com/en-us/azure/container-registry/tutorial-rotate-revoke-customer-managed-keys) or [configure a key rotation policy](https://learn.microsoft.com/en-us/azure/key-vault/keys/how-to-configure-key-rotation) in Azure Key Vault. In both cases, the rotation will take effect in Elastic Cloud within a day. 
+To rotate your key, you can [update your key version](https://learn.microsoft.com/en-us/azure/container-registry/tutorial-rotate-revoke-customer-managed-keys) or [configure a key rotation policy](https://learn.microsoft.com/en-us/azure/key-vault/keys/how-to-configure-key-rotation) in Azure Key Vault. In both cases, the rotation will take effect in {{ecloud}} within a day. For rotations to work, you must provide your key identifier without the key version identifier when you create your deployment. -Elastic Cloud does not currently support rotating your key using a new key identifier. +{{ecloud}} does not currently support rotating your key using a new key identifier. :::::: ::::::{tab-item} Google Cloud -Key rotations are triggered in Google Cloud. You can rotate your key [manually](https://cloud.google.com/kms/docs/rotate-key#manual) or [automatically](https://cloud.google.com/kms/docs/rotate-key#automatic). In both cases, the rotation will take effect in Elastic Cloud within a day. +Key rotations are triggered in Google Cloud. You can rotate your key [manually](https://cloud.google.com/kms/docs/rotate-key#manual) or [automatically](https://cloud.google.com/kms/docs/rotate-key#automatic). In both cases, the rotation will take effect in {{ecloud}} within a day. :::::: ::::::: ## Revoke a customer-managed key [ec_revoke_a_customer_managed_key] -Revoking a customer-managed key in your key management service can be a break-glass procedure in case of a security breach. Elastic Cloud gets an error if an encryption key is disabled, deleted, or if the appropriate role is removed from the IAM policy. Within 30 minutes maximum, Elastic Cloud locks the directories in which your deployment data live and prompts you to delete your deployment as an increased security measure. +Revoking a customer-managed key in your key management service can be a break-glass procedure in case of a security breach. 
{{ecloud}} receives an error if an encryption key is disabled or deleted, or if the appropriate role is removed from the IAM policy. Within 30 minutes maximum, {{ecloud}} locks the directories in which your deployment data live and prompts you to delete your deployment as an increased security measure.

If that happens and this is not intended, you can restore the key in the key management system. Your deployment operations will resume when the key can be reached again. For more details, check [Troubleshooting](#ec-encrypt-with-cmek-troubleshooting).

-When a customer-managed key is permanently revoked and isn’t restored, the data stored in Elastic Cloud is effectively crypto-shredded.
+When a customer-managed key is permanently revoked and isn’t restored, the data stored in {{ecloud}} is effectively crypto-shredded.

-In a future release of Elastic Cloud, you will be able to:
+In a future release of {{ecloud}}, you will be able to:

* Remove a customer-managed key and revert your deployment to using Elastic-managed encryption.
* Edit the customer-managed key in use in a deployment to re-encrypt it with a different key.
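The reason revocation is final comes down to envelope encryption: data is encrypted with a data encryption key (DEK), and the DEK is itself wrapped with the customer-managed key. The toy sketch below illustrates only the idea; a one-time-pad style XOR stands in for a real cipher such as AES purely to keep the example dependency-free, and none of the names reflect the actual {{ecloud}} implementation.

```python
import os

# Illustrative only: XOR with a random pad stands in for a real cipher.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

customer_managed_key = os.urandom(32)   # lives in *your* KMS
data_encryption_key = os.urandom(32)    # generated per deployment

# The DEK encrypts the data; the customer-managed key wraps the DEK.
ciphertext = xor(b"deployment data", data_encryption_key)
wrapped_dek = xor(data_encryption_key, customer_managed_key)

# Normal operation: unwrap the DEK with the customer key, then decrypt.
recovered = xor(ciphertext, xor(wrapped_dek, customer_managed_key))
assert recovered == b"deployment data"

# After revocation the wrapped DEK can no longer be unwrapped, so the
# ciphertext is permanently unreadable: this is crypto-shredding.
customer_managed_key = None
```

Because only the small wrapped DEK depends on the external key, destroying that key renders all data encrypted under it unrecoverable without touching the data itself.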
@@ -407,17 +412,17 @@ Encrypting deployments with a customer-managed key is currently only possible fo **My deployment became inaccessible. What’s causing this?** -When Elastic Cloud can’t reach the encryption key, your deployment may become inaccessible. The most common reasons for this issue are: +When {{ecloud}} can’t reach the encryption key, your deployment may become inaccessible. The most common reasons for this issue are: -* Connectivity issues between Elastic Cloud and the KMS.
+* Connectivity issues between {{ecloud}} and the KMS.
- When Elastic Cloud is unable to access the customer-managed key, Elastic is alerted and will work to identify the cause. Elastic does not pause or terminate deployment instances when detecting connectivity issues, but your deployment may be inaccessible until issues are fixed. + When {{ecloud}} is unable to access the customer-managed key, Elastic is alerted and will work to identify the cause. Elastic does not pause or terminate deployment instances when detecting connectivity issues, but your deployment may be inaccessible until issues are fixed. * The customer-managed key was deleted or revoked on the KMS.
- Restore or recover your key, and if need be, rotate your key and associate a new key before deleting your old key. Elastic Cloud will send you alerts prompting you to restore the key if it cannot access your key and your deployment is not operational.
+ Restore or recover your key, and if need be, rotate your key and associate a new key before deleting your old key. {{ecloud}} will send you alerts prompting you to restore the key if it cannot access your key and your deployment is not operational.
- Within 30 minutes maximum, Elastic Cloud locks the directories in which your deployment data live and prompts you to delete your deployment as an increased security measure.
+ Within 30 minutes maximum, {{ecloud}} locks the directories in which your deployment data live and prompts you to delete your deployment as an increased security measure.
While it is locked, the deployment retains all data but is not readable or writable*: @@ -428,6 +433,6 @@ When Elastic Cloud can’t reach the encryption key, your deployment may become * If Elastic performed some platform operations on your instances during the locked period, restoring operations can require some downtime. It’s also possible that some data can’t be restored** depending on the available snapshots. -**During the locked directory period, Elastic may need to perform platform operations on the machines hosting your instances that result in data loss on the Elasticsearch data nodes but not the deployment snapshots.* +**During the locked directory period, Elastic may need to perform platform operations on the machines hosting your instances that result in data loss on the {{es}} data nodes but not the deployment snapshots.* ***Elastic recommends that you keep snapshots of your deployment in custom snapshot repositories in your own CSP account for data recovery purposes.* diff --git a/deploy-manage/security/encrypt-deployment.md b/deploy-manage/security/encrypt-deployment.md deleted file mode 100644 index e0fe9ba445..0000000000 --- a/deploy-manage/security/encrypt-deployment.md +++ /dev/null @@ -1,7 +0,0 @@ -# Encrypt your deployment - -% What needs to be done: Write from scratch - -% GitHub issue: https://github.com/elastic/docs-projects/issues/346 - -⚠️ **This page is a work in progress.** ⚠️ \ No newline at end of file diff --git a/deploy-manage/security/fips-140-2.md b/deploy-manage/security/fips-140-2.md index f2b71557a7..8cac694493 100644 --- a/deploy-manage/security/fips-140-2.md +++ b/deploy-manage/security/fips-140-2.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + self: ga mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/fips-140-compliance.html - https://www.elastic.co/guide/en/kibana/current/xpack-security-fips-140-2.html @@ -6,19 +9,6 @@ mapped_urls: # FIPS 140-2 compliance -% What needs to be done: Refine - -% 
GitHub issue: https://github.com/elastic/docs-projects/issues/346
-
-% Scope notes: link to Deploy a FIPS compatible version of ECK
-
-% Use migrated content from existing pages that map to this page:
-
-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md
-% - [ ] ./raw-migrated-files/kibana/kibana/xpack-security-fips-140-2.md
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
 $$$configuring-es-yml$$$

 $$$fips-cached-password-hashing$$$

@@ -39,7 +29,209 @@
 $$$keystore-fips-password$$$

 $$$verify-security-provider$$$

-**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages:
+The Federal Information Processing Standard (FIPS) Publication 140-2 (FIPS PUB 140-2), titled "Security Requirements for Cryptographic Modules", is a U.S. government computer security standard used to approve cryptographic modules.
+- [{{es}}](#fips-elasticsearch) offers a FIPS 140-2 compliant mode and as such can run in a FIPS 140-2 configured JVM.
+- [{{kib}}](#fips-kibana) offers a FIPS 140-2 compliant mode and as such can run in a Node.js environment configured with a FIPS 140-2 compliant OpenSSL3 provider.
+
+:::{note}
+If you are running {{es}} through {{eck}}, refer to [ECK FIPS compatibility](/deploy-manage/deploy/cloud-on-k8s/deploy-fips-compatible-version-of-eck.md).
+:::
+
+## {{es}} [fips-elasticsearch]
+
+
+::::{important}
+The JVM bundled with {{es}} is not configured for FIPS 140-2. You must configure an external JDK with a FIPS 140-2 certified Java Security Provider. Refer to the {{es}} [JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm) for supported JVM configurations. See [subscriptions](https://www.elastic.co/subscriptions) for required licensing.
+::::
+
+
+Compliance with FIPS 140-2 requires using only FIPS approved / NIST recommended cryptographic algorithms.
Generally, this can be done by the following:
+
+* Installation and configuration of a FIPS certified Java security provider.
+* Ensuring the configuration of {{es}} is FIPS 140-2 compliant as documented below.
+* Setting `xpack.security.fips_mode.enabled` to `true` in `elasticsearch.yml`. Note that this setting alone is not sufficient to be compliant with FIPS 140-2.
+
+
+### Configuring {{es}} for FIPS 140-2 [_configuring_es_for_fips_140_2]
+
+Detailed instructions for the configuration required for FIPS 140-2 compliance are beyond the scope of this document. It is the responsibility of the user to ensure compliance with FIPS 140-2. {{es}} has been tested with the specific configuration described below. However, other configurations are also possible to achieve compliance.
+
+The following is a high-level overview of the required configuration:
+
+* Use an externally installed Java installation. The JVM bundled with {{es}} is not configured for FIPS 140-2.
+* Install a FIPS certified security provider .jar file(s) in {{es}}'s `lib` directory.
+* Configure Java to use a FIPS certified security provider ([see below](/deploy-manage/security/fips-140-2.md#java-security-provider)).
+* Configure {{es}}'s security manager to allow use of the FIPS certified provider ([see below](/deploy-manage/security/fips-140-2.md#java-security-manager)).
+* Ensure the keystore and truststore are configured correctly ([see below](/deploy-manage/security/fips-140-2.md#keystore-fips-password)).
+* Ensure the TLS settings are configured correctly ([see below](/deploy-manage/security/fips-140-2.md#fips-tls)).
+* Ensure the password hashing settings are configured correctly ([see below](/deploy-manage/security/fips-140-2.md#fips-stored-password-hashing)).
+* Ensure the cached password hashing settings are configured correctly ([see below](/deploy-manage/security/fips-140-2.md#fips-cached-password-hashing)).
+* Configure `elasticsearch.yml` to use FIPS 140-2 mode ([see below](/deploy-manage/security/fips-140-2.md#configuring-es-yml)).
+* Verify the security provider is installed and configured correctly ([see below](/deploy-manage/security/fips-140-2.md#verify-security-provider)).
+* Review the upgrade considerations ([see below](/deploy-manage/security/fips-140-2.md#fips-upgrade-considerations)) and limitations ([see below](/deploy-manage/security/fips-140-2.md#fips-limitations)).
+
+
+#### Java security provider [java-security-provider]
+
+Detailed instructions for the installation and configuration of a FIPS certified Java security provider are beyond the scope of this document. Specifically, a FIPS certified [JCA](https://docs.oracle.com/en/java/javase/17/security/java-cryptography-architecture-jca-reference-guide.html) and [JSSE](https://docs.oracle.com/en/java/javase/17/security/java-secure-socket-extension-jsse-reference-guide.html) implementation is required so that the JVM uses FIPS validated implementations of NIST recommended cryptographic algorithms.
+
+{{es}} has been tested with Bouncy Castle’s [bc-fips 1.0.2.5](https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2.5/bc-fips-1.0.2.5.jar) and [bctls-fips 1.0.19](https://repo1.maven.org/maven2/org/bouncycastle/bctls-fips/1.0.19/bctls-fips-1.0.19.jar). Refer to the {{es}} [JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm) for details on which combinations of JVM and security provider are supported in FIPS mode. {{es}} does not ship with a FIPS certified provider. It is the responsibility of the user to install and configure the security provider to ensure compliance with FIPS 140-2. Using a FIPS certified provider ensures that only approved cryptographic algorithms are used.
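Because the FIPS certified provider must be registered ahead of any non-FIPS provider, it can be useful to sanity-check the `security.provider.N` ordering in a `java.security` override file. The Python sketch below parses such a file and confirms the ordering; the embedded file content and Bouncy Castle class names are illustrative and should be verified against your provider's documentation.

```python
# Illustrative java.security override content; verify class names against
# your provider's documentation before relying on them.
example = """
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
security.provider.3=SUN
"""

def provider_order(text: str) -> list:
    """Return provider class names sorted by their security.provider.N order."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("security.provider."):
            key, value = line.split("=", 1)
            order_num = int(key.rsplit(".", 1)[1])
            # Drop any constructor argument (e.g. "fips:BCFIPS") after the class name.
            entries.append((order_num, value.split()[0]))
    return [cls for _, cls in sorted(entries)]

order = provider_order(example)
# The FIPS JCA provider must come before any non-FIPS provider such as SUN.
assert order[0].endswith("BouncyCastleFipsProvider")
print(order)
```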
To configure {{es}} to use additional security provider(s), configure {{es}}'s [JVM property](elasticsearch://reference/elasticsearch/jvm-settings.md#set-jvm-options) `java.security.properties` to point to a file ([example](https://raw.githubusercontent.com/elastic/elasticsearch/main/build-tools-internal/src/main/resources/fips_java.security)) in {{es}}'s `config` directory. Ensure the FIPS certified security provider is configured with the lowest order. This file should contain the necessary configuration to instruct Java to use the FIPS certified security provider.
+
+
+#### Java security manager [java-security-manager]
+
+All code running in {{es}} is subject to the security restrictions enforced by the Java security manager. The security provider you have installed and configured may require additional permissions in order to function correctly. You can grant these permissions by providing your own [Java security policy](https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html#FileSyntax).
+
+To configure {{es}}'s security manager, configure the JVM property `java.security.policy` to point to a file ([example](https://raw.githubusercontent.com/elastic/elasticsearch/main/build-tools-internal/src/main/resources/fips_java.policy)) in {{es}}'s `config` directory with the desired permissions. This file should contain the necessary configuration for the Java security manager to grant the required permissions needed by the security provider.
+
+
+#### {{es}} Keystore [keystore-fips-password]
+
+FIPS 140-2 (via NIST Special Publication 800-132) dictates that encryption keys should at least have an effective strength of 112 bits. As such, the {{es}} keystore that stores the node’s [secure settings](/deploy-manage/security/secure-settings.md) needs to be password protected with a password that satisfies this requirement.
This means that the password needs to be at least 14 bytes long, which is equivalent to a 14 character ASCII encoded password, or, for example, a 7 character password consisting of 2-byte UTF-8 encoded characters. You can use the [elasticsearch-keystore passwd](elasticsearch://reference/elasticsearch/command-line-tools/elasticsearch-keystore.md) subcommand to change or set the password of an existing keystore. Note that when the keystore is password-protected, you must supply the password each time {{es}} starts.
+
+
+#### TLS [fips-tls]
+
+SSLv2 and SSLv3 are not allowed by FIPS 140-2, so `SSLv2Hello` and `SSLv3` cannot be used for [`ssl.supported_protocols`](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings).
+
+::::{note}
+The use of TLS ciphers is mainly governed by the relevant crypto module (the FIPS Approved Security Provider that your JVM uses). All the ciphers that are configured by default in {{es}} are FIPS 140-2 compliant and as such can be used in a FIPS 140-2 JVM. See [`ssl.cipher_suites`](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings).
+::::
+
+
+#### TLS keystores and keys [_tls_keystores_and_keys]
+
+Keystores can be used in a number of [General TLS settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings) in order to conveniently store key and trust material. Neither `JKS` nor `PKCS#12` keystores can be used in a FIPS 140-2 configured JVM. Avoid using these types of keystores. Your FIPS 140-2 provider may provide a compliant keystore implementation that can be used, or you can use PEM encoded files. To use PEM encoded key material, you can use the relevant `*.key` and `*.certificate` configuration options, and for trust material you can use `*.certificate_authorities`.
+
+FIPS 140-2 compliance dictates that the length of the public keys used for TLS must correspond to the strength of the symmetric key algorithm in use in TLS.
Depending on the value of `ssl.cipher_suites` that you select to use, the TLS keys must have a corresponding length according to the following table:
+
+$$$comparable-key-strength$$$
+
+| Symmetric Key Algorithm | RSA key Length | ECC key length |
+| --- | --- | --- |
+| `3DES` | 2048 | 224-255 |
+| `AES-128` | 3072 | 256-383 |
+| `AES-256` | 15360 | 512+ |
+
+
+#### Stored password hashing [_stored_password_hashing]
+
+$$$fips-stored-password-hashing$$$
+While {{es}} offers a number of algorithms for securely hashing credentials on disk, only the `PBKDF2` based family of algorithms is compliant with FIPS 140-2 for stored password hashing. However, since `PBKDF2` is essentially a key derivation function, your JVM security provider may enforce a [112-bit key strength requirement](/deploy-manage/security/fips-140-2.md#keystore-fips-password). Although FIPS 140-2 does not mandate user password standards, this requirement may affect password hashing in {{es}}. To comply with this requirement, while allowing you to use passwords that satisfy your security policy, {{es}} offers [pbkdf2_stretch](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings), which is the suggested hashing algorithm when running {{es}} in FIPS 140-2 environments. `pbkdf2_stretch` performs a single round of SHA-512 on the user password before passing it to the `PBKDF2` implementation.
+
+::::{note}
+You can still use one of the plain `pbkdf2` options instead of `pbkdf2_stretch` if you have external policies and tools that can ensure all user passwords for the reserved, native, and file realms are longer than 14 bytes.
+::::
+
+
+You must set the `xpack.security.authc.password_hashing.algorithm` setting to one of the available `pbkdf2_stretch_*` values. When FIPS 140-2 mode is enabled, the default value for `xpack.security.authc.password_hashing.algorithm` is `pbkdf2_stretch`.
See [User cache and password hash algorithms](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings).
+
+Password hashing configuration changes are not retroactive, so the stored hashed credentials of existing users of the reserved, native, and file realms are not updated on disk. To ensure FIPS 140-2 compliance, recreate users or change their password using the [elasticsearch-user](elasticsearch://reference/elasticsearch/command-line-tools/users-command.md) CLI tool for the file realm and the [create users](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-user) and [change password](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-change-password) APIs for the native and reserved realms. Other types of realms are not affected and do not require any changes.
+
+
+#### Cached password hashing [_cached_password_hashing]
+
+$$$fips-cached-password-hashing$$$
+`ssha256` (salted `sha256`) is recommended for cache hashing. Though `PBKDF2` is compliant with FIPS 140-2, it is, by design, slow, and thus not generally suitable as a cache hashing algorithm. Cached credentials are never stored on disk, and salted `sha256` provides an adequate level of security for in-memory credential hashing, without imposing prohibitive performance overhead. You *may* use `PBKDF2`; however, you should carefully assess the performance impact first. Depending on your deployment, the overhead of `PBKDF2` could undo most of the performance gain of using a cache.
+
+Either set all `cache.hash_algo` settings to `ssha256` or leave them undefined, since `ssha256` is the default value for all `cache.hash_algo` settings. See [User cache and password hash algorithms](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings).
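As an illustration of the salted scheme, the sketch below hashes a random per-entry salt together with the credential, so identical passwords yield different cache entries. This shows only the idea behind salted `sha256`, not the exact {{es}} implementation or storage format.

```python
import hashlib
import hmac
import os

# Illustrative salted-sha256 cache hashing: a fresh random salt per entry.
def cache_hash(password, salt=None):
    """Return (salt, digest) for an in-memory credential cache entry."""
    salt = salt or os.urandom(8)
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

def verify(password, salt, expected):
    """Constant-time comparison of a candidate password against a cache entry."""
    candidate = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return hmac.compare_digest(candidate, expected)

salt, digest = cache_hash("s3cret-passw0rd")
assert verify("s3cret-passw0rd", salt, digest)
assert not verify("wrong-password", salt, digest)
```

A single SHA-256 round is fast enough for a cache that is consulted on every request, which is exactly why the slower `PBKDF2` family is reserved for on-disk hashes.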
+ +The user cache will be emptied upon node restart, so any existing hashes using non-compliant algorithms will be discarded and the new ones will be created using the algorithm you have selected. + + +#### Configure {{es}} `elasticsearch.yml` [configuring-es-yml] + +* Set `xpack.security.fips_mode.enabled` to `true` in `elasticsearch.yml`. This setting ensures that certain internal configuration is FIPS 140-2 compliant and enables some additional verification. +* Set `xpack.security.autoconfiguration.enabled` to `false`. This will disable the automatic configuration of the security settings. Users must ensure that the security settings are configured correctly for FIPS-140-2 compliance. This is only applicable to new installations. +* Set `xpack.security.authc.password_hashing.algorithm` appropriately. See [above](/deploy-manage/security/fips-140-2.md#fips-stored-password-hashing). +* Configure other relevant security settings, for example TLS for the transport and HTTP interfaces (not explicitly covered here or in the example below). +* Optional: Set `xpack.security.fips_mode.required_providers` in `elasticsearch.yml` to ensure the required security providers are installed (8.13+). See [below](/deploy-manage/security/fips-140-2.md#verify-security-provider). + +```yaml +xpack.security.fips_mode.enabled: true +xpack.security.autoconfiguration.enabled: false +xpack.security.fips_mode.required_providers: ["BCFIPS", "BCJSSE"] +xpack.security.authc.password_hashing.algorithm: "pbkdf2_stretch" +``` + + +#### Verify the security provider is installed [verify-security-provider] + +To verify that the security provider is installed and in use, you can use any of the following steps: + +* Verify the required security providers are configured with the lowest order in the file pointed to by `java.security.properties`.
For example, `security.provider.1` is a lower order than `security.provider.2`. +* Set `xpack.security.fips_mode.required_providers` in `elasticsearch.yml` to the list of required security providers. This setting (8.13+) ensures that the correct security provider is installed and configured. If the security provider is not installed correctly, {{es}} will fail to start. `["BCFIPS", "BCJSSE"]` are the values to use for Bouncy Castle’s FIPS JCE and JSSE certified provider. + + +### Upgrade considerations [fips-upgrade-considerations] + +{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.html)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation, which is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note that {{es}} does not ship with a FIPS certified security provider; you must install and configure it explicitly. + +Alternatively, consider using {{ech}} in the [FedRAMP-certified GovCloud region](https://www.elastic.co/industries/public-sector/fedramp). + +::::{important} +Some encryption algorithms may no longer be available by default in updated FIPS 140-2 security providers. Notably, Triple DES and PKCS1.5 RSA are now discouraged, and [Bouncy Castle](https://www.bouncycastle.org/fips-java) now requires explicit configuration to continue using these algorithms. + +:::: + + +If you plan to upgrade your existing cluster to a version that can be run in a FIPS 140-2 configured JVM, we recommend first performing a rolling upgrade to the new version in your existing JVM and performing all necessary configuration changes in preparation for running in FIPS 140-2 mode. You can then perform a rolling restart of the nodes, starting each node in a FIPS 140-2 JVM.
During the restart, {{es}}: + +* Upgrades [secure settings](/deploy-manage/security/secure-settings.md) to the latest, compliant format. A FIPS 140-2 JVM cannot load previous format versions. If your keystore is not password-protected, you must manually set a password. See [{{es}} Keystore](/deploy-manage/security/fips-140-2.md#keystore-fips-password). +* Upgrades self-generated trial licenses to the latest FIPS 140-2 compliant format. + +If your [subscription](https://www.elastic.co/subscriptions) already supports FIPS 140-2 mode, you can elect to perform a rolling upgrade while at the same time running each upgraded node in a FIPS 140-2 JVM. In this case, you also need to manually regenerate your `elasticsearch.keystore` and migrate all secure settings to it, in addition to the necessary configuration changes outlined in this document, before starting each node. + + +### Limitations [fips-limitations] + +Due to the limitations that FIPS 140-2 compliance enforces, a small number of features are not available while running in FIPS 140-2 mode. The list is as follows: + +* Azure Classic Discovery Plugin +* The [`elasticsearch-certutil`](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) tool. However, `elasticsearch-certutil` can still be used in a non-FIPS 140-2 configured JVM (by pointing the `ES_JAVA_HOME` environment variable to a different Java installation) to generate the keys and certificates that can later be used in the FIPS 140-2 configured JVM. +* The SQL CLI client cannot run in a FIPS 140-2 configured JVM while using TLS for transport security or PKI for client authentication. + + + +## {{kib}} [fips-kibana] + +To run {{kib}} in FIPS mode, you must have the appropriate [subscription](https://www.elastic.co/subscriptions). + +::::{important} +The Node bundled with {{kib}} is not configured for FIPS 140-2. You must configure a FIPS 140-2 compliant OpenSSL3 provider.
Consult the Node.js documentation to learn how to configure your environment. + +:::: + + +For {{kib}}, adherence to FIPS 140-2 is ensured by: + +* Using FIPS approved / NIST recommended cryptographic algorithms. +* Delegating the implementation of these cryptographic algorithms to a NIST validated cryptographic module (available via Node.js configured with an OpenSSL3 provider). +* Allowing the configuration of {{kib}} in a FIPS 140-2 compliant manner, as documented below. + +### Configuring {{kib}} for FIPS 140-2 [_configuring_kib_for_fips_140_2] + +Apart from setting `xpack.security.fipsMode.enabled` to `true` in your {{kib}} config, a number of security-related settings need to be reviewed and configured in order to run {{kib}} successfully in a FIPS 140-2 compliant Node.js environment. + +#### {{kib}} keystore [_kibana_keystore] + +FIPS 140-2 (via NIST Special Publication 800-132) dictates that encryption keys should at least have an effective strength of 112 bits. As such, the {{kib}} keystore that stores the application’s secure settings needs to be password protected with a password that satisfies this requirement. This means that the password needs to be at least 14 bytes long, which is equivalent to a 14-character ASCII encoded password or a 7-character UTF-8 encoded password that uses two bytes per character. + +For more information on how to set this password, refer to the [keystore documentation](/deploy-manage/security/secure-settings.md#change-password). + + +#### TLS keystore and keys [_tls_keystore_and_keys] + +Keystores can be used in a number of general TLS settings in order to conveniently store key and trust material. PKCS#12 keystores cannot be used in a FIPS 140-2 compliant Node.js environment. Avoid using these types of keystores. Your FIPS 140-2 provider may provide a compliant keystore implementation that can be used, or you can use PEM encoded files.
To use PEM encoded key material, you can use the relevant `*.key` and `*.certificate` configuration options, and for trust material you can use `*.certificate_authorities`. + +As an example, avoid PKCS#12-specific settings such as: -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md) -* [/raw-migrated-files/kibana/kibana/xpack-security-fips-140-2.md](/raw-migrated-files/kibana/kibana/xpack-security-fips-140-2.md) \ No newline at end of file +* `server.ssl.keystore.path` +* `server.ssl.truststore.path` +* `elasticsearch.ssl.keystore.path` +* `elasticsearch.ssl.truststore.path` \ No newline at end of file diff --git a/deploy-manage/security/gcp-private-service-connect-traffic-filters.md b/deploy-manage/security/gcp-private-service-connect-traffic-filters.md index 98fd86c64c..25dcd94acc 100644 --- a/deploy-manage/security/gcp-private-service-connect-traffic-filters.md +++ b/deploy-manage/security/gcp-private-service-connect-traffic-filters.md @@ -44,7 +44,7 @@ Private Service Connect filtering is supported only for Google Cloud regions. :::: -Private Service Connect establishes a secure connection between two Google Cloud VPCs. The VPCs can belong to separate accounts, for example a service provider and their service consumers. Google Cloud routes the Private Service Connect traffic within the Google Cloud data centers and never exposes it to the public internet. In such a configuration, Elastic Cloud is the third-party service provider and the customers are service consumers. +Private Service Connect establishes a secure connection between two Google Cloud VPCs. The VPCs can belong to separate accounts, for example a service provider and their service consumers. Google Cloud routes the Private Service Connect traffic within the Google Cloud data centers and never exposes it to the public internet.
In such a configuration, {{ecloud}} is the third-party service provider and the customers are service consumers. Private Link is a connection between a Private Service Connect Endpoint and a Service Attachment. [Learn more about using Private Service Connect on Google Cloud](https://cloud.google.com/vpc/docs/private-service-connect#benefits-services). @@ -85,11 +85,11 @@ Service Attachments are set up by Elastic in all supported GCP regions under the :::: -The process of setting up the Private link connection to your clusters is split between Google Cloud (e.g. by using Google Cloud console), and Elastic Cloud UI. These are the high-level steps: +The process of setting up the Private link connection to your clusters is split between Google Cloud (e.g. by using Google Cloud console), and {{ecloud}} UI. These are the high-level steps: -| Google Cloud console | Elastic Cloud UI | +| Google Cloud console | {{ecloud}} UI | | --- | --- | -| 1. Create a Private Service Connect endpoint using Elastic Cloud Service Attachment URI. | | +| 1. Create a Private Service Connect endpoint using {{ecloud}} Service Attachment URI. | | | 2. Create a DNS record pointing to the Private Service Connect endpoint. | | | | 3. Create a Private Service Connect rule set with the **PSC Connection ID**. | | | 4. Associate the Private Service Connect rule set with your deployments. | @@ -120,14 +120,14 @@ The process of setting up the Private link connection to your clusters is split 3. Test the connection. - Find out the Elasticsearch cluster ID of your deployment. You can do that by selecting **Copy cluster id** in the Cloud UI. It looks something like `9c794b7c08fa494b9990fa3f6f74c2f8`. + Find out the {{es}} cluster ID of your deployment. You can do that by selecting **Copy cluster id** in the Cloud UI. It looks something like `9c794b7c08fa494b9990fa3f6f74c2f8`. 
::::{tip} - The Elasticsearch cluster ID is **different** from the deployment ID, custom alias endpoint, and Cloud ID values that feature prominently in the user console. + The {{es}} cluster ID is **different** from the deployment ID, custom alias endpoint, and Cloud ID values that feature prominently in the user console. :::: - To access your Elasticsearch cluster over Private Link:, + To access your {{es}} cluster over Private Link: * If you have a [custom endpoint alias](/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md) configured, you can use the custom endpoint URL to connect. @@ -146,7 +146,7 @@ The process of setting up the Private link connection to your clusters is split `https://6b111580caaa4a9e84b18ec7c600155e.psc.asia-southeast1.gcp.elastic-cloud.com:9243` - You can test the Google Cloud console part of the setup with the following command (substitute the region and Elasticsearch ID with your cluster): + You can test the Google Cloud console part of the setup with the following command (substitute the region and {{es}} ID with your cluster's values): ```sh $ curl -v https://6b111580caaa4a9e84b18ec7c600155e.psc.asia-southeast1.gcp.elastic-cloud.com:9243 @@ -219,7 +219,7 @@ Use the alias you’ve set up as CNAME A record to access your deployment. :::: -For example, if your Elasticsearch ID is `6b111580caaa4a9e84b18ec7c600155e` and it is located in `asia-southeast1` region you can access it under `https://6b111580caaa4a9e84b18ec7c600155e.psc.asia-southeast1.gcp.elastic-cloud.com:9243`. +For example, if your {{es}} ID is `6b111580caaa4a9e84b18ec7c600155e` and it is located in the `asia-southeast1` region, you can access it at `https://6b111580caaa4a9e84b18ec7c600155e.psc.asia-southeast1.gcp.elastic-cloud.com:9243`.
```sh $ curl -u 'username:password' -v https://6b111580caaa4a9e84b18ec7c600155e.psc.asia-southeast1.gcp.elastic-cloud.com:9243 @@ -229,11 +229,11 @@ $ curl -u 'username:password' -v https://6b111580caaa4a9e84b18ec7c600155e.psc.as ``` ::::{note} -If you are using Private Service Connect together with Fleet, and enrolling the Elastic Agent with a Private Service Connect URL, you need to configure Fleet Server to use and propagate the Private Service Connect URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of Kibana. Otherwise, Elastic Agent will reset to use a default address instead of the Private Service Connect URL. The URL needs to follow this pattern: `https://.fleet.:443`. +If you are using Private Service Connect together with Fleet, and enrolling the Elastic Agent with a Private Service Connect URL, you need to configure Fleet Server to use and propagate the Private Service Connect URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of {{kib}}. Otherwise, Elastic Agent will reset to use a default address instead of the Private Service Connect URL. The URL needs to follow this pattern: `https://<{{es}} cluster ID/deployment alias>.fleet.<private hosted zone domain name>:443`. -Similarly, the Elasticsearch host needs to be updated to propagate the Private Service Connect URL. The Elasticsearch URL needs to follow this pattern: `https://.es.:443`. +Similarly, the {{es}} host needs to be updated to propagate the Private Service Connect URL. The {{es}} URL needs to follow this pattern: `https://<{{es}} cluster ID/deployment alias>.es.<private hosted zone domain name>:443`. -The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` that are needed to enable this configuration in {{kib}} are currently available on-prem only, and not in the [Kibana settings in {{ecloud}}](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md).
+The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` that are needed to enable this configuration in {{kib}} are currently available on-prem only, and not in the [{{kib}} settings in {{ecloud}}](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). :::: diff --git a/deploy-manage/security/httprest-clients-security.md b/deploy-manage/security/httprest-clients-security.md index edcd4c302a..5764ac124f 100644 --- a/deploy-manage/security/httprest-clients-security.md +++ b/deploy-manage/security/httprest-clients-security.md @@ -5,7 +5,7 @@ mapped_pages: # HTTP/REST clients and security [http-clients] -The {{es}} {{security-features}} work with standard HTTP [basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) headers to authenticate users. Since Elasticsearch is stateless, this header must be sent with every request: +The {{es}} {{security-features}} work with standard HTTP [basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) headers to authenticate users. 
Since {{es}} is stateless, this header must be sent with every request: ```shell Authorization: Basic <1> @@ -73,7 +73,7 @@ For more information about using {{security-features}} with the language specifi * [Java](elasticsearch-java://reference/_basic_authentication.md) * [JavaScript](elasticsearch-js://reference/connecting.md) * [.NET](elasticsearch-net://reference/configuration.md) -* [Perl](https://metacpan.org/pod/Search::Elasticsearch::Cxn::HTTPTiny#CONFIGURATION) +* [Perl](https://metacpan.org/pod/Search::Elasticsearch::Cxn::HTTPTiny#CONFIGURATION) * [PHP](elasticsearch-php://reference/connecting.md) * [Python](https://elasticsearch-py.readthedocs.io/en/master/#ssl-and-authentication) * [Ruby](https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-transport#authentication) diff --git a/deploy-manage/security/install-stack-demo-secure.md b/deploy-manage/security/install-stack-demo-secure.md index f75b41bf8f..595f926254 100644 --- a/deploy-manage/security/install-stack-demo-secure.md +++ b/deploy-manage/security/install-stack-demo-secure.md @@ -536,7 +536,7 @@ Now that the transport and HTTP layers are configured with encryption using the elasticsearch.ssl.certificateAuthorities: [/etc/kibana/elasticsearch-ca.pem] ``` -5. Log in to the first Elasticsearch node and use the certificate utility to generate a certificate bundle for the Kibana server. This certificate will be used to encrypt the traffic between Kibana and the client’s browser. In the command, replace and with the name and IP address of your Kibana server host: +5. Log in to the first {{es}} node and use the certificate utility to generate a certificate bundle for the {{kib}} server. This certificate will be used to encrypt the traffic between {{kib}} and the client’s browser.
In the command, replace `<kibana-host>` and `<kibana-ip>` with the name and IP address of your {{kib}} server host: ```shell sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --name kibana-server --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key --dns <kibana-host> --ip <kibana-ip> --pem ``` When prompted, specify a unique name for the output file, such as `kibana-cert-bundle.zip`. -6. Copy the generated archive over to your Kibana host and unpack it: +6. Copy the generated archive over to your {{kib}} host and unpack it: ```shell sudo unzip kibana-cert-bundle.zip @@ -617,11 +617,11 @@ Now that the transport and HTTP layers are configured with encryption using the tail -f /var/log/kibana/kibana.log ``` - In the log file you should find a `Kibana is now available` message. + In the log file you should find a `Kibana is now available` message. 14. You should now have an end-to-end encrypted deployment with {{es}} and {{kib}} that provides encryption between both the cluster nodes and {{kib}}, and HTTPS access to {{kib}}. - Open a web browser to the external IP address of the Kibana host machine: `https://:5601`. Note that the URL should use the `https` and not the `http` protocol. + Open a web browser to the external IP address of the {{kib}} host machine: `https://<kibana-ip>:5601`. Note that the URL should use the `https` and not the `http` protocol. 15. Log in using the `elastic` user and password that you configured when [installing your self-managed {{stack}}](/deploy-manage/deploy/self-managed.md). @@ -890,6 +890,6 @@ Congratulations! You’ve successfully configured security for {{es}}, {{kib}}, ## What’s next? [_whats_next] -* Do you have data ready to ingest into your newly set up {{stack}}? Learn how to [add data to Elasticsearch](../../manage-data/ingest.md). +* Do you have data ready to ingest into your newly set up {{stack}}?
Learn how to [add data to {{es}}](../../manage-data/ingest.md). * Use [Elastic {{observability}}](https://www.elastic.co/observability) to unify your logs, infrastructure metrics, uptime, and application performance data. * Want to protect your endpoints from security threats? Try [{{elastic-sec}}](https://www.elastic.co/security). Adding endpoint protection is just another integration that you add to the agent policy! diff --git a/deploy-manage/security/ip-traffic-filtering.md b/deploy-manage/security/ip-traffic-filtering.md index 552d986bdb..bee688bbf0 100644 --- a/deploy-manage/security/ip-traffic-filtering.md +++ b/deploy-manage/security/ip-traffic-filtering.md @@ -36,7 +36,7 @@ Other [traffic filtering](/deploy-manage/security/traffic-filtering.md) methods :::::{tab-set} :group: deployment-type -::::{tab-item} Elastic Cloud +::::{tab-item} {{ecloud}} :sync: cloud Traffic filtering, by IP address or CIDR block, is one of the security layers available in {{ecloud}}. It allows you to limit how your deployments can be accessed. We have two types of filters available for filtering by IP address or CIDR block: Ingress/Inbound and Egress/Outbound (Beta, API only). @@ -129,10 +129,10 @@ To delete a rule set with all its rules: :::: -::::{tab-item} Elastic Cloud Enterprise +::::{tab-item} {{ece}} :sync: cloud-enterprise -Follow the step described here to set up ingress or inbound IP filters through the Elastic Cloud Enterprise console. +Follow the steps described here to set up ingress or inbound IP filters through the {{ece}} console. **1. Create an IP filter rule set** @@ -216,7 +216,7 @@ You can apply IP filtering to application clients, node clients, or transport cl If a node’s IP address is on the denylist, the {{es}} {{security-features}} allow the connection to {{es}} but it is dropped immediately and no requests are processed. :::{note} -Elasticsearch installations are not designed to be publicly accessible over the Internet.
IP Filtering and the other capabilities of the {{es}} {{security-features}} do not change this condition. +{{es}} installations are not designed to be publicly accessible over the Internet. IP Filtering and the other capabilities of the {{es}} {{security-features}} do not change this condition. ::: @@ -277,7 +277,7 @@ xpack.security.http.filter.enabled: true **Specifying TCP transport profiles** -[TCP transport profiles](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-profiles) enable Elasticsearch to bind on multiple hosts. The {{es}} {{security-features}} enable you to apply different IP filtering on different profiles. +[TCP transport profiles](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-profiles) enable {{es}} to bind on multiple hosts. The {{es}} {{security-features}} enable you to apply different IP filtering on different profiles. ```yaml xpack.security.transport.filter.allow: 172.16.0.0/24 diff --git a/deploy-manage/security/kibana-session-management.md b/deploy-manage/security/kibana-session-management.md index 577b7b16bf..08a7160be0 100644 --- a/deploy-manage/security/kibana-session-management.md +++ b/deploy-manage/security/kibana-session-management.md @@ -1,9 +1,15 @@ --- +applies_to: + deployment: + ess: ga + ece: ga + eck: ga + self: ga mapped_pages: - https://www.elastic.co/guide/en/kibana/current/xpack-security-session-management.html --- -# Kibana session management [xpack-security-session-management] +# {{kib}} session management [xpack-security-session-management] When you log in, {{kib}} creates a session that is used to authenticate subsequent requests to {{kib}}. A session consists of two components: an encrypted cookie that is stored in your browser, and an encrypted document in a dedicated {{es}} hidden index. By default, the name of that index is `.kibana_security_session_1`, where the prefix is derived from the primary `.kibana` index. 
If either of these components is missing, the session is no longer valid. @@ -32,7 +38,7 @@ xpack.security.session.lifespan: "7d" ## Session cleanup interval [session-cleanup-interval] ::::{important} -If you disable session idle timeout and lifespan, then Kibana will not automatically remove session information from the index unless you explicitly log out. This might lead to an infinitely growing session index. As long as either idle timeout or lifespan is configured, Kibana sessions will be cleaned up even if you don’t explicitly log out. +If you disable session idle timeout and lifespan, then {{kib}} will not automatically remove session information from the index unless you explicitly log out. This might lead to an infinitely growing session index. As long as either idle timeout or lifespan is configured, {{kib}} sessions will be cleaned up even if you don’t explicitly log out. :::: diff --git a/deploy-manage/security/secure-saved-objects.md b/deploy-manage/security/secure-saved-objects.md index cdb9461a08..50b241d035 100644 --- a/deploy-manage/security/secure-saved-objects.md +++ b/deploy-manage/security/secure-saved-objects.md @@ -1,9 +1,14 @@ --- +applies_to: + deployment: + ece: ga + eck: ga + self: ga mapped_pages: - https://www.elastic.co/guide/en/kibana/current/xpack-security-secure-saved-objects.html --- -# Secure saved objects [xpack-security-secure-saved-objects] +# Secure {{kib}} saved objects [xpack-security-secure-saved-objects] {{kib}} stores entities such as dashboards, visualizations, alerts, actions, and advanced settings as saved objects, which are kept in a dedicated, internal {{es}} index. If such an object includes sensitive information, for example a PagerDuty integration key or email server credentials used by the alert action, {{kib}} encrypts it and makes sure it cannot be accidentally leaked or tampered with.
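Saved object encryption relies on an encryption key that you provide in `kibana.yml` through the `xpack.encryptedSavedObjects.encryptionKey` setting. A minimal sketch, with a placeholder value; the key must be at least 32 characters long:

```yaml
xpack.encryptedSavedObjects.encryptionKey: "replace-with-your-own-32-plus-character-key"
```

If this key is changed or lost, previously encrypted attributes can no longer be decrypted, so keep it stable across upgrades.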
diff --git a/deploy-manage/security/secure-settings.md b/deploy-manage/security/secure-settings.md index 25b3e3ea86..f2ca264cb7 100644 --- a/deploy-manage/security/secure-settings.md +++ b/deploy-manage/security/secure-settings.md @@ -1,4 +1,10 @@ --- +applies_to: + deployment: + ess: ga + ece: ga + eck: ga + self: ga mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configuring-keystore.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-restful-api-examples-configuring-keystore.html @@ -11,38 +17,478 @@ mapped_urls: # Secure your settings -% What needs to be done: Refine +$$$reloadable-secure-settings$$$ -% GitHub issue: https://github.com/elastic/docs-projects/issues/346 +$$$ec-add-secret-values$$$ -% Scope notes: put UI and API instructions on the same page could consider merging with the page above +$$$change-password$$$ -% Use migrated content from existing pages that map to this page: +$$$creating-keystore$$$ -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md -% - [ ] ./raw-migrated-files/cloud/cloud/ec-configuring-keystore.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md -% - [ ] ./raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-es-secure-settings.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/secure-settings.md -% - [ ] ./raw-migrated-files/kibana/kibana/secure-settings.md +Some settings are sensitive, and relying on filesystem permissions to protect their values is not sufficient. Depending on the settings you need to protect, you can use: -% Internal links rely on the following IDs being on this page (e.g. 
as a heading ID, paragraph ID, etc): +- [The {{es}} keystore](secure-settings.md#the-es-keystore) and the [`elasticsearch-keystore` tool](elasticsearch://reference/elasticsearch/command-line-tools/elasticsearch-keystore.md) to manage {{es}} settings. +- [The {{kib}} keystore](secure-settings.md#the-kib-keystore) and the `kibana-keystore` tool to manage {{kib}} settings. +- [Kubernetes secrets](secure-settings.md#kubernetes-secrets), if you are using {{eck}}. -$$$reloadable-secure-settings$$$ -$$$ec-add-secret-values$$$ +:::{important} +Only some settings are designed to be read from the keystore. However, the keystore has no validation to block unsupported settings. Adding unsupported settings to the keystore causes [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) to fail and if not addressed, {{es}} will fail to start. To check whether a setting is supported in the keystore, look for a "Secure" qualifier in the [setting reference](/reference/index.md). +::: -$$$change-password$$$ +## The {{es}} keystore -$$$creating-keystore$$$ +With the {{es}} keystore, you can add a key and its secret value, then use the key in place of the secret value when you configure your sensitive settings. + +:::::{tab-set} +:group: deployment-type + +::::{tab-item} {{ecloud}} +:sync: cloud + +There are three types of secrets that you can use: + +* **Single string** - Associate a secret value to a setting. +* **Multiple strings** - Associate multiple keys to multiple secret values. +* **JSON block/file** - Associate multiple keys to multiple secret values in JSON format. + + +**Add secret values** + +Add keys and secret values to the keystore. + +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. 
Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. From your deployment menu, select **Security**. +4. Locate **{{es}} keystore** and select **Add settings**. +5. On the **Create setting** window, select the secret **Type**. +6. Configure the settings, then select **Save**. +7. All the modifications to the non-reloadable keystore take effect only after restarting {{es}}. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. + + +**Delete secret values** + +When your keys and secret values are no longer needed, delete them from the keystore. + +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. From your deployment menu, select **Security**. +4. From the **Existing keystores** list, use the delete icon next to the **Setting Name** that you want to delete. +5. On the **Confirm to delete** window, select **Confirm**. +6. All modifications to the non-reloadable keystore take effect only after restarting {{es}}. 
Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. + +:::: + +::::{tab-item} {{ece}} +:sync: ece + +There are three types of secrets that you can use: + +* **Single string** - Associate a secret value to a setting. +* **Multiple strings** - Associate multiple keys to multiple secret values. +* **JSON block/file** - Associate multiple keys to multiple secret values in JSON format. + + +**Add secret values** + +Add keys and secret values to the keystore. + +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). +2. On the **Deployments** page, select your deployment. + + Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. + +3. From your deployment menu, select **Security**. +4. Locate **{{es}} keystore** and select **Add settings**. +5. On the **Create setting** window, select the secret **Type**. +6. Configure the settings, then select **Save**. +7. All the modifications to the non-reloadable keystore take effect only after restarting {{es}}. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. + + + +**Delete secret values** + +When your keys and secret values are no longer needed, delete them from the keystore. + +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). +2. On the **Deployments** page, select your deployment. + + Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. + +3. From your deployment menu, select **Security**. +4. From the **Existing keystores** list, use the delete icon next to the **Setting Name** that you want to delete. +5. 
On the **Confirm to delete** window, select **Confirm**.
+6. All modifications to the non-reloadable keystore take effect only after restarting {{es}}. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request.
+
+:::{dropdown} Using the API
+
+**Steps**
+
+Add secrets to the keystore:
+
+```sh
+curl -k -X PATCH -H "Authorization: ApiKey $ECE_API_KEY" \
+  -H "Content-Type: application/json" \
+  "https://$COORDINATOR_HOST:12443/api/v1/deployments/$DEPLOYMENT_ID/elasticsearch/$REF_ID/keystore" \
+  -d '
+{
+  "secrets": {
+    "s3.client.CLIENT_NAME.access_key": {
+      "as_file": false,
+      "value": "ACCESS_KEY_VALUE"
+    },
+    "s3.client.CLIENT_NAME.secret_key": {
+      "value": "SECRET_KEY_VALUE"
+    }
+  }
+}'
+```
+
+List the keys defined in the keystore by issuing a `GET` request to the same `/keystore` endpoint. The response contains the key names but never the secret values:
+
+```sh
+{
+  "secrets": {
+    "s3.client.CLIENT_NAME.access_key": {
+      "as_file": false
+    },
+    "s3.client.CLIENT_NAME.secret_key": {
+      "as_file": false
+    }
+  }
+}
+```
+
+`ELASTICSEARCH_CLUSTER_ID`
+: The {{es}} cluster ID as shown in the Cloud UI or obtained through the API. It is used in the snapshot repository examples that follow.
+
+Create the credentials for an S3 or Minio repository:
+
+```sh
+curl -k -X PUT -H "Authorization: ApiKey $ECE_API_KEY" \
+  -H "Content-Type: application/json" \
+  "https://$COORDINATOR_HOST:12443/api/v1/clusters/elasticsearch/$ELASTICSEARCH_CLUSTER_ID/_snapshot/s3-repo" \
+  -d '
+{
+  "type": "s3",
+  "settings": {
+    "bucket": "s3_REPOSITORY_NAME",
+    "client": "CLIENT_NAME",
+    "base_path": "PATH_NAME"
+  }
+}'
+```
+
+Create the credentials for a GCS repository:
+
+```sh
+curl -k -X PUT -H "Authorization: ApiKey $ECE_API_KEY" \
+  -H "Content-Type: application/json" \
+  "https://$COORDINATOR_HOST:12443/api/v1/clusters/elasticsearch/$ELASTICSEARCH_CLUSTER_ID/_snapshot/gcs-repo" \
+  -d '
+{
+  "type": "gcs",
+  "settings": {
+    "bucket": "BUCKET_NAME",
+    "base_path": "BASE_PATH_NAME",
+    "client": "CLIENT_NAME"
+  }
+}'
+```
+
+To use GCS snapshots, the cluster must have the `repository-gcs` plugin enabled.
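Because the keystore request bodies above are plain JSON, you can lint a body locally before sending it with `curl`. This sketch assumes `python3` is on the PATH and reuses the placeholder names from the examples:

```shell
# Validate a keystore request body locally before sending it.
# json.tool exits non-zero and prints an error for malformed JSON.
cat <<'EOF' | python3 -m json.tool
{
  "secrets": {
    "s3.client.CLIENT_NAME.access_key": {
      "as_file": false,
      "value": "ACCESS_KEY_VALUE"
    },
    "s3.client.CLIENT_NAME.secret_key": {
      "value": "SECRET_KEY_VALUE"
    }
  }
}
EOF
```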
+
+
+Remove keys that are defined in the keystore:
+
+```sh
+curl -k -X PATCH -H "Authorization: ApiKey $ECE_API_KEY" \
+  -H "Content-Type: application/json" \
+  "https://$COORDINATOR_HOST:12443/api/v1/deployments/$DEPLOYMENT_ID/elasticsearch/$REF_ID/keystore" \
+  -d '
+{
+  "secrets": {
+    "KEY_TO_REMOVE": {}
+  }
+}'
+```
+
+
+**Verify your credentials**
+
+If requests that rely on your stored credentials fail, an administrator can verify that the keys are set by checking the `keystore` field in the cluster metadata.
+
+If the credential values are correct but do not work, the keystore file could be out of sync on one or more nodes. To sync the keystore file, update the value for the key by using the PATCH API to delete the key from the keystore, then add it back again.
+
+:::
+
+::::
+
+::::{tab-item} Self-managed
+:sync: self-managed
+
+All modifications to the keystore take effect only after restarting {{es}}.
+
+These settings, just like the regular ones in the `elasticsearch.yml` config file, need to be specified on each node in the cluster. Currently, all secure settings are node-specific settings that must have the same value on every node.
+
+
+**Reloadable secure settings**
+
+Just like the settings values in `elasticsearch.yml`, changes to the keystore contents are not automatically applied to the running {{es}} node. Re-reading settings requires a node restart. However, certain secure settings are marked as **reloadable**. Such settings can be re-read and applied on a running node.
+
+You can define these settings before the node is started, or call the [Nodes reload secure settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) after the settings are defined to apply them to a running node.
+
+The values of all secure settings, **reloadable** or not, must be identical across all cluster nodes.
After making the desired secure settings changes, using the `bin/elasticsearch-keystore add` command, call: + +```console +POST _nodes/reload_secure_settings +{ + "secure_settings_password": "keystore-password" <1> +} +``` + +1. The password that the {{es}} keystore is encrypted with. + + +This API decrypts, re-reads the entire keystore and validates all settings on every cluster node, but only the **reloadable** secure settings are applied. Changes to other settings do not go into effect until the next restart. Once the call returns, the reload has been completed, meaning that all internal data structures dependent on these settings have been changed. Everything should look as if the settings had the new value from the start. + +When changing multiple **reloadable** secure settings, modify all of them on each cluster node, then issue a [`reload_secure_settings`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) call instead of reloading after each modification. 
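The add-then-reload sequence can be sketched as a short shell script. The setting names are assumed placeholders, and the keystore and reload commands are shown commented out because they require a live node; run the additions on every node before issuing the single reload call:

```shell
# Stage several secure settings, then trigger a single reload.
# Placeholder setting names; repeat on every node in the cluster.
set -eu

for name in s3.client.default.access_key s3.client.default.secret_key; do
  # In a real run (prompts for each value):
  #   bin/elasticsearch-keystore add "$name"
  echo "staged: $name"
done

# One reload call after all modifications, instead of one per setting:
#   curl -u elastic -X POST "https://localhost:9200/_nodes/reload_secure_settings" \
#     -H "Content-Type: application/json" \
#     -d '{"secure_settings_password": "keystore-password"}'
```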
+ +There are reloadable secure settings for: + +* [The Azure repository plugin](/deploy-manage/tools/snapshot-and-restore/azure-repository.md) +* [The EC2 discovery plugin](elasticsearch://reference/elasticsearch-plugins/discovery-ec2-usage.md#_configuring_ec2_discovery) +* [The GCS repository plugin](/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md) +* [The S3 repository plugin](/deploy-manage/tools/snapshot-and-restore/s3-repository.md) +* [Monitoring settings](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md) +* [{{watcher}} settings](elasticsearch://reference/elasticsearch/configuration-reference/watcher-settings.md) +* [JWT realm](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-jwt-settings) +* [Active Directory realm](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ad-settings) +* [LDAP realm](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings) +* [Remote cluster credentials for the API key based security model](/deploy-manage/remote-clusters/remote-clusters-settings.md#remote-cluster-credentials-setting) + + +:::: + +::::: + +## The {{kib}} keystore +```{applies_to} +deployment: + self: ga +``` + +Some settings are sensitive, and relying on filesystem permissions to protect their values is not sufficient. For this use case, {{kib}} provides a keystore, and the `kibana-keystore` tool to manage the settings in the keystore. + +::::{note} +* Run all commands as the user who runs {{kib}}. +* Any valid {{kib}} setting can be stored in the keystore securely. Unsupported, extraneous or invalid settings will cause {{kib}} to fail to start up. 
+
+::::
+
+### Create the keystore [creating-keystore]
+
+To create the `kibana.keystore`, use the `create` command:
+
+```sh
+bin/kibana-keystore create
+```
+
+The file `kibana.keystore` will be created in the `config` directory defined by the environment variable `KBN_PATH_CONF`.
+
+To create a password-protected keystore, use the `--password` flag.
+
+
+### List settings in the keystore [list-settings]
+
+A list of the settings in the keystore is available with the `list` command:
+
+```sh
+bin/kibana-keystore list
+```
+
+
+### Add string settings [add-string-to-keystore]
+
+::::{note}
+Your input will be JSON-parsed to allow for object/array input configurations. To enforce string values, use "double quotes" around your input.
+::::
+
+
+Sensitive string settings, like authentication credentials for {{es}}, can be added using the `add` command:
+
+```sh
+bin/kibana-keystore add the.setting.name.to.set
+```
+
+Once added to the keystore, the setting is automatically applied to this instance of {{kib}} when it starts. For example, if you run
+
+```sh
+bin/kibana-keystore add elasticsearch.username
+```
+
+the tool prompts you for the value of `elasticsearch.username`. (Your input will show as asterisks.)
+
To pass the value through stdin, use the `--stdin` flag: + +```sh +cat /file/containing/setting/value | bin/kibana-keystore add the.setting.name.to.set --stdin +``` + + +### Remove settings [remove-settings] + +To remove a setting from the keystore, use the `remove` command: + +```sh +bin/kibana-keystore remove the.setting.name.to.remove +``` + + +### Read settings [read-settings] + +To display the configured setting values, use the `show` command: + +```sh +bin/kibana-keystore show setting.key +``` + + +### Change password [change-password] + +To change the password of the keystore, use the `passwd` command: + +```sh +bin/kibana-keystore passwd +``` + + +### Has password [has-password] + +To check if the keystore is password protected, use the `has-passwd` command. An exit code of 0 will be returned if the keystore is password protected, and the command will fail otherwise. + +```sh +bin/kibana-keystore has-passwd +``` + +## Kubernetes secrets +```{applies_to} +deployment: + eck: ga +``` + +You can specify [secure settings](/deploy-manage/security/secure-settings.md) with [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/). The secrets should contain a key-value pair for each secure setting you want to add. ECK automatically injects these settings into the keystore on each {{es}} node before it starts {{es}}. The ECK operator continues to watch the secrets for changes and will update the {{es}} keystore when it detects a change. 
+ +### Basic usage [k8s_basic_usage] + +It is possible to reference several secrets: + +```yaml +spec: + secureSettings: + - secretName: one-secure-settings-secret + - secretName: two-secure-settings-secret +``` + +For the following secret, a `gcs.client.default.credentials_file` key will be created in {{es}}’s keystore with the provided value: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: one-secure-settings-secret +type: Opaque +stringData: + gcs.client.default.credentials_file: | + { + "type": "service_account", + "project_id": "your-project-id", + "private_key_id": "...", + "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", + "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", + "client_id": "...", + "auth_uri": "https://accounts.google.com/o/oauth2/auth", + "token_uri": "https://accounts.google.com/o/oauth2/token", + "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" + } +``` + +::::{tip} +Note that by default [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/) are expecting the value to be base64 encoded unless under a `stringData` field. 
+:::: + + + +### Projection of secret keys to specific paths [k8s_projection_of_secret_keys_to_specific_paths] + +You can export a subset of secret keys and also project keys to specific paths using the `entries`, `key` and `path` fields: + +```yaml +spec: + secureSettings: + - secretName: gcs-secure-settings + entries: + - key: gcs.client.default.credentials_file + - key: gcs_client_1 + path: gcs.client.client_1.credentials_file + - key: gcs_client_2 + path: gcs.client.client_2.credentials_file +``` + +For the three entries listed in the `gcs-secure-settings` secret, three keys are created in {{es}}’s keystore: + +* `gcs.client.default.credentials_file` +* `gcs.client.client_1.credentials_file` +* `gcs.client.client_2.credentials_file` + +The referenced `gcs-secure-settings` secret now looks like this: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: gcs-secure-settings +type: Opaque +stringData: + gcs.client.default.credentials_file: | + { + "type": "service_account", + "project_id": "project-id-to-be-used-for-default-client", + "private_key_id": "private key ID for default-client", + "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", + "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", + "client_id": "client ID for the default client", + "auth_uri": "https://accounts.google.com/o/oauth2/auth", + "token_uri": "https://accounts.google.com/o/oauth2/token", + "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" + } + gcs_client_1: | + { + "type": "service_account", + "project_id": "project-id-to-be-used-for-gcs_client_1", + "private_key_id": "private key ID for gcs_client_1", + "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", + "client_email": 
"service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", + "client_id": "client ID for the gcs_client_1 client", + "auth_uri": "https://accounts.google.com/o/oauth2/auth", + "token_uri": "https://accounts.google.com/o/oauth2/token", + "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" + } + gcs_client_2: | + { + "type": "service_account", + "project_id": "project-id-to-be-used-for-gcs_client_2", + "private_key_id": "private key ID for gcs_client_2", + "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", + "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", + "client_id": "client ID for the gcs_client_2 client", + "auth_uri": "https://accounts.google.com/o/oauth2/auth", + "token_uri": "https://accounts.google.com/o/oauth2/token", + "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" + } +``` + + +### More examples [k8s_more_examples] + +Check [How to create automated snapshots](/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md) for an example use case that illustrates how secure settings can be used to set up automated {{es}} snapshots to a GCS storage bucket. 
-**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: -* [/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md](/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md](/raw-migrated-files/cloud/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md) -* [/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md](/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md](/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-es-secure-settings.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-es-secure-settings.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-settings.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-settings.md) -* [/raw-migrated-files/kibana/kibana/secure-settings.md](/raw-migrated-files/kibana/kibana/secure-settings.md) \ No newline at end of file diff --git a/deploy-manage/security/secure-your-cluster-deployment.md b/deploy-manage/security/secure-your-cluster-deployment.md index c92f47f5b1..f4a8dae5b1 100644 --- a/deploy-manage/security/secure-your-cluster-deployment.md +++ b/deploy-manage/security/secure-your-cluster-deployment.md @@ -72,13 +72,16 @@ Refer to [](traffic-filtering.md). - **TLS certificates and keys** -## Data and objects security +## Data, objects and settings security -- **Bring your own encryption key** -- **Elasticsearch keystore** -- **Kibana saved objects** +- **Bring your own encryption key**: Use your own encryption key instead of the default encryption at rest provided by Elastic. 
+- **{{es}} and {{kib}} keystores**: Secure sensitive settings using keystores +- **{{kib}} saved objects**: Customize the encryption for {{kib}} objects such as dashboards. +- **{{kib}} session management**: Customize {{kib}} session expiration settings. -## User roles and sessions +Refer to [](data-security.md). + +## User roles [Define roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) for your users and [assign appropriate privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to ensure that users have access only to the resources that they need. This process determines whether the user behind an incoming request is allowed to run that request. diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md index ba9337fd2d..33267f3fe9 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation.md @@ -7,7 +7,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-securing-considerations.html --- -# Secure your Elastic Cloud Enterprise orchestrator [ece-securing-considerations] +# Secure your {{ece}} orchestrator [ece-securing-considerations] This section covers security settings for your {{ece}} orchestrator. @@ -28,31 +28,31 @@ Additional security settings are available for you to configure individually for ### Users with admin privileges [ece_users_with_admin_privileges] -In Elastic Cloud Enterprise, every user who can manage your installation through the Cloud UI or the RESTful API is a user with admin privileges. This includes both the `admin` user and the `readonly` user that get created when you install ECE on your first host. Initially, only the `admin` user has the required privileges to make changes to resources on ECE. 
+In {{ece}}, every user who can manage your installation through the Cloud UI or the RESTful API is a user with admin privileges. This includes both the `admin` user and the `readonly` user that get created when you install ECE on your first host. Initially, only the `admin` user has the required privileges to make changes to resources on ECE. -[Role-based access control](../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for Elastic Cloud Enterprise allows you to connect multiple users or user groups to the platform. +[Role-based access control](../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for {{ece}} allows you to connect multiple users or user groups to the platform. -All Elasticsearch clusters come with X-Pack security features and support role-based access control. To learn more, check [Secure Your Clusters](../users-roles/cluster-or-deployment-auth.md). +All {{es}} clusters come with X-Pack security features and support role-based access control. To learn more, check [Secure Your Clusters](../users-roles/cluster-or-deployment-auth.md). ### Encryption [ece_encryption] -Elastic Cloud Enterprise does not implement encryption at rest out of the box. To ensure encryption at rest for all data managed by Elastic Cloud Enterprise, the hosts running Elastic Cloud Enterprise must be configured with disk-level encryption, such as dm-crypt. In addition, snapshot targets must ensure that data is encrypted at rest as well. +{{ece}} does not implement encryption at rest out of the box. To ensure encryption at rest for all data managed by {{ece}}, the hosts running {{ece}} must be configured with disk-level encryption, such as dm-crypt. In addition, snapshot targets must ensure that data is encrypted at rest as well. -Configuring dm-crypt or similar technologies is outside the scope of the Elastic Cloud Enterprise documentation, and issues related to disk encryption are outside the scope of support. 
+Configuring dm-crypt or similar technologies is outside the scope of the {{ece}} documentation, and issues related to disk encryption are outside the scope of support. -Elastic Cloud Enterprise provides full encryption of all network traffic by default. +{{ece}} provides full encryption of all network traffic by default. -TLS is supported when interacting with the [RESTful API of Elastic Cloud Enterprise](https://www.elastic.co/docs/api/doc/cloud-enterprise/) and for the proxy layer that routes user requests to clusters of all versions. Internally, our administrative services also ensure transport-level encryption. +TLS is supported when interacting with the [RESTful API of {{ece}}](https://www.elastic.co/docs/api/doc/cloud-enterprise/) and for the proxy layer that routes user requests to clusters of all versions. Internally, our administrative services also ensure transport-level encryption. ### Attack vectors versus separation of roles [ece-securing-vectors] As covered in [Separation of Roles](../deploy/cloud-enterprise/ece-roles.md), it is important to not mix certain roles in a production environment. -Specifically, a host that is used as an allocator should hold *only* the allocator role. Allocators run the Elasticsearch and Kibana nodes that handle your workloads, which can expose a larger attack surface than the internal admin services. By separating the allocator role from other roles, you reduce any potential security exposure. +Specifically, a host that is used as an allocator should hold *only* the allocator role. Allocators run the {{es}} and {{kib}} nodes that handle your workloads, which can expose a larger attack surface than the internal admin services. By separating the allocator role from other roles, you reduce any potential security exposure. -Elastic Cloud Enterprise is designed to ensure that an allocator has access only to the keys necessary to manage the clusters that it has been assigned. 
If there is a compromise of Elasticsearch or Kibana combined with a zero-day or Linux kernel exploit, for example, this design ensures that the entire Elastic Cloud Enterprise installation or clusters other than those already managed by that allocator are not affected. +{{ece}} is designed to ensure that an allocator has access only to the keys necessary to manage the clusters that it has been assigned. If there is a compromise of {{es}} or {{kib}} combined with a zero-day or Linux kernel exploit, for example, this design ensures that the entire {{ece}} installation or clusters other than those already managed by that allocator are not affected. Security comes in layers, and running separate services on separate infrastructure is the last layer of defense, on top of other security features like the JVM security manager, system call filtering, and running nodes in isolated containers with no shared secrets. @@ -60,6 +60,6 @@ Security comes in layers, and running separate services on separate infrastructu ### Hardware isolation $$$ece_clusters_share_the_same_resources$$$ -The Elasticsearch clusters you create on Elastic Cloud Enterprise share the same resources. It is currently not possible to run a specific cluster on entirely dedicated hardware not shared by other clusters. +The {{es}} clusters you create on {{ece}} share the same resources. It is currently not possible to run a specific cluster on entirely dedicated hardware not shared by other clusters. 
diff --git a/deploy-manage/security/secure-your-elastic-cloud-organization.md b/deploy-manage/security/secure-your-elastic-cloud-organization.md index 50f8e53353..8f661e1f27 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-organization.md +++ b/deploy-manage/security/secure-your-elastic-cloud-organization.md @@ -6,7 +6,7 @@ applies_to: serverless: ga --- -# Secure your Elastic Cloud organization [ec-securing-considerations] +# Secure your {{ecloud}} organization [ec-securing-considerations] This section covers security settings for your {{ecloud}} organization, the platform for managing {{ech}} deployments and serverless projects. @@ -14,9 +14,9 @@ This section covers security settings for your {{ecloud}} organization, the plat As a managed service, Elastic automatically handles a [number of security features](https://www.elastic.co/cloud/security#details) with no configuration required: -- **TLS encrypted communication** is provided in the default configuration. Elasticsearch nodes communicate using TLS. +- **TLS encrypted communication** is provided in the default configuration. {{es}} nodes communicate using TLS. - **Encryption at rest**. By default, all of your {{ecloud}} resources are encrypted at rest. Note that you can choose to encrypt your {{ech}} deployments [using your own encryption key](/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md). -- **Cluster isolation**. Elasticsearch nodes run in isolated containers, configured according to the principle of least privilege, and with restrictions on system calls and allowed root operations. +- **Cluster isolation**. {{es}} nodes run in isolated containers, configured according to the principle of least privilege, and with restrictions on system calls and allowed root operations. 
**Additional organization-level security settings**
diff --git a/deploy-manage/security/security-certificates-keys.md b/deploy-manage/security/security-certificates-keys.md
index 6d41517fff..32184b8b51 100644
--- a/deploy-manage/security/security-certificates-keys.md
+++ b/deploy-manage/security/security-certificates-keys.md
@@ -140,10 +140,10 @@ If the auto-configuration process already completed, you can still obtain the fi
openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt
```

The command returns the security certificate, including the fingerprint. The `issuer` should be `Elasticsearch security auto-configuration HTTP CA`.

```sh
issuer= /CN=Elasticsearch security auto-configuration HTTP CA
SHA256 Fingerprint=
```

diff --git a/deploy-manage/security/set-up-basic-security-plus-https.md b/deploy-manage/security/set-up-basic-security-plus-https.md
index 1781e4544c..92b92a4e77 100644
--- a/deploy-manage/security/set-up-basic-security-plus-https.md
+++ b/deploy-manage/security/set-up-basic-security-plus-https.md
@@ -223,7 +223,7 @@ Typically, you need to create the following separate roles:
* **setup** role for setting up index templates and other dependencies
* **monitoring** role for sending monitoring information
* **writer** role for publishing events collected by {{metricbeat}}
-* **reader** role for Kibana users who need to view and create visualizations that access {{metricbeat}} data
+* **reader** role for {{kib}} users who need to view and create visualizations that access {{metricbeat}} data

::::{note}
These instructions assume that you are using the default name for {{metricbeat}} indices.
If the indicated index names are not listed, or you are using a custom name, enter it manually when defining roles and modify the privileges to match your index naming pattern. @@ -335,7 +335,7 @@ Users who publish events to {{es}} need to create and write to {{metricbeat}} in 1. Create the reader role: 2. Enter **metricbeat_reader** as the role name. 3. On the **metricbeat-\*** indices, choose the **read** privilege. -4. Under **Kibana**, click **Add Kibana privilege**. +4. Under **{{kib}}**, click **Add {{kib}} privilege**. * Under **Spaces**, choose **Default**. * Choose **Read** or **All** for Discover, Visualize, Dashboard, and Metrics. diff --git a/deploy-manage/security/traffic-filtering.md b/deploy-manage/security/traffic-filtering.md index bbfbb11588..b082012ee2 100644 --- a/deploy-manage/security/traffic-filtering.md +++ b/deploy-manage/security/traffic-filtering.md @@ -19,7 +19,7 @@ Traffic filtering allows you to limit how your deployments and clusters can be a :::::{tab-set} :group: deployment-type -::::{tab-item} Elastic Cloud +::::{tab-item} {{ecloud}} :sync: cloud On {{ecloud}}, the following types of traffic filters are available for your {{ech}} deployments: @@ -35,7 +35,7 @@ On {{ecloud}}, the following types of traffic filters are available for your {{e **How does it work?** -By default, all your {{ecloud}} deployments are accessible over the public internet. They are not accessible over unknown PrivateLink connections. This only applies to external traffic. Internal traffic is managed by {{ecloud}}. For example, Kibana can connect to Elasticsearch, as well as internal services which manage the deployment. Other deployments can’t connect to deployments protected by traffic filters. +By default, all your {{ecloud}} deployments are accessible over the public internet. They are not accessible over unknown PrivateLink connections. This only applies to external traffic. Internal traffic is managed by {{ecloud}}. 
For example, {{kib}} can connect to {{es}}, as well as internal services which manage the deployment. Other deployments can’t connect to deployments protected by traffic filters. In {{ecloud}} you can define traffic filters from the **Features** > **Traffic filters** page, and apply them to your {{ech}} deployments individually from their **Settings** page. @@ -51,7 +51,7 @@ Filtering rules are grouped into rule sets, which in turn are associated with on - You can assign multiple rule sets to a single deployment. The rule sets can be of different types. In case of multiple rule sets, traffic can match ANY of them. If none of the rule sets match the request is rejected with `403 Forbidden`. - Traffic filter rule sets are bound to a single region. The rule sets can be assigned only to deployments in the same region. If you want to associate a rule set with deployments in multiple regions you have to create the same rule set in all the regions you want to apply it to. - You can mark a rule set as *default*. It is automatically attached to all new deployments that you create in its region. You can detach default rule sets from deployments after they are created. Note that a *default* rule set is not automatically attached to existing deployments. -- Traffic filter rule sets when associated with a deployment will apply to all deployment endpoints, such as Elasticsearch, Kibana, APM Server, and others. +- Traffic filter rule sets when associated with a deployment will apply to all deployment endpoints, such as {{es}}, {{kib}}, APM Server, and others. - Any traffic filter rule set assigned to a deployment overrides the default behavior of *allow all access over the public internet endpoint; deny all access over Private Link*. The implication is that if you make a mistake putting in the traffic source (for example, specified the wrong IP address) the deployment will be effectively locked down to any of your traffic. You can use the UI to adjust or remove the rule sets. 
@@ -77,7 +77,7 @@ On {{ece}}, make sure your [load balancer](/deploy-manage/deploy/cloud-enterpris **How does it work?** -By default, all your deployments are accessible over the public internet, assuming that your orchestrator's proxies are accessible. This only applies to external traffic. Internal traffic is managed by the orchestrator. For example, Kibana can connect to Elasticsearch, as well as internal services which manage the deployment. Other deployments can’t connect to deployments protected by traffic filters. +By default, all your deployments are accessible over the public internet, assuming that your orchestrator's proxies are accessible. This only applies to external traffic. Internal traffic is managed by the orchestrator. For example, {{kib}} can connect to {{es}}, as well as internal services which manage the deployment. Other deployments can’t connect to deployments protected by traffic filters. You can define traffic filters from the **Platform** > **Security** page, and apply them to your {{ech}} deployments individually from their **Settings** page. @@ -93,7 +93,7 @@ Filtering rules are grouped into rule sets, which in turn are associated with on - You can assign multiple rule sets to a single deployment. The rule sets can be of different types. In case of multiple rule sets, traffic can match ANY of them. If none of the rule sets match the request is rejected with `403 Forbidden`. - Traffic filter rule sets are bound to a single region. The rule sets can be assigned only to deployments in the same region. If you want to associate a rule set with deployments in multiple regions you have to create the same rule set in all the regions you want to apply it to. - You can mark a rule set as *default*. It is automatically attached to all new deployments that you create in its region. You can detach default rule sets from deployments after they are created. Note that a *default* rule set is not automatically attached to existing deployments. 
-- Traffic filter rule sets when associated with a deployment will apply to all deployment endpoints, such as Elasticsearch, Kibana, APM Server, and others. +- Traffic filter rule sets when associated with a deployment will apply to all deployment endpoints, such as {{es}}, {{kib}}, APM Server, and others. - Any traffic filter rule set assigned to a deployment overrides the default behavior of *allow all access over the public internet endpoint; deny all access over Private Link*. The implication is that if you make a mistake putting in the traffic source (for example, specified the wrong IP address) the deployment will be effectively locked down to any of your traffic. You can use the UI to adjust or remove the rule sets. :::: diff --git a/deploy-manage/toc.yml b/deploy-manage/toc.yml index b39b08a6ac..2a5cb6146a 100644 --- a/deploy-manage/toc.yml +++ b/deploy-manage/toc.yml @@ -513,14 +513,13 @@ toc: - file: security/same-ca.md - file: security/different-ca.md - file: security/supported-ssltls-versions-by-jdk-version.md + - file: security/enabling-cipher-suites-for-stronger-encryption.md - file: security/data-security.md children: - - file: security/encrypt-deployment.md - children: - - file: security/encrypt-deployment-with-customer-managed-encryption-key.md - - file: security/enabling-cipher-suites-for-stronger-encryption.md + - file: security/encrypt-deployment-with-customer-managed-encryption-key.md - file: security/secure-settings.md - file: security/secure-saved-objects.md + - file: security/kibana-session-management.md - file: security/logging-configuration/security-event-audit-logging.md children: - file: security/logging-configuration/enabling-audit-logs.md @@ -530,7 +529,6 @@ toc: - file: security/logging-configuration/logfile-audit-output.md - file: security/logging-configuration/auditing-search-queries.md - file: security/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md - - file: security/kibana-session-management.md - file: 
security/fips-140-2.md - file: security/secure-clients-integrations.md children: diff --git a/deploy-manage/users-roles.md b/deploy-manage/users-roles.md index 3bac1bef76..d5e1d04873 100644 --- a/deploy-manage/users-roles.md +++ b/deploy-manage/users-roles.md @@ -21,7 +21,7 @@ The methods that you use to authenticate users and control access depends on the Preventing unauthorized access is only one element of a complete security strategy. To secure your Elastic environment, you can also do the following: * Restrict the nodes and clients that can connect to the cluster using [traffic filters](/deploy-manage/security/traffic-filtering.md). -* Take steps to maintain your data integrity and confidentiality by [encrypting HTTP and inter-node communications](/deploy-manage/security/secure-cluster-communications.md), as well as [encrypting your data at rest](/deploy-manage/security/encrypt-deployment.md). +* Take steps to maintain your data integrity and confidentiality by [encrypting HTTP and inter-node communications](/deploy-manage/security/secure-cluster-communications.md), as well as [encrypting your data at rest](/deploy-manage/security/data-security.md). * Maintain an [audit trail](/deploy-manage/security/logging-configuration/security-event-audit-logging.md) for security-related events. * Control access to dashboards and other saved objects in your UI using [{{kib}} spaces](/deploy-manage/manage-spaces.md). * Connect your cluster to a [remote cluster](/deploy-manage/remote-clusters.md) to enable cross-cluster replication and search. 
diff --git a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-es-secure-settings.md b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-es-secure-settings.md deleted file mode 100644 index 4ab7b16999..0000000000 --- a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-es-secure-settings.md +++ /dev/null @@ -1,123 +0,0 @@ -# Secure settings [k8s-es-secure-settings] - -You can specify [secure settings](/deploy-manage/security/secure-settings.md) with [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/). The secrets should contain a key-value pair for each secure setting you want to add. ECK automatically injects these settings into the keystore on each Elasticsearch node before it starts Elasticsearch. The ECK operator continues to watch the secrets for changes and will update the Elasticsearch keystore when it detects a change. - -## Basic usage [k8s_basic_usage] - -It is possible to reference several secrets: - -```yaml -spec: - secureSettings: - - secretName: one-secure-settings-secret - - secretName: two-secure-settings-secret -``` - -For the following secret, a `gcs.client.default.credentials_file` key will be created in Elasticsearch’s keystore with the provided value: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: one-secure-settings-secret -type: Opaque -stringData: - gcs.client.default.credentials_file: | - { - "type": "service_account", - "project_id": "your-project-id", - "private_key_id": "...", - "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", - "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", - "client_id": "...", - "auth_uri": "https://accounts.google.com/o/oauth2/auth", - "token_uri": "https://accounts.google.com/o/oauth2/token", - "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", - "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" - } 
-``` - -::::{tip} -Note that by default [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/) are expecting the value to be base64 encoded unless under a `stringData` field. -:::: - - - -## Projection of secret keys to specific paths [k8s_projection_of_secret_keys_to_specific_paths] - -You can export a subset of secret keys and also project keys to specific paths using the `entries`, `key` and `path` fields: - -```yaml -spec: - secureSettings: - - secretName: gcs-secure-settings - entries: - - key: gcs.client.default.credentials_file - - key: gcs_client_1 - path: gcs.client.client_1.credentials_file - - key: gcs_client_2 - path: gcs.client.client_2.credentials_file -``` - -For the three entries listed in the `gcs-secure-settings` secret, three keys are created in Elasticsearch’s keystore: - -* `gcs.client.default.credentials_file` -* `gcs.client.client_1.credentials_file` -* `gcs.client.client_2.credentials_file` - -The referenced `gcs-secure-settings` secret now looks like this: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: gcs-secure-settings -type: Opaque -stringData: - gcs.client.default.credentials_file: | - { - "type": "service_account", - "project_id": "project-id-to-be-used-for-default-client", - "private_key_id": "private key ID for default-client", - "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", - "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", - "client_id": "client ID for the default client", - "auth_uri": "https://accounts.google.com/o/oauth2/auth", - "token_uri": "https://accounts.google.com/o/oauth2/token", - "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", - "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" - } - gcs_client_1: | - { - "type": "service_account", - "project_id": "project-id-to-be-used-for-gcs_client_1", - 
"private_key_id": "private key ID for gcs_client_1", - "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", - "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", - "client_id": "client ID for the gcs_client_1 client", - "auth_uri": "https://accounts.google.com/o/oauth2/auth", - "token_uri": "https://accounts.google.com/o/oauth2/token", - "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", - "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" - } - gcs_client_2: | - { - "type": "service_account", - "project_id": "project-id-to-be-used-for-gcs_client_2", - "private_key_id": "private key ID for gcs_client_2", - "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n", - "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com", - "client_id": "client ID for the gcs_client_2 client", - "auth_uri": "https://accounts.google.com/o/oauth2/auth", - "token_uri": "https://accounts.google.com/o/oauth2/token", - "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", - "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com" - } -``` - - -## More examples [k8s_more_examples] - -Check [How to create automated snapshots](../../../deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md) for an example use case that illustrates how secure settings can be used to set up automated Elasticsearch snapshots to a GCS storage bucket. 
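The `entries`/`key`/`path` projection described in the deleted page above can be modeled in a few lines (an illustrative sketch of the documented behavior, not ECK's actual source):

```python
def project_secure_settings(secret_data, entries=None):
    """Map a secret's keys into keystore entries, per the rules above: with no
    `entries`, every key is copied as-is; with `entries`, only the listed keys
    are kept, renamed to `path` when one is given."""
    if not entries:
        return dict(secret_data)
    return {e.get("path", e["key"]): secret_data[e["key"]] for e in entries}

secret_data = {
    "gcs.client.default.credentials_file": "<default client credentials JSON>",
    "gcs_client_1": "<client 1 credentials JSON>",
    "gcs_client_2": "<client 2 credentials JSON>",
}
entries = [
    {"key": "gcs.client.default.credentials_file"},
    {"key": "gcs_client_1", "path": "gcs.client.client_1.credentials_file"},
    {"key": "gcs_client_2", "path": "gcs.client.client_2.credentials_file"},
]
keystore = project_secure_settings(secret_data, entries)
# keystore now holds the three keys listed in the text above
```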
- - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md b/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md deleted file mode 100644 index 1297b1ebff..0000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md +++ /dev/null @@ -1,46 +0,0 @@ -# Secure your settings [ece-configuring-keystore] - -Some of the settings that you configure in Elastic Cloud Enterprise are sensitive, such as passwords, and relying on file system permissions to protect these settings is insufficient. To protect your sensitive settings, use the Elasticsearch keystore. With the Elasticsearch keystore, you can add a key and its secret value, then use the key in place of the secret value when you configure your sensitive settings. - -There are three types of secrets that you can use: - -* **Single string** - Associate a secret value to a setting. -* **Multiple strings** - Associate multiple keys to multiple secret values. -* **JSON block/file** - Associate multiple keys to multiple secret values in JSON format. - - -## Add secret values [ece-add-secret-values] - -Add keys and secret values to the keystore. - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. From your deployment menu, select **Security**. -4. Locate **Elasticsearch keystore** and select **Add settings**. -5. On the **Create setting** window, select the secret **Type**. -6. Configure the settings, then select **Save**. -7. All the modifications to the non-reloadable keystore take effect only after restarting Elasticsearch. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. 
- -::::{important} -Only some settings are designed to be read from the keystore. However, the keystore has no validation to block unsupported settings. Adding unsupported settings to the keystore causes [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) to fail and if not addressed, Elasticsearch will fail to start. To check whether a setting is supported in the keystore, look for a "Secure" qualifier in the [setting reference](../../../deploy-manage/security/secure-settings.md). -:::: - - - -## Delete secret values [ece-delete-keystore] - -When your keys and secret values are no longer needed, delete them from the keystore. - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. From your deployment menu, select **Security**. -4. From the **Existing keystores** list, use the delete icon next to the **Setting Name** that you want to delete. -5. On the **Confirm to delete** window, select **Confirm**. -6. All modifications to the non-reloadable keystore take effect only after restarting Elasticsearch. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. 
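The same secrets can also be managed through the ECE RESTful API instead of the UI. The request body pairs each setting name with its secret value; a small helper can sketch that body shape (the helper itself is hypothetical — only the shape follows the keystore API examples removed elsewhere in this diff, where `as_file` marks values to be exposed as files):

```python
import json

def keystore_patch_body(add=None, remove=None):
    """Build a keystore update body: keys being added carry a value (with an
    `as_file` flag), keys being removed map to an empty object."""
    secrets = {}
    for key, value in (add or {}).items():
        secrets[key] = {"as_file": False, "value": value}
    for key in remove or ():
        secrets[key] = {}
    return json.dumps({"secrets": secrets})
```

The result would be passed as the request body of a `PATCH` to the deployment's keystore endpoint.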
- diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md b/raw-migrated-files/cloud/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md deleted file mode 100644 index f92b442ab1..0000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md +++ /dev/null @@ -1,102 +0,0 @@ -# Secure your settings [ece-restful-api-examples-configuring-keystore] - -Some of the settings that you configure in Elastic Cloud Enterprise are sensitive, and relying on file system permissions to protect these settings is insufficient. To protect your sensitive settings, such as passwords, you can use the Elasticsearch keystore. - - -## Before you begin [ece_before_you_begin_28] - -To configure the keystore, you must meet the minimum criteria: - -* To access the RESTful API for Elastic Cloud Enterprise, you must use your Elastic Cloud Enterprise credentials. - -To learn more about the Elasticsearch keystore, refer to the [Elasticsearch documentation](/deploy-manage/security/secure-settings.md). 
-
-
-## Steps [ece_steps_9]
-
-Create the keystore:
-
-```sh
-curl -k -X PATCH -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/deployments/$DEPLOYMENT_ID/elasticsearch/$REF_ID/keystore \
-{
-  "secrets": {
-    "s3.client.CLIENT_NAME.access_key": {
-      "as_file": false,
-      "value": "ACCESS_KEY_VALUE"
-    },
-    "s3.client.CLIENT_NAME.secret_key": {
-      "value": "SECRET_KEY_VALUE"
-    }
-  }
-}
-```
-
-`ELASTICSEARCH_CLUSTER_ID`
-:   The Elasticsearch cluster ID as shown in the Cloud UI or obtained through the API.
-
-List the keys defined in the keystore:
-
-```sh
-{
-  "secrets": {
-    "s3.client.CLIENT_NAME.access_key": {
-      "as_file": false
-    },
-    "s3.client.CLIENT_NAME.secret_key": {
-      "as_file": false
-    }
-  }
-}
-```
-
-Create the credentials for an S3 or Minio repository:
-
-```sh
-curl -k -X PUT -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/clusters/elasticsearch/$ELASTICSEARCH_CLUSTER_ID/_snapshot/s3-repo
-{
-  "type": "s3",
-  "settings": {
-    "bucket": "s3_REPOSITORY_NAME",
-    "client": "CLIENT_NAME",
-    "base_path": "PATH_NAME"
-  }
-}
-```
-
-Create the credentials for a GCS repository:
-
-```sh
-curl -k -X PUT -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/clusters/elasticsearch/$ELASTICSEARCH_CLUSTER_ID/_snapshot/s3-repo
-{
-  "type": "gcs",
-  "settings": {
-    "bucket": "BUCKET_NAME",
-    "base_path": "BASE_PATH_NAME",
-    "client": "CLIENT_NAME"
-  }
-}
-```
-
-::::{tip}
-To use GCS snapshots, the cluster must have the `repository-gcs` plugin enabled.
-:::: - - -Remove keys that are defined in the keystore: - -```sh -curl -k -X PATCH -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/deployments/$DEPLOYMENT_ID/elasticsearch/$REF_ID/keystore \ -{ - "secrets": { - "KEY_TO_REMOVE": {} - } -} -``` - - -## Verify your credentials [ece_verify_your_credentials] - -If your credentials are invalid, an administrator can verify that they are correct by checking the `keystore` field in the cluster metadata. - -If the credential values are correct, but do not work, the keystore file could be out of sync on one or more nodes. To sync the keystore file, update the value for the key by using the patch API to delete the key from keystore, then add it back again. - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md b/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md deleted file mode 100644 index d1c979f760..0000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md +++ /dev/null @@ -1,46 +0,0 @@ -# Secure your settings [ech-configuring-keystore] - -Some of the settings that you configure in Elasticsearch Add-On for Heroku are sensitive, such as passwords, and relying on file system permissions to protect these settings is insufficient. To protect your sensitive settings, use the Elasticsearch keystore. With the Elasticsearch keystore, you can add a key and its secret value, then use the key in place of the secret value when you configure your sensitive settings. - -There are three types of secrets that you can use: - -* **Single string** - Associate a secret value to a setting. -* **Multiple strings** - Associate multiple keys to multiple secret values. -* **JSON block/file** - Associate multiple keys to multiple secret values in JSON format. - - -## Add secret values [ech-add-secret-values] - -Add keys and secret values to the keystore. - -1. 
Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, select **Security**. -4. Locate **Elasticsearch keystore** and select **Add settings**. -5. On the **Create setting** window, select the secret **Type**. -6. Configure the settings, then select **Save**. -7. All the modifications to the non-reloadable keystore take effect only after restarting Elasticsearch. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. - -::::{important} -Only some settings are designed to be read from the keystore. However, the keystore has no validation to block unsupported settings. Adding unsupported settings to the keystore causes [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) to fail and if not addressed, Elasticsearch will fail to start. To check whether a setting is supported in the keystore, look for a "Secure" qualifier in the [setting reference](../../../deploy-manage/security/secure-settings.md). -:::: - - - -## Delete secret values [ech-delete-keystore] - -When your keys and secret values are no longer needed, delete them from the keystore. - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, select **Security**. 
-4. From the **Existing keystores** list, use the delete icon next to the **Setting Name** that you want to delete. -5. On the **Confirm to delete** window, select **Confirm**. -6. All modifications to the non-reloadable keystore take effect only after restarting Elasticsearch. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. - diff --git a/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md b/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md deleted file mode 100644 index 7c267874de..0000000000 --- a/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md +++ /dev/null @@ -1,46 +0,0 @@ -# Secure your settings [ec-configuring-keystore] - -Some of the settings that you configure in {{ech}} are sensitive, such as passwords, and relying on file system permissions to protect these settings is insufficient. To protect your sensitive settings, use the Elasticsearch keystore. With the Elasticsearch keystore, you can add a key and its secret value, then use the key in place of the secret value when you configure your sensitive settings. - -There are three types of secrets that you can use: - -* **Single string** - Associate a secret value to a setting. -* **Multiple strings** - Associate multiple keys to multiple secret values. -* **JSON block/file** - Associate multiple keys to multiple secret values in JSON format. - - -## Add secret values [ec-add-secret-values] - -Add keys and secret values to the keystore. - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 
- - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, select **Security**. -4. Locate **Elasticsearch keystore** and select **Add settings**. -5. On the **Create setting** window, select the secret **Type**. -6. Configure the settings, then select **Save**. -7. All the modifications to the non-reloadable keystore take effect only after restarting Elasticsearch. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. - -::::{important} -Only some settings are designed to be read from the keystore. However, the keystore has no validation to block unsupported settings. Adding unsupported settings to the keystore causes [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) to fail and if not addressed, Elasticsearch will fail to start. To check whether a setting is supported in the keystore, look for a "Secure" qualifier in the [setting reference](../../../deploy-manage/security/secure-settings.md). -:::: - - - -## Delete secret values [ec-delete-keystore] - -When your keys and secret values are no longer needed, delete them from the keystore. - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. 
From your deployment menu, select **Security**. -4. From the **Existing keystores** list, use the delete icon next to the **Setting Name** that you want to delete. -5. On the **Confirm to delete** window, select **Confirm**. -6. All modifications to the non-reloadable keystore take effect only after restarting Elasticsearch. Reloadable keystore changes take effect after issuing a [reload_secure_settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API request. - diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md deleted file mode 100644 index 58ab529f1e..0000000000 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md +++ /dev/null @@ -1,159 +0,0 @@ -# FIPS 140-2 [fips-140-compliance] - -The Federal Information Processing Standard (FIPS) Publication 140-2, (FIPS PUB 140-2), titled "Security Requirements for Cryptographic Modules" is a U.S. government computer security standard used to approve cryptographic modules. {{es}} offers a FIPS 140-2 compliant mode and as such can run in a FIPS 140-2 configured JVM. - -::::{important} -The JVM bundled with {{es}} is not configured for FIPS 140-2. You must configure an external JDK with a FIPS 140-2 certified Java Security Provider. Refer to the {{es}} [JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm) for supported JVM configurations. See [subscriptions](https://www.elastic.co/subscriptions) for required licensing. -:::: - - -Compliance with FIPS 140-2 requires using only FIPS approved / NIST recommended cryptographic algorithms. Generally this can be done by the following: - -* Installation and configuration of a FIPS certified Java security provider. -* Ensuring the configuration of {{es}} is FIPS 140-2 compliant as documented below. 
-* Setting `xpack.security.fips_mode.enabled` to `true` in `elasticsearch.yml`. Note - this setting alone is not sufficient to be compliant with FIPS 140-2. - - -## Configuring {{es}} for FIPS 140-2 [_configuring_es_for_fips_140_2] - -Detailed instructions for the configuration required for FIPS 140-2 compliance is beyond the scope of this document. It is the responsibility of the user to ensure compliance with FIPS 140-2. {{es}} has been tested with a specific configuration described below. However, there are other configurations possible to achieve compliance. - -The following is a high-level overview of the required configuration: - -* Use an externally installed Java installation. The JVM bundled with {{es}} is not configured for FIPS 140-2. -* Install a FIPS certified security provider .jar file(s) in {{es}}'s `lib` directory. -* Configure Java to use a FIPS certified security provider ([see below](../../../deploy-manage/security/fips-140-2.md#java-security-provider)). -* Configure {{es}}'s security manager to allow use of the FIPS certified provider ([see below](../../../deploy-manage/security/fips-140-2.md#java-security-manager)). -* Ensure the keystore and truststore are configured correctly ([see below](../../../deploy-manage/security/fips-140-2.md#keystore-fips-password)). -* Ensure the TLS settings are configured correctly ([see below](../../../deploy-manage/security/fips-140-2.md#fips-tls)). -* Ensure the password hashing settings are configured correctly ([see below](../../../deploy-manage/security/fips-140-2.md#fips-stored-password-hashing)). -* Ensure the cached password hashing settings are configured correctly ([see below](../../../deploy-manage/security/fips-140-2.md#fips-cached-password-hashing)). -* Configure `elasticsearch.yml` to use FIPS 140-2 mode, see ([below](../../../deploy-manage/security/fips-140-2.md#configuring-es-yml)). 
-* Verify the security provider is installed and configured correctly ([see below](../../../deploy-manage/security/fips-140-2.md#verify-security-provider)). -* Review the upgrade considerations ([see below](../../../deploy-manage/security/fips-140-2.md#fips-upgrade-considerations)) and limitations ([see below](../../../deploy-manage/security/fips-140-2.md#fips-limitations)). - - -### Java security provider [java-security-provider] - -Detailed instructions for installation and configuration of a FIPS certified Java security provider is beyond the scope of this document. Specifically, a FIPS certified [JCA](https://docs.oracle.com/en/java/javase/17/security/java-cryptography-architecture-jca-reference-guide.html) and [JSSE](https://docs.oracle.com/en/java/javase/17/security/java-secure-socket-extension-jsse-reference-guide.html) implementation is required so that the JVM uses FIPS validated implementations of NIST recommended cryptographic algorithms. - -Elasticsearch has been tested with Bouncy Castle’s [bc-fips 1.0.2.5](https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2.5/bc-fips-1.0.2.5.jar) and [bctls-fips 1.0.19](https://repo1.maven.org/maven2/org/bouncycastle/bctls-fips/1.0.19/bctls-fips-1.0.19.jar). Please refer to the {{es}} [JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm) for details on which combinations of JVM and security provider are supported in FIPS mode. Elasticsearch does not ship with a FIPS certified provider. It is the responsibility of the user to install and configure the security provider to ensure compliance with FIPS 140-2. Using a FIPS certified provider will ensure that only approved cryptographic algorithms are used. 
-
-To configure {{es}} to use additional security provider(s) configure {{es}}'s [JVM property](elasticsearch://reference/elasticsearch/jvm-settings.md#set-jvm-options) `java.security.properties` to point to a file ([example](https://raw.githubusercontent.com/elastic/elasticsearch/main/build-tools-internal/src/main/resources/fips_java.security)) in {{es}}'s `config` directory. Ensure the FIPS certified security provider is configured with the lowest order. This file should contain the necessary configuration to instruct Java to use the FIPS certified security provider.
-
-
-### Java security manager [java-security-manager]
-
-All code running in {{es}} is subject to the security restrictions enforced by the Java security manager. The security provider you have installed and configured may require additional permissions in order to function correctly. You can grant these permissions by providing your own [Java security policy](https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html#FileSyntax).
-
-To configure {{es}}'s security manager, configure the JVM property `java.security.policy` to point to a file ([example](https://raw.githubusercontent.com/elastic/elasticsearch/main/build-tools-internal/src/main/resources/fips_java.policy)) in {{es}}'s `config` directory with the desired permissions. This file should contain the necessary configuration for the Java security manager to grant the required permissions needed by the security provider.
-
-
-### {{es}} Keystore [keystore-fips-password]
-
-FIPS 140-2 (via NIST Special Publication 800-132) dictates that encryption keys should at least have an effective strength of 112 bits. As such, the {{es}} keystore that stores the node’s [secure settings](../../../deploy-manage/security/secure-settings.md) needs to be password protected with a password that satisfies this requirement.
This means that the password needs to be 14 bytes long which is equivalent to a 14 character ASCII encoded password, or a 7 character UTF-8 encoded password. You can use the [elasticsearch-keystore passwd](elasticsearch://reference/elasticsearch/command-line-tools/elasticsearch-keystore.md) subcommand to change or set the password of an existing keystore. Note that when the keystore is password-protected, you must supply the password each time Elasticsearch starts. - - -### TLS [fips-tls] - -SSLv2 and SSLv3 are not allowed by FIPS 140-2, so `SSLv2Hello` and `SSLv3` cannot be used for [`ssl.supported_protocols`](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings). - -::::{note} -The use of TLS ciphers is mainly governed by the relevant crypto module (the FIPS Approved Security Provider that your JVM uses). All the ciphers that are configured by default in {{es}} are FIPS 140-2 compliant and as such can be used in a FIPS 140-2 JVM. See [`ssl.cipher_suites`](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings). -:::: - - - -### TLS keystores and keys [_tls_keystores_and_keys] - -Keystores can be used in a number of [General TLS settings](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings) in order to conveniently store key and trust material. Neither `JKS`, nor `PKCS#12` keystores can be used in a FIPS 140-2 configured JVM. Avoid using these types of keystores. Your FIPS 140-2 provider may provide a compliant keystore implementation that can be used, or you can use PEM encoded files. To use PEM encoded key material, you can use the relevant `\*.key` and `*.certificate` configuration options, and for trust material you can use `*.certificate_authorities`. - -FIPS 140-2 compliance dictates that the length of the public keys used for TLS must correspond to the strength of the symmetric key algorithm in use in TLS. 
Depending on the value of `ssl.cipher_suites` that you select to use, the TLS keys must have a corresponding length according to the following table: - -$$$comparable-key-strength$$$ - -| Symmetric Key Algorithm | RSA key length | ECC key length | | --- | --- | --- | | `3DES` | 2048 | 224-255 | | `AES-128` | 3072 | 256-383 | | `AES-256` | 15360 | 512+ | - - -### Stored password hashing [_stored_password_hashing] - -$$$fips-stored-password-hashing$$$ -While {{es}} offers a number of algorithms for securely hashing credentials on disk, only the `PBKDF2` based family of algorithms is compliant with FIPS 140-2 for stored password hashing. However, since `PBKDF2` is essentially a key derivation function, your JVM security provider may enforce a [112-bit key strength requirement](../../../deploy-manage/security/fips-140-2.md#keystore-fips-password). Although FIPS 140-2 does not mandate user password standards, this requirement may affect password hashing in {{es}}. To comply with this requirement, while allowing you to use passwords that satisfy your security policy, {{es}} offers [pbkdf2_stretch](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings), which is the suggested hashing algorithm when running {{es}} in FIPS 140-2 environments. `pbkdf2_stretch` performs a single round of SHA-512 on the user password before passing it to the `PBKDF2` implementation. - -::::{note} -You can still use one of the plain `pbkdf2` options instead of `pbkdf2_stretch` if you have external policies and tools that can ensure all user passwords for the reserved, native, and file realms are longer than 14 bytes. -:::: - - -You must set the `xpack.security.authc.password_hashing.algorithm` setting to one of the available `pbkdf2_stretch_*` values. When FIPS 140-2 mode is enabled, the default value for `xpack.security.authc.password_hashing.algorithm` is `pbkdf2_stretch`.
See [User cache and password hash algorithms](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings). - -Password hashing configuration changes are not retroactive, so the stored hashed credentials of existing users of the reserved, native, and file realms are not updated on disk. To ensure FIPS 140-2 compliance, recreate users or change their password using the [elasticsearch-user](elasticsearch://reference/elasticsearch/command-line-tools/users-command.md) CLI tool for the file realm and the [create users](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-user) and [change password](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-change-password) APIs for the native and reserved realms. Other types of realms are not affected and do not require any changes. - - -### Cached password hashing [_cached_password_hashing] - -$$$fips-cached-password-hashing$$$ -`ssha256` (salted `sha256`) is recommended for cache hashing. Though `PBKDF2` is compliant with FIPS 140-2, it is, by design, slow, and thus not generally suitable as a cache hashing algorithm. Cached credentials are never stored on disk, and salted `sha256` provides an adequate level of security for in-memory credential hashing, without imposing prohibitive performance overhead. You *may* use `PBKDF2`; however, you should carefully assess the performance impact first. Depending on your deployment, the overhead of `PBKDF2` could undo most of the performance gain of using a cache. - -Either set all `cache.hash_algo` settings to `ssha256` or leave them undefined, since `ssha256` is the default value for all `cache.hash_algo` settings. See [User cache and password hash algorithms](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#hashing-settings).
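As a minimal sketch, pinning the cache hashing algorithm explicitly for one realm could look like the following `elasticsearch.yml` fragment; the realm name `native1` is an illustrative placeholder, and omitting the setting entirely has the same effect since `ssha256` is the default:

```yaml
# Illustrative fragment; "native1" is a placeholder realm name.
# ssha256 is already the default for all cache.hash_algo settings.
xpack.security.authc.realms.native.native1.cache.hash_algo: ssha256
```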
- -The user cache will be emptied upon node restart, so any existing hashes using non-compliant algorithms will be discarded and the new ones will be created using the algorithm you have selected. - - -### Configure {{es}} elasticsearch.yml [configuring-es-yml] - -* Set `xpack.security.fips_mode.enabled` to `true` in `elasticsearch.yml`. This setting ensures that some internal configuration is FIPS 140-2 compliant and provides some additional verification. -* Set `xpack.security.autoconfiguration.enabled` to `false`. This disables the automatic configuration of the security settings. Users must ensure that the security settings are configured correctly for FIPS 140-2 compliance. This is only applicable for new installations. -* Set `xpack.security.authc.password_hashing.algorithm` appropriately. See [above](../../../deploy-manage/security/fips-140-2.md#fips-stored-password-hashing). -* Configure other relevant security settings, for example TLS for the transport and HTTP interfaces (not explicitly covered here or in the example below). -* Optional: Set `xpack.security.fips_mode.required_providers` in `elasticsearch.yml` to enforce the required security providers (8.13+). See [below](../../../deploy-manage/security/fips-140-2.md#verify-security-provider). - -```yaml -xpack.security.fips_mode.enabled: true -xpack.security.autoconfiguration.enabled: false -xpack.security.fips_mode.required_providers: ["BCFIPS", "BCJSSE"] -xpack.security.authc.password_hashing.algorithm: "pbkdf2_stretch" -``` - - -### Verify the security provider is installed [verify-security-provider] - -To verify that the security provider is installed and in use, you can use any of the following steps: - -* Verify the required security providers are configured with the lowest order in the file pointed to by `java.security.properties`.
For example, `security.provider.1` has a lower order than `security.provider.2`. -* Set `xpack.security.fips_mode.required_providers` in `elasticsearch.yml` to the list of required security providers. This setting ensures that the correct security providers are installed and configured (8.13+). If a required security provider is not installed correctly, {{es}} will fail to start. `["BCFIPS", "BCJSSE"]` are the values to use for Bouncy Castle’s FIPS JCE and JSSE certified provider. - - -## Upgrade considerations [fips-upgrade-considerations] - -{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.html)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation, which is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note that {{es}} does not ship with a FIPS certified security provider; you must install and configure one explicitly. - -Alternatively, consider using {{ech}} in the [FedRAMP-certified GovCloud region](https://www.elastic.co/industries/public-sector/fedramp). - -::::{important} -Some encryption algorithms may no longer be available by default in updated FIPS 140-2 security providers. Notably, Triple DES and PKCS1.5 RSA are now discouraged, and [Bouncy Castle](https://www.bouncycastle.org/fips-java) now requires explicit configuration to continue using these algorithms. - -:::: - - -If you plan to upgrade your existing cluster to a version that can be run in a FIPS 140-2 configured JVM, we recommend first performing a rolling upgrade to the new version in your existing JVM and performing all necessary configuration changes in preparation for running in FIPS 140-2 mode. You can then perform a rolling restart of the nodes, starting each node in a FIPS 140-2 JVM.
During the restart, {{es}}: - -* Upgrades [secure settings](../../../deploy-manage/security/secure-settings.md) to the latest, compliant format. A FIPS 140-2 JVM cannot load previous format versions. If your keystore is not password-protected, you must manually set a password. See [{{es}} Keystore](../../../deploy-manage/security/fips-140-2.md#keystore-fips-password). -* Upgrades self-generated trial licenses to the latest FIPS 140-2 compliant format. - -If your [subscription](https://www.elastic.co/subscriptions) already supports FIPS 140-2 mode, you can elect to perform a rolling upgrade while at the same time running each upgraded node in a FIPS 140-2 JVM. In this case, you would also need to manually regenerate your `elasticsearch.keystore` and migrate all secure settings to it, in addition to the necessary configuration changes outlined above, before starting each node. - - -## Limitations [fips-limitations] - -Due to the limitations that FIPS 140-2 compliance enforces, a small number of features are not available while running in FIPS 140-2 mode. The list is as follows: - -* Azure Classic Discovery Plugin -* The [`elasticsearch-certutil`](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) tool. However, `elasticsearch-certutil` can be used in a non FIPS 140-2 configured JVM (by pointing the `ES_JAVA_HOME` environment variable to a different Java installation) in order to generate the keys and certificates that can later be used in the FIPS 140-2 configured JVM. -* The SQL CLI client cannot run in a FIPS 140-2 configured JVM while using TLS for transport security or PKI for client authentication.
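The `elasticsearch-certutil` limitation above can be worked around by running the tool under a separate, non-FIPS JVM for just that one command. A sketch, assuming an illustrative JDK path:

```sh
# Illustrative path; point ES_JAVA_HOME at any non-FIPS JDK 17+ installation
# for this command only, then run Elasticsearch itself under the FIPS JVM.
ES_JAVA_HOME=/opt/jdk-17-nonfips bin/elasticsearch-certutil ca --pem
```

The resulting PEM-encoded key material can then be referenced from the `*.key`, `*.certificate`, and `*.certificate_authorities` settings discussed earlier.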
- diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-settings.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-settings.md deleted file mode 100644 index 7b821533c1..0000000000 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/secure-settings.md +++ /dev/null @@ -1,49 +0,0 @@ -# Secure settings [secure-settings] - -Some settings are sensitive, and relying on filesystem permissions to protect their values is not sufficient. For this use case, {{es}} provides a keystore and the [`elasticsearch-keystore` tool](elasticsearch://reference/elasticsearch/command-line-tools/elasticsearch-keystore.md) to manage the settings in the keystore. - -::::{important} -Only some settings are designed to be read from the keystore. Adding unsupported settings to the keystore causes the validation in the `_nodes/reload_secure_settings` API to fail and if not addressed, will cause {{es}} to fail to start. To see whether a setting is supported in the keystore, look for a "Secure" qualifier in the setting reference. -:::: - - -All the modifications to the keystore take effect only after restarting {{es}}. - -These settings, just like the regular ones in the `elasticsearch.yml` config file, need to be specified on each node in the cluster. Currently, all secure settings are node-specific settings that must have the same value on every node. - - -## Reloadable secure settings [reloadable-secure-settings] - -Just like the settings values in `elasticsearch.yml`, changes to the keystore contents are not automatically applied to the running {{es}} node. Re-reading settings requires a node restart. However, certain secure settings are marked as **reloadable**. Such settings can be re-read and applied on a running node. 
You can define these settings before the node is started, or call the [Nodes reload secure settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) after the settings are defined to apply them to a running node. - -The values of all secure settings, **reloadable** or not, must be identical across all cluster nodes. After making the desired secure settings changes using the `bin/elasticsearch-keystore add` command, call: - -```console -POST _nodes/reload_secure_settings -{ - "secure_settings_password": "keystore-password" <1> -} -``` - -1. The password that the {{es}} keystore is encrypted with. - - -This API decrypts and re-reads the entire keystore, and validates all settings on every cluster node, but only the **reloadable** secure settings are applied. Changes to other settings do not go into effect until the next restart. Once the call returns, the reload has been completed, meaning that all internal data structures dependent on these settings have been changed. Everything should look as if the settings had the new value from the start. - -When changing multiple **reloadable** secure settings, modify all of them on each cluster node, then issue a [`reload_secure_settings`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) call instead of reloading after each modification.
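The workflow above can be sketched for one node as follows; the S3 credential setting names are examples of reloadable secure settings, and the values and password are placeholders:

```sh
# Update every reloadable setting on every node first...
echo "new-access-key" | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
echo "new-secret-key" | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key

# ...then trigger a single cluster-wide reload.
curl -X POST "localhost:9200/_nodes/reload_secure_settings" \
  -H "Content-Type: application/json" \
  -d '{"secure_settings_password": "keystore-password"}'
```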
- -There are reloadable secure settings for: - -* [The Azure repository plugin](../../../deploy-manage/tools/snapshot-and-restore/azure-repository.md) -* [The EC2 discovery plugin](elasticsearch://reference/elasticsearch-plugins/discovery-ec2-usage.md#_configuring_ec2_discovery) -* [The GCS repository plugin](../../../deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md) -* [The S3 repository plugin](../../../deploy-manage/tools/snapshot-and-restore/s3-repository.md) -* [Monitoring settings](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md) -* [{{watcher}} settings](elasticsearch://reference/elasticsearch/configuration-reference/watcher-settings.md) -* [JWT realm](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-jwt-settings) -* [Active Directory realm](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ad-settings) -* [LDAP realm](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings) -* [Remote cluster credentials for the API key based security model](../../../deploy-manage/remote-clusters/remote-clusters-settings.md#remote-cluster-credentials-setting) - diff --git a/raw-migrated-files/kibana/kibana/elasticsearch-mutual-tls.md b/raw-migrated-files/kibana/kibana/elasticsearch-mutual-tls.md index 012b124d1e..8165a70b99 100644 --- a/raw-migrated-files/kibana/kibana/elasticsearch-mutual-tls.md +++ b/raw-migrated-files/kibana/kibana/elasticsearch-mutual-tls.md @@ -93,7 +93,7 @@ If you haven’t already, start {{kib}} and connect it to {{es}} using the [enro elasticsearch.ssl.keystore.path: "/path/to/kibana-client.p12" ``` - If your PKCS#12 file is encrypted, add the decryption password to your [{{kib}} keystore](secure-settings.md): + If your PKCS#12 file is encrypted, add the decryption password to your [{{kib}} keystore](/deploy-manage/security/secure-settings.md): ```yaml 
bin/kibana-keystore add elasticsearch.ssl.keystore.password @@ -112,7 +112,7 @@ If you haven’t already, start {{kib}} and connect it to {{es}} using the [enro elasticsearch.ssl.key: "/path/to/kibana-client.key" ``` - If your private key is encrypted, add the decryption password to your [{{kib}} keystore](secure-settings.md): + If your private key is encrypted, add the decryption password to your [{{kib}} keystore](/deploy-manage/security/secure-settings.md): ```yaml bin/kibana-keystore add elasticsearch.ssl.keyPassphrase diff --git a/raw-migrated-files/kibana/kibana/secure-settings.md b/raw-migrated-files/kibana/kibana/secure-settings.md deleted file mode 100644 index 5facaefea0..0000000000 --- a/raw-migrated-files/kibana/kibana/secure-settings.md +++ /dev/null @@ -1,97 +0,0 @@ -# Secure settings [secure-settings] - -Some settings are sensitive, and relying on filesystem permissions to protect their values is not sufficient. For this use case, Kibana provides a keystore and the `kibana-keystore` tool to manage the settings in the keystore. - -::::{note} -* Run all commands as the user who runs {{kib}}. -* Any valid {{kib}} setting can be stored in the keystore securely. Unsupported, extraneous, or invalid settings will cause {{kib}} to fail to start up. - -:::: - - - -## Create the keystore [creating-keystore] - -To create the `kibana.keystore`, use the `create` command: - -```sh -bin/kibana-keystore create -``` - -The file `kibana.keystore` will be created in the `config` directory defined by the environment variable `KBN_PATH_CONF`. - -To create a password-protected keystore, use the `--password` flag. - - -## List settings in the keystore [list-settings] - -A list of the settings in the keystore is available with the `list` command: - -```sh -bin/kibana-keystore list -``` - - -## Add string settings [add-string-to-keystore] - -::::{note} -Your input will be JSON-parsed to allow for object/array input configurations.
To enforce string values, use "double quotes" around your input. -:::: - - -Sensitive string settings, like authentication credentials for Elasticsearch, can be added using the `add` command: - -```sh -bin/kibana-keystore add the.setting.name.to.set -``` - - Once added to the keystore, these settings will be automatically applied to this instance of Kibana when it starts. For example, if you run - -```sh -bin/kibana-keystore add elasticsearch.username -``` - -you will be prompted to provide the value for `elasticsearch.username`. (Your input will show as asterisks.) - -The tool will prompt for the value of the setting. To pass the value through stdin, use the `--stdin` flag: - -```sh -cat /file/containing/setting/value | bin/kibana-keystore add the.setting.name.to.set --stdin -``` - - -## Remove settings [remove-settings] - -To remove a setting from the keystore, use the `remove` command: - -```sh -bin/kibana-keystore remove the.setting.name.to.remove -``` - - -## Read settings [read-settings] - -To display the configured setting values, use the `show` command: - -```sh -bin/kibana-keystore show setting.key -``` - - -## Change password [change-password] - -To change the password of the keystore, use the `passwd` command: - -```sh -bin/kibana-keystore passwd -``` - - -## Has password [has-password] - -To check if the keystore is password protected, use the `has-passwd` command. An exit code of 0 will be returned if the keystore is password protected, and the command will fail otherwise.
- -```sh -bin/kibana-keystore has-passwd -``` - diff --git a/raw-migrated-files/kibana/kibana/xpack-security-fips-140-2.md b/raw-migrated-files/kibana/kibana/xpack-security-fips-140-2.md deleted file mode 100644 index 5cd03e7ecd..0000000000 --- a/raw-migrated-files/kibana/kibana/xpack-security-fips-140-2.md +++ /dev/null @@ -1,44 +0,0 @@ -# FIPS 140-2 [xpack-security-fips-140-2] - -The Federal Information Processing Standard (FIPS) Publication 140-2, (FIPS PUB 140-2), titled "Security Requirements for Cryptographic Modules" is a U.S. government computer security standard used to approve cryptographic modules. - -{{kib}} offers a FIPS 140-2 compliant mode and as such can run in a Node.js environment configured with a FIPS 140-2 compliant OpenSSL3 provider. - -To run {{kib}} in FIPS mode, you must have the appropriate [subscription](https://www.elastic.co/subscriptions). - -::::{important} -The Node bundled with {{kib}} is not configured for FIPS 140-2. You must configure a FIPS 140-2 compliant OpenSSL3 provider. Consult the Node.js documentation to learn how to configure your environment. - -:::: - - -For {{kib}}, adherence to FIPS 140-2 is ensured by: - -* Using FIPS approved / NIST recommended cryptographic algorithms. -* Delegating the implementation of these cryptographic algorithms to a NIST validated cryptographic module (available via Node.js configured with an OpenSSL3 provider). -* Allowing the configuration of {{kib}} in a FIPS 140-2 compliant manner, as documented below. - -## Configuring {{kib}} for FIPS 140-2 [_configuring_kib_for_fips_140_2] - -Apart from setting `xpack.security.fipsMode.enabled` to `true` in your {{kib}} config, a number of security related settings need to be reviewed and configured in order to run {{kib}} successfully in a FIPS 140-2 compliant Node.js environment. 
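The FIPS mode flag mentioned above can be sketched as a minimal `kibana.yml` fragment; the other security settings discussed in the following sections still need to be reviewed separately:

```yaml
# Enables FIPS 140-2 mode checks in Kibana; requires an appropriate
# subscription and a FIPS-configured Node.js/OpenSSL3 environment.
xpack.security.fipsMode.enabled: true
```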
- -### Kibana keystore [_kibana_keystore] - -FIPS 140-2 (via NIST Special Publication 800-132) dictates that encryption keys should at least have an effective strength of 112 bits. As such, the Kibana keystore that stores the application’s secure settings needs to be password protected with a password that satisfies this requirement. This means that the password needs to be 14 bytes long, which is equivalent to a 14 character ASCII encoded password, or a 7 character UTF-8 encoded password. - -For more information on how to set this password, refer to the [keystore documentation](../../../deploy-manage/security/secure-settings.md#change-password). - - -### TLS keystore and keys [_tls_keystore_and_keys] - -Keystores can be used in a number of General TLS settings in order to conveniently store key and trust material. PKCS#12 keystores cannot be used in a FIPS 140-2 compliant Node.js environment. Avoid using these types of keystores. Your FIPS 140-2 provider may provide a compliant keystore implementation that can be used, or you can use PEM encoded files. To use PEM encoded key material, you can use the relevant `*.key` and `*.certificate` configuration options, and for trust material you can use `*.certificate_authorities`.
- -As an example, avoid PKCS#12 specific settings such as: - -* `server.ssl.keystore.path` -* `server.ssl.truststore.path` -* `elasticsearch.ssl.keystore.path` -* `elasticsearch.ssl.truststore.path` - - - diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 90588a5713..82bac17508 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -14,7 +14,6 @@ toc: - file: cloud-on-k8s/cloud-on-k8s/index.md children: - file: cloud-on-k8s/cloud-on-k8s/k8s-custom-http-certificate.md - - file: cloud-on-k8s/cloud-on-k8s/k8s-es-secure-settings.md - file: cloud-on-k8s/cloud-on-k8s/k8s-securing-stack.md - file: cloud-on-k8s/cloud-on-k8s/k8s-tls-certificates.md - file: cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md @@ -27,7 +26,6 @@ toc: - file: cloud/cloud-enterprise/ece-administering-deployments.md - file: cloud/cloud-enterprise/ece-api-console.md - file: cloud/cloud-enterprise/ece-change-deployment.md - - file: cloud/cloud-enterprise/ece-configuring-keystore.md - file: cloud/cloud-enterprise/ece-create-deployment.md - file: cloud/cloud-enterprise/ece-delete-deployment.md - file: cloud/cloud-enterprise/ece-find.md @@ -39,7 +37,6 @@ toc: - file: cloud/cloud-enterprise/ece-manage-kibana.md - file: cloud/cloud-enterprise/ece-monitoring-deployments.md - file: cloud/cloud-enterprise/ece-password-reset-elastic.md - - file: cloud/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md - file: cloud/cloud-enterprise/ece-restore-across-clusters.md - file: cloud/cloud-enterprise/ece-restore-deployment.md - file: cloud/cloud-enterprise/ece-securing-clusters.md @@ -57,7 +54,6 @@ toc: - file: cloud/cloud-heroku/ech-access-kibana.md - file: cloud/cloud-heroku/ech-activity-page.md - file: cloud/cloud-heroku/ech-add-user-settings.md - - file: cloud/cloud-heroku/ech-configuring-keystore.md - file: cloud/cloud-heroku/ech-custom-repository.md - file: cloud/cloud-heroku/ech-delete-deployment.md - file: 
cloud/cloud-heroku/ech-editing-user-settings.md @@ -86,7 +82,6 @@ toc: - file: cloud/cloud/ec-activity-page.md - file: cloud/cloud/ec-add-user-settings.md - file: cloud/cloud/ec-billing-stop.md - - file: cloud/cloud/ec-configuring-keystore.md - file: cloud/cloud/ec-custom-bundles.md - file: cloud/cloud/ec-custom-repository.md - file: cloud/cloud/ec-delete-deployment.md @@ -158,7 +153,6 @@ toc: - file: elasticsearch/elasticsearch-reference/documents-indices.md - file: elasticsearch/elasticsearch-reference/es-security-principles.md - file: elasticsearch/elasticsearch-reference/esql-using.md - - file: elasticsearch/elasticsearch-reference/fips-140-compliance.md - file: elasticsearch/elasticsearch-reference/how-monitoring-works.md - file: elasticsearch/elasticsearch-reference/index-modules-allocation.md - file: elasticsearch/elasticsearch-reference/index-modules-mapper.md @@ -173,7 +167,6 @@ toc: - file: elasticsearch/elasticsearch-reference/search-with-synonyms.md - file: elasticsearch/elasticsearch-reference/secure-cluster.md - file: elasticsearch/elasticsearch-reference/secure-monitoring.md - - file: elasticsearch/elasticsearch-reference/secure-settings.md - file: elasticsearch/elasticsearch-reference/security-basic-setup-https.md - file: elasticsearch/elasticsearch-reference/security-basic-setup.md - file: elasticsearch/elasticsearch-reference/security-files.md @@ -198,14 +191,12 @@ toc: - file: kibana/kibana/reporting-production-considerations.md - file: kibana/kibana/search-ai-assistant.md - file: kibana/kibana/secure-reporting.md - - file: kibana/kibana/secure-settings.md - file: kibana/kibana/Security-production-considerations.md - file: kibana/kibana/set-time-filter.md - file: kibana/kibana/setup.md - file: kibana/kibana/upgrade-migrations-rolling-back.md - file: kibana/kibana/upgrade.md - file: kibana/kibana/using-kibana-with-security.md - - file: kibana/kibana/xpack-security-fips-140-2.md - file: kibana/kibana/xpack-security.md - file: 
logstash/logstash/index.md children: