diff --git a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md index 6ab3f1e405..dcfe91be57 100644 --- a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md +++ b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md @@ -13,7 +13,7 @@ This section covers the following topics: NodeSets are used to specify the topology of the Elasticsearch cluster. Each NodeSet represents a group of Elasticsearch nodes that share the same Elasticsearch configuration and Kubernetes Pod configuration. ::::{tip} -You can use [YAML anchors](https://yaml.org/spec/1.2/spec.md#id2765878) to declare the configuration change once and reuse it across all the node sets. +You can use [YAML anchors](https://yaml.org/spec/1.2/spec.html#id2765878) to declare the configuration change once and reuse it across all the node sets. :::: diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md index 47296125a6..5072efa781 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md @@ -95,7 +95,7 @@ In this step, you’ll create a Python script that generates logs in JSON format This Python script randomly generates one of twelve log messages, continuously, at a random interval of between 1 and 10 seconds. The log messages are written to file `elvis.json`, each with a timestamp, a log level of *info*, *warning*, *error*, or *critical*, and other data. Just to add some variance to the log data, the *info* message *Elvis has left the building* is set to be the most probable log event. - For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.md). + For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.html). Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsable format becomes increasingly important as the volume and type of data captured in your logs expands over time.
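For reference alongside this tutorial hunk: the script it describes can be reduced to a short sketch. The following is a minimal illustration rather than the tutorial's actual `elvis.py`; it assumes the `ecs-logging` package (`pip install ecs-logging`) and abbreviates the twelve log messages to three invented ones.

```python
import logging
import random
import time

import ecs_logging  # assumed dependency: pip install ecs-logging

logger = logging.getLogger("elvis")
handler = logging.FileHandler("elvis.json")  # single log file in the local directory
handler.setFormatter(ecs_logging.StdlibFormatter())  # emits ECS-formatted JSON lines
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Abbreviated, invented message set; the tutorial script defines twelve.
MESSAGES = [
    (logging.INFO, "Elvis has left the building"),
    (logging.WARNING, "Elvis is off key"),
    (logging.CRITICAL, "Elvis has lost his guitar"),
]

while True:
    # Weight the first entry so the *info* message is the most probable event.
    level, message = random.choices(MESSAGES, weights=[10, 3, 1])[0]
    logger.log(level, message)
    time.sleep(random.randint(1, 10))  # random interval of between 1 and 10 seconds
```

Each line written to `elvis.json` carries `@timestamp`, `log.level`, and `message` fields, which is what makes the output easy to parse downstream.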
diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md index fbfbbb7a0b..ade532b0f0 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md @@ -5,7 +5,7 @@ Traffic filtering, to only AWS PrivateLink connections, is one of the security l Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in Elasticsearch Add-On for Heroku. ::::{note} -PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-availability-zones) for more details. +PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-availability-zones) for more details. :::: @@ -96,7 +96,7 @@ The mapping will be different for your region. Our production VPC Service for `u 1. Create a VPC endpoint in your VPC using the service name for your region. - Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. + Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. Use [the service name for your region](../../../deploy-manage/security/aws-privatelink-traffic-filters.md#ech-private-link-service-names-aliases). @@ -118,7 +118,7 @@ The mapping will be different for your region. Our production VPC Service for `u 2. Then create a DNS CNAME alias pointing to the PrivateLink Endpoint. Add the record to a private DNS zone in your VPC. Use `*` as the record name, and the VPC endpoint DNS name as a value.
- Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.md) for details on creating a CNAME record which points to your VPC endpoint DNS name. + Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html) for details on creating a CNAME record which points to your VPC endpoint DNS name. :::{image} ../../../images/cloud-heroku-ec-private-link-cname.png :alt: PrivateLink CNAME diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md index 48028eabf5..46314e77a5 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md @@ -98,7 +98,7 @@ In this step, you’ll create a Python script that generates logs in JSON format This Python script randomly generates one of twelve log messages, continuously, at a random interval of between 1 and 10 seconds. The log messages are written to file `elvis.json`, each with a timestamp, a log level of *info*, *warning*, *error*, or *critical*, and other data. Just to add some variance to the log data, the *info* message *Elvis has left the building* is set to be the most probable log event. - For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.md). + For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.html). Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsable format becomes increasingly important as the volume and type of data captured in your logs expands over time. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md index 4f092e9914..d337ccb855 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md @@ -5,7 +5,7 @@ Traffic filtering, to only AWS PrivateLink connections, is one of the security l Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in {{ecloud}}. ::::{note} -PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ).
In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-availability-zones) for more details. +PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-availability-zones) for more details. :::: @@ -96,7 +96,7 @@ The mapping will be different for your region. Our production VPC Service for `u 1. Create a VPC endpoint in your VPC using the service name for your region. - Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. + Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. Use [the service name for your region](../../../deploy-manage/security/aws-privatelink-traffic-filters.md#ec-private-link-service-names-aliases). @@ -118,7 +118,7 @@ The mapping will be different for your region. Our production VPC Service for `u 2. Then create a DNS CNAME alias pointing to the PrivateLink Endpoint. Add the record to a private DNS zone in your VPC. Use `*` as the record name, and the VPC endpoint DNS name as a value. - Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.md) for details on creating a CNAME record which points to your VPC endpoint DNS name. + Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html) for details on creating a CNAME record which points to your VPC endpoint DNS name.
:::{image} ../../../images/cloud-ec-private-link-cname.png :alt: PrivateLink CNAME diff --git a/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md b/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md index 0d2142a1c9..4fa495579c 100644 --- a/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md +++ b/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md @@ -61,7 +61,7 @@ To set up the AI Assistant: * [OpenAI API keys](https://platform.openai.com/docs/api-reference) * [Azure OpenAI Service API keys](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference) - * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.md) + * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html) * [Google Gemini service account keys](https://cloud.google.com/iam/docs/keys-list-get) 2. From **Project settings** → **Management** → **Connectors**, create a connector for your AI provider: diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md b/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md index c75e24ecc8..8f3b15d1a8 100644 --- a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md +++ b/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md @@ -131,7 +131,7 @@ Finally, configure the connector in {{kib}}: Your LLM connector is now configured. For more information on using Elastic AI Assistant, refer to [AI Assistant](https://docs.elastic.co/security/ai-assistant). ::::{important} -If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.md), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`. +If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`. :::: diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md index cf3a71e4f6..58ab529f1e 100644 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md +++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md @@ -35,7 +35,7 @@ The following is a high-level overview of the required configuration: ### Java security provider [java-security-provider] -Detailed instructions for installation and configuration of a FIPS certified Java security provider is beyond the scope of this document.
Specifically, a FIPS certified [JCA](https://docs.oracle.com/en/java/javase/17/security/java-cryptography-architecture-jca-reference-guide.md) and [JSSE](https://docs.oracle.com/en/java/javase/17/security/java-secure-socket-extension-jsse-reference-guide.md) implementation is required so that the JVM uses FIPS validated implementations of NIST recommended cryptographic algorithms. +Detailed instructions for installation and configuration of a FIPS certified Java security provider are beyond the scope of this document. Specifically, a FIPS certified [JCA](https://docs.oracle.com/en/java/javase/17/security/java-cryptography-architecture-jca-reference-guide.html) and [JSSE](https://docs.oracle.com/en/java/javase/17/security/java-secure-socket-extension-jsse-reference-guide.html) implementation is required so that the JVM uses FIPS validated implementations of NIST recommended cryptographic algorithms. Elasticsearch has been tested with Bouncy Castle’s [bc-fips 1.0.2.5](https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2.5/bc-fips-1.0.2.5.jar) and [bctls-fips 1.0.19](https://repo1.maven.org/maven2/org/bouncycastle/bctls-fips/1.0.19/bctls-fips-1.0.19.jar). Please refer to the {{es}} [JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm) for details on which combinations of JVM and security provider are supported in FIPS mode. Elasticsearch does not ship with a FIPS certified provider. It is the responsibility of the user to install and configure the security provider to ensure compliance with FIPS 140-2. Using a FIPS certified provider will ensure that only approved cryptographic algorithms are used. @@ -131,7 +131,7 @@ To verify that the security provider is installed and in use, you can use any of ## Upgrade considerations [fips-upgrade-considerations] -{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.md)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation and is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note - {{es}} does not ship with a FIPS certified security provider and requires explicit installation and configuration. +{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.html)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation and is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note: {{es}} does not ship with a FIPS certified security provider and requires explicit installation and configuration. Alternatively, consider using {{ech}} in the [FedRAMP-certified GovCloud region](https://www.elastic.co/industries/public-sector/fedramp).
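On the verification point referenced in the fips-140-compliance hunk above ("To verify that the security provider is installed and in use, you can use any of"): a complementary, cluster-side spot check is sketched below. It assumes the `elasticsearch` Python client with placeholder endpoint and credentials, and it relies on the nodes info API reporting only explicitly configured settings, so `xpack.security.fips_mode.enabled` shows up only if it was set in `elasticsearch.yml`.

```python
from elasticsearch import Elasticsearch

# Placeholder endpoint and credentials; adjust for your deployment.
es = Elasticsearch("https://localhost:9200", api_key="<api-key>", ca_certs="http_ca.crt")

for node in es.nodes.info(metric="settings")["nodes"].values():
    fips = (
        node.get("settings", {})
        .get("xpack", {})
        .get("security", {})
        .get("fips_mode", {})
        .get("enabled", "<not set>")
    )
    print(f"{node['name']}: xpack.security.fips_mode.enabled={fips}")
```

This checks only the {{es}} setting; it does not prove the underlying JVM security provider is the FIPS certified one, which still needs the JVM-side verification the page describes.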
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md index 138b35bf61..a260ea552a 100644 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md +++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md @@ -78,7 +78,7 @@ The bundled JVM is treated the same as any other dependency of {{es}} in terms o :::: -If you decide to run {{es}} using a version of Java that is different from the bundled one, prefer to use the latest release of a [LTS version of Java](https://www.oracle.com/technetwork/java/eol-135779.md) which is [listed in the support matrix](https://elastic.co/support/matrix). Although such a configuration is supported, if you encounter a security issue or other bug in your chosen JVM then Elastic may not be able to help unless the issue is also present in the bundled JVM. Instead, you must seek assistance directly from the supplier of your chosen JVM. You must also take responsibility for reacting to security and bug announcements from the supplier of your chosen JVM. {{es}} may not perform optimally if using a JVM other than the bundled one. {{es}} is closely coupled to certain OpenJDK-specific features, so it may not work correctly with JVMs that are not OpenJDK. {{es}} will refuse to start if you attempt to use a known-bad JVM version. +If you decide to run {{es}} using a version of Java that is different from the bundled one, prefer to use the latest release of an [LTS version of Java](https://www.oracle.com/technetwork/java/eol-135779.html) which is [listed in the support matrix](https://elastic.co/support/matrix). Although such a configuration is supported, if you encounter a security issue or other bug in your chosen JVM then Elastic may not be able to help unless the issue is also present in the bundled JVM. Instead, you must seek assistance directly from the supplier of your chosen JVM. You must also take responsibility for reacting to security and bug announcements from the supplier of your chosen JVM. {{es}} may not perform optimally if using a JVM other than the bundled one. {{es}} is closely coupled to certain OpenJDK-specific features, so it may not work correctly with JVMs that are not OpenJDK. {{es}} will refuse to start if you attempt to use a known-bad JVM version. To use your own version of Java, set the `ES_JAVA_HOME` environment variable to the path to your own JVM installation. The bundled JVM is located within the `jdk` subdirectory of the {{es}} home directory. You may remove this directory if using your own JVM.
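Since the hunk above ends by pointing `ES_JAVA_HOME` at your own JVM installation, a quick way to confirm which JVM a running node actually picked up is the nodes info API, which reports each node's JVM version and whether the bundled JDK is in use. A sketch with the `elasticsearch` Python client; the endpoint and credentials are placeholders:

```python
from elasticsearch import Elasticsearch

# Placeholder endpoint and credentials; adjust for your deployment.
es = Elasticsearch("https://localhost:9200", api_key="<api-key>", ca_certs="http_ca.crt")

for node in es.nodes.info(metric="jvm")["nodes"].values():
    jvm = node["jvm"]
    print(
        node["name"],
        jvm["version"],  # should be a supported Java 17+ release
        jvm["vm_name"],
        "bundled JDK:", jvm.get("using_bundled_jdk"),
    )
```

If `using_bundled_jdk` is `false`, the node is running the JVM you supplied via `ES_JAVA_HOME` rather than the one in the `jdk` subdirectory.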
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md index 14e4941bec..c6c33b574d 100644 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md +++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md @@ -20,7 +20,7 @@ The following examples use the: * models available through [Azure AI Studio](https://ai.azure.com/explore/models?selectedTask=embeddings) or [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) * `text-embedding-004` model for [Google Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api) * `mistral-embed` model for [Mistral](https://docs.mistral.ai/getting-started/models/) -* `amazon.titan-embed-text-v1` model for [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.md) +* `amazon.titan-embed-text-v1` model for [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html) * `ops-text-embedding-zh-001` model for [AlibabaCloud AI](https://help.aliyun.com/zh/open-search/search-platform/developer-reference/text-embedding-api-details) You can use any Cohere and OpenAI models; they are all supported by the {{infer}} API. For a list of recommended models available on HuggingFace, refer to [the supported model list](../../../explore-analyze/elastic-inference/inference-api/huggingface-inference-integration.md). @@ -556,7 +556,7 @@ PUT amazon-bedrock-embeddings 1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step. 2. The field to contain the tokens is a `dense_vector` field. -3. The output dimensions of the model. This value may be different depending on the underlying model used. See the [Amazon Titan model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-multiemb-models.md) or the [Cohere Embeddings model](https://docs.cohere.com/reference/embed) documentation. +3. The output dimensions of the model. This value may be different depending on the underlying model used. See the [Amazon Titan model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-multiemb-models.html) or the [Cohere Embeddings model](https://docs.cohere.com/reference/embed) documentation. 4. For Amazon Bedrock embeddings, the `dot_product` function should be used to calculate similarity for Amazon Titan models, or `cosine` for Cohere models. 5. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step. 6. The field type, which is `text` in this example.
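For context on these callouts: the `PUT amazon-bedrock-embeddings` request they annotate can equally be issued from the Python client. The following is a sketch under stated assumptions: the embedding field name `content_embedding` is illustrative (the callouts only fix the source field name, `content`), and `dims: 1536` matches `amazon.titan-embed-text-v1`; verify the dimension count against your model's documentation, per callout 3.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder auth

es.indices.create(
    index="amazon-bedrock-embeddings",
    mappings={
        "properties": {
            "content_embedding": {            # illustrative name for the tokens field (callout 1)
                "type": "dense_vector",       # callout 2
                "dims": 1536,                 # amazon.titan-embed-text-v1 output size (callout 3)
                "similarity": "dot_product",  # use "cosine" for Cohere models (callout 4)
            },
            "content": {"type": "text"},      # source field and its type (callouts 5 and 6)
        }
    },
)
```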
diff --git a/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md b/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md index 0ca6c08e52..02e041db21 100644 --- a/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md +++ b/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md @@ -76,7 +76,7 @@ To set up the AI Assistant: * [OpenAI API keys](https://platform.openai.com/docs/api-reference) * [Azure OpenAI Service API keys](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference) - * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.md) + * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html) * [Google Gemini service account keys](https://cloud.google.com/iam/docs/keys-list-get) 2. Create a connector for your AI provider. Refer to the connector documentation to learn how:
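Both AI Assistant hunks end at the same step, creating a connector for the AI provider. For completeness, that step can also be scripted against the Kibana connector HTTP API rather than clicked through the UI. The sketch below is hypothetical: the Kibana URL, API key, region, and model ID are placeholders, and the `config`/`secrets` field names follow the `.bedrock` connector type. Note the provisioned-throughput caveat from the security-connect-to-bedrock hunk above: the ARN value must be URL-encoded before it goes into the connector settings, and `urllib.parse.quote(arn, safe="")` produces exactly the encoded form shown there.

```python
import requests

KIBANA_URL = "https://my-kibana.example.com"     # placeholder
HEADERS = {
    "kbn-xsrf": "true",                          # required by the Kibana API
    "Authorization": "ApiKey <kibana-api-key>",  # placeholder credentials
}

resp = requests.post(
    f"{KIBANA_URL}/api/actions/connector",
    headers=HEADERS,
    json={
        "name": "Amazon Bedrock",
        "connector_type_id": ".bedrock",
        "config": {
            "apiUrl": "https://bedrock-runtime.us-east-1.amazonaws.com",  # placeholder region
            "defaultModel": "anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model
        },
        "secrets": {
            "accessKey": "<aws-access-key-id>",
            "secret": "<aws-secret-access-key>",
        },
    },
)
resp.raise_for_status()
print("connector id:", resp.json()["id"])
```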