diff --git a/deploy-manage/autoscaling/autoscaling-in-eck.md b/deploy-manage/autoscaling/autoscaling-in-eck.md index b4c214870d..2f4fa862fb 100644 --- a/deploy-manage/autoscaling/autoscaling-in-eck.md +++ b/deploy-manage/autoscaling/autoscaling-in-eck.md @@ -167,7 +167,7 @@ spec: max: 512Gi ``` -You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/2.16/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md). +You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md). #### Change the polling interval [k8s-autoscaling-polling-interval] diff --git a/deploy-manage/deploy/cloud-on-k8s.md b/deploy-manage/deploy/cloud-on-k8s.md index 02930691f3..802556fd47 100644 --- a/deploy-manage/deploy/cloud-on-k8s.md +++ b/deploy-manage/deploy/cloud-on-k8s.md @@ -57,7 +57,7 @@ Afterwards, you can: * Learn how to [update your deployment](./cloud-on-k8s/update-deployments.md) * Check out [our recipes](./cloud-on-k8s/recipes.md) for multiple use cases -* Find further sample resources [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/samples) +* Find further sample resources [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/samples) ## Supported versions [k8s-supported] @@ -70,7 +70,7 @@ ECK is compatible with the following Kubernetes distributions and related techno * Kubernetes 1.28-1.32 * OpenShift 4.14-4.18 * Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS) -* Helm: 3.2.0+ +* Helm: {{eck_helm_minimum_version}}+ ECK should work with all conformant **installers** listed in these [FAQs](https://github.com/cncf/k8s-conformance/blob/master/faq.md#what-is-a-distribution-hosted-platform-and-an-installer). Distributions include source patches and so may not work as-is with ECK. diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md index a368484028..f33e4630f6 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md @@ -206,7 +206,7 @@ Starting with ECK 2.0 the operator can make Kubernetes Node labels available as 2. On the {{es}} resources set the `eck.k8s.elastic.co/downward-node-labels` annotations with the list of the Kubernetes node labels that should be copied as Pod annotations. 3. Use the [Kubernetes downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) in the `podTemplate` to make those annotations available as environment variables in {{es}} Pods. -Refer to the next section or to the [{{es}} sample resource in the ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/samples/elasticsearch/elasticsearch.yaml) for a complete example. 
+Refer to the next section or to the [{{es}} sample resource in the ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/samples/elasticsearch/elasticsearch.yaml) for a complete example. ### Using node topology labels, Kubernetes topology spread constraints, and {{es}} shard allocation awareness [k8s-availability-zone-awareness-example] diff --git a/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md b/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md index 88376e2bfe..1ece3c00c7 100644 --- a/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md +++ b/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md @@ -42,7 +42,7 @@ ECK will automatically set the correct container image for each application. Whe To deploy the ECK operator in an air-gapped environment, you first have to mirror the operator image itself from `docker.elastic.co` to a private container registry, for example `my.registry`. -Once the ECK operator image is copied internally, replace the original image name `docker.elastic.co/eck/eck-operator:2.16.1` with the private name of the image, for example `my.registry/eck/eck-operator:2.16.1`, in the [operator manifests](../../../deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md). When using [Helm charts](../../../deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md), replace the `image.repository` Helm value with, for example, `my.registry/eck/eck-operator`. +Once the ECK operator image is copied internally, replace the original image name `docker.elastic.co/eck/eck-operator:{{eck_version}}` with the private name of the image, for example `my.registry/eck/eck-operator:{{eck_version}}`, in the [operator manifests](../../../deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md). When using [Helm charts](../../../deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md), replace the `image.repository` Helm value with, for example, `my.registry/eck/eck-operator`. ## Override the default container registry [k8s-container-registry-override] diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md index 366afb8e23..36ce4f4765 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md @@ -18,7 +18,7 @@ The examples in this section are purely descriptive and should not be considered ## Metricbeat for Kubernetes monitoring [k8s_metricbeat_for_kubernetes_monitoring] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/metricbeat_hosts.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/metricbeat_hosts.yaml ``` Deploys Metricbeat as a DaemonSet that monitors the usage of the following resources: @@ -30,7 +30,7 @@ Deploys Metricbeat as a DaemonSet that monitors the usage of the following resou ## Filebeat with autodiscover [k8s_filebeat_with_autodiscover] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/filebeat_autodiscover.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_autodiscover.yaml ``` Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. 
It collects logs from Pods in every namespace and loads them to the connected {{es}} cluster. @@ -39,7 +39,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. It collec ## Filebeat with autodiscover for metadata [k8s_filebeat_with_autodiscover_for_metadata] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml ``` Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from Pods that match the following criteria are shipped to the connected {{es}} cluster: @@ -51,7 +51,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from ## Filebeat without autodiscover [k8s_filebeat_without_autodiscover] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/filebeat_no_autodiscover.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_no_autodiscover.yaml ``` Deploys Filebeat as a DaemonSet with the autodiscover feature disabled. Uses the entire logs directory on the host as the input source. This configuration does not require any RBAC resources as no Kubernetes APIs are used. @@ -60,7 +60,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature disabled. Uses the ## {{es}} and {{kib}} Stack Monitoring [k8s_elasticsearch_and_kibana_stack_monitoring] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/stack_monitoring.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/stack_monitoring.yaml ``` Deploys Metricbeat configured for {{es}} and {{kib}} [Stack Monitoring](/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md) and Filebeat using autodiscover. Deploys one monitored {{es}} cluster and one monitoring {{es}} cluster. You can access the Stack Monitoring app in the monitoring cluster’s {{kib}}. @@ -74,7 +74,7 @@ In this example, TLS verification is disabled when Metricbeat communicates with ## Heartbeat monitoring {{es}} and {{kib}} health [k8s_heartbeat_monitoring_elasticsearch_and_kibana_health] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/heartbeat_es_kb_health.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/heartbeat_es_kb_health.yaml ``` Deploys Heartbeat as a single Pod deployment that monitors the health of {{es}} and {{kib}} by TCP probing their Service endpoints. Heartbeat expects that {{es}} and {{kib}} are deployed in the `default` namespace. @@ -83,7 +83,7 @@ Deploys Heartbeat as a single Pod deployment that monitors the health of {{es}} ## Auditbeat [k8s_auditbeat] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/auditbeat_hosts.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/auditbeat_hosts.yaml ``` Deploys Auditbeat as a DaemonSet that checks file integrity and audits file operations on the host system. 
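After applying any of these recipes, it can help to confirm that the workload actually rolled out. A minimal sketch for the Auditbeat recipe above, assuming its default resource name `auditbeat` in the `default` namespace:

```sh
# Check the Beat health reported by the ECK operator.
kubectl get beat auditbeat -n default

# Confirm the DaemonSet is scheduled on every node; the
# beat.k8s.elastic.co/name label is set by the operator on Beat Pods.
kubectl get daemonset,pods -n default -l beat.k8s.elastic.co/name=auditbeat
```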
@@ -92,7 +92,7 @@ Deploys Auditbeat as a DaemonSet that checks file integrity and audits file oper ## Packetbeat monitoring DNS and HTTP traffic [k8s_packetbeat_monitoring_dns_and_http_traffic] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/packetbeat_dns_http.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/packetbeat_dns_http.yaml ``` Deploys Packetbeat as a DaemonSet that monitors DNS on port `53` and HTTP(S) traffic on ports `80`, `8000`, `8080` and `9200`. @@ -101,7 +101,7 @@ Deploys Packetbeat as a DaemonSet that monitors DNS on port `53` and HTTP(S) tra ## OpenShift monitoring [k8s_openshift_monitoring] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/openshift_monitoring.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/openshift_monitoring.yaml ``` Deploys Metricbeat as a DaemonSet that monitors the host resource usage (CPU, memory, network, filesystem), OpenShift resources (Nodes, Pods, Containers, Volumes), API Server and Filebeat using autodiscover. Deploys an {{es}} cluster and {{kib}} to centralize data collection. diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md index c93b51e5b3..0be6417fec 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md @@ -18,7 +18,7 @@ The examples in this section are for illustration purposes only and should not b ## System and {{k8s}} {{integrations}} [k8s_system_and_k8s_integrations] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/fleet-kubernetes-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-kubernetes-integration.yaml ``` Deploys {{agent}} as a DaemonSet in {{fleet}} mode with System and {{k8s}} {{integrations}} enabled. System integration collects syslog logs, auth logs and system metrics (for CPU, I/O, filesystem, memory, network, process and others). {{k8s}} {{integrations}} collects API server, Container, Event, Node, Pod, Volume and system metrics. @@ -27,7 +27,7 @@ Deploys {{agent}} as a DaemonSet in {{fleet}} mode with System and {{k8s}} {{int ## System and {{k8s}} {{integrations}} running as non-root [k8s_system_and_k8s_integrations_running_as_non_root] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/fleet-kubernetes-integration-nonroot.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-kubernetes-integration-nonroot.yaml ``` The provided example is functionally identical to the previous section but runs the {{agent}} processes (both the {{agent}} running as the {{fleet}} server and the {{agent}} connected to {{fleet}}) as a non-root user by utilizing a DaemonSet to ensure directory and file permissions. 
@@ -41,7 +41,7 @@ The DaemonSet itself must run as root to set up permissions and ECK >= 2.10.0 is ## Custom logs integration with autodiscover [k8s_custom_logs_integration_with_autodiscover] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/fleet-custom-logs-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-custom-logs-integration.yaml ``` Deploys {{agent}} as a DaemonSet in {{fleet}} mode with Custom Logs integration enabled. Collects logs from all Pods in the `default` namespace using autodiscover feature. @@ -50,7 +50,7 @@ Deploys {{agent}} as a DaemonSet in {{fleet}} mode with Custom Logs integration ## APM integration [k8s_apm_integration] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/fleet-apm-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-apm-integration.yaml ``` Deploys single instance {{agent}} Deployment in {{fleet}} mode with APM integration enabled. @@ -59,7 +59,7 @@ Deploys single instance {{agent}} Deployment in {{fleet}} mode with APM integrat ## Synthetic monitoring [k8s_synthetic_monitoring] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/synthetic-monitoring.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/synthetic-monitoring.yaml ``` Deploys an {{fleet}}-enrolled {{agent}} that can be used as for [Synthetic monitoring](/solutions/observability/synthetics/index.md). This {{agent}} uses the `elastic-agent-complete` image. The agent policy still needs to be [registered as private location](/solutions/observability/synthetics/monitor-resources-on-private-networks.md#synthetics-private-location-add) in {{kib}}. diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md index 87a4f9635c..7a1a0ce406 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md @@ -18,7 +18,7 @@ The examples in this section are for illustration purposes only. 
They should not ## Single pipeline defined in CRD [k8s-logstash-configuration-single-pipeline-crd] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-eck.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-eck.yaml ``` Deploys Logstash with a single pipeline defined in the CRD @@ -27,7 +27,7 @@ Deploys Logstash with a single pipeline defined in the CRD ## Single Pipeline defined in Secret [k8s-logstash-configuration-single-pipeline-secret] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-pipeline-as-secret.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-pipeline-as-secret.yaml ``` Deploys Logstash with a single pipeline defined in a secret, referenced by a `pipelineRef` @@ -36,7 +36,7 @@ Deploys Logstash with a single pipeline defined in a secret, referenced by a `pi ## Pipeline configuration in mounted volume [k8s-logstash-configuration-pipeline-volume] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-pipeline-as-volume.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-pipeline-as-volume.yaml ``` Deploys Logstash with a single pipeline defined in a secret, mounted as a volume, and referenced by `path.config` @@ -45,7 +45,7 @@ Deploys Logstash with a single pipeline defined in a secret, mounted as a volume ## Writing to a custom {{es}} index [k8s-logstash-configuration-custom-index] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-es-role.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-es-role.yaml ``` Deploys Logstash and {{es}}, and creates an updated version of the `eck_logstash_user_role` to write to a user specified index. @@ -54,7 +54,7 @@ Deploys Logstash and {{es}}, and creates an updated version of the `eck_logstash ## Creating persistent volumes for PQ and DLQ [k8s-logstash-configuration-pq-dlq] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-volumes.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-volumes.yaml ``` Deploys Logstash, Beats and {{es}}. Logstash is configured with two pipelines: @@ -66,7 +66,7 @@ Deploys Logstash, Beats and {{es}}. Logstash is configured with two pipelines: ## {{es}} and {{kib}} Stack Monitoring [k8s-logstash-configuration-stack-monitoring] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-monitored.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-monitored.yaml ``` Deploys an {{es}} and {{kib}} monitoring cluster, and a Logstash that will send its monitoring information to this cluster. 
You can view the stack monitoring information in the monitoring cluster’s Kibana @@ -75,7 +75,7 @@ Deploys an {{es}} and {{kib}} monitoring cluster, and a Logstash that will send ## Multiple pipelines/multiple {{es}} clusters [k8s-logstash-configuration-multiple-pipelines] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-multi.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-multi.yaml ``` Deploys {{es}} in prod and qa configurations, running in separate namespaces. Logstash is configured with a multiple pipeline→pipeline configuration, with a source pipeline routing to `prod` and `qa` pipelines. diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md index 8d7a866161..c667cef8bd 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md @@ -18,7 +18,7 @@ The examples in this section are for illustration purposes only and should not b ## System integration [k8s_system_integration] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/system-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/system-integration.yaml ``` Deploys Elastic Agent as a DaemonSet in standalone mode with system integration enabled. Collects syslog logs, auth logs and system metrics (for CPU, I/O, filesystem, memory, network, process and others). @@ -27,7 +27,7 @@ Deploys Elastic Agent as a DaemonSet in standalone mode with system integration ## Kubernetes integration [k8s_kubernetes_integration] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/kubernetes-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/kubernetes-integration.yaml ``` Deploys Elastic Agent as a DaemonSet in standalone mode with Kubernetes integration enabled. Collects API server, Container, Event, Node, Pod, Volume and system metrics. @@ -36,7 +36,7 @@ Deploys Elastic Agent as a DaemonSet in standalone mode with Kubernetes integrat ## Multiple {{es}} clusters output [k8s_multiple_elasticsearch_clusters_output] ```sh -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/multi-output.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/multi-output.yaml ``` Deploys two {{es}} clusters and two {{kib}} instances together with single Elastic Agent DaemonSet in standalone mode with System integration enabled. System metrics are sent to the `elasticsearch` cluster. Elastic Agent monitoring data is sent to `elasticsearch-mon` cluster. 
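Whichever example you apply, the Elastic custom resources it creates can be listed in one command to check reconciliation progress; a quick sketch:

```sh
# List the Elastic resources created by the examples, in every namespace.
# A green HEALTH column indicates the operator finished reconciling.
kubectl get elasticsearch,kibana,beat,agent --all-namespaces
```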
diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md index 23ec2ed9ae..ddfb0662f7 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md @@ -44,7 +44,7 @@ The cluster that you deployed in this quickstart guide only allocates a persiste :::: -For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the cluster. For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): +For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/crds). You can also retrieve information about a CRD from the cluster. For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd elasticsearch diff --git a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md index b9186de363..d8c9da3c27 100644 --- a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md @@ -31,7 +31,7 @@ You can disable the generation of the self-signed certificate and hence disable ### Ingress and {{kib}} configuration [k8s-maps-ingress] -To use Elastic Maps Server from your {{kib}} instances, you need to configure {{kib}} to fetch maps from your Elastic Maps Server instance by using the [`map.emsUrl`](/explore-analyze/visualize/maps/maps-connect-to-ems.md#elastic-maps-server-kibana) configuration key. The value of this setting needs to be the URL where the Elastic Maps Server instance is reachable from your browser. The certificates presented by Elastic Maps Server need to be trusted by the browser, and the URL must have the same origin as the URL where your {{kib}} is hosted to avoid cross origin resource issues. Check the [recipe section](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes/) for an example on how to set this up using an Ingress resource. +To use Elastic Maps Server from your {{kib}} instances, you need to configure {{kib}} to fetch maps from your Elastic Maps Server instance by using the [`map.emsUrl`](/explore-analyze/visualize/maps/maps-connect-to-ems.md#elastic-maps-server-kibana) configuration key. The value of this setting needs to be the URL where the Elastic Maps Server instance is reachable from your browser. The certificates presented by Elastic Maps Server need to be trusted by the browser, and the URL must have the same origin as the URL where your {{kib}} is hosted to avoid cross origin resource issues. Check the [recipe section](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/) for an example on how to set this up using an Ingress resource. :::{admonition} Support scope for Ingress Controllers [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. 
While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. diff --git a/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md index 917858358a..2eaef41f06 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md @@ -17,7 +17,7 @@ helm repo update ``` ::::{note} -The minimum supported version of Helm is 3.2.0. +The minimum supported version of Helm is {{eck_helm_minimum_version}}. :::: ## Installation options @@ -67,7 +67,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na --set=managedNamespaces='{namespace-a, namespace-b}' ``` -You can find the profile files in the Helm cache directory or in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/deploy/eck-operator). +You can find the profile files in the Helm cache directory or in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-operator). :::: The previous example disabled the validation webhook along with all other cluster-wide resources. If you need to enable the validation webhook in a restricted environment, see [](./webhook-namespace-selectors.md). To understand what the validation webhook does, refer to [](./configure-validating-webhook.md). @@ -89,7 +89,7 @@ Migrating an existing installation to Helm is essentially an upgrade operation a You can migrate an existing operator installation to Helm by adding the `meta.helm.sh/release-name`, `meta.helm.sh/release-namespace` annotations and the `app.kubernetes.io/managed-by` label to all the resources you want to be adopted by Helm. You *must* do this for the Elastic Custom Resource Definitions (CRD) because deleting them would trigger the deletion of all deployed Elastic applications as well. All other resources are optional and can be deleted. ::::{note} -A shell script is available in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/blob/2.16/deploy/helm-migrate.sh) to demonstrate how to migrate from version 1.7.1 to Helm. You can modify it to suit your own environment. +A shell script is available in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/deploy/helm-migrate.sh) to demonstrate how to migrate from version 1.7.1 to Helm. You can modify it to suit your own environment. :::: For example, an ECK 1.2.1 installation deployed using [YAML manifests](/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md) can be migrated to Helm as follows: diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md index 8f09e96c9d..171b4f849e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md @@ -13,7 +13,7 @@ Deploying Elastic Agent on Openshift may require additional permissions dependin The following example assumes that Elastic Agent is deployed in the Namespace `elastic` with the ServiceAccount `elastic-agent`. You can replace these values according to your environment. 
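If those prerequisites don't exist yet, they can be created up front; a minimal sketch using the names the example assumes (the following note covers the case where a recipe created them already):

```sh
# Create the Namespace and ServiceAccount referenced by the example.
kubectl create namespace elastic
kubectl create serviceaccount elastic-agent -n elastic
```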
::::{note} -If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes/elastic-agent), the ServiceAccount may already exist. +If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/elastic-agent), the ServiceAccount may already exist. :::: diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md index ff171d0c0d..2392ef8609 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md @@ -13,7 +13,7 @@ Deploying Beats on Openshift may require some privileged permissions. This secti The following example assumes that Beats is deployed in the Namespace `elastic` with the ServiceAccount `heartbeat`. You can replace these values according to your environment. ::::{note} -If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes/beats), the ServiceAccount may already exist. +If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/beats), the ServiceAccount may already exist. :::: @@ -103,5 +103,5 @@ spec: path: /var/lib/docker/containers ``` -Check the complete examples in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes/beats). +Check the complete examples in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/beats). diff --git a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md index 87f42eb3ea..ea0d7ab38d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md @@ -66,7 +66,7 @@ To deploy a simple [{{kib}}](/get-started/the-stack.md#stack-components-kibana) ``` -For a full description of each `CustomResourceDefinition` (CRD), refer to the [API reference](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the instance. For example, describe the {{kib}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): +For a full description of each `CustomResourceDefinition` (CRD), refer to the [API reference](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/crds). You can also retrieve information about a CRD from the instance. For example, describe the {{kib}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd kibana diff --git a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md index e4a53fa629..8d4509248f 100644 --- a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md @@ -17,7 +17,7 @@ helm repo update ``` ::::{note} -The minimum supported version of Helm is 3.2.0. 
+The minimum supported version of Helm is {{eck_helm_minimum_version}}. :::: The {{stack}} (`eck-stack`) Helm chart is built on top of individual charts such as `eck-elasticsearch` and `eck-kibana`. For more details on its structure and dependencies, refer to the [chart repository](https://github.com/elastic/cloud-on-k8s/tree/main/deploy/eck-stack/). @@ -39,15 +39,15 @@ helm install es-kb-quickstart elastic/eck-stack -n elastic-stack --create-namesp ### Customize {{es}} and {{kib}} installation with example values [k8s-eck-stack-helm-customize] -You can find example Helm values files for deploying and managing more advanced {{es}} and {{kib}} setups [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/deploy/eck-stack/examples). +You can find example Helm values files for deploying and managing more advanced {{es}} and {{kib}} setups [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-stack/examples). To use one or more of these example configurations, use the `--values` Helm option, as seen in the following section. ```sh # Install an eck-managed Elasticsearch and Kibana using the Elasticsearch node roles example with hot, warm, and cold data tiers, and the Kibana example customizing the http service. helm install es-quickstart elastic/eck-stack -n elastic-stack --create-namespace \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/elasticsearch/hot-warm-cold.yaml \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/kibana/http-configuration.yaml + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/elasticsearch/hot-warm-cold.yaml \ + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/kibana/http-configuration.yaml ``` ## Fleet Server with Elastic Agents along with {{es}} and {{kib}} [k8s-install-fleet-agent-elasticsearch-kibana-helm] @@ -57,7 +57,7 @@ The following section builds upon the previous section, and allows installing Fl ```sh # Install an eck-managed Elasticsearch, Kibana, Fleet Server, and managed Elastic Agents using custom values. helm install eck-stack-with-fleet elastic/eck-stack \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/agent/fleet-agents.yaml -n elastic-stack + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/agent/fleet-agents.yaml -n elastic-stack ``` ## Logstash along with {{es}}, {{kib}} and Beats [k8s-install-logstash-elasticsearch-kibana-helm] @@ -67,7 +67,7 @@ The following section builds upon the previous sections, and allows installing L ```sh # Install an eck-managed Elasticsearch, Kibana, Beats and Logstash using custom values. 
helm install eck-stack-with-logstash elastic/eck-stack \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/logstash/basic-eck.yaml -n elastic-stack + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/logstash/basic-eck.yaml -n elastic-stack ``` ## Standalone Elastic APM Server along with {{es}} and {{kib}} [k8s-install-apm-server-elasticsearch-kibana-helm] @@ -77,12 +77,12 @@ The following section builds upon the previous sections, and allows installing a ```sh # Install an eck-managed Elasticsearch, Kibana, and standalone APM Server using custom values. helm install eck-stack-with-apm-server elastic/eck-stack \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/apm-server/basic.yaml -n elastic-stack + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/apm-server/basic.yaml -n elastic-stack ``` ## Enterprise Search server along with {{es}} and {{kib}} [k8s-install-enterprise-search-elasticsearch-kibana-helm] -Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example deployment of {{es}} version 8.x, {{kib}} 8.x, and an 8.x Enterprise Search server using the Helm chart, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-stack-helm-chart.html). +Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example deployment of {{es}} version 8.x, {{kib}} 8.x, and an 8.x Enterprise Search server using the Helm chart, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/{{eck_release_branch}}/k8s-stack-helm-chart.html). ## Install individual components of the {{stack}} [k8s-eck-stack-individual-components] diff --git a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md index 6785da7562..cab1a32a1d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md +++ b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md @@ -15,7 +15,7 @@ The following guides provide specific instructions for deploying and configuring * [{{ls}}](logstash.md) ::::{note} -Enterprise Search is not available in {{stack}} versions 9.0 and later. To deploy or manage Enterprise Search in earlier versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-enterprise-search.html). +Enterprise Search is not available in {{stack}} versions 9.0 and later. To deploy or manage Enterprise Search in earlier versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/{{eck_release_branch}}/k8s-enterprise-search.html). :::: When orchestrating any of these applications, also consider the following topics: diff --git a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md index f0468306af..f7940dd90b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md +++ b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md @@ -10,7 +10,7 @@ mapped_pages: The default Kubernetes service created by ECK, named `-es-http`, is configured to include all the {{es}} nodes in that cluster. 
This configuration is good to get started and is adequate for most use cases. However, if you are operating an {{es}} cluster with [different node types](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md) and want control over which nodes handle which types of traffic, you should create additional Kubernetes services yourself. -As an alternative, you can use features provided by third-party software such as service meshes and ingress controllers to achieve more advanced traffic management configurations. Check the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes) in the ECK source repository for a few examples. +As an alternative, you can use features provided by third-party software such as service meshes and ingress controllers to achieve more advanced traffic management configurations. Check the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes) in the ECK source repository for a few examples. :::{admonition} Support scope for Ingress Controllers [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. diff --git a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md index 8efeb620d0..fac827b65c 100644 --- a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md +++ b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md @@ -79,7 +79,7 @@ To enable the restriction of cross-namespace associations, start the operator wi ``` -In this example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`. You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/2.16/config/recipes/associations-rbac/apm_es_kibana_rbac.yaml). +In this example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`. You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/config/recipes/associations-rbac/apm_es_kibana_rbac.yaml). ::::{note} If the `serviceAccountName` is not set, ECK uses the default service account assigned to the pod by the [Service Account Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller). diff --git a/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md b/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md index abd853cbfe..a87ef8755c 100644 --- a/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md +++ b/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md @@ -56,6 +56,6 @@ If you are using a custom TLS certificate and you need to set `insecureSkipVerif 2. Ensure that the CA secret is mounted within the Prometheus Pod. - Steps will vary between Prometheus installations. 
If you're using the Prometheus operator, you can set the `spec.secrets` field of the `Prometheus` custom resource to the name of the previously created Kubernetes Secret. See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/2.16/deploy/eck-operator/values.yaml) for more information. + Steps will vary between Prometheus installations. If you're using the Prometheus operator, you can set the `spec.secrets` field of the `Prometheus` custom resource to the name of the previously created Kubernetes Secret. See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-operator/values.yaml) for more information. diff --git a/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md b/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md index f16cdb3157..6955cfc0b6 100644 --- a/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md +++ b/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md @@ -39,7 +39,7 @@ Providing this secret is sufficient to use your own certificate if it is from a * Set `serviceMonitor.insecureSkipVerify` to `false` to enable TLS validation. * Set `serviceMonitor.caSecret` to the name of an existing Kubernetes secret within the Prometheus namespace that contains the CA in PEM format in a file called `ca.crt`. - * Set the `spec.secrets` field of the `Prometheus` custom resource, or `prometheus.prometheusSpec.secrets` when using the Helm chart such that the CA secret is mounted into the Prometheus pod at `serviceMonitor.caMountDirectory` (assuming you are using the Prometheus operator). See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/2.16/deploy/eck-operator/values.yaml) for more information. + * Set the `spec.secrets` field of the `Prometheus` custom resource, or `prometheus.prometheusSpec.secrets` when using the Helm chart such that the CA secret is mounted into the Prometheus pod at `serviceMonitor.caMountDirectory` (assuming you are using the Prometheus operator). See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-operator/values.yaml) for more information. Refer to [](k8s-prometheus-requirements.md) for more information on creating the CA secret. diff --git a/deploy-manage/remote-clusters/eck-remote-clusters.md b/deploy-manage/remote-clusters/eck-remote-clusters.md index 03b071f79c..ace4a7b534 100644 --- a/deploy-manage/remote-clusters/eck-remote-clusters.md +++ b/deploy-manage/remote-clusters/eck-remote-clusters.md @@ -90,7 +90,7 @@ spec: 1. This requires the sample data: [/explore-analyze/index.md#gs-get-data-into-kibana](/explore-analyze/index.md#gs-get-data-into-kibana) -You can find a complete example in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes/remoteclusters). +You can find a complete example in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/remoteclusters). 
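To check that the remote connection is established, query the `_remote/info` API on the local cluster; a sketch, assuming quickstart-style resource names rather than the ones used in the recipe:

```sh
# Forward the local cluster's HTTP Service to localhost.
kubectl port-forward service/quickstart-es-http 9200 &

# Fetch the elastic user's password and list connected remote clusters.
PASSWORD=$(kubectl get secret quickstart-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_remote/info"
```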
### Using the certificate security model [k8s_using_the_certificate_security_model] diff --git a/deploy-manage/security/k8s-network-policies.md b/deploy-manage/security/k8s-network-policies.md index 8baf7d2661..8d0476dc7f 100644 --- a/deploy-manage/security/k8s-network-policies.md +++ b/deploy-manage/security/k8s-network-policies.md @@ -435,4 +435,4 @@ spec: ## Isolating Enterprise Search [k8s-network-policies-enterprise-search-isolation] -Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example of Enterprise Search isolation using network policies in previous {{stack}} versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s_prerequisites.html#k8s-network-policies-enterprise-search-isolation). +Enterprise Search is not available in {{stack}} versions 9.0 and later. For an example of Enterprise Search isolation using network policies in previous {{stack}} versions, refer to the [previous ECK documentation](https://www.elastic.co/guide/en/cloud-on-k8s/{{eck_release_branch}}/k8s_prerequisites.html#k8s-network-policies-enterprise-search-isolation). diff --git a/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md b/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md index f53ada1cc7..a9cf45e2ce 100644 --- a/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md +++ b/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md @@ -23,9 +23,9 @@ To uninstall the operator: 2. Uninstall the operator: - ```shell - kubectl delete -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml - kubectl delete -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml + ```shell subs=true + kubectl delete -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml + kubectl delete -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml ``` ::::{warning} diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md index bdc59573ca..8a0593ed97 100644 --- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md +++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md @@ -13,13 +13,13 @@ This page provides instructions on how to upgrade the ECK operator. To learn how to upgrade {{stack}} applications like {{es}} or {{kib}}, refer to [Upgrade the {{stack}} version](../deployment-or-cluster.md). -## Before you upgrade to ECK 3.0.0 [k8s-ga-upgrade] +## Before you upgrade to ECK {{eck_version}} [k8s-ga-upgrade] The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all {{es}} and {{kib}} pods. This [list](#k8s-beta-to-ga-rolling-restart) details the affected target versions that will cause a rolling restart. If you have a large {{es}} cluster or multiple {{stack}} deployments, the rolling restart could cause a performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and {{es}} clusters. For more details on controlling rolling restarts during the upgrade, refer to the [control the rolling restarts during the upgrade](#k8s-beta-to-ga-rolling-restart) section. Before upgrading, refer to the [release notes](cloud-on-k8s://release-notes/index.md) to make sure that the release does not contain any breaking changes that could affect you. 
The [release highlights document](cloud-on-k8s://release-notes/index.md) provides more details and possible workarounds for any breaking changes or known issues in each release. -Note that the release notes and highlights only list the changes since the last release. If during the upgrade you skip any intermediate versions and go for example from 1.0.0 directly to 3.0.0, review the release notes and highlights of each of the skipped releases to understand all the breaking changes you might encounter during and after the upgrade. +Note that the release notes and highlights only list the changes since the last release. If during the upgrade you skip any intermediate versions and go for example from 1.0.0 directly to {{eck_version}}, review the release notes and highlights of each of the skipped releases to understand all the breaking changes you might encounter during and after the upgrade. ::::{warning} When upgrading always ensure that the version of the CRDs installed in the cluster matches the version of the operator. If you are using Helm, the CRDs are upgraded automatically as part of the Helm chart. If you are using the YAML manifests, you must upgrade the CRDs manually. Running differing versions of the CRDs and the operator is not a supported configuration and can lead to unexpected behavior. @@ -34,15 +34,15 @@ When upgrading always ensure that the version of the CRDs installed in the clust Release 1.7.0 moved the [CustomResourceDefinitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD) used by ECK to the v1 version. If you upgrade from a previous version of ECK, the new version of the CRDs replaces the existing CRDs. If you cannot remove the current ECK installation because you have production workloads that must not be deleted, the following approach is recommended. -```shell -kubectl replace -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml +```shell subs=true +kubectl replace -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml ``` ::::{note} If you skipped a release in which new CRDs where introduced, you will get an error message similar to `Error from server (NotFound): error when replacing "config/crds.yaml": customresourcedefinitions.apiextensions.k8s.io ... not found`. To add the missing CRDs run -```shell -kubectl create -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml +```shell subs=true +kubectl create -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml ``` :::: @@ -50,8 +50,8 @@ kubectl create -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml Then upgrade the remaining objects with the operator manifest: -```shell -kubectl apply -f https://download.elastic.co/downloads/eck/3.0.0/operator.yaml +```shell subs=true +kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml ``` If you are using Helm: force upgrade the CRD chart to move to the v1 CRDs. @@ -76,13 +76,13 @@ Operator Lifecycle Manager (OLM) and OpenShift OperatorHub users that run with a ### Upgrading from ECK 2.0 or later [k8s_upgrading_from_eck_2_0_or_later] -There are no special instructions to follow if you upgrade from any 2.x version to 3.0.0. Use the upgrade method applicable to your installation method of choice. +There are no special instructions to follow if you upgrade from any 2.x version to {{eck_version}}. Use the upgrade method applicable to your installation method of choice. 
If you are using our YAML manifests: -```shell -kubectl apply -f https://download.elastic.co/downloads/eck/3.0.0/crds.yaml -kubectl apply -f https://download.elastic.co/downloads/eck/3.0.0/operator.yaml +```shell subs=true +kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml +kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml ``` If you are using Helm: @@ -115,7 +115,7 @@ Once a resource is excluded from being managed by ECK, you will not be able to a Exclude Elastic resources from being managed by the operator: -```shell +```shell subs=true ANNOTATION='eck.k8s.elastic.co/managed=false' # Exclude a single Elasticsearch resource named "quickstart" @@ -130,7 +130,7 @@ for NS in $(kubectl get ns -o=custom-columns='NAME:.metadata.name' --no-headers) Once the operator has been upgraded and you are ready to let the resource become managed again (triggering a rolling restart of pods in the process), remove the annotation. -```shell +```shell subs=true RM_ANNOTATION='eck.k8s.elastic.co/managed-' # Resume management of a single {{es}} cluster named "quickstart" @@ -138,5 +138,5 @@ kubectl annotate elasticsearch quickstart $RM_ANNOTATION ``` ::::{note} -The ECK source repository contains a [shell script](https://github.com/elastic/cloud-on-k8s/tree/2.16/hack/annotator) to assist with mass addition/deletion of annotations. +The ECK source repository contains a [shell script](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/hack/annotator) to assist with mass addition/deletion of annotations. :::: diff --git a/docset.yml b/docset.yml index ea5815cc94..e5053d35e0 100644 --- a/docset.yml +++ b/docset.yml @@ -272,6 +272,8 @@ subs: kib-pull: "https://github.com/elastic/kibana/pull/" stack-version: "9.0.0" eck_version: "3.0.0" + eck_release_branch: "3.0" + eck_helm_minimum_version: "3.2.0" eck_resources_list: "Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash" eck_resources_list_short: "APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash" apm_server_version: "9.0.0" diff --git a/troubleshoot/deployments/cloud-on-k8s/common-problems.md b/troubleshoot/deployments/cloud-on-k8s/common-problems.md index 1080287f41..5987e644fa 100644 --- a/troubleshoot/deployments/cloud-on-k8s/common-problems.md +++ b/troubleshoot/deployments/cloud-on-k8s/common-problems.md @@ -19,11 +19,11 @@ kubectl -n elastic-system \ get pods -o=jsonpath='{.items[].status.containerStatuses}' | jq ``` -```json +```json subs=true [ { "containerID": "containerd://...", - "image": "docker.elastic.co/eck/eck-operator:2.16.1", + "image": "docker.elastic.co/eck/eck-operator:{{eck_version}}", "imageID": "docker.elastic.co/eck/eck-operator@sha256:...", "lastState": { "terminated": {
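As a follow-up to a substitution pass like this one, any leftover hard-coded versions can be caught mechanically; a sketch, assuming it runs from the docs repository root:

```sh
# Flag hard-coded versions that should use the docset.yml substitutions
# ({{eck_version}}, {{eck_release_branch}}, {{eck_helm_minimum_version}}).
# Expect false positives; review matches by hand.
grep -rn --include='*.md' -E '2\.16|3\.0\.0|3\.2\.0' deploy-manage troubleshoot
```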