diff --git a/deploy-manage/autoscaling/autoscaling-in-eck.md b/deploy-manage/autoscaling/autoscaling-in-eck.md index 24dac0d07f..b7677992b3 100644 --- a/deploy-manage/autoscaling/autoscaling-in-eck.md +++ b/deploy-manage/autoscaling/autoscaling-in-eck.md @@ -177,7 +177,7 @@ spec: max: 512Gi ``` -You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md). +You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{version.eck | M.M}}/config/recipes/autoscaling/elasticsearch.yaml), which also shows how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md). #### Change the polling interval [k8s-autoscaling-polling-interval] diff --git a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md index 177e247de8..9227606b51 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md @@ -14,9 +14,9 @@ Versions of the {{stack}}, containing {{es}}, {{kib}}, and other products, are a The first table contains the stack versions that shipped with the 4.0 version of {{ece}}. You can also check the [most recent stack packs and Docker images](#ece-recent-download-list), which might have released after the 4.0 version of ECE, as well as the [full list of available stack packs and Docker images](#ece-full-download-list). 
-| Docker images included with {{ece}} {{ece_version}} | +| Docker images included with {{ece}} {{version.ece}} | | --- | -| docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} | +| docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 | | docker.elastic.co/cloud-release/kibana-cloud:8.18.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 | diff --git a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md index c30d2e7cdb..1b21a30593 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md @@ -16,7 +16,7 @@ To perform an offline installation without a private Docker registry, you have t 1. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md). Note that for ECE version 4.0, if you want to use {{stack}} version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images (the version 8.x images are required for system deployments). 
```sh subs=true - docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} + docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 docker pull docker.elastic.co/cloud-release/kibana-cloud:8.18.0 docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 @@ -26,15 +26,15 @@ To perform an offline installation without a private Docker registry, you have t docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0 ``` - For example, for {{ece}} {{ece_version}} and the {{stack}} versions it shipped with, you need: + For example, for {{ece}} {{version.ece}} and the {{stack}} versions it shipped with, you need: - * {{ece}} {{ece_version}} + * {{ece}} {{version.ece}} * {{es}} 9.0.0, {{kib}} 9.0.0, and APM 9.0.0 2. Create .tar files of the images: ```sh subs=true - docker save -o ece.{{ece_version}}.tar docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} + docker save -o ece.{{version.ece}}.tar docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} docker save -o es.8.18.0.tar docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 docker save -o kibana.8.18.0.tar docker.elastic.co/cloud-release/kibana-cloud:8.18.0 docker save -o apm.8.18.0.tar docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 @@ -48,7 +48,7 @@ To perform an offline installation without a private Docker registry, you have t 4. 
On each host, load the images into Docker, replacing `FILE_PATH` with the correct path to the .tar files: ```sh subs=true - docker load < FILE_PATH/ece.{{ece_version}}.tar + docker load < FILE_PATH/ece.{{version.ece}}.tar docker load < FILE_PATH/es.8.18.0.tar docker load < FILE_PATH/kibana.8.18.0.tar docker load < FILE_PATH/apm.8.18.0.tar diff --git a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md index aa327bfda9..bdb0940986 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md @@ -22,7 +22,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau 2. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md) and push them to your private Docker registry. Note that for ECE version 4.0, if you want to use {{stack}} version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images. 
```sh subs=true - docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} + docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 docker pull docker.elastic.co/cloud-release/kibana-cloud:8.18.0 docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 @@ -32,9 +32,9 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0 ``` - For example, for {{ece}} {{ece_version}} and the {{stack}} versions it shipped with, you need: + For example, for {{ece}} {{version.ece}} and the {{stack}} versions it shipped with, you need: - * {{ece}} {{ece_version}} + * {{ece}} {{version.ece}} * {{es}} 9.0.0, {{kib}} 9.0.0, APM 9.0.0 :::{important} @@ -44,7 +44,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau 3. Tag the Docker images with your private registry URL by replacing `REGISTRY` with your actual registry address, for example `my.private.repo:5000`: ```sh subs=true - docker tag docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} + docker tag docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} docker tag docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 REGISTRY/cloud-release/elasticsearch-cloud-ess:8.18.0 docker tag docker.elastic.co/cloud-release/kibana-cloud:8.18.0 REGISTRY/cloud-release/kibana-cloud:8.18.0 docker tag docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 REGISTRY/cloud-release/elastic-agent-cloud:8.18.0 @@ -57,7 +57,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau 4. 
Push the Docker images to your private Docker registry, using the same tags from the previous step. Replace `REGISTRY` with your actual registry URL, for example `my.private.repo:5000`: ```sh subs=true - docker push REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} + docker push REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} docker push REGISTRY/cloud-release/elasticsearch-cloud-ess:8.18.0 docker push REGISTRY/cloud-release/kibana-cloud:8.18.0 docker push REGISTRY/cloud-release/elastic-agent-cloud:8.18.0 diff --git a/deploy-manage/deploy/cloud-on-k8s.md b/deploy-manage/deploy/cloud-on-k8s.md index 5b14687c14..709e359dd8 100644 --- a/deploy-manage/deploy/cloud-on-k8s.md +++ b/deploy-manage/deploy/cloud-on-k8s.md @@ -59,7 +59,7 @@ Afterwards, you can: * Learn how to [update your deployment](./cloud-on-k8s/update-deployments.md) * Check out [our recipes](./cloud-on-k8s/recipes.md) for multiple use cases -* Find further sample resources [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/samples) +* Find further sample resources [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/samples) ## Supported versions [k8s-supported] diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md index c94c54629d..6802e11b12 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md @@ -208,7 +208,7 @@ Starting with ECK 2.0 the operator can make Kubernetes Node labels available as 2. On the {{es}} resources set the `eck.k8s.elastic.co/downward-node-labels` annotations with the list of the Kubernetes node labels that should be copied as Pod annotations. 3. 
Use the [Kubernetes downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) in the `podTemplate` to make those annotations available as environment variables in {{es}} Pods. -Refer to the next section or to the [{{es}} sample resource in the ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/samples/elasticsearch/elasticsearch.yaml) for a complete example. +Refer to the next section or to the [{{es}} sample resource in the ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/samples/elasticsearch/elasticsearch.yaml) for a complete example. ### Using node topology labels, Kubernetes topology spread constraints, and {{es}} shard allocation awareness [k8s-availability-zone-awareness-example] diff --git a/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md b/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md index 122b0ad6b8..92e5161e6e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md +++ b/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md @@ -44,7 +44,7 @@ ECK will automatically set the correct container image for each application. Whe To deploy the ECK operator in an air-gapped environment, you first have to mirror the operator image itself from `docker.elastic.co` to a private container registry, for example `my.registry`. -Once the ECK operator image is copied internally, replace the original image name `docker.elastic.co/eck/eck-operator:{{eck_version}}` with the private name of the image, for example `my.registry/eck/eck-operator:{{eck_version}}`, in the [operator manifests](../../../deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md). When using [Helm charts](../../../deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md), replace the `image.repository` Helm value with, for example, `my.registry/eck/eck-operator`. 
+Once the ECK operator image is copied internally, replace the original image name `docker.elastic.co/eck/eck-operator:{{version.eck}}` with the private name of the image, for example `my.registry/eck/eck-operator:{{version.eck}}`, in the [operator manifests](../../../deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md). When using [Helm charts](../../../deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md), replace the `image.repository` Helm value with, for example, `my.registry/eck/eck-operator`. ## Override the default container registry [k8s-container-registry-override] diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md index 7a9804e5d7..d335b133e8 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md @@ -20,7 +20,7 @@ The examples in this section are purely descriptive and should not be considered ## Metricbeat for Kubernetes monitoring [k8s_metricbeat_for_kubernetes_monitoring] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/metricbeat_hosts.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/metricbeat_hosts.yaml ``` Deploys Metricbeat as a DaemonSet that monitors the usage of the following resources: @@ -32,7 +32,7 @@ Deploys Metricbeat as a DaemonSet that monitors the usage of the following resou ## Filebeat with autodiscover [k8s_filebeat_with_autodiscover] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_autodiscover.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/filebeat_autodiscover.yaml ``` Deploys Filebeat as a 
DaemonSet with the autodiscover feature enabled. It collects logs from Pods in every namespace and loads them to the connected {{es}} cluster. @@ -41,7 +41,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. It collec ## Filebeat with autodiscover for metadata [k8s_filebeat_with_autodiscover_for_metadata] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml ``` Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from Pods that match the following criteria are shipped to the connected {{es}} cluster: @@ -53,7 +53,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from ## Filebeat without autodiscover [k8s_filebeat_without_autodiscover] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_no_autodiscover.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/filebeat_no_autodiscover.yaml ``` Deploys Filebeat as a DaemonSet with the autodiscover feature disabled. Uses the entire logs directory on the host as the input source. This configuration does not require any RBAC resources as no Kubernetes APIs are used. @@ -62,7 +62,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature disabled. 
Uses the ## {{es}} and {{kib}} Stack Monitoring [k8s_elasticsearch_and_kibana_stack_monitoring] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/stack_monitoring.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/stack_monitoring.yaml ``` Deploys Metricbeat configured for {{es}} and {{kib}} [Stack Monitoring](/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md) and Filebeat using autodiscover. Deploys one monitored {{es}} cluster and one monitoring {{es}} cluster. You can access the Stack Monitoring app in the monitoring cluster’s {{kib}}. @@ -76,7 +76,7 @@ In this example, TLS verification is disabled when Metricbeat communicates with ## Heartbeat monitoring {{es}} and {{kib}} health [k8s_heartbeat_monitoring_elasticsearch_and_kibana_health] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/heartbeat_es_kb_health.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/heartbeat_es_kb_health.yaml ``` Deploys Heartbeat as a single Pod deployment that monitors the health of {{es}} and {{kib}} by TCP probing their Service endpoints. Heartbeat expects that {{es}} and {{kib}} are deployed in the `default` namespace. @@ -85,7 +85,7 @@ Deploys Heartbeat as a single Pod deployment that monitors the health of {{es}} ## Auditbeat [k8s_auditbeat] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/auditbeat_hosts.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/auditbeat_hosts.yaml ``` Deploys Auditbeat as a DaemonSet that checks file integrity and audits file operations on the host system. 
@@ -94,7 +94,7 @@ Deploys Auditbeat as a DaemonSet that checks file integrity and audits file oper ## Packetbeat monitoring DNS and HTTP traffic [k8s_packetbeat_monitoring_dns_and_http_traffic] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/packetbeat_dns_http.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/packetbeat_dns_http.yaml ``` Deploys Packetbeat as a DaemonSet that monitors DNS on port `53` and HTTP(S) traffic on ports `80`, `8000`, `8080` and `9200`. @@ -103,7 +103,7 @@ Deploys Packetbeat as a DaemonSet that monitors DNS on port `53` and HTTP(S) tra ## OpenShift monitoring [k8s_openshift_monitoring] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/openshift_monitoring.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/openshift_monitoring.yaml ``` Deploys Metricbeat as a DaemonSet that monitors the host resource usage (CPU, memory, network, filesystem), OpenShift resources (Nodes, Pods, Containers, Volumes), API Server and Filebeat using autodiscover. Deploys an {{es}} cluster and {{kib}} to centralize data collection. 
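Editorial note: every recipe command in this file fetches a manifest from a branch-pinned raw GitHub URL, so the new `{{version.eck | M.M}}` placeholder must render to a major.minor release branch rather than a full patch version. A minimal shell sketch of that rendering, assuming a hypothetical ECK version value of `3.0.1` (the variable names and version are illustrative, not the docs build system's actual mechanism):

```shell
# Illustrative rendering of {{version.eck | M.M}}: keep only the
# major.minor part of a full ECK version so the URL targets a release branch.
eck_version="3.0.1"            # example value only
eck_branch="${eck_version%.*}" # drops the patch segment: 3.0.1 -> 3.0
url="https://raw.githubusercontent.com/elastic/cloud-on-k8s/${eck_branch}/config/recipes/beats/metricbeat_hosts.yaml"
echo "$url"
```

The `${var%.*}` parameter expansion is a convenient stand-in for the `| M.M` filter because Elastic versions always carry exactly one patch segment.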
diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md index 1f4c4ebeda..074761d68e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md @@ -20,7 +20,7 @@ The examples in this section are for illustration purposes only and should not b ## System and {{k8s}} {{integrations}} [k8s_system_and_k8s_integrations] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-kubernetes-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-kubernetes-integration.yaml ``` Deploys {{agent}} as a DaemonSet in {{fleet}} mode with System and {{k8s}} {{integrations}} enabled. System integration collects syslog logs, auth logs and system metrics (for CPU, I/O, filesystem, memory, network, process and others). {{k8s}} {{integrations}} collects API server, Container, Event, Node, Pod, Volume and system metrics. 
@@ -29,7 +29,7 @@ Deploys {{agent}} as a DaemonSet in {{fleet}} mode with System and {{k8s}} {{int ## System and {{k8s}} {{integrations}} running as non-root [k8s_system_and_k8s_integrations_running_as_non_root] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-kubernetes-integration-nonroot.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-kubernetes-integration-nonroot.yaml ``` The provided example is functionally identical to the previous section but runs the {{agent}} processes (both the {{agent}} running as the {{fleet}} server and the {{agent}} connected to {{fleet}}) as a non-root user by utilizing a DaemonSet to ensure directory and file permissions. @@ -43,7 +43,7 @@ The DaemonSet itself must run as root to set up permissions and ECK >= 2.10.0 is ## Custom logs integration with autodiscover [k8s_custom_logs_integration_with_autodiscover] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-custom-logs-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-custom-logs-integration.yaml ``` Deploys {{agent}} as a DaemonSet in {{fleet}} mode with Custom Logs integration enabled. Collects logs from all Pods in the `default` namespace using autodiscover feature. 
@@ -52,7 +52,7 @@ Deploys {{agent}} as a DaemonSet in {{fleet}} mode with Custom Logs integration ## APM integration [k8s_apm_integration] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-apm-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-apm-integration.yaml ``` Deploys single instance {{agent}} Deployment in {{fleet}} mode with APM integration enabled. @@ -61,7 +61,7 @@ Deploys single instance {{agent}} Deployment in {{fleet}} mode with APM integrat ## Synthetic monitoring [k8s_synthetic_monitoring] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/synthetic-monitoring.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/synthetic-monitoring.yaml ``` Deploys an {{fleet}}-enrolled {{agent}} that can be used as for [Synthetic monitoring](/solutions/observability/synthetics/index.md). This {{agent}} uses the `elastic-agent-complete` image. The agent policy still needs to be [registered as private location](/solutions/observability/synthetics/monitor-resources-on-private-networks.md#synthetics-private-location-add) in {{kib}}. diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md index 53c854c1ff..87ff518ba8 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md @@ -20,7 +20,7 @@ The examples in this section are for illustration purposes only. 
They should not ## Single pipeline defined in CRD [k8s-logstash-configuration-single-pipeline-crd] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-eck.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/logstash/logstash-eck.yaml ``` Deploys Logstash with a single pipeline defined in the CRD @@ -29,7 +29,7 @@ Deploys Logstash with a single pipeline defined in the CRD ## Single Pipeline defined in Secret [k8s-logstash-configuration-single-pipeline-secret] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-pipeline-as-secret.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/logstash/logstash-pipeline-as-secret.yaml ``` Deploys Logstash with a single pipeline defined in a secret, referenced by a `pipelineRef` @@ -38,7 +38,7 @@ Deploys Logstash with a single pipeline defined in a secret, referenced by a `pi ## Pipeline configuration in mounted volume [k8s-logstash-configuration-pipeline-volume] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-pipeline-as-volume.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/logstash/logstash-pipeline-as-volume.yaml ``` Deploys Logstash with a single pipeline defined in a secret, mounted as a volume, and referenced by `path.config` @@ -47,7 +47,7 @@ Deploys Logstash with a single pipeline defined in a secret, mounted as a volume ## Writing to a custom {{es}} index [k8s-logstash-configuration-custom-index] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-es-role.yaml 
+kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/logstash/logstash-es-role.yaml ``` Deploys Logstash and {{es}}, and creates an updated version of the `eck_logstash_user_role` to write to a user specified index. @@ -56,7 +56,7 @@ Deploys Logstash and {{es}}, and creates an updated version of the `eck_logstash ## Creating persistent volumes for PQ and DLQ [k8s-logstash-configuration-pq-dlq] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-volumes.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/logstash/logstash-volumes.yaml ``` Deploys Logstash, Beats and {{es}}. Logstash is configured with two pipelines: @@ -68,7 +68,7 @@ Deploys Logstash, Beats and {{es}}. Logstash is configured with two pipelines: ## {{es}} and {{kib}} Stack Monitoring [k8s-logstash-configuration-stack-monitoring] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-monitored.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/logstash/logstash-monitored.yaml ``` Deploys an {{es}} and {{kib}} monitoring cluster, and a Logstash that will send its monitoring information to this cluster. 
You can view the stack monitoring information in the monitoring cluster’s Kibana @@ -77,7 +77,7 @@ Deploys an {{es}} and {{kib}} monitoring cluster, and a Logstash that will send ## Multiple pipelines/multiple {{es}} clusters [k8s-logstash-configuration-multiple-pipelines] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/logstash/logstash-multi.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/logstash/logstash-multi.yaml ``` Deploys {{es}} in prod and qa configurations, running in separate namespaces. Logstash is configured with a multiple pipeline→pipeline configuration, with a source pipeline routing to `prod` and `qa` pipelines. diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md index 4c10e64fe2..c92fc5a29b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md @@ -20,7 +20,7 @@ The examples in this section are for illustration purposes only and should not b ## System integration [k8s_system_integration] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/system-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/system-integration.yaml ``` Deploys Elastic Agent as a DaemonSet in standalone mode with system integration enabled. Collects syslog logs, auth logs and system metrics (for CPU, I/O, filesystem, memory, network, process and others). 
@@ -29,7 +29,7 @@ Deploys Elastic Agent as a DaemonSet in standalone mode with system integration ## Kubernetes integration [k8s_kubernetes_integration] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/kubernetes-integration.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/kubernetes-integration.yaml ``` Deploys Elastic Agent as a DaemonSet in standalone mode with Kubernetes integration enabled. Collects API server, Container, Event, Node, Pod, Volume and system metrics. @@ -38,7 +38,7 @@ Deploys Elastic Agent as a DaemonSet in standalone mode with Kubernetes integrat ## Multiple {{es}} clusters output [k8s_multiple_elasticsearch_clusters_output] ```sh subs=true -kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/multi-output.yaml +kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/multi-output.yaml ``` Deploys two {{es}} clusters and two {{kib}} instances together with single Elastic Agent DaemonSet in standalone mode with System integration enabled. System metrics are sent to the `elasticsearch` cluster. Elastic Agent monitoring data is sent to `elasticsearch-mon` cluster. 
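Editorial note: a rename of this breadth across many files is easy to leave half-done, so a post-rename grep for the legacy variable names helps confirm nothing still references them. A hedged sketch using throwaway files in a temporary directory (file names and contents are illustrative only):

```shell
# Illustrative post-rename check: flag any file still using the old variables.
dir=$(mktemp -d)
printf 'image: eck-operator:{{version.eck}}\n' > "$dir/ok.md"     # new-style placeholder
printf 'image: eck-operator:{{eck_version}}\n' > "$dir/stale.md"  # old-style placeholder
stale=$(grep -rl -e '{{ece_version}}' -e '{{eck_version}}' -e '{{eck_release_branch}}' "$dir")
echo "$stale"
```

An empty result from the `grep` means the rename is complete; any listed file still needs updating.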
diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md index 8d21da2b83..455b74c70c 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md @@ -139,7 +139,7 @@ If you use [Operator Lifecycle Manager (OLM)](https://github.com/operator-framew name: elastic-cloud-eck source: elastic-operators sourceNamespace: openshift-marketplace - startingCSV: elastic-cloud-eck.v{{eck_version}} + startingCSV: elastic-cloud-eck.v{{version.eck}} config: volumes: - name: config diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md index 77ef0cd5e1..38e2763768 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md @@ -46,7 +46,7 @@ The cluster that you deployed in this quickstart guide only allocates a persiste :::: -For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/crds). You can also retrieve information about a CRD from the cluster. For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): +For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](cloud-on-k8s://reference/api-docs.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/crds). You can also retrieve information about a CRD from the cluster. 
For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd elasticsearch diff --git a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md index 402cf27e56..a2b7b25d38 100644 --- a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md @@ -33,10 +33,10 @@ You can disable the generation of the self-signed certificate and hence disable ### Ingress and {{kib}} configuration [k8s-maps-ingress] -To use Elastic Maps Server from your {{kib}} instances, you need to configure {{kib}} to fetch maps from your Elastic Maps Server instance by using the [`map.emsUrl`](/explore-analyze/visualize/maps/maps-connect-to-ems.md#elastic-maps-server-kibana) configuration key. The value of this setting needs to be the URL where the Elastic Maps Server instance is reachable from your browser. The certificates presented by Elastic Maps Server need to be trusted by the browser, and the URL must have the same origin as the URL where your {{kib}} is hosted to avoid cross origin resource issues. Check the [recipe section](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/) for an example on how to set this up using an Ingress resource. +To use Elastic Maps Server from your {{kib}} instances, you need to configure {{kib}} to fetch maps from your Elastic Maps Server instance by using the [`map.emsUrl`](/explore-analyze/visualize/maps/maps-connect-to-ems.md#elastic-maps-server-kibana) configuration key. The value of this setting needs to be the URL where the Elastic Maps Server instance is reachable from your browser. The certificates presented by Elastic Maps Server need to be trusted by the browser, and the URL must have the same origin as the URL where your {{kib}} is hosted to avoid cross origin resource issues. 
Check the [recipe section](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/recipes/) for an example on how to set this up using an Ingress resource. :::{admonition} Support scope for Ingress Controllers -[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. +[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. If ingress configuration is challenging or unsupported in your environment, consider using standard `LoadBalancer` services as a simpler alternative. ::: diff --git a/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md index 948a3f03a6..fd7ec6cb8d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md @@ -69,7 +69,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na --set=managedNamespaces='{namespace-a, namespace-b}' ``` -You can find the profile files in the Helm cache directory or in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-operator). +You can find the profile files in the Helm cache directory or in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/deploy/eck-operator). 
:::: The previous example disabled the validation webhook along with all other cluster-wide resources. If you need to enable the validation webhook in a restricted environment, see [](./webhook-namespace-selectors.md). To understand what the validation webhook does, refer to [](./configure-validating-webhook.md). @@ -91,7 +91,7 @@ Migrating an existing installation to Helm is essentially an upgrade operation a You can migrate an existing operator installation to Helm by adding the `meta.helm.sh/release-name`, `meta.helm.sh/release-namespace` annotations and the `app.kubernetes.io/managed-by` label to all the resources you want to be adopted by Helm. You *must* do this for the Elastic Custom Resource Definitions (CRD) because deleting them would trigger the deletion of all deployed Elastic applications as well. All other resources are optional and can be deleted. ::::{note} -A shell script is available in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/deploy/helm-migrate.sh) to demonstrate how to migrate from version 1.7.1 to Helm. You can modify it to suit your own environment. +A shell script is available in the [ECK source repository](https://github.com/elastic/cloud-on-k8s/blob/{{version.eck | M.M}}/deploy/helm-migrate.sh) to demonstrate how to migrate from version 1.7.1 to Helm. You can modify it to suit your own environment. 
:::: For example, an ECK 1.2.1 installation deployed using [YAML manifests](/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md) can be migrated to Helm as follows: diff --git a/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md index 321b218882..abc184d362 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md @@ -42,7 +42,7 @@ To deploy the ECK operator: 1. Install Elastic's [custom resource definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) with [`create`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/): ```sh subs=true - kubectl create -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml + kubectl create -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml ``` You'll see output similar to the following as resources are created: @@ -61,7 +61,7 @@ To deploy the ECK operator: 2. 
Using [`kubectl apply`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/), install the operator with its RBAC rules: ```sh subs=true - kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml + kubectl apply -f https://download.elastic.co/downloads/eck/{{version.eck}}/operator.yaml ``` ::::{note} diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md index e476484fe2..7a0d3504fa 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md @@ -15,7 +15,7 @@ Deploying Elastic Agent on Openshift may require additional permissions dependin The following example assumes that Elastic Agent is deployed in the Namespace `elastic` with the ServiceAccount `elastic-agent`. You can replace these values according to your environment. ::::{note} -If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/elastic-agent), the ServiceAccount may already exist. +If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/recipes/elastic-agent), the ServiceAccount may already exist. :::: diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md index 5f513e639f..daf13f5f10 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md @@ -14,8 +14,8 @@ Deploying Beats on Openshift may require some privileged permissions. This secti The following example assumes that Beats is deployed in the Namespace `elastic` with the ServiceAccount `heartbeat`. You can replace these values according to your environment. 
-::::{note} -If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/beats), the ServiceAccount may already exist. +::::{note} +If you used the examples from the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/recipes/beats), the ServiceAccount may already exist. :::: @@ -105,5 +105,5 @@ spec: path: /var/lib/docker/containers ``` -Check the complete examples in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/beats). +Check the complete examples in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/recipes/beats). diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md index 164e2163c9..aac133b243 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md @@ -15,8 +15,8 @@ This page shows the installation steps to deploy ECK in Openshift: 1. Apply the manifests the same way as described in [](./install-using-yaml-manifest-quickstart.md) document: ```shell subs=true - oc create -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml - oc apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml + oc create -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml + oc apply -f https://download.elastic.co/downloads/eck/{{version.eck}}/operator.yaml ``` 2. 
[Optional] If the Software Defined Network is configured with the `ovs-multitenant` plug-in, you must allow the `elastic-system` namespace to access other Pods and Services in the cluster: diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md index 86c43ea1ec..39ae9c7a04 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md @@ -38,8 +38,8 @@ The operator itself must be connected to the service mesh to deploy and manage { 2. Install ECK: ```sh subs=true - kubectl create -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml - kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml + kubectl create -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml + kubectl apply -f https://download.elastic.co/downloads/eck/{{version.eck}}/operator.yaml ``` 3. Check the configuration and make sure the installation has been successful: diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md index 56b0f1f717..4d2f38f3ae 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md @@ -12,7 +12,7 @@ products: The following sections describe how to connect the operator and managed resources to the Linkerd service mesh. It is assumed that Linkerd is already installed and configured on your Kubernetes cluster. If you are new to Linkerd, refer to the [product documentation](https://linkerd.io) for more information and installation instructions. -::::{note} +::::{note} These instructions have been tested with Linkerd 2.7.0. :::: @@ -22,8 +22,8 @@ These instructions have been tested with Linkerd 2.7.0. 
In order to connect the operator to the service mesh, the Linkerd sidecar must be injected into the ECK deployment. This can be done during installation as follows: ```sh subs=true -kubectl create -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml -linkerd inject https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml | kubectl apply -f - +kubectl create -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml +linkerd inject https://download.elastic.co/downloads/eck/{{version.eck}}/operator.yaml | kubectl apply -f - ``` Confirm that the operator is now meshed:
For example, describe the {{kib}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd kibana diff --git a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md index ef4f13086b..72c0960604 100644 --- a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md @@ -41,15 +41,15 @@ helm install es-kb-quickstart elastic/eck-stack -n elastic-stack --create-namesp ### Customize {{es}} and {{kib}} installation with example values [k8s-eck-stack-helm-customize] -You can find example Helm values files for deploying and managing more advanced {{es}} and {{kib}} setups [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-stack/examples). +You can find example Helm values files for deploying and managing more advanced {{es}} and {{kib}} setups [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/deploy/eck-stack/examples). To use one or more of these example configurations, use the `--values` Helm option, as seen in the following section. ```sh subs=true # Install an eck-managed Elasticsearch and Kibana using the Elasticsearch node roles example with hot, warm, and cold data tiers, and the Kibana example customizing the http service. 
helm install es-quickstart elastic/eck-stack -n elastic-stack --create-namespace \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/elasticsearch/hot-warm-cold.yaml \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/kibana/http-configuration.yaml + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/deploy/eck-stack/examples/elasticsearch/hot-warm-cold.yaml \ + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/deploy/eck-stack/examples/kibana/http-configuration.yaml ``` ## Fleet Server with Elastic Agents along with {{es}} and {{kib}} [k8s-install-fleet-agent-elasticsearch-kibana-helm] @@ -59,7 +59,7 @@ The following section builds upon the previous section, and allows installing Fl ```sh subs=true # Install an eck-managed Elasticsearch, Kibana, Fleet Server, and managed Elastic Agents using custom values. helm install eck-stack-with-fleet elastic/eck-stack \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/agent/fleet-agents.yaml -n elastic-stack + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/deploy/eck-stack/examples/agent/fleet-agents.yaml -n elastic-stack ``` ## Logstash along with {{es}}, {{kib}} and Beats [k8s-install-logstash-elasticsearch-kibana-helm] @@ -69,7 +69,7 @@ The following section builds upon the previous sections, and allows installing L ```sh subs=true # Install an eck-managed Elasticsearch, Kibana, Beats and Logstash using custom values. 
helm install eck-stack-with-logstash elastic/eck-stack \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/logstash/basic-eck.yaml -n elastic-stack + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/deploy/eck-stack/examples/logstash/basic-eck.yaml -n elastic-stack ``` ## Standalone Elastic APM Server along with {{es}} and {{kib}} [k8s-install-apm-server-elasticsearch-kibana-helm] @@ -79,7 +79,7 @@ The following section builds upon the previous sections, and allows installing a ```sh subs=true # Install an eck-managed Elasticsearch, Kibana, and standalone APM Server using custom values. helm install eck-stack-with-apm-server elastic/eck-stack \ - --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{eck_release_branch}}/deploy/eck-stack/examples/apm-server/basic.yaml -n elastic-stack + --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/deploy/eck-stack/examples/apm-server/basic.yaml -n elastic-stack ``` ## Enterprise Search server along with {{es}} and {{kib}} [k8s-install-enterprise-search-elasticsearch-kibana-helm] diff --git a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md index 065a272dd9..9e6f03b589 100644 --- a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md +++ b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md @@ -12,10 +12,10 @@ products: The default Kubernetes service created by ECK, named `-es-http`, is configured to include all the {{es}} nodes in that cluster. This configuration is good to get started and is adequate for most use cases. 
However, if you are operating an {{es}} cluster with [different node types](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md) and want control over which nodes handle which types of traffic, you should create additional Kubernetes services yourself. -As an alternative, you can use features provided by third-party software such as service meshes and ingress controllers to achieve more advanced traffic management configurations. Check the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes) in the ECK source repository for a few examples. +As an alternative, you can use features provided by third-party software such as service meshes and ingress controllers to achieve more advanced traffic management configurations. Check the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/recipes) in the ECK source repository for a few examples. :::{admonition} Support scope for Ingress Controllers -[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. +[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. If ingress configuration is challenging or unsupported in your environment, consider using standard `LoadBalancer` services as a simpler alternative. 
::: diff --git a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md index f4a932ca9e..dfeb8e0116 100644 --- a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md +++ b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md @@ -81,7 +81,7 @@ To enable the restriction of cross-namespace associations, start the operator wi ``` -In this example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`. You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/config/recipes/associations-rbac/apm_es_kibana_rbac.yaml). +In this example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`. You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{version.eck | M.M}}/config/recipes/associations-rbac/apm_es_kibana_rbac.yaml). ::::{note} If the `serviceAccountName` is not set, ECK uses the default service account assigned to the pod by the [Service Account Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller). diff --git a/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md b/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md index 82b25ae041..e6ddc7b75a 100644 --- a/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md +++ b/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md @@ -58,6 +58,6 @@ If you are using a custom TLS certificate and you need to set `insecureSkipVerif 2. Ensure that the CA secret is mounted within the Prometheus Pod. - Steps will vary between Prometheus installations. 
If you're using the Prometheus operator, you can set the `spec.secrets` field of the `Prometheus` custom resource to the name of the previously created Kubernetes Secret. See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-operator/values.yaml) for more information. + Steps will vary between Prometheus installations. If you're using the Prometheus operator, you can set the `spec.secrets` field of the `Prometheus` custom resource to the name of the previously created Kubernetes Secret. See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/deploy/eck-operator/values.yaml) for more information. diff --git a/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md b/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md index 2b60965cdc..e4aa6cae8c 100644 --- a/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md +++ b/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md @@ -41,7 +41,7 @@ Providing this secret is sufficient to use your own certificate if it is from a * Set `serviceMonitor.insecureSkipVerify` to `false` to enable TLS validation. * Set `serviceMonitor.caSecret` to the name of an existing Kubernetes secret within the Prometheus namespace that contains the CA in PEM format in a file called `ca.crt`. - * Set the `spec.secrets` field of the `Prometheus` custom resource, or `prometheus.prometheusSpec.secrets` when using the Helm chart such that the CA secret is mounted into the Prometheus pod at `serviceMonitor.caMountDirectory` (assuming you are using the Prometheus operator). See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/deploy/eck-operator/values.yaml) for more information. 
+ * Set the `spec.secrets` field of the `Prometheus` custom resource, or `prometheus.prometheusSpec.secrets` when using the Helm chart such that the CA secret is mounted into the Prometheus pod at `serviceMonitor.caMountDirectory` (assuming you are using the Prometheus operator). See the [ECK Helm chart values file](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/deploy/eck-operator/values.yaml) for more information. Refer to [](k8s-prometheus-requirements.md) for more information on creating the CA secret. @@ -184,7 +184,7 @@ EOF By default a self-signed certificate will be generated for use by the metrics endpoint. If you want to use your own TLS certificate for the metrics endpoint you will need to follow the previous instructions to enable secure metrics as well as the following steps: 1. Create a `Secret` containing the TLS certificate and TLS private key. The following keys are supported within the secret: - + * `tls.crt` - The PEM-encoded TLS certificate * `tls.key` - The PEM-encoded TLS private key diff --git a/deploy-manage/remote-clusters/eck-remote-clusters.md b/deploy-manage/remote-clusters/eck-remote-clusters.md index 4340e004e0..35b781752d 100644 --- a/deploy-manage/remote-clusters/eck-remote-clusters.md +++ b/deploy-manage/remote-clusters/eck-remote-clusters.md @@ -92,7 +92,7 @@ spec: 1. This requires the sample data: [/explore-analyze/index.md#gs-get-data-into-kibana](/explore-analyze/index.md#gs-get-data-into-kibana) -You can find a complete example in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/recipes/remoteclusters). +You can find a complete example in the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/recipes/remoteclusters). 
### Using the certificate security model [k8s_using_the_certificate_security_model] diff --git a/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md b/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md index 1cb50a0d8b..345affacf9 100644 --- a/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md +++ b/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md @@ -26,8 +26,8 @@ To uninstall the operator: 2. Uninstall the operator: ```shell subs=true - kubectl delete -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml - kubectl delete -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml + kubectl delete -f https://download.elastic.co/downloads/eck/{{version.eck}}/operator.yaml + kubectl delete -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml ``` ::::{warning} diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md index 5e798a12de..d88be7e515 100644 --- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md +++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md @@ -27,19 +27,19 @@ During the upgrade window, there might be a short period of time during which yo ## The upgrade version matrix [ece-upgrade-version-matrix] -The following table shows the recommended upgrade paths from older {{ece}} versions to {{ece_version}}. +The following table shows the recommended upgrade paths from older {{ece}} versions to {{version.ece}}. | Upgrade from | Recommended upgrade path to 4.0 | | --- | --- | -| Any 3.x version | 1. Upgrade to 3.8.0
<br>2. Upgrade to {{ece_version}} |
-| 2.13 | 1. Upgrade to 3.8.0<br>2. Upgrade to {{ece_version}} |
-| 2.5-2.12 | 1. Upgrade to 2.13.4<br>2. Upgrade to 3.8.0<br>3. Upgrade to {{ece_version}} |
-| 2.0-2.4 | 1. Upgrade to 2.5.1<br>2. Upgrade to 2.13.4<br>3. Upgrade to 3.8.0<br>4. Upgrade to {{ece_version}} |
+| Any 3.x version | 1. Upgrade to 3.8.0<br>2. Upgrade to {{version.ece}} |
+| 2.13 | 1. Upgrade to 3.8.0<br>2. Upgrade to {{version.ece}} |
+| 2.5-2.12 | 1. Upgrade to 2.13.4<br>2. Upgrade to 3.8.0<br>3. Upgrade to {{version.ece}} |
+| 2.0-2.4 | 1. Upgrade to 2.5.1<br>2. Upgrade to 2.13.4<br>3. Upgrade to 3.8.0<br>4. Upgrade to {{version.ece}}
| -If you have to upgrade to any of the intermediate versions, follow the upgrade instructions of the relevant release before upgrading to {{ece_version}}: +If you have to upgrade to any of the intermediate versions, follow the upgrade instructions of the relevant release before upgrading to {{version.ece}}: - [ECE 2.5 Upgrade](https://www.elastic.co/guide/en/cloud-enterprise/2.5/ece-upgrade.html) - [ECE 2.13 Upgrade](https://www.elastic.co/guide/en/cloud-enterprise/2.13/ece-upgrade.html) - + :::{note} We don't recommend upgrading to 2.13.0, as it can cause issues and you may lose access to the admin console. We strongly recommend upgrading to 2.13.4. ::: @@ -86,7 +86,7 @@ Before starting the upgrade process, verify that your setup meets the following - **Proxies and load balancing**. To avoid any downtime for {{ece}}, the installation must include more than one proxy and must use a load balancer as recommended. If only a single proxy is configured or if the installation is not using a load balancer, some downtime is expected when the containers on the proxies are upgraded. Each container upgrade typically takes five to ten seconds, times the number of containers on a typical host. - **For *offline* or *air-gapped* installations**. Additional steps are required to upgrade {{ece}}. After downloading the installation script for the new version, pull and load the required container images and push them to a private Docker registry. To learn more about pulling and loading Docker images, check Install [ECE offline](../../../deploy-manage/deploy/cloud-enterprise/air-gapped-install.md). - Check the security cluster’s zone count. Due to internal limitations in ECE, the built-in security cluster cannot be scaled to two zones during the ECE upgrade procedure. If the zone count is set to 2 zones, scale the cluster to 3 or 1 zone(s) before upgrading ECE. -- **[Verify if you can upgrade directly](#ece-upgrade-version-matrix)**. 
When upgrading to ECE 4.0 or a higher version: +- **[Verify if you can upgrade directly](#ece-upgrade-version-matrix)**. When upgrading to ECE 4.0 or a higher version: - You need to first upgrade to ECE 3.8.0 or later. Refer to the [ECE version 3.8.0 upgrade instructions](https://www.elastic.co/guide/en/cloud-enterprise/3.8/ece-upgrade.html) for details. :::{warning} @@ -142,7 +142,7 @@ You can follow along while each container for {{ece}} is upgraded on the hosts t By default, ECE updates to the most current available version. If you want to upgrade to a specific ECE version, use the `--cloud-enterprise-version` option: ```sh subs=true -bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) upgrade --user admin --pass $PASSWORD --cloud-enterprise-version {{ece_version}} +bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) upgrade --user admin --pass $PASSWORD --cloud-enterprise-version {{version.ece}} ``` diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md index dd74c1f983..ab52a7c661 100644 --- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md +++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md @@ -15,13 +15,13 @@ This page provides instructions on how to upgrade the ECK operator. To learn how to upgrade {{stack}} applications like {{es}} or {{kib}}, refer to [Upgrade the {{stack}} version](../deployment-or-cluster.md). -## Before you upgrade to ECK {{eck_version}} [k8s-ga-upgrade] +## Before you upgrade to ECK {{version.eck}} [k8s-ga-upgrade] The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all {{es}} and {{kib}} pods. This [list](#k8s-beta-to-ga-rolling-restart) details the affected target versions that will cause a rolling restart. 
If you have a large {{es}} cluster or multiple {{stack}} deployments, the rolling restart could cause a performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and {{es}} clusters. For more details on controlling rolling restarts during the upgrade, refer to the [control the rolling restarts during the upgrade](#k8s-beta-to-ga-rolling-restart) section. Before upgrading, refer to the [release notes](cloud-on-k8s://release-notes/index.md) to make sure that the release does not contain any breaking changes that could affect you. The [release highlights document](cloud-on-k8s://release-notes/index.md) provides more details and possible workarounds for any breaking changes or known issues in each release. -Note that the release notes and highlights only list the changes since the last release. If during the upgrade you skip any intermediate versions and go for example from 1.0.0 directly to {{eck_version}}, review the release notes and highlights of each of the skipped releases to understand all the breaking changes you might encounter during and after the upgrade. +Note that the release notes and highlights only list the changes since the last release. If during the upgrade you skip any intermediate versions and go for example from 1.0.0 directly to {{version.eck}}, review the release notes and highlights of each of the skipped releases to understand all the breaking changes you might encounter during and after the upgrade. ::::{warning} When upgrading always ensure that the version of the CRDs installed in the cluster matches the version of the operator. If you are using Helm, the CRDs are upgraded automatically as part of the Helm chart. If you are using the YAML manifests, you must upgrade the CRDs manually. 
 Running differing versions of the CRDs and the operator is not a supported configuration and can lead to unexpected behavior.
@@ -37,14 +37,14 @@ When upgrading always ensure that the version of the CRDs installed in the clust

 Release 1.7.0 moved the [CustomResourceDefinitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD) used by ECK to the v1 version. If you upgrade from a previous version of ECK, the new version of the CRDs replaces the existing CRDs. If you cannot remove the current ECK installation because you have production workloads that must not be deleted, the following approach is recommended.

 ```shell subs=true
-kubectl replace -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml
+kubectl replace -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml
 ```

 ::::{note}
 If you skipped a release in which new CRDs where introduced, you will get an error message similar to `Error from server (NotFound): error when replacing "config/crds.yaml": customresourcedefinitions.apiextensions.k8s.io ... not found`. To add the missing CRDs run

 ```shell subs=true
-kubectl create -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml
+kubectl create -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml
 ```

 ::::

@@ -53,7 +53,7 @@ kubectl create -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds

 Then upgrade the remaining objects with the operator manifest:

 ```shell subs=true
-kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml
+kubectl apply -f https://download.elastic.co/downloads/eck/{{version.eck}}/operator.yaml
 ```

 If you are using Helm: force upgrade the CRD chart to move to the v1 CRDs.
@@ -78,13 +78,13 @@ Operator Lifecycle Manager (OLM) and OpenShift OperatorHub users that run with a

 ### Upgrading from ECK 2.0 or later [k8s_upgrading_from_eck_2_0_or_later]

-There are no special instructions to follow if you upgrade from any 2.x version to {{eck_version}}. Use the upgrade method applicable to your installation method of choice.
+There are no special instructions to follow if you upgrade from any 2.x version to {{version.eck}}. Use the upgrade method applicable to your installation method of choice.

 If you are using our YAML manifests:

 ```shell subs=true
-kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/crds.yaml
-kubectl apply -f https://download.elastic.co/downloads/eck/{{eck_version}}/operator.yaml
+kubectl apply -f https://download.elastic.co/downloads/eck/{{version.eck}}/crds.yaml
+kubectl apply -f https://download.elastic.co/downloads/eck/{{version.eck}}/operator.yaml
 ```

 If you are using Helm:

@@ -140,5 +140,5 @@ kubectl annotate elasticsearch quickstart $RM_ANNOTATION
 ```

 ::::{note}
-The ECK source repository contains a [shell script](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/hack/annotator) to assist with mass addition/deletion of annotations.
+The ECK source repository contains a [shell script](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/hack/annotator) to assist with mass addition/deletion of annotations.
 ::::
diff --git a/docset.yml b/docset.yml
index d4f0e318dc..6056ef226a 100644
--- a/docset.yml
+++ b/docset.yml
@@ -279,9 +279,6 @@ subs:
   fleet-server-issue: "https://github.com/elastic/fleet-server/issues/"
   fleet-server-pull: "https://github.com/elastic/fleet-server/pull/"
   kib-pull: "https://github.com/elastic/kibana/pull/"
-  ece_version: "4.0.1"
-  eck_version: "3.0.0"
-  eck_release_branch: "3.0"
   eck_helm_minimum_version: "3.2.0"
   eck_resources_list: "Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash"
   eck_resources_list_short: "APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash"
diff --git a/troubleshoot/deployments/cloud-on-k8s/common-problems.md b/troubleshoot/deployments/cloud-on-k8s/common-problems.md
index ad6a0ade80..a025912bec 100644
--- a/troubleshoot/deployments/cloud-on-k8s/common-problems.md
+++ b/troubleshoot/deployments/cloud-on-k8s/common-problems.md
@@ -25,7 +25,7 @@ kubectl -n elastic-system \
 [
   {
     "containerID": "containerd://...",
-    "image": "docker.elastic.co/eck/eck-operator:{{eck_version}}",
+    "image": "docker.elastic.co/eck/eck-operator:{{version.eck}}",
     "imageID": "docker.elastic.co/eck/eck-operator@sha256:...",
     "lastState": {
       "terminated": {