Merged
13 changes: 13 additions & 0 deletions deploy-manage/deploy/cloud-on-k8s.md
@@ -69,10 +69,23 @@ This section outlines the supported Kubernetes and {{stack}} versions for ECK. C

ECK is compatible with the following Kubernetes distributions and related technologies:

::::{tab-set}

:::{tab-item} ECK 3.1
* Kubernetes 1.29-1.33
* OpenShift 4.15-4.19
* Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS)
* Helm: {{eck_helm_minimum_version}}+
:::

:::{tab-item} ECK 3.0
* Kubernetes 1.28-1.32
* OpenShift 4.14-4.18
* Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS)
* Helm: {{eck_helm_minimum_version}}+
:::

::::

ECK should work with all conformant **installers** listed in these [FAQs](https://github.com/cncf/k8s-conformance/blob/master/faq.md#what-is-a-distribution-hosted-platform-and-an-installer). Distributions include source patches and so may not work as-is with ECK.

17 changes: 16 additions & 1 deletion deploy-manage/deploy/cloud-on-k8s/configuration-fleet.md
@@ -146,8 +146,23 @@ By default, every reference targets all instances in your {{es}}, {{kib}} and {{

## Customize {{agent}} configuration [k8s-elastic-agent-fleet-configuration-custom-configuration]

- In contrast to {{agents}} in standalone mode, the configuration is managed through {{fleet}}, and it cannot be defined through `config` or `configRef` elements.
+ In contrast to {{agents}} in standalone mode, the configuration is managed through {{fleet}}, and it cannot be defined through `config` or `configRef` elements, with a few exceptions.

One of those exceptions is the configuration of providers as described in [advanced Agent configuration managed by Fleet](/reference/fleet/advanced-kubernetes-managed-by-fleet.md). When {{agent}} is managed by {{fleet}} and is orchestrated by ECK, the configuration of providers can simply be done through the `.spec.config` element in the Agent resource as of {applies_to}`stack: ga 8.13`:
**eedugon** (Contributor, Jul 15, 2025):

@barkbay : what do we mean with the `applies_to` stack 8.13 here? That the providers configuration can be done only for Agents running 8.13 or later?

Suggested change:

```diff
- One of those exceptions is the configuration of providers as described in [advanced Agent configuration managed by Fleet](/reference/fleet/advanced-kubernetes-managed-by-fleet.md). When {{agent}} is managed by {{fleet}} and is orchestrated by ECK, the configuration of providers can simply be done through the `.spec.config` element in the Agent resource as of {applies_to}`stack: ga 8.13`:
+ One of those exceptions is the configuration of providers as described in [advanced Agent configuration managed by Fleet](/reference/fleet/advanced-kubernetes-managed-by-fleet.md). Starting in stack version 8.13, if {{agent}} is managed by {{fleet}} and orchestrated by ECK, you can configure providers using the `.spec.config` element in the Agent resource:
```

Possible version switching the 8.13 statement to the narrative side.

**Contributor:**

This content was already reviewed here without notes: #1446

**eedugon** (Contributor, Jul 15, 2025):

Thanks @pebrc for the extra details!

My opinion is still the same, but of course it's not a big deal. Also, when we reviewed the linked PR, I think the `applies_to` was added in a later commit, as otherwise I'd probably have highlighted it.

Anyway, the current text and usage of the badge is all right too, so whatever you want.
cc: @shainaraskas

**Contributor:**

We can totally change it to whatever makes most sense from a docs perspective.

**Contributor:**

Perfect, let's allow @shainaraskas to share her thoughts for a final decision :)

Shaina, do you like the usage of the inline badge there? I don't find it very intuitive and I've suggested changing it to a narrative sentence, but maybe both approaches are fine.

**Collaborator:**

I agree that this is not ideal, partially because our labels don't look right in sentences.

One reason this is hard to reframe is that this is positioned as "one of these exceptions": is this the only exception? Are exceptions only valid as of 8.13?

This could get an Exceptions subheading that has an applies label at the heading level, ideally, if it makes sense.

If that doesn't make sense, I'd go with prose inline or a note inline. We'll have to refactor it later when we have more components at our disposal, but it will read better in the short term.

**Collaborator:**

I approved this PR already, but some ECK 3.1 tagging should be added here before this is shipped.


```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent
spec:
  config:
    fleet:
      enabled: true
    providers.kubernetes:
      add_resource_metadata:
        deployment: true
```

## Upgrade the {{agent}} specification [k8s-elastic-agent-fleet-configuration-upgrade-specification]

@@ -0,0 +1,86 @@
---
applies_to:
  deployment:
    eck: preview 3.1
products:
  - id: cloud-kubernetes
---

# Propagate labels and annotations [k8s-propagate-labels-annotations]

Starting with version `3.1.0`, {{eck}} supports propagating labels and annotations from the parent resource to the child resources it creates. This can be used on all custom resources managed by ECK, such as {{eck_resources_list}}.

The example below demonstrates how to use this feature on an {{es}} cluster; as mentioned above, it can also be applied to any custom resource managed by {{eck}}.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  annotations:
    # Some custom annotations to be propagated to resources created by the operator.
    my-annotation1: "my-annotation1-value"
    my-annotation2: "my-annotation2-value"
    # Instructions for the operator to propagate these annotations and labels to resources it creates.
    eck.k8s.alpha.elastic.co/propagate-annotations: "my-annotation1, my-annotation2"
    eck.k8s.alpha.elastic.co/propagate-labels: "my-label1, my-label2"
  labels:
    # Some custom labels to be propagated to resources created by the operator.
    my-label1: "my-label1-value"
    my-label2: "my-label2-value"
  name: elasticsearch-sample
spec:
  version: 9.1.0
  nodeSets:
  - name: default
    config:
      # This allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost.
      node.store.allow_mmap: false
    count: 1
```

The custom labels and annotations specified in the `metadata` section of the parent resource will be propagated to all child resources created by {{eck}}, such as StatefulSets, Pods, Services, and Secrets. This ensures that all resources have consistent metadata, which can be useful for filtering, monitoring, and managing resources in Kubernetes:

```sh
kubectl get sts,pods,svc -l my-label1=my-label1-value,my-label2=my-label2-value
```

```sh
NAME READY AGE
statefulset.apps/elasticsearch-sample-es-default 1/1 4m10s

NAME READY STATUS RESTARTS AGE
pod/elasticsearch-sample-es-default-0 1/1 Running 0 4m9s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-sample-es-default ClusterIP None <none> 9200/TCP 4m12s
service/elasticsearch-sample-es-http ClusterIP XX.XX.XX.XX <none> 9200/TCP 4m14s
service/elasticsearch-sample-es-internal-http ClusterIP XX.XX.XX.XX <none> 9200/TCP 4m14s
service/elasticsearch-sample-es-transport ClusterIP None <none> 9300/TCP 4m14s
```

It is possible to use `*` as a wildcard to propagate all labels and annotations from the parent resource to the child resources. For example:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  annotations:
    # Instructions for the operator to propagate all the annotations and labels to resources it creates.
    eck.k8s.alpha.elastic.co/propagate-annotations: "*"
    eck.k8s.alpha.elastic.co/propagate-labels: "*"
  name: elasticsearch-sample
spec:
  version: 9.1.0
  nodeSets:
  - name: default
    config:
      # This allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost.
      node.store.allow_mmap: false
    count: 1
```

::::{note}
Consider the following when using this feature:
* Propagated labels and annotations are not automatically deleted. If you want to remove them from the child resources, you need to do so manually or use a cleanup script.
* To prevent conflicts, some labels and annotations reserved for internal use by ECK or Kubernetes are not propagated. This is the case for labels and annotations that match `*.k8s.*.elastic.co/` and also `kubectl.kubernetes.io/last-applied-configuration`.
::::
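The key-selection behavior described above can be sketched as follows. This is a simplified illustration, not the actual ECK operator code; the `propagated` helper and the exact glob pattern used for reserved keys are assumptions:

```python
import fnmatch

# Keys reserved for internal use are never propagated (per the note above).
RESERVED_PATTERN = "*.k8s.*.elastic.co/*"
RESERVED_KEYS = {"kubectl.kubernetes.io/last-applied-configuration"}

def propagated(parent_meta: dict, spec: str) -> dict:
    """Return the subset of parent labels/annotations to copy to child resources.

    `spec` is the value of an eck.k8s.alpha.elastic.co/propagate-* annotation:
    either "*" (wildcard) or a comma-separated list of keys.
    """
    if spec.strip() == "*":
        wanted = set(parent_meta)
    else:
        wanted = {key.strip() for key in spec.split(",")}
    return {
        key: value
        for key, value in parent_meta.items()
        if key in wanted
        and key not in RESERVED_KEYS
        and not fnmatch.fnmatch(key, RESERVED_PATTERN)
    }

labels = {"my-label1": "my-label1-value", "my-label2": "my-label2-value"}
print(propagated(labels, "my-label1, my-label2"))  # both labels propagate
print(propagated(labels, "*"))                     # wildcard propagates both
```

Note that an explicit list only selects keys that actually exist on the parent resource, and reserved keys are filtered out even when the wildcard is used.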
1 change: 1 addition & 0 deletions deploy-manage/toc.yml
@@ -224,6 +224,7 @@ toc:
- file: deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md
- file: deploy/cloud-on-k8s/k8s-kibana-plugins.md
- file: deploy/cloud-on-k8s/customize-pods.md
- file: deploy/cloud-on-k8s/propagate-labels-annotations.md
- file: deploy/cloud-on-k8s/manage-compute-resources.md
- file: deploy/cloud-on-k8s/recipes.md
- file: deploy/cloud-on-k8s/connect-to-external-elastic-resources.md
4 changes: 2 additions & 2 deletions docset.yml
@@ -1,3 +1,3 @@
project: 'Elastic documentation'
max_toc_depth: 2

@@ -280,8 +280,8 @@
kib-pull: "https://github.com/elastic/kibana/pull/"
stack-version: "9.0.3"
ece_version: "4.0.1"
- eck_version: "3.0.0"
- eck_release_branch: "3.0"
+ eck_version: "3.1.0"
+ eck_release_branch: "3.1"
eck_helm_minimum_version: "3.2.0"
eck_resources_list: "Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash"
eck_resources_list_short: "APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash"
1 change: 1 addition & 0 deletions reference/fleet/advanced-kubernetes-managed-by-fleet.md
@@ -106,4 +106,5 @@ volumes:

1. By default, the manifests for {{agent}} managed by {{fleet}} have `hostNetwork:true`. To support multiple installations of {{agent}} on the same node, set `hostNetwork:false`. See the relevant [example](https://github.com/elastic/elastic-agent/tree/main/docs/manifests/hostnetwork) as described in [{{agent}} Manifests in order to support Kube-State-Metrics Sharding](https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-ksm-sharding.md).
2. The volume `/usr/share/elastic-agent/state` must remain mounted in [elastic-agent-managed-kubernetes.yaml](https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml), otherwise the custom config map provided above will be overwritten.
3. If {{agent}} is deployed through ECK, you can define the provider configuration in the `spec.config` field of the Kubernetes custom resource. Refer to [{{fleet}}-managed {{agent}} on ECK](/deploy-manage/deploy/cloud-on-k8s/configuration-fleet.md) for details.
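For illustration, the provider configuration from point 3 might be embedded in the Agent custom resource like this. This is a minimal sketch; the resource name is illustrative, and the `add_resource_metadata.deployment` option mirrors the example in the ECK documentation:

```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent
spec:
  config:
    # When ECK manages the Agent, provider settings that would otherwise
    # live in a custom config map go under spec.config.
    providers.kubernetes:
      add_resource_metadata:
        deployment: true
```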
