diff --git a/content/en/docs/concepts/cluster-administration/node-shutdown.md b/content/en/docs/concepts/cluster-administration/node-shutdown.md index a0960ffc23603..b8134caf5bd7d 100644 --- a/content/en/docs/concepts/cluster-administration/node-shutdown.md +++ b/content/en/docs/concepts/cluster-administration/node-shutdown.md @@ -236,8 +236,6 @@ are emitted under the kubelet subsystem to monitor node shutdowns. ## Non-graceful node shutdown handling {#non-graceful-node-shutdown} -{{< feature-state feature_gate_name="NodeOutOfServiceVolumeDetach" >}} - A node shutdown action may not be detected by kubelet's Node Shutdown Manager, either because the command does not trigger the inhibitor locks mechanism used by kubelet or because of a user error, i.e., the ShutdownGracePeriod and diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index d1990ab587803..77f9bdf95a8c6 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -337,9 +337,6 @@ As an alternative, a cluster administrator can enforce size limits for ## Local ephemeral storage - -{{< feature-state for_k8s_version="v1.25" state="stable" >}} - Nodes have local ephemeral storage, backed by locally-attached writeable devices or, sometimes, by RAM. "Ephemeral" means that there is no long-term guarantee about durability. diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index b43bde9a2fc40..43a0ae005b53f 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -10,8 +10,6 @@ hide_summary: true # Listed separately in section index -{{< feature-state for_k8s_version="v1.20" state="stable" >}} - This page describes the RuntimeClass resource and runtime selection mechanism. 
RuntimeClass is a feature for selecting the container runtime configuration. The container runtime @@ -135,8 +133,6 @@ See CRI-O's [config documentation](https://github.com/cri-o/cri-o/blob/master/do ## Scheduling -{{< feature-state for_k8s_version="v1.16" state="beta" >}} - By specifying the `scheduling` field for a RuntimeClass, you can set constraints to ensure that Pods running with this RuntimeClass are scheduled to nodes that support it. If `scheduling` is not set, this RuntimeClass is assumed to be supported by all nodes. @@ -157,8 +153,6 @@ To learn more about configuring the node selector and tolerations, see ### Pod Overhead -{{< feature-state for_k8s_version="v1.24" state="stable" >}} - You can specify _overhead_ resources that are associated with running a Pod. Declaring overhead allows the cluster (including the scheduler) to account for it when making decisions about Pods and resources. diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index d3c6ef60117fa..a9049d6d33fe1 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -156,13 +156,6 @@ The general workflow of a device plugin includes the following steps: * mounts * fully-qualified CDI device names - {{< note >}} - The processing of the fully-qualified CDI device names by the Device Manager requires - that the `DevicePluginCDIDevices` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) - is enabled for both the kubelet and the kube-apiserver. This was added as an alpha feature in Kubernetes - v1.28, graduated to beta in v1.29 and to GA in v1.31. 
- {{< /note >}} - ### Handling kubelet restarts A device plugin is expected to detect kubelet restarts and re-register itself with the new diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md index db45f8640a90a..3d144e6190696 100644 --- a/content/en/docs/concepts/overview/kubernetes-api.md +++ b/content/en/docs/concepts/overview/kubernetes-api.md @@ -71,8 +71,6 @@ separate endpoint for each group version. ### Aggregated discovery -{{< feature-state feature_gate_name="AggregatedDiscoveryEndpoint" >}} - Kubernetes offers stable support for _aggregated discovery_, publishing all resources supported by a cluster through two endpoints (`/api` and `/apis`). Requesting this @@ -201,8 +199,6 @@ checks). ### OpenAPI V3 -{{< feature-state feature_gate_name="OpenAPIV3" >}} - Kubernetes supports publishing a description of its APIs as OpenAPI v3. A discovery endpoint `/openapi/v3` is provided to see a list of all diff --git a/content/en/docs/concepts/policy/node-resource-managers.md b/content/en/docs/concepts/policy/node-resource-managers.md index ced81883bc16f..e386a0cd49b94 100644 --- a/content/en/docs/concepts/policy/node-resource-managers.md +++ b/content/en/docs/concepts/policy/node-resource-managers.md @@ -24,8 +24,6 @@ the policy you specify. To learn more, read ## Policies for assigning CPUs to Pods -{{< feature-state feature_gate_name="CPUManager" >}} - Once a Pod is bound to a Node, the kubelet on that node may need to either multiplex the existing hardware (for example, sharing CPUs across multiple Pods) or allocate hardware by dedicating some resource (for example, assigning one of more CPUs for a Pod's exclusive use). 
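For illustration, the CPU assignment policies covered in the node-resource-managers hunk above are selected through the kubelet configuration. A minimal sketch of such a KubeletConfiguration fragment (the reserved CPU set is a placeholder value, not taken from the patch):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# "static" lets Guaranteed Pods with integer CPU requests receive
# exclusive CPUs; "none" (the default) keeps the shared pool behavior.
cpuManagerPolicy: static
# CPUs held back for system daemons; example value only.
reservedSystemCPUs: "0,1"
```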
diff --git a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md index 6b128fc486ccd..9ae6f9de34a26 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md @@ -10,8 +10,6 @@ weight: 30 -{{< feature-state for_k8s_version="v1.24" state="stable" >}} - When you run a Pod on a Node, the Pod itself takes an amount of system resources. These resources are additional to the resources needed to run the container(s) inside the Pod. In Kubernetes, _Pod Overhead_ is a way to account for the resources consumed by the Pod diff --git a/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md b/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md index 00df459a2deaf..1e83670b42504 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md @@ -6,18 +6,13 @@ weight: 90 -{{< feature-state for_k8s_version="v1.14" state="stable" >}} - [Pods](/docs/concepts/workloads/pods/) can have _priority_. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible. - - - {{< warning >}} In a cluster where not all users are trusted, a malicious user could create Pods at the highest possible priorities, causing other Pods to be evicted/not get @@ -102,8 +97,6 @@ description: "This priority class should be used for XYZ service pods only." ## Non-preempting PriorityClass {#non-preempting-priority-class} -{{< feature-state for_k8s_version="v1.24" state="stable" >}} - Pods with `preemptionPolicy: Never` will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. 
diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md index 45ad3cb5ca817..691b9efc4ae83 100644 --- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -96,14 +96,6 @@ your cluster. Those fields are: A domain is a particular instance of a topology. An eligible domain is a domain whose nodes match the node selector. - - {{< note >}} - Before Kubernetes v1.30, the `minDomains` field was only available if the - `MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates-removed/) - was enabled (default since v1.28). In older Kubernetes clusters it might be explicitly - disabled or the field might not be available. - {{< /note >}} - - The value of `minDomains` must be greater than 0, when specified. You can only specify `minDomains` in conjunction with `whenUnsatisfiable: DoNotSchedule`. - When the number of eligible domains with match topology keys is less than `minDomains`, diff --git a/content/en/docs/concepts/security/service-accounts.md b/content/en/docs/concepts/security/service-accounts.md index e1ed8ac958475..c6d819e346d35 100644 --- a/content/en/docs/concepts/security/service-accounts.md +++ b/content/en/docs/concepts/security/service-accounts.md @@ -180,15 +180,16 @@ following methods: rotates the token before it expires. * [Service Account Token Secrets](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount) (not recommended): You can mount service account tokens as Kubernetes - Secrets in Pods. These tokens don't expire and don't rotate. In versions prior to v1.24, a permanent token was automatically created for each service account. + Secrets in Pods. These tokens don't expire and don't rotate. 
+ In versions prior to v1.24, a permanent token was automatically created for each service account. This method is not recommended anymore, especially at scale, because of the risks associated - with static, long-lived credentials. The [LegacyServiceAccountTokenNoAutoGeneration feature gate](/docs/reference/command-line-tools-reference/feature-gates-removed) - (which was enabled by default from Kubernetes v1.24 to v1.26), prevented Kubernetes from automatically creating these tokens for - ServiceAccounts. The feature gate is removed in v1.27, because it was elevated to GA status; you can still create indefinite service account tokens manually, but should take into account the security implications. + with static, long-lived credentials. You can still create indefinite service account tokens manually, + but should take into account the security implications. {{< note >}} For applications running outside your Kubernetes cluster, you might be considering -creating a long-lived ServiceAccount token that is stored in a Secret. This allows authentication, but the Kubernetes project recommends you avoid this approach. +creating a long-lived ServiceAccount token that is stored in a Secret. +This allows authentication, but the Kubernetes project recommends you avoid this approach. Long-lived bearer tokens represent a security risk as, once disclosed, the token can be misused. Instead, consider using an alternative. For example, your external application can authenticate using a well-protected private key `and` a certificate, @@ -202,7 +203,8 @@ You can also use TokenRequest to obtain short-lived tokens for your external app {{< feature-state for_k8s_version="v1.32" state="deprecated" >}} {{< note >}} -`kubernetes.io/enforce-mountable-secrets` is deprecated since Kubernetes v1.32. Use separate namespaces to isolate access to mounted secrets. +`kubernetes.io/enforce-mountable-secrets` is deprecated since Kubernetes v1.32. 
+Use separate namespaces to isolate access to mounted secrets. {{< /note >}} Kubernetes provides an annotation called `kubernetes.io/enforce-mountable-secrets` @@ -231,7 +233,8 @@ the Secrets from this ServiceAccount are subject to certain mounting restriction 1. The name of each Secret referenced using `imagePullSecrets` in a Pod must also appear in the `secrets` field of the Pod's ServiceAccount. -By understanding and enforcing these restrictions, cluster administrators can maintain a tighter security profile and ensure that secrets are accessed only by the appropriate resources. +By understanding and enforcing these restrictions, cluster administrators can maintain +a tighter security profile and ensure that secrets are accessed only by the appropriate resources. ## Authenticating service account credentials {#authenticating-credentials} diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index d9e3d1670fbff..a49f4da8c5473 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -297,8 +297,6 @@ selectors and uses DNS names instead. For more information, see the ### EndpointSlices -{{< feature-state for_k8s_version="v1.21" state="stable" >}} - [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) are objects that represent a subset (a _slice_) of the backing network endpoints for a Service. @@ -351,8 +349,6 @@ The same API limit means that you cannot manually update an Endpoints to have mo ### Application protocol -{{< feature-state for_k8s_version="v1.20" state="stable" >}} - The `appProtocol` field provides a way to specify an application protocol for each Service port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. 
@@ -636,14 +632,11 @@ balancer health checks are extensively used within the context of supporting the #### Load balancers with mixed protocol types -{{< feature-state feature_gate_name="MixedProtocolLBService" >}} - By default, for LoadBalancer type of Services, when there is more than one port defined, all ports must have the same protocol, and the protocol must be one which is supported by the cloud provider. - -The feature gate `MixedProtocolLBService` (enabled by default for the kube-apiserver as of v1.24) allows the use of -different protocols for LoadBalancer type of Services, when there is more than one port defined. +However, Kubernetes allows the use of different protocols for LoadBalancer type of Services, +when there is more than one port defined. {{< note >}} The set of protocols that can be used for load balanced Services is defined by your @@ -652,8 +645,6 @@ cloud provider; they may impose restrictions beyond what the Kubernetes API enfo #### Disabling load balancer NodePort allocation {#load-balancer-nodeport-allocation} -{{< feature-state for_k8s_version="v1.24" state="stable" >}} - You can optionally disable node port allocation for a Service of `type: LoadBalancer`, by setting the field `spec.allocateLoadBalancerNodePorts` to `false`. This should only be used for load balancer implementations that route traffic directly to pods as opposed to using node ports. By default, `spec.allocateLoadBalancerNodePorts` @@ -663,8 +654,6 @@ You must explicitly remove the `nodePorts` entry in every Service port to de-all #### Specifying class of load balancer implementation {#load-balancer-class} -{{< feature-state for_k8s_version="v1.24" state="stable" >}} - For a Service with `type` set to `LoadBalancer`, the `.spec.loadBalancerClass` field enables you to use a load balancer implementation other than the cloud provider default. 
diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 1ede54a14d831..d3f021fa18f6f 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -634,8 +634,7 @@ The access modes are: : the volume can be mounted as read-write by many nodes. `ReadWriteOncePod` -: {{< feature-state for_k8s_version="v1.29" state="stable" >}} - the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod +: the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. @@ -763,11 +762,9 @@ You can see the name of the PVC bound to the PV using `kubectl describe persiste #### Phase transition timestamp -{{< feature-state feature_gate_name="PersistentVolumeLastPhaseTransitionTime" >}} - -The `.status` field for a PersistentVolume can include an alpha `lastPhaseTransitionTime` field. This field records -the timestamp of when the volume last transitioned its phase. For newly created -volumes the phase is set to `Pending` and `lastPhaseTransitionTime` is set to +The `.status` field for a PersistentVolume can include a `lastPhaseTransitionTime` field. +This field records the timestamp of when the volume last transitioned its phase. +For newly created volumes the phase is set to `Pending` and `lastPhaseTransitionTime` is set to the current time. ## PersistentVolumeClaims @@ -894,8 +891,6 @@ it won't be supported in a future Kubernetes release. #### Retroactive default StorageClass assignment -{{< feature-state for_k8s_version="v1.28" state="stable" >}} - You can create a PersistentVolumeClaim without specifying a `storageClassName` for the new PVC, and you can do so even when no default StorageClass exists in your cluster. 
In this case, the new PVC creates as you defined it, and the diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 8f745fc973821..14fa00244a977 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -819,6 +819,7 @@ before using it in the Pod. #### Portworx CSI migration + {{< feature-state feature_gate_name="CSIMigrationPortworx" >}} In Kubernetes {{% skew currentVersion %}}, all operations for the in-tree @@ -924,8 +925,6 @@ spec: ### Using subPath with expanded environment variables {#using-subpath-expanded-environment} -{{< feature-state for_k8s_version="v1.17" state="stable" >}} - Use the `subPathExpr` field to construct `subPath` directory names from downward API environment variables. The `subPath` and `subPathExpr` properties are mutually exclusive. @@ -1069,11 +1068,7 @@ persistent volume: call to the CSI driver. All supported versions of Kubernetes offer the `nodeExpandSecretRef` field, and have it available by default. Kubernetes releases prior to v1.25 did not include this support. -* Enable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates-removed/) - named `CSINodeExpandSecret` for each kube-apiserver and for the kubelet on every - node. Since Kubernetes version 1.27, this feature has been enabled by default - and no explicit enablement of the feature gate is required. - You must also be using a CSI driver that supports or requires secret data during +* You must use a CSI driver that supports or requires secret data during node-initiated storage resize operations. 
* `nodePublishSecretRef`: A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI @@ -1088,8 +1083,6 @@ persistent volume: #### CSI raw block volume support -{{< feature-state for_k8s_version="v1.18" state="stable" >}} - Vendors with external CSI drivers can implement raw block volume support in Kubernetes workloads. @@ -1099,8 +1092,6 @@ as usual, without any CSI-specific changes. #### CSI ephemeral volumes -{{< feature-state for_k8s_version="v1.25" state="stable" >}} - You can directly configure CSI volumes within the Pod specification. Volumes specified in this way are ephemeral and do not persist across pod restarts. See @@ -1126,10 +1117,8 @@ For more details, refer to the deployment guide of the CSI plugin you wish to de #### Migrating to CSI drivers from in-tree plugins -{{< feature-state for_k8s_version="v1.25" state="stable" >}} - -The `CSIMigration` feature directs operations against existing in-tree -plugins to corresponding CSI plugins (which are expected to be installed and configured). +Operations against existing in-tree plugins are redirected to +the corresponding CSI plugins (which are expected to be installed and configured). As a result, operators do not have to make any configuration changes to existing Storage Classes, PersistentVolumes or PersistentVolumeClaims (referring to in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin. @@ -1159,8 +1148,6 @@ are listed in [Types of Volumes](#volume-types). ### flexVolume (deprecated) {#flexvolume} -{{< feature-state for_k8s_version="v1.23" state="deprecated" >}} - FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. The FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node and in some cases the control plane nodes as well.
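As a reminder of the `subPathExpr` mechanism that the volumes.md hunks above touch, a minimal Pod sketch expanding a `subPath` from the downward API (image, paths, and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-example               # illustrative
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /logs/hello.txt && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      # expanded at runtime to the pod's name via the downward API
      subPathExpr: $(POD_NAME)
  volumes:
  - name: workdir
    hostPath:
      path: /var/log/pods-example     # example host path
```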
diff --git a/content/en/docs/concepts/windows/user-guide.md b/content/en/docs/concepts/windows/user-guide.md index 732e7d16123c6..448c06ffc6930 100644 --- a/content/en/docs/concepts/windows/user-guide.md +++ b/content/en/docs/concepts/windows/user-guide.md @@ -167,12 +167,6 @@ that the containers in that Pod are designed for. For Pods that run Linux contai `.spec.os.name` to `linux`. For Pods that run Windows containers, set `.spec.os.name` to `windows`. -{{< note >}} -If you are running a version of Kubernetes older than 1.24, you may need to enable -the `IdentifyPodOS` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -to be able to set a value for `.spec.pod.os`. -{{< /note >}} - The scheduler does not use the value of `.spec.os.name` when assigning Pods to nodes. You should use normal Kubernetes mechanisms for [assigning pods to nodes](/docs/concepts/scheduling-eviction/assign-pod-node/) diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md index 28b5a54f8dd5f..26737a56b3683 100644 --- a/content/en/docs/concepts/workloads/pods/disruptions.md +++ b/content/en/docs/concepts/workloads/pods/disruptions.md @@ -91,8 +91,6 @@ in your pod spec can also cause voluntary (and involuntary) disruptions. ## Pod disruption budgets -{{< feature-state for_k8s_version="v1.21" state="stable" >}} - Kubernetes offers features to help you run highly available applications even when you introduce frequent voluntary disruptions. @@ -233,8 +231,6 @@ can happen, according to: ## Pod disruption conditions {#pod-disruption-conditions} -{{< feature-state feature_gate_name="PodDisruptionConditions" >}} - A dedicated Pod `DisruptionTarget` [condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) is added to indicate that the Pod is about to be deleted due to a {{}}. 
diff --git a/content/en/docs/concepts/workloads/pods/pod-hostname.md b/content/en/docs/concepts/workloads/pods/pod-hostname.md index 7eedf792924b1..d3b094bfa7fb9 100644 --- a/content/en/docs/concepts/workloads/pods/pod-hostname.md +++ b/content/en/docs/concepts/workloads/pods/pod-hostname.md @@ -52,8 +52,6 @@ Refer to: [Pod's hostname and subdomain fields](/docs/concepts/services-networki ## Hostname with pod's setHostnameAsFQDN fields -{{< feature-state for_k8s_version="v1.22" state="stable" >}} - When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name `busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example`, @@ -72,7 +70,7 @@ the `spec.subdomain`, the `namespace` name, and the cluster domain suffix. {{< note >}} In Linux, the hostname field of the kernel (the `nodename` field of `struct utsname`) is limited to 64 characters. -If a Pod enables this feature and its FQDN is longer than 64 character, it will fail to start. +If a Pod's FQDN is longer than 64 characters, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as "Failed to construct FQDN from Pod hostname and cluster domain". 
@@ -82,6 +80,7 @@ and `spec.subdomain` fields results in an FQDN that does not exceed 64 character {{< /note >}} ## Hostname with pod's hostnameOverride + {{< feature-state feature_gate_name="HostnameOverride" >}} Setting a value for `hostnameOverride` in the Pod spec causes the kubelet @@ -120,4 +119,4 @@ The API server will explicitly reject any create request attempting this combina For details on behavior when `hostnameOverride` is set in combination with other fields (hostname, subdomain, setHostnameAsFQDN, hostNetwork), -see the table in the [KEP-4762 design details](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/4762-allow-arbitrary-fqdn-as-pod-hostname/README.md#design-details ). \ No newline at end of file +see the table in the [KEP-4762 design details](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/4762-allow-arbitrary-fqdn-as-pod-hostname/README.md#design-details). diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 2619dc5493b78..269013db56c19 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -283,8 +283,7 @@ explains the behaviour of `init containers` when specify `restartpolicy` field o #### Individual container restart policy and rules {#container-restart-rules} -{{< feature-state -feature_gate_name="ContainerRestartRules" >}} +{{< feature-state feature_gate_name="ContainerRestartRules" >}} If your cluster has the feature gate `ContainerRestartRules` enabled, you can specify `restartPolicy` and `restartPolicyRules` on _individual containers_ to override the Pod @@ -381,8 +380,7 @@ loss and containers may be re-run even when you expect a container not to be res ### Reduced container restart delay -{{< feature-state -feature_gate_name="ReduceDefaultCrashLoopBackOffDecay" >}} +{{< feature-state
feature_gate_name="ReduceDefaultCrashLoopBackOffDecay" >}} With the alpha feature gate `ReduceDefaultCrashLoopBackOffDecay` enabled, container start retries across your cluster will be reduced to begin at 1s @@ -464,8 +462,6 @@ Field name | Description ### Pod readiness {#pod-readiness-gate} -{{< feature-state for_k8s_version="v1.14" state="stable" >}} - Your application can inject extra feedback or signals into PodStatus: _Pod readiness_. To use this, set `readinessGates` in the Pod's `spec` to specify a list of additional conditions that the kubelet evaluates for Pod readiness. @@ -522,11 +518,7 @@ When a Pod's containers are Ready but at least one custom condition is missing o ### Pod network readiness {#pod-has-network} -{{< feature-state for_k8s_version="v1.29" state="beta" >}} - -{{< note >}} -During its early development, this condition was named `PodHasNetwork`. -{{< /note >}} +{{< feature-state feature_gate_name="PodReadyToStartContainersCondition" >}} After a Pod gets scheduled on a node, it needs to be admitted by the kubelet and to have any required storage volumes mounted. Once these phases are complete, diff --git a/content/en/docs/concepts/workloads/pods/user-namespaces.md b/content/en/docs/concepts/workloads/pods/user-namespaces.md index a2daca7dcd74f..125dfd2a8e26d 100644 --- a/content/en/docs/concepts/workloads/pods/user-namespaces.md +++ b/content/en/docs/concepts/workloads/pods/user-namespaces.md @@ -7,7 +7,7 @@ min-kubernetes-server-version: v1.25 --- -{{< feature-state for_k8s_version="v1.30" state="beta" >}} +{{< feature-state feature_gate_name="UserNamespacesSupport" >}} This page explains how user namespaces are used in Kubernetes pods. 
A user namespace isolates the user running inside the container from the one @@ -243,12 +243,12 @@ In Kubernetes prior to v1.33, the ID count for each of Pods was hard-coded to ## Integration with Pod security admission checks -{{< feature-state state="alpha" for_k8s_version="v1.29" >}} +{{< feature-state feature_gate_name="UserNamespacesPodSecurityStandards" >}} For Linux Pods that enable user namespaces, Kubernetes relaxes the application of [Pod Security Standards](/docs/concepts/security/pod-security-standards) in a controlled way. -This behavior can be controlled by the [feature -gate](/docs/reference/command-line-tools-reference/feature-gates/) +This behavior can be controlled by the +[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `UserNamespacesPodSecurityStandards`, which allows an early opt-in for end users. Admins have to ensure that user namespaces are enabled by all nodes within the cluster if using the feature gate. diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index 63d56791f1d35..b288aa2c6bbda 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -713,8 +713,6 @@ admission plugin, which allows preventing pods from running on specifically tain ### PodSecurity {#podsecurity} -{{< feature-state for_k8s_version="v1.25" state="stable" >}} - **Type**: Validating. The PodSecurity admission controller checks new Pods before they are @@ -857,7 +855,7 @@ conditions. **Type**: Validating. [This admission controller](/docs/reference/access-authn-authz/validating-admission-policy/) implements the CEL validation for incoming matched requests. -It is enabled when both feature gate `validatingadmissionpolicy` and `admissionregistration.k8s.io/v1alpha1` group/version are enabled. 
+It is enabled when the `admissionregistration.k8s.io/v1alpha1` group/version is enabled. If any of the ValidatingAdmissionPolicy fails, the request fails. ### ValidatingAdmissionWebhook {#validatingadmissionwebhook} diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index 0c6c079a16cb4..377c73afb43e7 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -1875,19 +1875,6 @@ you see the user details and properties for the user that was impersonated. By default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview` feature is enabled. It is allowed by the `system:basic-user` cluster role. -{{< note >}} -You can only make `SelfSubjectReview` requests if: - -* the `APISelfSubjectReview` - [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) - is enabled for your cluster (not needed for Kubernetes {{< skew currentVersion >}}, but older - Kubernetes versions might not offer this feature gate, or might default it to be off) -* (if you are running a version of Kubernetes older than v1.28) the API server for your - cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1` - {{< glossary_tooltip term_id="api-group" text="API group" >}} - enabled. 
-{{< /note >}} - ## {{% heading "whatsnext" %}} * To learn about issuing certificates for users, read [Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest](/docs/tasks/tls/certificate-issue-client-csr/) diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md index cd577f775a32c..eebcd2e7013b1 100644 --- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md @@ -732,8 +732,6 @@ The `matchPolicy` for an admission webhooks defaults to `Equivalent`. ### Matching requests: `matchConditions` -{{< feature-state feature_gate_name="AdmissionWebhookMatchConditions" >}} - You can define _match conditions_ for webhooks if you need fine-grained request filtering. These conditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still doesn't provide the filtering you want over when to call out over HTTP. 
Match conditions are diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md index 1dcf45682b09b..e21ac56ebbad2 100644 --- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md @@ -227,8 +227,6 @@ For more information on JWTs and their structure, see the [JSON Web Token RFC](h ## Bound service account token volume mechanism {#bound-service-account-token-volume} -{{< feature-state feature_gate_name="BoundServiceAccountTokenVolume" >}} - By default, the Kubernetes control plane (specifically, the [ServiceAccount admission controller](#serviceaccount-admission-controller)) adds a [projected volume](/docs/concepts/storage/projected-volumes/) to Pods, @@ -422,8 +420,6 @@ it does the following when a Pod is created: ### Legacy ServiceAccount token tracking controller -{{< feature-state feature_gate_name="LegacyServiceAccountTokenTracking" >}} - This controller generates a ConfigMap called `kube-system/kube-apiserver-legacy-service-account-token-tracking` in the `kube-system` namespace. The ConfigMap records the timestamp when legacy service @@ -431,17 +427,14 @@ account tokens began to be monitored by the system. ### Legacy ServiceAccount token cleaner -{{< feature-state feature_gate_name="LegacyServiceAccountTokenCleanUp" >}} - The legacy ServiceAccount token cleaner runs as part of the `kube-controller-manager` and checks every 24 hours to see if any auto-generated legacy ServiceAccount token has not been used in a *specified amount of time*. If so, the cleaner marks those tokens as invalid. -The cleaner works by first checking the ConfigMap created by the control plane -(provided that `LegacyServiceAccountTokenTracking` is enabled). 
If the current -time is a *specified amount of time* after the date in the ConfigMap, the -cleaner then loops through the list of Secrets in the cluster and evaluates each +The cleaner works by first checking the ConfigMap created by the control plane. +If the current time is a *specified amount of time* after the date in the ConfigMap, +the cleaner then loops through the list of Secrets in the cluster and evaluates each Secret that has the type `kubernetes.io/service-account-token`. If a Secret meets all of the following conditions, the cleaner marks it as @@ -467,8 +460,6 @@ administrator can configure this value through the ### TokenRequest API -{{< feature-state for_k8s_version="v1.22" state="stable" >}} - You use the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) subresource of a ServiceAccount to obtain a time-bound token for that ServiceAccount. You don't need to call this to obtain an API token for use within a container, since diff --git a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md index cfaa6aa1b5a86..7a4c5f85ac2f9 100644 --- a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md +++ b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md @@ -8,12 +8,8 @@ content_type: concept --- - -{{< feature-state state="stable" for_k8s_version="v1.30" >}} - This page provides an overview of Validating Admission Policy. - ## What is Validating Admission Policy? 
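The hunk above trims the introduction of the Validating Admission Policy overview to remove its stable feature-state banner. For orientation only (the policy name, resource rules, and replica limit below are illustrative and not part of this change), a minimal policy object looks like:

```yaml
# Illustrative sketch: rejects Deployments with more than 5 replicas.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"   # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"
```

A matching ValidatingAdmissionPolicyBinding is still required before the policy takes effect.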
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates/LocalStorageCapacityIsolationFSQuotaMonitoring.md b/content/en/docs/reference/command-line-tools-reference/feature-gates/LocalStorageCapacityIsolationFSQuotaMonitoring.md index 3449033dc7b18..09f5e1a16efcc 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates/LocalStorageCapacityIsolationFSQuotaMonitoring.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates/LocalStorageCapacityIsolationFSQuotaMonitoring.md @@ -14,9 +14,7 @@ stages: defaultValue: false fromVersion: "1.31" --- -When `LocalStorageCapacityIsolation` -is enabled for -[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/), -the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas, -and `UserNamespacesSupport` is enabled, -project quotas are used to monitor `emptyDir` volume storage consumption rather than using filesystem walk, ensuring better performance and accuracy. \ No newline at end of file +When the backing filesystem for an [emptyDir](/docs/concepts/storage/volumes/#emptydir) +volume supports project quotas, and the `UserNamespacesSupport` feature is enabled, +project quotas are used to monitor `emptyDir` volume storage consumption rather than +using filesystem walking, ensuring better performance and accuracy. diff --git a/content/en/docs/reference/debug-cluster/flow-control.md b/content/en/docs/reference/debug-cluster/flow-control.md index 40bf92da232f6..f7b383cfddf35 100644 --- a/content/en/docs/reference/debug-cluster/flow-control.md +++ b/content/en/docs/reference/debug-cluster/flow-control.md @@ -32,8 +32,7 @@ PriorityLevelConfigurations. ## Debug endpoints -With the `APIPriorityAndFairness` feature enabled, the `kube-apiserver` -serves the following additional paths at its HTTP(S) ports. +The `kube-apiserver` serves the following additional paths at its HTTP(S) ports. 
You need to ensure you have permissions to access these endpoints. You don't have to do anything if you are using admin. diff --git a/content/en/docs/reference/instrumentation/slis.md b/content/en/docs/reference/instrumentation/slis.md index cfcb8ad02f5e5..c8352008b0890 100644 --- a/content/en/docs/reference/instrumentation/slis.md +++ b/content/en/docs/reference/instrumentation/slis.md @@ -11,13 +11,9 @@ description: >- -{{< feature-state feature_gate_name="ComponentSLIs" >}} - By default, Kubernetes {{< skew currentVersion >}} publishes Service Level Indicator (SLI) metrics for each Kubernetes component binary. This metric endpoint is exposed on the serving -HTTPS port of each component, at the path `/metrics/slis`. The -`ComponentSLIs` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -defaults to enabled for each Kubernetes component as of v1.27. +HTTPS port of each component, at the path `/metrics/slis`. diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index 98e0f35c32922..4e95e4ba33ebf 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -1020,7 +1020,7 @@ Example: `pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io` Used on: PersistentVolume, PersistentVolumeClaim It is added to a PersistentVolume(PV) and PersistentVolumeClaim(PVC) that is supposed to be -dynamically provisioned/deleted by its corresponding CSI driver through the `CSIMigration` feature gate. +dynamically provisioned/deleted by its corresponding CSI driver. When this annotation is set, the Kubernetes components will "stand-down" and the `external-provisioner` will act on the objects. 
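The `pv.kubernetes.io/migrated-to` annotation described in the hunk above is written by the control plane during CSI migration, not by users. As an illustrative sketch (the PV name, driver, and disk name are hypothetical), a migrated PersistentVolume might carry:

```yaml
# Illustrative only: the annotation is set by Kubernetes, shown here for context.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-pv-example        # hypothetical name
  annotations:
    pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:               # in-tree source that the CSI driver now handles
    pdName: example-disk           # hypothetical disk name
    fsType: ext4
```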
diff --git a/content/en/docs/reference/networking/virtual-ips.md b/content/en/docs/reference/networking/virtual-ips.md index d155b0fa98d92..b5388a36ca7e9 100644 --- a/content/en/docs/reference/networking/virtual-ips.md +++ b/content/en/docs/reference/networking/virtual-ips.md @@ -577,8 +577,6 @@ spec: ### IP address ranges for Service virtual IP addresses {#service-ip-static-sub-range} -{{< feature-state for_k8s_version="v1.26" state="stable" >}} - Kubernetes divides the `ClusterIP` range into two bands, based on the size of the configured `service-cluster-ip-range` by using the following formula `min(max(16, cidrSize / 16), 256)`. That formula means the result is _never less than 16 or @@ -596,8 +594,6 @@ to control how Kubernetes routes traffic to healthy (“ready”) backends. ### Internal traffic policy -{{< feature-state for_k8s_version="v1.26" state="stable" >}} - You can set the `.spec.internalTrafficPolicy` field to control how traffic from internal sources is routed. Valid values are `Cluster` and `Local`. Set the field to `Cluster` to route internal traffic to all ready endpoints and `Local` to only route @@ -678,11 +674,7 @@ checking port with logic that matches the kube-proxy implementation. ### Traffic to terminating endpoints -{{< feature-state for_k8s_version="v1.28" state="stable" >}} - -If the `ProxyTerminatingEndpoints` -[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -is enabled in kube-proxy and the traffic policy is `Local`, that node's +If the traffic policy is `Local`, that node's kube-proxy uses a more complicated algorithm to select endpoints for a Service. In that case, kube-proxy checks if the node has local endpoints and whether or not all the local endpoints are marked as terminating.
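The internal traffic policy text kept by the hunk above can be sketched as a manifest; the Service name, selector, and ports below are illustrative, not taken from this change:

```yaml
# Sketch of a Service that routes internal traffic only to node-local endpoints.
apiVersion: v1
kind: Service
metadata:
  name: example-service            # hypothetical name
spec:
  selector:
    app: example                   # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
  internalTrafficPolicy: Local     # Cluster (default) or Local
```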
diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index e4e10031a32ce..d0241aa70107f 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -573,8 +573,6 @@ The `content-encoding` header indicates that the response is compressed with `gz ## Retrieving large results sets in chunks -{{< feature-state feature_gate_name="APIListChunking" >}} - On large clusters, retrieving the collection of some resource types may result in very large responses that can impact the server and client. For instance, a cluster may have tens of thousands of Pods, each of which is equivalent to roughly 2 KiB of @@ -1133,8 +1131,6 @@ To learn more, see [Declarative API Validation](/docs/reference/using-api/declar ## Dry-run -{{< feature-state feature_gate_name="DryRun" >}} - When you use HTTP verbs that can modify resources (`POST`, `PUT`, `PATCH`, and `DELETE`), you can submit your request in a _dry run_ mode. Dry run mode helps to evaluate a request through the typical request stages (admission chain, validation, diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md index 2cac487a54c17..e5e33c023749e 100644 --- a/content/en/docs/reference/using-api/server-side-apply.md +++ b/content/en/docs/reference/using-api/server-side-apply.md @@ -11,8 +11,6 @@ weight: 25 -{{< feature-state feature_gate_name="ServerSideApply" >}} - Kubernetes supports multiple appliers collaborating to manage the fields of a single [object](/docs/concepts/overview/working-with-objects/). 
diff --git a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md index 4e0c1b0456bf6..8d18c970d72d0 100644 --- a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md +++ b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md @@ -54,11 +54,7 @@ with `--cloud-provider` flag and `cloud-controller-manager` should not yet be deployed. The out-of-tree cloud provider must have built a `cloud-controller-manager` with -Leader Migration implementation. If the cloud provider imports -`k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later, -Leader Migration will be available. However, for version before v0.22.0, Leader -Migration is alpha and requires feature gate `ControllerManagerLeaderMigration` to be -enabled in `cloud-controller-manager`. +Leader Migration implementation. This guide assumes that kubelet of each control plane node starts `kube-controller-manager` and `cloud-controller-manager` as static pods defined by diff --git a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md index c638a54823400..9ecdae4a3f4b3 100644 --- a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md +++ b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md @@ -8,8 +8,6 @@ weight: 400 -{{< feature-state for_k8s_version="v1.21" state="stable" >}} - This document describes how to configure and use kernel parameters within a Kubernetes cluster using the {{< glossary_tooltip term_id="sysctl" >}} interface. 
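The sysctl page introduction above is unchanged apart from the banner removal. A minimal sketch of the interface it describes, setting a safe sysctl through the Pod `securityContext` (the Pod name and image are illustrative assumptions), is:

```yaml
# Sketch: safe sysctls can be set per Pod without kubelet configuration;
# unsafe sysctls must first be allowed via the kubelet's allowlist.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example             # hypothetical name
spec:
  securityContext:
    sysctls:
      - name: kernel.shm_rmid_forced
        value: "1"
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
```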
diff --git a/content/en/docs/tasks/administer-cluster/topology-manager.md b/content/en/docs/tasks/administer-cluster/topology-manager.md index 69972a0b7e9b6..50752ed0415ca 100644 --- a/content/en/docs/tasks/administer-cluster/topology-manager.md +++ b/content/en/docs/tasks/administer-cluster/topology-manager.md @@ -13,8 +13,6 @@ weight: 150 -{{< feature-state state="stable" for_k8s_version="v1.27" >}} - An increasing number of systems leverage a combination of CPUs and hardware accelerators to support latency-critical execution and high-throughput parallel computation. These include workloads in fields such as telecommunications, scientific computing, machine learning, financial diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index ee8ce93e51455..b05d620743fcb 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -380,8 +380,6 @@ myregistrykey ## ServiceAccount token volume projection -{{< feature-state for_k8s_version="v1.20" state="stable" >}} - {{< note >}} To enable and use token request projection, you must specify each of the following command line arguments to `kube-apiserver`: @@ -483,11 +481,7 @@ often good enough for the application to load the token on a schedule ### Service account issuer discovery -{{< feature-state for_k8s_version="v1.21" state="stable" >}} - -If you have enabled [token projection](#serviceaccount-token-volume-projection) -for ServiceAccounts in your cluster, then you can also make use of the discovery -feature. Kubernetes provides a way for clients to federate as an _identity provider_, +Kubernetes provides a way for clients to federate as an _identity provider_, so that one or more external systems can act as a _relying party_. 
{{< note >}} diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md index b2b943ccb653a..34fe3d13959c6 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md @@ -384,13 +384,6 @@ spec: ## Webhook conversion -{{< feature-state state="stable" for_k8s_version="v1.16" >}} - -{{< note >}} -Webhook conversion is available as beta since 1.15, and as alpha since Kubernetes 1.13. The -`CustomResourceWebhookConversion` feature must be enabled, which is the case automatically for many clusters for beta features. Please refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information. -{{< /note >}} - The above example has a None conversion between versions which only sets the `apiVersion` field on conversion and does not change the rest of the object. The API server also supports webhook conversions that call an external service in case a conversion is required. 
For example when: diff --git a/content/en/docs/tasks/job/pod-failure-policy.md b/content/en/docs/tasks/job/pod-failure-policy.md index 619e3913f9acb..2dd686fa6226e 100644 --- a/content/en/docs/tasks/job/pod-failure-policy.md +++ b/content/en/docs/tasks/job/pod-failure-policy.md @@ -5,8 +5,6 @@ min-kubernetes-server-version: v1.25 weight: 60 --- -{{< feature-state feature_gate_name="JobPodFailurePolicy" >}} - This document shows you how to use the diff --git a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md index ce0925a109db0..ecf1a9c9e5970 100644 --- a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md +++ b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md @@ -7,7 +7,6 @@ description: Configure and manage huge pages as a schedulable resource in a clus --- -{{< feature-state feature_gate_name="HugePages" >}} Kubernetes supports the allocation and consumption of pre-allocated huge pages by applications in a Pod. This page describes how users can consume huge pages. diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md index ea70ff453dd64..2220333ccac31 100644 --- a/content/en/docs/tasks/run-application/configure-pdb.md +++ b/content/en/docs/tasks/run-application/configure-pdb.md @@ -7,8 +7,6 @@ min-kubernetes-server-version: v1.21 -{{< feature-state for_k8s_version="v1.21" state="stable" >}} - This page shows how to limit the number of concurrent disruptions that your application experiences, allowing for higher availability while permitting the cluster administrator to manage the clusters @@ -241,8 +239,6 @@ These pods are tracked via `.status.currentHealthy` field in the PDB status. 
## Unhealthy Pod Eviction Policy -{{< feature-state feature_gate_name="PDBUnhealthyPodEvictionPolicy" >}} - PodDisruptionBudget guarding an application ensures that `.status.currentHealthy` number of pods does not fall below the number specified in `.status.desiredHealthy` by disallowing eviction of healthy pods. By using `.spec.unhealthyPodEvictionPolicy`, you can also define the criteria when unhealthy pods diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index e40c794158904..0632b0fc62c59 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -315,8 +315,6 @@ pod usage is still within acceptable limits. ### Container resource metrics -{{< feature-state feature_gate_name="HPAContainerMetrics" >}} - The HorizontalPodAutoscaler API also supports a container metric source where the HPA can track the resource usage of individual containers across a set of Pods, in order to scale the target resource. This lets you configure scaling thresholds for the containers that matter most in a particular Pod. diff --git a/content/en/docs/tutorials/security/apparmor.md b/content/en/docs/tutorials/security/apparmor.md index d4e51d35b769f..e4d8095229e9b 100644 --- a/content/en/docs/tutorials/security/apparmor.md +++ b/content/en/docs/tutorials/security/apparmor.md @@ -8,8 +8,6 @@ weight: 30 -{{< feature-state feature_gate_name="AppArmor" >}} - This page shows you how to load AppArmor profiles on your nodes and enforce those profiles in Pods. 
To learn more about how Kubernetes can confine Pods using AppArmor, see @@ -17,18 +15,14 @@ AppArmor, see ## {{% heading "objectives" %}} - * See an example of how to load a profile on a Node * Learn how to enforce the profile on a Pod * Learn how to check that the profile is loaded * See what happens when a profile is violated * See what happens when a profile cannot be loaded - - ## {{% heading "prerequisites" %}} - AppArmor is an optional kernel module and Kubernetes feature, so verify it is supported on your Nodes before proceeding: @@ -292,10 +286,8 @@ An AppArmor profile has 2 fields: The profile must be preconfigured on the node to work. This option must be provided if and only if the `type` is `Localhost`. - ## {{% heading "whatsnext" %}} - Additional resources: * [Quick guide to the AppArmor profile language](https://gitlab.com/apparmor/apparmor/wikis/QuickProfileLanguage) diff --git a/content/en/docs/tutorials/security/seccomp.md b/content/en/docs/tutorials/security/seccomp.md index 08e6b73d30c3e..d07b9dba02186 100644 --- a/content/en/docs/tutorials/security/seccomp.md +++ b/content/en/docs/tutorials/security/seccomp.md @@ -11,8 +11,6 @@ min-kubernetes-server-version: v1.22 -{{< feature-state for_k8s_version="v1.19" state="stable" >}} - Seccomp stands for secure computing mode and has been a feature of the Linux kernel since version 2.6.12. It can be used to sandbox the privileges of a process, restricting the calls it is able to make from userspace into the @@ -424,8 +422,6 @@ kubectl delete pod fine-pod --wait --now ## Enable the use of `RuntimeDefault` as the default seccomp profile for all workloads -{{< feature-state state="stable" for_k8s_version="v1.27" >}} - To use seccomp profile defaulting, you must run the kubelet with the `--seccomp-default` [command line flag](/docs/reference/command-line-tools-reference/kubelet)