Commit 01f91e0

Remove all references to 2022 and 2023
Signed-off-by: Manuel Buil <mbuil@suse.com>
1 parent 5f420ba commit 01f91e0

File tree: 6 files changed, 0 additions and 95 deletions


docs/cli/certificate.md (0 additions, 4 deletions)

```diff
@@ -81,10 +81,6 @@ The command performs integrity checks to confirm that the updated certificates a
 If the updated data is acceptable, the datastore's encrypted bootstrap key is updated, and the new certificates and keys will be used the next time K3s starts.
 If problems are encountered while validating the certificates and keys, an error is reported to the system log and the operation is cancelled without changes.
 
-:::info Version Gate
-Support for the `k3s certificate rotate-ca` command and the ability to use CA certificates signed by an external CA is available starting with the 2023-02 releases (v1.26.2+k3s1, v1.25.7+k3s1, v1.24.11+k3s1, v1.23.17+k3s1).
-:::
-
 ### Using Custom CA Certificates
 
 #### Cautions Against CA Reuse
```
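For context, the `k3s certificate rotate-ca` command named in the removed version gate is still documented in certificate.md; a minimal invocation might look like the following sketch (the staging path is illustrative, and the exact flags should be confirmed against `k3s certificate rotate-ca --help`):

```shell
# Sketch: load updated CA certificates and keys into the datastore.
# /opt/new-ca is a hypothetical staging directory laid out like
# /var/lib/rancher/k3s/server/tls.
k3s certificate rotate-ca --path=/opt/new-ca
```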

docs/cli/etcd-snapshot.md (0 additions, 4 deletions)

```diff
@@ -307,10 +307,6 @@ The token value can also be set in the K3s config file.
 
 ## ETCDSnapshotFile Custom Resources
 
-:::info Version Gate
-ETCDSnapshotFiles are available as of the November 2023 releases: v1.28.4+k3s2, v1.27.8+k3s2, v1.26.11+k3s2, v1.25.16+k3s4
-:::
-
 Snapshots can be viewed remotely using any Kubernetes client by listing or describing cluster-scoped `ETCDSnapshotFile` resources.
 Unlike the `k3s etcd-snapshot list` command, which only shows snapshots visible to that node, `ETCDSnapshotFile` resources track all snapshots present on cluster members.
```
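The surviving text notes that `ETCDSnapshotFile` resources are cluster-scoped and visible to any Kubernetes client; a quick way to inspect them, assuming a working kubeconfig against a K3s cluster, is:

```shell
# List all snapshots known to the cluster (not just this node's).
kubectl get etcdsnapshotfiles.k3s.cattle.io
# Describe one entry; the name below is a placeholder.
# kubectl describe etcdsnapshotfile <snapshot-name>
```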

docs/cli/token.md (0 additions, 4 deletions)

```diff
@@ -143,10 +143,6 @@ Flag | Description
 
 #### `k3s token rotate`
 
-:::info Version Gate
-Available as of the October 2023 releases (v1.28.2+k3s1, v1.27.7+k3s1, v1.26.10+k3s1, v1.25.15+k3s1).
-:::
-
 Rotate original server token with a new server token. After running this command, all servers and any agents that originally joined with the old token must be restarted with the new token.
 
 If you do not specify a new token, one will be generated for you.
```
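A sketch of the rotation described in the surviving text (both token values are placeholders; if `--new-token` is omitted, one is generated as the text notes):

```shell
# Sketch: replace the current server token with a new one, then restart
# all servers and any agents that joined with the old token.
k3s token rotate --token=<current-token> --new-token=<new-token>
```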

docs/installation/requirements.md (0 additions, 6 deletions)

```diff
@@ -19,12 +19,6 @@ K3s is available for the following architectures:
 - armhf
 - arm64/aarch64
 
-:::warning ARM64 Page Size
-
-Prior to May 2023 releases (v1.24.14+k3s1, v1.25.10+k3s1, v1.26.5+k3s1, v1.27.2+k3s1), on `aarch64/arm64` systems, the kernel must use 4k pages. **RHEL9**, **Ubuntu**, **Raspberry PI OS**, and **SLES** all meet this requirement.
-
-:::
-
 ## Operating Systems
 
 K3s is expected to work on most modern Linux systems.
```
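The removed warning concerned kernel page size on arm64; for anyone still running a pre-May-2023 release, the page size can be checked with a standard POSIX utility:

```shell
# Print the kernel page size in bytes; the removed warning required
# 4k (4096-byte) pages on arm64 for releases before May 2023.
getconf PAGESIZE
```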

docs/known-issues.md (0 additions, 73 deletions)

````diff
@@ -36,79 +36,6 @@ Iptables versions 1.8.0-1.8.4 also have known issues that can cause K3s to fail.
 
 K3s includes a known-good version of iptables (v1.8.8) which has been tested to function properly. You can tell K3s to use its bundled version of iptables by starting K3s with the `--prefer-bundled-bin` option, or by uninstalling the iptables/nftables packages from your operating system.
 
-:::info Version Gate
-
-The `--prefer-bundled-bin` flag is available starting with the 2022-12 releases (v1.26.0+k3s1, v1.25.5+k3s1, v1.24.9+k3s1, v1.23.15+k3s1).
-
-:::
-
 ### Rootless Mode
 
 Running K3s with Rootless mode is experimental and has several [known issues.](./advanced.md#known-issues-with-rootless-mode)
-
-### Upgrading Hardened Clusters from v1.24.x to v1.25.x {#hardened-125}
-
-Kubernetes removed PodSecurityPolicy from v1.25 in favor of Pod Security Standards. You can read more about PSS in the [upstream documentation](https://kubernetes.io/docs/concepts/security/pod-security-standards/). For K3S, there are some manual steps that must be taken if any `PodSecurityPolicy` has been configured on the nodes.
-
-1. On all nodes, update the `kube-apiserver-arg` value to remove the `PodSecurityPolicy` admission-plugin. Add the following arg value instead: `'admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml'`, but do NOT restart or upgrade K3S yet. Below is an example of what a configuration file might look like after this update for the node to be hardened:
-   ```yaml
-   protect-kernel-defaults: true
-   secrets-encryption: true
-   kube-apiserver-arg:
-     - 'admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml'
-     - 'audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
-     - 'audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
-     - 'audit-log-maxage=30'
-     - 'audit-log-maxbackup=10'
-     - 'audit-log-maxsize=100'
-   kube-controller-manager-arg:
-     - 'terminated-pod-gc-threshold=10'
-     - 'use-service-account-credentials=true'
-   kubelet-arg:
-     - 'streaming-connection-idle-timeout=5m'
-   ```
-2. Create the `/var/lib/rancher/k3s/server/psa.yaml` file with the following contents. You may want to exempt more namespaces as well. The below example exempts `kube-system` (required), `cis-operator-system` (optional, but useful for when running security scans through Rancher), and `system-upgrade` (required if doing [Automated Upgrades](./upgrades/automated.md)).
-   ```yaml
-   apiVersion: apiserver.config.k8s.io/v1
-   kind: AdmissionConfiguration
-   plugins:
-     - name: PodSecurity
-       configuration:
-         apiVersion: pod-security.admission.config.k8s.io/v1beta1
-         kind: PodSecurityConfiguration
-         defaults:
-           enforce: "restricted"
-           enforce-version: "latest"
-           audit: "restricted"
-           audit-version: "latest"
-           warn: "restricted"
-           warn-version: "latest"
-         exemptions:
-           usernames: []
-           runtimeClasses: []
-           namespaces: [kube-system, cis-operator-system, system-upgrade]
-   ```
-3. Perform the upgrade as normal. If doing [Automated Upgrades](./upgrades/automated.md), ensure that the namespace where the `system-upgrade-controller` pod is running in is setup to be privileged in accordance with the [Pod Security levels](https://kubernetes.io/docs/concepts/security/pod-security-admission/#pod-security-levels):
-   ```yaml
-   apiVersion: v1
-   kind: Namespace
-   metadata:
-     name: system-upgrade
-     labels:
-       # This value must be privileged for the controller to run successfully.
-       pod-security.kubernetes.io/enforce: privileged
-       pod-security.kubernetes.io/enforce-version: v1.25
-       # We are setting these to our _desired_ `enforce` level, but note that these below values can be any of the available options.
-       pod-security.kubernetes.io/audit: privileged
-       pod-security.kubernetes.io/audit-version: v1.25
-       pod-security.kubernetes.io/warn: privileged
-       pod-security.kubernetes.io/warn-version: v1.25
-   ```
-4. After the upgrade is complete, remove any remaining PSP resources from the cluster. In many cases, there may be PodSecurityPolicies and associated RBAC resources in custom files used for hardening within `/var/lib/rancher/k3s/server/manifests/`. Remove those resources and k3s will update automatically. Sometimes, due to timing, some of these may be left in the cluster, in which case you will need to delete them manually. If the [Hardening Guide](./security/hardening-guide.md) was previously followed, you should be able to delete them via the following:
-   ```sh
-   # Get the resources associated with PSPs
-   $ kubectl get roles,clusterroles,rolebindings,clusterrolebindings -A | grep -i psp
-
-   # Delete those resources:
-   $ kubectl delete clusterrole.rbac.authorization.k8s.io/psp:restricted-psp clusterrole.rbac.authorization.k8s.io/psp:svclb-psp clusterrole.rbac.authorization.k8s.io/psp:system-unrestricted-psp clusterrolebinding.rbac.authorization.k8s.io/default:restricted-psp clusterrolebinding.rbac.authorization.k8s.io/system-unrestricted-node-psp-rolebinding && kubectl delete -n kube-system rolebinding.rbac.authorization.k8s.io/svclb-psp-rolebinding rolebinding.rbac.authorization.k8s.io/system-unrestricted-svc-acct-psp-rolebinding
-   ```
````
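The `--prefer-bundled-bin` option kept in the surviving iptables text can also be set persistently; a sketch of the equivalent config-file entry, assuming the standard `/etc/rancher/k3s/config.yaml` location where CLI flags map to YAML keys:

```yaml
# /etc/rancher/k3s/config.yaml
# Use the iptables/nftables binaries bundled with K3s instead of the host's.
prefer-bundled-bin: true
```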

docs/upgrades/automated.md (0 additions, 4 deletions)

```diff
@@ -149,10 +149,6 @@ Jobs to execute upgrades for a plan will not be created outside the time window.
 
 ## Downgrade Prevention
 
-:::info Version Gate
-Starting with the 2023-07 releases ([v1.27.4+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.27.4%2Bk3s1), [v1.26.7+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.26.7%2Bk3s1), [v1.25.12+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.25.12%2Bk3s1), [v1.24.16+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.24.16%2Bk3s1))
-:::
-
 Kubernetes does not support downgrades of control-plane components. The k3s-upgrade image used by upgrade plans will refuse to downgrade K3s, failing the plan. Nodes with `cordon: true` configured in their plan will stay cordoned following the failure.
 
 Here is an example cluster, showing failed upgrade pods and cordoned nodes:
```
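For context on the `cordon: true` plan setting mentioned in the surviving paragraph, a minimal upgrade Plan sketch in the style of the k3s automated-upgrades docs (names, selector, and version are illustrative, not values from this commit):

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true   # nodes stay cordoned if the k3s-upgrade pod fails
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.27.4+k3s1
```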
