-apiVersion: troubleshoot.sh/v1beta2
+apiVersion: troubleshoot.sh/v1beta3
kind: Preflight
metadata:
  name: example
@@ -17,6 +17,18 @@ spec:
          - pass:
              when: ">= 1.22.0"
              message: Your cluster meets the recommended and required versions of Kubernetes.
+      docString: |
+        Title: Kubernetes Control Plane Requirements
+        Requirement:
+          - Version:
+            - Minimum: 1.20.0
+            - Recommended: 1.22.0
+        These version targets ensure that required APIs and default behaviors are
+        available and patched. Moving below the minimum commonly removes GA APIs
+        (e.g., apps/v1 workloads, storage and ingress v1 APIs), changes admission
+        defaults, and leaves critical CVE fixes unapplied. Running at or above the
+        recommended version matches what is exercised most extensively in CI and
+        receives the best operational guidance for upgrades and incident response.
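# Illustrative only, not part of this commit: the server version that the
# clusterVersion analyzer above evaluates can also be confirmed by hand
# against a cluster, for example:
#
#   kubectl version --output=json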
    - customResourceDefinition:
        checkName: Ingress
        customResourceDefinitionName: ingressroutes.contour.heptio.com
@@ -25,13 +37,37 @@ spec:
              message: Contour ingress not found!
          - pass:
              message: Contour ingress found!
+      docString: |
+        Title: Required CRDs and Ingress Capabilities
+        Requirement:
+          - Ingress Controller: Contour
+          - CRD must be present:
+            - Group: contour.heptio.com
+            - Kind: IngressRoute
+            - Version: v1beta1 or later served version
+        The ingress layer terminates TLS and routes external traffic to Services.
+        Contour relies on the IngressRoute CRD to express host/path routing, TLS
+        configuration, and policy. If the CRD is not installed and served by the
+        API server, Contour cannot reconcile desired state, leaving routes
+        unconfigured and traffic unreachable.
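# Illustrative sketch, not part of this commit: the same CRD presence check can
# be run by hand before installing, for example:
#
#   kubectl get crd ingressroutes.contour.heptio.com
#   kubectl api-resources --api-group=contour.heptio.com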
    - containerRuntime:
        outcomes:
          - pass:
              when: "== containerd"
              message: containerd container runtime was found.
          - fail:
              message: Did not find containerd container runtime.
+      docString: |
+        Title: Container Runtime Requirements
+        Requirement:
+          - Runtime: containerd (CRI)
+          - Kubelet cgroup driver: systemd
+          - CRI socket path: /run/containerd/containerd.sock
+        containerd (via the CRI) is the supported runtime for predictable container
+        lifecycle management. On modern distros (cgroup v2), kubelet and the OS must
+        both use the systemd cgroup driver to avoid resource accounting mismatches
+        that lead to unexpected OOMKills and throttling. The CRI socket path must
+        match kubelet configuration so the node can start and manage pods.
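# Illustrative sketch, not part of this commit: the systemd cgroup driver pairing
# described above is typically set on each node in the KubeletConfiguration, with
# containerd's runc options agreeing; exact paths and defaults vary by distro.
#
#   # /var/lib/kubelet/config.yaml
#   apiVersion: kubelet.config.k8s.io/v1beta1
#   kind: KubeletConfiguration
#   cgroupDriver: systemd
#
#   # /etc/containerd/config.toml
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true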
    - storageClass:
        checkName: Required storage classes
        storageClassName: "default"
@@ -40,6 +76,17 @@ spec:
              message: Could not find a storage class called default.
          - pass:
              message: All good on storage classes
+      docString: |
+        Title: Default Storage Class Requirements
+        Requirement:
+          - Storage Class: default
+          - Provisioner: Must support dynamic provisioning
+          - Access Modes: ReadWriteOnce minimum
+        A default storage class enables automatic persistent volume provisioning
+        for StatefulSets and PVC-backed workloads. Without it, pods requiring
+        persistent storage will remain in Pending state, unable to schedule.
+        The storage class must support at least ReadWriteOnce access mode for
+        single-pod workloads like databases and file servers.
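# Illustrative sketch, not part of this commit: a StorageClass named "default"
# that would satisfy this check might look like the following; the provisioner
# shown (rancher.io/local-path) is only a placeholder.
#
#   apiVersion: storage.k8s.io/v1
#   kind: StorageClass
#   metadata:
#     name: default
#     annotations:
#       storageclass.kubernetes.io/is-default-class: "true"
#   provisioner: rancher.io/local-path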
    - distribution:
        outcomes:
          - fail:
@@ -80,6 +127,17 @@ spec:
              message: Kind is a supported distribution
          - warn:
              message: Unable to determine the distribution of Kubernetes
+      docString: |
+        Title: Supported Kubernetes Distributions
+        Requirement:
+          - Production distributions: EKS, GKE, AKS, KURL, RKE2, K3S, DigitalOcean, OKE
+          - Development distributions: Kind (testing only)
+          - Unsupported: Docker Desktop, Microk8s, Minikube
+        This application requires production-grade Kubernetes distributions that
+        provide enterprise features like proper networking, storage integration,
+        and security policies. Development-focused distributions lack the stability,
+        performance characteristics, and operational tooling needed for reliable
+        application deployment and management.
    - nodeResources:
        checkName: Must have at least 3 nodes in the cluster, with 5 recommended
        outcomes:
@@ -93,6 +151,17 @@ spec:
              uri: https://kurl.sh/docs/install-with-kurl/adding-nodes
          - pass:
              message: This cluster has enough nodes.
+      docString: |
+        Title: Cluster Node Count Requirements
+        Requirement:
+          - Minimum: 3 nodes
+          - Recommended: 5 nodes
+          - High Availability: Odd number for quorum
+        A minimum of 3 nodes ensures basic high availability and allows for
+        rolling updates without service interruption. The recommended 5 nodes
+        provide better resource distribution, fault tolerance, and maintenance
+        windows. Odd numbers are preferred for etcd quorum and leader election
+        in distributed components.
    - nodeResources:
        checkName: Every node in the cluster must have at least 8 GB of memory, with 32 GB recommended
        outcomes:
@@ -106,6 +175,17 @@ spec:
              uri: https://kurl.sh/docs/install-with-kurl/system-requirements
          - pass:
              message: All nodes have at least 32 GB of memory.
+      docString: |
+        Title: Node Memory Requirements
+        Requirement:
+          - Minimum: 8 GB per node
+          - Recommended: 32 GB per node
+          - Reserved: ~2 GB for system processes
+        Each node requires sufficient memory for the kubelet, container runtime,
+        system processes, and application workloads. The 8 GB minimum accounts
+        for Kubernetes overhead and basic application needs. The 32 GB
+        recommendation provides headroom for memory-intensive workloads and
+        caching, and helps prevent OOMKills during traffic spikes or batch
+        processing.
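# Illustrative sketch, not part of this commit: the per-node capacity that the
# nodeResources analyzers in this spec evaluate (memory, CPU, ephemeral storage)
# can be inspected directly, for example:
#
#   kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory,EPHEMERAL:.status.capacity.ephemeral-storage'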
    - nodeResources:
        checkName: Total CPU Cores in the cluster is 4 or greater
        outcomes:
@@ -115,6 +195,17 @@ spec:
              uri: https://kurl.sh/docs/install-with-kurl/system-requirements
          - pass:
              message: There are at least 4 cores in the cluster
+      docString: |
+        Title: Cluster CPU Requirements
+        Requirement:
+          - Minimum: 4 total CPU cores across all nodes
+          - Distribution: At least 1 core per node recommended
+          - Architecture: x86_64 or arm64
+        The cluster needs sufficient CPU capacity for Kubernetes control plane
+        components, system daemons, and application workloads. A 4-core minimum
+        ensures basic functionality, but distributing cores across multiple nodes
+        provides better scheduling flexibility and fault tolerance than
+        concentrating them all on a single node.
    - nodeResources:
        checkName: Every node in the cluster must have at least 40 GB of ephemeral storage, with 100 GB recommended
        outcomes:
@@ -128,3 +219,14 @@ spec:
              uri: https://kurl.sh/docs/install-with-kurl/system-requirements
          - pass:
              message: All nodes have at least 100 GB of ephemeral storage.
+      docString: |
+        Title: Node Ephemeral Storage Requirements
+        Requirement:
+          - Minimum: 40 GB per node
+          - Recommended: 100 GB per node
+          - Usage: Container images, logs, temporary files
+        Ephemeral storage houses container images, pod logs, and temporary
+        files created by running containers. The 40 GB minimum covers basic
+        Kubernetes components and small applications. The 100 GB recommendation
+        accommodates larger container images, extensive logging, and temporary
+        data processing without triggering evictions due to disk pressure.
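# Illustrative usage, not part of this commit: once saved locally, a spec like
# this one is typically run with the preflight kubectl plugin, for example:
#
#   kubectl preflight ./preflight.yaml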