
Add support for Kubernetes v1.34.0-beta.0 #21261

@medyagh

Description


The PR that bumps the Kubernetes version to v1.34.0-beta.0 fails its tests:
#21223

I tried manually on that PR's code (#21223):

mk start -p newk8s --kubernetes-version=v1.34.0-beta.0 --alsologtostderr

It gets stuck at "Booting up control plane" (for 4-5 minutes):

I0807 14:24:59.134810   19360 out.go:250]     ▪ Generating certificates and keys ...
🐳  Preparing Kubernetes v1.34.0-beta.0 on Docker 28.3.2 ...
    ▪ Booting up control plane ...◜I0807 14:25:01.780774   19360 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.003142001s
    ▪ Booting up control plane ...◜I0807 14:25:32.145812   19360 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 31.583015616s
    ▪ Booting up control plane ...◜

The exit log:

▪ Booting up control plane ...◜I0807 14:29:00.495779   19360 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.001326055s
I0807 14:29:00.495861   19360 kubeadm.go:310] 
I0807 14:29:00.495991   19360 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
I0807 14:29:00.496126   19360 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0807 14:29:00.496280   19360 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
I0807 14:29:00.496419   19360 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I0807 14:29:00.496645   19360 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
I0807 14:29:00.496806   19360 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
I0807 14:29:00.496828   19360 kubeadm.go:310] 
I0807 14:29:00.498473   19360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0807 14:29:00.498799   19360 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
I0807 14:29:00.498926   19360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
W0807 14:29:00.499159   19360 out.go:283] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.0-beta.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost newk8s] and IPs [192.168.105.74 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost newk8s] and IPs [192.168.105.74 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.668584ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 1.003142001s
[control-plane-check] kube-scheduler is healthy after 31.583015616s
[control-plane-check] kube-apiserver is not healthy after 4m0.001326055s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'


stderr:
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher

💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.0-beta.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost newk8s] and IPs [192.168.105.74 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost newk8s] and IPs [192.168.105.74 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.668584ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 1.003142001s
[control-plane-check] kube-scheduler is healthy after 31.583015616s
[control-plane-check] kube-apiserver is not healthy after 4m0.001326055s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'


stderr:
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher

I0807 14:29:00.500229   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0807 14:29:01.161080   19360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0807 14:29:01.166311   19360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0807 14:29:01.169221   19360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0807 14:29:01.169229   19360 kubeadm.go:157] found existing configuration files:

I0807 14:29:01.169268   19360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0807 14:29:01.171896   19360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0807 14:29:01.171937   19360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0807 14:29:01.174998   19360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0807 14:29:01.177923   19360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0807 14:29:01.177962   19360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0807 14:29:01.180926   19360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0807 14:29:01.183870   19360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0807 14:29:01.183905   19360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0807 14:29:01.186866   19360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0807 14:29:01.189533   19360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0807 14:29:01.189570   19360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0807 14:29:01.192292   19360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0807 14:29:01.204815   19360 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0-beta.0
I0807 14:29:01.204846   19360 kubeadm.go:310] [preflight] Running pre-flight checks
I0807 14:29:01.228152   19360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0807 14:29:01.228203   19360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0807 14:29:01.228235   19360 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0807 14:29:01.235429   19360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0807 14:29:01.238047   19360 out.go:250]     ▪ Generating certificates and keys ...
    ▪ Generating certificates and keys ...◜I0807 14:29:01.238080   19360 kubeadm.go:310] [certs] Using existing ca certificate authority
I0807 14:29:01.238106   19360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0807 14:29:01.238130   19360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0807 14:29:01.238148   19360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0807 14:29:01.238183   19360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0807 14:29:01.238207   19360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0807 14:29:01.238238   19360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0807 14:29:01.238266   19360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0807 14:29:01.238295   19360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0807 14:29:01.238331   19360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0807 14:29:01.238355   19360 kubeadm.go:310] [certs] Using the existing "sa" key
I0807 14:29:01.238381   19360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    ▪ Generating certificates and keys ...◝I0807 14:29:01.347249   19360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
    ▪ Generating certificates and keys ...◟I0807 14:29:01.567654   19360 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0807 14:29:01.635533   19360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
    ▪ Generating certificates and keys ...◞I0807 14:29:01.894158   19360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    ▪ Generating certificates and keys ...◟I0807 14:29:01.947721   19360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0807 14:29:01.947892   19360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0807 14:29:01.948574   19360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0807 14:29:01.952651   19360 out.go:250]     ▪ Booting up control plane ...
I0807 14:29:01.952696   19360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0807 14:29:01.952727   19360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0807 14:29:01.952768   19360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0807 14:29:01.955667   19360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0807 14:29:01.955705   19360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0807 14:29:01.957525   19360 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0807 14:29:01.957611   19360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0807 14:29:01.957634   19360 kubeadm.go:310] [kubelet-start] Starting the kubelet
    ▪ Booting up control plane ...◜I0807 14:29:02.070580   19360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0807 14:29:02.070628   19360 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
    ▪ Booting up control plane ...◝I0807 14:29:02.572800   19360 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.74537ms
I0807 14:29:02.576094   19360 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I0807 14:29:02.576206   19360 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
I0807 14:29:02.576314   19360 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I0807 14:29:02.576403   19360 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
    ▪ Booting up control plane ...◞I0807 14:29:03.120720   19360 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 544.581777ms
    ▪ Booting up control plane ...◝I0807 14:29:34.168219   19360 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 31.588552126s
    ▪ Booting up control plane ...◟I0807 14:33:02.582427   19360 kubeadm.go:310] [control-plane-check] kube-apiserver is not healthy after 4m0.001452919s
I0807 14:33:02.582607   19360 kubeadm.go:310] 
I0807 14:33:02.582819   19360 kubeadm.go:310] A control plane component may have crashed or exited when started by the container runtime.
I0807 14:33:02.582969   19360 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0807 14:33:02.583122   19360 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
I0807 14:33:02.583329   19360 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I0807 14:33:02.583508   19360 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
I0807 14:33:02.583706   19360 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
I0807 14:33:02.583734   19360 kubeadm.go:310] 
I0807 14:33:02.585924   19360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0807 14:33:02.586204   19360 kubeadm.go:310] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
I0807 14:33:02.586273   19360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
I0807 14:33:02.586337   19360 kubeadm.go:394] duration metric: took 8m8.543944292s to StartCluster
I0807 14:33:02.586534   19360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0807 14:33:02.587008   19360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0807 14:33:02.621286   19360 cri.go:89] found id: "9a6de111de48c51ecd1972ab9afbf789001a68a29fc0deea7baacb357c376b4f"
I0807 14:33:02.621326   19360 cri.go:89] found id: ""
I0807 14:33:02.621336   19360 logs.go:282] 1 containers: [9a6de111de48c51ecd1972ab9afbf789001a68a29fc0deea7baacb357c376b4f]
I0807 14:33:02.621626   19360 ssh_runner.go:195] Run: which crictl
I0807 14:33:02.623871   19360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0807 14:33:02.624012   19360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
    ▪ Booting up control plane ...◜I0807 14:33:02.642658   19360 cri.go:89] found id: "73b2f2e735102d75ebb9e99a95c0a8c27e6270f65f9b75459323bd104023f0ae"
I0807 14:33:02.642670   19360 cri.go:89] found id: ""
I0807 14:33:02.642675   19360 logs.go:282] 1 containers: [73b2f2e735102d75ebb9e99a95c0a8c27e6270f65f9b75459323bd104023f0ae]
I0807 14:33:02.642777   19360 ssh_runner.go:195] Run: which crictl
I0807 14:33:02.644302   19360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0807 14:33:02.644354   19360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0807 14:33:02.657180   19360 cri.go:89] found id: ""
I0807 14:33:02.657195   19360 logs.go:282] 0 containers: []
W0807 14:33:02.657199   19360 logs.go:284] No container was found matching "coredns"
I0807 14:33:02.657202   19360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0807 14:33:02.657283   19360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0807 14:33:02.669080   19360 cri.go:89] found id: "02cc719b1d027ea15e253c233764f303e71eabb4ed9481abbb5050f59377b0a9"
I0807 14:33:02.669093   19360 cri.go:89] found id: ""
I0807 14:33:02.669097   19360 logs.go:282] 1 containers: [02cc719b1d027ea15e253c233764f303e71eabb4ed9481abbb5050f59377b0a9]
I0807 14:33:02.669176   19360 ssh_runner.go:195] Run: which crictl
I0807 14:33:02.670307   19360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0807 14:33:02.670356   19360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0807 14:33:02.680127   19360 cri.go:89] found id: ""
I0807 14:33:02.680140   19360 logs.go:282] 0 containers: []
W0807 14:33:02.680144   19360 logs.go:284] No container was found matching "kube-proxy"
I0807 14:33:02.680147   19360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0807 14:33:02.680223   19360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0807 14:33:02.689812   19360 cri.go:89] found id: "dea3cc6cde894e347b1330d65a1abdb1a4c04766ccb7e69518fefd5643543ac4"
I0807 14:33:02.689822   19360 cri.go:89] found id: ""
I0807 14:33:02.689826   19360 logs.go:282] 1 containers: [dea3cc6cde894e347b1330d65a1abdb1a4c04766ccb7e69518fefd5643543ac4]
I0807 14:33:02.689893   19360 ssh_runner.go:195] Run: which crictl
I0807 14:33:02.690878   19360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0807 14:33:02.690932   19360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0807 14:33:02.700621   19360 cri.go:89] found id: ""
I0807 14:33:02.700637   19360 logs.go:282] 0 containers: []
W0807 14:33:02.700641   19360 logs.go:284] No container was found matching "kindnet"
I0807 14:33:02.700650   19360 logs.go:123] Gathering logs for container status ...
I0807 14:33:02.700654   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0807 14:33:02.711516   19360 logs.go:123] Gathering logs for describe nodes ...
I0807 14:33:02.711532   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0807 14:33:02.734875   19360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
E0807 21:33:02.763329    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.763419    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.764753    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.764859    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.766115    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
E0807 21:33:02.763329    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.763419    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.764753    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.764859    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
E0807 21:33:02.766115    5221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0807 14:33:02.734922   19360 logs.go:123] Gathering logs for etcd [73b2f2e735102d75ebb9e99a95c0a8c27e6270f65f9b75459323bd104023f0ae] ...
I0807 14:33:02.734928   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b2f2e735102d75ebb9e99a95c0a8c27e6270f65f9b75459323bd104023f0ae"
    ▪ Booting up control plane ...◝I0807 14:33:02.745782   19360 logs.go:123] Gathering logs for kubelet ...
I0807 14:33:02.745797   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0807 14:33:02.773899   19360 logs.go:123] Gathering logs for dmesg ...
I0807 14:33:02.773915   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0807 14:33:02.778364   19360 logs.go:123] Gathering logs for kube-apiserver [9a6de111de48c51ecd1972ab9afbf789001a68a29fc0deea7baacb357c376b4f] ...
I0807 14:33:02.778374   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a6de111de48c51ecd1972ab9afbf789001a68a29fc0deea7baacb357c376b4f"
I0807 14:33:02.800175   19360 logs.go:123] Gathering logs for kube-scheduler [02cc719b1d027ea15e253c233764f303e71eabb4ed9481abbb5050f59377b0a9] ...
I0807 14:33:02.800187   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02cc719b1d027ea15e253c233764f303e71eabb4ed9481abbb5050f59377b0a9"
I0807 14:33:02.818825   19360 logs.go:123] Gathering logs for kube-controller-manager [dea3cc6cde894e347b1330d65a1abdb1a4c04766ccb7e69518fefd5643543ac4] ...
I0807 14:33:02.818842   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea3cc6cde894e347b1330d65a1abdb1a4c04766ccb7e69518fefd5643543ac4"
I0807 14:33:02.828968   19360 logs.go:123] Gathering logs for Docker ...
I0807 14:33:02.828980   19360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
    ▪ Booting up control plane ...◞W0807 14:33:02.847209   19360 out.go:432] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.0-beta.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.74537ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 544.581777ms
[control-plane-check] kube-scheduler is healthy after 31.588552126s
[control-plane-check] kube-apiserver is not healthy after 4m0.001452919s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'


stderr:
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W0807 14:33:02.847273   19360 out.go:283] 

W0807 14:33:02.847313   19360 out.go:283] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.0-beta.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.74537ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 544.581777ms
[control-plane-check] kube-scheduler is healthy after 31.588552126s
[control-plane-check] kube-apiserver is not healthy after 4m0.001452919s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'


stderr:
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.0-beta.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.74537ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 544.581777ms
[control-plane-check] kube-scheduler is healthy after 31.588552126s
[control-plane-check] kube-apiserver is not healthy after 4m0.001452919s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'


stderr:
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher

W0807 14:33:02.847665   19360 out.go:283] 

W0807 14:33:02.848350   19360 out.go:306] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0807 14:33:02.859416   19360 out.go:201] 

W0807 14:33:02.871439   19360 out.go:283] ❌  Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.0-beta.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.74537ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 544.581777ms
[control-plane-check] kube-scheduler is healthy after 31.588552126s
[control-plane-check] kube-apiserver is not healthy after 4m0.001452919s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'


stderr:
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher

❌  Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.0-beta.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.74537ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.105.74:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 544.581777ms
[control-plane-check] kube-scheduler is healthy after 31.588552126s
[control-plane-check] kube-apiserver is not healthy after 4m0.001452919s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'


stderr:
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: kube-apiserver check failed at https://192.168.105.74:8443/livez: client rate limiter Wait returned an error: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher

W0807 14:33:02.871622   19360 out.go:283] 

W0807 14:33:02.872279   19360 out.go:306] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0807 14:33:02.883474   19360 out.go:201] 

14:33:02 medya/workspace/minikube
fix_newesk8sttest ✓
$ 
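
For reference, the troubleshooting steps kubeadm suggests in the output above can be run inside the node to inspect the failing kube-apiserver container. A minimal sketch, assuming the newk8s profile's node is still up and the default cri-dockerd socket:

# open a shell inside the minikube node for the newk8s profile
minikube ssh -p newk8s
# list the Kubernetes containers, including exited ones
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
# inspect the logs of the failing container (replace CONTAINERID with the ID from the previous command)
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID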

Three other problems

1. It doesn't produce any new log output while it waits, and it's not clear which part of the code is actually running.
2. It doesn't time out within a reasonable window; it keeps going for a very long time. I would have expected a sane timeout (around 6 minutes); see the sketch after this list.
3. The exit advice prints twice.
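
As a stopgap while debugging problem 2, the run can be bounded externally, for example with GNU coreutils timeout; this merely kills the client after the given duration and is not a fix for the missing internal timeout:

# hypothetical external bound: abort the start attempt after 6 minutes
timeout 6m minikube start -p newk8s --kubernetes-version=v1.34.0-beta.0 --alsologtostderr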

Labels: kind/bug, priority/important-soon
