Error running command - could not find a ready tiller pod #30

@yoplait

Description

Hi team, looks like there are some issues with the scripts:

null_resource.kube-cni: Provisioning with 'local-exec'...
null_resource.kube-cni (local-exec): Executing: ["/bin/sh" "-c" "KUBECONFIG=secrets/admin.conf helm install -n kube-system hcloud-csi-driver mlohr/hcloud-csi-driver --set csiDriver.secret.create=true --set csiDriver.secret.hcloudApiToken=0lQ5BEtHPUxodken3TCqF6pR7ZA112DSFmf5K71mEqM9YVUOSIiOj8Kt68LNM2bV"]
null_resource.kube-cni (local-exec): Error: could not find a ready tiller pod
╷
│ Error: local-exec provisioner error
│
│   with null_resource.kube-cni,
│   on 03-kube-post-init.tf line 59, in resource "null_resource" "kube-cni":
│   59:   provisioner "local-exec" {
│
│ Error running command 'KUBECONFIG=secrets/admin.conf helm install -n kube-system hcloud-csi-driver mlohr/hcloud-csi-driver --set csiDriver.secret.create=true --set
│ csiDriver.secret.hcloudApiToken=0lQ5BEtHPUxodken3TCqF6pR7ZA112DSFmf5K71mEqM9YVUOSIiOj8Kt68LNM2bV': exit status 1. Output: Error: could not find a ready tiller pod
│
╵
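
Since the error is "could not find a ready tiller pod", a quick sanity check before retrying terraform apply is whether the tiller deployment ever becomes ready at all. A minimal sketch, assuming the deployment is named tiller-deploy (as in the pod listing below) and a Helm 2 client:

# Does tiller ever become ready? Times out after 5 minutes
KUBECONFIG=secrets/admin.conf kubectl -n kube-system rollout status deployment/tiller-deploy --timeout=5m

# Helm 2's "helm version" has to reach tiller, so it doubles as a connectivity check
KUBECONFIG=secrets/admin.conf helm version

In this cluster it never does, because the tiller pods themselves are stuck in Pending (see below).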

I am trying to deploy a cluster with 3 masters and 2 workers, and it looks like there is some issue with the CNI and CoreDNS pods:

| => KUBECONFIG=secrets/admin.conf kubectl get nodes
KUBECONFIG=secrets/admin.conf kubectl get pods -A -o wide
NAME                    STATUS     ROLES    AGE     VERSION
k8s-helsinki-master-1   NotReady   master   10m     v1.18.6
k8s-helsinki-master-2   NotReady   master   8m52s   v1.18.6
k8s-helsinki-master-3   NotReady   master   7m41s   v1.18.6
k8s-helsinki-node-1     NotReady   <none>   5m53s   v1.18.6
k8s-helsinki-node-2     NotReady   <none>   6m13s   v1.18.6
________________________________________________________________________________
| ~/Documents/Code/ubloquity/terraform-k8s-hetzner-DigitalOcean-Federation/hetzner_01 @ jperez-mbp (jperez)
| => KUBECONFIG=secrets/admin.conf kubectl get pods -A -o wide
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-9rj9r                        0/1     Pending   0          10m     <none>          <none>                  <none>           <none>
kube-system   coredns-66bff467f8-qqzvp                        0/1     Pending   0          10m     <none>          <none>                  <none>           <none>
kube-system   etcd-k8s-helsinki-master-1                      1/1     Running   0          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   etcd-k8s-helsinki-master-2                      1/1     Running   0          8m48s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   etcd-k8s-helsinki-master-3                      1/1     Running   0          7m37s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-apiserver-k8s-helsinki-master-1            1/1     Running   0          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-apiserver-k8s-helsinki-master-2            1/1     Running   0          8m51s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-apiserver-k8s-helsinki-master-3            1/1     Running   0          7m40s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-controller-manager-k8s-helsinki-master-1   1/1     Running   1          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-controller-manager-k8s-helsinki-master-2   1/1     Running   0          8m51s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-controller-manager-k8s-helsinki-master-3   1/1     Running   0          7m41s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-proxy-6mhh7                                1/1     Running   0          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-proxy-fxmhr                                1/1     Running   0          7m42s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-proxy-h4lt9                                1/1     Running   0          5m54s   65.21.251.5     k8s-helsinki-node-1     <none>           <none>
kube-system   kube-proxy-r85mj                                1/1     Running   0          8m52s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-proxy-v2fvk                                1/1     Running   0          6m14s   65.108.86.224   k8s-helsinki-node-2     <none>           <none>
kube-system   kube-scheduler-k8s-helsinki-master-1            1/1     Running   1          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-scheduler-k8s-helsinki-master-2            1/1     Running   0          8m52s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-scheduler-k8s-helsinki-master-3            1/1     Running   0          7m40s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   tiller-deploy-56b574c76d-5t8bs                  0/1     Pending   0          5m43s   <none>          <none>                  <none>           <none>
kube-system   tiller-deploy-587d84cd48-jl9nl                  0/1     Pending   0          5m47s   <none>          <none>                  <none>           <none>

Any ideas?
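
All five nodes are NotReady, which on a fresh kubeadm cluster usually means no CNI plugin has been applied yet, and CoreDNS stays Pending until it is. A quick check I would run (just a sketch, node name taken from the list above):

# If the CNI is missing, the node's Ready condition reason is KubeletNotReady with a
# "network plugin is not ready: cni config uninitialized" style message
KUBECONFIG=secrets/admin.conf kubectl describe node k8s-helsinki-master-1 | grep -iE 'Taints|KubeletNotReady|NetworkReady'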

| => KUBECONFIG=secrets/admin.conf kubectl get pods --namespace=kube-system -o wide

NAME                                            READY   STATUS    RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
coredns-66bff467f8-9rj9r                        0/1     Pending   0          11m     <none>          <none>                  <none>           <none>
coredns-66bff467f8-qqzvp                        0/1     Pending   0          11m     <none>          <none>                  <none>           <none>
etcd-k8s-helsinki-master-1                      1/1     Running   0          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
etcd-k8s-helsinki-master-2                      1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
etcd-k8s-helsinki-master-3                      1/1     Running   0          9m1s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-apiserver-k8s-helsinki-master-1            1/1     Running   0          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-apiserver-k8s-helsinki-master-2            1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-apiserver-k8s-helsinki-master-3            1/1     Running   0          9m4s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-controller-manager-k8s-helsinki-master-1   1/1     Running   1          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-controller-manager-k8s-helsinki-master-2   1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-controller-manager-k8s-helsinki-master-3   1/1     Running   0          9m5s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-proxy-6mhh7                                1/1     Running   0          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-proxy-fxmhr                                1/1     Running   0          9m6s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-proxy-h4lt9                                1/1     Running   0          7m18s   65.21.251.5     k8s-helsinki-node-1     <none>           <none>
kube-proxy-r85mj                                1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-proxy-v2fvk                                1/1     Running   0          7m38s   65.108.86.224   k8s-helsinki-node-2     <none>           <none>
kube-scheduler-k8s-helsinki-master-1            1/1     Running   1          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-scheduler-k8s-helsinki-master-2            1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-scheduler-k8s-helsinki-master-3            1/1     Running   0          9m4s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
tiller-deploy-56b574c76d-5t8bs                  0/1     Pending   0          7m7s    <none>          <none>                  <none>           <none>
tiller-deploy-587d84cd48-jl9nl                  0/1     Pending   0          7m11s   <none>          <none>                  <none>           <none>

Following some links (e.g. "coredns-pod-is-not-running-in-kubernetes"), I described the CoreDNS pod:

| => KUBECONFIG=secrets/admin.conf kubectl describe pods coredns-66bff467f8-9rj9r --namespace=kube-system
Name:                 coredns-66bff467f8-9rj9r
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 <none>
Labels:               k8s-app=kube-dns
                      pod-template-hash=66bff467f8
Annotations:          <none>
Status:               Pending
IP:
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  11m                default-scheduler  0/2 nodes are available: 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  10m                default-scheduler  0/3 nodes are available: 3 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  9m7s               default-scheduler  0/5 nodes are available: 5 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  9m7s               default-scheduler  0/5 nodes are available: 5 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  12m (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  12m (x3 over 12m)  default-scheduler  0/2 nodes are available: 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
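
So the scheduler events point at the node.cloudprovider.kubernetes.io/uninitialized taint rather than at CoreDNS itself. As far as I understand it, kubelet adds that taint when it runs with --cloud-provider=external, and only a cloud controller manager (for Hetzner that would be the hcloud-cloud-controller-manager) removes it once it has initialized the node. There is no CCM pod in the listings above, so nothing ever clears the taint and CoreDNS/tiller stay Pending. Rough checks, plus a manual workaround that only papers over the missing CCM:

# Is a cloud controller manager running at all?
KUBECONFIG=secrets/admin.conf kubectl -n kube-system get pods | grep -i cloud-controller

# Which nodes still carry the uninitialized taint?
KUBECONFIG=secrets/admin.conf kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'

# Workaround only: remove the taint by hand so CoreDNS/tiller can be scheduled
KUBECONFIG=secrets/admin.conf kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized-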
