---
title: Troubleshooting CNI plugin-related errors
content_type: task
reviewers:
- mikebrow
- divya-mohan0209
weight: 10
---

<!-- overview -->

To avoid CNI plugin-related errors, verify that you are using or upgrading to a
container runtime that has been tested to work correctly with your version of
Kubernetes.

For example, the following container runtimes are being prepared, or have already been prepared, for Kubernetes v1.24:

* containerd v1.6.4 and later, v1.5.11 and later
* CRI-O v1.24.0 and later
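
To check which runtime and version a node is running, you can ask the cluster, or query the runtime directly on the node. The node-local commands below assume default installations and may differ on your distribution:

```bash
# Show the container runtime and its version for every node in the cluster
kubectl get nodes -o wide

# On a node running containerd, print the installed containerd version
containerd --version

# On a node running CRI-O, print the installed CRI-O version
crio --version
```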

## About the "Incompatible CNI versions" and "Failed to destroy network for sandbox" errors

Service issues exist for pod CNI network setup and tear down in containerd
v1.6.0-v1.6.3 when the CNI plugins have not been upgraded and/or the CNI config
version is not declared in the CNI config files. The containerd team reports, "these issues are resolved in containerd v1.6.4."

With containerd v1.6.0-v1.6.3, if you do not upgrade the CNI plugins and/or
declare the CNI config version, you might encounter the following "Incompatible
CNI versions" or "Failed to destroy network for sandbox" error conditions.

### Incompatible CNI versions error

If the version of your CNI plugin does not correctly match the plugin version in
the config (for example, because the config declares a later CNI version than the
plugin supports), the containerd log will likely show an error message similar to
the following when a pod starts:

```
incompatible CNI versions; config is \"1.0.0\", plugin supports [\"0.1.0\" \"0.2.0\" \"0.3.0\" \"0.3.1\" \"0.4.0\"]"
```

To fix this issue, [update your CNI plugins and CNI config files](#updating-your-cni-plugins-and-cni-config-files).
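
If you are unsure which CNI specification versions your installed plugin binaries actually support, you can probe them with the CNI `VERSION` operation. This is a minimal sketch that assumes the reference plugins are installed in the conventional `/opt/cni/bin` directory; the exact output format can vary between plugin releases:

```bash
# Ask the bridge plugin which CNI spec versions it supports.
# A plugin built before CNI v1.0.0 will not list "1.0.0" in its output,
# which is consistent with the "incompatible CNI versions" error above.
echo '{}' | CNI_COMMAND=VERSION /opt/cni/bin/bridge
```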

### Failed to destroy network for sandbox error

If the version of the plugin is missing in the CNI plugin config, the pod may
run. However, stopping the pod generates an error similar to:

```
ERRO[2022-04-26T00:43:24.518165483Z] StopPodSandbox for "b" failed
error="failed to destroy network for sandbox \"bbc85f891eaf060c5a879e27bba9b6b06450210161dfdecfbb2732959fb6500a\": invalid version \"\": the version is empty"
```

This error leaves the pod in the not-ready state with a network namespace still
attached. To recover from this problem, [edit the CNI config file](#updating-your-cni-plugins-and-cni-config-files) to add
the missing version information. The next attempt to stop the pod should
be successful.
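
One way to spot config files that do not declare a version is to search for the `cniVersion` key. The following sketch assumes your CNI config files live in the default `/etc/cni/net.d` directory:

```bash
# List CNI config files that do NOT contain a cniVersion key.
# Any file printed here needs a top-level entry such as
#   "cniVersion": "1.0.0"
# added before the pod can be torn down cleanly.
grep -L '"cniVersion"' /etc/cni/net.d/*.conf /etc/cni/net.d/*.conflist 2>/dev/null
```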

### Updating your CNI plugins and CNI config files

If you're using containerd v1.6.0-v1.6.3 and encountered "Incompatible CNI
versions" or "Failed to destroy network for sandbox" errors, consider updating
your CNI plugins and editing the CNI config files.

Here's an overview of the typical steps for each node:

1. [Safely drain and cordon the
node](/docs/tasks/administer-cluster/safely-drain-node/).
2. After stopping your container runtime and kubelet services, perform the
following upgrade operations:
 - If you're running CNI plugins, upgrade them to the latest version (a sketch
   of one way to do this follows these steps).
 - If you're using non-CNI plugins, replace them with CNI plugins. Use the
   latest version of the plugins.
 - Update the plugin configuration file to specify or match a version of the
   CNI specification that the plugin supports, as shown in the following ["An
   example containerd configuration
   file"](#an-example-containerd-configuration-file) section.
 - For `containerd`, ensure that you have installed the latest version (v1.0.0
   or later) of the CNI loopback plugin.
 - Upgrade node components (for example, the kubelet) to Kubernetes v1.24.
 - Upgrade to or install the most current version of the container runtime.
3. Bring the node back into your cluster by restarting your container runtime
and kubelet. Uncordon the node (`kubectl uncordon <nodename>`).
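
As one illustration of the plugin upgrade in step 2, the reference CNI plugins are published as release tarballs in the `containernetworking/plugins` repository. The sketch below assumes an amd64 Linux node, the conventional `/opt/cni/bin` install path, and an example release version; substitute the architecture and whichever release is current for your environment:

```bash
# Download a release of the reference CNI plugins and unpack it into the
# conventional plugin directory. Adjust the version and architecture as needed.
CNI_PLUGINS_VERSION="v1.1.1"   # example version; check the releases page for newer ones
ARCH="amd64"
curl -fsSL -o /tmp/cni-plugins.tgz \
  "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar -xzf /tmp/cni-plugins.tgz -C /opt/cni/bin
```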

## An example containerd configuration file

The following example shows a configuration for `containerd` runtime v1.6.x,
which supports a recent version of the CNI specification (v1.0.0).

Please see the documentation from your plugin and networking provider for
further instructions on configuring your system.

On Kubernetes, the containerd runtime adds a loopback interface, `lo`, to pods as a
default behavior. The containerd runtime configures the loopback interface via a
CNI plugin, `loopback`. The `loopback` plugin is distributed as part of the
`containerd` release packages that have the `cni` designation. `containerd`
v1.6.0 and later includes a CNI v1.0.0-compatible loopback plugin as well as
other default CNI plugins. The configuration for the loopback plugin is done
internally by containerd, and is set to use CNI v1.0.0. This also means that the
version of the `loopback` plugin must be v1.0.0 or later when this newer version
of `containerd` is started.

The following bash command generates an example CNI config. Here, the 1.0.0
value for the config version is assigned to the `cniVersion` field for use when
`containerd` invokes the CNI bridge plugin.

```bash
cat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{
            "subnet": "10.88.0.0/16"
          }],
          [{
            "subnet": "2001:db8:4860::/64"
          }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
EOF
```

Update the IP address ranges in the preceding example with ones that are based
on your use case and network addressing plan.
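
After writing the config, you can check that new pods receive addresses from the configured ranges. The following is a sketch of one possible verification, assuming a systemd-managed containerd and using a throwaway pod named `cni-test` (a name chosen only for this example):

```bash
# Restart containerd so that it reloads the CNI configuration
sudo systemctl restart containerd

# Start a throwaway pod and confirm its IP comes from the configured subnet
# (10.88.0.0/16 in the example above), then clean up
kubectl run cni-test --image=busybox:1.36 --restart=Never --command -- sleep 3600
kubectl get pod cni-test -o wide
kubectl delete pod cni-test
```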