Without this option, you might observe this error when running GPU containers:

``Failed to initialize NVML: Insufficient Permissions``.

However, using this option disables SELinux separation in the container and the container is executed in an unconfined type. Review the SELinux policies on your system.

## Containers losing access to GPUs with error: "Failed to initialize NVML: Unknown Error"

Under specific conditions, containerized GPU workloads may suddenly lose access to their GPUs.
This situation occurs when `systemd` is used to manage the cgroups of the container and it is triggered to reload any unit files that reference NVIDIA GPUs (for example, with something as simple as `systemctl daemon-reload`).
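
As a hedged illustration of the trigger, assuming a running GPU container started with Docker (the container name `gpu-test` is hypothetical), the loss of access can look like this:

```console
$ docker exec gpu-test nvidia-smi -L   # initially lists the GPUs
$ sudo systemctl daemon-reload         # systemd reloads unit files that reference the GPUs
$ docker exec gpu-test nvidia-smi -L   # now fails with the NVML error shown below
```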
When the container loses access to the GPU, you will see the following error message from the console output:

```console
Failed to initialize NVML: Unknown Error
```

The container needs to be deleted once the issue occurs.
When it is restarted (manually or automatically, depending on whether you are using a container orchestration platform), it regains access to the GPU.

The issue originates from the fact that recent versions of `runc` require that symlinks be present under `/dev/char` for any device nodes being injected into a container. Unfortunately, these symlinks are not present for NVIDIA devices, and the NVIDIA GPU driver does not provide a means for them to be created automatically.
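
One hedged way to see whether these symlinks exist on a node (a quick check, not part of the official procedure) is to list `/dev/char` and look for entries pointing at the NVIDIA device nodes:

```console
$ ls -l /dev/char/ | grep -i nvidia
# No output means the symlinks are missing and the node can hit this issue.
```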
A fix will be present in the next patch release of all supported NVIDIA GPU drivers.

### Affected environments

You may be affected by this issue if you use `runc` and enable `systemd cgroup` management in the high-level container runtime.

```{note}
If the system is NOT using `systemd` to manage `cgroups`, then it is NOT subject to this issue.
```

Below is a full list of affected environments:

- Docker environment using `containerd` / `runc` with the following configurations:
  - `cgroup driver` enabled with `systemd`.
    For example, the parameter `"exec-opts": ["native.cgroupdriver=systemd"]` is set in `/etc/docker/daemon.json`.
  - A newer Docker version is used where `systemd cgroup` management is the default, such as on Ubuntu 22.04.

  To check if Docker uses systemd cgroup management, run the following command (the output below indicates that the systemd cgroup driver is enabled):

  ```console
  $ docker info
  ...
  Cgroup Driver: systemd
  Cgroup Version: 1
  ```

- K8s environment using `containerd` / `runc` with the following configurations:
  - `SystemdCgroup = true` in the containerd configuration file (usually located in `/etc/containerd/config.toml`) as shown below:
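
    A representative snippet, assuming the default CRI `runc` runtime section of a containerd 1.x configuration (verify the exact section names against your own `config.toml`):

    ```console
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    ```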
- K8s environment (including OpenShift) using `cri-o` / `runc` with the following configurations:
  - `cgroup_manager` enabled with systemd in the cri-o configuration file (usually located in `/etc/crio/crio.conf` or `/etc/crio/crio.conf.d/00-default`) as shown below (sample from OpenShift):

    ```console
    [crio.runtime]
    ...
    cgroup_manager = "systemd"

    hooks_dir = [
    "/etc/containers/oci/hooks.d",
    "/run/containers/oci/hooks.d",
    "/usr/share/containers/oci/hooks.d",
    ]
    ```

Podman environments use `crun` by default and are not subject to this issue unless `runc` is configured as the low-level container runtime.
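
A hedged way to confirm which OCI runtime Podman is using (the Go template path below is an assumption; plain `podman info` output also shows the runtime):

```console
$ podman info --format '{{.Host.OCIRuntime.Name}}'
crun
```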
### How to check if you are affected

You can use the following steps to confirm that your system is affected. After you implement one of the workarounds (mentioned in the next section), you can repeat the steps to confirm that the error is no longer reproducible.

#### For Docker environments

1. Run a test container:

   ```console
   $ docker run -d --rm --runtime=nvidia --gpus all \
   ```

The following workarounds are available for both standalone Docker environments and Kubernetes environments.
### For Docker environments

The recommended workaround for Docker environments is to **use the `nvidia-ctk` utility.**
The NVIDIA Container Toolkit v1.12.0 and later includes this utility for creating symlinks in `/dev/char` for all possible NVIDIA device nodes required for using GPUs in containers.
This can be run as follows:

1. Run `nvidia-ctk`:

   ```console
   $ sudo nvidia-ctk system create-dev-char-symlinks \
       --create-all
   ```

   In cases where the NVIDIA GPU Driver Container is used, the path to the driver installation must be specified. In this case the command should be modified to:

   ```console
   $ sudo nvidia-ctk system create-dev-char-symlinks \
       --create-all \
       --driver-root={{NVIDIA_DRIVER_ROOT}}
   ```

   Where `{{NVIDIA_DRIVER_ROOT}}` is the path to which the NVIDIA GPU Driver container installs the NVIDIA GPU driver and creates the NVIDIA device nodes.

1. Configure this command to run at boot on each node where GPUs will be used in containers.
   The command requires that the NVIDIA driver kernel modules have been loaded at the point where it is run.

   A simple `udev` rule to enforce this can be seen below:

   ```console
   # This will create /dev/char symlinks to all device nodes
   ACTION=="add", DEVPATH=="/bus/pci/drivers/nvidia", RUN+="/usr/bin/nvidia-ctk system create-dev-char-symlinks --create-all"
   ```

   A good place to install this rule would be in `/lib/udev/rules.d/71-nvidia-dev-char.rules`.
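
As a sketch of how the rule might be put in place (the file name follows the suggestion above; the paths and the one-off manual run are assumptions to adapt to your distribution):

```console
# Install the rule and make udev aware of it
$ sudo cp 71-nvidia-dev-char.rules /lib/udev/rules.d/71-nvidia-dev-char.rules
$ sudo udevadm control --reload-rules

# The rule only fires on an "add" event for the driver, so create the symlinks once by hand as well
$ sudo nvidia-ctk system create-dev-char-symlinks --create-all
```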
381
+
382
+
Some additional workarounds for Docker environments:
383
+
384
+
- **Explicitly disabling systemd cgroup management in Docker.**
385
+
- Set the parameter `"exec-opts": ["native.cgroupdriver=cgroupfs"]` in the `/etc/docker/daemon.json` file and restart docker.
386
+
- **Downgrading to `docker.io` packages where `systemd` is not the default `cgroup` manager.**
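
A minimal sketch of the first option, assuming an otherwise default `/etc/docker/daemon.json` (merge the key into your existing file rather than replacing it):

```console
$ cat /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
$ sudo systemctl restart docker
```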
### For K8s environments

The recommended workaround is to deploy GPU Operator 22.9.2 or later to automatically fix the issue on all K8s nodes of the cluster.
The fix is integrated inside the validator pod, which runs when a new node is deployed or at every reboot of the node.

Some additional workarounds for Kubernetes environments:

- For deployments using the standalone k8s-device-plugin (that is, not through the use of the operator), installing a `udev` rule as described in the previous section works around this issue. Be sure to pass the correct `{{NVIDIA_DRIVER_ROOT}}` in cases where the driver container is also in use.

- Explicitly disabling `systemd cgroup` management in `containerd` or `cri-o` (a containerd sketch follows this list):
  - Remove the parameter `cgroup_manager = "systemd"` from the `cri-o` configuration file (usually located in `/etc/crio/crio.conf` or `/etc/crio/crio.conf.d/00-default`) and restart `cri-o`.

- Downgrading to a version of the `containerd.io` package where `systemd` is not the default `cgroup` manager (and not overriding that, of course).
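
For the containerd side of that option, a minimal sketch of the change, assuming the default CRI `runc` runtime section (check the section names against your own `/etc/containerd/config.toml`):

```console
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
```

Then restart containerd:

```console
$ sudo systemctl restart containerd
```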