docs/vendor/embedded-overview.mdx
Using the NVIDIA GPU Operator with Embedded Cluster requires configuring the containerd options in the operator as follows:

```yaml
# Embedded Cluster Config

extensions:
  helm:
    repositories:
      - name: nvidia
        url: https://nvidia.github.io/gpu-operator
    charts:
      - name: gpu-operator
        chartname: nvidia/gpu-operator
        namespace: gpu-operator
        version: "v24.9.1"
        values: |
          # configure the containerd options
          toolkit:
            env:
              - name: CONTAINERD_CONFIG
                value: /etc/k0s/containerd.d/nvidia.toml
              - name: CONTAINERD_SOCKET
                value: /run/k0s/containerd.sock
```
When the containerd options are configured as shown above, the NVIDIA GPU Operator automatically creates the required configurations in the `/etc/k0s/containerd.d/nvidia.toml` file. It is not necessary to create this file manually.
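Before shipping the release, it can be worth sanity-checking that the `values` fragment actually carries both containerd environment variables, since a typo there silently falls back to the operator's default containerd paths. A minimal local sketch (the `/tmp/gpu-operator-values.yaml` scratch file is hypothetical, not part of the Embedded Cluster workflow):

```shell
# Write the values fragment from the Embedded Cluster Config above to a scratch file.
cat > /tmp/gpu-operator-values.yaml <<'EOF'
toolkit:
  env:
    - name: CONTAINERD_CONFIG
      value: /etc/k0s/containerd.d/nvidia.toml
    - name: CONTAINERD_SOCKET
      value: /run/k0s/containerd.sock
EOF

# Confirm both k0s-specific paths are present before embedding the fragment in the chart.
grep -q '/etc/k0s/containerd.d/nvidia.toml' /tmp/gpu-operator-values.yaml \
  && grep -q '/run/k0s/containerd.sock' /tmp/gpu-operator-values.yaml \
  && echo "containerd options set"
```

On a running cluster, the equivalent check is simply that `/etc/k0s/containerd.d/nvidia.toml` exists on each GPU node after the operator reconciles.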