docs/vendor/embedded-overview.mdx (27 additions & 9 deletions)
This section outlines some additional use cases for Embedded Cluster.

### NVIDIA GPU Operator

The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPUs. For more information about this operator, see the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html) documentation.

You can include the NVIDIA GPU Operator in your release as an additional Helm chart, or by using Embedded Cluster Helm extensions. For information about adding Helm extensions, see [extensions](/reference/embedded-config#extensions) in _Embedded Cluster Config_.

Using the NVIDIA GPU Operator with Embedded Cluster requires configuring the containerd options in the operator as follows:

```yaml
# Embedded Cluster Config

extensions:
  helm:
    repositories:
    - name: nvidia
      url: https://nvidia.github.io/gpu-operator
    charts:
    - name: gpu-operator
      chartname: nvidia/gpu-operator
      namespace: gpu-operator
      version: "v24.9.1"
      values: |
        # configure the containerd options
        toolkit:
          env:
          - name: CONTAINERD_CONFIG
            value: /etc/k0s/containerd.d/nvidia.toml
          - name: CONTAINERD_SOCKET
            value: /run/k0s/containerd.sock
```

When the containerd options are configured as shown above, the NVIDIA GPU Operator automatically creates the required configuration in the `/etc/k0s/containerd.d/nvidia.toml` file. It is not necessary to create this file manually.
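After installation, you can sanity-check the result on the cluster node. This is a hedged sketch, not part of the official procedure: it assumes the `gpu-operator` namespace and the `CONTAINERD_CONFIG` path from the configuration above, and it assumes you have `kubectl` access to the Embedded Cluster (for example, from a shell opened with your application's Embedded Cluster `shell` command).

```shell
# Confirm that the operator generated its containerd config
# (path matches the CONTAINERD_CONFIG value set above).
sudo ls -l /etc/k0s/containerd.d/nvidia.toml

# Check that the operator pods are running in the namespace
# configured in the Helm extension.
kubectl get pods -n gpu-operator

# Verify that GPUs are advertised as allocatable node resources.
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'
```

These commands only verify the installation; they require a node with an NVIDIA GPU and will show no allocatable GPUs otherwise.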