As a cluster administrator, you can create a ClusterPolicy using the OpenShift Container Platform CLI.
Create the cluster policy using the CLI:
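The exact workflow depends on how you obtained the ``ClusterPolicy`` manifest. As a minimal sketch, assuming the manifest has already been saved locally as ``clusterpolicy.json`` (the file name is illustrative), the policy can be created and then inspected as follows:

.. code-block:: console

   # Create the ClusterPolicy from a locally saved manifest (the file name is illustrative).
   $ oc apply -f clusterpolicy.json

   # Confirm that the ClusterPolicy resource exists and check its state.
   $ oc get clusterpolicy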
Without additional configuration, the GPU Operator creates a default set of devices on all GPUs.
To learn more about the vGPU Device Manager and how to configure which types of vGPU devices get created in your cluster, refer to :ref:`vGPU Device Configuration<vgpu-device-configuration>`.
Creating a ClusterPolicy for the GPU Operator using the OpenShift Container Platform Web Console
#. Expand the **NVIDIA GPU/vGPU Driver config** section.
#. Expand the **Sandbox Workloads config** section and select the checkbox to enable sandbox workloads.
The sandbox workloads setting in ``ClusterPolicy`` controls whether the GPU Operator can provision GPU worker nodes for virtual machine workloads, in addition to container workloads. This setting is disabled by default, meaning all nodes are provisioned with the same software, which enables container workloads, and the ``nvidia.com/gpu.workload.config`` node label is not used.
The term ``sandboxing`` refers to running software in a separate, isolated environment, typically for added security (for example, a virtual machine). We use the term ``sandbox workloads`` to mean workloads that run in a virtual machine, irrespective of the virtualization technology used.
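The **Sandbox Workloads config** checkbox maps to the ``sandboxWorkloads`` field of the ``ClusterPolicy``. As a hedged sketch of the equivalent CLI steps (the ``ClusterPolicy`` name and the node name below are illustrative), sandbox workloads can be enabled with a patch, and individual nodes can then be steered to a workload type with the node label mentioned above:

.. code-block:: console

   # Enable sandbox workloads in the ClusterPolicy (the resource name is illustrative).
   $ oc patch clusterpolicy gpu-cluster-policy --type merge \
       --patch '{"spec": {"sandboxWorkloads": {"enabled": true}}}'

   # With sandbox workloads enabled, label a node to select the software stack
   # it receives, for example for vGPU-backed virtual machine workloads.
   $ oc label node <node-name> --overwrite nvidia.com/gpu.workload.config=vm-vgpu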
#. If you are planning to use NVIDIA vGPU, expand the **NVIDIA vGPU Manager config** section and fill in your desired configuration settings (a verification sketch follows this step), including:
* Select the **enabled** checkbox to enable the NVIDIA vGPU Manager.
* Add your **imagePullSecrets**.
* Under *driverManager*, fill in **repository** with the path to your private repository.
* Under *env*, fill in **image** with ``vgpu-manager`` and the **version** with your driver version.
If you are only using GPU passthrough, you don't need to fill out this section.
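Once the changes are saved, one way to confirm that the vGPU Manager rolled out on the intended nodes is sketched below (the ``nvidia-gpu-operator`` namespace is the usual default and may differ in your cluster):

.. code-block:: console

   # Check for vGPU Manager pods among the GPU Operator workloads.
   $ oc get pods -n nvidia-gpu-operator | grep -i vgpu-manager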
The vGPU Device Manager, deployed by the GPU Operator, automatically creates vGPU devices which can be assigned to KubeVirt VMs.
Without additional configuration, the GPU Operator creates a default set of devices on all GPUs.
To learn more about the vGPU Device Manager and how to configure which types of vGPU devices get created in your cluster, refer to :ref:`vGPU Device Configuration<vgpu-device-configuration>`.
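To see which vGPU devices were actually created on a node, you can inspect the extended resources the node advertises. This is a sketch; the resource names reported by your cluster depend on the GPU model and the configured vGPU profile:

.. code-block:: console

   # List the allocatable resources on a GPU node; vGPU devices created by the
   # vGPU Device Manager appear as nvidia.com/... resources (names vary by profile).
   $ oc get node <node-name> -o jsonpath='{.status.allocatable}'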
The following example permits the A10 GPU device and A10-24Q vGPU device.
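A sketch of such a configuration is shown below. It assumes the devices are exposed through OpenShift Virtualization's ``HyperConverged`` custom resource, and the PCI device selector, mdev name, and resource names are illustrative; verify them against the devices actually advertised on your worker nodes and the schema described in the guide linked below:

.. code-block:: console

   # Permit the A10 GPU (passthrough) and the A10-24Q vGPU device for KubeVirt VMs.
   # Selectors and resource names are illustrative and must match your nodes.
   $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type merge --patch '
     {
       "spec": {
         "permittedHostDevices": {
           "pciHostDevices": [
             {"pciDeviceSelector": "10DE:2236",
              "resourceName": "nvidia.com/GA102GL_A10",
              "externalResourceProvider": true}
           ],
           "mediatedDevices": [
             {"mdevNameSelector": "NVIDIA A10-24Q",
              "resourceName": "nvidia.com/NVIDIA_A10-24Q",
              "externalResourceProvider": true}
           ]
         }
       }
     }'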
Refer to the `KubeVirt user guide <https://kubevirt.io/user-guide/virtual_machines/host-devices/#listing-permitted-devices>`_ for more information on the configuration options.