gpu-operator/install-gpu-operator-vgpu.rst (16 additions, 14 deletions)
*********************************************
About Installing the Operator and NVIDIA vGPU
*********************************************

NVIDIA Virtual GPU (vGPU) enables multiple virtual machines (VMs) to have simultaneous, direct access to a single physical GPU, using the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems.

The installation steps assume ``gpu-operator`` as the default namespace for installing the NVIDIA GPU Operator.
On Red Hat OpenShift Container Platform, the default namespace is ``nvidia-gpu-operator``.
Change the namespace shown in the commands to match your cluster configuration.
Also replace ``kubectl`` in the following commands with ``oc`` when running on Red Hat OpenShift.
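For example, keeping the namespace in a shell variable makes the commands on this page easy to adapt (a sketch; the ``kubectl get pods`` call is only an illustrative status check, not a required step):

```shell
# Namespace for the GPU Operator; on Red Hat OpenShift, set this
# to nvidia-gpu-operator and use oc instead of kubectl.
NAMESPACE=gpu-operator

# Illustrative status check in that namespace.
kubectl get pods -n "${NAMESPACE}"
```
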
NVIDIA vGPU is only supported with the NVIDIA License System.
****************
Platform Support
****************

For information about the supported platforms, refer to :ref:`Supported Deployment Options, Hypervisors, and NVIDIA vGPU Based Products`.

For Red Hat OpenShift Virtualization, refer to :ref:`NVIDIA GPU Operator with OpenShift Virtualization`.
*************
Prerequisites
*************

Before installing the GPU Operator on NVIDIA vGPU, ensure the following:

* The NVIDIA vGPU Host Driver version 12.0 (or later) is pre-installed on all hypervisors hosting NVIDIA vGPU accelerated Kubernetes worker node virtual machines.
  Refer to the `NVIDIA Virtual GPU Software Documentation <https://docs.nvidia.com/grid/>`_ for details.

* You must have access to the NVIDIA Enterprise Application Hub at https://nvid.nvidia.com/dashboard/ and the NVIDIA Licensing Portal.

* Your organization must have an instance of a Cloud License Service (CLS) or a Delegated License Service (DLS).

* You must generate and download a client configuration token for your CLS instance or DLS instance.
  Refer to the |license-system-qs-guide-link|_ for information about generating a token.

  .. note::

     For vGPU 18.0 and later, ensure that you use DLS 3.4 or later.

* You have access to a private registry, such as NVIDIA NGC Private Registry, and can push container images to the registry.

* Git and Docker or Podman are required to build the vGPU driver image from the source repository and push it to the private registry.

* Each Kubernetes worker node in the cluster has access to the private registry.
  Private registry access is usually managed through image pull secrets.
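The pull-secret prerequisite above is typically satisfied with a ``docker-registry`` secret; a minimal sketch, assuming NGC as the registry (the secret name ``ngc-secret`` and the ``NGC_API_KEY`` variable are illustrative placeholders, not names this guide mandates):

```shell
# Create an image pull secret for a private registry in the
# gpu-operator namespace. Set NGC_API_KEY beforehand; $oauthtoken
# is the literal username that NGC expects.
kubectl create secret docker-registry ngc-secret \
    -n gpu-operator \
    --docker-server=nvcr.io \
    --docker-username='$oauthtoken' \
    --docker-password="${NGC_API_KEY}"
```
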
@@ -143,7 +147,7 @@ Perform the following steps to build and push a container image that includes th

   For Red Hat OpenShift Container Platform, specify ``rhcos4.<x>`` where ``x`` is the supported minor OCP version.
   Refer to :ref:`Supported Operating Systems and Kubernetes Platforms` for the list of supported OS distributions.

   - Specify the driver container image tag, such as ``1.0.0``:

     .. code-block:: console
@@ -158,9 +162,8 @@ Perform the following steps to build and push a container image that includes th

        $ export CUDA_VERSION=11.8.0

     The CUDA version only specifies the base image used to build the driver container.
     The version does not correlate with the version of CUDA that is associated with or supported by the resulting driver container.

   - Specify the Linux guest vGPU driver version that you downloaded from the NVIDIA Licensing Portal and append ``-grid``:
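Putting the environment variables from this procedure together, the build-and-push step might look like the following sketch. The registry path, driver version, and build arguments are illustrative placeholders, not the documented values; the authoritative Dockerfile and its arguments come from the driver container source repository:

```shell
# Illustrative values only; substitute your own.
export PRIVATE_REGISTRY=registry.example.com/nvidia   # hypothetical registry path
export OS_TAG=ubuntu22.04
export VERSION=1.0.0
export CUDA_VERSION=11.8.0
export VGPU_DRIVER_VERSION=535.129.03-grid            # downloaded guest driver version + "-grid"

# Build and push the driver image (sketch; the actual build arguments
# are defined by the driver container repository's Dockerfile).
docker build \
    --build-arg DRIVER_VERSION="${VGPU_DRIVER_VERSION}" \
    --build-arg CUDA_VERSION="${CUDA_VERSION}" \
    -t "${PRIVATE_REGISTRY}/driver:${VERSION}-${OS_TAG}" .
docker push "${PRIVATE_REGISTRY}/driver:${VERSION}-${OS_TAG}"
```
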
@@ -217,14 +220,13 @@ Configure the Cluster with the vGPU License Information and the Driver Container

      # 4 => for NVIDIA Virtual Compute Server
      FeatureType=1

#. Rename the client configuration token file that you downloaded to ``client_configuration_token.tok`` using a command like the following example:
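A minimal sketch of that rename, assuming a date-stamped download name (the actual file name from the licensing portal varies):

```shell
# The downloaded name below is illustrative; substitute your own.
mv client_configuration_token_05-2024.tok client_configuration_token.tok
```
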