2 changes: 1 addition & 1 deletion gpu-operator/custom-driver-params.rst
@@ -38,5 +38,5 @@ containing the kernel module parameters.
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.kernelModuleConfig.name="kernel-module-params"
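The change repeated throughout this PR replaces ``--set version=${version}``, which only defines a chart value named ``version``, with Helm's ``--version`` flag, which actually pins the chart version that gets installed. A minimal usage sketch; the repository URL and the version string below are illustrative assumptions, not values taken from this diff:

$ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
$ helm search repo nvidia/gpu-operator --versions | head -n 5   # list published chart versions
$ version=v24.9.0                                               # placeholder; pick a version from the list above
$ helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator \
    --version=${version}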
20 changes: 10 additions & 10 deletions gpu-operator/getting-started.rst
@@ -100,7 +100,7 @@ Procedure
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version}
+ --version=${version}

- Install the Operator and specify configuration options:

@@ -109,7 +109,7 @@ Procedure
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set <option-name>=<option-value>

Refer to the :ref:`gpu-operator-helm-chart-options`
@@ -295,7 +295,7 @@ For example, to install the GPU Operator in the ``nvidia-gpu-operator`` namespac
$ helm install --wait --generate-name \
-n nvidia-gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \

If you do not specify a namespace during installation, all GPU Operator components are installed in the ``default`` namespace.

@@ -333,7 +333,7 @@ In this scenario, use the NVIDIA Container Toolkit image that is built on UBI 8:
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set toolkit.version=v1.16.1-ubi8

Replace the ``v1.16.1`` value in the preceding command with the version that is supported
@@ -354,7 +354,7 @@ In this scenario, the NVIDIA GPU driver is already installed on the worker nodes
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.enabled=false

The preceding command prevents the Operator from installing the GPU driver on any nodes in the cluster.
@@ -384,7 +384,7 @@ Install the Operator with the following options:
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.enabled=false \
--set toolkit.enabled=false

@@ -407,7 +407,7 @@ In this scenario, the NVIDIA Container Toolkit is already installed on the worke
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set toolkit.enabled=false

Running a Custom Driver Image
@@ -436,7 +436,7 @@ you can build a custom driver container image. Follow these steps:
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.repository=docker.io/nvidia \
--set driver.version="465.27"

@@ -474,7 +474,7 @@ If you need to specify custom values, refer to the following sample command for

helm install gpu-operator -n gpu-operator --create-namespace \
nvidia/gpu-operator $HELM_OPTIONS \
- --set version=${version} \
+ --version=${version} \
--set toolkit.env[0].name=CONTAINERD_CONFIG \
--set toolkit.env[0].value=/etc/containerd/config.toml \
--set toolkit.env[1].name=CONTAINERD_SOCKET \
@@ -547,7 +547,7 @@ These options can be passed to GPU Operator during install time as below.

helm install gpu-operator -n gpu-operator --create-namespace \
nvidia/gpu-operator $HELM_OPTIONS \
- --set version=${version} \
+ --version=${version} \
--set toolkit.env[0].name=CONTAINERD_CONFIG \
--set toolkit.env[0].value=/var/snap/microk8s/current/args/containerd-template.toml \
--set toolkit.env[1].name=CONTAINERD_SOCKET \
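For the containerd options shown in the two hunks above, the same ``toolkit.env`` entries can also be supplied through a values file instead of repeated ``--set`` flags. A sketch mirroring the first of the two hunks; the socket path is an assumed default, since its value is elided in this diff:

# values.yaml (illustrative)
toolkit:
  env:
    - name: CONTAINERD_CONFIG
      value: /etc/containerd/config.toml
    - name: CONTAINERD_SOCKET
      value: /run/containerd/containerd.sock   # assumed default; not shown above

$ helm install gpu-operator -n gpu-operator --create-namespace \
    nvidia/gpu-operator $HELM_OPTIONS \
    --version=${version} \
    -f values.yaml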
2 changes: 1 addition & 1 deletion gpu-operator/google-gke.rst
@@ -169,7 +169,7 @@ You can create a node pool that uses a Container-Optimized OS node image or a Ub
$ helm install --wait --generate-name \
-n gpu-operator \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set hostPaths.driverInstallDir=/home/kubernetes/bin/nvidia \
--set toolkit.installDir=/home/kubernetes/bin/nvidia \
--set cdi.enabled=true \
2 changes: 1 addition & 1 deletion gpu-operator/gpu-driver-configuration.rst
@@ -277,7 +277,7 @@ Perform the following steps to install the GPU Operator and use the NVIDIA drive
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version}
+ --version=${version}
--set driver.nvidiaDriverCRD.enabled=true

By default, Helm configures a ``default`` NVIDIA driver custom resource during installation.
2 changes: 1 addition & 1 deletion gpu-operator/gpu-operator-confidential-containers.rst
@@ -407,7 +407,7 @@ Perform the following steps to install the Operator for use with confidential co
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set sandboxWorkloads.enabled=true \
--set kataManager.enabled=true \
--set ccManager.enabled=true \
2 changes: 1 addition & 1 deletion gpu-operator/gpu-operator-kata.rst
@@ -269,7 +269,7 @@ Perform the following steps to install the Operator for use with Kata Containers
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set sandboxWorkloads.enabled=true \
--set kataManager.enabled=true

4 changes: 2 additions & 2 deletions gpu-operator/gpu-operator-kubevirt.rst
@@ -140,7 +140,7 @@ Install the GPU Operator, enabling ``sandboxWorkloads``:
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set sandboxWorkloads.enabled=true

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -172,7 +172,7 @@ Install the GPU Operator with ``sandboxWorkloads`` and ``vgpuManager`` enabled a
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set sandboxWorkloads.enabled=true \
--set vgpuManager.enabled=true \
--set vgpuManager.repository=<path to private repository> \
6 changes: 3 additions & 3 deletions gpu-operator/gpu-operator-mig.rst
@@ -57,7 +57,7 @@ Perform the following steps to install the Operator and configure MIG:
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set mig.strategy=single

Set ``mig.strategy`` to ``mixed`` when MIG mode is not enabled on all GPUs on a node.
@@ -464,7 +464,7 @@ can be used to install the GPU Operator:
$ helm install gpu-operator \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.enabled=false


@@ -515,7 +515,7 @@ Alternatively, you can create a custom config map for use by MIG Manager by perf
$ helm install gpu-operator \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set migManager.gpuClientsConfig.name=gpu-clients
--set driver.enabled=false

6 changes: 3 additions & 3 deletions gpu-operator/gpu-operator-rdma.rst
@@ -132,7 +132,7 @@ To use DMA-BUF and network device drivers that are installed by the Network Oper
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.useOpenKernelModules=true

To use DMA-BUF and network device drivers that are installed on the host:
@@ -142,7 +142,7 @@ To use DMA-BUF and network device drivers that are installed on the host:
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.useOpenKernelModules=true \
--set driver.rdma.useHostMofed=true

@@ -435,7 +435,7 @@ The following sample command applies to clusters that use the Network Operator t
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.useOpenKernelModules=true \
--set gds.enabled=true

2 changes: 1 addition & 1 deletion gpu-operator/gpu-sharing.rst
@@ -364,7 +364,7 @@ Perform the following steps to configure time-slicing before installing the oper

$ helm install gpu-operator nvidia/gpu-operator \
-n gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set devicePlugin.config.name=time-slicing-config

#. Refer to either :ref:`time-slicing-cluster-wide-config` or
2 changes: 1 addition & 1 deletion gpu-operator/install-gpu-operator-nvaie.rst
@@ -190,7 +190,7 @@ For newer releases, you can confirm the supported driver branch by performin
#. Refer to :ref:`operator-component-matrix` to identify the recommended driver version that uses the same driver branch, 550, in this case.

After identifying the correct driver version, refer to :ref:`install-gpu-operator` to install the Operator by using Helm.
- Specify the ``--set version=<supported-version>`` argument to install a supported version of the Operator for your NVIDIA AI Enterprise release.
+ Specify the ``--version=<supported-version>`` argument to install a supported version of the Operator for your NVIDIA AI Enterprise release.
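As a hypothetical illustration of that guidance, the chart version and a 550-branch driver version can be pinned in a single command; both version strings below are placeholders to be taken from the NVIDIA AI Enterprise documentation and the component matrix:

$ helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator \
    --version=<supported-version> \
    --set driver.version="550.90.07"   # placeholder for a 550-branch driver release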


*******************
2 changes: 1 addition & 1 deletion gpu-operator/install-gpu-operator-outdated-kernels.rst
@@ -87,7 +87,7 @@ Deploy GPU Operator with updated ``values.yaml``:
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
-f values.yaml


2 changes: 1 addition & 1 deletion gpu-operator/microsoft-aks.rst
@@ -112,7 +112,7 @@ deploying NVIDIA Driver Containers and the NVIDIA Container Toolkit.

$ helm install gpu-operator nvidia/gpu-operator \
-n gpu-operator --create-namespace \
- --set version=${version} \
+ --version=${version} \
--set driver.enabled=false \
--set toolkit.enabled=false \
--set operator.runtimeClass=nvidia-container-runtime
2 changes: 1 addition & 1 deletion gpu-operator/precompiled-drivers.rst
@@ -126,7 +126,7 @@ Specify the ``--set driver.usePrecompiled=true`` and ``--set driver.version=<dri
$ helm install --wait gpu-operator \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
- --set version=${version} \
+ --version=${version} \
--set driver.usePrecompiled=true \
--set driver.version="<driver-branch>"

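Whichever variant above is used, the outcome can be checked after installation. A quick verification sketch, assuming the release was installed into the ``gpu-operator`` namespace and using whatever release name ``--generate-name`` produced:

$ helm list -n gpu-operator                        # the CHART column shows gpu-operator-<chart-version>
$ helm get values <release-name> -n gpu-operator   # with --version, no stray "version:" value should appear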