diff --git a/gpu-operator/custom-driver-params.rst b/gpu-operator/custom-driver-params.rst
index 1dd90389b..53fca0060 100644
--- a/gpu-operator/custom-driver-params.rst
+++ b/gpu-operator/custom-driver-params.rst
@@ -38,5 +38,5 @@ containing the kernel module parameters.
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.kernelModuleConfig.name="kernel-module-params"
diff --git a/gpu-operator/getting-started.rst b/gpu-operator/getting-started.rst
index 550641273..c5e1807c7 100644
--- a/gpu-operator/getting-started.rst
+++ b/gpu-operator/getting-started.rst
@@ -100,7 +100,7 @@ Procedure
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version}
+      --version=${version}

 - Install the Operator and specify configuration options:
@@ -109,7 +109,7 @@ Procedure
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set =

 Refer to the :ref:`gpu-operator-helm-chart-options`
@@ -295,7 +295,7 @@ For example, to install the GPU Operator in the ``nvidia-gpu-operator`` namespac
    $ helm install --wait --generate-name \
       -n nvidia-gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \

 If you do not specify a namespace during installation, all GPU Operator components are installed in the ``default`` namespace.
@@ -333,7 +333,7 @@ In this scenario, use the NVIDIA Container Toolkit image that is built on UBI 8:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.version=v1.16.1-ubi8

 Replace the ``v1.16.1`` value in the preceding command with the version that is supported
@@ -354,7 +354,7 @@ In this scenario, the NVIDIA GPU driver is already installed on the worker nodes
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.enabled=false

 The preceding command prevents the Operator from installing the GPU driver on any nodes in the cluster.
@@ -384,7 +384,7 @@ Install the Operator with the following options:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.enabled=false \
       --set toolkit.enabled=false
@@ -407,7 +407,7 @@ In this scenario, the NVIDIA Container Toolkit is already installed on the worke
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.enabled=false

 Running a Custom Driver Image
@@ -436,7 +436,7 @@ you can build a custom driver container image.
 Follow these steps:

    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.repository=docker.io/nvidia \
       --set driver.version="465.27"
@@ -474,7 +474,7 @@ If you need to specify custom values, refer to the following sample command for
    helm install gpu-operator -n gpu-operator --create-namespace \
       nvidia/gpu-operator $HELM_OPTIONS \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.env[0].name=CONTAINERD_CONFIG \
       --set toolkit.env[0].value=/etc/containerd/config.toml \
       --set toolkit.env[1].name=CONTAINERD_SOCKET \
@@ -547,7 +547,7 @@ These options can be passed to GPU Operator during install time as below.
    helm install gpu-operator -n gpu-operator --create-namespace \
       nvidia/gpu-operator $HELM_OPTIONS \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.env[0].name=CONTAINERD_CONFIG \
       --set toolkit.env[0].value=/var/snap/microk8s/current/args/containerd-template.toml \
       --set toolkit.env[1].name=CONTAINERD_SOCKET \
diff --git a/gpu-operator/google-gke.rst b/gpu-operator/google-gke.rst
index 5152e99e6..bb1a646cd 100644
--- a/gpu-operator/google-gke.rst
+++ b/gpu-operator/google-gke.rst
@@ -169,7 +169,7 @@ You can create a node pool that uses a Container-Optimized OS node image or a Ub
    $ helm install --wait --generate-name \
       -n gpu-operator \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set hostPaths.driverInstallDir=/home/kubernetes/bin/nvidia \
       --set toolkit.installDir=/home/kubernetes/bin/nvidia \
       --set cdi.enabled=true \
diff --git a/gpu-operator/gpu-driver-configuration.rst b/gpu-operator/gpu-driver-configuration.rst
index af1342f40..3a3e871a7 100644
--- a/gpu-operator/gpu-driver-configuration.rst
+++ b/gpu-operator/gpu-driver-configuration.rst
@@ -277,7 +277,7 @@ Perform the following steps to install the GPU Operator and use the NVIDIA drive
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version}
+      --version=${version}
       --set driver.nvidiaDriverCRD.enabled=true

 By default, Helm configures a ``default`` NVIDIA driver custom resource during installation.
diff --git a/gpu-operator/gpu-operator-confidential-containers.rst b/gpu-operator/gpu-operator-confidential-containers.rst
index 108d30a4c..a4d56bda7 100644
--- a/gpu-operator/gpu-operator-confidential-containers.rst
+++ b/gpu-operator/gpu-operator-confidential-containers.rst
@@ -407,7 +407,7 @@ Perform the following steps to install the Operator for use with confidential co
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true \
       --set kataManager.enabled=true \
       --set ccManager.enabled=true \
diff --git a/gpu-operator/gpu-operator-kata.rst b/gpu-operator/gpu-operator-kata.rst
index 9d3deb16c..83d420060 100644
--- a/gpu-operator/gpu-operator-kata.rst
+++ b/gpu-operator/gpu-operator-kata.rst
@@ -269,7 +269,7 @@ Perform the following steps to install the Operator for use with Kata Containers
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true \
       --set kataManager.enabled=true
diff --git a/gpu-operator/gpu-operator-kubevirt.rst b/gpu-operator/gpu-operator-kubevirt.rst
index 68367c1b3..446c463c1 100644
--- a/gpu-operator/gpu-operator-kubevirt.rst
+++ b/gpu-operator/gpu-operator-kubevirt.rst
@@ -140,7 +140,7 @@ Install the GPU Operator, enabling ``sandboxWorkloads``:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true

 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -172,7 +172,7 @@ Install the GPU Operator with ``sandboxWorkloads`` and ``vgpuManager`` enabled a
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true \
       --set vgpuManager.enabled=true \
       --set vgpuManager.repository= \
diff --git a/gpu-operator/gpu-operator-mig.rst b/gpu-operator/gpu-operator-mig.rst
index 0295c4222..fb5c0e0fc 100644
--- a/gpu-operator/gpu-operator-mig.rst
+++ b/gpu-operator/gpu-operator-mig.rst
@@ -57,7 +57,7 @@ Perform the following steps to install the Operator and configure MIG:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set mig.strategy=single

 Set ``mig.strategy`` to ``mixed`` when MIG mode is not enabled on all GPUs on a node.
@@ -464,7 +464,7 @@ can be used to install the GPU Operator:
    $ helm install gpu-operator \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.enabled=false
@@ -515,7 +515,7 @@ Alternatively, you can create a custom config map for use by MIG Manager by perf
    $ helm install gpu-operator \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set migManager.gpuClientsConfig.name=gpu-clients
       --set driver.enabled=false
diff --git a/gpu-operator/gpu-operator-rdma.rst b/gpu-operator/gpu-operator-rdma.rst
index 640688b04..fb11c740e 100644
--- a/gpu-operator/gpu-operator-rdma.rst
+++ b/gpu-operator/gpu-operator-rdma.rst
@@ -132,7 +132,7 @@ To use DMA-BUF and network device drivers that are installed by the Network Oper
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.useOpenKernelModules=true

 To use DMA-BUF and network device drivers that are installed on the host:
@@ -142,7 +142,7 @@ To use DMA-BUF and network device drivers that are installed on the host:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.useOpenKernelModules=true \
       --set driver.rdma.useHostMofed=true
@@ -435,7 +435,7 @@ The following sample command applies to clusters that use the Network Operator t
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.useOpenKernelModules=true \
       --set gds.enabled=true
diff --git a/gpu-operator/gpu-sharing.rst b/gpu-operator/gpu-sharing.rst
index 03b647284..885381a3e 100644
--- a/gpu-operator/gpu-sharing.rst
+++ b/gpu-operator/gpu-sharing.rst
@@ -364,7 +364,7 @@ Perform the following steps to configure time-slicing before installing the oper
    $ helm install gpu-operator nvidia/gpu-operator \
       -n gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set devicePlugin.config.name=time-slicing-config

 #. Refer to either :ref:`time-slicing-cluster-wide-config` or
diff --git a/gpu-operator/install-gpu-operator-nvaie.rst b/gpu-operator/install-gpu-operator-nvaie.rst
index e7b3483ad..fcc8a0ba8 100644
--- a/gpu-operator/install-gpu-operator-nvaie.rst
+++ b/gpu-operator/install-gpu-operator-nvaie.rst
@@ -190,7 +190,7 @@ For newer releases, you can confirm the the supported driver branch by performin
 #. Refer to :ref:`operator-component-matrix` to identify the recommended driver version that uses the same driver branch, 550, in this case.

 After identifying the correct driver version, refer to :ref:`install-gpu-operator` to install the Operator by using Helm.
-Specify the ``--set version=`` argument to install a supported version of the Operator for your NVIDIA AI Enterprise release.
+Specify the ``--version=`` argument to install a supported version of the Operator for your NVIDIA AI Enterprise release.
 *******************
diff --git a/gpu-operator/install-gpu-operator-outdated-kernels.rst b/gpu-operator/install-gpu-operator-outdated-kernels.rst
index ce8517852..0afa6875b 100644
--- a/gpu-operator/install-gpu-operator-outdated-kernels.rst
+++ b/gpu-operator/install-gpu-operator-outdated-kernels.rst
@@ -87,7 +87,7 @@ Deploy GPU Operator with updated ``values.yaml``:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       -f values.yaml
diff --git a/gpu-operator/microsoft-aks.rst b/gpu-operator/microsoft-aks.rst
index bbabbf35a..f24084706 100644
--- a/gpu-operator/microsoft-aks.rst
+++ b/gpu-operator/microsoft-aks.rst
@@ -112,7 +112,7 @@ deploying NVIDIA Driver Containers and the NVIDIA Container Toolkit.
    $ helm install gpu-operator nvidia/gpu-operator \
       -n gpu-operator --create-namespace \
-      --set version=${version} \
+      --version=${version} \
       --set driver.enabled=false \
       --set toolkit.enabled=false \
       --set operator.runtimeClass=nvidia-container-runtime
diff --git a/gpu-operator/precompiled-drivers.rst b/gpu-operator/precompiled-drivers.rst
index 2a10e4053..9401b2709 100644
--- a/gpu-operator/precompiled-drivers.rst
+++ b/gpu-operator/precompiled-drivers.rst
@@ -126,7 +126,7 @@ Specify the ``--set driver.usePrecompiled=true`` and ``--set driver.version=
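
Every hunk in this patch applies the same substitution: the Helm value assignment ``--set version=${version}`` is replaced with the Helm CLI flag ``--version=${version}``. The ``--set`` form only defines a chart value named ``version`` and does not affect which chart release Helm installs, so the documented commands were not actually pinning the GPU Operator version. The following sketch shows the corrected invocation pattern; the repository setup and the example value assigned to ``version`` are illustrative assumptions, not part of this patch.

   # Assumption: the NVIDIA Helm repository has already been added and updated,
   # for example with `helm repo add nvidia https://helm.ngc.nvidia.com/nvidia`
   # followed by `helm repo update`.
   #
   # ${version} is a placeholder for the GPU Operator chart version to pin.
   $ version=v24.9.0   # illustrative value only
   $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
        --version=${version}

Because ``--version`` is consumed by the Helm CLI itself rather than passed to the chart, it combines freely with any of the ``--set`` options shown in the hunks above.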