Commit 7a44996

Fix version argument (#131)

Signed-off-by: Mike McKiernan <[email protected]>

Parent: da27277

14 files changed: +28 -28 lines
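The change is identical in every file: `--set version=${version}` defines a chart *value* named `version` rather than selecting a chart version, so Helm would install the latest chart regardless of `${version}`. Helm's `--version` flag is what actually pins the chart version. A minimal sketch of the corrected invocation (the `v24.6.0` value is illustrative, not part of this commit):

   # Illustrative chart version; substitute a released GPU Operator chart version.
   $ version=v24.6.0

   # --version pins which chart version Helm pulls and installs;
   # --set version=... would only define an unused chart value.
   $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
        --version=${version}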

gpu-operator/custom-driver-params.rst

Lines changed: 1 addition & 1 deletion
@@ -38,5 +38,5 @@ containing the kernel module parameters.
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.kernelModuleConfig.name="kernel-module-params"
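All of these commands assume `${version}` is already set in the shell. One way to choose a value, assuming the NVIDIA Helm repository is registered under the conventional name `nvidia` (an assumption, not shown in this diff):

   # Register the NVIDIA Helm repository (skip if already added) and refresh it.
   $ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update

   # List released GPU Operator chart versions to pick a value for ${version}.
   $ helm search repo nvidia/gpu-operator --versions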

gpu-operator/getting-started.rst

Lines changed: 10 additions & 10 deletions
@@ -100,7 +100,7 @@ Procedure
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version}
+      --version=${version}

 - Install the Operator and specify configuration options:

@@ -109,7 +109,7 @@ Procedure
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set <option-name>=<option-value>

 Refer to the :ref:`gpu-operator-helm-chart-options`

@@ -295,7 +295,7 @@ For example, to install the GPU Operator in the ``nvidia-gpu-operator`` namespac
    $ helm install --wait --generate-name \
       -n nvidia-gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \

 If you do not specify a namespace during installation, all GPU Operator components are installed in the ``default`` namespace.

@@ -333,7 +333,7 @@ In this scenario, use the NVIDIA Container Toolkit image that is built on UBI 8:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.version=v1.16.1-ubi8

 Replace the ``v1.16.1`` value in the preceding command with the version that is supported

@@ -354,7 +354,7 @@ In this scenario, the NVIDIA GPU driver is already installed on the worker nodes
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.enabled=false

 The preceding command prevents the Operator from installing the GPU driver on any nodes in the cluster.

@@ -384,7 +384,7 @@ Install the Operator with the following options:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.enabled=false \
       --set toolkit.enabled=false

@@ -407,7 +407,7 @@ In this scenario, the NVIDIA Container Toolkit is already installed on the worke
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.enabled=false

 Running a Custom Driver Image

@@ -436,7 +436,7 @@ you can build a custom driver container image. Follow these steps:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.repository=docker.io/nvidia \
       --set driver.version="465.27"

@@ -474,7 +474,7 @@ If you need to specify custom values, refer to the following sample command for
    helm install gpu-operator -n gpu-operator --create-namespace \
       nvidia/gpu-operator $HELM_OPTIONS \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.env[0].name=CONTAINERD_CONFIG \
       --set toolkit.env[0].value=/etc/containerd/config.toml \
       --set toolkit.env[1].name=CONTAINERD_SOCKET \

@@ -547,7 +547,7 @@ These options can be passed to GPU Operator during install time as below.
    helm install gpu-operator -n gpu-operator --create-namespace \
       nvidia/gpu-operator $HELM_OPTIONS \
-      --set version=${version} \
+      --version=${version} \
       --set toolkit.env[0].name=CONTAINERD_CONFIG \
       --set toolkit.env[0].value=/var/snap/microk8s/current/args/containerd-template.toml \
       --set toolkit.env[1].name=CONTAINERD_SOCKET \
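The last two hunks above also reference a `$HELM_OPTIONS` shell variable that getting-started.rst defines elsewhere; its contents are not part of this diff. A hypothetical placeholder, only so the commands above can be reproduced in isolation:

   # Hypothetical: bundle any shared install flags; the real value comes from
   # earlier in getting-started.rst and is not shown in this commit.
   $ export HELM_OPTIONS="--wait"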

gpu-operator/google-gke.rst

Lines changed: 1 addition & 1 deletion
@@ -169,7 +169,7 @@ You can create a node pool that uses a Container-Optimized OS node image or a Ub
    $ helm install --wait --generate-name \
       -n gpu-operator \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set hostPaths.driverInstallDir=/home/kubernetes/bin/nvidia \
       --set toolkit.installDir=/home/kubernetes/bin/nvidia \
       --set cdi.enabled=true \

gpu-operator/gpu-driver-configuration.rst

Lines changed: 1 addition & 1 deletion
@@ -277,7 +277,7 @@ Perform the following steps to install the GPU Operator and use the NVIDIA drive
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version}
+      --version=${version}
       --set driver.nvidiaDriverCRD.enabled=true

 By default, Helm configures a ``default`` NVIDIA driver custom resource during installation.

gpu-operator/gpu-operator-confidential-containers.rst

Lines changed: 1 addition & 1 deletion
@@ -407,7 +407,7 @@ Perform the following steps to install the Operator for use with confidential co
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true \
       --set kataManager.enabled=true \
       --set ccManager.enabled=true \

gpu-operator/gpu-operator-kata.rst

Lines changed: 1 addition & 1 deletion
@@ -269,7 +269,7 @@ Perform the following steps to install the Operator for use with Kata Containers
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true \
       --set kataManager.enabled=true

gpu-operator/gpu-operator-kubevirt.rst

Lines changed: 2 additions & 2 deletions
@@ -140,7 +140,7 @@ Install the GPU Operator, enabling ``sandboxWorkloads``:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true

 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -172,7 +172,7 @@ Install the GPU Operator with ``sandboxWorkloads`` and ``vgpuManager`` enabled a
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set sandboxWorkloads.enabled=true \
       --set vgpuManager.enabled=true \
       --set vgpuManager.repository=<path to private repository> \

gpu-operator/gpu-operator-mig.rst

Lines changed: 3 additions & 3 deletions
@@ -57,7 +57,7 @@ Perform the following steps to install the Operator and configure MIG:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set mig.strategy=single

 Set ``mig.strategy`` to ``mixed`` when MIG mode is not enabled on all GPUs on a node.

@@ -464,7 +464,7 @@ can be used to install the GPU Operator:
    $ helm install gpu-operator \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.enabled=false

@@ -515,7 +515,7 @@ Alternatively, you can create a custom config map for use by MIG Manager by perf
    $ helm install gpu-operator \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set migManager.gpuClientsConfig.name=gpu-clients
       --set driver.enabled=false

gpu-operator/gpu-operator-rdma.rst

Lines changed: 3 additions & 3 deletions
@@ -132,7 +132,7 @@ To use DMA-BUF and network device drivers that are installed by the Network Oper
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.useOpenKernelModules=true

 To use DMA-BUF and network device drivers that are installed on the host:

@@ -142,7 +142,7 @@ To use DMA-BUF and network device drivers that are installed on the host:
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.useOpenKernelModules=true \
       --set driver.rdma.useHostMofed=true

@@ -435,7 +435,7 @@ The following sample command applies to clusters that use the Network Operator t
    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set driver.useOpenKernelModules=true \
       --set gds.enabled=true

gpu-operator/gpu-sharing.rst

Lines changed: 1 addition & 1 deletion
@@ -364,7 +364,7 @@ Perform the following steps to configure time-slicing before installing the oper

    $ helm install gpu-operator nvidia/gpu-operator \
       -n gpu-operator \
-      --set version=${version} \
+      --version=${version} \
       --set devicePlugin.config.name=time-slicing-config

 #. Refer to either :ref:`time-slicing-cluster-wide-config` or
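Whichever variant is used, the pinned chart version can be verified after install; a minimal check, assuming the release landed in the `gpu-operator` namespace:

   # The CHART column should show the pinned version, e.g. gpu-operator-v24.6.0
   # (illustrative), confirming that --version took effect.
   $ helm list -n gpu-operator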
