
Commit 844902d

Update drivers and use substitutions for version (#129)
* Update drivers and use substitutions for version

* Fix version arg to helm install

* Feedback from Tariq

---------

Signed-off-by: Mike McKiernan <mmckiernan@nvidia.com>
1 parent 5a0e6ee commit 844902d

18 files changed: +59 -43 lines changed
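Throughout the changed files, the hard-coded chart version is replaced with a ``${version}`` substitution, so the documented commands assume the reader defines that variable first. A minimal sketch of the intended usage, with a placeholder release value that is not taken from this commit:

    # Hypothetical value: substitute the GPU Operator chart release you intend to install.
    $ version=v24.6.0
    $ helm install --wait --generate-name \
         -n gpu-operator --create-namespace \
         nvidia/gpu-operator \
         --version=${version}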

gpu-operator/custom-driver-params.rst

Lines changed: 1 addition & 0 deletions
@@ -38,4 +38,5 @@ containing the kernel module parameters.
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set driver.kernelModuleConfig.name="kernel-module-params"

gpu-operator/getting-started.rst

Lines changed: 15 additions & 5 deletions
@@ -99,7 +99,8 @@ Procedure

     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
-       nvidia/gpu-operator
+       nvidia/gpu-operator \
+       --version=${version}

 - Install the Operator and specify configuration options:

@@ -108,6 +109,7 @@ Procedure
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set <option-name>=<option-value>

 Refer to the :ref:`gpu-operator-helm-chart-options`
@@ -291,7 +293,8 @@ For example, to install the GPU Operator in the ``nvidia-gpu-operator`` namespace

     $ helm install --wait --generate-name \
        -n nvidia-gpu-operator --create-namespace \
-       nvidia/gpu-operator
+       nvidia/gpu-operator \
+       --version=${version} \

 If you do not specify a namespace during installation, all GPU Operator components are installed in the ``default`` namespace.

@@ -329,6 +332,7 @@ In this scenario, use the NVIDIA Container Toolkit image that is built on UBI 8:
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set toolkit.version=v1.16.1-ubi8

 Replace the ``v1.16.1`` value in the preceding command with the version that is supported
@@ -349,6 +353,7 @@ In this scenario, the NVIDIA GPU driver is already installed on the worker nodes
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set driver.enabled=false

 The preceding command prevents the Operator from installing the GPU driver on any nodes in the cluster.
@@ -377,9 +382,10 @@ Install the Operator with the following options:

     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
-       nvidia/gpu-operator \
-       --set driver.enabled=false \
-       --set toolkit.enabled=false
+       nvidia/gpu-operator \
+       --version=${version} \
+       --set driver.enabled=false \
+       --set toolkit.enabled=false


 Pre-Installed NVIDIA Container Toolkit (but no drivers)
@@ -400,6 +406,7 @@ In this scenario, the NVIDIA Container Toolkit is already installed on the worker
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set toolkit.enabled=false

 Running a Custom Driver Image
@@ -428,6 +435,7 @@ you can build a custom driver container image. Follow these steps:
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set driver.repository=docker.io/nvidia \
        --set driver.version="465.27"

@@ -465,6 +473,7 @@ If you need to specify custom values, refer to the following sample command for

     helm install gpu-operator -n gpu-operator --create-namespace \
        nvidia/gpu-operator $HELM_OPTIONS \
+       --version=${version} \
        --set toolkit.env[0].name=CONTAINERD_CONFIG \
        --set toolkit.env[0].value=/etc/containerd/config.toml \
        --set toolkit.env[1].name=CONTAINERD_SOCKET \
@@ -539,6 +548,7 @@ These options can be passed to GPU Operator during install time as below.

     helm install gpu-operator -n gpu-operator --create-namespace \
        nvidia/gpu-operator $HELM_OPTIONS \
+       --version=${version} \
        --set toolkit.env[0].name=CONTAINERD_CONFIG \
        --set toolkit.env[0].value=/var/snap/microk8s/current/args/containerd-template.toml \
        --set toolkit.env[1].name=CONTAINERD_SOCKET \
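The commit does not specify where ``${version}`` values come from. One way a reader could list the chart releases that are valid for the flag, assuming the NVIDIA Helm repository is already configured, is:

    # List every published version of the gpu-operator chart in the nvidia repo.
    $ helm repo update
    $ helm search repo nvidia/gpu-operator --versions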

gpu-operator/google-gke.rst

Lines changed: 1 addition & 0 deletions
@@ -169,6 +169,7 @@ You can create a node pool that uses a Container-Optimized OS node image or a Ubuntu
     $ helm install --wait --generate-name \
        -n gpu-operator \
        nvidia/gpu-operator \
+       --version=${version} \
        --set hostPaths.driverInstallDir=/home/kubernetes/bin/nvidia \
        --set toolkit.installDir=/home/kubernetes/bin/nvidia \
        --set cdi.enabled=true \

gpu-operator/gpu-driver-configuration.rst

Lines changed: 1 addition & 0 deletions
@@ -277,6 +277,7 @@ Perform the following steps to install the GPU Operator and use the NVIDIA driver
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version}
        --set driver.nvidiaDriverCRD.enabled=true

 By default, Helm configures a ``default`` NVIDIA driver custom resource during installation.

gpu-operator/gpu-operator-confidential-containers.rst

Lines changed: 1 addition & 0 deletions
@@ -407,6 +407,7 @@ Perform the following steps to install the Operator for use with confidential containers
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set sandboxWorkloads.enabled=true \
        --set kataManager.enabled=true \
        --set ccManager.enabled=true \

gpu-operator/gpu-operator-kata.rst

Lines changed: 1 addition & 0 deletions
@@ -269,6 +269,7 @@ Perform the following steps to install the Operator for use with Kata Containers
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set sandboxWorkloads.enabled=true \
        --set kataManager.enabled=true

gpu-operator/gpu-operator-kubevirt.rst

Lines changed: 2 additions & 0 deletions
@@ -140,6 +140,7 @@ Install the GPU Operator, enabling ``sandboxWorkloads``:
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set sandboxWorkloads.enabled=true

 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -171,6 +172,7 @@ Install the GPU Operator with ``sandboxWorkloads`` and ``vgpuManager`` enabled a
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set sandboxWorkloads.enabled=true \
        --set vgpuManager.enabled=true \
        --set vgpuManager.repository=<path to private repository> \

gpu-operator/gpu-operator-mig.rst

Lines changed: 3 additions & 0 deletions
@@ -57,6 +57,7 @@ Perform the following steps to install the Operator and configure MIG:
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set mig.strategy=single

 Set ``mig.strategy`` to ``mixed`` when MIG mode is not enabled on all GPUs on a node.
@@ -463,6 +464,7 @@ can be used to install the GPU Operator:
     $ helm install gpu-operator \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set driver.enabled=false

@@ -513,6 +515,7 @@ Alternatively, you can create a custom config map for use by MIG Manager by performing
     $ helm install gpu-operator \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set migManager.gpuClientsConfig.name=gpu-clients
        --set driver.enabled=false

gpu-operator/gpu-operator-rdma.rst

Lines changed: 3 additions & 0 deletions
@@ -132,6 +132,7 @@ To use DMA-BUF and network device drivers that are installed by the Network Operator
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set driver.useOpenKernelModules=true

 To use DMA-BUF and network device drivers that are installed on the host:
@@ -141,6 +142,7 @@ To use DMA-BUF and network device drivers that are installed on the host:
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set driver.useOpenKernelModules=true \
        --set driver.rdma.useHostMofed=true
@@ -433,6 +435,7 @@ The following sample command applies to clusters that use the Network Operator t
     $ helm install --wait --generate-name \
        -n gpu-operator --create-namespace \
        nvidia/gpu-operator \
+       --version=${version} \
        --set driver.useOpenKernelModules=true \
        --set gds.enabled=true

gpu-operator/gpu-sharing.rst

Lines changed: 1 addition & 0 deletions
@@ -364,6 +364,7 @@ Perform the following steps to configure time-slicing before installing the operator

     $ helm install gpu-operator nvidia/gpu-operator \
        -n gpu-operator \
+       --version=${version} \
        --set devicePlugin.config.name=time-slicing-config

 #. Refer to either :ref:`time-slicing-cluster-wide-config` or
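As a follow-up check that is not part of this commit, a reader could confirm which chart version a given install actually deployed by listing the Helm release in the target namespace:

    # Shows the release name, chart version, and app version of the deployed release.
    $ helm list -n gpu-operator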
