docs/running-non-rdma-workloads-on-oke.md (3 additions, 3 deletions)
@@ -40,11 +40,11 @@ Change the `driver.repository` and `driver.version` in the Helm command below.
 helm install --wait \
   -n gpu-operator --create-namespace \
   gpu-operator nvidia/gpu-operator \
-  --version v23.3.2 \
+  --version v23.9.0 \
   --set operator.defaultRuntime=crio \
   --set driver.repository=<The repository that you pushed your image> \
   --set driver.version=<The driver version in your pushed image. Only the version, don't add ol7.9 at the end> \
-  --set toolkit.version=v1.13.5-centos7
+  --set toolkit.version=v1.14.3-centos7
 ```
 
 Wait until all GPU operator pods are running with `kubectl get pods -n gpu-operator`.
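To make that wait step concrete, here is a minimal sketch of checking the namespace. The `--watch` and `kubectl wait` variants are assumptions, not commands from the original doc; note that validator pods which finish as `Completed` never report `Ready`, so the watch form is the more reliable of the two.

```sh
# Watch the GPU operator pods until they settle into Running/Completed
kubectl get pods -n gpu-operator --watch

# Alternatively, block until every pod reports Ready (assumption: pods that
# end as Completed never become Ready, so this can time out even on success)
kubectl wait --for=condition=Ready pods --all -n gpu-operator --timeout=10m
```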
@@ -68,4 +68,4 @@ spec:
         nvidia.com/gpu: "1"
 ```
 
-Get the logs from the above pod by running `kubectl logs nvidia-version-check`, you should see the `nvidia-smi` output correctly listing the GPU driver and CUDA version.
+Get the logs from the above pod by running `kubectl logs nvidia-version-check`; you should see the `nvidia-smi` output correctly listing the GPU driver and CUDA version.
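The hunk above shows only the tail of the verification manifest. Below is a minimal sketch of a complete `nvidia-version-check` pod that would produce that `nvidia-smi` output; the CUDA base image tag and the `restartPolicy` are assumptions, not values taken from the original file.

```yaml
# Hypothetical reconstruction of the full nvidia-version-check manifest;
# the image tag and restartPolicy are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-version-check
spec:
  restartPolicy: OnFailure
  containers:
  - name: nvidia-version-check
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: "1"
```

Apply it with `kubectl apply -f nvidia-version-check.yaml`, then fetch the logs as described above.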