
Commit 055e9d7

Update running-non-rdma-workloads-on-oke.md
Parent commit: 75f2202


docs/running-non-rdma-workloads-on-oke.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -40,11 +40,11 @@ Change the `driver.repository` and `driver.version` in the Helm command below.
 helm install --wait \
   -n gpu-operator --create-namespace \
   gpu-operator nvidia/gpu-operator \
-  --version v23.3.2 \
+  --version v23.9.0 \
   --set operator.defaultRuntime=crio \
   --set driver.repository=<The repository that you pushed your image> \
   --set driver.version=<The driver version in your pushed image. Only the version, don't add ol7.9 at the end> \
-  --set toolkit.version=v1.13.5-centos7
+  --set toolkit.version=v1.14.3-centos7
 ```
 
 Wait until all GPU operator pods are running with `kubectl get pods -n gpu-operator`.
````
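Before running the upgraded install command, it can help to confirm that the target chart version is actually published. A minimal sketch, assuming the repository alias `nvidia` and the standard NGC chart repository URL:

```sh
# Add the NVIDIA chart repository (skip if it is already configured) and refresh the index.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# List the available gpu-operator chart versions; v23.9.0 should appear in the output.
helm search repo nvidia/gpu-operator --versions
```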
````diff
@@ -68,4 +68,4 @@ spec:
           nvidia.com/gpu: "1"
 ```
 
-Get the logs from the above pod by running `kubectl logs nvidia-version-check`. You should see the `nvidia-smi` output correctly listing the GPU driver and CUDA version.
+Get the logs from the above pod by running `kubectl logs nvidia-version-check`. You should see the `nvidia-smi` output correctly listing the GPU driver and CUDA version.
````
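The hunk above shows only the GPU resource limit from the verification pod's spec. For context, here is a hypothetical reconstruction of a complete `nvidia-version-check` Pod manifest; only the pod name and the `nvidia.com/gpu: "1"` limit come from the diff, and the image and the rest of the spec are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-version-check
spec:
  restartPolicy: OnFailure
  containers:
    - name: nvidia-version-check
      # Assumed CUDA base image; the actual doc may pin a different one.
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      # Print the driver and CUDA version, then exit.
      command: ["nvidia-smi"]
      resources:
        limits:
          # Request a single GPU so the pod is scheduled onto a GPU node.
          nvidia.com/gpu: "1"
```

Apply it with `kubectl apply -f nvidia-version-check.yaml`, then fetch the logs as described above.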
