cloud-infrastructure/ai-infra-gpu/GPU/nim-gpu-oke/README.md (1 addition, 1 deletion)
@@ -301,7 +301,7 @@ $ oci os bucket delete --bucket-name NIM --empty
 Resources:
 
 *[NVIDIA releases NIM for deploying AI models at scale](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)
-*[Deployng Triton on OCI](https://github.com/triton-inference-server/server/tree/main/deploy/oci)
+*[Deploying Triton on OCI](https://github.com/triton-inference-server/server/tree/main/deploy/oci)
 *[NIM documentation on how to use non prebuilt models](https://developer.nvidia.com/docs/nemo-microservices/inference/nmi_nonprebuilt_playbook.html)