|

>[!WARNING]
>You are currently on the `main` branch which tracks under-development progress
->towards the next release. The current release is version [2.57.0](https://github.com/triton-inference-server/server/releases/latest)
->and corresponds to the 25.04 container release on NVIDIA GPU Cloud (NGC).
+>towards the next release. The current release is version [2.58.0](https://github.com/triton-inference-server/server/releases/latest)
+>and corresponds to the 25.05 container release on NVIDIA GPU Cloud (NGC).

# Triton Inference Server

@@ -90,16 +90,16 @@ Inference Server with the |

```bash
# Step 1: Create the example model repository
-git clone -b r25.02 https://github.com/triton-inference-server/server.git
+git clone -b r25.05 https://github.com/triton-inference-server/server.git
cd server/docs/examples
./fetch_models.sh

|
97 | 97 | # Step 2: Launch triton from the NGC Triton container |
98 | | -docker run --gpus=1 --rm --net=host -v ${PWD}/model_repository:/models nvcr.io/nvidia/tritonserver:25.02-py3 tritonserver --model-repository=/models --model-control-mode explicit --load-model densenet_onnx |
| 98 | +docker run --gpus=1 --rm --net=host -v ${PWD}/model_repository:/models nvcr.io/nvidia/tritonserver:25.05-py3 tritonserver --model-repository=/models --model-control-mode explicit --load-model densenet_onnx |
99 | 99 |
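# Optional: confirm the server is ready before sending requests. Triton's
# HTTP/REST API (default port 8000, reachable here because of --net=host)
# exposes a readiness endpoint that returns 200 once the server is ready.
curl -v localhost:8000/v2/health/ready
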
|
100 | 100 | # Step 3: Sending an Inference Request |
101 | 101 | # In a separate console, launch the image_client example from the NGC Triton SDK container |
102 | | -docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:25.02-py3-sdk /workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg |
| 102 | +docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:25.05-py3-sdk /workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg |
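# (-m selects the model, -c 3 requests the top-3 classifications, and
# -s INCEPTION applies Inception-style scaling to the input image)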

# Inference should return the following
Image '/workspace/images/mug.jpg':
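
# Requests can also be sent programmatically. Below is a minimal sketch using
# the tritonclient Python package (pip install tritonclient[http] numpy); the
# tensor names data_0/fc6_1 and the FP32 [3, 224, 224] input shape assume the
# example densenet_onnx configuration fetched above -- check the model's
# config.pbtxt if yours differ.
python3 - <<'EOF'
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Random data stands in for a real preprocessed image
data = np.random.rand(3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("data_0", [3, 224, 224], "FP32")
inp.set_data_from_numpy(data)

# class_count=3 asks Triton to return the top-3 classifications
out = httpclient.InferRequestedOutput("fc6_1", class_count=3)
result = client.infer("densenet_onnx", inputs=[inp], outputs=[out])
print(result.as_numpy("fc6_1"))
EOF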
@@ -260,4 +260,3 @@ For questions, we recommend posting in our community |

Please refer to the [NVIDIA Developer Triton page](https://developer.nvidia.com/nvidia-triton-inference-server)
for more information.
-