
Commit 27da17f

Update post-23.08 release (#753)
1 parent 65229d0 · commit 27da17f

File tree

8 files changed: +12 additions, -12 deletions

- Dockerfile
- README.md
- docs/config.md
- docs/kubernetes_deploy.md
- docs/mm_quick_start.md
- docs/quick_start.md
- helm-chart/values.yaml
- model_analyzer/config/input/config_defaults.py

Dockerfile

Lines changed: 2 additions & 2 deletions
```diff
@@ -12,8 +12,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.07-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.07-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.08-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.08-py3-sdk
 
 ARG MODEL_ANALYZER_VERSION=1.32.0dev
 ARG MODEL_ANALYZER_CONTAINER_VERSION=23.09dev
```
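These build arguments can also be overridden at build time instead of editing the Dockerfile. A minimal sketch, assuming the Dockerfile sits at the repository root; the output tag `model-analyzer:local` is an illustrative name:

```bash
# Build against the updated 23.08 base images without editing the Dockerfile;
# the output tag "model-analyzer:local" is an illustrative placeholder.
docker build . \
    --build-arg BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.08-py3 \
    --build-arg TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.08-py3-sdk \
    -t model-analyzer:local
```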

README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -21,9 +21,9 @@ limitations under the License.
 >**LATEST RELEASE:**<br>
 You are currently on the `main` branch which tracks
 under-development progress towards the next release. <br>The latest
-release of the Triton Model Analyzer is 1.30.0 and is available on
+release of the Triton Model Analyzer is 1.31.0 and is available on
 branch
-[r23.07](https://github.com/triton-inference-server/model_analyzer/tree/r23.07).
+[r23.08](https://github.com/triton-inference-server/model_analyzer/tree/r23.08).
 
 Triton Model Analyzer is a CLI tool which can help you find a more optimal configuration, on a given piece of hardware, for single, multiple, ensemble, or BLS models running on a [Triton Inference Server](https://github.com/triton-inference-server/server/). Model Analyzer will also generate reports to help you better understand the trade-offs of the different configurations along with their compute and memory requirements.
 <br><br>
```
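Since the README introduces Model Analyzer as a CLI tool, a minimal profiling invocation might look like the sketch below, run from inside the SDK container; the `add_sub` model name and the paths are illustrative, not taken from this diff:

```bash
# Profile a single model and export reports; model name and paths are
# illustrative placeholders.
model-analyzer profile \
    --model-repository "$(pwd)/examples/quick-start" \
    --profile-models add_sub \
    --triton-launch-mode docker \
    --output-model-repository-path <path-to-output-model-repo>/output \
    --export-path profile_results
```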

docs/config.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -153,7 +153,7 @@ cpu_only_composing_models: <comma-delimited-string-list>
 [ reload_model_disable: <bool> | default: false]
 
 # Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.07-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.08-py3 ]
 
 # Triton Server HTTP endpoint url used by Model Analyzer client"
 [ triton_http_endpoint: <string> | default: localhost:8000 ]
```
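The `triton_docker_image` option can be supplied through a YAML config file rather than on the command line. A minimal sketch, assuming the standard `-f`/`--config-file` flag; the file name, model repository path, and model name are illustrative:

```bash
# Write a minimal config that pins the 23.08 Triton image, then run with it;
# file name, repository path, and model name are illustrative placeholders.
cat > config.yaml <<'EOF'
model_repository: /workspace/examples/quick-start
profile_models: add_sub
triton_docker_image: nvcr.io/nvidia/tritonserver:23.08-py3
triton_http_endpoint: localhost:8000
EOF
model-analyzer profile -f config.yaml
```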

docs/kubernetes_deploy.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -79,7 +79,7 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 23.07-py3
+    tag: 23.08-py3
 ```
 
 The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
````
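The tag can also be overridden at install time rather than by editing the values file. A sketch assuming the chart is installed from the repository's `helm-chart/` directory; the release name `model-analyzer` is illustrative:

```bash
# Install the chart, pinning the Triton tag shown in the diff above;
# the release name "model-analyzer" is an illustrative choice.
helm install model-analyzer ./helm-chart \
    --set images.triton.tag=23.08-py3
```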

docs/mm_quick_start.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:23.07-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.08-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus all \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
 -v <path-to-output-model-repo>:<path-to-output-model-repo> \
---net=host nvcr.io/nvidia/tritonserver:23.07-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:23.08-py3-sdk
 ```
 
 **Replacing** `<path-to-output-model-repo>` with the
````

docs/quick_start.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:23.07-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.08-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus all \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
 -v <path-to-output-model-repo>:<path-to-output-model-repo> \
---net=host nvcr.io/nvidia/tritonserver:23.07-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:23.08-py3-sdk
 ```
 
 **Replacing** `<path-to-output-model-repo>` with the
````

helm-chart/values.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -41,4 +41,4 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 23.07-py3
+    tag: 23.08-py3
```

model_analyzer/config/input/config_defaults.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -56,7 +56,7 @@
 DEFAULT_RUN_CONFIG_PROFILE_MODELS_CONCURRENTLY_ENABLE = False
 DEFAULT_REQUEST_RATE_SEARCH_ENABLE = False
 DEFAULT_TRITON_LAUNCH_MODE = "local"
-DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:23.07-py3"
+DEFAULT_TRITON_DOCKER_IMAGE = "nvcr.io/nvidia/tritonserver:23.08-py3"
 DEFAULT_TRITON_HTTP_ENDPOINT = "localhost:8000"
 DEFAULT_TRITON_GRPC_ENDPOINT = "localhost:8001"
 DEFAULT_TRITON_METRICS_URL = "http://localhost:8002/metrics"
```
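These defaults apply only when no explicit value is given, so a run can pin its own image without touching `config_defaults.py`. A sketch assuming the dashed CLI form of the `triton_docker_image` option; paths and the model name are illustrative:

```bash
# Override DEFAULT_TRITON_DOCKER_IMAGE for a single run; the flag name
# assumes the usual mapping from config option to dashed CLI flag.
model-analyzer profile \
    --model-repository <path-to-model-repo> \
    --profile-models add_sub \
    --triton-launch-mode docker \
    --triton-docker-image nvcr.io/nvidia/tritonserver:23.08-py3
```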
