
Commit e5b586e

Update README and versions for 23.06 branch (#708) (#720)

* Update README and versions for 23.06 branch
* Update README.md for 23.06

1 parent 23b1929 commit e5b586e

8 files changed: +12 -12 lines changed

Dockerfile

Lines changed: 2 additions & 2 deletions

```diff
@@ -12,8 +12,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.05-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.05-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.06-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.06-py3-sdk
 
 ARG MODEL_ANALYZER_VERSION=1.30.0dev
 ARG MODEL_ANALYZER_CONTAINER_VERSION=23.07dev
```
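These build arguments pin the Triton base images for the Model Analyzer container. A minimal sketch of overriding them at build time, should a different release be needed (the `-t model-analyzer` tag is illustrative):

```
# Build Model Analyzer against explicit base images; the image tag is illustrative
docker build . -t model-analyzer \
    --build-arg BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.06-py3 \
    --build-arg TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:23.06-py3-sdk
```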

README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -20,9 +20,9 @@ limitations under the License.
 
 **LATEST RELEASE: You are currently on the main branch which tracks
 under-development progress towards the next release. The latest
-release of the Triton Model Analyzer is 1.28.0 and is available on
+release of the Triton Model Analyzer is 1.29.0 and is available on
 branch
-[r23.05](https://github.com/triton-inference-server/model_analyzer/tree/r23.05).**
+[r23.06](https://github.com/triton-inference-server/model_analyzer/tree/r23.06).**
 
 Triton Model Analyzer is a CLI tool which can help you find a more optimal configuration, on a given piece of hardware, for single, multiple, ensemble, or BLS models running on a [Triton Inference Server](https://github.com/triton-inference-server/server/). Model Analyzer will also generate reports to help you better understand the trade-offs of the different configurations along with their compute and memory requirements.
 <br><br>
```

docs/config.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -153,7 +153,7 @@ cpu_only_composing_models: <comma-delimited-string-list>
 [ reload_model_disable: <bool> | default: false]
 
 # Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.05-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:23.06-py3 ]
 
 # Triton Server HTTP endpoint url used by Model Analyzer client"
 [ triton_http_endpoint: <string> | default: localhost:8000 ]
```
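As the comment notes, `triton_docker_image` only matters when Triton is launched in Docker mode. A minimal sketch of a config pinning the image explicitly, written and used from a shell (the file name, model repository path, and model name are illustrative):

```
# Hypothetical config.yaml pinning the Triton image used in Docker launch mode
cat > config.yaml <<'EOF'
model_repository: /path/to/model/repository
triton_launch_mode: docker
triton_docker_image: nvcr.io/nvidia/tritonserver:23.06-py3
EOF

# Profile with the config file; model name is illustrative
model-analyzer profile -f config.yaml --profile-models add_sub
```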

docs/kubernetes_deploy.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -79,7 +79,7 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 23.05-py3
+    tag: 23.06-py3
 ```
 
 The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
````
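As a usage sketch, the same tag can also be overridden at install time without editing the chart values (the release name `model-analyzer` is illustrative):

```
# Override the Triton image tag from the helm command line; release name is illustrative
helm install model-analyzer ./helm-chart \
    --set images.triton.tag=23.06-py3
```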

docs/mm_quick_start.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:23.05-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.06-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus all \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
 -v <path-to-output-model-repo>:<path-to-output-model-repo> \
---net=host nvcr.io/nvidia/tritonserver:23.05-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:23.06-py3-sdk
 ```
 
 **Replacing** `<path-to-output-model-repo>` with the
````
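For readability, the same command with a hypothetical concrete path substituted for the placeholder:

```
# Illustrative invocation; $HOME/output_models stands in for <path-to-output-model-repo>
docker run -it --gpus all \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
    -v $HOME/output_models:$HOME/output_models \
    --net=host nvcr.io/nvidia/tritonserver:23.06-py3-sdk
```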

docs/quick_start.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -49,7 +49,7 @@ git pull origin main
 **1. Pull the SDK container:**
 
 ```
-docker pull nvcr.io/nvidia/tritonserver:23.05-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:23.06-py3-sdk
 ```
 
 **2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus all \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
 -v <path-to-output-model-repo>:<path-to-output-model-repo> \
---net=host nvcr.io/nvidia/tritonserver:23.05-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:23.06-py3-sdk
 ```
 
 **Replacing** `<path-to-output-model-repo>` with the
````
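Once inside the SDK container, the quick start proceeds by profiling the example model; a hedged sketch of that step (the output directory is illustrative, and the flag spellings are assumed from the Model Analyzer CLI):

```
# Inside the SDK container; profile the quick-start example model
model-analyzer profile \
    --model-repository $(pwd)/examples/quick-start \
    --profile-models add_sub \
    --output-model-repository-path <path-to-output-model-repo>/output
```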

helm-chart/values.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -41,4 +41,4 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 23.05-py3
+    tag: 23.06-py3
```

model_analyzer/config/input/config_defaults.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -54,7 +54,7 @@
 DEFAULT_RUN_CONFIG_PROFILE_MODELS_CONCURRENTLY_ENABLE = False
 DEFAULT_REQUEST_RATE_SEARCH_ENABLE = False
 DEFAULT_TRITON_LAUNCH_MODE = 'local'
-DEFAULT_TRITON_DOCKER_IMAGE = 'nvcr.io/nvidia/tritonserver:23.05-py3'
+DEFAULT_TRITON_DOCKER_IMAGE = 'nvcr.io/nvidia/tritonserver:23.06-py3'
 DEFAULT_TRITON_HTTP_ENDPOINT = 'localhost:8000'
 DEFAULT_TRITON_GRPC_ENDPOINT = 'localhost:8001'
 DEFAULT_TRITON_METRICS_URL = 'http://localhost:8002/metrics'
```
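This constant backs the `triton_docker_image` option shown in docs/config.md. A sketch of overriding the default per run from the command line rather than a config file (flag spellings are assumed to mirror the config option names; the repository path and model name are illustrative):

```
# Override the default Triton image for one profiling run
model-analyzer profile \
    --model-repository /path/to/model/repository \
    --profile-models add_sub \
    --triton-launch-mode docker \
    --triton-docker-image nvcr.io/nvidia/tritonserver:23.06-py3
```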
