Commit 44d52b6

Update README and versions for 21.06 branch

1 parent 7362450

8 files changed, 12 insertions(+), 18 deletions(-)

Dockerfile

Lines changed: 4 additions & 4 deletions

@@ -12,11 +12,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.05-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.05-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.06-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.06-py3-sdk
 
-ARG MODEL_ANALYZER_VERSION=1.5.0dev
-ARG MODEL_ANALYZER_CONTAINER_VERSION=21.06dev
+ARG MODEL_ANALYZER_VERSION=1.5.0
+ARG MODEL_ANALYZER_CONTAINER_VERSION=21.06
 
 FROM ${TRITONSDK_BASE_IMAGE} as sdk
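Because the base images are exposed as build arguments, the analyzer container can be rebuilt against a different Triton release without editing the Dockerfile. A minimal sketch (the `model-analyzer` image tag is a hypothetical name chosen for illustration):

```
# Build Model Analyzer against the 21.06 Triton images; each --build-arg
# overrides the corresponding ARG default declared in the Dockerfile.
$ docker build \
    --build-arg BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.06-py3 \
    --build-arg TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.06-py3-sdk \
    -t model-analyzer .
```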

README.md

Lines changed: 0 additions & 6 deletions

@@ -18,12 +18,6 @@ limitations under the License.
 
 # Triton Model Analyzer
 
-**LATEST RELEASE: You are currently on the main branch which tracks
-under-development progress towards the next release. The latest
-release of the Triton Model Analyzer is 1.4.0 and is available on
-branch
-[r21.05](https://github.com/triton-inference-server/model_analyzer/tree/r21.05).**
-
 Triton Model Analyzer is a CLI tool to help with better understanding of the
 compute and memory requirements of the Triton Inference Server models. These
 reports will help the user better understand the trade-offs in different

VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-1.5.0dev
+1.5.0

docs/config.md

Lines changed: 1 addition & 1 deletion

@@ -107,7 +107,7 @@ profile_models: <comma-delimited-string-list>
 [ perf_output: <bool> | default: false ]
 
 # Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:21.05-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:21.06-py3 ]
 
 # Triton Server HTTP endpoint url used by Model Analyzer client. Will be ignored if server-launch-mode is not 'remote'.
 [ triton_http_endpoint: <string> | default: localhost:8000 ]
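In practice the default image only matters when nothing overrides it; a user can pin a different image in their own config file. A minimal sketch, assuming Model Analyzer accepts a config file via `-f` and using the key names shown above (the repository path and model name are placeholders):

```
# Hypothetical config.yaml pinning the Triton image used in Docker launch mode.
$ cat > config.yaml <<'EOF'
model_repository: /path/to/model/repository
profile_models: test_model
triton_docker_image: nvcr.io/nvidia/tritonserver:21.06-py3
EOF
$ model-analyzer -f config.yaml
```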

docs/install.md

Lines changed: 3 additions & 3 deletions

@@ -24,15 +24,15 @@ There are three ways to use Triton Model Analyzer:
 can pull and run the SDK container with the following commands:
 
 ```
-$ docker pull nvcr.io/nvidia/tritonserver:21.05-py3-sdk
+$ docker pull nvcr.io/nvidia/tritonserver:21.06-py3-sdk
 ```
 
 If you are not planning to run Model Analyzer with
 `--triton-launch-mode=docker` you can run the container with the following
 command:
 
 ```
-$ docker run -it --gpus all --net=host nvcr.io/nvidia/tritonserver:21.05-py3-sdk
+$ docker run -it --gpus all --net=host nvcr.io/nvidia/tritonserver:21.06-py3-sdk
 ```
 
 If you intend to use `--triton-launch-mode=docker`, you will need to mount
@@ -51,7 +51,7 @@ There are three ways to use Triton Model Analyzer:
 $ docker run -it --gpus all \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v <path-to-output-model-repo>:<path-to-output-model-repo> \
-    --net=host nvcr.io/nvidia/tritonserver:21.05-py3-sdk
+    --net=host nvcr.io/nvidia/tritonserver:21.06-py3-sdk
 ```
 
 Model Analyzer uses `pdfkit` for report generation. If you are running Model

docs/kubernetes_deploy.md

Lines changed: 1 addition & 1 deletion

@@ -79,7 +79,7 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 21.05-py3
+    tag: 21.06-py3
 ```
 
 The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.

helm-chart/values.yaml

Lines changed: 1 addition & 1 deletion

@@ -41,4 +41,4 @@ images:
 
   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 21.05-py3
+    tag: 21.06-py3
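The Triton tag can also be overridden at install time instead of editing `values.yaml` or the copy shown in the deployment docs. A sketch assuming the chart is installed from the `helm-chart/` directory (`model-analyzer` is a hypothetical release name):

```
# Install the chart, overriding the triton.tag value from values.yaml.
$ helm install model-analyzer ./helm-chart --set triton.tag=21.06-py3
```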

tests/test_perf_analyzer.py

Lines changed: 1 addition & 1 deletion

@@ -36,7 +36,7 @@
 MODEL_REPOSITORY_PATH = '/model_analyzer/models'
 PERF_BIN_PATH = 'perf_analyzer'
 TRITON_LOCAL_BIN_PATH = 'test_path'
-TRITON_VERSION = '21.05'
+TRITON_VERSION = '21.06'
 TEST_MODEL_NAME = 'test_model'
 TEST_CONCURRENCY_RANGE = '1:16:2'
 CONFIG_TEST_ARG = 'sync'
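To verify the bumped constant, this module can be run on its own; a sketch assuming the tests are discoverable by the standard unittest runner from the repository root:

```
# Run the perf_analyzer unit tests; -v prints each test name as it runs.
$ python -m unittest tests.test_perf_analyzer -v
```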
