Commit a01bf80

Update README and versions for 21.11 branch
1 parent: 9752c68

8 files changed: +12 −18 lines

Dockerfile

Lines changed: 4 additions & 4 deletions

@@ -12,11 +12,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.10-py3
-ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.10-py3-sdk
+ARG BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.11-py3
+ARG TRITONSDK_BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.11-py3-sdk

-ARG MODEL_ANALYZER_VERSION=1.10.0dev
-ARG MODEL_ANALYZER_CONTAINER_VERSION=21.11dev
+ARG MODEL_ANALYZER_VERSION=1.10.0
+ARG MODEL_ANALYZER_CONTAINER_VERSION=21.11

 FROM ${TRITONSDK_BASE_IMAGE} as sdk
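Since the base images are wired in through `ARG`s, an image with these defaults can be built straight from the repository root. A minimal sketch, assuming Docker is available; the tag `model-analyzer:1.10.0` is an arbitrary local name, not something this commit defines:

```
$ docker build -t model-analyzer:1.10.0 .
```

Any of the `ARG`s above can be overridden at build time, e.g. `--build-arg BASE_IMAGE=nvcr.io/nvidia/tritonserver:21.11-py3`.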

README.md

Lines changed: 0 additions & 6 deletions

@@ -18,12 +18,6 @@ limitations under the License.

 # Triton Model Analyzer

-**LATEST RELEASE: You are currently on the main branch which tracks
-under-development progress towards the next release. The latest
-release of the Triton Model Analyzer is 1.9.0 and is available on
-branch
-[r21.10](https://github.com/triton-inference-server/model_analyzer/tree/r21.10).**
-
 Triton Model Analyzer is a CLI tool to help with better understanding of the
 compute and memory requirements of the Triton Inference Server models. These
 reports will help the user better understand the trade-offs in different

VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-1.10.0dev
+1.10.0

docs/config.md

Lines changed: 1 addition & 1 deletion

@@ -119,7 +119,7 @@ profile_models: <comma-delimited-string-list>
 [ perf_analyzer_max_auto_adjusts: <int> | default: 10 ]

 # Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:21.10-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:21.11-py3 ]

 # Triton Server HTTP endpoint url used by Model Analyzer client. Will be ignored if server-launch-mode is not 'remote'".
 [ triton_http_endpoint: <string> | default: localhost:8000 ]
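Putting the documented options together, a config file can pin the Triton image explicitly rather than relying on the default. A minimal sketch, assuming the `profile` subcommand and `--config-file` flag; the `model_repository` path and `add_sub` model name are hypothetical placeholders:

```
$ cat > config.yaml <<'EOF'
model_repository: /workspace/model_repository
profile_models: add_sub
triton_docker_image: nvcr.io/nvidia/tritonserver:21.11-py3
triton_http_endpoint: localhost:8000
EOF
$ model-analyzer profile --config-file config.yaml
```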

docs/install.md

Lines changed: 3 additions & 3 deletions

@@ -26,15 +26,15 @@ Catalog](https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver). You can
 pull and run the SDK container with the following commands:

 ```
-$ docker pull nvcr.io/nvidia/tritonserver:21.10-py3-sdk
+$ docker pull nvcr.io/nvidia/tritonserver:21.11-py3-sdk
 ```

 If you are not planning to run Model Analyzer with
 `--triton-launch-mode=docker`, You can run the SDK container with the following
 command:

 ```
-$ docker run -it --gpus all --net=host nvcr.io/nvidia/tritonserver:21.10-py3-sdk
+$ docker run -it --gpus all --net=host nvcr.io/nvidia/tritonserver:21.11-py3-sdk
 ```

 You will need to build and install the Triton server binary inside the SDK

@@ -59,7 +59,7 @@ following:
 $ docker run -it --gpus all \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v <path-to-output-model-repo>:<path-to-output-model-repo> \
---net=host nvcr.io/nvidia/tritonserver:21.10-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:21.11-py3-sdk
 ```

 Model Analyzer uses `pdfkit` for report generation. If you are running Model
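With the Docker socket mounted as in the second hunk, Model Analyzer can launch Triton containers itself from inside the SDK container. A minimal sketch, assuming the `--model-repository` and `--profile-models` flags mirror the config options of the same names; `<path-to-model-repo>` and `add_sub` are placeholders:

```
$ model-analyzer profile \
    --model-repository <path-to-model-repo> \
    --profile-models add_sub \
    --triton-launch-mode docker
```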

docs/kubernetes_deploy.md

Lines changed: 1 addition & 1 deletion

@@ -79,7 +79,7 @@ images:

   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 21.10-py3
+    tag: 21.11-py3
 ```

 The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
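Instead of editing the values file by hand, the same tag can be overridden at install time. A minimal sketch, assuming Helm 3; the release name `model-analyzer` is an arbitrary choice:

```
$ helm install model-analyzer ./helm-chart \
    --set images.triton.tag=21.11-py3
```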

helm-chart/values.yaml

Lines changed: 1 addition & 1 deletion

@@ -41,4 +41,4 @@ images:

   triton:
     image: nvcr.io/nvidia/tritonserver
-    tag: 21.10-py3
+    tag: 21.11-py3

model_analyzer/config/input/config_defaults.py

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@
 DEFAULT_RUN_CONFIG_MAX_PREFERRED_BATCH_SIZE = 16
 DEFAULT_RUN_CONFIG_PREFERRED_BATCH_SIZE_DISABLE = False
 DEFAULT_TRITON_LAUNCH_MODE = 'local'
-DEFAULT_TRITON_DOCKER_IMAGE = 'nvcr.io/nvidia/tritonserver:21.10-py3'
+DEFAULT_TRITON_DOCKER_IMAGE = 'nvcr.io/nvidia/tritonserver:21.11-py3'
 DEFAULT_TRITON_HTTP_ENDPOINT = 'localhost:8000'
 DEFAULT_TRITON_GRPC_ENDPOINT = 'localhost:8001'
 DEFAULT_TRITON_METRICS_URL = 'http://localhost:8002/metrics'
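A quick check that an installed copy picked up the new default; a minimal sketch, assuming the `model_analyzer` package is importable in the active environment (module path and constant name are as shown in the diff above):

```
$ python3 -c "from model_analyzer.config.input.config_defaults import DEFAULT_TRITON_DOCKER_IMAGE; print(DEFAULT_TRITON_DOCKER_IMAGE)"
nvcr.io/nvidia/tritonserver:21.11-py3
```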
