Triton Model Analyzer is a CLI tool which can help you find a more optimal configuration, on a given piece of hardware, for single, multiple, ensemble, or BLS models running on a [Triton Inference Server](https://github.com/triton-inference-server/server/). Model Analyzer will also generate reports to help you better understand the trade-offs of the different configurations along with their compute and memory requirements.
<br><br>
# Features
### Search Modes
- [Quick Search](docs/config_search.md#quick-search-mode) will **sparsely** search the [Max Batch Size](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#maximum-batch-size), [Dynamic Batching](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#dynamic-batcher), and [Instance Group](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#instance-groups) spaces by utilizing a heuristic hill-climbing algorithm to help you quickly find a more optimal configuration
- [Automatic Brute Search](docs/config_search.md#automatic-brute-search) will **exhaustively** search the [Max Batch Size](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#maximum-batch-size), [Dynamic Batching](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#dynamic-batcher), and [Instance Group](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#instance-groups) parameters of your model configuration

- [Manual Brute Search](docs/config_search.md#manual-brute-search) allows you to create manual sweeps for every parameter that can be specified in the model configuration
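The search mode is selected in Model Analyzer's YAML configuration (or via the equivalent CLI flag). As a rough sketch, assuming a hypothetical model named `my_model` and a placeholder repository path, a minimal profile config requesting quick search might look like:

```yaml
# Sketch of a profile config selecting quick search.
# `my_model` and the repository path are illustrative placeholders.
model_repository: /path/to/model_repository
profile_models:
  - my_model

# "quick" runs the hill-climbing search; "brute" runs the exhaustive search
run_config_search_mode: quick
```

The same option is available on the command line as `--run-config-search-mode`; see [Model Config Search](docs/config_search.md) for the full set of search options.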
### Model Types
- [Ensemble Model Search](docs/config_search.md#ensemble-model-search): Model Analyzer can help you find the optimal settings when profiling an ensemble model, utilizing the [Quick Search](docs/config_search.md#quick-search-mode) algorithm
- [BLS Model Search](docs/config_search.md#bls-model-search): Model Analyzer can help you find the optimal settings when profiling a BLS model, utilizing the [Quick Search](docs/config_search.md#quick-search-mode) algorithm
- [Multi-Model Search](docs/config_search.md#multi-model-search-mode): Model Analyzer can help you find the optimal settings when profiling multiple concurrent models, utilizing the [Quick Search](docs/config_search.md#quick-search-mode) algorithm
- [LLM Search](docs/config_search.md#llm-search-mode): Model Analyzer can help you find the optimal settings when profiling large language models, utilizing the [Quick Search](docs/config_search.md#quick-search-mode) algorithm
### Other Features
- [Detailed and summary reports](docs/report.md): Model Analyzer is able to generate summarized and detailed reports that can help you better understand the trade-offs between different model configurations that can be used for your model.
- [QoS Constraints](docs/config.md#constraint): Constraints can help you filter out the Model Analyzer results based on your QoS requirements. For example, you can specify a latency budget to filter out model configurations that do not satisfy the specified latency threshold.
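For illustration, a constraint block in the YAML config might look like the following sketch (the threshold values are placeholders; the metric names follow the [constraint documentation](docs/config.md#constraint)):

```yaml
# Sketch of QoS constraints: keep only configurations whose p99 latency
# stays under 100 ms and whose throughput is at least 500 inferences/second.
# Both threshold values are illustrative placeholders.
constraints:
  perf_latency_p99:
    max: 100
  perf_throughput:
    min: 500
```

Configurations that violate any constraint are filtered out of the results and reports.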
<br><br>
# Examples and Tutorials
### **Single Model**
See the [Single Model Quick Start](docs/quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple PyTorch model.
### **Multi Model**
See the [Multi-model Quick Start](docs/mm_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on two models running concurrently on the same GPU.
### **Ensemble Model**
See the [Ensemble Model Quick Start](docs/ensemble_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple Ensemble model.
### **BLS Model**
See the [BLS Model Quick Start](docs/bls_quick_start.md) for a guide on how to use Model Analyzer to profile, analyze and report on a simple BLS model.
<br><br>
# Documentation
- [Installation](docs/install.md)
- [Model Analyzer CLI](docs/cli.md)
- [Launch Modes](docs/launch_modes.md)
- [Configuring Model Analyzer](docs/config.md)
- [Model Analyzer Metrics](docs/metrics.md)
- [Model Config Search](docs/config_search.md)
- [Checkpointing](docs/checkpoints.md)
- [Model Analyzer Reports](docs/report.md)
- [Deployment with Kubernetes](docs/kubernetes_deploy.md)
<br><br>
# Reporting problems, asking questions
We appreciate any feedback, questions or bug reporting regarding this
project. When help with code is needed, follow the process outlined in
the Stack Overflow (https://stackoverflow.com/help/mcve)
document. Ensure posted examples are:

- minimal – use as little code as possible that still produces the
  same problem

- complete – provide all parts needed to reproduce the problem. Check
  if you can strip external dependency and still show the problem. The
  less time we spend on reproducing problems the more time we have to
  fix it

- verifiable – test the code you're about to provide to make sure it
  reproduces the problem. Remove all other problems that are not
  related to your request/question.