Commit 3877988

revert: proto -> yaml markup

1 parent 8f27edf

File tree

1 file changed: README.md (9 additions, 9 deletions)
@@ -156,7 +156,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 The section of model config file specifying this parameter will look like:
 
-```yaml
+```proto
 parameters: {
 key: "DISABLE_OPTIMIZED_EXECUTION"
 value: { string_value: "true" }
@@ -175,7 +175,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To enable inference mode, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "INFERENCE_MODE"
 value: { string_value: "true" }
@@ -195,7 +195,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To disable cuDNN, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "DISABLE_CUDNN"
 value: { string_value: "true" }
@@ -210,7 +210,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To enable weight sharing, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "ENABLE_WEIGHT_SHARING"
 value: { string_value: "true" }
@@ -228,7 +228,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To enable cleaning of the CUDA cache after every execution, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "ENABLE_CACHE_CLEANING"
 value: { string_value: "true" }
@@ -251,7 +251,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To set the inter-op thread count, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "INTER_OP_THREAD_COUNT"
 value: { string_value: "1" }
@@ -272,7 +272,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To set the intra-op thread count, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "INTRA_OP_THREAD_COUNT"
 value: { string_value: "1" }
@@ -316,7 +316,7 @@ where the input tensors are placed as follows:
 
 To set the model instance group, use the configuration example below:
 
-```yaml
+```proto
 instance_group {
 count: 2
 kind: KIND_GPU
@@ -351,7 +351,7 @@ The following PyTorch settings may be customized by setting parameters on the
 
 For example:
 
-```yaml
+```proto
 parameters: {
 key: "NUM_THREADS"
 value: { string_value: "4" }
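Each hunk above shows only a truncated fragment of the model configuration, cut off by the diff's context window. For reference, several of these settings compose into a single `config.pbtxt` along the lines of the sketch below. Only the parameter keys, the `instance_group` fields, and the string values come from the diff; the model name and backend line are illustrative assumptions:

```proto
# Hypothetical config.pbtxt sketch. Keys and values are taken from the diff
# hunks above; "my_torchscript_model" and the backend line are assumptions.
name: "my_torchscript_model"
backend: "pytorch"

instance_group {
  count: 2
  kind: KIND_GPU
}

parameters: {
  key: "DISABLE_OPTIMIZED_EXECUTION"
  value: { string_value: "true" }
}
parameters: {
  key: "INFERENCE_MODE"
  value: { string_value: "true" }
}
parameters: {
  key: "INTER_OP_THREAD_COUNT"
  value: { string_value: "1" }
}
```

Note that each setting is a separate repeated `parameters` entry rather than multiple keys inside one block, which is why the README's fragments each open their own `parameters: {`.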

0 commit comments

Comments
 (0)