@@ -200,7 +200,7 @@ key: "ENABLE_CACHE_CLEANING"
 * `INTER_OP_THREAD_COUNT`:
 
 PyTorch allows using multiple CPU threads during TorchScript model inference.
-One or more inference threads execute a model’s forward pass on the given
+One or more inference threads execute a model's forward pass on the given
 inputs. Each inference thread invokes a JIT interpreter that executes the ops
 of a model inline, one by one. This parameter sets the size of this thread
 pool. The default value of this setting is the number of CPU cores. Please refer
@@ -218,6 +218,10 @@ key: "INTER_OP_THREAD_COUNT"
 }
 ```
 
+**NOTE**: This parameter is set globally for the PyTorch backend.
+The value from the first model config file that specifies this parameter will be used.
+Subsequent values from other model config files, if different, will be ignored.
+
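+This follows from PyTorch itself: the inter-op thread pool is process-wide
+and can be sized only once. A minimal Python sketch of that behavior, using
+the public `torch` API (illustrative only, not the backend's actual C++ code):
+
+```
+import torch
+
+# The inter-op thread pool is process-global and may be sized exactly
+# once, before any inter-op parallel work has started.
+torch.set_num_interop_threads(4)
+print(torch.get_num_interop_threads())  # 4
+
+# A second call raises a RuntimeError, which is why only the first
+# value applied in the process can take effect.
+try:
+    torch.set_num_interop_threads(8)
+except RuntimeError as err:
+    print(err)
+```
+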
 * `INTRA_OP_THREAD_COUNT`:
 
 In addition to the inter-op parallelism, PyTorch can also utilize multiple threads
@@ -238,6 +242,10 @@ key: "INTRA_OP_THREAD_COUNT"
 }
 ```
 
+**NOTE**: This parameter is set globally for the PyTorch backend.
+The value from the first model config file that specifies this parameter will be used.
+Subsequent values from other model config files, if different, will be ignored.
+
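+As with the inter-op setting, this maps onto a process-wide PyTorch knob:
+every model served by the backend shares a single intra-op pool. A hedged
+Python sketch of the underlying public API (not the backend's actual code):
+
+```
+import torch
+
+# Intra-op parallelism (parallelized kernels such as large matmuls)
+# draws on one process-global pool shared by every loaded model.
+torch.set_num_threads(2)
+print(torch.get_num_threads())  # 2
+
+# There is no per-model intra-op pool, so the backend applies a single
+# process-wide value: the first one it encounters in a model config.
+```
+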
 * Additional Optimizations: Three additional boolean parameters are available to disable
 certain Torch optimizations that can sometimes cause latency regressions in models with
 complex execution modes and dynamic shapes. If not specified, all are enabled by default.