ROS parameters for the Triton inference node:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
|`model_repository_paths`|`string list`|`['']`| The absolute paths to your model repositories on your local file system (the directory structure should follow Triton requirements) <br/> E.g. `['/tmp/models']`|
|`model_name`|`string`|`""`| The name of your model. Under `model_repository_paths`, there should be a directory with this name, and it should match the model name in the model configuration under that directory <br/> E.g. `peoplesemsegnet_shuffleseg`|
|`max_batch_size`|`uint16_t`|`8`| The maximum batch size allowed for the model. It should match the model configuration |
|`num_concurrent_requests`|`uint16_t`|`10`| The number of requests the Triton server can take at a time. This should be set according to the tensor publisher frequency |
|`input_tensor_names`|`string list`|`['input_tensor']`| A list of tensor names to be bound to the specified input binding names. Bindings occur in sequential order, so the first name here is mapped to the first name in `input_binding_names` |
|`input_binding_names`|`string list`|`['']`| A list of input tensor binding names specified by the model <br/> E.g. `['input_2:0']`|
|`input_tensor_formats`|`string list`|`['']`| A list of input tensor NITROS formats, given in sequential order <br/> E.g. `['nitros_tensor_list_nchw_rgb_f32']`|
|`output_tensor_names`|`string list`|`['output_tensor']`| A list of tensor names to be bound to the specified output binding names |
|`output_binding_names`|`string list`|`['']`| A list of output tensor binding names specified by the model <br/> E.g. `['argmax_1']`|
|`output_tensor_formats`|`string list`|`['']`| A list of output tensor NITROS formats, given in sequential order <br/> E.g. `['nitros_tensor_list_nchw_rgb_f32']`|
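Below is a minimal launch sketch showing how these parameters might be set. The package name `isaac_ros_triton`, plugin name `nvidia::isaac_ros::dnn_inference::TritonNode`, and the container setup are assumptions for illustration; the parameter values mirror the examples in the table above, so adjust them to your model.

```python
# Minimal launch sketch for the Triton inference node.
# Assumptions: package `isaac_ros_triton` and plugin
# `nvidia::isaac_ros::dnn_inference::TritonNode`; adjust to your installation.
import launch
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    triton_node = ComposableNode(
        name='triton_node',
        package='isaac_ros_triton',
        plugin='nvidia::isaac_ros::dnn_inference::TritonNode',
        parameters=[{
            # Model repository and model selection
            'model_repository_paths': ['/tmp/models'],
            'model_name': 'peoplesemsegnet_shuffleseg',
            'max_batch_size': 8,
            'num_concurrent_requests': 10,
            # Tensor-to-binding mapping (sequential order)
            'input_tensor_names': ['input_tensor'],
            'input_binding_names': ['input_2:0'],
            'input_tensor_formats': ['nitros_tensor_list_nchw_rgb_f32'],
            'output_tensor_names': ['output_tensor'],
            'output_binding_names': ['argmax_1'],
            'output_tensor_formats': ['nitros_tensor_list_nchw_rgb_f32'],
        }],
    )

    container = ComposableNodeContainer(
        name='triton_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[triton_node],
        output='screen',
    )

    return launch.LaunchDescription([container])
```

Note that Triton expects each repository in `model_repository_paths` (for example `/tmp/models`) to contain a subdirectory named after `model_name`, holding a `config.pbtxt` and numbered version folders such as `1/`.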
ROS parameters for the TensorRT inference node:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
|`model_file_path`|`string`|`model.onnx`| The absolute path to your model file on the local file system (the model file must be a `.onnx` file) <br/> E.g. `model.onnx`|
|`engine_file_path`|`string`|`/tmp/trt_engine.plan`| The absolute path to either where you want the TensorRT engine plan to be generated (from your model file) or where your pre-generated engine plan file is located <br/> E.g. `model.plan`|
|`force_engine_update`|`bool`|`true`| If set to `true`, the node always tries to generate a TensorRT engine plan from your model file. Set it to `false` to use a pre-generated TensorRT engine plan |
|`input_tensor_names`|`string list`|`['input_tensor']`| A list of tensor names to be bound to the specified input binding names. Bindings occur in sequential order, so the first name here is mapped to the first name in `input_binding_names` |
|`input_binding_names`|`string list`|`['']`| A list of input tensor binding names specified by the model <br/> E.g. `['input_2:0']`|
|`input_tensor_formats`|`string list`|`['']`| A list of input tensor NITROS formats, given in sequential order <br/> E.g. `['nitros_tensor_list_nchw_rgb_f32']`|
|`output_tensor_names`|`string list`|`['output_tensor']`| A list of tensor names to be bound to the specified output binding names |
|`output_binding_names`|`string list`|`['']`| A list of output tensor binding names specified by the model <br/> E.g. `['argmax_1']`|
|`output_tensor_formats`|`string list`|`['']`| A list of output tensor NITROS formats, given in sequential order <br/> E.g. `['nitros_tensor_list_nchw_rgb_f32']`|
|`verbose`|`bool`|`true`| If set to `true`, the node enables verbose logging to the console from the internal TensorRT execution |
|`max_workspace_size`|`int64_t`|`67108864l`| The size of the TensorRT workspace in bytes (default is 64 MiB) |
|`max_batch_size`|`int32_t`|`1`| The maximum possible batch size, in case the first dimension is dynamic and used as the batch size |
|`dla_core`|`int64_t`|`-1`| The DLA core to use. Fallback to GPU is always enabled. The default setting is GPU only |
|`enable_fp16`|`bool`|`true`| Enables building a TensorRT engine plan file that uses FP16 precision for inference. If this setting is `false`, the plan file uses FP32 precision |
|`relaxed_dimension_check`|`bool`|`true`| Ignores dimensions of 1 for the input tensor dimension check |
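A corresponding launch sketch for the TensorRT node follows. Again, the package name `isaac_ros_tensor_rt` and plugin name `nvidia::isaac_ros::dnn_inference::TensorRTNode` are assumptions for illustration, and setting `force_engine_update` to `false` assumes a pre-generated engine plan already exists at `engine_file_path`.

```python
# Minimal launch sketch for the TensorRT inference node.
# Assumptions: package `isaac_ros_tensor_rt` and plugin
# `nvidia::isaac_ros::dnn_inference::TensorRTNode`; adjust to your installation.
import launch
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    tensor_rt_node = ComposableNode(
        name='tensor_rt_node',
        package='isaac_ros_tensor_rt',
        plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
        parameters=[{
            # Model and engine plan locations
            'model_file_path': '/tmp/model.onnx',
            'engine_file_path': '/tmp/trt_engine.plan',
            'force_engine_update': False,  # reuse an existing engine plan
            # Tensor-to-binding mapping (sequential order)
            'input_tensor_names': ['input_tensor'],
            'input_binding_names': ['input_2:0'],
            'input_tensor_formats': ['nitros_tensor_list_nchw_rgb_f32'],
            'output_tensor_names': ['output_tensor'],
            'output_binding_names': ['argmax_1'],
            'output_tensor_formats': ['nitros_tensor_list_nchw_rgb_f32'],
            # Engine build options
            'verbose': False,
            'max_workspace_size': 67108864,  # 64 MiB
            'max_batch_size': 1,
            'dla_core': -1,  # GPU only
            'enable_fp16': True,
            'relaxed_dimension_check': True,
        }],
    )

    container = ComposableNodeContainer(
        name='tensor_rt_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[tensor_rt_node],
        output='screen',
    )

    return launch.LaunchDescription([container])
```

On the first run with `force_engine_update` left at its default of `true`, the node regenerates the engine plan from the ONNX file, which can take several minutes; switching it to `false` afterwards avoids repeating that work.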