docs/py_api/trtorch.html: 9 additions & 4 deletions
@@ -596,11 +596,11 @@ <h2 id="functions">
<span class="s2">"workspace_size"</span><span class="p">:</span><span class="mi">0</span><span class="p">,</span><span class="c1"># Maximum size of workspace given to TensorRT</span>
<span class="s2">"max_batch_size"</span><span class="p">:</span><span class="mi">0</span><span class="p">,</span><span class="c1"># Maximum batch size (must be >= 1 to be set, 0 means not set)</span>
+ <span class="s2">"enabled"</span><span class="p">:</span><span class="kc">True</span><span class="p">,</span><span class="c1"># Turn on or turn off falling back to PyTorch if operations are not supported in TensorRT</span>
+ <span class="s2">"min_block_size"</span><span class="p">:</span><span class="mi">3</span><span class="c1"># Minimum number of ops an engine must encapsulate to be run in TensorRT</span>
<span class="p">}</span>
<span class="p">}</span>
</pre>
@@ -1018,7 +1018,7 @@ <h2 id="functions">
<span class="sig-paren">
)
</span>
- → <torch._C.ScriptClass object at 0x7fabde990ef0>
+ → <torch._C.ScriptClass object at 0x7fdd093be6f0>
<a class="headerlink" href="#trtorch.TensorRTCompileSpec" title="Permalink to this definition">
¶
</a>
@@ -1052,6 +1052,11 @@ <h2 id="functions">
</code>
, describing the input sizes or ranges for inputs
to the graph. All other keys are optional. Entries for each method to be compiled.
+ </p>
+ <p>
+ Note: Partial compilation of TorchScript modules is not supported through the PyTorch TensorRT backend.
+ If you need this feature, use trtorch.compile to compile your module. Usage of the resulting module is
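
For context, the snippet below is a minimal sketch of how the torch_fallback options and the trtorch.compile path recommended in the note above might be used together. It is illustrative only: the "input_shapes" key, the example shape, and MyModel are assumptions for this sketch and are not taken from the diff; actual key names can differ between trtorch releases.

import torch
import trtorch

class MyModel(torch.nn.Module):  # hypothetical module, for illustration only
    def forward(self, x):
        return torch.relu(x)

scripted = torch.jit.script(MyModel().eval())

compile_spec = {
    "input_shapes": [(1, 3, 224, 224)],  # assumed key describing input sizes or ranges
    "workspace_size": 1 << 20,           # maximum workspace given to TensorRT
    "max_batch_size": 1,                 # must be >= 1 to be set; 0 means not set
    "torch_fallback": {
        "enabled": True,                 # fall back to PyTorch for unsupported ops
        "min_block_size": 3,             # minimum ops an engine must encapsulate
    },
}

# trtorch.compile supports falling back to PyTorch for unsupported subgraphs;
# per the note above, the PyTorch TensorRT backend (TensorRTCompileSpec) does not.
trt_module = trtorch.compile(scripted, compile_spec)

Consult the generated API page this diff modifies for the authoritative compile spec layout.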