
Commit 9303ce2

Author: TRTorch Github Bot (committed)
docs: [Automated] Regenerating documentation from
Signed-off-by: TRTorch Github Bot <[email protected]>
1 parent 4d8b299 commit 9303ce2

File tree

7 files changed (+63, -46 lines)


docs/_notebooks/Resnet50-example.html

Lines changed: 1 addition & 1 deletion
@@ -675,7 +675,7 @@
 </div>
 </div>
 <p>
-<img alt="7ff72b2fb2f443dbaedf0e71b12aa769" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="89174e080c4b4d87aa6bb35bece7e6f2" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
 </p>
 <h1 id="notebooks-resnet50-example--page-root">
 TRTorch Getting Started - ResNet 50

docs/_notebooks/lenet-getting-started.html

Lines changed: 1 addition & 1 deletion
@@ -769,7 +769,7 @@
 </div>
 </div>
 <p>
-<img alt="61ea82a269d94750a349171805a268b6" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="bc357ab3290b4127af16e398763e49ed" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
 </p>
 <h1 id="notebooks-lenet-getting-started--page-root">
 TRTorch Getting Started - LeNet

docs/_notebooks/ssd-object-detection-demo.html

Lines changed: 1 addition & 1 deletion
@@ -789,7 +789,7 @@
 </div>
 </div>
 <p>
-<img alt="61fc82d7549042dd92c38dcd3c8986ee" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="403c4e31d922412bbcb15356bf72367f" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
 </p>
 <h1 id="notebooks-ssd-object-detection-demo--page-root">
 Object Detection with TRTorch (SSD)

docs/_sources/tutorials/use_from_pytorch.rst.txt

Lines changed: 24 additions & 19 deletions
@@ -14,7 +14,7 @@ Start by loading ``trtorch`` into your application.
     import trtorch


-Then given a TorchScript module, you can lower it to TensorRT using the ``torch._C._jit_to_tensorrt`` API.
+Then given a TorchScript module, you can compile it with TensorRT using the ``torch._C._jit_to_backend("tensorrt", ...)`` API.

 .. code-block:: python
@@ -32,31 +32,36 @@ at the documentation for the TRTorch ``TensorRTCompileSpec`` API.
 .. code-block:: python

     spec = {
-        "forward": trtorch.TensorRTCompileSpec({
-            "input_shapes": [[1, 3, 300, 300]],
-            "op_precision": torch.half,
-            "refit": False,
-            "debug": False,
-            "strict_types": False,
-            "allow_gpu_fallback": True,
-            "device_type": "gpu",
-            "capability": trtorch.EngineCapability.default,
-            "num_min_timing_iters": 2,
-            "num_avg_timing_iters": 1,
-            "max_batch_size": 0,
-        })
-    }
-
-Now to compile with TRTorch, provide the target module objects and the spec dictionary to ``torch._C._jit_to_tensorrt``
+        "forward":
+            trtorch.TensorRTCompileSpec({
+                "input_shapes": [[1, 3, 300, 300]],
+                "op_precision": torch.half,
+                "refit": False,
+                "debug": False,
+                "strict_types": False,
+                "device": {
+                    "device_type": trtorch.DeviceType.GPU,
+                    "gpu_id": 0,
+                    "dla_core": 0,
+                    "allow_gpu_fallback": True
+                },
+                "capability": trtorch.EngineCapability.default,
+                "num_min_timing_iters": 2,
+                "num_avg_timing_iters": 1,
+                "max_batch_size": 0,
+            })
+    }
+
+Now to compile with TRTorch, provide the target module objects and the spec dictionary to ``torch._C._jit_to_backend("tensorrt", ...)``

 .. code-block:: python

-    trt_model = torch._C._jit_to_tensorrt(script_model._c, spec)
+    trt_model = torch._C._jit_to_backend("tensorrt", script_model, spec)

 To run explicitly call the function of the method you want to run (vs. how you can just call on the module itself in standard PyTorch)

 .. code-block:: python

-    input = torch.randn((1, 3, 300, 300).to("cuda").to(torch.half)
+    input = torch.randn((1, 3, 300, 300)).to("cuda").to(torch.half)
     print(trt_model.forward(input))

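The substantive change in this tutorial diff is that the flat `device_type` and `allow_gpu_fallback` fields of the compile spec moved into a nested `"device"` dictionary. A rough sketch of the two layouts, using plain dictionaries only (string placeholders stand in for `trtorch` enums and `torch.half`, since building the real spec requires trtorch and PyTorch installed; the `migrate_old_spec` helper is hypothetical, written just to illustrate the restructuring):

```python
# New-style compile-spec layout per this diff (plain dict sketch).
# Placeholders: "DeviceType.GPU" stands for trtorch.DeviceType.GPU,
# "torch.half" for the torch dtype, "EngineCapability.default" for
# trtorch.EngineCapability.default. The real spec is passed through
# trtorch.TensorRTCompileSpec({...}) under the "forward" key.
new_style_spec = {
    "input_shapes": [[1, 3, 300, 300]],
    "op_precision": "torch.half",
    "refit": False,
    "debug": False,
    "strict_types": False,
    "device": {                           # device settings are now nested
        "device_type": "DeviceType.GPU",
        "gpu_id": 0,
        "dla_core": 0,
        "allow_gpu_fallback": True,
    },
    "capability": "EngineCapability.default",
    "num_min_timing_iters": 2,
    "num_avg_timing_iters": 1,
    "max_batch_size": 0,
}

def migrate_old_spec(old: dict) -> dict:
    """Hypothetical helper: lift the old flat device fields
    ("device_type", "allow_gpu_fallback") into the nested
    "device" dict introduced by this docs change."""
    new = {k: v for k, v in old.items()
           if k not in ("device_type", "allow_gpu_fallback")}
    new["device"] = {
        "device_type": old.get("device_type", "DeviceType.GPU"),
        "gpu_id": 0,      # defaults assumed for illustration
        "dla_core": 0,
        "allow_gpu_fallback": old.get("allow_gpu_fallback", False),
    }
    return new

old_style = {"input_shapes": [[1, 3, 300, 300]],
             "device_type": "gpu",
             "allow_gpu_fallback": True}
migrated = migrate_old_spec(old_style)
print(sorted(migrated["device"]))
```

The nesting mirrors the rendered `docs/py_api/trtorch.html` example below, where `gpu_id` and `dla_core` were added alongside the relocated fields.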
docs/py_api/trtorch.html

Lines changed: 5 additions & 4 deletions
@@ -969,6 +969,7 @@ <h2 id="functions">
 <span class="sig-paren">
 )
 </span>
+→ &lt;torch._C.ScriptClass object at 0x7fc804e6c928&gt;
 <a class="headerlink" href="#trtorch.TensorRTCompileSpec" title="Permalink to this definition">
 </a>
@@ -1016,10 +1017,10 @@ <h2 id="functions">
 <span class="p">}</span> <span class="c1"># Dynamic input shape for input #2</span>
 <span class="p">],</span>
 <span class="s2">"device"</span><span class="p">:</span> <span class="p">{</span>
-<span class="s2">"device_type"</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">device</span><span class="p">(</span><span class="s2">"cuda"</span><span class="p">),</span> <span class="c1"># Type of device to run engine on (for DLA use trtorch.DeviceType.DLA)</span>
-<span class="s2">"gpu_id"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span> <span class="c1"># Target gpu id to run engine (Use Xavier as gpu id for DLA)</span>
-<span class="s2">"dla_core"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span> <span class="c1"># (DLA only) Target dla core id to run engine</span>
-<span class="s2">"allow_gpu_fallback"</span><span class="p">:</span> <span class="n">false</span><span class="p">,</span> <span class="c1"># (DLA only) Allow layers unsupported on DLA to run on GPU</span>
+<span class="s2">"device_type"</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">device</span><span class="p">(</span><span class="s2">"cuda"</span><span class="p">),</span> <span class="c1"># Type of device to run engine on (for DLA use trtorch.DeviceType.DLA)</span>
+<span class="s2">"gpu_id"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span> <span class="c1"># Target gpu id to run engine (Use Xavier as gpu id for DLA)</span>
+<span class="s2">"dla_core"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span> <span class="c1"># (DLA only) Target dla core id to run engine</span>
+<span class="s2">"allow_gpu_fallback"</span><span class="p">:</span> <span class="n">false</span><span class="p">,</span> <span class="c1"># (DLA only) Allow layers unsupported on DLA to run on GPU</span>
 <span class="p">},</span>
 <span class="s2">"op_precision"</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">half</span><span class="p">,</span> <span class="c1"># Operating precision set to FP16</span>
 <span class="s2">"refit"</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span> <span class="c1"># enable refit</span>

docs/searchindex.js

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

docs/tutorials/use_from_pytorch.html

Lines changed: 30 additions & 19 deletions
@@ -394,10 +394,13 @@ <h1 id="tutorials-use-from-pytorch--page-root">
 </div>
 </div>
 <p>
-Then given a TorchScript module, you can lower it to TensorRT using the
+Then given a TorchScript module, you can compile it with TensorRT using the
 <code class="docutils literal notranslate">
 <span class="pre">
-torch._C._jit_to_tensorrt
+torch._C._jit_to_backend("tensorrt",
+</span>
+<span class="pre">
+...)
 </span>
 </code>
 API.
@@ -451,34 +454,42 @@ <h1 id="tutorials-use-from-pytorch--page-root">
 <div class="highlight-python notranslate">
 <div class="highlight">
 <pre><span></span><span class="n">spec</span> <span class="o">=</span> <span class="p">{</span>
-<span class="s2">"forward"</span><span class="p">:</span> <span class="n">trtorch</span><span class="o">.</span><span class="n">TensorRTCompileSpec</span><span class="p">({</span>
-<span class="s2">"input_shapes"</span><span class="p">:</span> <span class="p">[[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">300</span><span class="p">,</span> <span class="mi">300</span><span class="p">]],</span>
-<span class="s2">"op_precision"</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">half</span><span class="p">,</span>
-<span class="s2">"refit"</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span>
-<span class="s2">"debug"</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span>
-<span class="s2">"strict_types"</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span>
-<span class="s2">"allow_gpu_fallback"</span><span class="p">:</span> <span class="kc">True</span><span class="p">,</span>
-<span class="s2">"device_type"</span><span class="p">:</span> <span class="s2">"gpu"</span><span class="p">,</span>
-<span class="s2">"capability"</span><span class="p">:</span> <span class="n">trtorch</span><span class="o">.</span><span class="n">EngineCapability</span><span class="o">.</span><span class="n">default</span><span class="p">,</span>
-<span class="s2">"num_min_timing_iters"</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
-<span class="s2">"num_avg_timing_iters"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
-<span class="s2">"max_batch_size"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span>
-<span class="p">})</span>
-<span class="p">}</span>
+<span class="s2">"forward"</span><span class="p">:</span>
+<span class="n">trtorch</span><span class="o">.</span><span class="n">TensorRTCompileSpec</span><span class="p">({</span>
+<span class="s2">"input_shapes"</span><span class="p">:</span> <span class="p">[[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">300</span><span class="p">,</span> <span class="mi">300</span><span class="p">]],</span>
+<span class="s2">"op_precision"</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">half</span><span class="p">,</span>
+<span class="s2">"refit"</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span>
+<span class="s2">"debug"</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span>
+<span class="s2">"strict_types"</span><span class="p">:</span> <span class="kc">False</span><span class="p">,</span>
+<span class="s2">"device"</span><span class="p">:</span> <span class="p">{</span>
+<span class="s2">"device_type"</span><span class="p">:</span> <span class="n">trtorch</span><span class="o">.</span><span class="n">DeviceType</span><span class="o">.</span><span class="n">GPU</span><span class="p">,</span>
+<span class="s2">"gpu_id"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span>
+<span class="s2">"dla_core"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span>
+<span class="s2">"allow_gpu_fallback"</span><span class="p">:</span> <span class="kc">True</span>
+<span class="p">},</span>
+<span class="s2">"capability"</span><span class="p">:</span> <span class="n">trtorch</span><span class="o">.</span><span class="n">EngineCapability</span><span class="o">.</span><span class="n">default</span><span class="p">,</span>
+<span class="s2">"num_min_timing_iters"</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
+<span class="s2">"num_avg_timing_iters"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
+<span class="s2">"max_batch_size"</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span>
+<span class="p">})</span>
+<span class="p">}</span>
 </pre>
 </div>
 </div>
 <p>
 Now to compile with TRTorch, provide the target module objects and the spec dictionary to
 <code class="docutils literal notranslate">
 <span class="pre">
-torch._C._jit_to_tensorrt
+torch._C._jit_to_backend("tensorrt",
+</span>
+<span class="pre">
+...)
 </span>
 </code>
 </p>
 <div class="highlight-python notranslate">
 <div class="highlight">
-<pre><span></span><span class="n">trt_model</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">_C</span><span class="o">.</span><span class="n">_jit_to_tensorrt</span><span class="p">(</span><span class="n">script_model</span><span class="o">.</span><span class="n">_c</span><span class="p">,</span> <span class="n">spec</span><span class="p">)</span>
+<pre><span></span><span class="n">trt_model</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">_C</span><span class="o">.</span><span class="n">_jit_to_backend</span><span class="p">(</span><span class="s2">"tensorrt"</span><span class="p">,</span> <span class="n">script_model</span><span class="p">,</span> <span class="n">spec</span><span class="p">)</span>
 </pre>
 </div>
 </div>
@@ -487,7 +498,7 @@ <h1 id="tutorials-use-from-pytorch--page-root">
 </p>
 <div class="highlight-python notranslate">
 <div class="highlight">
-<pre><span></span><span class="nb">input</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">300</span><span class="p">,</span> <span class="mi">300</span><span class="p">)</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="s2">"cuda"</span><span class="p">)</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">half</span><span class="p">)</span>
+<pre><span></span><span class="nb">input</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">300</span><span class="p">,</span> <span class="mi">300</span><span class="p">))</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="s2">"cuda"</span><span class="p">)</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">half</span><span class="p">)</span>
 <span class="nb">print</span><span class="p">(</span><span class="n">trt_model</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="nb">input</span><span class="p">))</span>
 </pre>
 </div>
