docsrc/tutorials/getting_started.rst

Getting Started
===============

If you haven't already, acquire a tarball of the library by following the instructions in :ref:`Installation`.

Background
*********************
11
.. _creating_a_ts_mod:
9
12
Creating a TorchScript Module
10
13
------------------------------
11
14
12
15
Once you have a trained model you want to compile with TRTorch, you need to start by converting that model from Python code to TorchScript code.
PyTorch has detailed documentation on how to do this (https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html), but briefly, here is the key background information and the process:

PyTorch programs are based around ``Module`` s which can be used to compose higher level modules. ``Modules`` contain a constructor to set up the module's parameters and sub-modules
and a forward function which describes how to use the parameters and submodules when the module is invoked.

For example, we can define a LeNet module like this:
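
The sketch below is a minimal illustrative definition; the layer sizes are assumptions, and the guide's full example may differ.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LeNet(nn.Module):
        def __init__(self):
            super(LeNet, self).__init__()
            # The constructor registers parameters and sub-modules (layers)
            self.conv1 = nn.Conv2d(1, 6, 5)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)

        def forward(self, x):
            # The forward function describes how the sub-modules are used
            x = F.max_pool2d(F.relu(self.conv1(x)), 2)
            x = F.max_pool2d(F.relu(self.conv2(x)), 2)
            x = torch.flatten(x, 1)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            return self.fc3(x)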

TorchScript Modules are run the same way you run normal PyTorch modules. You can run the module using the
``forward`` method or by just calling the module, ``torch_script_module(in_tensor)``. The JIT compiler will compile
and optimize the module on the fly and then return the results.

Saving TorchScript Module to Disk
-----------------------------------

For either traced or scripted modules, you can save the module to disk with the following command:

.. code-block:: python

    import torch.jit

    model = LeNet()
    script_model = torch.jit.script(model)
    script_model.save("lenet_scripted.ts")
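
If you trace the module instead, the same ``save`` call applies; a short sketch, with an example input shape assumed for illustration:

.. code-block:: python

    import torch
    import torch.jit

    model = LeNet()
    # torch.jit.trace records the operations executed for an example input
    traced_model = torch.jit.trace(model, torch.randn(1, 1, 32, 32))
    traced_model.save("lenet_traced.ts")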

Using TRTorch
*********************

Now that there is some understanding of TorchScript and how to use it, we can complete the pipeline and compile
our TorchScript into TensorRT accelerated TorchScript. Unlike the PyTorch JIT compiler, TRTorch is an Ahead-of-Time
(AOT) compiler. This means that unlike with PyTorch, where the JIT compiler compiles from the high level PyTorch IR
to kernel implementations at runtime, modules that are to be compiled with TRTorch are compiled fully before runtime
(consider how you use a C compiler, for an analogy). TRTorch has 3 main interfaces for using the compiler: you can
use a CLI application called ``trtorchc``, similar to how you might use GCC, or you can embed the compiler in a model
freezing application / pipeline.
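
As a sketch of the embedded route, the Python API exposes a ``compile`` entry point; the spec keys below (``input_shapes``, ``op_precision``) and the exact call signature are assumptions to verify against your installed version:

.. code-block:: python

    import torch
    import trtorch

    script_model = torch.jit.load("lenet_scripted.ts")
    # Hypothetical compile spec; consult the trtorch docs for the keys
    # supported by your version
    trt_module = trtorch.compile(script_model, {
        "input_shapes": [(1, 1, 32, 32)],
        "op_precision": torch.float32,
    })
    trt_module.save("lenet_trt.ts")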

.. _trtorch_quickstart:

[TRTorch Quickstart] Compiling TorchScript Modules with ``trtorchc``
----------------------------------------------------------------------
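
An illustrative invocation is shown below; the positional argument order and the shape syntax are assumptions, so run ``trtorchc --help`` for the authoritative usage:

.. code-block:: sh

    # Compile a saved TorchScript module into a TensorRT-accelerated one.
    # The input shape argument is a hypothetical example.
    trtorchc lenet_scripted.ts lenet_trt.ts "(1,1,32,32)"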

Inspecting the compiled module's graph, you can see the call where the engine is executed: after extracting the attribute containing the engine and constructing a list of inputs, the engine is invoked and the resulting tensors are returned back to the user.
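
Once compiled, the module is used like any other TorchScript module; a minimal sketch, assuming the filename and input shape from the earlier examples:

.. code-block:: python

    import torch

    trt_module = torch.jit.load("lenet_trt.ts")
    # Inputs must live on the GPU since the embedded engine runs there
    result = trt_module(torch.randn(1, 1, 32, 32).to("cuda"))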
.. _unsupported_ops:

TRTorch is a new library and the PyTorch operator library is quite large, so there will be operators that are not yet supported. You can either use the composition techniques
shown above to make modules that are fully TRTorch supported and modules that are not, and stitch the modules together in the deployment application, or you can register converters for missing ops.

You can check support without going through the full compilation pipeline using the ``trtorch::CheckMethodOperatorSupport(const torch::jit::Module& module, std::string method_name)`` API
to see what operators are not supported. ``trtorchc`` automatically checks modules with this method before starting compilation and will print out a list of operators that are not supported.
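
A minimal C++ sketch of this check, assuming the headers shown and a ``bool`` return type (both assumptions consistent with the signature quoted above):

.. code-block:: c++

    #include <iostream>

    #include "torch/script.h"
    #include "trtorch/trtorch.h"

    int main() {
        // Load the scripted module saved earlier in this guide
        auto module = torch::jit::load("lenet_scripted.ts");

        // Reports whether every operator used by the method has a converter
        if (!trtorch::CheckMethodOperatorSupport(module, "forward")) {
            std::cerr << "forward contains unsupported operators" << std::endl;
            return 1;
        }
        // Safe to compile "forward" with TRTorch
        return 0;
    }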