Commit 093a5ca

docs: General edits and reorganization, render notebooks in the documentation

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

1 parent aa14cf4 commit 093a5ca

File tree: 9 files changed (+148, -42 lines changed)


docsrc/Makefile

Lines changed: 3 additions & 0 deletions
@@ -31,11 +31,14 @@ ifndef VERSION
 	rm -rf /tmp/trtorch_docs
 endif
 	rm -rf $(SOURCEDIR)/_cpp_api
+	rm -rf $(SOURCEDIR)/_notebooks
 	rm -rf $(SOURCEDIR)/_py_api
 	rm -rf $(SOURCEDIR)/_build
 	rm -rf $(SOURCEDIR)/_tmp
 
 html:
+	mkdir -p $(SOURCEDIR)/_notebooks
+	cp -r $(SOURCEDIR)/../notebooks/*.ipynb $(SOURCEDIR)/_notebooks
 	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
 	mkdir -p $(DESTDIR)
 	cp -r $(BUILDDIR)/html/* $(DESTDIR)

docsrc/README.md

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 # Building the Documentation
 
-We use Sphinx and Doxygen for documentation, so begin by installing the dependencies:
+We use Sphinx, Doxygen and pandoc for documentation, so begin by installing the dependencies:
 
 ```
-apt install doxygen
+apt install doxygen pandoc
 ```
 
 ```

docsrc/conf.py

Lines changed: 2 additions & 1 deletion
@@ -33,6 +33,7 @@
 extensions = [
     'breathe',
     'exhale',
+    'nbsphinx',
     'sphinx.ext.napoleon',
     'sphinx.ext.intersphinx',
     'sphinx.ext.autosummary',
@@ -115,7 +116,7 @@
     'repo_name': 'TRTorch',
 
     # Visible levels of the global TOC; -1 means unlimited
-    'globaltoc_depth': 2,
+    'globaltoc_depth': 1,
     # If False, expand all TOC entries
     'globaltoc_collapse': False,
     # If True, show hidden TOC entries
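For orientation, here is a minimal sketch of how the touched part of ``docsrc/conf.py`` reads with nbsphinx enabled. The ``nbsphinx_execute`` and ``exclude_patterns`` values are illustrative nbsphinx conventions, not settings made by this commit:

```python
# docsrc/conf.py (sketch) -- notebook rendering via nbsphinx
extensions = [
    'breathe',
    'exhale',
    'nbsphinx',  # renders the .ipynb files the Makefile copies into _notebooks/
    'sphinx.ext.napoleon',
    'sphinx.ext.intersphinx',
    'sphinx.ext.autosummary',
]

# Illustrative, not from this commit: serve the outputs already stored in the
# notebooks instead of re-executing them at docs build time, and ignore
# checkpoint copies that Jupyter leaves behind.
nbsphinx_execute = 'never'
exclude_patterns = ['**.ipynb_checkpoints']
```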

docsrc/contributors/system_overview.rst

Lines changed: 4 additions & 0 deletions
@@ -12,9 +12,13 @@ The repository is structured into:
 * core: Main compiler source code
 * cpp: C++ API
 * tests: tests of the C++ API, the core and converters
+* py: Python API
+* notebooks: Example applications built with TRTorch
 * docs: Documentation
 * docsrc: Documentation Source
 * third_party: BUILD files for dependency libraries
+* toolchains: Toolchains for different platforms
+
 
 The C++ API is unstable and subject to change until the library matures, though most work is done under the hood in the core.
 

docsrc/index.rst

Lines changed: 13 additions & 0 deletions
@@ -36,6 +36,19 @@ Getting Started
    tutorials/getting_started
    tutorials/ptq
    tutorials/trtorchc
+   _notebooks/lenet
+
+Notebooks
+------------
+* :ref:`lenet`
+
+.. toctree::
+   :caption: Notebooks
+   :maxdepth: 1
+   :hidden:
+
+   _notebooks/lenet-getting-started
+
 
 Python API Documenation
 ------------------------

docsrc/requirements.txt

Lines changed: 4 additions & 3 deletions
@@ -1,5 +1,6 @@
-sphinx>=2.0
-breathe>=4.13.0
+sphinx==3.1.2
+breathe==4.19.2
 exhale
 sphinx_rtd_theme==0.4.3
-sphinx-material>=0.0.29
+sphinx-material==0.0.30
+nbsphinx==0.7.1

docsrc/tutorials/getting_started.rst

Lines changed: 65 additions & 11 deletions
@@ -5,15 +5,18 @@ Getting Started
 
 If you haven't already, aquire a tarball of the library by following the instructions in :ref:`Installation`
 
+Background
+*********************
+
 .. _creating_a_ts_mod:
 Creating a TorchScript Module
 ------------------------------
 
 Once you have a trained model you want to compile with TRTorch, you need to start by converting that model from Python code to TorchScript code.
 PyTorch has detailed documentation on how to do this https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html but briefly here is the
-here is key background and the process:
+key background information and the process:
 
-PyTorch programs are based around `Module`s which can be used to compose higher level modules. Modules contain a constructor to set up the modules, parameters and sub-modules
+PyTorch programs are based around ``Module`` s which can be used to compose higher level modules. ``Modules`` contain a constructor to set up the modules, parameters and sub-modules
 and a forward function which describes how to use the parameters and submodules when the module is invoked.
 
 For example, we can define a LeNet module like this:
@@ -130,12 +133,62 @@ TorchScript Modules are run the same way you run normal PyTorch modules. You can…
 ``forward`` method or just calling the module ``torch_scirpt_module(in_tensor)`` The JIT compiler will compile
 and optimize the module on the fly and then returns the results.
 
+Saving TorchScript Module to Disk
+-----------------------------------
+
+For either traced or scripted modules, you can save the module to disk with the following command
+
+.. code-block:: python
+
+    import torch.jit
+
+    model = LeNet()
+    script_model = torch.jit.script(model)
+    script_model.save("lenet_scripted.ts")
+
+Using TRTorch
+*********************
+
+Now that there is some understanding of TorchScript and how to use it, we can now complete the pipeline and compile
+our TorchScript into TensorRT accelerated TorchScript. Unlike the PyTorch JIT compiler, TRTorch is an Ahead-of-Time
+(AOT) compiler. This means that unlike with PyTorch where the JIT compiler compiles from the high level PyTorch IR
+to kernel implementation at runtime, modules that are to be compiled with TRTorch are compiled fully before runtime
+(consider how you use a C compiler for an analogy). TRTorch has 3 main interfaces for using the compiler. You can
+use a CLI application similar to how you may use GCC called ``trtorchc``, or you can embed the compiler in a model
+freezing application / pipeline.
+
+.. _trtorch_quickstart:
+
+[TRTorch Quickstart] Compiling TorchScript Modules with ``trtorchc``
+---------------------------------------------------------------------
+
+An easy way to get started with TRTorch and to check if your model can be supported without extra work is to run it through
+``trtorchc``, which supports almost all features of the compiler from the command line including post training quantization
+(given a previously created calibration cache). For example we can compile our lenet model by setting our preferred operating
+precision and input size. This new TorchScript file can be loaded into Python (note: you need to ``import trtorch`` before loading
+these compiled modules because the compiler extends the PyTorch deserializer and runtime to execute compiled modules).
+
+.. code-block:: shell
+
+    ❯ trtorchc -p f16 lenet_scripted.ts trt_lenet_scripted.ts "(1,1,32,32)"
+
+    ❯ python3
+    Python 3.6.9 (default, Apr 18 2020, 01:56:04)
+    [GCC 8.4.0] on linux
+    Type "help", "copyright", "credits" or "license" for more information.
+    >>> import torch
+    >>> import trtorch
+    >>> ts_model = torch.jit.load("trt_lenet_scripted.ts")
+    >>> ts_model(torch.randn((1,1,32,32)).to("cuda").half())
+
+You can learn more about ``trtorchc`` usage here: :ref:`trtorchc`
+
 .. _compile_py:
 
 Compiling with TRTorch in Python
 ---------------------------------
 
-To compile your TorchScript module with TRTorch, all you need to do is provide the module and some compiler settings
+To compile your TorchScript module with TRTorch embedded into Python, all you need to do is provide the module and some compiler settings
 to TRTorch and you will be returned an optimized TorchScript module to run or add into another PyTorch module. The
 only required setting is the input size or input range which is defined as a list of either list types like ``lists``, ``tuples``
 or PyTorch ``size`` objects or dictionaries of minimum, optimial and maximum sizes. You can also specify settings such as
@@ -386,14 +439,15 @@ Here is the graph that you get back after compilation is complete:
 
 .. code-block:: none
 
-    graph(%self.1 : __torch__.___torch_mangle_10.LeNet_trt,
-        %2 : Tensor):
-        %1 : int = prim::Constant[value=94106001690080]()
-        %3 : Tensor = trt::execute_engine(%1, %2)
-        return (%3)
-        (AddEngineToGraph)
+    graph(%self_1 : __torch__.lenet, %input_0 : Tensor):
+        %1 : ...trt.Engine = prim::GetAttr[name="lenet"](%self_1)
+        %3 : Tensor[] = prim::ListConstruct(%input_0)
+        %4 : Tensor[] = trt::execute_engine(%3, %1)
+        %5 : Tensor = prim::ListUnpack(%4)
+        return (%5)
+
 
-You can see the call where the engine is executed, based on a constant which is the ID of the engine, telling JIT how to find the engine and the input tensor which will be fed to TensorRT.
+You can see the call where the engine is executed: after extracting the attribute containing the engine and constructing a list of inputs, the engine is invoked and the output tensors are returned to the user.
 
 .. _unsupported_ops:
 
@@ -404,7 +458,7 @@ TRTorch is a new library and the PyTorch operator library is quite large, so the…
 shown above to make modules are fully TRTorch supported and ones that are not and stitch the modules together in the deployment application or you can register converters for missing ops.
 
 You can check support without going through the full compilation pipleine using the ``trtorch::CheckMethodOperatorSupport(const torch::jit::Module& module, std::string method_name)`` api
-to see what operators are not supported.
+to see what operators are not supported. ``trtorchc`` automatically checks modules with this method before starting compilation and will print out a list of operators that are not supported.
 
 .. _custom_converters:
 
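To make the embedded-in-Python path above concrete, here is a minimal sketch of compiling the scripted LeNet with the TRTorch Python API; the ``input_shapes``/``op_precision`` keys follow the compile spec described in this tutorial, but treat the exact values as illustrative:

```python
import torch
import trtorch

# Assumes lenet_scripted.ts was saved as shown earlier in the tutorial
script_model = torch.jit.load("lenet_scripted.ts").eval().cuda()

compile_settings = {
    # Required: input size(s); a fixed shape here, though dicts of
    # min/opt/max sizes are also accepted for input ranges
    "input_shapes": [(1, 1, 32, 32)],
    "op_precision": torch.half,  # FP16, matching `trtorchc -p f16`
}

trt_ts_module = trtorch.compile(script_model, compile_settings)
out = trt_ts_module(torch.randn((1, 1, 32, 32)).to("cuda").half())

# The result is a standard TorchScript module, so it saves like one
trt_ts_module.save("trt_lenet_scripted.ts")
```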

docsrc/tutorials/installation.rst

Lines changed: 53 additions & 23 deletions
@@ -53,21 +53,20 @@ Dependencies for Compilation
 
 TRTorch is built with Bazel, so begin by installing it.
 
-The easiest way is to install bazelisk using the method of you choosing https://github.com/bazelbuild/bazelisk
-
-Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html
-
-Finally if you need to compile from source (e.g. aarch64 until bazel distributes binaries for the architecture) you can use these instructions
-
-```sh
-export BAZEL_VERSION=3.3.1
-mkdir bazel
-cd bazel
-curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
-unzip bazel-$BAZEL_VERSION-dist.zip
-bash ./compile.sh
-cp output/bazel /usr/local/bin/
-```
+* The easiest way is to install bazelisk using the method of your choosing https://github.com/bazelbuild/bazelisk
+* Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html
+* Finally if you need to compile from source (e.g. aarch64 until bazel distributes binaries for the architecture) you can use these instructions
+
+.. code-block:: shell
+
+    export BAZEL_VERSION=$(cat <PATH_TO_TRTORCH_ROOT>/.bazelversion)
+    mkdir bazel
+    cd bazel
+    curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
+    unzip bazel-$BAZEL_VERSION-dist.zip
+    bash ./compile.sh
+    cp output/bazel /usr/local/bin/
+
 
 You will also need to have CUDA installed on the system (or if running in a container, the system must have
 the CUDA driver installed and the container must have CUDA)
@@ -231,12 +230,25 @@ Debug Build
 
 This also compiles a debug build of ``libtrtorch.so``
 
-Building natively on aarch64 platform
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+**Building Natively on aarch64 (Jetson)**
+-------------------------------------------
+
+Prerequisites
+^^^^^^^^^^^^^^
+
+Install or compile a build of PyTorch/LibTorch for aarch64
+
+NVIDIA hosts builds the latest release branch for Jetson here:
 
-To build natively on aarch64-linux-gnu platform, configure the WORKSPACE with local available dependencies.
+https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available/72048
 
-1. Disable the rules with http_archive for x86_64 platform by commenting rules as:
+
+Environment Setup
+^^^^^^^^^^^^^^^^^
+
+To build natively on aarch64-linux-gnu platform, configure the ``WORKSPACE`` with local available dependencies.
+
+1. Disable the rules with ``http_archive`` for x86_64 by commenting the following rules:
 
 .. code-block:: shell
 
@@ -277,7 +289,7 @@ To build natively on aarch64-linux-gnu platform, configure the WORKSPACE with local available dependencies.
 #)
 
 
-2. Disable the pip3_import rules as:
+2. Disable Python API testing dependencies:
 
 .. code-block:: shell
 
@@ -298,19 +310,27 @@ To build natively on aarch64-linux-gnu platform, configure the WORKSPACE with local available dependencies.
 #pip_install()
 
 
-3. Configuring the local available dependencies by using new_local_repository rules as:
+3. Configure the correct paths to directory roots containing local dependencies in the ``new_local_repository`` rules:
+
+NOTE: If you installed PyTorch using a pip package, the correct path is the path to the root of the python torch package.
+In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.6/dist-packages/torch``.
+In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.6/site-packages/torch``.
+
+In the case you are using NVIDIA compiled pip packages, set the path for both libtorch sources to the same path. This is because unlike
+PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11-ABI. If you compiled from source using the pre_cxx11_abi and only would like to
+use that library, set the paths to the same path but when you compile make sure to add the flag ``--config=pre_cxx11_abi``
 
 .. code-block:: shell
 
     new_local_repository(
         name = "libtorch",
-        path = "/usr/local/lib/python3.6/site-packages/torch",
+        path = "/usr/local/lib/python3.6/dist-packages/torch",
         build_file = "third_party/libtorch/BUILD"
     )
 
     new_local_repository(
         name = "libtorch_pre_cxx11_abi",
-        path = "/usr/local/lib/python3.6/site-packages/torch",
+        path = "/usr/local/lib/python3.6/dist-packages/torch",
         build_file = "third_party/libtorch/BUILD"
     )
 
@@ -326,12 +346,22 @@ To build natively on aarch64-linux-gnu platform, configure the WORKSPACE with local available dependencies.
     build_file = "@//third_party/tensorrt/local:BUILD"
 )
 
+Compile C++ Library and Compiler CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Compile TRTorch library using bazel command:
+
+.. code-block:: shell
 
-Note: If pip library for torch is installed using --user, configure the new_local_repository path for torch accordingly.
+    bazel build //:libtrtorch
 
+Compile Python API
+^^^^^^^^^^^^^^^^^^^^
 
-Compile TRTorch library using bazel command as:
+Compile the Python API using the following command from the ``//py`` directory:
 
 .. code-block:: shell
 
-bazel build //:libtrtorch
+    python3 setup.py install --use-cxx11-abi
+
+If you have a build of PyTorch that uses Pre-CXX11 ABI, drop the ``--use-cxx11-abi`` flag
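A quick way to find the right path for the ``new_local_repository`` rules above is to ask the installed package itself; a small illustrative snippet (assuming ``torch`` is importable in the environment you intend to build against):

```python
import os
import torch

# Directory root of the installed torch package; point the libtorch
# new_local_repository rules in the WORKSPACE at this path.
print(os.path.dirname(torch.__file__))
# e.g. /usr/local/lib/python3.6/dist-packages/torch   (sudo pip install)
#      $HOME/.local/lib/python3.6/site-packages/torch (pip install --user)
```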

docsrc/tutorials/trtorchc.rst

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ All that is required to run the program after compilation is for C++ linking aga…
 or in Python importing the trtorch package. All other aspects of using compiled modules are identical
 to standard TorchScript. Load with ``torch.jit.load()`` and run like you would run any other module.
 
-.. code-block:: txt
+.. code-block::txt
 
     trtorchc [input_file_path] [output_file_path]
    [input_shapes...] {OPTIONS}
@@ -86,6 +86,6 @@ to standard TorchScript. Load with ``torch.jit.load()`` and run like you would run any other module.
 
 e.g.
 
-.. code-block:: txt
+.. code-block:: shell
 
     trtorchc tests/modules/ssd_traced.jit.pt ssd_trt.ts "[(1,3,300,300); (1,3,512,512); (1, 3, 1024, 1024)]" -p f16
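As the page notes, a module produced by ``trtorchc`` loads and runs like any other TorchScript module once ``trtorch`` has been imported. A hedged sketch using the SSD example above (the input shape is the first entry of the compiled range, cast to FP16 to match ``-p f16``):

```python
import torch
import trtorch  # noqa: F401 -- registers the TensorRT engine runtime with PyTorch

# Load the trtorchc output; it behaves like any TorchScript module
trt_model = torch.jit.load("ssd_trt.ts")
inputs = torch.randn((1, 3, 300, 300)).to("cuda").half()
detections = trt_model(inputs)
```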
