## Building a docker container for Torch-TensorRT
We provide a `Dockerfile` in the `docker/` directory. It expects a PyTorch NGC container as a base, but it can easily be modified to build on top of any container that provides PyTorch, CUDA, cuDNN, and TensorRT. The dependency libraries in the container can be found in the <a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html">release notes</a>.
Please follow these instructions to build the Docker container:

```
docker build --build-arg BASE=<CONTAINER VERSION e.g. 21.11> -f docker/Dockerfile -t torch_tensorrt:latest .
```
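Once the image is built, it can be launched like any other NGC-derived container. The sketch below assumes the `torch_tensorrt:latest` tag from the command above and that the NVIDIA Container Toolkit is installed on the host:

```shell
# Start an interactive shell in the freshly built image with GPU access.
# --gpus all requires the NVIDIA Container Toolkit on the host.
docker run --gpus all -it --rm torch_tensorrt:latest /bin/bash
```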
In the case of building on top of a custom base container, you must first determine the version of the PyTorch C++ ABI. If your source of PyTorch is pytorch.org, it is likely the pre-cxx11-abi, in which case you must modify `//docker/dist-build.sh` to not build the cxx11-abi version of Torch-TensorRT.
If you would like to build outside a docker container, please follow the section [Compiling Torch-TensorRT](#compiling-torch-tensorrt).
| Linux aarch64 / DLA | **Native Compilation Supported on JetPack-4.4+** |
| Windows / GPU | **Unofficial Support** |
| Linux ppc64le / GPU | - |
| NGC Containers | **Included in PyTorch NGC Containers 21.11+** |

> Torch-TensorRT will be included in NVIDIA NGC containers (https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) starting in 21.11.
> Note: Refer to the NVIDIA NGC container (https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch) for PyTorch libraries on JetPack.
### Dependencies
These are the dependencies used to verify the test cases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass.
Torch-TensorRT is built with Bazel, so begin by installing it.
* The easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk
* Otherwise you can use the following instructions to install binaries: https://docs.bazel.build/versions/master/install.html
* Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for the architecture) you can use these instructions
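As one concrete route (a sketch assuming a Linux x86_64 host; adjust the binary name for other platforms), bazelisk can be dropped onto the `PATH` as `bazel`:

```shell
# Download the bazelisk release binary and install it as `bazel`.
# bazelisk then fetches the appropriate Bazel version on first use.
curl -fsSL -o /usr/local/bin/bazel \
    https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
chmod +x /usr/local/bin/bazel
bazel version
```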
The correct LibTorch version will be pulled down for you by bazel.
NOTE: For best compatibility with official PyTorch, use torch==1.10.0+cuda113, TensorRT 8.0, and cuDNN 8.2 for CUDA 11.3. However, Torch-TensorRT itself supports TensorRT and cuDNN for other CUDA versions, for use cases such as using NVIDIA-compiled distributions of PyTorch that use other versions of CUDA, e.g. aarch64 or custom compiled versions of PyTorch.
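For reference, a pinned install matching those tested versions might look like the following. This is a sketch; the wheel index URL is the standard PyTorch one for CUDA 11.3 builds and should be checked against pytorch.org for your setup:

```shell
# Install the CUDA 11.3 build of torch 1.10.0 that the tests were verified against.
pip3 install torch==1.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```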
.. _abis:

Choosing the Right ABI
^^^^^^^^^^^^^^^^^^^^^^^^

Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two options, which are incompatible with each other: the pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while the most popular distribution of PyTorch (wheels downloaded from pytorch.org/PyPI directly) uses the pre-cxx11-abi, most other distributions you might encounter (e.g. ones from NVIDIA - NGC containers and builds for Jetson, as well as certain libtorch builds, and likely PyTorch built from source) use the cxx11-abi. It is important that you compile Torch-TensorRT using the correct ABI for it to function properly. Below is a table with general pairings of PyTorch distribution sources and the recommended ABI.
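If you are unsure which ABI a given PyTorch installation uses, PyTorch exposes a helper for exactly this: `torch.compiled_with_cxx11_abi()` returns `True` for cxx11-abi builds and `False` for pre-cxx11-abi builds:

```shell
# Prints True for a cxx11-abi build of PyTorch, False for pre-cxx11-abi
# (e.g. the wheels from pytorch.org/PyPI typically print False).
python3 -c "import torch; print(torch.compiled_with_cxx11_abi())"
```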
NOTE: For all of the above cases you must correctly declare the source of PyTorch you intend to use in your WORKSPACE file, for both Python and C++ builds. See below for more information.
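As an illustration, the PyTorch declaration in the WORKSPACE file is typically a pair of `http_archive` entries, one per ABI. The sketch below is hypothetical: the exact rule names, build files, and URLs must match your Torch-TensorRT checkout and your chosen PyTorch distribution.

```python
# WORKSPACE fragment (Starlark) -- hypothetical sketch, not verbatim from the repo.
http_archive(
    name = "libtorch",  # cxx11-abi libtorch build
    build_file = "@//third_party/libtorch:BUILD",
    strip_prefix = "libtorch",
    urls = ["https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-1.10.0%2Bcu113.zip"],
)

http_archive(
    name = "libtorch_pre_cxx11_abi",  # pre-cxx11-abi libtorch build
    build_file = "@//third_party/libtorch:BUILD",
    strip_prefix = "libtorch",
    urls = ["https://download.pytorch.org/libtorch/cu113/libtorch-shared-with-deps-1.10.0%2Bcu113.zip"],
)
```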