# Building a Torch-TensorRT container

* Use `Dockerfile` to build a container which provides the exact development environment that our master branch is usually tested against.

* `Dockerfile` currently uses the exact library versions (Torch, CUDA, CUDNN, TensorRT) listed in <a href="https://github.com/pytorch/TensorRT#dependencies">dependencies</a> to build Torch-TensorRT.

* This `Dockerfile` installs the `pre-cxx11-abi` versions of Pytorch and builds Torch-TRT using the `pre-cxx11-abi` libtorch as well.
Note: To install the `cxx11_abi` version of Torch-TensorRT, enable the `USE_CXX11=1` flag so that `dist-build.sh` can build it accordingly, as sketched below.
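
A hypothetical invocation, assuming `dist-build.sh` reads `USE_CXX11` from the environment and lives under `docker/` (adjust the path to your checkout):
```
# Hypothetical: build the cxx11_abi variant of Torch-TensorRT,
# assuming dist-build.sh honors the USE_CXX11 environment variable
USE_CXX11=1 bash docker/dist-build.sh
```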

### Dependencies

* Install nvidia-docker by following https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker. You can verify the setup as shown below.
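
To confirm that the toolkit exposes GPUs to containers, a common sanity check (the CUDA base image tag here is only an example; use one that matches your driver):
```
# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```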

### Instructions

> From root of Torch-TensorRT repo

Build:
```
DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile -t torch_tensorrt:latest .
```

Run:
```
nvidia-docker run --gpus all -it \
  --shm-size=8gb --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  --name=torch_tensorrt --ipc=host --net=host torch_tensorrt:latest
```

Test:

You can run any converter test to verify that Torch-TRT was built successfully inside the container. Once you launch the container, you can run
```
bazel test //tests/core/conversion/converters:test_activation --compilation_mode=opt --test_output=summary --config use_precompiled_torchtrt --config pre_cxx11_abi
```

* `--config use_precompiled_torchtrt` : Tells bazel to use the pre-installed Torch-TRT library when testing an application.
* `--config pre_cxx11_abi` : Ensures `bazel test` uses the `pre_cxx11_abi` version of `libtorch`. Use the flag that matches the ABI format of your Torch-TensorRT installation.
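
You can also run a quick smoke test from Python; a minimal sketch, assuming the `torch_tensorrt` Python package is installed in the container and exposes `__version__`:
```
# Import the package and confirm the GPU is visible inside the container
python -c "import torch_tensorrt; print(torch_tensorrt.__version__)"
python -c "import torch; print(torch.cuda.is_available())"
```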

### Pytorch NGC containers

We also ship Torch-TensorRT in <a href="https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch">Pytorch NGC containers</a>. Release notes for these containers can be found <a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html">here</a>. Check out the `release/ngc/23.XX` branch of Torch-TensorRT for the source code that ships with the `23.XX` version of the Pytorch NGC container.
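
For example, to pull one of these images (keep `23.XX` as a placeholder for the release you want; `nvcr.io/nvidia/pytorch` with a `-py3` tag suffix is the standard NGC naming):
```
# Substitute a concrete release for 23.XX, e.g. one listed in the release notes
docker pull nvcr.io/nvidia/pytorch:23.XX-py3
```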