Releases · NVIDIA-AI-IOT/torch2trt
v0.5.0
- Added tensor shape tracking to support dynamic shapes for the `flatten`, `squeeze`, `unsqueeze`, `view`, `reshape`, `interpolate`, and `getitem` methods
- Added EasyOCR example
- Added the `DatasetRecorder` context manager, which makes it easy to capture module inputs in a larger pipeline for calibration and shape inference
- Added support for the legacy `max_batch_size` using optimization profiles
- Added support for nested tuple, dict, and list module inputs and outputs via the `Flattener` class
- Added the ability to pass a dataset as the `inputs` argument and infer optimization profiles from the data
- Added `Dataset`, `TensorBatchDataset`, `ListDataset`, and `FolderDataset` classes
- Added support for dynamic shapes
- Known limitation: some converters (e.g., `view`) may behave unexpectedly if their arguments are defined with dynamic Tensor shapes.
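A minimal sketch of how the dynamic shape support above might be used, assuming the `min_shapes` / `opt_shapes` / `max_shapes` keyword arguments introduced around this release (check the documentation of your installed version; the model and shapes are illustrative):

```python
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# Build an engine with an optimization profile covering batch sizes 1-8.
# min_shapes / opt_shapes / max_shapes are assumed here; older releases
# only supported fixed input shapes.
model_trt = torch2trt(
    model,
    [x],
    min_shapes=[(1, 3, 224, 224)],   # smallest input the engine must accept
    opt_shapes=[(4, 3, 224, 224)],   # shape the engine is tuned for
    max_shapes=[(8, 3, 224, 224)],   # largest input the engine must accept
)

# The converted module can now be called with any batch size in [1, 8].
y = model_trt(torch.randn(6, 3, 224, 224).cuda())
```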
v0.4.0 - TensorRT 8, DLA, Native plugins library, explicit batch
- Added converter for `torch.nn.functional.group_norm` using native TensorRT layers
- Added converter for `torch.nn.ReflectionPad2d` using a plugin layer
- Added the torch2trt_plugins library
- Added support for Deep Learning Accelerator (DLA)
- Added support for explicit batch
- Added support for TensorRT 8
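As a rough illustration only: converting a module that uses the newly supported `torch.nn.ReflectionPad2d` on a TensorRT 8 install. The module and shapes are made up, and `fp16_mode` is the standard reduced-precision flag from the README:

```python
import torch
import torch.nn as nn
from torch2trt import torch2trt

class PadConv(nn.Module):
    """Toy module exercising the plugin-based ReflectionPad2d converter."""
    def __init__(self):
        super().__init__()
        self.pad = nn.ReflectionPad2d(1)
        self.conv = nn.Conv2d(3, 16, kernel_size=3)

    def forward(self, x):
        return self.conv(self.pad(x))

model = PadConv().eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# With explicit batch, the batch dimension is part of the network definition
# rather than a separate max_batch_size setting.
model_trt = torch2trt(model, [x], fp16_mode=True)
y = model_trt(x)
```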
v0.3.0 - contrib QAT example, additional converters
This version introduces the Quantization Aware Training (QAT) workflow in torch2trt.contrib (thanks to @SrivastavaKshitij).
It also contains various converters added since the previous release. Please see the notes below.
Added
- Added converter for `torch.nn.functional.adaptive_avg_pool3d`
- Added converter for `torch.nn.functional.adaptive_max_pool3d`
- Added converters for `torch.max_pool3d` and `torch.nn.functional.max_pool3d`
- Added Quantization Aware Training (QAT) workflow to contrib
- Added converter for `torch.roll`
- Added converter for `torch.nn.functional.layer_norm`
- Added converter for `torch.nn.functional.gelu`
- Added converter for `torch.nn.functional.linear`
- Added converter for `torch.nn.functional.silu`
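A small, illustrative sketch that exercises several of the converters added here (`layer_norm`, `linear`, `gelu`); the module and shapes are invented for the example:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch2trt import torch2trt

class MLPBlock(nn.Module):
    """Toy block touching the layer_norm, linear, and gelu converters."""
    def __init__(self, dim=256):
        super().__init__()
        self.norm = nn.LayerNorm(dim)        # layer_norm converter
        self.fc1 = nn.Linear(dim, 4 * dim)   # linear converter
        self.fc2 = nn.Linear(4 * dim, dim)

    def forward(self, x):
        x = self.norm(x)
        x = F.gelu(self.fc1(x))              # gelu converter
        return self.fc2(x)

model = MLPBlock().eval().cuda()
x = torch.randn(1, 256).cuda()
model_trt = torch2trt(model, [x])

# Compare TensorRT output against the original PyTorch module.
print(torch.max(torch.abs(model(x) - model_trt(x))))
```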
v0.2.0
Added
- Added converter for `torch.Tensor.expand`
- Added support for custom converters for methods defined outside of the `torch` module
- Added names for TensorRT layers
- Added GroupNorm plugin which internally uses the PyTorch `aten::group_norm` operator
- Replaced `Tensor.ndim` references with `len(tensor.shape)` to support older PyTorch versions
- Added reduced precision documentation page
- Added converters for the `floordiv`, `mod`, `ne`, and `torch.tensor` operations
- Extended the `relu` converter to support the `Tensor.relu` operation
- Extended the `sigmoid` converter to support the `Tensor.sigmoid` operation
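For context on the custom converter support above: converters are registered with the `tensorrt_converter` decorator, and the same pattern now applies to methods defined outside the `torch` module. The sketch below is adapted from the ReLU example in the torch2trt README; exact internals may differ across versions:

```python
import tensorrt as trt
from torch2trt import tensorrt_converter

@tensorrt_converter('torch.nn.ReLU.forward')
def convert_ReLU(ctx):
    # ctx.method_args holds the arguments of the intercepted call
    # (index 0 is the module itself for a bound method).
    input = ctx.method_args[1]
    output = ctx.method_return
    # Add the equivalent TensorRT layer and attach its output tensor to the
    # PyTorch output so downstream converters can find it.
    layer = ctx.network.add_activation(input=input._trt, type=trt.ActivationType.RELU)
    output._trt = layer.get_output(0)
```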
Initial release
- torch2trt method
- conversion hooks
- conversion context
- `TRTModule` class
- converters to support most `torchvision` image classification models
- image classification example notebook
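For reference, the basic workflow from the README looks roughly like this (the model choice and file name are illustrative):

```python
import torch
from torch2trt import torch2trt, TRTModule
from torchvision.models.alexnet import alexnet

# Create a regular PyTorch model and a sample input on the GPU.
model = alexnet(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()

# Convert with the torch2trt method; the result is a TRTModule.
model_trt = torch2trt(model, [x])

# Execute and compare against the original model.
print(torch.max(torch.abs(model(x) - model_trt(x))))

# TRTModule state can be saved and reloaded like a regular nn.Module.
torch.save(model_trt.state_dict(), 'alexnet_trt.pth')
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('alexnet_trt.pth'))
```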