This repository contains model designs, deployment solutions, and inference samples for state-of-the-art networks used in Autonomous Vehicle (AV) applications on NVIDIA DRIVE Platforms.
Deployment and Inference Solutions
- ONNX Export Guidance for TensorRT (a minimal export-and-build sketch follows this list)
- Triton PTX/Cubin Integration for TensorRT Plugins
- BEVFormer INT8 Explicit Quantization
- DCNv4 TensorRT
- Far3D TensorRT
- Deploy LLMs with TensorRT-LLM
- MTMI TensorRT
- PETRv1&v2 TensorRT
- Sparsity INT8 Training and TensorRT Inference
- StreamPETR TensorRT
- UniAD TensorRT
- VAD-TensorRT
- Diffusion-Planner-TensorRT
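
For orientation, below is a minimal, generic sketch of the ONNX-export-then-TensorRT-build workflow that the deployment guides above elaborate on. The model, tensor names, shapes, and file paths are illustrative placeholders, not artifacts from this repository.

```python
# Generic sketch: export a PyTorch model to ONNX, then build a TensorRT engine.
# The torchvision model and the input shape below are placeholders for illustration.
import torch
import torchvision

# Placeholder network; the actual guides target AV networks such as BEVFormer or StreamPETR.
model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX with named I/O and a dynamic batch dimension.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Build a TensorRT engine from the exported ONNX, for example with trtexec:
#   trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

Each per-network guide linked above replaces the placeholder model with the actual network, and typically adds custom TensorRT plugins, quantization, or calibration steps specific to that architecture.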