This repo provides a TensorRT C++ implementation of NVIDIA's NanoSAM, a distilled Segment Anything model, for real-time inference on the GPU.
There are two ways to load engines:

- Load engines built by `trtexec`:

```cpp
#include "nanosam/nanosam.h"

NanoSam nanosam(
    "resnet18_image_encoder.engine",
    "mobile_sam_mask_decoder.engine"
);
```

- Build engines directly from ONNX files:

```cpp
NanoSam nanosam(
    "resnet18_image_encoder.onnx",
    "mobile_sam_mask_decoder.onnx"
);
```
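Building from ONNX is convenient, but the first run takes longer because TensorRT has to build the engines. If you want a single binary that works either way, a small check like the one below can prefer pre-built engines and fall back to the ONNX files. This is an illustrative sketch, assuming the constructor accepts either kind of path as shown above; the file-existence check is not part of the library and requires C++17 for `std::filesystem`.

```cpp
#include <filesystem>
#include "nanosam/nanosam.h"

int main()
{
    namespace fs = std::filesystem;  // requires C++17

    // Prefer serialized TensorRT engines when they already exist; otherwise
    // fall back to the ONNX files and let NanoSam build the engines itself
    // (slower on the first run). This check is illustrative, not library code.
    const bool haveEngines = fs::exists("resnet18_image_encoder.engine") &&
                             fs::exists("mobile_sam_mask_decoder.engine");

    NanoSam nanosam(
        haveEngines ? "resnet18_image_encoder.engine" : "resnet18_image_encoder.onnx",
        haveEngines ? "mobile_sam_mask_decoder.engine" : "mobile_sam_mask_decoder.onnx");

    return 0;
}
```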
 
- Segment an object using a prompt point:

```cpp
Mat image = imread("assets/dog.jpg");

// Foreground point
vector<Point> points = { Point(1300, 900) };
vector<float> labels = { 1 };

Mat mask = nanosam.predict(image, points, labels);
```
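To check the result of the call above visually, the mask can be overlaid on the input image with standard OpenCV calls. This is an illustrative sketch; it assumes `mask` is a single-channel image of the same size as `image`, and the output file name is arbitrary.

```cpp
// Paint the masked pixels green and blend with the original image
// (assumes `mask` is single-channel and the same size as `image`).
Mat overlay = image.clone();
overlay.setTo(Scalar(0, 255, 0), mask > 0);

Mat blended;
addWeighted(image, 0.6, overlay, 0.4, 0.0, blended);
imwrite("dog_mask_overlay.png", blended);
```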
- Create masks from bounding boxes:

```cpp
Mat image = imread("assets/dogs.jpg");

// Bounding box top-left and bottom-right points
vector<Point> points = { Point(100, 100), Point(750, 759) };
vector<float> labels = { 2, 3 };

Mat mask = nanosam.predict(image, points, labels);
```

## Notes

The point labels may be:

| Point Label | Description |
|:-----------:|-------------|
| 0 | Background point |
| 1 | Foreground point |
| 2 | Bounding box top-left |
| 3 | Bounding box bottom-right |

The inference time includes the pre-processing and post-processing time:

| Device | Image Shape (WxH) | Model Shape (WxH) | Inference Time (ms) |
|:------:|:-----------------:|:-----------------:|:-------------------:|
| RTX 4090 | 2048x1365 | 1024x1024 | 14 |
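The number above is specific to one machine; a quick way to measure a comparable end-to-end latency on your own hardware is to time `predict` after a warm-up call. This is an illustrative benchmark sketch using `std::chrono`; the file names and prompt point follow the examples above.

```cpp
#include <chrono>
#include <iostream>
#include <opencv2/opencv.hpp>
#include "nanosam/nanosam.h"

using namespace cv;
using namespace std;

int main()
{
    NanoSam nanosam("resnet18_image_encoder.engine",
                    "mobile_sam_mask_decoder.engine");

    Mat image = imread("assets/dog.jpg");
    vector<Point> points = { Point(1300, 900) };
    vector<float> labels = { 1 };

    // Warm-up: the first call can include one-time initialization costs.
    nanosam.predict(image, points, labels);

    auto start = chrono::steady_clock::now();
    Mat mask = nanosam.predict(image, points, labels);
    auto stop = chrono::steady_clock::now();

    cout << "End-to-end inference time: "
         << chrono::duration_cast<chrono::milliseconds>(stop - start).count()
         << " ms" << endl;
    return 0;
}
```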
- Download the image encoder: resnet18_image_encoder.onnx
- Download the mask decoder: mobile_sam_mask_decoder.onnx
- Download the TensorRT zip file that matches the Windows version you are using.
- Choose where you want to install TensorRT. The zip file will install everything into a subdirectory called `TensorRT-8.x.x.x`. This new subdirectory will be referred to as `<installpath>` in the steps below.
- Unzip the `TensorRT-8.x.x.x.Windows10.x86_64.cuda-x.x.zip` file to the location that you chose, where:
  - `8.x.x.x` is your TensorRT version
  - `cuda-x.x` is your CUDA version, either `11.8` or `12.0`
- Add the TensorRT library files to your system `PATH`. To do so, copy the DLL files from `<installpath>/lib` to your CUDA installation directory, for example, `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin`, where `vX.Y` is your CUDA version. The CUDA installer should have already added the CUDA path to your system `PATH`.
- Ensure that the following is present in your Visual Studio Solution project properties:
  - `<installpath>/lib` has been added to your `PATH` variable and is present under VC++ Directories > Executable Directories.
  - `<installpath>/include` is present under C/C++ > General > Additional Include Directories.
  - `nvinfer.lib` and any other LIB files that your project requires are present under Linker > Input > Additional Dependencies.
 - Download and install any recent OpenCV for Windows.
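Before building this project, it can help to compile a tiny stand-alone program to confirm that the include, linker, and `PATH` settings above are picked up. This is illustrative only and not part of this repository; `getInferLibVersion()` is TensorRT's C API for querying the linked library version.

```cpp
// Minimal sanity check for the Visual Studio setup described above.
// It compiles only if the TensorRT and OpenCV include directories are
// configured, and the TensorRT call links and runs only if nvinfer.lib
// is listed and nvinfer.dll is found on PATH.
#include <iostream>
#include <NvInfer.h>                  // TensorRT
#include <opencv2/core/version.hpp>   // OpenCV version macros

int main()
{
    // getInferLibVersion() is exported by the TensorRT runtime,
    // e.g. it returns 8601 for version 8.6.1.
    std::cout << "TensorRT library version: " << getInferLibVersion() << std::endl;

    // CV_VERSION only needs the OpenCV headers on the include path.
    std::cout << "OpenCV header version: " << CV_VERSION << std::endl;
    return 0;
}
```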
 This project is based on the following projects: 

