The https://github.com/NVIDIA/TensorRT-Model-Optimizer/tree/main/examples/vlm_ptq page indicates that TensorRT-Model-Optimizer currently supports quantization for only three multimodal models: Llava, VILA, and Phi-3-vision. It would be beneficial to add support for quantizing Qwen2.5-VL as well.

<img width="706" height="260" alt="Supported multimodal models table from the vlm_ptq README" src="https://github.com/user-attachments/assets/a288683e-3700-4795-836d-46ca7c9b3dd9" />
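
In the meantime, below is a minimal sketch of what user-side PTQ for Qwen2.5-VL might look like with modelopt's generic PyTorch quantization API (`mtq.quantize`). This is an assumption-laden sketch, not a verified recipe: the model class requires a recent transformers release, and the FP8 config choice, the text-only calibration prompts, and the `"*visual*"` pattern used to skip the vision encoder are all my own guesses rather than anything from the vlm_ptq examples.

```python
# Hypothetical sketch only: PTQ of Qwen2.5-VL via modelopt's PyTorch API.
# Assumes transformers>=4.49 (for Qwen2_5_VLForConditionalGeneration) and
# nvidia-modelopt. The config choice, the text-only calibration data, and
# the "*visual*" skip pattern are unverified assumptions.
import copy

import torch
import modelopt.torch.quantization as mtq
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

MODEL_ID = "Qwen/Qwen2.5-VL-7B-Instruct"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="cuda"
)

# Text-only calibration prompts; a real calibration set would also
# include image inputs representative of the deployment workload.
calib_prompts = [
    "Describe the scene in front of the camera.",
    "List the objects visible in the image.",
]

def forward_loop(m):
    # Run calibration batches so modelopt can collect activation statistics.
    with torch.no_grad():
        for prompt in calib_prompts:
            inputs = processor(text=[prompt], return_tensors="pt").to(m.device)
            m(**inputs)

# Start from a stock FP8 recipe and (assumption) leave the vision encoder
# unquantized, mirroring how the existing VLM examples keep vision towers
# in higher precision.
cfg = copy.deepcopy(mtq.FP8_DEFAULT_CFG)
cfg["quant_cfg"]["*visual*"] = {"enable": False}

model = mtq.quantize(model, cfg, forward_loop)
```

Even if calibration along these lines works, exporting the quantized checkpoint to TensorRT-LLM would presumably still need Qwen2.5-VL-specific handling in the vlm_ptq example scripts, which is the gap this request is about.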