Commit 608e636 (parent 17439e6)

Update Readme for torch_quant_to_onnx.py example

Signed-off-by: ajrasane <[email protected]>


examples/onnx_ptq/README.md (2 additions & 0 deletions)
````diff
@@ -152,6 +152,8 @@ python torch_quant_to_onnx.py \
     --onnx_save_path=<path to save the exported ONNX model>
 ```
 
+> *Note: TensorRT has limited support for Convolution layers with certain precision formats. FP8 Convolution layers remain restricted to specific kernel sizes and channel multiples, and there are no NVFP4 convolution kernels today—NVFP4 export is effectively limited to GEMM-heavy Transformer-style models (e.g., ViT). Convolution-centric CNNs such as ResNet, ConvNeXt, or MobileNet will fail when exported with `quantize_mode=nvfp4|int4_awq`.*
+
 ### Evaluation
 
 If the input model is of type image classification, use the following script to evaluate it.
````
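For reference, here is a minimal sketch of an export run that follows the note's guidance. Only `quantize_mode` and `--onnx_save_path` are grounded in this diff; the model-selection flag below is a hypothetical placeholder, since the script's actual flag for choosing a model does not appear here.

```bash
# Sketch of an NVFP4 export per the note above: pick a GEMM-heavy
# Transformer-style model (e.g. a ViT), not a convolution-centric CNN.
# NOTE: --model_name is a hypothetical placeholder; only --quantize_mode
# and --onnx_save_path appear in this diff.
python torch_quant_to_onnx.py \
    --model_name=vit_base_patch16_224 \
    --quantize_mode=nvfp4 \
    --onnx_save_path=vit_base_patch16_224.nvfp4.onnx

# Per the note, the same command with a CNN such as ResNet and
# --quantize_mode=nvfp4 (or int4_awq) is expected to fail at export.
```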
