Hello,
I am using TensorRT 10.2 and noticed that the normal FP8 convolution has been updated.
However, when I try to run a simple QDQ + Conv ONNX model, the FP8 convolution is not selected; FP8 tactics are not even timed.
Here is the model I used. It was quantized with TensorRT-Model-Optimizer, and I am running on an H100 GPU.

$ trtexec --onnx=simple_conv_fp8.onnx --fp16 --fp8 --profilingVerbosity=detailed --verbose --exportLayerInfo=layerinfo.json