Description
After I inserted Q/DQ nodes, ConvTranspose2d no longer fuses with the following BatchNorm and ReLU layers; BN and ReLU are computed in FP32 instead.
However, the same model without Q/DQ nodes produces the fused operator when built with --int8. A minimal sketch of the block in question is shown below.
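This is a minimal sketch of the setup being described, not the exact model: it assumes Q/DQ nodes were placed on the ConvTranspose2d input and weight with torch.fake_quantize_per_tensor_affine (which exports to QuantizeLinear/DequantizeLinear at opset 13+). All layer sizes and scales are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QDQDeconvBlock(nn.Module):
    """ConvTranspose2d -> BN -> ReLU with Q/DQ nodes on the deconv input and weight."""
    def __init__(self, in_ch=16, out_ch=16):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Q/DQ on the activation feeding ConvTranspose2d (placeholder scale)
        x = torch.fake_quantize_per_tensor_affine(x, 0.1, 0, -128, 127)
        # Q/DQ on the weight (per-tensor here for brevity)
        w = torch.fake_quantize_per_tensor_affine(self.deconv.weight, 0.05, 0, -128, 127)
        x = F.conv_transpose2d(x, w, stride=2)
        # With Q/DQ present, TensorRT 8.6 leaves these two layers unfused and in FP32
        return self.relu(self.bn(x))

# Export to ONNX; the exported graph contains QuantizeLinear/DequantizeLinear
# nodes in front of the ConvTranspose, followed by BatchNormalization and Relu.
m = QDQDeconvBlock().eval()
torch.onnx.export(m, torch.randn(1, 16, 8, 8), "qdq_deconv.onnx", opset_version=13)
```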
Q1: How can I make QDQ-ConvTranspose2d fuse with BN and Relu?
Q2: Can sigmoid fuse with ConvTranspose2d?
TensorRT version: 8.6.1.6

