
How to Fuse BN and ReLU with QDQ-ConvTranspose2d in TensorRT  #3412

@thfylsty

Description


After I inserted QDQ nodes, ConvTranspose2d no longer fuses with the subsequent BN and ReLU layers, and BN and ReLU are computed in FP32 instead.
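For context, here is a minimal sketch of how a block like this could be built and exported, assuming the QDQ nodes are inserted with NVIDIA's pytorch-quantization toolkit (the class and file names below are illustrative, not my actual model):

```python
# Sketch of a ConvTranspose2d -> BN -> ReLU block with QDQ insertion,
# assuming NVIDIA's pytorch-quantization toolkit.
import torch
import torch.nn as nn
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

# Monkey-patch torch.nn so ConvTranspose2d becomes QuantConvTranspose2d,
# which fake-quantizes its input and weight (the QDQ pairs seen in ONNX).
quant_modules.initialize()

class DeconvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # In the exported ONNX this appears as
        # QuantizeLinear/DequantizeLinear -> ConvTranspose -> BatchNormalization -> Relu.
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.bn(self.deconv(x)))

model = DeconvBlock(16, 8).eval()
dummy = torch.randn(1, 16, 32, 32)
# Export fake-quant as QuantizeLinear/DequantizeLinear nodes (needs opset >= 13).
# Calibration, which sets the quantizer amax ranges, is omitted in this sketch;
# a real export needs it to run first.
quant_nn.TensorQuantizer.use_fb_fake_quant = True
torch.onnx.export(model, dummy, "deconv_qdq.onnx", opset_version=13)
```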

However, the same model without QDQ nodes generates the fused operators directly when built with --int8.
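For reference, the non-QDQ INT8 build that does fuse can be produced with a trtexec invocation along these lines (file names are placeholders):

```shell
trtexec --onnx=model_noqdq.onnx --int8 --saveEngine=model_noqdq.engine --verbose
```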

Q1: How can I make QDQ-ConvTranspose2d fuse with BN and ReLU?

Q2: Can sigmoid fuse with ConvTranspose2d?

TensorRT version: 8.6.1.6

[Screenshots attached: 20231101-101935, 20231101-101925]
