
Add support for quantized deconv #7640

@digantdesai

Description

🚀 The feature, motivation and pitch

We already have support for deconv (or transposed conv) from #7514. However, we need to update the XNNPACK quantizer, which lives in PyTorch/PyTorch, to annotate this op, and update the partitioner config to lower it as a quantized operator.
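
As an illustration of the desired end state, here is a minimal sketch of quantizing a `ConvTranspose2d` model with the XNNPACK quantizer and lowering it through the XNNPACK delegate. The entry points used below (`XNNPACKQuantizer`, `prepare_pt2e`/`convert_pt2e`, `to_edge_transform_and_lower`, `XnnpackPartitioner`) are assumptions based on the current PT2E and ExecuTorch APIs and may differ across versions; today the quantizer does not annotate the transposed-conv op, and the partitioner config does not recognize the quantized pattern, which is the gap this issue asks to close.

```python
# Sketch only: API names are assumptions based on current PT2E / ExecuTorch
# entry points and may differ across versions.
import torch
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner


class Deconv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.deconv = torch.nn.ConvTranspose2d(8, 16, kernel_size=3, stride=2)

    def forward(self, x):
        return self.deconv(x)


model = Deconv().eval()
example_inputs = (torch.randn(1, 8, 32, 32),)

# Export to an ATen graph for PT2E quantization.
exported = torch.export.export_for_training(model, example_inputs).module()

# Annotate with the XNNPACK quantizer. Today it does not annotate
# conv_transpose2d; adding that annotation is part of this request.
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)  # calibration pass
quantized = convert_pt2e(prepared)

# Lower to ExecuTorch with the XNNPACK partitioner. The partitioner config
# would need to recognize the quantized transposed-conv pattern so it is
# delegated to XNNPACK rather than falling back to portable ops.
et_program = to_edge_transform_and_lower(
    torch.export.export(quantized, example_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()
```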

Alternatives

None.

Additional context

No response

RFC (Optional)

No response

cc @mcr229


Labels

module: xnnpack — Issues related to xnnpack delegation and the code under backends/xnnpack/
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
