This repository was archived by the owner on Nov 8, 2022. It is now read-only.

question: [Quantization] Which files to change to make inference faster for Q8BERT? #221

@sarthaklangde

Description

I know it has been mentioned in previous issues that Q8BERT was just an experiment to measure the accuracy of a quantized BERT model. But, given that the accuracy is good, what changes would need to be made to the torch.nn.quantization file to replace the FP32 operations with INT8 operations?
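
For concreteness, here is a minimal sketch of the FP32-to-INT8 swap I have in mind, using PyTorch's quantized functional API. The scales and zero points below are arbitrary placeholders, not calibrated values:

```python
import torch
import torch.nn.quantized.functional as qF

# Toy FP32 activation and weight, standing in for one BERT Linear layer.
x = torch.randn(4, 8)
w = torch.randn(16, 8)

# Quantize activations (quint8) and weights (qint8); the scales and
# zero points here are placeholders, not calibrated values.
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=64, dtype=torch.quint8)
qw = torch.quantize_per_tensor(w, scale=0.05, zero_point=0, dtype=torch.qint8)

# Runs the INT8 GEMM kernel; the output scale/zero_point are placeholders too.
qy = qF.linear(qx, qw, bias=None, scale=0.1, zero_point=64)

y = qy.dequantize()  # back to FP32 for any surrounding FP32 ops
```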

Replacing the FP32 Linear layers with torch.nn.quantized.Linear should theoretically work, since it has optimized INT8 kernels, but in practice it doesn't. The same goes for the other layers.
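
For comparison, PyTorch's dynamic quantization API performs this module swap automatically, replacing nn.Linear with torch.nn.quantized.dynamic.Linear, which stores INT8 weights and quantizes activations on the fly. A minimal sketch, with a stand-in module in place of the real Q8BERT model:

```python
import torch
from torch.quantization import quantize_dynamic

# Stand-in for a trained FP32 BERT module; substitute the real Q8BERT model.
model = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.ReLU())
model.eval()

# Replace every nn.Linear with torch.nn.quantized.dynamic.Linear.
quantized_model = quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = quantized_model(torch.randn(1, 768))
```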

If someone could point out how to improve the inference speed (hints, tips, directions, code, anything), it would be very helpful: the model's accuracy is really good and I would like to use it for downstream tasks. I don't mind creating a PR once those changes are done so that they can be merged into the main repo.

Thank you!
