Quantization of full models #13100
Wheest asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule (Unanswered)
Replies: 0
I am trying to quantize some simple CIFAR10 models, following the documentation on quantization. You can see the gist I am using, which reproduces my issue; I have removed any pretraining from it to keep things simple.
Note that there is a dependency on the `pytorch-lighting-cifar` package (`pip3 install pytorch-lighting-cifar`), which provides the model architectures. Trying an example with
python quantize_standalone.py --model_arch mobilenetv2 --epochs 2
fails with:

Since it is a callback, `out = F.relu(self.bn1(self.conv1(x)))` is not actually in `quantization.py`, so I am not sure why it fails. However, if I add the line `import torch.nn.functional as F` at the start of the file `quantization.py`, I get further, but then fail with:

I get the same issue running other models such as `resnet18` and `mobilenetv1`. Is there something I'm missing? I believe I'm following the docs correctly.
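For reference, the eager-mode QAT flow that the Lightning callback wraps can be sketched as follows (the `Tiny` module is a hypothetical stand-in, not a model from the gist):

```python
# Minimal eager-mode quantization-aware-training flow with plain
# torch.quantization, the machinery QuantizationAwareTraining wraps.
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # float -> quantized
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # quantized -> float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = Tiny()  # starts in train mode, as prepare_qat requires
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)  # insert fake-quant observers
# ... the training loop would run here ...
model.eval()
quantized = torch.quantization.convert(model)  # swap in quantized modules
out = quantized(torch.randn(1, 3, 8, 8))
```

This is only a sketch of the underlying mechanism; with the callback, the stub insertion and conversion are supposed to happen automatically around `trainer.fit`.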
@Borda @awaelchli git blame says you have both committed heavily to the quantization code, is this something you have seen before, or can give any tips to tackle?