Cannot evaluate quantized model #13065
Unanswered
Wheest asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0 comments
I am trying to quantize some simple CIFAR10 models, following the documentation on quantization. You can see the gist I am using, from which I have removed any pretraining to keep it simple. The dependencies include `pytorch-lighting-cifar`, which has the model architectures.

Running an example with

```
python quantize_standalone.py --model_arch mobilenetv2 --epoch 1
```

fails with an error raised from `quantization.py` that points at the line `out = F.relu(self.bn1(self.conv1(x)))`. Since the quantization is done by a callback, that line is not actually in `quantization.py`, so I am not sure why it fails there.

If I add `import torch.nn.functional as F` at the start of `quantization.py`, I get further, but then fail with a different error. I get the same issue running other models such as `resnet18` and `mobilenetv1`.

Is there something I'm missing? I believe I'm following the docs correctly. My PyTorch Lightning version is 1.6.2 and my torch version is 1.11.0+cu102.