Hello, @i3abghany! There's a |
Hello,
I am having trouble making AutoQ produce a model with bit widths other than 2, 4, and 8. I slightly modified
`nncf/examples/torch/classification/configs/mixed_precision/resnet50_imagenet_mixed_int_autoq_staged.json`
(a condensed sketch of the result follows the list below). The differences from the example file are:
- `resnet18` using the CIFAR10 dataset
- `pretrained=false`
- `compression.precision.num_iters=10`
- `compression.precision.bits=[2, 4, 8, 12, 16, 20, 24, 28, 32]`
I use a small number of episodes just to have a quick run.
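Condensed, the modified config looks roughly like this (a sketch only; the keys mirror the list above and everything else is kept as in the example file, so the exact nesting may differ slightly):

```json
{
    "model": "resnet18",
    "pretrained": false,
    "dataset": "CIFAR10",
    "target_device": "VPU",
    "compression": {
        "algorithm": "quantization",
        "precision": {
            "num_iters": 10,
            "bits": [2, 4, 8, 12, 16, 20, 24, 28, 32]
        }
    }
}
```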
I suspect that `target_device` having the value `VPU` makes it produce weights and activations only with specific sizes, and I am not interested in a specific hardware architecture. When I remove the `target_device` property from the JSON file entirely, I get an error saying that Automatic Precision Initialization is only supported for the target devices `VPU` and `NONE` (since the default is `CPU`). I am just interested in producing a model that is maybe 90% the size of a model that uses only 32-bit floats, with no specific hardware requirements in mind.
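For context, the way the config gets consumed is roughly the following sketch (the file name and the CIFAR-10 `num_classes` are placeholders, and the data-loader/eval-function registration that AutoQ needs is omitted):

```python
import torchvision.models as models

from nncf import NNCFConfig
from nncf.torch import create_compressed_model

# Load the modified config (hypothetical file name for the sketch above).
nncf_config = NNCFConfig.from_json("resnet18_cifar10_autoq_staged.json")

# Per the error message, AutoQ precision initialization only accepts "VPU" or
# "NONE", so overriding the device here should drop the HW-specific bit-width
# constraints (NNCFConfig can be updated like a dict).
nncf_config["target_device"] = "NONE"

model = models.resnet18(num_classes=10)

# NOTE: AutoQ also needs init data loaders and an evaluation function
# registered on the config (nncf.torch.register_default_init_args) before
# this call; that part is omitted here.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
```

If setting `target_device` to `NONE` is the intended way to lift that restriction, the full `bits` list should presumably become searchable.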