**Summary:** This commit allows users to call the following,
which swaps `FakeQuantized*` modules back to the corresponding
`torch.nn.*` modules without performing post-training quantization:
```
QATConfig(base_config=None, step="convert")
```
This has the exact same functionality as this deprecated config:
```
FromIntXQuantizationAwareTrainingConfig()
```
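For example, here is a minimal sketch of the swap (the toy model and the
`Int8DynamicActivationInt4WeightConfig` base config are illustrative
assumptions, and exact import paths may vary across torchao versions):
```
import copy

import torch
import torch.nn as nn
from torchao.quantization import Int8DynamicActivationInt4WeightConfig, quantize_
from torchao.quantization.qat import QATConfig

model = nn.Sequential(nn.Linear(32, 32))

# Prepare: swaps nn.Linear -> FakeQuantizedLinear for QAT
quantize_(model, QATConfig(Int8DynamicActivationInt4WeightConfig(), step="prepare"))
weight_before = copy.deepcopy(model[0].weight)

# Convert with base_config=None: swaps FakeQuantizedLinear back to
# nn.Linear without applying post-training quantization
quantize_(model, QATConfig(base_config=None, step="convert"))
assert isinstance(model[0], nn.Linear)
assert torch.equal(model[0].weight, weight_before)  # weights untouched
```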
This functionality is added back since it may be useful to users
who wish to save QAT-trained checkpoints from models containing
only `torch.nn.*` modules (not `FakeQuantized*` modules), e.g.
when training and inference need to happen on different machines:
```
quantize_(model, QATConfig(base_ptq_config, step="prepare"))
train(model)
quantize_(model, QATConfig(step="convert"))
torch.save(model.state_dict(), "my_checkpoint.pt")
# On a different machine
model.load_state_dict(torch.load("my_checkpoint.pt"))
quantize_(model, base_ptq_config)
```
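A fleshed-out version of this workflow might look like the following
(the toy model, training loop, and base PTQ config are assumptions for
illustration, not part of this commit):
```
import torch
import torch.nn as nn
from torchao.quantization import Int8DynamicActivationInt4WeightConfig, quantize_
from torchao.quantization.qat import QATConfig

base_ptq_config = Int8DynamicActivationInt4WeightConfig()

# --- Training machine ---
model = nn.Sequential(nn.Linear(64, 64))
quantize_(model, QATConfig(base_ptq_config, step="prepare"))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(10):  # stand-in for a real training loop
    loss = model(torch.randn(8, 64)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Swap FakeQuantized* back to torch.nn.* without quantizing, so the
# checkpoint contains only plain torch.nn state
quantize_(model, QATConfig(step="convert"))
torch.save(model.state_dict(), "my_checkpoint.pt")

# --- Inference machine ---
model = nn.Sequential(nn.Linear(64, 64))
model.load_state_dict(torch.load("my_checkpoint.pt"))
quantize_(model, base_ptq_config)  # now apply PTQ for inference
```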
**Test Plan:**
```
python test/quantization/test_qat.py -k qat_config_init
python test/quantization/test_qat.py -k qat_api_convert_no_quantization
```