
Commit c2db6b6

PT2E QAT Move model to eval before convert
1 parent: 2f2db74 · commit: c2db6b6

File tree

1 file changed: +3 -3 lines changed


prototype_source/pt2e_quant_x86_inductor.rst

Lines changed: 3 additions & 3 deletions
@@ -286,12 +286,12 @@ The PyTorch 2 Export QAT flow is largely similar to the PTQ flow:
 
     # train omitted
 
-    converted_model = convert_pt2e(prepared_model)
-    # we have a model with aten ops doing integer computations when possible
-
     # move the quantized model to eval mode, equivalent to `m.eval()`
     torch.ao.quantization.move_exported_model_to_eval(converted_model)
 
+    converted_model = convert_pt2e(prepared_model)
+    # we have a model with aten ops doing integer computations when possible
+
     # Lower the model into Inductor
     with torch.no_grad():
         optimized_model = torch.compile(converted_model)
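For context, the hunk above reorders two steps of the tutorial's PT2E QAT recipe: the exported model is switched to eval behavior with torch.ao.quantization.move_exported_model_to_eval (the tutorial's comment notes this is the exported-model equivalent of m.eval()) before convert_pt2e produces the quantized model, and only then is the result lowered through torch.compile. The sketch below shows one way the full flow reads with that ordering. It is a minimal illustration under stated assumptions, not the tutorial's exact code: the toy module M, example_inputs, the torch.export.export capture step, and applying the eval move to prepared_model (so the snippet runs as written) are assumptions of this sketch.

import torch
from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import (
    X86InductorQuantizer,
    get_default_x86_inductor_quantization_config,
)


class M(torch.nn.Module):
    """Toy module used only for this sketch (hypothetical, not from the diff)."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return self.linear(x)


example_inputs = (torch.randn(2, 16),)

# Capture the model as a graph module; depending on the PyTorch version, the
# tutorial may use a different capture API for this step.
exported_model = torch.export.export(M(), example_inputs).module()

# Configure the X86 Inductor quantizer for QAT and insert fake-quantize observers.
quantizer = X86InductorQuantizer()
quantizer.set_global(get_default_x86_inductor_quantization_config(is_qat=True))
prepared_model = prepare_qat_pt2e(exported_model, quantizer)

# train omitted

# Move the exported model to eval mode BEFORE converting, equivalent to `m.eval()`.
# Assumption of this sketch: the call is applied to `prepared_model` so the
# snippet is self-consistent and runnable.
torch.ao.quantization.move_exported_model_to_eval(prepared_model)

converted_model = convert_pt2e(prepared_model)
# we now have a model with aten ops doing integer computations when possible

# Lower the model into Inductor
with torch.no_grad():
    optimized_model = torch.compile(converted_model)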
