1 parent 2f2db74 commit c2db6b6
prototype_source/pt2e_quant_x86_inductor.rst
@@ -286,12 +286,12 @@ The PyTorch 2 Export QAT flow is largely similar to the PTQ flow:
 
     # train omitted
 
-    converted_model = convert_pt2e(prepared_model)
-    # we have a model with aten ops doing integer computations when possible
-
     # move the quantized model to eval mode, equivalent to `m.eval()`
     torch.ao.quantization.move_exported_model_to_eval(converted_model)
+    converted_model = convert_pt2e(prepared_model)
+    # we have a model with aten ops doing integer computations when possible
+
     # Lower the model into Inductor
     with torch.no_grad():
         optimized_model = torch.compile(converted_model)
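One detail worth noting in the snippet above: `move_exported_model_to_eval` is called on `converted_model`, and `converted_model` is only produced by `convert_pt2e(prepared_model)`, so a runnable flow has a data dependency between the two steps. The sketch below illustrates that dependency with hypothetical stand-in functions (`convert_pt2e` and `move_exported_model_to_eval` here are mocks, not the real `torch.ao.quantization` APIs, so it runs without PyTorch installed):

```python
# Mock stand-ins for the PT2E QAT APIs, used only to show call ordering.
# The real functions are torch.ao.quantization.quantize_pt2e.convert_pt2e
# and torch.ao.quantization.move_exported_model_to_eval.

calls = []  # records the order in which the steps run

def convert_pt2e(model):
    """Mock: 'converts' the prepared model and records the call."""
    calls.append("convert_pt2e")
    return {"source": model, "mode": "train"}

def move_exported_model_to_eval(model):
    """Mock: flips the model to eval mode, like m.eval()."""
    calls.append("move_exported_model_to_eval")
    model["mode"] = "eval"
    return model

prepared_model = "prepared"  # placeholder for the prepared/trained model

# convert_pt2e must run first, because it is what defines converted_model
converted_model = convert_pt2e(prepared_model)
converted_model = move_exported_model_to_eval(converted_model)

print(calls)                     # order the two steps actually ran in
print(converted_model["mode"])   # final mode of the mock model
```

Reversing the two calls in this sketch would raise a `NameError`, since `converted_model` would be used before it is assigned; that is the kind of ordering constraint the diff above is shuffling.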