5 changes: 3 additions & 2 deletions docs/source/tutorials_source/pt2e_quant_x86_inductor.rst
@@ -104,7 +104,8 @@ We will start by performing the necessary imports, capturing the FX Graph from t
 # Note: requires torch >= 2.6
 exported_model = export(
     model,
-    example_inputs
+    example_inputs,
+    strict=True
 )


@@ -266,7 +267,7 @@ The PyTorch 2 Export QAT flow is largely similar to the PTQ flow:
 # Step 1. program capture
 # NOTE: this API will be updated to torch.export API in the future, but the captured
 # result should mostly stay the same
-exported_model = export(m, example_inputs)
+exported_model = export(m, example_inputs, strict=True)
 # we get a model with aten ops

 # Step 2. quantization-aware training
1 change: 1 addition & 0 deletions docs/source/tutorials_source/pt2e_quant_xpu_inductor.rst
@@ -85,6 +85,7 @@ We will start by performing the necessary imports, capturing the FX Graph from t
 exported_model = export(
     model,
     example_inputs,
+    strict=True
 ).module()

