Update executorch documentation to use export_for_training (#5219)
Summary:
Switch all documentation and example references in ExecuTorch from `capture_pre_autograd_graph` to `export_for_training`.
Pull Request resolved: #5219
Reviewed By: JacobSzwejbka
Differential Revision: D62427911
fbshipit-source-id: 38f24cbd587835970886cae330b19944885de7dd
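For reference, the user-facing change is essentially the following. This is a minimal, hypothetical sketch (the toy `MyModel` and inputs are placeholders, not from the PR); `export_for_training` requires PyTorch 2.5+.

```python
import torch
from torch.export import export_for_training  # PyTorch >= 2.5

class MyModel(torch.nn.Module):  # hypothetical toy model for illustration
    def forward(self, x):
        return torch.relu(x)

model, sample_inputs = MyModel().eval(), (torch.randn(2, 4),)

# Old API (whose references this PR removes from the docs):
#   from torch._export import capture_pre_autograd_graph
#   captured = capture_pre_autograd_graph(model, sample_inputs)

# New API: export_for_training returns an ExportedProgram; .module()
# yields the captured graph module that the quantization flow consumes.
captured = export_for_training(model, sample_inputs).module()
```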
docs/source/tutorial-xnnpack-delegate-lowering.md
3 additions & 3 deletions
@@ -74,13 +74,13 @@ After lowering to the XNNPACK Program, we can then prepare it for executorch and
The XNNPACK delegate can also execute symmetrically quantized models. To understand the quantization flow and learn how to quantize models, refer to [Custom Quantization](quantization-custom-quantization.md) note. For the sake of this tutorial, we will leverage the `quantize()` python helper function conveniently added to the `executorch/executorch/examples` folder.

```python
-from torch._export import capture_pre_autograd_graph
+from torch.export import export_for_training
```

-Quantization requires a two stage export. First we use the `capture_pre_autograd_graph` API to capture the model before giving it to `quantize` utility function. After performing the quantization step, we can now leverage the XNNPACK delegate to lower the quantized exported model graph. From here, the procedure is the same as for the non-quantized model lowering to XNNPACK.
+Quantization requires a two-stage export. First we use the `export_for_training` API to capture the model before giving it to the `quantize` utility function. After performing the quantization step, we can now leverage the XNNPACK delegate to lower the quantized exported model graph. From here, the procedure is the same as for the non-quantized model lowering to XNNPACK.
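For readers following the tutorial, the end-to-end flow implied by this diff looks roughly like the sketch below. This is a hedged reconstruction rather than the tutorial's exact code: the `XNNPACKQuantizer`/`prepare_pt2e`/`convert_pt2e` calls stand in for the `quantize()` helper mentioned above, and the module paths are assumptions that may shift between PyTorch/ExecuTorch releases.

```python
# A hedged sketch of the two-stage quantized XNNPACK lowering flow.
import torch
import torchvision.models as models

from torch.export import export_for_training  # PyTorch >= 2.5
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

model = models.mobilenet_v2(weights="DEFAULT").eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Stage 1: pre-autograd capture with the new API.
captured = export_for_training(model, sample_inputs).module()

# Symmetric post-training quantization via the PT2E flow (stand-in for
# the `quantize()` helper in executorch/examples).
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(captured, quantizer)
prepared(*sample_inputs)  # one calibration pass on sample data
quantized = convert_pt2e(prepared)

# Stage 2: standard torch.export, then lower to the XNNPACK delegate.
edge = to_edge_transform_and_lower(
    torch.export.export(quantized, sample_inputs),
    partitioner=[XnnpackPartitioner()],
)
with open("mv2_xnnpack_q8.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```

From here, as the diff notes, the procedure matches the non-quantized lowering path: the lowered program is serialized to a `.pte` file and run with the ExecuTorch runtime.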