
Commit 523b41e

tarun292 authored and facebook-github-bot committed
Update executorch documentation to use export_for_training (#5219)
Summary: Switching over all the documentation and example references of capture_pre_autograd_graph in ExecuTorch to export_for_training.

Pull Request resolved: #5219
Reviewed By: JacobSzwejbka
Differential Revision: D62427911
fbshipit-source-id: 38f24cbd587835970886cae330b19944885de7dd
1 parent c032194 commit 523b41e
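
This commit applies one mechanical substitution everywhere. A minimal before/after sketch of the migration pattern (the `TinyModel` module is a placeholder for illustration, not part of the commit):

```python
import torch
from torch.export import export_for_training

class TinyModel(torch.nn.Module):  # placeholder model
    def forward(self, x):
        return torch.relu(x)

model = TinyModel().eval()
example_inputs = (torch.randn(1, 4),)

# Before (private API, returned a GraphModule directly):
#   from torch._export import capture_pre_autograd_graph
#   graph_module = capture_pre_autograd_graph(model, example_inputs)

# After (public API, returns an ExportedProgram; .module() recovers
# the pre-autograd GraphModule):
graph_module = export_for_training(model, example_inputs).module()
```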

File tree

8 files changed (+24, -27 lines)


docs/source/native-delegates-executorch-xnnpack-delegate.md

Lines changed: 2 additions & 2 deletions

@@ -110,9 +110,9 @@ quantizer.set_global(quantization_config)
 ### Quantizing your model with the XNNPACKQuantizer
 After configuring our quantizer, we are now ready to quantize our model
 ```python
-from torch._export import capture_pre_autograd_graph
+from torch.export import export_for_training
 
-exported_model = capture_pre_autograd_graph(model_to_quantize, example_inputs)
+exported_model = export_for_training(model_to_quantize, example_inputs).module()
 prepared_model = prepare_pt2e(exported_model, quantizer)
 print(prepared_model.graph)
 ```
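
The new call returns an `ExportedProgram` rather than a `GraphModule`, hence the added `.module()`. For context, a minimal PT2E quantization sketch built around the new entry point (the tiny `Sequential` model and the calibration step are illustrative placeholders, not part of this commit):

```python
import torch
from torch.export import export_for_training
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

# Placeholder model and inputs for illustration.
model_to_quantize = torch.nn.Sequential(torch.nn.Linear(4, 4)).eval()
example_inputs = (torch.randn(1, 4),)

quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())

# Capture the pre-autograd graph, then insert observers.
exported_model = export_for_training(model_to_quantize, example_inputs).module()
prepared_model = prepare_pt2e(exported_model, quantizer)
prepared_model(*example_inputs)  # calibrate on representative inputs
quantized_model = convert_pt2e(prepared_model)
print(quantized_model.graph)
```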

docs/source/sdk-bundled-io.md

Lines changed: 4 additions & 5 deletions

@@ -96,8 +96,7 @@ from executorch.devtools.bundled_program.config import MethodTestCase, MethodTes
 from executorch.devtools.bundled_program.serialize import (
     serialize_from_bundled_program_to_flatbuffer,
 )
-from torch._export import capture_pre_autograd_graph
-from torch.export import export
+from torch.export import export, export_for_training
 
 
 # Step 1: ExecuTorch Program Export
@@ -131,7 +130,7 @@ capture_input = (
 
 # Export method's FX Graph.
 method_graph = export(
-    capture_pre_autograd_graph(model, capture_input),
+    export_for_training(model, capture_input).module(),
     capture_input,
 )
 
@@ -338,7 +337,7 @@ inputs = (torch.ones(2, 2, dtype=torch.float), )
 
 # Find each method of model needs to be traced my its name, export its FX Graph.
 method_graph = export(
-    capture_pre_autograd_graph(model, inputs),
+    export_for_training(model, inputs).module(),
     inputs,
 )
 
@@ -474,7 +473,7 @@ inputs = (torch.ones(2, 2, dtype=torch.float),)
 
 # Find each method of model needs to be traced my its name, export its FX Graph.
 method_graph = export(
-    capture_pre_autograd_graph(model, inputs),
+    export_for_training(model, inputs).module(),
     inputs,
 )
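
The bundled-program flow wraps the two-stage export in a single expression. A minimal sketch of the pattern these hunks converge on (the trivial `Net` module is a stand-in for the model in the docs):

```python
import torch
from torch.export import export, export_for_training

class Net(torch.nn.Module):  # stand-in for the documented model
    def forward(self, x):
        return x + 1

model = Net().eval()
inputs = (torch.ones(2, 2, dtype=torch.float),)

# Stage 1: capture the pre-autograd graph; stage 2: full torch.export.
method_graph = export(export_for_training(model, inputs).module(), inputs)
print(method_graph)
```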

docs/source/tutorial-xnnpack-delegate-lowering.md

Lines changed: 3 additions & 3 deletions

@@ -74,13 +74,13 @@ After lowering to the XNNPACK Program, we can then prepare it for executorch and
 The XNNPACK delegate can also execute symmetrically quantized models. To understand the quantization flow and learn how to quantize models, refer to [Custom Quantization](quantization-custom-quantization.md) note. For the sake of this tutorial, we will leverage the `quantize()` python helper function conveniently added to the `executorch/executorch/examples` folder.
 
 ```python
-from torch._export import capture_pre_autograd_graph
+from torch.export import export_for_training
 from executorch.exir import EdgeCompileConfig
 
 mobilenet_v2 = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
 sample_inputs = (torch.randn(1, 3, 224, 224), )
 
-mobilenet_v2 = capture_pre_autograd_graph(mobilenet_v2, sample_inputs) # 2-stage export for quantization path
+mobilenet_v2 = export_for_training(mobilenet_v2, sample_inputs).module() # 2-stage export for quantization path
 
 from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
 from torch.ao.quantization.quantizer.xnnpack_quantizer import (
@@ -107,7 +107,7 @@ def quantize(model, example_inputs):
 quantized_mobilenetv2 = quantize(mobilenet_v2, sample_inputs)
 ```
 
-Quantization requires a two stage export. First we use the `capture_pre_autograd_graph` API to capture the model before giving it to `quantize` utility function. After performing the quantization step, we can now leverage the XNNPACK delegate to lower the quantized exported model graph. From here, the procedure is the same as for the non-quantized model lowering to XNNPACK.
+Quantization requires a two stage export. First we use the `export_for_training` API to capture the model before giving it to `quantize` utility function. After performing the quantization step, we can now leverage the XNNPACK delegate to lower the quantized exported model graph. From here, the procedure is the same as for the non-quantized model lowering to XNNPACK.
 
 ```python
 # Continued from earlier...
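
The lowering step the tutorial continues with is untouched by this commit. For reference, a minimal sketch of lowering an exported model to XNNPACK (the small `Conv2d` model, the partitioner import path, and the output filename are assumptions for illustration):

```python
import torch
from torch.export import export
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge

# Placeholder model; in the tutorial this is the quantized MobileNetV2.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Export, convert to Edge dialect, delegate supported subgraphs to XNNPACK.
edge = to_edge(export(model, sample_inputs))
edge = edge.to_backend(XnnpackPartitioner())
exec_prog = edge.to_executorch()

with open("model_xnnpack.pte", "wb") as f:
    f.write(exec_prog.buffer)
```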

docs/source/tutorials_source/export-to-executorch-tutorial.py

Lines changed: 6 additions & 7 deletions

@@ -179,8 +179,8 @@ def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
 # -----------------------
 #
 # To quantize a model, we first need to capture the graph with
-# ``torch._export.capture_pre_autograd_graph``, perform quantization, and then
-# call ``torch.export``. ``torch._export.capture_pre_autograd_graph`` returns a
+# ``torch.export.export_for_training``, perform quantization, and then
+# call ``torch.export``. ``torch.export.export_for_training`` returns a
 # graph which contains ATen operators which are Autograd safe, meaning they are
 # safe for eager-mode training, which is needed for quantization. We will call
 # the graph at this level, the ``Pre-Autograd ATen Dialect`` graph.
@@ -193,10 +193,10 @@ def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
 # will annotate the nodes in the graph with information needed to quantize the
 # model properly for a specific backend.
 
-from torch._export import capture_pre_autograd_graph
+from torch.export import export_for_training
 
 example_args = (torch.randn(1, 3, 256, 256),)
-pre_autograd_aten_dialect = capture_pre_autograd_graph(SimpleConv(), example_args)
+pre_autograd_aten_dialect = export_for_training(SimpleConv(), example_args).module()
 print("Pre-Autograd ATen Dialect Graph")
 print(pre_autograd_aten_dialect)
 
@@ -562,8 +562,7 @@ def forward(self, a, x, b):
 # Here is an example for an entire end-to-end workflow:
 
 import torch
-from torch._export import capture_pre_autograd_graph
-from torch.export import export, ExportedProgram
+from torch.export import export, export_for_training, ExportedProgram
 
 
 class M(torch.nn.Module):
@@ -577,7 +576,7 @@ def forward(self, x):
 
 
 example_args = (torch.randn(3, 4),)
-pre_autograd_aten_dialect = capture_pre_autograd_graph(M(), example_args)
+pre_autograd_aten_dialect = export_for_training(M(), example_args).module()
 # Optionally do quantization:
 # pre_autograd_aten_dialect = convert_pt2e(prepare_pt2e(pre_autograd_aten_dialect, CustomBackendQuantizer))
 aten_dialect: ExportedProgram = export(pre_autograd_aten_dialect, example_args)
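
Completing that last hunk's workflow, a self-contained sketch of the full pipeline down to a .pte file (the body of `M` and the output filename are placeholders; the real definition lives in the tutorial):

```python
import torch
from executorch.exir import to_edge
from torch.export import export, export_for_training, ExportedProgram

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)  # placeholder body

    def forward(self, x):
        return self.linear(x)

example_args = (torch.randn(3, 4),)

# Pre-Autograd ATen Dialect graph (quantization could happen here).
pre_autograd_aten_dialect = export_for_training(M(), example_args).module()
aten_dialect: ExportedProgram = export(pre_autograd_aten_dialect, example_args)

# Lower to Edge dialect and serialize an ExecuTorch program.
edge_program = to_edge(aten_dialect)
executorch_program = edge_program.to_executorch()
with open("m.pte", "wb") as f:
    f.write(executorch_program.buffer)
```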

examples/portable/scripts/export_and_delegate.py

Lines changed: 4 additions & 4 deletions

@@ -61,7 +61,7 @@ def export_composite_module_with_lower_graph():
     m_compile_spec = m.get_compile_spec()
 
     # pre-autograd export. eventually this will become torch.export
-    m = torch._export.capture_pre_autograd_graph(m, m_inputs)
+    m = torch.export.export_for_training(m, m_inputs).module()
     edge = export_to_edge(m, m_inputs)
     logging.info(f"Exported graph:\n{edge.exported_program().graph}")
 
@@ -84,7 +84,7 @@ def forward(self, *args):
     m = CompositeModule()
     m = m.eval()
     # pre-autograd export. eventually this will become torch.export
-    m = torch._export.capture_pre_autograd_graph(m, m_inputs)
+    m = torch.export.export_for_training(m, m_inputs).module()
     composited_edge = export_to_edge(m, m_inputs)
 
     # The graph module is still runnerable
@@ -134,7 +134,7 @@ def get_example_inputs(self):
     m = Model()
     m_inputs = m.get_example_inputs()
     # pre-autograd export. eventually this will become torch.export
-    m = torch._export.capture_pre_autograd_graph(m, m_inputs)
+    m = torch.export.export_for_training(m, m_inputs).module()
     edge = export_to_edge(m, m_inputs)
     logging.info(f"Exported graph:\n{edge.exported_program().graph}")
 
@@ -171,7 +171,7 @@ def export_and_lower_the_whole_graph():
 
     m_inputs = m.get_example_inputs()
     # pre-autograd export. eventually this will become torch.export
-    m = torch._export.capture_pre_autograd_graph(m, m_inputs)
+    m = torch.export.export_for_training(m, m_inputs).module()
     edge = export_to_edge(m, m_inputs)
     logging.info(f"Exported graph:\n{edge.exported_program().graph}")
 
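All four hunks here feed the pre-autograd graph into `export_to_edge`. A condensed sketch of that shared pattern (the stand-in module and the `export_to_edge` import path are assumptions; in the script the helper is already in scope):

```python
import logging

import torch
# Assumed import path; in examples/portable/scripts/export_and_delegate.py
# the helper is already imported.
from executorch.extension.export_util.utils import export_to_edge

class CompositeModule(torch.nn.Module):  # stand-in module
    def forward(self, x):
        return torch.sin(x) + x

m = CompositeModule().eval()
m_inputs = (torch.randn(2, 2),)

# pre-autograd export. eventually this will become torch.export
m = torch.export.export_for_training(m, m_inputs).module()
edge = export_to_edge(m, m_inputs)
logging.info(f"Exported graph:\n{edge.exported_program().graph}")
```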
examples/xnnpack/aot_compiler.py

Lines changed: 1 addition & 1 deletion

@@ -81,7 +81,7 @@
 
 model = model.eval()
 # pre-autograd export. eventually this will become torch.export
-model = torch._export.capture_pre_autograd_graph(model, example_inputs)
+model = torch.export.export_for_training(model, example_inputs).module()
 
 if args.quantize:
     logging.info("Quantizing Model...")

examples/xnnpack/quantization/example.py

Lines changed: 2 additions & 2 deletions

@@ -60,7 +60,7 @@ def verify_xnnpack_quantizer_matching_fx_quant_model(model_name, model, example_
     m = model
 
     # 1. pytorch 2.0 export quantization flow (recommended/default flow)
-    m = torch._export.capture_pre_autograd_graph(m, copy.deepcopy(example_inputs))
+    m = torch.export.export_for_training(m, copy.deepcopy(example_inputs)).module()
     quantizer = XNNPACKQuantizer()
     quantization_config = get_symmetric_quantization_config(is_per_channel=True)
     quantizer.set_global(quantization_config)
@@ -177,7 +177,7 @@ def main() -> None:
 
     model = model.eval()
     # pre-autograd export. eventually this will become torch.export
-    model = torch._export.capture_pre_autograd_graph(model, example_inputs)
+    model = torch.export.export_for_training(model, example_inputs).module()
     start = time.perf_counter()
     quantized_model = quantize(model, example_inputs)
     end = time.perf_counter()

extension/export_util/utils.py

Lines changed: 2 additions & 3 deletions

@@ -14,8 +14,7 @@
 import torch
 from executorch.exir import EdgeProgramManager, ExecutorchProgramManager, to_edge
 from executorch.exir.tracer import Value
-from torch._export import capture_pre_autograd_graph
-from torch.export import export, ExportedProgram
+from torch.export import export, export_for_training, ExportedProgram
 
 
 _EDGE_COMPILE_CONFIG = exir.EdgeCompileConfig(
@@ -95,7 +94,7 @@ def export_to_exec_prog(
 ) -> ExecutorchProgramManager:
     m = model.eval()
     # pre-autograd export. eventually this will become torch.export
-    m = capture_pre_autograd_graph(m, example_inputs)
+    m = export_for_training(m, example_inputs).module()
 
     core_aten_ep = _to_core_aten(m, example_inputs, dynamic_shapes, strict=strict)
 
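Callers of this helper see no interface change; only the internal capture step moved to the public API. A usage sketch (the two-argument call assumes the remaining parameters of `export_to_exec_prog` keep their defaults; the model and filename are placeholders):

```python
import torch
from executorch.extension.export_util.utils import export_to_exec_prog

model = torch.nn.Linear(4, 4).eval()
example_inputs = (torch.randn(1, 4),)

# Internally this now runs export_for_training(...).module()
# before lowering to an ExecuTorch program.
prog = export_to_exec_prog(model, example_inputs)
with open("linear.pte", "wb") as f:
    f.write(prog.buffer)
```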