
Commit 38258af

mcremon-meta authored and facebook-github-bot committed
Make the quantized path the main compilation path (#7939)
Summary: This diff propagates the changes to the testing APIs to the regular (e.g. used in Bento or standalone) APIs. It introduces `quantize_and_export_to_executorch`, and makes `export_to_executorch` call it. Reviewed By: zonglinpeng Differential Revision: D67646596
1 parent 76fb844 commit 38258af

File tree: 1 file changed, +2 −2 lines


backends/cadence/aot/compiler.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -142,8 +142,8 @@ def quantize_pt2(
     Prepare, convert and fuse the model using the given quantizer.
     Returns a GraphModule with the quantized model.
     """
-    # Make the model inference mode by calling model.eval()
-    model.eval()
+    # # Make the model inference mode by calling model.eval()
+    # model.eval()
 
     # Instantiate the quantizer to CadenceQuantizer if not supplied
     if not quantizer:
```
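
Since `quantize_pt2` no longer calls `model.eval()` itself, a caller would now put the model in inference mode before quantizing. A minimal sketch of such a caller follows; the `(model, example_inputs)` positional signature, the import path, and the `TinyModel` module are assumptions for illustration.

```python
import torch

# Import path assumed from the file location backends/cadence/aot/compiler.py.
from executorch.backends.cadence.aot.compiler import quantize_pt2


class TinyModel(torch.nn.Module):
    """Hypothetical toy model used only to make the example self-contained."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return self.linear(x)


model = TinyModel()
example_inputs = (torch.randn(1, 8),)

# quantize_pt2 no longer switches the model to inference mode, so the
# caller does it explicitly before quantizing.
model.eval()
quantized_gm = quantize_pt2(model, example_inputs)
```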
