intermediate_source/torch_export_tutorial.py
6 additions & 6 deletions
@@ -365,7 +365,7 @@ def forward(
 # When guards involve symbols allocated for input dimensions, the program contains restrictions on what input shapes are valid;
 # i.e. the program's dynamic behavior. The symbolic shapes subsystem is the part responsible for taking in all the emitted guards
 # and producing a final program representation that adheres to all of these guards. Before we see this "final representation" in
-# an ExportedProgram, let's look at the guards emitted by the toy model we're tracing.
+# an ``ExportedProgram``, let's look at the guards emitted by the toy model we're tracing.
 #
 # Here, each forward input tensor is annotated with the symbol allocated at the start of tracing:
 
@@ -416,7 +416,7 @@ def forward(
 # so interesting currently, since this export call doesn't emit any guards related to symbol bounds and each base symbol has
 # a generic bound, but this will come up later.
 #
-# So far, because we've been exporting this toy model, this experience has been misrepresentative of how hard
+# So far, because we've been exporting this toy model, this experience has not been representative of how hard
 # it typically is to debug dynamic shapes guards & issues. In most cases it isn't obvious what guards are being emitted,
 # and which operations and parts of user code are responsible. For this toy model we pinpoint the exact lines, and the guards
 # are rather intuitive.
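Editor's note: one way to surface guard emission while tracing is to enable dynamic-shapes logging. This sketch assumes torch's standard logging facility (not part of this diff), with a hypothetical model:

import logging
import torch
from torch.export import Dim, export

# equivalent to running the script with TORCH_LOGS="+dynamic"
torch._logging.set_logs(dynamic=logging.DEBUG)

class M(torch.nn.Module):
    def forward(self, x):
        return x[: x.shape[0] // 2]  # emits guards involving x.shape[0]

# guard provenance is logged as each guard is emitted during tracing
export(M(), (torch.randn(8),), dynamic_shapes={"x": (Dim.AUTO,)})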
@@ -542,7 +542,7 @@ def forward(self, w, x, y, z):
 # and presenting what export believes is the overall dynamic behavior of the program. The drawback of this design appears once the user has stronger expectations or
 # beliefs about the dynamic behavior of these models - maybe there is a strong desire on dynamism and specializations on particular dimensions are to be avoided at
 # all costs, or maybe we just want to catch changes in dynamic behavior with changes to the original model code, or possibly underlying decompositions or meta-kernels.
-# These changes won't be detected and the ``export()`` call will most likely succeed, unless tests are in place that check the resulting ExportedProgram representation.
+# These changes won't be detected and the ``export()`` call will most likely succeed, unless tests are in place that check the resulting ``ExportedProgram`` representation.
 #
 # For such cases, our stance is to recommend the "traditional" way of specifying dynamic shapes, which longer-term users of export might be familiar with: named ``Dims``:
 
@@ -555,7 +555,7 @@ def forward(self, w, x, y, z):
 # This style of dynamic shapes allows the user to specify what symbols are allocated for input dimensions, min/max bounds on those symbols, and places restrictions on the
-# dynamic behavior of the ExportedProgram produced; ConstraintViolation errors will be raised if model tracing emits guards that conflict with the relations or static/dynamic
+# dynamic behavior of the ``ExportedProgram`` produced; ``ConstraintViolation`` errors will be raised if model tracing emits guards that conflict with the relations or static/dynamic
 # specifications given. For example, in the above specification, the following is asserted:
 # - ``x.shape[0]`` is to have range ``[4, 256]``, and related to ``y.shape[0]`` by ``y.shape[0] == 2 * x.shape[0]``.
 # - ``x.shape[1]`` is static.
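Editor's note: a sketch of a ``Dim``-based spec asserting exactly these relations (the variable names are illustrative; derived dims such as ``2 * dx`` are part of the ``Dim`` API):

from torch.export import Dim

dx = Dim("dx", min=4, max=256)

dynamic_shapes = {
    "x": {0: dx, 1: None},  # dim 0 ranges over [4, 256]; dim 1 is static
    "y": {0: 2 * dx},       # derived dim: y.shape[0] == 2 * x.shape[0]
}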
@@ -571,7 +571,7 @@ def forward(self, w, x, y, z):
 # One common issue with this specification style (before ``Dim.AUTO`` was introduced), is that the specification would often be mismatched with what was produced by model tracing.
-# That would lead to ConstraintViolation errors and export suggested fixes - see for example with this model & specification, where the model inherently requires equality between
+# That would lead to ``ConstraintViolation`` errors and export suggested fixes - see for example with this model & specification, where the model inherently requires equality between
 # dimensions 0 of ``x`` and ``y``, and requires dimension 1 to be static.
 
 class Foo(torch.nn.Module):
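Editor's note: the model body is cut off by the diff, so here is a runnable sketch of that failure mode with a hypothetical stand-in model (the addition forces the two dim-0s to be equal, and the fixed 4x4 weight forces dim 1 to be static):

import torch
from torch.export import Dim, export

class Foo(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))

    def forward(self, x, y):
        return (x + y) @ self.weight  # requires x.shape == y.shape and dim 1 == 4

# mismatched spec: independent symbols for the two dim-0s, and a dynamic dim 1
dx0, dy0, d1 = Dim("dx0"), Dim("dy0"), Dim("d1")
try:
    export(
        Foo(),
        (torch.randn(8, 4), torch.randn(8, 4)),
        dynamic_shapes={"x": {0: dx0, 1: d1}, "y": {0: dy0, 1: d1}},
    )
except Exception as e:
    print(e)  # ConstraintViolation details, with export's suggested fixes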
@@ -596,7 +596,7 @@ def forward(self, x, y):
 # - ``None`` is a good option for static behavior:
 #   - ``dynamic_shapes=None`` (default) exports with the entire model being static.
 #   - specifying ``None`` at an input-level exports with all tensor dimensions static, and alternatively is also required for non-tensor inputs.
-#   - specfiying ``None`` at a dimension-level specializes that dimension, though this is deprecated in favor of ``Dim.STATIC``.
+#   - specifying ``None`` at a dimension-level specializes that dimension, though this is deprecated in favor of ``Dim.STATIC``.
 # - specifying per-dimension integer values also produces static behavior, and will additionally check that the provided sample input matches the specification.
 #
 # These options are combined in the inputs & dynamic shapes spec below:
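Editor's note: the tutorial's actual combined spec falls outside this diff. For illustration only, a hypothetical spec mixing these static options (input names and sizes are assumptions):

from torch.export import Dim

dynamic_shapes = {
    "x": {0: Dim.AUTO, 1: Dim.STATIC},  # infer dim 0's dynamism, pin dim 1
    "y": {0: Dim.AUTO, 1: 32},          # an int also checks the sample input's size
    "flag": None,                       # required form for a non-tensor input
}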