Commit d9d09e0

typos in pytorch frontend documentation

1 parent: 05f8a45

1 file changed: +3 −3 lines

docs/frontend/pytorch.rst

Lines changed: 3 additions & 3 deletions
@@ -2,13 +2,13 @@
 PyTorch and Brevitas
 ====================

-PyTorch frontend in ``hls4ml`` is implemented by parsing the symbolic trace of the ``torch.fx`` framework. This ensures proper execution graph is captured. Therefore, only models that can be traced with the FX framework can be parsed by ``hls4ml``.
+The PyTorch frontend in ``hls4ml`` is implemented by parsing the symbolic trace of the ``torch.fx`` framework. This ensures the proper execution graph is captured. Therefore, only models that can be traced with the FX framework can be parsed by ``hls4ml``.

-Provided the underlying opertion is supported in ``hls4ml``, we generally aim to support the use of both ``torch.nn`` classes and ``torch.nn.functional`` functions in the construction of PyTorch models. Generally, the use of classes is more thoroughly
+Provided the underlying operation is supported in ``hls4ml``, we generally aim to support the use of both ``torch.nn`` classes and ``torch.nn.functional`` functions in the construction of PyTorch models. Generally, the use of classes is more thoroughly
 tested. Please reach out if you experience any issues with either case.

 The PyTorch/Brevitas parser is under heavy development and doesn't yet have the same feature set of the Keras parsers. Feel free to reach out to developers if you find a missing feature that is present in Keras parser and would like it implemented.
-The direct ingestion of models quantized from brevitas is not yet support. Exporting brevitas models in the ONNX format (see `here <https://xilinx.github.io/brevitas/tutorials/onnx_export.html>`_) and reading those with the ``hls4ml`` QONNX frontend
+The direct ingestion of models quantized from brevitas is not yet supported. Exporting brevitas models in the ONNX format (see `here <https://xilinx.github.io/brevitas/tutorials/onnx_export.html>`_) and reading those with the ``hls4ml`` QONNX frontend
 might be possible, but is untested.

 For multi-dimensional tensors, ``hls4ml`` follows the channels-last convention adopted by Keras, whereas PyTorch uses channels-first. By default, ``hls4ml`` will automaticlly transpose any tensors associated with weights and biases of the internal layers
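The channels-last convention mentioned in the last context line can be illustrated concretely. The sketch below uses plain Python (no PyTorch or hls4ml dependency) to show the kind of axis reordering involved: a tensor laid out channels-first as ``(C, H, W)``, the PyTorch convention, is transposed to channels-last ``(H, W, C)``, the Keras convention that ``hls4ml`` follows. The helper name is hypothetical, not an ``hls4ml`` API.

```python
def channels_first_to_last(tensor):
    """Transpose a nested list of shape (C, H, W) to (H, W, C).

    Illustrative only: hls4ml performs the equivalent reordering
    internally on weight/bias tensors when parsing PyTorch models.
    """
    C = len(tensor)
    H = len(tensor[0])
    W = len(tensor[0][0])
    return [[[tensor[c][h][w] for c in range(C)]  # innermost axis: channels
             for w in range(W)]
            for h in range(H)]

# Example: a 2-channel 2x3 "image" in PyTorch's channels-first layout
t = [
    [[1, 2, 3], [4, 5, 6]],       # channel 0
    [[7, 8, 9], [10, 11, 12]],    # channel 1
]
out = channels_first_to_last(t)
# out has shape (H=2, W=3, C=2); the pixel at (h=0, w=0) now holds
# both channel values together: out[0][0] == [1, 7]
```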
