Commit b4d2ba9

More fixes to docs, fix broken links and more typos (#15008)
Fix broken links in the docs and typos. Co-authored-by: Abhinayk <[email protected]>
1 parent 692e4b4 commit b4d2ba9

22 files changed, +38 -38 lines changed

docs/source/api-section.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ In this section, find complete API documentation for ExecuTorch's export, runtim
 - {doc}`executorch-runtime-api-reference` — ExecuTorch Runtime API Reference
 - {doc}`runtime-python-api-reference` — Runtime Python API Reference
 - {doc}`api-life-cycle` — API Life Cycle
-- [Android doc →](https://pytorch.org/executorch/main/javadoc/)** — Android API Documentation
+- [Android doc →](https://pytorch.org/executorch/main/javadoc/) — Android API Documentation
 - {doc}`extension-module` — Extension Module
 - {doc}`extension-tensor` — Extension Tensor
 - {doc}`running-a-model-cpp-tutorial` — Detailed C++ Runtime APIs Tutorial
```

docs/source/backends-arm-ethos-u.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,7 +1,7 @@
 # Arm&reg; Ethos&trade;-U NPU Backend
 
 The Arm&reg; Ethos&trade;-U backend targets Edge/IoT-type AI use-cases by enabling optimal execution of quantized models on
-[Arm&reg; Ethos&trade;-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55), [Arm&reg; Ethos&trade;-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u65), and
+[Arm&reg; Ethos&trade;-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55), [Arm&reg; Ethos&trade;-U65 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u65), and
 [Arm&reg; Ethos&trade;-U85 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85), leveraging [TOSA](https://www.mlplatform.org/tosa/) and the
 [ethos-u-vela](https://pypi.org/project/ethos-u-vela/) graph compiler. This document is a technical reference for using the Ethos-U backend, for a top level view with code examples
 please refer to the [Arm Ethos-U Backend Tutorial](https://docs.pytorch.org/executorch/stable/tutorial-arm-ethos-u.html).
@@ -283,4 +283,4 @@ full network is converted to use channels last. A word of caution must be given
 unsupported ops being inserted into the graph, and it is currently not widely tested, so the feature must so far be viewed as experimental.
 
 ## See Also
-- [Arm Ethos-U Backend Tutorial](tutorial-arm.md)
+- [Arm Ethos-U Backend Tutorial](tutorial-arm-ethos-u.md)
```

docs/source/backends-coreml.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -187,7 +187,7 @@ To quantize a PyTorch model for the Core ML backend, use the `CoreMLQuantizer`.
 Quantization with the Core ML backend requires exporting the model for iOS 17 or later.
 To perform 8-bit quantization with the PT2E flow, follow these steps:
 
-1) Create a [`coremltools.optimize.torch.quantization.LinearQuantizerConfig`](https://apple.github.io/coremltools/source/coremltools.optimize.torch.quantization.html#coremltools.optimize.torch.quantization.LinearQuantizerConfig) and use to to create an instance of a `CoreMLQuantizer`.
+1) Create a [`coremltools.optimize.torch.quantization.LinearQuantizerConfig`](https://apple.github.io/coremltools/source/coremltools.optimize.torch.quantization.html#coremltools.optimize.torch.quantization.LinearQuantizerConfig) and use it to create an instance of a `CoreMLQuantizer`.
 2) Use `torch.export.export` to export a graph module that will be prepared for quantization.
 3) Call `prepare_pt2e` to prepare the model for quantization.
 4) Run the prepared model with representative samples to calibrate the quantizated tensor activation ranges.
@@ -386,4 +386,4 @@ If you're using Python 3.13, try reducing your python version to Python 3.12. c
 ### At runtime
 1. [ETCoreMLModelCompiler.mm:55] [Core ML] Failed to compile model, error = Error Domain=com.apple.mlassetio Code=1 "Failed to parse the model specification. Error: Unable to parse ML Program: at unknown location: Unknown opset 'CoreML7'." UserInfo={NSLocalizedDescription=Failed to par$
 
-This means the model requires the the Core ML opset 'CoreML7', which requires running the model on iOS >= 17 or macOS >= 14.
+This means the model requires the Core ML opset 'CoreML7', which requires running the model on iOS >= 17 or macOS >= 14.
```
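For context, the four PT2E steps listed in the first hunk chain together roughly as below. This is a minimal sketch, assuming the `CoreMLQuantizer` import path from the ExecuTorch Core ML backend and the `prepare_pt2e`/`convert_pt2e` helpers from `torch.ao.quantization.quantize_pt2e` (both paths have moved between releases); it is not an excerpt from the doc being edited.

```python
import torch
from coremltools.optimize.torch.quantization import (
    LinearQuantizerConfig,
    QuantizationScheme,
)
from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer  # import path assumed
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
sample_inputs = (torch.randn(1, 16),)

# Step 1: build a quantizer from a LinearQuantizerConfig.
config = LinearQuantizerConfig.from_dict(
    {"global_config": {"quantization_scheme": QuantizationScheme.symmetric}}
)
quantizer = CoreMLQuantizer(config)

# Step 2: export a graph module that will be prepared for quantization.
graph_module = torch.export.export(model, sample_inputs).module()

# Step 3: insert observers.
prepared = prepare_pt2e(graph_module, quantizer)

# Step 4: calibrate activation ranges on representative samples, then convert.
prepared(*sample_inputs)
quantized = convert_pt2e(prepared)
```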

docs/source/backends-nxp.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -23,7 +23,7 @@ If you want to test the runtime, you'll also need:
 - Hardware with NXP's [i.MXRT700](https://www.nxp.com/products/i.MX-RT700) chip or a testing board like MIMXRT700-AVK
 - [MCUXpresso IDE](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE) or [MCUXpresso Visual Studio Code extension](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-for-visual-studio-code:MCUXPRESSO-VSC)
 
-## Using NXP backend 
+## Using NXP backend
 
 To test converting a neural network model for inference on NXP eIQ Neutron Backend, you can use our example script:
 
@@ -36,14 +36,14 @@ For a quick overview how to convert a custom PyTorch model, take a look at our [
 
 ### Partitioner API
 
-The partitioner is defined in `NeutronPartitioner` in `backends/nxp/neutron_partitioner.py`. It has the following 
+The partitioner is defined in `NeutronPartitioner` in `backends/nxp/neutron_partitioner.py`. It has the following
 arguments:
 * `compile_spec` - list of key-value pairs defining compilation. E.g. for specifying platform (i.MXRT700) and Neutron Converter flavor.
 * `custom_delegation_options` - custom options for specifying node delegation.
 
 ### Quantization
 
-The quantization for Neutron Backend is defined in `NeutronQuantizer` in `backends/nxp/quantizer/neutron_quantizer.py`. 
+The quantization for Neutron Backend is defined in `NeutronQuantizer` in `backends/nxp/quantizer/neutron_quantizer.py`.
 The quantization follows PT2E workflow, INT8 quantization is supported. Operators are quantized statically, activations
 follow affine and weights symmetric per-tensor quantization scheme.
 
```
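As a rough illustration of the `NeutronPartitioner` arguments described in this hunk, a lowering call might look like the sketch below. The `CompileSpec` key names and values are hypothetical; check `backends/nxp/neutron_partitioner.py` for the real ones.

```python
import torch
from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner
from executorch.exir import to_edge_transform_and_lower
from executorch.exir.backend.compile_spec_schema import CompileSpec

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 8),)

# compile_spec is a list of key-value pairs; these keys are illustrative only.
compile_spec = [
    CompileSpec("target", b"imxrt700"),
    CompileSpec("neutron_converter_flavor", b"SDK_25_03"),
]

edge = to_edge_transform_and_lower(
    torch.export.export(model, example_inputs),
    partitioner=[NeutronPartitioner(compile_spec)],
)
```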

docs/source/backends-qualcomm.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -290,7 +290,7 @@ Please refer to `$EXECUTORCH_ROOT/examples/qualcomm/scripts/` and `$EXECUTORCH_R
 
 ### Step-by-Step Implementation Guide
 
-Please reference [the simple example](https://github.com/pytorch/executorch/blob/main/examples/qualcomm/scripts/export_example.py) and [more compilated examples](https://github.com/pytorch/executorch/tree/main/examples/qualcomm/scripts) for reference
+Please reference [the simple example](https://github.com/pytorch/executorch/blob/main/examples/qualcomm/scripts/export_example.py) and [more complicated examples](https://github.com/pytorch/executorch/tree/main/examples/qualcomm/scripts) for reference
 #### Step 1: Prepare Your Model
 ```python
 import torch
````
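The hunk cuts off at the opening of Step 1's code block, which in the guide is plain eager-model preparation. A generic stand-in sketch (the toy model is illustrative, not the example from export_example.py):

```python
import torch

# A toy eager-mode model standing in for "your model".
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 8),)
exported_program = torch.export.export(model, example_inputs)
```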

docs/source/build-run-openvino.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -61,7 +61,7 @@ For more information about OpenVINO build, refer to the [OpenVINO Build Instruct
 
 Follow the steps below to setup your build environment:
 
-1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](getting-started-setup.md#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
+1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](using-executorch-building-from-source.md#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
 
 2. **Setup OpenVINO Backend Environment**
 - Install the dependent libs. Ensure that you are inside `executorch/backends/openvino/` directory
```

docs/source/bundled-io.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -194,7 +194,7 @@ regenerate_bundled_program = deserialize_from_flatbuffer_to_bundled_program(seri
 ```
 
 ## Runtime Stage
-This stage mainly focuses on executing the model with the bundled inputs and and comparing the model's output with the bundled expected output. We provide multiple APIs to handle the key parts of it.
+This stage mainly focuses on executing the model with the bundled inputs and comparing the model's output with the bundled expected output. We provide multiple APIs to handle the key parts of it.
 
 
 ### Get ExecuTorch Program Pointer from `BundledProgram` Buffer
````
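The `deserialize_from_flatbuffer_to_bundled_program` call visible in the hunk header is the tail of that doc's serialization example; the round trip looks roughly like this sketch, assuming the current `executorch.devtools.bundled_program.serialize` module path (it has moved across releases) and a `bundled_program` built earlier in the doc's emit stage:

```python
from executorch.devtools.bundled_program.serialize import (
    deserialize_from_flatbuffer_to_bundled_program,
    serialize_from_bundled_program_to_flatbuffer,
)

# `bundled_program` is assumed to have been constructed earlier from a
# model plus bundled test cases, as in the doc's emit stage.
serialized_bundled_program = serialize_from_bundled_program_to_flatbuffer(bundled_program)
regenerate_bundled_program = deserialize_from_flatbuffer_to_bundled_program(
    serialized_bundled_program
)
```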

docs/source/compiler-delegate-and-partitioner.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -37,7 +37,7 @@ The diagram looks like following
 There are mainly two Ahead-of-Time entry point for backend to implement: `partition` and `preprocess`.
 
 `partitioner` is an algorithm implemented by the backend to tag the nodes to be lowered to the backend. `to_backend` API will apply the partition algorithm and lower each subgraph, which consists of connected tagged nodes, to the targeted backend. Every subgraph
-will be sent to the `preprocess` part provided by the backend to compiled as a binary blob.
+will be sent to the `preprocess` part provided by the backend to be compiled as a binary blob.
 
 During partition, the `exported_program` is not allowed to mutate the program, and it's supposed to apply tag to each node. The
 `PartitionResult` includes both tagged exported program and the partition tags dictionary for `to_backend` to look up the tag and
@@ -194,8 +194,8 @@ qnnpack is one backend and xnnpack is another backend. We haven't open-sourced
 these two backends delegates yet, and this example won't run out of box. It can
 be used as a reference to see how it can be done.
 
-This option is easy to try becuase usually all backends will implement their own
-parititioner. However this option may get different results if we change the
+This option is easy to try because usually all backends will implement their own
+partitioner. However this option may get different results if we change the
 order of to_backend call. If we want to have a better control on the nodes, like
 which backend they should go, option 2 is better.
 
```
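The tag-then-lower flow described in the first hunk can be pictured with a toy partitioner. A minimal sketch, assuming the `Partitioner`, `PartitionResult`, and `DelegationSpec` classes from `executorch.exir.backend.partitioner` and a hypothetical backend id:

```python
from executorch.exir.backend.partitioner import (
    DelegationSpec,
    Partitioner,
    PartitionResult,
)

class ToyPartitioner(Partitioner):
    def partition(self, exported_program) -> PartitionResult:
        # Tag nodes without mutating the program; connected nodes sharing a
        # tag become one subgraph that `to_backend` sends to `preprocess`.
        tag = "tag0"
        for node in exported_program.graph.nodes:
            if node.op == "call_function":  # toy criterion: claim every op
                node.meta["delegation_tag"] = tag
        partition_tags = {tag: DelegationSpec("ToyBackend", [])}
        # Return the tagged program plus the tag -> backend lookup table.
        return PartitionResult(
            tagged_exported_program=exported_program,
            partition_tags=partition_tags,
        )
```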

docs/source/getting-started-architecture.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -4,7 +4,7 @@ This page describes the technical architecture of ExecuTorch and its individual
 
 **Context**
 
-In order to target on-device AI with diverse hardware, critical power requirements, and realtime processing needs, a single monolithic solution is not practical. Instead, a modular, layered, and extendable architecture is desired. ExecuTorch defines a streamlined workflow to prepare (export, transformation, and compilation) and execute a PyTorch program, with opinionated out-of-the-box default components and well-defined entry points for customizations. This architecture greatly improves portability, allowing engineers to use a performant lightweight, cross-platform runtime that easily integrates into different devices and platforms.
+In order to target on-device AI with diverse hardware, critical power requirements, and real-time processing needs, a single monolithic solution is not practical. Instead, a modular, layered, and extensible architecture is desired. ExecuTorch defines a streamlined workflow to prepare (export, transformation, and compilation) and execute a PyTorch program, with opinionated out-of-the-box default components and well-defined entry points for customizations. This architecture greatly improves portability, allowing engineers to use a performant lightweight, cross-platform runtime that easily integrates into different devices and platforms.
 
 
 ## Overview
```
docs/source/getting-started.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -68,7 +68,7 @@ with open("model.pte", "wb") as f:
 
 If the model requires varying input sizes, you will need to specify the varying dimensions and bounds as part of the `export` call. See [Model Export and Lowering](using-executorch-export.md) for more information.
 
-The hardware backend to target is controlled by the partitioner parameter to to\_edge\_transform\_and\_lower. In this example, the XnnpackPartitioner is used to target mobile CPUs. See the [backend-specific documentation](backends-overview.md) for information on how to use each backend.
+The hardware backend to target is controlled by the partitioner parameter to `to_edge_transform_and_lower`. In this example, the XnnpackPartitioner is used to target mobile CPUs. See the [backend-specific documentation](backends-overview.md) for information on how to use each backend.
 
 Quantization can also be done at this stage to reduce model size and runtime. Quantization is backend-specific. See the documentation for the target backend for a full description of supported quantization schemes.
 
@@ -226,5 +226,5 @@ ExecuTorch provides a high-degree of customizability to support diverse hardware
 - [Using ExecuTorch on Android](using-executorch-android.md) and [Using ExecuTorch on iOS](using-executorch-ios.md) for mobile runtime integration.
 - [Using ExecuTorch with C++](using-executorch-cpp.md) for embedded and mobile native development.
 - [Profiling and Debugging](using-executorch-troubleshooting.md) for developer tooling and debugging.
-- [API Reference](export-to-executorch-api-reference.md) for a full description of available APIs.
+- [API Reference](export-to-executorch-api-reference.rst) for a full description of available APIs.
 - [Examples](https://github.com/pytorch/executorch/tree/main/examples) for demo apps and example code.
```
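For context on the line changed in the first hunk, the surrounding export-and-lower flow in that guide looks roughly like the condensed sketch below (the toy model and output path are placeholders, not the guide's exact example):

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 4),)

# The partitioner passed to to_edge_transform_and_lower selects the hardware
# backend; XnnpackPartitioner targets mobile CPUs. Swapping in another
# backend's partitioner retargets the same exported program.
program = to_edge_transform_and_lower(
    torch.export.export(model, example_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()

with open("model.pte", "wb") as f:
    f.write(program.buffer)
```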
