2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -58,7 +58,7 @@ executorch
│ ├── <a href="exir/verification">verification</a> - IR verification.
├── <a href="extension">extension</a> - Extensions built on top of the runtime.
│ ├── <a href="extension/android">android</a> - ExecuTorch wrappers for Android apps. Please refer to the <a href="docs/source/using-executorch-android.md">Android documentation</a> and <a href="https://pytorch.org/executorch/main/javadoc/">Javadoc</a> for more information.
-│ ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/stable/apple-runtime.html">how to integrate into Apple platform</a> for more information.
+│ ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/main/using-executorch-ios.html">how to integrate into Apple platform</a> for more information.
│ ├── <a href="extension/aten_util">aten_util</a> - Converts to and from PyTorch ATen types.
│ ├── <a href="extension/data_loader">data_loader</a> - 1st party data loader implementations.
│ ├── <a href="extension/evalue_util">evalue_util</a> - Helpers for working with EValue objects.
2 changes: 1 addition & 1 deletion docs/source/build-run-openvino.md
@@ -61,7 +61,7 @@ For more information about OpenVINO build, refer to the [OpenVINO Build Instruct

Follow the steps below to set up your build environment:

-1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](https://pytorch.org/executorch/stable/getting-started-setup#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
+1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](getting-started-setup.md#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.

2. **Setup OpenVINO Backend Environment**
- Install the dependent libraries. Ensure that you are inside the `executorch/backends/openvino/` directory
10 changes: 5 additions & 5 deletions docs/source/memory-planning-inspection.md
@@ -1,9 +1,9 @@
# Memory Planning Inspection in ExecuTorch

-After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/main/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
+After the [Memory Planning](concepts.md#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](concepts.md#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.

## Usage
-User should add this code after they call [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
+Add this code after calling [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch); it will write the memory allocation information stored on the nodes to the file "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.

```python
from executorch.util.activation_memory_profiler import generate_memory_trace
@@ -13,18 +13,18 @@ generate_memory_trace(
enable_memory_offsets=True,
)
```
-* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
+* `prog` is an instance of [`ExecuTorchProgramManager`](export-to-executorch-api-reference.rst#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch).
* Set `enable_memory_offsets` to `True` to show the location of each tensor in the memory space.
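
For context, here is a minimal end-to-end sketch of producing the trace. The toy model and the first two keyword-argument names are illustrative assumptions; only `generate_memory_trace`, `to_executorch()`, `prog`, and `enable_memory_offsets` come from this page.

```python
import torch
from torch.export import export
from executorch.exir import to_edge
from executorch.util.activation_memory_profiler import generate_memory_trace

# Hypothetical toy model, used only to have something to export.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

# Standard lowering path; memory planning runs as part of to_executorch().
prog = to_edge(export(TinyModel(), (torch.randn(1, 16),))).to_executorch()

# Write the Chrome-trace JSON. The first two keyword names are assumptions
# about the profiler's signature, not taken from this page.
generate_memory_trace(
    executorch_program_manager=prog,
    chrome_trace_filename="memory_profile.json",
    enable_memory_offsets=True,
)
```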

## Chrome Trace
Open a Chrome browser tab and navigate to <chrome://tracing/>. Upload the generated `.json` to view.
Example of a [MobileNet V2](https://pytorch.org/vision/main/models/mobilenetv2.html) model:

-![Memory planning Chrome trace visualization](/_static/img/memory_planning_inspection.png)
+![Memory planning Chrome trace visualization](_static/img/memory_planning_inspection.png)

Note that, since we are repurposing the Chrome trace tool, the axes in this context may have different meanings compared to other Chrome trace graphs you may have encountered previously:
* The horizontal axis, despite being labeled in seconds (s), actually represents megabytes (MBs).
* The vertical axis has a 2-level hierarchy. The first level, "pid", represents memory space. For CPU, everything is allocated on one "space"; other backends may have multiple. In the second level, each row represents one time step. Since nodes are executed sequentially, each node represents one time step, so there are as many rows as there are nodes.
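
If you'd rather inspect the trace programmatically than in the viewer, it is plain JSON. A small sketch, assuming the standard Chrome trace event layout (the exact keys the profiler writes may differ):

```python
import json

# Load the trace written by generate_memory_trace(). Chrome traces are either
# a bare list of events or a dict with a "traceEvents" key; handle both.
with open("memory_profile.json") as f:
    data = json.load(f)
events = data.get("traceEvents", data) if isinstance(data, dict) else data

# Group events by "pid", which this tool uses for the memory space,
# and count the tensor entries recorded in each space.
by_space = {}
for event in events:
    by_space.setdefault(event.get("pid"), []).append(event.get("name"))

for space, names in by_space.items():
    print(f"memory space {space}: {len(names)} entries")
```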

## Further Reading
-* [Memory Planning](https://pytorch.org/executorch/main/compiler-memory-planning.html)
+* [Memory Planning](compiler-memory-planning.md)
2 changes: 1 addition & 1 deletion docs/source/new-contributor-guide.md
@@ -129,7 +129,7 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
git push # push updated local main to your GitHub fork
```

-6. [Build the project](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
+6. [Build the project](using-executorch-building-from-source.md) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).

Unfortunately, this step is too long to detail here. If you get stuck at any point, please feel free to ask for help on our [Discord server](https://discord.com/invite/Dh43CKSAdc) — we're always eager to help newcomers get onboarded.
20 changes: 10 additions & 10 deletions docs/source/using-executorch-android.md
@@ -2,7 +2,7 @@

For Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file.

-Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/main/using-executorch-building-from-source.html#cross-compilation).
+Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](using-executorch-building-from-source.md#cross-compilation).

## Installation

@@ -41,8 +41,8 @@ dependencies {
Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`.

Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio.
-<a href="https://pytorch.org/executorch/main/_static/img/android_studio.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
+<a href="_static/img/android_studio.mp4">
+<img src="_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
</a>

## Using AAR file directly
@@ -130,17 +130,17 @@ Set environment variable `EXECUTORCH_CMAKE_BUILD_TYPE` to `Release` or `Debug` b

#### Using MediaTek backend

-To use [MediaTek backend](https://pytorch.org/executorch/main/backends-mediatek.html),
+To use [MediaTek backend](backends-mediatek.md),
after installing and setting up the SDK, set `NEURON_BUFFER_ALLOCATOR_LIB` and `NEURON_USDK_ADAPTER_LIB` to the corresponding paths.

#### Using Qualcomm AI Engine Backend

-To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/main/backends-qualcomm.html#qualcomm-ai-engine-backend),
+To use [Qualcomm AI Engine Backend](backends-qualcomm.md#qualcomm-ai-engine-backend),
after installing and setting up the SDK, set `QNN_SDK_ROOT` to the corresponding path.

#### Using Vulkan Backend

-To use [Vulkan Backend](https://pytorch.org/executorch/main/backends-vulkan.html#vulkan-backend),
+To use [Vulkan Backend](backends-vulkan.md#vulkan-backend),
set `EXECUTORCH_BUILD_VULKAN` to `ON`.

## Android Backends
@@ -149,10 +149,10 @@ The following backends are available for Android:

| Backend | Type | Doc |
| ------- | -------- | --- |
-| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](./backends-xnnpack.md) |
-| [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](./backends-mediatek.md) |
-| [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](./backends-qualcomm.md) |
-| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](./backends-vulkan.md) |
+| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](backends-xnnpack.md) |
+| [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](backends-mediatek.md) |
+| [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](backends-qualcomm.md) |
+| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](backends-vulkan.md) |


## Runtime Integration
6 changes: 3 additions & 3 deletions docs/source/using-executorch-ios.md
@@ -35,8 +35,8 @@ Then select which ExecuTorch framework should link against which target.

Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model on iOS.

-<a href="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
+<a href="_static/img/swiftpm_xcode.mp4">
+<img src="_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
</a>

#### CLI
@@ -293,7 +293,7 @@ From existing memory buffers:

From `NSData` / `Data`:
- `init(data:shape:dataType:...)`: Creates a tensor using an `NSData` object, referencing its bytes without copying.

From scalar arrays:
- `init(_:shape:dataType:...)`: Creates a tensor from an array of `NSNumber` scalars. Convenience initializers exist to infer shape or data type.

Expand Up @@ -65,7 +65,7 @@ export ANDROID_ABIS=arm64-v8a
MTK currently supports exporting Llama 3.

### Set up Environment
-1. Follow the ExecuTorch set-up environment instructions found on the [Getting Started](https://pytorch.org/executorch/stable/getting-started-setup.html) page
+1. Follow the ExecuTorch environment setup instructions on the [Getting Started](https://pytorch.org/executorch/main/getting-started-setup.html) page
2. Set up the MTK AoT environment
```
// Ensure that you are inside executorch/examples/mediatek directory
2 changes: 1 addition & 1 deletion examples/demo-apps/apple_ios/LLaMA/README.md
@@ -56,7 +56,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by

Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.

-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details on integrating and running ExecuTorch on Apple platforms, check out this [guide](https://pytorch.org/executorch/main/using-executorch-ios.html).

### Xcode
* Open Xcode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLaMA`.
@@ -9,7 +9,7 @@ More specifically, it covers:
## Prerequisites
* [Xcode 15](https://developer.apple.com/xcode)
* [iOS 18 SDK](https://developer.apple.com/ios)
-* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment:
+* Set up your ExecuTorch repo and dev environment, if you haven’t done so already, by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/using-executorch-building-from-source) guide.

## Setup ExecuTorch
In this section, we first set up the ExecuTorch repo with Conda environment management. Make sure you have Conda available on your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are run on Linux (CentOS).
@@ -85,7 +85,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by

Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.

-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details on integrating and running ExecuTorch on Apple platforms, check out this [guide](https://pytorch.org/executorch/main/using-executorch-ios.html).

<p align="center">
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" style="width:600px">
@@ -163,7 +163,7 @@ If you cannot add the package into your app target (it's greyed out), it might h



-More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/apple-runtime.html#local-build).
+For more details on integrating and running ExecuTorch on Apple platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/using-executorch-ios.html#local-build).

### 3. Configure Build Schemes

Expand All @@ -175,7 +175,7 @@ Navigate to `Product --> Scheme --> Edit Scheme --> Info --> Build Configuration

We recommend that you only use the Debug build scheme during development, where you might need to access additional logs. The Debug build has logging overhead and will impact inference performance, while the Release build has compiler optimizations enabled and all logging overhead removed.

-For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details on integrating and running ExecuTorch on Apple platforms, or on building the package locally, check out this [guide](https://pytorch.org/executorch/main/using-executorch-ios.html).

### 4. Build and Run the project

Expand Down