From 2f5111295fce26b1fafd0c3734e916ee9b68d27e Mon Sep 17 00:00:00 2001
From: Anthony Shoumikhin
Date: Mon, 14 Apr 2025 14:58:42 -0700
Subject: [PATCH] Update doc links to relative markdown files

---
 CONTRIBUTING.md                              |  2 +-
 docs/source/build-run-openvino.md            |  2 +-
 docs/source/memory-planning-inspection.md    | 10 +++++-----
 docs/source/new-contributor-guide.md         |  2 +-
 docs/source/using-executorch-android.md      | 20 +++++++++----------
 docs/source/using-executorch-ios.md          |  6 +++---
 .../docs/delegates/mediatek_README.md        |  2 +-
 examples/demo-apps/apple_ios/LLaMA/README.md |  2 +-
 .../LLaMA/docs/delegates/mps_README.md       |  4 ++--
 .../LLaMA/docs/delegates/xnnpack_README.md   |  4 ++--
 10 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 64c6f1d249e..32681cdb08f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -58,7 +58,7 @@ executorch
 │ ├── verification - IR verification.
 ├── extension - Extensions built on top of the runtime.
 │ ├── android - ExecuTorch wrappers for Android apps. Please refer to the Android documentation and Javadoc for more information.
-│ ├── apple - ExecuTorch wrappers for iOS apps. Please refer to the iOS documentation and how to integrate into Apple platform for more information.
+│ ├── apple - ExecuTorch wrappers for iOS apps. Please refer to the iOS documentation and how to integrate into Apple platform for more information.
 │ ├── aten_util - Converts to and from PyTorch ATen types.
 │ ├── data_loader - 1st party data loader implementations.
 │ ├── evalue_util - Helpers for working with EValue objects.
diff --git a/docs/source/build-run-openvino.md b/docs/source/build-run-openvino.md
index f9ea5df0862..db3d221ffc7 100644
--- a/docs/source/build-run-openvino.md
+++ b/docs/source/build-run-openvino.md
@@ -61,7 +61,7 @@ For more information about OpenVINO build, refer to the [OpenVINO Build Instruct
 
 Follow the steps below to setup your build environment:
 
-1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](https://pytorch.org/executorch/stable/getting-started-setup#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
+1. **Setup ExecuTorch Environment**: Refer to the [Environment Setup](getting-started-setup.md#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.
 
 2. **Setup OpenVINO Backend Environment**
 - Install the dependent libs. Ensure that you are inside `executorch/backends/openvino/` directory
diff --git a/docs/source/memory-planning-inspection.md b/docs/source/memory-planning-inspection.md
index 47951a72038..9f7d6d6b688 100644
--- a/docs/source/memory-planning-inspection.md
+++ b/docs/source/memory-planning-inspection.md
@@ -1,9 +1,9 @@
 # Memory Planning Inspection in ExecuTorch
 
-After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/main/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
+After the [Memory Planning](concepts.md#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](concepts.md#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
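For a concrete picture of where this tool sits in the export flow, here is a minimal end-to-end sketch. The tiny module is a placeholder, and the positional arguments to `generate_memory_trace` mirror the usage documented below; treat the exact flow as an assumption rather than canonical API.

```python
# Illustrative sketch: export a model, run memory planning via
# to_executorch(), then dump a Chrome-compatible memory trace.
import torch
from executorch.exir import to_edge
from executorch.util.activation_memory_profiler import generate_memory_trace

class TinyModel(torch.nn.Module):  # placeholder model for illustration
    def forward(self, x):
        return torch.nn.functional.relu(x + 1)

exported = torch.export.export(TinyModel(), (torch.randn(1, 8),))
prog = to_edge(exported).to_executorch()  # memory planning runs here

generate_memory_trace(
    prog,                    # ExecutorchProgramManager from to_executorch()
    "memory_profile.json",   # open this file in the Chrome trace viewer
    enable_memory_offsets=True,
)
```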
 
 ## Usage
 
-User should add this code after they call [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
+Users should add this code after calling [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch), and it will write the memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
 
 ```python
 from executorch.util.activation_memory_profiler import generate_memory_trace
@@ -13,18 +13,18 @@ generate_memory_trace(
     enable_memory_offsets=True,
 )
 ```
 
-* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
+* `prog` is an instance of [`ExecuTorchProgramManager`](export-to-executorch-api-reference.rst#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](export-to-executorch-api-reference.rst#executorch.exir.EdgeProgramManager.to_executorch).
 * Set `enable_memory_offsets` to `True` to show the location of each tensor on the memory space.
 
 ## Chrome Trace
 Open a Chrome browser tab and navigate to `chrome://tracing`. Upload the generated `.json` to view.
 Example of a [MobileNet V2](https://pytorch.org/vision/main/models/mobilenetv2.html) model:
-![Memory planning Chrome trace visualization](/_static/img/memory_planning_inspection.png)
+![Memory planning Chrome trace visualization](_static/img/memory_planning_inspection.png)
 
 Note that, since we are repurposing the Chrome trace tool, the axes in this context may have different meanings compared to other Chrome trace graphs you may have encountered previously:
 * The horizontal axis, despite being labeled in seconds (s), actually represents megabytes (MBs).
 * The vertical axis has a 2-level hierarchy. The first level, "pid", represents memory space. For CPU, everything is allocated on one "space"; other backends may have multiple. In the second level, each row represents one time step. Since nodes will be executed sequentially, each node represents one time step, thus you will have as many nodes as there are rows.
 
 ## Further Reading
-* [Memory Planning](https://pytorch.org/executorch/main/compiler-memory-planning.html)
+* [Memory Planning](compiler-memory-planning.md)
diff --git a/docs/source/new-contributor-guide.md b/docs/source/new-contributor-guide.md
index 3b2eebfa5f5..cc33e81d508 100644
--- a/docs/source/new-contributor-guide.md
+++ b/docs/source/new-contributor-guide.md
@@ -129,7 +129,7 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
    git push # push updated local main to your GitHub fork
    ```
 
-6. [Build the project](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
+6. [Build the project](using-executorch-building-from-source.md) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
 
 Unfortunately, this step is too long to detail here. If you get stuck at any point, please feel free to ask for help on our [Discord server](https://discord.com/invite/Dh43CKSAdc) — we're always eager to help newcomers get onboarded.
diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md
index 2b0d04da6c7..207da320ba9 100644
--- a/docs/source/using-executorch-android.md
+++ b/docs/source/using-executorch-android.md
@@ -2,7 +2,7 @@
 
 To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file.
 
-Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/main/using-executorch-building-from-source.html#cross-compilation).
+Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](using-executorch-building-from-source.md#cross-compilation).
 
 ## Installation
 
@@ -41,8 +41,8 @@ dependencies {
 Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`.
 
 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio.
-
-  Integrating and Running ExecuTorch on Android
+
+  Integrating and Running ExecuTorch on Android
 
 ## Using AAR file directly
 
@@ -130,17 +130,17 @@ Set environment variable `EXECUTORCH_CMAKE_BUILD_TYPE` to `Release` or `Debug` b
 
 #### Using MediaTek backend
 
-To use [MediaTek backend](https://pytorch.org/executorch/main/backends-mediatek.html),
+To use [MediaTek backend](backends-mediatek.md),
 after installing and setting up the SDK, set `NEURON_BUFFER_ALLOCATOR_LIB` and `NEURON_USDK_ADAPTER_LIB` to the corresponding path.
 
 #### Using Qualcomm AI Engine Backend
 
-To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/main/backends-qualcomm.html#qualcomm-ai-engine-backend),
+To use [Qualcomm AI Engine Backend](backends-qualcomm.md#qualcomm-ai-engine-backend),
 after installing and setting up the SDK, set `QNN_SDK_ROOT` to the corresponding path.
 
 #### Using Vulkan Backend
 
-To use [Vulkan Backend](https://pytorch.org/executorch/main/backends-vulkan.html#vulkan-backend),
+To use [Vulkan Backend](backends-vulkan.md#vulkan-backend),
 set `EXECUTORCH_BUILD_VULKAN` to `ON`.
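As context for the backend build flags above: the `.pte` program the AAR loads must be lowered to the matching backend at export time. Below is a minimal Python sketch using the XNNPACK partitioner; the import paths and placeholder model are assumptions based on the ExecuTorch repository layout, not part of this patch.

```python
# Illustrative sketch: lower a placeholder model to XNNPACK so the
# resulting .pte can run through the Android AAR's CPU delegate.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
exported = torch.export.export(model, (torch.randn(1, 8),))

et_program = to_edge_transform_and_lower(
    exported,
    partitioner=[XnnpackPartitioner()],  # delegate supported ops to XNNPACK
).to_executorch()

with open("model.pte", "wb") as f:  # ship this file with the Android app
    f.write(et_program.buffer)
```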
 
 ## Android Backends
 
 The following backends are available for Android:
 
 | Backend | Type | Doc |
 | ------- | -------- | --- |
-| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](./backends-xnnpack.md) |
-| [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](./backends-mediatek.md) |
-| [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](./backends-qualcomm.md) |
-| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](./backends-vulkan.md) |
+| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](backends-xnnpack.md) |
+| [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](backends-mediatek.md) |
+| [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](backends-qualcomm.md) |
+| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](backends-vulkan.md) |
 
 ## Runtime Integration
diff --git a/docs/source/using-executorch-ios.md b/docs/source/using-executorch-ios.md
index 08c862341b5..61b260f4a00 100644
--- a/docs/source/using-executorch-ios.md
+++ b/docs/source/using-executorch-ios.md
@@ -35,8 +35,8 @@ Then select which ExecuTorch framework should link against which target.
 
 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model on iOS.
-
-  Integrating and Running ExecuTorch on Apple Platforms
+
+  Integrating and Running ExecuTorch on Apple Platforms
 
 #### CLI
 
@@ -293,7 +293,7 @@ From existing memory buffers:
 
 From `NSData` / `Data`:
 - `init(data:shape:dataType:...)`: Creates a tensor using an `NSData` object, referencing its bytes without copying.
- 
+
 From scalar arrays:
 - `init(_:shape:dataType:...)`: Creates a tensor from an array of `NSNumber` scalars. Convenience initializers exist to infer shape or data type.
diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md
index abd3c5f31b9..2ad87df0653 100644
--- a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md
+++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md
@@ -65,7 +65,7 @@ export ANDROID_ABIS=arm64-v8a
 
 MTK currently supports Llama 3 exporting.
 
 ### Set up Environment
-1. Follow the ExecuTorch set-up environment instructions found on the [Getting Started](https://pytorch.org/executorch/stable/getting-started-setup.html) page
+1. Follow the ExecuTorch set-up environment instructions found on the [Getting Started](https://pytorch.org/executorch/main/getting-started-setup.html) page
 2. Set-up MTK AoT environment
 ```
 // Ensure that you are inside executorch/examples/mediatek directory
diff --git a/examples/demo-apps/apple_ios/LLaMA/README.md b/examples/demo-apps/apple_ios/LLaMA/README.md
index 5ac8c80ca78..e44b4502cac 100644
--- a/examples/demo-apps/apple_ios/LLaMA/README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/README.md
@@ -56,7 +56,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by
 
 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details on integrating and running ExecuTorch on Apple Platforms, check out this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
 
 ### XCode
 * Open XCode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLama`.
diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md
index bffe4465eee..8601774f0e8 100644
--- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md
@@ -9,7 +9,7 @@ More specifically, it covers:
 ## Prerequisites
 * [Xcode 15](https://developer.apple.com/xcode)
 * [iOS 18 SDK](https://developer.apple.com/ios)
-* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment:
+* Set up your ExecuTorch repo and dev environment if you haven’t done so already by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/using-executorch-building-from-source) guide:
 
 ## Setup ExecuTorch
 In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS).
 
@@ -85,7 +85,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by
 
 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details on integrating and running ExecuTorch on Apple Platforms, check out this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).

 iOS LLaMA App Swift PM
diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
index ffcb9f894b7..7e5090410a3 100644
--- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md
@@ -163,7 +163,7 @@ If you cannot add the package into your app target (it's greyed out), it might h
 
- More details on integrating and Running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/apple-runtime.html#local-build).
+ For more details on integrating and running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/using-executorch-ios.html#local-build).
 
 ### 3. Configure Build Schemes
 
@@ -175,7 +175,7 @@ Navigate to `Product --> Scheme --> Edit Scheme --> Info --> Build Configuration
 
 We recommend that you only use the Debug build scheme during development, where you might need to access additional logs. Debug build has logging overhead and will impact inferencing performance, while release build has compiler optimizations enabled and all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms or building the package locally, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details on integrating and running ExecuTorch on Apple Platforms or building the package locally, check out this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
 
 ### 4. Build and Run the project
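Since this patch exists to replace absolute doc URLs with relative markdown links, a small helper script can confirm none were missed. The sketch below is illustrative and not part of the patch; the URL prefix it searches for is an assumption based on the links removed above.

```python
# Illustrative helper (not part of the patch): list markdown files that
# still contain absolute links to the hosted ExecuTorch docs.
import pathlib
import re

# Assumed prefix, based on the URLs removed in the hunks above.
ABSOLUTE_DOC_LINK = re.compile(r"https://pytorch\.org/executorch/(?:main|stable)/\S+")

for path in pathlib.Path(".").rglob("*.md"):
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        for match in ABSOLUTE_DOC_LINK.finditer(line):
            print(f"{path}:{lineno}: {match.group(0)}")
```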