Update build from source and getting started docs (#15311)
### Summary
Update Getting Started and Build from Source Docs:
* Integrate Windows steps into the main flow with minor Windows-specific
callouts.
* Clarify top-level flow for building from source - add a table by use
case.
* Clarify building ET as a submodule vs standalone build.
* Re-order, re-word, and clean up the content related to building from
source.
* Add info on NDK build for Android.
Tracked in #14791 and
#14759.
cc @mergennachin @byjlw
### `docs/source/getting-started.md` (+20 −11)
```diff
@@ -10,9 +10,9 @@ The following are required to install the ExecuTorch host libraries, needed to e
 - Python 3.10 - 3.12
 - g++ version 7 or higher, clang++ version 5 or higher, or another C++17-compatible toolchain.
-- Linux (x86_64 or ARM64) or macOS (ARM64).
+- Linux (x86_64 or ARM64), macOS (ARM64), or Windows (x86_64).
 - Intel-based macOS systems require building PyTorch from source (see [Building From Source](using-executorch-building-from-source.md) for instructions).
-- Windows is supported via WSL.
+- On Windows, Visual Studio 2022 or later. Clang build tools are needed to build from source.

 ## Installation

 To use ExecuTorch, you will need to install both the Python package and the appropriate platform-specific runtime libraries. Pip is the recommended way to install the ExecuTorch python package.
```
```diff
@@ -25,6 +25,7 @@ pip install executorch
 To build the framework from source, see [Building From Source](using-executorch-building-from-source.md). Backend delegates may require additional dependencies. See the appropriate backend documentation for more information.

+> **_NOTE:_** On Windows, ExecuTorch requires a [Visual Studio Developer Powershell](https://learn.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2022). Running from outside of a developer prompt will manifest as errors related to CL.exe.

 <hr/>
```
30
31
```diff
@@ -44,7 +45,7 @@ ExecuTorch provides hardware acceleration for a wide variety of hardware. The mo
 For mobile use cases, consider using XNNPACK for Android and Core ML or XNNPACK for iOS as a first step. See [Hardware Backends](backends-overview.md) for more information.

 ### Exporting
-Exporting is done using Python APIs. ExecuTorch provides a high degree of customization during the export process, but the typical flow is as follows. This example uses the MobileNet V2 image classification model implementation in torchvision, but the process supports any [export-compliant](https://pytorch.org/docs/stable/export.html) PyTorch model. For users working with Hugging Face models,
+Exporting is done using Python APIs. ExecuTorch provides a high degree of customization during the export process, but the typical flow is as follows. This example uses the MobileNet V2 image classification model implementation in torchvision, but the process supports any [export-compliant](https://pytorch.org/docs/stable/export.html) PyTorch model. For Hugging Face models,
 you can find a list of supported models in the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) repo.
```
```diff
 For complete examples of exporting and running the model, please refer to our [examples GitHub repository](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2/python).

-Additionally, if you work with Hugging Face models, the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) library simplifies running these models end-to-end with ExecuTorch, using familiar Hugging Face APIs. Visit the repository for specific examples and supported models.
+Additionally, for Hugging Face models, the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) library simplifies running these models end-to-end with ExecuTorch using familiar Hugging Face APIs. Visit the repository for specific examples and supported models.

 <hr/>
```
109
110
````diff
@@ -131,7 +132,7 @@ dependencies {
 ```

 #### Runtime APIs
-Models can be loaded and run using the `Module` class:
+Models can be loaded and run from Java or Kotlin using the `Module` class.
+Note that the [C++](#c) APIs can be used when targeting Android native.

 For a full example of running a model on Android, see the [DeepLabV3AndroidDemo](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo). For more information on Android development, including building from source, a full description of the Java APIs, and information on using ExecuTorch from Android native code, see [Using ExecuTorch on Android](using-executorch-android.md).
````
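The `Module` usage mentioned above can be sketched as below. This is a hedged sketch, assuming the `org.pytorch.executorch` Android package; the model path and input shape are placeholders, and on a real device this would run inside an Android app rather than a `main` method.

```java
// Sketch of loading and running a .pte model via the Java Module API.
import org.pytorch.executorch.EValue;
import org.pytorch.executorch.Module;
import org.pytorch.executorch.Tensor;

public class ModuleExample {
    public static void main(String[] args) {
        // Load a serialized ExecuTorch program (placeholder path).
        Module module = Module.load("/data/local/tmp/mv2.pte");

        // Build a float input tensor matching the model's expected shape.
        float[] data = new float[1 * 3 * 224 * 224];
        Tensor input = Tensor.fromBlob(data, new long[] {1, 3, 224, 224});

        // Run inference; forward returns an array of EValues.
        EValue[] outputs = module.forward(EValue.from(input));
        float[] scores = outputs[0].toTensor().getDataAsFloatArray();
        System.out.println("Output element count: " + scores.length);
    }
}
```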
```diff
+
 ### iOS

 #### Installation
```
```diff
@@ -165,22 +169,27 @@ For more information on iOS integration, including an API reference, logging set
 ExecuTorch provides C++ APIs, which can be used to target embedded or mobile devices. The C++ APIs provide a greater level of control compared to other language bindings, allowing for advanced memory management, data loading, and platform integration.

 #### Installation
-CMake is the preferred build system for the ExecuTorch C++ runtime. To use with CMake, clone the ExecuTorch repository as a subdirectory of your project, and use CMake's `add_subdirectory("executorch")` to include the dependency. The `executorch` target, as well as kernel and backend targets will be made available to link against. The runtime can also be built standalone to support diverse toolchains. See [Using ExecuTorch with C++](using-executorch-cpp.md) for a detailed description of build integration, targets, and cross compilation.
+CMake is the preferred build system for the ExecuTorch C++ runtime. To use with CMake, clone the ExecuTorch repository as a subdirectory of your project, and use CMake's `add_subdirectory("executorch")` to include the dependency. The `executorch` target, as well as kernel and backend targets, will be made available to link against. The runtime can also be built standalone to support diverse toolchains. See [Using ExecuTorch with C++](using-executorch-cpp.md) and [Building from Source](using-executorch-building-from-source.md) for a detailed description of build integration, targets, and cross compilation.
```
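The `add_subdirectory` integration described above might look roughly like this. This is a minimal sketch, not a definitive configuration: the project name, clone location (`third-party/executorch`), and the bare `executorch` link target are placeholders; real projects typically also link kernel and backend targets, which are listed in the build documentation.

```cmake
# Minimal sketch: consume ExecuTorch as a subdirectory of a host project.
cmake_minimum_required(VERSION 3.24)
project(my_app CXX)

# Assumes the ExecuTorch repo was cloned to third-party/executorch.
add_subdirectory(third-party/executorch)

add_executable(my_app main.cpp)

# Link the core runtime; kernel/backend targets would be added here as needed.
target_link_libraries(my_app PRIVATE executorch)
```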