
Commit cb63da3

Update build from source and getting started docs (#15311)

### Summary

Update Getting Started and Build from Source docs:

* Integrate Windows steps into the main flow with minor Windows-specific callouts.
* Clarify top-level flow for building from source - add a table by use case.
* Clarify building ET as a submodule vs standalone build.
* Re-order, re-word, and clean up the content related to building from source.
* Add info on NDK build for Android.

Tracked in #14791 and #14759.

cc @mergennachin @byjlw

1 parent 5f6167f · commit cb63da3

File tree: 2 files changed (+155, −228 lines)


docs/source/getting-started.md

Lines changed: 20 additions & 11 deletions
````diff
@@ -10,9 +10,9 @@ The following are required to install the ExecuTorch host libraries, needed to e
 
 - Python 3.10 - 3.12
 - g++ version 7 or higher, clang++ version 5 or higher, or another C++17-compatible toolchain.
-- Linux (x86_64 or ARM64) or macOS (ARM64).
+- Linux (x86_64 or ARM64), macOS (ARM64), or Windows (x86_64).
 - Intel-based macOS systems require building PyTorch from source (see [Building From Source](using-executorch-building-from-source.md) for instructions).
-- Windows is supported via WSL.
+- On Windows, Visual Studio 2022 or later. Clang build tools are needed to build from source.
 
 ## Installation
 To use ExecuTorch, you will need to install both the Python package and the appropriate platform-specific runtime libraries. Pip is the recommended way to install the ExecuTorch python package.
````
````diff
@@ -25,6 +25,7 @@ pip install executorch
 
 To build the framework from source, see [Building From Source](using-executorch-building-from-source.md). Backend delegates may require additional dependencies. See the appropriate backend documentation for more information.
 
+> **_NOTE:_** On Windows, ExecuTorch requires a [Visual Studio Developer Powershell](https://learn.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2022). Running from outside of a developer prompt will manifest as errors related to CL.exe.
 
 <hr/>
 
````
````diff
@@ -44,7 +45,7 @@ ExecuTorch provides hardware acceleration for a wide variety of hardware. The mo
 For mobile use cases, consider using XNNPACK for Android and Core ML or XNNPACK for iOS as a first step. See [Hardware Backends](backends-overview.md) for more information.
 
 ### Exporting
-Exporting is done using Python APIs. ExecuTorch provides a high degree of customization during the export process, but the typical flow is as follows. This example uses the MobileNet V2 image classification model implementation in torchvision, but the process supports any [export-compliant](https://pytorch.org/docs/stable/export.html) PyTorch model. For users working with Hugging Face models,
+Exporting is done using Python APIs. ExecuTorch provides a high degree of customization during the export process, but the typical flow is as follows. This example uses the MobileNet V2 image classification model implementation in torchvision, but the process supports any [export-compliant](https://pytorch.org/docs/stable/export.html) PyTorch model. For Hugging Face models,
 you can find a list of supported models in the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) repo.
 
 ```python
````
````diff
@@ -103,7 +104,7 @@ print(torch.allclose(output[0], eager_reference_output, rtol=1e-3, atol=1e-5))
 
 For complete examples of exporting and running the model, please refer to our [examples GitHub repository](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2/python).
 
-Additionally, if you work with Hugging Face models, the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) library simplifies running these models end-to-end with ExecuTorch, using familiar Hugging Face APIs. Visit the repository for specific examples and supported models.
+Additionally, for Hugging Face models, the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) library simplifies running these models end-to-end with ExecuTorch using familiar Hugging Face APIs. Visit the repository for specific examples and supported models.
 
 <hr/>
 
````
````diff
@@ -131,7 +132,7 @@ dependencies {
 ```
 
 #### Runtime APIs
-Models can be loaded and run using the `Module` class:
+Models can be loaded and run from Java or Kotlin using the `Module` class.
 ```java
 import org.pytorch.executorch.EValue;
 import org.pytorch.executorch.Module;
````
````diff
@@ -147,8 +148,11 @@ EValue[] output = model.forward(input_evalue);
 float[] scores = output[0].toTensor().getDataAsFloatArray();
 ```
 
+Note that the [C++](#c) APIs can be used when targeting Android native.
+
 For a full example of running a model on Android, see the [DeepLabV3AndroidDemo](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo). For more information on Android development, including building from source, a full description of the Java APIs, and information on using ExecuTorch from Android native code, see [Using ExecuTorch on Android](using-executorch-android.md).
 
+
 ### iOS
 
 #### Installation
````
````diff
@@ -165,22 +169,27 @@ For more information on iOS integration, including an API reference, logging set
 ExecuTorch provides C++ APIs, which can be used to target embedded or mobile devices. The C++ APIs provide a greater level of control compared to other language bindings, allowing for advanced memory management, data loading, and platform integration.
 
 #### Installation
-CMake is the preferred build system for the ExecuTorch C++ runtime. To use with CMake, clone the ExecuTorch repository as a subdirectory of your project, and use CMake's `add_subdirectory("executorch")` to include the dependency. The `executorch` target, as well as kernel and backend targets will be made available to link against. The runtime can also be built standalone to support diverse toolchains. See [Using ExecuTorch with C++](using-executorch-cpp.md) for a detailed description of build integration, targets, and cross compilation.
+CMake is the preferred build system for the ExecuTorch C++ runtime. To use with CMake, clone the ExecuTorch repository as a subdirectory of your project, and use CMake's `add_subdirectory("executorch")` to include the dependency. The `executorch` target, as well as kernel and backend targets will be made available to link against. The runtime can also be built standalone to support diverse toolchains. See [Using ExecuTorch with C++](using-executorch-cpp.md) and [Building from Source](using-executorch-building-from-source.md) for a detailed description of build integration, targets, and cross compilation.
 
 ```
 git clone -b release/1.0 https://github.com/pytorch/executorch.git
 ```
-```python
+```cmake
+# Set CMAKE_CXX_STANDARD to 17 or above.
+set(CMAKE_CXX_STANDARD 17)
+
 # CMakeLists.txt
+set(EXECUTORCH_BUILD_PRESET_FILE ${CMAKE_SOURCE_DIR}/executorch/tools/cmake/preset/llm.cmake)
+# Set other ExecuTorch options here.
+
 add_subdirectory("executorch")
 ...
 target_link_libraries(
   my_target
   PRIVATE executorch
-          extension_module_static
-          extension_tensor
-          optimized_native_cpu_ops_lib
-          xnnpack_backend)
+          executorch::backends
+          executorch::extensions
+          executorch::kernels)
 ```
 
 
````
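Pieced together from the new snippet above, a complete minimal `CMakeLists.txt` for a consumer project might look as follows. This is a sketch, not part of the commit: the project name, source file, CMake version floor, and the choice of the `llm.cmake` preset are illustrative assumptions.

```cmake
# Hypothetical minimal CMakeLists.txt assembled from the diff above.
cmake_minimum_required(VERSION 3.24)
project(my_app LANGUAGES CXX)

# ExecuTorch requires C++17 or above.
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Optionally start from a preset, then override individual
# EXECUTORCH_* options before adding the subdirectory.
set(EXECUTORCH_BUILD_PRESET_FILE ${CMAKE_SOURCE_DIR}/executorch/tools/cmake/preset/llm.cmake)

# Assumes the repository was cloned as a subdirectory:
#   git clone -b release/1.0 https://github.com/pytorch/executorch.git
add_subdirectory("executorch")

add_executable(my_target main.cpp)
target_link_libraries(
  my_target
  PRIVATE executorch
          executorch::backends
          executorch::extensions
          executorch::kernels)
```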