2 changes: 1 addition & 1 deletion .github/workflows/trunk.yml
@@ -305,7 +305,7 @@ jobs:
# Install requirements
${CONDA_RUN} sh install_requirements.sh
${CONDA_RUN} sh backends/apple/coreml/scripts/install_requirements.sh
${CONDA_RUN} python install_executorch.py --pybind coreml
${CONDA_RUN} python install_executorch.py
${CONDA_RUN} sh examples/models/llama/install_requirements.sh

# Test ANE llama
2 changes: 1 addition & 1 deletion backends/apple/mps/setup.md
@@ -91,7 +91,7 @@ I 00:00:00.122615 executorch:mps_executor_runner.mm:501] Model verified successf
### [Optional] Run the generated model directly using pybind
1. Make sure `pybind` MPS support was installed:
```bash
./install_executorch.sh --pybind mps
CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh
```
2. Run the `mps_example` script to trace the model and run it directly from python:
```bash
2 changes: 1 addition & 1 deletion docs/source/backends-mps.md
@@ -91,7 +91,7 @@ I 00:00:00.122615 executorch:mps_executor_runner.mm:501] Model verified successf
### [Optional] Run the generated model directly using pybind
1. Make sure `pybind` MPS support was installed:
```bash
./install_executorch.sh --pybind mps
CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh
```
2. Run the `mps_example` script to trace the model and run it directly from python:
```bash
20 changes: 5 additions & 15 deletions docs/source/using-executorch-building-from-source.md
@@ -64,25 +64,15 @@ Or alternatively, [install conda on your machine](https://conda.io/projects/cond
./install_executorch.sh
```

Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_executorch.sh#L26-L29) to install with pybindings and dependencies for other backends.
```bash
./install_executorch.sh --pybind <coreml | mps | xnnpack>

# Example: pybindings with CoreML *only*
./install_executorch.sh --pybind coreml

# Example: pybinds with CoreML *and* XNNPACK
./install_executorch.sh --pybind coreml xnnpack
```
Not all backends are built into the pip wheel by default. You can enable these missing/experimental backends by turning on the corresponding CMake flag. For example, to include the MPS backend:

By default, `./install_executorch.sh` command installs pybindings for XNNPACK. To disable any pybindings altogether:
```bash
./install_executorch.sh --pybind off
```
```bash
CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh
```
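
The removed `--pybind off` option maps onto the same mechanism. As a sketch, assuming the `EXECUTORCH_BUILD_PYBIND` CMake option that the old flag set is still honored by the build, pybindings can be disabled entirely with:

```bash
# Illustrative sketch: skip building the Python bindings altogether
# (assumes -DEXECUTORCH_BUILD_PYBIND is still respected by setup.py).
CMAKE_ARGS="-DEXECUTORCH_BUILD_PYBIND=OFF" ./install_executorch.sh
```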
Comment on lines +69 to +71

Contributor: hmm I thought we don't provide the option? If we are providing it, why is this better than `--pybind mps`?

Contributor Author (@jathu, May 29, 2025): I wanted to leave this option open mainly for development on new backends. After the MPS changes, I think it would make sense to remove this example from public docs.

For development mode, run the command with `--editable`, which allows us to modify Python source code and see changes reflected immediately.
```bash
./install_executorch.sh --editable [--pybind xnnpack]
./install_executorch.sh --editable

# Or you can directly do the following if dependencies are already installed
# either via a previous invocation of `./install_executorch.sh` or by explicitly installing requirements via `./install_requirements.sh` first.
2 changes: 1 addition & 1 deletion examples/demo-apps/react-native/rnllama/README.md
@@ -26,7 +26,7 @@ A React Native mobile application for running LLaMA language models using ExecuT

3. Pull submodules: `git submodule sync && git submodule update --init`

4. Install dependencies: `./install_executorch.sh --pybind xnnpack && ./examples/models/llama/install_requirements.sh`
4. Install dependencies: `./install_executorch.sh && ./examples/models/llama/install_requirements.sh`

5. Follow the instructions in the [README](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#option-a-download-and-export-llama32-1b3b-model) to export a model as `.pte`

4 changes: 2 additions & 2 deletions examples/models/llama/README.md
@@ -148,7 +148,7 @@ Llama 3 8B performance was measured on the Samsung Galaxy S22, S24, and OnePlus
## Step 1: Setup
> :warning: **double check your python environment**: make sure `conda activate <VENV>` is run before all the bash and python scripts.

1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh --pybind xnnpack`
1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`
2. Run `examples/models/llama/install_requirements.sh` to install a few dependencies.


@@ -528,7 +528,7 @@ This example tries to reuse the Python code, with minimal modifications to make
git clean -xfd
pip uninstall executorch
./install_executorch.sh --clean
./install_executorch.sh --pybind xnnpack
./install_executorch.sh
```
- If you encounter `pthread` related issues during link time, add `pthread` in `target_link_libraries` in `CMakeLists.txt`
- On Mac, if there is linking error in Step 4 with error message like
2 changes: 1 addition & 1 deletion examples/models/phi-3-mini/README.md
@@ -3,7 +3,7 @@ This example demonstrates how to run a [Phi-3-mini](https://huggingface.co/micro

# Instructions
## Step 1: Setup
1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh --pybind xnnpack`
1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`
2. Currently, we support transformers v4.44.2. Install transformers with the following command:
```
pip uninstall -y transformers ; pip install transformers==4.44.2
20 changes: 5 additions & 15 deletions extension/pybindings/README.md
@@ -2,28 +2,18 @@
This Python module, named `portable_lib`, provides a set of functions and classes for loading and executing bundled programs. To install it, run the following command:

```bash
CMAKE_ARGS="-DEXECUTORCH_BUILD_XNNPACK=ON" pip install . --no-build-isolation
```

Or when installing the rest of dependencies:
./install_executorch.sh

```bash
install_executorch.sh --pybind
# ...or use pip directly
pip install . --no-build-isolation
```

# Link Backends

You can link the runtime against additional backends so that a delegated or partitioned model can still run successfully through the Python module:

```bash
CMAKE_ARGS="-DEXECUTORCH_BUILD_XNNPACK=ON -DEXECUTORCH_BUILD_COREML=ON -DEXECUTORCH_BUILD_MPS=ON" \
pip install . --no-build-isolation
```

Similarly, when installing the rest of dependencies:
Not all backends are built into the pip wheel by default. You can enable these missing/experimental backends by turning on the corresponding CMake flag. For example, to include the MPS backend:

```bash
install_executorch.sh --pybind xnnpack coreml mps
CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh
```
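
Multiple backend flags can also be combined in a single `CMAKE_ARGS` string. A sketch, using the `EXECUTORCH_BUILD_<BACKEND>` options that appear elsewhere in this PR:

```bash
# Illustrative sketch: enable XNNPACK, CoreML, and MPS in one build.
CMAKE_ARGS="-DEXECUTORCH_BUILD_XNNPACK=ON -DEXECUTORCH_BUILD_COREML=ON -DEXECUTORCH_BUILD_MPS=ON" \
  ./install_executorch.sh
```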

## Functions
88 changes: 12 additions & 76 deletions install_executorch.py
@@ -8,14 +8,12 @@

import argparse
import glob
import itertools
import logging
import os
import shutil
import subprocess
import sys
from contextlib import contextmanager
from typing import List, Tuple

from install_requirements import (
install_requirements,
@@ -52,10 +50,6 @@ def clean():
print("Done cleaning build artifacts.")


# Please keep this in sync with `ShouldBuild.pybindings` in setup.py.
VALID_PYBINDS = ["coreml", "mps", "xnnpack", "training", "openvino"]


################################################################################
# Git submodules
################################################################################
@@ -139,14 +133,9 @@ def check_folder(folder: str, file: str) -> bool:
logger.info("All required submodules are present.")


def build_args_parser() -> argparse.ArgumentParser:
# Parse options.
parser = argparse.ArgumentParser()
parser.add_argument(
"--pybind",
action="append",
nargs="+",
help="one or more of coreml/mps/xnnpack, or off",
def _parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(
description="Install executorch in your Python environment."
)
parser.add_argument(
"--clean",
@@ -166,83 +155,34 @@ def build_args_parser() -> argparse.ArgumentParser:
"picked up without rebuilding the wheel. Extension libraries will be "
"installed inside the source tree.",
)
return parser


# Returns (wants_off, wanted_pybindings)
def _list_pybind_defines(args) -> Tuple[bool, List[str]]:
if args.pybind is None:
return False, []

# Flatten list of lists.
args.pybind = list(itertools.chain(*args.pybind))
if "off" in args.pybind:
if len(args.pybind) != 1:
raise Exception(f"Cannot combine `off` with other pybinds: {args.pybind}")
return True, []

cmake_args = []
for pybind_arg in args.pybind:
if pybind_arg not in VALID_PYBINDS:
raise Exception(
f"Unrecognized pybind argument {pybind_arg}; valid options are: {', '.join(VALID_PYBINDS)}"
)
if pybind_arg == "training":
cmake_args.append("-DEXECUTORCH_BUILD_EXTENSION_TRAINING=ON")
else:
cmake_args.append(f"-DEXECUTORCH_BUILD_{pybind_arg.upper()}=ON")

return False, cmake_args
return parser.parse_args()


def main(args):
if not python_is_compatible():
sys.exit(1)

parser = build_args_parser()
args = parser.parse_args()

cmake_args = [os.getenv("CMAKE_ARGS", "")]
use_pytorch_nightly = True

wants_pybindings_off, pybind_defines = _list_pybind_defines(args)
if wants_pybindings_off:
cmake_args.append("-DEXECUTORCH_BUILD_PYBIND=OFF")
else:
cmake_args += pybind_defines
args = _parse_args()

if args.clean:
clean()
return

if args.use_pt_pinned_commit:
# This option is used in CI to make sure that PyTorch build from the pinned commit
# is used instead of nightly. CI jobs wouldn't be able to catch regression from the
# latest PT commit otherwise
use_pytorch_nightly = False

cmake_args = [os.getenv("CMAKE_ARGS", "")]
# Use ClangCL on Windows.
# ClangCL is an alias to Clang that configures it to work in an MSVC-compatible
# mode. Using it on Windows to avoid compiler compatibility issues for MSVC.
if os.name == "nt":
cmake_args.append("-T ClangCL")

#
# Install executorch pip package. This also makes `flatc` available on the path.
# The --extra-index-url may be necessary if pyproject.toml has a dependency on a
# pre-release or nightly version of a torch package.
#

# Set environment variables
os.environ["CMAKE_ARGS"] = " ".join(cmake_args)

# Check if the required submodules are present and update them if not
check_and_update_submodules()

install_requirements(use_pytorch_nightly)

# Run the pip install command
subprocess.run(
# This option is used in CI to make sure that PyTorch build from the pinned commit
# is used instead of nightly. CI jobs wouldn't be able to catch regression from the
# latest PT commit otherwise
install_requirements(use_pytorch_nightly=not args.use_pt_pinned_commit)
os.execvp(
sys.executable,
[
sys.executable,
"-m",
Expand All @@ -257,14 +197,10 @@ def main(args):
"--extra-index-url",
TORCH_NIGHTLY_URL,
],
check=True,
)


if __name__ == "__main__":
# Before doing anything, cd to the directory containing this script.
os.chdir(os.path.dirname(os.path.abspath(__file__)))
if not python_is_compatible():
sys.exit(1)

main(sys.argv[1:])
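
With `--pybind` removed, backend selection now flows entirely through the `CMAKE_ARGS` environment variable read in `main`. A usage sketch of the simplified entry point, limited to the flags defined in `_parse_args` above and the `CMAKE_ARGS` convention used in the docs updated by this PR:

```bash
# Remove build artifacts from a previous install.
./install_executorch.sh --clean

# Default install.
./install_executorch.sh

# Editable install, so Python source changes take effect without rebuilding the wheel.
./install_executorch.sh --editable

# Opt extra backends in through CMake options instead of --pybind, e.g. MPS.
CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh
```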