`docs/source/using-executorch-building-from-source.md` (+5, -15)

````diff
@@ -64,25 +64,15 @@ Or alternatively, [install conda on your machine](https://conda.io/projects/cond
 ./install_executorch.sh
 ```
-Use the [`--pybind` flag](https://github.com/pytorch/executorch/blob/main/install_executorch.sh#L26-L29) to install with pybindings and dependencies for other backends.
+Not all backends are built into the pip wheel by default. You can link these missing/experimental backends by turning on the corresponding cmake flag. For example, to include the MPS backend:
-By default, `./install_executorch.sh` command installs pybindings for XNNPACK. To disable any pybindings altogether:
 5. Follow the instructions in the [README](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#option-a-download-and-export-llama32-1b3b-model) to export a model as `.pte`
````
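The "corresponding cmake flag" approach that the added documentation line describes could be invoked roughly as below. This is a sketch, not taken from the diff: the `CMAKE_ARGS` pass-through and the `EXECUTORCH_BUILD_MPS` option name are assumptions, so check the project's CMake files for the real option.

```shell
# Hypothetical sketch: pass a backend-enabling CMake flag through the
# environment so the install script's CMake build picks it up.
# EXECUTORCH_BUILD_MPS is an assumed option name, used here for illustration.
CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh
```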
`examples/models/llama/README.md` (+2, -2)

````diff
@@ -148,7 +148,7 @@ Llama 3 8B performance was measured on the Samsung Galaxy S22, S24, and OnePlus
 ## Step 1: Setup
 > :warning: **double check your python environment**: make sure `conda activate <VENV>` is run before all the bash and python scripts.
-1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh --pybind xnnpack`
+1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`
 2. Run `examples/models/llama/install_requirements.sh` to install a few dependencies.
@@ -528,7 +528,7 @@ This example tries to reuse the Python code, with minimal modifications to make
 git clean -xfd
 pip uninstall executorch
 ./install_executorch.sh --clean
-./install_executorch.sh --pybind xnnpack
+./install_executorch.sh
 ```
 - If you encounter `pthread` related issues during link time, add `pthread` in `target_link_libraries` in `CMakeLists.txt`
 - On Mac, if there is linking error in Step 4 with error message like
````
`examples/models/phi-3-mini/README.md` (+1, -1)

```diff
@@ -3,7 +3,7 @@ This example demonstrates how to run a [Phi-3-mini](https://huggingface.co/micro
 # Instructions
 ## Step 1: Setup
-1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh --pybind xnnpack`
+1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`
 2. Currently, we support transformers v4.44.2. Install transformers with the following command:
```
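The pinned-transformers step in the Phi-3-mini README is typically realized with a plain pip version pin; the exact command is not shown in this diff, so the following is a sketch assuming a PyPI-hosted `transformers` package.

```shell
# Sketch: pin transformers to the supported version (v4.44.2, per the README text).
pip install transformers==4.44.2
```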
`extension/pybindings/README.md` (+5, -15)

```diff
@@ -2,28 +2,18 @@
 This Python module, named `portable_lib`, provides a set of functions and classes for loading and executing bundled programs. To install it, run the following command:
 Similarly, when installing the rest of dependencies:
+Not all backends are built into the pip wheel by default. You can link these missing/experimental backends by turning on the corresponding cmake flag. For example, to include the MPS backend:
```
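For context, `portable_lib` is the pybindings module these docs describe for loading and executing exported programs. A minimal usage sketch follows; it assumes an installed ExecuTorch wheel and an already-exported `model.pte`, and the `_load_for_executorch` entry point name is an assumption that may differ between releases.

```python
# Hypothetical sketch of running an exported .pte program via portable_lib.
# Assumes ExecuTorch is installed and "model.pte" exists on disk; entry-point
# and method names may vary across ExecuTorch versions.
import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

module = _load_for_executorch("model.pte")    # load the bundled program
outputs = module.forward([torch.ones(1, 4)])  # run with a sample input list
print(outputs)
```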