
Commit 07c105e: Update xnnpack_README.md
1 parent: d0ce19a


examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md

Lines changed: 14 additions & 26 deletions
@@ -13,40 +13,41 @@ More specifically, it covers:
## Setup ExecuTorch
In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS).

-Create a Conda environment
+Checkout ExecuTorch repo and sync submodules

```
-conda create -n et_xnnpack python=3.10.0
-conda activate et_xnnpack
+git clone -b release/0.6 https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules && cd executorch
```

-Checkout ExecuTorch repo and sync submodules
+Create either a Python virtual environment:

```
-git clone https://github.com/pytorch/executorch.git
-cd executorch
-git submodule sync
-git submodule update --init
+python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip
```

-Install dependencies
+Or a Conda environment:

```
-./install_executorch.sh
+conda create -n et_xnnpack python=3.10.0
+conda activate et_xnnpack
```
-Optional: Use the --pybind flag to install with pybindings.
+
+Install dependencies
+
```
-./install_executorch.sh --pybind xnnpack
+./install_executorch.sh
```
+
## Prepare Models
In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5.
* You can request and download model weights for Llama through Meta official [website](https://llama.meta.com/).
* For chat use-cases, download the instruct models instead of pretrained.
* Install the required packages to export the model:

```
-sh examples/models/llama/install_requirements.sh
+./examples/models/llama/install_requirements.sh
```
+
### For Llama 3.2 1B and 3B SpinQuant models
Meta has released prequantized INT4 SpinQuant Llama 3.2 models that ExecuTorch supports on the XNNPACK backend.
* Export Llama model and generate .pte file as below:
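For reference, the setup commands added in this hunk chain together into a single flow, shown below as a sketch built only from the added lines above (the venv variant is used; the Conda variant can replace the `python3 -m venv` step, and the model export command itself appears later in the README and is not part of this hunk):

```
# Shallow-clone the release/0.6 branch with submodules and enter the repo
git clone -b release/0.6 https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules && cd executorch

# Create and activate a Python virtual environment
python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip

# Install ExecuTorch dependencies
./install_executorch.sh

# Install the packages required to export the Llama model
./examples/models/llama/install_requirements.sh
```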
@@ -112,19 +113,6 @@ There are two options to add ExecuTorch runtime package into your XCode project:

The current XCode project is pre-configured to automatically download and link the latest prebuilt package via Swift Package Manager.

-If you have an old ExecuTorch package cached before in XCode, or are running into any package dependencies issues (incorrect checksum hash, missing package, outdated package), close XCode and run the following command in terminal inside your ExecuTorch directory
-
-```
-rm -rf \
-  ~/Library/org.swift.swiftpm \
-  ~/Library/Caches/org.swift.swiftpm \
-  ~/Library/Caches/com.apple.dt.Xcode \
-  ~/Library/Developer/Xcode/DerivedData \
-  examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.xcworkspace/xcshareddata/swiftpm
-```
-
-The command above will clear all the package cache, and when you re-open the XCode project, it should re-download the latest package and link them correctly.
-
#### (Optional) Changing the prebuilt package version
While we recommended using the latest prebuilt package pre-configured with the XCode project, you can also change the package version manually to your desired version.
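With the cache-clearing instructions removed, package resolution is left to Xcode's Swift Package Manager integration. As a minimal sketch (assuming macOS and the demo project path referenced elsewhere in this README), opening the project is expected to be enough for Xcode to fetch and link the prebuilt package:

```
# Open the demo app project from the ExecuTorch repo root (macOS `open`).
# Xcode resolves and downloads the pre-configured ExecuTorch package
# via Swift Package Manager when the project is opened.
open examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj
```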
