In this section, we will set up the ExecuTorch repo with Conda environment management. Make sure you have Conda available on your system (or follow the instructions to install it [here](https://anaconda.org/anaconda/conda)). The commands below are run on Linux (CentOS).
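A minimal sketch of the environment and repo setup, assuming the upstream pytorch/executorch repository; the environment name and Python version below are illustrative:

```
# Create and activate a Conda environment for ExecuTorch
conda create -yn executorch python=3.10
conda activate executorch

# Check out the ExecuTorch repo and sync its submodules
git clone https://github.com/pytorch/executorch.git
cd executorch
git submodule sync
git submodule update --init
```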
Install dependencies
```
./install_executorch.sh
```

Optional: Use the --pybind flag to install with pybindings.
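For example, assuming the XNNPACK backend, the pybindings can be installed with:

```
./install_executorch.sh --pybind xnnpack
```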
## Prepare Models
In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5.
* You can request and download model weights for Llama through Meta's official [website](https://llama.meta.com/).
* For chat use cases, download the instruct models instead of the pretrained ones.
* Run `./examples/models/llama/install_requirements.sh` to install dependencies (see the sketch after this list).
* Rename the tokenizer for Llama 3.x with: `mv tokenizer.model tokenizer.bin`. We are updating the demo app to support the tokenizer in its original format directly.
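A minimal sketch of these model-preparation steps, assuming the Llama 3.x weights and `tokenizer.model` were already downloaded; the download directory below is illustrative:

```
# From the ExecuTorch repo root: install Llama example dependencies
./examples/models/llama/install_requirements.sh

# Rename the Llama 3.x tokenizer so the demo app can load it
# (~/llama-3.2-1b-instruct is an illustrative download location)
mv ~/llama-3.2-1b-instruct/tokenizer.model ~/llama-3.2-1b-instruct/tokenizer.bin
```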