
Commit 9ef8ed2

fix: readme
1 parent f94b0ff commit 9ef8ed2

File tree

  • examples/demo-apps/react-native/rnllama

1 file changed: +5 −3 lines changed

examples/demo-apps/react-native/rnllama/README.md

Lines changed: 5 additions & 3 deletions
@@ -24,17 +24,19 @@ A React Native mobile application for running LLaMA language models using ExecuTorch

 2. Navigate to the root of the repository: `cd executorch`

-3. Install dependencies: `./install_requirements.sh --pybind xnnpack && ./examples/models/llama/install_requirements.sh`
+3. Pull submodules: `git submodule sync && git submodule update --init`

-4. Follow the instructions in the [README](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#option-a-download-and-export-llama32-1b3b-model) to export a model as `.pte`
+4. Install dependencies: `./install_requirements.sh --pybind xnnpack && ./examples/models/llama/install_requirements.sh`
+
+5. Follow the instructions in the [README](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#option-a-download-and-export-llama32-1b3b-model) to export a model as `.pte`

 6. Navigate to the example: `cd examples/demo-apps/react-native/rnllama`

 7. Install dependencies: `npm install && cd ios && pod install && cd ..`

 8. Run the app: `npx expo run:ios --device` and select a USB connected iOS device

-9. Find the device in finder, and place the exported `.pte` model and the downloaded tokenizer under the app.
+9. Find the device in finder, and place the exported `.pte` model and the downloaded tokenizer under the app

 10. Select the model and tokenizer in the app to start chatting:

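
For context, the setup flow after this change reads roughly as below. This is a minimal sketch assembled from the README lines in the diff; the initial `git clone` and the comment standing in for the model-export step are assumptions, not part of the diff.

```sh
# Sketch of the updated rnllama setup flow (README steps shown in the hunk above).
# Assumption: starting from a fresh clone of pytorch/executorch.
git clone https://github.com/pytorch/executorch.git
cd executorch

# Step added in this commit: pull submodules before installing dependencies.
git submodule sync && git submodule update --init

# Install ExecuTorch and llama example dependencies.
./install_requirements.sh --pybind xnnpack
./examples/models/llama/install_requirements.sh

# Export a model as .pte by following the llama README linked above (not shown here),
# then set up and run the React Native demo app.
cd examples/demo-apps/react-native/rnllama
npm install && cd ios && pod install && cd ..
npx expo run:ios --device   # select a USB-connected iOS device
```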

0 commit comments
