5. Follow the instructions in the [README](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#option-a-download-and-export-llama32-1b3b-model) to export a model as `.pte`
6. Navigate to the example: `cd examples/demo-apps/react-native/rnllama`
7. Install dependencies: `npm install && cd ios && pod install && cd ..`
8. Run the app: `npx expo run:ios --device` and select a USB connected iOS device
9. Find the device in Finder, and place the exported `.pte` model and the downloaded tokenizer under the app
10. Select the model and tokenizer in the app to start chatting:
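The command-line portion of the steps above (6–8) can be run as one sequence. This is a sketch, assuming you start from the root of a cloned `executorch` checkout and have `npm`, CocoaPods, and the Expo CLI available:

```shell
# Enter the React Native example (assumes the executorch repo root as CWD)
cd examples/demo-apps/react-native/rnllama

# Install JavaScript dependencies
npm install

# Install native iOS pods; the subshell returns to the example directory afterward
(cd ios && pod install)

# Build and run on a USB-connected iOS device (Expo prompts for device selection)
npx expo run:ios --device
```

After the app launches on the device, continue with steps 9–10 to copy the `.pte` model and tokenizer onto it and start chatting.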