This PR simplifies the `cmake` commands in README.md by using a preset.
This pull request simplifies the build process for the Llama model and
its runner by introducing a preset configuration and removing redundant
build flags. It also updates `CMakeLists.txt` to enable additional
tokenizer features.
### Build process simplification:
* [`examples/models/llama/README.md`](diffhunk://#diff-535f376de1f099ede770ee4d5b3c3193b5784c6a0342e292e667fe4ff9d1633eL272-L298): Replaced the detailed `cmake` commands with a preset configuration (`--preset llm`) for building executorch, and removed redundant flags from the Llama runner build step. This streamlines the instructions and reduces complexity.
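As a rough illustration (a sketch, not the exact README text), the old multi-flag invocation collapses into a preset-based configure step; the preset name `llm` comes from the PR description, while the output directory and job count here are illustrative:

```bash
# Before: a long list of -DEXECUTORCH_* flags passed to cmake by hand.
# After: the preset bundles those options (defined in CMakePresets.json).
cmake --preset llm -Bcmake-out .
cmake --build cmake-out -j"$(nproc)" --target install --config Release
```

Presets keep the README short and make it harder for the documented flags to drift out of sync with the flags the project actually tests.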
### Tokenizer configuration updates:
* [`extension/llm/runner/CMakeLists.txt`](diffhunk://#diff-ab47c38904702e3d66a37419ca35a07815f7d4735f7e94330d17643b9f77ad2bR47-R48): Added `SUPPORT_REGEX_LOOKAHEAD` and `PCRE2_STATIC_PIC` settings to enable regex lookahead support and ensure position-independent code for tokenizers.
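A minimal sketch of what such settings look like in a CMake file; the option names are taken from the PR description, but their exact placement and surrounding context in `extension/llm/runner/CMakeLists.txt` may differ:

```cmake
# Enable regex lookahead support in the tokenizers component.
set(SUPPORT_REGEX_LOOKAHEAD ON)
# Build the static PCRE2 library as position-independent code (-fPIC)
# so it can be linked into shared libraries.
set(PCRE2_STATIC_PIC ON)
```

Setting these before the tokenizers subproject is added lets its configure step pick them up.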
`examples/models/llama/README.md` (+3 −19)
@@ -269,33 +269,17 @@ You can export and run the original Llama 3 8B instruct model.

1. Build executorch with optimized CPU performance as follows. Build options available [here](https://github.com/pytorch/executorch/blob/main/CMakeLists.txt#L59).