Commit 41fdf13

Add documentation for pytorch_tokenizer missing (#13289)

Co-authored-by: Mergen Nachin <[email protected]>

Parent: 3a02146

File tree: 2 files changed (+17, −0)

docs/source/llm/export-llm.md (10 additions, 0 deletions)

````diff
@@ -2,6 +2,16 @@
 
 Instead of needing to manually write code to call torch.export(), use ExecuTorch's assortment of lowering APIs, or even interact with TorchAO quantize_ APIs for quantization, we have provided an out of box experience which performantly exports a selection of supported models to ExecuTorch.
 
+## Prerequisites
+
+The LLM export functionality requires the `pytorch_tokenizers` package. If you encounter a `ModuleNotFoundError: No module named 'pytorch_tokenizers'` error, install it from the ExecuTorch source code:
+
+```bash
+pip install -e ./extension/llm/tokenizers/
+```
+
+## Supported Models
+
 As of this doc, the list of supported LLMs include the following:
 - Llama 2/3/3.1/3.2
 - Qwen 2.5/3
````

docs/source/using-executorch-faqs.md (7 additions, 0 deletions)

````diff
@@ -14,6 +14,13 @@ sudo apt install python<version>-dev
 ```
 if you are using Ubuntu, or use an equivalent install command.
 
+### ModuleNotFoundError: No module named 'pytorch_tokenizers'
+
+The `pytorch_tokenizers` package is required for LLM export functionality. Install it from the ExecuTorch source code:
+```bash
+pip install -e ./extension/llm/tokenizers/
+```
+
 ## Export
 
 ### Missing out variants: { _ }
````
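Since the FAQ entry added here documents a Python import failure, one way to preflight it is a small check that surfaces the documented fix before running an export. This is an illustrative sketch only; the `check_tokenizers_installed` helper and its hint string are assumptions, not part of ExecuTorch's API:

```python
import importlib


def check_tokenizers_installed(module_name: str = "pytorch_tokenizers") -> str:
    """Return 'ok' if the tokenizer package imports, else an actionable hint.

    The hint mirrors the install command documented in the FAQ above.
    """
    try:
        importlib.import_module(module_name)
        return "ok"
    except ModuleNotFoundError:
        return "missing: run `pip install -e ./extension/llm/tokenizers/` from the ExecuTorch repo root"
```

A caller could print this result at the start of an export script so users hit the documented fix instead of a raw traceback.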
