
Commit bba4a01: Update README.md (1 parent: 6b936c5)

File tree: 1 file changed

examples/openvino/llama/README.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -8,7 +8,7 @@ Follow the [instructions](../../examples/models/llama#step-2-prepare-model) to d
 Follow the [instructions](../../backends/openvino/README.md) of **Prerequisites** and **Setup** in `backends/openvino/README.md` to set up the OpenVINO backend.
 
 ## Export the model:
-Navigate into `<executorch_root>/examples/openvino/llama` and execute the commands below to export the model. Update the model file paths to match the location where your model is downloaded. The exported model will be generated in the same directory with the filename `llama3_2.pte`.
+Navigate into `<executorch_root>/examples/openvino/llama` and execute the commands below to export the model. Update the model file paths to match the location where your model is downloaded. Replace device with the target hardware you want to compile the model for (`CPU`, `GPU`, or `NPU`). The exported model will be generated in the same directory with the filename `llama3_2.pte`.
 
 ```
 LLAMA_CHECKPOINT=<path/to/model/folder>/consolidated.00.pth
@@ -17,6 +17,7 @@ LLAMA_TOKENIZER=<path/to/model/folder>/tokenizer.model
 
 python -m executorch.extension.llm.export.export_llm \
 --config llama3_2_ov_4wo.yaml \
++backend.openvino.device="CPU" \
 +base.model_class="llama3_2" \
 +base.checkpoint="${LLAMA_CHECKPOINT:?}" \
 +base.params="${LLAMA_PARAMS:?}" \
````
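Two details of the changed command are easy to miss: the new `+backend.openvino.device` override hard-codes `"CPU"` (the README says to substitute `GPU` or `NPU` as needed), and the path arguments use the shell's `${VAR:?}` expansion, which aborts the command when a variable is unset instead of silently passing an empty string. A minimal sketch of both behaviors; the `DEVICE` variable, the guard logic, and the checkpoint path are hypothetical illustrations, not part of the actual export script:

```shell
# 1. ${VAR:?} fails fast when the variable is unset or empty, so a missing
#    model path is caught before the export begins.
LLAMA_CHECKPOINT=/tmp/model/consolidated.00.pth  # placeholder path
echo "checkpoint: ${LLAMA_CHECKPOINT:?}"

# 2. The diff documents CPU, GPU, and NPU as the supported device strings;
#    a simple guard (hypothetical) rejects anything else before a slow export.
DEVICE="CPU"
case "$DEVICE" in
  CPU|GPU|NPU) echo "exporting for $DEVICE" ;;
  *) echo "unsupported device: $DEVICE" >&2; exit 1 ;;
esac
```

With `LLAMA_CHECKPOINT` unset, the first `echo` would instead terminate the script with a "parameter null or not set" style error, which is the point of the `:?` form.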
