
Commit 4426541

Update README.md
1 parent 35f1d84 commit 4426541

File tree

1 file changed: +2 −2 lines changed


examples/openvino/llama/README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -8,7 +8,7 @@ Follow the [instructions](../../examples/models/llama#step-2-prepare-model) to d
 Follow the [instructions](../../backends/openvino/README.md) of **Prerequisites** and **Setup** in `backends/openvino/README.md` to set up the OpenVINO backend.
 
 ## Export the model:
-Navigate into `<executorch_root>/examples/openvino/llama` and execute the commands below to export the model. Update the model file paths to match the location where your model is downloaded.
+Navigate into `<executorch_root>/examples/openvino/llama` and execute the commands below to export the model. Update the model file paths to match the location where your model is downloaded. The exported model will be generated in the same directory with the filename `llama3_2.pte`.
 
 ```
 LLAMA_CHECKPOINT=<path/to/model/folder>/consolidated.00.pth
@@ -37,5 +37,5 @@ The executable is saved in `<executorch_root>/cmake-out/examples/models/llama/ll
 ## Execute Inference Using Llama Runner
 
 Update the model tokenizer file path to match the location where your model is downloaded and replace the prompt.
 ```
-./cmake-out/examples/models/llama/llama_main --model_path=llama3_2.pte --tokenizer_path=<path/to/model/folder>/tokenizer.model --prompt="Your custom prompt"
+./cmake-out/examples/models/llama/llama_main --model_path=<executorch_root>/examples/openvino/llama/llama3_2.pte --tokenizer_path=<path/to/model/folder>/tokenizer.model --prompt="Your custom prompt"
 ```
````
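The net effect of the two changed lines can be sketched as a shell snippet. This is a sketch only: `EXECUTORCH_ROOT` and `MODEL_DIR` are hypothetical variables standing in for the README's `<executorch_root>` and `<path/to/model/folder>` placeholders, and the snippet just assembles and prints the runner invocation rather than executing the (separately built) `llama_main` binary.

```shell
#!/bin/sh
# Hypothetical stand-ins for the README's placeholders -- adjust to your setup.
EXECUTORCH_ROOT="${EXECUTORCH_ROOT:-$HOME/executorch}"
MODEL_DIR="${MODEL_DIR:-$HOME/models/llama3_2}"

# The commit changes --model_path from the bare filename `llama3_2.pte` to the
# full path under examples/openvino/llama, so the runner can be launched from
# any working directory, not only the export directory.
CMD="./cmake-out/examples/models/llama/llama_main \
  --model_path=${EXECUTORCH_ROOT}/examples/openvino/llama/llama3_2.pte \
  --tokenizer_path=${MODEL_DIR}/tokenizer.model \
  --prompt=\"Your custom prompt\""

# Print the command instead of running it (the binary must be built first).
echo "$CMD"
```

Exporting the variables once and echoing the command before running it makes it easy to verify that both paths resolve before invoking the runner.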
