## Download and export the Llama 3.2 1B model
To get started with Llama 3, obtain the pre-trained parameters from [Meta's Llama Downloads](https://llama.meta.com/llama-downloads/) page. Request access by filling in your details, and read through and accept the Responsible Use Guide. This grants you a license and a download link that is valid for 24 hours. This exercise uses the Llama 3.2 1B Instruct model, but the same instructions apply to other model variants with minimal modification.
Install the `llama-stack` package using `pip`:
```bash
pip install llama-stack
```
Run the following command to download the model, and paste the download link from the email when prompted:
```bash
llama model download --source meta --model-id Llama3.2-1B-Instruct
```
When the download is finished, the installation path is printed as output.
```output
Successfully downloaded model to /<path-to-home>/.llama/checkpoints/Llama3.2-1B-Instruct
```
Verify by viewing the downloaded files under this path:
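A minimal check, assuming the default download location under your home directory, is to list the directory contents. The exact file names vary by model variant, so treat the comment below as illustrative rather than an exact listing.

```bash
# List the downloaded checkpoint files (the path is printed in the previous step).
# Typical contents include the model weights, params.json, and the tokenizer file.
ls -lh ~/.llama/checkpoints/Llama3.2-1B-Instruct
```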