README.md: 10 additions & 26 deletions
@@ -1,47 +1,29 @@
 # Llama Recipes: Examples to get started using the Llama models from Meta
 <!-- markdown-link-check-disable -->
-The 'llama-recipes' repository is a companion to the [Meta Llama](https://github.com/meta-llama/llama-models) models. We support the latest version, [Llama 3.1](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), in this repository. The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples here showcase how to run Llama locally, in the cloud, and on-prem.
+The 'llama-recipes' repository is a companion to the [Meta Llama](https://github.com/meta-llama/llama-models) models. We support the latest version, [Llama 3.2](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/), in this repository. The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples here showcase how to run Llama locally, in the cloud, and on-prem.

 <!-- markdown-link-check-enable -->
 > [!IMPORTANT]
-> Meta Llama 3.1 has a new prompt template and special tokens.
+> Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token `<|image|>` representing the input image for the multimodal models.
+>
 > | Token | Description |
 > |---|---|
 > `<\|begin_of_text\|>` | Specifies the start of the prompt. |
+> `<\|image\|>` | Represents the image tokens passed as an input to Llama. |
 > `<\|eot_id\|>` | This token signifies the end of a turn i.e. the end of the model's interaction either with the user or tool executor. |
 > `<\|eom_id\|>` | End of Message. A message represents a possible stopping point where the model can inform the execution environment that a tool call needs to be made. |
 > `<\|python_tag\|>` | A special tag used in the model’s response to signify a tool call. |
 > `<\|finetune_right_pad_id\|>` | Used for padding text sequences in a batch to the same length. |
 > `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant and ipython. |
 > `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn-conversations it's usually unused, this token is expected to be generated only by the base models. |
 >
-> A multiturn-conversation with Meta Llama 3.1 that includes tool-calling follows this structure:
 > Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change.
->
-> More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1).
+> More details on the prompt templates for image reasoning, tool-calling and code interpreter can be found [on the documentation website](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_2).
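To make the token table above concrete, here is a minimal sketch of how a single-turn prompt could be assembled by hand from these special tokens. It only illustrates the layout described in the linked prompt-format docs and is not code from this repository; real applications would normally rely on the tokenizer's chat template (for example `apply_chat_template` in Hugging Face `transformers`) rather than manual string formatting, and the system/user strings below are placeholders.

```python
# Illustrative sketch only: hand-assembling a Llama 3.1/3.2-style prompt from
# the special tokens listed in the table above. Prefer the tokenizer's chat
# template in real code.

def build_prompt(system: str, user: str, has_image: bool = False) -> str:
    """Assemble a single-turn prompt string using the header and turn tokens."""
    image_token = "<|image|>" if has_image else ""  # only used by the multimodal models
    return (
        "<|begin_of_text|>"                                   # start of the prompt
        "<|start_header_id|>system<|end_header_id|>\n\n"      # role header: system
        f"{system}<|eot_id|>"                                  # every message ends with <|eot_id|>
        "<|start_header_id|>user<|end_header_id|>\n\n"        # role header: user
        f"{image_token}{user}<|eot_id|>"                       # image token precedes the user text
        "<|start_header_id|>assistant<|end_header_id|>\n\n"   # model generates the assistant turn
    )


if __name__ == "__main__":
    print(build_prompt("You are a helpful assistant.", "Describe this image.", has_image=True))
```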
+

->
-> [!NOTE]
-> The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.
->
-> Make sure you update your local clone by running `git pull origin main`

 ## Table of Contents

-- [Llama Recipes: Examples to get started using the Meta Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta)
+- [Llama Recipes: Examples to get started using the Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta)
 - [Table of Contents](#table-of-contents)
 - [Getting Started](#getting-started)
 - [Prerequisites](#prerequisites)
@@ -50,7 +32,7 @@ The 'llama-recipes' repository is a companion to the [Meta Llama](https://github
 - [Install with pip](#install-with-pip)
 - [Install with optional dependencies](#install-with-optional-dependencies)
 - [Install from source](#install-from-source)
-- [Getting the Llama models](#getting-the-llama-models)
+- [Getting the Meta Llama models](#getting-the-meta-llama-models)
 - [Model conversion to Hugging Face](#model-conversion-to-hugging-face)
@@ -192,6 +174,8 @@ Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduc
 ## License
 <!-- markdown-link-check-disable -->

+See the License file for Meta Llama 3.2 [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) and Acceptable Use Policy [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/USE_POLICY.md)
+
 See the License file for Meta Llama 3.1 [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE) and Acceptable Use Policy [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/USE_POLICY.md)

 See the License file for Meta Llama 3 [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3/LICENSE) and Acceptable Use Policy [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3/USE_POLICY.md)