Commit 6ade4eb

update readme for 3.2
1 parent 3e39ed0 commit 6ade4eb

File tree: 1 file changed (+10 −26 lines changed)

README.md

Lines changed: 10 additions & 26 deletions
@@ -1,47 +1,29 @@
 # Llama Recipes: Examples to get started using the Llama models from Meta
 <!-- markdown-link-check-disable -->
-The 'llama-recipes' repository is a companion to the [Meta Llama](https://github.com/meta-llama/llama-models) models. We support the latest version, [Llama 3.1](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), in this repository. The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples here showcase how to run Llama locally, in the cloud, and on-prem.
+The 'llama-recipes' repository is a companion to the [Meta Llama](https://github.com/meta-llama/llama-models) models. We support the latest version, [Llama 3.2](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/), in this repository. The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples here showcase how to run Llama locally, in the cloud, and on-prem.
 
 <!-- markdown-link-check-enable -->
 > [!IMPORTANT]
-> Meta Llama 3.1 has a new prompt template and special tokens.
+> Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token `<|image|>` representing the input image for the multimodal models.
+>
 > | Token | Description |
 > |---|---|
 > `<\|begin_of_text\|>` | Specifies the start of the prompt. |
+> `<\|image\|>` | Represents the image tokens passed as an input to Llama. |
 > `<\|eot_id\|>` | This token signifies the end of a turn i.e. the end of the model's interaction either with the user or tool executor. |
 > `<\|eom_id\|>` | End of Message. A message represents a possible stopping point where the model can inform the execution environment that a tool call needs to be made. |
 > `<\|python_tag\|>` | A special tag used in the model’s response to signify a tool call. |
 > `<\|finetune_right_pad_id\|>` | Used for padding text sequences in a batch to the same length. |
 > `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant and ipython. |
 > `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn-conversations it's usually unused, this token is expected to be generated only by the base models. |
 >
-> A multiturn-conversation with Meta Llama 3.1 that includes tool-calling follows this structure:
-> ```
-> <|begin_of_text|><|start_header_id|>system<|end_header_id|>
->
-> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
->
-> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
->
-> <|python_tag|>{{ model_tool_call_1 }}<|eom_id|><|start_header_id|>ipython<|end_header_id|>
->
-> {{ tool_response }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
->
-> {{model_response_based_on_tool_response}}<|eot_id|>
-> ```
-> Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change.
->
-> More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1).
+> More details on the prompt templates for image reasoning, tool-calling and code interpreter can be found [on the documentation website](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_2).
+
 
->
-> [!NOTE]
-> The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.
->
-> Make sure you update your local clone by running `git pull origin main`
 
 ## Table of Contents
 
-- [Llama Recipes: Examples to get started using the Meta Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta)
+- [Llama Recipes: Examples to get started using the Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta)
 - [Table of Contents](#table-of-contents)
 - [Getting Started](#getting-started)
 - [Prerequisites](#prerequisites)
@@ -50,7 +32,7 @@ The 'llama-recipes' repository is a companion to the [Meta Llama](https://github
 - [Install with pip](#install-with-pip)
 - [Install with optional dependencies](#install-with-optional-dependencies)
 - [Install from source](#install-from-source)
-- [Getting the Llama models](#getting-the-llama-models)
+- [Getting the Meta Llama models](#getting-the-meta-llama-models)
 - [Model conversion to Hugging Face](#model-conversion-to-hugging-face)
 - [Repository Organization](#repository-organization)
 - [`recipes/`](#recipes)
@@ -192,6 +174,8 @@ Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduc
 ## License
 <!-- markdown-link-check-disable -->
 
+See the License file for Meta Llama 3.2 [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) and Acceptable Use Policy [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/USE_POLICY.md)
+
 See the License file for Meta Llama 3.1 [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE) and Acceptable Use Policy [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/USE_POLICY.md)
 
 See the License file for Meta Llama 3 [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3/LICENSE) and Acceptable Use Policy [here](https://github.com/meta-llama/llama-models/blob/main/models/llama3/USE_POLICY.md)
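For reference, the `<|image|>` token described in the updated README marks where the input image is passed in the user turn of the Llama 3.2 multimodal models. A minimal sketch of an image-reasoning prompt, in the same placeholder style as the removed tool-calling example (the authoritative template is the llama3_2 prompt-format page linked in the diff above):

```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

<|image|>{{ user_message }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

As with the text-only models, the turn ends with `<|eot_id|>` and the assistant header cues the model to generate its response.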
