README.md (5 additions, 16 deletions)
@@ -7,9 +7,9 @@ The 'llama-recipes' repository is a companion to the [Meta Llama 2](https://gith
  > | Token | Description |
  > |---|---|
  > `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
- > `<\|eot_id\|>` | This signifies the end of the message in a turn. The generate function needs to be set up as shown below or in [this example](./recipes/inference/local_inference/chat_completion/chat_completion.py) to terminate the generation after the turn.|
+ > `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn conversations it is usually unused; every message is terminated with `<\|eot_id\|>` instead.|
+ > `<\|eot_id\|>` | This token signifies the end of the message in a turn, i.e. the end of a single message by a system, user or assistant role as shown below.|
  > `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. |
- > `<\|end_of_text\|>` | This is equivalent to the EOS token. Its usually not used during multiturn-conversations. Instead, each message is terminated with `<\|eot_id\|>` |
  >
  > A multiturn conversation with Llama 3 follows this prompt template:
  > ```
@@ -23,21 +23,10 @@ The 'llama-recipes' repository is a companion to the [Meta Llama 2](https://gith
  > Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change.
  >
- > To signal the end of the current message the model emits the `<\|eot_id\|>` token. To terminate the generation we need to call the model's generate function as follows:
- > ```
- > terminators = [
- >     tokenizer.eos_token_id,
- >     tokenizer.convert_tokens_to_ids("<|eot_id|>")
- > ]
- > ...
- > outputs = model.generate(
- >     ...
- >     eos_token_id=terminators,
- > )
- > ```
+ > More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3).
  >
- > More details on the new tokenizer and prompt template: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3
  > [!NOTE]
  > The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.
  >
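For context on the snippet this hunk removes: fleshed out, the termination setup reads roughly like the sketch below. Only the `terminators` list and the `eos_token_id=terminators` argument come from the removed snippet; the imports, the `meta-llama/Meta-Llama-3-8B-Instruct` checkpoint, the example chat, and the `apply_chat_template` call are illustrative assumptions added here to make it self-contained.

```
# Sketch only. The terminator setup mirrors the snippet removed above;
# everything else (imports, model id, example chat) is assumed boilerplate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumption: any Llama 3 instruct checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# apply_chat_template assembles the header and <|eot_id|> tokens described
# in the table above into the multiturn prompt format.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about recipes."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stop on either the EOS token or <|eot_id|>, i.e. at the end of the
# assistant's turn.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = model.generate(input_ids, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Passing both ids means generation halts on whichever terminator the model emits first, so a turn ends cleanly instead of running on into the next header.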
@@ -69,7 +58,7 @@ These instructions will get you a copy of the project up and running on your loc
  ### Prerequisites

  #### PyTorch Nightlies
- I you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.
+ If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.

  ### Installing
  Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.
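For orientation, the two install paths mentioned here look roughly as follows. This is a sketch: `llama-recipes` is the pip distribution named above, while the cu121 nightly index URL is only an illustrative assumption; take the correct `--extra-index-url` value for your platform from the PyTorch guide.

```
# Stable release from PyPI:
pip install llama-recipes

# Or install a PyTorch nightly first (the cu121 index below is an assumed
# example; look up the right URL at https://pytorch.org/get-started/locally/):
pip install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cu121
pip install llama-recipes
```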