> | Token | Description |
> |---|---|
> | `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
> | `<\|eot_id\|>` | This signifies the end of the message in a turn. The generate function needs to be set up as shown below or in [this example](./recipes/inference/local_inference/chat_completion/chat_completion.py) to terminate the generation after the turn. |
> | `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. |
> | `<\|end_of_text\|>` | This is equivalent to the EOS token. It's usually not used during multi-turn conversations; instead, each message is terminated with `<\|eot_id\|>`. |
>
> A multi-turn conversation with Llama 3 follows this prompt template:
> ```
> <|begin_of_text|><|start_header_id|>system<|end_header_id|>
>
> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
>
> {{ user_message }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
>
> {{ model_answer }}<|eot_id|>
> ```
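>
> You don't have to assemble this template by hand: the Meta Llama 3 tokenizer ships with a chat template that renders it for you. Below is a minimal sketch, assuming the `transformers` library and access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` checkpoint; the example messages are illustrative:
> ```
> from transformers import AutoTokenizer
>
> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
>
> messages = [
>     {"role": "system", "content": "You are a helpful assistant."},
>     {"role": "user", "content": "What is the capital of France?"},
> ]
>
> # Render the conversation into the prompt template shown above;
> # add_generation_prompt=True appends the assistant header so the
> # model continues with its own reply.
> prompt = tokenizer.apply_chat_template(
>     messages, tokenize=False, add_generation_prompt=True
> )
> print(prompt)
> ```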
>
> To signal the end of the current message, the model emits the `<|eot_id|>` token. To terminate the generation we need to call the model's `generate` function as follows:
> ```
> terminators = [
>     tokenizer.eos_token_id,
>     tokenizer.convert_tokens_to_ids("<|eot_id|>")
> ]
> ...
> outputs = model.generate(
>     ...
>     eos_token_id=terminators,
> )
> ```
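>
> For a complete picture, here is a short end-to-end sketch; the model id, dtype, and `max_new_tokens` value are illustrative assumptions rather than settings prescribed by this repo:
> ```
> import torch
> from transformers import AutoModelForCausalLM, AutoTokenizer
>
> model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint
> tokenizer = AutoTokenizer.from_pretrained(model_id)
> model = AutoModelForCausalLM.from_pretrained(
>     model_id, torch_dtype=torch.bfloat16, device_map="auto"
> )
>
> messages = [{"role": "user", "content": "Write a haiku about GPUs."}]
> input_ids = tokenizer.apply_chat_template(
>     messages, add_generation_prompt=True, return_tensors="pt"
> ).to(model.device)
>
> # Stop on either the regular EOS token or <|eot_id|>, which marks the
> # end of the assistant's turn.
> terminators = [
>     tokenizer.eos_token_id,
>     tokenizer.convert_tokens_to_ids("<|eot_id|>"),
> ]
>
> outputs = model.generate(input_ids, max_new_tokens=256, eos_token_id=terminators)
> # Decode only the newly generated tokens, skipping the prompt.
> print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
> ```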
>
> More details on the new tokenizer and prompt template: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3

> [!NOTE]
> The llama-recipes repository was recently refactored to promote a better developer experience when using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.