README.md: 2 additions & 2 deletions
@@ -4,15 +4,15 @@ The 'llama-recipes' repository is a companion to the [Meta Llama 3](https://gith
 <!-- markdown-link-check-enable -->
 > [!IMPORTANT]
-> Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer).
+> Meta Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer).
 > | Token | Description |
 > |---|---|
 > | `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
 > | `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn conversations it is usually unused; every message is terminated with `<\|eot_id\|>` instead. |
 > | `<\|eot_id\|>` | This token signifies the end of the message in a turn, i.e. the end of a single message by a system, user, or assistant role as shown below. |
 > | `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles are: system, user, and assistant. |
 >
-> A multiturn conversation with Llama 3 follows this prompt template:
+> A multiturn conversation with Meta Llama 3 follows this prompt template:
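The special tokens in the table above can be assembled into a prompt programmatically. Below is a minimal sketch of that assembly, not a helper from llama-recipes itself; the function name `format_llama3_prompt` and the list-of-dicts message shape are illustrative assumptions, though the token layout follows the template the note describes.

```python
def format_llama3_prompt(messages):
    """Build a Meta Llama 3 prompt string from a multiturn conversation.

    `messages` is a list of {"role": ..., "content": ...} dicts, where role
    is one of "system", "user", or "assistant". (Hypothetical helper; the
    token layout mirrors the special tokens documented above.)
    """
    # <|begin_of_text|> plays the role of the BOS token.
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each turn's role is enclosed in header tokens, and each message
        # is terminated with <|eot_id|> rather than <|end_of_text|>.
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += f"{msg['content']}<|eot_id|>"
    # Open an assistant header so the model generates the next reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt
```

For example, a system message followed by one user turn produces a prompt that starts with `<|begin_of_text|><|start_header_id|>system<|end_header_id|>` and ends with an open assistant header, ready for generation.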