Commit 79aa704

Adapt readme + check_completion.py to reflect that no manual change is needed to support eot_id
1 parent 8765312 commit 79aa704

File tree

2 files changed: +5 −22 lines

README.md

Lines changed: 5 additions & 16 deletions
@@ -7,9 +7,9 @@ The 'llama-recipes' repository is a companion to the [Meta Llama 2](https://gith
 > | Token | Description |
 > |---|---|
 > `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
-> `<\|eot_id\|>` | This signifies the end of the message in a turn. The generate function needs to be set up as shown below or in [this example](./recipes/inference/local_inference/chat_completion/chat_completion.py) to terminate the generation after the turn.|
+> `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn conversations it is usually unused; every message is terminated with `<\|eot_id\|>` instead.|
+> `<\|eot_id\|>` | This token signifies the end of the message in a turn, i.e. the end of a single message by a system, user or assistant role as shown below.|
 > `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. |
-> `<\|end_of_text\|>` | This is equivalent to the EOS token. Its usually not used during multiturn-conversations. Instead, each message is terminated with `<\|eot_id\|>` |
 >
 > A multiturn-conversation with Llama 3 follows this prompt template:
 > ```
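The prompt template documented in the hunk above can be illustrated with a small standalone helper. This is a hypothetical sketch (the `format_llama3_prompt` name is made up, not part of the repo); in real code the tokenizer's `apply_chat_template` produces this format for you:

```python
# Hypothetical sketch of the Llama 3 chat template described above.
# In practice, tokenizer.apply_chat_template handles this for you.
def format_llama3_prompt(messages):
    """Render a list of {role, content} dicts into the Llama 3 prompt format."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each message is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(f"{msg['content']}<|eot_id|>")
    # Leave an open assistant header so the model generates the next turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

On a two-message dialog this yields a prompt that opens with `<|begin_of_text|>`, trails each message with `<|eot_id|>`, and ends with an open assistant header for the model to complete.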
@@ -23,21 +23,10 @@ The 'llama-recipes' repository is a companion to the [Meta Llama 2](https://gith
 >
 > {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 > ```
+> Each message is trailed by an `<|eot_id|>` token before a new header starts, signaling a role change.
 >
-> To signal the end of the current message the model emits the `<\|eot_id\|>` token. To terminate the generation we need to call the model's generate function as follows:
-> ```
-> terminators = [
->     tokenizer.eos_token_id,
->     tokenizer.convert_tokens_to_ids("<|eot_id|>")
-> ]
-> ...
-> outputs = model.generate(
->     ...
->     eos_token_id=terminators,
-> )
-> ```
+> More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3).
 >
-> More details on the new tokenizer and prompt template: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3
 > [!NOTE]
 > The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.
 >
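The README snippet removed in the hunk above configured an explicit `terminators` list for `model.generate`. Per the commit message, this manual setup is no longer needed, presumably because generation now stops on `<|eot_id|>` out of the box (the model's generation config is assumed to list it among its EOS token ids). The stopping behavior itself can be sketched as a toy loop; the token ids here are assumptions and this is an illustration, not the transformers API:

```python
# Toy illustration of eos-based stopping. The token ids are assumed;
# transformers' model.generate implements this logic internally.
EOT_ID = 128009          # assumed id for <|eot_id|>
END_OF_TEXT_ID = 128001  # assumed id for <|end_of_text|>

def generate_until_eos(step_fn, eos_token_ids, max_new_tokens=32):
    """Greedy-style loop: call step_fn for the next token id and stop
    as soon as it emits any id in eos_token_ids (mirroring what
    passing a list of eos token ids to generate would do)."""
    out = []
    for _ in range(max_new_tokens):
        token = step_fn()
        out.append(token)
        if token in eos_token_ids:
            break
    return out

# Fake "model" that emits three tokens and then the end-of-turn token.
stream = iter([10, 11, 12, EOT_ID, 99, 100])
tokens = generate_until_eos(lambda: next(stream),
                            eos_token_ids={EOT_ID, END_OF_TEXT_ID})
print(tokens)  # stops at the <|eot_id|> token, never reaching 99 or 100
```

If the generation config already includes `<|eot_id|>` in its EOS set, callers get this behavior without building a `terminators` list themselves, which is exactly what the commit removes.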
@@ -69,7 +58,7 @@ These instructions will get you a copy of the project up and running on your loc
 ### Prerequisites
 
 #### PyTorch Nightlies
-I you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.
+If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.
 
 ### Installing
 Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.

recipes/inference/local_inference/chat_completion/chat_completion.py

Lines changed: 0 additions & 6 deletions
@@ -75,11 +75,6 @@ def main(
 
     chats = tokenizer.apply_chat_template(dialogs)
 
-    terminators = [
-        tokenizer.eos_token_id,
-        tokenizer.convert_tokens_to_ids("<|eot_id|>")
-    ]
-
     with torch.no_grad():
         for idx, chat in enumerate(chats):
             safety_checker = get_safety_checker(enable_azure_content_safety,
@@ -118,7 +113,6 @@ def main(
                 top_k=top_k,
                 repetition_penalty=repetition_penalty,
                 length_penalty=length_penalty,
-                eos_token_id=terminators,
                 **kwargs
             )
