diff --git a/chapters/en/chapter2/3.mdx b/chapters/en/chapter2/3.mdx
index cf6309eb1..582ad05fe 100644
--- a/chapters/en/chapter2/3.mdx
+++ b/chapters/en/chapter2/3.mdx
@@ -51,7 +51,7 @@ config.json model.safetensors
 
 If you look inside the *config.json* file, you'll see all the necessary attributes needed to build the model architecture. This file also contains some metadata, such as where the checkpoint originated and what 🤗 Transformers version you were using when you last saved the checkpoint.
 
-The *pytorch_model.safetensors* file is known as the state dictionary; it contains all your model's weights. The two files work together: the configuration file is needed to know about the model architecture, while the model weights are the parameters of the model.
+The *model.safetensors* file is known as the state dictionary; it contains all your model's weights. The two files work together: the configuration file is needed to know about the model architecture, while the model weights are the parameters of the model.
 
 To reuse a saved model, use the `from_pretrained()` method again:
 
@@ -135,7 +135,7 @@ You'll notice that the tokenizer has added special tokens — `[CLS]` and `[SEP]
 You can encode multiple sentences at once, either by batching them together (we'll discuss this soon) or by passing a list:
 
 ```py
-encoded_input = tokenizer("How are you?", "I'm fine, thank you!")
+encoded_input = tokenizer(["How are you?", "I'm fine, thank you!"])
 print(encoded_input)
 ```
 