diff --git a/chapters/en/chapter1/4.mdx b/chapters/en/chapter1/4.mdx
index 3870b541f..2f31efe19 100644
--- a/chapters/en/chapter1/4.mdx
+++ b/chapters/en/chapter1/4.mdx
@@ -35,7 +35,6 @@ The [Transformer architecture](https://arxiv.org/abs/1706.03762) was introduced
 - **May 2020**: [GPT-3](https://huggingface.co/papers/2005.14165), an even bigger version of GPT-2 that is able to perform well on a variety of tasks without the need for fine-tuning (called _zero-shot learning_)
 - **January 2022**: [InstructGPT](https://huggingface.co/papers/2203.02155), a version of GPT-3 that was trained to follow instructions better
 
-This list is far from comprehensive, and is just meant to highlight a few of the different kinds of Transformer models. Broadly, they can be grouped into three categories:
 
 - **January 2023**: [Llama](https://huggingface.co/papers/2302.13971), a large language model that is able to generate text in a variety of languages.
 
@@ -44,6 +43,7 @@ This list is far from comprehensive, and is just meant to highlight a few of the
 - **May 2024**: [Gemma 2](https://huggingface.co/papers/2408.00118), a family of lightweight, state-of-the-art open models ranging from 2B to 27B parameters that incorporate interleaved local-global attentions and group-query attention, with smaller models trained using knowledge distillation to deliver performance competitive with models 2-3 times larger.
 - **November 2024**: [SmolLM2](https://huggingface.co/papers/2502.02737), a state-of-the-art small language model (135 million to 1.7 billion parameters) that achieves impressive performance despite its compact size, unlocking new possibilities for mobile and edge devices.
 
+This list is far from comprehensive, and is just meant to highlight a few of the different kinds of Transformer models. Broadly, they can be grouped into three categories:
 
 - GPT-like (also called _auto-regressive_ Transformer models)
 - BERT-like (also called _auto-encoding_ Transformer models)
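As a quick illustration of the two categories the moved paragraph introduces (not part of the diff itself), the sketch below contrasts a GPT-like auto-regressive model with a BERT-like auto-encoding one using the 🤗 Transformers `pipeline` API. The `gpt2` and `bert-base-uncased` checkpoints are arbitrary example choices, not models named in this PR.

```python
# Illustrative sketch (not from the PR): the two model categories in practice.
from transformers import pipeline

# GPT-like / auto-regressive: trained to predict the next token, so it generates text.
generator = pipeline("text-generation", model="gpt2")
print(generator("Transformer models can", max_new_tokens=15)[0]["generated_text"])

# BERT-like / auto-encoding: trained to reconstruct masked tokens, so it fills in blanks.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
print(unmasker("Transformer models can [MASK] text.")[0]["token_str"])
```

The same split explains the tasks each family is typically used for: auto-regressive models for generation, auto-encoding models for understanding tasks such as classification and masked-token prediction.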