Commit 7f1d487

Merge pull request #1084 from dhatri713/fix-typo
fix typo in (en) chapter 1, sub-chapter 4
2 parents 12acf75 + 6dc3025 commit 7f1d487

File tree

1 file changed: 3 additions, 2 deletions
  • chapters/en/chapter1


chapters/en/chapter1/4.mdx

Lines changed: 3 additions & 2 deletions
@@ -34,8 +34,7 @@ The [Transformer architecture](https://arxiv.org/abs/1706.03762) was introduced
 
 - **May 2020**, [GPT-3](https://huggingface.co/papers/2005.14165), an even bigger version of GPT-2 that is able to perform well on a variety of tasks without the need for fine-tuning (called _zero-shot learning_)
 
-- **January 2022**: [InstructGPT](https://huggingface.co/papers/2203.02155), a version of GPT-3 that was trained to follow instructions better
-This list is far from comprehensive, and is just meant to highlight a few of the different kinds of Transformer models. Broadly, they can be grouped into three categories:
+- **January 2022**: [InstructGPT](https://huggingface.co/papers/2203.02155), a version of GPT-3 that was trained to follow instructions better.
 
 - **January 2023**: [Llama](https://huggingface.co/papers/2302.13971), a large language model that is able to generate text in a variety of languages.
 

@@ -45,6 +44,8 @@ This list is far from comprehensive, and is just meant to highlight a few of the
 
 - **November 2024**: [SmolLM2](https://huggingface.co/papers/2502.02737), a state-of-the-art small language model (135 million to 1.7 billion parameters) that achieves impressive performance despite its compact size, and unlocking new possibilities for mobile and edge devices.
 
+This list is far from comprehensive, and is just meant to highlight a few of the different kinds of Transformer models. Broadly, they can be grouped into three categories:
+
 - GPT-like (also called _auto-regressive_ Transformer models)
 - BERT-like (also called _auto-encoding_ Transformer models)
 - T5-like (also called _sequence-to-sequence_ Transformer models)

0 commit comments
