
Commit 6796f92

Merge branch 'main' of github.com:huggingface/course into unit-12-discord

2 parents: c050cd8 + bb697e2

File tree: 269 files changed, 41,895 additions and 1,952 deletions.


.github/workflows/build_documentation.yml

Lines changed: 1 addition & 1 deletion

@@ -14,6 +14,6 @@ jobs:
       package: course
       path_to_docs: course/chapters/
       additional_args: --not_python_module
-      languages: ar bn de en es fa fr gj he hi id it ja ko ne pt ru rum th tr vi zh-CN zh-TW
+      languages: ar bn de en es fa fr gj he hi id it ja ko my ne pl pt ru ro te th tr vi zh-CN zh-TW
     secrets:
       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}

.github/workflows/build_pr_documentation.yml

Lines changed: 1 addition & 1 deletion

@@ -16,4 +16,4 @@ jobs:
       package: course
       path_to_docs: course/chapters/
       additional_args: --not_python_module
-      languages: ar bn de en es fa fr gj he hi id it ja ko ne pt ru rum th tr vi zh-CN zh-TW
+      languages: ar bn de en es fa fr gj he hi id it ja ko my ne pl pt ru ro te th tr vi zh-CN zh-TW

README.md

Lines changed: 3 additions & 2 deletions

@@ -21,12 +21,13 @@ This repo contains the content that's used to create the **[Hugging Face course]
 | [Korean](https://huggingface.co/course/ko/chapter1/1) (WIP) | [`chapters/ko`](https://github.com/huggingface/course/tree/main/chapters/ko) | [@Doohae](https://github.com/Doohae), [@wonhyeongseo](https://github.com/wonhyeongseo), [@dlfrnaos19](https://github.com/dlfrnaos19), [@nsbg](https://github.com/nsbg) |
 | [Portuguese](https://huggingface.co/course/pt/chapter1/1) (WIP) | [`chapters/pt`](https://github.com/huggingface/course/tree/main/chapters/pt) | [@johnnv1](https://github.com/johnnv1), [@victorescosta](https://github.com/victorescosta), [@LincolnVS](https://github.com/LincolnVS) |
 | [Russian](https://huggingface.co/course/ru/chapter1/1) (WIP) | [`chapters/ru`](https://github.com/huggingface/course/tree/main/chapters/ru) | [@pdumin](https://github.com/pdumin), [@svv73](https://github.com/svv73), [@blademoon](https://github.com/blademoon) |
+| [Telugu](https://huggingface.co/course/te/chapter0/1) (WIP) | [`chapters/te`](https://github.com/huggingface/course/tree/main/chapters/te) | [@Ajey95](https://github.com/Ajey95), [@RahulKonda18](https://github.com/RahulKonda18) |
 | [Thai](https://huggingface.co/course/th/chapter1/1) (WIP) | [`chapters/th`](https://github.com/huggingface/course/tree/main/chapters/th) | [@peeraponw](https://github.com/peeraponw), [@a-krirk](https://github.com/a-krirk), [@jomariya23156](https://github.com/jomariya23156), [@ckingkan](https://github.com/ckingkan) |
 | [Turkish](https://huggingface.co/course/tr/chapter1/1) (WIP) | [`chapters/tr`](https://github.com/huggingface/course/tree/main/chapters/tr) | [@tanersekmen](https://github.com/tanersekmen), [@mertbozkir](https://github.com/mertbozkir), [@ftarlaci](https://github.com/ftarlaci), [@akkasayaz](https://github.com/akkasayaz) |
 | [Vietnamese](https://huggingface.co/course/vi/chapter1/1) | [`chapters/vi`](https://github.com/huggingface/course/tree/main/chapters/vi) | [@honghanhh](https://github.com/honghanhh) |
 | [Chinese (simplified)](https://huggingface.co/course/zh-CN/chapter1/1) | [`chapters/zh-CN`](https://github.com/huggingface/course/tree/main/chapters/zh-CN) | [@zhlhyx](https://github.com/zhlhyx), [petrichor1122](https://github.com/petrichor1122), [@1375626371](https://github.com/1375626371) |
-| [Chinese (traditional)](https://huggingface.co/course/zh-TW/chapter1/1) (WIP) | [`chapters/zh-TW`](https://github.com/huggingface/course/tree/main/chapters/zh-TW) | [@davidpeng86](https://github.com/davidpeng86) |
+| [Chinese (traditional)](https://huggingface.co/course/zh-TW/chapter1/1) (WIP) | [`chapters/zh-TW`](https://github.com/huggingface/course/tree/main/chapters/zh-TW) | [@davidpeng86](https://github.com/davidpeng86), [@thliang01](https://github.com/thliang01) |
+| [Romanian](https://huggingface.co/course/ro/chapter1/1) (WIP) | [`chapters/ro`](https://github.com/huggingface/course/tree/main/chapters/ro) | [@Sigmoid](https://github.com/SigmoidAI), [@eduard-balamatiuc](https://github.com/eduard-balamatiuc), [@FriptuLudmila](https://github.com/FriptuLudmila), [@tokyo-s](https://github.com/tokyo-s), [@hbkdesign](https://github.com/hbkdesign), [@grumpycatyo-collab](https://github.com/grumpycatyo-collab), [@Angroys](https://github.com/Angroys) |

 ### Translating the course into your language

chapters/de/chapter3/2.mdx

Lines changed: 2 additions & 1 deletion

@@ -27,7 +27,8 @@ We continue with the example from the [previous chapter](/course/chapter2). …

 ```python
 import torch
-from transformers import AdamW, AutoTokenizer, AutoModelForSequenceClassification
+from torch.optim import AdamW
+from transformers import AutoTokenizer, AutoModelForSequenceClassification

 # Just like before
 checkpoint = "bert-base-uncased"
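
The change in this hunk (and in the ones that follow) is the same migration throughout: `AdamW` is now imported from `torch.optim` rather than from `transformers`, where it had been deprecated. For reference, a minimal sketch of the updated pattern, using the same checkpoint and defaults that appear in these diffs:

```python
import torch
from torch.optim import AdamW  # replaces the deprecated transformers.AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Just like before
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Same default learning rate the Trainer uses.
optimizer = AdamW(model.parameters(), lr=5e-5)
```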

chapters/de/chapter3/4.mdx

Lines changed: 7 additions & 4 deletions

@@ -105,7 +105,7 @@ All 🤗 Transformers models return the loss when `labels` is given …
 We are almost ready to write our training loop! Only two things are missing: an optimizer and a learning-rate scheduler. Since we are trying to replicate what the `Trainer` did automatically, we will use the same defaults. The optimizer the `Trainer` uses is called "AdamW" and is largely the same as Adam, apart from a twist for weight decay regularization (see ["Decoupled Weight Decay Regularization"](https://arxiv.org/abs/1711.05101) by Ilya Loshchilov and Frank Hutter):

 ```py
-from transformers import AdamW
+from torch.optim import AdamW

 optimizer = AdamW(model.parameters(), lr=5e-5)
 ```

@@ -209,7 +209,8 @@ Here too, your results will vary slightly because of the randomness in the initializ…
 The training loop we defined earlier works fine on a single CPU or GPU. But with the [🤗 Accelerate](https://github.com/huggingface/accelerate) library, a few adjustments let us implement distributed training on multiple GPUs or TPUs. Starting with the creation of the training and validation dataloaders, our manual training loop now looks like this:

 ```py
-from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+from torch.optim import AdamW
+from transformers import AutoModelForSequenceClassification, get_scheduler

 model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
 optimizer = AdamW(model.parameters(), lr=3e-5)

@@ -246,7 +247,8 @@ And here are the changes:

 ```diff
 + from accelerate import Accelerator
-  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+  from torch.optim import AdamW
+  from transformers import AutoModelForSequenceClassification, get_scheduler

 + accelerator = Accelerator()

@@ -298,7 +300,8 @@ If you want to experiment with it, here is what the complete train…

 ```py
 from accelerate import Accelerator
-from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+from torch.optim import AdamW
+from transformers import AutoModelForSequenceClassification, get_scheduler

 accelerator = Accelerator()
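
Taken together, these hunks leave the Accelerate version of the loop intact and only swap the optimizer import. As a point of reference, a minimal sketch of that loop under the chapter's assumptions (`checkpoint` and `train_dataloader` are defined earlier in the chapter; the epoch count and scheduler settings mirror the chapter's defaults):

```python
from accelerate import Accelerator
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_scheduler

accelerator = Accelerator()

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)

# prepare() wraps the objects for the current hardware setup
# (single GPU, multi-GPU, or TPU) without further code changes.
train_dataloader, model, optimizer = accelerator.prepare(
    train_dataloader, model, optimizer
)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```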

chapters/en/_toctree.yml

Lines changed: 22 additions & 13 deletions

@@ -8,23 +8,25 @@
   - local: chapter1/1
     title: Introduction
   - local: chapter1/2
-    title: Natural Language Processing
+    title: Natural Language Processing and Large Language Models
   - local: chapter1/3
     title: Transformers, what can they do?
   - local: chapter1/4
     title: How do Transformers work?
   - local: chapter1/5
-    title: Encoder models
+    title: How 🤗 Transformers solve tasks
   - local: chapter1/6
-    title: Decoder models
+    title: Transformer Architectures
   - local: chapter1/7
-    title: Sequence-to-sequence models
+    title: Quick quiz
   - local: chapter1/8
-    title: Bias and limitations
+    title: Inference with LLMs
   - local: chapter1/9
-    title: Summary
+    title: Bias and limitations
   - local: chapter1/10
-    title: End-of-chapter quiz
+    title: Summary
+  - local: chapter1/11
+    title: Certification exam
     quiz: 1

 - title: 2. Using 🤗 Transformers
@@ -44,6 +46,8 @@
   - local: chapter2/7
     title: Basic usage completed!
   - local: chapter2/8
+    title: Optimized Inference Deployment
+  - local: chapter2/9
     title: End-of-chapter quiz
     quiz: 2

@@ -54,13 +58,14 @@
   - local: chapter3/2
     title: Processing the data
   - local: chapter3/3
-    title: Fine-tuning a model with the Trainer API or Keras
-    local_fw: { pt: chapter3/3, tf: chapter3/3_tf }
+    title: Fine-tuning a model with the Trainer API
   - local: chapter3/4
-    title: A full training
+    title: A full training loop
   - local: chapter3/5
-    title: Fine-tuning, Check!
+    title: Understanding Learning Curves
   - local: chapter3/6
+    title: Fine-tuning, Check!
+  - local: chapter3/7
     title: End-of-chapter quiz
     quiz: 3

@@ -126,7 +131,7 @@
     title: End-of-chapter quiz
     quiz: 6

-- title: 7. Main NLP tasks
+- title: 7. Classical NLP tasks
   sections:
   - local: chapter7/1
     title: Introduction
@@ -143,7 +148,7 @@
   - local: chapter7/7
     title: Question answering
   - local: chapter7/8
-    title: Mastering NLP
+    title: Mastering LLMs
   - local: chapter7/9
     title: End-of-chapter quiz
     quiz: 7
@@ -238,11 +243,15 @@
     title: Reinforcement Learning on LLMs
   - local: chapter12/3
     title: The Aha Moment in the DeepSeek R1 Paper
+  - local: chapter12/3a
+    title: Advanced Understanding of GRPO in DeepSeekMath
   - local: chapter12/4
     title: Implementing GRPO in TRL
   - local: chapter12/5
     title: Practical Exercise to Fine-tune a model with GRPO
   - local: chapter12/6
+    title: Practical Exercise with Unsloth
+  - local: chapter12/7
     title: Coming soon...

 - title: Course Events
