Once that's run, commit any changes, open a pull request, and tag [@lewtun](https://github.com/lewtun) and [@stevhliu](https://github.com/stevhliu) for a review. If you know other native speakers who can help review the translation, tag them as well. Congratulations, you've now completed your first translation 🥳!
> 🚨 To build the course on the website, double-check that your language code exists in the `languages` field of the `build_documentation.yml` and `build_pr_documentation.yml` files in the `.github` folder. If not, add it in alphabetical order.
If you get stuck, check out one of the existing chapters -- this will often show you the expected format.
Once you are happy with the content, open a pull request and tag [@lewtun](https://github.com/lewtun) for a review. We recommend adding the first chapter draft as a single pull request -- the team will then provide feedback internally to iterate on the content 🤗!
## Deploying to hf.co/course (for HF staff)
The course content is deployed to [hf.co/course](https://huggingface.co/learn/nlp-course/chapter1/1) by triggering the [GitHub CI](.github/workflows/build_documentation.yml) from the `release` branch. To trigger the build, first create a new branch from `main` that will be used to update the current state on `release`:
```shell
git checkout main
git checkout -b bump_release
```
Next, resolve any conflicts between the `release` and `bump_release` branches. Since resolving these manually is tiresome, we can merge with the `ours` strategy, which keeps the contents of `bump_release` (i.e. the latest changes from `main`) and simply records `release` as merged:
```shell
git checkout bump_release
git merge -s ours release
```
Next, push the `bump_release` branch and open a PR against `release` (not `main`!). Here is an example [PR](https://github.com/huggingface/course/pull/768). Once the CI is green, merge the PR; this will trigger the GitHub CI to build the new course content. The build takes around 10-15 minutes, after which the latest changes will be visible on [hf.co/course](https://huggingface.co/learn/nlp-course/chapter1/1)!
## 🙌 Acknowledgements
The structure of this repo and README are inspired by the wonderful [Advanced NLP with spaCy](https://github.com/ines/spacy-course) course.
`chapters/de/chapter3/4.mdx` (7 additions, 4 deletions)
All 🤗 Transformers models return the loss when `labels` are provided.
We are almost ready to write our training loop! Only two things are missing: an optimizer and a learning rate scheduler. Since we are trying to replicate what the `Trainer` did automatically, we will use the same defaults. The optimizer that the `Trainer` uses is called `AdamW`, which is mostly the same as Adam except for a twist on weight decay regularization (see ["Decoupled Weight Decay Regularization"](https://arxiv.org/abs/1711.05101) by Ilya Loshchilov and Frank Hutter):
```diff
- from transformers import AdamW
+ from torch.optim import AdamW

  optimizer = AdamW(model.parameters(), lr=5e-5)
```
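The paragraph above says two things are missing -- an optimizer *and* a learning rate scheduler -- but only the optimizer appears in this excerpt. As a minimal sketch of the matching scheduler setup (assuming `optimizer`, `train_dataloader`, and the number of epochs are defined elsewhere in the chapter), the linear schedule can be created with `get_scheduler` from 🤗 Transformers:

```py
from transformers import get_scheduler

num_epochs = 3  # illustrative value, not taken from the diff above
# `optimizer` and `train_dataloader` are assumed to exist from the surrounding chapter
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)
```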
Here too, your results will vary slightly because of the randomness in the initialization.
The training loop we defined earlier works fine on a single CPU or GPU. But with the [🤗 Accelerate](https://github.com/huggingface/accelerate) library, we can enable distributed training on multiple GPUs or TPUs with just a few adjustments. Starting from the creation of the training and validation dataloaders, our manual training loop now looks like this:
```diff
- from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+ from torch.optim import AdamW
+ from transformers import AutoModelForSequenceClassification, get_scheduler

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)
```
And here are the changes:
```diff
+ from accelerate import Accelerator
  from torch.optim import AdamW
  from transformers import AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()
```
If you want to experiment with it, here is what the complete training loop looks like with 🤗 Accelerate:
```diff
  from accelerate import Accelerator
- from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+ from torch.optim import AdamW
+ from transformers import AutoModelForSequenceClassification, get_scheduler
```
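The excerpt above cuts off after the imports. As a hedged sketch of how the full 🤗 Accelerate loop fits together (assuming `checkpoint` and `train_dataloader` are defined earlier in the chapter), the key changes are preparing the objects with `accelerator.prepare()` and replacing `loss.backward()` with `accelerator.backward(loss)`:

```py
from accelerate import Accelerator
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_scheduler

# Sketch only: `checkpoint` and `train_dataloader` are assumed to be defined
# earlier in the chapter.
accelerator = Accelerator()

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)

# Let 🤗 Accelerate move the dataloader, model, and optimizer to the right device(s)
train_dataloader, model, optimizer = accelerator.prepare(train_dataloader, model, optimizer)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        # accelerator.backward() replaces loss.backward() so gradients are
        # synchronized correctly in distributed settings
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```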