# Model Garden NLP Common Training Driver

[train.py](https://github.com/tensorflow/models/blob/master/official/nlp/train.py)
is the common training driver that supports multiple
NLP tasks (e.g., pre-training, GLUE and SQuAD fine-tuning, etc.) and multiple
models (e.g., BERT, ALBERT, MobileBERT, etc.).

## Experiment Configuration

[train.py](https://github.com/tensorflow/models/blob/master/official/nlp/train.py)
is driven by configs defined by the [ExperimentConfig](https://github.com/tensorflow/models/blob/master/official/core/config_definitions.py),
including configurations for `task`, `trainer` and `runtime`. The pre-defined
NLP-related [ExperimentConfig](https://github.com/tensorflow/models/blob/master/official/core/config_definitions.py) can be found in
[configs/experiment_configs.py](https://github.com/tensorflow/models/blob/master/official/nlp/configs/experiment_configs.py).
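For instance, a pre-defined experiment can be selected by name and individual
config fields overridden on the command line. The following is a minimal
sketch; the experiment name and all override values are illustrative
placeholders, not the repository's exact recipe:

```shell
# Minimal sketch: select a pre-defined experiment and override nested config
# fields via dotted paths into task/trainer/runtime. All values below are
# illustrative placeholders.
python3 train.py \
  --experiment=bert/sentence_prediction \
  --mode=train \
  --model_dir=/tmp/nlp_experiment \
  --params_override='trainer.train_steps=1000,task.validation_data.input_path=/path/to/eval.tf_record'
```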
## Run on Cloud TPUs

Next, we will describe how to run
[train.py](https://github.com/tensorflow/models/blob/master/official/nlp/train.py)
on Cloud TPUs.

### Setup

First, you need to create a `tf-nightly` TPU, and install the Model Garden
dependencies with `pip3 install --user -r official/requirements.txt`.
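For example, assuming the
[ctpu](https://github.com/tensorflow/tpu/tree/master/tools/ctpu) tool is used
to bring up the TPU (the tool choice, TPU name, and zone here are illustrative
assumptions):

```shell
# Illustrative setup, assuming the ctpu tool; the TPU name and zone are
# placeholders.
export TPU_NAME=my-nlp-tpu
ctpu up --name=$TPU_NAME --tf-version=nightly --zone=us-central1-b

# On the VM, with the tensorflow/models repository cloned:
pip3 install --user -r official/requirements.txt
```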
### Fine-tuning Sentence Classification with BERT from TF-Hub
<details>

This example fine-tunes BERT-base from TF-Hub on the Multi-Genre Natural
Language Inference (MultiNLI) corpus using TPUs.
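A launch for this task looks roughly like the following sketch; the experiment
name and the contents of `$PARAMS` are illustrative assumptions (see the
pre-defined experiments in `configs/experiment_configs.py`):

```shell
# Rough sketch of a fine-tuning launch; the experiment name and the contents
# of $PARAMS are illustrative assumptions.
python3 train.py \
  --experiment=bert/sentence_prediction \
  --mode=train_and_eval \
  --tpu=$TPU_NAME \
  --model_dir=$OUTPUT_DIR \
  --params_override=$PARAMS
```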
You can monitor the training progress in the console and find the output
models in `$OUTPUT_DIR`.
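You can also point TensorBoard at the model directory to visualize the
training metrics (standard TensorBoard usage; the port choice is arbitrary):

```shell
# Standard TensorBoard usage; the port choice is arbitrary.
tensorboard --logdir=$OUTPUT_DIR --port=6006
```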
</details>
### Fine-tuning SQuAD with a pre-trained BERT checkpoint
<details>

This example fine-tunes a pre-trained BERT checkpoint on the
Stanford Question Answering Dataset (SQuAD) using TPUs.
The [SQuAD website](https://rajpurkar.github.io/SQuAD-explorer/) contains
detailed information about the SQuAD datasets and evaluation.
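The launch mirrors the sentence-classification example, with a SQuAD
experiment selected instead (again a sketch; the experiment name and the
contents of `$PARAMS` are illustrative assumptions):

```shell
# Rough sketch; the experiment name and the contents of $PARAMS are
# illustrative assumptions.
python3 train.py \
  --experiment=bert/squad \
  --mode=train_and_eval \
  --tpu=$TPU_NAME \
  --model_dir=$OUTPUT_DIR \
  --params_override=$PARAMS
```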
</details>
Note: More examples about pre-training will come soon.