
Commit 29a8147

tianxinyingyibiao and yingyibiao authored
Tiny fix of README (#548)

* add ernie_matching point-wise & pair-wise
* Rename some classes
* Fix some typos
* add FewCLUE 9 datasets
* 1. Use ernie-gram to train ernie_matching point-wise & pair-wise 2. Update README.md
* Tiny fix
* Tiny fix of README.md

Co-authored-by: yingyibiao <[email protected]>

1 parent 9dde7ce commit 29a8147

File tree

3 files changed

+14
-14
lines changed


examples/few_shot/efl/README.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ train_ds, dev_ds, public_test_ds = load_dataset("fewclue", name="tnews", splits=
 Train & evaluate on the FewCLUE `tnews` dataset on GPU card 0 with the following command:
 ```
 python -u -m paddle.distributed.launch --gpus "0" \
-        ptuning.py \
+        train.py \
         --task_name "tnews" \
         --device gpu \
         --negative_num 1 \
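The renamed entry point keeps the same command-line flags (`--task_name`, `--device`, `--negative_num`). A minimal, self-contained sketch of how such flags are typically parsed (hypothetical code; this is not the actual PaddleNLP script):

```python
import argparse

# Hypothetical sketch of the flag handling in the renamed train.py
# (flag names mirror the README's command; NOT the actual PaddleNLP code).
def parse_args(argv):
    parser = argparse.ArgumentParser(description="FewCLUE EFL training (sketch)")
    parser.add_argument("--task_name", required=True, help="FewCLUE task, e.g. tnews")
    parser.add_argument("--device", choices=["gpu", "cpu"], default="gpu")
    parser.add_argument("--negative_num", type=int, default=1,
                        help="number of negative samples per positive example")
    return parser.parse_args(argv)

args = parse_args(["--task_name", "tnews", "--device", "gpu", "--negative_num", "1"])
```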

examples/text_matching/ernie_matching/README.md

Lines changed: 7 additions & 7 deletions
@@ -57,21 +57,21 @@ python -u -m paddle.distributed.launch --gpus "0" train_pointwise.py \
 
 ```python
 
-# Use the ernie-gram pretrained model
+# Use the ERNIE-Gram pretrained model
 model = ppnlp.transformers.ErnieGramModel.from_pretrained('ernie-gram-zh')
 tokenizer = ppnlp.transformers.ErnieGramTokenizer.from_pretrained('ernie-gram-zh')
 
-# Use the ernie pretrained model
-# ernie
-# model = ppnlp.transformers.ErnieModel.from_pretrained('ernie')
-# tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie')
+# Use the ERNIE pretrained model
+# ernie-1.0
+# model = ppnlp.transformers.ErnieModel.from_pretrained('ernie-1.0')
+# tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie-1.0')
 
 # ernie-tiny
 # model = ppnlp.transformers.ErnieModel.from_pretrained('ernie-tiny')
 # tokenizer = ppnlp.transformers.ErnieTinyTokenizer.from_pretrained('ernie-tiny')
 
 
-# Use the bert pretrained model
+# Use the BERT pretrained model
 # bert-base-chinese
 # model = ppnlp.transformers.BertModel.from_pretrained('bert-base-chinese')
 # tokenizer = ppnlp.transformers.BertTokenizer.from_pretrained('bert-base-chinese')
@@ -85,7 +85,7 @@ tokenizer = ppnlp.transformers.ErnieGramTokenizer.from_pretrained('ernie-gram-zh
 # tokenizer = ppnlp.transformers.BertTokenizer.from_pretrained('bert-wwm-ext-chinese')
 
 
-# Use the roberta pretrained model
+# Use the RoBERTa pretrained model
 # roberta-wwm-ext
 # model = ppnlp.transformers.RobertaModel.from_pretrained('roberta-wwm-ext')
 # tokenizer = ppnlp.transformers.RobertaTokenizer.from_pretrained('roberta-wwm-ext')
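One detail worth noting in the snippet this diff touches: each checkpoint name must be paired with its matching tokenizer class (`ernie-tiny` needs `ErnieTinyTokenizer`, while `ernie-1.0` uses `ErnieTokenizer`). A minimal, hypothetical registry sketch (plain Python, not PaddleNLP's actual API) makes that pairing explicit:

```python
# Hypothetical registry sketch (NOT PaddleNLP's actual API): each checkpoint
# name maps to its matched (model class, tokenizer class) pair, so a mismatch
# like pairing 'ernie-tiny' with ErnieTokenizer cannot happen silently.

class ErnieModel: pass
class ErnieGramModel: pass
class ErnieTokenizer: pass
class ErnieGramTokenizer: pass
class ErnieTinyTokenizer: pass

PRETRAINED = {
    "ernie-gram-zh": (ErnieGramModel, ErnieGramTokenizer),
    "ernie-1.0": (ErnieModel, ErnieTokenizer),
    "ernie-tiny": (ErnieModel, ErnieTinyTokenizer),
}

def resolve(name):
    """Return the matched (model class, tokenizer class) for a checkpoint name."""
    try:
        return PRETRAINED[name]
    except KeyError:
        raise ValueError(f"unknown pretrained checkpoint: {name!r}")
```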

examples/text_matching/sentence_transformers/README.md

Lines changed: 6 additions & 6 deletions
@@ -91,17 +91,17 @@ $ python -m paddle.distributed.launch --gpus "0" train.py --device gpu --save_di
 The code example uses the ERNIE pretrained model; to use another pretrained model such as BERT, RoBERTa, or Electra, simply swap in the corresponding `model` and `tokenizer`.
 
 ```python
-# Use the ernie pretrained model
-# ernie
-model = ppnlp.transformers.ErnieModel.from_pretrained('ernie')
-tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie')
+# Use the ERNIE pretrained model
+# ernie-1.0
+model = ppnlp.transformers.ErnieModel.from_pretrained('ernie-1.0')
+tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie-1.0')
 
 # ernie-tiny
 # model = ppnlp.transformers.ErnieModel.from_pretrained('ernie-tiny')
 # tokenizer = ppnlp.transformers.ErnieTinyTokenizer.from_pretrained('ernie-tiny')
 
 
-# Use the bert pretrained model
+# Use the BERT pretrained model
 # bert-base-chinese
 # model = ppnlp.transformers.BertModel.from_pretrained('bert-base-chinese')
 # tokenizer = ppnlp.transformers.BertTokenizer.from_pretrained('bert-base-chinese')
@@ -115,7 +115,7 @@ tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie')
 # tokenizer = ppnlp.transformers.BertTokenizer.from_pretrained('bert-wwm-ext-chinese')
 
 
-# Use the roberta pretrained model
+# Use the RoBERTa pretrained model
 # roberta-wwm-ext
 # model = ppnlp.transformers.RobertaModel.from_pretrained('roberta-wwm-ext')
 # tokenizer = ppnlp.transformers.RobertaTokenizer.from_pretrained('roberta-wwm-ext')
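All of these README snippets rely on the `from_pretrained(name)` classmethod pattern: the checkpoint name alone selects and constructs the object. A toy sketch of that pattern (hypothetical `TinyModel`; the real PaddleNLP classes additionally resolve and download checkpoint weights):

```python
# Toy sketch of the from_pretrained(name) classmethod pattern (hypothetical
# TinyModel; real PaddleNLP classes also fetch weights and config files).

class TinyModel:
    KNOWN = {"ernie-1.0", "bert-base-chinese", "roberta-wwm-ext"}

    def __init__(self, name):
        self.name = name

    @classmethod
    def from_pretrained(cls, name):
        # Validate the checkpoint name, then construct the instance.
        if name not in cls.KNOWN:
            raise ValueError(f"unknown checkpoint: {name!r}")
        return cls(name)

model = TinyModel.from_pretrained("roberta-wwm-ext")
```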
