
Commit 3dc7452

Fix FT doc (PaddlePaddle#1028)
* fix doc
* tran -> trans
1 parent c3705e4 commit 3dc7452

3 files changed: +16 −16 lines changed


examples/machine_translation/transformer/faster_transformer/README.md

Lines changed: 7 additions & 7 deletions
@@ -109,11 +109,11 @@ datasets = load_dataset('wmt14ende', splits=('test'))
 
 Running inference with the model requires a suitable checkpoint: set the model-loading path parameter `init_from_params` in the corresponding `../configs/transformer.base.yaml`.
 
-We provide a trained dygraph base model checkpoint for use; it can be downloaded via [tranformer-base-wmt_ende_bpe](https://paddlenlp.bj.bcebos.com/models/transformers/transformer/tranformer-base-wmt_ende_bpe.tar.gz).
+We provide a trained dygraph base model checkpoint for use; it can be downloaded via [transformer-base-wmt_ende_bpe](https://paddlenlp.bj.bcebos.com/models/transformers/transformer/transformer-base-wmt_ende_bpe.tar.gz).
 
 ``` sh
-wget https://paddlenlp.bj.bcebos.com/models/transformers/transformer/tranformer-base-wmt_ende_bpe.tar.gz
-tar -zxf tranformer-base-wmt_ende_bpe.tar.gz
+wget https://paddlenlp.bj.bcebos.com/models/transformers/transformer/transformer-base-wmt_ende_bpe.tar.gz
+tar -zxf transformer-base-wmt_ende_bpe.tar.gz
 ```
 
 Then, set `init_from_params` in the corresponding `../configs/transformer.base.yaml` config file to `./base_trained_models/step_final/`.
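That `init_from_params` edit can also be scripted in the same `sed` style that `tests/prepare.sh` uses for its YAML tweaks; a minimal sketch, assuming `init_from_params` is a top-level key in `transformer.base.yaml`:

``` sh
# Point the config at the extracted checkpoint directory
# (assumes init_from_params is a top-level key in the YAML file).
sed -i 's#^init_from_params:.*#init_from_params: ./base_trained_models/step_final/#' ../configs/transformer.base.yaml
```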
@@ -126,7 +126,7 @@ tar -zxf tranformer-base-wmt_ende_bpe.tar.gz
 # setting visible devices for prediction
 export CUDA_VISIBLE_DEVICES=0
 export FLAGS_fraction_of_gpu_memory_to_use=0.1
-cp -rf ../../../../paddlenlp/ops/build/third-party/build/bin/decoding_gemm ./
+cp -rf ../../../../paddlenlp/ops/build/third-party/build/fastertransformer/bin/decoding_gemm ./
 ./decoding_gemm 8 4 8 64 38512 32 512 0
 python encoder_decoding_predict.py --config ../configs/transformer.base.yaml --decoding_lib ../../../../paddlenlp/ops/build/lib/libdecoding_op.so --decoding_strategy beam_search --beam_size 5
 ```
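The positional arguments passed to `decoding_gemm` are not explained in this diff; the annotation below is an assumption based on the usual FasterTransformer `decoding_gemm` argument order and should be verified against the FasterTransformer source you built:

``` sh
# Assumed FasterTransformer argument order (not stated in the doc):
#   batch_size  beam_width  head_number  size_per_head  vocab_size  seq_len  memory_hidden_dim  is_fp16
./decoding_gemm 8 4 8 64 38512 32 512 0   # final argument: 0 = float32, 1 = float16 (as in the fp16 hunk below)
```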
@@ -153,7 +153,7 @@ The basic flow of float16 prediction is the same as float32, except that with the float16 de
 # setting visible devices for prediction
 export CUDA_VISIBLE_DEVICES=0
 export FLAGS_fraction_of_gpu_memory_to_use=0.1
-cp -rf ../../../../paddlenlp/ops/build/third-party/build/bin/decoding_gemm ./
+cp -rf ../../../../paddlenlp/ops/build/third-party/build/fastertransformer/bin/decoding_gemm ./
 ./decoding_gemm 8 4 8 64 38512 32 512 1
 python encoder_decoding_predict.py --config ../configs/transformer.base.yaml --decoding_lib ../../../../paddlenlp/ops/build/lib/libdecoding_op.so --use_fp16_decoding --decoding_strategy beam_search --beam_size 5
 ```
@@ -240,7 +240,7 @@ cd ../
 
 ### Exporting model files usable by the inference library, based on the Faster Transformer custom op
 
-We provide a base model checkpoint already trained in dygraph mode; the current checkpoint was trained on the WMT English-German translation task. It can be downloaded via [tranformer-base-wmt_ende_bpe](https://paddlenlp.bj.bcebos.com/models/transformers/transformer/tranformer-base-wmt_ende_bpe.tar.gz).
+We provide a base model checkpoint already trained in dygraph mode; the current checkpoint was trained on the WMT English-German translation task. It can be downloaded via [transformer-base-wmt_ende_bpe](https://paddlenlp.bj.bcebos.com/models/transformers/transformer/transformer-base-wmt_ende_bpe.tar.gz).
 
 To use the C++ inference library, the first step is to export the dygraph checkpoint into the model and parameter files the inference library can consume. This can be done by running `export_model.py`.
 
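A minimal sketch of that export step; the `--config` flag is an assumption carried over from `encoder_decoding_predict.py` above, since this diff does not show `export_model.py`'s options:

``` sh
# Hypothetical invocation; confirm the actual flags of export_model.py before use.
python export_model.py --config ../configs/transformer.base.yaml
```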
@@ -276,7 +276,7 @@ cd bin/
 
 ``` sh
 cd bin/
-../third-party/build/bin/decoding_gemm 8 5 8 64 38512 256 512 0
+../third-party/build/fastertransformer/bin/decoding_gemm 8 5 8 64 38512 256 512 0
 ./transformer_e2e -batch_size 8 -gpu_id 0 -model_dir ./infer_model/ -vocab_dir DATA_HOME/WMT14ende/WMT14.en-de/wmt14_ende_data_bpe/vocab_all.bpe.33708 -data_dir DATA_HOME/WMT14ende/WMT14.en-de/wmt14_ende_data_bpe/newstest2014.tok.bpe.33708.en
 ```

paddlenlp/ops/README.md

Lines changed: 3 additions & 3 deletions
@@ -106,7 +106,7 @@ transformer = FasterTransformer(
 ``` sh
 export CUDA_VISIBLE_DEVICES=0
 export FLAGS_fraction_of_gpu_memory_to_use=0.1
-./build/third-party/build/bin/decoding_gemm 32 4 8 64 30000 32 512 0
+./build/third-party/build/fastertransformer/bin/decoding_gemm 32 4 8 64 30000 32 512 0
 python ./faster_transformer/sample/decoding_sample.py --config ./faster_transformer/sample/config/decoding.sample.yaml --decoding_lib ./build/lib/libdecoding_op.so
 ```

@@ -116,7 +116,7 @@ python ./faster_transformer/sample/decoding_sample.py --config ./faster_transfor
 ``` sh
 export CUDA_VISIBLE_DEVICES=0
 export FLAGS_fraction_of_gpu_memory_to_use=0.1
-./build/third-party/build/bin/decoding_gemm 32 4 8 64 30000 32 512 1
+./build/third-party/build/fastertransformer/bin/decoding_gemm 32 4 8 64 30000 32 512 1
 python ./faster_transformer/sample/decoding_sample.py --config ./faster_transformer/sample/config/decoding.sample.yaml --decoding_lib ./build/lib/libdecoding_op.so --use_fp16_decoding
 ```

@@ -243,7 +243,7 @@ cd bin/
 
 ``` sh
 cd bin/
-../third-party/build/bin/decoding_gemm 8 5 8 64 38512 256 512 0
+../third-party/build/fastertransformer/bin/decoding_gemm 8 5 8 64 38512 256 512 0
 ./transformer_e2e -batch_size 8 -gpu_id 0 -model_dir ./infer_model/ -vocab_dir DATA_HOME/WMT14ende/WMT14.en-de/wmt14_ende_data_bpe/vocab_all.bpe.33708 -data_dir DATA_HOME/WMT14ende/WMT14.en-de/wmt14_ende_data_bpe/newstest2014.tok.bpe.33708.en
 ```

tests/prepare.sh

Lines changed: 6 additions & 6 deletions
@@ -72,9 +72,9 @@ elif [ ${MODE} = "whole_infer" ]; then
 
 # Trained transformer base model checkpoint.
 # For infer.
-if [ ! -f tranformer-base-wmt_ende_bpe.tar.gz ]; then
-wget https://paddlenlp.bj.bcebos.com/models/transformers/transformer/tranformer-base-wmt_ende_bpe.tar.gz
-tar -zxf tranformer-base-wmt_ende_bpe.tar.gz
+if [ ! -f transformer-base-wmt_ende_bpe.tar.gz ]; then
+wget https://paddlenlp.bj.bcebos.com/models/transformers/transformer/transformer-base-wmt_ende_bpe.tar.gz
+tar -zxf transformer-base-wmt_ende_bpe.tar.gz
 mv base_trained_models/ trained_models/
 fi
 # For train.
@@ -191,9 +191,9 @@ else # infer
 sed -i "s/^shuffle:.*/shuffle: True/g" configs/transformer.big.yaml
 
 # Trained transformer base model checkpoint.
-if [ ! -f tranformer-base-wmt_ende_bpe.tar.gz ]; then
-wget https://paddlenlp.bj.bcebos.com/models/transformers/transformer/tranformer-base-wmt_ende_bpe.tar.gz
-tar -zxf tranformer-base-wmt_ende_bpe.tar.gz
+if [ ! -f transformer-base-wmt_ende_bpe.tar.gz ]; then
+wget https://paddlenlp.bj.bcebos.com/models/transformers/transformer/transformer-base-wmt_ende_bpe.tar.gz
+tar -zxf transformer-base-wmt_ende_bpe.tar.gz
 mv base_trained_models/ trained_models/
 fi
 # Whole data set prepared.
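The same download-if-missing block appears in both hunks of `prepare.sh`; a minimal sketch of factoring it into a helper (the function name and structure are illustrative, not from the repo):

``` sh
# Illustrative helper, not part of prepare.sh: download and unpack a checkpoint only once.
fetch_checkpoint() {
    local url="$1"
    local tarball="${url##*/}"
    if [ ! -f "${tarball}" ]; then
        wget "${url}"
        tar -zxf "${tarball}"
        mv base_trained_models/ trained_models/
    fi
}

fetch_checkpoint https://paddlenlp.bj.bcebos.com/models/transformers/transformer/transformer-base-wmt_ende_bpe.tar.gz
```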
