Changed file: `doc/demo/embedding_model/index.md` (3 additions, 3 deletions)
```diff
@@ -54,11 +54,11 @@ The general command of extracting desired parameters from the pretrained embedding model:
 Here, you can simply run the command:
 
     cd $PADDLE_ROOT/demo/seqToseq/data/
-    ./paraphase_model.sh
+    ./paraphrase_model.sh
 
 And you will see the following embedding model structure:
 
-    paraphase_model
+    paraphrase_model
     |--- _source_language_embedding
     |--- _target_language_embedding
```
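As a quick sanity check after running the extraction script, you could confirm that both embedding parameter files landed in the expected place. This is a minimal sketch, not part of the demo: it recreates the layout shown above as a stand-in for the real extraction output, then checks each file.

```shell
#!/bin/sh
# Sketch only: recreate the directory layout shown above (stand-in for
# the real output of ./paraphrase_model.sh) and verify both files exist.
mkdir -p paraphrase_model
touch paraphrase_model/_source_language_embedding
touch paraphrase_model/_target_language_embedding

for f in _source_language_embedding _target_language_embedding; do
  if [ -e "paraphrase_model/$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
  fi
done
```

In the real demo you would run the check inside `demo/seqToseq/data/` against the directory the script produced, without the `mkdir`/`touch` scaffolding.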
```diff
@@ -90,7 +90,7 @@ Then, train the model by running the command:
 
 where `train.sh` is almost the same as `demo/seqToseq/translation/train.sh`; the only difference is the following two command arguments:
 
-- `--init_model_path`: path of the initialization model, here `data/paraphase_model`
+- `--init_model_path`: path of the initialization model, here `data/paraphrase_model`
 - `--load_missing_parameter_strategy`: what to do when a model file is missing; here a normal distribution is used to initialize all parameters except the embedding layer
 
 For users who want to understand the dataset format, model architecture and training procedure in detail, please refer to [Text generation Tutorial](text_generation.md).
```
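The two arguments above map onto flags of the legacy `paddle train` CLI that the demo scripts wrap. A hedged sketch of how such a `train.sh` might pass them follows; the `--config` file name and the `rand` strategy value are assumptions for illustration, not taken from the diff.

```shell
#!/bin/sh
# Assumed invocation sketch, not the demo's actual train.sh:
# initialize from the extracted paraphrase embedding model, and fill
# any parameters missing from it at random (normal distribution).
paddle train \
  --config=train.conf \
  --init_model_path=data/paraphrase_model \
  --load_missing_parameter_strategy=rand
```

The key design point is that only the embedding layers are loaded from disk; every other layer is freshly initialized, which is exactly what the missing-parameter strategy controls.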