
Commit 8746725 ("fix errors")

1 parent 8ea54e2 commit 8746725

File tree

1 file changed (+16, -15 lines)

python/paddle/fluid/io.py

Lines changed: 16 additions & 15 deletions
@@ -407,7 +407,7 @@ def name_has_fc(var):
 def load_params(executor, dirname, main_program=None, filename=None):
     """
     This function filters out all parameters from the give `main_program`
-    and then try to load these parameters from the folder `dirname` or
+    and then trys to load these parameters from the folder `dirname` or
     the file `filename`.
 
     Use the `dirname` to specify the folder where parameters were saved. If
@@ -586,6 +586,7 @@ def save_inference_model(dirname,
 
     Examples:
         .. code-block:: python
+
             exe = fluid.Executor(fluid.CPUPlace())
             path = "./infer_model"
             fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
@@ -693,7 +694,7 @@ def load_inference_model(dirname,
                               feed={feed_target_names[0]: tensor_img},
                               fetch_list=fetch_targets)
 
-            # In this exsample, the inference program is saved in the
+            # In this exsample, the inference program was saved in the
             # "./infer_model/__model__" and parameters were saved in
             # separate files in ""./infer_model".
             # After getting inference program, feed target names and
@@ -804,20 +805,20 @@ def save_checkpoint(executor,
                     trainer_args=None,
                     main_program=None,
                     max_num_checkpoints=3):
-    """"
+    """
     This function filters out all checkpoint variables from the give
-    main_program and then saves these variables to the 'checkpoint_dir'
+    main_program and then saves these variables to the `checkpoint_dir`
     directory.
 
     In the training precess, we generally save a checkpoint in each
     iteration. So there might be a lot of checkpoints in the
-    'checkpoint_dir'. To avoid them taking too much disk space, the
+    `checkpoint_dir`. To avoid them taking too much disk space, the
     `max_num_checkpoints` are introduced to limit the total number of
     checkpoints. If the number of existing checkpints is greater than
-    the `max_num_checkpoints`, the oldest ones will be scroll deleted.
+    the `max_num_checkpoints`, oldest ones will be scroll deleted.
 
-    A variable is a checkpoint variable and will be loaded if it meets
-    all the following conditions:
+    A variable is a checkpoint variable and will be saved if it meets
+    all following conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
     3. It's name contains no "@GRAD" nor ".trainer_" nor ".block".
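The three conditions listed in this docstring can be sketched as a small filter predicate. This is a minimal, self-contained illustration only: `is_checkpoint_var` and the `Var` fields below are hypothetical stand-ins, not the actual fluid internals.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a fluid variable; field names are illustrative only.
@dataclass
class Var:
    name: str
    type: str
    persistable: bool

# Types that are never checkpointed, per the docstring above.
EXCLUDED_TYPES = {"FEED_MINIBATCH", "FETCH_LIST", "RAW"}
# Name fragments that mark gradient/trainer/block variables.
EXCLUDED_NAME_PARTS = ("@GRAD", ".trainer_", ".block")

def is_checkpoint_var(var: Var) -> bool:
    """Return True if var meets all three saving conditions."""
    return (
        var.persistable                                          # 1. persistable
        and var.type not in EXCLUDED_TYPES                       # 2. allowed type
        and not any(p in var.name for p in EXCLUDED_NAME_PARTS)  # 3. clean name
    )

print(is_checkpoint_var(Var("fc_0.w_0", "LOD_TENSOR", True)))       # True
print(is_checkpoint_var(Var("fc_0.w_0@GRAD", "LOD_TENSOR", True)))  # False
```

The same predicate governs both saving and loading, which is why the wording change from "loaded" to "saved" above only corrects which operation this particular docstring describes.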
@@ -882,16 +883,16 @@ def load_checkpoint(executor, checkpoint_dir, serial, main_program):
     """
     This function filters out all checkpoint variables from the give
     main_program and then try to load these variables from the
-    'checkpoint_dir' directory.
+    `checkpoint_dir` directory.
 
     In the training precess, we generally save a checkpoint in each
     iteration. So there are more than one checkpoint in the
-    'checkpoint_dir'(each checkpoint has its own sub folder), use
-    'serial' to specify which serial of checkpoint you would like to
+    `checkpoint_dir`(each checkpoint has its own sub folder), use
+    `serial` to specify which serial of checkpoint you would like to
     load.
 
     A variable is a checkpoint variable and will be loaded if it meets
-    all the following conditions:
+    all following conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
     3. It's name contains no "@GRAD" nor ".trainer_" nor ".block".
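The `serial` lookup this docstring describes (one sub-folder per checkpoint, selected by serial number) can be illustrated with a short sketch. The `checkpoint_<serial>` naming scheme below is an assumption for the example; the real fluid on-disk layout may differ.

```python
import os
import re
import tempfile

def find_checkpoint_dir(checkpoint_dir: str, serial: int) -> str:
    """Return the sub-folder matching the requested serial, or raise if absent."""
    pattern = re.compile(r"^checkpoint_(\d+)$")  # hypothetical naming scheme
    for entry in sorted(os.listdir(checkpoint_dir)):
        m = pattern.match(entry)
        if m and int(m.group(1)) == serial:
            return os.path.join(checkpoint_dir, entry)
    raise FileNotFoundError(
        "no checkpoint with serial %d in %s" % (serial, checkpoint_dir))

# Usage: create three fake checkpoint sub-folders and select serial 2.
root = tempfile.mkdtemp()
for s in range(3):
    os.mkdir(os.path.join(root, "checkpoint_%d" % s))
print(find_checkpoint_dir(root, 2))  # ends with "checkpoint_2"
```

Keeping one sub-folder per serial is also what makes the `max_num_checkpoints` pruning in `save_checkpoint` possible: deleting the oldest checkpoint is just deleting the sub-folder with the smallest serial.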
@@ -962,9 +963,9 @@ def load_persist_vars_without_grad(executor,
                                    has_model_dir=False):
     """
     This function filters out all checkpoint variables from the give
-    program and then try to load these variables from the given directory.
+    program and then trys to load these variables from the given directory.
 
-    A variable is a checkpoint variable if it meets all the following
+    A variable is a checkpoint variable if it meets all following
     conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
@@ -1014,7 +1015,7 @@ def save_persist_vars_without_grad(executor, dirname, program):
     program and then save these variables to a sub-folder '__model__' of
     the given directory.
 
-    A variable is a checkpoint variable if it meets all the following
+    A variable is a checkpoint variable if it meets all following
     conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
