@@ -407,7 +407,7 @@ def name_has_fc(var):
 def load_params(executor, dirname, main_program=None, filename=None):
     """
     This function filters out all parameters from the give `main_program`
-    and then try to load these parameters from the folder `dirname` or
+    and then tries to load these parameters from the folder `dirname` or
     the file `filename`.
 
     Use the `dirname` to specify the folder where parameters were saved. If
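
For context, a minimal usage sketch of `load_params` as documented in this hunk. The `./my_paddle_model` path is illustrative, and the snippet assumes parameters were previously written there by `save_params`:

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    param_path = "./my_paddle_model"  # assumed save location, not from the diff
    # With `main_program=None` the default main program is used; every parameter
    # in it is expected to have its own file under `param_path`.
    fluid.io.load_params(executor=exe, dirname=param_path, main_program=None)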
@@ -586,6 +586,7 @@ def save_inference_model(dirname,
 
     Examples:
         .. code-block:: python
+
             exe = fluid.Executor(fluid.CPUPlace())
             path = "./infer_model"
             fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
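
For context, a complete version of the call this hunk's example truncates might look like the sketch below; the tiny `img`/`predict` network is an assumption added only so the snippet is self-contained:

    import paddle.fluid as fluid

    # Stand-in network; a real program would come from your model definition.
    img = fluid.layers.data(name='img', shape=[784], dtype='float32')
    predict = fluid.layers.fc(input=img, size=10, act='softmax')

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())

    path = "./infer_model"
    # Prune the default main program to what `predict` needs, then save the
    # pruned program and its parameters under `path`.
    fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
                                  target_vars=[predict], executor=exe)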
@@ -693,7 +694,7 @@ def load_inference_model(dirname,
                               feed={feed_target_names[0]: tensor_img},
                               fetch_list=fetch_targets)
 
-            # In this exsample, the inference program is saved in the
+            # In this example, the inference program was saved in the
             # "./infer_model/__model__" and parameters were saved in
             # separate files in "./infer_model".
             # After getting inference program, feed target names and
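
For context, the surrounding docstring example reads roughly as follows when assembled; the all-ones `tensor_img` input is a placeholder assumption:

    import numpy as np
    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    path = "./infer_model"
    tensor_img = np.ones((1, 784)).astype("float32")  # placeholder input
    # Recover the pruned program plus the names of its feed/fetch targets.
    [inference_program, feed_target_names, fetch_targets] = (
        fluid.io.load_inference_model(dirname=path, executor=exe))
    results = exe.run(inference_program,
                      feed={feed_target_names[0]: tensor_img},
                      fetch_list=fetch_targets)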
@@ -804,20 +805,20 @@ def save_checkpoint(executor,
                     trainer_args=None,
                     main_program=None,
                     max_num_checkpoints=3):
-    """"
+    """
     This function filters out all checkpoint variables from the give
-    main_program and then saves these variables to the 'checkpoint_dir'
+    main_program and then saves these variables to the `checkpoint_dir`
     directory.
 
     In the training precess, we generally save a checkpoint in each
     iteration. So there might be a lot of checkpoints in the
-    'checkpoint_dir'. To avoid them taking too much disk space, the
+    `checkpoint_dir`. To avoid them taking too much disk space, the
     `max_num_checkpoints` are introduced to limit the total number of
     checkpoints. If the number of existing checkpints is greater than
-    the `max_num_checkpoints`, the oldest ones will be scroll deleted.
+    the `max_num_checkpoints`, the oldest ones will be deleted.
 
-    A variable is a checkpoint variable and will be loaded if it meets
-    all the following conditions:
+    A variable is a checkpoint variable and will be saved if it meets
+    all of the following conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
     3. It's name contains no "@GRAD" nor ".trainer_" nor ".block".
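
For context, a minimal usage sketch of `save_checkpoint` based on the signature shown in this hunk; the path and the contents of `trainer_args` are illustrative assumptions, not part of this diff:

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    path = "./checkpoints"  # assumed checkpoint directory
    prog = fluid.default_main_program()
    # `trainer_args` is sketched here as a dict of per-trainer progress metadata.
    fluid.io.save_checkpoint(executor=exe,
                             checkpoint_dir=path,
                             trainer_id=0,
                             trainer_args={"epoch_id": 200, "step_id": 20},
                             main_program=prog,
                             max_num_checkpoints=3)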
@@ -882,16 +883,16 @@ def load_checkpoint(executor, checkpoint_dir, serial, main_program):
     """
     This function filters out all checkpoint variables from the give
     main_program and then try to load these variables from the
-    'checkpoint_dir' directory.
+    `checkpoint_dir` directory.
 
     In the training precess, we generally save a checkpoint in each
     iteration. So there are more than one checkpoint in the
-    'checkpoint_dir' (each checkpoint has its own sub folder), use
-    'serial' to specify which serial of checkpoint you would like to
+    `checkpoint_dir` (each checkpoint has its own sub folder), use
+    `serial` to specify which serial of checkpoint you would like to
     load.
 
     A variable is a checkpoint variable and will be loaded if it meets
-    all the following conditions:
+    all of the following conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
     3. It's name contains no "@GRAD" nor ".trainer_" nor ".block".
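
For context, a minimal usage sketch of `load_checkpoint`; the directory and `serial=9` are illustrative:

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    path = "./checkpoints"
    prog = fluid.default_main_program()
    # Load the checkpoint stored in the sub folder for serial number 9.
    fluid.io.load_checkpoint(executor=exe, checkpoint_dir=path,
                             serial=9, main_program=prog)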
@@ -962,9 +963,9 @@ def load_persist_vars_without_grad(executor,
                                    has_model_dir=False):
     """
     This function filters out all checkpoint variables from the give
-    program and then try to load these variables from the given directory.
+    program and then tries to load these variables from the given directory.
 
-    A variable is a checkpoint variable if it meets all the following
+    A variable is a checkpoint variable if it meets all of the following
     conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
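
For context, a minimal usage sketch of `load_persist_vars_without_grad`; the path is illustrative, and `has_model_dir=True` is assumed to mean the variables sit in a '__model__' sub-folder, matching `save_persist_vars_without_grad` below:

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    param_path = "./my_paddle_model"  # assumed save location
    prog = fluid.default_main_program()
    # Load the checkpoint variables of `prog` from `param_path`/__model__.
    fluid.io.load_persist_vars_without_grad(executor=exe, dirname=param_path,
                                            program=prog, has_model_dir=True)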
@@ -1014,7 +1015,7 @@ def save_persist_vars_without_grad(executor, dirname, program):
     program and then save these variables to a sub-folder '__model__' of
     the given directory.
 
-    A variable is a checkpoint variable if it meets all the following
+    A variable is a checkpoint variable if it meets all of the following
     conditions:
     1. It's persistable.
     2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
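
For context, a minimal usage sketch of `save_persist_vars_without_grad`, the counterpart of the loader above; the path is illustrative:

    import paddle.fluid as fluid

    exe = fluid.Executor(fluid.CPUPlace())
    param_path = "./my_paddle_model"  # illustrative destination
    prog = fluid.default_main_program()
    # Write all checkpoint variables of `prog` to `param_path`/__model__.
    fluid.io.save_persist_vars_without_grad(executor=exe, dirname=param_path,
                                            program=prog)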