Commit e1904ac

Author: chengduo

Add doc (#13765)

test=develop

Parent: e176170

File tree: 1 file changed (+36, −2 lines)

paddle/fluid/pybind/pybind.cc

Lines changed: 36 additions & 2 deletions
@@ -620,7 +620,23 @@ All parameter, weight, gradient are variables in Paddle.
 
   // -- python binds for parallel executor.
   py::class_<ParallelExecutor> pe(m, "ParallelExecutor");
-  py::class_<ExecutionStrategy> exec_strategy(pe, "ExecutionStrategy");
+  py::class_<ExecutionStrategy> exec_strategy(pe, "ExecutionStrategy", R"DOC(
+    ExecutionStrategy allows the user to more precisely control how to run
+    the program in ParallelExecutor by setting its properties.
+
+    The available properties include:
+        use_cuda (bool): Whether to use CUDA or not. Default True.
+        num_threads (int): The number of threads that are used to run the
+            operators in ParallelExecutor. If it is not set, it will be
+            set in ParallelExecutor according to the device count.
+            Default 0.
+        allow_op_delay (bool): Whether to delay running the communication
+            operators. Default False.
+        num_iteration_per_drop_scope (int): The number of iterations between
+            two drops of the local scopes. Default 100.
+
+        )DOC");
   exec_strategy.def(py::init())
       .def_property(
           "num_threads",
@@ -658,7 +674,25 @@ All parameter, weight, gradient are variables in Paddle.
                  : ExecutionStrategy::kDefault;
            });
 
-  py::class_<BuildStrategy> build_strategy(pe, "BuildStrategy");
+  py::class_<BuildStrategy> build_strategy(pe, "BuildStrategy", R"DOC(
+    BuildStrategy allows the user to more precisely control how to
+    build the SSA Graph in ParallelExecutor by setting its properties.
+
+    The available properties include:
+        reduce_strategy (str): There are two reduce strategies, 'AllReduce'
+            and 'Reduce'. If you want all parameters to be optimized on all
+            devices, choose 'AllReduce'; if you choose 'Reduce', the
+            parameters will be evenly distributed across the devices for
+            optimization, and each optimized parameter will then be
+            broadcast to the other devices. Default 'AllReduce'.
+        gradient_scale_strategy (str): There are two ways of defining loss@grad,
+            'CoeffNumDevice' and 'Customized'. By default, ParallelExecutor
+            sets the loss@grad according to the number of devices. If you want
+            to customize loss@grad, you can choose 'Customized'.
+            Default 'CoeffNumDevice'.
+        debug_graphviz_path (str): The path to which the SSA Graph is written
+            in graphviz format; useful for debugging. Default "".
+        )DOC");
 
   py::enum_<BuildStrategy::ReduceStrategy>(build_strategy, "ReduceStrategy")
       .value("Reduce", BuildStrategy::ReduceStrategy::kReduce)
