@@ -1638,23 +1638,23 @@ def batch_norm(input,
 
     Args:
         input(variable): The input variable which is a LoDTensor.
-        act(string, default None): Activation type, linear|relu|prelu|...
-        is_test(bool, default False): Used for training or training.
-        momentum(float, default 0.9):
-        epsilon(float, default 1e-05):
+        act(string, Default None): Activation type, linear|relu|prelu|...
+        is_test(bool, Default False): Used for training or testing.
+        momentum(float, Default 0.9): The value used for the moving_mean and moving_var computation.
+        epsilon(float, Default 1e-05): A small value added to the variance to avoid dividing by zero.
         param_attr(ParamAttr): The parameter attribute for Parameter `scale`.
         bias_attr(ParamAttr): The parameter attribute for Parameter `bias`.
         data_layout(string, default NCHW): NCHW|NHWC
-        in_place(bool, default False): Make the input and output of batch norm reuse memory.
+        in_place(bool, Default False): Make the input and output of batch norm reuse memory.
         use_mkldnn(bool, Default false): ${use_mkldnn_comment}
         name(string, Default None): A name for this layer(optional). If set None, the layer
             will be named automatically.
         moving_mean_name(string, Default None): The name of moving_mean which store the global Mean.
         moving_variance_name(string, Default None): The name of the moving_variance which store the global Variance.
-        do_model_average_for_mean_and_var(bool, Default False):
+        do_model_average_for_mean_and_var(bool, Default False): Do model average for mean and variance or not.
 
     Returns:
-        The sequence's last step variable which is a Tensor .
+        Variable: A tensor variable which is the result after applying batch normalization on the input.
 
 
     Examples:
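For readers of this docstring change, the roles of `momentum`, `epsilon`, and `is_test` can be sketched in plain Python. This is a hypothetical one-channel helper illustrating the standard batch-norm math, not PaddlePaddle's actual implementation:

```python
import math

def batch_norm_1d(xs, scale, bias, moving_mean, moving_var,
                  momentum=0.9, epsilon=1e-05, is_test=False):
    """Normalize one channel of values; returns (outputs, new moving stats)."""
    if is_test:
        # Inference: normalize with the accumulated global statistics.
        mean, var = moving_mean, moving_var
    else:
        # Training: normalize with the current mini-batch statistics.
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        # `momentum` controls how slowly the global stats track the batch stats.
        moving_mean = moving_mean * momentum + mean * (1.0 - momentum)
        moving_var = moving_var * momentum + var * (1.0 - momentum)
    # `epsilon` keeps the denominator away from zero when the variance is tiny.
    ys = [(x - mean) / math.sqrt(var + epsilon) * scale + bias for x in xs]
    return ys, moving_mean, moving_var

ys, mm, mv = batch_norm_1d([1.0, 2.0, 3.0], scale=1.0, bias=0.0,
                           moving_mean=0.0, moving_var=1.0)
# In training mode the normalized batch has zero mean (and near-unit variance),
# while the moving statistics take a momentum-weighted step toward the batch stats.
```

With `is_test=True` the same call would skip the statistics update and normalize with `moving_mean`/`moving_var` instead, which is why the flag distinguishes training from testing.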