Commit 4f9141c

fix api part2 (#6687)
1 parent 88376b8 commit 4f9141c

10 files changed (+10, -10 lines)

docs/api/paddle/nn/AdaptiveMaxPool3D_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 AdaptiveMaxPool3D
 -------------------------------

-.. py:function:: paddle.nn.AdaptiveMaxPool3D(output_size, return_mask=False, name=None)
+.. py:class:: paddle.nn.AdaptiveMaxPool3D(output_size, return_mask=False, name=None)

 Computes 3D adaptive max pooling over an input Tensor according to `x`, `output_size` and other parameters. Both the input and the output are 5-D Tensors,
 represented in `NCDHW` format by default, where `N` is the batch size, `C` is the number of channels, and `D`, `H`, `W` are the depth, height and width of the input feature.
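Since adaptive pooling derives its window boundaries from `output_size` rather than a fixed kernel, a pure-Python sketch of the per-axis index rule may help. The floor/ceil start-end convention below is an assumption mirroring common adaptive-pooling implementations, not Paddle's code; the real 3D op applies the same rule independently along `D`, `H` and `W`.

```python
import math

def adaptive_max_pool_1d(x, output_size):
    # Each output cell i covers the input span
    # [floor(i*L/out), ceil((i+1)*L/out)) -- an assumed convention
    # mirroring common adaptive-pooling implementations.
    L = len(x)
    out = []
    for i in range(output_size):
        start = (i * L) // output_size
        end = math.ceil((i + 1) * L / output_size)
        out.append(max(x[start:end]))
    return out
```

With `output_size == len(x)` the windows are singletons and the input passes through unchanged, which is why adaptive pooling needs no stride or padding parameters.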

docs/api/paddle/nn/AlphaDropout_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 AlphaDropout
 -------------------------------

-.. py:function:: paddle.nn.AlphaDropout(p=0.5, name=None)
+.. py:class:: paddle.nn.AlphaDropout(p=0.5, name=None)

 AlphaDropout is a dropout variant with self-normalizing properties: for an input with zero mean and unit variance, the output keeps the same mean and variance. AlphaDropout is usually combined with the SELU activation function. See the paper `Self-Normalizing Neural Networks <https://arxiv.org/abs/1706.02515>`_
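The mean/variance-preserving property comes from dropping units to the SELU saturation value (instead of 0) and then applying a fixed affine correction. A pure-Python sketch, with coefficients following the formulas in the Self-Normalizing Neural Networks paper; `alpha_dropout` is an illustrative helper, not Paddle's implementation.

```python
import random

# SELU constants from Klambauer et al., 2017.
LAMBDA = 1.0507009873554805
ALPHA = 1.6732632423543772
ALPHA_P = -LAMBDA * ALPHA            # saturation value, ~ -1.7581

def alpha_dropout(xs, p):
    q = 1.0 - p                      # keep probability
    a = (q + ALPHA_P ** 2 * q * (1.0 - q)) ** -0.5
    b = -a * (1.0 - q) * ALPHA_P
    # Dropped units become ALPHA_P (not 0); the affine a*x + b then
    # restores zero mean / unit variance for standardized inputs.
    return [a * (x if random.random() < q else ALPHA_P) + b for x in xs]

# Demo: on a large standard-normal sample, mean ~ 0 and variance ~ 1 survive.
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
ys = alpha_dropout(xs, p=0.1)
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
```

With `p=0` the coefficients reduce to `a=1, b=0`, i.e. the identity, matching ordinary dropout's behavior at a zero drop rate.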

docs/api/paddle/nn/AvgPool1D_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 AvgPool1D
 -------------------------------

-.. py:function:: paddle.nn.AvgPool1D(kernel_size, stride=None, padding=0, exclusive=True, ceil_mode=False, name=None)
+.. py:class:: paddle.nn.AvgPool1D(kernel_size, stride=None, padding=0, exclusive=True, ceil_mode=False, name=None)

 Computes 1D average pooling over an input Tensor according to `x`, `kernel_size` and other parameters. Both the input and the output are 3-D Tensors,
 represented in `NCL` format by default, where `N` is the batch size, `C` is the number of channels, and `L` is the length of the input feature.
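A plain-list sketch of the 1D pooling arithmetic may clarify the `exclusive` parameter; reading it as "divide by the count of non-padded elements in the window" is an assumption about the documented intent, and the helper below is illustrative, not Paddle's code.

```python
def avg_pool_1d(x, kernel_size, stride=None, padding=0, exclusive=True):
    stride = stride or kernel_size           # default stride = kernel_size
    padded = [None] * padding + list(x) + [None] * padding
    out = []
    for start in range(0, len(padded) - kernel_size + 1, stride):
        window = padded[start:start + kernel_size]
        real = [v for v in window if v is not None]   # non-padded values
        # exclusive=True averages over real elements only;
        # exclusive=False always divides by kernel_size.
        divisor = len(real) if exclusive else kernel_size
        out.append(sum(real) / divisor)
    return out
```

The difference only shows at the borders: with `exclusive=False`, windows that overlap the padding are averaged as if the padding contributed zeros.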

docs/api/paddle/nn/AvgPool2D_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 AvgPool2D
 -------------------------------

-.. py:function:: paddle.nn.AvgPool2D(kernel_size, stride=None, padding=0, ceil_mode=False, exclusive=True, divisor_override=None, data_format="NCHW", name=None)
+.. py:class:: paddle.nn.AvgPool2D(kernel_size, stride=None, padding=0, ceil_mode=False, exclusive=True, divisor_override=None, data_format="NCHW", name=None)

 Constructs a callable object of the `AvgPool2D` class, which builds a 2D average pooling layer that
 average-pools the input according to `kernel_size`, `stride`, `padding` and other parameters.

docs/api/paddle/nn/AvgPool3D_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 AvgPool3D
 -------------------------------

-.. py:function:: paddle.nn.AvgPool3D(kernel_size, stride=None, padding=0, ceil_mode=False, exclusive=True, divisor_override=None, data_format="NCDHW", name=None)
+.. py:class:: paddle.nn.AvgPool3D(kernel_size, stride=None, padding=0, ceil_mode=False, exclusive=True, divisor_override=None, data_format="NCDHW", name=None)

 Constructs a callable object of the `AvgPool3D` class, which builds a 3D average pooling layer that
 average-pools the input according to `kernel_size`, `stride`, `padding` and other parameters.

docs/api/paddle/nn/Bilinear_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 Bilinear
 -------------------------------

-.. py:function:: paddle.nn.Bilinear(in1_features, in2_features, out_features, weight_attr=None, bias_attr=None, name=None)
+.. py:class:: paddle.nn.Bilinear(in1_features, in2_features, out_features, weight_attr=None, bias_attr=None, name=None)

 This layer applies a bilinear tensor product to its two inputs.
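A bilinear tensor product computes one scalar per output feature, out[k] = x1ᵀ · W[k] · x2 + b[k], where W[k] is an in1_features × in2_features matrix. A minimal pure-Python sketch on nested lists; `bilinear` is an illustrative helper, not the Paddle layer itself.

```python
def bilinear(x1, x2, weight, bias=None):
    # weight is a list of out_features matrices,
    # each of shape in1_features x in2_features.
    out = []
    for k, Wk in enumerate(weight):
        acc = sum(x1[i] * Wk[i][j] * x2[j]
                  for i in range(len(x1))
                  for j in range(len(x2)))
        if bias is not None:
            acc += bias[k]
        out.append(acc)
    return out
```

With an identity weight matrix this reduces to the ordinary dot product of `x1` and `x2`.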

docs/api/paddle/nn/ChannelShuffle_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 ChannelShuffle
 -------------------------------

-.. py:function:: paddle.nn.ChannelShuffle(groups, data_format="NCHW", name=None)
+.. py:class:: paddle.nn.ChannelShuffle(groups, data_format="NCHW", name=None)

 Divides the channels of a Tensor of shape [N, C, H, W] or [N, H, W, C] into g groups, giving shape [N, g, C/g, H, W] or [N, H, W, g, C/g]; transposes this to [N, C/g, g, H, W] or [N, H, W, C/g, g]; and finally reshapes back to the original shape. This increases information flow across channel groups and improves feature reuse. See the 2017 paper by Xiangyu Zhang et al., `ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices <https://arxiv.org/abs/1707.01083>`_ .
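The reshape-transpose-reshape above reduces to a fixed permutation of channel indices, which a sketch on a plain list of channels makes concrete (an illustrative helper, not Paddle's API; each list element stands for one whole H×W feature map):

```python
def channel_shuffle(channels, groups):
    c = len(channels)
    assert c % groups == 0, "channel count must be divisible by groups"
    per = c // groups
    # Reshape C -> (groups, per), transpose -> (per, groups), flatten:
    # output position i*groups + g takes input channel g*per + i.
    return [channels[g * per + i]
            for i in range(per)
            for g in range(groups)]
```

Note that shuffling with `g` groups is undone by shuffling again with `C/g` groups, since the two permutations are inverse transposes.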

docs/api/paddle/nn/CosineEmbeddingLoss_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 CosineEmbeddingLoss
 -------------------------------

-.. py:function:: paddle.nn.CosineEmbeddingLoss(margin=0, reduction='mean', name=None)
+.. py:class:: paddle.nn.CosineEmbeddingLoss(margin=0, reduction='mean', name=None)

 Computes the `CosineEmbedding` loss between the given input1, input2 and label, commonly used for learning nonlinear embeddings or for semi-supervised learning.
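For a single pair, the cosine embedding loss is commonly defined as 1 − cos(x1, x2) when the label is 1 (similar pair) and max(0, cos(x1, x2) − margin) when the label is −1 (dissimilar pair). The sketch below assumes that standard definition and omits the `reduction` step:

```python
import math

def cosine_embedding_loss(x1, x2, label, margin=0.0):
    # Cosine similarity of the two vectors.
    dot = sum(a * b for a, b in zip(x1, x2))
    cos = dot / (math.hypot(*x1) * math.hypot(*x2))
    # label == 1: pull similar pairs together (loss 0 at cos = 1);
    # label == -1: push dissimilar pairs below the margin.
    return 1.0 - cos if label == 1 else max(0.0, cos - margin)
```

A batch version would apply this per pair and then take the mean or sum according to `reduction`.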

docs/api/paddle/nn/CrossEntropyLoss_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 CrossEntropyLoss
 -------------------------------

-.. py:function:: paddle.nn.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean', soft_label=False, axis=-1, use_softmax=True, name=None)
+.. py:class:: paddle.nn.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean', soft_label=False, axis=-1, use_softmax=True, name=None)

 By default, CrossEntropyLoss uses a softmax implementation (i.e. use_softmax=True), combining the softmax computation with the cross-entropy loss to give more numerically stable results.
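The numerical-stability claim rests on the log-sum-exp trick: log Σ exp(z) is evaluated as m + log Σ exp(z − m) with m = max(z), so no exponential can overflow. A single-sample sketch with a hard integer label (illustrative, not Paddle's implementation; weighting and `ignore_index` omitted):

```python
import math

def cross_entropy(logits, label):
    # Fused softmax + cross-entropy:
    #   loss = logsumexp(logits) - logits[label]
    # computed with the max subtracted for numerical stability.
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return lse - logits[label]
```

Computing `-log(softmax(logits)[label])` in two separate steps would overflow for large logits; the fused form stays finite even for logits in the hundreds.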

docs/api/paddle/nn/Dropout2D_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 Dropout2D
 -------------------------------

-.. py:function:: paddle.nn.Dropout2D(p=0.5, data_format='NCHW', name=None)
+.. py:class:: paddle.nn.Dropout2D(p=0.5, data_format='NCHW', name=None)

 During training, randomly zeroes entire channel feature maps with drop probability `p` (for a 4-D Tensor of shape `NCHW`, a channel feature map is one of its 2-D slices of shape `HW`). Dropout2D improves the independence between channel feature maps. See the paper `Efficient Object Localization Using Convolutional Networks <https://arxiv.org/abs/1411.4280>`_
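Unlike element-wise dropout, a whole `HW` map is kept or zeroed as a unit. A nested-list sketch of that behavior; the 1/(1−p) scaling of surviving channels ("inverted dropout") is an assumption about the scaling convention, and the helper is illustrative, not Paddle's code.

```python
import random

def dropout2d(x, p=0.5, training=True):
    # x is a nested [N][C][H][W] list structure.
    if not training or p == 0:
        return x
    scale = 1.0 / (1.0 - p)
    out = []
    for sample in x:
        new_sample = []
        for ch in sample:
            if random.random() < p:
                # Drop the entire H x W feature map at once.
                new_sample.append([[0.0 for _ in row] for row in ch])
            else:
                # Keep and rescale so the expected activation is unchanged.
                new_sample.append([[v * scale for v in row] for row in ch])
        out.append(new_sample)
    return out
```

In eval mode (`training=False`) the input passes through untouched, which is what makes the train-time rescaling consistent.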
