
Commit a25d60f

[docs] Fix the names in function declarations in the API docs [Part 1] (#6686)

* fix api part1
* update

1 parent 4f9141c · commit a25d60f

10 files changed: +14 −14 lines changed


docs/api/paddle/DataParallel_cn.rst (5 additions, 5 deletions)

@@ -46,17 +46,17 @@ COPY-FROM: paddle.DataParallel:dp-example
 ::::::::::::
 COPY-FROM: paddle.DataParallel:dp-pylayer-example
 
-.. py:function:: no_sync()
 
+方法
+::::::::::::
+no_sync()
+'''''''''
 用于暂停梯度同步的上下文管理器。在 no_sync()中参数梯度只会在模型上累加;直到 with 之外的第一个 forward-backward,梯度才会被同步。
 
-代码示例
-::::::::::::
+**代码示例**
 
 COPY-FROM: paddle.DataParallel.no_sync
 
-方法
-::::::::::::
 state_dict(destination=None, include_sublayers=True)
 '''''''''
 
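The `no_sync()` contract described in the hunk above (inside the context, gradients only accumulate on the model; the first forward-backward after the `with` block synchronizes them) can be sketched with a plain stdlib context manager. This is an illustrative toy, not `paddle.DataParallel`: the `TinyDP` class and all of its fields are invented for the sketch.

```python
from contextlib import contextmanager

class TinyDP:
    """Toy stand-in for a data-parallel wrapper (NOT paddle.DataParallel);
    it only illustrates the no_sync() contract described above."""

    def __init__(self):
        self.grad_needs_sync = True   # by default, sync after every backward
        self.synced = 0               # counts gradient synchronizations

    @contextmanager
    def no_sync(self):
        # Inside the context, gradients only accumulate locally.
        previous = self.grad_needs_sync
        self.grad_needs_sync = False
        try:
            yield
        finally:
            self.grad_needs_sync = previous

    def backward(self):
        # A real wrapper would all-reduce gradients here when syncing is on.
        if self.grad_needs_sync:
            self.synced += 1

model = TinyDP()
with model.no_sync():
    model.backward()   # gradients accumulate, no synchronization
model.backward()       # first backward outside the context triggers the sync
```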
docs/api/paddle/distributed/destroy_process_group_cn.rst (1 addition, 1 deletion)

@@ -4,7 +4,7 @@ destroy_process_group
 -------------------------------
 
 
-.. py:function:: destroy_process_group(group=None)
+.. py:function:: paddle.distributed.destroy_process_group(group=None)
 
 销毁一个指定的通信组。
 
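All of the `paddle.distributed` hunks in this commit make the same change for the same reason: Sphinx's `py:function` directive takes the signature verbatim, so only a fully qualified name makes the rendered page (and its cross-reference target) show the real import path. A minimal before/after sketch of the convention, using `destroy_process_group` from the hunk above:

```rst
.. without the module path, the rendered heading is just
   "destroy_process_group(group=None)"

.. py:function:: destroy_process_group(group=None)

.. fully qualified, it renders as
   "paddle.distributed.destroy_process_group(group=None)"

.. py:function:: paddle.distributed.destroy_process_group(group=None)
```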
docs/api/paddle/distributed/get_group_cn.rst (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 get_group
 -------------------------------
 
-.. py:function:: get_group(id=0)
+.. py:function:: paddle.distributed.get_group(id=0)
 
 通过通信组 id 获取通信组实例
 
docs/api/paddle/distributed/get_rank_cn.rst (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 get_rank
 ----------
 
-.. py:function:: get_rank(group=None)
+.. py:function:: paddle.distributed.get_rank(group=None)
 
 返回当前进程在给定通信组下的 rank,rank 是在 [0, world_size) 范围内的连续整数。如果没有指定通信组,则默认使用全局通信组。
 
docs/api/paddle/distributed/new_group_cn.rst (1 addition, 1 deletion)

@@ -4,7 +4,7 @@ new_group
 -------------------------------
 
 
-.. py:function:: new_group(ranks=None, backend=None)
+.. py:function:: paddle.distributed.new_group(ranks=None, backend=None)
 
 创建分布式通信组。
 
docs/api/paddle/distributed/to_static_cn.rst (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 to_static
 -------------------------------
 
-.. py:function:: to_static(layer, loader, loss=None, optimizer=None, strategy=None)
+.. py:function:: paddle.distributed.to_static(layer, loader, loss=None, optimizer=None, strategy=None)
 
 将带有分布式切分信息的动态图 ``layer`` 转换为静态图分布式模型, 可在静态图模式下进行分布式训练;同时将动态图下所使用的数据迭代器 ``loader`` 转换为静态图分布式训练所使用的数据迭代器。
 
docs/api/paddle/distributed/unshard_dtensor_cn.rst (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 unshard_dtensor
 -------------------------------
 
-.. py:function:: unshard_dtensor(dist_tensor)
+.. py:function:: paddle.distributed.unshard_dtensor(dist_tensor)
 
 将带有分布式信息的分布式 Tensor 转换为普通 Tensor。
 
docs/api/paddle/distributed/wait_cn.rst (1 addition, 1 deletion)

@@ -4,7 +4,7 @@ wait
 -------------------------------
 
 
-.. py:function:: wait(tensor, group=None, use_calc_stream=True)
+.. py:function:: paddle.distributed.wait(tensor, group=None, use_calc_stream=True)
 
 
 同步通信组
docs/api/paddle/incubate/LookAhead_cn.rst (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 LookAhead
 -------------------------------
 
-.. py:function:: class paddle.incubate.LookAhead(inner_optimizer, alpha=0.5, k=5, name=None)
+.. py:class:: paddle.incubate.LookAhead(inner_optimizer, alpha=0.5, k=5, name=None)
 
 此 API 为论文 `Lookahead Optimizer: k steps forward, 1 step back <https://arxiv.org/abs/1907.08610>`_ 中 Lookahead 优化器的实现。
 Lookahead 保留两组参数:fast_params 和 slow_params。每次训练迭代中 inner_optimizer 更新 fast_params。
 Lookahead 每 k 次训练迭代更新 slow_params 和 fast_params,如下所示:

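The "every k iterations" update that the last doc line refers to is, per the cited paper (not taken from Paddle's source): slow_params moves a fraction alpha toward fast_params, and fast_params restarts from the result. A minimal scalar sketch, with the hypothetical `inner_step` standing in for whatever update `inner_optimizer` applies:

```python
def lookahead(fast, slow, alpha, k, steps, inner_step):
    """Scalar sketch of the Lookahead rule from the cited paper
    (not paddle.incubate.LookAhead itself)."""
    for t in range(1, steps + 1):
        fast = inner_step(fast)                  # inner optimizer updates fast_params
        if t % k == 0:                           # every k iterations:
            slow = slow + alpha * (fast - slow)  # slow_params moves toward fast_params
            fast = slow                          # fast_params restarts from slow_params
    return fast, slow

# Example: an inner step that subtracts 1.0 per iteration, alpha=0.5, k=5.
# After 5 steps fast has gone 10 -> 5, so slow becomes 10 + 0.5*(5-10) = 7.5.
fast, slow = lookahead(fast=10.0, slow=10.0, alpha=0.5, k=5, steps=5,
                       inner_step=lambda w: w - 1.0)
```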
docs/api/paddle/incubate/xpu/resnet_block/ResNetBasicBlock_cn.rst (1 addition, 1 deletion)

@@ -2,7 +2,7 @@
 
 ResNetBasicBlock
 -------------------------------
-.. py:class:: paddle.incubate.xpu.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, ilter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)
+.. py:class:: paddle.incubate.xpu.resnet_block.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, ilter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)
 
 该接口用于构建 ``ResNetBasicBlock`` 类的一个可调用对象,实现一次性计算多个 ``Conv2D``、 ``BatchNorm`` 和 ``ReLU`` 的功能,排列顺序参见源码链接。
 