Commit 5c0d942

Merge branch 'develop' of https://github.com/liangqi520/docs into develop
Merge remote-tracking branch 'origin/develop' to sync remote changes
2 parents: fb66ba0 + 798730c

268 files changed (+2525, -2258 lines)


_typos.toml

Lines changed: 1 addition & 2 deletions
@@ -25,6 +25,7 @@ arange = "arange"
 unsupport = "unsupport"
 Nervana = "Nervana"
 datas = "datas"
+feeded = "feeded"
 
 # These words need to be fixed
 Learing = "Learing"
@@ -62,8 +63,6 @@ outpu = "outpu"
 outpus = "outpus"
 overrided = "overrided"
 overwrited = "overwrited"
-porcess = "porcess"
-processer = "processer"
 samle = "samle"
 schedual = "schedual"
 secenarios = "secenarios"

ci_scripts/hooks/pre-doc-compile.sh

Lines changed: 24 additions & 0 deletions
@@ -120,3 +120,27 @@ else
     echo "ERROR: Generated API mapping file not found at $GENERATED_FILE"
     handle_failure
 fi
+
+python "${APIMAPPING_ROOT}/tools/validate_api_difference_format.py"
+
+# Capture the exit status of the previous command
+exit_code=$?
+
+# Decide what to do next based on the exit status
+if [ $exit_code -eq 0 ]; then
+    echo "API DIFFERENCE FORMAT VALIDATE SUCCESS!"
+    # Additional commands to run on success can be added here
+else
+    echo "ERROR: API DIFFERENCE FORMAT VALIDATE FAILURE! error code: $exit_code" >&2
+    exit 1
+fi
+
+python "${APIMAPPING_ROOT}/tools/validate_pytorch_api_mapping.py" --skip-url-check
+exit_code=$?
+
+if [ $exit_code -eq 0 ]; then
+    echo "PYTORCH API MAPPING VALIDATE SUCCESS!"
+else
+    echo "ERROR: PYTORCH API MAPPING VALIDATE FAILURE! error code: $exit_code" >&2
+    exit 1
+fi
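
Both new hook steps rely on the same contract: a validator exits nonzero on failure, and the hook aborts the build when it sees that. The validators themselves are not part of this diff, so the skeleton below is a purely hypothetical sketch of that contract; the path and format rule are invented for illustration only.

```python
# Hypothetical validator skeleton: report problems on stderr, signal via exit code.
import sys
from pathlib import Path

def main() -> int:
    errors = []
    # Assumed location and rule; the real validate_api_difference_format.py may differ.
    for doc in Path("docs/api_difference").rglob("*.md"):
        first_line = doc.read_text(encoding="utf-8").splitlines()[:1]
        if not first_line or not first_line[0].startswith("## "):
            errors.append(f"{doc}: first line should be an API heading")
    for err in errors:
        print(err, file=sys.stderr)
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(main())
```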

docs/api/paddle/incubate/xpu/resnet_block/ResNetBasicBlock_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 ResNetBasicBlock
 -------------------------------
-.. py:class:: paddle.incubate.xpu.resnet_block.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, ilter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)
+.. py:class:: paddle.incubate.xpu.resnet_block.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, filter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)
 
 This API builds a callable object of the ``ResNetBasicBlock`` class, which computes multiple ``Conv2D``, ``BatchNorm`` and ``ReLU`` operations in a single fused pass; see the source link for the exact ordering.
 
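For orientation, a rough usage sketch of the class documented above; the hyperparameters and shapes are illustrative, and since the layer lives under paddle.incubate.xpu it is expected to require an XPU device:

```python
import paddle
from paddle.incubate.xpu.resnet_block import ResNetBasicBlock

# Illustrative sizes; the block fuses three Conv2D + BatchNorm (+ ReLU) stages.
block = ResNetBasicBlock(
    num_channels1=64, num_filter1=64, filter1_size=3,
    num_channels2=64, num_filter2=64, filter2_size=3,
    num_channels3=64, num_filter3=64, filter3_size=1,
    stride1=1, stride2=1, stride3=1,
    act='relu', has_shortcut=False)

x = paddle.rand([2, 64, 56, 56])   # NCHW input, matching data_format='NCHW'
out = block(x)                     # runs the fused Conv2D/BatchNorm/ReLU pipeline
```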

docs/api/paddle/nn/GRU_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ GRU
 - **input_size** (int) - The size of the input :math:`x`.
 - **hidden_size** (int) - The size of the hidden state :math:`h`.
 - **num_layers** (int, optional) - Number of layers in the recurrent network. For example, setting it to 2 stacks two GRU networks, where the second layer takes the output of the first as its input. Defaults to 1.
-- **direction** (str, optional) - The iteration direction of the network, which can be forward or bidirect (or bidirectional). foward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
+- **direction** (str, optional) - The iteration direction of the network, which can be forward or bidirect (or bidirectional). forward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
 - **time_major** (bool, optional) - Whether the first dimension of the input is the time steps. If time_major is True, the Tensor shape is [time_steps, batch_size, input_size]; otherwise it is [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Defaults to False.
 - **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. Range: [0, 1]. Defaults to 0.
 - **weight_ih_attr** (ParamAttr, optional) - Parameter attribute for weight_ih. Defaults to None.
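A short usage sketch of the parameters above (shapes are illustrative, mirroring the style of Paddle's own examples):

```python
import paddle

# 2-layer unidirectional GRU; input is [batch_size, time_steps, input_size]
# because time_major defaults to False.
rnn = paddle.nn.GRU(input_size=16, hidden_size=32, num_layers=2, direction='forward')

x = paddle.randn([4, 23, 16])
prev_h = paddle.randn([2, 4, 32])   # [num_layers * num_directions, batch_size, hidden_size]
y, h = rnn(x, prev_h)               # y: [4, 23, 32], h: [2, 4, 32]
```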

docs/api/paddle/nn/LSTM_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ LSTM
 - **input_size** (int) - The size of the input :math:`x`.
 - **hidden_size** (int) - The size of the hidden state :math:`h`.
 - **num_layers** (int, optional) - Number of layers in the recurrent network. For example, setting it to 2 stacks two GRU networks, where the second layer takes the output of the first as its input. Defaults to 1.
-- **direction** (str, optional) - The iteration direction of the network, which can be forward or bidirect (or bidirectional). foward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
+- **direction** (str, optional) - The iteration direction of the network, which can be forward or bidirect (or bidirectional). forward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
 - **time_major** (bool, optional) - Whether the first dimension of the input is the time steps. If time_major is True, the Tensor shape is [time_steps, batch_size, input_size]; otherwise it is [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Defaults to False.
 - **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. Range: [0, 1]. Defaults to 0.
 - **weight_ih_attr** (ParamAttr, optional) - Parameter attribute for weight_ih. Defaults to None.
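The same sketch for LSTM, whose recurrent state is a (hidden, cell) pair (shapes again illustrative):

```python
import paddle

rnn = paddle.nn.LSTM(input_size=16, hidden_size=32, num_layers=2, direction='forward')

x = paddle.randn([4, 23, 16])          # [batch_size, time_steps, input_size]
prev_h = paddle.randn([2, 4, 32])      # initial hidden state
prev_c = paddle.randn([2, 4, 32])      # initial cell state
y, (h, c) = rnn(x, (prev_h, prev_c))   # y: [4, 23, 32]
```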

docs/api/paddle/nn/SimpleRNN_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ SimpleRNN
 - **input_size** (int) - The size of the input :math:`x`.
 - **hidden_size** (int) - The size of the hidden state :math:`h`.
 - **num_layers** (int, optional) - Number of layers in the recurrent network. For example, setting it to 2 stacks two GRU networks, where the second layer takes the output of the first as its input. Defaults to 1.
-- **direction** (str, optional) - The iteration direction of the network, which can be forward or bidirect (or bidirectional). foward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
+- **direction** (str, optional) - The iteration direction of the network, which can be forward or bidirect (or bidirectional). forward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Defaults to forward.
 - **time_major** (bool, optional) - Whether the first dimension of the input is the time steps. If time_major is True, the Tensor shape is [time_steps, batch_size, input_size]; otherwise it is [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Defaults to False.
 - **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. Range: [0, 1]. Defaults to 0.
 - **activation** (str, optional) - The activation function of each cell in the network; can be tanh or relu. Defaults to tanh.
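And for SimpleRNN (shapes illustrative):

```python
import paddle

rnn = paddle.nn.SimpleRNN(input_size=16, hidden_size=32, num_layers=2, activation='tanh')

x = paddle.randn([4, 23, 16])       # [batch_size, time_steps, input_size]
prev_h = paddle.randn([2, 4, 32])
y, h = rnn(x, prev_h)               # y: [4, 23, 32], h: [2, 4, 32]
```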

docs/api/paddle/static/nn/batch_norm_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ moving_mean and moving_var are the global mean and variance computed during training
 Parameters
 ::::::::::::
 
-- **input** (Tensor) - The input feature of the batch_norm operator, a Tensor whose number of dimensions can be 2, 3, 4 or 5. Data types: flaot16, float32, float64.
+- **input** (Tensor) - The input feature of the batch_norm operator, a Tensor whose number of dimensions can be 2, 3, 4 or 5. Data types: float16, float32, float64.
 - **act** (string) - Activation function type, such as leaky_relu, relu or prelu. Default: None.
 - **is_test** (bool) - Whether this is the test phase; outside training, the global mean and variance collected during training are used. Default: False.
 - **momentum** (float|Tensor) - Used to compute moving_mean and moving_var; either a float, or a float32 Tensor of shape [1]. Update formulas: :math:`moving\_mean = moving\_mean * momentum + new\_mean * (1. - momentum)`, :math:`moving\_var = moving\_var * momentum + new\_var * (1. - momentum)`. Default: 0.9.
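A minimal static-graph sketch of the parameters above (the input shape and act choice are illustrative):

```python
import paddle

paddle.enable_static()

# NCHW input; batch_norm normalizes over the channel dimension and applies relu.
x = paddle.static.data(name='x', shape=[None, 32, 28, 28], dtype='float32')
hidden = paddle.static.nn.batch_norm(input=x, act='relu', momentum=0.9, is_test=False)
```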

docs/api/paddle/static/nn/conv3d_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ conv3d
 ::::::::::::
 
 - **input** (Tensor) - A 5-D Tensor of shape :math:`[N, C, D, H, W]` or :math:`[N, D, H, W, C]`, where N is the batch size, C the number of channels, D the feature depth, H the feature height and W the feature width; data type float16, float32 or float64.
-- **num_fliters** (int) - The number of filters (convolution kernels); equal to the number of output channels.
+- **num_filters** (int) - The number of filters (convolution kernels); equal to the number of output channels.
 - **filter_size** (int|list|tuple) - Filter size. If a list or tuple, it must contain three integers: (filter_size_depth, filter_size_height, filter_size_width). If a single integer, filter_size_depth = filter_size_height = filter_size_width = filter_size.
 - **stride** (int|list|tuple, optional) - Stride size: the step by which the filter slides over the input during convolution. If a list or tuple, it must contain three integers: (stride_depth, stride_height, stride_width). If a single integer, stride_depth = stride_height = stride_width = stride. Default: 1.
 - **padding** (int|list|tuple|str, optional) - Padding size. If a string, it can be "VALID" or "SAME", selecting the padding algorithm; see the formulas above for ``padding`` = "SAME" or ``padding`` = "VALID". If a tuple or list, it can take 3 formats:
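A minimal static-graph sketch using the corrected parameter name (shapes illustrative):

```python
import paddle

paddle.enable_static()

# NCDHW input: batch, 3 channels, depth 12, height 32, width 32.
data = paddle.static.data(name='data', shape=[None, 3, 12, 32, 32], dtype='float32')
conv = paddle.static.nn.conv3d(input=data, num_filters=2, filter_size=3, stride=1, padding=0)
```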

docs/design/concepts/tensor_array.md

Lines changed: 1 addition & 1 deletion
@@ -218,7 +218,7 @@ Since each step of RNN can only take a tensor-represented batch of data as input
 some preprocess should be taken on the inputs such as sorting the sentences by their length in descending order and cut each word and pack to new batches.
 
 Such cut-like operations can be embedded into `TensorArray` as general methods called `unpack` and `pack`,
-these two operations are similar to `stack` and `unstack` except that they operate on variable-length sequences formated as a LoD tensor rather than a tensor.
+these two operations are similar to `stack` and `unstack` except that they operate on variable-length sequences formatted as a LoD tensor rather than a tensor.
 
 Some definitions are like
 
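The definitions themselves fall outside this hunk. Purely as an illustration of the unpack/pack idea on length-sorted sequences (plain NumPy, not the Paddle API):

```python
import numpy as np

def unpack(sequences):
    # Like unstack, but for ragged input: one batch per time step, shrinking as
    # shorter sequences run out (input must be sorted by descending length).
    max_len = max(len(s) for s in sequences)
    return [np.array([s[t] for s in sequences if len(s) > t]) for t in range(max_len)]

def pack(steps, lengths):
    # Inverse of unpack: rebuild each sequence from the per-step batches.
    return [[steps[t][i] for t in range(lengths[i])] for i in range(len(lengths))]

steps = unpack([[1, 2, 3], [4, 5], [6]])
# steps[0] -> [1, 4, 6], steps[1] -> [2, 5], steps[2] -> [3]
assert pack(steps, [3, 2, 1]) == [[1, 2, 3], [4, 5], [6]]
```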

docs/design/concurrent/channel.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 ## Introduction
 
 A Channel is a data structure that allows for synchronous interprocess
-communication via message passing. It is a fundemental component of CSP
+communication via message passing. It is a fundamental component of CSP
 (communicating sequential processes), and allows for users to pass data
 between threads without having to worry about synchronization.
 
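As a loose analogy only (not part of the design itself), a bounded queue in Python shows the same blocking send/receive behavior between threads:

```python
import threading
import queue

ch = queue.Queue(maxsize=1)   # a small buffer approximates a synchronous channel

def sender():
    ch.put("hello")           # blocks while the buffer is full

threading.Thread(target=sender).start()
print(ch.get())               # receives "hello"; no explicit locks needed
```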
