
Commit 2a632aa

Merge branch 'develop' into update-cI
2 parents: 080c36b + b9c8dd7

File tree: 24 files changed (+41, -62 lines)

_typos.toml

Lines changed: 0 additions & 21 deletions

```diff
@@ -28,27 +28,6 @@ datas = "datas"
 feeded = "feeded"
 
 # These words need to be fixed
-Learing = "Learing"
-Operaton = "Operaton"
-Optimizaing = "Optimizaing"
-Optimzier = "Optimzier"
-Setment = "Setment"
-Simle = "Simle"
-Sovler = "Sovler"
-libary = "libary"
-matrics = "matrics"
-metrices = "metrices"
-mutbale = "mutbale"
-occurence = "occurence"
-opeartor = "opeartor"
-opeartors = "opeartors"
-operaters = "operaters"
-optmization = "optmization"
-outpu = "outpu"
-outpus = "outpus"
-overrided = "overrided"
-overwrited = "overwrited"
-samle = "samle"
 schedual = "schedual"
 secenarios = "secenarios"
 sematic = "sematic"
```
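For context (not part of the commit): in the `typos` checker's config, an entry that maps a misspelling to itself effectively whitelists it until the underlying files are fixed, which is why this commit can delete 21 entries. A hypothetical Python sketch of that whitelist-versus-correction behavior; the dictionaries below are illustrative only, not the tool's real data structures:

```python
# Entries still kept in _typos.toml after this commit (typos not yet fixed,
# so the checker is told to accept them as-is):
whitelist = {"schedual", "secenarios", "sematic"}

# Entries removed by this commit -- the real corrections now apply
# (illustrative subset):
corrections = {"opeartor": "operator", "libary": "library", "occurence": "occurrence"}

def check(word):
    """Return a word unchanged if whitelisted, its correction if one is known."""
    if word in whitelist:
        return word
    return corrections.get(word, word)

print(check("schedual"))   # still whitelisted -> "schedual"
print(check("opeartor"))   # no longer whitelisted -> "operator"
```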

docs/api/gen_doc.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -35,7 +35,7 @@
 # "short_name":"", # without module name
 # "module_name":"", # the module of the real api belongs to
 # "display":True/Flase, # consider the not_display_doc_list and the display_doc_list
-# "has_overwrited_doc":True/False #
+# "has_overwritten_doc":True/False #
 # "doc_filename" # document filename without suffix
 # "suggested_name":"", # the shortest name in all_names
 # }
```

docs/api/paddle/linalg/svd_cn.rst

Lines changed: 2 additions & 2 deletions

```diff
@@ -26,9 +26,9 @@ svd
 返回
 ::::::::::::
 
-- Tensor U,奇异值分解的 U 矩阵。如果 full_matrics 设置为 False,则 Shape 为 ``[*, M, K]``,如果 full_metrices 设置为 True,那么 Shape 为 ``[*, M, M]``。其中 K 为 M 和 N 的最小值。
+- Tensor U,奇异值分解的 U 矩阵。如果 full_matrices 设置为 False,则 Shape 为 ``[*, M, K]``,如果 full_matrices 设置为 True,那么 Shape 为 ``[*, M, M]``。其中 K 为 M 和 N 的最小值。
 - Tensor S,奇异值向量,Shape 为 ``[*, K]`` 。
-- Tensor VH,奇异值分解的 VH 矩阵。如果 full_matrics 设置为 False,则 Shape 为 ``[*, K, N]``,如果 full_metrices 设置为 True,那么 Shape 为 ``[*, N, N]``。其中 K 为 M 和 N 的最小值。
+- Tensor VH,奇异值分解的 VH 矩阵。如果 full_matrices 设置为 False,则 Shape 为 ``[*, K, N]``,如果 full_matrices 设置为 True,那么 Shape 为 ``[*, N, N]``。其中 K 为 M 和 N 的最小值。
 
 代码示例
 ::::::::::
```

(The changed lines fix the misspelled `full_matrics`/`full_metrices` to the real parameter name `full_matrices`; the doc text states that U has shape `[*, M, K]` with `full_matrices=False` and `[*, M, M]` with `full_matrices=True`, VH has shape `[*, K, N]` or `[*, N, N]` respectively, where K is the minimum of M and N.)
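The shape rule described by the corrected `full_matrices` parameter can be checked with NumPy's `numpy.linalg.svd`, which uses the same flag; this is only an illustrative sketch, not Paddle code:

```python
import numpy as np

# M=4 rows, N=3 columns, so K = min(M, N) = 3.
x = np.random.rand(4, 3)

# full_matrices=False: U is [M, K], VH is [K, N]
u, s, vh = np.linalg.svd(x, full_matrices=False)
print(u.shape, s.shape, vh.shape)   # (4, 3) (3,) (3, 3)

# full_matrices=True: U is [M, M], VH is [N, N]
u, s, vh = np.linalg.svd(x, full_matrices=True)
print(u.shape, s.shape, vh.shape)   # (4, 4) (3,) (3, 3)
```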

docs/api/paddle/utils/cpp_extension/load_cn.rst

Lines changed: 1 addition & 1 deletion

```diff
@@ -29,7 +29,7 @@ load
 from paddle.utils.cpp_extension import load
 
 custom_op_module = load(
-    name="op_shared_libary_name", # 生成动态链接库的名称
+    name="op_shared_library_name", # 生成动态链接库的名称
     sources=['relu_op.cc', 'relu_op.cu'], # 自定义 OP 的源码文件列表
     extra_cxx_cflags=['-g', '-w'], # 可选,指定编译。cc/.cpp 文件时额外的编译选项
     extra_cuda_cflags=['-O2'], # 可选,指定编译。cu 文件时额外的编译选项
```

docs/design/concurrent/parallel_do.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -15,7 +15,7 @@ AddOutput(kOutputs, "Outputs needed to be merged from different devices").AsDupl
 AddOutput(kParallelScopes,
 "Scopes for all local variables in forward pass. One scope for each device");
 AddAttr<framework::BlockDesc *>(kParallelBlock,
-"List of operaters to be executed in parallel");
+"List of operators to be executed in parallel");
 ```
 
 A vanilla implementation of parallel_do can be shown as the following (`|` means single thread and
@@ -94,7 +94,7 @@ There are serial places we can make this parallel_do faster.
 
 ### forward: split input onto different devices
 
-If the input of the parallel_do is independent from any prior opeartors, we can avoid this step by
+If the input of the parallel_do is independent from any prior operators, we can avoid this step by
 prefetching the input onto different devices in a separate background thread. And the python code
 looks like this.
 ```python
````
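The second hunk above describes prefetching input shards onto devices from a background thread. A minimal, hypothetical sketch of that idea using only the standard library (names like `prefetch` and the shard layout are illustrative, not the actual parallel_do code):

```python
import queue
import threading

def prefetch(batches, num_devices, q):
    """Background thread: shard each batch across devices ahead of time."""
    for batch in batches:
        chunk = len(batch) // num_devices
        q.put([batch[i * chunk:(i + 1) * chunk] for i in range(num_devices)])
    q.put(None)  # sentinel: no more input

batches = [list(range(8)), list(range(8, 16))]
shard_queue = queue.Queue(maxsize=2)  # bounded, so prefetch stays ahead but not unbounded
threading.Thread(target=prefetch, args=(batches, 2, shard_queue), daemon=True).start()

consumed = []
while (shards := shard_queue.get()) is not None:
    consumed.append([len(s) for s in shards])  # each shard would feed one device
print(consumed)   # [[4, 4], [4, 4]]
```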

docs/design/data_type/float16.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -101,7 +101,7 @@ In Fluid, a neural network is represented as a protobuf message called [ProgramD
 ### Operator level requirement
 Each operator has many kernels for different data types, devices, and library types. The operator will select the appropriate kernel to run based on, among other things, the data type of the input variables. By default, every Fluid operator has a float data type kernel that takes float variables as input and generates float output.
 
-This means that if we provide float input to the first operator in a program, then each opeartor will use float kernel to compute float output and send it as input to the next operator to trigger the float kernel. Overall, the program will run in float mode and give us a final output of float data type.
+This means that if we provide float input to the first operator in a program, then each operator will use float kernel to compute float output and send it as input to the next operator to trigger the float kernel. Overall, the program will run in float mode and give us a final output of float data type.
 
 The same principle applies if we want a program to run in float16 mode. We provide input variable of float16 data type to the first operator, and then one by one, each operator in the program will run the float16 kernel (provided that each operator in this program has float16 kernels registered) until we finally obtain a float16 output variable.
 
```
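The dtype-propagation principle in the corrected paragraph can be illustrated with NumPy (a sketch only; Fluid's kernel dispatch is not shown): feeding float16 inputs to the first operation keeps every downstream result in float16.

```python
import numpy as np

# Feed float16 inputs to the first "operator"; each subsequent result
# stays float16, so the whole chain runs in float16 mode.
x = np.ones((2, 2), dtype=np.float16)
w = np.full((2, 2), 0.5, dtype=np.float16)

h = x @ w             # first operator: float16 in, float16 out
y = np.maximum(h, 0)  # next operator consumes float16 and emits float16

print(h.dtype, y.dtype)   # float16 float16
```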

docs/design/dynamic_rnn/rnn_design.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -198,7 +198,7 @@ std::vector<SortedSeqItem> SortBySeqLen(const LODTensor& tensor);
 由于输入序列的顺序变化,以下现有的接口需要针对性地修改:
 
 - InitMemories, memory 需要根据 `sorted_seqs` 重新排列
-- SetmentInputs
+- SegmentInputs
 - ConcatOutputs
 
 此外,由于 `sorted_seqs` 需要被 `RecurrentGradientOp` 复用,因此会变成 `RecurrentOp` 一个新的 output 输出,
```

(The quoted Chinese passage is the source of the English design doc below: because the input-sequence order changes, `InitMemories` must rearrange memory according to `sorted_seqs`, and `SegmentInputs`/`ConcatOutputs` need matching changes; `sorted_seqs` is reused by `RecurrentGradientOp` and so becomes a new output of `RecurrentOp`.)

docs/design/dynamic_rnn/rnn_design_en.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -136,7 +136,7 @@ std::vector<SortedSeqItem> SortBySeqLen(const LODTensor& tensor);
 Due to the sequence of input sequences, the following existing interfaces need to be modified:
 
 - InitMemories, memory needs to be rearranged according to `sorted_seqs`
-- SetmentInputs
+- SegmentInputs
 - ConcatOutputs
 
 In addition, because `sorted_seqs` needs to be multiplexed with `RecurrentGradientOp`, it will become a new output of `RecurrentOp`.
```
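The corrected `SegmentInputs` name refers to slicing one time step out of the length-sorted sequences. A hypothetical sketch of the sort-then-segment idea (the real implementation operates on LODTensor, not Python lists):

```python
# Sequences of different lengths, as in a dynamic RNN batch.
seqs = [[1, 2], [3, 4, 5, 6], [7], [8, 9, 10]]

# sorted_seqs: original indices ordered longest-first, so that at each time
# step the still-active sequences form a contiguous prefix.
sorted_idx = sorted(range(len(seqs)), key=lambda i: -len(seqs[i]))
sorted_seqs = [seqs[i] for i in sorted_idx]

def segment_inputs(t):
    """Take element t from every sequence that is long enough."""
    return [s[t] for s in sorted_seqs if len(s) > t]

print(sorted_idx)         # [1, 3, 0, 2]
print(segment_inputs(0))  # [3, 8, 1, 7]
print(segment_inputs(2))  # [5, 10]
```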

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -79,7 +79,7 @@ In former control flow graph, the out-edges of node 5 are 5 --> 6 and 5 --> 2, a
 
 - Uses and Defs
 
-An assignmemt to a variable or temporary defines that variable. An occurence of a variable on the right-hand side of an assignment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.
+An assignmemt to a variable or temporary defines that variable. An occurrence of a variable on the right-hand side of an assignment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.
 
 - Liveness
 
```
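The def/use example in the quoted passage (*def(3)* = {c}, *use(3)* = {b, c}) feeds directly into the standard liveness equation, live-in(n) = use(n) ∪ (live-out(n) − def(n)). A small illustrative sketch of that rule; the node numbering follows the passage, everything else is hypothetical:

```python
# def/use sets for node 3 of the flow-graph example, where node 3
# defines c and uses b and c (e.g. an assignment like c = c + b).
defs = {3: {"c"}}
uses = {3: {"b", "c"}}

def live_in(node, live_out):
    """A variable is live into a node if it is used there, or is live out
    of the node and not redefined by it."""
    return uses[node] | (live_out - defs[node])

# If a and c are live after node 3: c is killed and redefined, b and c are
# used, a passes through.
print(sorted(live_in(3, {"a", "c"})))   # ['a', 'b', 'c']
```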

docs/design/mkldnn/int8/QAT/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -62,7 +62,7 @@ Notes:
 ```... → input1 → conv2d → output1 → batch_norm → output2 → relu → output3 → ...```
 and we want to quantize the `conv2d` op, then after applying FP32 optimizations the sequence will become
 ```... → input1 → conv2d → output3 → ...```
-and the quantization scales have to be collected for the `input1` and `outpu3` tensors in the Quant model.
+and the quantization scales have to be collected for the `input1` and `output3` tensors in the Quant model.
 2. Quantization of the following operators is supported: `conv2d`, `depthwise_conv2d`, `mul`, `fc`, `matmul`, `pool2d`, `reshape2`, `transpose2`, `concat`.
 3. The longest sequence of consecutive quantizable operators in the model, the biggest performance boost can be achieved through quantization:
 ```... → conv2d → conv2d → pool2d → conv2d → conv2d → ...```
````
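"Collecting a quantization scale" for a tensor like `input1` or `output3` commonly means recording a per-tensor scale factor, e.g. the symmetric int8 scheme max|x|/127. This sketch shows that common scheme for illustration; it is not necessarily the exact formula Paddle's Quant passes use:

```python
def collect_scale(tensor):
    """Symmetric int8 scale: map the largest absolute value onto 127."""
    return max(abs(v) for v in tensor) / 127.0

def quantize(tensor, scale):
    """Round each value to its nearest int8 step."""
    return [round(v / scale) for v in tensor]

input1 = [-0.5, 0.25, 1.27]
scale = collect_scale(input1)          # ~0.01
print(quantize(input1, scale))         # [-50, 25, 127]
```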
