
Commit 51b54de

[CodeStyle][Typos][A-8,A-[10-17]] Fix typo(wiht,avaiable,acutal,apporach,apporaches,arguements,arguemnts,assgin,assginment,auxilary,avaiable) (#7545)
1 parent a849230 commit 51b54de

File tree

12 files changed: +25 additions, -35 deletions

_typos.toml

Lines changed: 0 additions & 10 deletions
@@ -83,16 +83,6 @@ Wether = "Wether"
 accordding = "accordding"
 accoustic = "accoustic"
 accpetance = "accpetance"
-accracy = "accracy"
-acutal = "acutal"
-apporach = "apporach"
-apporaches = "apporaches"
-arguements = "arguements"
-arguemnts = "arguemnts"
-assgin = "assgin"
-assginment = "assginment"
-auxilary = "auxilary"
-avaiable = "avaiable"
 baisc = "baisc"
 basci = "basci"
 beacuse = "beacuse"
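Entries in `_typos.toml` that map a word to itself act as a whitelist telling the `typos` checker to accept that spelling; deleting the ten entries above re-enables detection of those misspellings. A small illustrative sketch of how one might list such self-mapped entries, assuming they live under a `[default.extend-words]` table (the table path is an assumption, not verified against this file):

```python
import tomllib  # Python 3.11+

# Self-mapped entries (typo = "typo") whitelist a spelling for the typos tool;
# the table path below is assumed, adjust it to match the actual _typos.toml.
with open("_typos.toml", "rb") as f:
    cfg = tomllib.load(f)

extend_words = cfg.get("default", {}).get("extend-words", {})
whitelisted = sorted(k for k, v in extend_words.items() if k == v)
print(whitelisted)
```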

ci_scripts/check_api_label_cn.py

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ def run_cn_api_label_checking(rootdir, files):
     for file in files:
         if should_test(file) and not check_api_label(rootdir, file):
             logger.error(
-                f"The first line in {rootdir}/{file} is not avaiable, please re-check it!"
+                f"The first line in {rootdir}/{file} is not available, please re-check it!"
             )
             sys.exit(1)
     valid_api_labels = find_all_api_labels_in_dir(rootdir)
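For context, `check_api_label` verifies that the first line of each Chinese API document carries its reference label. A rough, hypothetical sketch of that kind of check (the label pattern is an assumption for illustration, not the repository's actual implementation):

```python
import re

# Assumed label format: Paddle CN docs typically open with an RST anchor
# such as ".. _cn_api_paddle_put_along_axis:".
API_LABEL_PATTERN = re.compile(r"^\.\. _cn_api_\w+:$")

def first_line_has_api_label(path: str) -> bool:
    """Return True if the file's first line looks like an API reference label."""
    with open(path, encoding="utf-8") as f:
        return bool(API_LABEL_PATTERN.match(f.readline().strip()))
```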

docs/api/paddle/put_along_axis_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ put_along_axis
 - **indices** (Tensor) - The index tensor, holding the indices of the 1-D slices taken along the axis; it must have the same number of dimensions as arr. When ``broadcast`` is ``True`` it must be broadcastable to align with arr; otherwise every dimension except ``axis`` must be no larger than the corresponding dimension of ``arr`` and ``values``. Data types: int32, int64.
 - **values** (float) - The values to insert. When ``broadcast`` is ``True``, their shape must be broadcastable to match the indices tensor; otherwise every dimension must be no smaller than the corresponding dimension of ``indices``. Data types: bfloat16, float16, float32, float64, int32, int64, uint8, int16.
 - **axis** (int) - The dimension along which to take the corresponding values. Data type: int.
-- **reduce** (str, optional) - The reduction type, ``assign`` by default; the options are ``add``, ``multiple``, ``mean``, ``amin``, ``amax``. Different reductions apply the inserted value to the input tensor arr differently: ``assgin`` overwrites the input, ``add`` accumulates onto it, ``mean`` accumulates a running mean, ``multiple`` accumulates a running product, ``amin`` accumulates a running minimum, and ``amax`` accumulates a running maximum.
+- **reduce** (str, optional) - The reduction type, ``assign`` by default; the options are ``add``, ``multiple``, ``mean``, ``amin``, ``amax``. Different reductions apply the inserted value to the input tensor arr differently: ``assign`` overwrites the input, ``add`` accumulates onto it, ``mean`` accumulates a running mean, ``multiple`` accumulates a running product, ``amin`` accumulates a running minimum, and ``amax`` accumulates a running maximum.
 - **include_self** (bool, optional) - Whether the elements of arr are included in the reduction. Defaults to ``True``.
 - **broadcast** (bool, optional) - Whether to broadcast the ``index`` tensor. Defaults to ``True``.
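As a quick illustration of the ``reduce`` modes documented above, here is a minimal sketch assuming the signature described in this file, `paddle.put_along_axis(arr, indices, values, axis, reduce=...)`, with the defaults `include_self=True` and `broadcast=True`; the expected values in the comments apply to this toy input only:

```python
import paddle

x = paddle.to_tensor([[10.0, 30.0, 20.0], [60.0, 40.0, 50.0]])
index = paddle.to_tensor([[0]])  # broadcast along axis 0, addressing row 0

# "assign" (the default) overwrites the addressed entries with the value.
assigned = paddle.put_along_axis(x, index, 99.0, axis=0, reduce="assign")
# expected: [[99., 99., 99.], [60., 40., 50.]]

# "add" accumulates the value onto the existing entries instead of replacing them.
added = paddle.put_along_axis(x, index, 99.0, axis=0, reduce="add")
# expected: [[109., 129., 119.], [60., 40., 50.]]
```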

docs/api/paddle/scatter_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ A PyTorch-compatible scatter function. Based on :ref:`cn_api_paddle_put_along_axis`
 - **dim** (int) - The dimension along which to scatter, in the range ``[-input.ndim, input.ndim)``.
 - **index** (Tensor) - The index tensor, holding the indices of the 1-D slices taken along the axis; it must have the same number of dimensions as arr. Note that, except along ``dim``, each dimension of ``index`` must be no larger than that of the ``input`` and ``src`` tensors, and its values must lie within ``input.shape[dim]``. Data types: int32, int64.
 - **src** (Tensor) - The values to insert. When ``src`` is a tensor, each of its dimensions must be at least as large as the corresponding dimension of ``index``; it is not constrained by the dimensions of ``input``. When it is a scalar, it is automatically broadcast to the shape of ``index``. Data types: bfloat16, float16, float32, float64, int32, int64, uint8, int16. This parameter has a mutually exclusive alias ``value``.
-- **reduce** (str, optional) - The reduction used by scatter. Defaults to None, which is equivalent to ``assign``; the options are ``add``, ``multiple``, ``mean``, ``amin``, ``amax``. Different reductions apply the inserted value src to the input tensor arr differently: ``assgin`` overwrites the input, ``add`` accumulates onto it, ``mean`` accumulates a running mean, ``multiple`` accumulates a running product, ``amin`` accumulates a running minimum, and ``amax`` accumulates a running maximum.
+- **reduce** (str, optional) - The reduction used by scatter. Defaults to None, which is equivalent to ``assign``; the options are ``add``, ``multiple``, ``mean``, ``amin``, ``amax``. Different reductions apply the inserted value src to the input tensor arr differently: ``assign`` overwrites the input, ``add`` accumulates onto it, ``mean`` accumulates a running mean, ``multiple`` accumulates a running product, ``amin`` accumulates a running minimum, and ``amax`` accumulates a running maximum.
 - **out** (Tensor, optional) - Used to pass the output by reference. Note: in dynamic graph mode, out can be any Tensor. Defaults to None.

 Returns

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ In former control flow graph, the out-edges of node 5 are 5 --> 6 and 5 --> 2, a

 - Uses and Defs

-  An assignmemt to a variable or temporary defines that variable. An occurence of a variable on the right-hand side of an assginment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.
+  An assignmemt to a variable or temporary defines that variable. An occurence of a variable on the right-hand side of an assignment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.

 - Liveness
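To make the *def*/*use* and liveness notions concrete, here is an illustrative sketch (not code from the design doc) for the textbook example CFG referenced above, where node 3 is `c = c + b`; the def/use sets of the other nodes are filled in from the standard example and are an assumption:

```python
# Successor edges of the example CFG (node 5 branches to nodes 6 and 2).
cfg_succ = {1: [2], 2: [3], 3: [4], 4: [5], 5: [6, 2], 6: []}
# Per-node def/use sets; node 3 is "c = c + b", so def(3) = {c}, use(3) = {b, c}.
defs = {1: {"a"}, 2: {"b"}, 3: {"c"}, 4: {"a"}, 5: set(), 6: set()}
uses = {1: set(), 2: {"a"}, 3: {"b", "c"}, 4: {"b"}, 5: {"a"}, 6: {"c"}}

live_in = {n: set() for n in cfg_succ}
live_out = {n: set() for n in cfg_succ}
changed = True
while changed:  # iterate the dataflow equations to a fixed point
    changed = False
    for n in sorted(cfg_succ, reverse=True):
        out_n = set()
        for s in cfg_succ[n]:
            out_n |= live_in[s]             # out[n] = union of in[s] over successors
        in_n = uses[n] | (out_n - defs[n])  # in[n] = use[n] | (out[n] - def[n])
        if in_n != live_in[n] or out_n != live_out[n]:
            live_in[n], live_out[n], changed = in_n, out_n, True

print(live_in[3])  # variables live on entry to node 3: {'b', 'c'}
```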

docs/design/phi/design_cn.md

Lines changed: 1 addition & 1 deletion
@@ -1219,7 +1219,7 @@ REGISTER_OPERATOR(sign, ops::SignOp, ops::SignOpMaker<float>,
  * The infrt declare like:
  *
  * def PDKEL_Reshape_to_CPU : Pat<
- *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr), // OpMaker arguements
+ *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr), // OpMaker arguments
  *     (PDKEL_ReshapeKernelAttr $x, fn($shape_attr)>; // Kernel arguments
  * def PDKEL_Reshape_to_CPU : Pat<
  *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),

docs/design/phi/design_en.md

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ We hope to be able to achieve the same three-layer arguments of Python API -> Op
 - The initial construction of the PHI operator library paid more attention to Kernel "migration". Due to the consideration of time and labor costs, the original OpKernel logic migration is not forced to be upgraded to "combined" writing for the time being, and the same is true for the forward and backward Kernels
 - The "combined Kernel extension development" capability provided by the PHI operator library initially serves the new operators of subsequent increments, and the existing operators still maintain their original coding implementation, reducing the cost of migration
 - The "new hardware expansion capability" provided by the PHI operator library is initially only provided within the scope of the new hardware itself. For example, the XPU has implemented 50 Kernels, and then it can combine new Kernels based on 50 Kernels, but this is only limited to the XPU Within the scope, its implementation is not common with CPU, CUDA, etc.
-- The PHI operator library project focuses on the work of "Kernel functionalization & Op normalization", Kernel is changed to functional format, C++ API and Op naming and arguemnts list are gradually normalized to Python API under the premise of ensuring compatibility as much as possible
+- The PHI operator library project focuses on the work of "Kernel functionalization & Op normalization", Kernel is changed to functional format, C++ API and Op naming and arguments list are gradually normalized to Python API under the premise of ensuring compatibility as much as possible


 ## 2. Design Overview
@@ -1219,7 +1219,7 @@ At present, the `ArgumentMapping` function mapping is designed. In the `phi/ops/
  * The infrt declare like:
  *
  * def PDKEL_Reshape_to_CPU : Pat<
- *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr), // OpMaker arguements
+ *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr), // OpMaker arguments
  *     (PDKEL_ReshapeKernelAttr $x, fn($shape_attr)>; // Kernel arguments
  * def PDKEL_Reshape_to_CPU : Pat<
  *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),

docs/design/quantization/fixed_point_quantization.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 Fixed-point quantization uses lower bits, for example, 2-bit, 3-bit or 8-bit fixed point to represent weights and activations, which usually are in singe-precision float-point with 32 bits. The fixed-point representation has advantages in reducing memory bandwidth, lowering power consumption and computational resources as well as the model storage requirements. It is especially important for the inference in embedded-device deployment.

-According to some experiments, the apporach to quantize the model trained in float point directly works effectively on the large models, like the VGG model having many parameters. But the accuracy drops a lot for the small model. In order to improve the tradeoff between accuracy and latency, many quantized training apporaches are proposed.
+According to some experiments, the approach to quantize the model trained in float point directly works effectively on the large models, like the VGG model having many parameters. But the accuracy drops a lot for the small model. In order to improve the tradeoff between accuracy and latency, many quantized training approaches are proposed.

 This document is to design a quantized training framework on Fluid. The first part will introduce how to quantize, The second part will describe the quantized training framework. The last part will illustrate how to calculate the quantization scale.
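Since the document's last part is about computing a quantization scale, here is a generic illustrative sketch of symmetric fixed-point quantization with a max-abs scale (a common scheme, shown as an example rather than Fluid's actual implementation):

```python
import numpy as np

def quantize_max_abs(x: np.ndarray, num_bits: int = 8):
    """Quantize a float array to signed num_bits integers using a max-abs scale."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for 8-bit
    scale = np.abs(x).max() / qmax               # largest magnitude maps to qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.8], dtype=np.float32)
q, scale = quantize_max_abs(w)
print(q, dequantize(q, scale))  # integer codes and their float reconstruction
```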

docs/dev_guides/custom_device_docs/custom_kernel_docs/context_api_en.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Context APIs

 ## CustomContext
-`CustomContext` is the acutal parameter of the template parameter Context of the custom kernel function. For details, please refer to [custom_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/backends/custom/custom_context.h).
+`CustomContext` is the actual parameter of the template parameter Context of the custom kernel function. For details, please refer to [custom_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/backends/custom/custom_context.h).

 ```c++
 // Constructor

docs/guides/model_convert/convert_from_pytorch/cv_quick_start_cn.md

Lines changed: 3 additions & 3 deletions
@@ -881,10 +881,10 @@ def evaluate(image, labels, model, acc, tag, reprod_logger):
     model.eval()
     output = model(image)

-    accracy = acc(output, labels, topk=(1, 5))
+    accuracy = acc(output, labels, topk=(1, 5))

-    reprod_logger.add("acc_top1", np.array(accracy[0]))
-    reprod_logger.add("acc_top5", np.array(accracy[1]))
+    reprod_logger.add("acc_top1", np.array(accuracy[0]))
+    reprod_logger.add("acc_top5", np.array(accuracy[1]))

     reprod_logger.save("./result/metric_{}.npy".format(tag))
 ```
