
Commit bfde2fc

[CodeStyle][Typos][D-[11-15],P-[11,12]] Fix typo("desgin","desigin","desginated","determinated","diffcult","porcess","processer") (#7614)
* fix-c19-c23
* fix-c6-c7-c24-c26
* fix-d6-d10
* fix-some-qe
* test-commit
* fix-d11-d15-p11-12
* debug
* fix typos: processer and porcess

Co-authored-by: ooooo <[email protected]>
1 parent 51a16a6 commit bfde2fc

13 files changed: +32 −39 lines changed

_typos.toml

Lines changed: 0 additions & 7 deletions

```diff
@@ -35,11 +35,6 @@ Optimzier = "Optimzier"
 Setment = "Setment"
 Simle = "Simle"
 Sovler = "Sovler"
-desgin = "desgin"
-desginated = "desginated"
-desigin = "desigin"
-determinated = "determinated"
-diffcult = "diffcult"
 dimention = "dimention"
 dimentions = "dimentions"
 dirrectories = "dirrectories"
@@ -90,8 +85,6 @@ outpu = "outpu"
 outpus = "outpus"
 overrided = "overrided"
 overwrited = "overwrited"
-porcess = "porcess"
-processer = "processer"
 samle = "samle"
 schedual = "schedual"
 secenarios = "secenarios"
```

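For context, `_typos.toml` configures the `typos` spell checker; an entry that maps a flagged spelling to itself whitelists that spelling so the checker stops reporting it. The hunk above removes the whitelist entries for the misspellings this commit fixes. A minimal sketch of the pattern (the `[default.extend-words]` section name is an assumption about this repository's layout):

```toml
# Hypothetical excerpt: each key maps a flagged spelling to the text the
# checker should accept for it. Mapping a typo to itself suppresses the
# warning; deleting the entry re-enables it once the typo is fixed.
[default.extend-words]
dimention = "dimention"   # still whitelisted: occurrences not yet fixed
```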
docs/design/data_type/float16.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -130,7 +130,7 @@ fp16_tensor.set(tensor.astype(numpy.float16).view(numpy.uint16), GPUPlace)
 ```

 ### Consistent API requirement
-The basic inference in float16 mode requires users to feed input and obtain output both of float16 data type. However, in this way, the inference APIs are not consistent between float16 mode and float mode, and users may find it confusing and diffcult to use float16 inference since they need to do extra steps to provide float16 input data and convert float16 output data back to float. To have consistent API for different inference modes, we need to transpile the program desc in some way so that we can run float16 inference by feeding and fetching variables of float data type.
+The basic inference in float16 mode requires users to feed input and obtain output both of float16 data type. However, in this way, the inference APIs are not consistent between float16 mode and float mode, and users may find it confusing and difficult to use float16 inference since they need to do extra steps to provide float16 input data and convert float16 output data back to float. To have consistent API for different inference modes, we need to transpile the program desc in some way so that we can run float16 inference by feeding and fetching variables of float data type.

 This problem can be solved by introducing a type-casting operator which takes an input variable of certain data type, cast it to another specified data type, and put the casted data into the output variable. Insert cast operator where needed can make a program internally run in float16 mode.
````

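The transpiling idea described in this hunk can be mimicked outside the framework. Below is a minimal numpy sketch (an illustration, not Paddle's actual transpiler): cast operators inserted at the graph boundary let the caller keep a float32 API while the core computation runs in float16.

```python
import numpy as np

def float16_inference(x_fp32, w_fp16):
    # "cast" op inserted at the input boundary: float32 -> float16
    x_fp16 = x_fp32.astype(np.float16)
    # the core computation runs entirely in float16
    y_fp16 = x_fp16 @ w_fp16
    # "cast" op inserted at the output boundary: float16 -> float32
    return y_fp16.astype(np.float32)

x = np.ones((2, 3), dtype=np.float32)
w = np.ones((3, 4), dtype=np.float16)
y = float16_inference(x, w)  # caller feeds and fetches float32 only
```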
docs/design/others/graph.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -56,7 +56,7 @@ For each parameter, like W and b created by `layer.fc`, marked as double circles

 ## Block and Graph

-The word block and graph are interchangable in the desgin of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphore of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.
+The word block and graph are interchangable in the design of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphore of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.

 A Block keeps operators in an array `BlockDesc::ops`
```

docs/design/quantization/fixed_point_quantization.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -79,7 +79,7 @@ From these formulas, dequantization also can be moved before GEMM, do dequantiza
 Figure 2. Equivalent forward in training with simulated quantization.
 </p>

-We use this equivalent workflow in the training. In our desigin, there is a quantization transpiler to insert the quantization operator and the de-quantization operator in the Fluid `ProgramDesc`. Since the outputs of quantization and de-quantization operator are still in floating point, they are called faked quantization and de-quantization operator. And the training framework is called simulated quantization.
+We use this equivalent workflow in the training. In our design, there is a quantization transpiler to insert the quantization operator and the de-quantization operator in the Fluid `ProgramDesc`. Since the outputs of quantization and de-quantization operator are still in floating point, they are called faked quantization and de-quantization operator. And the training framework is called simulated quantization.

 #### Backward pass
```

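The "faked quantization" described in this hunk can be sketched in a few lines of numpy (a generic illustration of the technique, not Paddle's actual operator): values are rounded onto a low-bit integer grid and immediately mapped back, so the tensor stays in floating point but carries the quantization error.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    # Symmetric fake quantization: quantize to a 2^(num_bits-1)-1 integer
    # grid, then dequantize right away so the output is still float.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale  # float values restricted to the quantized grid

x = np.array([0.05, -0.6, 1.0], dtype=np.float32)
y = fake_quantize(x)
```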
docs/dev_guides/custom_device_docs/custom_runtime_en.rst

Lines changed: 1 addition & 1 deletion

```diff
@@ -29,7 +29,7 @@ Device APIs
 +---------------------------+----------------------------------------------------------+----------+
 | get_device                | To get the current device                                | Y        |
 +---------------------------+----------------------------------------------------------+----------+
-| synchronize_device        | To synchronize the desginated device                     | Y        |
+| synchronize_device        | To synchronize the designated device                     | Y        |
 +---------------------------+----------------------------------------------------------+----------+
 | get_device_count          | To count available devices                               | Y        |
 +---------------------------+----------------------------------------------------------+----------+
```

docs/guides/model_convert/convert_from_pytorch/deprecated/apply_references_deprecated.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -265,7 +265,7 @@ def record_api(api):
     return [line]


-def reference_mapping_item_processer(line, line_idx, state, output, context):
+def reference_mapping_item_processor(line, line_idx, state, output, context):
     if not line.startswith("|"):
         output.append(line)
         return True
@@ -365,7 +365,7 @@ def get_c2a_dict(conditions, meta_dict):

     # 第二遍正式读,读并处理
     ret_code = reference_mapping_item(
-        mapping_index_file, reference_mapping_item_processer, reference_context
+        mapping_index_file, reference_mapping_item_processor, reference_context
     )

     # 检查是否重复出现
```

docs/guides/model_convert/convert_from_pytorch/deprecated/validate_mapping_files_deprecated.py

Lines changed: 8 additions & 8 deletions

```diff
@@ -507,14 +507,14 @@ def get_meta_from_diff_file(
     return meta_data


-def process_mapping_index(index_path, item_processer, context={}):
+def process_mapping_index(index_path, item_processor, context={}):
     """
     线性处理 `pytorch_api_mapping_cn.md` 文件
     - index_path: 该 md 文件路径
-    - item_processer: 对文件每行的处理方式,输入参数 (line, line_idx, state, output, context)。
+    - item_processor: 对文件每行的处理方式,输入参数 (line, line_idx, state, output, context)。
       如果处理出错则返回 False,否则返回 True。
     - context: 用于存储处理过程中的上下文信息
-    - output: 使用 context["output"] 初始化,如果不调用 item_processer,直接加入原文件对应行,否则 item_processer 处理 output 逻辑。
+    - output: 使用 context["output"] 初始化,如果不调用 item_processor,直接加入原文件对应行,否则 item_processor 处理 output 逻辑。
     - 返回值:是否成功处理,成功返回 0。
     """
     if not os.path.exists(index_path):
@@ -558,7 +558,7 @@ def process_mapping_index(index_path, item_processer, context={}):
             column_names.extend([c.strip() for c in columns])
             column_count = len(column_names)

-            if not item_processer(line, i, state, output, context):
+            if not item_processor(line, i, state, output, context):
                 break

             if column_names == expect_column_names:
@@ -577,7 +577,7 @@ def process_mapping_index(index_path, item_processer, context={}):
                 raise Exception(
                     f"Table seperator not match at line {i + 1}: {line}"
                 )
-            if not item_processer(line, i, state, output, context):
+            if not item_processor(line, i, state, output, context):
                 break
             state = IndexParserState.table_row_ignore
         elif state == IndexParserState.table_sep:
@@ -588,16 +588,16 @@ def process_mapping_index(index_path, item_processer, context={}):
                 raise Exception(
                     f"Table seperator not match at line {i + 1}: {line}"
                 )
-            if not item_processer(line, i, state, output, context):
+            if not item_processor(line, i, state, output, context):
                 break
             state = IndexParserState.table_row
         elif state == IndexParserState.table_row_ignore:
-            if not item_processer(line, i, state, output, context):
+            if not item_processor(line, i, state, output, context):
                 break
         elif state == IndexParserState.table_row:
             try:
                 context["columns"] = columns
-                if not item_processer(line, i, state, output, context):
+                if not item_processor(line, i, state, output, context):
                     break
                 context["table_row_idx"] += 1
             except Exception as e:
```

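The docstring in the hunks above spells out the `item_processor` contract: it is called once per line with `(line, line_idx, state, output, context)` and returns `True` to continue the scan or `False` to abort. A minimal sketch of a conforming callback (the name `passthrough_processor` is made up for illustration):

```python
def passthrough_processor(line, line_idx, state, output, context):
    # Smallest useful item_processor: copy every line to `output`
    # unchanged, and keep the scan going by returning True.
    output.append(line)
    return True

out = []
ok = passthrough_processor("| torch.abs | paddle.abs |", 0, None, out, {})
```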
docs/guides/model_convert/convert_from_pytorch/nlp_fast_explore_cn.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -280,29 +280,29 @@ PyTorch 模块通常继承`torch.nn.Module`,飞桨模块通常继承`paddle.nn


 <p align="center">
-<img src="https://raw.githubusercontent.com/ymyjl/docs/torch_migrate/docs/guides/model_convert/pictures/embedding.png" align="middle" width="500" />
+<img src="../pictures/embedding.png" align="middle" width="500" />
 </p>


 - [EncoderLayer](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/nn/layer/transformer.py#:~:text=class%20TransformerEncoderLayer):继承自 `torch.nn.Layer`,是 Bert 网络中基本模块,由 MultiHeadAttention、FeedForward 组成。后者由 LayerNorm,Dropout,Linear 层和激活函数构成。

 <p align="center">
-<img src="https://raw.githubusercontent.com/ymyjl/docs/torch_migrate/docs/guides/model_convert/pictures/encoder.png" align="middle" width="500" />
+<img src="../pictures/encoder.png" align="middle" width="500" />
 </p>


 - SelfAttention 层的 K,Q,V 矩阵用于计算单词之间的相关性分数,他们由 Linear 层组成。


 <p align="center">
-<img src="https://raw.githubusercontent.com/ymyjl/docs/torch_migrate/docs/guides/model_convert/pictures/kqv.png" align="middle" width="500" />
+<img src="../pictures/kqv.png" align="middle" width="500" />
 </p>


 - [MultiHeadAttention](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/nn/layer/transformer.py#:~:text=class%20MultiHeadAttention):由 SelfAttention 层和 Softmax 函数构成。

 <p align="center">
-<img src="https://raw.githubusercontent.com/ymyjl/docs/torch_migrate/docs/guides/model_convert/pictures/malti-head.png" align="middle" width="500" />
+<img src="../pictures/malti-head.png" align="middle" width="500" />
 </p>

```

docs/guides/model_convert/convert_from_pytorch/nlp_migration_experiences_cn.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -16,7 +16,7 @@


 <p align="center">
-<img src="https://raw.githubusercontent.com/ymyjl/docs/torch_migrate/docs/guides/model_convert/pictures/porcess.png" align="middle" width="500" />
+<img src="../pictures/process.png" align="middle" width="500" />
 </p>


@@ -1137,7 +1137,7 @@ paddle.where(b, paddle.zeros(c.shape), c)


 <p align="center">
-<img src="https://raw.githubusercontent.com/ymyjl/docs/torch_migrate/docs/guides/model_convert/pictures/information.png" align="middle" width="500" />
+<img src="../pictures/information.png" align="middle" width="500" />
 </p>

 **【可能原因】**
```

docs/guides/model_convert/convert_from_pytorch/tools/README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -119,13 +119,13 @@ API 别名表的生成逻辑与单个 API 项映射类似,实现于 `apply_ref

 生成工具读取时,当遇到符合预期的表格表头即进入准备读取的状态,随后跳过表格的分隔线,开始对预处理命令的读取状态,直到所在行不是预处理命令时回到普通状态。

-由于该读取逻辑可复用,因此将这部分逻辑实现在验证工具的 `process_mapping_index` 方法,通过传入 `item_processer` 回调和 `context` 上下文来控制行为,使用 `IndexParserState` 状态集来控制读取状态。
+由于该读取逻辑可复用,因此将这部分逻辑实现在验证工具的 `process_mapping_index` 方法,通过传入 `item_processor` 回调和 `context` 上下文来控制行为,使用 `IndexParserState` 状态集来控制读取状态。

 两次读取中,第一次读取用于分析表格匹配条件,第二次读取进行实际的预处理命令替换。

 第一次读取时使用 `reference_table_scanner` 方法作为回调,收集所有的 API 表引用项,记录其参数作为 API 分类的条件。随后在生成工具的 `get_c2a_dict` 方法中对所有条件按照优先 `prefix` 长度降序,次优 `max_depth` 升序的顺序进行排序,并对所有映射文件元数据按照条件进行匹配。

-第二次读取时使用 `reference_mapping_item_processer` 方法作为回调,对于所有需要处理的表格行进行转换,将转换结果写回 `context``output` 项中。
+第二次读取时使用 `reference_mapping_item_processor` 方法作为回调,对于所有需要处理的表格行进行转换,将转换结果写回 `context``output` 项中。

 完成读取后,检查是否有 API 重复出现,如果重复出现则输出重复出现的 API 名称和所在行,不写回源文件并进行 CI 报错。
```

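The README hunk above describes `get_c2a_dict` sorting the matching conditions by `prefix` length in descending order first, then by `max_depth` in ascending order. The rule can be sketched as follows; the dictionary shape of each condition is a hypothetical simplification:

```python
# Hypothetical condition records; only the two sort keys matter here.
conditions = [
    {"prefix": "torch.", "max_depth": 2},
    {"prefix": "torch.nn.", "max_depth": 2},
    {"prefix": "torch.nn.", "max_depth": 1},
]

# Longer (more specific) prefixes first, then smaller max_depth.
conditions.sort(key=lambda c: (-len(c["prefix"]), c["max_depth"]))
```

After sorting, the most specific prefix with the smallest depth is tried first, which matches the "priority by prefix length, then max_depth" ordering the README states.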