Commit 9bc0ec8

dune0310421, ooooo-create, and Copilot authored

[CodeStyle][Typos][M-[1-2],M-[4-7],M-[9-11]] Fix typo(Moible, …) (#7627)

* fix typo: 'Moible', 'mantained', 'mdule', 'mechnism', 'memeory', 'memroy', 'messege', 'metaphore', 'muliply', 'mulitplying'

* Update docs/guides/model_convert/convert_from_pytorch/api_difference/torch_more_args/torch.scalar_tensor.md

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: ooo oo <[email protected]>
Co-authored-by: Copilot <[email protected]>
1 parent d17db04 commit 9bc0ec8

File tree

17 files changed: +20 -30 lines changed


_typos.toml

Lines changed: 0 additions & 10 deletions

@@ -29,7 +29,6 @@ feeded = "feeded"
 
 # These words need to be fixed
 Learing = "Learing"
-Moible = "Moible"
 Operaton = "Operaton"
 Optimizaing = "Optimizaing"
 Optimzier = "Optimzier"
@@ -44,17 +43,8 @@ interchangable = "interchangable"
 intializers = "intializers"
 intput = "intput"
 libary = "libary"
-mantained = "mantained"
 matrics = "matrics"
-mdule = "mdule"
-mechnism = "mechnism"
-memeory = "memeory"
-memroy = "memroy"
-messege = "messege"
-metaphore = "metaphore"
 metrices = "metrices"
-muliply = "muliply"
-mulitplying = "mulitplying"
 mutbale = "mutbale"
 occurence = "occurence"
 opeartor = "opeartor"

docs/design/concurrent/go_op.md

Lines changed: 1 addition & 1 deletion

@@ -218,7 +218,7 @@ for more details.
 
 #### Green Threads
 
-Golang utilizes `green threads`, which is a mechnism for the runtime library to
+Golang utilizes `green threads`, which is a mechanism for the runtime library to
 manage multiple threads (instead of natively by the OS). Green threads usually
 allows for faster thread creation and switching, as there is less overhead
 when spawning these threads. For the first version of CSP, we only support

docs/design/memory/README.md

Lines changed: 1 addition & 1 deletion

@@ -116,7 +116,7 @@ I got inspiration from Majel and Caffe2, though above design look different from
 
 ### Caffe2
 
-In Caffe2, `Tensor<Context>::mutable_data()` allocates the memroy. In particular, [`Tensor<Context>::mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L523) calls [`Tensor<Context>::raw_mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L459), which in turn calls [`Context::New`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L479).
+In Caffe2, `Tensor<Context>::mutable_data()` allocates the memory. In particular, [`Tensor<Context>::mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L523) calls [`Tensor<Context>::raw_mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L459), which in turn calls [`Context::New`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L479).
 
 There are two implementations of `Context`:

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion

@@ -197,7 +197,7 @@ After op1, we can process variable b and variable c; After op2, we can process v
 
 #### memory sharing policy
 
-A memory pool will be mantained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satisfies the requirement, following policy will be taken to handle input/output variables.
+A memory pool will be maintained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satisfies the requirement, following policy will be taken to handle input/output variables.
 
 ```
 if op.support_inplace():
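The policy quoted in this hunk is only pseudocode in the original doc. As a rough, self-contained Python sketch of such a buffer-reuse pass (the `Op`/`Var` classes and the `support_inplace` flag below are hypothetical stand-ins for illustration, not Paddle's actual pass API):

```python
from dataclasses import dataclass


@dataclass
class Var:
    name: str
    buffer: str = ""          # physical buffer backing this variable


@dataclass
class Op:
    inputs: list
    outputs: list
    support_inplace: bool = False


def share_memory(ops):
    """Scan ops in execution order; let later outputs reuse buffers of dead variables."""
    last_use = {}                                  # var name -> index of the op that reads it last
    for i, op in enumerate(ops):
        for v in op.inputs:
            last_use[v.name] = i

    pool = []                                      # buffers free for reuse (the "memory pool")
    for i, op in enumerate(ops):
        for out in op.outputs:
            # reuse a pooled buffer when the op allows in-place execution, else "allocate" a new one
            out.buffer = pool.pop() if (op.support_inplace and pool) else f"buf_{out.name}"
        for v in op.inputs:
            if last_use.get(v.name) == i:          # the variable dies here; recycle its buffer
                pool.append(v.buffer)
    return ops
```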

docs/design/mkldnn/gru/gru.md

Lines changed: 2 additions & 2 deletions

@@ -99,12 +99,12 @@ Because oneDNN assumes that all sentences are of equal length, before reorder, w
 ![](images/input_is_reverse.svg)
 
 * PaddlePaddle WeightX -> oneDNN WeightX\
-WeightX does not need custom reorders because memory arrangement is the same for both PP and oneDNN. However, it has to be modified if `origin_mode==false` by mulitplying update gate part by `-1`. At the end, oneDNN reorder is called to convert weights to correct type and strides selected by primitive.
+WeightX does not need custom reorders because memory arrangement is the same for both PP and oneDNN. However, it has to be modified if `origin_mode==false` by multiplying update gate part by `-1`. At the end, oneDNN reorder is called to convert weights to correct type and strides selected by primitive.
 * PaddlePaddle WeightH -> oneDNN WeightH\
 WeightH tensor has different representation in PP and oneDNN. PaddlePaddle stores it as 2 connected blocks of memory, where first contains reset and update gate recurrent weights, and second stores output gate recurrent weights. In oneDNN, these weights are stored in a single memory block of size `[OC, 3, OC]`. Therefore, custom reorder is needed here. After that, if `origin_mode==false`, update gate part is multiplied by `-1`. At the end, oneDNN reorder is called to convert weights to correct type and strides selected by primitive.
 ![](images/different_tensor_memory_arrangement.svg)
 * PaddlePaddle Bias -> oneDNN Bias\
-Bias does not require reorder from PP to oneDNN. However, if it is not provided by user, it has to be created and filled with `0.0f` because oneDNN requires it. If it was provided, it has to be modified when `origin_mode==false` by mulitplying update gate part by `-1`. Note: bias is always of `float` data type, even in `int8` and `bfloat16` kernels.
+Bias does not require reorder from PP to oneDNN. However, if it is not provided by user, it has to be created and filled with `0.0f` because oneDNN requires it. If it was provided, it has to be modified when `origin_mode==false` by multiplying update gate part by `-1`. Note: bias is always of `float` data type, even in `int8` and `bfloat16` kernels.
 * oneDNN TNC/NTC -> PaddlePaddle Output LoD\
 After execution of oneDNN GRU primitive, output tensor has to be converted back to PP representation. It is done in the same way as input reorder but in a reverse manner.
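Both lines changed in this hunk describe the same adjustment: when `origin_mode==false`, the update-gate slice of a weight (or bias) tensor is negated before the oneDNN reorder. A rough NumPy sketch of that step follows; the `[IC, 3, OC]` layout and the assumption that gate slot 0 holds the update gate are illustrative only, not the kernel's actual layout.

```python
import numpy as np

UPDATE_GATE = 0   # which of the three gate slots is the update gate -- an assumption for illustration


def adjust_for_origin_mode(weight: np.ndarray, origin_mode: bool) -> np.ndarray:
    """Negate the update-gate slice of a [IC, 3, OC] weight tensor when origin_mode is False."""
    if origin_mode:
        return weight
    weight = weight.copy()
    weight[:, UPDATE_GATE, :] *= -1.0
    return weight


w = np.random.rand(8, 3, 16).astype(np.float32)   # [IC, 3, OC]
w_adjusted = adjust_for_origin_mode(w, origin_mode=False)
```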

docs/design/others/graph.md

Lines changed: 1 addition & 1 deletion

@@ -56,7 +56,7 @@ For each parameter, like W and b created by `layer.fc`, marked as double circles
 
 ## Block and Graph
 
-The word block and graph are interchangable in the design of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphore of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.
+The word block and graph are interchangable in the design of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphor of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.
 
 A Block keeps operators in an array `BlockDesc::ops`

docs/dev_guides/custom_device_docs/memory_api_en.md

Lines changed: 1 addition & 1 deletion

@@ -180,7 +180,7 @@ It copies synchronous memory in the device.
 
 device - the device to be used
 
-dst - the address of the destination device memroy
+dst - the address of the destination device memory
 
 src - the address of the source device memory

docs/eval/evaluation_of_docs_system.md

Lines changed: 1 addition & 1 deletion

@@ -271,7 +271,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标
 - Training Transformer models using Pipeline Parallelism
 - Training Transformer models using Distributed Data Parallel and Pipeline Parallelism
 - Distributed Training with Uneven Inputs Using the Join Context Manager
-- Moible
+- Mobile
 - Image Segmentation DeepLabV3 on iOS
 - Image Segmentation DeepLabV3 on Android
 - Recommendation Systems

docs/eval/【Hackathon No.69】PR.md

Lines changed: 1 addition & 1 deletion

@@ -97,7 +97,7 @@ torch.tensor(data,
 
 在 paddle.to_tensor 中,stop_gradient 表示是否阻断梯度传导,PyTorch 的 requires_grad 表示是否不阻断梯度传导。
 
-在 torch.tensor 中,pin_memeory 表示是否使用锁页内存,而 PaddlePaddle 却无此参数。
+在 torch.tensor 中,pin_memory 表示是否使用锁页内存,而 PaddlePaddle 却无此参数。
 
 ------
 
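The corrected line above says, in Chinese, that `pin_memory` in `torch.tensor` controls whether page-locked (pinned) host memory is used, and that `paddle.to_tensor` has no such parameter. As a hedged illustration of the difference, the sketch below contrasts the two calls; passing `paddle.CUDAPinnedPlace()` as `place` is shown only as a related option, not a one-to-one replacement, and both calls assume a CUDA-enabled build.

```python
import torch
import paddle

data = [[1.0, 2.0], [3.0, 4.0]]

# PyTorch: request page-locked (pinned) host memory directly.
t = torch.tensor(data, pin_memory=True)

# PaddlePaddle: paddle.to_tensor has no pin_memory argument; a pinned-memory
# place can instead be passed explicitly (a related option, not an exact equivalent).
p = paddle.to_tensor(data, place=paddle.CUDAPinnedPlace())
```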
docs/faq/train_cn.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 
 ##### 问题:请问`paddle.matmul``paddle.multiply`有什么区别?
 
-+ 答复:`matmul`支持的两个 tensor 的矩阵乘操作。`muliply`是支持两个 tensor 进行逐元素相乘。
++ 答复:`matmul`支持的两个 tensor 的矩阵乘操作。`multiply`是支持两个 tensor 进行逐元素相乘。
 
 ----------
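The FAQ answer fixed above says, roughly, that `matmul` performs matrix multiplication of two tensors while `multiply` performs element-wise multiplication. A small Paddle example of the difference:

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])
y = paddle.to_tensor([[5.0, 6.0], [7.0, 8.0]])

# Matrix multiplication: rows of x dotted with columns of y.
print(paddle.matmul(x, y))     # [[19., 22.], [43., 50.]]

# Element-wise multiplication of same-shaped (or broadcastable) tensors.
print(paddle.multiply(x, y))   # [[ 5., 12.], [21., 32.]]
```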

0 commit comments