
Commit 1cb4e2d

Merge branch 'develop' into develop

2 parents 59d8638 + 9bc0ec8

16 files changed: +19 −29 lines

_typos.toml

Lines changed: 0 additions & 10 deletions

```diff
@@ -29,7 +29,6 @@ feeded = "feeded"
 
 # These words need to be fixed
 Learing = "Learing"
-Moible = "Moible"
 Operaton = "Operaton"
 Optimizaing = "Optimizaing"
 Optimzier = "Optimzier"
@@ -49,17 +48,8 @@ interchangable = "interchangable"
 intializers = "intializers"
 intput = "intput"
 libary = "libary"
-mantained = "mantained"
 matrics = "matrics"
-mdule = "mdule"
-mechnism = "mechnism"
-memeory = "memeory"
-memroy = "memroy"
-messege = "messege"
-metaphore = "metaphore"
 metrices = "metrices"
-muliply = "muliply"
-mulitplying = "mulitplying"
 mutbale = "mutbale"
 occurence = "occurence"
 opeartor = "opeartor"
```

docs/design/concurrent/go_op.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -218,7 +218,7 @@ for more details.
 
 #### Green Threads
 
-Golang utilizes `green threads`, which is a mechnism for the runtime library to
+Golang utilizes `green threads`, which is a mechanism for the runtime library to
 manage multiple threads (instead of natively by the OS). Green threads usually
 allows for faster thread creation and switching, as there is less overhead
 when spawning these threads. For the first version of CSP, we only support
```

docs/design/memory/README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -116,7 +116,7 @@ I got inspiration from Majel and Caffe2, though above design look different from
 
 ### Caffe2
 
-In Caffe2, `Tensor<Context>::mutable_data()` allocates the memroy. In particular, [`Tensor<Context>::mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L523) calls [`Tensor<Context>::raw_mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L459), which in turn calls [`Context::New`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L479).
+In Caffe2, `Tensor<Context>::mutable_data()` allocates the memory. In particular, [`Tensor<Context>::mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L523) calls [`Tensor<Context>::raw_mutable_data`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L459), which in turn calls [`Context::New`](https://github.com/caffe2/caffe2/blob/v0.7.0/caffe2/core/tensor.h#L479).
 
 There are two implementations of `Context`:
 
```
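The allocation chain described in the section above (`mutable_data` → `raw_mutable_data` → `Context::New`) can be sketched in Python. This is a minimal illustration of the lazy-allocation idea only; the class and method names are hypothetical stand-ins, not the actual Caffe2 C++ API.

```python
class Context:
    """Stand-in for a Caffe2 device context; new() plays the role of Context::New."""
    def new(self, nbytes):
        return bytearray(nbytes)  # raw memory substitute


class Tensor:
    def __init__(self, context, nbytes):
        self._context = context
        self._nbytes = nbytes
        self._data = None  # nothing is allocated at construction time

    def raw_mutable_data(self):
        # Allocation happens lazily, on first mutable access.
        if self._data is None:
            self._data = self._context.new(self._nbytes)
        return self._data

    def mutable_data(self):
        # mutable_data delegates to raw_mutable_data, mirroring the chain above.
        return self.raw_mutable_data()


t = Tensor(Context(), 16)
assert t._data is None        # no memory yet
buf = t.mutable_data()        # first call triggers the allocation
assert len(buf) == 16
```

The point of the design is that constructing a tensor is cheap; memory is committed only when someone actually asks to write into it.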

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -197,7 +197,7 @@ After op1, we can process variable b and variable c; After op2, we can process v
 
 #### memory sharing policy
 
-A memory pool will be mantained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satisfies the requirement, following policy will be taken to handle input/output variables.
+A memory pool will be maintained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satisfies the requirement, following policy will be taken to handle input/output variables.
 
 ```
 if op.support_inplace():
````
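The memory-pool policy quoted above can be sketched as follows. This is an illustrative toy, not the actual Paddle implementation: blocks released after an operator finishes go into a pool, and a later output tries to reuse a pooled block of sufficient size before allocating fresh memory.

```python
class MemoryPool:
    """Toy memory pool: reuse freed blocks instead of allocating new ones."""

    def __init__(self):
        self._free = []  # list of (size, block_id) for released blocks

    def release(self, size, block_id):
        # Called when a variable's last reader has run.
        self._free.append((size, block_id))

    def acquire(self, size):
        # Reuse the smallest pooled block that is large enough, if any.
        candidates = [b for b in self._free if b[0] >= size]
        if candidates:
            best = min(candidates)
            self._free.remove(best)
            return best[1]
        return None  # caller falls back to a fresh allocation


pool = MemoryPool()
pool.release(1024, "var_b")          # var_b is dead after op1; pool its block
reused = pool.acquire(512)           # a later output fits into var_b's block
assert reused == "var_b"
assert pool.acquire(2048) is None    # nothing large enough -> allocate fresh
```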

docs/design/mkldnn/gru/gru.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -99,12 +99,12 @@ Because oneDNN assumes that all sentences are of equal length, before reorder, w
 ![](images/input_is_reverse.svg)
 
 * PaddlePaddle WeightX -> oneDNN WeightX\
-WeightX does not need custom reorders because memory arrangement is the same for both PP and oneDNN. However, it has to be modified if `origin_mode==false` by mulitplying update gate part by `-1`. At the end, oneDNN reorder is called to convert weights to correct type and strides selected by primitive.
+WeightX does not need custom reorders because memory arrangement is the same for both PP and oneDNN. However, it has to be modified if `origin_mode==false` by multiplying update gate part by `-1`. At the end, oneDNN reorder is called to convert weights to correct type and strides selected by primitive.
 * PaddlePaddle WeightH -> oneDNN WeightH\
 WeightH tensor has different representation in PP and oneDNN. PaddlePaddle stores it as 2 connected blocks of memory, where first contains reset and update gate recurrent weights, and second stores output gate recurrent weights. In oneDNN, these weights are stored in a single memory block of size `[OC, 3, OC]`. Therefore, custom reorder is needed here. After that, if `origin_mode==false`, update gate part is multiplied by `-1`. At the end, oneDNN reorder is called to convert weights to correct type and strides selected by primitive.
 ![](images/different_tensor_memory_arrangement.svg)
 * PaddlePaddle Bias -> oneDNN Bias\
-Bias does not require reorder from PP to oneDNN. However, if it is not provided by user, it has to be created and filled with `0.0f` because oneDNN requires it. If it was provided, it has to be modified when `origin_mode==false` by mulitplying update gate part by `-1`. Note: bias is always of `float` data type, even in `int8` and `bfloat16` kernels.
+Bias does not require reorder from PP to oneDNN. However, if it is not provided by user, it has to be created and filled with `0.0f` because oneDNN requires it. If it was provided, it has to be modified when `origin_mode==false` by multiplying update gate part by `-1`. Note: bias is always of `float` data type, even in `int8` and `bfloat16` kernels.
 * oneDNN TNC/NTC -> PaddlePaddle Output LoD\
 After execution of oneDNN GRU primitive, output tensor has to be converted back to PP representation. It is done in the same way as input reorder but in a reverse manner.
 
```
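The WeightH reorder described in the gru.md hunk above can be sketched with NumPy. This is a shape-level illustration only, not the actual Paddle/oneDNN code: the two PaddlePaddle blocks are packed into one `[OC, 3, OC]` block, and the update gate slice is negated when `origin_mode==false`. The gate ordering within the oneDNN block is an assumption here.

```python
import numpy as np

OC = 4  # hypothetical hidden size

# PaddlePaddle layout: block 1 holds reset+update recurrent weights [OC, 2*OC],
# block 2 holds output gate recurrent weights [OC, OC].
block1 = np.arange(OC * 2 * OC, dtype=np.float32).reshape(OC, 2 * OC)
block2 = np.arange(OC * OC, dtype=np.float32).reshape(OC, OC)

# oneDNN layout: a single [OC, 3, OC] block (gate order assumed for illustration).
weight_h = np.empty((OC, 3, OC), dtype=np.float32)
weight_h[:, 0, :] = block1[:, :OC]   # reset gate recurrent weights
weight_h[:, 1, :] = block1[:, OC:]   # update gate recurrent weights
weight_h[:, 2, :] = block2           # output gate recurrent weights

origin_mode = False
if not origin_mode:
    # As the doc describes: multiply the update gate part by -1.
    weight_h[:, 1, :] *= -1.0
```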

docs/dev_guides/custom_device_docs/memory_api_en.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -180,7 +180,7 @@ It copies synchronous memory in the device.
 
 device - the device to be used
 
-dst - the address of the destination device memroy
+dst - the address of the destination device memory
 
 src - the address of the source device memory
 
```

docs/eval/evaluation_of_docs_system.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -271,7 +271,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标
 - Training Transformer models using Pipeline Parallelism
 - Training Transformer models using Distributed Data Parallel and Pipeline Parallelism
 - Distributed Training with Uneven Inputs Using the Join Context Manager
-- Moible
+- Mobile
 - Image Segmentation DeepLabV3 on iOS
 - Image Segmentation DeepLabV3 on Android
 - Recommendation Systems
```

docs/eval/【Hackathon No.69】PR.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -97,7 +97,7 @@ torch.tensor(data,
 
 在 paddle.to_tensor 中,stop_gradient 表示是否阻断梯度传导,PyTorch 的 requires_grad 表示是否不阻断梯度传导。
 
-在 torch.tensor 中,pin_memeory 表示是否使用锁页内存,而 PaddlePaddle 却无此参数。
+在 torch.tensor 中,pin_memory 表示是否使用锁页内存,而 PaddlePaddle 却无此参数。
 
 ------
 
```

docs/faq/train_cn.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -8,7 +8,7 @@
 
 ##### 问题:请问`paddle.matmul``paddle.multiply`有什么区别?
 
-+ 答复:`matmul`支持的两个 tensor 的矩阵乘操作。`muliply`是支持两个 tensor 进行逐元素相乘。
++ 答复:`matmul`支持的两个 tensor 的矩阵乘操作。`multiply`是支持两个 tensor 进行逐元素相乘。
 
 ----------
 
```
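The FAQ answer in the train_cn.md hunk above contrasts `matmul` (matrix product of two tensors) with `multiply` (elementwise product). The distinction can be illustrated with NumPy, whose `@` and `*` operators have the same semantics; `paddle` itself is deliberately not imported in this sketch.

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

# Matrix multiplication (matmul): each entry is a row-by-column dot product.
mat = a @ b   # [[1*5+2*7, 1*6+2*8], [3*5+4*7, 3*6+4*8]] = [[19, 22], [43, 50]]

# Elementwise multiplication (multiply): entries are multiplied pairwise.
ew = a * b    # [[5, 12], [21, 32]]

print(mat)  # [[19. 22.] [43. 50.]]
print(ew)   # [[ 5. 12.] [21. 32.]]
```

Both results have shape `(2, 2)` here, but only because the operands are square; `matmul` follows matrix-shape rules while `multiply` requires broadcast-compatible shapes.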

docs/guides/model_convert/convert_from_pytorch/api_difference/composite_implement/torch._assert.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -9,8 +9,8 @@ Paddle 无此 API,需要组合实现。
 ### 转写示例
 ```python
 # PyTorch 写法
-torch._assert(condition=1==2, message='error messege')
+torch._assert(condition=1==2, message='error message')
 
 # Paddle 写法
-assert 1==2, 'error messege'
+assert 1==2, 'error message'
 ```
````
