Commit 3ca827a
resolve merge conflict in _typos.toml
2 parents 7bc7049 + bff0e2b commit 3ca827a

27 files changed (+44, -67 lines)

.github/CODEOWNERS
Lines changed: 1 addition & 1 deletion

@@ -1,2 +1,2 @@
 # Paddle API Docs
-docs/api/paddle @jzhang533 @sunzhongkai588 @mattheliu @Echo-Nie
+docs/api/paddle @jzhang533 @sunzhongkai588 @mattheliu @Echo-Nie @ooooo-create

.gitignore
Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@
 # virtualenv
 venv/
 ENV/
+.venv/

 # Compiled Python files
 __pycache__/

_typos.toml
Lines changed: 0 additions & 24 deletions

@@ -27,21 +27,13 @@ Archetecture = "Archetecture"
 Asynchoronous = "Asynchoronous"
 Attrbute = "Attrbute"
 Attribtue = "Attribtue"
-Bounary = "Bounary"
 Classfication = "Classfication"
 Comparision = "Comparision"
 Contructing = "Contructing"
 Creenshot = "Creenshot"
-DELCARE = "DELCARE"
-Dateset = "Dateset"
-Discription = "Discription"
-Distrbuted = "Distrbuted"
-Driect = "Driect"
 Embeddding = "Embeddding"
 Embeding = "Embeding"
 Engish = "Engish"
-Fasle = "Fasle"
-Flase = "Flase"
 Generater = "Generater"
 Gloabal = "Gloabal"
 Imporvement = "Imporvement"

@@ -76,20 +68,10 @@ Traning = "Traning"
 Transfomed = "Transfomed"
 Tthe = "Tthe"
 Ture = "Ture"
-Varialble = "Varialble"
-Varible = "Varible"
-Varient = "Varient"
 Wether = "Wether"
 accordding = "accordding"
 accoustic = "accoustic"
 accpetance = "accpetance"
-baisc = "baisc"
-basci = "basci"
-beacuse = "beacuse"
-bechmark = "bechmark"
-benckmark = "benckmark"
-boradcast = "boradcast"
-brodcast = "brodcast"
 caculate = "caculate"
 cantains = "cantains"
 choosen = "choosen"

@@ -253,12 +235,6 @@ transfered = "transfered"
 trasformed = "trasformed"
 treshold = "treshold"
 trian = "trian"
-varialbes = "varialbes"
-varibale = "varibale"
-varibales = "varibales"
-varience = "varience"
-varient = "varient"
-visting = "visting"
 warpped = "warpped"
 wether = "wether"
 wiht = "wiht"
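For context on what these deletions mean: `_typos.toml` is the configuration file of the `typos` spell-checker, and in its `extend-words` table a word mapped to itself is accepted as valid, which is how known-but-not-yet-fixed typos get whitelisted. A minimal sketch of that mechanism follows; the exact table layout of this repo's file is an assumption based on the typos tool's conventions:

```toml
[default.extend-words]
# Mapping a typo to itself tells `typos` to accept it as-is; deleting the
# line (as this commit does for many entries) re-enables detection of the
# typo, so it must actually be fixed in the docs.
Bounary = "Bounary"

# By contrast, mapping a typo to a different word makes `typos` flag the
# typo and suggest that correction:
# teh = "the"
```

So removing `Varient = "Varient"`, `boradcast = "boradcast"`, etc. above goes hand in hand with the doc fixes in the rest of this commit.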

docs/api/paddle/incubate/autograd/Overview_cn.rst
Lines changed: 1 addition & 1 deletion

@@ -225,7 +225,7 @@ _________________________
 - Input data does not support variable-shape notation such as [None, 1] or [-1, 1]. If the shape of the training data varies, one feasible workaround is to create a separate network per data shape, i.e. fix the shape at network-construction time; see the code in Appendix 1 for details.
 - Full verification and support on the Windows platform has not been done yet.
 - Currently only default_main_program and default_startup_program are supported.
-- boradcast semantics are not yet fully supported.
+- broadcast semantics are not yet fully supported.


 .. _autograd_design_details:

docs/api/paddle/static/IpuStrategy_cn.rst
Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ set_pipelining_config(self, enable_pipelining, batches_per_step, enable_gradient

 - **enable_pipelining** (bool, optional) - Whether to enable data pipelining between subgraphs. enable_pipelining can be set to True only when enable_manual_shard=True. Defaults to False, meaning the feature is disabled.
 - **batches_per_step** (int, optional) - How many batches of data the pipeline computes per step. Defaults to 1, meaning data pipelining is disabled.
-- **enable_gradient_accumulation** (bool, optional) - Whether to enable gradient accumulation; used only in training mode. Defaults to Flase, meaning gradient accumulation is disabled.
+- **enable_gradient_accumulation** (bool, optional) - Whether to enable gradient accumulation; used only in training mode. Defaults to False, meaning gradient accumulation is disabled.
 - **accumulation_factor** (int, optional) - How many batches to accumulate before updating the weights once. Defaults to 1, meaning accumulated weight updates are disabled.

 **Code example**

docs/design/concepts/tensor.md
Lines changed: 1 addition & 1 deletion

@@ -161,7 +161,7 @@ Please reference the section of `Learn from Majel` for more details.

 `ArrayView` is an encapsulation of `Array`, which introduces extra iterator methods, such as `begin()` and `end()`. The `begin()` method returns an iterator pointing to the first element in the ArrayView. And the `end()` method returns an iterator pointing to the pass-the-end element in the ArrayView.

-`ArrayView` make the visting and manipulating an array more efficiently, flexibly and safely.
+`ArrayView` make the visiting and manipulating an array more efficiently, flexibly and safely.


 A global function `make_view` is provided to transform an array to corresponding arrayview.

docs/design/concepts/tensor_array.md
Lines changed: 4 additions & 4 deletions

@@ -212,7 +212,7 @@ class TensorArray:
 ```

 ## DenseTensor-related Supports
-The `RecurrentGradientMachine` in Paddle serves as a flexible RNN layer; it takes varience-length sequences as input, and output sequences too.
+The `RecurrentGradientMachine` in Paddle serves as a flexible RNN layer; it takes variable-length sequences as input, and output sequences too.

 Since each step of RNN can only take a tensor-represented batch of data as input,
 some preprocess should be taken on the inputs such as sorting the sentences by their length in descending order and cut each word and pack to new batches.

@@ -244,10 +244,10 @@ def pack(level, indices_map):
     pass
 ```

-With these two methods, a varience-length sentence supported RNN can be implemented like
+With these two methods, a variable-length sentence supported RNN can be implemented like

 ```c++
-// input is the varient-length data
+// input is the variable-length data
 LodTensor sentence_input(xxx);
 TensorArray ta;
 Tensor indice_map;

@@ -268,4 +268,4 @@ for (int step = 0; step = ta.size(); step++) {
 DenseTensor rnn_output = ta.pack(ta, indice_map);
 ```
 the code above shows that by embedding the DenseTensor-related preprocess operations into `TensorArray`,
-the implementation of a RNN that supports varient-length sentences is far more concise than `RecurrentGradientMachine` because the latter mixes all the codes together, hard to read and extend.
+the implementation of a RNN that supports variable-length sentences is far more concise than `RecurrentGradientMachine` because the latter mixes all the codes together, hard to read and extend.
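The design-doc passage changed above describes sorting sentences by descending length and cutting the t-th word of each into per-step batches. A plain-Python sketch of that unpack/pack idea (illustrative only; the names and signatures are not Paddle's actual `TensorArray` API):

```python
def unpack(sentences):
    """Sort variable-length sentences by descending length, then cut the
    t-th word of every long-enough sentence into a per-step batch: the
    descending-sort-and-cut preprocessing from the design doc."""
    order = sorted(range(len(sentences)),
                   key=lambda i: len(sentences[i]), reverse=True)
    sorted_sents = [sentences[i] for i in order]
    max_len = len(sorted_sents[0]) if sorted_sents else 0
    # Because of the descending sort, the sentences that still have a t-th
    # word always form a prefix of the sorted list, so each step batch
    # simply shrinks as t grows.
    steps = [[s[t] for s in sorted_sents if len(s) > t]
             for t in range(max_len)]
    return steps, order


def pack(steps, order):
    """Inverse of unpack: rebuild the sentences in their original order
    from the per-step batches."""
    rebuilt = [[] for _ in order]
    for batch in steps:
        for pos, word in enumerate(batch):
            rebuilt[pos].append(word)
    out = [None] * len(order)
    for pos, orig_idx in enumerate(order):
        out[orig_idx] = rebuilt[pos]
    return out
```

An RNN can then consume `steps` one batch per timestep without any padding, which is the point the doc makes about `TensorArray` versus `RecurrentGradientMachine`.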

docs/design/dist_train/mpi_enabled_design.md
Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ When we do distribute multi GPU training, the communication overhead between ser

 We will use OpenMPI API to PaddlePaddle, which can bring two benefits to PaddlePaddle:
 1. Enable RDMA with PaddlePaddle, which bring high-performance low latency networks.
-2. Enable GPUDriect with PaddlePaddle, which bring the highest throughput and lowest latency GPU read and write.
+2. Enable GPUDirect with PaddlePaddle, which bring the highest throughput and lowest latency GPU read and write.

 # Change list
 * Compile args: Need add compile args to enable MPI support.

docs/design/dynamic_rnn/rnn_design_en.md
Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-# Varient Length supported RNN Design
+# Variable Length supported RNN Design
 For the learning of variable length sequences, the existing mainstream frameworks such as tensorflow, pytorch, caffe2, mxnet and so on all use padding.

 Different-length sequences in a mini-batch will be padded with zeros and transformed to same length.

docs/design/modules/backward.md
Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ def _append_backward_ops_(target,
 target_block(Block): the block which is going to hold new generated grad ops
 no_grad_dict(dict):
 key(int) block index
-val(set) a set of varibale names. These varibales have no gradient
+val(set) a set of variable names. These variables have no gradient
 grad_to_var(dict)(output argument):
 key(str): grad variable name
 val(str): corresponding forward variable name
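The docstring corrected above describes two bookkeeping dicts. A small sketch of their shapes (variable names here are made up for illustration, not taken from Paddle):

```python
# no_grad_dict: block index -> set of variable names that get no gradient.
no_grad_dict = {
    0: {"image", "label"},
}

# grad_to_var: grad variable name -> corresponding forward variable name.
grad_to_var = {
    "fc_w@GRAD": "fc_w",
    "fc_b@GRAD": "fc_b",
}


def needs_grad(block_idx, var_name):
    """A variable needs a gradient unless it is listed for its block."""
    return var_name not in no_grad_dict.get(block_idx, set())
```

Backward-pass construction would then skip generating grad ops for any output where `needs_grad` is False, and use `grad_to_var` to connect each generated gradient back to its forward variable.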

0 commit comments