
Commit c43efd3

fix Variable-length
1 parent e43c051 commit c43efd3

2 files changed: +5 -5 lines changed


docs/design/concepts/tensor_array.md

Lines changed: 4 additions & 4 deletions
@@ -212,7 +212,7 @@ class TensorArray:
 ```

 ## DenseTensor-related Supports
-The `RecurrentGradientMachine` in Paddle serves as a flexible RNN layer; it takes variance-length sequences as input, and output sequences too.
+The `RecurrentGradientMachine` in Paddle serves as a flexible RNN layer; it takes variable-length sequences as input, and output sequences too.

 Since each step of RNN can only take a tensor-represented batch of data as input,
 some preprocess should be taken on the inputs such as sorting the sentences by their length in descending order and cut each word and pack to new batches.
@@ -244,10 +244,10 @@ def pack(level, indices_map):
     pass
 ```

-With these two methods, a variance-length sentence supported RNN can be implemented like
+With these two methods, a variable-length sentence supported RNN can be implemented like

 ```c++
-// input is the variant-length data
+// input is the variable-length data
 LodTensor sentence_input(xxx);
 TensorArray ta;
 Tensor indice_map;
@@ -268,4 +268,4 @@ for (int step = 0; step < ta.size(); step++) {
 DenseTensor rnn_output = ta.pack(ta, indice_map);
 ```
 the code above shows that by embedding the DenseTensor-related preprocess operations into `TensorArray`,
-the implementation of a RNN that supports variant-length sentences is far more concise than `RecurrentGradientMachine` because the latter mixes all the codes together, hard to read and extend.
+the implementation of a RNN that supports variable-length sentences is far more concise than `RecurrentGradientMachine` because the latter mixes all the codes together, hard to read and extend.
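For context on the hunks above, the preprocessing that `unpack` is described as embedding (sort the sentences by descending length, then slice out one batch per time step) can be sketched in a few lines of Python. This is a minimal sketch, assuming plain Python lists stand in for DenseTensor batches; `unpack_by_length` and its return values are illustrative names, not Paddle's actual API.

```python
# Minimal sketch of the sort-and-slice preprocessing described above.
# Plain lists stand in for tensors; all names here are illustrative.

def unpack_by_length(sequences):
    """Sort sequences by descending length, then slice them into one
    batch per time step, keeping an index map to restore the order."""
    # indices_map[i] is the original position of the i-th sorted
    # sequence, so a later pack() step can undo the sorting.
    indices_map = sorted(range(len(sequences)),
                         key=lambda i: len(sequences[i]), reverse=True)
    sorted_seqs = [sequences[i] for i in indices_map]
    max_len = len(sorted_seqs[0]) if sorted_seqs else 0
    # step_batches[t] holds the t-th word of every sequence that is
    # long enough; sorting makes each batch a dense prefix of the last.
    step_batches = [[seq[t] for seq in sorted_seqs if len(seq) > t]
                    for t in range(max_len)]
    return step_batches, indices_map


# Three sentences of lengths 3, 1, and 2.
batches, indices_map = unpack_by_length([[1, 2, 3], [4], [5, 6]])
assert batches == [[1, 5, 4], [2, 6], [3]]
assert indices_map == [0, 2, 1]
```

The inverse `pack` step would walk the per-step batches and `indices_map` to rebuild the sequences in their original order, which is the restoration the C++ example in the diff performs with `ta.pack(ta, indice_map)`.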

docs/design/dynamic_rnn/rnn_design_en.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Variant Length supported RNN Design
+# Variable Length supported RNN Design
 For the learning of variable length sequences, the existing mainstream frameworks such as tensorflow, pytorch, caffe2, mxnet and so on all use padding.

 Different-length sequences in a mini-batch will be padded with zeros and transformed to same length.
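The zero-padding approach described in this hunk can be sketched as follows; a minimal Python sketch, with `pad_batch` as a hypothetical helper rather than any framework's actual API.

```python
# Minimal sketch of zero-padding a mini-batch; illustrative only.

def pad_batch(sequences, pad_value=0):
    """Pad different-length sequences to the length of the longest."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_value] * (max_len - len(seq)) for seq in sequences]


# [[1, 2, 3], [4], [5, 6]] becomes three rows of equal length 3.
assert pad_batch([[1, 2, 3], [4], [5, 6]]) == [[1, 2, 3], [4, 0, 0], [5, 6, 0]]
```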
