`docs/design/concepts/tensor_array.md` — 4 additions & 4 deletions

````diff
@@ -212,7 +212,7 @@ class TensorArray:
 ```

 ## DenseTensor-related Supports
-The `RecurrentGradientMachine` in Paddle serves as a flexible RNN layer; it takes variance-length sequences as input, and output sequences too.
+The `RecurrentGradientMachine` in Paddle serves as a flexible RNN layer; it takes variable-length sequences as input, and output sequences too.

 Since each step of RNN can only take a tensor-represented batch of data as input,
 some preprocess should be taken on the inputs such as sorting the sentences by their length in descending order and cut each word and pack to new batches.
````

and, around line 271 of the same file:

````diff
 the code above shows that by embedding the DenseTensor-related preprocess operations into `TensorArray`,
-the implementation of a RNN that supports variant-length sentences is far more concise than `RecurrentGradientMachine` because the latter mixes all the codes together, hard to read and extend.
+the implementation of a RNN that supports variable-length sentences is far more concise than `RecurrentGradientMachine` because the latter mixes all the codes together, hard to read and extend.
````
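The preprocessing the document describes — sorting sentences by length in descending order, then cutting the word at each position into a new batch for that RNN step — can be sketched as below. This is a minimal illustration, not Paddle's actual `TensorArray` implementation; the function name `pack_step_batches` is hypothetical.

```python
# Hypothetical sketch of the preprocessing described above, NOT Paddle's API:
# sort variable-length sequences longest-first, then build one batch per
# time step from the sequences that still have a token at that position.

def pack_step_batches(sequences):
    """Return per-time-step batches from variable-length sequences."""
    # Sorting longest-first makes every step batch a prefix of the sorted list.
    ordered = sorted(sequences, key=len, reverse=True)
    max_len = len(ordered[0]) if ordered else 0
    batches = []
    for t in range(max_len):
        # Only sequences that still have a word at position t participate.
        batches.append([seq[t] for seq in ordered if len(seq) > t])
    return batches

step_batches = pack_step_batches([["a", "b"], ["x", "y", "z"], ["p"]])
# Step 0 contains a word from every sequence; later step batches shrink
# as shorter sequences run out.
```

Because each step batch is a prefix of the previous one, no padding tokens are ever fed to the RNN step.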
`docs/design/dynamic_rnn/rnn_design_en.md` — 1 addition & 1 deletion

````diff
@@ -1,4 +1,4 @@
-# Variant Length supported RNN Design
+# Variable Length supported RNN Design
 For the learning of variable length sequences, the existing mainstream frameworks such as tensorflow, pytorch, caffe2, mxnet and so on all use padding.

 Different-length sequences in a mini-batch will be padded with zeros and transformed to same length.
````
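The padding approach mentioned in this hunk can be sketched as follows; the `pad_batch` helper is a hypothetical illustration, not an API from any of the frameworks named.

```python
# Minimal sketch of the zero-padding approach described above: sequences of
# different lengths in a mini-batch are padded with zeros to a common length.

def pad_batch(sequences, pad_value=0):
    """Zero-pad sequences to the length of the longest one in the batch."""
    max_len = max((len(s) for s in sequences), default=0)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in sequences]

padded = pad_batch([[3, 1, 4], [1, 5], [9]])
# → [[3, 1, 4], [1, 5, 0], [9, 0, 0]]
```

The trade-off, which motivates the design in this document, is that padded positions waste computation and must be masked out of the loss.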