
Commit db65f49

lcy-seso authored and abhinavarora committed

Update comments for two operators. (#7457)

* update code comments.
* update the comments.
* follow comments.

1 parent c3e062e commit db65f49

File tree

4 files changed: +77 −46 lines

paddle/operators/reorder_lod_tensor_by_rank_op.cc

Lines changed: 33 additions & 11 deletions

@@ -26,22 +26,44 @@ class ReorderLoDTensorByRankTableOpProtoMaker
   ReorderLoDTensorByRankTableOpProtoMaker(OpProto *proto,
                                           OpAttrChecker *op_checker)
       : OpProtoAndCheckerMaker(proto, op_checker) {
-    AddInput("X", "(LoDTensor) the input lod tensor need to be reordered.");
+    AddInput("X",
+             "(LoDTensor), the input lod tensor to be reordered according to "
+             "Input(RankTable).");
     AddInput("RankTable",
-             "(LoDRankTable) the rank table that input need follow");
-    AddOutput("Out", "(LoDTensor) reordered lod tensor");
-    AddComment(R"DOC(ReorderLoDTensorByRankTable
+             "(LoDRankTable), the rank table according to which Input(X) is "
+             "reordered.");
+    AddOutput("Out", "(LoDTensor), the reordered lod tensor.");
+    AddComment(R"DOC(ReorderLoDTensorByRankTable operator.
 
-Reorder the input X by the rank of `RankTable`. If `RankTable` is ordered by
-index [3, 0, 2, 1]. Input X will reorder its sequence, the third sequence of
-X will be the first sequence of Output.
-
-NOTE: The RankTable does not need to be calculated by X.
+Input(X) is a batch of sequences. Input(RankTable) stores the new order of the
+input sequence batch. The reorder_lod_tensor_by_rank operator reorders
+Input(X) according to the information provided by Input(RankTable).
 
 For example:
-The X = [Seq0, Seq1, Seq2, Seq3]. The indices of RankTable are [3, 0, 2, 1].
 
-The Out = [Seq3, Seq0, Seq2, Seq1] with correct LoD information.
+If the indices stored in Input(RankTable) are [3, 0, 2, 1], Input(X) will be
+reordered so that the fourth sequence in Input(X) becomes the first one,
+followed by the original first, third, and second ones.
+
+That is:
+X = [Seq0, Seq1, Seq2, Seq3]. The indices in RankTable are [3, 0, 2, 1].
+Out = [Seq3, Seq0, Seq2, Seq1] with new LoD information.
+
+If the LoD information of Input(X) is empty, Input(X) is not sequence data.
+This is equivalent to a batch of sequences where each sequence has a fixed
+length of 1. In this case, the reorder_lod_tensor_by_rank operator reorders
+each slice of Input(X) along the first axis according to Input(RankTable).
+
+That is:
+X = [Slice0, Slice1, Slice2, Slice3] and its LoD information is empty. The
+indices in RankTable are [3, 0, 2, 1].
+Out = [Slice3, Slice0, Slice2, Slice1], and no LoD information is appended.
+
+NOTE: This operator sorts Input(X) according to a given LoDRankTable, which
+does not need to be calculated from Input(X). It can be calculated from
+another, different sequence, and then this operator sorts Input(X) according
+to the given LoDRankTable.
 
 )DOC");
   }
 };
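The reordering semantics described in the new comment can be sketched in plain Python. This is an illustration only, not the operator's C++ implementation; `reorder_by_rank` is a hypothetical helper name, and sequences are modeled as plain list elements:

```python
def reorder_by_rank(items, rank_indices):
    """Permute items so that output slot k holds items[rank_indices[k]].

    Hypothetical sketch of reorder_lod_tensor_by_rank semantics; items may
    be whole sequences (non-empty LoD) or single slices (empty LoD).
    """
    return [items[i] for i in rank_indices]

# Sequence case: X = [Seq0, Seq1, Seq2, Seq3], RankTable indices [3, 0, 2, 1].
x = ["Seq0", "Seq1", "Seq2", "Seq3"]
print(reorder_by_rank(x, [3, 0, 2, 1]))
# ['Seq3', 'Seq0', 'Seq2', 'Seq1']

# Empty-LoD case: each row is a length-1 "sequence", so the operator just
# permutes slices along the first axis.
slices = ["Slice0", "Slice1", "Slice2", "Slice3"]
print(reorder_by_rank(slices, [3, 0, 2, 1]))
# ['Slice3', 'Slice0', 'Slice2', 'Slice1']
```

Both cases reduce to the same permutation; the only difference is whether LoD information is carried along with the reordered output.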

paddle/operators/shrink_rnn_memory_op.cc

Lines changed: 12 additions & 10 deletions

@@ -45,7 +45,7 @@ class ShrinkRNNMemoryOp : public ArrayOp {
                     rank_items.begin();
 
     auto *out_var = scope.FindVar(Output("Out"));
-    PADDLE_ENFORCE(out_var != nullptr, "Output Out must be set");
+    PADDLE_ENFORCE(out_var != nullptr, "Output(Out) must be set.");
     auto &out_tensor = *out_var->GetMutable<framework::LoDTensor>();
 
     size_t height = dst_num_rows;
@@ -76,15 +76,17 @@ class ShrinkRNNMemoryOpProtoMaker : public framework::OpProtoAndCheckerMaker {
              "(LoDTensor) The step index. The RNN step memory 'X' will be "
              "shrinked to match the size of the input of the index'th step.");
     AddOutput("Out", "(LoDTensor) The shrinked RNN step memory.");
-    AddComment(
-        R"DOC(
-In dynamic RNN, we are able to handle sequences of different lengths.
-Because of the multiple lengths, the size of each step input can be
-different, which may lead to a mismatching between the input of
-the current step and the memory generated by the previous one. This
-operator shrinks memory according to the size of the next step input,
-to make sure that they can match each other.
-)DOC");
+    AddComment(R"DOC(
+This operator shrinks the output batch of memory defined in dynamic RNN.
+
+Dynamic RNN can handle variable-length sequences: sequences in a mini-batch
+are first sorted by length, so the longest sequence becomes the first one in
+the sorted batch, followed by the second longest, and so on. Dynamic RNN then
+slices the batch input timestep by timestep from the sorted input. Once any
+sequence in the input batch reaches its end, memory defined in dynamic RNN has
+to shrink its output to match the input batch size of the next timestep.
+)DOC");
   }
 };
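The shrinking behaviour the new comment describes can be sketched in Python. This is a hypothetical illustration of why the memory batch shrinks, not the operator's code: with sequence lengths sorted in descending order, the number of rows still "alive" at each timestep only ever decreases, and the memory must match it.

```python
def batch_size_at(step, sorted_lengths):
    """Number of sequences that have not yet ended at a given timestep.

    sorted_lengths must be in descending order, as dynamic RNN sorts its
    mini-batch. (Hypothetical helper for illustration only.)
    """
    return sum(1 for n in sorted_lengths if n > step)

lengths = [5, 3, 3, 1]  # a sorted mini-batch: longest sequence first
for t in range(max(lengths)):
    print(t, batch_size_at(t, lengths))
# 0 -> 4 rows, 1 -> 3, 2 -> 3, 3 -> 1, 4 -> 1
```

At step 1 the length-1 sequence has ended, so the memory output must shrink from 4 rows to 3 before the next timestep's input can be matched against it.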

python/paddle/v2/fluid/layers/control_flow.py

Lines changed: 23 additions & 17 deletions

@@ -742,11 +742,10 @@ def topk(input, k):
 
 
 def lod_tensor_to_array(x, table):
-    """This function performs the operation that converts an LOD_Tensor to
-    an array.
+    """Convert a LOD_TENSOR to an LOD_TENSOR_ARRAY.
 
     Args:
-        x (Variable|list): The tensor that needs to be converted to an array.
+        x (Variable|list): The LoD tensor to be converted to a LoD tensor array.
         table (ParamAttr|list): The variable that stores the level of lod
                                 which is ordered by sequence length in
                                 descending order.
@@ -776,11 +775,10 @@ def lod_tensor_to_array(x, table):
 
 
 def array_to_lod_tensor(x, table):
-    """This function performs the operations that converts an array to
-    an LOD_Tensor.
+    """Convert a LOD_TENSOR_ARRAY to an LOD_TENSOR.
 
     Args:
-        x (Variable|list): The array that needs to be converted to a tensor.
+        x (Variable|list): The LoD tensor array to be converted to a tensor.
         table (ParamAttr|list): The variable that stores the level of lod
                                 which is ordered by sequence length in
                                 descending order.
@@ -808,7 +806,8 @@ def array_to_lod_tensor(x, table):
 
 
 def increment(x, value=1.0, in_place=True):
-    """This function performs an operation that increments each value in the
+    """
+    This function performs an operation that increments each value in the
     input :math:`x` by an amount: :math:`value` as mentioned in the input
     parameter. This operation is performed in-place by default.
@@ -841,17 +840,24 @@ def increment(x, value=1.0, in_place=True):
 
 
 def array_write(x, i, array=None):
-    """This function performs the operation to write the data out as an
-    LOD_TENSOR_ARRAY.
+    """
+    This function writes the given input variable to the specified position
+    indicated by the array index in an output LOD_TENSOR_ARRAY. If the
+    output LOD_TENSOR_ARRAY is not given (None), a new one will be created
+    and returned.
 
     Args:
         x (Variable|list): The input tensor from which the data will be read.
-        i (Variable|list): The subscript index in tensor array, that points the
-                           place from which data will be read.
-        array (Variable|list): The data can be read into this variable if
-                               this is assigned.
+        i (Variable|list): The index of the output LOD_TENSOR_ARRAY, pointing
+                           to the position to which the input tensor will be
+                           written.
+        array (Variable|list): The output LOD_TENSOR_ARRAY to which the input
+                               tensor will be written. If this parameter is
+                               None, a new LOD_TENSOR_ARRAY will be created
+                               and returned.
+
     Returns:
-        Variable: The tensor type variable that has the data written to it.
+        Variable: The output LOD_TENSOR_ARRAY where the input tensor is
+                  written.
 
     Examples:
         .. code-block::python
@@ -1228,7 +1234,7 @@ def step_input(self, x):
         self._assert_in_rnn_block_("step_input")
         if not isinstance(x, Variable):
             raise TypeError(
-                "step_input() can only take a Variable as its input")
+                "step_input() can only take a Variable as its input.")
         parent_block = self._parent_block_()
         if self.lod_rank_table is None:
             self.lod_rank_table = parent_block.create_var(
@@ -1289,8 +1295,8 @@ def block(self):
 
     def __call__(self, *args, **kwargs):
         if self.status != DynamicRNN.AFTER_RNN:
-            raise ValueError(
-                "Dynamic RNN outputs can only be retrieved after rnn block")
+            raise ValueError("Output of the dynamic RNN can only be visited "
+                             "outside the rnn block.")
         if len(self.outputs) == 1:
             return self.outputs[0]
         else:
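The array_write semantics documented in the docstring above can be sketched with a plain Python list standing in for the LOD_TENSOR_ARRAY. This is a hypothetical analogy, not the fluid implementation, which builds graph operators rather than writing eagerly:

```python
def array_write(x, i, array=None):
    """Write x at position i of array; create the array if it is None.

    Toy analogue of the documented behaviour: a list plays the role of
    the LOD_TENSOR_ARRAY, and i is a plain int instead of a Variable.
    """
    if array is None:
        array = []            # no output array given: create a new one
    while len(array) <= i:
        array.append(None)    # grow the array up to the requested index
    array[i] = x
    return array

arr = array_write("step0_tensor", 0)        # creates and returns a new array
arr = array_write("step1_tensor", 1, arr)   # writes into the existing array
print(arr)
# ['step0_tensor', 'step1_tensor']
```

The key point mirrored here is the `array=None` contract: passing no output array yields a freshly created one, while passing an existing array writes into it and returns it.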

python/paddle/v2/fluid/layers/tensor.py

Lines changed: 9 additions & 8 deletions

@@ -176,25 +176,26 @@ def fill_constant(shape, dtype, value, out=None):
     """
     **fill_constant**
 
-    This function creates a tensor of specified *shape* and
-    *dtype*, and initializes this with a constant supplied in *value*.
+    This function creates a tensor with the specified `shape` and `dtype`, and
+    initializes it with a constant specified by `value`.
 
-    It also sets *stop_gradient* to True.
+    The attribute `stop_gradient` of the created tensor is set to True.
 
     Args:
-        shape(tuple|list|None): Shape of output tensor
-        dtype(np.dtype|core.DataType|str): Data type of output tensor
-        value(float): Constant value to initialize the output tensor
-        out(Variable): Output Variable to initialize
+        shape(tuple|list|None): Shape of the output tensor.
+        dtype(np.dtype|core.DataType|str): Data type of the output tensor.
+        value(float): The constant value used to initialize the output tensor.
+        out(Variable): The output tensor.
 
     Returns:
-        Variable: The tensor variable storing the output
+        Variable: The tensor variable storing the output.
 
     Examples:
         .. code-block:: python
 
           data = fluid.layers.fill_constant(shape=[1], value=0, dtype='int64')
     """
+
     helper = LayerHelper("fill_constant", **locals())
     if out is None:
         out = helper.create_tmp_variable(dtype=dtype)
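The behaviour documented for fill_constant is close to NumPy's `np.full`. This is an analogy only, under the assumption that only the fill semantics matter here; the fluid layer additionally creates a graph Variable and sets `stop_gradient=True` on it:

```python
import numpy as np

# NumPy analogue of the docstring's example:
#   fluid.layers.fill_constant(shape=[1], value=0, dtype='int64')
data = np.full(shape=[1], fill_value=0, dtype='int64')
print(data)        # [0]
print(data.dtype)  # int64
```

As in the fluid layer, the shape, the constant value, and the dtype fully determine the result; there is no dependence on any other input.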

0 commit comments