
Commit 16a0f74

Merge pull request #11383 from jacquesqiao/update-api-reference-1
update split_lod_tensor, create_array and array_length doc
2 parents ce60bbf + 46ae1c9 commit 16a0f74

File tree: 9 files changed, +253 -101 lines changed

paddle/fluid/operators/activation_op.cc

Lines changed: 1 addition & 1 deletion
@@ -133,7 +133,7 @@ Relu Activation Operator.
 __attribute__((unused)) constexpr char TanhDoc[] = R"DOC(
 Tanh Activation Operator.
 
-$$out = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
+$$out = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
 
 )DOC";
 
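The only change here is escaping `\frac` so the docstring renders; the formula itself is the standard hyperbolic tangent. A minimal numpy check of the identity, independent of Paddle:

    import numpy as np

    # out = (e^x - e^(-x)) / (e^x + e^(-x)) is exactly tanh(x)
    x = np.linspace(-3.0, 3.0, 7)
    out = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
    assert np.allclose(out, np.tanh(x))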

paddle/fluid/operators/detection/polygon_box_transform_op.cc

Lines changed: 3 additions & 1 deletion
@@ -83,11 +83,13 @@ class PolygonBoxTransformOpMaker : public framework::OpProtoAndCheckerMaker {
 
     AddComment(R"DOC(
 PolygonBoxTransform Operator.
+
+PolygonBoxTransform Operator is used to transform the coordinate shift to the real coordinate.
+
 The input is the final geometry output in detection network.
 We use 2*n numbers to denote the coordinate shift from n corner vertices of
 the polygon_box to the pixel location. As each distance offset contains two numbers (xi, yi),
 the geometry output contains 2*n channels.
-PolygonBoxTransform Operator is used to transform the coordinate shift to the real coordinate.
 )DOC");
   }
 };
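To make "transform the coordinate shift to the real coordinate" concrete, here is a schematic numpy sketch, not the op's exact kernel: it assumes the real coordinate is the pixel location minus the predicted offset, and it ignores the downsampling stride that the actual implementation folds in.

    import numpy as np

    # geometry map: [2*n, H, W]; channel 2k holds x-offsets and channel
    # 2k+1 holds y-offsets from each pixel to polygon corner k
    n, H, W = 4, 8, 8
    geo = np.random.randn(2 * n, H, W).astype('float32')
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    real = np.empty_like(geo)
    real[0::2] = xs - geo[0::2]  # absolute x of each corner per pixel
    real[1::2] = ys - geo[1::2]  # absolute y of each corner per pixel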

paddle/fluid/operators/shape_op.cc

Lines changed: 6 additions & 3 deletions
@@ -36,10 +36,13 @@ class ShapeOpMaker : public framework::OpProtoAndCheckerMaker {
  public:
   void Make() override {
     AddInput("Input", "(Tensor), The input tensor.");
-    AddOutput("Out", "(Tensor), The shape of input tensor.");
+    AddOutput("Out",
+              "(Tensor), The shape of input tensor, the data type of the shape"
+              " is int64_t, will be on the same device with the input Tensor.");
     AddComment(R"DOC(
-Shape Operator.
-Get the shape of input tensor.
+Shape Operator
+
+Get the shape of input tensor. Only support CPU input Tensor now.
 )DOC");
   }
 };
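A hypothetical usage sketch from the Python side; the wrapper name `fluid.layers.shape` is an assumption (it is not part of this diff), but the sketch shows the documented contract: an int64 output holding the input's dimensions.

    import paddle.fluid as fluid

    # assumed Python wrapper around the shape op
    x = fluid.layers.data(name='x', shape=[3, 100, 100], dtype='float32')
    out = fluid.layers.shape(x)  # int64 Tensor with the dimensions of x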

paddle/fluid/operators/sigmoid_cross_entropy_with_logits_op.cc

Lines changed: 2 additions & 2 deletions
@@ -113,14 +113,14 @@ The logistic loss is given as follows:
 
 $$loss = -Labels * \log(\sigma(X)) - (1 - Labels) * \log(1 - \sigma(X))$$
 
-We know that $$\sigma(X) = (1 / (1 + \exp(-X)))$$. By substituting this we get:
+We know that $$\sigma(X) = \\frac{1}{1 + \exp(-X)}$$. By substituting this we get:
 
 $$loss = X - X * Labels + \log(1 + \exp(-X))$$
 
 For stability and to prevent overflow of $$\exp(-X)$$ when X < 0,
 we reformulate the loss as follows:
 
-$$loss = \max(X, 0) - X * Labels + \log(1 + \exp(-|X|))$$
+$$loss = \max(X, 0) - X * Labels + \log(1 + \exp(-\|X\|))$$
 
 Both the input `X` and `Labels` can carry the LoD (Level of Details) information.
 However the output only shares the LoD with input `X`.
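The reformulation is the usual log-sum-exp stabilization; a minimal numpy sketch (independent of Paddle) showing that it stays finite where the naive substitution overflows:

    import numpy as np

    def naive_loss(x, labels):
        # loss = X - X * Labels + log(1 + exp(-X)); overflows for X << 0
        return x - x * labels + np.log(1 + np.exp(-x))

    def stable_loss(x, labels):
        # loss = max(X, 0) - X * Labels + log(1 + exp(-|X|))
        return np.maximum(x, 0) - x * labels + np.log1p(np.exp(-np.abs(x)))

    x = np.array([-1000.0, -1.0, 0.0, 1.0, 1000.0])
    labels = np.array([1.0, 1.0, 0.0, 0.0, 1.0])
    print(naive_loss(x, labels))   # inf (with overflow warning) at x = -1000
    print(stable_loss(x, labels))  # finite everywhere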

python/paddle/fluid/layers/control_flow.py

Lines changed: 71 additions & 23 deletions
@@ -55,34 +55,36 @@
 
 def split_lod_tensor(input, mask, level=0):
     """
-    **split_lod_tensor**
-
     This function takes in an input that contains the complete lod information,
     and takes in a mask which is used to mask certain parts of the input.
     The output is the true branch and the false branch with the mask applied to
-    the input at a certain level in the tensor.
+    the input at a certain level in the tensor. Mainly used in IfElse to split
+    data into two parts.
 
     Args:
         input(tuple|list|None): The input tensor that contains complete
                 lod information needed to construct the output.
        mask(list): A bool column vector which masks the input.
-       level(int): The specific lod level to rank.
+       level(int): The specific lod level to split.
 
     Returns:
-       Variable: The true branch of tensor as per the mask applied to input.
-       Variable: The false branch of tensor as per the mask applied to input.
+       tuple(Variable, Variable):
+       The true branch of tensor as per the mask applied to input.
+
+       The false branch of tensor as per the mask applied to input.
 
     Examples:
         .. code-block:: python
 
-          x = layers.data(name='x', shape=[1])
+          x = fluid.layers.data(name='x', shape=[1])
           x.persistable = True
 
-          y = layers.data(name='y', shape=[1])
+          y = fluid.layers.data(name='y', shape=[1])
           y.persistable = True
 
-          out_true, out_false = layers.split_lod_tensor(
+          out_true, out_false = fluid.layers.split_lod_tensor(
                 input=x, mask=y, level=level)
+
     """
     helper = LayerHelper('split_lod_tensor', **locals())
     out_true = helper.create_tmp_variable(dtype=input.dtype)
@@ -105,16 +107,17 @@ def merge_lod_tensor(in_true, in_false, x, mask, level=0):
 
     This function takes in an input :math:`x`, the True branch, the False
     branch and a binary :math:`mask`. Using this information, this function
-    merges the True and False branches of the tensor into a single Output
-    at a certain lod level indiacted by :math:`level`.
+    merges the True and False branches of the tensor into a single tensor as
+    output at a certain lod level indicated by :math:`level`. Used in IfElse
+    to merge the output if True block and False Block.
 
     Args:
        in_true(tuple|list|None): The True branch to be merged.
       in_false(tuple|list|None): The False branch to be merged.
       x(tuple|list|None): The input tensor that contains complete
                lod information needed to construct the output.
       mask(list): A bool column vector which masks the input.
-      level(int): The specific lod level to rank.
+      level(int): The specific lod level to merge.
 
    Returns:
       Variable: The merged output tensor.
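Both helpers exist mainly to support IfElse, which splits its input with split_lod_tensor and merges the branch results with merge_lod_tensor. A hedged sketch of that pattern (the block-guard API names follow fluid's control_flow module of this era; treat the exact signatures as assumptions):

    import paddle.fluid as fluid

    x = fluid.layers.data(name='x', shape=[1])
    y = fluid.layers.data(name='y', shape=[1])
    cond = fluid.layers.less_than(x=x, y=y)

    ie = fluid.layers.IfElse(cond)
    with ie.true_block():
        ie.output(ie.input(x))   # true branch sees the split_lod_tensor true part
    with ie.false_block():
        ie.output(ie.input(x))   # false branch sees the false part
    out = ie()                   # branches merged back via merge_lod_tensor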
@@ -965,14 +968,17 @@ def array_write(x, i, array=None):
 
 
 def create_array(dtype):
-    """This function creates an array of type :math:`LOD_TENSOR_ARRAY` using the
-    LayerHelper.
+    """
+    **Create LoDTensorArray**
+
+    This function creates an array of LOD_TENSOR_ARRAY . It is mainly used to
+    implement RNN with array_write, array_read and While.
 
     Args:
-        dtype (int|float): The data type of the elements in the array.
+        dtype (int|float): The data type of the elements in the lod_tensor_array.
 
     Returns:
-        Variable: The tensor variable storing the elements of data type.
+        Variable: The lod_tensor_array variable storing the elements of data type.
 
     Examples:
         .. code-block:: python
@@ -1083,10 +1089,9 @@ def array_read(array, i):
     Examples:
         .. code-block:: python
 
-        tmp = fluid.layers.zeros(shape=[10], dtype='int32')
-        i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
-        arr = fluid.layers.array_read(tmp, i=i)
-
+          tmp = fluid.layers.zeros(shape=[10], dtype='int32')
+          i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
+          arr = layers.array_read(tmp, i=i)
     """
     helper = LayerHelper('array_read', **locals())
     if not isinstance(
@@ -1140,9 +1145,14 @@ def shrink_memory(x, i, table):
 
 
 def array_length(array):
-    """This function performs the operation to find the length of the input
+    """
+    **Get the Length of Input LoDTensorArray**
+
+    This function performs the operation to find the length of the input
     LOD_TENSOR_ARRAY.
 
+    Related API: array_read, array_write, While.
+
     Args:
         array (LOD_TENSOR_ARRAY): The input array that will be used
             to compute the length.
@@ -1151,12 +1161,13 @@ def array_length(array):
        Variable: The length of the input LoDTensorArray.
 
     Examples:
-        .. code-block::python
+        .. code-block:: python
 
           tmp = fluid.layers.zeros(shape=[10], dtype='int32')
           i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
           arr = fluid.layers.array_write(tmp, i=i)
           arr_len = fluid.layers.array_length(arr)
+
     """
     helper = LayerHelper('array_length', **locals())
     tmp = helper.create_tmp_variable(dtype='int64')
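Read together with array_write and array_read above, the three APIs compose as follows; this short sketch just stitches together the documented examples:

    import paddle.fluid as fluid

    tmp = fluid.layers.zeros(shape=[10], dtype='int32')
    i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
    arr = fluid.layers.array_write(tmp, i=i)   # write tmp into slot i of a new array
    read = fluid.layers.array_read(arr, i=i)   # read the same slot back
    arr_len = fluid.layers.array_length(arr)   # int64 scalar, here i + 1 = 11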
@@ -1247,6 +1258,42 @@ def complete(self):
 
 
 class Switch(object):
+    """
+    Switch class works just like a `if-elif-else`. Can be used in learning rate scheduler
+    to modify learning rate
+
+    The Semantics:
+
+    1. A `switch` control-flow checks cases one-by-one.
+
+    2. The condition of each case is a boolean value, which is a scalar Variable.
+
+    3. It runs the first matched case, or the default case if there is one.
+
+    4. Once it matches a case, it runs the corresponding branch and only that branch.
+
+    Examples:
+        .. code-block:: python
+
+            lr = fluid.layers.tensor.create_global_var(
+                shape=[1],
+                value=0.0,
+                dtype='float32',
+                persistable=True,
+                name="learning_rate")
+            one_var = tensor.fill_constant(
+                shape=[1], dtype='float32', value=1.0)
+            two_var = tensor.fill_constant(
+                shape=[1], dtype='float32', value=2.0)
+
+            with fluid.layers.control_flow.Switch() as switch:
+                with switch.case(global_step == zero_var):
+                    fluid.layers.tensor.assign(input=one_var, output=lr)
+                with switch.default():
+                    fluid.layers.tensor.assign(input=two_var, output=lr)
+
+    """
+
     def __init__(self, name=None):
         self.helper = LayerHelper('switch', name=name)
         self.inside_scope = False
@@ -1276,7 +1323,8 @@ def case(self, condition):
         return ConditionalBlockGuard(cond_block)
 
     def default(self):
-        """create a default case for this switch
+        """
+        create a default case for this switch
         """
         pre_cond_num = len(self.pre_not_conditions)
         if pre_cond_num == 0:

python/paddle/fluid/layers/detection.py

Lines changed: 39 additions & 34 deletions
@@ -620,7 +620,7 @@ def prior_box(input,
                offset=0.5,
                name=None):
     """
-    **Prior box operator**
+    **Prior Box Operator**
 
     Generate prior boxes for SSD(Single Shot MultiBox Detector) algorithm.
     Each position of the input produce N prior boxes, N is determined by
@@ -649,26 +649,30 @@ def prior_box(input,
        name(str): Name of the prior box op. Default: None.
 
     Returns:
-        boxes(Variable): the output prior boxes of PriorBox.
-            The layout is [H, W, num_priors, 4].
-            H is the height of input, W is the width of input,
-            num_priors is the total
-            box count of each position of input.
-        Variances(Variable): the expanded variances of PriorBox.
-            The layout is [H, W, num_priors, 4].
-            H is the height of input, W is the width of input
-            num_priors is the total
-            box count of each position of input
+        tuple: A tuple with two Variable (boxes, variances)
+
+        boxes: the output prior boxes of PriorBox.
+        The layout is [H, W, num_priors, 4].
+        H is the height of input, W is the width of input,
+        num_priors is the total
+        box count of each position of input.
+
+        variances: the expanded variances of PriorBox.
+        The layout is [H, W, num_priors, 4].
+        H is the height of input, W is the width of input
+        num_priors is the total
+        box count of each position of input
 
 
     Examples:
         .. code-block:: python
-            box, var = prior_box(
-                input=conv1,
-                image=images,
-                min_sizes=[100.],
-                flip=True,
-                clip=True)
+
+            box, var = fluid.layers.prior_box(
+                input=conv1,
+                image=images,
+                min_sizes=[100.],
+                flip=True,
+                clip=True)
     """
     helper = LayerHelper("prior_box", **locals())
     dtype = helper.input_dtype()
@@ -738,11 +742,9 @@ def multi_box_head(inputs,
                    stride=1,
                    name=None):
     """
-    **Prior_boxes**
-
     Generate prior boxes for SSD(Single Shot MultiBox Detector)
     algorithm. The details of this algorithm, please refer the
-    section 2.2 of SSD paper (SSD: Single Shot MultiBox Detector)
+    section 2.2 of SSD paper `SSD: Single Shot MultiBox Detector
     <https://arxiv.org/abs/1512.02325>`_ .
 
     Args:
@@ -783,24 +785,27 @@ def multi_box_head(inputs,
        name(str): Name of the prior box layer. Default: None.
 
     Returns:
-        mbox_loc(Variable): The predicted boxes' location of the inputs.
-            The layout is [N, H*W*Priors, 4]. where Priors
-            is the number of predicted boxes each position of each input.
-        mbox_conf(Variable): The predicted boxes' confidence of the inputs.
-            The layout is [N, H*W*Priors, C]. where Priors
-            is the number of predicted boxes each position of each input
-            and C is the number of Classes.
-        boxes(Variable): the output prior boxes of PriorBox.
-            The layout is [num_priors, 4]. num_priors is the total
-            box count of each position of inputs.
-        Variances(Variable): the expanded variances of PriorBox.
-            The layout is [num_priors, 4]. num_priors is the total
-            box count of each position of inputs
+        tuple: A tuple with four Variables. (mbox_loc, mbox_conf, boxes, variances)
+
+        mbox_loc: The predicted boxes' location of the inputs. The layout
+        is [N, H*W*Priors, 4]. where Priors is the number of predicted
+        boxes each position of each input.
+
+        mbox_conf: The predicted boxes' confidence of the inputs. The layout
+        is [N, H*W*Priors, C]. where Priors is the number of predicted boxes
+        each position of each input and C is the number of Classes.
+
+        boxes: the output prior boxes of PriorBox. The layout is [num_priors, 4].
+        num_priors is the total box count of each position of inputs.
+
+        variances: the expanded variances of PriorBox. The layout is
+        [num_priors, 4]. num_priors is the total box count of each position of inputs
 
 
     Examples:
         .. code-block:: python
-          mbox_locs, mbox_confs, box, var = layers.multi_box_head(
+
+          mbox_locs, mbox_confs, box, var = fluid.layers.multi_box_head(
               inputs=[conv1, conv2, conv3, conv4, conv5, conv5],
               image=images,
               num_classes=21,

python/paddle/fluid/layers/learning_rate_scheduler.py

Lines changed: 18 additions & 15 deletions
@@ -199,25 +199,28 @@ def polynomial_decay(learning_rate,
                      end_learning_rate=0.0001,
                      power=1.0,
                      cycle=False):
-    """Applies polynomial decay to the initial learning rate.
+    """
+    Applies polynomial decay to the initial learning rate.
+
+    .. code-block:: python
+
+     if cycle:
+       decay_steps = decay_steps * ceil(global_step / decay_steps)
+     else:
+       global_step = min(global_step, decay_steps)
+     decayed_learning_rate = (learning_rate - end_learning_rate) *
+          (1 - global_step / decay_steps) ^ power + end_learning_rate
 
-    >>> if cycle:
-    >>>     decay_steps = decay_steps * ceil(global_step / decay_steps)
-    >>> else:
-    >>>     global_step = min(global_step, decay_steps)
-    >>> decayed_learning_rate = (learning_rate - end_learning_rate) *
-    >>>                   (1 - global_step / decay_steps) ^ power +
-    >>>                   end_learning_rate
     Args:
-        learning_rate: A scalar float32 value or a Variable. This
-          will be the initial learning rate during training
-        decay_steps: A Python `int32` number.
-        end_learning_rate: A Python `float` number.
-        power: A Python `float` number
-        cycle: Boolean. If set true, decay the learning rate every decay_steps.
+        learning_rate(Variable|float32): A scalar float32 value or a Variable. This
+          will be the initial learning rate during training.
+        decay_steps(int32): A Python `int32` number.
+        end_learning_rate(float): A Python `float` number.
+        power(float): A Python `float` number.
+        cycle(bool): If set true, decay the learning rate every decay_steps.
 
     Returns:
-        The decayed learning rate
+        Variable: The decayed learning rate
     """
     global_step = _decay_step_counter()
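As a sanity check of the decay rule, a plain-Python sketch with `^` read as exponentiation (which is what the op computes); like the docstring, it elides any guard for global_step = 0 in cycle mode:

    import math

    def decayed_lr(learning_rate, global_step, decay_steps,
                   end_learning_rate=0.0001, power=1.0, cycle=False):
        if cycle:
            decay_steps = decay_steps * math.ceil(global_step / decay_steps)
        else:
            global_step = min(global_step, decay_steps)
        return ((learning_rate - end_learning_rate) *
                (1 - global_step / decay_steps) ** power + end_learning_rate)

    print(decayed_lr(0.1, global_step=500, decay_steps=1000))   # 0.05005
    print(decayed_lr(0.1, global_step=1500, decay_steps=1000))  # clamped to 0.0001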
