
Commit 0f53f3d

[paddle v2.0.0rc1: API fixs] assign/conv2d/conv2d_transpose/cast/ParamAttr (#29397)
* fix bug,test=develop
* fix DLTP-15151, paddle.ParamAttr API
* fix DLTP-15083/DLTP-15274, paddle.nn.functionl.assign paddle.cast API
* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API
* fix DLTP-15083, paddle.nn.functionl.assign API
* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API
* support in_dygraph_mode for cast op, test=develop
* fix bug,test=develop
* fix doc
* fix DLTP-15431/DLTP-15432, paddle.static.nn.conv2d paddle.static.nn.conv2d_transpose API, test=document_fix
1 parent de3c067 commit 0f53f3d

3 files changed, +32 -33 lines changed

python/paddle/fluid/layers/nn.py

Lines changed: 23 additions & 23 deletions
@@ -1403,7 +1403,7 @@ def conv2d(input,
    W_{out}&= \\frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1

    Args:
-       input (Variable): The input is 4-D Tensor with shape [N, C, H, W], the data type
+       input (Tensor): The input is 4-D Tensor with shape [N, C, H, W], the data type
            of input is float16 or float32 or float64.
        num_filters(int): The number of filter. It is as same as the output
            image channel.
@@ -1456,9 +1456,9 @@ def conv2d(input,
            `[batch_size, input_channels, input_height, input_width]`.

    Returns:
-       A Variable holding Tensor representing the conv2d, whose data type is the
-       same with input. If act is None, the tensor variable storing the convolution
-       result, and if act is not None, the tensor variable storing convolution
+       A Tensor representing the conv2d, whose data type is the
+       same with input. If act is None, the tensor storing the convolution
+       result, and if act is not None, the tensor storing convolution
        and non-linearity activation result.

    Raises:
@@ -1477,12 +1477,12 @@ def conv2d(input,
    Examples:
        .. code-block:: python

-           import paddle.fluid as fluid
            import paddle
            paddle.enable_static()

-           data = fluid.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
-           conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
+           data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
+           conv2d = paddle.static.nn.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
+           print(conv2d.shape) # [-1, 2, 30, 30]
    """

    check_variable_and_dtype(input, 'input', ['float16', 'float32', 'float64'],
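The shape printed by the new conv2d example follows directly from the W_out/H_out formula quoted at the top of this hunk. A quick sanity check (my own sketch, not part of the commit), assuming the defaults stride=1, padding=0, dilation=1:

    # H_out = (H_in + 2*padding - (dilation*(filter_size - 1) + 1)) / stride + 1
    h_in, filter_size, stride, padding, dilation = 32, 3, 1, 0, 1
    h_out = (h_in + 2 * padding - (dilation * (filter_size - 1) + 1)) // stride + 1
    print(h_out)  # 30 -> static shape [-1, 2, 30, 30]: unknown batch, 2 filters, 30x30 map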
@@ -3805,7 +3805,7 @@ def conv2d_transpose(input,
    conv2d_transpose can compute the kernel size automatically.

    Args:
-       input(Variable): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
+       input(Tensor): 4-D Tensor with [N, C, H, W] or [N, H, W, C] format,
            its data type is float32 or float64.
        num_filters(int): The number of the filter. It is as same as the output
            image channel.
@@ -3823,15 +3823,14 @@ def conv2d_transpose(input,
        stride(int|tuple, optional): The stride size. It means the stride in transposed convolution.
            If stride is a tuple, it must contain two integers, (stride_height, stride_width).
            Otherwise, stride_height = stride_width = stride. Default: stride = 1.
-       padding(int|list|str|tuple, optional): The padding size. The padding argument effectively adds
-           `dilation * (kernel - 1)` amount of zero-padding on both sides of input. If `padding` is a
-           string, either 'VALID' or 'SAME' supported, which is the padding algorithm.
-           If `padding` is a tuple or list, it could be in three forms:
-           `[pad_height, pad_width]` or
-           `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`, and
-           when `data_format` is `'NCHW'`,
-           `padding` can be in the form `[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
-           when `data_format` is `'NHWC'`, `padding` can be in the form
+       padding(str|int|list|tuple, optional): The padding size. It means the number of zero-paddings
+           on both sides for each dimension. If `padding` is a string, either 'VALID' or
+           'SAME' which is the padding algorithm. If `padding` is a tuple or list,
+           it could be in three forms: `[pad_height, pad_width]` or
+           `[pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]`,
+           and when `data_format` is `"NCHW"`, `padding` can be in the form
+           `[[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]`.
+           when `data_format` is `"NHWC"`, `padding` can be in the form
            `[[0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]`.
            Default: padding = 0.
        dilation(int|tuple, optional): The dilation size. It means the spacing between the kernel points.
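The rewritten padding description lists several interchangeable spellings of the same symmetric padding. A hedged illustration (my own, not taken from the diff), assuming NCHW data format and one pixel of padding on every side:

    pad_int  = 1                                  # single int: same padding for height and width
    pad_hw   = [1, 1]                             # [pad_height, pad_width]
    pad_tblr = [1, 1, 1, 1]                       # [top, bottom, left, right]
    pad_nchw = [[0, 0], [0, 0], [1, 1], [1, 1]]   # nested form; batch and channel dims get no padding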
@@ -3869,11 +3868,11 @@ def conv2d_transpose(input,
            `[batch_size, input_channels, input_height, input_width]`.

    Returns:
-       A Variable holding Tensor representing the conv2d_transpose, whose
+       A Tensor representing the conv2d_transpose, whose
        data type is the same with input and shape is (num_batches, channels, out_h,
-       out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor variable
+       out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor
        storing the transposed convolution result, and if act is not None, the
-       tensor variable storing transposed convolution and non-linearity activation
+       tensor storing transposed convolution and non-linearity activation
        result.

    Raises:
@@ -3892,11 +3891,12 @@ def conv2d_transpose(input,
    Examples:
        .. code-block:: python

-           import paddle.fluid as fluid
            import paddle
            paddle.enable_static()
-           data = fluid.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
-           conv2d_transpose = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)
+
+           data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
+           conv2d_transpose = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)
+           print(conv2d_transpose.shape) # [-1, 2, 34, 34]
    """
    assert param_attr is not False, "param_attr should not be False in conv2d_transpose."
    if data_format not in ['NCHW', 'NHWC']:
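For the transposed convolution the feature map grows rather than shrinks, which is where the 34 in the new example's comment comes from. A minimal check (my own sketch, assuming the usual relation H_out = (H_in - 1) * stride - 2 * padding + dilation * (filter_size - 1) + 1 with the defaults stride=1, padding=0, dilation=1):

    h_in, filter_size, stride, padding, dilation = 32, 3, 1, 0, 1
    h_out = (h_in - 1) * stride - 2 * padding + dilation * (filter_size - 1) + 1
    print(h_out)  # 34 -> static shape [-1, 2, 34, 34] in the updated docstring example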

python/paddle/fluid/layers/tensor.py

Lines changed: 5 additions & 5 deletions
@@ -203,7 +203,7 @@ def create_global_var(shape,
    def cast(x, dtype):
        """

-   This OP takes in the Variable :attr:`x` with :attr:`x.dtype` and casts it
+   This OP takes in the Tensor :attr:`x` with :attr:`x.dtype` and casts it
        to the output with :attr:`dtype`. It's meaningless if the output dtype
        equals the input dtype, but it's fine if you do so.
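The commit message also mentions in_dygraph_mode support for the cast op. A minimal dygraph-style sketch (my own example, not part of the diff), assuming paddle.cast follows the signature documented here:

    import paddle

    x = paddle.to_tensor([1.7, 2.3], dtype='float32')
    y = paddle.cast(x, 'int32')    # narrowing cast; fractional parts are truncated (assumption)
    z = paddle.cast(x, 'float32')  # same dtype: "meaningless" per the docstring, but allowed
    print(y.numpy(), z.numpy())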
@@ -539,20 +539,20 @@ def assign(input, output=None):
    The OP copies the :attr:`input` to the :attr:`output`.

    Parameters:
-       input (Variable|numpy.ndarray): A tensor or numpy ndarray, its data type supports
+       input (Tensor|numpy.ndarray): A tensor or numpy ndarray, its data type supports
            float16, float32, float64, int32 and int64.
-       output (Variable, optional): A tensor. If :attr:`output` is None, a new tensor will
+       output (Tensor, optional): A tensor. If :attr:`output` is None, a new tensor will
            be created as :attr:`output`. Default: None.

    Returns:
-       Variable: A tensor with the same shape, data type and value as :attr:`input`.
+       Tensor: A tensor with the same shape, data type and value as :attr:`input`.

    Examples:
        .. code-block:: python

            import paddle
            import numpy as np
-           data = paddle.fill_constant(shape=[3, 2], value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
+           data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
            array = np.array([[1, 1],
                              [3, 4],
                              [1, 3]]).astype(np.int64)
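The assign example is cut off by the hunk boundary, so here is a separate, hedged sketch of the behaviour described by the parameters above (my own code, assuming the 2.0 entry point paddle.assign shares this signature and accepts numpy arrays):

    import numpy as np
    import paddle

    data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64')
    array = np.array([[1, 1], [3, 4], [1, 3]]).astype(np.int64)

    copied_tensor = paddle.assign(data)   # new Tensor with the same shape, dtype and values
    copied_array = paddle.assign(array)   # numpy input is copied into a new int64 Tensor
    print(copied_tensor.numpy(), copied_array.numpy())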

python/paddle/fluid/param_attr.py

Lines changed: 4 additions & 5 deletions
@@ -37,8 +37,8 @@ class ParamAttr(object):
    Note:
        ``gradient_clip`` of ``ParamAttr`` HAS BEEN DEPRECATED since 2.0.
        Please use ``need_clip`` in ``ParamAttr`` to speficiy the clip scope.
-       There are three clipping strategies: :ref:`api_paddle_nn_GradientClipByGlobalNorm` ,
-       :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` .
+       There are three clipping strategies: :ref:`api_paddle_nn_ClipGradByGlobalNorm` ,
+       :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` .

    Parameters:
        name (str, optional): The parameter's name. Default None, meaning that the name
@@ -50,8 +50,8 @@ class ParamAttr(object):
            optimize is the global learning rates times the parameter's learning rate times
            the factor of learning rate scheduler. Default 1.0.
        regularizer (WeightDecayRegularizer, optional): Regularization strategy. There are two method:
-           :ref:`api_fluid_regularizer_L1Decay` , :ref:`api_fluid_regularizer_L2Decay` . If
-           regularizer is also set in ``optimizer`` (such as :ref:`api_fluid_optimizer_SGDOptimizer` ),
+           :ref:`api_paddle_regularizer_L1Decay` , :ref:`api_paddle_regularizer_L2Decay` . If
+           regularizer is also set in ``optimizer`` (such as :ref:`api_paddle_optimizer_SGD` ),
            that regularizer setting in optimizer will be ignored. Default None, meaning there is
            no regularization.
        trainable (bool): Whether this parameter is trainable. Default True.
@@ -63,7 +63,6 @@ class ParamAttr(object):
        .. code-block:: python

            import paddle
-           paddle.enable_static()

            weight_attr = paddle.ParamAttr(name="weight",
                                           learning_rate=0.5,
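The ParamAttr example is likewise truncated mid-call by the hunk, so here is a separate, hedged sketch that exercises the 2.0 names the updated cross-references point to (my own code, assuming paddle.regularizer.L2Decay, paddle.nn.ClipGradByGlobalNorm, paddle.nn.Linear and paddle.optimizer.SGD exist with these signatures; note the commit drops paddle.enable_static(), so this runs in dygraph mode):

    import paddle

    weight_attr = paddle.ParamAttr(name="weight",
                                   learning_rate=0.5,
                                   regularizer=paddle.regularizer.L2Decay(1.0),
                                   trainable=True)
    linear = paddle.nn.Linear(3, 4, weight_attr=weight_attr)

    # Clipping is configured on the optimizer via one of the three strategies named above.
    sgd = paddle.optimizer.SGD(learning_rate=0.1,
                               parameters=linear.parameters(),
                               grad_clip=paddle.nn.ClipGradByGlobalNorm(1.0))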
