Commit c837a0a

committed
follow comments
1 parent be97c47 commit c837a0a

File tree: 1 file changed (+50, -88 lines)
  • python/paddle/fluid/layers


python/paddle/fluid/layers/nn.py

Lines changed: 50 additions & 88 deletions
@@ -6385,6 +6385,7 @@ def expand(x, expand_times, name=None):
 from paddle.fluid.framework import convert_np_dtype_to_dtype_
 
 
+@templatedoc()
 def uniform_random_batch_size_like(input,
                                    shape,
                                    dtype='float32',
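Aside: the `@templatedoc()` decorator added in this hunk is what expands the `${comment}` / `${*_comment}` markers in the docstrings below, pulling text from the registered operator's proto at import time. The following is only a minimal sketch of that substitution idea, not Paddle's actual implementation; the `op_comments` dict stands in for the C++ OpProto lookup, and the decorated function name is made up for illustration.

import re


def templatedoc_sketch(op_comments):
    """Toy stand-in for fluid's @templatedoc(): expand ${key} markers in a
    function's docstring using text looked up from the operator's proto."""
    def decorator(func):
        def substitute(match):
            # Unknown keys are left untouched so missing proto text stays visible.
            return op_comments.get(match.group(1), match.group(0))
        func.__doc__ = re.sub(r"\$\{(\w+)\}", substitute, func.__doc__ or "")
        return func
    return decorator


# Usage with made-up proto text (illustrative names only):
@templatedoc_sketch({
    "comment": "Initializes a tensor whose batch size is taken from a reference input.",
    "input_comment": "Tensor whose input_dim_idx'th dimension gives the batch size.",
})
def uniform_random_batch_size_like_demo(input, shape):
    """${comment}

    Args:
        input (Variable): ${input_comment}
    """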
@@ -6394,22 +6395,19 @@ def uniform_random_batch_size_like(input,
                                    max=1.0,
                                    seed=0):
     """
-    UniformRandomBatchSizeLike operator.
-    This operator initializes a tensor with the same batch_size as the Input tensor with random values sampled from a uniform distribution.
-
+    ${comment}
 
     Args:
-        input (Variable): Tensor whose input_dim_idx'th dimension specifies the batch_size.
-        shape (tuple|list): the shape of the output.
-        input_dim_idx (Int): The index of input's batch size dimension.
-        output_dim_idx (Int): The index of output's batch size dimension.
-        min (Float): Minimum value of uniform random.
-        max (Float): Maximum value of uniform random.
-        seed (Int): Random seed used for generating samples. 0 means use a seed generated by the system.
-            Note that if seed is not 0, this operator will always generate the same random numbers every time.
+        input (Variable): ${input_comment}
+        shape (tuple|list): ${shape_comment}
+        input_dim_idx (Int): ${input_dim_idx_comment}
+        output_dim_idx (Int): ${output_dim_idx}
+        min (Float): ${min_comment}
+        max (Float): ${max_comment}
+        seed (Int): ${seed_comment}
         dtype(np.dtype|core.VarDesc.VarType|str): The type of data : float32, float_16, int etc
     Returns:
-        out (Variable): Output of this operator.
+        out (Variable): ${out_comment}
 
     """

@@ -6433,28 +6431,26 @@ def uniform_random_batch_size_like(input,
     return out
 
 
+@templatedoc()
 def gaussian_random(shape,
                     mean=0.0,
                     std=1.0,
                     seed=0,
                     dtype='float32',
                     use_mkldnn=False):
     """
-    GaussianRandom Operator.
-
-    Used to initialize tensors with gaussian random generator.
+    ${comment}
 
     Args:
-        shape (tuple|list): The dimension of random tensor.
-        mean (Float): Mean of random tensor.
-        std (Float): Std of random tensor.
-        seed (Int): Random seed of generator.0 means use system wide seed.
-            Note that if seed is not 0, this operator will always generate the same random numbers every time.
+        shape (tuple|list): ${shape_comment}
+        mean (Float): ${mean_comment}
+        std (Float): ${std_comment}
+        seed (Int): ${seed_comment}
         dtype(np.dtype|core.VarDesc.VarType|str): Output data type.
         use_mkldnn (Bool): Only used in mkldnn kernel.
 
     Returns:
-        out (Variable): Output of this operator.
+        out (Variable): ${out_comment}
 
     """

@@ -6476,23 +6472,20 @@ def gaussian_random(shape,
     return out
 
 
+@templatedoc()
 def sampling_id(x, min=0.0, max=1.0, seed=0, dtype='float32'):
     """
-    SamplingId Operator.
-
-    A layer for sampling id from multinomial distribution from the input.
-    Sampling one id for one sample.
+    ${comment}
 
     Args:
-        x (Variable): The input tensor of softmax. 2-D with shape [batch_size, input_feature_dimensions].
-        min (Float): Minimum value of random.
-        max (Float): Maximun value of random.
-        seed (Float): random seed used for the random number engine.0 means use a seed generated by the system.
-            Note that if seed is not 0, this operator will always generate the same random numbers every time.
+        x (Variable): ${x_comment}
+        min (Float): ${min_comment}
+        max (Float): ${max_comment}
+        seed (Float): ${seed_comment}
         dtype(np.dtype|core.VarDesc.VarType|str): The type of output data : float32, float_16, int etc
 
     Returns:
-        out (Variable): Output of this operator.
+        out (Variable): ${out_comment}
 
     """
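For context: the behaviour the deleted sampling_id prose described (one id drawn from a multinomial distribution per row of a [batch_size, n] input) can be illustrated with plain NumPy. This is only a sketch of the semantics, not the operator's kernel, and it assumes each input row is already a normalized probability distribution (e.g. a softmax output).

import numpy as np

rng = np.random.RandomState(0)

# Each row is a probability distribution over 4 classes.
probs = np.array([[0.1, 0.2, 0.3, 0.4],
                  [0.7, 0.1, 0.1, 0.1]])

# One sampled id per sample, as the old docstring described.
ids = np.array([rng.choice(probs.shape[1], p=row) for row in probs])
print(ids.shape)  # (2,)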

@@ -6509,6 +6502,7 @@ def sampling_id(x, min=0.0, max=1.0, seed=0, dtype='float32'):
     return out
 
 
+@templatedoc()
 def gaussian_random_batch_size_like(input,
                                     shape,
                                     input_dim_idx=0,
@@ -6518,20 +6512,20 @@ def gaussian_random_batch_size_like(input,
                                     seed=0,
                                     dtype='float32'):
     """
-    Used to initialize tensors with gaussian random generator. The defalut mean of the distribution is 0. and defalut standard deviation (std) of the distribution is 1.. Uers can set mean and std by input arguments.
+    ${comment}
 
     Args:
-        input (Variable): Tensor whose input_dim_idx'th dimension specifies the batch_size.
-        shape (tuple|list): the shape of the output.
-        input_dim_idx (Int): The index of input's batch size dimension
-        output_dim_idx (Int): The index of output's batch size dimension
-        mean (Float): The mean (or center) of the gaussian distribution.
-        std (Float): The standard deviation (std, or spread) of the gaussian distribution.
-        seed (Int): Random seed of generator.0 means use system wide seed._note that if seed is not 0, this operator will always generate the same random numbers every time.
+        input (Variable): ${input_comment}
+        shape (tuple|list): ${shape_comment}
+        input_dim_idx (Int): ${input_dim_idx}
+        output_dim_idx (Int): ${output_dim_idx_comment}
+        mean (Float): ${mean_comment}
+        std (Float): ${std_comment}
+        seed (Int): ${seed_comment}
         dtype(np.dtype|core.VarDesc.VarType|str): The type of output data : float32, float_16, int etc
 
     Returns:
-        out (Variable): Output of this operator
+        out (Variable): ${out_comment}
     """
 
     helper = LayerHelper('gaussian_random_batch_size_like', **locals())
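As a side note, both *_batch_size_like layers touched in this diff follow the same shape rule the deleted text spelled out: the output uses the requested shape, except that its output_dim_idx'th dimension is copied from the input's input_dim_idx'th dimension. A NumPy sketch of that rule (illustrative only, not the Paddle kernel; the function name is hypothetical):

import numpy as np


def gaussian_random_batch_size_like_sketch(ref, shape, input_dim_idx=0,
                                           output_dim_idx=0, mean=0.0, std=1.0):
    # Copy the batch dimension from the reference tensor into the target shape.
    out_shape = list(shape)
    out_shape[output_dim_idx] = ref.shape[input_dim_idx]
    return np.random.normal(loc=mean, scale=std, size=out_shape)


ref = np.zeros((7, 32))      # batch_size = 7
out = gaussian_random_batch_size_like_sketch(ref, shape=[-1, 10])
print(out.shape)             # (7, 10)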
@@ -6554,19 +6548,17 @@ def gaussian_random_batch_size_like(input,
     return out
 
 
+@templatedoc()
 def sum(x, use_mkldnn=False):
     """
-    Sum operator.
-    This operators sums the input tensors. All the inputs can carry
-    the LoD (Level of Details) information. However, the output only
-    shares the LoD information with the first input.
+    ${comment}
 
     Args:
-        x (Variable): The input tensors of sum operator.
-        use_mkldnn (Bool): Only used in mkldnn kernel
+        x (Variable): ${x_comment}
+        use_mkldnn (Bool): ${use_mkldnn_comment}
 
     Returns:
-        out (Variable): Output of this operator
+        out (Variable): ${out_comment}
 
     """

@@ -6581,49 +6573,19 @@ def sum(x, use_mkldnn=False):
     return out
 
 
+@templatedoc()
 def slice(input, axes, starts, ends):
     """
-    Slice Operator.
-
-    Produces a slice of the input tensor along multiple axes. Similar to numpy:
-    https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
-    Slice uses `axes`, `starts` and `ends` attributes to specify the start and
-    end dimension for each axis in the list of axes, it uses this information
-    to slice the input data tensor. If a negative value is passed for any of
-    the start or end indices, it represents number of elements before the end
-    of that dimension. If the value passed to start or end is larger than
-    the n (the number of elements in this dimension), it represents n.
-    For slicing to the end of a dimension with unknown size, it is recommended
-    to pass in INT_MAX. If axes are omitted, they are set to [0, ..., ndim-1].
-    Following examples will explain how slice works:
-
-    .. code-block:: text
-
-        Cast1:
-            Given:
-                data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
-                axes = [0, 1]
-                starts = [1, 0]
-                ends = [2, 3]
-            Then:
-                result = [ [5, 6, 7], ]
-
-        Cast2:
-            Given:
-                data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
-                starts = [0, 1]
-                ends = [-1, 1000]
-            Then:
-                result = [ [2, 3, 4], ]
+    ${comment}
 
     Args:
-        input (Variable): Tensor of data to extract slices from.
-        axes (List): Axes that `starts` and `ends` apply to. It's optional._if not present, will be treated as [0, 1, ..., len(`starts`) - 1].
-        starts (List): Starting indices of corresponding axis in `axes`.
-        ends (List): Starting indices of corresponding axis in `axes`.
+        input (Variable): ${input_comment}.
+        axes (List): ${axes_comment}
+        starts (List): ${starts_comment}
+        ends (List): ${ends_comment}
 
     Returns:
-        out (Variable): The output of this operator.
+        out (Variable): ${output_comment}
 
     """
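Note: the worked examples deleted from the slice docstring above ("Cast1"/"Cast2" in the old text) map directly onto NumPy basic slicing, so the described semantics are easy to reproduce: a negative end counts back from the end of that axis, and an oversized end clamps to the axis length.

import numpy as np

data = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])

# Case 1 from the removed docstring: axes=[0, 1], starts=[1, 0], ends=[2, 3]
print(data[1:2, 0:3].tolist())      # [[5, 6, 7]]

# Case 2: axes omitted, starts=[0, 1], ends=[-1, 1000]
print(data[0:-1, 1:1000].tolist())  # [[2, 3, 4]]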

@@ -6640,16 +6602,16 @@ def slice(input, axes, starts, ends):
     return out
 
 
+@templatedoc()
 def shape(input):
     """
-    Shape Operator
-    Get the shape of input tensor. Only support CPU input Tensor now.
+    ${comment}
 
     Args:
-        input (Variable): The input tensor.
+        input (Variable): ${input_comment}
 
     Returns:
-        out (Variable): The output of this operator.
+        out (Variable): ${out_comment}
 
     """
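Since every docstring in this diff now relies on substitution, one quick spot check (assuming a paddle.fluid build from this era is importable) is to scan the rendered docs for unexpanded `${...}` markers; placeholder names that do not match the op proto (note the mix of `${out_comment}` and `${output_comment}`, and `${input_dim_idx}` vs `${input_dim_idx_comment}` above) would show up here.

import re

import paddle.fluid as fluid  # assumes a fluid build that contains these layers

for fn in (fluid.layers.gaussian_random, fluid.layers.sampling_id,
           fluid.layers.sum, fluid.layers.slice, fluid.layers.shape):
    leftover = re.findall(r"\$\{\w+\}", fn.__doc__ or "")
    print(fn.__name__, "ok" if not leftover else leftover)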
