Commit c791a51

API Compatiblity: modify compat softmax document (#74982)
1 parent a2f1f65 commit c791a51

File tree

1 file changed: +18 -18 lines changed

python/paddle/tensor/compat_softmax.py

Lines changed: 18 additions & 18 deletions
@@ -43,18 +43,18 @@ def softmax(
     r"""
     This operator implements the compat.softmax. The calculation process is as follows:
 
-    1. The dimension :attr:`axis` of ``x`` will be permuted to the last.
+    1. The dimension :attr:`dim` of ``input`` will be permuted to the last.
 
-    2. Then ``x`` will be logically flattened to a 2-D matrix. The matrix's second
-    dimension(row length) is the same as the dimension :attr:`axis` of ``x``,
+    2. Then ``input`` will be logically flattened to a 2-D matrix. The matrix's second
+    dimension(row length) is the same as the dimension :attr:`axis` of ``input``,
     and the first dimension(column length) is the product of all other dimensions
-    of ``x``. For each row of the matrix, the softmax operator squashes the
-    K-dimensional(K is the width of the matrix, which is also the size of ``x``'s
-    dimension :attr:`axis`) vector of arbitrary real values to a K-dimensional
+    of ``input``. For each row of the matrix, the softmax operator squashes the
+    K-dimensional(K is the width of the matrix, which is also the size of ``input``'s
+    dimension :attr:`dim`) vector of arbitrary real values to a K-dimensional
     vector of real values in the range [0, 1] that add up to 1.
 
     3. After the softmax operation is completed, the inverse operations of steps 1 and 2
-    are performed to restore the two-dimensional matrix to the same dimension as the ``x`` .
+    are performed to restore the two-dimensional matrix to the same dimension as the ``input`` .
 
     It computes the exponential of the given dimension and the sum of exponential
     values of all the other dimensions in the K-dimensional vector input.
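The three-step procedure described in the docstring above (permute the target dimension to the last, flatten to a 2-D matrix, apply a row-wise softmax, then undo the permutation) can be sketched in plain Python. This is a minimal illustration of the idea, not Paddle's implementation: `softmax_rows` and `softmax_2d` are hypothetical helper names, and the sketch handles only 2-D inputs.

```python
import math

def softmax_rows(mat):
    # Core of the procedure: row-wise softmax on a 2-D matrix,
    # stabilized by subtracting each row's maximum before exp().
    out = []
    for row in mat:
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def softmax_2d(mat, dim):
    # Step 1: permute `dim` to the last axis (for a 2-D input this is
    # a transpose when dim == 0). Step 2-3: row-wise softmax.
    # Step 4: undo the permutation.
    if dim in (-1, 1):
        return softmax_rows(mat)
    transposed = [list(col) for col in zip(*mat)]
    result = softmax_rows(transposed)
    return [list(col) for col in zip(*result)]

rows = softmax_2d([[1.0, 2.0], [3.0, 4.0]], dim=1)
print([round(sum(r), 6) for r in rows])  # each row sums to 1
```

For `dim=0` the transpose plays the role of the permutation step, so each column of the result sums to 1 instead of each row.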
@@ -66,24 +66,24 @@ def softmax(
 
     .. math::
 
-        softmax[i, j] = \frac{\exp(x[i, j])}{\sum_j(exp(x[i, j])}
+        softmax[i, j] = \frac{\exp(input[i, j])}{\sum_j(exp(input[i, j])}
 
     Example:
 
     .. code-block:: text
 
        Case 1:
          Input:
-           x.shape = [2, 3, 4]
-           x.data = [[[2.0, 3.0, 4.0, 5.0],
+           input.shape = [2, 3, 4]
+           input.data = [[[2.0, 3.0, 4.0, 5.0],
                       [3.0, 4.0, 5.0, 6.0],
                       [7.0, 8.0, 8.0, 9.0]],
                      [[1.0, 2.0, 3.0, 4.0],
                       [5.0, 6.0, 7.0, 8.0],
                       [6.0, 7.0, 8.0, 9.0]]]
 
          Attrs:
-           axis = -1
+           dim = -1
 
          Output:
            out.shape = [2, 3, 4]
@@ -96,15 +96,15 @@ def softmax(
 
        Case 2:
          Input:
-           x.shape = [2, 3, 4]
-           x.data = [[[2.0, 3.0, 4.0, 5.0],
+           input.shape = [2, 3, 4]
+           input.data = [[[2.0, 3.0, 4.0, 5.0],
                       [3.0, 4.0, 5.0, 6.0],
                       [7.0, 8.0, 8.0, 9.0]],
                      [[1.0, 2.0, 3.0, 4.0],
                       [5.0, 6.0, 7.0, 8.0],
                       [6.0, 7.0, 8.0, 9.0]]]
          Attrs:
-           axis = 1
+           dim = 1
 
          Output:
            out.shape = [2, 3, 4]
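As a sanity check on the renamed examples, the softmax of the first row of the Case 1 input under `dim = -1` can be computed directly from the documented formula. This is a standalone sketch; `softmax_1d` is a hypothetical helper name, not a Paddle API.

```python
import math

def softmax_1d(vec):
    # Softmax of a single K-dimensional vector, per the docstring formula:
    # out[j] = exp(vec[j]) / sum_j exp(vec[j]), stabilized by the max trick.
    m = max(vec)
    exps = [math.exp(v - m) for v in vec]
    s = sum(exps)
    return [e / s for e in exps]

# Case 1, first row of input, dim = -1:
print([round(v, 4) for v in softmax_1d([2.0, 3.0, 4.0, 5.0])])
# -> [0.0321, 0.0871, 0.2369, 0.6439]
```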
@@ -117,16 +117,16 @@ def softmax(
 
     Parameters:
         input (Tensor): The input Tensor with data type bfloat16, float16, float32, float64.
-        dim (int, optional): The axis along which to perform softmax
+        dim (int, optional): The dim along which to perform softmax
             calculations. It should be in range [-D, D), where D is the
-            rank of ``x`` . If ``axis`` < 0, it works the same way as
-            :math:`axis + D` . Default is None.
+            rank of ``input`` . If ``dim`` < 0, it works the same way as
+            :math:`dim + D` . Default is None.
         dtype (str, optional): The data type of the output tensor, can be bfloat16, float16, float32, float64.
         out (Tensor, optional): The output Tensor.
 
     Returns:
         A Tensor with the same shape and data type (use ``dtype`` if it is
-        specified) as x.
+        specified) as input.
 
     Examples:
         .. code-block:: python
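The documented rule that a negative ``dim`` behaves like :math:`dim + D` (with the valid range [-D, D)) can be illustrated with a small helper. This is a hypothetical sketch for exposition, not part of the Paddle API.

```python
def normalize_dim(dim, ndim):
    # Illustrates the docstring's rule: `dim` must lie in [-ndim, ndim),
    # and a negative `dim` is interpreted as `dim + ndim`.
    if not -ndim <= dim < ndim:
        raise ValueError(f"dim {dim} out of range [-{ndim}, {ndim})")
    return dim if dim >= 0 else dim + ndim

# For a rank-3 tensor, dim = -1 addresses the last axis:
print(normalize_dim(-1, 3))  # -> 2
```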
