Commit 686a3ad

author: ranqiu committed
Add api doc std
1 parent 0e1f82f commit 686a3ad

File tree: 2 files changed, +299 -0 lines changed

doc/fluid/dev/api_doc_std_cn.md

Lines changed: 219 additions & 0 deletions
# API Documentation Writing Standard

- [API Documentation Modules](#api-documentation-modules)
- [Format and Examples](#format-and-examples)
- [Complete Example](#complete-example)

## API Documentation Modules

An API document must contain the following modules, listed in the order they should be written:

- Python API Definition

  The code definition of the API.

- Function Description

  A description of what the API does: its meaning, purpose, or the operation it performs on its input, together with references and links where available. Give formulas when necessary and explain the meaning of the key variables in them.

- Args Description

  An introduction to the API's arguments. Describe them one by one, in the order they appear in the code definition, covering each argument's data type, default value (if any), and meaning.

- Returns

  An introduction to the return value, including its shape when necessary. If the return value is a tuple containing several elements, describe each element in order.

- Raises (if any)

  The exceptions or errors that may be raised and their likely causes. When several exceptions or errors are possible, list them as separate items.

- Note (if any)

  Points to note. When there are several, list them as separate items.

- Examples

  Usage examples of the API.
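Taken together, the modules above map directly onto a docstring skeleton. The following is a minimal sketch of a function documented in this style; the function `my_op` and its arguments are hypothetical and exist only to illustrate the module order:

```python
def my_op(x, scale=1.0, name=None):
    """
    **My Op**

    Multiplies the input by a scale factor.

    .. math::

        Out = scale * X

    Args:
        x (float): The input value.
        scale (float, default 1.0): The multiplier applied to x.
        name (str, default None): The name of this op.

    Returns:
        The scaled result.

    Raises:
        ValueError: If x is not a number.

    Examples:
        .. code-block:: python

            y = my_op(3.0, scale=2.0)
    """
    # The body is trivial here; the point is the docstring layout above.
    if not isinstance(x, (int, float)):
        raise ValueError("x must be a number")
    return scale * x
```

Note that the module order in the docstring matches the writing order prescribed by the list above.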
## Format and Examples

The format of each module, with examples, is given below (using fc as the running example):

- Python API Definition

  - Format:

    [Python API Definition]

  - Example

    ```
    fc(input,
       size,
       num_flatten_dims=1,
       param_attr=None,
       bias_attr=None,
       act=None,
       name=None,
       main_program=None,
       startup_program=None)
    ```
- Function Description

  - Format

    This module should contain the following parts, in writing order:

    [Function Description]

    [Formula]

    [Symbols' Descriptions if necessary]

    [References if necessary]

  - Example

    [Function Description]

    ```
    **Fully Connected Layer**

    The fully connected layer can take multiple tensors as its inputs. It
    creates a variable called weights for each input tensor, which represents
    a fully connected weight matrix from each input unit to each output unit.
    The fully connected layer multiplies each input tensor with its corresponding
    weight to produce an output Tensor. If multiple input tensors are given,
    the results of multiple multiplications will be summed up. If bias_attr is
    not None, a bias variable will be created and added to the output. Finally,
    if activation is not None, it will be applied to the output as well.
    ```

    [Formula]

    ```
    This process can be formulated as follows:

    .. math::

        Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})
    ```

    [Symbols' Descriptions if necessary]

    ```
    In the above equation:

    * :math:`N`: The number of input tensors.
    * :math:`X_i`: The i-th input tensor.
    * :math:`W_i`: The i-th weight matrix created by this layer.
    * :math:`b`: The bias parameter created by this layer (if needed).
    * :math:`Act`: The activation function.
    * :math:`Out`: The output tensor.
    ```

    [References if necessary]

    Since fc has no references that need to be listed, this part is omitted. In other cases the references and their links must be given explicitly, as in layer_norm:

    ```
    Refer to `Layer Normalization <https://arxiv.org/pdf/1607.06450v1.pdf>`_ for more details.
    ```
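The fc formula can be checked numerically with a few lines of plain Python. This is an illustrative sketch only (the helper name `fc_forward` and the shapes chosen are not part of the standard):

```python
def fc_forward(inputs, weights, bias, act=None):
    """Compute Out = Act(sum_i X_i . W_i + b) for 1-D inputs X_i."""
    out_dim = len(weights[0][0])
    out = [bias] * out_dim
    # Sum the contribution of each input tensor X_i times its weights W_i.
    for x, w in zip(inputs, weights):
        for j in range(out_dim):
            out[j] += sum(x[k] * w[k][j] for k in range(len(x)))
    return [act(v) for v in out] if act else out

# Two inputs of width 2, each mapped to a single output unit by all-ones
# weights: Out = (1 + 2) + (3 + 4) + 0.5 = 10.5 with no activation.
result = fc_forward(
    inputs=[[1.0, 2.0], [3.0, 4.0]],
    weights=[[[1.0], [1.0]], [[1.0], [1.0]]],
    bias=0.5,
)
```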
- Args Description

  - Format

    \[Arg's Name\]\[(Data Type, Default Value)\]\[Description\]

  - Example

    Part of fc's argument documentation:

    ```
    Args:
        input (Tensor): The input tensor(s) of the layer.
        param_attr (ParamAttr|list of ParamAttr, default None): The parameter attribute for learnable
            parameters/weights of this layer.
        name (str, default None): The name of this layer.
    ```
- Returns

  - Format

    [Name][Shape]

  - Example

    ```
    Returns:
        A tensor variable storing the transformation result.
    ```

    When the return value is a tuple containing several elements, describe each element in order, as in dynamic_lstm:

    ```
    Returns:
        A tuple containing:
            The hidden state of LSTM whose shape is (T X D).
            The cell state of LSTM whose shape is (T X D).
    ```
- Raises

  - Format

    [Exception Type][Condition]

  - Example

    ```
    Raises:
        ValueError: If the rank of the input is less than 2.
    ```
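A documented Raises entry should correspond to an explicit guard in the implementation. A hypothetical sketch of the rank check behind fc's ValueError (the helper name `check_input_rank` is illustrative, not Paddle API):

```python
def check_input_rank(shape):
    # fc needs at least a 2-D input so it can be flattened into a matrix;
    # this condition matches the documented "rank ... less than 2" ValueError.
    if len(shape) < 2:
        raise ValueError(
            "The rank of the input must be at least 2, got %d." % len(shape))
    return True
```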
- Note

  - Format

    [Note]

  - Example

    fc has no points to note, so this module is omitted. In other cases they must be given explicitly; when there are several, list them as separate items, as in scaled\_dot\_product\_attention:

    ```
    Note:
        1. When num_heads > 1, three linear projections are learned respectively
           to map input queries, keys and values into queries', keys' and values'.
           queries', keys' and values' have the same shapes as queries, keys
           and values.
        2. When num_heads == 1, scaled_dot_product_attention has no learnable
           parameters.
    ```
- Examples

  - Format

    \[Python Code Snippet\]

  - Example

    ```
    Examples:
        .. code-block:: python

            data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
            fc = fluid.layers.fc(input=data, size=1000, act="tanh")
    ```

## Complete Example

See the [example](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/fluid/dev/src/fc.py) for fc's complete documentation.

doc/fluid/dev/src/fc.py

Lines changed: 80 additions & 0 deletions
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


def fc(input,
       size,
       num_flatten_dims=1,
       param_attr=None,
       bias_attr=None,
       act=None,
       name=None):
    r"""
    **Fully Connected Layer**

    The fully connected layer can take multiple tensors as its inputs. It
    creates a variable called weights for each input tensor, which represents
    a fully connected weight matrix from each input unit to each output unit.
    The fully connected layer multiplies each input tensor with its corresponding
    weight to produce an output Tensor. If multiple input tensors are given,
    the results of multiple multiplications will be summed up. If bias_attr is
    not None, a bias variable will be created and added to the output. Finally,
    if activation is not None, it will be applied to the output as well.

    This process can be formulated as follows:

    .. math::

        Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})

    In the above equation:

    * :math:`N`: The number of input tensors.
    * :math:`X_i`: The i-th input tensor.
    * :math:`W_i`: The i-th weight matrix created by this layer.
    * :math:`b`: The bias parameter created by this layer (if needed).
    * :math:`Act`: The activation function.
    * :math:`Out`: The output tensor.

    Args:
        input (Tensor|list of Tensor): The input tensor(s) to this layer.
        size (int): The number of output units in the fully connected layer.
        num_flatten_dims (int, default 1): The fc layer can accept an input tensor with more than
            two dimensions. If this happens, the multidimensional tensor will first be flattened
            into a 2-dimensional matrix. The parameter `num_flatten_dims` determines how the input
            tensor is flattened: the first `num_flatten_dims` (inclusive, index starts from 1)
            dimensions will be flattened to form the first dimension of the final matrix (height of
            the matrix), and the rest `rank(X) - num_flatten_dims` dimensions are flattened to
            form the second dimension of the final matrix (width of the matrix). For example, suppose
            `X` is a 5-dimensional tensor with a shape [2, 3, 4, 5, 6], and `num_flatten_dims` = 3.
            Then, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
        param_attr (ParamAttr|list of ParamAttr, default None): The parameter attribute for learnable
            parameters/weights of this layer.
        bias_attr (ParamAttr|list of ParamAttr, default None): The parameter attribute for the bias
            parameter of this layer. If set to None, no bias will be added to the output units.
        act (str, default None): Activation to be applied to the output of this layer.
        name (str, default None): The name of this layer.

    Returns:
        A tensor variable storing the transformation result.

    Raises:
        ValueError: If the rank of the input tensor is less than 2.

    Examples:
        .. code-block:: python

            data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
            fc = fluid.layers.fc(input=data, size=1000, act="tanh")
    """
