Commit 982dabe

Merge pull request #11866 from panyx0718/move_func
Move some v2 codes to a legacy directory.
2 parents: 5056d3e + a9086bf

671 files changed (+527, -523 lines)


CMakeLists.txt (2 additions, 2 deletions)

@@ -196,7 +196,7 @@ include(inference_lib) # add paddle fluid inference libraries


 include_directories("${PADDLE_SOURCE_DIR}")
-include_directories("${PADDLE_SOURCE_DIR}/paddle/cuda/include")
+include_directories("${PADDLE_SOURCE_DIR}/paddle/legacy/cuda/include")
 include_directories("${CMAKE_CURRENT_BINARY_DIR}/proto")
 include_directories("${CMAKE_CURRENT_BINARY_DIR}/go/pserver/client/c")

@@ -240,7 +240,7 @@ add_subdirectory(proto)
 if(NOT MOBILE_INFERENCE AND NOT WITH_FLUID_ONLY)
   # "add_subdirectory(go)" should be placed after the following loine,
   # because it depends on paddle/optimizer.
-  add_subdirectory(paddle/optimizer)
+  add_subdirectory(paddle/legacy/optimizer)
 endif()

 # "add_subdirectory(paddle)" and "add_subdirectory(python)" should be

CONTRIBUTING.md (1 addition, 1 deletion)

@@ -159,4 +159,4 @@ This will enable VLOG messages generated by `buddy_allocator.{h,cc}` and in the
 - verbose level 1: [framework](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/framework)
 - verbose level 3: [operators](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/operators)
 - verbose level 5: [memory](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/memory), [platform](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/platform)
-- verbose level 7: [math](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/math)
+- verbose level 7: [math](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/legacy/math)

doc/v2/design/interface/00.why_plain_c.md (1 addition, 1 deletion)

@@ -65,7 +65,7 @@ paddle_error paddle_matrix_get_shape(paddle_matrix matrix,
 而在CPP里面实现这个C的接口,文件 `paddle_matrix.cpp`

 ```cpp
-#include "paddle/math/matrix.h"
+#include "paddle/legacy/math/matrix.h"
 extern "C"
 paddle_error paddle_matrix_shape(paddle_matrix matrix,
                                  uint64_t *width,
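
The hunk only shows the start of the wrapper. As a point of reference, the pattern it illustrates, a plain C function that unwraps an opaque handle onto a C++ matrix, looks roughly like the sketch below. This is illustrative only and uses stand-in types; the real handle layout, error codes, and the `paddle::Matrix` class live in Paddle's C-API and `paddle/legacy/math` sources.

```cpp
// Illustrative only: stand-in types to show the "C shell over a C++ object"
// pattern; the real code includes paddle/legacy/math/matrix.h and Paddle's
// own paddle_matrix / paddle_error definitions.
#include <cstdint>

typedef void* paddle_matrix;                                   // opaque handle (assumed)
typedef enum { kNoError = 0, kNullPointer = 1 } paddle_error;  // stand-in codes

struct MatrixStub {  // stand-in for the C++ matrix behind the handle
  uint64_t height;
  uint64_t width;
};

extern "C" paddle_error paddle_matrix_shape(paddle_matrix matrix,
                                            uint64_t* width,
                                            uint64_t* height) {
  if (matrix == nullptr || width == nullptr || height == nullptr) {
    return kNullPointer;
  }
  const auto* m = static_cast<const MatrixStub*>(matrix);
  *width = m->width;    // the real implementation would query Matrix::getWidth()
  *height = m->height;  // ... and Matrix::getHeight()
  return kNoError;
}
```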

doc/v2/dev/new_layer_cn.rst (5 additions, 5 deletions)

@@ -58,7 +58,7 @@ PaddlePaddle的base layer类可以自动计算上面的导数。
 实现C++类
 ===================

-一个网络层的C++类需要实现初始化,前向和后向。全连接层的实现位于:code:`paddle/gserver/layers/FullyConnectedLayer.h`及:code:`paddle/gserver/layers/FullyConnectedLayer.cpp`。这里我们展示一份简化过的代码。
+一个网络层的C++类需要实现初始化,前向和后向。全连接层的实现位于:code:`paddle/legacy/gserver/layers/FullyConnectedLayer.h`及:code:`paddle/legacy/gserver/layers/FullyConnectedLayer.cpp`。这里我们展示一份简化过的代码。

 这个类需要继承 :code:`paddle::Layer` 这个基类,并且需要重写基类中的以下几个虚函数:


@@ -153,7 +153,7 @@ PaddlePaddle的base layer类可以自动计算上面的导数。

 - 每个层在其 :code:`forward` 函数的开头必须调用 :code:`Layer::forward(passType);` 。
 - 之后使用 :code:`reserveOutput(batchSize, size);` 为输出分配内存。由于我们支持训练数据有不同的批次大小,所以这一步是必要的。 :code:`reserveOutput` 会相应地改变输出的尺寸。为了保证效率,如果需要扩大矩阵,我们会重新分配内存;如果需要缩减矩阵,我们会继续使用现有的内存块。
-- 之后使用矩阵运算函数来计算 :math:`\sum_i W_i x + b`。:code:`getInput(i).value` 返回第i个输入矩阵。每个输入都是一个 :math:`batchSize \times dim` 的矩阵,每行表示一个批次中的单个输入。对于我们支持的全部矩阵操作,请参考 :code:`paddle/math/Matrix.h`和:code:`paddle/math/BaseMatrix.h` 。
+- 之后使用矩阵运算函数来计算 :math:`\sum_i W_i x + b`。:code:`getInput(i).value` 返回第i个输入矩阵。每个输入都是一个 :math:`batchSize \times dim` 的矩阵,每行表示一个批次中的单个输入。对于我们支持的全部矩阵操作,请参考 :code:`paddle/legacy/math/Matrix.h`和:code:`paddle/legacy/math/BaseMatrix.h` 。
 - 最终,使用 :code:`forwardActivation();` 进行激活操作。这会自动进行网络配置中声明的激活操作。


@@ -262,15 +262,15 @@ PaddlePaddle的base layer类可以自动计算上面的导数。
 REGISTER_LAYER(fc, FullyConnectedLayer);
 }

-:code:`cpp` 被放在 :code:`paddle/gserver/layers` 目录下,其会自动被加入编译列表。
+:code:`cpp` 被放在 :code:`paddle/legacy/gserver/layers` 目录下,其会自动被加入编译列表。


 写梯度检查单元测试
 ===============================

 写梯度检查单元测试是一个验证新实现的层是否正确的相对简单的办法。梯度检查单元测试通过有限差分法来验证一个层的梯度。首先对输入做一个小的扰动 :math:`\Delta x` ,然后观察到输出的变化为 :math:`\Delta y` ,那么,梯度就可以通过这个方程计算得到 :math:`\frac{\Delta y}{\Delta x }` 。之后,再用这个梯度去和 :code:`backward` 函数得到的梯度去对比,以保证梯度计算的正确性。需要注意的是梯度检查仅仅验证了梯度的计算,并不保证 :code:`forward` 和 :code:`backward` 函数的实现是正确的。你需要一些更复杂的单元测试来保证你实现的网络层是正确的。

-所有网络层的梯度检查单测都位于 :code:`paddle/gserver/tests/test_LayerGrad.cpp` 。我们建议你在写新网络层时把测试代码放入新的文件中。下面列出了全连接层的梯度检查单元测试。它包含以下几步:
+所有网络层的梯度检查单测都位于 :code:`paddle/legacy/gserver/tests/test_LayerGrad.cpp` 。我们建议你在写新网络层时把测试代码放入新的文件中。下面列出了全连接层的梯度检查单元测试。它包含以下几步:

 + 生成网络层配置。网络层配置包含以下几项:
   - 偏置参数的大小。(例子中是4096)

@@ -322,7 +322,7 @@ PaddlePaddle的base layer类可以自动计算上面的导数。
 }
 }

-如果你要为了测试而增加新的文件,例如 :code:`paddle/gserver/tests/testFCGrad.cpp` ,你需要把该文件加入 :code:`paddle/gserver/tests/CMakeLists.txt` 中。下面给出了一个例子。当你执行命令 :code:`make tests` 时,所有的单测都会被执行一次。注意,有些层可能需要高精度来保证梯度检查单测正确执行。你需要在配置cmake时将 :code:`WITH_DOUBLE` 设置为 `ON` 。
+如果你要为了测试而增加新的文件,例如 :code:`paddle/legacy/gserver/tests/testFCGrad.cpp` ,你需要把该文件加入 :code:`paddle/legacy/gserver/tests/CMakeLists.txt` 中。下面给出了一个例子。当你执行命令 :code:`make tests` 时,所有的单测都会被执行一次。注意,有些层可能需要高精度来保证梯度检查单测正确执行。你需要在配置cmake时将 :code:`WITH_DOUBLE` 设置为 `ON` 。

 .. code-block:: bash


doc/v2/dev/new_layer_en.rst (5 additions, 5 deletions)

@@ -58,7 +58,7 @@ Finally we can use chain rule to calculate :math:`\frac{\partial z}{\partial x}`
 Implement C++ Class
 ===================

-The C++ class of the layer implements the initialization, forward, and backward part of the layer. The fully connected layer is at :code:`paddle/gserver/layers/FullyConnectedLayer.h` and :code:`paddle/gserver/layers/FullyConnectedLayer.cpp`. We list simplified version of the code below.
+The C++ class of the layer implements the initialization, forward, and backward part of the layer. The fully connected layer is at :code:`paddle/legacy/gserver/layers/FullyConnectedLayer.h` and :code:`paddle/legacy/gserver/layers/FullyConnectedLayer.cpp`. We list simplified version of the code below.

 It needs to derive the base class :code:`paddle::Layer`, and it needs to override the following functions:

@@ -154,7 +154,7 @@ The implementation of the forward part has the following steps.

 - Every layer must call :code:`Layer::forward(passType);` at the beginning of its :code:`forward` function.
 - Then it allocates memory for the output using :code:`reserveOutput(batchSize, size);`. This step is necessary because we support the batches to have different batch sizes. :code:`reserveOutput` will change the size of the output accordingly. For the sake of efficiency, we will allocate new memory if we want to expand the matrix, but we will reuse the existing memory block if we want to shrink the matrix.
-- Then it computes :math:`\sum_i W_i x + b` using Matrix operations. :code:`getInput(i).value` retrieve the matrix of the i-th input. Each input is a :math:`batchSize \times dim` matrix, where each row represents an single input in a batch. For a complete lists of supported matrix operations, please refer to :code:`paddle/math/Matrix.h` and :code:`paddle/math/BaseMatrix.h`.
+- Then it computes :math:`\sum_i W_i x + b` using Matrix operations. :code:`getInput(i).value` retrieve the matrix of the i-th input. Each input is a :math:`batchSize \times dim` matrix, where each row represents an single input in a batch. For a complete lists of supported matrix operations, please refer to :code:`paddle/legacy/math/Matrix.h` and :code:`paddle/legacy/math/BaseMatrix.h`.
 - Finally it applies the activation function using :code:`forwardActivation();`. It will automatically applies the corresponding activation function specifies in the network configuration.


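For readers skimming this hunk, the four steps described in the doc above map onto a :code:`forward` function roughly as follows. This is a rough sketch written from memory of the legacy layer interface, not the document's exact listing; the calls named in the hunk (:code:`Layer::forward`, :code:`reserveOutput`, :code:`getInput(i).value`, :code:`forwardActivation`) come from the text, while :code:`getOutputValue`, :code:`weights_`, :code:`biases_`, :code:`Matrix::mul`, and :code:`addBias` are assumed to exist in the legacy gserver/math headers.

```cpp
// Rough sketch only: a forward() following the four steps described above.
// Assumes the legacy interfaces (Layer, Weight, Matrix) from
// paddle/legacy/gserver and paddle/legacy/math; not a verbatim copy of
// FullyConnectedLayer.cpp.
void FullyConnectedLayer::forward(PassType passType) {
  Layer::forward(passType);                       // step 1: base-class bookkeeping

  size_t batchSize = getInput(0).getBatchSize();  // assumed accessor on Argument
  reserveOutput(batchSize, getSize());            // step 2: size the output matrix

  MatrixPtr outV = getOutputValue();
  for (size_t i = 0; i != inputLayers_.size(); ++i) {
    // step 3: accumulate input_i * W_i into the output (Matrix::mul assumed)
    outV->mul(*getInput(i).value, *weights_[i]->getW(), 1, i == 0 ? 0 : 1);
  }
  if (biases_) {
    outV->addBias(*(biases_->getW()), 1);         // add the bias to every row
  }

  forwardActivation();                            // step 4: configured activation
}
```
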
@@ -263,15 +263,15 @@ Finally, you can use :code:`REGISTER_LAYER(fc, FullyConnectedLayer);` to registe
 REGISTER_LAYER(fc, FullyConnectedLayer);
 }

-If the :code:`cpp` file is put into :code:`paddle/gserver/layers`, it will be automatically added to the compilation list.
+If the :code:`cpp` file is put into :code:`paddle/legacy/gserver/layers`, it will be automatically added to the compilation list.


 Write Gradient Check Unit Test
 ===============================

 An easy way to verify the correctness of new layer's implementation is to write a gradient check unit test. Gradient check unit test utilizes finite difference method to verify the gradient of a layer. It modifies the input with a small perturbation :math:`\Delta x` and observes the changes of output :math:`\Delta y`, the gradient can be computed as :math:`\frac{\Delta y}{\Delta x }`. This gradient can be compared with the gradient computed by the :code:`backward` function of the layer to ensure the correctness of the gradient computation. Notice that the gradient check only tests the correctness of the gradient computation, it does not necessarily guarantee the correctness of the implementation of the :code:`forward` and :code:`backward` function. You need to write more sophisticated unit tests to make sure your layer is implemented correctly.

-All the gradient check unit tests are located in :code:`paddle/gserver/tests/test_LayerGrad.cpp`. You are recommended to put your test into a new test file if you are planning to write a new layer. The gradient test of the gradient check unit test of the fully connected layer is listed below. It has the following steps.
+All the gradient check unit tests are located in :code:`paddle/legacy/gserver/tests/test_LayerGrad.cpp`. You are recommended to put your test into a new test file if you are planning to write a new layer. The gradient test of the gradient check unit test of the fully connected layer is listed below. It has the following steps.

 + Create layer configuration. A layer configuration can include the following attributes:
   - size of the bias parameter. (4096 in our example)

@@ -323,7 +323,7 @@ All the gradient check unit tests are located in :code:`paddle/gserver/tests/tes
 }
 }

-If you are creating a new file for the test, such as :code:`paddle/gserver/tests/testFCGrad.cpp`, you need to add the file to :code:`paddle/gserver/tests/CMakeLists.txt`. An example is given below. All the unit tests will run when you execute the command :code:`make tests`. Notice that some layers might need high accuracy for the gradient check unit tests to work well. You need to configure :code:`WITH_DOUBLE` to `ON` when configuring cmake.
+If you are creating a new file for the test, such as :code:`paddle/legacy/gserver/tests/testFCGrad.cpp`, you need to add the file to :code:`paddle/legacy/gserver/tests/CMakeLists.txt`. An example is given below. All the unit tests will run when you execute the command :code:`make tests`. Notice that some layers might need high accuracy for the gradient check unit tests to work well. You need to configure :code:`WITH_DOUBLE` to `ON` when configuring cmake.

 .. code-block:: bash

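The finite-difference idea behind these tests is easy to see in isolation. Below is a minimal, self-contained C++ sketch of a central-difference check against an analytic derivative; it is not PaddlePaddle code (the real tests drive whole layers through the helpers in test_LayerGrad.cpp), just the arithmetic the paragraph above describes.

```cpp
// Self-contained illustration of a gradient check: estimate df/dx numerically
// and compare it with the analytic derivative, the same comparison the layer
// gradient tests perform element-wise on layer inputs and parameters.
#include <cmath>
#include <cstdio>
#include <functional>

double numericGrad(const std::function<double(double)>& f, double x,
                   double eps = 1e-6) {
  // Central difference: (f(x + eps) - f(x - eps)) / (2 * eps)
  return (f(x + eps) - f(x - eps)) / (2.0 * eps);
}

int main() {
  auto f = [](double x) { return x * x * x; };           // toy "forward": f(x) = x^3
  auto analytic = [](double x) { return 3.0 * x * x; };  // toy "backward": f'(x) = 3x^2
  const double x = 2.0;
  const double num = numericGrad(f, x);
  const double ref = analytic(x);
  const double relErr = std::fabs(num - ref) / std::fabs(ref);
  std::printf("numeric=%.8f analytic=%.8f rel_err=%.2e\n", num, ref, relErr);
  return relErr < 1e-5 ? 0 : 1;  // pass if within tolerance
}
```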

doc/v2/faq/parameter/index_cn.rst (1 addition, 1 deletion)

@@ -196,6 +196,6 @@ PaddlePaddle保存的模型参数文件内容由16字节头信息和网络参数
     obj="process",
     args={"src_dict_path": src_dict_path})

-完整源码可参考 `sequence_recurrent <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequence_recurrent.py>`_ 示例。
+完整源码可参考 `sequence_recurrent <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_recurrent.py>`_ 示例。

doc/v2/howto/optimization/gpu_profiling_cn.rst (9 additions, 9 deletions)

@@ -50,12 +50,12 @@ GPU则还需要高并行性,才能发挥其全部能力。这正是它们速
 **nvprof** 是Nvidia性能分析工具, **nvvp** 则是带GUI的Nvidia可视化性能分析工具。
 在这个教程中,我们主要会介绍nvprof和nvvp。

-:code:`test_GpuProfiler` from :code:`paddle/math/tests` directory will be used to evaluate
+:code:`test_GpuProfiler` from :code:`paddle/legacy/math/tests` directory will be used to evaluate
 above profilers.

-:code:`paddle/math/test` 目录中的 :code:`test_GpuProfiler` 就是用于展示上述分析工具的用法。
+:code:`paddle/legacy/math/test` 目录中的 :code:`test_GpuProfiler` 就是用于展示上述分析工具的用法。

-.. literalinclude:: ../../../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 137-151
    :linenos:

@@ -83,7 +83,7 @@ program crashes when CPU version of PaddlePaddle invokes them.

 1. 加入 :code:`REGISTER_TIMER_INFO` 和 :code:`printAllStatus` 函数(如高亮部分)。

-.. literalinclude:: ../../../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 137-151
    :emphasize-lines: 8-12,14

@@ -101,8 +101,8 @@ program crashes when CPU version of PaddlePaddle invokes them.
 .. code-block:: bash
    :emphasize-lines: 1,12-15

-   > ./paddle/math/tests/test_GpuProfiler
-   I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/math/tests/test_GpuProfiler
+   > ./paddle/legacy/math/tests/test_GpuProfiler
+   I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/legacy/math/tests/test_GpuProfiler
    I1117 11:13:42.845065 2522362816 Util.cpp:130] Calling runInitFunctions
    I1117 11:13:42.845208 2522362816 Util.cpp:143] Call runInitFunctions done.
    [==========] Running 1 test from 1 test case.

@@ -130,7 +130,7 @@ nvprof 工具

 1. 将 :code:`REGISTER_GPU_PROFILER` 函数加到代码中(参考强调部分)。

-.. literalinclude:: ../../../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 137-151
    :emphasize-lines: 6-7

@@ -147,13 +147,13 @@ nvprof 工具

 .. code-block:: bash

-   nvprof ./paddle/math/tests/test_GpuProfiler
+   nvprof ./paddle/legacy/math/tests/test_GpuProfiler

 然后,您就能获得如下的分析结果:

 .. code-block:: bash

-   ==78544== Profiling application: ./paddle/math/tests/test_GpuProfiler
+   ==78544== Profiling application: ./paddle/legacy/math/tests/test_GpuProfiler
    ==78544== Profiling result:
    Time(%)      Time     Calls       Avg       Min       Max  Name
    27.60%  9.6305ms         5  1.9261ms  3.4560us  6.4035ms  [CUDA memcpy HtoD]

doc/v2/howto/optimization/gpu_profiling_en.rst (8 additions, 8 deletions)

@@ -51,10 +51,10 @@ For general GPU profiling, a bunch of tools are provided from both NVIDIA and th
 **nvprof** is Nvidia profiler and **nvvp** is (GUI based) Nvidia visual profiler.
 In this tutorial, we will focus on nvprof and nvvp.

-:code:`test_GpuProfiler` from :code:`paddle/math/tests` directory will be used to evaluate
+:code:`test_GpuProfiler` from :code:`paddle/legacy/math/tests` directory will be used to evaluate
 above profilers.

-.. literalinclude:: ../../../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 137-151
    :linenos:

@@ -80,7 +80,7 @@ As a simple example, consider the following:

 1. Add :code:`REGISTER_TIMER_INFO` and :code:`printAllStatus` functions (see the emphasize-lines).

-.. literalinclude:: ../../../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 137-151
    :emphasize-lines: 8-12,14

@@ -98,8 +98,8 @@ As a simple example, consider the following:
 .. code-block:: bash
    :emphasize-lines: 1,12-15

-   > ./paddle/math/tests/test_GpuProfiler
-   I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/math/tests/test_GpuProfiler
+   > ./paddle/legacy/math/tests/test_GpuProfiler
+   I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/legacy/math/tests/test_GpuProfiler
    I1117 11:13:42.845065 2522362816 Util.cpp:130] Calling runInitFunctions
    I1117 11:13:42.845208 2522362816 Util.cpp:143] Call runInitFunctions done.
    [==========] Running 1 test from 1 test case.

@@ -127,7 +127,7 @@ To use this command line profiler **nvprof**, you can simply issue the following

 1. Add :code:`REGISTER_GPU_PROFILER` function (see the emphasize-lines).

-.. literalinclude:: ../../../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 137-151
    :emphasize-lines: 6-7

@@ -144,13 +144,13 @@ To use this command line profiler **nvprof**, you can simply issue the following

 .. code-block:: bash

-   nvprof ./paddle/math/tests/test_GpuProfiler
+   nvprof ./paddle/legacy/math/tests/test_GpuProfiler

 Then, you can get the following profiling result:

 .. code-block:: bash

-   ==78544== Profiling application: ./paddle/math/tests/test_GpuProfiler
+   ==78544== Profiling application: ./paddle/legacy/math/tests/test_GpuProfiler
    ==78544== Profiling result:
    Time(%)      Time     Calls       Avg       Min       Max  Name
    27.60%  9.6305ms         5  1.9261ms  3.4560us  6.4035ms  [CUDA memcpy HtoD]
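
As a rough analogue of what the :code:`REGISTER_TIMER_INFO` step does conceptually, the sketch below wraps a code region in an RAII timer and reports its cost on scope exit. It is deliberately generic C++ and not the PaddlePaddle macro (whose exact signature is not shown in this diff); the region name and the workload are hypothetical.

```cpp
// Generic RAII timer, illustrating the "time a region, report at the end" idea
// the profiling tutorial describes. Not PaddlePaddle's REGISTER_TIMER_INFO.
#include <chrono>
#include <cstdio>
#include <string>

class ScopedTimer {
 public:
  explicit ScopedTimer(std::string name)
      : name_(std::move(name)), start_(std::chrono::steady_clock::now()) {}
  ~ScopedTimer() {
    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                        std::chrono::steady_clock::now() - start_)
                        .count();
    std::printf("[timer] %s: %lld us\n", name_.c_str(),
                static_cast<long long>(us));
  }

 private:
  std::string name_;
  std::chrono::steady_clock::time_point start_;
};

int main() {
  {
    ScopedTimer timer("forwardBackward");  // hypothetical region name
    volatile double acc = 0.0;
    for (int i = 0; i < 1000000; ++i) acc += i * 0.5;  // stand-in workload
  }
  return 0;
}
```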
