Commit 6f6d552

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_tensorrt_conv2d_converter

2 parents 990741a + 2409d0f
File tree: 98 files changed, +1707 −978 lines


AUTHORS.md

Lines changed: 1 addition & 0 deletions

@@ -46,6 +46,7 @@
 | tianbingsz | Tian-Bing Xu |
 | tpatejko | Tomasz Patejko |
 | typhoonzero | Yi Wu |
+| velconia | Qi-Yang Min |
 | wanghaoshuang | Hao-Shuang Wang |
 | wangyang59 | Yang Wang |
 | wangzhen-nlp | Zhen Wang |

doc/fluid/design/ir/draft.md

Lines changed: 24 additions & 24 deletions

@@ -1,16 +1,16 @@
 ## Motivation
 
-There is a ```gap``` between the ```Program``` defined by
-user and the ```Executable``` that can be scheduled
+There is a `gap` between the `Program` defined by
+user and the `Executable` that can be scheduled
 efficiently on heterogeneous hardware, either locally
 or distributedly.
 
-Usually, the ```gap``` is bridged by
+Usually, the `gap` is bridged by
 
 * A serious transformations with defined order.
 
 * These transformations usually involve
-```insert, delete, clustering, split, dependency analysis```.
+`insert, delete, clustering, split, dependency analysis`.
 
 * Has a simple way to verify and debug each transformation.
 
@@ -38,44 +38,44 @@ design below.
 
 #### Node
 
-```Node``` represents an operation that performs some computation or
+`Node` represents an operation that performs some computation or
 a variable that is input or output of operation.
 
-```Node```s are connected to other ```Node```s via inputs and outputs.
+`Node`s are connected to other `Node`s via inputs and outputs.
 
 Other properties (maybe device placement information) can be added
-to ```Node``` in the future if it's a
-common requirement of many other ```Pass```es. Otherwise, it should live
-in a ```Node``` wrapper class that is private to some ```Pass``` or be
-a local member of a ```Pass```.
+to `Node` in the future if it's a
+common requirement of many other `Pass`es. Otherwise, it should live
+in a `Node` wrapper class that is private to some `Pass` or be
+a local member of a `Pass`.
 
 #### Graph
 
-```Graph``` contains a list of ```Node```s, which are connected to
+`Graph` contains a list of `Node`s, which are connected to
 each other via inputs and outputs.
 
 TODO: Better definitions for the graph.
 
-```Graph``` can also contain ```Attribute```s. ```Attribute```s
-can be ``any`` thing. For example, it can be a list of "wraper"
-nodes. The ```wrapper``` nodes compose ```Node```s and provide
-helper method for execution or transformation. ```Attribute```
+`Graph` can also contain `Attribute`s. `Attribute`s
+can be `any` thing. For example, it can be a list of "wraper"
+nodes. The `wrapper` nodes compose `Node`s and provide
+helper method for execution or transformation. `Attribute`
 can also contain other things that describe some properties of
-the ```Graph``` or ```Graph``` nodes. ```Attribute``` can be passed
-across ```Pass```. However, it should be used with care.
+the `Graph` or `Graph` nodes. `Attribute` can be passed
+across `Pass`. However, it should be used with care.
 
 #### Pass
 
-```Pass``` represents a transformation of ```Graph```. Its input
-is a ```Graph``` and its output is also a ```Graph```. For example,
-a ```Pass``` can simply print out the ```Graph```. A ```Pass```
-can also fuse some ```Graph```'s ```Node```s.
+`Pass` represents a transformation of `Graph`. Its input
+is a `Graph` and its output is also a `Graph`. For example,
+a `Pass` can simply print out the `Graph`. A `Pass`
+can also fuse some `Graph`'s `Node`s.
 
 #### Optimize
 
-```Optimize``` contains a series of ```Pass``` with defined order.
-```Optimize``` transforms a ```Graph``` that only contains raw
-modeling logic to a ```Graph``` that can be run efficiently while
+`Optimize` contains a series of `Pass` with defined order.
+`Optimize` transforms a `Graph` that only contains raw
+modeling logic to a `Graph` that can be run efficiently while
 maintaining the original modeling logic.
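The Node/Graph/Pass/Optimize vocabulary in draft.md can be illustrated with a few lines of Python. This is only a toy sketch of the design, not Paddle's actual IR (the real implementation is C++); all class and function names below are illustrative.

```python
# Toy sketch of the IR design in draft.md: a Pass takes a Graph
# and returns a Graph; Optimize runs Passes in a defined order.

class Node:
    """An operation, or a variable that is an operation's input/output."""
    def __init__(self, name, inputs=None, outputs=None):
        self.name = name
        self.inputs = inputs or []    # upstream Nodes
        self.outputs = outputs or []  # downstream Nodes

class Graph:
    """Holds Nodes plus free-form Attributes that Passes may share."""
    def __init__(self, nodes=None):
        self.nodes = nodes or []
        self.attrs = {}  # Attributes: use with care when passed across Passes

class PrintPass:
    """A trivial Pass: prints the Graph and returns it unchanged."""
    def apply(self, graph):
        for node in graph.nodes:
            print(node.name)
        return graph

def optimize(graph, passes):
    """Optimize: apply a series of Passes with a defined order."""
    for p in passes:
        graph = p.apply(graph)
    return graph
```

Because every Pass has the same Graph-in/Graph-out shape, each transformation can be verified and debugged in isolation, which is the property the Motivation section asks for.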
paddle/fluid/API.spec

Lines changed: 1 addition & 13 deletions

@@ -35,8 +35,7 @@ paddle.fluid.program_guard ArgSpec(args=[], varargs='args', keywords='kwds', def
 paddle.fluid.get_var ArgSpec(args=['name', 'program'], varargs=None, keywords=None, defaults=(None,))
 paddle.fluid.Executor.__init__ ArgSpec(args=['self', 'place'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.Executor.as_lodtensor ArgSpec(args=['self', 'data'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.Executor.begin_pass ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.Executor.end_pass ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
+paddle.fluid.Executor.close ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.Executor.run ArgSpec(args=['self', 'program', 'feed', 'fetch_list', 'feed_var_name', 'fetch_var_name', 'scope', 'return_numpy', 'use_program_cache'], varargs=None, keywords=None, defaults=(None, None, None, 'feed', 'fetch', None, True, False))
 paddle.fluid.global_scope ArgSpec(args=[], varargs=None, keywords=None, defaults=None)
 paddle.fluid.scope_guard ArgSpec(args=[], varargs='args', keywords='kwds', defaults=None)
@@ -200,31 +199,23 @@ paddle.fluid.layers.argsort ArgSpec(args=['input', 'axis', 'name'], varargs=None
 paddle.fluid.layers.ones ArgSpec(args=['shape', 'dtype', 'force_cpu'], varargs=None, keywords=None, defaults=(False,))
 paddle.fluid.layers.zeros ArgSpec(args=['shape', 'dtype', 'force_cpu'], varargs=None, keywords=None, defaults=(False,))
 paddle.fluid.layers.reverse ArgSpec(args=['x', 'axis'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.layers.split_lod_tensor ArgSpec(args=['input', 'mask', 'level'], varargs=None, keywords=None, defaults=(0,))
-paddle.fluid.layers.merge_lod_tensor ArgSpec(args=['in_true', 'in_false', 'x', 'mask', 'level'], varargs=None, keywords=None, defaults=(0,))
 paddle.fluid.layers.While.__init__ ArgSpec(args=['self', 'cond', 'name'], varargs=None, keywords=None, defaults=(None,))
 paddle.fluid.layers.While.block ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.While.complete ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.Switch.__init__ ArgSpec(args=['self', 'name'], varargs=None, keywords=None, defaults=(None,))
 paddle.fluid.layers.Switch.case ArgSpec(args=['self', 'condition'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.Switch.default ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.layers.lod_rank_table ArgSpec(args=['x', 'level'], varargs=None, keywords=None, defaults=(0,))
-paddle.fluid.layers.max_sequence_len ArgSpec(args=['rank_table'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.layers.lod_tensor_to_array ArgSpec(args=['x', 'table'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.layers.array_to_lod_tensor ArgSpec(args=['x', 'table'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.increment ArgSpec(args=['x', 'value', 'in_place'], varargs=None, keywords=None, defaults=(1.0, True))
 paddle.fluid.layers.array_write ArgSpec(args=['x', 'i', 'array'], varargs=None, keywords=None, defaults=(None,))
 paddle.fluid.layers.create_array ArgSpec(args=['dtype'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.less_than ArgSpec(args=['x', 'y', 'force_cpu', 'cond'], varargs=None, keywords='ignored', defaults=(None, None))
 paddle.fluid.layers.equal ArgSpec(args=['x', 'y', 'cond'], varargs=None, keywords='ignored', defaults=(None,))
 paddle.fluid.layers.array_read ArgSpec(args=['array', 'i'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.layers.shrink_memory ArgSpec(args=['x', 'i', 'table'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.array_length ArgSpec(args=['array'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.IfElse.__init__ ArgSpec(args=['self', 'cond', 'name'], varargs=None, keywords=None, defaults=(None,))
 paddle.fluid.layers.IfElse.false_block ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.IfElse.input ArgSpec(args=['self', 'x'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.IfElse.output ArgSpec(args=['self'], varargs='outs', keywords=None, defaults=None)
-paddle.fluid.layers.IfElse.parent_block ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.IfElse.true_block ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.DynamicRNN.__init__ ArgSpec(args=['self', 'name'], varargs=None, keywords=None, defaults=(None,))
 paddle.fluid.layers.DynamicRNN.block ArgSpec(args=[], varargs='args', keywords='kwds', defaults=None)
@@ -233,9 +224,6 @@ paddle.fluid.layers.DynamicRNN.output ArgSpec(args=['self'], varargs='outputs',
 paddle.fluid.layers.DynamicRNN.static_input ArgSpec(args=['self', 'x'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.DynamicRNN.step_input ArgSpec(args=['self', 'x'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.DynamicRNN.update_memory ArgSpec(args=['self', 'ex_mem', 'new_mem'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.layers.ConditionalBlock.__init__ ArgSpec(args=['self', 'inputs', 'is_scalar_condition', 'name'], varargs=None, keywords=None, defaults=(False, None))
-paddle.fluid.layers.ConditionalBlock.block ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
-paddle.fluid.layers.ConditionalBlock.complete ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.StaticRNN.__init__ ArgSpec(args=['self', 'name'], varargs=None, keywords=None, defaults=(None,))
 paddle.fluid.layers.StaticRNN.complete_op ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None)
 paddle.fluid.layers.StaticRNN.memory ArgSpec(args=['self', 'init', 'shape', 'batch_ref', 'init_value', 'init_batch_dim_idx', 'ref_batch_dim_idx'], varargs=None, keywords=None, defaults=(None, None, None, 0.0, 0, 1))
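API.spec pins the recorded ArgSpec of each public symbol, so an API change (such as `begin_pass`/`end_pass` being replaced by `close` above) surfaces as a reviewable diff line. A minimal sketch of how such spec lines can be compared follows; `parse_spec_line` and `diff_specs` are hypothetical helpers for illustration, not Paddle's actual spec-checking tooling.

```python
import re

# Each API.spec line has the shape: "<dotted.api.name> ArgSpec(...)".
SPEC_RE = re.compile(r"^(?P<api>\S+)\s+(?P<argspec>ArgSpec\(.*\))$")

def parse_spec_line(line):
    """Split 'paddle.fluid.X ArgSpec(...)' into (api_name, argspec_text)."""
    m = SPEC_RE.match(line.strip())
    if not m:
        raise ValueError("not a spec line: %r" % line)
    return m.group("api"), m.group("argspec")

def diff_specs(old_lines, new_lines):
    """Return (removed, added, changed) API names between two spec files."""
    old = dict(parse_spec_line(l) for l in old_lines)
    new = dict(parse_spec_line(l) for l in new_lines)
    removed = sorted(set(old) - set(new))
    added = sorted(set(new) - set(old))
    # Same name but a different ArgSpec means the signature changed.
    changed = sorted(k for k in set(old) & set(new) if old[k] != new[k])
    return removed, added, changed
```

Run against the Executor lines above, such a comparison would report `begin_pass` and `end_pass` as removed and `close` as added.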

paddle/fluid/framework/CMakeLists.txt

Lines changed: 6 additions & 1 deletion

@@ -22,7 +22,12 @@ endif()
 
 cc_test(eigen_test SRCS eigen_test.cc DEPS tensor)
 
-nv_test(mixed_vector_test SRCS mixed_vector_test.cu DEPS place memory device_context tensor)
+if(WITH_GPU)
+  nv_test(mixed_vector_test SRCS mixed_vector_test.cc mixed_vector_test.cu DEPS place memory device_context tensor)
+else()
+  cc_test(mixed_vector_test SRCS mixed_vector_test.cc DEPS place memory device_context tensor)
+endif()
+
 cc_library(lod_tensor SRCS lod_tensor.cc DEPS ddim place tensor framework_proto recordio)
 cc_test(lod_tensor_test SRCS lod_tensor_test.cc DEPS lod_tensor memory)
 nv_test(lod_tensor_gpu_test SRCS lod_tensor_test.cu DEPS lod_tensor)

paddle/fluid/framework/block_desc.h

Lines changed: 2 additions & 3 deletions

@@ -88,9 +88,8 @@ class BlockDesc {
   OpDesc *InsertOp(size_t index);
 
   /*
-   * Remove Op and its input/output variables.
-   * Note that for either input or output variable, if it is also an input or
-   * output variable of other ops, we should remain it.
+   * Only remove op itself,
+   * do nothing to its input and output variables
   */
   void RemoveOp(size_t s, size_t e);
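The rewritten comment narrows the documented contract: `RemoveOp` now only erases the ops in the given range and leaves their input/output variables in the block. A toy Python model of that contract follows; `Block` here is a hypothetical stand-in for illustration, not the C++ `BlockDesc`.

```python
# Toy model of the documented RemoveOp contract: ops in [s, e) are
# erased, and the block's variables are deliberately left untouched.

class Block:
    def __init__(self, ops, variables):
        self.ops = list(ops)         # op names, in program order
        self.vars = set(variables)   # variable names owned by the block

    def remove_op(self, s, e):
        """Only remove the ops themselves; do nothing to their variables."""
        del self.ops[s:e]
```

Leaving variables alone keeps the operation cheap and side-effect-free; a caller that wants dead variables gone has to clean them up separately.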

paddle/fluid/framework/details/CMakeLists.txt

Lines changed: 2 additions & 2 deletions

@@ -1,11 +1,11 @@
-cc_library(var_handle SRCS var_handle.cc DEPS place framework_proto)
+cc_library(var_handle SRCS var_handle.cc DEPS place framework_proto node)
 cc_library(op_handle_base SRCS op_handle_base.cc DEPS var_handle device_context lod_tensor)
 cc_library(scale_loss_grad_op_handle SRCS scale_loss_grad_op_handle.cc DEPS op_handle_base scope lod_tensor ddim memory)
 cc_library(fetch_op_handle SRCS fetch_op_handle.cc DEPS op_handle_base scope lod_tensor ddim memory)
 cc_library(computation_op_handle SRCS computation_op_handle.cc DEPS framework_proto scope place operator op_registry)
 cc_library(rpc_op_handle SRCS rpc_op_handle.cc DEPS framework_proto scope place operator op_registry)
 
-cc_library(ssa_graph_builder SRCS ssa_graph_builder.cc DEPS graph)
+cc_library(ssa_graph_builder SRCS ssa_graph_builder.cc DEPS graph graph_helper)
 cc_library(ssa_graph_printer SRCS ssa_graph_printer.cc DEPS ssa_graph_builder)
 cc_library(ssa_graph_checker SRCS ssa_graph_checker.cc DEPS ssa_graph_builder)
