# Release v0.11.0

## Fluid Python API

- Release 0.11.0 includes a new feature, *PaddlePaddle Fluid*. Fluid is
  designed to let users program in the style of PyTorch and TensorFlow Eager Execution.
  In these systems, there is no longer the concept of a *model*: applications
  do not include a symbolic description of a graph of operators or a sequence
  of layers. Instead, an application looks exactly like an ordinary program that
  describes a process of training or inference. The difference between
  Fluid and PyTorch or Eager Execution is that Fluid does not rely on Python's
  control-flow constructs such as `if-then-else` and `for`. Instead, Fluid provides its
  own C++ implementations of control flow, exposed to Python through the `with` statement. For example:

  https://github.com/PaddlePaddle/Paddle/blob/3df78ed2a98d37f7ae6725894cc7514effd5664b/python/paddle/v2/fluid/tests/test_while_op.py#L36-L44

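The linked test shows Fluid's `While` construct in use. As a rough, self-contained sketch of the underlying pattern only (the `Program` and `While` names below are invented for illustration and are not Fluid's actual API), a `with` statement can record operators into a program description instead of executing Python control flow directly:

```python
# Toy illustration of the Fluid-style pattern: a `with` block records
# operators into a sub-block of a program description rather than
# running them. All names here are hypothetical, not Paddle's API.

class Program:
    def __init__(self):
        self.ops = []

class While:
    def __init__(self, program, cond):
        self.program = program
        self.cond = cond  # name of a boolean variable in the program

    def __enter__(self):
        # Open a sub-block; ops appended inside the `with` form the body.
        self.body = []
        self.program.ops.append(("while", self.cond, self.body))
        self._saved, self.program.ops = self.program.ops, self.body
        return self

    def __exit__(self, *exc):
        # Close the sub-block and restore the outer block.
        self.program.ops = self._saved
        return False

prog = Program()
prog.ops.append(("less_than", "i", "limit", "cond"))
with While(prog, "cond"):
    prog.ops.append(("increment", "i"))
    prog.ops.append(("less_than", "i", "limit", "cond"))

# The outer program now holds one comparison op and one `while` op
# whose body contains the two ops recorded inside the `with` block.
```

The point is that the `with` statement delimits the loop body at program-description time; a separate runtime (Fluid's C++ side) later decides how many times the body actually executes.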
- In 0.11.0, we provide a C++ class `Executor` to run a Fluid program.
`Executor` works like an interpreter. In future versions, we will improve
`Executor` into a debugger like GDB, and we might provide compilers
that, for example, take an application like the one above and output
an equivalent C++ source program, which can be compiled using
[`nvcc`](http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html)
to generate binaries that use CUDA, or using
[`icc`](https://software.intel.com/en-us/c-compilers) to generate binaries
that make full use of Intel CPUs.

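An executor in this interpreter-like sense simply walks the recorded program and dispatches each operator in turn. A minimal sketch of the idea in Python (the op set and `run` function are invented for illustration; Paddle's real `Executor` is a C++ class with a very different interface):

```python
# Minimal interpreter-style executor: walk a list of (op, args) tuples
# and update a scope of named variables. The op set is invented for
# illustration only.

def run(program, scope):
    for op in program:
        name, args = op[0], op[1:]
        if name == "fill":           # fill(var, value)
            scope[args[0]] = args[1]
        elif name == "add":          # add(out, a, b)
            scope[args[0]] = scope[args[1]] + scope[args[2]]
        elif name == "mul":          # mul(out, a, b)
            scope[args[0]] = scope[args[1]] * scope[args[2]]
        else:
            raise ValueError("unknown op: %s" % name)
    return scope

scope = run([
    ("fill", "x", 3),
    ("fill", "y", 4),
    ("mul", "z", "x", "y"),
    ("add", "w", "z", "x"),
], {})
# scope["w"] is 3 * 4 + 3 = 15
```

A compiler, by contrast, would translate the same op list into source code once, ahead of time, instead of dispatching op by op at runtime.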
## New Features

* Release the `Fluid` API.
* Add a C-API for model inference.
* Use the Fluid API to create a simple GAN demo.
* Add a development guide about performance tuning.
* Add retries when downloading `paddle.v2.dataset`.
* Link against protobuf-lite instead of protobuf in C++ to reduce the binary size.
* Release the [Elastic Deep Learning (EDL)](https://github.com/PaddlePaddle/cloud/tree/develop/doc/autoscale/experiment) feature.
* New-style CMake functions for Paddle, based on the Bazel API.
* Automatically download and compile the Intel® [MKLML](https://github.com/01org/mkl-dnn/releases/download/v0.11/mklml_lnx_2018.0.1.20171007.tgz) library as CBLAS when building with `WITH_MKL=ON`.
* [Intel® MKL-DNN on PaddlePaddle](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/design/mkldnn):
  - Complete 11 MKL-DNN layers: Convolution, Fully connected, Pooling, ReLU, Tanh, ELU, Softmax, BatchNorm, AddTo, Concat, LRN.
  - Complete 3 MKL-DNN networks: VGG-19, ResNet-50, GoogLeNet.
  - [Benchmark](https://github.com/PaddlePaddle/Paddle/blob/develop/benchmark/IntelOptimizedPaddle.md) on an Intel Skylake 6148 CPU: 2-3x training speedup compared with MKLML.
* Add the [`softsign` activation](http://www.paddlepaddle.org/docs/develop/documentation/zh/api/v2/config/activation.html#softsign).
* Add the [dot product layer](http://www.paddlepaddle.org/docs/develop/documentation/zh/api/v2/config/layer.html#dot-prod).
* Add the [L2 distance layer](http://www.paddlepaddle.org/docs/develop/documentation/zh/api/v2/config/layer.html#l2-distance).
* Add the [sub-nested sequence layer](http://www.paddlepaddle.org/docs/develop/documentation/zh/api/v2/config/layer.html#sub-nested-seq).
* Add the [kmax sequence score layer](http://www.paddlepaddle.org/docs/develop/documentation/zh/api/v2/config/layer.html#kmax-sequence-score).
* Add the [sequence slice layer](http://www.paddlepaddle.org/docs/develop/documentation/zh/api/v2/config/layer.html#seq-slice).
* Add the [row convolution layer](http://www.paddlepaddle.org/docs/develop/documentation/zh/api/v2/config/layer.html#row-conv).
* Add mobile-friendly webpages.

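For reference, the `softsign` activation in the list above is the standard function softsign(x) = x / (1 + |x|), a bounded activation similar in shape to tanh:

```python
# softsign(x) = x / (1 + |x|): smooth, bounded in (-1, 1), and roughly
# linear near zero, with slower (polynomial) saturation than tanh.

def softsign(x):
    return x / (1.0 + abs(x))

print(softsign(0.0))   # 0.0
print(softsign(1.0))   # 0.5
print(softsign(-3.0))  # -0.75
```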
## Improvements

* Build and install using a single `whl` package.
* [Custom evaluation metrics in the V2 API](https://github.com/PaddlePaddle/models/tree/develop/ltr#训练过程中输出自定义评估指标).
* Change `PADDLE_ONLY_CPU` to `PADDLE_WITH_GPU`, since we will support many kinds of devices.
* Remove the buggy BarrierStat.
* Clean up and remove unused functions in `paddle::Parameter`.
* Remove ProtoDataProvider.
* Huber loss supports both regression and classification.
* Add a `stride` parameter for sequence pooling layers.
* Enable the v2 API to use cuDNN batch normalization automatically.
* Batch normalization layer parameters can be shared by fixing the parameter name.
* Support variable-dimension input features for the 2D convolution operation.
* Refine the CMake CUDA configuration to automatically detect the GPU architecture.
* Improved website navigation.

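For the Huber loss mentioned above, the regression form is quadratic for small errors and linear for large ones, which makes it less sensitive to outliers than squared error. This sketch uses the standard textbook definition with threshold `delta`; Paddle's layer may parameterize it differently:

```python
# Textbook Huber loss for regression: 0.5 * e^2 when |e| <= delta,
# delta * (|e| - 0.5 * delta) otherwise. The `delta` parameterization
# here is the standard one, not necessarily Paddle's.

def huber_loss(y_true, y_pred, delta=1.0):
    error = abs(y_true - y_pred)
    if error <= delta:
        return 0.5 * error ** 2
    return delta * (error - 0.5 * delta)

print(huber_loss(0.0, 0.5))   # small error, quadratic region: 0.125
print(huber_loss(0.0, 3.0))   # large error, linear region: 2.5
```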
## Bug Fixes

* Fix a bug in ROI pooling. cc9a761
* Fix AUC being zero when the label is a dense vector. #5274
* Fix a bug in the WarpCTC layer.

# Release v0.10.0

We are glad to release version 0.10.0. In this version, we are happy to release the new