## Additions

### Tensorflow Integration

**Package** `dynaml.tensorflow`

#### Batch Normalisation

[Batch normalisation](https://arxiv.org/abs/1502.03167) is used to standardise the activations of convolutional layers and
to speed up the training of deep neural networks.

**Usage**

```scala
import io.github.mandar2812.dynaml.tensorflow._

val bn = dtflearn.batch_norm("BatchNorm1")
```
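
As a minimal sketch (assuming `dtflearn.batch_norm` yields a standard `tf.learn` layer; the layer names below are arbitrary identifiers, not from this release), the batch normalisation layer can be chained with other layers through the usual `>>` composition operator:

```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

// Illustrative composition: batch-normalise activations, then apply a ReLU.
val bn_relu = dtflearn.batch_norm("BatchNorm1") >> tf.learn.ReLU("ReLU_1")
```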

#### Inception v2

The [_Inception_](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf) architecture, proposed by Google, is an important
building block of _convolutional neural network_ architectures used in vision applications.

![inception](../images/inception.png)

DynaML now offers the Inception cell as a computational layer.

**Usage**

```scala
import io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

// Create a ReLU activation, given a string name/identifier.
val relu_act = DataPipe(tf.learn.ReLU(_))

// Learn 10 filters in each branch of the inception cell.
val filters = Seq(10, 10, 10, 10)

val inception_cell = dtflearn.inception_unit(
  channels = 3, num_filters = filters, relu_act,
  // Apply batch normalisation after each convolution.
  use_batch_norm = true)(layer_index = 1)
```
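
As an illustrative follow-up (the `Flatten` and `Linear` layers below are standard `tf.learn` building blocks, not part of this release), the cell composes with other layers to form a small classification head:

```scala
// Sketch only: append a flattening layer and a linear read-out
// to the inception cell defined above.
val small_net =
  inception_cell >>
  tf.learn.Flatten("Flatten_1") >>
  tf.learn.Linear("Output", 10)
```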

In a subsequent [paper](https://arxiv.org/pdf/1512.00567.pdf), the authors introduced optimisations to the Inception
architecture, known colloquially as _Inception v2_.

In _Inception v2_, larger convolutions (i.e. `3 x 3` and `5 x 5`) are implemented in a factorised manner
to reduce the number of parameters to be learned. For example, the `3 x 3` convolution is expressed as a
combination of `1 x 3` and `3 x 1` convolutions.

![inception](../images/conv-fact-1.png)

Similarly, the `5 x 5` convolutions can be expressed as a combination of two `3 x 3` convolutions.

![inception](../images/conv_fact_2.png)
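
To see why this factorisation helps, here is a small back-of-the-envelope calculation (the channel counts are hypothetical) comparing a direct `5 x 5` convolution with two stacked `3 x 3` convolutions:

```scala
// Parameter count of a convolution kernel, ignoring biases:
// kernel_height * kernel_width * input_channels * output_channels.
def convParams(kh: Int, kw: Int, inCh: Int, outCh: Int): Int = kh * kw * inCh * outCh

// Hypothetical channel counts.
val (inCh, outCh) = (64, 64)

val direct     = convParams(5, 5, inCh, outCh)                                  // 102400
val factorised = convParams(3, 3, inCh, outCh) + convParams(3, 3, outCh, outCh) // 73728

// The factorised form needs roughly 28% fewer parameters in this setting.
```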

#### Dynamical Systems: Continuous Time RNN

- Added CTRNN layer: `dtflearn.ctrnn`.

- Added CTRNN layer with an inferable time step: `dtflearn.dctrnn`.

- Added a projection layer for CTRNN-based models: `dtflearn.ts_linear`.
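
For intuition, the sketch below (plain Scala, independent of the `dtflearn.ctrnn` API, with made-up weight matrices) shows the kind of update a continuous-time RNN performs: the hidden state obeys an ODE, advanced here by one explicit Euler step of size `dt`.

```scala
// Conceptual sketch, not the dtflearn.ctrnn implementation:
// the hidden state follows dh/dt = -h + W * tanh(h) + U * x.
def ctrnnStep(
  h: Array[Double], x: Array[Double],
  W: Array[Array[Double]], U: Array[Array[Double]],
  dt: Double): Array[Double] =
  Array.tabulate(h.length) { i =>
    val recurrent = h.indices.map(j => W(i)(j) * math.tanh(h(j))).sum
    val driven    = x.indices.map(j => U(i)(j) * x(j)).sum
    h(i) + dt * (-h(i) + recurrent + driven)
  }
```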

**Training Stopping Criteria**

Create common and simple training stopping criteria, such as:

  .prefetch(10)

// Create the MLP model.
val input = tf.learn.Input(
  UINT8,
  Shape(
    -1,
    dataSet.trainImages.shape(1),
    dataSet.trainImages.shape(2))
)

// Input for the training labels.
val trainInput = tf.learn.Input(UINT8, Shape(-1))

  net_layer_sizes)

```

### 3D Graphics