Commit 51b8e44

docs: Preparing release notes for v1.5.3 [WIP 3]
Signed-off-by: mandar2812 <[email protected]>
1 parent 24c6849

1 file changed: +78 additions, -60 deletions

docs/releases/mydoc_release_notes_153.md

@@ -4,10 +4,79 @@
## Additions

### Tensorflow Integration

**Package** `dynaml.tensorflow`
#### Batch Normalisation
[Batch normalisation](https://arxiv.org/abs/1502.03167) is used to standardize activations of convolutional layers and to speed up training of deep neural nets.

**Usage**
```scala
import io.github.mandar2812.dynaml.tensorflow._

val bn = dtflearn.batch_norm("BatchNorm1")
```
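Since `dtflearn.batch_norm` returns an ordinary layer, it composes with other layers through the `>>` operator of TensorFlow for Scala. A minimal sketch, with the layer names chosen purely for illustration:

```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

// Batch normalisation followed by a ReLU activation,
// composed into a single reusable layer.
val bn_relu = dtflearn.batch_norm("BatchNorm_1") >> tf.learn.ReLU("ReLU_1")
```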
#### Inception v2
The [_Inception_](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf) architecture, proposed by Google, is an important building block of _convolutional neural network_ architectures used in vision applications.
![inception](https://github.com/transcendent-ai-labs/DynaML/blob/master/docs/images/inception.png)
DynaML now offers the Inception cell as a computational layer.

**Usage**
```scala
import io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

// Create a ReLU activation, given a string name/identifier.
val relu_act = DataPipe(tf.learn.ReLU(_))

// Learn 10 filters in each branch of the inception cell.
val filters = Seq(10, 10, 10, 10)

val inception_cell = dtflearn.inception_unit(
  channels = 3, num_filters = filters, relu_act,
  // Apply batch normalisation after each convolution.
  use_batch_norm = true)(layer_index = 1)
```
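Assuming the cell concatenates its branch outputs along the channel dimension (as is standard for Inception), the cell above produces `filters.sum = 40` output channels, so a following cell must declare that many input channels. A sketch of chaining two cells under that assumption, reusing `relu_act` and `filters` from the snippet above:

```scala
// Second inception cell: its input channels must match the total
// filter count across the previous cell's branches (4 x 10 = 40).
val inception_cell_2 = dtflearn.inception_unit(
  channels = filters.sum, num_filters = filters, relu_act,
  use_batch_norm = true)(layer_index = 2)

// Compose the two cells into a small inception stack.
val inception_stack = inception_cell >> inception_cell_2
```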
In a subsequent [paper](https://arxiv.org/pdf/1512.00567.pdf), the authors introduced optimizations to the Inception architecture, known colloquially as _Inception v2_.
In _Inception v2_, larger convolutions (i.e. `3 x 3` and `5 x 5`) are implemented in a factorized manner to reduce the number of parameters to be learned. For example, the `3 x 3` convolution is expressed as a combination of `1 x 3` and `3 x 1` convolutions.
![inception](https://github.com/transcendent-ai-labs/DynaML/blob/master/docs/images/conv-fact.png)
Similarly, the `5 x 5` convolutions can be expressed as a combination of two `3 x 3` convolutions.
![inception](https://github.com/transcendent-ai-labs/DynaML/blob/master/docs/images/conv-fact2.png)
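The parameter savings are easy to verify with a little arithmetic. Per input/output channel pair (ignoring biases), the weight counts compare as follows:

```scala
// Weights per input/output channel pair, biases ignored.
val full_3x3 = 3 * 3         // 9 weights
val fact_3x3 = 1 * 3 + 3 * 1 // 6 weights, a 33% reduction
val full_5x5 = 5 * 5         // 25 weights
val fact_5x5 = 2 * (3 * 3)   // 18 weights, a 28% reduction
```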
#### Dynamical Systems: Continuous Time RNN
- Added CTRNN layer: `dtflearn.ctrnn`.
- Added CTRNN layer with inferable time step: `dtflearn.dctrnn`.
- Added a projection layer for CTRNN-based models: `dtflearn.ts_linear` (see the sketch below).
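These notes do not show the constructor signatures, so the sketch below is purely illustrative: the argument names (`units`, `horizon`, `timestep`) are assumptions, not the confirmed API.

```scala
import io.github.mandar2812.dynaml.tensorflow._

// Hypothetical parameters: state size, simulation horizon and time step.
val ctrnn_layer = dtflearn.ctrnn("CTRNN_1", units = 8, horizon = 10, timestep = 0.1)

// Variant that infers the integration time step during training.
val dctrnn_layer = dtflearn.dctrnn("DCTRNN_1", units = 8, horizon = 10)

// Project the simulated state sequence onto the observed dimensions.
val output_layer = dtflearn.ts_linear("TSLinear_1", units = 3)
```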
**Training Stopping Criteria**
Create common and simple training stop criteria, such as:
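For illustration, a hedged sketch of such criteria; the helper names and signatures below (`dtflearn.max_iter_stop`, `dtflearn.rel_loss_change_stop`) should be treated as assumptions:

```scala
import io.github.mandar2812.dynaml.tensorflow._

// Stop after a fixed iteration budget (assumed helper).
val stop_at_5k = dtflearn.max_iter_stop(5000)

// Stop once the relative change in the loss drops below 5%,
// or after 5000 iterations, whichever happens first (assumed helper).
val stop_on_plateau = dtflearn.rel_loss_change_stop(0.05, 5000)
```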
@@ -43,7 +112,13 @@
.prefetch(10)

// Create the MLP model.
val input = tf.learn.Input(
  UINT8,
  Shape(
    -1,
    dataSet.trainImages.shape(1),
    dataSet.trainImages.shape(2))
)

val trainInput = tf.learn.Input(UINT8, Shape(-1))

@@ -99,64 +174,7 @@
net_layer_sizes)

```
### 3D Graphics
