@@ -89,6 +89,7 @@ The depth of Layer 1 is 64. You can see how each filter extracts different details.
 </tr>
 </tbody>
 </table>
+
 <table border=0 width="800px" align="center">
 <tbody>
 <tr>
@@ -103,6 +104,7 @@ The depth of Layer 1 is 64. You can see how each filter extracts different details.
 </tr>
 </tbody>
 </table>
+
 <table border=0 width="800px" align="center">
 <tbody>
 <tr>
@@ -121,18 +123,22 @@ The depth of Layer 1 is 64. You can see how each filter extracts different details.
 <a id='max_activations'></a>
 ## Activation Maximization
 
-Bla bla bla. Write some stuff here.
+Activation Maximization was first proposed by Erhan et al.<sup>[[3]](#3)</sup> in 2009 as a way to communicate CNN behavior, specifically as a way to interpret or visualize learned feature maps. A learned feature map can be represented by the active state of particular neurons, so by looking at the maximum activation of particular neurons we can visualize what patterns are learned by particular filters.
+
+### The Algorithm
+
+We start with a pretrained Vgg16 model and a noisy image, as seen below. The image is passed through the network, and at a particular layer the gradient with respect to the noisy image is calculated at each neuron.<sup>[[4]](#4)</sup> This is calculated using backpropagation while keeping the parameters of the model fixed; the `hook_fn` in the `ActivationMaximizationVis()` class captures the calculated gradients. Each pixel in the original noisy image is then iteratively changed to maximize the activation of the neuron. In other words, each pixel is repeatedly nudged in the direction that pushes the activation toward a maximum, and the pixel values are updated until a desired image is found.
+
+We can visualize the activation map of each layer after a noisy image is passed through the network. Using the activation maximization technique, we can see that patterns emerge at each layer/filter combination. If you look at the earlier layers in the network, simpler patterns emerge: the activation maps pull out simple patterns and colors, and vertical and horizontal elements can be seen.
+
+As we move deeper into the network, more complex patterns emerge. Some of the activation maps of later layers look like trees, eyes, and feathers. Well, at least that's what it looks like to me. We all may see something different.
 
-### Layer Vis
-Taking a look at the first few layers you can see...
 <table border=0 width="800px" align="center">
 <tbody>
 <tr>
@@ -150,6 +156,7 @@ Taking a look at the first few layers you can see...