docs/museum/harmonium.md (23 additions, 23 deletions)
@@ -60,7 +60,7 @@ where $Z$ is the normalizing constant (or, in statistical mechanics, the <i>partition function</i>)
When one works through the derivation of the gradient of the log probability $\log p(\mathbf{x})$ with respect to synapses such as $\mathbf{W}$, one obtains a (contrastive) Hebbian-like update rule of the form:

$$
\Delta \mathbf{W} \propto \big\langle \mathbf{x} \mathbf{z}^{\top} \big\rangle_{\text{data}} - \big\langle \mathbf{x} \mathbf{z}^{\top} \big\rangle_{\text{model}}
$$

where the angle brackets $\langle \cdot \rangle$ indicate that we take the expectation of the quantity inside the brackets under a particular distribution (such as the data distribution, denoted by the subscript $\text{data}$). The above rule can also be viewed as a stochastic form of a general recipe known as contrastive Hebbian learning (CHL) [4].
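To make the rule concrete, below is a minimal NumPy sketch of one contrastive Hebbian update (in its common CD-1 form, where the model expectation is approximated by a single block-Gibbs step) for a Bernoulli harmonium. This is an illustration under assumed conventions, not the library's actual implementation; the function and variable names here are hypothetical.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def chl_update(X, W, b, c, rng, lr=0.01):
    """One contrastive Hebbian (CD-1 style) update for a Bernoulli harmonium.

    X : (batch, n_visible) binary data batch
    W : (n_visible, n_hidden) synaptic matrix
    b : (n_hidden,) latent bias; c : (n_visible,) visible bias
    """
    # Positive ("data") phase: clamp X and sample latent activities z
    p_z_data = sigmoid(X @ W + b)
    z = (rng.random(p_z_data.shape) < p_z_data).astype(X.dtype)
    # Negative ("model") phase: one block-Gibbs step back down and up again
    p_x_model = sigmoid(z @ W.T + c)
    p_z_model = sigmoid(p_x_model @ W + b)
    # Contrastive Hebbian rule: <x z^T>_data - <x z^T>_model
    dW = (X.T @ p_z_data - p_x_model.T @ p_z_model) / X.shape[0]
    db = (p_z_data - p_z_model).mean(axis=0)
    dc = (X - p_x_model).mean(axis=0)
    return W + lr * dW, b + lr * db, c + lr * dc

# Illustrative usage on a random binary batch standing in for binarized MNIST:
rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((784, 256))
b, c = np.zeros(256), np.zeros(784)
X = (rng.random((32, 784)) < 0.5).astype(np.float64)
W, b, c = chl_update(X, W, b, c, rng)
```

Note that both phases use the same Hebbian product of pre- and post-synaptic activities; only the distribution under which it is averaged differs, which is what makes the rule "contrastive."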
@@ -170,31 +170,31 @@ which will fit/adapt your harmonium to MNIST. This should produce per-training iteration output as follows:
```
W1: min -0.0494 ; max 0.0445 mu -0.0000 ; norm 4.4734
b1: min -4.0000 ; max -4.0000 mu -4.0000 ; norm 64.0000
c0: min -11.6114 ; max 0.0635 mu -3.8398 ; norm 135.2238
```
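Each line summarizes one parameter tensor (its minimum, maximum, mean, and norm). A helper along the following lines could produce output in this shape; this is a hypothetical sketch for illustration, not the training script's actual logging code.

```python
import numpy as np

def print_param_stats(name, theta):
    # Summarize a parameter tensor: min, max, mean ("mu"), and its L2/Frobenius norm.
    # Hypothetical helper for illustration; not the repo's actual logging code.
    print(f"{name}: min {theta.min():.4f} ; max {theta.max():.4f} "
          f"mu {theta.mean():.4f} ; norm {np.linalg.norm(theta):.4f}")

# Example: a 256-unit bias vector initialized to -4 reproduces the b1 line above.
print_param_stats("b1", np.full(256, -4.0))
```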