Commit 54256d9

Virender Singh and claude committed
Fix autoencoder bias gradient updates
Use bias gradients (b1_grad, b2_grad, etc.) instead of bias values (b1, b2, etc.) in the momentum updates. This critical fix ensures proper backpropagation and training convergence in the 2-layer autoencoder.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent d4417b3 commit 54256d9


scripts/builtin/autoencoder_2layer.dml

Lines changed: 4 additions & 4 deletions
@@ -136,13 +136,13 @@ m_autoencoder_2layer = function(Matrix[Double] X, Integer num_hidden1, Integer n
   #update
   local_step = step / nrow(X_batch)
   upd_W1 = mu * upd_W1 - local_step * W1_grad
-  upd_b1 = mu * upd_b1 - local_step * b1
+  upd_b1 = mu * upd_b1 - local_step * b1_grad
   upd_W2 = mu * upd_W2 - local_step * W2_grad
-  upd_b2 = mu * upd_b2 - local_step * b2
+  upd_b2 = mu * upd_b2 - local_step * b2_grad
   upd_W3 = mu * upd_W3 - local_step * W3_grad
-  upd_b3 = mu * upd_b3 - local_step * b3
+  upd_b3 = mu * upd_b3 - local_step * b3_grad
   upd_W4 = mu * upd_W4 - local_step * W4_grad
-  upd_b4 = mu * upd_b4 - local_step * b4
+  upd_b4 = mu * upd_b4 - local_step * b4_grad
   W1 = W1 + upd_W1
   b1 = b1 + upd_b1
   W2 = W2 + upd_W2
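For context, the update above is classical momentum: each parameter keeps a velocity buffer that accumulates scaled gradients, and the parameter then moves by that velocity. The bug subtracted the bias values themselves rather than their gradients. Below is a minimal standalone DML sketch of the intended rule; the names (theta, grad, upd) and toy values are illustrative, not taken from autoencoder_2layer.dml.

# Minimal sketch of a classical momentum step (illustrative names/values,
# not from the script).
mu         = 0.9                          # momentum coefficient
local_step = 0.01                         # step size, already divided by the batch size
theta      = matrix(1, rows=3, cols=1)    # a parameter, e.g. a bias vector
grad       = matrix(0.5, rows=3, cols=1)  # its gradient from backpropagation
upd        = matrix(0, rows=3, cols=1)    # velocity (momentum buffer)

# The velocity accumulates the scaled *gradient*; plugging in theta here
# (as the old code did with b1..b4) merely decays the parameter itself
# and ignores the backpropagated signal.
upd   = mu * upd - local_step * grad
theta = theta + upd

print(toString(theta))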
