The [`micrograd.optim.SGD`](micrograd/optim.py) class wraps up the above:

```python
SGD(wrt = [],       # list of variables with respect to which
                    # to perform minimisation
    learning_rate = None,
                    # a non-negative number or a generator of them
    momentum = None)
```

The `learning_rate` can accept a generator implementing a schedule of varying learning rates. Typical usage is as follows:

```python
optimiser = SGD(... )

for k in range(n_steps):
    # draw the next minibatch
    batch_data = next (batch_iterator)

    loss.forward(** batch_data)
    loss.backward()

    optimiser.step()

    # validation
    validation_metric.forward()
```
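Since `learning_rate` accepts a generator, a schedule can be written as a plain Python generator function. The sketch below is illustrative only: the `exponential_decay` helper is an assumption of this example, not part of micrograd.

```python
def exponential_decay(initial_rate, decay = 0.9):
    # yields initial_rate first, then multiplies it by `decay`
    # after every step, giving a geometrically shrinking schedule
    rate = initial_rate
    while True:
        yield rate
        rate *= decay
```

It would then be passed in place of a constant number, e.g. `SGD(wrt = params, learning_rate = exponential_decay(0.1))`, and the optimiser would draw a fresh rate from it at each step.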