The [`micrograd.optim.SGD`](micrograd/optim.py) wraps up the above:

```python
SGD(wrt = [],          # list of variables with respect to which
                       # to perform minimisation
    learning_rate = None,
                       # a non-negative number or a generator of them
    momentum = None)
```
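For intuition, here is a minimal, self-contained sketch of the update rule that a momentum SGD optimiser of this kind applies. This is not micrograd's implementation; plain Python lists stand in for micrograd variables, and `sgd_momentum_step` is an illustrative helper, not part of the library:

```python
# Sketch of the classic momentum SGD update: v <- momentum * v + grad,
# then p <- p - learning_rate * v. Purely illustrative; not micrograd code.

def sgd_momentum_step(params, grads, velocities, learning_rate, momentum):
    """Perform one in-place SGD step with momentum on flat lists of floats."""
    for i, (p, g) in enumerate(zip(params, grads)):
        velocities[i] = momentum * velocities[i] + g
        params[i] = p - learning_rate * velocities[i]

# Toy run: minimise f(x) = x**2, whose gradient is 2 * x.
params, velocities = [1.0], [0.0]
for _ in range(300):
    grads = [2.0 * params[0]]
    sgd_momentum_step(params, grads, velocities, learning_rate=0.1, momentum=0.9)
# params[0] is now close to the minimiser x = 0
```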

The `learning_rate` can accept a generator implementing a schedule of varying learning rates. Typical usage is as below:

```python
optimiser = SGD(...)

for k in range(n_steps):
    ...
    batch_data = next(batch_iterator)

    loss.forward(**batch_data)
    loss.backward()

    optimiser.step()

    # validation
    validation_metric.forward()
```
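As noted above, `learning_rate` may be a generator yielding a schedule of rates. A minimal sketch of such a schedule, assuming only that the optimiser draws one value per step; the name `exponential_decay` is illustrative, not part of micrograd:

```python
def exponential_decay(initial_lr, decay=0.99):
    """Yield learning rates initial_lr, initial_lr * decay, initial_lr * decay**2, ...

    Illustrative schedule generator; not part of micrograd.
    """
    lr = initial_lr
    while True:
        yield lr
        lr *= decay

schedule = exponential_decay(0.1, decay=0.5)
# the first three rates drawn from the schedule are 0.1, 0.05, 0.025
```

Given the signature documented above, such a generator would be passed as the `learning_rate` argument, e.g. `SGD(wrt=params, learning_rate=exponential_decay(0.1))`.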