Commit 84abb71 (parent f437de0)
Author: Jencir Lee
Message: change SGD.step() signature

File tree: 3 files changed (+19 −26 lines)


README.md (6 additions, 6 deletions)

````diff
@@ -122,17 +122,14 @@ b -= learning_rate * b.grad
 The [`micrograd.optim.SGD`](micrograd/optim.py) wraps up the above
 
 ```python
-SGD(target,  # variable to be minimised
-    wrt=[],  # list of variables with respect to which
+SGD(wrt=[],  # list of variables with respect to which
              # to perform minimisation
     learning_rate=None,
              # a non-negative number or a generator of them
     momentum=None)
 ```
 
-The `learning_rate` can accept a generator implementing a schedule of varying learning rates.
-
-Once [`SGD`](micrograd/optim.py) is created, just call `SGD.step()` with the minibatch data.
+The `learning_rate` can accept a generator implementing a schedule of varying learning rates. Typical usage is as below,
 
 ```python
 optimiser = SGD(...)
@@ -147,7 +144,10 @@ for k in range(n_steps):
     #
     batch_data = next(batch_iterator)
 
-    optimiser.step(**batch_data)
+    loss.forward(**batch_data)
+    loss.backward()
+
+    optimiser.step()
 
     # validation
     validation_metric.forward()
````
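The change splits one call into a three-call protocol: `forward(**batch_data)` binds the minibatch, `backward()` fills gradients, and `step()` only applies the parameter update. The following is a minimal self-contained sketch of that protocol; the `Variable` and `SquaredLoss` classes are hypothetical stand-ins, not the actual micrograd implementation, and only the `SGD(wrt=..., learning_rate=..., momentum=...)` signature and the loop structure are taken from the diff.

```python
class Variable:
    """Hypothetical stand-in for a micrograd variable: a value plus a gradient slot."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

class SquaredLoss:
    """Toy loss node computing (w * x - y)**2 for a single weight w."""
    def __init__(self, w):
        self.w = w
        self.x = self.y = None

    def forward(self, x, y):
        # Bind the minibatch data; step() no longer receives it.
        self.x, self.y = x, y
        return (self.w.value * x - y) ** 2

    def backward(self):
        # Populate .grad on the weight from the bound data.
        self.w.grad = 2.0 * (self.w.value * self.x - self.y) * self.x

class SGD:
    """Sketch of the post-commit interface: step() takes no arguments."""
    def __init__(self, wrt=[], learning_rate=None, momentum=None):
        self.wrt = wrt
        self.lr = learning_rate
        self.momentum = momentum or 0.0
        self.velocity = {id(v): 0.0 for v in wrt}

    def step(self):
        # Apply a momentum SGD update using the gradients left by backward().
        for v in self.wrt:
            vel = self.momentum * self.velocity[id(v)] - self.lr * v.grad
            self.velocity[id(v)] = vel
            v.value += vel

w = Variable(0.0)
loss = SquaredLoss(w)
optimiser = SGD(wrt=[w], learning_rate=0.1, momentum=0.5)

for k in range(100):
    batch_data = {"x": 1.0, "y": 3.0}  # fixed toy "minibatch"; optimum is w = 3
    loss.forward(**batch_data)
    loss.backward()
    optimiser.step()
```

After the loop, `w.value` has converged to roughly 3.0, the minimiser of the toy loss; the point of the sketch is only the calling order, which mirrors the training loop shown in the diff.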
