Commit 6d900fe

Author: Jencir Lee
Commit message: change SGD.step() signature

1 parent 4cb7aab commit 6d900fe

File tree

3 files changed: +19 −26 lines changed

README.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -123,17 +123,14 @@ b -= learning_rate * b.grad
 The [`micrograd.optim.SGD`](micrograd/optim.py) wraps up the above
 
 ```python
-SGD(target,          # variable to be minimised
-    wrt=[],          # list of variables with respect to which
+SGD(wrt=[],          # list of variables with respect to which
                      # to perform minimisation
     learning_rate=None,
                      # a non-negative number or a generator of them
     momentum=None)
 ```
 
-The `learning_rate` can accept a generator implementing a schedule of varying learning rates.
-
-Once [`SGD`](micrograd/optim.py) is created, just call `SGD.step()` with the minibatch data.
+The `learning_rate` can accept a generator implementing a schedule of varying learning rates. Typical usage is as below,
 
 ```python
 optimiser = SGD(...)
@@ -148,7 +145,10 @@ for k in range(n_steps):
     #
     batch_data = next(batch_iterator)
 
-    optimiser.step(**batch_data)
+    loss.forward(**batch_data)
+    loss.backward()
+
+    optimiser.step()
 
     # validation
     validation_metric.forward()
````
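The change splits data binding and gradient computation (`loss.forward(**batch)` / `loss.backward()`) from the parameter update, so `step()` takes no arguments. A minimal sketch of what such a zero-argument `step()` could look like, assuming variables that expose `.value` and `.grad` attributes (`Var` and these attribute names are illustrative stand-ins, not the actual `micrograd.optim` API):

```python
import itertools


class Var:
    """Toy variable with a value and a gradient slot."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0


class SGD:
    """Sketch of an optimiser with the new zero-argument step()."""

    def __init__(self, wrt=(), learning_rate=None, momentum=None):
        self.wrt = list(wrt)
        # A constant rate is wrapped into an infinite generator, so a
        # schedule (any generator of rates) is handled uniformly.
        if hasattr(learning_rate, "__next__"):
            self.rates = learning_rate
        else:
            self.rates = itertools.repeat(learning_rate)
        self.momentum = momentum
        self.velocity = [0.0] * len(self.wrt)

    def step(self):
        # No minibatch arguments: gradients are assumed to have been
        # populated already by loss.forward(**batch) / loss.backward().
        lr = next(self.rates)
        for i, v in enumerate(self.wrt):
            if self.momentum:
                self.velocity[i] = self.momentum * self.velocity[i] - lr * v.grad
                v.value += self.velocity[i]
            else:
                v.value -= lr * v.grad


# Usage mirroring the updated README loop:
w = Var(1.0)
opt = SGD(wrt=[w], learning_rate=0.1)
w.grad = 2.0   # pretend loss.backward() computed d(loss)/dw
opt.step()     # w.value becomes 1.0 - 0.1 * 2.0 = 0.8
```

Separating the forward/backward passes from `step()` also lets the same populated gradients feed other consumers (logging, clipping) before the update is applied.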
