Commit 758f363
Fix pre/post-training evaluation to use same batch in nn_tutorial
The tutorial was comparing loss on two different batches:
- Pre-training: evaluated on the first 64 instances (batch 0)
- Post-training: evaluated on the last batch left over from the training loop

This made the comparison misleading, since it wasn't measuring improvement on the same data.
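In the tutorial's raw-tensor training loop, `xb` and `yb` are reassigned on every iteration, so the final `print` measures whatever batch the loop finished on. A minimal sketch of the pre-fix pattern, assuming the nn_tutorial's manual-gradient loop (the synthetic data and the use of `cross_entropy` in place of the tutorial's log-softmax/NLL pair are simplifications for this sketch):

```python
import torch

# Synthetic stand-ins for the tutorial's MNIST tensors (assumption for the sketch)
n, bs = 200, 64
x_train = torch.randn(n, 784)
y_train = torch.randint(0, 10, (n,))

weights = torch.randn(784, 10, requires_grad=True)
bias = torch.zeros(10, requires_grad=True)

def model(xb):
    return xb @ weights + bias

loss_func = torch.nn.functional.cross_entropy

xb, yb = x_train[0:bs], y_train[0:bs]   # batch 0
print(loss_func(model(xb), yb))         # pre-training loss, on batch 0

for epoch in range(2):
    for i in range((n - 1) // bs + 1):
        xb = x_train[i * bs : i * bs + bs]  # xb/yb are reassigned each step...
        yb = y_train[i * bs : i * bs + bs]
        loss = loss_func(model(xb), yb)
        loss.backward()
        with torch.no_grad():
            weights -= weights.grad * 0.5
            bias -= bias.grad * 0.5
            weights.grad.zero_()
            bias.grad.zero_()

print(loss_func(model(xb), yb))  # ...so this measures the LAST batch, not batch 0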
Changes:
- Save the initial batch (xb_initial, yb_initial) after the first evaluation
- Use the saved initial batch for the post-training evaluation
- Add a clarifying comment about the fair comparison
- Both evaluations now use the same data (the first 64 training instances)
This provides an accurate before/after comparison showing the model's
improvement on the same batch of data.

1 parent 4fa1fa8
1 file changed: +7 −2 lines changed

[Diff table not recoverable from the extracted page; only the line-number columns survive. They show two hunks: 4 lines added after original line 176 (saving the initial batch), and original lines 247 and 249 replaced by 3 new lines at 251-254 (the post-training evaluation), for +7 −2 overall.]
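Given the hunk positions and the commit message, the change plausibly reduces to the two fragments below. This is a sketch, not the actual diff: the added lines are not visible in the extraction, and only `xb_initial`/`yb_initial` are confirmed by the message.

```python
# Hunk 1 (after original line 176) — immediately after the pre-training
# evaluation: save batch 0 so the post-training evaluation can reuse it.
# For a fair comparison, evaluate on the same batch before and after training.
xb_initial, yb_initial = xb, yb

# Hunk 2 (around original lines 247-249) — after the training loop: evaluate
# on the saved initial batch instead of whatever batch the loop left in xb/yb.
print(loss_func(model(xb_initial), yb_initial))
```

With this change, both printed losses refer to the same 64 instances, so the drop between them reflects actual learning rather than batch-to-batch variation.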