Commit 0a8acf9

Merge pull request #1 from lucasb-eyer/always-return-params

Default `Module.parameters` always return params, regardless of mode.

2 parents: e6a7ef0 + 96568fe

File tree: 2 files changed (+4, -3 lines)
DeepFried2/layers/Module.py (2 additions, 2 deletions)

@@ -22,12 +22,12 @@ def zero_grad_parameters(self):
     def parameters(self):
         params, grads = [], []
 
-        if self.training_mode and hasattr(self, 'weight'):
+        if hasattr(self, 'weight'):
             assert hasattr(self, 'grad_weight'), "The layer {} has a `weight` variable but no `grad_weight`, you probably forget to implement it.".format(type(self))
             params += [self.weight]
             grads += [self.grad_weight]
 
-        if self.training_mode and hasattr(self, 'bias'):
+        if hasattr(self, 'bias'):
             assert hasattr(self, 'grad_bias'), "The layer {} has a `bias` variable but no `grad_bias`, you probably forget to implement it.".format(type(self))
             params += [self.bias]
             grads += [self.grad_bias]
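The effect of this hunk can be illustrated with a minimal standalone sketch (a reconstruction for illustration only, not the actual DeepFried2 `Module` class): before the change, `parameters()` returned empty lists whenever the module was not in training mode, so callers such as optimizers saw no parameters at all in evaluation mode.

```python
class ModuleBefore:
    """Sketch of the old behavior: parameters hidden outside training mode."""
    def __init__(self):
        self.training_mode = False          # e.g. after switching to evaluation
        self.weight = [1.0]
        self.grad_weight = [0.0]

    def parameters(self):
        params, grads = [], []
        # Old gating: parameters only reported while training.
        if self.training_mode and hasattr(self, 'weight'):
            params += [self.weight]
            grads += [self.grad_weight]
        return params, grads


class ModuleAfter(ModuleBefore):
    """Sketch of the new behavior: parameters always reported."""
    def parameters(self):
        params, grads = [], []
        # New gating: only the existence of `weight` matters.
        if hasattr(self, 'weight'):
            params += [self.weight]
            grads += [self.grad_weight]
        return params, grads


# In evaluation mode the old code returned nothing:
print(ModuleBefore().parameters())  # → ([], [])
print(ModuleAfter().parameters())   # → ([[1.0]], [[0.0]])
```

This matches the PR title: `parameters` now always returns params, regardless of mode.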

examples/MNIST/run.py (2 additions, 1 deletion)

@@ -1,3 +1,4 @@
+import DeepFried2 as df
 import DeepFried2.optimizers as optim
 from mnist import *
 from train import *
@@ -12,7 +13,7 @@ def main(params):
 
     model = lenet()
 
-    criterion = bb8.ClassNLLCriterion()
+    criterion = df.ClassNLLCriterion()
 
     optimiser = optim.SGD(lr=params['lr'])

0 commit comments