Constant validation loss #12146
Replies: 2 comments 10 replies
Hi @cwoolfo1, not sure if the below fixes your issue, but if you're using metrics from TorchMetrics, instantiate them once in `__init__` instead of creating a fresh instance on every step:

```diff
 class NeuralNetwork(LightningModule):
     def __init__(self, input_size, hidden_size, output_size, num_layers):
         ...
+        self.train_accuracy = Accuracy()
+        self.validation_accuracy = Accuracy()

     def training_step(self, batch, batch_idx):
         ...
-        Acc = Accuracy()
-        self.log("train_accuracy", Acc(...), ...)
+        self.train_accuracy(...)
+        self.log("train_accuracy", self.train_accuracy, ...)
         ...

     def validation_step(self, batch, batch_idx):
         ...
-        Acc = Accuracy()
-        self.log("validation_accuracy", Acc(...), ...)
+        self.validation_accuracy(...)
+        self.log("validation_accuracy", self.validation_accuracy, ...)
         ...
```

Would you mind trying this? I would also try to make sure that it's working in a CPU environment first. For more examples of how to log metrics, see the TorchMetrics documentation.
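For reference, here is a minimal self-contained sketch of that pattern. The GRU architecture, the loss, the `task="multiclass"`/`num_classes` arguments (needed on recent TorchMetrics releases; older ones accept a bare `Accuracy()`), and all hyperparameters are illustrative assumptions, not taken from your script:

```python
import torch
from torch import nn
import pytorch_lightning as pl
from torchmetrics import Accuracy

class NeuralNetwork(pl.LightningModule):
    def __init__(self, input_size, hidden_size, output_size, num_layers):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)
        # Metrics are registered as submodules, so Lightning can move them to
        # the right device and reset/aggregate them across the epoch.
        self.train_accuracy = Accuracy(task="multiclass", num_classes=output_size)
        self.validation_accuracy = Accuracy(task="multiclass", num_classes=output_size)

    def forward(self, x):
        out, _ = self.gru(x)        # out: (batch, seq_len, hidden_size)
        return self.fc(out[:, -1])  # logits from the last time step

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = nn.functional.cross_entropy(logits, y)
        # Update the metric state, then pass the metric object to self.log so
        # Lightning computes the epoch-level value correctly.
        self.train_accuracy(logits, y)
        self.log("train_accuracy", self.train_accuracy, on_step=True, on_epoch=True)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        self.validation_accuracy(logits, y)
        self.log("validation_accuracy", self.validation_accuracy, on_epoch=True)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```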
The validation metrics are all constant. I have visualized the learning curves in TensorBoard, and all of the metrics are straight lines; not a single change in value. I am not using GPUs yet. I will send you a copy of the script shortly.
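One quick way to confirm that symptom points at the training loop (a debugging sketch, not from this thread; the model and batch shapes are placeholders): snapshot the parameters, run a single optimizer step, and check whether anything changed:

```python
import torch
from torch import nn

# If every logged metric is a flat line, first verify the optimizer is
# actually changing the weights at all.
model = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

before = [p.detach().clone() for p in model.parameters()]

x = torch.randn(4, 10, 8)   # (batch, seq_len, features), dummy data
out, _ = model(x)
out.mean().backward()       # any differentiable stand-in loss will do here
optimizer.step()

changed = any(
    not torch.equal(b, p.detach())
    for b, p in zip(before, model.parameters())
)
print("parameters updated:", changed)  # False means the loop is broken
```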
Hello,
I am training a GRU. The metrics are constant.
I originally believed this was overfitting, but I have tried various hyperparameters, regularization schemes, and optimizers.
I believe this is an issue with my code.
Can y'all see if there are any issues with my code that are preventing my neural network from training?
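A standard first diagnostic for this symptom is the overfit-one-batch test: train on one fixed batch and check that the loss collapses toward zero. If it does not, the problem is in the model or training loop rather than in the data or regularization. Below is a sketch with a hypothetical tiny GRU; all sizes and the learning rate are arbitrary assumptions:

```python
import torch
from torch import nn

class TinyGRU(nn.Module):
    """Placeholder stand-in for the actual model, just for the test."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
        self.fc = nn.Linear(16, 3)

    def forward(self, x):
        out, _ = self.gru(x)
        return self.fc(out[:, -1])

torch.manual_seed(0)
model = TinyGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(4, 10, 8)      # one fixed batch
y = torch.randint(0, 3, (4,))  # fixed random labels

# A healthy model/optimizer pair should memorize this batch easily.
for step in range(300):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")  # should be near zero
```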