
Commit 2a94bff

Append per-epoch val. loss rather than best val. loss to val_loss
**Problem**
Currently we call `val_loss.append(best_val_loss)` in each epoch. This is misleading, because `train_loss`, `train_prep`, and `val_prep` all store the corresponding epoch's quantities (not the best across epochs). It is also inconvenient: one often wants to plot the train and validation losses as a function of the epoch to look for overfitting.

**Solution**
`val_loss.append(eval_epoch_loss)`
1 parent d8b0eba commit 2a94bff
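To make the mismatch concrete, here is a minimal, self-contained sketch of the bookkeeping (toy loss numbers, not from llama-recipes; in `train_utils.py` the real values come from the training and evaluation steps):

```python
# Toy per-epoch losses to illustrate the bookkeeping (hypothetical values).
train_losses_per_epoch = [2.1, 1.7, 1.5, 1.6]
eval_losses_per_epoch = [2.0, 1.8, 1.9, 2.2]

train_loss, val_loss = [], []
best_val_loss = float("inf")

for train_epoch_loss, eval_epoch_loss in zip(train_losses_per_epoch,
                                             eval_losses_per_epoch):
    if eval_epoch_loss < best_val_loss:
        best_val_loss = eval_epoch_loss

    train_loss.append(float(train_epoch_loss))
    # Before this commit: val_loss.append(float(best_val_loss)), which turns
    # val_loss into a running minimum [2.0, 1.8, 1.8, 1.8], out of step with
    # the per-epoch values stored in train_loss.
    val_loss.append(float(eval_epoch_loss))

print(train_loss)  # [2.1, 1.7, 1.5, 1.6]
print(val_loss)    # [2.0, 1.8, 1.9, 2.2] -- one value per epoch
```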

File tree

1 file changed (+1 line, -1 line)


src/llama_recipes/utils/train_utils.py

Lines changed: 1 addition & 1 deletion
@@ -288,7 +288,7 @@ def train(model, train_dataloader,eval_dataloader, tokenizer, optimizer, lr_sche
                         print(f"best eval loss on epoch {epoch+1} is {best_val_loss}")
                 else:
                     print(f"best eval loss on epoch {epoch+1} is {best_val_loss}")
-            val_loss.append(float(best_val_loss))
+            val_loss.append(float(eval_epoch_loss))
             val_prep.append(float(eval_ppl))
         if train_config.enable_fsdp:
             if rank==0:
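With `val_loss` holding one value per epoch, plotting both curves against the epoch index to look for overfitting becomes straightforward. A sketch, assuming matplotlib is installed and using toy numbers in place of the lists built up by `train()`:

```python
import matplotlib.pyplot as plt

# Toy per-epoch values standing in for the train_loss / val_loss lists.
train_loss = [2.1, 1.7, 1.5, 1.3, 1.2]
val_loss = [2.0, 1.8, 1.7, 1.8, 2.0]  # climbing again -> possible overfitting

epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, label="train loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curves.png")
```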
