
[FIX] Prevent inverse normalization of quantile validation loss#1432

Merged
marcopeix merged 3 commits into main from fix/val_loss_scaling
Jan 13, 2026
Conversation

@marcopeix
Contributor

@marcopeix marcopeix commented Jan 12, 2026

When using DistributionLoss for training and sCRPS/MQLoss/HuberMQLoss for validation, _inv_normalization was applied to quantiles that were already in the original scale (via scale_decouple), causing validation loss to be computed on incorrectly scaled values.

This fix tracks when the output comes from a scaled distribution and skips _inv_normalization in those cases.
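The logic of the fix can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual neuralforecast code: the flag name `from_scaled_distribution` and the simplified `_inv_normalization` helper here are hypothetical stand-ins.

```python
import numpy as np

def _inv_normalization(y_hat, mean, std):
    # Undo standard scaling: map normalized values back to the original scale.
    return y_hat * std + mean

def quantiles_for_validation(quantiles, mean, std, from_scaled_distribution):
    # If the quantiles come from a scaled distribution (scale_decouple has
    # already produced original-scale values), skip inverse normalization;
    # otherwise apply it. Applying it a second time was the source of the bug.
    if from_scaled_distribution:
        return quantiles
    return _inv_normalization(quantiles, mean, std)

# Quantiles already on the original scale are returned unchanged.
q = np.array([10.0, 12.0, 14.0])
print(quantiles_for_validation(q, mean=12.0, std=2.0, from_scaled_distribution=True))

# Normalized quantiles are rescaled before the validation loss is computed.
qn = np.array([-1.0, 0.0, 1.0])
print(quantiles_for_validation(qn, mean=12.0, std=2.0, from_scaled_distribution=False))
```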

@marcopeix marcopeix linked an issue Jan 12, 2026 that may be closed by this pull request
@marcopeix marcopeix marked this pull request as ready for review January 12, 2026 20:43
Collaborator

@nasaul nasaul left a comment


The solution is correct. However, I think we need to add a test that checks consistency across the different losses so this won't happen again.
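A test of the kind suggested might look like the sketch below. The helper names are hypothetical, and `mqloss` is a simplified pinball loss rather than the library implementation; the check asserts that the loss computed on original-scale quantiles matches the loss computed after a normalize/inverse-normalize round trip.

```python
import numpy as np

def mqloss(y, quantile_preds, quantile_levels):
    # Simplified multi-quantile (pinball) loss, averaged over quantile levels.
    losses = []
    for q, y_hat in zip(quantile_levels, quantile_preds):
        diff = y - y_hat
        losses.append(np.mean(np.maximum(q * diff, (q - 1) * diff)))
    return float(np.mean(losses))

def test_loss_is_scale_consistent():
    rng = np.random.default_rng(0)
    mean, std = 50.0, 5.0
    y = mean + std * rng.standard_normal(100)
    levels = [0.1, 0.5, 0.9]
    # Original-scale quantile predictions (one array per quantile level).
    preds = [y - 1.0, y, y + 1.0]

    # Round trip: normalize, then inverse-normalize, as validation would.
    preds_roundtrip = [((p - mean) / std) * std + mean for p in preds]

    direct = mqloss(y, preds, levels)
    roundtrip = mqloss(y, preds_roundtrip, levels)
    assert np.isclose(direct, roundtrip)

test_loss_is_scale_consistent()
print("consistency test passed")
```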

@marcopeix marcopeix merged commit c90965e into main Jan 13, 2026
22 checks passed
@marcopeix marcopeix deleted the fix/val_loss_scaling branch January 13, 2026 15:20


Development

Successfully merging this pull request may close these issues.

Incorrect Scaling of Quantiles in Validation Loss

2 participants