
Added FFTLoss in Pytorch #1463

Open

SkyeGunasekaran wants to merge 1 commit into Nixtla:main from SkyeGunasekaran:feature/FFTLoss

Conversation

@SkyeGunasekaran

PR Description: Integration of Fourier-Domain Loss Functions

Overview

This PR introduces a suite of frequency-domain loss functions and a flexible MixedFFTLoss utility. These additions allow the model to optimize for spectral density and periodic structure, making the loss more robust to trend and seasonality than direct pointwise prediction errors.

Fourier Loss Suite

I have implemented three core frequency-domain losses based on the magnitude spectrum of the Real Discrete Fourier Transform (RFFT):

  • FFTMAELoss: Mean Absolute Error in frequency space.
  • FFTMSELoss: Mean Squared Error in frequency space.
  • FFTRMSELoss: Root Mean Squared Error in frequency space.

These losses operate on the magnitude spectrum $|F(y)|$, which keeps the loss real-valued and focuses it on the power distribution of seasonal and trend components rather than on exact point-in-time alignment.
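The PR diff is not shown here, but the core idea can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: the function name `fft_mae_loss` and its signature are assumptions, and the real classes subclass `BasePointLoss` as described below.

```python
import torch

def fft_mae_loss(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Sketch of an MAE in frequency space (hypothetical, functional form).

    Both inputs are transformed with the real FFT along the horizon
    dimension, and the loss compares their magnitude spectra, so the
    result stays real-valued.
    """
    mag_hat = torch.abs(torch.fft.rfft(y_hat, dim=-1))
    mag_true = torch.abs(torch.fft.rfft(y, dim=-1))
    return torch.mean(torch.abs(mag_hat - mag_true))
```

The MSE and RMSE variants would differ only in replacing the absolute difference with a squared difference (and a final square root for RMSE).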

Hybrid Optimization (MixedFFTLoss)

To balance point-wise accuracy with structural frequency alignment, I added MixedFFTLoss, which combines both into a single composite objective:

$$\mathcal{L}_{total} = \mathcal{L}_{time} + \lambda \cdot \mathcal{L}_{freq}$$

Key Features:

  • Zero-Masking: Masks are applied in the time-domain prior to FFT to prevent padding or missing values from introducing high-frequency noise into the spectrum.
  • Normalization: Supports magnitude normalization via the norm parameter to ensure loss stability across varying sequence lengths ($H$).
  • Architectural Compatibility: The implementation respects the BasePointLoss interface to seamlessly integrate with existing loss functions in the repository.
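A minimal sketch of how these features fit together, assuming an `nn.Module` interface for brevity (the PR itself targets the `BasePointLoss` interface, whose exact signature is not shown here; `lam` and `norm` are assumed parameter names):

```python
import torch
import torch.nn as nn

class MixedFFTLoss(nn.Module):
    """Hypothetical sketch: time-domain MAE plus lambda-weighted
    frequency-domain magnitude MAE, per the formula above."""

    def __init__(self, lam: float = 0.5, norm: bool = True):
        super().__init__()
        self.lam = lam
        self.norm = norm

    def forward(self, y_hat, y, mask=None):
        if mask is not None:
            # Zero-masking in the time domain before the FFT, so padded
            # or missing steps do not inject high-frequency noise.
            y_hat = y_hat * mask
            y = y * mask
        time_loss = torch.mean(torch.abs(y_hat - y))
        mag_hat = torch.abs(torch.fft.rfft(y_hat, dim=-1))
        mag_true = torch.abs(torch.fft.rfft(y, dim=-1))
        if self.norm:
            # Normalize magnitudes by horizon length H so the frequency
            # term stays stable across varying sequence lengths.
            h = y.shape[-1]
            mag_hat = mag_hat / h
            mag_true = mag_true / h
        freq_loss = torch.mean(torch.abs(mag_hat - mag_true))
        return time_loss + self.lam * freq_loss
```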

Testing

Tests were performed locally via PyTest. Baseline values were computed from the formulas in pure NumPy and compared against the PyTorch implementations. The loss functions are numerically stable and work within the NeuralForecast repository.
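The actual test file is not shown in this thread, but the described strategy amounts to a cross-check like the following (the helper `numpy_fft_mae` is hypothetical, written here to mirror the frequency-domain MAE):

```python
import numpy as np
import torch

def numpy_fft_mae(y_hat: np.ndarray, y: np.ndarray) -> float:
    # Pure-NumPy baseline: MAE between RFFT magnitude spectra.
    return float(np.mean(np.abs(np.abs(np.fft.rfft(y_hat)) -
                                np.abs(np.fft.rfft(y)))))

y_hat = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.5, 2.5, 3.5])

baseline = numpy_fft_mae(y_hat, y)
torch_val = torch.mean(torch.abs(
    torch.abs(torch.fft.rfft(torch.tensor(y_hat))) -
    torch.abs(torch.fft.rfft(torch.tensor(y)))
)).item()

# NumPy baseline and torch implementation should agree closely.
assert abs(baseline - torch_val) < 1e-6
```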

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.


@marcopeix
Contributor

Hi, is there a paper that details these new losses and benchmarks them on known public datasets?

Even if we do decide to move forward with this, the PR is missing:

  • proper tests to ensure it works with all types of models (univariate, multivariate, recurrent)
  • update to the documentation
