Increase N-BEATS reproducibility by providing pre-computed forecasts on the datasets evaluated in the original paper #6

@PhilippeChatigny

Description

Hi,

I propose that the point forecasts of N-BEATS be included in this repo for all the datasets evaluated in the original paper, as was done in the M4 competition's GitHub repository. This would ease comparison of the N-BEATS model with others and increase the visibility of the N-BEATS paper.

Besides reproducing the experimental results presented in the paper, it is often more convenient to rely on precomputed forecasts when comparing different models on the same dataset. For instance, the M4 competition's GitHub repository provides the point forecasts of all submitted models as compressed CSV files, which makes it possible to compare models on individual series. Thanks to those files, we can evaluate ES-RNN and FFORMA under different loss functions without rerunning them, which matters given the execution time needed to reproduce their forecasts; the same is not currently possible for N-BEATS. It would be great if N-BEATS were not the exception here. The same argument holds for the other datasets evaluated in the paper. A sketch of the kind of per-series evaluation this would enable follows below.
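As a minimal sketch of why precomputed point forecasts are convenient, the snippet below evaluates a forecast CSV against a test-set CSV per series with a custom loss (sMAPE here). The file names and the layout (one row per series id, followed by the horizon values, as in the M4 repository) are assumptions for illustration only.

```python
import numpy as np
import pandas as pd


def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Symmetric MAPE in percent, as used in the M4 competition."""
    return 200.0 * np.mean(np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))


# Hypothetical file names: one row per series id, columns = forecast horizon.
forecasts = pd.read_csv("nbeats_point_forecasts.csv", index_col=0)
actuals = pd.read_csv("test_set.csv", index_col=0)

# Per-series scores: this is the comparison that precomputed forecasts make
# cheap, since no model has to be retrained or rerun.
per_series = {
    sid: smape(actuals.loc[sid].dropna().to_numpy(),
               forecasts.loc[sid].dropna().to_numpy())
    for sid in forecasts.index
}

print("mean sMAPE:", np.mean(list(per_series.values())))
```

Swapping `smape` for any other loss (MASE, OWA components, quantile losses) would then be a one-line change, with no need to rerun the model.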

Great repo btw!
