[Fix] Poor performance with the NegativeBinomial DistributionLoss #1289
marcopeix merged 1 commit into Nixtla:main from
Conversation
@Antoine-Schwartz When you have time, could you help check out the branch? These two files are my tests using your reproducible script (only considering the NHITS model due to limited computing resources), and we can see that 'on-dev-branch-evaluate.csv' shows improved performance compared to 'on-master-branch-evaluate.csv' (without the proposed fix). @elephaint @marcopeix @jmoralez
Hello @JQGoh, I'm glad someone is finally taking this subject seriously. Indeed, since I didn't have time to dissect the code in detail to understand the problem, I had gone for the default. However, my team has noticed that
I relaunched your example with one model only, NHITS. The attached file 'on-dev-branch-evaluate.csv' shows an improvement in the loss score compared to the other file, but I would like to hear from others and invite more tests to check whether this indeed solves the issue. When I read the equation and consider the transformation applied to the parameters used in the mean-parametrization of the Negative Binomial distribution, the fix makes sense as well.
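For context, here is a minimal sketch (plain Python, illustrative only, not the library's actual code) of the mean-parametrization mentioned above: given a predicted mean `mu` and dispersion `alpha`, the Negative Binomial parameters can be recovered as `total_count = 1/alpha` and `probs = mu / (mu + 1/alpha)`, so that the distribution's mean equals `mu`. Any rescaling of the target therefore has to be applied to `mu` before this transformation.

```python
# Illustrative sketch of the mean-parametrization of the Negative Binomial
# distribution (hypothetical helpers, not neuralforecast's actual code).

def mean_parametrize(mu, alpha):
    """Map mean `mu` and dispersion `alpha` to (total_count, probs)."""
    total_count = 1.0 / alpha
    probs = mu / (mu + total_count)  # success probability
    return total_count, probs

def nb_mean(total_count, probs):
    """Mean of NB(total_count, probs) in the 'count successes' convention."""
    return total_count * probs / (1.0 - probs)

mu, alpha = 50.0, 0.5
r, p = mean_parametrize(mu, alpha)
assert abs(nb_mean(r, p) - mu) < 1e-9  # the mean round-trips to mu

# Scaling the target by s should scale the mean by s while leaving the
# dispersion untouched; the parametrization then still round-trips.
s = 10.0
r2, p2 = mean_parametrize(mu * s, alpha)
assert abs(nb_mean(r2, p2) - mu * s) < 1e-9
```

This is only a sanity check of the algebra; the actual transformation in neuralforecast's `DistributionLoss` may differ in details (clipping, softplus activations, etc.).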
I would appreciate your help if you could take any model/dataset of your choice, run it with both versions (with and without the fix), and compare the results.
I took a few minutes to test it out. |
Also tested on my end, and I see an improvement with DeepAR, NHITS and TFT. Thanks a lot @JQGoh for the fix, and thank you @Antoine-Schwartz for testing! |
An attempt to address #712; I suspect that the wrong scaling parameters were chosen during the Negative Binomial distribution computation.
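To illustrate why the choice of scaling parameter matters here, a small hedged sketch (hypothetical names, assumed mean-parametrization with Var[y] = mu + alpha * mu^2; this is not the PR's actual change): the dispersion enters the variance quadratically, so applying the data scale to the wrong parameter distorts the predictive variance and thus the likelihood being optimized.

```python
# Illustrative only: under the assumed mean-parametrization,
# Var[y] = mu + alpha * mu**2, so mis-applying the data scale to the
# dispersion term (one plausible wrong choice) inflates the variance.

def nb_variance(mu, alpha):
    return mu + alpha * mu ** 2

scale = 100.0
mu, alpha = 3.0, 0.2

# Correct: rescale the mean only, keep the dispersion fixed.
var_ok = nb_variance(mu * scale, alpha)

# Incorrect: rescale the dispersion as well.
var_bad = nb_variance(mu * scale, alpha * scale)

assert var_bad > var_ok  # the mis-scaled variant blows up the variance
```

Again, this only motivates the symptom (poorly calibrated, high-loss forecasts); the concrete fix lives in the branch's changes to the distribution's scale-decoupling code.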