
Benchmarks are generally useful, but the numbers in papers are not independently meaningful; what makes them meaningful is the comparison between numbers obtained with the same methodology. So if you really want to reproduce the numbers reported in the paper for the model you plan to use, you have to set up your experiment exactly as the authors did: same dataset, same hyper-parameters, same number of epochs, and so on. In practice this is hardly possible, because training involves random noise (seeds) and the train/validation/test splits are usually drawn at random and not published.
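To make this concrete, here is a minimal sketch (assuming PyTorch and scikit-learn, which the paper's authors may or may not have used) of the two sources of run-to-run variation mentioned above: random seeds and random data splits. The seed value and the `dummy_samples` placeholder are illustrative, not taken from any specific paper.

```python
# Minimal sketch: pin the randomness that usually makes reproduced
# numbers drift from the ones reported in a paper.
import random

import numpy as np
import torch
from sklearn.model_selection import train_test_split


def seed_everything(seed: int = 42) -> None:
    """Fix the random seeds so repeated runs produce the same numbers."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)

# Fixing `random_state` pins the train/test split. Papers rarely publish
# the split (or the seed) they used, so even with identical code your
# split, and therefore your numbers, will generally differ.
dummy_samples = list(range(1000))  # placeholder for the real dataset indices
train_idx, test_idx = train_test_split(dummy_samples, test_size=0.2, random_state=42)
```

Even with everything pinned like this, your numbers only become reproducible across your own runs, not necessarily identical to the paper's, because the authors' seed and split remain unknown.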

Conclusion: it is normal to get numbers that differ from the ones reported in the papers you read.

Answer selected by djdameln
This discussion was converted from issue #540 on September 22, 2022 08:59.