The TFB benchmark uses some low-level APIs; I made some performance-related changes. The next run should work, the fix is already merged.
-
You may have guessed this already, but what prompted this thread is that ntex has once again failed to run on TechEmpower, in the temporary results for the runs that started Feb. 3 and Feb. 11 of this year.
As a casual follower of this repo, I have sometimes opened issues when I noticed ntex failing to appear in the TechEmpower temporary results. I started doing this because failing to appear in the temporary results prevented ntex from appearing in the last official round they published, and I thought it was a shame that the framework missed the public standings. But I don't want to create busywork for the maintainers here, in case you no longer care whether ntex is included there.
Is it worthwhile to the ntex maintainers for me to open these issues when they arise?
If so, is there something intrinsic to the way things are set up in the TechEmpower/FrameworkBenchmarks repo that causes ntex to periodically stop running successfully there? For instance, I can see in the recent failure listings that there is a new configuration (likely for a different async backend) starting with "plt". But I have to wonder why these new test files would somehow(?) cause all the other test entries to fail. In fact, I've never seen a case where only some of the test submissions failed while the rest succeeded, which makes me wonder whether the files are coupled in such a way that nothing works when one little thing goes wrong.
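For context, and only as a guess on my part rather than anything the maintainers have confirmed: in TechEmpower/FrameworkBenchmarks, all of a framework's test variants are declared together in a single benchmark_config.json in the framework's directory, and the variants typically build from the same source tree. An abridged sketch of what that looks like (the "plt" entry and its field values are illustrative, and the real file requires more fields than shown here):

```json
{
  "framework": "ntex",
  "tests": [
    {
      "default": {
        "json_url": "/json",
        "plaintext_url": "/plaintext",
        "port": 8080,
        "language": "Rust",
        "display_name": "ntex"
      },
      "plt": {
        "plaintext_url": "/plaintext",
        "port": 8080,
        "language": "Rust",
        "display_name": "ntex [plt]"
      }
    }
  ]
}
```

Each named entry maps to its own Dockerfile (the non-default ones follow the `<framework>-<name>.dockerfile` convention, e.g. ntex-plt.dockerfile), but those Dockerfiles usually compile the same Cargo workspace. If that is the case here, a compile error introduced while adding one new variant could fail the build for every variant at once, which would be consistent with all the entries failing together rather than one at a time.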
Thanks for all you do and for considering these questions!