add delay between runs + result variation using different order #55
Unanswered
devara-gheist asked this question in Q&A
Replies: 1 comment
Static benchmarks are compiled and executed with:

```js
import { setTimeout as sleep } from 'timers/promises';

summary(() => {
  bench('#1', async function* () {
    await sleep(10000);
    yield () => Bun.spawnSync(...);
  });
  /* ... */
});

await run();
```
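For reference, here is a standalone sketch of the same delay pattern without mitata or Bun (assumes Node.js 16+ for the `timers/promises` API; `delayedBench` is a hypothetical helper, not part of mitata). In mitata's generator benches, everything before the `yield` is per-run setup, and only the yielded function is measured:

```javascript
// Standalone sketch of the delay-before-measurement pattern
// (assumes Node.js >= 16; run as an ES module).
import { setTimeout as sleep } from 'timers/promises';

// Hypothetical helper mimicking mitata's generator benches:
// the await before the yield is setup (here, a cool-down),
// and the yielded function is the part a harness would time.
async function* delayedBench(delayMs, workload) {
  await sleep(delayMs); // rest period before the measured work
  yield workload;
}

const start = Date.now();
let result;
for await (const fn of delayedBench(50, () => 2 + 2)) {
  result = fn(); // only this call would be timed by the harness
}
console.log(result);                   // → 4
console.log(Date.now() - start >= 45); // → true: the delay ran first
```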
Here's the code I'm using; note the programs are compiled binaries in several different languages:
However, monitoring the CPU via btop, I see no delay at all between runs.
Please take a look at the series of images below:

[Image: Order-1]

[Image: Order-2]
Benchmarks that run early in the order get better results, while those that run last get worse results. How can I avoid this? This behaviour was consistent across the roughly ten times I repeated the benchmark.
This is also the main reason I was looking for a way to introduce a delay: to avoid thermal throttling.
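As a sketch of the rest-between-runs idea (under the assumption that a fixed rest period is enough to let the CPU leave boost/thermal-throttle states), a cool-down can be inserted at the harness level; with Hyperfine, its `--prepare 'sleep N'` option serves the same purpose. The `runWithCooldown` helper and the workloads below are hypothetical stand-ins:

```javascript
// Hypothetical harness: run each workload after a fixed cool-down,
// so earlier runs don't heat the CPU and penalize later ones.
// Assumes Node.js >= 16 (timers/promises), run as an ES module.
import { setTimeout as sleep } from 'timers/promises';

async function runWithCooldown(cooldownMs, workloads) {
  const timings = [];
  for (const [name, fn] of workloads) {
    await sleep(cooldownMs); // rest period against thermal throttling
    const start = Date.now();
    fn();
    timings.push([name, Date.now() - start]);
  }
  return timings;
}

// Placeholder workloads, not the author's compiled binaries:
const timings = await runWithCooldown(50, [
  ['a', () => { for (let i = 0; i < 1e6; i++); }],
  ['b', () => { for (let i = 0; i < 1e6; i++); }],
]);
console.log(timings.length); // → 2
```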
Below is the benchmark summary I got for the same set of programs via Hyperfine:
Note that the Mitata and Hyperfine benchmarks were run within 10-20 minutes of each other, with the same system processes running. On the bright side, the relative percentages (which program is faster, and by how much) are close between the Mitata and Hyperfine summaries.
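The relative comparison being made here ("which one is faster by how many %") can be written out explicitly; the function name and sample numbers below are made up for illustration, not the author's measurements:

```javascript
// Sketch: how much faster the fast program is than the slow one,
// expressed as a percentage of the slower program's mean time.
function fasterByPercent(fastMean, slowMean) {
  return ((slowMean - fastMean) / slowMean) * 100;
}

// Placeholder numbers, not real results:
console.log(fasterByPercent(80, 100)); // → 20
```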
[Update]: below is the latest benchmark I did with Mitata and Hyperfine.

Notice the numbers are closer because I "rested" the system for some time, but also note that Mitata's numbers are consistently a bit higher than Hyperfine's.
Anyway, I actually like Mitata, especially the option to add graphs to the benchmark results. I'm looking forward to using Mitata more once I understand it better. Cheers!