There are numerous research papers and benchmarks out there, and an environment with which anyone can easily plug and play with a benchmark would be a great addition.
Here is one such example: https://arxiv.org/pdf/2509.08494
This idea originally came from exploring Prime Intellect's Environments Hub.
Right now the current implementation has .reset(), .step(), etc.,
but the majority of benchmarks are single-shot: there is no .step() involved, .reset() just means moving to a different example, and evaluation happens separately.
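To make the question concrete, here is a minimal sketch of how a single-shot benchmark could be squeezed into a reset()/step() interface; the class name, dataset format, and exact-match scoring are all hypothetical, not part of the existing implementation:

```python
# Hypothetical sketch: a single-shot benchmark behind reset()/step().
# Each episode is one example; reset() yields the next prompt, and a
# single step(answer) scores it and immediately terminates the episode.
from dataclasses import dataclass, field


@dataclass
class SingleShotBenchmarkEnv:
    # dataset is an illustrative list of (prompt, reference_answer) pairs
    dataset: list
    _index: int = field(default=0)
    _current: tuple = field(default=None)

    def reset(self):
        # reset() advances to a different example rather than replaying one
        self._current = self.dataset[self._index % len(self.dataset)]
        self._index += 1
        prompt, _ = self._current
        return prompt  # the observation is just the prompt

    def step(self, answer: str):
        # Single-shot: the first step always ends the episode (done=True)
        _, reference = self._current
        reward = 1.0 if answer.strip() == reference.strip() else 0.0
        return None, reward, True, {"reference": reference}


# Usage: one reset() + one step() per example
env = SingleShotBenchmarkEnv(dataset=[("2+2=", "4"), ("3+3=", "6")])
obs = env.reset()                    # "2+2="
_, reward, done, info = env.step("4")
```

This keeps the existing interface intact while reducing an "episode" to a single scored example; evaluation can then just loop reset()/step() over the dataset.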
I just wanted to get a high-level idea from the maintainers: how should one implement these benchmarks?