Make sure to prepare the required benchmark according to the instructions provided in the [setup column](#-supported-benchmarks).
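Once the benchmark is prepared, here is a minimal sketch of launching a study (`make_study`, `AGENT_4o_MINI`, and the `"miniwob"` benchmark name are assumed to exist under these names in `agentlab.experiments.study` and `agentlab.agents.generic_agent`):

```python
# a minimal sketch; `make_study`, `AGENT_4o_MINI`, and the "miniwob"
# benchmark identifier are assumptions, not guaranteed API
from agentlab.agents.generic_agent import AGENT_4o_MINI
from agentlab.experiments.study import make_study

study = make_study(
    benchmark="miniwob",         # assumed benchmark identifier
    agent_args=[AGENT_4o_MINI],  # agent configuration(s) to evaluate
)
study.run(n_jobs=4)  # assumed signature; runs experiments in parallel
```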
### Loading Results
The class [`ExpResult`](https://github.com/ServiceNow/BrowserGym/blob/da26a5849d99d9a3169d7b1fde79f909c55c9ba7/browsergym/experiments/src/browsergym/experiments/loop.py#L595) provides a lazy loader for all the information of a specific experiment. You can use [`yield_all_exp_results`](https://github.com/ServiceNow/BrowserGym/blob/da26a5849d99d9a3169d7b1fde79f909c55c9ba7/browsergym/experiments/src/browsergym/experiments/loop.py#L872) to recursively find all results in a directory. Finally, [`load_result_df`](https://github.com/ServiceNow/AgentLab/blob/be1998c5fad5bda47ba50497ec3899aae03e85ec/src/agentlab/analyze/inspect_results.py#L119C5-L119C19) gathers all the summary information in a single dataframe. See [`inspect_results.ipynb`](src/agentlab/analyze/inspect_results.ipynb) for example usage.
```python
from agentlab.analyze import inspect_results

# load the summary of all experiments of the study into a single dataframe
# ("path/to/your/study" is a placeholder for your study directory)
result_df = inspect_results.load_result_df("path/to/your/study")
```
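For per-experiment access, a minimal sketch using `yield_all_exp_results` with the lazy `ExpResult` loader (the import path and the `exp_dir`/`summary_info` attributes are assumptions based on the description above):

```python
from browsergym.experiments.loop import yield_all_exp_results  # assumed import path

# recursively discover every experiment stored under the study directory
for exp_result in yield_all_exp_results("path/to/your/study"):
    # ExpResult loads data lazily; `exp_dir` and `summary_info` are assumed
    # attributes exposing the experiment folder and its episode summary
    print(exp_result.exp_dir, exp_result.summary_info)
```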
Once this is selected, you can see the trace of your agent on the given task. Click on the profiling image to select a step and observe the action taken by the agent.
**⚠️ Note**: Gradio is still under active development, and unexpected behavior has frequently been observed. Version 5.5 seems to work properly so far. If you're not sure that the proper information is displayed, refresh the page and select your experiment again.
## 🏆 Leaderboard

The official unified [leaderboard](https://huggingface.co/spaces/ServiceNow/browsergym-leaderboard) covers all benchmarks.

Experiments are underway to add more reference points using GenericAgent. We are also working on code to automatically push a study to the leaderboard.
## 🤖 Implement a new Agent
Get inspiration from the `MostBasicAgent` in
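As a rough sketch of the expected shape, assuming the BrowserGym `Agent` base class with a `get_action(obs)` method that returns an action string plus an info dict (the import path and the `EchoAgent` example are assumptions, not the library's documented API):

```python
from browsergym.experiments.agent import Agent  # assumed import path


class EchoAgent(Agent):
    """Hypothetical toy agent that always sends the same chat message."""

    def get_action(self, obs: dict) -> tuple[str, dict]:
        # a real agent would inspect obs (DOM, AXTree, screenshot, ...) here
        action = 'send_msg_to_user("Hello from EchoAgent!")'
        return action, {}  # (action string, extra info for logging/debugging)
```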
Several factors can influence the reproducibility of results in the context of evaluating agents on dynamic benchmarks.
### Factors affecting reproducibility
* **Software version**: Different versions of Playwright or any package in the software stack could influence the behavior of the benchmark or the agent.
* **API-based LLMs silently changing**: Even for a fixed version, an LLM may be updated, e.g., to incorporate the latest web knowledge.
* **Live websites**:
  * WorkArena: The demo instance is mostly fixed in time to a specific version, but ServiceNow sometimes pushes minor modifications.
  * AssistantBench and GAIA: These rely on the agent navigating the open web. The experience may change depending on the country or region, and some websites might be displayed in different languages by default.
* **Stochastic agents**: Setting the temperature of the LLM to 0 can reduce most stochasticity (see the sketch after this list).
* **Non-deterministic tasks**: For a fixed seed, the changes should be minimal.
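For example, a hedged sketch of pinning the temperature to 0 on an existing agent configuration (`AGENT_4o_MINI` and the `chat_model_args.temperature` attribute path are assumptions):

```python
from copy import deepcopy

from agentlab.agents.generic_agent import AGENT_4o_MINI  # assumed config name

# copy the baseline config so the shared constant stays untouched, then
# pin the sampling temperature to 0 (assumed attribute path)
agent_args = deepcopy(AGENT_4o_MINI)
agent_args.chat_model_args.temperature = 0.0
```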
### Reproducibility Features
* `Study` contains a dict of information about reproducibility, including benchmark version, package version, and commit hash.
* The `Study` class allows automatic upload of your results to [`reproducibility_journal.csv`](reproducibility_journal.csv). This makes it easier to populate a large number of reference points. For this feature, you need to `git clone` the repository and install it via `pip install -e .`.
* **Reproduced results in the leaderboard**: For agents that are reproducible, we encourage users to try to reproduce the results and upload them to the leaderboard. There is a special column containing information about all reproduced results of an agent on a benchmark.
* **ReproducibilityAgent**: [You can run this agent](src/agentlab/agents/generic_agent/reproducibility_agent.py) on an existing study, and it will try to re-run the same actions on the same task seeds. A visual diff of the two prompts will be displayed in the AgentInfo HTML tab of AgentXray. You will be able to inspect, on some tasks, what kind of changes occurred between the two executions. **Note**: this is a beta feature and will need some adaptation for your use case.