Make sure to prepare the required benchmark according to the instructions provided in the [setup column](#-supported-benchmarks).
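Benchmark preparation typically comes down to installing the benchmark's dependencies and exporting a few environment variables. As a rough illustration for MiniWoB++ (the path below is a hypothetical placeholder; follow the setup instructions for your benchmark):

```bash
# Illustrative MiniWoB++ setup (adjust the path to your local checkout).
# Other benchmarks have their own steps; see the setup column above.
export MINIWOB_URL="file://$HOME/miniwob-plusplus/miniwob/html/miniwob/"
```
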
### Loading Results

The class [`ExpResult`](https://github.com/ServiceNow/BrowserGym/blob/da26a5849d99d9a3169d7b1fde79f909c55c9ba7/browsergym/experiments/src/browsergym/experiments/loop.py#L595) provides a lazy loader for all the information of a specific experiment. You can use [`yield_all_exp_results`](https://github.com/ServiceNow/BrowserGym/blob/da26a5849d99d9a3169d7b1fde79f909c55c9ba7/browsergym/experiments/src/browsergym/experiments/loop.py#L872) to recursively find all results in a directory. Finally, [`load_result_df`](https://github.com/ServiceNow/AgentLab/blob/be1998c5fad5bda47ba50497ec3899aae03e85ec/src/agentlab/analyze/inspect_results.py#L119C5-L119C19) gathers all the summary information in a single dataframe. See [`inspect_results.ipynb`](src/agentlab/analyze/inspect_results.ipynb) for example usage.

```python
from agentlab.analyze import inspect_results

# gather the summary of every experiment of a study into one dataframe
result_df = inspect_results.load_result_df("path/to/your/study")
```
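
For lower-level access, here is a minimal sketch iterating over individual experiments with `yield_all_exp_results`; the `exp_dir` and `summary_info` attributes on `ExpResult`, and the `cum_reward` key, are assumptions to verify against your BrowserGym version:

```python
from browsergym.experiments.loop import yield_all_exp_results

# recursively discover all experiments under the study directory;
# each ExpResult loads its files lazily, on first attribute access
for exp_result in yield_all_exp_results("path/to/your/study"):
    print(exp_result.exp_dir, exp_result.summary_info.get("cum_reward"))
```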

Once this is selected, you can see the trace of your agent on the given task. Click on the image to select a step and observe the action taken by the agent.

**⚠️ Note**: Gradio is still under active development, and unexpected behavior has frequently been observed. Version 5.5 seems to work properly so far. If you're not sure that the proper information is displaying, refresh the page and select your experiment again.
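
If a refresh doesn't help, restarting the interface usually does. This assumes AgentXray is started from its command-line entry point:

```bash
# relaunch the AgentXray server, then reselect your experiment
agentlab-xray
```
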
## 🏆 Leaderboard

An official unified [leaderboard](https://huggingface.co/spaces/ServiceNow/browsergym-leaderboard) is available across all benchmarks.

Experiments are underway to add more reference points using GenericAgent. We are also working on code to automatically push a study to the leaderboard.

## 🤖 Implement a new Agent
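
As a rough sketch of the moving parts, assuming the BrowserGym `Agent` interface with a `get_action(obs)` method and an `AbstractAgentArgs` factory (names and import paths may differ across versions; see `src/agentlab/agents` for real implementations):

```python
from dataclasses import dataclass

from browsergym.experiments.agent import Agent
from browsergym.experiments.loop import AbstractAgentArgs


class EchoAgent(Agent):
    """Toy agent that ignores the observation and sends a fixed message."""

    def get_action(self, obs: dict) -> tuple[str, dict]:
        # inspect obs (DOM, AXTree, screenshot, ...) and return a
        # BrowserGym action string plus an info dict
        return 'send_msg_to_user("hello")', {}


@dataclass
class EchoAgentArgs(AbstractAgentArgs):
    agent_name: str = "EchoAgent"

    def make_agent(self) -> Agent:
        return EchoAgent()
```
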
## Reproducibility

Several factors can influence the reproducibility of results in the context of evaluating agents on dynamic benchmarks.

### Factors affecting reproducibility

* **Software version**: Different versions of Playwright or any package in the software stack could influence the behavior of the benchmark or the agent.
* **API-based LLMs silently changing**: Even for a fixed version, an LLM may be updated, e.g., to incorporate the latest web knowledge.
* **Live websites**:
  * WorkArena: The demo instance is mostly fixed in time to a specific version, but ServiceNow sometimes pushes minor modifications.
  * AssistantBench and GAIA: These rely on the agent navigating the open web. The experience may change depending on the country or region; some websites might be in different languages by default.
* **Stochastic Agents**: Setting the temperature of the LLM to 0 can reduce most stochasticity (see the sketch after this list).
* **Non-deterministic tasks**: For a fixed seed, the changes should be minimal.
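
A minimal sketch of the temperature-pinning idea, assuming `AGENT_4o_MINI` keeps its current import path and that the chat model args expose a `temperature` field (both are assumptions to check against your AgentLab version):

```python
from copy import deepcopy

from agentlab.agents.generic_agent import AGENT_4o_MINI  # assumed import path

agent_args = deepcopy(AGENT_4o_MINI)
# assumed attribute name; pin the LLM's sampling temperature to 0
agent_args.chat_model_args.temperature = 0
```
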
### Reproducibility Features

* `Study` contains a dict of information about reproducibility, including the benchmark version, package versions, and commit hash (see the sketch after this list).
* The `Study` class allows automatic upload of your results to [`reproducibility_journal.csv`](reproducibility_journal.csv). This makes it easier to populate a large number of reference points. For this feature, you need to `git clone` the repository and install it via `pip install -e .`.
* **Reproduced results in the leaderboard**: For agents that are reproducible, we encourage users to try to reproduce the results and upload them to the leaderboard. There is a special column containing information about all reproduced results of an agent on a benchmark.
* **ReproducibilityAgent**: [You can run this agent](src/agentlab/agents/generic_agent/reproducibility_agent.py) on an existing study, and it will try to re-run the same actions on the same task seeds. A visual diff of the two prompts will be displayed in the AgentInfo HTML tab of AgentXray, letting you inspect, on some tasks, what kind of changes occurred between the two executions. **Note**: this is a beta feature and will need some adaptation for your own agent.
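
A minimal sketch of inspecting that reproducibility information, assuming a `Study.load` helper, a `reproducibility_info` attribute, and an `append_to_journal` method (all names to verify against `agentlab.experiments.study` in your version):

```python
from agentlab.experiments.study import Study

study = Study.load("path/to/your/study")  # assumed loader

# dict with benchmark version, package versions, commit hash, ...
print(study.reproducibility_info)

# optionally append the summary to reproducibility_journal.csv
# (requires a git clone and `pip install -e .`, as noted above)
study.append_to_journal()
```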