* yet another way to kill timed-out jobs (#108)
* Improve timeout handling in task polling logic
* Add method to override max_steps in Study class
* add support for tab visibility in observation flags and update related components
* fix tests
* black
* Fix sorting bug.
* Improve directory content retrieval with summary statistics
* fix test
* black
* tmp
* add error report, add cumulative cost to summary, and enable the ray backend by default
* displaying exp names in ray dashboard (#123)
* fixing tests
* enabling chat o_0 (#124)
* sequential studies
* little bug
* more flexible requirement
* improve readme
* Enhance agent configuration and logging in study setup
- Updated `get_vision_agent` to append "_vision" to agent names.
- Modified `BaseMessage.__str__` to include a no-warning option for logging.
- Improved `make_study` function to accept a single agent's args and benchmark types (see the sketch after this list).
- Added detailed docstrings for better clarity on parameters and functionality.
- Introduced `avg_step_timeout` and `demo_mode` attributes in the Study class.
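For context, a minimal sketch of how the updated `make_study` might be called after these changes; the `AGENT_4o_MINI` preset, module paths, and keyword names are assumptions based on the notes above, not verified against a release:

```python
# Sketch only: the imported names are assumptions based on the commit notes.
from agentlab.agents.generic_agent import AGENT_4o_MINI  # assumed preset agent args
from agentlab.experiments.study import make_study  # assumed module path

# make_study now accepts a single agent's args and a benchmark name directly,
# instead of requiring a list of agents.
study = make_study(
    agent_args=AGENT_4o_MINI,
    benchmark="miniwob",
    comment="single-agent study",
)
study.run(n_jobs=4)
```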
* get_text was added by mistake
* Update README and Jupyter notebook with improved documentation and result analysis instructions
* Update requirements to include Jupyter support for black
* Update README.md
* Fix formatting and improve clarity in README.md
* Update README.md to enhance visuals and improve navigation
* Add badges to README.md for PyPI, GitHub stars, and CI status
* Add video demonstration to AgentXray section in README.md
* test video
* xray video test
* Update AgentXray section in README.md with new asset link
* minor
* fix setup link ... again
* remove upper case letter before getting the benchmark
* minor
* Update ReproducibilityAgent link in README.md for better accessibility
---------
Co-authored-by: Maxime Gasse <[email protected]>
Co-authored-by: Thibault LSDC <[email protected]>
Inspect the behaviour of your agent using AgentXray. The refresh mechanism is currently a bit clunky: to see an updated view of currently running experiments, refresh the page, refresh the experiment directory list, and select your experiment again.
You can load previous or ongoing experiments in the directory `AGENTLAB_EXP_ROOT` and visualize the results in a gradio interface.
In the following order, select the experiment, the agent, the task, and the seed you want to visualize.

Once this is selected, you can see the trace of your agent on the given task. Click on the profiling image to select a step and observe the action taken by the agent.
**⚠️ Note**: Gradio is still in development and unexpected behavior has been noticed frequently. Version 5.5 seems to work properly so far. If you are not sure the right information is displayed, refresh the page and select your experiment again.
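To check programmatically which studies AgentXray can list, here is a small stdlib sketch; the `~/agentlab_results` fallback is an assumption for when `AGENTLAB_EXP_ROOT` is unset:

```python
import os
from pathlib import Path

# Folder that AgentXray scans for experiments; the fallback path is an assumption.
exp_root = Path(os.environ.get("AGENTLAB_EXP_ROOT", "~/agentlab_results")).expanduser()

# Show study directories, most recently modified first, to spot ongoing experiments.
studies = sorted(
    (p for p in exp_root.iterdir() if p.is_dir()),
    key=lambda p: p.stat().st_mtime,
    reverse=True,
)
for study_dir in studies[:10]:
    print(study_dir.name)
```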
For a better integration with the tools, make sure to implement most functions in the [AgentArgs](src/agentlab/agents/agent_args.py#L5) API and the extended `bgym.AbstractAgentArgs`.

If you think your agent should be included directly in AgentLab, let us know and it can be added in agentlab/agents/ with the name of your agent.
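As a rough illustration of that API, a hedged sketch of a custom agent's args; `make_agent` is the hook from `bgym.AbstractAgentArgs`, while `MyAgent`, its fields, and the simplified `get_action` interface are assumptions made for the example:

```python
from dataclasses import dataclass

import bgym  # provides AbstractAgentArgs, referenced in the paragraph above


class MyAgent:
    """Hypothetical agent; replace with your own policy."""

    def __init__(self, temperature: float):
        self.temperature = temperature

    def get_action(self, obs):
        # Inspect `obs` and return an action string plus an info dict;
        # noop() is only a placeholder here.
        return "noop()", {}


@dataclass
class MyAgentArgs(bgym.AbstractAgentArgs):
    agent_name: str = "MyAgent"
    temperature: float = 0.0

    def make_agent(self) -> MyAgent:
        # Called once per experiment to build a fresh agent instance.
        return MyAgent(temperature=self.temperature)
```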
## ↻ Reproducibility
* **Reproduced results in the leaderboard**. For agents that are reproducible, we encourage users to try to reproduce the results and upload them to the leaderboard. There is a special column containing information about all reproduced results of an agent on a benchmark.
* **ReproducibilityAgent**: [You can run this agent](src/agentlab/agents/generic_agent/reproducibility_agent.py) on an existing study and it will try to re-run the same actions on the same task seeds. A visual diff of the two prompts will be displayed in the AgentInfo HTML tab of AgentXray, so you can inspect on some tasks what kind of changes occurred between the two executions. **Note**: this is a beta feature and will need some adaptation for your own agent.
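A hedged sketch of how such a re-run could be launched, assuming the linked module exposes a `reproduce_study` helper that rebuilds a study from an existing study directory; the function name, signature, and example path are assumptions:

```python
from pathlib import Path

# Assumed helper in the linked module; name and signature are not verified.
from agentlab.agents.generic_agent.reproducibility_agent import reproduce_study

# Point at an existing study directory under AGENTLAB_EXP_ROOT (path is illustrative).
original_dir = Path("~/agentlab_results/2024-11-01_my_study").expanduser()

study = reproduce_study(original_dir)  # replays the same actions on the same task seeds
study.run(n_jobs=1)  # then compare prompt diffs in AgentXray's AgentInfo HTML tab
```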