Commit 9aa7f93

Add screensuite (#2896)
* Add screensuite
* Add thumbnail
* Reorder authors
* Add video
* Added latex table
* List evaluated models
* Add holo1
* Add to toctree
* Wording nit
* Fixes
* Fix
* Add CTA
* Update CTA
* Update title
1 parent 338353e commit 9aa7f93

File tree

3 files changed: +125 -0 lines changed


_blog.yml

Lines changed: 12 additions & 0 deletions
@@ -6125,3 +6125,15 @@
   - nlp
   - vlm
   - nanovlm
+
+- local: screensuite
+  title: "ScreenSuite - The most comprehensive evaluation suite for GUI Agents!"
+  author: a-mahla
+  thumbnail: /blog/assets/screensuite/thumbnail.png
+  date: Jun 6, 2025
+  tags:
+  - screensuite
+  - gui-agents
+  - agents
+  - smolagents
+  - vlm

assets/screensuite/thumbnail.png

2.61 MB

screensuite.md

Lines changed: 113 additions & 0 deletions
@@ -0,0 +1,113 @@
---
title: "ScreenSuite - The most comprehensive evaluation suite for GUI Agents!"
thumbnail: /blog/assets/screensuite/thumbnail.png
authors:
- user: a-mahla
- user: m-ric
- user: thomwolf
---

## Releasing ScreenSuite, the most comprehensive evaluation suite for GUI Agents!

**TL;DR**

Over the past few weeks, we’ve been working tirelessly on making GUI agents more open, accessible and easy to integrate. Along the way, we created the largest benchmarking suite for GUI agent performance 👉 let us introduce [ScreenSuite](https://github.com/huggingface/screensuite).

We are very excited to share it with you today: [ScreenSuite](https://github.com/huggingface/screensuite) is the most comprehensive and easiest way to evaluate [Vision Language Models](https://huggingface.co/blog/vlms) (VLMs) across many agentic capabilities!

### WTF is a GUI Agent?

<div>
<video controls style="margin-bottom:0;">
<source src="https://os-world.github.io/static/videos/main.mp4" type="video/mp4">
</video>
<p style="color:grey;margin-top:0;"><i>GUI Agents in action - courtesy of <a href="https://os-world.github.io/">OSWorld</a></i></p>
</div>

In short, an AI Agent is a robot that acts in the virtual world ([more thorough definition here](https://huggingface.co/docs/smolagents/conceptual_guides/intro_agents)).

In particular, a “GUI Agent” is an agent that lives in a GUI. Think “an agent that can do clicks and navigate on my desktop or my phone”, à la Claude Computer Use.

This means in essence that the AI model powering the agent will be given a task like “Fill the rest of this Excel column”, along with screen captures of the GUI. Using this information, it will then decide to take action on the system: `click(x=130, y=540)` to open a web browser, `type("Value for XYZ in 2025")`, `scroll(down=2)` to read further… To see a GUI agent in action, you can try our [Open Computer Agent](https://huggingface.co/spaces/smolagents/computer-agent), powered by [Qwen2.5-VL-72B](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct).
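
To make that loop concrete, here is a minimal sketch of a single vision-only agent step. It is illustrative only: `query_vlm` and the action grammar are placeholders we made up (plug in whichever VLM you like); only `pyautogui` is an actual library here.

```python
# Minimal sketch of one vision-only GUI-agent step (illustrative, not ScreenSuite code).
import io
import re

import pyautogui  # real library: screenshots plus mouse/keyboard control


def query_vlm(task: str, screenshot_png: bytes) -> str:
    """Hypothetical placeholder: send the task + screenshot to your VLM, get an action string back."""
    return 'click(x=130, y=540)'  # a real implementation would call e.g. Qwen2.5-VL here


def run_step(task: str) -> None:
    # 1. Observe: capture the screen (vision only, no DOM or accessibility tree)
    image = pyautogui.screenshot()
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")

    # 2. Decide: ask the VLM for the next action, e.g. 'click(x=130, y=540)'
    action = query_vlm(task, buffer.getvalue()).strip()

    # 3. Act: execute the predicted action on the live GUI
    if match := re.fullmatch(r"click\(x=(\d+), y=(\d+)\)", action):
        pyautogui.click(x=int(match.group(1)), y=int(match.group(2)))
    elif match := re.fullmatch(r'type\("(.*)"\)', action):
        pyautogui.write(match.group(1))
    elif match := re.fullmatch(r"scroll\(down=(\d+)\)", action):
        pyautogui.scroll(-int(match.group(1)))


run_step("Fill the rest of this Excel column")
```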

A good GUI agent will be able to navigate a computer just like we would, thus unlocking all computer tasks: scrolling through Google Maps, editing a file, buying an item online. This involves a variety of capabilities that can be hard to evaluate.

### Introducing ScreenSuite 🥳

The literature, for instance [Xu et al. (2025)](https://arxiv.org/abs/2412.04454) or [Qin et al. (2025)](https://arxiv.org/abs/2501.12326), generally splits GUI agent abilities amongst several categories:

1. Perception: correctly perceiving the information displayed on screen
2. Grounding: understanding the positioning of elements - this is paramount to click the correct place
3. Single-step actions: solving instructions correctly over one action
4. Multi-step agents: solving a higher-level goal through several actions in a GUI environment.

So our first contribution is to **gather and unify a comprehensive suite of 13 benchmarks spanning the full range of these GUI agent capabilities.**

If you look at the last category listed above, evaluating multi-step agentic capabilities is especially challenging as it requires virtual machines to run the agent’s environment, be it Windows, Android, Ubuntu... To address this, we provide support both for [E2B desktop](https://github.com/e2b-dev/desktop) remote sandboxes and for a new option we built from scratch that easily launches Ubuntu or Android virtual machines in Docker!
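
To give a sense of what evaluating a multi-step benchmark involves, here is a heavily simplified sketch of a single evaluation episode against such a sandboxed machine. Every name in it (`DesktopSandbox`, `Task.check_success`, `agent_step`) is a hypothetical stand-in rather than the actual ScreenSuite or E2B API; the point is the reset → act → score loop that needs a real VM behind it.

```python
# Illustrative only: hypothetical interfaces, not ScreenSuite's actual code.
from dataclasses import dataclass
from typing import Callable


class DesktopSandbox:
    """Hypothetical wrapper around an Ubuntu/Android VM in Docker or an E2B desktop sandbox."""

    def reset(self) -> None: ...  # restore a clean machine state
    def screenshot(self) -> bytes: ...  # the agent only ever sees pixels
    def execute(self, action: str) -> None: ...  # click/type/scroll inside the VM


@dataclass
class Task:
    instruction: str
    max_steps: int = 15

    def check_success(self, sandbox: DesktopSandbox) -> bool:
        """Task-specific check of the final environment state (e.g. inspect a file in the VM)."""
        ...


def run_episode(task: Task, sandbox: DesktopSandbox, agent_step: Callable[[str, bytes], str]) -> bool:
    """Observe, act, repeat, then score the final outcome rather than individual clicks."""
    sandbox.reset()
    for _ in range(task.max_steps):
        action = agent_step(task.instruction, sandbox.screenshot())
        if action == "done":
            break
        sandbox.execute(action)
    return task.check_success(sandbox)
```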

| **Category** | **Benchmark** | **Environment** | **Sample count** |
| ---------------------------- | ------------------------------------------------------------------------------------------------ | --------------- | ------------------- |
| **Perception / Grounding 👁️** | [ScreenQA-Short](https://github.com/google-research-datasets/screen_qa) | Mobile | 8.4k |
| | [ScreenQA-Complex](https://github.com/google-research-datasets/screen_qa/tree/main/complex_qa) | Mobile | 11.8k |
| | [ScreenSpot-v2](https://huggingface.co/datasets/OS-Copilot/ScreenSpot-v2/tree/main) | Desktop | 1.3k |
| | [ScreenSpot-Pro](https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding) | Desktop | 1.6k |
| | [WebSRC](https://huggingface.co/datasets/X-LANCE/WebSRC_v1.0) | Web | 52k |
| | [VisualWebBench](https://huggingface.co/datasets/visualwebbench/VisualWebBench) | Web | 1.5k |
| **Single-Step Actions 🎯** | [Showdown-clicks](https://huggingface.co/datasets/generalagents/showdown-clicks) | Web | 0.6k |
| | [AndroidControl](https://github.com/google-research/google-research/tree/master/android_control) | Mobile | 3k |
| | [Multimodal-Mind2Web](https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web) | Web | 6.4k |
| **Multi-Step Agents 🐾** | [AndroidWorld](https://github.com/google-research/android_world) (incl. MobileMiniWob) | Mobile | 116 tasks, infinite |
| | [OSWorld](https://os-world.github.io/) | Desktop | 369 |
| | [BrowseComp](https://openai.com/index/browsecomp/) | Web | 1.27k |
| | [GAIA-Web](https://huggingface.co/gaia-benchmark) | Web | 132 |
| | [Mind2Web-Live](https://osu-nlp-group.github.io/Mind2Web/) | Web | 208 |

**Implementation details**

We’ve carefully designed our benchmark suite with modularity and consistency in mind, ensuring strong alignment across tasks and environments. When required, especially for online benchmarks, we leverage [smolagents](https://github.com/huggingface/smolagents) as a framework layer to streamline agent execution and orchestration.

To support reproducibility and ease of use, we’ve built custom Dockerized containers that allow local deployment of full **Ubuntu Desktop** or **Android** environments.

Unlike many existing GUI benchmarks that rely on accessibility trees or other metadata alongside visual input, our stack is intentionally **vision-only**. While this can result in different scores on some established leaderboards, we deem that it creates a more realistic and challenging setup, one that better reflects how humans perceive and interact with graphical interfaces.

- All agentic frameworks (AndroidWorld, OSWorld, GAIA-Web, Mind2Web) use [smolagents](https://github.com/huggingface/smolagents) and rely solely on **vision**, without any accessibility tree or DOM added (in contrast with the evaluation settings reported in other sources).
- **Mind2Web (Multimodal)** originally used **element-name-based multiple-choice selection** based on the accessibility tree and screenshots, but we adapted it to **click precision within bounding boxes** using **vision only**, which significantly increases task difficulty.
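
To make "click precision within bounding boxes" concrete, here is a small sketch of that kind of scoring (our own simplified formulation, not the exact ScreenSuite metric): a predicted click counts as a hit if it lands inside the ground-truth element's bounding box.

```python
# Simplified illustration of click-in-bounding-box scoring (not the exact ScreenSuite metric).
from typing import NamedTuple


class BBox(NamedTuple):
    left: float
    top: float
    right: float
    bottom: float


def click_hits(x: float, y: float, target: BBox) -> bool:
    """A predicted click is counted as correct if it falls inside the target's bounding box."""
    return target.left <= x <= target.right and target.top <= y <= target.bottom


def grounding_accuracy(predicted_clicks: list[tuple[float, float]], targets: list[BBox]) -> float:
    """Fraction of samples where the predicted click lands inside the ground-truth box."""
    hits = sum(click_hits(x, y, box) for (x, y), box in zip(predicted_clicks, targets))
    return hits / len(targets)


# Example: a click at (132, 547) on an element spanning x in [120, 180] and y in [530, 560]
print(grounding_accuracy([(132, 547)], [BBox(120, 530, 180, 560)]))  # -> 1.0
```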

### Ranking leading VLMs on ScreenSuite 📊

We’ve evaluated leading VLMs on the benchmark:
- The [Qwen-2.5-VL series of models](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5) from 3B to 72B. These models are known for their amazing localization capabilities; in other words, they know the coordinates of any element in an image, which makes them suited for GUI agents that need to click precisely.
- [UI-Tars-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B), the all-rounder by ByteDance.
- [Holo1-7B](https://huggingface.co/Hcompany/Holo1-7B), the latest model by H company, showing extremely performant localization for its size.
- [GPT-4o](https://arxiv.org/abs/2410.21276)

Our scores are in general agreement with the scores reported in various sources! *With the caveat that we evaluate on vision only, causing some differences; see the implementation details above.*

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/screensuite/scores_screensuite.png"/>
</div>

>[!NOTE]
> 💡 Note that ScreenSuite does not intend to exactly reproduce benchmarks published in the industry: we evaluate models on *GUI agentic capabilities based on vision*. As a result, on benchmarks like Mind2Web, where other setups gave the agent a view of information-rich context such as the DOM or accessibility tree, our evaluation setting is much harder, so ScreenSuite scores do not match those sources.

### Start your custom evaluation in 30s ⚡️

Head to [the repository](https://github.com/huggingface/screensuite).

1. Clone the repository with submodules: `git clone --recurse-submodules git@github.com:huggingface/screensuite.git`
2. Install the package: `uv sync --extra submodules --python 3.11`
3. Run `python run.py`
   - Alternatively, run `python examples/run_benchmarks.py` for more fine-grained control, like running evaluations for several models in parallel.

>[!NOTE]
> The multi-step benchmarks **require a bare-metal machine** to run and deploy the desktop/mobile environment emulators (see the [README.md](https://github.com/huggingface/screensuite/blob/main/README.md)).

### Next steps 🚀

Running consistent and meaningful evaluations easily allows the community to quickly iterate and make progress in this field, as we’ve seen with the [Eleuther LM evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness), the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and the [Chatbot Arena](https://huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard).

We hope to see much more capable open models in the coming months that can run a wide range of tasks reliably and even run locally!

To support this effort:
- ⭐️ Go star [the ScreenSuite repo](https://github.com/huggingface/screensuite) and give us feedback in issues/PRs!
- 👉 Follow the [smolagents org](https://huggingface.co/smolagents) to stay up-to-date.
