
Commit df35b5b

Init textquests blog (#3022)
* Init textquests blog
* Add TextQuests entry to _blog.yml
* Update TextQuests publication date to Aug 12, 2025
* migrate images repo + change thumbnail
1 parent aa0affc commit df35b5b

File tree

3 files changed: +104 -0 lines changed


_blog.yml

Lines changed: 11 additions & 0 deletions
@@ -6515,3 +6515,14 @@
   - datasets
   - open-source
   - community
+
+- local: textquests
+  title: "TextQuests: How Good are LLMs at Text-Based Video Games?"
+  author: justinphan3110
+  thumbnail: /blog/assets/textquests/thumbnail.gif
+  date: Aug 12, 2025
+  tags:
+    - research
+    - llm
+    - evaluation
+    - agents

assets/textquests/thumbnail.gif

846 KB

textquests.md

Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
---
title: "TextQuests: How Good are LLMs at Text-Based Video Games?"
thumbnail: /blog/assets/textquests/thumbnail.gif
authors:
- user: justinphan3110
  org: cais
- user: clefourrier
---
# TextQuests: How Good are LLMs at Text-Based Video Games?
The rapid advancement of Large Language Models (LLMs) has enabled remarkable progress on established academic and industrial benchmarks. Knowledge benchmarks such as MMLU and GPQA are now largely saturated, and frontier models are making significant progress on expert evaluations like [HLE](https://lastexam.ai). However, this success on static, knowledge-based tasks does not always translate to effectiveness in dynamic, interactive settings, the kind of environment in which we want effective assistants and AI agents to perform well. Developing robust methodologies for evaluating LLMs as autonomous agents in complex, exploratory environments remains a significant challenge.

Two core avenues exist for evaluating autonomous agents: either use real-world environments that exercise a limited set of specific skills, such as tool use or coding, or use simulated open-world environments. The latter better captures an agent's ability to operate autonomously in exploratory settings that demand sustained, self-directed reasoning over a long and growing context, while remaining easy to evaluate.
While this direction is still developing, it has seen growing interest through benchmarks such as [Balrog](https://balrogai.com) and ARC-AGI, and through demonstrations of models like Claude and Gemini playing Pokémon. Building on this emerging vein of work, we introduce [TextQuests](https://huggingface.co/spaces/cais/textquests).
<div style="display: flex; justify-content: center; margin: -3.5rem 0;">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/textquests/textquests_logo.png"
    alt="TextQuests Logo"
    style="width: 30px; height: 30px;"
  >
</div>
## TextQuests

TextQuests is a benchmark built upon 25 classic [Infocom](https://en.wikipedia.org/wiki/Infocom) interactive fiction games. These once-popular text-based video games, which can take human players over 30 hours and require hundreds of precise actions to solve, provide a compelling testbed for the challenges of agentic reasoning. They demand that an agent demonstrate:

- **Long-Context Reasoning:** Agents must devise and execute multi-step plans by reasoning over a long and continuously growing history of actions and observations, relying solely on their intrinsic capabilities without the aid of external tools.

- **Learning through Exploration:** The games require agents to learn from experience, interrogating their own failures and making incremental improvements through trial and error as they explore the unknown world.

Because success requires an agent to build up understanding over a single long gameplay session, TextQuests allows for a more direct and accurate assessment of the LLM itself as the reasoning backbone of an AI agent system.
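To make the interaction format concrete, the sketch below drives an Infocom game programmatically with the open-source `jericho` Z-machine wrapper. This is purely illustrative: we are not claiming the TextQuests harness uses `jericho`, and the game file path is an assumption about a locally available ROM.

```python
# Minimal sketch of interacting with an Infocom game via the `jericho` Z-machine
# wrapper (pip install jericho). Illustrative only; not the TextQuests harness.
from jericho import FrotzEnv

env = FrotzEnv("zork1.z5")             # assumed path to a locally available game file
observation, info = env.reset()         # opening room description
print(observation)

# The entire interface is plain text in, plain text out.
for command in ["open mailbox", "read leaflet", "go north"]:
    observation, reward, done, info = env.step(command)
    print(f"> {command}\n{observation}")
    if done:
        break

print("Score:", env.get_score(), "/", env.get_max_score())
```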
<figure class="image text-center">
  <figcaption>Examples showing the diverse reasoning challenges in TextQuests.</figcaption>
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/textquests/textquests_examples.png" alt="Examples of reasoning challenges in TextQuests">
</figure>
## Evaluations
For each model, we conduct two distinct evaluation runs: one with access to the game's official hints (`With Clues`) and one without (`No Clues`). Each run executes for a maximum of 500 steps and stops early if the agent successfully completes the game. To handle the growing context, the full game history is maintained without truncation throughout the run; this long-context evaluation is computationally feasible thanks to the prompt caching built into modern LLM inference frameworks. We employ two main evaluation metrics (a sketch of the overall loop follows the definitions below):

- **Game Progress.** Game Progress is calculated from a series of labeled checkpoints representing necessary objectives on the path to finishing a game.

- **Harm.** To assess the ethical behavior of the agents, we measure Harm by tracking specific in-game actions that are considered harmful to some degree. This score is then averaged across all games to evaluate an agent's overall tendency to perform such actions.
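The protocol and metrics above can be summarized in a short sketch. Everything here is a hypothetical stand-in rather than the actual TextQuests code: `env`, `query_llm`, and the checkpoint and harm bookkeeping are assumptions used only to make the 500-step cap, early stopping, untruncated history, and checkpoint-based scoring concrete.

```python
# Illustrative sketch of the evaluation loop described above (hypothetical APIs).
MAX_STEPS = 500

def run_game(env, query_llm, checkpoints, with_clues=False):
    history = []                                        # full history, never truncated
    observation = env.reset(include_clues=with_clues)   # "With Clues" vs. "No Clues" run
    reached, harmful_actions = set(), 0

    for _ in range(MAX_STEPS):
        history.append({"role": "user", "content": observation})
        # The entire growing history is sent at every step; prompt caching in
        # modern inference frameworks keeps this computationally feasible.
        action = query_llm(history)
        history.append({"role": "assistant", "content": action})

        observation, done = env.step(action)
        reached |= {c for c in checkpoints if env.completed(c)}  # labeled objectives hit so far
        harmful_actions += env.was_harmful(action)               # tallied for the Harm metric

        if done:                                        # stop early once the game is finished
            break

    game_progress = len(reached) / len(checkpoints)     # fraction of checkpoints reached
    return game_progress, harmful_actions
```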
<figure class="image text-center">
  <figcaption>LLM performance on TextQuests.</figcaption>
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/textquests/textquests_results.png" alt="TextQuests results">
</figure>
## Discussion
**Long-context Reasoning.** During evaluation, the context window can exceed 100K tokens, requiring LLMs to reason and plan precisely over a vast history of observations and clues in order to progress. As the context grows, we observe that current models often hallucinate about prior interactions, for example believing they have already picked up an item when they have not, or getting stuck in a navigation loop. Furthermore, similar to observations in [Gemini 2.5 Plays Pokémon](https://arxiv.org/abs/2507.06261), LLM agents show an increased tendency to repeat actions from their history rather than synthesize novel plans as the context lengthens. These long-context failures are particularly stark in tasks requiring spatial reasoning. For instance, in <u><em>Wishbringer</em></u>, most LLMs struggled to navigate back down a cliff after climbing it. The solution simply required reversing the sequence of directions used to ascend (information available in the context history), indicating a fundamental difficulty in building and utilizing a mental map. Similarly, all frontier LLMs struggle to navigate the infamous maze in <u><em>Zork I</em></u>.
<figure class="image text-left">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/textquests/textquests_fail.png" alt="reasoning">
  <figcaption>
  Examples of long-context reasoning failures in TextQuests. <strong>Left:</strong> In <u><em>Zork I</em></u>, tested LLMs failed to correctly recall information from their history, hallucinating that they dropped a matchbook in the <code>Studio</code> instead of the <code>Atlantis Room</code>. <strong>Right:</strong> In <u><em>Wishbringer</em></u>, LLMs often fail to retrieve and reverse their own ascent path from the in-context history to navigate back down a cliff.
  </figcaption>
</figure>
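The Wishbringer failure is notable because the missing step is mechanically simple once the agent treats its own history as data: reverse the recorded ascent and invert each direction. A toy illustration (the example path is invented):

```python
# Toy illustration of the "climb back down" reasoning the models miss:
# reverse the ascent path and invert each direction. The example path is made up.
OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east",
            "up": "down", "down": "up"}

ascent = ["up", "north", "east", "up"]                # moves recorded earlier in the context
descent = [OPPOSITE[move] for move in reversed(ascent)]
print(descent)                                        # ['down', 'west', 'south', 'down']
```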
**Dynamic Thinking.** An agent's overall effectiveness is defined by both its task success and its operational efficiency. For LLM agents, efficiency is closely tied to the number of output and reasoning tokens generated, which directly impacts inference cost and latency. Models that utilize more test-time compute generally achieve higher performance, but the gains diminish beyond a certain budget. This matters because many exploratory steps in TextQuests (for example, navigation steps) are intermediate and can be executed successfully without deep reasoning.
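One way to picture this kind of dynamic thinking is an agent that scales its reasoning budget per step. The heuristic below and the `reasoning_effort` parameter are hypothetical, not part of TextQuests or any specific provider API; they simply illustrate spending few tokens on routine navigation and more when the agent appears stuck.

```python
# Hypothetical sketch of dynamic reasoning effort per step; `query_llm` and its
# `reasoning_effort` knob are assumptions, not a real TextQuests or provider API.
ROUTINE_PREFIXES = ("go ", "north", "south", "east", "west", "up", "down", "look", "inventory")

def choose_effort(last_action: str, stuck_steps: int) -> str:
    """Spend little on routine navigation, more when progress has stalled."""
    if stuck_steps >= 3:
        return "high"    # repeated failures: re-plan with a larger reasoning budget
    if last_action.lower().startswith(ROUTINE_PREFIXES):
        return "low"     # intermediate exploratory step
    return "medium"

def next_action(history, last_action, stuck_steps, query_llm):
    effort = choose_effort(last_action, stuck_steps)
    return query_llm(history, reasoning_effort=effort)
```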
<figure class="image text-left">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/textquests/textquests_dynamic_thinking.png" alt="thinking">
  <figcaption>
  A comparison of output and reasoning token efficiency across state-of-the-art LLMs on TextQuests. Since many exploratory steps are intermediate and do not require a full reasoning budget, an ideal LLM agent should be efficient and dynamic with its reasoning effort while still maintaining consistent performance.
  </figcaption>
</figure>
In closing, TextQuests evaluates how consistently models can progress through a series of classic interactive fiction games that were once popular among human players. We hope that open-sourcing TextQuests helps researchers better understand and assess the current capabilities of LLM agents in challenging exploratory environments. Open-source model builders are welcome to submit to the [TextQuests Leaderboard](https://huggingface.co/spaces/cais/textquests) by emailing us at [[email protected]](mailto:[email protected]).
## Citations
```
@misc{phan2025textquestsgoodllmstextbased,
  title={TextQuests: How Good are LLMs at Text-Based Video Games?},
  author={Long Phan and Mantas Mazeika and Andy Zou and Dan Hendrycks},
  year={2025},
  eprint={2507.23701},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2507.23701},
}
```
