Official implementation of "Modeling Human Gaze Behavior with Diffusion Models for Unified Scanpath Prediction", ICCV 2025 🌺
</div>
## Overview
<p align="center">
<img src="assets/figure.jpg">
</p>
>**Abstract**: <br>
> Predicting human gaze scanpaths is crucial for understanding visual attention, with applications in human-computer interaction, autonomous systems, and cognitive robotics. While deep learning models have advanced scanpath prediction, most existing approaches generate averaged behaviors, failing to capture the variability of human visual exploration. In this work, we present ScanDiff, a novel architecture that combines diffusion models with Vision Transformers to generate diverse and realistic scanpaths. Our method explicitly models scanpath variability by leveraging the stochastic nature of diffusion models, producing a wide range of plausible gaze trajectories. Additionally, we introduce textual conditioning to enable task-driven scanpath generation, allowing the model to adapt to different visual search objectives. Experiments on benchmark datasets show that ScanDiff surpasses state-of-the-art methods in both free-viewing and task-driven scenarios, producing more diverse and accurate scanpaths. These results highlight its ability to better capture the complexity of human visual behavior, pushing forward gaze prediction research.
<span class="scandiff">ScanDiff</span> is based on a unified architecture combining Diffusion Models with Transformers.
It represents the first diffusion-based approach for scanpath prediction on natural images. Textual conditioning allows <span class="scandiff">ScanDiff</span> to work both in free-viewing and task-driven scenarios,
and a dedicated length prediction module is introduced to handle variable-length scanpaths.
<h3>How does the model work?</h3>
<!-- begin itemize with numbers -->
<ol class="custom-steps has-text-left mt-4">
<li><strong>Scanpath embedding:</strong> The scanpath is embedded into the initial uncorrupted latent variable
<span class="math-symbol">z<sub>0</sub></span>.
</li>
<li><strong>Forward diffusion:</strong> Gaussian noise is added to the embedded sequence
<span class="math-symbol">z<sub>0</sub></span> over <span class="math-symbol">T</span> timesteps.
</li>
<li><strong>Visual encoding:</strong> The stimulus <span class="math-symbol">I</span> is encoded with a Transformer-based backbone (DINOv2).
</li>
<li><strong>Task encoding:</strong> A textual encoder (CLIP) processes the viewing task <span class="math-symbol">c</span>.
</li>
<li><strong>Multimodal fusion:</strong> Visual and textual features are projected into a joint multimodal embedding space.
</li>
<li><strong>Denoising:</strong> A Transformer encoder refines the noised scanpath embedding
<span class="math-symbol">z<sub>t</sub></span>, conditioned on the multimodal features.
</li>
<li><strong>Reconstruction:</strong> A three-layer MLP
<span class="math-symbol">γ<sub>θ</sub></span> reconstructs the scanpath, and a length prediction module
<span class="math-symbol">ℓ<sub>θ</sub></span> estimates its length.
Human visual exploration is inherently variable. Individuals perceive the same stimulus in different ways depending on factors such as attention,
context, and cognitive processes. Capturing such variability is essential for developing models that accurately reflect the diverse range of
human traits. However, existing scanpath prediction models tend to align closely with the statistical mean of human gaze behavior. While this approach may improve performance
on traditional evaluation metrics, it fails to reflect the natural variability in human visual attention. Commonly used
metrics such as MM, SM, and SS tend to reward predictions that closely match an aggregated ground truth, thus
favoring models that generate a single representative scanpath. Indeed, the
average similarity between ground-truth scanpaths can be smaller than the average similarity between generated scanpaths when the latter closely reflect an average behavior.
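This effect can be made concrete with a toy numeric sketch: when generated scanpaths collapse onto the mean behavior, their mutual similarity exceeds that of the more diverse human scanpaths. Negative Euclidean distance stands in for the actual sequence similarity metrics here, and the scanpath sizes and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_sim(paths):
    """Mean pairwise similarity over a set, taken as negative Euclidean distance."""
    sims = [-np.linalg.norm(a - b) for a, b in combinations(paths, 2)]
    return float(np.mean(sims))

rng = np.random.default_rng(1)
center = rng.standard_normal((8, 2))  # a shared "mean" scanpath: 8 fixations in 2-D

# Human viewers: genuinely diverse trajectories around the mean behavior.
humans = [center + rng.standard_normal((8, 2)) for _ in range(5)]
# A mean-seeking model: all samples collapse close to the average behavior.
generated = [center + 0.05 * rng.standard_normal((8, 2)) for _ in range(5)]

print(mean_pairwise_sim(generated) > mean_pairwise_sim(humans))  # True
```

A similarity-only metric rewards the collapsed set, which is exactly the bias the diversity term described below is meant to correct.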
297
+
<br><br>
We propose the <strong>Diversity-aware Sequence Score (DSS)</strong>, a new metric that extends standard sequence similarity measures by incorporating
a term that penalizes excessive similarity among the generated scanpaths when human scanpaths do not exhibit such behavior. Given a set of generated scanpaths <span class="math-symbol">s<sub>g</sub></span> and
corresponding human scanpaths <span class="math-symbol">s<sub>h</sub></span> for a specific visual stimulus, DSS is computed as: