gboduljak/what-happens-next
What Happens Next? Anticipating Future Motion by Generating Point Trajectories

Gabrijel Boduljak | Laurynas Karazija | Iro Laina | Christian Rupprecht | Andrea Vedaldi

Abstract

We consider the problem of forecasting motion from a single image, i.e., predicting how objects in the world are likely to move, without the ability to observe other parameters such as the object velocities or the forces applied to them. We formulate this task as conditional generation of dense trajectory grids with a model that closely follows the architecture of modern video generators but outputs motion trajectories instead of pixels. This approach captures scene-wide dynamics and uncertainty, yielding more accurate and diverse predictions than prior regressors and generators. Although recent state-of-the-art video generators are often regarded as world models, we show that they struggle with forecasting motion from a single image, even in simple physical scenarios such as falling blocks or mechanical object interactions, despite fine-tuning on such data. We show that this limitation arises from the overhead of generating pixels rather than directly modeling motion.

Method

An overview of our method. Given an input image $\mathbf{I}$ and a grid of query points, we predict $T$ future point trajectories. Rather than generating raw point trajectories, for computational efficiency, we operate within the latent space of a trajectory VAE (encoder $\phi$, decoder $\psi$). Specifically, we employ a latent flow matching denoiser to generate trajectory latents $\mathbf{z} \in \mathbb{R}^{T \times h \times w \times d}$, conditioned on DINO patch features $\mathbf{f}$ of the input image $\mathbf{I}$. These are subsequently decoded into a final grid of point trajectories $\mathbf{x} \in \mathbb{R}^{T \times H \times W \times 2}$. Our method generates diverse and plausible future motion.
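The sampling side of this pipeline can be sketched as a standard flow matching integration in latent space. The sketch below is a minimal illustration, not the repository's implementation: `denoiser` (a learned velocity field), the conditioning argument `cond` (standing in for the DINO features $\mathbf{f}$), and the Euler step count are all hypothetical stand-ins.

```python
import numpy as np

def sample_trajectory_latents(denoiser, cond, shape, num_steps=50, seed=0):
    """Integrate a learned velocity field from noise (t=0) to data (t=1).

    denoiser(z, t, cond) -> velocity array with the same shape as z.
    cond stands in for the DINO patch features f of the input image I.
    Returns trajectory latents z (shape T x h x w x d in the paper's notation),
    which would then be decoded by the VAE decoder psi into point trajectories.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(shape)           # z_0 ~ N(0, I)
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        z = z + dt * denoiser(z, t, cond)    # Euler step: z_{t+dt} = z_t + dt * v(z_t, t, cond)
    return z
```

As a sanity check, plugging in the closed-form velocity field of a straight path to a fixed target, $v(\mathbf{z}, t) = (\mathbf{z}^* - \mathbf{z}) / (1 - t)$, makes the integrator land exactly on the target.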

Instructions

Inference

  1. Reproduce the conda environment using `environment.yml` in the `src/track-generator` folder.
  2. Download the preprocessed demo data. Move it to the `datasets` folder in `src/track-generator`, or create a symlink.
  3. Download the pretrained model checkpoints.
  4. Open one of the demo notebooks (e.g. `kubric_demo.ipynb`).
  5. Adjust the checkpoint paths in the notebook and update the checkpoint paths in the configs.
  6. Adjust the DINO path in the notebook.
  7. Run the notebook. It will run sampling and should reproduce the contents of the `demos` folder.
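Steps 5 and 6 amount to pointing the loaded configs at your local paths. A minimal sketch of doing this programmatically on a parsed config dict (e.g. from `yaml.safe_load`); the key names here are hypothetical and not the repository's actual config schema:

```python
def update_paths(config, replacements):
    """Recursively replace values for any key that appears in `replacements`.

    config: nested dict parsed from a config file.
    replacements: e.g. {"ckpt_path": "/my/ckpts/model.pt",
                        "dino_path": "/my/dino"}  # hypothetical keys
    Returns a new dict; the input is left unmodified.
    """
    out = {}
    for key, value in config.items():
        if isinstance(value, dict):
            out[key] = update_paths(value, replacements)
        elif key in replacements:
            out[key] = replacements[key]
        else:
            out[key] = value
    return out
```

For example, `update_paths(cfg, {"ckpt_path": "/my/ckpts/model.pt"})` rewrites every `ckpt_path` entry anywhere in the nested config while leaving all other fields untouched.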

Training instructions will be released soon.
