# JEPA Toy Examples

A small, CPU-friendly "mini research playground" for JEPA-style representation learning: models predict target embeddings (produced by an EMA target encoder) rather than reconstructing raw inputs.

## Install

```shell
python -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -e ".[dev]"
```

Python 3.10+ works; Python 3.11+ is recommended.

## Quickstart

Train:

```shell
jepa-toy train --task sine --config configs/sine/base.yaml
jepa-toy train --task tokens --config configs/tokens/base.yaml
jepa-toy train --task gridworld --config configs/gridworld/base.yaml
```

Evaluate:

```shell
jepa-toy eval --task sine --run_dir runs/sine/<timestamp>_<tag>
```

TensorBoard:

```shell
tensorboard --logdir runs
```

Tiny sweep (writes `runs/summary.csv`):

```shell
python scripts/run_sweeps.py --tiny
```

## How It Works

- Online encoder `f(x) -> z`
- Predictor `g(z_context, condition) -> z_pred`
- Target encoder `f_t(x) -> z_t` is an EMA copy of the online encoder (stop-gradient)
- Loss matches `z_pred` to `stopgrad(z_t)` (MSE or cosine), optionally with variance regularization
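The loop above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the repository's actual code: the class names (`Encoder`, `Predictor`), dimensions, and the `ema_update` / `jepa_loss` helpers are all assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # hypothetical online encoder f(x) -> z
    def __init__(self, in_dim=8, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, z_dim))

    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    # hypothetical predictor g(z_context, condition) -> z_pred
    def __init__(self, z_dim=16, cond_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + cond_dim, 32), nn.ReLU(), nn.Linear(32, z_dim))

    def forward(self, z_ctx, cond):
        return self.net(torch.cat([z_ctx, cond], dim=-1))

@torch.no_grad()
def ema_update(target, online, tau=0.99):
    # target params drift toward online params; no gradients flow here
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1 - tau)

def jepa_loss(z_pred, z_t, var_weight=0.1):
    # match predictions to stop-grad targets (MSE variant), plus a variance
    # penalty pushing each embedding dimension's std above 1 (anti-collapse)
    mse = F.mse_loss(z_pred, z_t.detach())
    var_reg = F.relu(1.0 - z_pred.std(dim=0)).mean()
    return mse + var_weight * var_reg

f = Encoder()
f_t = copy.deepcopy(f)  # EMA target encoder, never trained directly
g = Predictor()
opt = torch.optim.Adam([*f.parameters(), *g.parameters()], lr=1e-3)

# one training step on random stand-in data
x_ctx, x_tgt, cond = torch.randn(64, 8), torch.randn(64, 8), torch.randn(64, 4)
loss = jepa_loss(g(f(x_ctx), cond), f_t(x_tgt))
loss.backward()
opt.step()
ema_update(f_t, f)
```

Because the target branch is detached, gradients only reach the online encoder and predictor; the target encoder changes only through the EMA update.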

We log collapse diagnostics: per-dimension variance/std, covariance spectrum (top singular values), and batch cosine similarity.
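Those three diagnostics can be computed from a batch of embeddings as below. The function name and return layout are illustrative, not the repository's actual logging API.

```python
import torch
import torch.nn.functional as F

def collapse_diagnostics(z):
    """Collapse diagnostics for embeddings z of shape (batch, dim)."""
    # per-dimension std: values near zero indicate collapsed dimensions
    per_dim_std = z.std(dim=0)

    # covariance spectrum: top singular values of the batch covariance
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.T @ zc / (z.shape[0] - 1)
    top_singular = torch.linalg.svdvals(cov)[:5]

    # mean pairwise cosine similarity: near 1 means all embeddings align
    zn = F.normalize(z, dim=-1)
    cos = zn @ zn.T
    off_diag = cos[~torch.eye(len(z), dtype=torch.bool)]
    return per_dim_std, top_singular, off_diag.mean()
```

A healthy run keeps per-dimension std away from zero, spreads the covariance spectrum across many directions, and keeps the mean cosine similarity well below 1.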

## Add A Task

- Implement `src/jepa_toy/tasks/<name>/task.py` with `get_task_spec()`
- Add `configs/<name>/base.yaml`
- Add `docs/tasks/<name>.md`
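A task module might look roughly like this. Everything here is an assumption for illustration: the repository defines the real shape of the task spec, so the `TaskSpec` fields and the `sample_batch` signature below are hypothetical, not its actual interface.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

import torch

@dataclass
class TaskSpec:
    # hypothetical spec fields; the repo's real get_task_spec() return
    # type may differ
    name: str
    input_dim: int
    cond_dim: int
    sample_batch: Callable[[int], Tuple[torch.Tensor, torch.Tensor, torch.Tensor]]

def get_task_spec() -> TaskSpec:
    def sample_batch(batch_size: int):
        # toy sine data: context window, shifted target window, and a
        # condition encoding the prediction offset
        t = torch.rand(batch_size, 1) * 6.28
        x_ctx = torch.sin(t + torch.linspace(0, 1, 8))
        x_tgt = torch.sin(t + 1.0 + torch.linspace(0, 1, 8))
        cond = torch.ones(batch_size, 1)
        return x_ctx, x_tgt, cond

    return TaskSpec(name="sine_variant", input_dim=8, cond_dim=1,
                    sample_batch=sample_batch)
```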
