Zhangyanbo/pattern_to_dynamics
Equilibrium flow: From Snapshots to Dynamics

How does a snapshot distribution constrain the possible dynamics? When we see a pattern, how confidently can we identify the underlying dynamics without observing the time evolution? How does artificial life relate to real biological life? To answer these fundamental questions, we propose Equilibrium flow: by learning distribution-preserving dynamics, we can find dynamics that preserve a given data distribution without any time information.
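
As an illustration of the core idea (a sketch for intuition, not the paper's training objective): a velocity field $v$ preserves a density $p$ exactly when the stationary continuity equation $\nabla \cdot (p v) = 0$ holds. For a standard 2D Gaussian, any rotation field satisfies this, which we can verify numerically with finite differences:

```python
import numpy as np

def p(x, y):
    # Unnormalized standard 2D Gaussian density
    return np.exp(-0.5 * (x**2 + y**2))

def v(x, y):
    # Rotational field v = (-y, x); it is everywhere orthogonal to
    # grad(log p) = (-x, -y), so it transports mass along level sets of p
    return -y, x

def divergence_pv(x, y, h=1e-4):
    # Central finite differences of d/dx (p*vx) + d/dy (p*vy)
    def pvx(a, b):
        vx, _ = v(a, b)
        return p(a, b) * vx
    def pvy(a, b):
        _, vy = v(a, b)
        return p(a, b) * vy
    d_dx = (pvx(x + h, y) - pvx(x - h, y)) / (2 * h)
    d_dy = (pvy(x, y + h) - pvy(x, y - h)) / (2 * h)
    return d_dx + d_dy

# The divergence vanishes everywhere, so this flow preserves the Gaussian
pts = np.random.default_rng(0).normal(size=(100, 2))
max_div = max(abs(divergence_pv(x, y)) for x, y in pts)
print(max_div)  # close to 0 (finite-difference error only)
```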

cover

For 2D systems, our method finds interesting non-trivial dynamics that preserve the data distribution. For the Lorenz system, a dynamical system with chaotic behavior, the recovered dynamics also exhibit chaos, with positive Lyapunov exponents. For Turing patterns, we propose a training-free method that has a more limited solution space but is much faster; the resulting dynamics are closely aligned with the ground truth.

Beyond these, we also explore the design capabilities of our method for Artificial Life. Given manually designed patterns, our method not only finds dynamics / neural cellular automata that preserve the patterns, but also reveals collective behaviors.

mix.mp4

Background music generated with Sono AI.

Quick Start

Low-dimensional dynamical systems

Step 1: Train diffusion model

Run the following command:

python train_diffusion.py --model lorenz [or two_peaks, ring, two_moons]

This trains a diffusion model on the Lorenz system. The root train_diffusion.py uses the Hugging Face Diffusers DDPMScheduler and supports prediction_type in {sample (x_prediction), epsilon, v_prediction}. Sampling is configured with clip_sample=False (no forced clamp to [-1, 1]).
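
The three prediction types correspond to different regression targets for the same forward noising process $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$. A minimal numpy sketch of the targets (the schedule value below is illustrative, not the repo's default):

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)    # clean sample
eps = rng.normal(size=4)   # Gaussian noise
alpha_bar = 0.7            # illustrative cumulative alpha at timestep t

# Forward noising: x_t = sqrt(abar) * x0 + sqrt(1 - abar) * eps
xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# Regression target per prediction_type:
target_sample = x0                                                  # "sample" (x_prediction)
target_epsilon = eps                                                # "epsilon"
target_v = np.sqrt(alpha_bar) * eps - np.sqrt(1 - alpha_bar) * x0   # "v_prediction"

# The parameterizations are interchangeable: x0 recovers from (xt, v)
x0_from_v = np.sqrt(alpha_bar) * xt - np.sqrt(1 - alpha_bar) * target_v
print(np.allclose(x0_from_v, x0))  # True
```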

The trained model is saved in ./results/lorenz/diffusion_model.pth.

You can optionally override the output folder:

python train_diffusion.py --model lorenz --output_dir ./some_folder

You can also switch prediction type:

python train_diffusion.py --model lorenz --prediction_type epsilon
python train_diffusion.py --model lorenz --prediction_type v_prediction
python train_diffusion.py --model lorenz --prediction_type x_prediction

Step 2: Train dynamics model

Run:

python train_dynamics.py --model lorenz --num_experiments 1
# Use the same model as the diffusion model
# You can set num_experiments to the desired number of experiments if you want multiple results

The trained dynamics model is saved in ./results/lorenz/models/dynamics_models_<id>.pth.

Step 3: Load model

import torch
from models import Flow, FlowKernel


model_id = 'lorenz'
# load_model is the repo helper that returns the trained diffusion (score)
# model and its dataset
score_model, dataset = load_model(model_id)

# Load trained dynamics model
v = FlowKernel(dim=dataset.dim)
v.load_state_dict(torch.load(f'./results/lorenz/models/dynamics_model_{model_id}.pth'))

This v model takes a torch tensor and returns $v = dx/dt$.
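
Once loaded, v can be rolled out into trajectories with a standard fixed-step integrator. A minimal RK4 sketch (the closed-form Lorenz field below is a stand-in for the learned model, and the step size is illustrative):

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Stand-in for the learned v(x) = dx/dt; any callable works here
    return np.array([
        sigma * (x[1] - x[0]),
        x[0] * (rho - x[2]) - x[1],
        x[0] * x[1] - beta * x[2],
    ])

def rollout(v, x0, dt=0.01, steps=1000):
    # Classic fourth-order Runge-Kutta integration
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = traj[-1]
        k1 = v(x)
        k2 = v(x + 0.5 * dt * k1)
        k3 = v(x + 0.5 * dt * k2)
        k4 = v(x + dt * k3)
        traj.append(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(traj)

traj = rollout(lorenz, x0=[1.0, 1.0, 1.0])
print(traj.shape)  # (1001, 3)
```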

2D Continuous Experiments

Step 1: Generate Dataset

For Turing patterns, you can generate the dataset by running:

cd ./2D_continous/turing_pattern/
python generate_dataset.py --preset life maze waves spirals --cuda --normalize
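
For context, Turing-type patterns like these typically arise from two-species reaction-diffusion systems. A minimal Gray-Scott sketch in numpy (the parameters below are generic illustrative values, not the presets used by generate_dataset.py):

```python
import numpy as np

def laplacian(z):
    # 5-point Laplacian with periodic boundaries
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0)
            + np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, feed=0.035, kill=0.065, dt=1.0):
    # One explicit Euler step of the Gray-Scott reaction-diffusion system
    uvv = u * v * v
    u_new = u + dt * (Du * laplacian(u) - uvv + feed * (1 - u))
    v_new = v + dt * (Dv * laplacian(v) + uvv - (feed + kill) * v)
    return u_new, v_new

rng = np.random.default_rng(0)
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
v[n//2-4:n//2+4, n//2-4:n//2+4] = 0.5   # seed a small square of the activator
v += 0.01 * rng.random((n, n))

for _ in range(500):
    u, v = gray_scott_step(u, v)
print(u.shape)  # (64, 64); spatial structure emerges in v
```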

For Artificial Life, you can generate the dataset by running:

cd ./2D_continous/alifes/
python random_images.py --num_sample 8192 --image_path [your_image_path.png]

Step 2: Train Diffusion Model

cd ./2D_continous/
bash train_alifes_diffusion.sh
bash train_turing_diffusion.sh

Step 3 (optional): Train dynamic model

cd ./2D_continous/
bash train_dynamics.sh

Step 4: Test the learned model (training-free method)

All the experiments can be found in ./2D_continous/training_free_turing.ipynb and training_free_alife.ipynb.

Testing

To keep test code and generated artifacts separated from experiment code, the diffusion migration smoke test is placed in:

  • tests/diffusion_hf/smoke_test_x_prediction.py
  • outputs: test_outputs/diffusion_hf/
  • prediction-matrix benchmark: tests/prediction_matrix/run_prediction_matrix.py
  • matrix outputs: test_outputs/prediction_matrix/
  • epoch sweep benchmark (128 -> 1024, step 128): tests/prediction_matrix/run_epoch_sweep.py
  • epoch sweep outputs: test_outputs/prediction_matrix_sweep_128to1024/
  • stream plot policy: raw black-point streamplots are disabled by default; use styled outputs only (streamplot_compare_all_styled.png)

Cite our paper

@misc{zhang2025equilibriumflowsnapshotsdynamics,
      title={Equilibrium flow: From Snapshots to Dynamics}, 
      author={Yanbo Zhang and Michael Levin},
      year={2025},
      eprint={2509.17990},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.17990}, 
}
