NACE (Neural Adaptive Cellular Engine) is an AI architecture designed to simulate interactive environments through emergent behaviors: no hardcoded physics or game logic, just a neural network predicting the next frame from the current state and player input.
Architecture · Project Structure · Getting Started
Click on the previews to view the video
A minimal sand simulation environment where a controllable emitter generates falling sand particles. (The gradient was applied as a post-processing effect, not learnt by the model)
More Info
This is one of the best examples of what this architecture is good at: local simulations. NCAs were originally used to simulate water, sand, and fire physics, and this architecture is heavily inspired by NCAs, which is why it learns in a relatively short amount of time:
The loss curve fluctuates mostly because noise was applied during training for stability.
The model was trained on about 18,000 total samples for this preview - a tiny dataset that takes less than a minute to train on modern GPUs.
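The noise trick behind those loss fluctuations can be sketched in a few lines of PyTorch. Everything below (the noise level, where it is injected, and the stand-in model) is illustrative, not the actual NACE training code: the idea is that slightly corrupting the input state teaches the model to correct the small errors that accumulate during autoregressive rollout.

```python
import torch
import torch.nn as nn

# Stand-in for the real model: any next-frame predictor works here.
model = nn.Conv2d(4, 4, 3, padding=1)
loss_fn = nn.MSELoss()

def noisy_step(state, target, noise_std=0.02):
    # Slightly corrupt the input so the model learns to recover from
    # its own prediction drift; the random perturbation is also why
    # the loss curve fluctuates instead of decreasing smoothly.
    noisy = state + torch.randn_like(state) * noise_std
    return loss_fn(model(noisy), target)

state = torch.rand(8, 4, 15, 15)   # batch of current-frame samples
target = torch.rand(8, 4, 15, 15)  # corresponding next-frame targets
loss = noisy_step(state, target)
loss.backward()                    # gradients flow through the noisy input
```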
At the end, the model runs at:
~1489.9 FPS on an RTX 5060 GPU
~208.0 FPS on a Ryzen 7 5700X CPU
(Both tests used a non-compiled, unoptimized model with a batch size of one, so it could run even faster)
The classic Super Mario Bros. from the NES console, except it's just a real-time level generator
More Info
The dataset it was trained on contains all 13 overworld maps of the original game; underground, castle and water levels were purposely excluded, so that the sky is a single color and the prediction isn't "half sky, half underground, half water" when temperature is applied.
The model only has to predict a 15x15 grid, each pixel representing a 16x16 sprite: each sprite appearing across the 13 maps is assigned a unique color, effectively turning each sprite into a separate class/object for the model.
This way, the inference window is 240x240, and interpolation between frames avoids snappy movement, which also improves performance since fewer model forwards are done per second.
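The rendering step described above can be sketched in numpy: map each class index to its color, repeat each cell into a 16x16 block, and blend consecutive rendered frames for smooth motion. The palette and shapes below are illustrative assumptions, not the project's actual mapping:

```python
import numpy as np

# Hypothetical palette: one RGB color per sprite class.
PALETTE = np.array([
    [107, 140, 255],  # sky
    [176,  88,   0],  # ground
    [228,  92,  16],  # brick
], dtype=np.float32)

def grid_to_frame(class_grid, scale=16):
    """Map a 15x15 grid of class indices to a 240x240 RGB image:
    each cell becomes one flat-colored 16x16 block."""
    colors = PALETTE[class_grid]                      # (15, 15, 3)
    return colors.repeat(scale, axis=0).repeat(scale, axis=1)

def interpolate(frame_a, frame_b, t):
    """Blend two rendered frames; sweeping t from 0 to 1 between
    model forwards gives smooth motion with fewer predictions."""
    return (1.0 - t) * frame_a + t * frame_b

grid = np.zeros((15, 15), dtype=np.int64)
grid[10:] = 1                        # ground tiles in the bottom rows
frame = grid_to_frame(grid)
half = interpolate(frame, np.zeros_like(frame), 0.5)  # halfway to black
print(frame.shape)                   # (240, 240, 3)
```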
The loss curve fluctuates mostly because noise was applied during training for stability.
The model was trained on about 960,000 total samples for this preview - quite a small amount considering how well it learns to reproduce most of the levels.
This environment shows a case where an additional parameter is used: the level number, so the AI knows which level style to draw inspiration from - for example, in a snowy level it will use snowy assets instead of green ones. (This is not strictly necessary, but without it, a higher temperature mixes up different styles.)
In this preview, the model has learnt to reproduce the levels almost perfectly. By increasing the temperature instead, it becomes a level generator that draws tiles in uncommon patterns while (usually) staying coherent with the game's style.
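Temperature here works as in any categorical sampler: the per-cell class logits are divided by the temperature before the softmax, so low values sharpen the distribution toward the most likely tile (faithful reproduction) while high values flatten it (novel tile patterns). A minimal sketch, with made-up logits:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample a class index from logits scaled by 1/temperature.
    Low T -> near-argmax (reproduce the level); high T -> closer
    to uniform (draw tiles in uncommon patterns)."""
    scaled = logits / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
logits = np.array([3.0, 1.0, 0.5])   # e.g. scores for sky, brick, coin
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(100)]
# At T=0.1 nearly every sample is class 0; at T=5 the classes mix.
```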
At the end, the model runs at:
~426.2 FPS on an RTX 5060 GPU
~171.6 FPS on a Ryzen 7 5700X CPU
(Both tests used a non-compiled, unoptimized model with a batch size of one, so it could run even faster)
Everything is based on the equation $s + a = s'$.
A neural network processes the current state ($s$) and the player's input ($a$) to predict the next state ($s'$).
Each cell/pixel in the grid evolves based on its local neighborhood to learn rules. For instance, a cell could 'think': "If all the cells around me are sky, I will become sky too in the next state", or "If I am the bottom part of the player and the global action was up, I will become background in the next state."
Cells can "read" other cells' classes/colors and hidden channels, and decide what to become.
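Concretely, $s + a = s'$ can be read as one forward pass: the global action is broadcast to every cell as an extra channel, and a small convolution (so each cell only sees its 3x3 neighborhood) produces the next state. A toy PyTorch sketch of that structure - the layer sizes and names are illustrative, not the actual NACE definition:

```python
import torch
import torch.nn as nn

class TinyCellRule(nn.Module):
    """s' = f(s, a): 3x3 convolutions mean every cell updates from
    its local neighborhood plus the globally broadcast action."""
    def __init__(self, state_ch=4, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(state_ch + 1, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, state_ch, 3, padding=1),
        )

    def forward(self, state, action):
        b, _, h, w = state.shape
        a = action.view(b, 1, 1, 1).expand(b, 1, h, w)  # action as an extra channel
        return self.net(torch.cat([state, a], dim=1))   # predict s'

rule = TinyCellRule()
s = torch.rand(1, 4, 15, 15)   # current state: a 15x15 grid
a = torch.tensor([1.0])        # global action, e.g. "up"
s_next = rule(s, a)            # s + a = s'
print(s_next.shape)            # torch.Size([1, 4, 15, 15])
```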
Unlike Growing NCA, which focuses on self-organization, NACE is built for interactive environments, resulting in a true engineless simulation where physics, logic and graphics are all learnt by the pixels themselves.
Another difference is that this architecture has no persistent memory: in Growing NCA, each cell has hidden channels and can decide what to remember and what to forget; in this architecture (by default) hidden channels only persist during the microsteps, meaning the cells forget everything every frame. This promotes stability and precision in single-state generation.
Basically, hidden channels in NACE act as a local communication layer: during the microsteps, cells 'pass messages' to their neighbors to build spatial awareness.
A funny explanation of how this works: If a cell in the center has to know the distance to the border beyond its perception, the cells in between will 'pass the information', each writing down in their hidden channels something like: "Hey, I'm a border!", "On my left there's a cell who said they're a border!", "On my left there's a cell who said on their left there's a cell who said they're a border!" ... until they get to the central cell who requires the information. - Of course in reality it's just unintelligible floating-point numbers that only the neural network understands.
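That message-passing picture can be checked numerically: one 3x3 neighborhood update lets information travel one cell, so after k microsteps a cell can "know about" cells up to k steps away. A tiny numpy demo (a boolean flag standing in for the real hidden-channel floats), flooding the border cell's message inward:

```python
import numpy as np

def microstep(knows):
    """One local update: a cell learns the fact if any cell in its
    3x3 neighborhood already knows it (a stand-in for cells writing
    messages into their hidden channels)."""
    padded = np.pad(knows, 1)          # pad with False at the borders
    out = knows.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy: 1 + dy + knows.shape[0],
                          1 + dx: 1 + dx + knows.shape[1]]
    return out

knows = np.zeros((15, 15), dtype=bool)
knows[0, 7] = True                     # "Hey, I'm a border!"
for _ in range(7):                     # 7 microsteps later...
    knows = microstep(knows)
print(knows[7, 7])                     # ...the message reaches the center: True
```

Cells farther than 7 steps away (e.g. the opposite border) still know nothing, which is exactly why the number of microsteps bounds each cell's effective perception.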
Unlike heavier world-model approaches (e.g. JEPA, Dreamer V3), this architecture is lightweight, fast, and meant to run easily on consumer GPUs rather than expensive TPUs.
It's very limited in comparison, but excels in learning local rules in simple games and simulations.
(A good example of a diffusion model is Oasis by Decart)
| File | Description |
|---|---|
| `NACE.py` | The model's architecture (defines the `NACE` and `Dataset` classes) |
| `train.py` | Trains and fine-tunes models |
| `visualizer_cv2.py` | Inference via opencv-python, to try out the model |
| `visualizer_pygame.py` | Same as above, but via pygame - smoother graphics and continuous actions |
| `infer_speed.py` | Benchmarks inference speed and shows information about the model |
| `configs_vars.py` | Documents what each parameter does in the configurations (see Getting Started) |
Pre-requisites:
- Python 3.10+ (3.12.3 recommended).
- PyTorch 2.0+, which you can find here.

Setup:
- Clone the repository via `git clone https://github.com/Veddy1674/nca-game-engine.git`.
- Install the dependencies with `pip install -r requirements.txt`.
- If you are using VSCode, I highly recommend setting up tasks to easily switch between training configurations.

`train.py` and all the other scripts (inference, testing, etc.) require a path to a configuration file as their first argument. So you could create a VSCode task that runs scripts with the file you're currently viewing as that argument; this lets you simply open a config.py file and hit the shortcut for Tasks: Run Task:
```json
{
    "label": "NACE: Train with current config",
    "type": "shell",
    "command": "python", // Your python path/virtual env
    "args": [
        "${workspaceFolder}/train.py",
        "${relativeFile}"
    ],
    "group": "build",
    "presentation": {
        "reveal": "always",
        "panel": "new"
    }
}
```

Alternatively, execute scripts manually, e.g. `python train.py example/config.py`.
The example/ directory contains a minimal environment and a pre-trained model to try out.
Before anything else, run testview.py to see (and play) the environment the model will learn to simulate - the actual game/simulation.
You can create a dataset (.npz files) by running env.py.
Then, have a look at the configuration file to get an idea of how each parameter affects the model, train and inference. (More info in configs_vars.py).
Train the model with python train.py example/config.py (or run the task I talked about before).
Finally, try out the model with python visualizer_cv2.py example/config.py.
If 'LOSS_GRAPH' is not None in the configuration file, a loss graph will be saved in the specified path.
Feel free to tweak the configuration: loss function, scheduler, weights, microsteps, or override post_processing() in visualizer_cv2.py for custom inference effects.
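For example, a gradient tint like the one in the sand preview could be implemented as a custom post-processing effect. The actual signature of post_processing() in visualizer_cv2.py is not shown here, so this sketch assumes it receives and returns the rendered RGB frame as a numpy array:

```python
import numpy as np

def post_processing(frame):
    """Assumed signature: takes the rendered (H, W, 3) RGB frame and
    returns the frame to display. Here: a vertical gradient tint,
    similar to the post-processed sand preview."""
    h = frame.shape[0]
    # Fade from white at the top to orange at the bottom.
    top = np.array([255, 255, 255], dtype=np.float64)
    bottom = np.array([255, 128, 0], dtype=np.float64)
    t = np.linspace(0.0, 1.0, h)[:, None, None]   # (H, 1, 1) blend factor
    tint = (1.0 - t) * top + t * bottom           # (H, 1, 3) gradient
    return (frame / 255.0 * tint).astype(np.uint8)

frame = np.full((240, 240, 3), 255, dtype=np.uint8)  # dummy white frame
out = post_processing(frame)                         # white fading to orange
```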