Adaptive Recurrent Frame Prediction with Learnable Motion Vectors [SIGGRAPH ASIA 2023 Conference Paper]
This is the official repository for our paper, Adaptive Recurrent Frame Prediction with Learnable Motion Vectors.
Authors: Zhizhen Wu, Chenyu Zuo, Yuchi Huo, Yazhen Yuan, Yifan Peng, Guiyang Pu, Rui Wang, and Hujun Bao.
in SIGGRAPH Asia 2023 Conference Proceedings
Summary: We propose learnable motion vectors, a novel approach that leverages both optical flow and traditionally rendered motion vectors to facilitate accurate motion estimation for challenging elements such as occlusion, dynamic shading, and translucent objects. Building on this concept, our new feature streaming neural network, FSNet, is designed to achieve adaptive recurrent frame prediction with lower latency and superior quality.
[2025-10-13]: Updated the post-processing materials.
[2025-08-19]: Updated the outdated resource links.
- Make a directory for workspace and clone the repository:
mkdir learnable-motion-vector; cd learnable-motion-vector
git clone https://github.com/VicRanger/learnable-motion-vector code
cd code
- Install conda env:
Run `scripts/create_env.sh` on Linux or `scripts/create_env.bat` on Windows.
The corresponding C++ code is located in `scripts/ue4_source_code/`.
- Copy the `.h` and `.cpp` files into your UE4 project's source directory and recompile the project.
- After successful compilation, a new C++ class, `CaptureManager`, will appear in the Content Browser. Place this Actor into your scene, configure it, and you will be ready to export the render buffers.
- Use the post-processing materials provided in `scripts/ue4_source_code/PPMaterials` to export specific render buffers.
- A `BufferOptions.CaptureItem.txt` file is provided for convenience. Copy and paste its contents into the `CaptureManager`'s `Capture Item` field to apply all necessary settings for render buffer export.
Root_Directory/
|-- BaseColor
| |-- frame_0.EXR
| |-- frame_1.EXR
| |-- ...
| |-- frame_10.EXR
| |-- frame_11.EXR
| |-- ...
| |-- frame_100.EXR
| |-- frame_101.EXR
| |-- ...
|-- MetallicRoughnessStencil
|-- NoVSpecular
|-- STAlpha
|-- SceneColor
|-- SceneColorNoST
|-- SkyColor
|-- SkyDepth
|-- VelocityDepth
|-- WorldNormal
|-- WorldPosition
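One caveat with this layout: the frame indices are not zero-padded, so a plain lexicographic sort would place `frame_100.EXR` before `frame_2.EXR`. A small sketch (not part of the repository) of ordering frames numerically:

```python
import re

def sort_frames(filenames):
    """Order frame_<n>.EXR names by their integer index rather than lexicographically."""
    def frame_index(name):
        match = re.search(r"frame_(\d+)\.EXR$", name)
        # Names that do not match sort first; adjust if other files may be present.
        return int(match.group(1)) if match else -1
    return sorted(filenames, key=frame_index)

print(sort_frames(["frame_10.EXR", "frame_2.EXR", "frame_100.EXR", "frame_0.EXR"]))
# -> ['frame_0.EXR', 'frame_2.EXR', 'frame_10.EXR', 'frame_100.EXR']
```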
Edit the `dataset_export_job_st.yaml` configuration file located at `config/includes/` (`st` stands for separate translucency).
Within this file, configure the following paths:
- Set the `import_path` parameter to the directory containing the source EXR files exported from Unreal Engine.
- Set the `export_path` parameter to the directory where the processed NPZ datasets will be saved.
- Populate the `scene` array to specify the name of the scene directory containing the source images.
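Taken together, the three fields might look like the sketch below (the paths and scene name are illustrative placeholders, not values shipped with the repository):

```yaml
# config/includes/dataset_export_job_st.yaml -- illustrative excerpt
import_path: "/data/ue4_export/"   # directory of source EXR files from Unreal Engine
export_path: "/data/npz_export/"   # destination for the processed NPZ datasets
scene:
  - "MyScene"                      # name of the scene directory under import_path
```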
python src/test/test_export_buffer.py --config config/export/export_st.yaml
In the config/export/export_st.yaml file:
- `num_thread: 8`: the number of threads used for parallel export. Setting this to 0 disables multiprocessing.
- `overwrite: true`: set this to false to resume an export rather than overwrite existing files.
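In YAML form, the two options above correspond to this excerpt (values illustrative):

```yaml
# config/export/export_st.yaml -- illustrative excerpt
num_thread: 8     # worker threads for parallel export; 0 disables multiprocessing
overwrite: true   # set to false to resume instead of overwriting existing files
```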
Requirements: exported npz files, and a yaml file.
Run the script with --train:
python src/test/test_trainer.py --config config/shadenet_v5d4.yaml --train
- `initial_inference: false`: set to false to skip the initial dummy inference, which is used for timing.
- `dataset.path: "/path/to/export_data/"`: the path to the exported NumPy data files, which must end with a trailing slash "/".
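Assuming the dotted name `dataset.path` denotes a nested YAML key (an assumption, not confirmed against the repository), the two settings would look like:

```yaml
# config/shadenet_v5d4.yaml -- illustrative excerpt
initial_inference: false          # skip the timing-only dummy inference
dataset:
  path: "/path/to/export_data/"   # must end with a trailing slash "/"
```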
Requirements: generated training result in a standardized directory structure, e.g.
job_name (e.g., shadenet_v5d4_FC)/
|-- time_stamp (e.g., 2024-MM-DD_HH-MM-SS)
| |-- log (logs in text format)
| |-- model (.pt files of the best and the newest models)
| |-- writer (logs in tensorboard format)
| |-- checkpoint (the last checkpoint)
| |-- history_checkpoints (all history checkpoints)
Run the script with --train --resume:
python src/test/test_trainer.py --config config/shadenet_v5d4.yaml --train --resume
As long as the parent directory path `job_name/time_stamp` is valid and the `checkpoint` directory exists, training will restart from the last saved checkpoint.
Requirements: generated training result.
Run the script with --test:
python src/test/test_trainer.py --config config/shadenet_v5d4.yaml --test
Requirements: the .pt file containing the model's state dict (found in the `model` directory inside the training result directory). The checkpoints are not required.
Run the script with --test plus --test_only:
python src/test/test_trainer.py --config config/shadenet_v5d4.yaml --test --test_only
Requirements: the .pt model file (found in the `model` directory inside the training result directory).
- Place the .pt file in `output/checkpoints/`.
- Set `pre_model: "../output/checkpoints/model.pt"` in the yaml.
- Then run the script: python src/test/test_inference.py

Download the pretrained demo model and dataset:
./scripts/download_pretrained_model_demo.sh
./scripts/download_dataset_demo.sh
Alternatively, download them from the following links:
- Download the pretrained model (Hugging Face) (8.4MB) and place it in `../checkpoints`.
- Download the dataset sample (Hugging Face) (378MB) and unzip it to `../dataset/`.
Run test_trainer.py in test_only mode:
python src/test/test_trainer.py --config config/lmv_v5d4_inference.yaml --test --test_only
or run the default test_inference.py:
python src/test/test_inference.py
Thank you for your interest in our paper.
If you find our paper helpful or use our work in your research, please cite:
@inproceedings{10.1145/3610548.3618211,
author = {Wu, Zhizhen and Zuo, Chenyu and Huo, Yuchi and Yuan, Yazhen and Peng, Yifan and Pu, Guiyang and Wang, Rui and Bao, Hujun},
title = {Adaptive Recurrent Frame Prediction with Learnable Motion Vectors},
year = {2023},
isbn = {9798400703157},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3610548.3618211},
doi = {10.1145/3610548.3618211},
booktitle = {SIGGRAPH Asia 2023 Conference Papers},
articleno = {10},
numpages = {11},
keywords = {Frame Extrapolation, Real-time Rendering, Spatial-temporal},
location = {Sydney, NSW, Australia},
series = {SA '23}
}
:) If you have any questions or suggestions about this repo, please feel free to contact me (jsnwu99@gmail.com).

