
🎬 ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion

Paper PDF · arXiv · Project Page · Open In Colab

Meta Reality Labs; SpAItial; University College London

Remy Sabathier, David Novotny, Niloy J. Mitra, Tom Monnier

ActionMesh teaser

📖 Overview

ActionMesh is a fast video-to-animated-mesh model that generates an animated 3D mesh (with fixed topology) from an input video (real or synthetic).

🆕 Updates

  • 2026-01-31: 🆕 Low RAM mode (--low_ram): ActionMesh can now run on Google Colab T4 GPUs! Try it on Colab

  • 2025-01-21: Demo is live! Try it here: 🤗 facebook/ActionMesh

  • 2025-01-21: Code released!

⚙️ Installation

Requirements

  • GPU: NVIDIA GPU with 32GB VRAM (tested on A100, H100, and H200)
  • GPU (Low RAM): 🆕 Supports GPUs with 12GB VRAM using --low_ram mode (e.g., Google Colab T4)
  • PyTorch: Requires PyTorch and torchvision (developed with torch 2.4.0 / CUDA 12.1 and torchvision 0.19.0)
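A quick way to sanity-check these requirements before installing is to query PyTorch directly. This is a minimal sketch; the 32 GB / 12 GB figures simply restate the list above.

```python
# Minimal environment check: report the torch version, CUDA availability,
# and GPU memory so you know whether to use the default or --low_ram mode.
import torch

print("torch:", torch.__version__)               # developed with torch 2.4.0 / CUDA 12.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name} ({vram_gb:.1f} GB VRAM)")
    if vram_gb < 32:
        print("Less than 32 GB VRAM detected: consider --fast --low_ram.")
```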

1. Clone and Install Dependencies

git clone git@github.com:facebookresearch/actionmesh.git
cd actionmesh
git submodule update --init --recursive
pip install -r requirements.txt
pip install -e .

2. Optional Dependencies

| Dependency | Purpose | Installation |
| --- | --- | --- |
| PyTorch3D | Video rendering of animated meshes | Installation guide |
| Blender 3.5.1 | Export animated mesh as a single .glb file | Download |

🚀 Quick Start

Basic Usage

Generate an animated mesh from an input video:

Note: To export a single animated mesh file (importable in Blender), specify the path to your Blender executable via --blender_path.

python inference/video_to_animated_mesh.py \
    --input assets/examples/davis_camel \
    --blender_path "path/to/blender/executable"  # optional: export animated mesh for Blender

Fast & Low RAM Modes

For faster inference (as used in the HuggingFace demo):

python inference/video_to_animated_mesh.py \
    --input assets/examples/davis_camel \
    --fast \
    --blender_path "path/to/blender/executable"

For low RAM GPUs (e.g., Google Colab T4):

python inference/video_to_animated_mesh.py \
    --input assets/examples/davis_camel \
    --fast --low_ram \
    --blender_path "path/to/blender/executable"

Performance comparison on H100 GPU:

| Mode | Time | Quality |
| --- | --- | --- |
| Default | ~115s | Higher quality |
| Fast (--fast) | ~45s | Slightly reduced quality |

Model Downloads

On the first launch, ActionMesh weights and external models are automatically downloaded from HuggingFace:

| Model | Source | Local Path |
| --- | --- | --- |
| ActionMesh | facebook/ActionMesh | pretrained_weights/ActionMesh |
| TripoSG (image-to-3D) | VAST-AI/TripoSG | pretrained_weights/TripoSG |
| DINOv2 | facebook/dinov2-large | pretrained_weights/dinov2 |
| RMBG | briaai/RMBG-1.4 | pretrained_weights/RMBG |
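If you prefer to fetch the checkpoints ahead of time (for example, to avoid the download on a short-lived Colab session), the sketch below pre-populates the local paths from the table using huggingface_hub. Whether the inference script reuses these folders without re-downloading is an assumption, not something documented here.

```python
# Pre-download all checkpoints into the local paths listed above.
# Assumes `huggingface_hub` is installed.
from huggingface_hub import snapshot_download

repos = {
    "facebook/ActionMesh": "pretrained_weights/ActionMesh",
    "VAST-AI/TripoSG": "pretrained_weights/TripoSG",
    "facebook/dinov2-large": "pretrained_weights/dinov2",
    "briaai/RMBG-1.4": "pretrained_weights/RMBG",
}
for repo_id, local_dir in repos.items():
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
    print(f"downloaded {repo_id} -> {local_dir}")
```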

🎨 Examples

We provide example sequences in assets/examples/ with expected outputs for testing and debugging your installation:

  • davis_camel
  • davis_flamingo
  • kangaroo
  • spring

🎬 Input

The --input argument accepts:

  • A .mp4 video file
  • A folder containing PNG images

The number of input frames should be between 16 and 31 (default is 16). Any additional frames will be ignored.
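If your clip is longer than 31 frames, you may want to control which frames are kept rather than letting the extra ones be dropped. The sketch below subsamples a video into 16 evenly spaced PNG frames that can be passed as a folder to --input; it uses OpenCV, which is an assumed extra dependency, and the file paths are placeholders.

```python
# Subsample a long video into up to 16 evenly spaced PNG frames for --input.
# Requires opencv-python (assumed extra dependency, not part of this repo).
import os
import cv2

def video_to_frames(video_path, out_dir, num_frames=16):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices across the whole clip.
    indices = [round(i * (total - 1) / (num_frames - 1)) for i in range(num_frames)]
    for i, idx in enumerate(indices):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{i:03d}.png"), frame)
    cap.release()

video_to_frames("my_clip.mp4", "my_clip_frames")  # placeholder paths
```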

Masks

Input frames can be provided with or without alpha masks. If no mask is provided, the RMBG background removal model is automatically applied to each frame before processing.

Tip: For custom videos, we strongly recommend using the SAM2 demo to isolate the animated subject on a white background, as RMBG may have limited performance on complex scenes. See our SAM2 extraction guide for detailed instructions.
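If you already have per-frame masks (for instance exported from the SAM2 demo), the minimal Pillow sketch below composites each frame onto a white background and stores the mask in the alpha channel. The file layout and the exact mask convention ActionMesh expects are assumptions on my part.

```python
# Combine an RGB frame with its binary mask into an RGBA PNG: subject on a
# white background, mask kept as the alpha channel. Paths are placeholders.
import os
from PIL import Image

def apply_mask(frame_path, mask_path, out_path):
    frame = Image.open(frame_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")          # white = subject, black = background
    white = Image.new("RGB", frame.size, (255, 255, 255))
    composited = Image.composite(frame, white, mask)   # paste subject over white
    composited.putalpha(mask)                          # keep the mask as alpha
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    composited.save(out_path)

apply_mask("frames/frame_000.png", "masks/frame_000.png", "masked/frame_000.png")
```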

📦 Export

The model exports a folder containing:

| Output | Description | Requirements |
| --- | --- | --- |
| Per-frame meshes | One .glb mesh file per timestep (mesh_000.glb, mesh_001.glb, ...) | None (default) |
| Animated mesh | Single animated_mesh.glb with embedded animation, importable in Blender | Blender 3.5.1 |
| Video | Rendered .mp4 video of the animated mesh | PyTorch3D |
🎥 Video output preview

🎞️ Animated mesh file imported in Blender
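The per-frame meshes can also be inspected programmatically without Blender. Below is a sketch using trimesh (an assumed extra dependency); since the topology is fixed, every timestep should report the same vertex and face counts. The output folder name is a placeholder.

```python
# Inspect the exported per-frame meshes; vertex/face counts should be
# identical across timesteps because the topology is fixed.
# Requires trimesh (assumed extra dependency); "output_dir" is a placeholder.
import glob
import trimesh

for path in sorted(glob.glob("output_dir/mesh_*.glb")):
    mesh = trimesh.load(path, force="mesh")  # flatten the glb scene into a single mesh
    print(f"{path}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
```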

🏛️ License

See the LICENSE file for details about the license under which this code is made available.

🙏 Acknowledgements

ActionMesh builds upon the following open-source projects. We thank the authors for making their work available:

Project Description
TripoSG Image-to-3D mesh generation
DINOv2 Self-supervised vision features
Diffusers Diffusion model framework
Transformers Transformer model library
RMBG-1.4 Background removal model

📚 Citation

@inproceedings{ActionMesh2025,
  author = {Remy Sabathier and David Novotny and Niloy J. Mitra and Tom Monnier},
  title  = {ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion},
  year   = {2025},
}
