## News

**April 3, 2025**
- We are releasing **[Stable Video 4D 2.0 (SV4D 2.0)](https://huggingface.co/stabilityai/sv4d2.0)**, an enhanced video-to-4D diffusion model for high-fidelity novel-view video synthesis and 4D asset generation. For research purposes:
  - **SV4D 2.0** was trained to generate 48 frames (12 video frames x 4 camera views) at 576x576 resolution, given a 12-frame input video of the same size, ideally consisting of white-background images of a moving object.
  - Compared to our previous 4D model [SV4D](https://huggingface.co/stabilityai/sv4d), **SV4D 2.0** can generate videos with higher fidelity, sharper details during motion, and better spatio-temporal consistency. It also generalizes much better to real-world videos. Moreover, it does not rely on reference multi-views of the first frame generated by SV3D, making it more robust to self-occlusions.
  - To generate longer novel-view videos, we autoregressively generate 12 frames at a time and use the previous generation as conditioning views for the remaining frames (see the sketch after this list).
  - Please check our [project page](https://sv4d20.github.io), [arXiv paper](https://arxiv.org/pdf/2503.16396), and [video summary](https://www.youtube.com/watch?v=dtqj-s50ynU) for more details.
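
For illustration only, here is a minimal sketch of the autoregressive windowing described above; `sample_novel_views` is a hypothetical stand-in for one SV4D 2.0 sampling call, and only the frame counts follow the description above:

```python
import numpy as np

def sample_novel_views(frames, cond_views=None):
    """Hypothetical stand-in for one SV4D 2.0 sampling call: returns
    novel-view frames (len(frames) video frames x 4 views) for the chunk."""
    return np.zeros((len(frames), 4, 576, 576, 3), dtype=np.uint8)

def generate_long_novel_view_video(input_video, chunk_size=12):
    """Sliding-window loop: each chunk after the first is conditioned on
    the previously generated novel-view frames."""
    outputs, cond_views = [], None
    for start in range(0, len(input_video), chunk_size):
        chunk = input_video[start:start + chunk_size]
        generated = sample_novel_views(chunk, cond_views=cond_views)
        outputs.append(generated)
        cond_views = generated  # previous generation conditions the next chunk
    return np.concatenate(outputs, axis=0)

# e.g. a 21-frame input video, as in the run instructions below
video = np.zeros((21, 576, 576, 3), dtype=np.uint8)
print(generate_long_novel_view_video(video).shape)  # (21, 4, 576, 576, 3)
```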

**QUICKSTART** :
- `python scripts/sampling/simple_video_sample_4d2.py --input_path assets/sv4d_videos/camel.gif --output_folder outputs/sv4d2`
- We also train an 8-view model that generates 5 frames x 8 views at a time (same as SV4D). For example, run `python scripts/sampling/simple_video_sample_4d2_8views.py --input_path assets/sv4d_videos/chest.gif --output_folder outputs/sv4d2_8views`

To run **SV4D 2.0** on a single input video of 21 frames:
- Download the SV4D 2.0 models (`sv4d2.safetensors` and `sv4d2_8views.safetensors`) from [here](https://huggingface.co/stabilityai/sv4d2.0) to `checkpoints/` (see the download snippet after this list)
- Run `python scripts/sampling/simple_video_sample_4d2.py --input_path <path/to/video>`
  - `input_path` : The input video `<path/to/video>` can be
    - a single video file in `gif` or `mp4` format, such as `assets/sv4d_videos/camel.gif`, or
    - a folder containing images of video frames in `.jpg`, `.jpeg`, or `.png` format, or
    - a file name pattern matching images of video frames.
  - `num_steps` : default is 50; you can decrease it to shorten sampling time.
  - `elevations_deg` : specified elevations (relative to the input view); default is 0.0 (same as the input view).
  - **Background removal** : For input videos with a plain background, (optionally) use [rembg](https://github.com/danielgatis/rembg) to remove the background and crop the video frames by setting `--remove_bg=True`. To obtain higher-quality outputs on real-world input videos with a noisy background, try segmenting the foreground object using [Clipdrop](https://clipdrop.co/) or [SAM2](https://github.com/facebookresearch/segment-anything-2) before running SV4D 2.0.
  - **Low VRAM environment** : To run on GPUs with low VRAM, try setting `--encoding_t=1` (number of frames encoded at a time) and `--decoding_t=1` (number of frames decoded at a time), or lower the video resolution, e.g. `--img_size=512`.
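
If you prefer to fetch the checkpoints programmatically, here is a minimal sketch using the `huggingface_hub` Python package (assumed to be installed); the repo id, file names, and target folder are the ones listed above:

```python
from huggingface_hub import hf_hub_download

# Download both SV4D 2.0 checkpoints into checkpoints/.
# Depending on the model's access settings, you may need to log in first
# (e.g. with `huggingface-cli login`).
for filename in ["sv4d2.safetensors", "sv4d2_8views.safetensors"]:
    hf_hub_download(
        repo_id="stabilityai/sv4d2.0",
        filename=filename,
        local_dir="checkpoints",
    )
```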

![tile](assets/sv4d2.gif)

**July 24, 2024**
- We are releasing **[Stable Video 4D (SV4D)](https://huggingface.co/stabilityai/sv4d)**, a video-to-4D diffusion model for novel-view video synthesis. For research purposes:
  - **SV4D** was trained to generate 40 frames (5 video frames x 8 camera views) at 576x576 resolution, given 5 context frames (the input video), and 8 reference views (synthesised from the first frame of the input video, using a multi-view diffusion model like SV3D) of the same size, ideally white-background images with one object.
|
0 commit comments