Sizhe Shan1,2* • Qiulin Li1,3* • Yutao Cui1 • Miles Yang1 • Yuehai Wang2 • Qun Yang3 • Jin Zhou1† • Zhao Zhong1
🏢 1Tencent Hunyuan • 🎓 2Zhejiang University • 🎓 3Nanjing University of Aeronautics and Astronautics
*Equal contribution • †Project lead
Experience the magic of AI-generated Foley audio in perfect sync with video content!
🎬 Watch how HunyuanVideo-Foley generates immersive sound effects synchronized with video content
ComfyUI Integration - Thanks to the amazing community for creating ComfyUI nodes:
- if-ai/ComfyUI_HunyuanVideoFoley - ComfyUI workflow integration that supports CPU offloading and FP8 quantization
- phazei/ComfyUI-HunyuanVideo-Foley - An alternative ComfyUI node implementation that supports different precision modes
🙏 We encourage and appreciate community contributions that make HunyuanVideo-Foley more accessible!
🔄 Multi-scenario Sync | 🧠 Multi-modal Balance | 🎵 48kHz Hi-Fi Output
🚀 Tencent Hunyuan open-sources HunyuanVideo-Foley, an end-to-end video sound effect generation model!
A professional-grade AI tool specifically designed for video content creators, widely applicable to diverse scenarios including short video creation, film production, advertising creativity, and game development.
🎬 Multi-scenario Audio-Visual Synchronization
Supports generating high-quality audio that is synchronized and semantically aligned with complex video scenes, enhancing realism and immersive experience for film/TV and gaming applications.
⚖️ Multi-modal Semantic Balance
Intelligently balances visual and textual information analysis, comprehensively orchestrates sound effect elements, avoids one-sided generation, and meets personalized dubbing requirements.
🎵 High-fidelity Audio Output
Self-developed 48kHz audio VAE perfectly reconstructs sound effects, music, and vocals, achieving professional-grade audio generation quality.
🏆 SOTA Performance Achieved
HunyuanVideo-Foley leads across multiple evaluation benchmarks, achieving new state-of-the-art results in audio fidelity, visual-semantic alignment, temporal alignment, and distribution matching, surpassing previous open-source solutions!
📊 Performance comparison across different evaluation metrics - HunyuanVideo-Foley leads in nearly every category
The TV2A (Text-Video-to-Audio) task presents a complex multimodal generation challenge requiring large-scale, high-quality datasets. Our comprehensive data pipeline systematically identifies and excludes unsuitable content to produce robust and generalizable audio generation capabilities.
HunyuanVideo-Foley employs a sophisticated hybrid architecture:
- 🔄 Multimodal Transformer Blocks: Jointly process visual and audio streams
- 🎵 Unimodal Transformer Blocks: Focus on audio stream refinement
- 👁️ Visual Encoding: Pre-trained encoder extracts visual features from video frames
- 📝 Text Processing: Semantic features extracted via pre-trained text encoder
- 🎧 Audio Encoding: Latent representations with Gaussian noise perturbation
- ⏰ Temporal Alignment: Synchformer-based frame-level synchronization with gated modulation
Objective and subjective evaluation results demonstrating superior performance across metrics
📊 Method | PQ ↑ | PC ↑ | CE ↑ | CU ↑ | IB ↑ | DeSync ↓ | CLAP ↑ | MOS-Q ↑ | MOS-S ↑ | MOS-T ↑ |
---|---|---|---|---|---|---|---|---|---|---|
FoleyCrafter | 6.27 | 2.72 | 3.34 | 5.68 | 0.17 | 1.29 | 0.14 | 3.36±0.78 | 3.54±0.88 | 3.46±0.95 |
V-AURA | 5.82 | 4.30 | 3.63 | 5.11 | 0.23 | 1.38 | 0.14 | 2.55±0.97 | 2.60±1.20 | 2.70±1.37 |
Frieren | 5.71 | 2.81 | 3.47 | 5.31 | 0.18 | 1.39 | 0.16 | 2.92±0.95 | 2.76±1.20 | 2.94±1.26 |
MMAudio | 6.17 | 2.84 | 3.59 | 5.62 | 0.27 | 0.80 | 0.35 | 3.58±0.84 | 3.63±1.00 | 3.47±1.03 |
ThinkSound | 6.04 | 3.73 | 3.81 | 5.59 | 0.18 | 0.91 | 0.20 | 3.20±0.97 | 3.01±1.04 | 3.02±1.08 |
HunyuanVideo-Foley (ours) | 6.59 | 2.74 | 3.88 | 6.13 | 0.35 | 0.74 | 0.33 | 4.14±0.68 | 4.12±0.77 | 4.15±0.75 |
Comprehensive objective evaluation showcasing state-of-the-art performance
📊 Method | FD_PANNs ↓ | FD_PASST ↓ | KL ↓ | IS ↑ | PQ ↑ | PC ↑ | CE ↑ | CU ↑ | IB ↑ | DeSync ↓ | CLAP ↑ |
---|---|---|---|---|---|---|---|---|---|---|---|
FoleyCrafter | 22.30 | 322.63 | 2.47 | 7.08 | 6.05 | 2.91 | 3.28 | 5.44 | 0.22 | 1.23 | 0.22 |
V-AURA | 33.15 | 474.56 | 3.24 | 5.80 | 5.69 | 3.98 | 3.13 | 4.83 | 0.25 | 0.86 | 0.13 |
Frieren | 16.86 | 293.57 | 2.95 | 7.32 | 5.72 | 2.55 | 2.88 | 5.10 | 0.21 | 0.86 | 0.16 |
MMAudio | 9.01 | 205.85 | 2.17 | 9.59 | 5.94 | 2.91 | 3.30 | 5.39 | 0.30 | 0.56 | 0.27 |
ThinkSound | 9.92 | 228.68 | 2.39 | 6.86 | 5.78 | 3.23 | 3.12 | 5.11 | 0.22 | 0.67 | 0.22 |
HunyuanVideo-Foley (ours) | 6.07 | 202.12 | 1.89 | 8.30 | 6.12 | 2.76 | 3.22 | 5.53 | 0.38 | 0.54 | 0.24 |
🏆 Outstanding Results! HunyuanVideo-Foley achieves the best scores on most evaluation metrics, demonstrating significant improvements in audio quality, synchronization, and semantic alignment.
🔧 System Requirements
- CUDA: 12.4 or 11.8 recommended
- Python: 3.8+
- OS: Linux (primary support)
- Note: This model requires approximately 20GB of VRAM for inference. A GPU with at least 24GB of VRAM (such as an RTX 3090 or 4090) is recommended for stable performance.
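Before installing anything, you can confirm your driver version and available VRAM with standard NVIDIA tooling:
# 🔍 Check GPU model, driver version, and free VRAM
nvidia-smi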
# 📥 Clone the repository
git clone https://github.com/Tencent-Hunyuan/HunyuanVideo-Foley
cd HunyuanVideo-Foley
💡 Tip: We recommend using Conda for Python environment management.
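For example, a dedicated environment can be created as follows (the environment name is illustrative, and per the requirements above any Python 3.8+ should work):
# 🐍 Create and activate an isolated environment
conda create -n hunyuanvideo-foley python=3.10 -y
conda activate hunyuanvideo-foley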
# 🔧 Install dependencies
pip install -r requirements.txt
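If pip resolves a CPU-only PyTorch wheel on your system, you may want to install a CUDA-matched build first; the cu124 index below corresponds to the recommended CUDA 12.4 (use cu118 for CUDA 11.8):
# ⚡ Optional: explicitly install a CUDA-enabled PyTorch build
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124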
📁 Download Model Weights from Hugging Face
# using git-lfs
git clone https://huggingface.co/tencent/HunyuanVideo-Foley
# using huggingface-cli
huggingface-cli download tencent/HunyuanVideo-Foley
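To keep the weights in a predictable directory that you can later pass as --model_path, huggingface-cli also accepts a --local-dir flag (the target path here is just an example):
# using huggingface-cli with an explicit download directory
huggingface-cli download tencent/HunyuanVideo-Foley --local-dir ./pretrained_models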
Generate Foley audio for a single video file with text description:
python3 infer.py \
--model_path PRETRAINED_MODEL_PATH_DIR \
--config_path ./configs/hunyuanvideo-foley-xxl.yaml \
--single_video video_path \
--single_prompt "audio description" \
--output_dir OUTPUT_DIR
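For instance, assuming the weights were downloaded to ./pretrained_models and you have a clip demo.mp4 (both paths and the prompt are placeholders):
# 🎬 Example invocation with placeholder paths
python3 infer.py \
    --model_path ./pretrained_models \
    --config_path ./configs/hunyuanvideo-foley-xxl.yaml \
    --single_video ./demo.mp4 \
    --single_prompt "gentle footsteps on a wooden floor" \
    --output_dir ./outputs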
Process multiple videos using a CSV file with video paths and descriptions:
# Download sample test videos
bash ./download_test_videos.sh
python3 infer.py \
--model_path PRETRAINED_MODEL_PATH_DIR \
--config_path ./configs/hunyuanvideo-foley-xxl.yaml \
--csv_path assets/test.csv \
--output_dir OUTPUT_DIR
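To run on your own clips, mirror the layout of assets/test.csv; the column names below are an assumption for illustration, so check that file for the exact header:
# 📝 Hypothetical CSV layout (verify against assets/test.csv)
cat > my_videos.csv << 'EOF'
video,prompt
./videos/waves.mp4,"ocean waves crashing on a rocky shore"
./videos/street.mp4,"busy city traffic with distant sirens"
EOF
Then pass my_videos.csv via --csv_path in the command above.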
Launch a user-friendly Gradio web interface for easy interaction:
export HIFI_FOLEY_MODEL_PATH=PRETRAINED_MODEL_PATH_DIR
python3 gradio_app.py
🌐 Then open your browser and navigate to the provided local URL to start generating Foley audio!
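If the app runs on a remote machine, Gradio's standard environment variables can expose it beyond localhost, assuming gradio_app.py relies on Gradio's default launch settings:
# 🌐 Optional: make the interface reachable from other machines
export GRADIO_SERVER_NAME=0.0.0.0
export GRADIO_SERVER_PORT=7860
python3 gradio_app.py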
If you find HunyuanVideo-Foley useful for your research, please consider citing our paper:
@misc{shan2025hunyuanvideofoleymultimodaldiffusionrepresentation,
title={HunyuanVideo-Foley: Multimodal Diffusion with Representation Alignment for High-Fidelity Foley Audio Generation},
author={Sizhe Shan and Qiulin Li and Yutao Cui and Miles Yang and Yuehai Wang and Qun Yang and Jin Zhou and Zhao Zhong},
year={2025},
eprint={2508.16930},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2508.16930},
}
We extend our heartfelt gratitude to the open-source community!
🎨 Stable Diffusion 3 | ⚡ FLUX | 🎵 MMAudio | 🤗 HuggingFace | 🎛️ DAC | 🔄 Synchformer
🙏 Special thanks to all researchers and developers who contribute to the advancement of AI-generated audio and multimodal learning!