This repository contains an implementation of SyncNet for lip-syncing a given video with an input audio file. It extracts frames from the video, processes them using SyncNet, and generates a lip-synced output video.

## Features

- **Frame Extraction**: extracts frames from the input video.
- **Audio Preprocessing**: converts audio to 16 kHz mono for SyncNet.
- **Lip-Sync Processing**: uses `syncnet_v2.model` to sync the video with the audio.
- **Final Video Generation**: merges the processed frames with the audio using FFmpeg.
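The first two steps (frame extraction and audio preprocessing) are plain FFmpeg operations. A minimal Python sketch is below; the helper names and the 25 fps default are illustrative assumptions, not part of `run_syncnet.py`:

```python
import subprocess
from pathlib import Path


def audio_preprocess_cmd(video: str, out_wav: str) -> list[str]:
    """FFmpeg command: extract the audio track as 16 kHz mono WAV (the format SyncNet expects)."""
    return ["ffmpeg", "-y", "-i", video, "-vn", "-ac", "1", "-ar", "16000", out_wav]


def frame_extract_cmd(video: str, frames_dir: str, fps: int = 25) -> list[str]:
    """FFmpeg command: dump video frames as numbered JPEGs at a fixed frame rate."""
    return ["ffmpeg", "-y", "-i", video, "-vf", f"fps={fps}", f"{frames_dir}/%06d.jpg"]


if __name__ == "__main__":
    # Hypothetical usage with the file names from this repository's layout.
    Path("frames").mkdir(exist_ok=True)
    subprocess.run(audio_preprocess_cmd("input_video.mp4", "input_audio.wav"), check=True)
    subprocess.run(frame_extract_cmd("input_video.mp4", "frames"), check=True)
```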
## Project Structure

```
SyncNet-LipSync-Project/
├── data/                # Contains model files (syncnet_v2.model)
├── output/              # Stores the generated lip-synced video
├── syncnet_python/      # Main SyncNet scripts
├── input_video.mp4      # Input video file
├── input_audio.wav      # Input audio file
├── run_syncnet.py       # Main script to run SyncNet
├── README.md            # Project description
└── requirements.txt     # Dependencies (if needed)
```
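The final step, merging the processed frames back with the audio via FFmpeg, can be sketched the same way. The function name, output paths, and the `libx264`/`aac` codec choices are assumptions for illustration:

```python
import subprocess


def merge_cmd(frames_dir: str, audio: str, out: str, fps: int = 25) -> list[str]:
    """FFmpeg command: re-encode numbered frames into H.264 video and mux in the audio track.

    -shortest stops encoding when the shorter of the two streams ends,
    so the output never trails off with silent video or black audio.
    """
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps), "-i", f"{frames_dir}/%06d.jpg",
        "-i", audio,
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-shortest",
        out,
    ]


if __name__ == "__main__":
    # Hypothetical usage matching the repository layout above.
    subprocess.run(merge_cmd("frames", "input_audio.wav", "output/result.mp4"), check=True)
```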