English | 中文版
A real-time interactive streaming digital human system that enables synchronized audio-video conversation and largely meets commercial application standards.
wav2lip Demo | ernerf Demo | musetalk Demo
Domestic Mirror Repository: https://gitee.com/lipku/LiveTalking
- Supports multiple digital human models: ernerf, musetalk, wav2lip, Ultralight-Digital-Human.
- Supports voice cloning.
- Supports interrupting the digital human while it is speaking.
- Supports full-body video stitching.
- Supports WebRTC and virtual camera output.
- Supports motion choreography: plays custom videos when the digital human is not speaking.
- Supports custom digital human avatars.
Tested on Ubuntu 24.04, Python 3.10, PyTorch 2.5.0, and CUDA 12.4.
```bash
conda create -n nerfstream python=3.10
conda activate nerfstream
# If your CUDA version is not 12.4 (check via "nvidia-smi"), install the matching PyTorch build from https://pytorch.org/get-started/previous-versions/
conda install pytorch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 pytorch-cuda=12.4 -c pytorch -c nvidia
pip install -r requirements.txt
```
For common installation issues, refer to the FAQ.
For CUDA environment setup on Linux, refer to this article: https://zhuanlan.zhihu.com/p/674972886
Troubleshooting for video connection issues: https://mp.weixin.qq.com/s/MVUkxxhV2cgMMHalphr2cg
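To quickly confirm that the installed PyTorch build can see the GPU, here is a minimal check using only standard PyTorch calls:

```python
import torch

# Verify the installed build and CUDA visibility.
print("PyTorch:", torch.__version__)       # expected: 2.5.0
print("CUDA build:", torch.version.cuda)   # expected: 12.4
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```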
- Download Models
Quark Cloud Drive: https://pan.quark.cn/s/83a750323ef0
Google Drive: https://drive.google.com/drive/folders/1FOC_MD6wdogyyX_7V1d4NDIO7P9NlSAJ?usp=sharing
- Copy `wav2lip256.pth` to the `models` directory of this project and rename it to `wav2lip.pth`.
- Extract the `wav2lip256_avatar1.tar.gz` archive and copy the entire extracted folder to `data/avatars` of this project (see the layout check below).
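Optionally, a small sketch (not part of the project) to confirm the files landed where the steps above expect them; run it from the project root:

```python
from pathlib import Path

# Paths follow the download steps above; adjust if your layout differs.
expected = [
    Path("models/wav2lip.pth"),               # renamed from wav2lip256.pth
    Path("data/avatars/wav2lip256_avatar1"),  # extracted archive folder
]
for p in expected:
    print(("OK     " if p.exists() else "MISSING"), p)
```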
- Run the Project

Execute:

```bash
python app.py --transport webrtc --model wav2lip --avatar_id wav2lip256_avatar1
```
The server must open the following ports: TCP: 8010; UDP: 1-65536
You can access the client in two ways:
(1) Open http://serverip:8010/webrtcapi.html in a browser. Click "start" first to begin playing the digital human video, then enter any text in the input box and submit it; the digital human will read the text aloud.
(2) Use the desktop client (download link: https://pan.quark.cn/s/d7192d8ac19b).
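You can also drive the digital human programmatically instead of through the page. The sketch below mirrors the request the demo page submits; the route and JSON fields (`/human`, `text`, `type`, `interrupt`) are assumptions here and should be verified against webrtcapi.html or the documentation:

```python
import requests

# Hypothetical sketch: ask the digital human to speak a line of text.
# Confirm the route and payload against webrtcapi.html before relying on it.
SERVER = "http://serverip:8010"
resp = requests.post(
    f"{SERVER}/human",
    json={
        "text": "Hello, this is a test.",  # text to be spoken
        "type": "echo",                    # assumed: 'echo' reads the text verbatim
        "interrupt": True,                 # assumed: cut off any ongoing speech first
    },
    timeout=10,
)
print(resp.status_code, resp.text)
```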
- Quick Experience
Visit https://www.compshare.cn/images/4458094e-a43d-45fe-9b57-de79253befe4?referral_code=3XW3852OBmnD089hMMrtuU&ytag=GPU_GitHub_livetalking and create an instance from this image; the project runs out of the box.
If you cannot access Hugging Face, run the following command before starting the project:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
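Equivalently, if you start the project from your own Python launcher, the mirror can be set in-process, as long as it happens before `huggingface_hub` is imported (the `HF_ENDPOINT` variable is read at import time):

```python
import os

# Must run before importing huggingface_hub or any module that imports it.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
```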
For detailed usage instructions: https://livetalking-doc.readthedocs.io/
No prior installation is required; run directly with Docker:
```bash
docker run --gpus all -it --network=host --rm registry.cn-zhangjiakou.aliyuncs.com/codewithgpu3/lipku-livetalking:toza2irpHZ
```
The code is located in `/root/livetalking`. First run `git pull` to fetch the latest code, then execute commands as described in Sections 2 and 3.
The following images are available:
- AutoDL Image: https://www.codewithgpu.com/i/lipku/livetalking/base (AutoDL tutorial)
- UCloud Image: https://www.compshare.cn/images/4458094e-a43d-45fe-9b57-de79253befe4?referral_code=3XW3852OBmnD089hMMrtuU&ytag=GPU_GitHub_livetalking (UCloud tutorial). This image supports opening any port, so no additional SRS service deployment is required.
- Performance depends mainly on the CPU and GPU: compressing each video stream consumes CPU, and the CPU load grows with video resolution; each lip-sync inference depends on GPU performance.
- The number of concurrent streams while digital humans are not speaking is bounded by CPU performance; the number of concurrent streams while multiple digital humans speak simultaneously is bounded by GPU performance.
- In the backend logs, `inferfps` is the GPU inference frame rate and `finalfps` is the final streaming frame rate. Both must stay above 25 fps for real-time performance; if `inferfps` is above 25 but `finalfps` is below 25, the CPU is the bottleneck.
- Real-Time Inference Performance
| Model | GPU Model | FPS |
|---|---|---|
| wav2lip256 | RTX 3060 | 60 |
| wav2lip256 | RTX 3080Ti | 120 |
| musetalk | RTX 3080Ti | 42 |
| musetalk | RTX 3090 | 45 |
| musetalk | RTX 4090 | 72 |
A GPU of RTX 3060 or higher is sufficient for wav2lip256, while musetalk requires an RTX 3080Ti or higher.
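Combining the table with the 25 fps real-time threshold above gives a rough GPU-side capacity estimate (a back-of-the-envelope sketch that ignores CPU encoding limits):

```python
# Rough estimate: concurrent speaking streams ≈ GPU inference fps / 25.
# Benchmark numbers are taken from the table above; CPU limits are not modeled.
REALTIME_FPS = 25
benchmarks = {
    ("wav2lip256", "RTX 3060"): 60,
    ("wav2lip256", "RTX 3080Ti"): 120,
    ("musetalk", "RTX 3090"): 45,
}
for (model, gpu), fps in benchmarks.items():
    print(f"{model} on {gpu}: ~{fps // REALTIME_FPS} concurrent speaking streams")
```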
The following extended features are available for users who are familiar with the open-source project and need to expand product capabilities:
- High-definition wav2lip model.
- Full voice interaction: supports interrupting the digital human’s response via a wake word or button to ask a new question.
- Real-time synchronized subtitles: provides the frontend with events for the start and end of each sentence spoken by the digital human.
- Each connection can specify a corresponding avatar and voice; accelerated avatar image loading.
- Supports avatars (digital human images) with unlimited duration.
- Provides a real-time audio stream input interface.
- Transparent background for the digital human, supporting dynamic background overlay.
- Real-time avatar switching, supporting multiple digital humans in the same scene.
- Camera-driven digital human movements and facial expressions.
For more details: https://livetalking-doc.readthedocs.io/en/latest/service.html
Videos developed based on this project and published on platforms such as Bilibili, WeChat Channels, and Douyin must include the LiveTalking watermark and logo.
If this project is helpful to you, please give it a "Star". Contributions from developers interested in improving this project are also welcome.
- Knowledge Planet (for high-quality FAQs, best practices, and Q&A): https://t.zsxq.com/7NMyO
- WeChat Official Account: 数字人技术 (Digital Human Technology)

