Commit 36efdf9: add joyvasa
1 parent 5c9c2b8

2 files changed: 14 additions, 0 deletions

README.md

Lines changed: 7 additions & 0 deletions
@@ -7,6 +7,7 @@
 * Achieved real-time running of LivePortrait on an RTX 3090 GPU using TensorRT, reaching 30+ FPS. This is the speed for rendering a single frame, including pre- and post-processing, not just the model inference speed.
 * Implemented conversion of the LivePortrait model to an ONNX model, achieving an inference speed of about 70 ms/frame (~12 FPS) using onnxruntime-gpu on an RTX 3090, facilitating cross-platform deployment.
 * Seamless support for the native Gradio app, several times faster, with support for simultaneous inference on multiple faces and the Animal model.
+* Added support for [JoyVASA](https://github.com/jdh-algo/JoyVASA), which can drive videos or images with audio.

 **If you find this project useful, please give it a star ✨✨**
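As an aside on the ~70 ms/frame figure above: it is end-to-end latency, so it is best measured with warm-up calls excluded. A minimal timing sketch is below; it is not the repository's code, and the model path, input name, and input shape in the commented usage are assumptions.

```python
import time

def mean_latency_ms(infer_fn, warmup=5, runs=50):
    """Average wall-clock latency of one call to infer_fn, in milliseconds."""
    for _ in range(warmup):  # discard warm-up calls (lazy init, CUDA graph capture, caches)
        infer_fn()
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn()
    return (time.perf_counter() - start) * 1000.0 / runs

# Hypothetical usage with onnxruntime-gpu (path, input name, and shape are assumptions):
# import numpy as np, onnxruntime as ort
# sess = ort.InferenceSession("liveportrait.onnx",
#                             providers=["CUDAExecutionProvider"])
# frame = np.zeros((1, 3, 256, 256), dtype=np.float32)
# ms = mean_latency_ms(lambda: sess.run(None, {"input": frame}))
# print(f"{ms:.1f} ms/frame (~{1000.0 / ms:.0f} FPS)")
```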

@@ -15,6 +16,12 @@
 <video src="https://github.com/user-attachments/assets/716d61a7-41ae-483a-874d-ea1bf345bd1a" controls="controls" width="500" height="300">Your browser does not support playing this video!</video>

 **Changelog**
+- [x] **2024/12/16:** Added support for [JoyVASA](https://github.com/jdh-algo/JoyVASA), which can drive videos or images with audio. Very cool!
+  - Update the code, then download the models: `huggingface-cli download TencentGameMate/chinese-hubert-base --local-dir .\checkpoints\chinese-hubert-base` and `huggingface-cli download jdh-algo/JoyVASA --local-dir ./checkpoints/JoyVASA`
+  - After launching the webui, follow the tutorial below. When the source is a video, it is recommended to drive only the mouth movements.
+
+  <video src="https://github.com/user-attachments/assets/42fb24be-0cde-4138-9671-e52eec95e7f5" controls="controls" width="500" height="400">Your browser does not support playing this video!</video>
+
 - [x] **2024/12/14:** Added pickle and image driving, as well as region driving `animation_region`.
   - Please update to the latest code. Windows users can double-click `update.bat` to update, but note that your local code will be overwritten.
   - Running `python run.py` now automatically saves the corresponding pickle to the same directory as the driving video, allowing direct reuse.

README_ZH.md

Lines changed: 7 additions & 0 deletions
@@ -7,6 +7,7 @@
 * Achieved **real-time** running of LivePortrait on an RTX 3090 GPU via TensorRT, reaching 30+ FPS. This is the measured speed of rendering one frame, not just the model inference time.
 * Converted the LivePortrait model to an ONNX model; inference with onnxruntime-gpu on an RTX 3090 takes about 70 ms/frame (~12 FPS), facilitating cross-platform deployment.
 * Seamless support for the native Gradio app, several times faster, with support for multiple faces and the Animal model.
+* Added support for [JoyVASA](https://github.com/jdh-algo/JoyVASA), which can drive videos or images with audio.

 **If you find this project useful, please give it a star ✨✨**

@@ -15,6 +16,12 @@
 <video src="https://github.com/user-attachments/assets/716d61a7-41ae-483a-874d-ea1bf345bd1a" controls="controls" width="500" height="300">Your browser does not support playing this video!</video>

 **Changelog**
+- [x] **2024/12/16:** Added support for [JoyVASA](https://github.com/jdh-algo/JoyVASA), which can drive videos or images with audio. Very cool!
+  - Update the code, then download the models: `huggingface-cli download TencentGameMate/chinese-hubert-base --local-dir .\checkpoints\chinese-hubert-base` and `huggingface-cli download jdh-algo/JoyVASA --local-dir ./checkpoints/JoyVASA`
+  - After launching the webui, follow the tutorial below. When the source is a video, it is recommended to drive only the mouth movements.
+
+  <video src="https://github.com/user-attachments/assets/42fb24be-0cde-4138-9671-e52eec95e7f5" controls="controls" width="500" height="400">Your browser does not support playing this video!</video>
+
 - [x] **2024/12/14:** Added pickle and image driving, as well as region driving `animation_region`.
   - Please update to the latest code. Windows users can double-click `update.bat` to update, but note that your local code will be overwritten.
   - Running `python run.py` with a driving video now automatically saves the corresponding pickle to the same directory as the driving video, so it can be reused directly.
