Please download our [pre-trained model](https://drive.google.com/drive/folders/1Wd88VDoLhVzYsQ30_qDVluQr_Xm46yHT?usp=sharing) and put it in ./checkpoints.
|checkpoints/hub | Face detection models used in [face alignment](https://github.com/1adrianb/face-alignment). |
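As a minimal sketch of the expected layout: the `checkpoints/hub` subfolder comes from the table above, but the exact weight filenames inside the Google Drive archive are not listed here, so the `ls` comments only indicate where each group of files belongs.

```shell
# Create the expected folders, then move the downloaded files into them.
mkdir -p ./checkpoints/hub
ls ./checkpoints       # pre-trained model from the Google Drive link goes here
ls ./checkpoints/hub   # face detection models used by face alignment
```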
</details>
#### Generating 2D face from a single Image
```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a file to store results>
```
#### Generating 3D face from Audio
To do ...
#### Generating 4D free-view talking examples from audio and a single image
We use `camera_yaw`, `camera_pitch`, and `camera_roll` to control the camera pose. For example, `--camera_yaw -20 30 10` means the camera yaw angle changes from -20 to 30 degrees and then from 30 back to 10.
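How the repository turns these keyframe angles into per-frame camera poses is not specified here; as a minimal sketch, assuming simple linear interpolation between consecutive keyframes (the function name `interpolate_yaw` and the step count are hypothetical):

```python
def interpolate_yaw(keyframes, steps_per_segment):
    """Build a per-frame yaw schedule from keyframe angles.

    e.g. keyframes [-20, 30, 10] sweeps -20 -> 30, then 30 -> 10,
    mirroring the semantics of `--camera_yaw -20 30 10`.
    """
    angles = []
    for start, end in zip(keyframes, keyframes[1:]):
        for i in range(steps_per_segment):
            # Linear blend from `start` toward `end` within this segment.
            angles.append(start + (end - start) * i / steps_per_segment)
    angles.append(keyframes[-1])  # include the final keyframe exactly
    return angles

schedule = interpolate_yaw([-20, 30, 10], steps_per_segment=5)
print(schedule[0], schedule[5], schedule[-1])
```

The same scheme would apply unchanged to the pitch and roll keyframes.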