**README.md** — 84 additions, 50 deletions
@@ -4,9 +4,9 @@
<br><br>
- **Real-time audio-to-blendshape lip sync for the browser.**
+ **Voice-driven 3D avatar animation engine for the browser.**
- Rust/WASM engine that converts speech into ARKit-compatible facial animations at 30fps — entirely client-side.
+ Extracts emotion from speech and generates lip sync, facial expressions, and body motion in real time — entirely client-side via Rust/WASM.
<br>
@@ -36,33 +36,48 @@ Rust/WASM engine that converts speech into ARKit-compatible facial animations at
<tr>
<td width="50%">
- **Browser-native WASM**<br>
- <sub>No server needed. Entire pipeline runs in the browser with near-native performance via Rust → WebAssembly compilation.</sub>
+ **Voice → Full-body Animation**<br>
+ <sub>Not just lip sync. Analyzes speech to generate lip movements, emotional facial expressions, eye blinks, and body poses — all from a single audio stream.</sub>
- **ARKit-compatible Output**<br>
- <sub>Standard 52-dim or 111-dim blendshape weight arrays. Works with any 3D framework — Three.js, Babylon.js, Unity WebGL.</sub>
+ **Emotion-aware Expressions**<br>
+ <sub>Automatically maps vocal characteristics to facial expressions. Eyebrow raises, smile intensity, jaw dynamics, and blink patterns respond to how things are said, not just what is said.</sub>
- **Built-in Bone Animation**<br>
- <sub>Embedded VRMA idle/speaking pose clips with automatic crossfade. Natural body movement out of the box.</sub>
+ **Built-in Body Motion**<br>
+ <sub>Embedded VRMA bone animation clips (idle / speaking poses) with automatic crossfade. Your avatar breathes, shifts weight, and moves naturally — out of the box.</sub>
</td>
<td width="50%">
- **Real-time Streaming**<br>
- <sub>AudioWorklet-based microphone capture with ~300ms latency. Stream TTS audio or process recorded files.</sub>
+ **Browser-native WASM**<br>
+ <sub>No server needed. Entire pipeline runs in the browser at 30fps with near-native performance via Rust → WebAssembly. ARKit-compatible 52 or 111-dim output.</sub>
- **30-day Free Trial**<br>
- <sub>No signup, no API key. Call `init()` and start building. Internet required for license validation only.</sub>
+ **Real-time Streaming**<br>
+ <sub>AudioWorklet-based microphone capture with ~300ms latency. Feed live mic, TTS, or recorded audio — get animated avatar frames back instantly.</sub>
- **Three.js + VRM Ready**<br>
- <sub>First-class integration with @pixiv/three-vrm. Drop a VRM avatar and it just works.</sub>
+ **Plug & Play**<br>
+ <sub>3 lines of code to go from audio to animated avatar. 30-day free trial, no signup. First-class Three.js + VRM integration.</sub>
</td>
</tr>
</table>
---
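The engine's fixed 30fps output (stated in the feature grid above) means a renderer has to map audio playback time to a frame index. A minimal sketch of that mapping; `frameIndexAt` is our own illustrative helper, not part of the AnimaSync API:

```javascript
// Map elapsed playback time (seconds) to a 30fps frame index, clamped
// to the valid range. FPS = 30 matches the engine's stated output rate;
// frameIndexAt itself is illustrative, not an API call.
const FPS = 30;

function frameIndexAt(seconds, frameCount) {
  const i = Math.floor(seconds * FPS);
  return Math.min(Math.max(i, 0), frameCount - 1);
}

console.log(frameIndexAt(1.0, 300));  // 30
console.log(frameIndexAt(99.0, 300)); // 299 (clamped to the last frame)
```

Driving frame selection from the audio clock rather than `requestAnimationFrame` counters keeps lips in sync even when rendering stutters.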
+ ## What AnimaSync Does
+
+ Most lip sync engines stop at mouth shapes. AnimaSync goes further — it treats voice as the **complete animation source**:

```js
const frame = lipsync.getFrame(result, i); // number[52] — full face animation
applyToYourAvatar(frame);
}
```
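The excerpt above reads one flat `number[52]` frame per tick. Applying it is avatar-specific; one hedged way to bridge the flat array to named ARKit channels looks like this (the channel subset and its index order are assumptions for illustration, not the engine's documented layout):

```javascript
// Illustrative subset of ARKit channel names — a real frame has 52
// entries, and the true index → name mapping must come from the docs.
const CHANNELS = ["eyeBlinkLeft", "eyeBlinkRight", "jawOpen", "mouthSmileLeft", "mouthSmileRight"];

// Convert a flat weight array into a name → weight map, clamping each
// value to the valid blendshape range [0, 1].
function frameToWeights(frame, channels = CHANNELS) {
  const weights = {};
  channels.forEach((name, i) => {
    weights[name] = Math.min(1, Math.max(0, frame[i] ?? 0));
  });
  return weights;
}

console.log(frameToWeights([0.1, 0.1, 0.8, 0.3, 0.3]).jawOpen); // 0.8
```

A named map like this is what most avatar runtimes expect, so `applyToYourAvatar` would just iterate the entries and set each expression weight.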
@@ -114,9 +130,9 @@ Working examples you can run locally — zero npm install, all loaded from CDN.
| Example | Description | Source |
|---------|-------------|--------|
- | **[Basic](examples/vanilla-basic/)** | Audio file → blendshape bar chart. No 3D, pure API demo. | [index.html](examples/vanilla-basic/index.html) |
- | **[VRM Avatar](examples/vanilla-avatar/)** | Full 3D avatar with mic, file upload, bone animation. | [index.html](examples/vanilla-avatar/index.html) |
- | **[V1 vs V2](examples/vanilla-comparison/)** | Side-by-side dual avatar comparison. Same audio, two engines. | [index.html](examples/vanilla-comparison/index.html) |
+ | **[Basic](examples/vanilla-basic/)** | Audio → animated blendshape visualization. No 3D, pure API demo. | [index.html](examples/vanilla-basic/index.html) |
+ | **[VRM Avatar](examples/vanilla-avatar/)** | Full 3D avatar — lip sync, expressions, body motion, mic streaming. | [index.html](examples/vanilla-avatar/index.html) |
+ | **[V1 vs V2](examples/vanilla-comparison/)** | Side-by-side dual avatar comparison. Same voice, two animation engines. | [index.html](examples/vanilla-comparison/index.html) |
**examples/vanilla-avatar/README.md** — 10 additions, 9 deletions
@@ -1,14 +1,15 @@
# Vanilla Avatar
- Full 3D VRM avatar that lip-syncs to audio using AnimaSync V2. Supports file upload and real-time microphone streaming.
+ Full 3D VRM avatar that comes alive from voice alone. Lip sync, emotional facial expressions, natural eye blinks, and body motion — all generated from a single audio stream via AnimaSync V2.
## What it demonstrates
- - Three.js + `@pixiv/three-vrm` avatar rendering
- - VRMA bone animation (idle pose crossfade)
- - Real-time mic streaming via `processAudioChunk()` + AudioWorklet
- - Batch file processing via `processFile()`
- - 52-dim ARKit blendshape application to VRM expressions
+ - **Lip sync**: Mouth shapes driven by voice phonemes
+ - **Facial expressions**: Brows, cheeks, and eye area respond to vocal characteristics
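The `processAudioChunk()` + AudioWorklet streaming listed above implies some buffering: worklets hand out audio in 128-sample render quanta, while an engine typically wants larger fixed-size chunks. A sketch of such an accumulator; the 1024-sample default and the `ChunkBuffer` class are our own assumptions, not part of AnimaSync:

```javascript
// Accumulate small Float32Array blocks (e.g. 128-sample AudioWorklet
// quanta) into fixed-size chunks. The 1024-sample chunk size is an
// assumption — check the engine docs for what processAudioChunk expects.
class ChunkBuffer {
  constructor(chunkSize = 1024) {
    this.chunkSize = chunkSize;
    this.pending = [];   // queued Float32Array segments
    this.length = 0;     // total buffered samples
  }

  // Push new samples; returns an array of completed chunks (possibly empty).
  push(samples) {
    this.pending.push(samples);
    this.length += samples.length;
    const chunks = [];
    while (this.length >= this.chunkSize) {
      const chunk = new Float32Array(this.chunkSize);
      let offset = 0;
      while (offset < this.chunkSize) {
        const head = this.pending[0];
        const take = Math.min(head.length, this.chunkSize - offset);
        chunk.set(head.subarray(0, take), offset);
        offset += take;
        if (take === head.length) this.pending.shift();
        else this.pending[0] = head.subarray(take);
      }
      this.length -= this.chunkSize;
      chunks.push(chunk);
    }
    return chunks;
  }
}

const buf = new ChunkBuffer(4);
console.log(buf.push(new Float32Array(3)).length); // 0 — not enough samples yet
console.log(buf.push(new Float32Array(6)).length); // 2 — two full 4-sample chunks
```

In the mic path, each full chunk coming out of `push()` would then be forwarded to `processAudioChunk()`.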
**examples/vanilla-basic/README.md** — 5 additions, 5 deletions
@@ -1,12 +1,12 @@
# Vanilla Basic
- Minimal AnimaSync example — no 3D avatar, no Three.js. Drop an audio file and watch blendshape values animate in real time.
+ Minimal AnimaSync example — no 3D avatar, no Three.js. Drop an audio file and see how voice drives lip sync, facial expression, and blink animation data in real time.
## What it demonstrates
- Loading `@goodganglabs/lipsync-wasm-v2` from CDN (zero `npm install`)
- - `processFile()` batch API
- - Extracting frames with `getFrame()` and visualizing 23 key ARKit channels
+ - `processFile()` batch API — returns lip sync + expressions + blinks in one call
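The old bullet describes visualizing 23 key ARKit channels. A small helper in that spirit picks the strongest channels of a frame for display; both the channel names and `topChannels` are our own illustration, not code from the example:

```javascript
// Rank a frame's blendshape channels by weight and keep the top n —
// handy for a bar-chart style readout. Channel names are illustrative.
function topChannels(frame, names, n = 3) {
  return frame
    .map((value, i) => ({ name: names[i], value }))
    .sort((a, b) => b.value - a.value)
    .slice(0, n);
}

const names = ["jawOpen", "mouthSmileLeft", "eyeBlinkLeft", "browInnerUp"];
console.log(topChannels([0.7, 0.2, 0.05, 0.4], names, 2).map((c) => c.name));
// ["jawOpen", "browInnerUp"]
```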