.well-known/agent-card.json (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
   "name": "AnimaSync",
   "description": "Voice-driven 3D avatar animation engine for the browser. Extracts emotion from speech and generates lip sync, facial expressions, and body motion in real time — entirely client-side via Rust/WASM and ONNX inference.",
agents.json (2 additions, 2 deletions)

@@ -29,7 +29,7 @@
   "id": "quick-start-v2",
   "name": "Quick Start with V2 Engine",
   "description": "Install AnimaSync V2, initialize the engine, and process an audio file to get 52 ARKit blendshape animation frames at 30fps.",
-  "agent_instructions": "Use this flow when a developer wants to add lip sync animation to a web app. V2 is recommended for most use cases — simpler integration, direct 52-dim prediction.",
+  "agent_instructions": "Use this flow when a developer wants to add lip sync animation to a web app. V1 is recommended for most use cases — full expression control with built-in VAD and idle expressions.",
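The quick-start flow describes output as 52 ARKit blendshape animation frames at 30 fps. As a rough sketch of what that output shape implies for a consumer (the type and function names here are hypothetical illustrations, not part of the AnimaSync API):

```typescript
// One animation frame: 52 weights, one per ARKit blendshape coefficient.
// Names below are assumptions for illustration only.
type BlendshapeFrame = number[];

const FPS = 30;
const BLENDSHAPE_DIM = 52;

// Frames expected for an audio clip of the given duration at 30 fps.
function expectedFrameCount(durationSeconds: number): number {
  return Math.round(durationSeconds * FPS);
}

// e.g. a 2.5-second clip should yield 75 frames of 52 values each.
const frames: BlendshapeFrame[] = Array.from(
  { length: expectedFrameCount(2.5) },
  () => new Array(BLENDSHAPE_DIM).fill(0),
);
console.log(frames.length, frames[0].length); // 75 52
```

This only pins down the data shape a caller should expect from the flow; the actual engine initialization and audio-processing calls are defined by the AnimaSync package itself.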