
Commit e5714f5

Update articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Co-authored-by: Eric Urban <[email protected]>
1 parent 04e5638 commit e5714f5

1 file changed (+1, -1)

articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md

Lines changed: 1 addition & 1 deletion
@@ -230,7 +230,7 @@ Render the SVG animation along with the synthesized speech to see the mouth move
 # [3D blend shapes](#tab/3dblendshapes)
 
 
-Each viseme event includes a series of frames in the `Animation` SDK property. These are grouped to best align the facial positions with the audio. Your 3D engine should render each group of `BlendShapes` frames immediately before the corresponding audio chunk. The `FrameIndex` value indicates how many frames have already been rendered prior to the current list of frames.
+Each viseme event includes a series of frames in the `Animation` SDK property. These are grouped to best align the facial positions with the audio. Your 3D engine should render each group of `BlendShapes` frames immediately before the corresponding audio chunk. The `FrameIndex` value indicates how many frames preceded the current list of frames.
 
 The output json looks like the following sample. Each frame within `BlendShapes` contains an array of 55 facial positions represented as decimal values between 0 to 1. The decimal values are in the same order as described in the facial positions table below.
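The paragraph changed by this commit describes the shape of the `Animation` payload: a `FrameIndex` counting the frames that came before the current list, and `BlendShapes` frames of 55 facial-position values between 0 and 1. As a minimal sketch, a consumer might parse that JSON like this; the example payload below is hypothetical, constructed only to match the shape described in the docs, not captured from the SDK:

```python
import json

# Hypothetical payload shaped like the Animation property described in the
# changed paragraph: FrameIndex counts frames that preceded this group, and
# each BlendShapes frame holds 55 facial-position values in [0, 1].
animation_json = json.dumps({
    "FrameIndex": 0,
    "BlendShapes": [[0.0] * 55, [0.1] * 55],
})

payload = json.loads(animation_json)

# A 3D engine would render these frames immediately before the
# corresponding audio chunk.
print(payload["FrameIndex"])           # frames already delivered before this group
print(len(payload["BlendShapes"]))     # number of frames in this group
print(len(payload["BlendShapes"][0]))  # 55 facial positions per frame
```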
