Commit 0616c72

Merge pull request #404 from eric-urban/eur/speech-refresh-2

refresh speech docs with customer intent

2 parents b74e595 + ce28f30

12 files changed: +37 −24 lines

articles/ai-services/speech-service/how-to-recognize-speech.md

Lines changed: 2 additions & 1 deletion
@@ -6,12 +6,13 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 08/13/2024
+ms.date: 9/20/2024
 ms.author: eur
 ms.devlang: cpp
 ms.custom: devx-track-extended-java, devx-track-go, devx-track-js, devx-track-python
 zone_pivot_groups: programming-languages-speech-services
 keywords: speech to text, speech to text software
+#Customer intent: As a developer, I want to learn how to recognize speech so that I can convert spoken language into text.
 ---

 # How to recognize speech
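
The customer intent added to this article's frontmatter covers basic speech to text. As a minimal sketch of what the how-to demonstrates, assuming the microsoft-cognitiveservices-speech-sdk package with placeholder key, region, and language values:

```javascript
// Minimal sketch: recognize a single utterance from the default microphone.
// Key, region, and language are placeholders; a browser or suitable Node audio setup is assumed.
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_SPEECH_KEY", "YOUR_SPEECH_REGION");
speechConfig.speechRecognitionLanguage = "en-US";

const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

// recognizeOnceAsync listens for a single utterance and then stops.
recognizer.recognizeOnceAsync(result => {
  if (result.reason === sdk.ResultReason.RecognizedSpeech) {
    console.log(`Recognized: ${result.text}`);
  } else {
    console.log(`Speech not recognized, reason: ${result.reason}`);
  }
  recognizer.close();
});
```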

articles/ai-services/speech-service/how-to-select-audio-input-devices.md

Lines changed: 2 additions & 2 deletions
@@ -7,9 +7,10 @@ ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.reviewer: chlandsi
 ms.custom: devx-track-js, devx-track-python, linux-related-content
+#Customer intent: As a developer, I want to learn how to select an audio input device in the Speech SDK so that I can configure the audio input for my speech-enabled application.
 ---

 # Select an audio input device with the Speech SDK
@@ -382,6 +383,5 @@ In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.o
 ## Next steps

 - [Explore samples on GitHub](https://aka.ms/csspeech/samples)
-
 - [Customize acoustic models](./how-to-custom-speech-train-model.md)
 - [Customize language models](./how-to-custom-speech-train-model.md)
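
The second hunk sits next to the article's JavaScript guidance, which relies on MediaDevices.enumerateDevices(). A minimal sketch of that flow, assuming a browser context where microphone permission has already been granted and a placeholder speech configuration:

```javascript
// Browser sketch: list audio input devices and bind the recognizer to a chosen one.
// Device labels can be empty until the user grants microphone permission.
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

async function recognizeFromChosenMicrophone(speechConfig) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const mics = devices.filter(d => d.kind === "audioinput");
  mics.forEach(d => console.log(`${d.deviceId}: ${d.label}`));

  // Pick the first microphone for illustration; a real app would let the user choose.
  const audioConfig = sdk.AudioConfig.fromMicrophoneInput(mics[0].deviceId);
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

  recognizer.recognizeOnceAsync(result => {
    console.log(result.text);
    recognizer.close();
  });
}
```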

articles/ai-services/speech-service/how-to-speech-synthesis-viseme.md

Lines changed: 3 additions & 2 deletions
@@ -7,10 +7,11 @@ ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.reviewer: yulili
 ms.custom: references_regions, devx-track-extended-java, devx-track-js, devx-track-python
 zone_pivot_groups: programming-languages-speech-services-nomore-variant
+#Customer intent: As a developer, I want to learn how to get facial position with viseme so that I can animate my avatar to match the speech.
 ---

 # Get facial position with viseme
@@ -187,7 +188,7 @@ result = speech_synthesizer.speak_ssml_async(ssml).get()

 ::: zone pivot="programming-language-javascript"

-```Javascript
+```JavaScript
 var synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig, audioConfig);

 // Subscribes to viseme received event
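
The JavaScript context lines above subscribe to the viseme event. A slightly fuller sketch, assuming the module form of the SDK (the browser bundle exposes the same types on the SpeechSDK global) and placeholder credentials:

```javascript
// Sketch of the viseme subscription referenced in the diff context.
// visemeId maps to a facial pose; audioOffset is reported in ticks (100-nanosecond units).
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_SPEECH_KEY", "YOUR_SPEECH_REGION");
const synthesizer = new sdk.SpeechSynthesizer(speechConfig);

// Subscribe to the viseme received event before starting synthesis.
synthesizer.visemeReceived = (s, e) => {
  console.log(`Viseme ${e.visemeId} at offset ${e.audioOffset / 10000} ms`);
};

synthesizer.speakTextAsync(
  "Hello world",
  result => { synthesizer.close(); },
  error => { console.error(error); synthesizer.close(); }
);
```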

articles/ai-services/speech-service/how-to-speech-synthesis.md

Lines changed: 2 additions & 2 deletions
@@ -7,10 +7,10 @@ ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.custom: devx-track-python, devx-track-js, devx-track-csharp, mode-other, devx-track-extended-java, devx-track-go
 zone_pivot_groups: programming-languages-speech-services
-keywords: text to speech
+#Customer intent: As a developer, I want to learn how to synthesize speech from text so that I can convert text into spoken language.
 ---

 # How to synthesize speech from text
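
For the text to speech article touched here, a minimal synthesis sketch with the JavaScript Speech SDK; the voice name is an assumption and the key and region are placeholders:

```javascript
// Minimal sketch: synthesize a sentence to the default speaker output.
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_SPEECH_KEY", "YOUR_SPEECH_REGION");
speechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural"; // assumed voice name

const synthesizer = new sdk.SpeechSynthesizer(speechConfig);

synthesizer.speakTextAsync(
  "I'm excited to try text to speech.",
  result => {
    if (result.reason === sdk.ResultReason.SynthesizingAudioCompleted) {
      console.log("Synthesis finished.");
    }
    synthesizer.close();
  },
  error => { console.error(error); synthesizer.close(); }
);
```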

articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md

Lines changed: 2 additions & 1 deletion
@@ -7,10 +7,11 @@ ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.reviewer: rhurey
 ms.custom: devx-track-csharp, devx-track-extended-java, devx-track-python
 zone_pivot_groups: programming-languages-set-two
+#Customer intent: As a developer, I want to learn how to track memory usage in the Speech SDK so that I can manage resources effectively.
 ---

 # How to track Speech SDK memory usage

articles/ai-services/speech-service/how-to-translate-speech.md

Lines changed: 2 additions & 1 deletion
@@ -7,9 +7,10 @@ manager: nitinme
 ms.service: azure-ai-speech
 ms.custom: devx-track-extended-java, devx-track-go, devx-track-js, devx-track-python
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.author: eur
 zone_pivot_groups: programming-languages-speech-services
+#Customer intent: As a developer, I want to learn how to translate speech from one language to text in another language so that I can convert spoken language into text in a different language.
 ---

 # How to recognize and translate speech
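
For the speech translation article touched here, a minimal sketch that recognizes English speech and returns Italian text; the target language, key, and region are placeholders:

```javascript
// Sketch: recognize English speech from the microphone and translate it to Italian text.
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const translationConfig = sdk.SpeechTranslationConfig.fromSubscription("YOUR_SPEECH_KEY", "YOUR_SPEECH_REGION");
translationConfig.speechRecognitionLanguage = "en-US";
translationConfig.addTargetLanguage("it");

const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
const recognizer = new sdk.TranslationRecognizer(translationConfig, audioConfig);

recognizer.recognizeOnceAsync(result => {
  if (result.reason === sdk.ResultReason.TranslatedSpeech) {
    console.log(`Recognized: ${result.text}`);
    console.log(`Translated (it): ${result.translations.get("it")}`);
  }
  recognizer.close();
});
```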

articles/ai-services/speech-service/how-to-use-audio-input-streams.md

Lines changed: 2 additions & 1 deletion
@@ -6,10 +6,11 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.author: eur
 ms.devlang: csharp
 ms.custom: devx-track-csharp
+#Customer intent: As a developer, I want to learn how to use the audio input stream in the Speech SDK so that I can stream audio into the recognizer.
 ---

 # How to use the audio input stream
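
For the audio input stream article touched here, a minimal push-stream sketch in JavaScript (the article's own samples are C#); the raw PCM file path is a placeholder and the default push stream format (16 kHz, 16-bit, mono) is assumed:

```javascript
// Sketch: push raw PCM audio from a file into the recognizer through a push stream.
// Assumes 16 kHz, 16-bit, mono PCM without a WAV header; the file path is a placeholder.
import * as fs from "fs";
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const pushStream = sdk.AudioInputStream.createPushStream();

// Stream file contents into the push stream, then close it to signal end of audio.
fs.createReadStream("YOUR_AUDIO_FILE.raw")
  .on("data", chunk =>
    pushStream.write(chunk.buffer.slice(chunk.byteOffset, chunk.byteOffset + chunk.byteLength)))
  .on("end", () => pushStream.close());

const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_SPEECH_KEY", "YOUR_SPEECH_REGION");
const audioConfig = sdk.AudioConfig.fromStreamInput(pushStream);
const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

recognizer.recognizeOnceAsync(result => {
  console.log(result.text);
  recognizer.close();
});
```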

articles/ai-services/speech-service/how-to-use-codec-compressed-audio-input-streams.md

Lines changed: 2 additions & 1 deletion
@@ -7,9 +7,10 @@ ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.custom: devx-track-csharp, devx-track-extended-java, devx-track-go, devx-track-js, devx-track-python, linux-related-content
 zone_pivot_groups: programming-languages-speech-services
+#Customer intent: As a developer, I want to learn how to use compressed input audio in the Speech SDK so that I can convert audio files into text.
 ---

 # How to use compressed input audio

articles/ai-services/speech-service/how-to-use-custom-entity-pattern-matching.md

Lines changed: 6 additions & 4 deletions
@@ -2,14 +2,16 @@
 title: How to recognize intents with custom entity pattern matching
 titleSuffix: Azure AI services
 description: In this guide, you learn how to recognize intents and custom entities from simple patterns.
-author: chschrae
-manager: travisw
+author: eric-urban
+ms.author: eur
+manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
-ms.author: chschrae
+ms.date: 9/20/2024
+ms.reviewer: chschrae
 zone_pivot_groups: programming-languages-set-thirteen
 ms.custom: devx-track-cpp, devx-track-csharp, mode-other, devx-track-extended-java, linux-related-content
+#Customer intent: As a developer, I want to learn how to recognize intents and custom entities from simple patterns so that I can derive user intent from speech utterances.
 ---

 # How to recognize intents with custom entity pattern matching

articles/ai-services/speech-service/how-to-use-logging.md

Lines changed: 2 additions & 1 deletion
@@ -7,8 +7,9 @@ ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 9/20/2024
 ms.custom: devx-track-csharp, devx-track-extended-java, devx-track-python
+#Customer intent: As a developer, I want to learn how to enable logging in the Speech SDK so that I can get additional information and diagnostics from the Speech SDK's core components.
 ---

 # Enable logging in the Speech SDK
