articles/cognitive-services/Speech-Service/speech-scenarios.md
+3 −3 (3 additions & 3 deletions)
@@ -27,15 +27,15 @@ Many users want to enable voice input on their applications. Voice input is a gr
### Voice Triggered Apps with baseline models
-If your app is going to be used by the general public in environments where the background noise is not excessive, the easiest and fastest way to do this be simply downloading our [Speech SDK](speech-sdk.md) and following the relevant [Samples](quickstart-csharp-dotnet-windows.md). The SDK powered by your [Azure Subscription key](https://azure.microsoft.com/try/cognitive-services/) allows developers to upload audio to baseline speech recognition models that power Cortana and Skype. The mdoels are state of the art, and are used by the aforementioned products. You can be up and running in minutes.
+If your app is going to be used by the general public in environments where background noise is not excessive, the easiest and fastest approach is to download our [Speech SDK](speech-sdk.md) and follow the relevant [Samples](quickstart-csharp-dotnet-windows.md). The SDK, powered by your [Azure Subscription key](https://azure.microsoft.com/try/cognitive-services/), lets developers send audio to the baseline speech recognition models that power Cortana and Skype. The models are state of the art, and you can be up and running in minutes.
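The baseline recognition call above boils down to an authenticated HTTP request carrying audio. As a minimal sketch, the following builds such a request without sending it; the endpoint shape, region, and header names here are assumptions modeled on Cognitive Services REST conventions, not details taken from this article — the SDK in the linked samples handles this for you.

```python
# Hypothetical sketch of a one-shot speech-to-text REST request.
# Region, URL path, and header names are illustrative assumptions.

def build_recognition_request(region: str, subscription_key: str,
                              language: str = "en-US") -> dict:
    """Return the URL and headers for a single recognition call."""
    url = (f"https://{region}.stt.speech.microsoft.com/"
           "speech/recognition/conversation/cognitiveservices/v1"
           f"?language={language}")
    headers = {
        # Subscription key from your Azure account authorizes the call.
        "Ocp-Apim-Subscription-Key": subscription_key,
        # 16 kHz mono PCM WAV is a commonly accepted input format.
        "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    }
    return {"url": url, "headers": headers}

request = build_recognition_request("westus", "YOUR_KEY")
```

The audio bytes would be sent as the POST body of this request; the SDK wraps the same exchange behind a recognizer object.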
### Voice Triggered Apps with custom models
If your app addresses a specific domain (say chemistry, biology, or special dietary needs), you may want to consider adapting a [language model](how-to-customize-language-model.md). Adapting a language model teaches the decoder the most common phrases and words used by your app, so it can transcribe voice input for that domain more accurately than the baseline model. Similarly, if background noise is prominent where your app will be used, you may want to adapt an acoustic model. Explore the documentation for other cases in which [language adaptation](how-to-customize-language-model.md) and [acoustic adaptation](how-to-customize-acoustic-models.md) provide value, and visit our [adaptation portal](https://customspeech.ai) to kick-start the model creation experience. Like baseline models, custom models are called via our [Speech SDK](speech-sdk.md) by following the relevant [Samples](quickstart-csharp-dotnet-windows.md).
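Once a custom model is deployed from the adaptation portal, requests are pointed at its deployment instead of the baseline. As a sketch under assumptions: the `cid` query parameter and URL shape below are illustrative guesses at how a deployment ID might be passed, not confirmed details from this article — the portal shows the real endpoint for your deployment.

```python
# Hypothetical sketch: targeting a custom (adapted) model deployment.
# The `cid` parameter and endpoint shape are illustrative assumptions.

def build_custom_model_request(region: str, subscription_key: str,
                               endpoint_id: str) -> dict:
    """Return URL and headers addressing a custom model endpoint."""
    url = (f"https://{region}.stt.speech.microsoft.com/"
           "speech/recognition/conversation/cognitiveservices/v1"
           # The deployment (endpoint) ID selects the adapted model.
           f"?cid={endpoint_id}")
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    return {"url": url, "headers": headers}

req = build_custom_model_request("westus", "YOUR_KEY", "YOUR_ENDPOINT_ID")
```

Everything else — audio format, authentication — stays the same as the baseline call, which is why switching between baseline and custom models is mostly a configuration change.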
## Transcribe Call center audio calls
-Call centers accumulate large quantities of audio. Hidden within those audio files lies value that can be obtained though transcription. The duration of the call, the sentiment, the satisfaction of the customer and the general value the call provided to the caller can be discovered by obtaining call transcripts.
+Call centers accumulate large quantities of audio. Hidden within those audio files lies value that can be obtained through transcription. The duration of the call, the sentiment, the satisfaction of the customer, and the general value the call provided to the caller can all be discovered by obtaining call transcripts.
The best starting point is the [Batch transcription API](batch-transcription.md) along with related [Sample](https://github.com/PanosPeriorellis/Speech_Service-BatchTranscriptionAPI).
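Submitting a batch job amounts to POSTing a job description that points at your stored recordings. The sketch below builds such a request body; the endpoint path, API version, and property names are assumptions modeled on the linked batch transcription sample, so treat them as illustrative rather than authoritative.

```python
# Hypothetical sketch: describing a batch transcription job.
# Endpoint path, version, and field names are illustrative assumptions.
import json

def build_batch_transcription_job(region: str, subscription_key: str,
                                  recordings_url: str,
                                  locale: str = "en-US") -> dict:
    """Return URL, headers, and JSON body for creating a batch job."""
    url = f"https://{region}.cris.ai/api/speechtotext/v2.0/transcriptions"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        # SAS URL (or similar) pointing at the stored call recordings.
        "recordingsUrl": recordings_url,
        "locale": locale,
        "name": "call-center-batch",
    })
    return {"url": url, "headers": headers, "body": body}

job = build_batch_transcription_job("westus", "YOUR_KEY",
                                    "https://example.blob.core.windows.net/calls")
```

The service processes the job asynchronously; you poll the returned transcription resource until the transcript files are ready for download.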
@@ -51,7 +51,7 @@ If you plan to use a custom model, then you will need the ID of that model along
## Voice Bots
-Developer can empower their application with voice output. The Speech Service can synthetize speech for a number of [languages](supported-languages.md) and provides the [endpoints](rest-apis.md) for accessing and adding that capability to your app.
+Developers can empower their applications with voice output. The Speech Service can synthesize speech for a number of [languages](supported-languages.md) and provides the [endpoints](rest-apis.md) for accessing and adding that capability to your app.
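A synthesis request to those endpoints typically carries an SSML body naming the language and voice. As a minimal sketch: the endpoint shape, header names, output format string, and the voice name `en-US-JessaRUS` below are assumptions for illustration; the [endpoints](rest-apis.md) reference lists the real values.

```python
# Hypothetical sketch: a text-to-speech REST request with an SSML body.
# Endpoint, headers, output format, and voice name are assumptions.

def build_tts_request(region: str, token: str, text: str,
                      voice: str = "en-US-JessaRUS") -> dict:
    """Return URL, headers, and SSML body for a synthesis call."""
    url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"
    headers = {
        # TTS calls are commonly authorized with a bearer access token.
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/ssml+xml",
        # Desired audio output encoding for the synthesized speech.
        "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
    }
    ssml = (f"<speak version='1.0' xml:lang='en-US'>"
            f"<voice name='{voice}'>{text}</voice></speak>")
    return {"url": url, "headers": headers, "body": ssml}

req = build_tts_request("westus", "ACCESS_TOKEN", "Hello from my bot")
```

The response body of the POST would be the synthesized audio in the requested output format, ready to play back in the bot.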
In addition, for users who want to add more personality and uniqueness to their bots, the Speech Service enables developers to create a custom voice font. As with customizing speech recognition models, voice fonts require user data. Developers can upload that data in our [voice adaptation portal](https://customspeech.ai) and start building a unique brand of voice for their bot. Details are described [here](how-to-text-to-speech.md) as well as on the [FAQ](faq-text-to-speech.md) pages.