Commit cf9126a

Merge pull request #48170 from wolfma61/wolfma/gracespelling
fixed spelling errors

2 parents: b6644f7 + 5680967

File tree

1 file changed: +3 -3 lines changed


articles/cognitive-services/Speech-Service/speech-scenarios.md

Lines changed: 3 additions & 3 deletions
@@ -27,15 +27,15 @@ Many users want to enable voice input on their applications. Voice input is a gr
 
 ### Voice Triggered Apps with baseline models
 
-If your app is going to be used by the general public in environments where the background noise is not excessive, the easiest and fastest way to do this be simply downloading our [Speech SDK](speech-sdk.md) and following the relevant [Samples](quickstart-csharp-dotnet-windows.md). The SDK powered by your [Azure Subscription key](https://azure.microsoft.com/try/cognitive-services/) allows developers to upload audio to baseline speech recognition models that power Cortana and Skype. The mdoels are state of the art, and are used by the aforementioned products. You can be up and running in minutes.
+If your app is going to be used by the general public in environments where the background noise is not excessive, the easiest and fastest way to do this be simply downloading our [Speech SDK](speech-sdk.md) and following the relevant [Samples](quickstart-csharp-dotnet-windows.md). The SDK powered by your [Azure Subscription key](https://azure.microsoft.com/try/cognitive-services/) allows developers to upload audio to baseline speech recognition models that power Cortana and Skype. The models are state of the art, and are used by the aforementioned products. You can be up and running in minutes.
 
 ### Voice Triggered Apps with custom models
 
 If your app addresses a specific domain, (say chemistry, biology or special dietary needs) then you may want to consider to adapt a [language model](how-to-customize-language-model.md). Adapting a language model will teach the decoder about the most common phrases and words used by your app. The decoder will be able to more accurately transcribe a voice input with a custom language model for a particular domain rather than the baseline model. Similarly if the background noise where your app is going to be used is prominent you may want to adapt an acoustic model. Explore the documentation for other cases under which [language adaptation](how-to-customize-language-model.md) and [acoustic adaptation](how-to-customize-acoustic-models.md) provide value and visit our [adaptation portal](https://customspeech.ai) for kick-starting the model creation experience. Similar to baseline models, custom models are called via our [Speech SDK](speech-sdk.md) and following the relevant [Samples](quickstart-csharp-dotnet-windows.md).
 
 ## Transcribe Call center audio calls
 
-Call centers accumulate large quantities of audio. Hidden within those audio files lies value that can be obtained though transcription. The duration of the call, the sentiment, the satisfaction of the customer and the general value the call provided to the caller can be discovered by obtaining call transcripts.
+Call centers accumulate large quantities of audio. Hidden within those audio files lies value that can be obtained through transcription. The duration of the call, the sentiment, the satisfaction of the customer and the general value the call provided to the caller can be discovered by obtaining call transcripts.
 
 The best starting point is the [Batch transcription API](batch-transcription.md) along with related [Sample](https://github.com/PanosPeriorellis/Speech_Service-BatchTranscriptionAPI).
 
@@ -51,7 +51,7 @@ If you plan to use a custom model, then you will need the ID of that model along
 
 ## Voice Bots
 
-Developer can empower their application with voice output. The Speech Service can synthetize speech for a number of [languages](supported-languages.md) and provides the [endpoints](rest-apis.md) for accessing and adding that capability to your app.
+Developers can empower their applications with voice output. The Speech Service can synthetize speech for a number of [languages](supported-languages.md) and provides the [endpoints](rest-apis.md) for accessing and adding that capability to your app.
 
 In addition, for users that want to add more personality and uniqueness to their bots, the Speech Service enables developers to customize a unique voice font. Similar to customizing speech recognition models voice fonts require user data. Developers are upload that data in our [voice adaptation portal](https://customspeech.ai) and start building your unique brand of voice for your bot. Details are described [here](how-to-text-to-speech.md) as well as the [FAQ](faq-text-to-speech.md) pages
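The "Voice Bots" passage touched by this commit points readers at the Speech Service REST endpoints for speech synthesis. As a rough, unofficial sketch of what a call to that capability looks like: the endpoint shape, header names, and output format follow the public Speech Service REST documentation, while the region, key, and voice name used below are placeholder assumptions, not values taken from this commit. The sketch only builds the request; actually sending it is left to the caller.

```python
# Hedged sketch of a text-to-speech REST request for the Speech Service.
# Endpoint/header shapes are from the public REST docs; the region, key,
# and voice values are illustrative placeholders (assumptions).

def build_tts_request(region, subscription_key, text,
                      voice="Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)"):
    """Return (url, headers, ssml_body) for a synthesis call, without sending it."""
    url = "https://{}.tts.speech.microsoft.com/cognitiveservices/v1".format(region)
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,   # your Azure Subscription key
        "Content-Type": "application/ssml+xml",          # body is SSML
        "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
    }
    # Minimal SSML wrapper around the text to be spoken.
    body = (
        "<speak version='1.0' xml:lang='en-US'>"
        "<voice xml:lang='en-US' name='{}'>{}</voice>"
        "</speak>"
    ).format(voice, text)
    return url, headers, body

url, headers, body = build_tts_request("westus", "YOUR_KEY", "Hello from the Speech Service")
```

An HTTP POST of `body` to `url` with those headers would return the synthesized audio in the requested output format, assuming a valid key and region.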

0 commit comments
