**articles/aks/virtual-kubelet.md** (3 additions, 0 deletions)
@@ -203,6 +203,9 @@ Use the [az aks remove-connector][aks-remove-connector] command to remove Virtua
az aks remove-connector --resource-group myAKSCluster --name myAKSCluster --connector-name virtual-kubelet
```

+> [!NOTE]
+> If you encounter errors removing both OS connectors, or want to remove just the Windows or Linux OS connector, you can manually specify the OS type. Add the `--os-type` parameter to the previous `az aks remove-connector` command, and specify `Windows` or `Linux`.
+
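A sketch of the workaround described in the note, reusing the resource group, cluster, and connector names from the command above (run these only against your own cluster; the names here are the article's example values):

```azurecli
# Illustrative only: remove just the Windows connector, then just the
# Linux connector, by specifying the OS type explicitly.
az aks remove-connector --resource-group myAKSCluster --name myAKSCluster --connector-name virtual-kubelet --os-type Windows
az aks remove-connector --resource-group myAKSCluster --name myAKSCluster --connector-name virtual-kubelet --os-type Linux
```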
## Next steps
For possible issues with the Virtual Kubelet, see the [Known quirks and workarounds][vk-troubleshooting]. To report problems with the Virtual Kubelet, [open a GitHub issue][vk-issues].
**articles/cognitive-services/LUIS/luis-tutorial-pattern-roles.md** (2 additions, 2 deletions)
@@ -50,7 +50,7 @@ In this tutorial, the Human Resources app detects utterances about moving new em
|[Hierarchical (no roles)](luis-quickstart-intent-and-hier-entity.md)|mv Jill Jones from **a-2349** to **b-1298**|a-2349, b-1298|
|This tutorial (with roles)|Move Billy Patterson from **Yuma** to **Denver**.|Yuma, Denver|

-You can't use the hierarchical entity in the pattern because only hierarchical parents are used in parents. In order to return the named locations of origin and destination, you muse use a pattern.
+You can't use the hierarchical entity in the pattern because only hierarchical parents are used in patterns. In order to return the named locations of origin and destination, you must use a pattern.
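To make this concrete, such a pattern template might look like the following (the entity and role names `NewEmployee`, `Location`, `Origin`, and `Destination` are assumptions based on this tutorial's scenario; LUIS pattern syntax references an entity's role as `{entity:role}`):

```
move {NewEmployee} from {Location:Origin} to {Location:Destination}
```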
### Simple entity for new employee name
The name of the new employee, Billy Patterson, is not part of the list entity **Employee** yet. The new employee name is extracted first, in order to send the name to an external system to create the company credentials. After the company credentials are created, the employee credentials are added to the list entity **Employee**.
@@ -385,4 +385,4 @@ The intent score is now much higher and the role names are part of the entity re
## Next steps
> [!div class="nextstepaction"]
-> [Learn best practices for LUIS apps](luis-concept-best-practices.md)
+> [Learn best practices for LUIS apps](luis-concept-best-practices.md)
**articles/cognitive-services/Speech-Service/overview.md** (53 additions, 24 deletions)
@@ -7,71 +7,100 @@ author: v-jerkin
ms.service: cognitive-services
ms.component: speech-service
-ms.topic: article
+ms.topic: overview
ms.date: 05/07/2018
ms.author: v-jerkin
---
-# What is the Speech service (preview)?
+# What is the Speech service?

-With one subscription, the Speech service gives developers an easy way to add powerful speech-enabled features to their applications. Your apps can now feature voice command, transcription, dictation, speech synthesis, and speech translation.
+The Speech service unites the Azure speech features previously available via the [Bing Speech API](https://docs.microsoft.com/azure/cognitive-services/speech/home), [Translator Speech](https://docs.microsoft.com/azure/cognitive-services/translator-speech/), [Custom Speech](https://docs.microsoft.com/azure/cognitive-services/custom-speech-service/cognitive-services-custom-speech-home), and [Custom Voice](http://customvoice.ai/) services. Now, one subscription provides access to all of these capabilities.

-The Speech service is powered by the technologies used in other Microsoft products, including Cortana and Microsoft Office.
+Like the other Azure speech services, the Speech service is powered by the proven speech technologies used in products like Cortana and Microsoft Office. You can count on the quality of the results and the reliability of the Azure cloud.

> [!NOTE]
-> The Speech service is currently in public preview. Return here regularly for updates to documentation, additional code samples, and more.
+> The Speech service is currently in public preview. Return here regularly for documentation updates, new code samples, and more.

-## Speech service features
+## Main Speech service functions

-|Function|Description|
+The primary functions of the Speech service are Speech to Text (also called speech recognition or transcription), Text to Speech (speech synthesis), and Speech Translation.
+
+|Function|Features|
|-|-|
-|[Speech-to-text](speech-to-text.md)| Transcribes audio streams into text that your application can accept as input. Also integrates with the [Language Understanding service](https://docs.microsoft.com/azure/cognitive-services/luis/) (LUIS) to derive user intent from utterances.|
-|[Text-to-speech](text-to-speech.md)| Converts plain text to natural-sounding speech, delivered to your application in an audio file. Multiple voices, varying in gender or accent, are available for many supported languages. |
-|[Speech-translation](speech-translation.md)| Can be used either to translate streaming audio in near-real-time or to process recorded speech. |
-|Custom speech-to-text|You can customize speech-to-text by creating your own [acoustic](how-to-customize-acoustic-models.md) and [language](how-to-customize-language-model.md) models and by specifying custom [pronunciation](how-to-customize-pronunciation.md) rules. |
-|[Custom text-to-speech](how-to-customize-voice-font.md)|You can create your own voices for text-to-speech.|
-|[Speech Devices SDK](speech-devices-sdk.md)| With the introduction of the unified Speech service, Microsoft and its partners offer an integrated hardware/software platform optimized for developing speech-enabled devices |
+|[Speech to Text](speech-to-text.md)| <ul><li>Transcribes continuous real-time speech into text.<li>Can batch-transcribe speech from audio recordings.<li>Offers recognition modes for interactive, conversation, and dictation use cases.<li>Supports intermediate results, end-of-speech detection, automatic text formatting, and profanity masking.<li>Can call on [Language Understanding](https://docs.microsoft.com/azure/cognitive-services/luis/) (LUIS) to derive user intent from transcribed speech.\*|
+|[Text to Speech](text-to-speech.md)| <ul><li>Converts text to natural-sounding speech.<li>Offers multiple genders and dialects for many supported languages.<li>Supports plain text input or Speech Synthesis Markup Language (SSML). |
+|[Speech Translation](speech-translation.md)| <ul><li>Translates streaming audio in near-real-time.<li>Can also process recorded speech.<li>Provides results as text or synthesized speech. |
+
+\**Intent recognition requires a LUIS subscription.*
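To illustrate the SSML input mentioned in the table above, a minimal request body might look like the following sketch (the voice name follows the service's standard voice naming of the time and is an assumption; check the Text to Speech documentation for the voices available to your subscription):

```xml
<!-- Minimal SSML sketch: plain text wrapped in a voice element. -->
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)">
    Welcome to the Speech service.
  </voice>
</speak>
```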
+
+## Customizing speech features
+
+The Speech service lets you use your own data to train the models underlying the Speech service's Speech to Text and Text to Speech features.
+
+|Feature|Model|Purpose|
+|-|-|-|
+|Speech to Text|[Acoustic model](how-to-customize-acoustic-models.md)|Helps transcribe particular speakers and environments, such as cars or factories|
+||[Language model](how-to-customize-language-model.md)|Helps transcribe field-specific vocabulary and grammar, such as medical or IT jargon|
+||[Pronunciation model](how-to-customize-pronunciation.md)|Helps transcribe abbreviations and acronyms, such as "IOU" for "i oh you" |
+|Text to Speech|[Voice font](how-to-customize-voice-font.md)|Gives your app a voice of its own by training the model on samples of human speech.|
+
+Once created, your custom models can be used anywhere you'd use the standard models in your app's Speech to Text or Text to Speech functionality.
+

-## Access to the Speech service
+## Using the Speech service

-The Speech service is made available in two ways. [The SDK](speech-sdk.md) abstracts away the details of the network protocols for easier development on supported platforms. The [REST API](rest-apis.md) works with any programming language, but does not offer all the functions offered by the SDK.
+To simplify the development of speech-enabled applications, Microsoft provides the [Speech SDK](speech-sdk.md) for use with the new Speech service. The Speech SDK provides consistent native Speech to Text and Speech Translation APIs for C#, C++, and Java. If you're developing with one of these languages, the Speech SDK makes development easier by handling the network details for you.
+
+The Speech service also has a [REST API](rest-apis.md) that works with any programming language that can make HTTP requests. The REST interface, however, does not offer the streaming, real-time functionality of the SDK.
-|[SDKs](speech-sdk.md)|Yes|No|Yes|Libraries for specific programming languages, utilize Websocket-based procotol that simplify development.|
-|[REST](rest-apis.md)|Yes|Yes|No|A simple HTTP-based API that makes it easy to add speech to your application.|
+|[Speech SDK](speech-sdk.md)|Yes|No|Yes|Native APIs for C#, C++, and Java to simplify development.|
+|[REST](rest-apis.md)|Yes|Yes|No|A simple HTTP-based API that makes it easy to add speech to your applications.|
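As a rough sketch of what calling the REST interface involves, the request below is assembled but not sent. The endpoint shape, header name, and query parameters follow the Speech service's REST documentation of the time; the region, subscription key, and audio payload are placeholders, so verify them against the REST API reference for your subscription.

```python
# Sketch: assemble (but do not send) a Speech to Text REST request.
region = "westus"                           # placeholder Azure region
subscription_key = "YOUR_SUBSCRIPTION_KEY"  # placeholder key

# One regional endpoint handles speech recognition over HTTPS.
endpoint = (
    f"https://{region}.stt.speech.microsoft.com/"
    "speech/recognition/conversation/cognitiveservices/v1"
)
params = {"language": "en-US", "format": "simple"}
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
}

# Sending would be a single POST of the WAV bytes, e.g. with the
# `requests` package:
# response = requests.post(endpoint, params=params, headers=headers, data=wav_bytes)
print(endpoint)
```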
+
+### WebSockets
+
+The Speech service also has WebSockets protocols for streaming Speech to Text and Speech Translation. The Speech SDKs use these protocols to communicate with the Speech service. You should use the Speech SDK rather than trying to implement your own WebSockets communication with the Speech service.
+
+If you already have code that uses Bing Speech or Translator Speech via WebSockets, though, it is straightforward to update it to use the Speech service. The WebSockets protocols are compatible; only the endpoints are different.
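To illustrate the point about endpoints, migrating WebSockets code is largely a matter of pointing it at the new host. The host names below are examples drawn from the respective services' documentation of the time and may change, so treat them as assumptions rather than authoritative values:

```python
# Sketch: the Bing Speech WebSockets host vs. the Speech service host.
# The protocol and path stay the same; only the host (which gains a
# region prefix) changes.
bing_speech_url = (
    "wss://speech.platform.bing.com"
    "/speech/recognition/interactive/cognitiveservices/v1"
)

region = "westus"  # placeholder region for the new Speech service
speech_service_url = bing_speech_url.replace(
    "speech.platform.bing.com", f"{region}.stt.speech.microsoft.com"
)
print(speech_service_url)
```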
+
+### Speech Devices SDK
+
+The [Speech Devices SDK](speech-devices-sdk.md) is an integrated hardware and software platform for developers of speech-enabled devices. Our hardware partner provides reference designs and development units. Microsoft provides a device-optimized SDK that takes full advantage of the hardware's capabilities.
## Speech scenarios

-A few common uses of speech technology are discussed briefly below. The [Speech SDK](speech-sdk.md) is central to most of these scenarios.
+Use cases for the Speech service include:
> [!div class="checklist"]
> * Create voice-triggered apps
> * Transcribe call center recordings
> * Implement voice bots

-### Voice-triggered apps
+### Voice user interface

-Voice input is a great way to make your app flexible, hands-free, and quick to use. In a voice-enabled app, users can just ask for the information they want rather than needing to navigate to it by clicking or tapping.
+Voice input is a great way to make your app flexible, hands-free, and quick to use. In a voice-enabled app, users can just ask for the information they want rather than needing to navigate to it.

-If your app is intended for use by the general public, you can use the baseline speech recognition model provided by the Speech service. It does a good job of recognizing a wide variety of speakers in typical environments.
+If your app is intended for use by the general public, you can use the default speech recognition models. They do a good job of recognizing a wide variety of speakers in common environments.
If your app will be used in a specific domain (for example, medicine or IT), you can create a [language model](how-to-customize-language-model.md) to teach the Speech service about the special terminology used by your app.
If your app will be used in a noisy environment, such as a factory, you can create a custom [acoustic model](how-to-customize-acoustic-models.md) to better allow the Speech service to distinguish speech from noise.
Getting started is as easy as downloading the [Speech SDK](speech-sdk.md) and following a relevant [Quickstart](quickstart-csharp-dotnet-windows.md) article.

-### Transcribe call center recordings
+### Call center transcription
Often, call center recordings are only consulted if an issue arises with a call. With the Speech service, it's easy to transcribe every recording to text. Once they're text, you can easily index them for [full-text search](https://docs.microsoft.com/azure/search/search-what-is-azure-search) or apply [Text Analytics](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/) to detect sentiment, language, and key phrases.

-If your call center recordings often contain specialized terminology (such as product names or IT jargon), you can create a [language model](how-to-customize-language-model.md) to teach the Speech service that vocabulary. A custom [acoustic model](how-to-customize-acoustic-models.md) can help the Speech service understand less-than-optimal phone connections.
+If your call center recordings revolve around specialized terminology (such as product names or IT jargon), you can create a [language model](how-to-customize-language-model.md) to teach the Speech service that vocabulary. A custom [acoustic model](how-to-customize-acoustic-models.md) can help the Speech service understand less-than-optimal phone connections.
For more information about this scenario, read more about [batch transcription](batch-transcription.md) with the Speech service.
### Voice bots

-[Bots](https://dev.botframework.com/) are an increasingly popular way of connecting users with the information they want, and customers with the businesses they love. Adding a conversational user interface to your Web site or app makes its functionality easier to find and quicker to access. With the Speech service, this conversation takes on a new dimension of fluency by actually responding to spoken queries with synthesized speech.
+[Bots](https://dev.botframework.com/) are an increasingly popular way of connecting users with the information they want, and customers with the businesses they love. Adding a conversational user interface to your Web site or app makes its functionality easier to find and quicker to access. With the Speech service, this conversation takes on a new dimension of fluency by responding to spoken queries in kind.
To add a unique personality to your voice-enabled bot (and strengthen your brand), you can give it a voice of its own. Creating a custom voice is a two-step process. First, you [make recordings](record-custom-voice-samples.md) of the voice you want to use. Then you [submit those recordings](how-to-customize-voice-font.md) (along with a text transcript) to the Speech service's [voice customization portal](https://cris.ai/Home/CustomVoice), which does the rest. Once you've created your custom voice, it's straightforward to use it in your app.