Applications such as voice agents listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the agent generated with [text to speech](text-to-speech.md). Devices connect to agents with the Speech SDK's `DialogServiceConnector` object.
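For illustration, here's a minimal C# sketch of wiring a device to an agent with `DialogServiceConnector`. The subscription key, region, and activity handling are placeholders, and it assumes a bot already registered with the Direct Line Speech channel:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Dialog;

class Program
{
    static async Task Main()
    {
        // Placeholder credentials: point these at your own Speech resource.
        var config = BotFrameworkConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        var audioConfig = AudioConfig.FromDefaultMicrophoneInput();

        using var connector = new DialogServiceConnector(config, audioConfig);

        // The agent's replies arrive as Bot Framework activities (JSON strings).
        connector.ActivityReceived += (s, e) =>
            Console.WriteLine($"Activity received (audio: {e.HasAudio}): {e.Activity}");

        await connector.ConnectAsync();

        // Listen for a single utterance; the transcription is sent to the agent.
        await connector.ListenOnceAsync();
    }
}
```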
Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively low complexity, helping you focus on building the best solution for your voice commanding scenarios.
The Azure AI services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
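As a hedged illustration of simple pattern matching (shown in C# for brevity; the patterns, intent IDs, key, and region are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

class Program
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var recognizer = new IntentRecognizer(config);

        // Simple language patterns: {floorName} captures whatever the user says in that slot.
        recognizer.AddIntent("Take me to floor {floorName}.", "ChangeFloors");
        recognizer.AddIntent("Close the {itemName}.", "CloseItem");

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Intent: {result.IntentId}, Text: {result.Text}");

        if (result.Entities.TryGetValue("floorName", out var floor))
        {
            Console.WriteLine($"Requested floor: {floor}");
        }
    }
}
```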
In this guide, you use the Speech SDK to develop a C++ console application that derives intents from user utterances through your device's microphone. You learn how to:
> [!IMPORTANT]
> Intent recognition in Azure AI Speech is being retired on September 30, 2025. Your applications won't be able to use intent recognition directly via Azure AI Speech after this date. However, you're still able to perform intent recognition using Azure AI Language Service or Azure OpenAI.
>
> This change doesn't affect other Azure AI Speech capabilities such as [speech to text](../speech-to-text.md) (including no change to speaker diarization), [text to speech](../text-to-speech.md), and [speech translation](../speech-translation.md).
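As a rough, non-authoritative sketch of that migration path, intent recognition with conversational language understanding (CLU) in Azure AI Language might look like this in C#, assuming the `Azure.AI.Language.Conversations` client library and a hypothetical project named `HomeAutomation` with a `production` deployment:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.Language.Conversations;
using Azure.Core;

class Program
{
    static async Task Main()
    {
        // Placeholder endpoint and key for an Azure AI Language resource.
        var client = new ConversationAnalysisClient(
            new Uri("https://your-language-resource.cognitiveservices.azure.com"),
            new AzureKeyCredential("YourLanguageKey"));

        var request = new
        {
            kind = "Conversation",
            analysisInput = new
            {
                conversationItem = new { id = "1", participantId = "1", text = "Turn on the lights" }
            },
            // Hypothetical CLU project and deployment names.
            parameters = new { projectName = "HomeAutomation", deploymentName = "production" }
        };

        Response response = await client.AnalyzeConversationAsync(RequestContent.Create(request));
        Console.WriteLine(response.Content.ToString());
    }
}
```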
The following release-note updates are from `articles/ai-services/speech-service/includes/release-notes/release-notes-sdk.md`.
SDK version 1.44.1 is being released for JavaScript only with 4 bug fixes:
> [!IMPORTANT]
> Support for target platforms is changing:
> * The minimum supported Android version is now Android 8.0 (API level 26).
> * The publishing of Speech SDK Unity packages is suspended after this release.
#### New features:
* Added support for Android 16 KB memory page sizes.
#### New features
- **Objective-C, Swift, and Python**: Added support for `DialogServiceConnector`, used for voice assistant scenarios.
- **Python**: Support for Python 3.10 was added. Support for Python 3.6 was removed, per Python's [end-of-life for 3.6](https://devguide.python.org/devcycle/#end-of-life-branches).
- **Unity**: Speech SDK is now supported for Unity applications on Linux.
- **C++, C#**: `IntentRecognizer` using pattern matching is now supported in C#. In addition, scenarios with custom entities, optional groups, and entity roles are now supported in C++ and C#.
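For illustration, a minimal C# sketch of those pattern-matching additions (custom entities and optional groups), with placeholder model ID, intent, and entity values:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

class Program
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var recognizer = new IntentRecognizer(config);

        var model = new PatternMatchingModel("ExampleModelId");

        // Optional groups in brackets, alternatives with "|"; {floorName} is an entity slot.
        model.Intents.Add(new PatternMatchingIntent(
            "ChangeFloors", "[Go | Take me] [up | down] to [the] floor {floorName}"));

        // A custom list entity restricts which values the slot accepts.
        model.Entities.Add(PatternMatchingEntity.CreateListEntity(
            "floorName", EntityMatchMode.Strict, "lobby", "ground", "one", "two"));

        var models = new LanguageUnderstandingModelCollection();
        models.Add(model);
        recognizer.ApplyLanguageModels(models);

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Intent: {result.IntentId}");
    }
}
```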
### Speech SDK 1.12.0: 2020-May release
#### New features
- **Go**: New Go language support for [Speech Recognition](../../get-started-speech-to-text.md?pivots=programming-language-go) and custom voice assistant. Set up your dev environment [here](../../quickstarts/setup-platform.md?pivots=programming-language-go). For sample code, see the Samples section below.
- **JavaScript**: Added browser support for text to speech. See documentation [here](../../get-started-text-to-speech.md?pivots=programming-language-JavaScript).
- **C++, C#, Java**: New `KeywordRecognizer` object and APIs supported on Windows, Android, Linux & iOS platforms. Read the documentation [here](../../keyword-recognition-overview.md). For sample code, see the Samples section below and the sketch after this list.
- **Java**: Added multi-device conversation with translation support. See the reference doc [here](/java/api/com.microsoft.cognitiveservices.speech.transcription).
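To illustrate the `KeywordRecognizer` item above, a minimal C# sketch; the `keyword.table` model path is a placeholder for a keyword model you've trained:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Program
{
    static async Task Main()
    {
        // A keyword model trained offline; the file path is a placeholder.
        var model = KeywordRecognitionModel.FromFile("keyword.table");

        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new KeywordRecognizer(audioConfig);

        // Completes when the keyword is detected in the microphone stream.
        KeywordRecognitionResult result = await recognizer.RecognizeOnceAsync(model);
        Console.WriteLine($"Keyword detected: {result.Text}");
    }
}
```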
- Fixed memory leaks in the keyword recognizer engine.
#### Samples
- **Go**: Added quickstarts for [speech recognition](../../get-started-speech-to-text.md?pivots=programming-language-go) and custom voice assistant. Find sample code [here](https://github.com/microsoft/cognitive-services-speech-sdk-go/tree/master/samples).
- **JavaScript**: Added quickstarts for [Text to speech](../../get-started-text-to-speech.md?pivots=programming-language-javascript), [Translation](../../get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script), and [Intent Recognition](../../get-started-intent-recognition.md?pivots=programming-language-javascript).
- Keyword recognition samples for [C\#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer) (Android).
In this overview, you learn about the benefits and capabilities of intent recognition. An intent is something the user wants to do: book a flight, check the weather, or make a call. With intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options that you define in the intent recognizer or conversational language understanding (CLU) model.