Commit 4391aaf

Merge pull request #5961 from MicrosoftDocs/main
Auto Publish – main to live - 2025-07-10 11:00 UTC
2 parents: cf81e77 + a445e0a

18 files changed: 136 additions, 163 deletions

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 5 additions & 0 deletions
@@ -1205,6 +1205,11 @@
       "redirect_url": "/azure/ai-services/speech-service/multi-device-conversation",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/voice-assistants.md",
+      "redirect_url": "/azure/ai-services/speech-service/voice-live",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/content-understanding/concepts/capabilities.md",
       "redirect_url": "/azure/ai-services/content-understanding/concepts/analyzer-templates",

articles/ai-services/speech-service/custom-commands.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ ms.custom: cogserv-non-critical-speech

 [!INCLUDE [deprecation notice](./includes/custom-commands-retire.md)]

-Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object.
+Applications such as voice agents listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the agent generated with [text to speech](text-to-speech.md). Devices connect to agents with the Speech SDK's `DialogServiceConnector` object.

 Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity. Custom Commands helps you focus on building the best solution for your voice commanding scenarios.
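The changed paragraph describes a listen–understand–act–speak loop. A minimal sketch of that flow, with hypothetical stub functions standing in for the Speech SDK calls (this is an illustration of the pipeline, not the `DialogServiceConnector` API):

```python
# Illustrative sketch of the voice-agent loop described above:
# speech to text -> interpret intent -> act -> text to speech.
# transcribe(), interpret(), act(), and synthesize() are hypothetical
# stubs, not Speech SDK functions.

def transcribe(audio: bytes) -> str:
    # Stand-in for speech to text; a real app calls the Speech SDK here.
    return "turn on the lights"

def interpret(text: str) -> str:
    # Stand-in for natural language understanding of the transcript.
    return "TurnOnLights" if "lights" in text else "None"

def act(intent: str) -> str:
    # Take the action and produce the spoken reply for the user.
    return "Okay, lights on." if intent == "TurnOnLights" else "Sorry, I didn't get that."

def synthesize(reply: str) -> bytes:
    # Stand-in for text to speech.
    return reply.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    # One full turn of the conversation: audio in, spoken reply out.
    return synthesize(act(interpret(transcribe(audio))))
```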

articles/ai-services/speech-service/get-started-intent-recognition-clu.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,8 @@ keywords: intent recognition

 # Quickstart: Recognize intents with Conversational Language Understanding

+[!INCLUDE [deprecation notice](./includes/intent-recognition-retire.md)]
+
 ::: zone pivot="programming-language-csharp"
 [!INCLUDE [C# include](includes/quickstarts/intent-recognition-clu/csharp.md)]
 ::: zone-end

articles/ai-services/speech-service/get-started-intent-recognition.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ ms.topic: quickstart
 ms.date: 3/10/2025
 ms.author: eur
 zone_pivot_groups: programming-languages-speech-services
-keywords: intent recognition
+keywords: intent recognition, deprecation
 ---

 # Quickstart: Recognize intents with the Speech service and LUIS

articles/ai-services/speech-service/how-to-use-custom-entity-pattern-matching.md

Lines changed: 2 additions & 0 deletions
@@ -16,6 +16,8 @@ ms.custom: devx-track-cpp, devx-track-csharp, mode-other, devx-track-extended-ja

 # How to recognize intents with custom entity pattern matching

+[!INCLUDE [deprecation notice](./includes/intent-recognition-retire.md)]
+
 The Azure AI services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.

 In this guide, you use the Speech SDK to develop a console application that derives intents from speech utterances spoken through your device's microphone. You learn how to:
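The idea behind entity pattern matching can be sketched in a few lines: a pattern with an `{entity}` placeholder is matched against the utterance, and the placeholder captures the spoken value. This is a conceptual illustration only, not the Speech SDK API; the intent id `ChangeFloors` and entity name `floorName` are example names:

```python
import re

# Conceptual sketch of intent recognition via pattern matching with
# entities. Not the Speech SDK API: patterns, intent ids, and entity
# names here are illustrative.

def match_intent(pattern: str, utterance: str):
    # Compile "{entity}" placeholders into named capture groups, then
    # require the whole utterance to match the pattern.
    parts = re.split(r"(\{\w+\})", pattern)
    regex = "".join(
        rf"(?P<{p[1:-1]}>.+)" if p.startswith("{") and p.endswith("}") else re.escape(p)
        for p in parts
    )
    m = re.fullmatch(regex, utterance, re.IGNORECASE)
    return m.groupdict() if m else None

def recognize(intents: dict, utterance: str):
    # Return the first (intent id, entities) whose pattern matches.
    for intent_id, patterns in intents.items():
        for pattern in patterns:
            entities = match_intent(pattern, utterance)
            if entities is not None:
                return intent_id, entities
    return None, {}

intents = {"ChangeFloors": ["take me to floor {floorName}", "go to floor {floorName}"]}
```

With this sketch, `recognize(intents, "Take me to floor seven")` returns the intent id together with `{"floorName": "seven"}`, which mirrors how a matched pattern yields both an intent and its entity values.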

articles/ai-services/speech-service/how-to-use-simple-language-pattern-matching.md

Lines changed: 2 additions & 0 deletions
@@ -16,6 +16,8 @@ ms.custom: devx-track-cpp, devx-track-csharp, mode-other, devx-track-extended-ja

 # How to recognize intents with simple language pattern matching

+[!INCLUDE [deprecation notice](./includes/intent-recognition-retire.md)]
+
 The Azure AI services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.

 In this guide, you use the Speech SDK to develop a C++ console application that derives intents from user utterances through your device's microphone. You learn how to:
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+---
+author: goergenj
+manager: nitinme
+ms.service: azure-ai-speech
+ms.topic: include
+ms.date: 07/09/2025
+ms.author: jagoerge
+---
+
+> [!IMPORTANT]
+> Intent recognition in Azure AI Speech is being retired on September 30, 2025. Your applications won't be able to use intent recognition directly via Azure AI Speech after this date. However, you can still perform intent recognition using the Azure AI Language service or Azure OpenAI.
+>
+> This change doesn't affect other Azure AI Speech capabilities such as [speech to text](../speech-to-text.md) (including no change to speaker diarization), [text to speech](../text-to-speech.md), and [speech translation](../speech-translation.md).
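The notice points migrating applications toward Azure AI Language. A sketch of the conversational language understanding (CLU) request body such a migration involves, assuming the "Conversation" analysis task shape; `HomeAutomation` and `production` are hypothetical project and deployment names:

```python
# Sketch of a CLU analyze-conversation request body, the Azure AI
# Language feature the retirement notice points toward. The task shape
# is assumed from the CLU "Conversation" kind; project and deployment
# names are hypothetical placeholders.

def build_clu_request(utterance: str, participant_id: str = "user1") -> dict:
    return {
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": participant_id,
                "text": utterance,
            }
        },
        "parameters": {
            "projectName": "HomeAutomation",  # hypothetical CLU project
            "deploymentName": "production",   # hypothetical deployment
        },
    }
```

A client such as `ConversationAnalysisClient` from the `azure-ai-language-conversations` package would send a body like this; the returned prediction contains the top intent and any extracted entities.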

articles/ai-services/speech-service/includes/release-notes/release-notes-sdk.md

Lines changed: 4 additions & 4 deletions
@@ -23,7 +23,7 @@ SDK version 1.44.1 is being released for JavaScript only with 4 bug fixes:
 > [!IMPORTANT]
 > Support for target platforms is changing:
 > * The minimum supported Android version is now Android 8.0 (API level 26).
-> * The publishing of Speech SDK Unity packages are suspended after this release.
+> * The publishing of Speech SDK Unity packages is suspended after this release.

 #### New features:
 * Added support for Android 16 KB memory page sizes.

@@ -724,7 +724,7 @@ This table shows the previous and new object names for real-time diarization and

 #### New features

-- **Objective-C, Swift, and Python**: Added support for DialogServiceConnector, used for [Voice-Assistant scenarios](../../voice-assistants.md).
+- **Objective-C, Swift, and Python**: Added support for DialogServiceConnector, used for voice assistant scenarios.
 - **Python**: Support for Python 3.10 was added. Support for Python 3.6 was removed, per Python's [end-of-life for 3.6](https://devguide.python.org/devcycle/#end-of-life-branches).
 - **Unity**: Speech SDK is now supported for Unity applications on Linux.
 - **C++, C#**: IntentRecognizer using pattern matching is now supported in C#. In addition, scenarios with custom entities, optional groups, and entity roles are now supported in C++ and C#.

@@ -1102,7 +1102,7 @@ Stay healthy!
 ### Speech SDK 1.12.0: 2020-May release

 #### New features
-- **Go**: New Go language support for [Speech Recognition](../../get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](../../quickstarts/voice-assistants.md?pivots=programming-language-go). Set up your dev environment [here](../../quickstarts/setup-platform.md?pivots=programming-language-go). For sample code, see the Samples section below.
+- **Go**: New Go language support for [Speech Recognition](../../get-started-speech-to-text.md?pivots=programming-language-go) and custom voice assistant. Set up your dev environment [here](../../quickstarts/setup-platform.md?pivots=programming-language-go). For sample code, see the Samples section below.
 - **JavaScript**: Added Browser support for text to speech. See documentation [here](../../get-started-text-to-speech.md?pivots=programming-language-JavaScript).
 - **C++, C#, Java**: New `KeywordRecognizer` object and APIs supported on Windows, Android, Linux & iOS platforms. Read the documentation [here](../../keyword-recognition-overview.md). For sample code, see the Samples section below.
 - **Java**: Added multi-device conversation with translation support. See the reference doc [here](/java/api/com.microsoft.cognitiveservices.speech.transcription).

@@ -1124,7 +1124,7 @@ Stay healthy!
 - Fixed memory leaks in the keyword recognizer engine.

 #### Samples
-- **Go**: Added quickstarts for [speech recognition](../../get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](../../quickstarts/voice-assistants.md?pivots=programming-language-go). Find sample code [here](https://github.com/microsoft/cognitive-services-speech-sdk-go/tree/master/samples).
+- **Go**: Added quickstarts for [speech recognition](../../get-started-speech-to-text.md?pivots=programming-language-go) and custom voice assistant. Find sample code [here](https://github.com/microsoft/cognitive-services-speech-sdk-go/tree/master/samples).
 - **JavaScript**: Added quickstarts for [Text to speech](../../get-started-text-to-speech.md?pivots=programming-language-javascript), [Translation](../../get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script), and [Intent Recognition](../../get-started-intent-recognition.md?pivots=programming-language-javascript).
 - Keyword recognition samples for [C\#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer) (Android).
articles/ai-services/speech-service/intent-recognition.md

Lines changed: 2 additions & 0 deletions
@@ -13,6 +13,8 @@ ms.date: 3/10/2025

 # What is intent recognition?

+[!INCLUDE [deprecation notice](./includes/intent-recognition-retire.md)]
+
 In this overview, you learn about the benefits and capabilities of intent recognition. An intent is something the user wants to do: book a flight, check the weather, or make a call. With intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options. You define user intent in the intent recognizer or conversational language understanding (CLU) model.

 ## Pattern matching

articles/ai-services/speech-service/keyword-recognition-guidelines.md

Lines changed: 1 addition & 1 deletion
@@ -47,4 +47,4 @@ For applications that require latency optimization, applications can provide lig
 ## Next steps

 * [Get the Speech SDK.](speech-sdk.md)
-* [Learn more about Voice Assistants.](voice-assistants.md)
+* [Learn more about voice agents](voice-live.md)
