
Commit f761028

Merge pull request #261034 from valindrae/main

Update recognize-action.md

2 parents: 6cc2ca6 + e03dc96

File tree

1 file changed: +1 −1 lines changed

articles/communication-services/concepts/call-automation/recognize-action.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.author: kpunjabi
 
 With the release of Azure Communication Services Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common scenarios of recognition is playing a message for the user, which prompts them to provide a response that then gets recognized by the application, once recognized the application then carries out a corresponding action. Input from callers can be received in several ways, which include DTMF (user input via the digits on their calling device), speech or a combination of both DTMF and speech.
 
-**Voice recognition with speech-to-text (Public Preview)**
+**Voice recognition with speech-to-text**
 
 [Azure Communications services integration with Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md), allows you through the Recognize action to analyze audio in real-time to transcribe spoken word into text. Out of the box Microsoft utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This model is pretrained with dialects and phonetics representing various common domains. For more information about supported languages, see [Languages and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
 
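The paragraph quoted in this diff describes the typical Recognize flow: play a prompt, recognize the caller's input (DTMF, speech, or both), then carry out a corresponding action. A minimal sketch of that last routing step, independent of any SDK; the menu entries and function names here are illustrative assumptions, not part of the Call Automation API:

```python
# Illustrative routing step: once the Recognize action returns the caller's
# input (DTMF tones and/or transcribed speech), the application maps it to a
# corresponding action. The menu below is a hypothetical example.

MENU = {
    "1": "billing",
    "2": "support",
    "billing": "billing",
    "support": "support",
}

def route_recognized_input(dtmf_tones=None, speech_text=None, default="operator"):
    """Map recognized DTMF tones or transcribed speech to a menu action."""
    if dtmf_tones:
        # e.g. ["1"] collected from the caller's keypad
        key = "".join(dtmf_tones)
        if key in MENU:
            return MENU[key]
    if speech_text:
        # e.g. "Billing, please" produced by speech-to-text recognition
        for word in speech_text.lower().split():
            token = word.strip(",.!?")
            if token in MENU:
                return MENU[token]
    # Fall back when nothing usable was recognized
    return default
```

An application would invoke something like this from its recognize-completed event handler and then act on the result, for example by transferring the call or playing a follow-up prompt.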

0 commit comments
