Commit 2f029db

Merge pull request #3045 from eric-urban/eur/aiservices-kind
[SCOPED] Use AI services resource for Speech terminology
2 parents f9b1280 + d3ecbd3

54 files changed: +95 -97 lines changed

Some changed files are hidden in this large commit.

articles/ai-services/speech-service/batch-transcription-create.md

Lines changed: 1 addition & 1 deletion
@@ -278,7 +278,7 @@ curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/mod
 ::: zone-end

 ::: zone pivot="speech-cli"
-Make sure that you set the [configuration variables](spx-basics.md#create-a-resource-configuration) for a Speech resource in one of the supported regions. You can run the `spx csr list --base` command to get available base models for all locales.
+Make sure that you set the [configuration variables](spx-basics.md#create-a-resource-configuration) for an AI Services resource for Speech in one of the supported regions. You can run the `spx csr list --base` command to get available base models for all locales.

 ```azurecli
 spx csr list --base --api-version v3.2
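
The configuration variables mentioned in that hunk are typically set with the Speech CLI's `spx config` command. A minimal sketch, not part of this commit; the key and region values are placeholders:

```azurecli
spx config @key --set YOUR-RESOURCE-KEY
spx config @region --set eastus
```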

articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ Bring your own storage (BYOS) can be used in the following speech to text scenar
 - Real-time transcription with audio and transcription results logging enabled
 - Custom speech

-One pair of a Speech resource and storage account can be used for all scenarios simultaneously.
+One pair of an AI Services resource for Speech and storage account can be used for all scenarios simultaneously.

 This article explains in depth how to use a BYOS-enabled Speech resource in all speech to text scenarios. The article implies that you have [a fully configured BYOS-enabled Speech resource and associated Storage account](bring-your-own-storage-speech-resource.md).

articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md

Lines changed: 1 addition & 1 deletion
@@ -292,7 +292,7 @@ You can always check, whether any given Speech resource is BYOS enabled, and wha

 # [Azure portal](#tab/portal)

-To check BYOS configuration of a Speech resource with Azure portal, you need to access some portal preview features. Perform the following steps:
+To check BYOS configuration of an AI Services resource for Speech with Azure portal, you need to access some portal preview features. Perform the following steps:

 1. Navigate to *Create Speech* page using [this link](https://ms.portal.azure.com/?feature.enablecsumi=true&feature.enablecsstoragemenu=true&microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_byospreview#create/Microsoft.CognitiveServicesSpeechServices).
 1. Close *Create Speech* screen by pressing *X* in the right upper corner.
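
Outside the portal, a BYOS association surfaces in the resource's `userOwnedStorage` property, so the same check can be sketched with the Azure CLI (not part of this commit; the resource name and group are placeholders):

```azurecli
az cognitiveservices account show \
    --name YOUR-RESOURCE-NAME \
    --resource-group YOUR-RESOURCE-GROUP \
    --query properties.userOwnedStorage
```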

articles/ai-services/speech-service/custom-commands.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Good candidates for Custom Commands have a fixed vocabulary with well-defined se

 ## Getting started with Custom Commands

-Our goal with Custom Commands is to reduce your cognitive load to learn all the different technologies and focus building your voice commanding app. First step for using Custom Commands to <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">create a Speech resource</a>. You can author your Custom Commands app on the Speech Studio and publish it, after which an on-device application can communicate with it using the Speech SDK.
+Our goal with Custom Commands is to reduce your cognitive load to learn all the different technologies and focus building your voice commanding app. First step for using Custom Commands to <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAIServices" target="_blank">create an AI Services resource for Speech</a>. You can author your Custom Commands app on the Speech Studio and publish it, after which an on-device application can communicate with it using the Speech SDK.

 #### Authoring flow for Custom Commands
 ![Authoring flow for Custom Commands](media/voice-assistants/custom-commands-flow.png "The Custom Commands authoring flow")
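
On the device side, the Speech SDK reaches a published Custom Commands app through the dialog service connector. A minimal C# sketch, illustrative only (the app ID, key, and region are placeholders):

```csharp
using Microsoft.CognitiveServices.Speech.Dialog;

// Connect to a published Custom Commands app (placeholder values).
var config = CustomCommandsConfig.FromSubscription(
    "YOUR-CUSTOM-COMMANDS-APP-ID", "YOUR-RESOURCE-KEY", "YOUR-REGION");
var connector = new DialogServiceConnector(config);
await connector.ConnectAsync(); // inside an async method
```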

articles/ai-services/speech-service/custom-speech-overview.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ With custom speech, you can upload your own data, test and train a custom model,

 Here's more information about the sequence of steps shown in the previous diagram:

-1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. For more information, see footnotes in the [regions](regions.md#regions) table.
+1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAIServices" title="Create an AI Services resource for Speech" target="_blank">Speech resource</a> that you create in the Azure portal. If you train a custom model with audio data, choose an AI Services resource for Speech region with dedicated hardware for training audio data. For more information, see footnotes in the [regions](regions.md#regions) table.
 1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the speech to text offering for your applications, tools, and products.
 1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
 1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if more training is required.
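
For reference, the WER metric named in the last step is conventionally computed from the substitutions $S$, deletions $D$, and insertions $I$ found when aligning the hypothesis against a reference transcript of $N$ words:

$$\mathrm{WER} = \frac{S + D + I}{N} \times 100\%$$

For example, 2 substitutions, 1 deletion, and 1 insertion against a 40-word reference give a WER of 10%.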

articles/ai-services/speech-service/direct-line-speech.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ Direct Line Speech supports these locales: `ar-eg`, `ar-sa`, `ca-es`, `da-dk`, `

 ## Getting started with Direct Line Speech

-To create a voice assistant using Direct Line Speech, create a Speech resource and Azure Bot resource in the [Azure portal](https://portal.azure.com). Then [connect the bot](/azure/bot-service/bot-service-channel-connect-directlinespeech) to the Direct Line Speech channel.
+To create a voice assistant using Direct Line Speech, create an AI Services resource for Speech and Azure Bot resource in the [Azure portal](https://portal.azure.com). Then [connect the bot](/azure/bot-service/bot-service-channel-connect-directlinespeech) to the Direct Line Speech channel.

 ![Conceptual diagram of the Direct Line Speech orchestration service flow](media/voice-assistants/overview-directlinespeech.png "The Speech Channel flow")
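
The resource-creation step can also be scripted. A hedged Azure CLI sketch of creating the multi-service `AIServices` kind this PR standardizes on (name, group, and region are placeholders):

```azurecli
az cognitiveservices account create \
    --name my-ai-services-resource \
    --resource-group my-resource-group \
    --kind AIServices \
    --sku S0 \
    --location eastus
```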

articles/ai-services/speech-service/embedded-speech.md

Lines changed: 1 addition & 1 deletion
@@ -166,7 +166,7 @@ All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, P

 ## Embedded speech configuration

-For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you download to your local device.
+For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with an API key and region. For embedded speech, you don't use an AI Services resource for Speech. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you download to your local device.

 Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech to text and text to speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices.
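
To make the contrast concrete, a minimal C# sketch of the two configuration styles (key, region, and model path are placeholders; assumes a Speech SDK build with embedded speech support):

```csharp
using Microsoft.CognitiveServices.Speech;

// Cloud-connected: configure with an API key and region (placeholders).
var cloudConfig = SpeechConfig.FromSubscription("YOUR-API-KEY", "YOUR-REGION");

// Embedded: configure from locally downloaded models instead of a cloud resource.
var embeddedConfig = EmbeddedSpeechConfig.FromPath("path/to/local/models");
```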

articles/ai-services/speech-service/faq-stt.yml

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ sections:
   - question: |
       Where do I start if I want to use a base model?
     answer: |
-      First, get a Speech resource key and region in the [Azure portal](https://portal.azure.com). If you want to make REST calls to a predeployed base model, see the [REST APIs](rest-speech-to-text.md) documentation. If you want to use WebSockets, [download the Speech SDK](speech-sdk.md).
+      First, get an API key and region in the [Azure portal](https://portal.azure.com). If you want to make REST calls to a predeployed base model, see the [REST APIs](rest-speech-to-text.md) documentation. If you want to use WebSockets, [download the Speech SDK](speech-sdk.md).

   - question: |
       Do I always need to build a custom speech model?
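
As an illustration of the REST route mentioned in that answer, a hedged curl sketch against the short-audio speech to text endpoint (region, key, and audio file are placeholders; other REST APIs cover batch and custom scenarios):

```bash
curl -X POST "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US" \
  -H "Ocp-Apim-Subscription-Key: YOUR-API-KEY" \
  -H "Content-Type: audio/wav" \
  --data-binary @sample.wav
```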

articles/ai-services/speech-service/how-to-configure-azure-ad-auth.md

Lines changed: 5 additions & 5 deletions
@@ -14,21 +14,21 @@ ms.custom: devx-track-azurepowershell, devx-track-extended-java, devx-track-pyth

 # Microsoft Entra authentication with the Speech SDK

-When using the Speech SDK to access the Speech service, there are three authentication methods available: service keys, a key-based token, and Microsoft Entra ID. This article describes how to configure a Speech resource and create a Speech SDK configuration object to use Microsoft Entra ID for authentication.
+When using the Speech SDK to access the Speech service, there are three authentication methods available: service keys, a key-based token, and Microsoft Entra ID. This article describes how to configure an AI Services resource for Speech and create a Speech SDK configuration object to use Microsoft Entra ID for authentication.

 This article shows how to use Microsoft Entra authentication with the Speech SDK. You learn how to:

 > [!div class="checklist"]
 >
-> - Create a Speech resource
+> - Create an AI Services resource for Speech
 > - Configure the Speech resource for Microsoft Entra authentication
 > - Get a Microsoft Entra access token
 > - Create the appropriate SDK configuration object.

 To learn more about Microsoft Entra access tokens, including token lifetime, visit [Access tokens in the Microsoft identity platform](/azure/active-directory/develop/access-tokens).

-## Create a Speech resource
-To create a Speech resource in the [Azure portal](https://portal.azure.com), see [this quickstart](~/articles/ai-services/multi-service-resource.md?pivots=azportal).
+## Create an AI Services resource for Speech
+To create an AI Services resource for Speech in the [Azure portal](https://portal.azure.com), see [this quickstart](~/articles/ai-services/multi-service-resource.md?pivots=azportal).

 <a name='configure-the-speech-resource-for-azure-ad-authentication'></a>

@@ -128,7 +128,7 @@ You need your Speech resource ID to make SDK calls using Microsoft Entra authent
 To get the resource ID in the Azure portal:

 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select a Speech resource.
+1. Select an AI Services resource for Speech.
 1. In the **Resource Management** group on the left pane, select **Properties**.
 1. Copy the **Resource ID**
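
The same two prerequisites, the resource ID and a Microsoft Entra access token, can also be fetched with the Azure CLI. A hedged sketch, not part of this commit (resource name and group are placeholders):

```azurecli
# Resource ID of the AI Services resource for Speech.
az cognitiveservices account show \
    --name YOUR-RESOURCE-NAME \
    --resource-group YOUR-RESOURCE-GROUP \
    --query id --output tsv

# Microsoft Entra access token scoped to Cognitive Services.
az account get-access-token \
    --resource https://cognitiveservices.azure.com \
    --query accessToken --output tsv
```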

articles/ai-services/speech-service/how-to-custom-speech-create-project.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ To create a custom speech project, follow these steps:
 1. Select the subscription and Speech resource to work with.

    > [!IMPORTANT]
-   > If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#regions) table for more information.
+   > If you will train a custom model with audio data, choose a region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#regions) table for more information.

 1. Select **Custom speech** > **Create a new project**.
 1. Follow the instructions provided by the wizard to create your project.
