articles/ai-services/speech-service/includes/how-to/professional-voice/create-project/speech-studio.md (2 additions, 3 deletions)
```diff
@@ -5,8 +5,7 @@ author: eric-urban
 ms.author: eur
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 12/1/2023
-ms.custom: include
+ms.date: 1/6/2025
 ---
 
 Content for [Custom neural voice](https://aka.ms/customvoice) like data, models, tests, and endpoints are organized into projects in Speech Studio. Each project is specific to a country/region and language, and the gender of the voice you want to create. For example, you might create a project for a female voice for your call center's chat bots that use English in the United States.
```
```diff
@@ -30,7 +29,7 @@ To create a custom neural voice Pro project, follow these steps:
 1. Select **Custom neural voice Pro** > **Next**.
 1. Follow the instructions provided by the wizard to create your project.
 
-Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**.
+Select the new project by name or select **Go to project**. You see these menu items in the left panel: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**.
```
articles/ai-services/speech-service/includes/quickstarts/voice-assistants/go/go.md (11 additions, 11 deletions)
```diff
@@ -1,9 +1,9 @@
 ---
-author: trrwilson
+author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 05/25/2020
-ms.author: trrwilson
+ms.date: 1/6/2025
+ms.author: eur
 ---
 
 ## Prerequisites
```
````diff
@@ -17,9 +17,9 @@ Before you get started:
 > * Make sure that you have access to a microphone for audio capture
 >
 > [!NOTE]
-> Please refer to [the list of supported regions for voice assistants](~/articles/ai-services/speech-service/regions.md#regions) and ensure your resources are deployed in one of those regions.
+> Refer to [the list of supported regions for voice assistants](~/articles/ai-services/speech-service/regions.md#regions) and ensure your resources are deployed in one of those regions.
 
-## Setup your environment
+## Set up your environment
 
 Update the go.mod file with the latest SDK version by adding this line
 ```sh
````
````diff
@@ -29,11 +29,11 @@ require (
 ```
 
 ## Start with some boilerplate code
-Replace the contents of your source file (e.g. `quickstart.go`) with the below, which includes:
+Replace the contents of your source file (for example, `quickstart.go`) with the below, which includes:
 
 - "main" package definition
 - importing the necessary modules from the Speech SDK
-- variables for storing the bot information that will be replaced later in this quickstart
+- variables for storing the bot information that is replaced later in this quickstart
 - a simple implementation using the microphone for audio input
 - event handlers for various events that take place during a speech interaction
 
````
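The `require` block that these two hunks bracket is not shown in the diff context. For orientation only, a go.mod fragment of the kind the quickstart describes might look like the sketch below; the module path is the published Speech SDK for Go, but the version string is an assumed placeholder, not taken from the diff:

```sh
// go.mod fragment (sketch). The version below is an assumed placeholder;
// pin whatever release of the Speech SDK for Go is current.
require (
    github.com/Microsoft/cognitive-services-speech-sdk-go v1.33.0
)
```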
```diff
@@ -100,24 +100,24 @@ Replace the `YOUR_SUBSCRIPTION_KEY` and `YOUR_BOT_REGION` values with actual values
 - Use the Region identifier as the `YOUR_BOT_REGION` value replacement, for example: `"westus"` for **West US**
 
 > [!NOTE]
-> Please refer to [the list of supported regions for voice assistants](~/articles/ai-services/speech-service/regions.md#regions) and ensure your resources are deployed in one of those regions.
+> Refer to [the list of supported regions for voice assistants](~/articles/ai-services/speech-service/regions.md#regions) and ensure your resources are deployed in one of those regions.
 
 > [!NOTE]
 > For information on configuring your bot, see the Bot Framework documentation for [the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech).
 
 ## Code explanation
 The Speech subscription key and region are required to create a speech configuration object. The configuration object is needed to instantiate a speech recognizer object.
 
-The recognizer instance exposes multiple ways to recognize speech. In this example, speech is continuously recognized. This functionality lets the Speech service know that you're sending many phrases for recognition, and when the program terminates to stop recognizing speech. As results are yielded, the code will write them to the console.
+The recognizer instance exposes multiple ways to recognize speech. In this example, speech is continuously recognized. This functionality lets the Speech service know that you're sending many phrases for recognition, and when the program terminates to stop recognizing speech. As results are yielded, the code writes them to the console.
 
 ## Build and run
 You're now set up to build your project and test your custom voice assistant using the Speech service.
-1. Build your project, e.g. **"go build"**
+1. Build your project, for example **"go build"**
 2. Run the module and speak a phrase or sentence into your device's microphone. Your speech is transmitted to the Direct Line Speech channel and transcribed to text, which appears as output.
 
 
 > [!NOTE]
-> The Speech SDK will default to recognizing using en-us for the language, see [How to recognize speech](../../../../how-to-recognize-speech.md) for information on choosing the source language.
+> The Speech SDK defaults to recognizing using en-us for the language, see [How to recognize speech](../../../../how-to-recognize-speech.md) for information on choosing the source language.
```