
Commit 0aa9bf2

Merge pull request #270928 from MicrosoftDocs/main
4/2/2024 AM Publish
2 parents 7d70ca4 + 8c9f822 commit 0aa9bf2

85 files changed: +1864 -620 lines changed


articles/ai-services/speech-service/get-started-stt-diarization.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ keywords: speech to text, speech to text software
 #customer intent: As a developer, I want to create speech to text applications that use diarization to improve readability of multiple person conversations.
 ---
 
-# Quickstart: Create real-time diarization (Preview)
+# Quickstart: Create real-time diarization
 
 ::: zone pivot="programming-language-csharp"
 [!INCLUDE [C# include](includes/quickstarts/stt-diarization/csharp.md)]

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/cpp.md

Lines changed: 0 additions & 3 deletions
@@ -142,9 +142,6 @@ Follow these steps to create a console application and install the Speech SDK.
 
 The application recognizes speech from multiple participants in the conversation. Your audio file should contain multiple speakers.
 
-> [!NOTE]
-> The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
-
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 
 1. [Build and run](/cpp/build/vscpp-step-2-build) your application to start conversation transcription:

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/csharp.md

Lines changed: 0 additions & 3 deletions
@@ -121,9 +121,6 @@ Follow these steps to create a console application and install the Speech SDK.
 
 The application recognizes speech from multiple participants in the conversation. Your audio file should contain multiple speakers.
 
-> [!NOTE]
-> The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
-
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 
 1. Run your console application to start conversation transcription:

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/intro.md

Lines changed: 0 additions & 3 deletions
@@ -8,9 +8,6 @@ ms.author: eur
 
 In this quickstart, you run an application for speech to text transcription with real-time diarization. Diarization distinguishes between the different speakers who participate in the conversation. The Speech service provides information about which speaker was speaking a particular part of transcribed speech.
 
-> [!NOTE]
-> Real-time diarization is currently in public preview.
-
 The speaker information is included in the result in the speaker ID field. The speaker ID is a generic identifier assigned to each conversation participant by the service during the recognition as different speakers are being identified from the provided audio content.
 
 > [!TIP]
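
As a rough illustration of the speaker ID field described in this intro, here is a minimal sketch using the Python Speech SDK from this quickstart series. The `SPEECH_KEY` and `SPEECH_REGION` environment variables and the `katiesteve.wav` file name follow the quickstart conventions but are placeholders here.

```python
import os
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials, following the quickstart's environment-variable convention.
speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])

# Placeholder file name; use any WAV file that contains multiple speakers.
audio_config = speechsdk.audio.AudioConfig(filename="katiesteve.wav")
transcriber = speechsdk.transcription.ConversationTranscriber(
    speech_config=speech_config, audio_config=audio_config)

def transcribed_cb(evt):
    # Each final result carries the recognized text plus the generic speaker ID
    # (for example, Guest-1); very short or overlapping audio can yield Unknown.
    print(f"{evt.result.speaker_id}: {evt.result.text}")

transcriber.transcribed.connect(transcribed_cb)
transcriber.start_transcribing_async().get()
input("Transcribing... press Enter to stop.\n")
transcriber.stop_transcribing_async().get()
```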

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/java.md

Lines changed: 0 additions & 3 deletions
@@ -148,9 +148,6 @@ Follow these steps to create a console application for conversation transcriptio
 
 The application recognizes speech from multiple participants in the conversation. Your audio file should contain multiple speakers.
 
-> [!NOTE]
-> The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
-
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 
 1. Run your new console application to start conversation transcription:

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/javascript.md

Lines changed: 0 additions & 3 deletions
@@ -93,9 +93,6 @@ Follow these steps to create a new console application for conversation transcri
 
 The application recognizes speech from multiple participants in the conversation. Your audio file should contain multiple speakers.
 
-> [!NOTE]
-> The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
-
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 
 1. Run your new console application to start speech recognition from a file:

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/python.md

Lines changed: 0 additions & 3 deletions
@@ -108,9 +108,6 @@ Follow these steps to create a new console application.
 
 The application recognizes speech from multiple participants in the conversation. Your audio file should contain multiple speakers.
 
-> [!NOTE]
-> The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
-
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 
 1. Run your new console application to start conversation transcription:
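
For the Python version of this quickstart, the language change described in step 1 is a single property on the `SpeechConfig` object. A minimal sketch, assuming the quickstart's environment-variable credentials:

```python
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
# Replace en-US with another supported language, for example es-ES for Spanish (Spain).
speech_config.speech_recognition_language = "es-ES"
```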

articles/ai-services/speech-service/includes/release-notes/release-notes-stt.md

Lines changed: 8 additions & 0 deletions
@@ -6,6 +6,14 @@ ms.date: 3/13/2024
 ms.author: eur
 ---
 
+### April 2024 release
+
+#### Real-time speech to text with diarization (GA)
+
+Real-time speech to text with diarization is now generally available.
+
+Check out the [Real-time diarization quickstart](../../get-started-stt-diarization.md) to learn how to create speech to text applications that use diarization to distinguish between the different speakers who participate in the conversation.
+
 ### March 2024 release
 
 #### Whisper general availability (GA)

articles/ai-studio/tutorials/deploy-copilot-ai-studio.md

Lines changed: 1 addition & 0 deletions
@@ -414,6 +414,7 @@ Now that you have your evaluation dataset, you can evaluate your flow by followi
 
 > [!NOTE]
 > Evaluation with AI-assisted metrics needs to call another GPT model to do the calculation. For best performance, use a GPT-4 or gpt-35-turbo-16k model. If you didn't previously deploy a GPT-4 or gpt-35-turbo-16k model, you can deploy another model by following the steps in [Deploy a chat model](#deploy-a-chat-model). Then return to this step and select the model you deployed.
+> The evaluation process can consume a large number of tokens, so it's recommended to use a model that supports at least 16k tokens.
 
 1. Select **Add new dataset**. Then select **Next**.
 
articles/azure-functions/functions-bindings-signalr-service-trigger.md

Lines changed: 12 additions & 11 deletions
@@ -6,7 +6,7 @@ ms.topic: reference
 ms.devlang: csharp
 # ms.devlang: csharp, javascript, python
 ms.custom: devx-track-csharp, devx-track-extended-java, devx-track-js, devx-track-python
-ms.date: 03/12/2024
+ms.date: 04/02/2024
 ms.author: zityang
 zone_pivot_groups: programming-languages-set-functions-lang-workers
 ---
@@ -25,7 +25,6 @@ For information on setup and configuration details, see the [overview](functions
 
 ::: zone pivot="programming-language-csharp"
 
-
 [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
 
 [!INCLUDE [functions-in-process-model-retirement-note](../../includes/functions-in-process-model-retirement-note.md)]
@@ -36,10 +35,12 @@ The following sample shows a C# function that receives a message event from clie
 
 :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/SignalR/SignalRTriggerFunctions.cs" id="snippet_on_message":::
 
+> [!IMPORTANT]
+> Due to limitations of the C# isolated worker model, the class-based model of SignalR Service bindings doesn't simplify how you write SignalR triggers. For more information about the class-based model, see [Class based model](../azure-signalr/signalr-concept-serverless-development-config.md#class-based-model).
 
 # [In-process model](#tab/in-process)
 
-SignalR Service trigger binding for C# has two programming models. Class based model and traditional model. Class based model provides a consistent SignalR server-side programming experience. Traditional model provides more flexibility and is similar to other function bindings.
+The SignalR Service trigger binding for the C# in-process model has two programming models: the class-based model and the traditional model. The class-based model provides a consistent SignalR server-side programming experience. The traditional model provides more flexibility and is similar to other function bindings.
 
 ### With class-based model
 
@@ -200,7 +201,7 @@ See the [Example section](#example) for complete examples.
 
 ### Payloads
 
-The trigger input type is declared as either `InvocationContext` or a custom type. If you choose `InvocationContext` you get full access to the request content. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
+The trigger input type is declared as either `InvocationContext` or a custom type. If you choose `InvocationContext`, you get full access to the request content. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
 
 ### InvocationContext
 
@@ -210,11 +211,11 @@ The trigger input type is declared as either `InvocationContext` or a custom typ
 |------------------------------|------------|
 |Arguments| Available for *messages* category. Contains *arguments* in [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding)|
 |Error| Available for *disconnected* event. It can be Empty if the connection closed with no error, or it contains the error messages.|
-|Hub| The hub name which the message belongs to.|
+|Hub| The hub name that the message belongs to.|
 |Category| The category of the message.|
 |Event| The event of the message.|
-|ConnectionId| The connection ID of the client which sends the message.|
-|UserId| The user identity of the client which sends the message.|
+|ConnectionId| The connection ID of the client that sends the message.|
+|UserId| The user identity of the client that sends the message.|
 |Headers| The headers of the request.|
 |Query| The query of the request when clients connect to the service.|
 |Claims| The claims of the client.|
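
To make the table concrete, the following is a minimal sketch of how a Python function (v1 programming model) might read these properties from the trigger payload. The binding name `invocation` and the assumption that the payload is the JSON-serialized `InvocationContext` with the property names listed above are illustrative, not taken from this article.

```python
import json
import logging


def main(invocation) -> None:
    # Assumed: the trigger delivers the InvocationContext as a JSON string,
    # using the property names from the table above.
    context = json.loads(invocation)
    logging.info(
        "Hub %s, event %s: connection %s sent %s",
        context.get("Hub"),
        context.get("Event"),
        context.get("ConnectionId"),
        context.get("Arguments"),
    )
```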
@@ -235,27 +236,27 @@ After you set `parameterNames`, the names you defined correspond to the argument
 [SignalRTrigger(parameterNames: new string[] {"arg1", "arg2"})]
 ```
 
-Then, the `arg1` will contain the content of `message1`, and `arg2` will contain the content of `message2`.
+Then, `arg1` contains the content of `message1`, and `arg2` contains the content of `message2`.
 
 ### `ParameterNames` considerations
 
 For the parameter binding, the order matters. If you're using `ParameterNames`, the order in `ParameterNames` matches the order of the arguments you invoke in the client. If you're using attribute `[SignalRParameter]` in C#, the order of arguments in Azure Function methods matches the order of arguments in clients.
 
-`ParameterNames` and attribute `[SignalRParameter]` **cannot** be used at the same time, or you will get an exception.
+`ParameterNames` and the attribute `[SignalRParameter]` **cannot** be used at the same time, or you'll get an exception.
 
 ### SignalR Service integration
 
 SignalR Service needs a URL to access Function App when you're using SignalR Service trigger binding. The URL should be configured in **Upstream Settings** on the SignalR Service side.
 
 :::image type="content" source="../azure-signalr/media/concept-upstream/upstream-portal.png" alt-text="Upstream Portal":::
 
-When using SignalR Service trigger, the URL can be simple and formatted as shown below:
+When using the SignalR Service trigger, the URL can be simple and formatted as follows:
 
 ```http
 <Function_App_URL>/runtime/webhooks/signalr?code=<API_KEY>
 ```
 
-The `Function_App_URL` can be found on Function App's Overview page and The `API_KEY` is generated by Azure Function. You can get the `API_KEY` from `signalr_extension` in the **App keys** blade of Function App.
+The `Function_App_URL` can be found on the Function App's Overview page, and the `API_KEY` is generated by Azure Functions. You can get the `API_KEY` from `signalr_extension` in the **App keys** blade of the Function App.
 :::image type="content" source="media/functions-bindings-signalr-service/signalr-keys.png" alt-text="API key":::
 
 If you want to use more than one Function App together with one SignalR Service, upstream can also support complex routing rules. Find more details at [Upstream settings](../azure-signalr/concept-upstream.md).
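
As a small illustration of the URL template above, this sketch assembles an upstream URL from placeholder values; the function app name and key are hypothetical, and the real key is the `signalr_extension` system key from **App keys**.

```python
# Placeholder values for illustration only.
function_app_url = "https://contoso-functions.azurewebsites.net"
api_key = "<signalr_extension-key-from-app-keys>"

# Matches the documented template <Function_App_URL>/runtime/webhooks/signalr?code=<API_KEY>.
upstream_url = f"{function_app_url}/runtime/webhooks/signalr?code={api_key}"
print(upstream_url)
```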
