articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md — 6 additions & 6 deletions
@@ -47,7 +47,7 @@ You also want to avoid mixing different schema designs. Don't build half of your
## Use standard training before advanced training
-[Standard training](../how-to/train-model.md#training-modes) is free and faster than advanced training. It can help you quickly understand the effect of changing your training set or schema while you build the model. After you're satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
+[Standard training](../how-to/train-model.md#training-modes) is free and faster than advanced training. It can help you quickly understand the effect of changing your training set or schema while you build the model. After you're satisfied with the schema, consider using advanced training to get the best model quality.
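Editor's note: as a quick sketch of this in practice (assuming the CLU authoring REST API referenced in train-model.md; the endpoint, key, project name, and API version below are placeholders, not values from this PR), switching modes is a one-field change in the training request:

```bash
# Hypothetical sketch: start a training job in standard mode, then rerun with
# "trainingMode": "advanced" once the schema is stable. Placeholders throughout.
curl --request POST \
  "https://{ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT}/:train?api-version={API-VERSION}" \
  --header "Ocp-Apim-Subscription-Key: {API-KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "modelLabel": "MyModel",
    "trainingMode": "standard",
    "evaluationOptions": {
      "kind": "percentage",
      "trainingSplitPercentage": 80,
      "testingSplitPercentage": 20
    }
  }'
```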
## Use the evaluation feature
@@ -113,7 +113,7 @@ If you enable this feature, the utterance count of your training set increases.
## Address model overconfidence
-Customers can use the LoraNorm recipe version if the model is being incorrectly overconfident. An example of this behavior can be like the following scenario where the model predicts the incorrect intent with 100% confidence. This score makes the confidence threshold project setting unusable.
+Customers can use the LoraNorm training configuration version if the model is incorrectly overconfident. An example of this behavior is the following scenario, where the model predicts the incorrect intent with 100% confidence. This score makes the confidence threshold project setting unusable.
| Text | Predicted intent | Confidence score |
|----|----|----|
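Editor's note: to make the failure mode concrete, at runtime an overconfident model returns a `confidenceScore` of 1.0 for the wrong intent, so no threshold can screen it out. A sketch of such a call (assuming the conversation analysis runtime API; the utterance, intent names, and placeholders are illustrative, not from this PR):

```bash
# Hypothetical sketch: send an utterance to a deployed CLU model. Placeholders throughout.
curl --request POST \
  "https://{ENDPOINT}/language/:analyze-conversations?api-version={API-VERSION}" \
  --header "Ocp-Apim-Subscription-Key: {API-KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "kind": "Conversation",
    "analysisInput": {
      "conversationItem": { "id": "1", "participantId": "user", "text": "book me a flight" }
    },
    "parameters": { "projectName": "{PROJECT}", "deploymentName": "{DEPLOYMENT}" }
  }'
# An overconfident model answers with something like:
#   "topIntent": "Alarm", "intents": [ { "category": "Alarm", "confidenceScore": 1.0 }, ... ]
# With the score pinned at 1.0, no confidence threshold setting can reroute the utterance to None.
```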
@@ -243,7 +243,7 @@ curl --request POST \
## Address out-of-domain utterances
-Customers can use the newly updated recipe version `2024-08-01-preview` (previously `2024-06-01-preview`) if the model has poor AIQ on out-of-domain utterances. An example of this scenario with the default recipe can be like the following example where the model has three intents: `Sports`, `QueryWeather`, and `Alarm`. The test utterances are out-of-domain utterances and the model classifies them as `InDomain` with a relatively high confidence score.
+Customers can use the newly updated training configuration version `2024-08-01-preview` (previously `2024-06-01-preview`) if the model has poor quality on out-of-domain utterances. An example of this scenario with the default training configuration is the following, where the model has three intents: `Sports`, `QueryWeather`, and `Alarm`. The test utterances are out of domain, yet the model classifies them as `InDomain` with a relatively high confidence score.
| Text | Predicted intent | Confidence score |
|----|----|----|
@@ -273,6 +273,6 @@ After the request is sent, you can track the progress of the training job in Lan
Caveats:
-- The None score threshold for the app (confidence threshold below which `topIntent` is marked as `None`) when you use this recipe should be set to 0. This setting is used because this new recipe attributes a certain portion of the in-domain probabilities to out of domain so that the model isn't incorrectly overconfident about in-domain utterances. As a result, users might see slightly reduced confidence scores for in-domain utterances as compared to the prod recipe.
-- We don't recommend this recipe for apps with only two intents, such as `IntentA` and `None`, for example.
-- We don't recommend this recipe for apps with a low number of utterances per intent. We highly recommend a minimum of 25 utterances per intent.
+- The None score threshold for the app (the confidence threshold below which `topIntent` is marked as `None`) should be set to 0 when you use this training configuration. This setting is used because the new training configuration attributes a certain portion of the in-domain probabilities to out of domain so that the model isn't incorrectly overconfident about in-domain utterances. As a result, users might see slightly reduced confidence scores for in-domain utterances as compared to the prod training configuration.
+- We don't recommend this training configuration for apps with only two intents, such as `IntentA` and `None`.
+- We don't recommend this training configuration for apps with a low number of utterances per intent. We highly recommend a minimum of 25 utterances per intent.
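Editor's note: pulling the section together, a training request that opts into the updated training configuration might look like the following sketch (same authoring-API assumption and placeholders as the earlier sketch; only the `trainingConfigVersion` value comes from this page):

```bash
# Hypothetical sketch: train with the updated out-of-domain training configuration.
curl --request POST \
  "https://{ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT}/:train?api-version={API-VERSION}" \
  --header "Ocp-Apim-Subscription-Key: {API-KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "modelLabel": "MyModel",
    "trainingMode": "advanced",
    "trainingConfigVersion": "2024-08-01-preview"
  }'
# Per the caveats above, also set the app's None score threshold to 0 in the
# project settings when you deploy a model trained this way.
```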
articles/ai-services/speech-service/fast-transcription-create.md — 2 additions & 1 deletion
@@ -40,6 +40,7 @@ Construct the request body according to the following instructions:
- Set the required `locales` property. This value should match the expected locale of the audio data to transcribe. The supported locales are: en-US, es-ES, es-MX, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, pt-BR, and zh-CN. You can only specify one locale per transcription request.
- Optionally, set the `profanityFilterMode` property to specify how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. The `profanityFilterMode` property works the same way as via the [batch transcription API](./batch-transcription.md).
- Optionally, set the `channels` property to specify the zero-based indices of the channels to be transcribed separately. If not specified, multiple channels are merged and transcribed jointly. Only up to two channels are supported. If you want to transcribe the channels from a stereo audio file separately, you need to specify `[0,1]` here. Otherwise, stereo audio will be merged to mono, mono audio will be left as is, and only a single channel will be transcribed. In either of the latter cases, the output has no channel indices for the transcribed text, since only a single audio stream is transcribed.
+- Optionally, set the `diarizationSettings` property to recognize and separate multiple speakers in a mono-channel audio file. Specify the minimum and maximum number of people who might be speaking in the audio file (for example, `"diarizationSettings": {"minSpeakers": 1, "maxSpeakers": 4}`). The transcription output then contains a `speaker` entry for each transcribed phrase. The feature isn't available with stereo audio when you set the `channels` property to `[0,1]`.
Make a multipart/form-data POST request to the `transcriptions` endpoint with the audio file and the request body properties. The following example shows how to create a transcription using the fast transcription API.
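Editor's note: for orientation, a request that exercises the new property might look like this sketch (the region host, key, audio file name, and `api-version` value are assumptions, not taken from this PR):

```bash
# Hypothetical sketch: fast transcription of a mono audio file with diarization enabled.
curl --location --request POST \
  "https://{REGION}.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version={API-VERSION}" \
  --header "Ocp-Apim-Subscription-Key: {API-KEY}" \
  --form 'audio=@"audio.wav"' \
  --form 'definition={
    "locales": ["en-US"],
    "profanityFilterMode": "Masked",
    "diarizationSettings": { "minSpeakers": 1, "maxSpeakers": 4 }
  }'
# Each phrase in the response then carries a "speaker" entry. Don't combine
# diarizationSettings with "channels": [0,1]; diarization requires mono input.
```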
@@ -263,4 +264,4 @@ The response will include `duration`, `channel`, and more. The `combinedPhrases`
-[Fast transcription REST API reference](/rest/api/speechtotext/transcriptions/transcribe)
-[Speech to text supported languages](./language-support.md?tabs=stt)
articles/ai-services/speech-service/includes/release-notes/release-notes-stt.md — 5 additions & 0 deletions
@@ -6,6 +6,11 @@ ms.date: 7/12/2024
ms.author: eur
---
+### September 2024 release
+
+#### Fast transcription (Preview)
+
+Fast transcription now supports diarization to recognize and separate multiple speakers in a mono-channel audio file. For more information, see the [fast transcription API guide](../../fast-transcription-create.md#use-the-fast-transcription-api).
articles/ai-studio/how-to/configure-private-link.md — 0 additions & 2 deletions
@@ -26,7 +26,6 @@ You get several hub default resources in your resource group. You need to config
- Disable public network access of hub default resources such as Azure Storage, Azure Key Vault, and Azure Container Registry.
- Establish private endpoint connection to hub default resources. You need to have both a blob and file private endpoint for the default storage account.
-[Managed identity configurations](#managed-identity-configuration) to allow hubs access your storage account if it's private.
- Azure AI Search should be public.
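Editor's note: for the blob and file private endpoints the list above requires, the Azure CLI commands might look like this sketch (the resource, network, and endpoint names are placeholders):

```bash
# Hypothetical sketch: create the blob private endpoint for the hub's default
# storage account; repeat with --group-id file for the required file endpoint.
STORAGE_ID=$(az storage account show \
  --name mystorageaccount --resource-group my-rg --query id --output tsv)

az network private-endpoint create \
  --name hub-storage-blob-pe \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "$STORAGE_ID" \
  --group-id blob \
  --connection-name hub-storage-blob-connection
```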
## Prerequisites
@@ -280,7 +279,6 @@ To check AI-PROJECT-GUID, go to the Azure portal, select your project, settings,
## Limitations
-* Private Azure AI Services and Azure AI Search aren't supported.
* The "Add your data" feature in the Azure AI Studio playground doesn't support private storage account.
* You might encounter problems trying to access the private endpoint for your hub if you're using Mozilla Firefox. This problem might be related to DNS over HTTPS in Mozilla Firefox. We recommend using Microsoft Edge or Google Chrome.
articles/ai-studio/how-to/create-azure-ai-resource.md — 20 additions & 3 deletions
@@ -12,13 +12,14 @@ ms.date: 5/21/2024
ms.reviewer: deeikele
ms.author: larryfr
author: Blackmist
+# Customer Intent: As an admin, I need to create and manage an Azure AI Studio hub so that my team can use it to create projects for collaboration.
---
# How to create and manage an Azure AI Studio hub
In AI Studio, hubs provide the environment for a team to collaborate and organize work, and help you as a team lead or IT admin centrally set up security settings and govern usage and spend. You can create and manage a hub from the Azure portal or from the AI Studio.
-In this article, you learn how to create and manage a hub in AI Studio with the default settings so you can get started quickly. Do you need to customize security or the dependent resources of your hub? Then use [Azure Portal](create-secure-ai-hub.md) or [template options](create-azure-ai-hub-template.md).
+In this article, you learn how to create and manage a hub in AI Studio with the default settings so you can get started quickly. Do you need to customize security or the dependent resources of your hub? Then use the [Azure portal](create-secure-ai-hub.md) or [template options](create-azure-ai-hub-template.md).
> [!TIP]
> If you'd like to create your Azure AI Studio hub using a template, see the articles on using [Bicep](create-azure-ai-hub-template.md) or [Terraform](create-hub-terraform.md).
@@ -105,7 +106,7 @@ For hubs that use CMK encryption mode, you can update the encryption key to a ne
To use custom environments for Prompt Flow, you're required to configure an Azure Container Registry for your hub. To use Azure Application Insights for Prompt Flow deployments, a configured Azure Application Insights resource is required for your hub. Updating the workspace-attached Azure Container Registry or ApplicationInsights resources may break lineage of previous jobs, deployed inference endpoints, or your ability to rerun earlier jobs in the workspace.
-You can use the Azure Portal, Azure SDK/CLI options, or the infrastructure-as-code templates to update both Azure Application Insights and Azure Container Registry for the hub.
+You can use the Azure portal, Azure SDK/CLI options, or the infrastructure-as-code templates to update both Azure Application Insights and Azure Container Registry for the hub.
# [Azure portal](#tab/portal)
@@ -142,7 +143,23 @@ az ml workspace update -n "myexamplehub" -g "{MY_RESOURCE_GROUP}" -a "APPLICATIO
```
---
-## Next steps
+## Delete an Azure AI Studio hub
+
+To delete a hub, use the [Azure portal](https://portal.azure.com). To quickly get to the Azure portal from Azure AI Studio, go to the **Hub overview** for your hub and then select **Manage in Azure portal**.
+
+:::image type="content" source="../media/how-to/hubs/manage-hub-azure-portal.png" alt-text="Screenshot of the manage in Azure portal link in Azure AI Studio.":::
+
+From the portal page for your hub, select **Overview** along the left side of the page and then select **Delete** at the top of the page.
+
+:::image type="content" source="../media/how-to/hubs/delete-hub-button.png" alt-text="Screenshot of the delete button for the Azure AI Studio hub in the Azure portal.":::
+
+You can also find your hub in the Azure portal by entering the hub name in the search field at the top of the page. Select the hub from the **Resources** list to go to its **Overview** page.
+
+:::image type="content" source="../media/how-to/hubs/search-in-portal.png" alt-text="Screenshot of using the search field in the Azure portal to find a hub.":::
+
+
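Editor's note: the portal is the path this section documents. As an aside, because hubs are managed with the same `az ml workspace` CLI the update examples above use, a scripted delete might look like this sketch (the hub and resource group names are placeholders):

```bash
# Hypothetical sketch: delete a hub with the Azure ML CLI instead of the portal.
az ml workspace delete --name myexamplehub --resource-group my-rg
```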
## Related content
- [Create a project](create-projects.md)
148
165
- [Learn more about Azure AI Studio](../what-is-ai-studio.md)