articles/ai-services/language-service/concepts/role-based-access-control.md (+4 -4)
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: conceptual
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ---
@@ -51,7 +51,7 @@ These custom roles only apply to Language resources.
 
 ### Cognitive Services Language Reader
 
-A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application’s assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results.
+A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They might want to review the application’s assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results.
 
 :::row:::
@@ -85,7 +85,7 @@ A user that should only be validating and reviewing the Language apps, typically
 
 ### Cognitive Services Language Writer
 
-A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn’t have access to deploying this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn’t be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned.
+A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn’t have access to deploying this application to the runtime, as they might accidentally reflect their changes in production. They also shouldn’t be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They might also create new applications under this resource, but with the restrictions mentioned.
 
 :::row:::
 :::column span="":::
@@ -104,7 +104,7 @@ A user that is responsible for building and modifying an application, as a collaborator
 :::column-end:::
 :::column span="":::
 * All APIs under Language reader
-* All POST, PUT and PATCH APIs under:
+* All POST, PUT, and PATCH APIs under:
 * [Language conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
 * [Language text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
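An illustrative aside on the role split in this file (not part of the diff): the Reader and Writer roles above differ mainly in which HTTP methods they may issue — Readers get read-only access, Writers additionally get POST, PUT, and PATCH for authoring, but not DELETE, which protects apps already in production. The helper below is a hypothetical sketch restating that table, not an Azure SDK or RBAC API:

```python
# Hypothetical helper restating the custom Language role split described above:
# Readers are read-only; Writers also get POST/PUT/PATCH (authoring) but not
# DELETE, so they cannot remove an application serving production traffic.
READER_METHODS = {"GET"}
WRITER_METHODS = READER_METHODS | {"POST", "PUT", "PATCH"}

ROLE_METHODS = {
    "Cognitive Services Language Reader": READER_METHODS,
    "Cognitive Services Language Writer": WRITER_METHODS,
}

def is_allowed(role: str, http_method: str) -> bool:
    """Return True if the given role may issue the given HTTP method."""
    return http_method.upper() in ROLE_METHODS.get(role, set())

print(is_allowed("Cognitive Services Language Writer", "PATCH"))   # True
print(is_allowed("Cognitive Services Language Writer", "DELETE"))  # False
print(is_allowed("Cognitive Services Language Reader", "POST"))    # False
```

Real enforcement happens in Azure RBAC, of course; this only mirrors the documented permission table.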
articles/ai-services/language-service/conversational-language-understanding/concepts/multiple-languages.md
articles/ai-services/language-service/conversational-language-understanding/how-to/deploy-model.md (+5 -5)
@@ -6,18 +6,18 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: how-to
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ms.custom: language-service-clu,
 ---
 
 # Deploy a model
 
-Once you are satisfied with how your model performs, it's ready to be deployed, and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
+Once you're satisfied with how your model performs, you can deploy it and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
 
 ## Prerequisites
 
-* A successfully [created project](create-project.md)
+* A [created project](create-project.md)
 * [Labeled utterances](tag-utterances.md) and a successfully [trained model](train-model.md)
 * Reviewed the [model performance](view-model-evaluation.md) to determine how your model is performing.
@@ -45,7 +45,7 @@ After you have reviewed the model's performance and decide it's fit to be used in
 
 ## Swap deployments
 
-After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
+After you're done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
 
 * Taking the model assigned to the first deployment, and assigning it to the second deployment.
 * Taking the model assigned to the second deployment, and assigning it to the first deployment.
@@ -89,7 +89,7 @@ You can [deploy your project to multiple regions](../../concepts/custom-features
 
 ## Unassign deployment resources
 
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+When unassigning or removing a deployment resource from a project, you'll also delete all the deployments that have been deployed to the resource's region.
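As context for the prediction API linked in this file's first hunk: once a model is deployed, callers query the runtime by project name and deployment name. The sketch below builds such a request without sending it. The endpoint is a placeholder, and the payload shape is a best-effort rendering of the 2023-04-01 `analyze-conversations` body; verify it against the linked REST reference before use:

```python
# Sketch of a CLU runtime prediction request for a deployed model.
# ENDPOINT is a placeholder, and the body shape is an assumption based on the
# 2023-04-01 API version; check the official REST reference for the real schema.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_VERSION = "2023-04-01"

def build_prediction_request(project: str, deployment: str, utterance: str):
    """Return the (url, json_body) pair for an analyze-conversations call."""
    url = f"{ENDPOINT}/language/:analyze-conversations?api-version={API_VERSION}"
    body = {
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {"id": "1", "participantId": "user", "text": utterance}
        },
        # The deployment name ties the query to a specific deployed model,
        # which is why swapping deployments (below) swaps what callers see.
        "parameters": {"projectName": project, "deploymentName": deployment},
    }
    return url, body

url, body = build_prediction_request("MyProject", "production", "Book me a flight")
print(body["parameters"]["deploymentName"])  # production
```

Because callers address a deployment name rather than a model, swapping which model sits behind "production" needs no client changes.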
articles/ai-services/language-service/custom-text-classification/how-to/tag-data.md (+10 -10)
@@ -7,14 +7,14 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: how-to
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ms.custom: language-service-custom-classification
 ---
 
 # Label text data for training your model
 
-Before training your model you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in the development lifecycle; in this step you can create the classes you want to categorize your data into and label your documents with these classes. This data will be used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md) it into your project but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md).
+Before training your model, you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in the development lifecycle; in this step you can create the classes you want to categorize your data into and label your documents with these classes. This data will be used in the next step when training your model so that your model can learn from the labeled data. If you already labeled your data, you can directly [import](create-project.md) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md).
 
 Before creating a custom text classification model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
@@ -23,19 +23,19 @@ Before creating a custom text classification model, you need to have labeled data
 
 Before you can label data, you need:
 
 * [A successfully created project](create-project.md) with a configured Azure blob storage account,
-* Documents containing text data that have [been uploaded](design-schema.md#data-preparation) to your storage account.
+* Documents containing the [uploaded](design-schema.md#data-preparation) text data in your storage account.
 
 See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
 
 ## Data labeling guidelines
 
-After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which documents will be associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON file in your storage container that you've connected to this project.
+After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which documents will be associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON file in your storage container that you've connected to this project.
 
 As you label your data, keep in mind:
 
 * In general, more labeled data leads to better results, provided the data is labeled accurately.
 
-* There is no fixed number of labels that can guarantee your model will perform the best. Model performance depends on possible ambiguity in your [schema](design-schema.md) and the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.
+* There is no fixed number of labels that can guarantee your model performs the best. Model performance depends on possible ambiguity in your [schema](design-schema.md) and the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.
 
 ## Label your data
@@ -61,7 +61,7 @@ Use the following steps to label your data:
 
-**Multi label classification**: your file can be labeled with multiple classes, you can do so by selecting all applicable check boxes next to the classes you want to label this document with.
+**Multi label classification**: your file can be labeled with multiple classes. You can do so by selecting all applicable check boxes next to the classes you want to label this document with.
 
 :::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::
@@ -77,24 +77,24 @@ Use the following steps to label your data:
 
 6. In the right side pane under the **Labels** pivot you can find all the classes in your project and the count of labeled instances per each.
 
-7. In the bottom section of the right side pane you can add the current file you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
+7. In the bottom section of the right side pane you can add the current file you're viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
 
 > [!TIP]
-> If you are planning on using **Automatic** data splitting use the default option of assigning all the documents into your training set.
+> If you're planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
 
 8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
 * *Total instances* where you can view the count of all labeled instances of a specific class.
 * *Documents with at least one label* where each document is counted if it contains at least one labeled instance of this class.
 
-9. While you're labeling, your changes will be synced periodically, if they have not been saved yet you will find a warning at the top of your page. If you want to save manually, select **Save labels** button at the bottom of the page.
+9. While you're labeling, your changes are synced periodically; if they haven't been saved yet, you'll find a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
 
 ## Remove labels
 
 If you want to remove a label, uncheck the button next to the class.
 
 ## Delete classes
 
-To delete a class, select the delete icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.
+To delete a class, select the icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.
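As background for the hunks above, which mention that labels live in a JSON file in the connected storage container: the sketch below shows one plausible shape of such a labels file, including the train/test split from step 7. Field names here are illustrative assumptions; the authoritative schema is the "accepted data format" document linked in this file:

```python
# Illustrative (assumed) shape of a labels file like the one Language Studio
# writes to the connected storage container. The authoritative schema is the
# project's "accepted data format" documentation, not this sketch.
labels = {
    "projectKind": "customSingleLabelClassification",  # assumed value
    "classes": [{"category": "Sports"}, {"category": "Politics"}],
    "documents": [
        {
            "location": "doc1.txt",   # blob name in the connected container
            "language": "en-us",
            "dataset": "Train",       # Train/Test split, as in step 7 above
            "class": {"category": "Sports"},
        }
    ],
}

# Quick sanity check: every labeled document references a declared class,
# mirroring the "deleting a class removes its labeled instances" invariant.
declared = {c["category"] for c in labels["classes"]}
assert all(d["class"]["category"] in declared for d in labels["documents"])
print("labels file looks consistent")
```

A check like this is useful before importing hand-built labeled data, since an undeclared class would make the import fail or mislabel documents.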
articles/ai-services/language-service/custom-text-classification/tutorials/triage-email.md (+3 -3)
@@ -7,11 +7,11 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: tutorial
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ---
 
-# Tutorial: Triage incoming emails with power automate
+# Tutorial: Triage incoming emails with Power Automate
 
 In this tutorial you will categorize and triage incoming email using custom text classification. Using this [Power Automate](/power-automate/getting-started) flow, when a new email is received, its contents will have a classification applied, and depending on the result, a message will be sent to a designated channel on [Microsoft Teams](https://www.microsoft.com/microsoft-teams).
@@ -27,7 +27,7 @@ In this tutorial you will categorize and triage incoming email using custom text
 
 ## Create a Power Automate flow
 
-1. [Sign in to power automate](https://make.powerautomate.com/)
+1. [Sign in to Power Automate](https://make.powerautomate.com/)
 
 2. From the left side menu, select **My flows** and create an **Automated cloud flow**
articles/ai-services/language-service/question-answering/concepts/azure-resources.md (+3 -3)
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
 ms.topic: conceptual
 author: laujan
 ms.author: lajanuar
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.custom: language-service-question-answering
 ---
@@ -34,7 +34,7 @@ Typically there are three parameters you need to consider:
 
 * The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.
 
-* This should also influence your **Azure AI Search** SKU selection, see more details [here](/azure/search/search-sku-tier). Additionally, you might need to adjust Azure AI Search [capacity](/azure/search/search-capacity-planning) with replicas.
+* This should also influence your **Azure AI Search** selection; see more details [here](/azure/search/search-sku-tier). Additionally, you might need to adjust Azure AI Search [capacity](/azure/search/search-capacity-planning) with replicas.
 
 * **Size and the number of projects**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of projects you need based on number of different subject domains. One subject domain (for a single language) should be in one project.
@@ -58,7 +58,7 @@ The following table gives you some high-level guidelines.
 
 ## Recommended settings
 
-The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure AI Search.
+The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) tier of Azure AI Search.
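To make the 10-records-per-second cap in this file concrete: Language service billing counts a text record as up to 1,000 characters (verify this figure against current pricing docs), so longer documents consume multiple records per call. A quick capacity sketch under that assumption:

```python
import math

# Assumption: one text record = up to 1,000 characters (check pricing docs).
CHARS_PER_TEXT_RECORD = 1_000
THROUGHPUT_CAP = 10  # text records per second, from the section above

def text_records(doc_chars: int) -> int:
    """Number of text records a single document of doc_chars consumes."""
    return max(1, math.ceil(doc_chars / CHARS_PER_TEXT_RECORD))

def max_queries_per_second(avg_doc_chars: int) -> float:
    """Upper bound on queries/second under the throughput cap."""
    return THROUGHPUT_CAP / text_records(avg_doc_chars)

print(text_records(2500))                       # 3
print(round(max_queries_per_second(2500), 2))   # 3.33
```

So a workload averaging 2,500-character queries can sustain roughly a third of the nominal 10 queries per second, which is worth knowing before picking the S1 tier recommendation above.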
articles/ai-services/language-service/question-answering/how-to/chit-chat.md (+4 -4)
@@ -1,20 +1,20 @@
 ---
 title: Adding chitchat to a custom question answering project
 titleSuffix: Azure AI services
-description: Adding personal chitchat to your bot makes it more conversational and engaging when you create a project. Custom question answering allows you to easily add a pre-populated set of the top chitchat into your projects.
+description: Adding personal chitchat to your bot makes it more conversational and engaging when you create a project. Custom question answering allows you to easily add a prepopulated set of the top chitchat into your projects.
 #services: cognitive-services
 manager: nitinme
 author: laujan
 ms.author: lajanuar
 ms.service: azure-ai-language
 ms.topic: how-to
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.custom: language-service-question-answering
 ---
 
 # Use chitchat with a project
 
-Adding chitchat to your bot makes it more conversational and engaging. The chitchat feature in custom question answering allows you to easily add a pre-populated set of the top chitchat into your project. This can be a starting point for your bot's personality, and it will save you the time and cost of writing them from scratch.
+Adding chitchat to your bot makes it more conversational and engaging. The chitchat feature in custom question answering allows you to easily add a prepopulated set of the top chitchat into your project. This can be a starting point for your bot's personality, and it will save you the time and cost of writing them from scratch.
 
 This dataset has about 100 scenarios of chitchat in the voice of multiple personas, like Professional, Friendly and Witty. Choose the persona that most closely resembles your bot's voice. Given a user query, custom question answering tries to match it with the closest known chitchat question and answer.
@@ -70,7 +70,7 @@ To turn the views for context and metadata on and off, select **Show columns** in
 
 ## Add more chitchat questions and answers
 
-You can add a new chitchat question pair that is not in the predefined data set. Ensure that you are not duplicating a question pair that is already covered in the chitchat set. When you add any new chitchat question pair, it gets added to your **Editorial** source. To ensure the ranker understands that this is chitchat, add the metadata key/value pair "Editorial: chitchat", as seen in the following image:
+You can add a new chitchat question pair that is not in the predefined data set. Ensure that you are not duplicating a question pair that is already covered in the chitchat set. When you add any new chitchat question pair, it gets added to your **Editorial** source. To ensure the ranker understands that this is chitchat, add the metadata key/value pair "Editorial: chitchat," as seen in the following image:
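The metadata convention in that last hunk can be shown as data. Below is a hypothetical new chitchat question pair added to the Editorial source, carrying the "Editorial: chitchat" key/value pair the ranker uses; the surrounding field names are illustrative, not the exact custom question answering REST schema:

```python
# Hypothetical editorial chitchat QnA pair. The "Editorial: chitchat" metadata
# key/value is the convention described above; the surrounding field names are
# illustrative rather than the exact custom question answering REST schema.
new_pair = {
    "source": "Editorial",
    "questions": ["What's your favorite color?"],
    "answer": "I like all colors equally!",
    "metadata": {"Editorial": "chitchat"},  # tells the ranker this is chitchat
}

assert new_pair["metadata"].get("Editorial") == "chitchat"
print("chitchat pair tagged correctly")
```

Without that metadata pair, the new question would be ranked like any other editorial QnA instead of being grouped with the chitchat set.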
0 commit comments