`articles/ai-services/language-service/conversational-language-understanding/how-to/deploy-model.md`
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: how-to
ms.date: 06/30/2025
ms.author: lajanuar
ms.custom: language-service-clu
---
# Deploy a model
Once you're satisfied with how your model performs, you can deploy it and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
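Once deployed, the model can be queried directly over REST. The following Python sketch builds an `analyze-conversations` request for the 2023-04-01 runtime API; the endpoint, key, project name, deployment name, and utterance are placeholders, and the payload shape should be verified against the prediction API reference.

```python
# Sketch: build a request for the CLU runtime prediction API (2023-04-01).
# All resource-specific values below are placeholders, not real names.
API_VERSION = "2023-04-01"

def build_analyze_request(endpoint, key, project, deployment, utterance):
    """Return the URL, headers, and JSON body for an analyze-conversations call."""
    url = f"{endpoint}/language/:analyze-conversations?api-version={API_VERSION}"
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    body = {
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": "user",
                "text": utterance,
            }
        },
        "parameters": {
            "projectName": project,
            "deploymentName": deployment,
        },
    }
    return url, headers, body

url, headers, body = build_analyze_request(
    "https://<your-resource>.cognitiveservices.azure.com",
    "<your-key>", "EmailApp", "production", "Send an email to Carol")
# To send the request: requests.post(url, headers=headers, json=body)
```

The response contains the top-scoring intent and any recognized entities for the utterance.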
## Prerequisites
* A [created project](create-project.md)
* [Labeled utterances](tag-utterances.md) and a successfully [trained model](train-model.md)
* A review of the [model performance](view-model-evaluation.md) to determine how your model is performing.
## Swap deployments
After you're done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
* Taking the model assigned to the first deployment, and assigning it to the second deployment.
* Taking the model assigned to the second deployment, and assigning it to the first deployment.
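The swap above can also be triggered over REST. This sketch builds a deployment swap request against the CLU authoring API; the route and api-version are assumptions to check against the current authoring API reference, and the resource and deployment names are placeholders.

```python
# Sketch: build a deployment swap request (assumed authoring route).
# Endpoint, key, project, and deployment names are placeholders.
def build_swap_request(endpoint, key, project, first_deployment, second_deployment):
    """Return the URL, headers, and body for a deployments:swap call."""
    url = (f"{endpoint}/language/authoring/analyze-conversations/projects/"
           f"{project}/deployments:swap?api-version=2023-04-01")
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    body = {
        "firstDeploymentName": first_deployment,
        "secondDeploymentName": second_deployment,
    }
    return url, headers, body

url, headers, body = build_swap_request(
    "https://<your-resource>.cognitiveservices.azure.com",
    "<your-key>", "EmailApp", "staging", "production")
# To start the swap: requests.post(url, headers=headers, json=body)
```

Swapping in place avoids redeploying both models individually, so callers of each deployment name see the new model as soon as the swap completes.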
## Unassign deployment resources
When you unassign or remove a deployment resource from a project, you also delete all the deployments that were deployed to that resource's region.
`articles/ai-services/language-service/custom-text-classification/how-to/tag-data.md`
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: how-to
ms.date: 06/30/2025
ms.author: lajanuar
ms.custom: language-service-custom-classification
---
# Label text data for training your model
Before training your model, you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in the development lifecycle. In this step, you create the classes you want to categorize your data into and label your documents with those classes. This data is used in the next step, when you train your model, so that your model can learn from the labeled data. If you already labeled your data, you can directly [import](create-project.md) it into your project, but make sure that your data follows the [accepted data format](../concepts/data-formats.md).
Before creating a custom text classification model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
Before you can label data, you need:
* [A successfully created project](create-project.md) with a configured Azure blob storage account.
* Documents containing the [uploaded](design-schema.md#data-preparation) text data in your storage account.
See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
## Data labeling guidelines
After [preparing your data, designing your schema](design-schema.md), and [creating your project](create-project.md), you need to label your data. Labeling your data is important so your model knows which documents are associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON file in the storage container that you've connected to this project.
As you label your data, keep in mind:
* In general, more labeled data leads to better results, provided the data is labeled accurately.
* There is no fixed number of labels that can guarantee your model performs best. Model performance depends on possible ambiguity in your [schema](design-schema.md) and on the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.
## Label your data
Use the following steps to label your data:
**Multi label classification**: your file can be labeled with multiple classes. You can do so by selecting all applicable check boxes next to the classes you want to label this document with.
:::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::
6. In the right side pane, under the **Labels** pivot, you can find all the classes in your project and the count of labeled instances for each.
7. In the bottom section of the right side pane, you can add the current file you're viewing to the training set or the testing set. By default, all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
> [!TIP]
> If you're planning on using **Automatic** data splitting, use the default option of assigning all the documents to your training set.
8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
    * **Total instances**, where you can view the count of all labeled instances of a specific class.
    * **Documents with at least one label**, where each document is counted if it contains at least one labeled instance of this class.
9. While you're labeling, your changes are synced periodically. If they haven't been saved yet, you see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
## Remove labels
If you want to remove a label, uncheck the button next to the class.
## Delete classes
To delete a class, select the icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.
* When you create the first project in your service, you get a choice to pick the language each time you create a new project. Select this option to create projects belonging to different languages within one service.
* The language setting option can't be modified for the service once the first project is created.
* If you enable multiple languages for the project, then instead of having one test index for the service, you have one test index per project.
## Supporting multiple languages in one project
If you need to support a project system that includes several languages, you can:
* Use the [Translator service](../../translator/translator-overview.md) to translate a question into a single language before sending the question to your project. This allows you to focus on the quality of a single language and the quality of the alternate questions and answers.
* Create a custom question answering enabled language resource, and a project inside that resource, for every language. This allows you to manage separate alternate questions and answer text that is more nuanced for each language. This provides more flexibility but requires a much higher maintenance cost when the questions or answers change across all languages.
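The first option above is a translate-then-query step. This Python sketch builds a Translator Text v3 `translate` request; the key and region are placeholders, and forwarding the translated text to your single-language project is left as a comment.

```python
# Sketch: translate a user question into English before querying a
# single-language project. Key and region values are placeholders.
def build_translate_request(key, region, text, to_language="en"):
    """Return the URL, headers, and body for a Translator v3 translate call."""
    url = ("https://api.cognitive.microsofttranslator.com/translate"
           f"?api-version=3.0&to={to_language}")
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    }
    body = [{"text": text}]  # Translator v3 takes an array of text items
    return url, headers, body

url, headers, body = build_translate_request(
    "<your-key>", "<your-region>", "¿Cómo reparo mi lápiz?")
# response = requests.post(url, headers=headers, json=body)
# translated = response.json()[0]["translations"][0]["text"]
# ...then send `translated` on to your single-language project.
```

Keeping the project in one language this way concentrates your authoring effort on a single set of alternate questions and answers.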
## Single language per resource
The following list contains the languages supported for a custom question answering resource.
- Arabic
- Armenian
- Bangla
- Basque
- Bulgarian
- Catalan
- Chinese_Simplified
- Chinese_Traditional
- Croatian
- Czech
- Danish
- Dutch
- English
- Estonian
- Finnish
- French
- Galician
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Icelandic
- Indonesian
- Irish
- Italian
- Japanese
- Kannada
- Korean
- Latvian
- Lithuanian
- Malayalam
- Malay
- Norwegian
- Polish
- Portuguese
- Punjabi
- Romanian
- Russian
- Serbian_Cyrillic
- Serbian_Latin
- Slovak
- Slovenian
- Spanish
- Swedish
- Tamil
- Telugu
- Thai
- Turkish
- Ukrainian
- Urdu
- Vietnamese
## Query matching and relevance
Custom question answering depends on [Azure AI Search language analyzers](/rest/api/searchservice/language-support) for providing results.
While the Azure AI Search capabilities are on par for supported languages, custom question answering has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
- Chinese
- Czech
- Dutch
- English
- French
- German
- Hungarian
- Italian
- Japanese
- Korean
- Polish
- Portuguese
- Spanish
- Swedish

This ranking is an internal working of the custom question answering ranker.
`articles/ai-services/language-service/question-answering/tutorials/adding-synonyms.md`
ms.service: azure-ai-language
ms.topic: tutorial
author: laujan
ms.author: lajanuar
ms.date: 06/30/2025
ms.custom: language-service-question-answering
---
In this tutorial, you learn how to:
> * Add synonyms to improve the quality of your responses
> * Evaluate the response quality via the inspect option of the Test pane
This tutorial shows you how to improve the quality of your responses by using synonyms. Let's assume that users aren't getting an accurate response when their queries use alternate forms, synonyms, or acronyms of a word. To improve the quality of the responses, you can use the [Authoring API](../how-to/authoring.md) to add synonyms for keywords.
## Add synonyms using Authoring API
Let's improve the results by adding the following words and their alterations:
|Word | Alterations|
|--------------|--------------------------------|
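The alterations above can be submitted programmatically. This sketch builds a request for the question answering Authoring API's synonyms operation; the route, api-version, and resource names are assumptions to verify against the Authoring API reference.

```python
# Sketch: build a synonyms update request for the question answering
# Authoring API. Endpoint, key, and project name are placeholders, and the
# route shown here is an assumption to check against the API reference.
def build_synonyms_request(endpoint, key, project, alteration_sets):
    """Return the URL, headers, and body for updating a project's synonyms."""
    url = (f"{endpoint}/language/query-knowledgebases/projects/"
           f"{project}/synonyms?api-version=2021-10-01")
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    # Each inner list is one set of words treated as interchangeable.
    body = {"value": [{"alterations": alterations}
                      for alterations in alteration_sets]}
    return url, headers, body

url, headers, body = build_synonyms_request(
    "https://<your-resource>.cognitiveservices.azure.com",
    "<your-key>", "Surface",
    [["fix problems", "troubleshoot", "troubleshooting"]])
# To apply the synonyms: requests.put(url, headers=headers, json=body)
```

Each alteration set groups words that should be treated as equivalent at query time.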
For the question and answer pair “Fix problems with Surface Pen,” we compare the response for a query made using its synonym “troubleshoot.”
## Response before addition of synonym
> [!div class="mx-imgBorder"]
> [](../media/adding-synonyms/score-improvement.png#lightbox)
As you can see, when `troubleshoot` wasn't added as a synonym, we got a low confidence response to the query “How to troubleshoot your surface pen.” However, after we added `troubleshoot` as a synonym to “fix problems,” we received the correct response to the query with a higher confidence score. Once these synonyms were added, the relevance of results improved.
> [!IMPORTANT]
> Synonyms are case insensitive. Synonyms also might not work as expected if you add stop words as synonyms. The list of stop words can be found here: [List of stop words](https://github.com/Azure-Samples/azure-search-sample-data/blob/master/STOPWORDS.md).
* Synonyms can be added in any order. The ordering is not considered in any computational logic.
* Synonyms can only be added to a project that has at least one question and answer pair.
* Overlapping synonym words between two sets of alterations can cause unexpected results; using overlapping sets isn't recommended.
* Special characters aren't allowed for synonyms. Hyphenated words like "COVID-19" are treated the same as "COVID 19," and a space can be used as a term separator. Following is the list of special characters **not allowed**: