articles/cognitive-services/language-service/orchestration-workflow/concepts/evaluation-metrics.md (1 addition, 1 deletion)
@@ -23,7 +23,7 @@ Model evaluation in orchestration workflow uses the following metrics:
|Recall | The ratio of successful recognitions to the actual number of entities present. |`Recall = #True_Positive / (#True_Positive + #False_Negatives)`|
|F1 score | The combination of precision and recall. |`F1 Score = 2 * Precision * Recall / (Precision + Recall)`|
- # Confusion matrix
+ ## Confusion matrix
A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of intents.
The matrix compares the actual tags with the tags predicted by the model.
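To make the metrics concrete, here is a minimal Python sketch (illustrative only, not part of the service) that builds an intent confusion matrix from actual and predicted labels, then derives each intent's precision, recall, and F1 score with the formulas above:

```python
from collections import Counter

def evaluate_intents(actual, predicted, intents):
    """Build a confusion matrix and per-intent precision/recall/F1."""
    # Confusion matrix: matrix[a][p] counts utterances whose actual
    # intent is `a` and whose predicted intent is `p`.
    matrix = {intent: Counter() for intent in intents}
    for a, p in zip(actual, predicted):
        matrix[a][p] += 1

    scores = {}
    for intent in intents:
        tp = matrix[intent][intent]                        # true positives
        fn = sum(matrix[intent].values()) - tp             # actual intent, predicted as something else
        fp = sum(matrix[a][intent] for a in intents) - tp  # predicted intent, actually something else
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores[intent] = {"precision": precision, "recall": recall, "f1": f1}
    return matrix, scores

# Hypothetical intents and labels, for illustration only.
matrix, scores = evaluate_intents(
    actual=["BookFlight", "BookFlight", "GetWeather"],
    predicted=["BookFlight", "GetWeather", "GetWeather"],
    intents=["BookFlight", "GetWeather"],
)
print(scores["BookFlight"])  # {'precision': 1.0, 'recall': 0.5, 'f1': 0.666...}
```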
articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-query-model.md (1 addition, 3 deletions)
@@ -34,9 +34,7 @@ When a model is deployed, you will be able to test the model directly in the por
In the window that appears, you can create a new deployment by giving it a name, or overwrite an existing deployment name. Then you can add a trained model to this deployment name and press **Next**.
- :::image type="content" source="../media/create-deployment-job-orch.png" alt-text="A screenshot showing deployment job creation in Language Studio." lightbox="../media/create-deployment-job-orch.png":::
-
- 2. If you're connecting one or more LUIS applications or conversational language understanding projects, you have to specify the deployment name.
+ 3. If you're connecting one or more LUIS applications or conversational language understanding projects, you have to specify the deployment name.
No configurations are required for custom question answering or unlinked intents.
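The portal steps above are the documented path. For orientation, here is a hedged Python sketch of scripting the same deployment over REST; the route shape, the `2022-05-01` api-version, and the `trainedModelLabel` body field are assumptions based on the Language authoring REST API, so verify them against the current reference before relying on them:

```python
import requests

# All values below are placeholders; endpoint shape, api-version, and
# body field are assumptions and may not match the current API surface.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

response = requests.put(
    f"{ENDPOINT}/language/authoring/analyze-conversations/projects/<project-name>/deployments/<deployment-name>",
    params={"api-version": "2022-05-01"},
    headers=HEADERS,
    json={"trainedModelLabel": "<trained-model-name>"},  # the trained model assigned to this deployment
)
response.raise_for_status()
# Deployment is a long-running operation; poll the URL returned in the
# 'operation-location' header until it reports success.
print(response.headers.get("operation-location"))
```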
articles/cognitive-services/language-service/orchestration-workflow/how-to/train-model.md (4 additions, 4 deletions)
@@ -1,7 +1,7 @@
---
title: How to train and evaluate models in orchestration workflow projects
titleSuffix: Azure Cognitive Services
- description: Use this article to train a model and view its evaluation details to make improvements.
+ description: Use this article to train an orchestration model and view its evaluation details to make improvements.
services: cognitive-services
author: aahill
manager: nitinme
@@ -13,13 +13,13 @@ ms.author: aahi
ms.custom: language-service-orchestration
---
- # Train and evaluate models
+ # Train and evaluate orchestration workflow models
After you have completed [tagging your utterances](./tag-utterances.md), you can train your model. Training converts the current state of your project's training data into a model that can be used for predictions. Every time you train, you have to name your training instance.
You can create and train multiple models within the same project. However, if you retrain a specific model, it overwrites the last state.
- The training times can be anywhere from a few seconds, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
+ The training times can be anywhere from a few seconds up to a couple of hours when you reach high numbers of utterances.
## Train model
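Training is normally started from Language Studio. As a script-based alternative, here is a hedged Python sketch of kicking off a training job over REST; the `:train` route, api-version, and body fields (`modelLabel`, `trainingMode`, `evaluationOptions`) are assumptions drawn from the Language authoring REST API, so check the current reference before using them:

```python
import requests

# Placeholders throughout; verify route, api-version, and body fields
# against the current Language authoring REST reference.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

response = requests.post(
    f"{ENDPOINT}/language/authoring/analyze-conversations/projects/<project-name>/:train",
    params={"api-version": "2022-05-01"},
    headers=HEADERS,
    json={
        "modelLabel": "<model-name>",  # every training run must be named
        "trainingMode": "standard",
        # Hypothetical split: hold out 20% of tagged utterances as the test set.
        "evaluationOptions": {
            "kind": "percentage",
            "trainingSplitPercentage": 80,
            "testingSplitPercentage": 20,
        },
    },
)
response.raise_for_status()
# Training runs asynchronously; poll the 'operation-location' header URL for status.
print(response.headers.get("operation-location"))
```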
@@ -46,7 +46,7 @@ In the **view model details** page, you'll be able to see all your models, with
> [!NOTE]
> If you don't see any of the intents you have in your model displayed here, it is because they weren't in any of the utterances that were used for the test set.
- You can view the [confusion matrix](../concepts/evaluation-metrics.md#confusion-matrix) for intents by clicking on the **Test set confusion matrix** tab at the top fo the screen.
+ You can view the [confusion matrix](../concepts/evaluation-metrics.md) for intents by clicking on the **Test set confusion matrix** tab at the top of the screen.