
Commit 41110ba ("fixes")
Parent: 606e308

3 files changed: +6 additions, -8 deletions
Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ Model evaluation in orchestration workflow uses the following metrics:
 |Recall | The ratio of successful recognitions to the actual number of entities present. | `Recall = #True_Positive / (#True_Positive + #False_Negatives)` |
 |F1 score | The combination of precision and recall. | `F1 Score = 2 * Precision * Recall / (Precision + Recall)` |
 
-# Confusion matrix
+## Confusion matrix
 
 A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of intents.
 The matrix compares the actual tags with the tags predicted by the model.
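As an aside (not part of the commit), the metrics and confusion matrix described in the hunk above can be reproduced in a few lines of Python; the intent labels and tag lists below are made-up examples, not data from the article:

```python
from collections import Counter

def intent_metrics(actual, predicted, intent):
    """Per-intent precision, recall, and F1, per the formulas in the table above."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == intent and p == intent)
    fp = sum(1 for a, p in zip(actual, predicted) if a != intent and p == intent)
    fn = sum(1 for a, p in zip(actual, predicted) if a == intent and p != intent)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def confusion_matrix(actual, predicted):
    """N x N matrix of (actual intent, predicted intent) counts, N = number of intents."""
    intents = sorted(set(actual) | set(predicted))
    counts = Counter(zip(actual, predicted))
    return intents, [[counts[(a, p)] for p in intents] for a in intents]

# Hypothetical test-set tags (illustrative only):
actual    = ["Book", "Book", "Cancel", "Cancel", "Book"]
predicted = ["Book", "Cancel", "Cancel", "Cancel", "Book"]
precision, recall, f1 = intent_metrics(actual, predicted, "Book")
```

Each row of the matrix is an actual intent and each column a predicted intent, so off-diagonal cells show which intents the model confuses with each other.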

articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-query-model.md

Lines changed: 1 addition & 3 deletions

@@ -34,9 +34,7 @@ When a model is deployed, you will be able to test the model directly in the por
 
 In the window that appears, you can create a new deployment name by giving the deployment a name or override an existing deployment name. Then, you can add a trained model to this deployment name and press next.
 
-:::image type="content" source="../media/create-deployment-job-orch.png" alt-text="A screenshot showing deployment job creation in Language Studio." lightbox="../media/create-deployment-job-orch.png":::
-
-2. If you're connecting one or more LUIS applications or conversational language understanding projects, you have to specify the deployment name.
+3. If you're connecting one or more LUIS applications or conversational language understanding projects, you have to specify the deployment name.
 
 No configurations are required for custom question answering or unlinked intents.
 
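Not part of the commit: the Language Studio steps edited above can also be done programmatically. The sketch below only assembles a request and sends nothing; it assumes an authoring REST shape of `PUT .../deployments/{name}` with a `trainedModelLabel` body, and the endpoint, project name, deployment name, and API version are all placeholders to verify against the service documentation:

```python
import json

def build_deployment_request(endpoint, project, deployment, model_label,
                             api_version="2022-05-01"):
    """Assemble (method, url, body) for a hypothetical deployment call.

    Nothing is sent over the network; this only illustrates the assumed
    URL layout and request body.
    """
    url = (f"{endpoint}/language/authoring/analyze-conversations/"
           f"projects/{project}/deployments/{deployment}"
           f"?api-version={api_version}")
    body = json.dumps({"trainedModelLabel": model_label})
    return "PUT", url, body

method, url, body = build_deployment_request(
    "https://example.cognitiveservices.azure.com",  # placeholder resource endpoint
    "MyOrchestrationProject",                       # placeholder project name
    "production",                                   # placeholder deployment name
    "model-v1")                                     # placeholder trained model label
```

Overriding an existing deployment name, as the article describes, corresponds to issuing the same `PUT` with a different trained model label.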
articles/cognitive-services/language-service/orchestration-workflow/how-to/train-model.md

Lines changed: 4 additions & 4 deletions

@@ -1,7 +1,7 @@
 ---
 title: How to train and evaluate models in orchestration workflow projects
 titleSuffix: Azure Cognitive Services
-description: Use this article to train a model and view its evaluation details to make improvements.
+description: Use this article to train an orchestration model and view its evaluation details to make improvements.
 services: cognitive-services
 author: aahill
 manager: nitinme
@@ -13,13 +13,13 @@ ms.author: aahi
 ms.custom: language-service-orchestration
 ---
 
-# Train and evaluate models
+# Train and evaluate orchestration workflow models
 
 After you have completed [tagging your utterances](./tag-utterances.md), you can train your model. Training is the act of converting the current state of your project's training data to build a model that can be used for predictions. Every time you train, you have to name your training instance.
 
 You can create and train multiple models within the same project. However, if you re-train a specific model it overwrites the last state.
 
-The training times can be anywhere from a few seconds, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
+The training times can be anywhere from a few seconds, up to a couple of hours when you reach high numbers of utterances.
 
 ## Train model
 
@@ -46,7 +46,7 @@ In the **view model details** page, you'll be able to see all your models, with
 > [!NOTE]
 > If you don't see any of the intents you have in your model displayed here, it is because they weren't in any of the utterances that were used for the test set.
 
-You can view the [confusion matrix](../concepts/evaluation-metrics.md#confusion-matrix) for intents by clicking on the **Test set confusion matrix** tab at the top of the screen.
+You can view the [confusion matrix](../concepts/evaluation-metrics.md) for intents by clicking on the **Test set confusion matrix** tab at the top of the screen.
 
 ## Next steps
 * [Model evaluation metrics](../concepts/evaluation-metrics.md)
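As a side note (not part of the commit), the training semantics stated in the hunk above, that every training run must be named and that re-training a specific model overwrites its last state, can be mimicked with a toy registry; the class, labels, and fields below are illustrative only:

```python
from datetime import datetime, timezone

class ModelRegistry:
    """Toy stand-in for a project's trained models: one entry per model label,
    where re-training a label overwrites its previous state (as the article notes)."""

    def __init__(self):
        self._models = {}

    def train(self, label, training_data_snapshot):
        # Training converts the *current* state of the training data into a model;
        # reusing a label replaces whatever that label held before.
        self._models[label] = {
            "data": list(training_data_snapshot),
            "trained_at": datetime.now(timezone.utc),
        }

    def get(self, label):
        return self._models[label]

registry = ModelRegistry()
registry.train("v1", ["utterance A"])
registry.train("v1", ["utterance A", "utterance B"])  # overwrites the first "v1"
registry.train("v2", ["utterance A", "utterance B"])  # a separate model in the project
```

Using a fresh label per run (as with `"v2"` here) is how you keep an earlier model's state instead of overwriting it.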

0 commit comments
