
Commit fa8e0ad

Merge pull request #5414 from aahill/lang-freshness
acrolinx pass
2 parents: e0839b9 + 3390c91

File tree: 3 files changed, +11 -11 lines changed


articles/ai-services/language-service/conversational-language-understanding/concepts/data-formats.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: conceptual
-ms.date: 11/21/2024
+ms.date: 06/05/2025
 ms.author: lajanuar
 ms.custom: language-service-custom-clu
 ---

articles/ai-services/language-service/custom-text-classification/concepts/evaluation-metrics.md

Lines changed: 4 additions & 4 deletions
@@ -7,7 +7,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: conceptual
-ms.date: 11/21/2024
+ms.date: 6/6/2025
 ms.author: lajanuar
 ms.custom: language-service-custom-classification
 ---
@@ -26,12 +26,12 @@ Model evaluation is triggered automatically after training is completed successf
 
 `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
 
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+* **F1 score**: The F1 score is a function of precision and recall. It's needed when you seek a balance between precision and recall.
 
 `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
 
 >[!NOTE]
-> Precision, recall and F1 score are calculated for each class separately (*class-level* evaluation) and for the model collectively (*model-level* evaluation).
+> Precision, recall, and F1 score are calculated for each class separately (*class-level* evaluation) and for the model collectively (*model-level* evaluation).
 ## Model-level and Class-level evaluation metrics
 
 The definitions of precision, recall, and evaluation are the same for both class-level and model-level evaluations. However, the count of *True Positive*, *False Positive*, and *False Negative* differ as shown in the following example.
@@ -89,7 +89,7 @@ The below sections use the following example dataset:
 **F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.8 * 0.67) / (0.8 + 0.67) = 0.73`
 
 > [!NOTE]
-> For single-label classification models, the count of false negatives and false positives are always equal. Custom single-label classification models always predict one class for each document. If the prediction is not correct, FP count of the predicted class increases by one and FN of the actual class increases by one, overall count of FP and FN for the model will always be equal. This is not the case for multi-label classification, because failing to predict one of the classes of a document is counted as a false negative.
+> For single-label classification models, the number of false negatives and false positives are always equal. Custom single-label classification models always predict one class for each document. If the prediction is not correct, FP count of the predicted class increases by one and FN of the actual class increases by one, overall count of FP and FN for the model will always be equal. This is not the case for multi-label classification, because failing to predict one of the classes of a document is counted as a false negative.
 ## Interpreting class-level evaluation metrics
 
 So what does it actually mean to have a high precision or a high recall for a certain class?
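
The formulas touched by this diff are easy to check numerically. A minimal sketch (not part of the docs being changed; the sample labels are hypothetical) that reproduces the article's worked F1 example and illustrates the single-label FP = FN note:

```python
from collections import Counter

# Precision, recall, and F1 exactly as written in evaluation-metrics.md.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

# The article's worked example: precision 0.8, recall 0.67 -> F1 0.73.
print(round(f1(0.8, 0.67), 2))  # → 0.73

# The NOTE on single-label models: every wrong prediction adds one FP to
# the predicted class and one FN to the actual class, so the model-level
# FP and FN totals are always equal. Hypothetical labels for illustration:
actual    = ["sports", "news", "news", "weather", "sports"]
predicted = ["sports", "weather", "news", "weather", "news"]

fp_counts, fn_counts = Counter(), Counter()
for a, p in zip(actual, predicted):
    if a != p:
        fp_counts[p] += 1  # predicted class gains a false positive
        fn_counts[a] += 1  # actual class gains a false negative

assert sum(fp_counts.values()) == sum(fn_counts.values())  # holds for any single-label run
```

For multi-label classification the assertion would not hold, because a missed class adds an FN without a matching FP, which is exactly the distinction the edited note draws.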

articles/ai-services/language-service/question-answering/how-to/network-isolation.md

Lines changed: 6 additions & 6 deletions
@@ -5,13 +5,13 @@ ms.service: azure-ai-language
 ms.topic: how-to
 author: laujan
 ms.author: lajanuar
-ms.date: 11/21/2024
+ms.date: 06/06/2025
 ms.custom: language-service-question-answering
 ---
 
 # Network isolation and private endpoints
 
-The steps below describe how to restrict public access to custom question answering resources as well as how to enable Azure Private Link. Protect an AI Foundry resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
+The following steps describe how to restrict public access to custom question answering resources as well as how to enable Azure Private Link. Protect an AI Foundry resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
 
 ## Private Endpoints
 
@@ -21,12 +21,12 @@ Private endpoints are provided by [Azure Private Link](/azure/private-link/priva
 
 ## Steps to enable private endpoint
 
-1. Assign *Contributor* role to language resource (Depending on the context this may appear as a Text Analytics resource) in the Azure Search Service instance. This operation requires *Owner* access to the subscription. Go to Identity tab in the service resource to get the identity.
+1. Assign the *contributor* role to your resource in the Azure Search Service instance. This operation requires *Owner* access to the subscription. Go to Identity tab in the service resource to get the identity.
 
 > [!div class="mx-imgBorder"]
 > ![Text Analytics Identity](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoints-identity.png)
 
-2. Add the above identity as *Contributor* by going to Azure Search Service IAM tab.
+2. Add the above identity as *Contributor* by going to the Azure Search Service access control tab.
 
 ![Managed service IAM](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-access-control.png)
 
@@ -54,9 +54,9 @@ This will establish a private endpoint connection between language resource and
 
 ## Restrict access to Azure AI Search resource
 
-Follow the steps below to restrict public access to custom question answering language resources. Protect an AI Foundry resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
+Follow these steps to restrict public access to custom question answering language resources. Protect an AI Foundry resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
 
-After restricting access to an AI Foundry resource based on VNet, To browse projects on Language Studio from your on-premises network or your local browser.
+After you restrict access to an AI Foundry resource based on virtual network, to browse projects on Language Studio from your on-premises network or your local browser:
 - Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks).
 - Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
 - Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
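
The firewall step in the last bullet can also be scripted instead of done in the portal. A hedged Azure CLI sketch (the resource name, resource group, and IP address are placeholders, and the exact command surface may vary by CLI version):

```shell
# Allow a single public IP through the language resource's network firewall.
# "my-language-resource", "my-rg", and the IP are hypothetical placeholders.
az cognitiveservices account network-rule add \
  --name my-language-resource \
  --resource-group my-rg \
  --ip-address 203.0.113.42

# List the configured rules to confirm the entry was saved.
az cognitiveservices account network-rule list \
  --name my-language-resource \
  --resource-group my-rg
```

This is a non-interactive alternative under the stated assumptions; the portal flow described in the bullet remains the documented path.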
