
Commit d721fc8

authored
Apply suggestions from code review
Fixed typos flagged by Acrolinx and added period at the end of alt text.
1 parent b8b4c8f commit d721fc8

3 files changed (+3, -3 lines)


learn-pr/wwl-data-ai/evaluate-models-azure-ai-studio/3b-automated-evaluations.yml

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ uid: learn.wwl.evaluate-models-azure-ai-studio.automated-evaluations
 title: Automated evaluations
 metadata:
   title: Automated evaluations
-  description: "Learn how use autmated evaluations in the Azure AI Foundry portal."
+  description: "Learn how use automated evaluations in the Azure AI Foundry portal."
   ms.date: 04/16/2025
   author: madiepev
   ms.author: madiepev

learn-pr/wwl-data-ai/evaluate-models-azure-ai-studio/6-knowledge-check.yml

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ quiz:
     - content: "Risk and safety"
      isCorrect: false
      explanation: "Incorrect. "
-  - content: "YYou want to evaluate the grammatical and linguistic quality of responses. What kind of metrics should you specify for automated evaluations?"
+  - content: "You want to evaluate the grammatical and linguistic quality of responses. What kind of metrics should you specify for automated evaluations?"
     choices:
     - content: "AI quality (AI-assisted)"
      isCorrect: true

learn-pr/wwl-data-ai/evaluate-models-azure-ai-studio/includes/3b-automated-evaluations.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ Automated evaluations in Azure AI Foundry portal enable you to assess the qualit
 
 To evaluate a model, you need a dataset of prompts and responses (and optionally, expected responses as "ground truth"). You can compile this dataset manually or use the output from an existing application; but a useful way to get started is to use an AI model to generate a set of prompts and responses related to a specific subject. You can then edit the generated prompts and responses to reflect your desired output, and use them as ground truth to evaluate the responses from another model.
 
-![Screenshot of AI-generated evaluation data](../media/ai-generated-test-data.png)
+![Screenshot of AI-generated evaluation data.](../media/ai-generated-test-data.png)
 
 ## Evaluation metrics
 