Commit 3c09ac8

Post-review fixes

1 parent aa93d84 commit 3c09ac8

File tree

4 files changed: +4 −4 lines changed

learn-pr/paths/create-custom-copilots-ai-studio/index.yml

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ metadata:
   ms.custom: [copilot-learning-hub]
   title: Develop generative AI apps in Azure
   prerequisites: |
-    Before starting this module, you should be familiar with fundamental AI concepts and services in Azure. You should also be proficient in programming with Python or Microoft C#.
+    Before starting this module, you should be familiar with fundamental AI concepts and services in Azure. You should also be proficient in programming with Python or Microsoft C#.
   summary: |
     Generative Artificial Intelligence (AI) is becoming more accessible through comprehensive development platforms like Azure AI Foundry. Learn how to build generative AI applications that use language models to chat with your users.
   iconUrl: /training/achievements/generic-badge.svg

learn-pr/wwl-data-ai/evaluate-models-azure-ai-studio/6-knowledge-check.yml

Lines changed: 1 addition & 1 deletion

@@ -36,7 +36,7 @@ quiz:
       explanation: "Incorrect. "
   - content: "Which evaluator metric uses an AI model to judge the structure and logical flow of ideas in a response?"
     choices:
-    - content: "ACoherence"
+    - content: "Coherence"
       isCorrect: true
       explanation: "Correct. "
     - content: "F1 Score"

learn-pr/wwl-data-ai/evaluate-models-azure-ai-studio/includes/3b-automated-evaluations.md

Lines changed: 1 addition & 1 deletion

@@ -10,5 +10,5 @@ To evaluate a model, you need a dataset of prompts and responses (and optionally
 
 Automated evaluation enables you to choose which *evaluators* you want to assess your model's responses, and which metrics those evaluators should calculate. There are evaluators that help you measure:
 
-- **AI Quality**: The quality of your model's responses are measured by using AI models to evaluate them for metrics like *coherence* and *relevance* and by using standard NLP metrics like F1 score, BLEU, METEOR, and ROUGE based on ground truth (in the form of expected response text)
+- **AI Quality**: The quality of your model's responses is measured by using AI models to evaluate them for metrics like *coherence* and *relevance* and by using standard NLP metrics like F1 score, BLEU, METEOR, and ROUGE based on ground truth (in the form of expected response text)
 - **Risk and safety**: evaluators that assess the responses for content safety issues, including violence, hate, sexual content, and content related to self-harm.
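The hunk above lists F1 score among the ground-truth NLP metrics. As an illustration only (this is a generic token-overlap F1 sketch in the style of common QA evaluation scripts, not Azure AI Foundry's implementation), the metric compares a model response against an expected response text:

```python
from collections import Counter

def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1 between a model response and a ground-truth response.

    Precision = shared tokens / prediction tokens;
    recall = shared tokens / ground-truth tokens;
    F1 is their harmonic mean.
    """
    pred_tokens = prediction.lower().split()
    truth_tokens = ground_truth.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(truth_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

# An exact match scores 1.0; a partial overlap scores between 0 and 1.
print(f1_score("the cat sat", "the cat sat"))  # → 1.0
print(f1_score("the cat", "the cat sat"))      # → 0.8
```

Production evaluators typically add normalization steps (punctuation and article stripping) before tokenizing, but the precision/recall structure is the same.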

learn-pr/wwl-data-ai/evaluate-models-azure-ai-studio/includes/7-summary.md

Lines changed: 1 addition & 1 deletion

@@ -5,4 +5,4 @@ In this module, you learned to:
 - Perform automated evaluations.
 
 > [!TIP]
-> Fr more information about evaluating models in Azure AI Foundry, see [Observability in generative AI](/azure/ai-foundry/concepts/observability).
+> For more information about evaluating models in Azure AI Foundry, see [Observability in generative AI](/azure/ai-foundry/concepts/observability).
