
Commit 2dfbe97

fix links
1 parent ddb153d commit 2dfbe97


4 files changed: 6 additions (+), 7 deletions (−)


articles/ai-foundry/how-to/benchmark-model-in-catalog.md

Lines changed: 2 additions & 3 deletions

@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
 - ai-learning-hub
 ms.topic: how-to
-ms.date: 04/07/2025
+ms.date: 05/19/2025
 ms.reviewer: changliu2
 reviewer: changliu2
 ms.author: lagayhar
@@ -24,7 +24,6 @@ In this article, you learn to streamline your model selection process in the Azu
 - [Trade-off charts](#compare-models-in-the-trade-off-charts) to see how models perform on one metric versus another, such as quality versus cost;
 - [Leaderboards by scenario](#view-leaderboards-by-scenario) to find the best leaderboards that suite your scenario.

-
 ## Prerequisites

 - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
@@ -109,7 +108,7 @@ To access benchmark results for a specific metric and dataset:
 The previous sections showed the benchmark results calculated by Microsoft, using public datasets. However, you can try to regenerate the same set of metrics with your data.

 1. Return to the **Benchmarks** tab in the model card.
-1. Select **Try with your own data** to [evaluate the model with your data](evaluate-generative-ai-app.md#model-and-prompt-evaluation). Evaluation on your data helps you see how the model performs in your particular scenarios.
+1. Select **Try with your own data** to [evaluate the model with your data](evaluate-generative-ai-app.md#fine-tuned-model-evaluation). Evaluation on your data helps you see how the model performs in your particular scenarios.

 :::image type="content" source="../media/how-to/model-benchmarks/try-with-your-own-data.png" alt-text="Screenshot showing the button to select for evaluating with your own data." lightbox="../media/how-to/model-benchmarks/try-with-your-own-data.png":::
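The second fix in this file repoints a link fragment at a heading slug that actually exists in the target page. As an illustration only (not part of the commit), a renderer's heading-to-anchor slug rule can be approximated and used to verify fragments; the function name and the exact slug rule are assumptions:

```python
import re

def heading_slug(heading: str) -> str:
    """Approximate the anchor slug a Markdown renderer derives from a heading
    (assumed rule: drop '#' markers, lowercase, strip punctuation,
    collapse whitespace to hyphens)."""
    text = heading.lstrip("#").strip().lower()
    text = re.sub(r"[^\w\s-]", "", text)  # keep letters, digits, hyphens
    return re.sub(r"\s+", "-", text)

# The corrected link targets #fine-tuned-model-evaluation:
print(heading_slug("## Fine-tuned model evaluation"))
# → fine-tuned-model-evaluation
```

Comparing every link fragment against the slugs of the target file's headings catches broken anchors like the old `#model-and-prompt-evaluation` before they ship.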

articles/ai-foundry/how-to/evaluate-generative-ai-app.md

Lines changed: 1 addition & 1 deletion

@@ -200,5 +200,5 @@ Learn more about how to evaluate your generative AI applications:

 - [Evaluate your generative AI apps via the playground](./evaluate-prompts-playground.md)
 - [View the evaluation results](./evaluate-results.md)
-- [Creating evaluations specifically with OpenAI evaluation graders in Azure OpenAI Hub](../../ai-services/openai/how-to/evaluations)
+- [Creating evaluations specifically with OpenAI evaluation graders in Azure OpenAI Hub](../../ai-services/openai/how-to/evaluations.md)
 - [Transparency Note for Azure AI Foundry safety evaluations](../concepts/safety-evaluations-transparency-note.md).

articles/ai-foundry/how-to/evaluate-results.md

Lines changed: 2 additions & 2 deletions

@@ -89,7 +89,7 @@ And here are some examples of the metrics results for the conversation scenario.

 When selecting “View evaluation results per turn”, you see the following screen:

-:::image type="content" source="../media/evaluations/view-results/png" alt-text="Screenshot of evaluation results per turn." lightbox="../media/evaluations/view-results/metric-per-turn.png":::
+:::image type="content" source="../media/evaluations/view-results/metric-per-turn.png" alt-text="Screenshot of evaluation results per turn." lightbox="../media/evaluations/view-results/metric-per-turn.png":::

 For a safety evaluation in a multi-modal scenario (text + images), you can review the images from both the input and output in the detailed metrics result table to better understand the evaluation result. Since multi-modal evaluation is currently supported only for conversation scenarios, you can select "View evaluation results per turn" to examine the input and output for each turn.

@@ -165,6 +165,6 @@ Understanding the built-in metrics is vital for assessing the performance and ef
 Learn more about how to evaluate your generative AI applications:
 - [Evaluate your generative AI apps via the playground](../how-to/evaluate-prompts-playground.md)
 - [Evaluate your generative AI apps with the Azure AI Foundry portal or SDK](../how-to/evaluate-generative-ai-app.md)
-- [Creating evaluations specifically with OpenAI evaluation graders in Azure OpenAI Hub](../../ai-services/openai/how-to/evaluations)
+- [Creating evaluations specifically with OpenAI evaluation graders in Azure OpenAI Hub](../../ai-services/openai/how-to/evaluations.md)

 Learn more about [harm mitigation techniques](../concepts/evaluation-approach-gen-ai.md).

articles/ai-foundry/includes/fdp-backward-compatibility-azure-openai.md

Lines changed: 1 addition & 1 deletion

@@ -23,4 +23,4 @@ ms.custom: include file
 > - The storage needs to be added to the account (if it’s added to the project, you'll get service errors).
 > - User needs to add their project to their storage account through access control in the Azure portal.
 >
-> To learn more about creating evaluations specifically with OpenAI evaluation graders in Azure OpenAI Hub, see [How to use Azure OpenAI Service evaluation](../../ai-services/openai/how-to/evaluations)
+> To learn more about creating evaluations specifically with OpenAI evaluation graders in Azure OpenAI Hub, see [How to use Azure OpenAI Service evaluation](../../ai-services/openai/how-to/evaluations.md)
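Three of the four fixes in this commit add a missing `.md` extension to a relative link, which the docs build needs to resolve the target file. As a hedged sketch of how such links could be caught automatically (the function name and regex are illustrative assumptions, not part of this commit or the docs toolchain):

```python
import re

# Matches Markdown inline links: [text](target)
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def relative_links_missing_extension(markdown: str) -> list[str]:
    """Flag relative link targets whose final path segment has no file
    extension, e.g. '../../ai-services/openai/how-to/evaluations'
    (the pattern this commit fixes by appending '.md')."""
    flagged = []
    for target in LINK_RE.findall(markdown):
        # External URLs and in-page anchors are fine without an extension.
        if target.startswith(("http://", "https://", "mailto:", "#")):
            continue
        path = target.split("#", 1)[0]  # drop any anchor fragment
        if path and "." not in path.rsplit("/", 1)[-1]:
            flagged.append(target)
    return flagged

sample = (
    "[ok](./evaluate-results.md) "
    "[bad](../../ai-services/openai/how-to/evaluations) "
    "[ext](https://azure.microsoft.com/pricing)"
)
print(relative_links_missing_extension(sample))
# → ['../../ai-services/openai/how-to/evaluations']
```

Run over a docs tree before commit, a check like this surfaces extensionless relative links in one pass instead of waiting for build-time link validation.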
