Commit 6e66585

Merge pull request #1490 from msakande/ignite-add-links-to-model-evaluation-article
add link to model and prompt evaluation article
2 parents cc499d5 + 845c34b commit 6e66585

File tree: 1 file changed (+1, −1 lines)

articles/ai-studio/how-to/benchmark-model-in-catalog.md

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ To access benchmark results for a specific metric and dataset:
 The previous sections showed the benchmark results calculated by Microsoft, using public datasets. However, you can try to regenerate the same set of metrics with your data.
 
 1. Return to the **Benchmarks** tab in the model card.
-1. Select **Try with your own data** to evaluate the model with your data. Evaluation on your data helps you see how the model performs in your particular scenarios.
+1. Select **Try with your own data** to [evaluate the model with your data](evaluate-generative-ai-app.md#model-and-prompt-evaluation). Evaluation on your data helps you see how the model performs in your particular scenarios.
 
 :::image type="content" source="../media/how-to/model-benchmarks/try-with-your-own-data.png" alt-text="Screenshot showing the button to select for evaluating with your own data." lightbox="../media/how-to/model-benchmarks/try-with-your-own-data.png":::
