
Commit e78f3db

AI Studio Eval SDK and updates
1 parent c3b3a2c commit e78f3db

10 files changed (+112 −138 lines)


articles/ai-studio/.openpublishing.redirection.ai-studio.json

Lines changed: 5 additions & 0 deletions

@@ -119,6 +119,11 @@
       "source_path": "articles/ai-studio/how-to/commitment-tier.md",
       "redirect_url": "/azure/ai-services/commitment-tier.md",
       "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/ai-studio/how-to/develop/flow-evaluate-sdk.md",
+      "redirect_url": "/azure/ai-studio/how-to/develop/evaluate-sdk.md",
+      "redirect_document_id": true
     }
   ]
 }

articles/ai-studio/how-to/develop/flow-evaluate-sdk.md renamed to articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 82 additions & 122 deletions
Large diffs are not rendered by default.
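
The renamed article covers running evaluations in code with the Azure AI Evaluation SDK. As a rough, illustrative sketch only (the package, class, and parameter names below are assumptions and may not match the article's exact API; see evaluate-sdk.md for the supported surface), an AI-assisted quality evaluation over a JSONL test dataset might look like this:

```python
# Rough sketch only: class and parameter names are assumptions and may not
# match the renamed article's exact API -- see evaluate-sdk.md for details.
from azure.ai.evaluation import evaluate, RelevanceEvaluator

# Azure OpenAI deployment used by the AI-assisted quality evaluator
# (placeholder values; supply your own endpoint, key, and deployment).
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "gpt-4",
}

# Run one AI-assisted metric over a JSONL test dataset; each line of the file
# is one evaluation instance (column names/mapping are an assumption here).
result = evaluate(
    data="test-dataset.jsonl",
    evaluators={"relevance": RelevanceEvaluator(model_config)},
)
print(result["metrics"])  # aggregate scores per metric
```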

articles/ai-studio/how-to/evaluate-flow-results.md

Lines changed: 6 additions & 4 deletions

@@ -8,7 +8,7 @@ ms.custom:
   - ignite-2023
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 9/24/2024
 ms.reviewer: wenxwei
 ms.author: lagayhar
 author: lgayhardt
@@ -20,7 +20,7 @@ author: lgayhardt
 
 The Azure AI Studio evaluation page is a versatile hub that not only allows you to visualize and assess your results but also serves as a control center for optimizing, troubleshooting, and selecting the ideal AI model for your deployment needs. It's a one-stop solution for data-driven decision-making and performance enhancement in your AI Studio projects. You can seamlessly access and interpret the results from various sources, including your flow, the playground quick test session, evaluation submission UI, and SDK. This flexibility ensures that you can interact with your results in a way that best suits your workflow and preferences.
 
-Once you've visualized your evaluation results, you can dive into a thorough examination. This includes the ability to not only view individual results but also to compare these results across multiple evaluation runs. By doing so, you can identify trends, patterns, and discrepancies, gaining invaluable insights into the performance of your AI system under various conditions.
+Once you've visualized your evaluation results, you can dive into a thorough examination. This includes the ability to not only view individual results but also to compare these results across multiple evaluation runs. By doing so, you can identify trends, patterns, and discrepancies, gaining invaluable insights into the performance of your AI system under various conditions.
 
 In this article you learn to:
 
@@ -55,11 +55,13 @@ Some potential action items based on the evaluation metrics could include:
 
 The metrics detail table offers a wealth of data that can guide your model improvement efforts, from recognizing patterns to customizing your view for efficient analysis and refining your model based on identified issues.
 
-We break down the aggregate views or your metrics by**Performance and quality** and **Risk and safety metrics**. You can view the distribution of scores across the evaluated dataset and see aggregate scores for each metric.
+We break down the aggregate views or your metrics by **Performance and quality** and **Risk and safety metrics**. You can view the distribution of scores across the evaluated dataset and see aggregate scores for each metric.
 
 - For performance and quality metrics, we aggregate by calculating an average across all the scores for each metric.
   :::image type="content" source="../media/evaluations/view-results/evaluation-details-page.png" alt-text="Screenshot of performance and quality metrics dashboard tab." lightbox="../media/evaluations/view-results/evaluation-details-page.png":::
-- For risk and safety metrics, we aggregate based on a threshold to calculate a defect rate across all scores for each metric. Defect rate is defined as the percentage of instances in your test dataset that surpass a threshold on the severity scale over the whole dataset size.
+- For risk and safety metrics, we aggregate by calculating a defect rate for each metric.
+  - For content harm metrics, the defect rate is defined as the percentage of instances in your test dataset that surpass a threshold on the severity scale over the whole dataset size. By default, the threshold is “Medium”.
+  - For protected material and indirect attack, the defect rate is calculated as the percentage of instances where the output is 'true' (Defect Rate = (#trues / #instances) × 100).
   :::image type="content" source="../media/evaluations/view-results/evaluation-details-safety-metrics.png" alt-text="Screenshot of risk and safety metrics dashboard tab." lightbox="../media/evaluations/view-results/evaluation-details-safety-metrics.png":::
 
 Here are some examples of the metrics results for the question answering scenario:
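
The two defect-rate aggregations described in the diff above can be illustrated with a small, self-contained sketch (not part of this commit; treating "surpass a threshold" as "at or above the threshold" is an assumption):

```python
# Illustrative sketch of the two defect-rate aggregations described above.
SEVERITY_ORDER = ["Very low", "Low", "Medium", "High"]

def content_harm_defect_rate(severities, threshold="Medium"):
    """Percentage of instances at or above the severity threshold
    (one reading of 'surpass a threshold'; default threshold is Medium)."""
    cutoff = SEVERITY_ORDER.index(threshold)
    defects = sum(1 for s in severities if SEVERITY_ORDER.index(s) >= cutoff)
    return 100 * defects / len(severities)

def boolean_defect_rate(labels):
    """Defect rate for protected material / indirect attack:
    (#trues / #instances) x 100."""
    return 100 * sum(1 for label in labels if label) / len(labels)

# 2 of 5 instances are Medium or above -> 40.0; 1 of 4 outputs is true -> 25.0
print(content_harm_defect_rate(["Low", "Medium", "Very low", "High", "Low"]))
print(boolean_defect_rate([False, True, False, False]))
```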

articles/ai-studio/how-to/evaluate-generative-ai-app.md

Lines changed: 17 additions & 10 deletions

@@ -6,7 +6,7 @@ manager: scottpolly
 ms.service: azure-ai-studio
 ms.custom: ignite-2023, references_regions, build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 9/24/2024
 ms.reviewer: mithigpe
 ms.author: lagayhar
 author: lgayhardt
@@ -48,6 +48,13 @@ From the collapsible left menu, select **Prompt flow** > **Evaluate** > **Built-in evaluation**
 
 #### Basic information
 
+When you start an evaluation from the evaluate page, you need to decide what is the evaluation target first. By specifying the appropriate evaluation target, we can tailor the evaluation to the specific nature of your application, ensuring accurate and relevant metrics. Currently we support two types of evaluation target:
+
+**Dataset**: You already have your model generated outputs in a test dataset.
+**Prompt flow**: You have created a flow, and you want to evaluate the output from the flow.
+
+:::image type="content" source="../media/evaluations/evaluate/select-dataset-or-prompt-flow.png" alt-text="Screenshot of what do you want to evaluate showing dataset or prompt flow selection. " lightbox="../media/evaluations/evaluate/select-dataset-or-prompt-flow.png":::
+
 When you enter the evaluation creation wizard, you can provide an optional name for your evaluation run and select the scenario that best aligns with your application's objectives. We currently offer support for the following scenarios:
 
 - **Question and answer with context**: This scenario is designed for applications that involve answering user queries and providing responses with context information.
@@ -57,10 +64,7 @@ You can use the help panel to check the FAQs and guide yourself through the wizard
 
 :::image type="content" source="../media/evaluations/evaluate/basic-information.png" alt-text="Screenshot of the basic information page when creating a new evaluation." lightbox="../media/evaluations/evaluate/basic-information.png":::
 
-By specifying the appropriate scenario, we can tailor the evaluation to the specific nature of your application, ensuring accurate and relevant metrics.
-
-- **Evaluate from data**: If you already have your model generated outputs in a test dataset, skip **Select a flow to evaluate** and directly go to the next step to configure test data.
-- **Evaluate from flow**: If you initiate the evaluation from the Flow page, we'll automatically select your flow to evaluate. If you intend to evaluate another flow, you can select a different one. It's important to note that within a flow, you might have multiple nodes, each of which could have its own set of variants. In such cases, you must specify the node and the variants you wish to assess during the evaluation process.
+If you are evaluating a prompt flow, you can select the flow to evaluate. If you initiate the evaluation from the Flow page, we'll automatically select your flow to evaluate. If you intend to evaluate another flow, you can select a different one. It's important to note that within a flow, you might have multiple nodes, each of which could have its own set of variants. In such cases, you must specify the node and the variants you wish to assess during the evaluation process.
 
 :::image type="content" source="../media/evaluations/evaluate/select-flow.png" alt-text="Screenshot of the select a flow to evaluate page when creating a new evaluation." lightbox="../media/evaluations/evaluate/select-flow.png":::
 
@@ -91,19 +95,20 @@ You can refer to the table for the complete list of metrics we offer support for
 
 | Scenario | Performance and quality metrics | Risk and safety metrics |
 |--|--|--|
-| Question and answer with context | Groundedness, Relevance, Coherence, Fluency, GPT similarity, F1 score | Self-harm-related content, Hateful and unfair content, Violent content, Sexual content |
-| Question and answer without context | Coherence, Fluency, GPT similarity, F1 score | Self-harm-related content, Hateful and unfair content, Violent content, Sexual content |
-
+| Question and answer with context | Groundedness, Relevance, Coherence, Fluency, GPT similarity, F1 score | Self-harm-related content, Hateful and unfair content, Violent content, Sexual content, Protected material, Indirect attack |
+| Question and answer without context | Coherence, Fluency, GPT similarity, F1 score | Self-harm-related content, Hateful and unfair content, Violent content, Sexual content, Protected material, Indirect attack |
 
 When using AI-assisted metrics for performance and quality evaluation, you must specify a GPT model for the calculation process. Choose an Azure OpenAI connection and a deployment with either GPT-3.5, GPT-4, or the Davinci model for our calculations.
 
 :::image type="content" source="../media/evaluations/evaluate/quality-metrics.png" alt-text="Screenshot of the select metrics page with quality metrics selected when creating a new evaluation." lightbox="../media/evaluations/evaluate/quality-metrics.png":::
 
 For risk and safety metrics, you don't need to provide a connection and deployment. The Azure AI Studio safety evaluations back-end service provisions a GPT-4 model that can generate content risk severity scores and reasoning to enable you to evaluate your application for content harms.
 
-You can set the threshold to calculate the defect rate for the risk and safety metrics. The defect rate is calculated by taking a percentage of instances with severity levels (Very low, Low, Medium, High) above a threshold. By default, we set the threshold as "Medium".
+You can set the threshold to calculate the defect rate for the content harm metrics (self-harm-related content, hateful and unfair content, violent content, sexual content). The defect rate is calculated by taking a percentage of instances with severity levels (Very low, Low, Medium, High) above a threshold. By default, we set the threshold as "Medium".
+
+For protected material and indirect attack, the defect rate is calculated by taking a percentage of instances where the output is 'true' (Defect Rate = (#trues / #instances) × 100).
 
-:::image type="content" source="../media/evaluations/evaluate/safety-metrics.png" alt-text="Screenshot of the select metrics page with safety metrics selected when creating a new evaluation." lightbox="../media/evaluations/evaluate/safety-metrics.png":::
+:::image type="content" source="../media/evaluations/evaluate/ip-xpia-in-wizard.png" alt-text="Screenshot of risk and safety metrics curated by Microsoft showing self-harm, protected material, and indirect attack selected." lightbox="../media/evaluations/evaluate/ip-xpia-in-wizard.png":::
 
 > [!NOTE]
 > AI-assisted risk and safety metrics are hosted by Azure AI Studio safety evaluations back-end service and is only available in the following regions: East US 2, France Central, UK South, Sweden Central
@@ -131,6 +136,8 @@ For guidance on the specific data mapping requirements for each metric, refer to
 | Hateful and unfair content | Required: Str | Required: Str | N/A | N/A |
 | Violent content | Required: Str | Required: Str | N/A | N/A |
 | Sexual content | Required: Str | Required: Str | N/A | N/A |
+| Protected material | Required: Str | Required: Str | N/A | N/A |
+| Indirect attack | Required: Str | Required: Str | N/A | N/A |
 
 - Question: the question asked by the user in Question Answer pair
 - Answer: the response to question generated by the model as answer
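
For illustration only (not part of this commit), a test dataset that satisfies the Question/Answer mapping above can be a JSONL file with one record per line; the column names here are examples, and you map your own columns to Question and Answer in the wizard:

```python
# Hypothetical example: building a JSONL test dataset whose columns can be
# mapped to the Question and Answer fields required by the metrics above.
import json

rows = [
    {
        "question": "What is the capital of France?",
        "answer": "The capital of France is Paris.",
    },
    {
        "question": "How do I reset my password?",
        "answer": "Open Settings > Security and select Reset password.",
    },
]

with open("test-dataset.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")  # one JSON object per line
```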
Binary image files changed (previews not rendered): −12.7 KB, 42.8 KB, binary file not shown, 40.8 KB, −12.5 KB.

articles/ai-studio/toc.yml

Lines changed: 2 additions & 2 deletions

@@ -271,8 +271,8 @@ items:
       href: how-to/evaluate-prompts-playground.md
     - name: Generate adversarial simulations for safety evaluation
       href: how-to/develop/simulator-interaction-data.md
-    - name: Evaluate with the prompt flow SDK
-      href: how-to/develop/flow-evaluate-sdk.md
+    - name: Evaluate with the Azure AI Evaluation SDK
+      href: how-to/develop/evaluate-sdk.md
       displayName: code,accuracy,metrics
     - name: Evaluate with Azure AI Studio
       href: how-to/evaluate-generative-ai-app.md
