
Commit b88246a (parent 9f8808b)

2nd article draft

18 files changed: +102 −131 lines

articles/machine-learning/prompt-flow/how-to-bulk-test-evaluate-flow.md

99 additions & 128 deletions (large diff not rendered by default)

articles/machine-learning/prompt-flow/how-to-develop-an-evaluation-flow.md

3 additions & 3 deletions
@@ -11,7 +11,7 @@ ms.topic: how-to
 author: lgayhardt
 ms.author: lagayhar
 ms.reviewer: ziqiwang
-ms.date: 10/24/2024
+ms.date: 10/25/2024
 ---

 # Evaluation flows and metrics
@@ -153,13 +153,13 @@ After you create your own evaluation flow and metrics, you can use the flow to a
 :::image type="content" source="./media/how-to-develop-an-evaluation-flow/evaluate-button.png" alt-text="Screenshot of evaluation button.":::


-1. In the **Batch run and evaluate** wizard, complete the **Basic settings** and **Batch run settings** to load the dataset for testing and configure the input mapping. For more information, see [Submit batch run and evaluate a flow in prompt flow](how-to-bulk-test-evaluate-flow.md#submit-batch-run-and-evaluate-a-flow).
+1. In the **Batch run & Evaluate** wizard, complete the **Basic settings** and **Batch run settings** to load the dataset for testing and configure the input mapping. For more information, see [Submit batch run and evaluate a flow in prompt flow](how-to-bulk-test-evaluate-flow.md#submit-batch-run-and-evaluate-a-flow).

 1. In the **Select evaluation** step, you can select one or more of your customized evaluations or built-in evaluations to run. **Customized evaluation** lists all the evaluation flows that you created, cloned, or customized. Evaluation flows created by others in the same project don't appear in this section.

    :::image type="content" source="./media/how-to-develop-an-evaluation-flow/select-customized-evaluation.png" alt-text="Screenshot of selecting customized evaluation." lightbox = "./media/how-to-develop-an-evaluation-flow/select-customized-evaluation.png":::

-1. On the **Configure evaluation** screen, specify the sources of any input data needed for the evaluation method. For example, the ground truth column might come from a dataset. If your evaluation method doesn't require data from a dataset, you don't need to select a dataset or reference any dataset columns in the input mapping section, and this step is optional.
+1. On the **Configure evaluation** screen, specify the sources of any input data needed for the evaluation method. For example, the ground truth column might come from a dataset. If your evaluation method doesn't require data from a dataset, you don't need to select a dataset or reference any dataset columns in the input mapping section.

    In the **Evaluation input mapping** section, you can indicate the sources of required inputs for the evaluation. If the data source is from your run output, set the source as `${run.outputs.[OutputName]}`. If the data is from your test dataset, set the source as `${data.[ColumnName]}`. Any descriptions you set for the data inputs also appear here. For more information, see [Submit batch run and evaluate a flow in prompt flow](how-to-bulk-test-evaluate-flow.md#submit-batch-run-and-evaluate-a-flow).
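The `${run.outputs.[OutputName]}` and `${data.[ColumnName]}` mapping syntax in the hunk above can be sketched with a toy resolver. This is a minimal illustration of where each evaluation input would be pulled from, not the prompt flow implementation; the input names `answer` and `groundtruth` are hypothetical examples, not taken from the commit:

```python
# Hypothetical evaluation input mapping: keys are the evaluation flow's
# inputs; values use the prompt flow templating syntax described above.
column_mapping = {
    "answer": "${run.outputs.answer}",        # from the batch run's output
    "groundtruth": "${data.groundtruth}",     # from a test dataset column
}

def resolve(template: str, run_outputs: dict, data_row: dict):
    """Toy resolver: show which source one evaluation input comes from."""
    if template.startswith("${run.outputs.") and template.endswith("}"):
        return run_outputs[template[len("${run.outputs."):-1]]
    if template.startswith("${data.") and template.endswith("}"):
        return data_row[template[len("${data."):-1]]
    return template  # a literal value is passed through unchanged

# One row of a hypothetical test dataset and one batch-run output line:
row = {"question": "Capital of France?", "groundtruth": "Paris"}
outputs = {"answer": "Paris"}

resolved = {name: resolve(tpl, outputs, row)
            for name, tpl in column_mapping.items()}
print(resolved)  # {'answer': 'Paris', 'groundtruth': 'Paris'}
```

In the real wizard this mapping is configured in the **Evaluation input mapping** section rather than in code; the sketch only makes the two template forms concrete.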

Binary media files changed (image sizes): 137 KB, −11.9 KB, −51.6 KB, −37.1 KB, 49.6 KB, 43.7 KB, −328 Bytes, −456 KB

0 commit comments