
Commit d73c3be

Merge branch 'patch-6' of https://github.com/minthigpen/azure-docs-pr into aistudiofloweval0824
2 parents f699db2 + 03cb367

File tree

1 file changed: +89 −7 lines changed


articles/ai-studio/how-to/develop/flow-evaluate-sdk.md

Lines changed: 89 additions & 7 deletions
@@ -12,7 +12,6 @@ ms.reviewer: dantaylo
 ms.author: eur
 author: eric-urban
 ---
-
 # Evaluate with the prompt flow SDK
 
 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
@@ -51,7 +50,10 @@ Built-in composite evaluators are composed of individual evaluators.
 - `ContentSafetyEvaluator` combines all the safety evaluators into a single output of combined metrics for question and answer pairs
 - `ContentSafetyChatEvaluator` combines all the safety evaluators into a single output of combined metrics for chat messages that follow the [OpenAI message protocol](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).
 
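The composite `ContentSafetyEvaluator` can be spot-checked on a single question and answer pair much like the individual evaluators. The following is a minimal sketch, not a definitive usage: it assumes the composite evaluator is constructed with an Azure AI Studio project scope and an Azure credential, as the individual safety evaluators are, and the placeholder values and parameter names are assumptions that may differ in your SDK version.

```python
# Sketch: run the composite content safety evaluator on one question/answer pair.
from azure.identity import DefaultAzureCredential
from promptflow.evals.evaluators import ContentSafetyEvaluator

# Placeholder Azure AI Studio project scope (assumption: same shape as for the individual safety evaluators)
project_scope = {
    "subscription_id": "<your-subscription-id>",
    "resource_group_name": "<your-resource-group>",
    "project_name": "<your-ai-studio-project-name>",
}

content_safety_eval = ContentSafetyEvaluator(
    project_scope=project_scope, credential=DefaultAzureCredential()
)

safety_scores = content_safety_eval(
    question="What is the capital of France?",
    answer="Paris is the capital of France.",
)
print(safety_scores)  # combined metrics such as violence, sexual, self_harm, hate_unfairness
```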
-### Required data input for built-in evaluators
+> [!TIP]
+> For more information about inputs and outputs, see the [Prompt flow Python reference documentation](https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow-evals/promptflow.evals.evaluators.html).
+
+### Data requirements for built-in evaluators
 We require question and answer pairs in `.jsonl` format with the required inputs, and column mapping for evaluating datasets, as follows:
 
 | Evaluator | `question` | `answer` | `context` | `ground_truth` |
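For orientation, here's a minimal sketch of producing such a `.jsonl` dataset with only the Python standard library. The field values are made up; the table above determines which columns each evaluator actually requires.

```python
# Sketch: build a small question/answer dataset in JSON Lines format.
import json

rows = [
    {
        "question": "What is the capital of France?",
        "answer": "Paris is the capital of France.",
        "context": "France is a country in Western Europe. Its capital is Paris.",
        "ground_truth": "Paris",
    },
]

# Each line of the file is one standalone JSON object.
with open("data.jsonl", "w") as fp:
    for row in rows:
        fp.write(json.dumps(row) + "\n")
```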
@@ -186,6 +188,37 @@ The result:
 ```JSON
 {"answer_length":27}
 ```
+#### Log your custom code-based evaluator to your AI Studio project
+```python
+# First, save the evaluator into a separate file in its own directory:
+def answer_len(answer):
+    return len(answer)
+
+# Note: we create a temporary directory to store our Python file
+target_dir_tmp = "flex_flow_tmp"
+os.makedirs(target_dir_tmp, exist_ok=True)
+lines = inspect.getsource(answer_len)
+with open(os.path.join("flex_flow_tmp", "answer.py"), "w") as fp:
+    fp.write(lines)
+
+from flex_flow_tmp.answer import answer_len as answer_length
+# Then convert it to a flex flow
+pf = PFClient()
+flex_flow_path = "flex_flow"
+pf.flows.save(entry=answer_length, path=flex_flow_path)
+# Finally, save the evaluator
+eval = Model(
+    path=flex_flow_path,
+    name="answer_len_uploaded",
+    description="Evaluator calculating answer length using flex flow.",
+)
+flex_model = ml_client.evaluators.create_or_update(eval)
+# This evaluator can now be downloaded and used
+retrieved_eval = ml_client.evaluators.get("answer_len_uploaded", version=1)
+ml_client.evaluators.download("answer_len_uploaded", version=1, download_path=".")
+evaluator = load_flow(os.path.join("answer_len_uploaded", flex_flow_path))
+```
+After logging your custom evaluator to your AI Studio project, you can view it in your [Evaluator library](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library) under the **Evaluation** tab in AI Studio.
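The snippet above assumes that `os` and `inspect` are already imported and that an authenticated `MLClient` named `ml_client` exists for your AI Studio project. A minimal sketch of that setup might look like the following; the subscription, resource group, and project names are placeholders, and exact import paths can vary by SDK version.

```python
import os
import inspect

from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from promptflow.client import PFClient, load_flow

# Authenticated handle to the Azure AI Studio project (backed by an Azure ML workspace).
# Replace the placeholders with your own values.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<your-subscription-id>",
    resource_group_name="<your-resource-group>",
    workspace_name="<your-ai-studio-project-name>",
)
```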
 ### Prompt-based evaluators
 To build your own prompt-based large language model evaluator, you can create a custom evaluator based on a **Prompty** file. Prompty is a file with the `.prompty` extension for developing prompt templates. The Prompty asset is a markdown file with a modified front matter. The front matter is in YAML format and contains many metadata fields that define the model configuration and expected inputs of the Prompty. Given an example `apology.prompty` file that looks like the following:
 
@@ -252,7 +285,23 @@ Here is the result:
 ```JSON
 {"apology": 0}
 ```
-
+#### Log your custom prompt-based evaluator to your AI Studio project
+```python
+# Define the path to the prompty file.
+prompty_path = os.path.join("apology-prompty", "apology.prompty")
+# Finally, register the evaluator
+eval = Model(
+    path=prompty_path,
+    name="prompty_uploaded",
+    description="Evaluator checking for apologies in the answer, based on a Prompty file.",
+)
+flex_model = ml_client.evaluators.create_or_update(eval)
+# This evaluator can now be downloaded and used
+retrieved_eval = ml_client.evaluators.get("prompty_uploaded", version=1)
+ml_client.evaluators.download("prompty_uploaded", version=1, download_path=".")
+evaluator = load_flow(os.path.join("prompty_uploaded", "apology.prompty"))
+```
+After logging your custom evaluator to your AI Studio project, you can view it in your [Evaluator library](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library) under the **Evaluation** tab in AI Studio.
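As a quick sanity check that the round trip worked, the downloaded Prompty evaluator can be called directly. This is a sketch that assumes `apology.prompty` declares `question` and `answer` as its inputs, as in the earlier single-row example.

```python
# Sketch: spot-check the downloaded Prompty evaluator on a single example.
# Assumes apology.prompty takes `question` and `answer` inputs.
result = evaluator(
    question="What is the capital of France?",
    answer="I'm sorry, I don't know that. Paris, maybe?",
)
print(result)  # expected to report whether the answer contains an apology, for example {"apology": 1}
```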
 ## Evaluate on test dataset using `evaluate()`
 After you spot-check your built-in or custom evaluators on a single row of data, you can combine multiple evaluators with the `evaluate()` API on an entire test dataset. To ensure that `evaluate()` can correctly parse the data, you must specify column mapping to map the columns from the dataset to the keywords that are accepted by the evaluators. In this case, we specify the data mapping for `ground_truth`.
 ```python
@@ -312,7 +361,9 @@ The evaluator outputs results in a dictionary which contains aggregate `metrics`
   'outputs.relevance.gpt_relevance': 5}],
  'traces': {}}
 ```
-### Supported data formats for `evaluate()`
+### Requirements for `evaluate()`
+The `evaluate()` API has a few requirements for the data format that it accepts and for how it handles evaluator parameter key names, so that the charts in your AI Studio evaluation results show up properly.
+#### Data format
 The `evaluate()` API only accepts data in the JSONLines format. For all built-in evaluators, except for `ChatEvaluator` and `ContentSafetyChatEvaluator`, `evaluate()` requires data in the following format with required input fields. See the [previous section on data requirements for built-in evaluators](#data-requirements-for-built-in-evaluators).
 ```json
 {
@@ -360,7 +411,7 @@ To `evaluate()` with either the `ChatEvaluator` or `ContentSafetyChatEvaluator`,
 result = evaluate(
     data="data.jsonl",
     evaluators={
-        "chatevaluator": chat_evaluator
+        "chat": chat_evaluator
     },
     # column mapping for messages
     evaluator_config={
@@ -370,7 +421,36 @@ result = evaluate(
 }
 )
 ```
-
+#### Evaluator parameter format
+When passing in your built-in evaluators, it's important to specify the right keyword mapping in the `evaluators` parameter list. The following keyword mapping is required for the results from your built-in evaluators to show up in the UI when logged to Azure AI Studio.
+| Evaluator | Keyword parameter |
+|------------------------------|-----------------------|
+| `RelevanceEvaluator` | "relevance" |
+| `CoherenceEvaluator` | "coherence" |
+| `GroundednessEvaluator` | "groundedness" |
+| `FluencyEvaluator` | "fluency" |
+| `SimilarityEvaluator` | "similarity" |
+| `F1ScoreEvaluator` | "f1_score" |
+| `ViolenceEvaluator` | "violence" |
+| `SexualEvaluator` | "sexual" |
+| `SelfHarmEvaluator` | "self_harm" |
+| `HateUnfairnessEvaluator` | "hate_unfairness" |
+| `QAEvaluator` | "qa" |
+| `ChatEvaluator` | "chat" |
+| `ContentSafetyEvaluator` | "content_safety" |
+| `ContentSafetyChatEvaluator` | "content_safety_chat" |
+Here's an example of setting the `evaluators` parameters:
+```python
+result = evaluate(
+    data="data.jsonl",
+    evaluators={
+        "sexual": sexual_evaluator,
+        "self_harm": self_harm_evaluator,
+        "hate_unfairness": hate_unfairness_evaluator,
+        "violence": violence_evaluator
+    }
+)
+```
 ## Evaluate on a target
 
 If you have a list of queries that you'd like to run and then evaluate, `evaluate()` also supports a `target` parameter, which can send queries to an application to collect answers, and then run your evaluators on the resulting questions and answers.
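To make that concrete, here's a minimal sketch. It assumes `target` accepts a Python callable whose parameters match columns in `data.jsonl` and whose returned keys can then be consumed by evaluators; `my_chat_app` and `target_fn` are hypothetical names, and `evaluator` is the Prompty-based evaluator loaded earlier.

```python
# Sketch of evaluating against a target application; my_chat_app is hypothetical.
def my_chat_app(question: str) -> str:
    # Call your deployed app, chain, or local flow here and return its answer.
    return "Paris is the capital of France."

def target_fn(question: str) -> dict:
    # Keys returned here become answer columns that evaluators can consume.
    return {"answer": my_chat_app(question)}

result = evaluate(
    data="data.jsonl",        # each row supplies a "question"
    target=target_fn,         # answers are collected from the app at evaluation time
    evaluators={
        "apology": evaluator  # the Prompty-based evaluator loaded earlier
    },
    # If your evaluator's input names differ from the dataset columns or the
    # target's output keys, add an evaluator_config column mapping as shown earlier.
)
```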
@@ -399,4 +479,6 @@ result = evaluate(
 ## Related content
 
 - [Get started building a chat app using the prompt flow SDK](../../quickstarts/get-started-code.md)
-- [Work with projects in VS Code](vscode.md)
+- [Prompt flow Python reference documentation](https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow-evals/promptflow.evals.evaluators.html)
+- [Learn more about the evaluation metrics](../../concepts/evaluation-metrics-built-in.md)
+- [View your evaluation results in Azure AI Studio](../../how-to/evaluate-flow-results.md)
