
Commit 16fb84a

Link fixes
1 parent d73c3be commit 16fb84a

File tree

1 file changed: +31 −7 lines changed


articles/ai-studio/how-to/develop/flow-evaluate-sdk.md

Lines changed: 31 additions & 7 deletions
@@ -7,7 +7,7 @@ ms.service: azure-ai-studio
 ms.custom:
 - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 08/07/2024
 ms.reviewer: dantaylo
 ms.author: eur
 author: eric-urban
@@ -162,9 +162,11 @@ chat_evaluator = ChatEvaluator(
 ```
 
 ## Custom evaluators
+
 Built-in evaluators are great out of the box to start evaluating your application's generations. However you might want to build your own code-based or prompt-based evaluator to cater to your specific evaluation needs.
 
 ### Code-based evaluators
+
 Sometimes a large language model isn't needed for certain evaluation metrics. This is when code-based evaluators can give you the flexibility to define metrics based on functions or callable class. Given a simple Python class in an example `answer_length.py` that calculates the length of an answer:
 ```python
 class AnswerLengthEvaluator:
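The hunk truncates `answer_length.py` here. For orientation only, a hypothetical sketch of what such a callable, code-based evaluator looks like; the class body and sample input below are illustrative, not taken from the commit:

```python
# Hypothetical sketch; the actual answer_length.py body is cut off by the hunk above.
class AnswerLengthEvaluator:
    """Code-based evaluator: a plain callable that returns a dict of metric values."""

    def __call__(self, *, answer: str, **kwargs):
        # No LLM involved; the metric is computed directly from the input.
        return {"answer_length": len(answer)}


if __name__ == "__main__":
    evaluator = AnswerLengthEvaluator()
    print(evaluator(answer="Paris is the capital of France."))
```

The point being illustrated is simply that a code-based evaluator is a callable that accepts keyword inputs and returns a dictionary of metric values.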
@@ -218,8 +220,11 @@ retrieved_eval = ml_client.evaluators.get("answer_len_uploaded", version=1)
 ml_client.evaluators.download("answer_len_uploaded", version=1, download_path=".")
 evaluator = load_flow(os.path.join("answer_len_uploaded", flex_flow_path))
 ```
-After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library) under Evaluation tab in AI studio.
+
+After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the Evaluation tab in AI Studio.
+
 ### Prompt-based evaluators
+
 To build your own prompt-based large language model evaluator, you can create a custom evaluator based on a **Prompty** file. Prompty is a file with `.prompty` extension for developing prompt template. The Prompty asset is a markdown file with a modified front matter. The front matter is in YAML format that contains many metadata fields that define model configuration and expected inputs of the Prompty. Given an example `apology.prompty` file that looks like the following:
 
 ```markdown
@@ -281,7 +286,7 @@ apology_score = apology_eval(
 print(apology_score)
 ```
 
-Here is the result:
+Here's the result:
 ```JSON
 {"apology": 0}
 ```
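The article's own code between these hunks is elided. As a hedged sketch of how a Prompty-based evaluator like `apology.prompty` is typically loaded and invoked — the `load_flow` import path and the input values are assumptions, and the Prompty's front matter is assumed to carry its model configuration:

```python
# Sketch under assumptions, not the commit's code.
import os

from promptflow.client import load_flow  # assumed import path for load_flow

# load_flow turns the .prompty file (front matter + prompt template) into a callable.
apology_eval = load_flow(os.path.join(".", "apology.prompty"))

# Input names are defined by the Prompty's front matter; these values are illustrative.
apology_score = apology_eval(
    question="What is the capital of France?",
    answer="Paris",
)
print(apology_score)  # the hunks show a result of the form {"apology": 0}
```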
@@ -301,9 +306,13 @@ retrieved_eval = ml_client.evaluators.get("prompty_uploaded", version=1)
 ml_client.evaluators.download("prompty_uploaded", version=1, download_path=".")
 evaluator = load_flow(os.path.join("prompty_uploaded", "apology.prompty"))
 ```
-After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library) under Evaluation tab in AI studio.
+
+After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the Evaluation tab in AI Studio.
+
 ## Evaluate on test dataset using `evaluate()`
-After you spot-check your built-in or custom evaluators on a single row of data, you can combine multiple evaluators with the `evaluate()` API on an entire test dataset. In order to ensure the `evaluate()` can correctly parse the data, you must specify column mapping to map the column from the dataset to key words that are accepted by the evaluators. In this case, we specify the data mapping for `ground_truth`.
+
+After you spot-check your built-in or custom evaluators on a single row of data, you can combine multiple evaluators with the `evaluate()` API on an entire test dataset. To ensure that `evaluate()` can correctly parse the data, you must specify column mapping to map the columns from the dataset to keywords that are accepted by the evaluators. In this case, we specify the data mapping for `ground_truth`.
+
 ```python
 from promptflow.evals.evaluate import evaluate
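The `evaluate()` call itself falls outside these hunks. As a loose sketch of the column-mapping idea described above — the `evaluator_config` parameter name and the `${data.<column>}` mapping syntax are recalled from the promptflow-evals API and should be treated as assumptions, and the evaluator and file names are placeholders:

```python
# Loose sketch, not the commit's code. Assumes a data.jsonl file with
# question/answer/ground_truth columns and a placeholder evaluator.
from promptflow.evals.evaluate import evaluate


class GroundTruthLengthEvaluator:
    """Placeholder evaluator that consumes the mapped ground_truth column."""

    def __call__(self, *, ground_truth: str, **kwargs):
        return {"ground_truth_length": len(ground_truth)}


result = evaluate(
    data="data.jsonl",
    evaluators={"gt_length": GroundTruthLengthEvaluator()},
    # Column mapping: point the evaluator's ground_truth input at the dataset column.
    evaluator_config={"gt_length": {"ground_truth": "${data.ground_truth}"}},
)
print(result["metrics"])
```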

@@ -325,9 +334,11 @@ result = evaluate(
     output_path="./myevalresults.json"
 )
 ```
+
 > [!TIP]
 > Get the contents of the `result.studio_url` property for a link to view your logged evaluation results in Azure AI Studio.
 The evaluator outputs results in a dictionary which contains aggregate `metrics` and row-level data and metrics. An example of an output:
+
 ```python
 {'metrics': {'answer_length.value': 49.333333333333336,
  'relevance.gpt_relevance': 5.0},
@@ -360,11 +371,17 @@ The evaluator outputs results in a dictionary which contains aggregate `metrics`
  'outputs.answer_length.value': 66,
  'outputs.relevance.gpt_relevance': 5}],
  'traces': {}}
+
 ```
+
 ### Requirements for `evaluate()`
+
 The `evaluate()` API has a few requirements for the data format that it accepts and how it handles evaluator parameter key names so that the charts in your AI Studio evaluation results show up properly.
+
 #### Data format
-The `evaluate()` API only accepts data in the JSONLines format. For all built-in evaluators, except for `ChatEvaluator` or `ContentSafetyChatEvaluator`, `evaluate()` requires data in the following format with required input fields. See the [previous section on required data input for built-in evaluators](#required-data-input-for-built-in-evaluators).
+
+The `evaluate()` API only accepts data in the JSONLines format. For all built-in evaluators, except for `ChatEvaluator` or `ContentSafetyChatEvaluator`, `evaluate()` requires data in the following format with required input fields. See the [previous section on required data input for built-in evaluators](#data-requirements-for-built-in-evaluators).
+
 ```json
 {
     "question":"What is the capital of France?",
@@ -373,7 +390,9 @@ The `evaluate()` API only accepts data in the JSONLines format. For all built-in
     "ground_truth": "Paris"
 }
 ```
+
 For the composite evaluator class, `ChatEvaluator` and `ContentSafetyChatEvaluator`, we require an array of messages that adheres to OpenAI's messages protocol that can be found [here](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content). The messages protocol contains a role-based list of messages with the following:
+
 - `content`: The content of that turn of the interaction between user and application or assistant.
 - `role`: Either the user or application/assistant.
 - `"citations"` (within `"context"`): Provides the documents and its ID as key value pairs from the retrieval-augmented generation model.
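To make the role-based format above concrete, here is a hedged sketch (as a Python literal, matching the article's other samples) of a single conversation; the exact nesting of `citations` under `context` follows the bullets above and is an assumption, not content copied from the commit:

```python
# Illustrative only: one conversation in the role-based messages format the
# composite chat evaluators expect. Field nesting is an assumption.
conversation = [
    {"role": "user", "content": "What is the capital of France?"},
    {
        "role": "assistant",
        "content": "The capital of France is Paris.",
        # "citations" inside "context" carries the retrieved documents and their IDs
        # as key-value pairs from the retrieval-augmented generation step.
        "context": {
            "citations": [
                {"id": "geo_facts.md", "content": "France's capital city is Paris."}
            ]
        },
    },
]
```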
@@ -421,8 +440,11 @@ result = evaluate(
     }
 )
 ```
+
 #### Evaluator parameter format
-When passing in your built-in evaluators, it is important to specify the right keyword mapping in the `evaluators` parameter list. The following is the keyword mapping required for the results from your built-in evaluators to show up in the UI when logged to Azure AI Studio.
+
+When passing in your built-in evaluators, it's important to specify the right keyword mapping in the `evaluators` parameter list. The following is the keyword mapping required for the results from your built-in evaluators to show up in the UI when logged to Azure AI Studio.
+
 | Evaluator | keyword param |
 |------------------------------|-----------------------|
 | `RelevanceEvaluator` | "relevance" |
@@ -439,6 +461,7 @@ When passing in your built-in evaluators, it is important to specify the right k
 | `ChatEvaluator` | "chat" |
 | `ContentSafetyEvaluator` | "content_safety" |
 | `ContentSafetyChatEvaluator` | "content_safety_chat" |
+
 Here's an example of setting the `evaluators` parameters:
 ```python
 result = evaluate(
@@ -451,6 +474,7 @@ result = evaluate(
     }
 )
 ```
+
 ## Evaluate on a target
 
 If you have a list of queries that you'd like to run then evaluate, the `evaluate()` also supports a `target` parameter, which can send queries to an application to collect answers then run your evaluators on the resulting question and answers.
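A minimal sketch of that flow, assuming a callable target that takes a question and returns an answer; the function, evaluator, and file names are placeholders, and the `${target.answer}` mapping is recalled from the promptflow-evals column-mapping syntax rather than taken from this commit:

```python
# Minimal sketch under assumptions; not the commit's code.
from promptflow.evals.evaluate import evaluate


def answer_query(*, question: str, **kwargs):
    """Stand-in for a real application call (for example, a deployed chat endpoint)."""
    return {"answer": f"Stub answer to: {question}"}


class AnswerLengthEvaluator:
    def __call__(self, *, answer: str, **kwargs):
        return {"answer_length": len(answer)}


result = evaluate(
    data="queries.jsonl",  # assumed JSONL file with a "question" column
    target=answer_query,   # evaluate() sends each query here to collect an answer
    evaluators={"answer_length": AnswerLengthEvaluator()},
    # Map the target's output onto the evaluator's "answer" input.
    evaluator_config={"answer_length": {"answer": "${target.answer}"}},
)
print(result["metrics"])
```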
