Built-in composite evaluators are composed of individual evaluators.

- `ContentSafetyEvaluator` combines all the safety evaluators into a single output of combined metrics for question and answer pairs (a usage sketch follows this list).
- `ContentSafetyChatEvaluator` combines all the safety evaluators into a single output of combined metrics for chat messages that follow the OpenAI message protocol, which can be found [here](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content).
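
Below is a rough sketch of how a composite evaluator such as `ContentSafetyEvaluator` might be instantiated and called; the `project_scope` argument name and its fields are assumptions for illustration and may differ in your SDK version:

```python
from promptflow.evals.evaluators import ContentSafetyEvaluator

# Assumed Azure AI Studio project details; replace the placeholders with your own values.
project_scope = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

content_safety_evaluator = ContentSafetyEvaluator(project_scope=project_scope)

# A single call returns the combined safety metrics for one question and answer pair.
safety_scores = content_safety_evaluator(
    question="What is the capital of France?",
    answer="Paris is the capital of France.",
)
print(safety_scores)
```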
> [!TIP]
> For more information about inputs and outputs, see the [Prompt flow Python reference documentation](https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow-evals/promptflow.evals.evaluators.html).
### Data requirements for built-in evaluators
We require question and answer pairs in `.jsonl` format with the required inputs, along with column mapping, for evaluating datasets.
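
As a purely illustrative sketch of the data format, one row of such a `.jsonl` file could be written like this (the exact set of required fields depends on the evaluators you use):

```python
import json

# One JSON object per line; the keys must match the inputs the chosen evaluators expect.
row = {
    "question": "What is the capital of France?",
    "answer": "Paris is the capital of France.",
    "ground_truth": "Paris",
}

with open("data.jsonl", "a") as fp:
    fp.write(json.dumps(row) + "\n")
```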
Built-in evaluators are great out of the box to start evaluating your application's generations. However, you might want to build your own code-based or prompt-based evaluator to cater to your specific evaluation needs.
### Code-based evaluators
Sometimes a large language model isn't needed for certain evaluation metrics. Code-based evaluators give you the flexibility to define metrics based on functions or a callable class. Given a simple Python class in an example `answer_length.py` that calculates the length of an answer:
```python
class AnswerLengthEvaluator:
    def __call__(self, *, answer: str, **kwargs):
        return {"answer_length": len(answer)}
```
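
You can then spot-check the evaluator on a single answer by instantiating the class and calling it; this particular call and its 27-character sample string are illustrative:

```python
answer_length_evaluator = AnswerLengthEvaluator()

# Returns a dictionary, for example {"answer_length": 27} for a 27-character answer.
answer_length = answer_length_evaluator(answer="Paris is the French capital")
print(answer_length)
```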
The result:
```json
{"answer_length":27}
```
#### Log your custom code-based evaluator to your AI Studio project
```python
import inspect
import os

# First, we need to save the evaluator into a separate file in its own directory:
def answer_len(answer):
    return len(answer)

# Note: we create a temporary directory to store our Python file.
target_dir_tmp = "flex_flow_tmp"
os.makedirs(target_dir_tmp, exist_ok=True)
lines = inspect.getsource(answer_len)
with open(os.path.join("flex_flow_tmp", "answer.py"), "w") as fp:
    fp.write(lines)

from flex_flow_tmp.answer import answer_len as answer_length
```
After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the Evaluation tab in AI Studio.
### Prompt-based evaluators
To build your own prompt-based large language model evaluator, you can create a custom evaluator based on a **Prompty** file. Prompty is a file with the `.prompty` extension for developing prompt templates. The Prompty asset is a markdown file with a modified front matter. The front matter is in YAML format and contains many metadata fields that define the model configuration and expected inputs of the Prompty. As an example, you could define an `apology.prompty` file and use it as a custom evaluator.
After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the Evaluation tab in AI Studio.
## Evaluate on a test dataset using `evaluate()`
After you spot-check your built-in or custom evaluators on a single row of data, you can combine multiple evaluators with the `evaluate()` API on an entire test dataset. To ensure that `evaluate()` can parse the data correctly, you must specify column mapping to map columns from the dataset to the keywords accepted by the evaluators. In this case, we specify the data mapping for `ground_truth`.
```python
from promptflow.evals.evaluate import evaluate

result = evaluate(
    # ... data, evaluators, and evaluator_config (column mapping) arguments ...
    output_path="./myevalresults.json"
)
```
> [!TIP]
> Get the contents of the `result.studio_url` property for a link to view your logged evaluation results in Azure AI Studio.
The evaluator outputs results in a dictionary that contains aggregate `metrics` and row-level data and metrics. An example of an output:
```python
{'metrics': {...},
 'rows': [{...,
           'outputs.answer_length.value': 66,
           'outputs.relevance.gpt_relevance': 5}],
 'traces': {}}
```
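
Assuming you captured the return value as `result` (as in the call above), a small loop like the following prints just the aggregate scores:

```python
# The aggregate scores live under the "metrics" key of the returned dictionary.
for metric_name, metric_value in result["metrics"].items():
    print(f"{metric_name}: {metric_value}")
```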
### Requirements for `evaluate()`
378
+
379
+
The `evaluate()` API has a few requirements for the format of the data it accepts, and for how it handles evaluator parameter key names, so that the charts in your AI Studio evaluation results show up properly.
#### Data format
The `evaluate()` API only accepts data in the JSONLines format. For all built-in evaluators, except for `ChatEvaluator` or `ContentSafetyChatEvaluator`, `evaluate()` requires data in the following format with required input fields. See the [previous section on data requirements for built-in evaluators](#data-requirements-for-built-in-evaluators).
```json
{
"question":"What is the capital of France?",
@@ -322,7 +390,9 @@ The `evaluate()` API only accepts data in the JSONLines format. For all built-in
322
390
"ground_truth": "Paris"
323
391
}
```
For the composite evaluator classes `ChatEvaluator` and `ContentSafetyChatEvaluator`, we require an array of messages that adheres to OpenAI's messages protocol, which can be found [here](https://platform.openai.com/docs/api-reference/messages/object#messages/object-content). The messages protocol contains a role-based list of messages with the following (a sketch of one such array follows this list):
- `content`: The content of that turn of the interaction between the user and the application or assistant.
- `role`: Either the user or the application/assistant.
- `"citations"` (within `"context"`): Provides the documents and their IDs as key-value pairs from the retrieval-augmented generation model.
To `evaluate()` with either the `ChatEvaluator` or `ContentSafetyChatEvaluator`, pass the evaluator in the `evaluators` parameter and specify the column mapping for your array of messages in `evaluator_config`:

```python
result = evaluate(
    data="data.jsonl",
    evaluators={
        "chat": chat_evaluator
    },
    # column mapping for messages
    evaluator_config={
        # ... map the evaluator's messages input to the messages column in your data ...
    }
)
```
#### Evaluator parameter format
When passing in your built-in evaluators, it's important to specify the right keyword mapping in the `evaluators` parameter list. The following is the keyword mapping required for the results from your built-in evaluators to show up in the UI when logged to Azure AI Studio.

Here's an example of setting the `evaluators` parameter:
```python
result = evaluate(
    data="data.jsonl",
    evaluators={
        "sexual": sexual_evaluator,
        "self_harm": self_harm_evaluator,
        "hate_unfairness": hate_unfairness_evaluator,
        "violence": violence_evaluator
    }
)
```
## Evaluate on a target
If you have a list of queries that you'd like to run and then evaluate, `evaluate()` also supports a `target` parameter, which can send queries to an application to collect answers and then run your evaluators on the resulting questions and answers.
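
Below is a minimal sketch of such a target; the callable name and its return shape are illustrative assumptions, the key point being that `evaluate()` sends each row's query to the target and then evaluates the answers it returns:

```python
# A hypothetical application wrapper: takes a question and returns the fields to evaluate.
def ask_my_app(question: str) -> dict:
    # Call your deployed application or chat endpoint here; a canned reply keeps
    # the sketch self-contained.
    return {"answer": f"You asked about: {question}"}

result = evaluate(
    data="queries.jsonl",  # rows containing the questions to send to the target
    target=ask_my_app,     # answers are collected from the target before evaluation
    evaluators={"answer_length": AnswerLengthEvaluator()},
)
```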
## Related content
- [Get started building a chat app using the prompt flow SDK](../../quickstarts/get-started-code.md)