Commit 7f0d142

Update flow-evaluate-sdk.md
1 parent 06a044b commit 7f0d142

File tree

1 file changed (+4, −3 lines)


articles/ai-studio/how-to/develop/flow-evaluate-sdk.md

Lines changed: 4 additions & 3 deletions
@@ -188,7 +188,7 @@ The result:
 ```JSON
 {"answer_length":27}
 ```
-#### Log your custom prompt-based evaluator to you AI project
+#### Log your custom prompt-based evaluator to your AI project
 ```python
 # First we need to save evaluator into separate file in its own directory:
 def answer_len(answer):
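The hunk above truncates the evaluator code at its first line. A minimal self-contained sketch of the two steps it comments on — defining the callable and saving it into a separate file in its own directory — assuming only that the evaluator returns a metric dict (directory and file names are illustrative):

```python
import os
import textwrap

# A code-based evaluator is just a callable that returns a dict of metrics.
def answer_len(answer):
    return {"answer_length": len(answer)}

# Spot-check on a single input; matches the {"answer_length":27} result above.
print(answer_len(answer="What is the speed of light?"))  # {'answer_length': 27}

# Save the evaluator into a separate file in its own directory, as the
# diff's comment describes, so the directory can be packaged as a flow.
os.makedirs("answer_len", exist_ok=True)
with open(os.path.join("answer_len", "answer_len.py"), "w") as f:
    f.write(textwrap.dedent("""\
        def answer_len(answer):
            return {"answer_length": len(answer)}
    """))
```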
@@ -218,6 +218,7 @@ retrieved_eval = ml_client.evaluators.get("answer_len_uploaded", version=1)
 ml_client.evaluators.download("answer_len_uploaded", version=1, download_path=".")
 evaluator = load_flow(os.path.join("answer_len_uploaded", flex_flow_path))
 ```
+After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library) under Evaluation tab in AI studio.
 ### Prompt-based evaluators
 To build your own prompt-based large language model evaluator, you can create a custom evaluator based on a **Prompty** file. Prompty is a file with `.prompty` extension for developing prompt template. The Prompty asset is a markdown file with a modified front matter. The front matter is in YAML format that contains many metadata fields that define model configuration and expected inputs of the Prompty. Given an example `apology.prompty` file that looks like the following:
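The `apology.prompty` contents are cut off by the next hunk. As a hedged sketch of the front-matter shape the paragraph describes — YAML metadata for model configuration and expected inputs, followed by the prompt body — where every field value below is illustrative and not taken from the actual file:

```yaml
---
name: Apology Evaluator
description: Detects whether the assistant apologized in its answer.
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-4
  parameters:
    temperature: 0
inputs:
  question:
    type: string
  answer:
    type: string
---
system:
Determine whether the assistant apologized in its answer.
Reply with JSON in the form {"apology": 0} or {"apology": 1}.
```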
223224

@@ -284,7 +285,7 @@ Here is the result:
 ```JSON
 {"apology": 0}
 ```
-#### Log your custom prompt-based evaluator to you AI project
+#### Log your custom prompt-based evaluator to your AI project
 ```python
 # Define the path to prompty file.
 prompty_path = os.path.join("apology-prompty", "apology.prompty")
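A prompt-based evaluator surfaces the model's JSON reply as its metric dict, e.g. the `{"apology": 0}` result shown in the hunk above. A library-independent sketch of that parsing step (the helper name is hypothetical, not an SDK function):

```python
import json

def parse_apology_reply(model_reply: str) -> dict:
    # The Prompty instructs the model to answer with JSON such as
    # {"apology": 0}; the evaluator returns that parsed into a metric dict.
    return {"apology": int(json.loads(model_reply)["apology"])}

print(parse_apology_reply('{"apology": 0}'))  # {'apology': 0}
```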
@@ -300,7 +301,7 @@ retrieved_eval = ml_client.evaluators.get("prompty_uploaded", version=1)
 ml_client.evaluators.download("prompty_uploaded", version=1, download_path=".")
 evaluator = load_flow(os.path.join("prompty_uploaded", "apology.prompty"))
 ```
-
+After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/evaluate-generative-ai-app#view-and-manage-the-evaluators-in-the-evaluator-library) under Evaluation tab in AI studio.
 ## Evaluate on test dataset using `evaluate()`
 After you spot-check your built-in or custom evaluators on a single row of data, you can combine multiple evaluators with the `evaluate()` API on an entire test dataset. In order to ensure the `evaluate()` can correctly parse the data, you must specify column mapping to map the column from the dataset to key words that are accepted by the evaluators. In this case, we specify the data mapping for `ground_truth`.
 ```python
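The code block opened at the end of the hunk is cut off by the chunk boundary. Independent of the SDK, the column mapping the paragraph describes can be modeled as `"${data.<column>}"` references resolved against each dataset row; the resolver below is an illustrative model of that behavior, not the actual `evaluate()` implementation:

```python
import re

# Resolve "${data.<column>}" references in a column mapping against one
# dataset row -- an illustrative model of what evaluate() does per row.
def resolve_mapping(mapping: dict, row: dict) -> dict:
    def lookup(template: str):
        match = re.fullmatch(r"\$\{data\.(\w+)\}", template)
        return row[match.group(1)] if match else template
    return {key: lookup(value) for key, value in mapping.items()}

row = {"answer": "E = mc^2", "ground_truth": "Energy-mass equivalence"}
mapping = {"answer": "${data.answer}", "ground_truth": "${data.ground_truth}"}
print(resolve_mapping(mapping, row))
```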
