Commit 43d351c

Update flow-evaluate-sdk.md
1 parent 15c116c

File tree

1 file changed: +3 -2 lines changed

articles/ai-studio/how-to/develop/flow-evaluate-sdk.md

Lines changed: 3 additions & 2 deletions
@@ -188,7 +188,7 @@ The result:
 ```JSON
 {"answer_length":27}
 ```
-#### Log your custom code-based evaluator to your AI project
+#### Log your custom code-based evaluator to your AI Studio project
 ```python
 # First we need to save evaluator into separate file in its own directory:
 def answer_len(answer):
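The Python snippet in this hunk is cut off by the diff context. As a rough sketch of the step its first comment describes, saving the evaluator into a separate file in its own directory, the following uses only the standard library; the `answer_len` directory, the `answer_len.py` file name, and the dictionary return value (chosen to match the `{"answer_length":27}` result shown above) are illustrative assumptions, not taken from the full article.

```python
import os
import textwrap

# Source of the custom code-based evaluator, kept as a plain string so it can
# be written out to its own module file. The return shape mirrors the
# {"answer_length":27} result shown in the hunk's context.
EVALUATOR_SOURCE = textwrap.dedent(
    """\
    def answer_len(answer):
        return {"answer_length": len(answer)}
    """
)

# Illustrative directory and file names (assumptions, not from the original article).
evaluator_dir = "answer_len"
os.makedirs(evaluator_dir, exist_ok=True)

evaluator_path = os.path.join(evaluator_dir, "answer_len.py")
with open(evaluator_path, "w", encoding="utf-8") as f:
    f.write(EVALUATOR_SOURCE)

print(f"Saved evaluator to {evaluator_path}")
```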
@@ -285,7 +285,7 @@ Here is the result:
 ```JSON
 {"apology": 0}
 ```
-#### Log your custom prompt-based evaluator to your AI project
+#### Log your custom prompt-based evaluator to your AI Studio project
 ```python
 # Define the path to prompty file.
 prompty_path = os.path.join("apology-prompty", "apology.prompty")
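This hunk is also truncated right after the path definition. A minimal sketch of building and validating that path before the prompty file is loaded, assuming the `apology-prompty/apology.prompty` layout shown in the snippet and using only the standard library, could look like this:

```python
import os

# Path layout taken from the hunk above: a directory named "apology-prompty"
# containing the prompty definition "apology.prompty".
prompty_path = os.path.join("apology-prompty", "apology.prompty")

# Fail early with a clear message if the prompty file is not where we expect it.
if not os.path.isfile(prompty_path):
    raise FileNotFoundError(f"Expected prompty file at {prompty_path}")

print(f"Using prompty definition: {prompty_path}")
```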
@@ -362,6 +362,7 @@ The evaluator outputs results in a dictionary which contains aggregate `metrics`
 'traces': {}}
 ```
 ### Requirements for `evaluate()`
+The `evaluate()` API has a few requirements for the data format that it accepts and how it handles evaluator parameter key names so that the charts in your AI Studio evaluation results show up properly.
 #### Data format
 The `evaluate()` API only accepts data in the JSONLines format. For all built-in evaluators, except for `ChatEvaluator` or `ContentSafetyChatEvaluator`, `evaluate()` requires data in the following format with required input fields. See the [previous section on required data input for built-in evaluators](#required-data-input-for-built-in-evaluators).
 ```json
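The line added by this commit points to the JSONLines requirement described just below it. As a rough illustration of that format, the sketch below writes a small `.jsonl` file in which each record is a complete JSON object on its own line; the field names `question`, `answer`, `context`, and `ground_truth`, as well as the `evaluation_data.jsonl` file name, are assumptions standing in for the required input fields of whichever built-in evaluators you use.

```python
import json

# Each line of a JSONLines file is one complete JSON object.
# Field names below are illustrative stand-ins for the evaluators' required inputs.
rows = [
    {
        "question": "What is the capital of France?",
        "answer": "Paris is the capital of France.",
        "context": "France's capital city is Paris.",
        "ground_truth": "Paris",
    },
    {
        "question": "Who wrote Hamlet?",
        "answer": "Hamlet was written by William Shakespeare.",
        "context": "Shakespeare wrote the tragedy Hamlet around 1600.",
        "ground_truth": "William Shakespeare",
    },
]

# Write one JSON object per line, as evaluate() expects for its input data.
with open("evaluation_data.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```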
