Commit 14f0bc1 (1 parent: 3b49b4d)

minor update

File tree: 1 file changed, 1 insertion(+), 1 deletion(-)


articles/ai-studio/how-to/develop/evaluate-sdk.md (1 addition, 1 deletion)

````diff
@@ -310,7 +310,7 @@ print(violence_conv_score)
 
 ```
 
-The result of the risk and safety evaluators for a query and response pair is a dictionary containing:
+The result of the content safety evaluators for a query and response pair is a dictionary containing:
 
 - `{metric_name}` provides a severity label for that content risk ranging from Very low, Low, Medium, and High. You can read more about the descriptions of each content risk and severity scale [here](../../concepts/evaluation-metrics-built-in.md).
 - `{metric_name}_score` has a range between 0 and 7 severity level that maps to a severity label given in `{metric_name}`.
````

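For readers skimming the diff, here is a minimal sketch of the dictionary shape the two bullets in the changed passage describe, assuming a violence metric as suggested by the hunk's `print(violence_conv_score)` context. The key names simply follow the `{metric_name}` pattern; the values are illustrative, not actual SDK output.

```python
# Illustrative only: a result dictionary shaped as the changed passage
# describes, for a metric named "violence". The values are made up.
violence_conv_score = {
    "violence": "Very low",  # {metric_name}: severity label (Very low, Low, Medium, High)
    "violence_score": 0,     # {metric_name}_score: 0-7 level mapping to the label above
}

print(violence_conv_score["violence"])        # -> Very low
print(violence_conv_score["violence_score"])  # -> 0
```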