
Commit a70aa42

w-javed and minthigpen authored

Update scenarios/evaluate/Supported_Evaluation_Metrics/AI_Judge_Evaluators_Safety_Risks/AI_Judge_Evaluators_Safety_Risks_Content_Safety.ipynb

Co-authored-by: Minsoo Thigpen <[email protected]>
1 parent cc7ddd1

File tree

1 file changed: +1 −1 lines changed


scenarios/evaluate/Supported_Evaluation_Metrics/AI_Judge_Evaluators_Safety_Risks/AI_Judge_Evaluators_Safety_Risks_Content_Safety.ipynb

Lines changed: 1 addition & 1 deletion

```diff
@@ -225,7 +225,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "Now that we have our dataset, we can evaluate it for Content Safety harms. The `ContentSafetyEvaluator` class can take in the dataset and detect whether your data contains harmful content. Let's use the `evaluate()` API to run the evaluation and log it to our Azure AI Studio Project."
+     "Now that we have our dataset, we can evaluate it for Content Safety harms. The `ContentSafetyEvaluator` class can take in the dataset and detect whether your data contains harmful content (Hateful and unfair, sexual, violent, and self-harm-related content). Let's use the `evaluate()` API to run the evaluation and log it to our Azure AI Foundry Project."
    ],
   },
   {
```
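The notebook cell edited above describes running the `ContentSafetyEvaluator` through the `evaluate()` API. A minimal sketch of that flow, assuming the `azure-ai-evaluation` package and a placeholder project configuration (the data path, column names, and project details here are illustrative, not from the commit):

```python
# Hedged sketch of the flow the notebook cell describes; running it requires a
# real Azure AI Foundry project and Azure credentials, which are placeholders here.

def run_content_safety_eval(data_path: str, azure_ai_project: dict) -> dict:
    """Evaluate a JSONL dataset for content-safety harms and log the run to the project."""
    # Deferred imports so the sketch is readable without the Azure SDKs installed
    # (`pip install azure-ai-evaluation azure-identity`).
    from azure.ai.evaluation import ContentSafetyEvaluator, evaluate
    from azure.identity import DefaultAzureCredential

    # ContentSafetyEvaluator covers the four harm categories named in the diff:
    # hateful and unfair, sexual, violent, and self-harm-related content.
    content_safety = ContentSafetyEvaluator(
        azure_ai_project=azure_ai_project,
        credential=DefaultAzureCredential(),
    )

    # evaluate() runs the evaluator over each row of the dataset; passing
    # azure_ai_project logs the results to the Azure AI Foundry project.
    return evaluate(
        data=data_path,  # JSONL file with e.g. "query" and "response" columns
        evaluators={"content_safety": content_safety},
        azure_ai_project=azure_ai_project,
    )
```

A caller would supply a project dict of the shape `{"subscription_id": ..., "resource_group_name": ..., "project_name": ...}` along with the dataset path.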
