Commit e2dd6a1

fix warning
1 parent: e7313d1

File tree

1 file changed: +1 −1

articles/ai-foundry/concepts/evaluation-evaluators/risk-safety-evaluators.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ author: lgayhardt
 
 # Risk and safety evaluators (preview)
 
-[!INCLUDE [feature-preview](../includes/feature-preview.md)]
+[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
 
 Risk and safety evaluators draw on insights gained from our previous Large Language Model projects, such as GitHub Copilot and Bing, ensuring a comprehensive approach to evaluating generated responses for risk and safety severity scores. These evaluators are powered by the Azure AI Foundry Evaluation service, which employs a set of LLMs. Each model is tasked with assessing specific risks that could be present in the response from your AI system (for example, sexual or violent content). These evaluator models are provided with risk definitions and annotate accordingly. Currently, the following risks are supported:
 
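The paragraph in the diff above describes how these service-backed evaluators work in practice. As a minimal sketch, assuming the `azure-ai-evaluation` and `azure-identity` Python packages and an existing Azure AI Foundry project (the project details and the sample query/response below are placeholders), one of these evaluators can be invoked like this:

```python
# Minimal sketch: invoking one service-backed risk and safety evaluator.
# Assumes the azure-ai-evaluation and azure-identity packages and an
# existing Azure AI Foundry project; all identifiers are placeholders.
from azure.ai.evaluation import ViolenceEvaluator
from azure.identity import DefaultAzureCredential

azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

# The evaluator submits each query/response pair to the Azure AI Foundry
# Evaluation service, where an LLM annotates the response against the
# violence risk definition and returns a severity label and score.
violence_evaluator = ViolenceEvaluator(
    credential=DefaultAzureCredential(),
    azure_ai_project=azure_ai_project,
)

result = violence_evaluator(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(result)  # for example: {"violence": "Very low", "violence_score": 0, ...}
```

The other supported risks follow the same pattern through their own evaluator classes in the same package.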
