Commit b8b4c8f

acrolinx fixes

1 parent fb6c3b3 · commit b8b4c8f

File tree

2 files changed

+2
-2
lines changed


learn-pr/wwl-data-ai/responsible-ai-studio/4-measure-harms.yml

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ uid: learn.wwl.responsible-ai-studio.measure-harms
 title: Measure potential harms
 metadata:
   title: Measure potential harms
-  description: Measure the presence of potential harms in a responsible AI solution.
+  description: Measure the presence of potential harm in a responsible AI solution.
   author: ivorb
   ms.author: berryivor
   ms.date: 04/16/2025

learn-pr/wwl-data-ai/responsible-ai-studio/includes/5-mitigate-harms.md

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ Other safety system layer mitigations can include abuse detection algorithms to

 ## 3: The *system message and grounding* layer

-The system message and grounding layer focuses on the construction of prompts that are submitted to the model. Harm mitigation techniques that you can apply at this layer include:
+This layer focuses on the construction of prompts that are submitted to the model. Harm mitigation techniques that you can apply at this layer include:

 - Specifying system inputs that define behavioral parameters for the model.
 - Applying prompt engineering to add grounding data to input prompts, maximizing the likelihood of a relevant, nonharmful output.
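The passage changed above lists two mitigation techniques for the system message and grounding layer. As an illustrative sketch only (not part of this commit or the underlying module), the two techniques — a system input defining behavioral parameters, and grounding data added via prompt engineering — might be assembled into a chat-style prompt like this; the function name and message wording are hypothetical:

```python
def build_grounded_prompt(user_question, grounding_docs):
    """Assemble a chat-style messages list applying both mitigations:
    a system message with behavioral parameters, plus grounding data."""
    # System input defining behavioral parameters for the model (hypothetical wording).
    system_message = (
        "You are a helpful assistant. Answer only from the provided sources. "
        "If the sources do not contain the answer, say you don't know."
    )
    # Prompt engineering: prepend grounding data so the model favors
    # relevant, nonharmful output drawn from known-good sources.
    grounding = "\n\n".join(
        f"[Source {i + 1}]\n{doc}" for i, doc in enumerate(grounding_docs)
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "system", "content": f"Sources:\n{grounding}"},
        {"role": "user", "content": user_question},
    ]

messages = build_grounded_prompt(
    "What does the measure stage involve?",
    ["Measurement quantifies the presence of potential harm in model output."],
)
```

The system messages stay separate from the user turn so the behavioral parameters and grounding cannot be overridden by user input alone.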
