Skip to content

Commit da2c7b7

Browse files
Update README.md
1 parent c73c279 commit da2c7b7

File tree

1 file changed

+3
-3
lines changed

1 file changed

+3
-3
lines changed

trusted-ai/runtime-evaluations/README.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -3,13 +3,13 @@
33

44
Monitoring AI models after deployment is essential to ensuring they behave safely and reliably in real-world conditions. Runtime monitoring generally consists of two key AI evaluation categories.
55

6-
**(1) AI Guardrails:**
6+
**(1) Real-Time Guardrails:**
77

8-
The first focuses on maintaining control over unwanted or unsafe model behavior, such as detecting and blocking jailbreak attempts, harmful outputs, or other forms of misuse, by implementing robust AI guardrails.
8+
The first focuses on maintaining control over unwanted or unsafe model behavior, such as instantly detecting and blocking jailbreak attempts, harmful outputs, or other forms of misuse, by implementing robust real-time AI guardrails.
99

1010
**(2) Continuous Monitoring:**
1111

12-
The second focuses on tracking the model’s performance over time to detect issues like performance drift, which may arise as user input patterns, data distributions, or usage contexts evolve. Together, these continuous monitoring practices help teams identify deviations early, preserve model quality, and maintain trust in AI-driven systems.
12+
The second focuses on tracking a range of metrics over time to detect issues like model's performance drift, which may arise as user input patterns, data distributions, or usage contexts evolve. Together, these continuous monitoring practices help teams identify deviations early, preserve model quality, and maintain trust in AI-driven systems.
1313

1414

1515
<img src="real-time-guardrails/images/Guardrails vs monitoring.png"

0 commit comments

Comments
 (0)