Skip to content

Commit c73c279

Browse files
Update README.md
1 parent e1703d7 commit c73c279

File tree

1 file changed

+6
-2
lines changed

1 file changed

+6
-2
lines changed

trusted-ai/runtime-evaluations/README.md

Lines changed: 6 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -3,9 +3,13 @@
33

44
Monitoring AI models after deployment is essential to ensuring they behave safely and reliably in real-world conditions. Runtime monitoring generally consists of two key AI evaluation categories.
55

6-
**(1) AI Guardrails:** The first focuses on maintaining control over unwanted or unsafe model behavior, such as detecting and blocking jailbreak attempts, harmful outputs, or other forms of misuse, by implementing robust AI guardrails.
6+
**(1) AI Guardrails:**
77

8-
**(2) Continuous Monitoring:** The second focuses on tracking the model’s performance over time to detect issues like performance drift, which may arise as user input patterns, data distributions, or usage contexts evolve. Together, these continuous monitoring practices help teams identify deviations early, preserve model quality, and maintain trust in AI-driven systems.
8+
The first focuses on maintaining control over unwanted or unsafe model behavior, such as detecting and blocking jailbreak attempts, harmful outputs, or other forms of misuse, by implementing robust AI guardrails.
9+
10+
**(2) Continuous Monitoring:**
11+
12+
The second focuses on tracking the model’s performance over time to detect issues like performance drift, which may arise as user input patterns, data distributions, or usage contexts evolve. Together, these continuous monitoring practices help teams identify deviations early, preserve model quality, and maintain trust in AI-driven systems.
913

1014

1115
<img src="real-time-guardrails/images/Guardrails vs monitoring.png"

0 commit comments

Comments
 (0)