
Commit 40f1b82

changliu2 and lgayhardt authored
Update articles/ai-foundry/concepts/model-benchmarks.md
Co-authored-by: Lauryn Gayhardt <[email protected]>
1 parent 45b790c commit 40f1b82

File tree

1 file changed: +1 −1 lines changed


articles/ai-foundry/concepts/model-benchmarks.md

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ To guide the selection of safety benchmarks for evaluation, we apply a structure
 | Toxigen | Ability to detect toxic content |
 
 ### Model harmful behaviors
-The [HarmBench](https://github.com/centerforaisafety/HarmBench) benchmark measures model harmful behaviors and includes prompts to illicit harmful behaviour from model. As it relates to safety, the benchmark covers 7 semantic categories of behaviour:
+The [HarmBench](https://github.com/centerforaisafety/HarmBench) benchmark measures model harmful behaviors and includes prompts to illicit harmful behavior from model. As it relates to safety, the benchmark covers 7 semantic categories of behavior:
 - Cybercrime & Unauthorized Intrusion
 - Chemical & Biological Weapons/Drugs
 - Copyright Violations
