The `llm-content-safety` policy enforces content safety checks on LLM requests (prompts) by transmitting them to the [Azure AI Content Safety](/azure/ai-services/content-safety/overview) service before sending to the backend LLM. When enabled and Azure AI Content Safety detects malicious content, API Management blocks the request and returns a `403` error code.
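From the caller's side, a blocked prompt surfaces as an HTTP `403` response from the gateway. The following Python sketch shows one way a client might distinguish that block from other failures; the endpoint URL, request payload shape, and error message are illustrative assumptions (`Ocp-Apim-Subscription-Key` is the standard API Management subscription header), not part of the policy itself:

```python
import json
import urllib.error
import urllib.request


def is_content_safety_block(status: int) -> bool:
    """API Management returns 403 when the llm-content-safety policy blocks a prompt."""
    return status == 403


def call_llm(url: str, prompt: str, api_key: str) -> str:
    """Send a chat prompt through an APIM-fronted LLM endpoint (URL and payload are hypothetical)."""
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": api_key,
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if is_content_safety_block(err.code):
            # The prompt was rejected before it ever reached the backend LLM.
            raise PermissionError("Prompt blocked by the llm-content-safety policy") from err
        raise
```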
Use the policy in scenarios such as the following:
| Attribute | Description | Required | Default |
| --------- | ----------- | -------- | ------- |
| name | Specifies the name of this category. The attribute must have one of the following values: `Hate`, `SelfHarm`, `Sexual`, `Violence`. Policy expressions are allowed. | Yes | N/A |
| threshold | Specifies the threshold value for this category at which requests are blocked. Requests with content severities less than the threshold aren't blocked. The value must be between 0 and 7. Policy expressions are allowed. | Yes | N/A |
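As a minimal illustration of the threshold semantics above (not the service's actual implementation), a request is blocked when any category's detected severity meets or exceeds that category's configured threshold:

```python
def is_blocked(severities: dict[str, int], thresholds: dict[str, int]) -> bool:
    """Block when any category's detected severity (0-7) meets or exceeds its threshold."""
    return any(
        severities.get(category, 0) >= threshold
        for category, threshold in thresholds.items()
    )


# With Hate and Violence thresholds of 4:
thresholds = {"Hate": 4, "Violence": 4}
print(is_blocked({"Hate": 5, "Violence": 0}, thresholds))  # True: 5 >= 4
print(is_blocked({"Hate": 3, "Violence": 3}, thresholds))  # False: both severities below 4
```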
## Usage
## Example
The following example enforces content safety checks on LLM requests using the Azure AI Content Safety service. The policy blocks requests that contain speech in the `Hate` or `Violence` category with a severity level of 4 or higher. The `shield-prompt` attribute is set to `true` to check for adversarial attacks.
```xml
<policies>
    <inbound>
        <llm-content-safety backend-id="content-safety-backend" shield-prompt="true">
            <categories output-type="EightSeverityLevels">
                <category name="Hate" threshold="4" />
                <category name="Violence" threshold="4" />
            </categories>
        </llm-content-safety>
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```