Commit 77ae166

Merge pull request #122257 from jinruishao/patch-26

Update jailbreak-detection.md

2 parents 0d86abb + 79b70ff

File tree: 1 file changed (+7, −1)

articles/ai-services/content-safety/concepts/jailbreak-detection.md

Lines changed: 7 additions & 1 deletion
@@ -6,7 +6,7 @@ description: Learn about User Prompt injection attacks and the Prompt Shields fe
 author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
-ms.custom: build-2023
+ms.custom: build-2023, references_regions
 ms.topic: conceptual
 ms.date: 03/15/2024
 ms.author: pafarley
@@ -73,6 +73,12 @@ Currently, the Prompt Shields API supports the English language. While our API d
 
 The maximum character limit for Prompt Shields allows for a user prompt of up to 10,000 characters, while the document array is restricted to a maximum of 5 documents with a combined total not exceeding 10,000 characters.
 
+### Regions
+To use this API, you must create your Azure AI Content Safety resource in the supported regions. Currently, it's available in the following Azure regions:
+
+- East US
+- West Europe
+
 ### TPS limitations
 
 | Pricing Tier | Requests per 10 seconds |
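
Since the added text pins Prompt Shields to specific regions and input limits, a short client-side sketch may help show how a caller would respect both. This is a minimal illustration, not part of the commit: the text:shieldPrompt route and the 2024-02-15-preview API version are assumptions based on the preview REST surface from around this doc's date and should be verified against the current reference; the limit checks simply mirror the numbers stated in the diff.

# Minimal sketch (assumptions noted above): call Prompt Shields on a
# Content Safety resource created in a supported region, enforcing the
# documented input limits before sending the request.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # resource in East US or West Europe
API_VERSION = "2024-02-15-preview"  # assumed preview API version; verify before use
KEY = "<your-content-safety-key>"

def shield_prompt(user_prompt, documents):
    # Limits from the added doc text: prompt up to 10,000 characters;
    # at most 5 documents with a combined total not exceeding 10,000 characters.
    if len(user_prompt) > 10_000:
        raise ValueError("user prompt exceeds 10,000 characters")
    if len(documents) > 5:
        raise ValueError("more than 5 documents")
    if sum(len(d) for d in documents) > 10_000:
        raise ValueError("combined document length exceeds 10,000 characters")

    url = f"{ENDPOINT}/contentsafety/text:shieldPrompt?api-version={API_VERSION}"
    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
    body = {"userPrompt": user_prompt, "documents": documents}
    response = requests.post(url, headers=headers, json=body, timeout=10)
    response.raise_for_status()
    return response.json()

The response is expected to report whether an injection attack was detected for the user prompt and for each supplied document; treat the exact field names as something to confirm in the API reference rather than as guaranteed by this sketch.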
