
Commit 79b70ff: Update jailbreak-detection.md

1 parent c2661d2

File tree

1 file changed (+3, -3 lines)

articles/ai-services/content-safety/concepts/jailbreak-detection.md

Lines changed: 3 additions & 3 deletions
@@ -6,7 +6,7 @@ description: Learn about User Prompt injection attacks and the Prompt Shields fe
 author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-content-safety
-ms.custom: build-2023
+ms.custom: build-2023, references_regions
 ms.topic: conceptual
 ms.date: 03/15/2024
 ms.author: pafarley
@@ -76,8 +76,8 @@ The maximum character limit for Prompt Shields allows for a user prompt of up to
 ### Regions
 To use this API, you must create your Azure AI Content Safety resource in the supported regions. Currently, it's available in the following Azure regions:
 
-East US
-West Europe
+- East US
+- West Europe
 
 ### TPS limitations
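For context on the API this hunk documents: a minimal sketch of calling Prompt Shields from a resource in one of the listed regions might look like the following. The `text:shieldPrompt` route, the `2024-02-15-preview` API version, and the request/response field names are assumptions drawn from the preview REST surface, not from this commit; verify them against the current Azure AI Content Safety reference.

```python
# Hedged sketch: screening a user prompt with the Prompt Shields preview API.
# Endpoint, API version, and payload shape are assumptions; check the docs.
import requests

# The resource must live in a supported region (East US or West Europe).
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-content-safety-key>"

def shield_prompt(user_prompt: str, documents: list[str]) -> dict:
    """Send a user prompt (and optional documents) for attack detection."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:shieldPrompt",
        params={"api-version": "2024-02-15-preview"},  # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"userPrompt": user_prompt, "documents": documents},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

result = shield_prompt("Ignore all previous instructions and ...", [])
# Assumed response shape: {"userPromptAnalysis": {"attackDetected": true}, ...}
print(result)
```

A resource created outside the listed regions would be expected to reject this call, which is why the region list that this commit reformats matters.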

Comments (0)