Commit c2526a8

blocking issues fixed

1 parent: b92b77d

2 files changed: +2 −2 lines

articles/defender-for-cloud/alerts-reference.md: 2 additions & 2 deletions
```diff
@@ -4379,15 +4379,15 @@ Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen
 
 ### A Jailbreak attempt on an Azure Open AI model deployment was blocked by Prompt Shields
 
-**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Filtering (AKA Prompt Sheilds), ensuring the integrity of the AI resources and the data security.
+**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Filtering (AKA Prompt Shields), ensuring the integrity of the AI resources and the data security.
 
 **[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion
 
 **Severity**: Medium
 
 ### A Jailbreak attempt on an Azure Open AI model deployment was detected by Prompt Shields
 
-**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Filtering (AKA Prompt Sheilds), but were not blocked due to content filtering settings or due to low confidence.
+**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Filtering (AKA Prompt Shields), but were not blocked due to content filtering settings or due to low confidence.
 
 **[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion
 
```
(Binary file changed: 0 Bytes)
0 commit comments