Commit 0e36cc2

Learn Editor: Update alerts-reference.md
1 parent 4b2c47c commit 0e36cc2


articles/defender-for-cloud/alerts-reference.md

Lines changed: 14 additions & 6 deletions
@@ -4382,31 +4382,39 @@ Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen

## Alerts for AI workloads

### Detected credential theft attempts on an Azure Open AI model deployment

(AI.Azure_CredentialTheftAttempt)

**Description**: The credential theft alert is designed to notify the SOC when credentials are detected within GenAI model responses to a user prompt, indicating a potential breach. This alert is crucial for detecting cases of credential leak or theft, which are unique to generative AI and can have severe consequences if successful.

**[MITRE tactics](#mitre-attck-tactics)**: Credential Access, Lateral Movement, Exfiltration

**Severity**: Medium

### A Jailbreak attempt on an Azure Open AI model deployment was blocked by Azure AI Content Safety Prompt Shields

(AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt)

**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC that there was an attempt to manipulate the system prompt to bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicates that such attempts were blocked by Azure Responsible AI Content Safety (also known as Prompt Shields), ensuring the integrity of the AI resources and data security.

**[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion

**Severity**: Medium

### A Jailbreak attempt on an Azure Open AI model deployment was detected by Azure AI Content Safety Prompt Shields

(AI.Azure_Jailbreak.ContentFiltering.DetectedAttempt)

**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC that there was an attempt to manipulate the system prompt to bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicates that such attempts were detected by Azure Responsible AI Content Safety (also known as Prompt Shields) but were not blocked, due to content filtering settings or low confidence.

**[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion

**Severity**: Medium

### Sensitive Data Exposure Detected in Azure Open AI Model Deployment

(AI.Azure_DataLeakInModelResponse.Sensitive)

**Description**: The sensitive data leakage alert is designed to notify the SOC that a GenAI model responded to a user prompt with sensitive information, potentially due to a malicious user attempting to bypass the generative AI's safeguards to access unauthorized sensitive data.
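The parenthesized identifiers added in this commit (for example `AI.Azure_CredentialTheftAttempt`) are the values a SOC would match on when triaging alerts programmatically. As a minimal sketch, assuming alerts have already been exported as JSON-like objects with an `alertType` field (the field name and the sample records are assumptions for illustration; only the four identifier values come from the reference above), filtering down to the AI-workload alerts could look like this:

```python
# Minimal sketch: filter a list of exported security alerts down to the
# AI-workload alert types documented above. The sample alerts below are
# invented; only the alertType identifiers come from the reference.

AI_ALERT_TYPES = {
    "AI.Azure_CredentialTheftAttempt",
    "AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt",
    "AI.Azure_Jailbreak.ContentFiltering.DetectedAttempt",
    "AI.Azure_DataLeakInModelResponse.Sensitive",
}

def filter_ai_alerts(alerts):
    """Return only the alerts whose alertType is one of the AI-workload types."""
    return [a for a in alerts if a.get("alertType") in AI_ALERT_TYPES]

sample = [
    {"alertType": "AI.Azure_CredentialTheftAttempt", "severity": "Medium"},
    {"alertType": "VM_SuspiciousActivity", "severity": "High"},  # hypothetical unrelated alert
]
print([a["alertType"] for a in filter_ai_alerts(sample)])
# → ['AI.Azure_CredentialTheftAttempt']
```

Matching on the stable identifier rather than the display name means the filter keeps working when a heading is reworded, as happened to the two Prompt Shields alerts in this very commit.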
0 commit comments
