You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: articles/defender-for-cloud/alerts-reference.md
+14-6Lines changed: 14 additions & 6 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -4382,31 +4382,39 @@ Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen
4382
4382
4383
4383
## Alerts for AI workloads
4384
4384
4385
-
### Detected credential theft attempts on an Azure Open AI model deployment
4385
+
### Detected credential theft attempts on an Azure Open AI model deployment
4386
+
4387
+
(AI.Azure_CredentialTheftAttempt)
4386
4388
4387
4389
**Description**: The credential theft alert is designed to notify the SOC when credentials are detected within GenAI model responses to a user prompt, indicating a potential breach. This alert is crucial for detecting cases of credential leak or theft, which are unique to generative AI and can have severe consequences if successful.
**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Filtering (AKA Prompt Shields), ensuring the integrity of the AI resources and the data security.
4399
+
**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Safety (AKA Prompt Shields), ensuring the integrity of the AI resources and the data security.
### A Jailbreak attempt on an Azure Open AI model deployment was detected by Prompt Shields
4405
+
### A Jailbreak attempt on an Azure Open AI model deployment was detected by Azure AI Content Safety Prompt Shields
4402
4406
4403
-
**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Filtering (AKA Prompt Shields), but were not blocked due to content filtering settings or due to low confidence.
**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Safety (AKA Prompt Shields), but were not blocked due to content filtering settings or due to low confidence.
### Sensitive Data Exposure Detected in Azure Open AI Model Deployment
4415
+
### Sensitive Data Exposure Detected in Azure Open AI Model Deployment
4416
+
4417
+
(AI.Azure_DataLeakInModelResponse.Sensitive)
4410
4418
4411
4419
**Description**: The sensitive data leakage alert is designed to notify the SOC that a GenAI model responded to a user prompt with sensitive information, potentially due to a malicious user attempting to bypass the generative AI’s safeguards to access unauthorized sensitive data.
0 commit comments