
Commit 532843b

committed
added alerts for AI
1 parent 76e0629 commit 532843b

File tree

1 file changed: 35 additions, 1 deletion


articles/defender-for-cloud/alerts-reference.md

Lines changed: 35 additions & 1 deletion
@@ -3,7 +3,7 @@ title: Reference table for all security alerts
 description: This article lists the security alerts visible in Microsoft Defender for Cloud.
 ms.topic: reference
 ms.custom: linux-related-content
-ms.date: 03/17/2024
+ms.date: 05/01/2024
 ai-usage: ai-assisted
 ---

@@ -4367,6 +4367,40 @@ Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen
 
 **Severity**: Medium
 
+## Alerts for AI workloads
+
+### Detected credential theft attempts on an Azure OpenAI model deployment
+
+**Description**: The credential theft alert is designed to notify the SOC when credentials are detected within GenAI model responses to a user prompt, indicating a potential breach. This alert is crucial for detecting cases of credential leak or theft, which are unique to generative AI and can have severe consequences if successful.
+
+**[MITRE tactics](#mitre-attck-tactics)**: Credential Access, Lateral Movement, Exfiltration
+
+**Severity**: Medium
+
+### A Jailbreak attempt on an Azure OpenAI model deployment was blocked by Prompt Shields
+
+**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC that there was an attempt to manipulate the system prompt to bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicates that such attempts were blocked by Azure Responsible AI Content Filtering (also known as Prompt Shields), ensuring the integrity of the AI resources and the security of the data.
+
+**[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion
+
+**Severity**: Medium
+
+### A Jailbreak attempt on an Azure OpenAI model deployment was detected by Prompt Shields
+
+**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC that there was an attempt to manipulate the system prompt to bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicates that such attempts were detected by Azure Responsible AI Content Filtering (also known as Prompt Shields), but were not blocked due to content filtering settings or due to low confidence.
+
+**[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion
+
+**Severity**: Medium
+
+### Sensitive data exposure detected in an Azure OpenAI model deployment
+
+**Description**: The sensitive data leakage alert is designed to notify the SOC that a GenAI model responded to a user prompt with sensitive information, potentially due to a malicious user attempting to bypass the generative AI's safeguards to access unauthorized sensitive data.
+
+**[MITRE tactics](#mitre-attck-tactics)**: Collection
+
+**Severity**: Medium
+
 ## Deprecated Defender for Containers alerts
 
 The following lists include the Defender for Containers security alerts that were deprecated.
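Each of the new AI-workload entries follows the same shape: a display name, one or more MITRE ATT&CK tactics, and a severity. A minimal, hypothetical Python sketch of grouping such alerts by tactic for a SOC triage view; the field names `alertDisplayName` and `tactics` are assumptions for illustration, not the exact Defender for Cloud export schema:

```python
# Group security alerts by MITRE ATT&CK tactic for triage.
# NOTE: the dict shape below is a simplified, assumed export format;
# it mirrors the fields documented in this reference, not a real API schema.
from collections import defaultdict

alerts = [
    {
        "alertDisplayName": "Detected credential theft attempts on an Azure OpenAI model deployment",
        "tactics": ["Credential Access", "Lateral Movement", "Exfiltration"],
        "severity": "Medium",
    },
    {
        "alertDisplayName": "Sensitive data exposure detected in an Azure OpenAI model deployment",
        "tactics": ["Collection"],
        "severity": "Medium",
    },
]

def group_by_tactic(alerts):
    """Return a {tactic: [alert display names]} mapping."""
    grouped = defaultdict(list)
    for alert in alerts:
        for tactic in alert["tactics"]:
            grouped[tactic].append(alert["alertDisplayName"])
    return dict(grouped)

by_tactic = group_by_tactic(alerts)
for tactic in sorted(by_tactic):
    print(f"{tactic}: {len(by_tactic[tactic])} alert(s)")
```

Because an alert can map to several tactics (as the credential-theft alert does), the same alert intentionally appears under each of its tactics rather than being counted once.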
