|
| 1 | +--- |
| 2 | +title: Alerts for AI workloads |
| 3 | +description: This article lists the security alerts for AI workloads visible in Microsoft Defender for Cloud. |
| 4 | +ms.topic: reference |
| 5 | +ms.custom: linux-related-content |
| 6 | +ms.date: 06/03/2024 |
| 7 | +ai-usage: ai-assisted |
| 8 | +--- |
| 9 | + |
| 10 | +# Alerts for AI workloads |
| 11 | + |
| 12 | +This article lists the security alerts you might get for AI workloads from Microsoft Defender for Cloud and any Microsoft Defender plans you enabled. The alerts shown in your environment depend on the resources and services you're protecting, and your customized configuration. |
| 13 | + |
| 14 | +> [!NOTE] |
| 15 | +> Some of the recently added alerts powered by Microsoft Defender Threat Intelligence and Microsoft Defender for Endpoint might be undocumented. |
| 16 | +
|
| 17 | +[Learn how to respond to these alerts](managing-and-responding-alerts.yml). |
| 18 | + |
| 19 | +[Learn how to export alerts](continuous-export.md). |
| 20 | + |
| 21 | +> [!NOTE] |
| 22 | +> Alerts from different sources might take different amounts of time to appear. For example, alerts that require analysis of network traffic might take longer to appear than alerts related to suspicious processes running on virtual machines. |
| 23 | +
|
| 24 | +## AI workload alerts |
| 25 | + |
| 26 | +### Detected credential theft attempts on an Azure OpenAI model deployment |
| 27 | + |
| 28 | +(AI.Azure_CredentialTheftAttempt) |
| 29 | + |
| 30 | +**Description**: The credential theft alert is designed to notify the SOC when credentials are detected within GenAI model responses to a user prompt, indicating a potential breach. This alert is crucial for detecting cases of credential leak or theft, which are unique to generative AI and can have severe consequences if successful. |
| 31 | + |
| 32 | +**[MITRE tactics](alerts-reference.md#mitre-attck-tactics)**: Credential Access, Lateral Movement, Exfiltration |
| 33 | + |
| 34 | +**Severity**: Medium |
| 35 | + |
| 36 | +### A Jailbreak attempt on an Azure OpenAI model deployment was blocked by Azure AI Content Safety Prompt Shields |
| 37 | + |
| 38 | +(AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt) |
| 39 | + |
| 40 | +**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Safety (AKA Prompt Shields), ensuring the integrity of the AI resources and the data security. |
| 41 | + |
| 42 | +**[MITRE tactics](alerts-reference.md#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion |
| 43 | + |
| 44 | +**Severity**: Medium |
| 45 | + |
| 46 | +### A Jailbreak attempt on an Azure OpenAI model deployment was detected by Azure AI Content Safety Prompt Shields |
| 47 | + |
| 48 | +(AI.Azure_Jailbreak.ContentFiltering.DetectedAttempt) |
| 49 | + |
| 50 | +**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Safety (AKA Prompt Shields), but were not blocked due to content filtering settings or due to low confidence. |
| 51 | + |
| 52 | +**[MITRE tactics](alerts-reference.md#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion |
| 53 | + |
| 54 | +**Severity**: Medium |
| 55 | + |
| 56 | +### Sensitive Data Exposure Detected in Azure OpenAI Model Deployment |
| 57 | + |
| 58 | +(AI.Azure_DataLeakInModelResponse.Sensitive) |
| 59 | + |
| 60 | +**Description**: The sensitive data leakage alert is designed to notify the SOC that a GenAI model responded to a user prompt with sensitive information, potentially due to a malicious user attempting to bypass the generative AI’s safeguards to access unauthorized sensitive data. |
| 61 | + |
| 62 | +**[MITRE tactics](alerts-reference.md#mitre-attck-tactics)**: Collection |
| 63 | + |
| 64 | +**Severity**: Medium |
| 65 | + |
| 66 | +> [!NOTE] |
| 67 | +> For alerts that are in preview: [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
| 68 | +
|
| 69 | +## Next steps |
| 70 | + |
| 71 | +- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) |
| 72 | +- [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.yml) |
| 73 | +- [Continuously export Defender for Cloud data](continuous-export.md) |
0 commit comments