
Commit c4ced2e

dspm for ai module
1 parent 43c2a6b commit c4ced2e

16 files changed (+235, -28 lines)

learn-pr/wwl-sci/purview-identify-mitigate-ai-risks/configure-dspm-ai.yml

Lines changed: 1 addition & 1 deletion
@@ -10,6 +10,6 @@ metadata:
   ms.topic: unit
 azureSandbox: false
 labModal: false
-durationInMinutes: 4
+durationInMinutes: 8
 content: |
   [!include[](includes/configure-dspm-ai.md)]

learn-pr/wwl-sci/purview-identify-mitigate-ai-risks/data-assessments.yml

Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
### YamlMime:ModuleUnit
uid: learn.wwl.purview-identify-mitigate-ai-risks.data-assessments
title: Use Data assessments (preview) to detect oversharing risks
metadata:
  title: Use Data assessments (preview) to detect oversharing risks
  description: "Use Data assessments (preview) to detect oversharing risks."
  ms.date: 2/6/2025
  author: wwlpublish
  ms.author: riswinto
  ms.topic: unit
azureSandbox: false
labModal: false
durationInMinutes: 8
content: |
  [!include[](includes/data-assessments.md)]

learn-pr/wwl-sci/purview-identify-mitigate-ai-risks/includes/configure-dspm-ai.md

Lines changed: 58 additions & 22 deletions
@@ -8,37 +8,73 @@ Before configuring DSPM for AI, check that your environment meets these requirements
 - **[Verify Microsoft Purview Audit is enabled](/purview/audit-log-enable-disable?tabs=microsoft-purview-portal#verify-the-auditing-status-for-your-organization)**: Auditing is on by default for new tenants, but it's a good idea to verify.
 - **[Assign Copilot Licenses](/copilot/microsoft-365/microsoft-365-copilot-enable-users#assign-licenses)**: Users should be assigned Microsoft 365 Copilot licenses for activity tracking.
 - **[Onboard Devices to Microsoft Purview](/purview/device-onboarding-overview)**: Devices need to be onboarded to Microsoft Purview to track AI interactions.
-- **[Install the Microsoft Purview Browser Extension](/purview/insider-risk-management-browser-support#configure-browser-signal-detection-for-microsoft-edge)**: The Microsoft Purview browser extension is required to monitor third-party AI site visits.
+- **[Install the Microsoft Purview Browser Extension](/purview/insider-risk-management-browser-support#configure-browser-signal-detection-for-microsoft-edge)**: The Microsoft Purview browser extension is required to monitor non-Microsoft AI site visits.
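
For the audit prerequisite above, the auditing status can also be checked outside the portal with Exchange Online PowerShell. A minimal sketch, assuming the ExchangeOnlineManagement module is installed and the account holds an Exchange Online admin role:

```powershell
# Connect to Exchange Online (prompts for sign-in)
Connect-ExchangeOnline

# True means unified audit logging (Microsoft Purview Audit) is recording activity
Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled

# If the value is False, turn auditing on; it can take a few hours to start flowing
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
```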

 ## Steps to configure DSPM for AI

 After completing the prerequisites, configure DSPM for AI in Microsoft Purview. This process includes enabling built-in policies, running data assessments, and verifying that AI-related security controls are in place.

-1. Access DSPM for AI
+### Step 1: Set up DSPM for AI

-   - Sign in to the Microsoft Purview portal.
-   - Navigate to Solutions > DSPM for AI.
+1. Sign in to the [Microsoft Purview portal](https://purview.microsoft.com/).
+1. Navigate to **Solutions** > **DSPM for AI**.
+1. From the **Overview** page, go to **Get started** to complete the required setup tasks.
+1. Verify that **Microsoft Purview Audit** is enabled to track AI interactions.
+1. Install the **Microsoft Purview browser extension** to detect AI-related activity.
+1. **Onboard devices to Microsoft Purview** to monitor AI interactions.
+1. Enable **Extend your insights for data discovery** to create policies that detect risky AI usage, track AI site visits, and identify when users paste sensitive data into AI apps.

-1. Review the Get Started Section
+:::image type="content" source="../media/dspm-ai-get-started.png" alt-text="Screenshot of the DSPM for AI interface in Microsoft Purview, showing the Get started checklist with required setup steps." lightbox="../media/dspm-ai-get-started.png":::
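
The device onboarding step can be spot-checked on an individual Windows device. A small sketch, assuming the device is onboarded through the Microsoft Defender for Endpoint agent that Purview device onboarding relies on; the registry path reflects that assumption:

```powershell
# Read the onboarding state written by the endpoint agent (1 = onboarded)
$status = Get-ItemProperty `
    -Path 'HKLM:\SOFTWARE\Microsoft\Windows Advanced Threat Protection\Status' `
    -ErrorAction SilentlyContinue

if ($status.OnboardingState -eq 1) {
    Write-Output 'Device is onboarded for Microsoft Purview endpoint monitoring.'
} else {
    Write-Output 'Device is not onboarded; review the device onboarding guidance.'
}
```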

-   - From the Overview page, review Get Started for initial actions.
-   - Confirm that Audit Logging is enabled.
-   - Enable Extend Insights for Data Discovery to track AI-generated content.
-   - Activate One-Click Policies to apply built-in security controls.
-1. Activate Preconfigured Policies
+### Step 2: Review and configure recommendations and policies

-   - Go to Policies in the Microsoft Purview portal.
-   - Review available AI security policies.
-   - Enable recommended policies to detect sensitive data exposure and AI activity.
-   - If needed, edit the policy scope before activation to apply policies only to specific users or groups instead of the entire organization.
-   - Allow up to 24 hours for policies to take effect.
+Microsoft Purview provides AI security recommendations that help organizations protect sensitive data and monitor AI interactions. These recommendations include preconfigured policies (one-click policies) or suggested actions that require manual review.

-Once activated, policies begin tracking AI interactions based on configured rules. Results appear in DSPM reports and Activity Explorer after data processing. If a policy is deleted, it remains visible with a PendingDeletion status until fully removed.
+#### How to use recommendations

-1. Run Data Assessments
+1. Go to **Recommendations** in the Microsoft Purview portal.
+1. Review the available AI security recommendations and their status.
+1. Select a recommendation to:

-   - DSPM for AI automatically runs weekly assessments on the top 100 SharePoint sites used by Copilot.
-   - To create a custom assessment:
-     - Go to Data Assessments (Preview) in Microsoft Purview.
-     - Select Create Assessment and choose users and data sources to scan.
-     - Run the assessment and allow up to 48 hours for results to appear.
+   - **Create a policy**: Instantly apply a one-click policy with built-in security settings.
+   - **View the recommendation**: Assess and manually take action based on guidance.
+
+:::image type="content" source="../media/dspm-ai-recommendations.png" alt-text="Screenshot of the Recommendations page in Microsoft Purview, showing a list of AI security recommendations categorized as Not Started, Dismissed, or Completed." lightbox="../media/dspm-ai-recommendations.png":::
+
+> [!NOTE]
+> Recommendations that provide one-click policies include a **Create policy** button, while manual recommendations require reviewing and taking action based on the provided guidance.
+
+#### Types of AI security recommendations
+
+Recommendations are grouped into categories such as **Data Security**, **Data Discovery**, or **AI Regulations**. When selecting a recommendation, DSPM for AI provides either:
+
+- A preconfigured policy that can be activated immediately (one-click policy)
+- Guidance on security measures that require manual implementation
+
+**Recommendations in DSPM for AI**:
+
+| Recommendation | Type | Description |
+|-----|-----|-----|
+| Fortify your data security | Data security | Uses Adaptive Protection to apply a block-with-override rule for high-risk users interacting with AI sites. |
+| Control unethical behavior in AI | Insight into communications | Creates a policy to detect unethical behavior in Microsoft 365 Copilot. Alerts are generated in Communication Compliance. |
+| Guided assistance to AI regulations | AI regulations | Provides guidance on regulatory compliance for AI interactions. |
+| Protect sensitive data referenced in Copilot responses | Data security | Runs a data assessment to identify oversharing risks in Copilot interactions. |
+| Discover and govern interactions with ChatGPT Enterprise AI (Preview) | Data discovery | Requires setting up a connector in Purview to track ChatGPT Enterprise interactions. |
+| Protect sensitive data referenced in Microsoft 365 Copilot (Preview) | Data security | Creates a data loss prevention policy to prevent Copilot from processing labeled content. |
+| Protect your data from potential oversharing risks | Data security | Provides insights into oversharing risks based on a weekly scan. |
+| Use Copilot to improve your data security posture (Preview) | Data security | Uses Security Copilot to investigate alerts and analyze security risks. |
+| Information Protection Policy for Sensitivity Labels | Data security | Sets up default sensitivity labels to preserve document access rights and protect Copilot output. |
+
+#### Understand recommendation status
+
+Each recommendation falls into one of three categories:
+
+- **Not Started**: Recommendations that haven't been acted on.
+- **Dismissed**: Recommendations that were reviewed but not applied.
+- **Completed**: Recommendations that have been fully implemented.
+
+#### Policy activation timeline
+
+Policies take up to 24 hours to take effect. Once activated, they track AI interactions based on configured rules, with results appearing in DSPM reports and Activity Explorer after data processing. Deleted policies remain visible with a **PendingDeletion** status until fully removed.
+
+After configuring DSPM for AI, use Microsoft Purview reports and data assessments to evaluate AI interactions and identify potential risks. Reports provide insights into policy enforcement, AI data exposure, and compliance status, while data assessments help detect oversharing risks before they affect security.

learn-pr/wwl-sci/purview-identify-mitigate-ai-risks/includes/data-assessments.md

Lines changed: 72 additions & 0 deletions

@@ -0,0 +1,72 @@
Microsoft 365 Copilot and other AI tools can surface misclassified, over-permissioned, or outdated content, increasing the likelihood of unintentional data exposure. By running data assessments, organizations can identify these risks early, apply appropriate protections, and ensure compliance with internal policies and regulatory requirements.

## Default data assessments

Microsoft Purview Data Security Posture Management (DSPM) for AI automatically runs a weekly assessment on the top 100 SharePoint sites used by Microsoft 365 Copilot. This built-in assessment helps organizations identify high-risk data exposure without manual configuration.

To review the latest weekly assessment:

1. Navigate to **DSPM for AI** in the [Microsoft Purview portal](https://purview.microsoft.com/).
1. Select **Assessments** from the navigation pane.
1. Open the **Oversharing Assessment for the week of <month, year>**.
1. Review key findings, including:
   - Number of sensitive files accessed
   - Frequency of access
   - External sharing risks

:::image type="content" source="../media/data-assessment-oversharing.png" alt-text="Screenshot of the Oversharing assessments page in Microsoft Purview, showing details on total items, sensitivity labels, and data with sharing links." lightbox="../media/data-assessment-oversharing.png":::

The weekly assessment helps identify trends in data exposure, allowing organizations to detect misconfigured access settings, overly permissive sharing, or files that contain sensitive data but lack proper classification. Reviewing these results regularly ensures that security policies are informed by actual risks rather than assumptions.

For a deeper analysis of specific users, sites, or data sources, security teams can run custom assessments tailored to their needs.

## Run a custom data assessment

Organizations might need to scan beyond the default assessment to evaluate AI security risks across different users, sites, or content types. Custom data assessments allow security teams to define the scope of their analysis.

To create and run a custom assessment:

1. Navigate to **DSPM for AI** > **Data assessments**.
1. Select **Create assessment**.
1. On the **Basic details** page:
   - Enter an **Assessment name**.
   - Provide an optional **Description** to define the purpose of the assessment.
1. On the **Add users** page, choose whether to **Include all users** or **Include specific users or groups**.
1. On the **Data sources** page, select the SharePoint sites or other data sources you want to scan.
1. On the **Review and run the data assessment scan** page, select **Save and run** to run the custom assessment.

Assessments can take up to 48 hours to complete. After the assessment completes, review the findings in the **Protect** and **Monitor** tabs to determine the appropriate security actions.

## Review and act on assessment results

After a data assessment runs, security teams can analyze the results and take action using the **Protect** and **Monitor** tabs. These tabs provide insights into how sensitive data is being accessed and shared, and offer remediation options to reduce oversharing risks.

### Protect tab - Apply security controls

The **Protect** tab helps security teams limit access to high-risk data and enforce compliance measures. Recommended actions include:

- **Restrict access by label**: Use Microsoft Purview Data Loss Prevention (DLP) to prevent Microsoft 365 Copilot from summarizing data that has specific sensitivity labels. For more information about how this works and supported scenarios, see [Learn about the Microsoft 365 Copilot policy location](/purview/dlp-microsoft365-copilot-location-learn-about).

- **Restrict all items**: Use [SharePoint Restricted Content Discoverability](/sharepoint/restricted-content-discovery) to prevent Microsoft 365 Copilot from indexing specified SharePoint sites.

:::image type="content" source="../media/data-assessment-dlp-restrict-items.png" alt-text="Screenshot showing the options in the Protect tab in Data assessments to restrict access to sensitive data." lightbox="../media/data-assessment-dlp-restrict-items.png":::

- **Apply auto-labeling policies**: [Automatically apply sensitivity labels](/purview/apply-sensitivity-label-automatically#how-to-configure-auto-labeling-policies-for-sharepoint-onedrive-and-exchange) to unlabeled files containing sensitive information.

- **Enforce retention policies**: Use [Microsoft Purview Data Lifecycle Management retention policies](/purview/create-retention-policies?tabs=teams-retention) to delete content that hasn't been accessed for at least three years.

:::image type="content" source="../media/data-assessment-apply-label.png" alt-text="Screenshot showing the options in the Protect tab in Data assessments to manage sensitivity labels and policies for a specific SharePoint site." lightbox="../media/data-assessment-apply-label.png":::
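
For teams that prefer to script these remediations, equivalent controls can be created from Security & Compliance PowerShell. A hedged sketch, assuming a Connect-IPPSSession connection; the site URL, label name, and policy names are placeholders, and the portal flows linked above remain the documented path:

```powershell
# Connect to Security & Compliance PowerShell
Connect-IPPSSession

# Retention: delete SharePoint content not modified in roughly three years
# (retention rules key off created/modified dates, used here as a proxy for "not accessed")
New-RetentionCompliancePolicy -Name 'Delete stale AI-exposed content' `
    -SharePointLocation 'https://contoso.sharepoint.com/sites/Finance' -Enabled $true
New-RetentionComplianceRule -Name 'Delete after 3 years unmodified' `
    -Policy 'Delete stale AI-exposed content' `
    -ExpirationDateOption ModificationAgeInDays `
    -RetentionDuration 1095 `
    -RetentionComplianceAction Delete

# Auto-labeling: label files that contain credit card numbers, starting in simulation mode
New-AutoSensitivityLabelPolicy -Name 'Auto-label financial data' `
    -SharePointLocation 'https://contoso.sharepoint.com/sites/Finance' `
    -ApplySensitivityLabel 'Confidential' `
    -Mode TestWithoutNotifications
New-AutoSensitivityLabelRule -Name 'Credit card content' `
    -Policy 'Auto-label financial data' `
    -ContentContainsSensitiveInformation @{Name = 'Credit Card Number'; minCount = '1'}
```

Switching the auto-labeling policy to Enable only after reviewing simulation results keeps false positives out of production labeling.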

### Monitor tab - Review sharing and access risks

The **Monitor** tab provides visibility into how data is shared and accessed across the organization. It includes tools for reviewing and managing access:

- **Run a SharePoint site access review**: Identify and assess sites that are shared broadly or externally. IT administrators can delegate access reviews to site owners.
- **Run an identity access review**: Review group memberships, enterprise application access, and role assignments in Microsoft Entra ID to ensure only the right users maintain access.

:::image type="content" source="../media/data-assessment-monitor.png" alt-text="Screenshot showing the options in the Monitor tab in Data assessments to Run a site access review and Run an identity access review." lightbox="../media/data-assessment-monitor.png":::
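
Before delegating formal site access reviews, a quick inventory of which sites allow external sharing can help scope them. A small sketch, assuming the SharePoint Online Management Shell is installed; the admin center URL is a placeholder:

```powershell
# Connect to the SharePoint admin center (replace with your tenant's admin URL)
Connect-SPOService -Url 'https://contoso-admin.sharepoint.com'

# List sites where any form of external sharing is allowed
Get-SPOSite -Limit All |
    Where-Object { $_.SharingCapability -ne 'Disabled' } |
    Select-Object Url, Title, SharingCapability |
    Sort-Object SharingCapability
```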

By regularly reviewing assessment results in both the **Protect** and **Monitor** tabs, organizations can enforce security policies, reduce oversharing risks, and ensure compliance with data protection requirements.

Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
After configuring Data Security Posture Management (DSPM) for AI, the next step is to monitor AI activity, assess security risks, and review reports to ensure that AI interactions comply with organizational policies. Microsoft Purview provides insights into how AI is used, identifies potential data security issues, and enables organizations to take action when necessary.

## Reports in DSPM for AI

The **Reports** section in DSPM for AI provides insights into AI interactions, sensitive data exposure, and security risks. These reports help organizations assess AI activity, identify potential compliance issues, and take proactive steps to protect sensitive information.

To view AI activity insights:

1. Sign in to the [Microsoft Purview portal](https://purview.microsoft.com/).
1. Navigate to **Solutions** > **DSPM for AI**.
1. Go to **Reports** to review AI-related activity, trends, and risk assessments.

## Understand report categories and their insights

Each report helps organizations understand AI usage and risks in different ways. The reports are grouped into three sections: **Activity**, **Data**, and **User Risk**.

### Activity reports

**Activity** reports provide an overview of AI usage patterns across the organization. These reports help security teams track adoption trends, detect unusual activity, and determine whether AI interactions align with company policies. Understanding AI usage is crucial for assessing risks and identifying whether extra safeguards are needed.

The reports available here include:

- **Total interactions over time (Microsoft Copilot & enterprise AI apps)**: Tracks the number of AI interactions within Microsoft 365 Copilot and non-Microsoft AI tools. This report helps organizations monitor AI adoption and identify patterns that might require further investigation.
- **Total visits (other AI apps)**: Displays user visits to AI applications such as ChatGPT, Gemini, and Copilot for Bing. This report helps organizations determine whether employees are engaging with unauthorized AI tools and take action if needed.

### Data insights

**Data** reports highlight risks related to AI interactions involving sensitive data. These reports help organizations identify where sensitive information is being processed and whether AI tools are being used responsibly.

The reports available here include:

- **Sensitive interactions per AI app**: Identifies AI applications that process sensitive data. This report helps security teams assess which AI tools pose the highest data exposure risks.
- **Top unethical AI interactions**: Surfaces instances where Microsoft 365 Copilot has generated or responded to unethical, inappropriate, or noncompliant content. This information is useful for organizations using Communication Compliance policies to monitor AI-generated messages.
- **Top sensitivity labels referenced in Copilot for Microsoft 365**: Displays which sensitivity-labeled content is being referenced by AI tools. This insight helps organizations assess whether AI interactions involve confidential or highly classified data.

### User risk reports

**User risk** reports help organizations identify potential insider threats based on how employees interact with AI tools. These reports assess user behavior, AI usage trends, and the severity of security risks.

The reports available here include:

- **Insider risk severity**: Shows user AI interactions grouped by risk levels, helping security teams identify patterns that might indicate excessive or inappropriate AI usage.
- **Insider risk severity per AI app**: Breaks down user risk levels by specific AI applications, showing where risky behavior is occurring. This report helps organizations determine whether Copilot or non-Microsoft AI tools require stricter monitoring.

### Taking action on reports

After reviewing AI security reports, organizations can take specific actions to enhance monitoring, reduce risks, and enforce security policies. Some reports might initially appear blank if data tracking hasn't been enabled. In these cases, adjustments might be needed to begin collecting insights.

To act on report findings, consider:

- **Extend insights**: If reports show "Data discovery is yet to be defined," AI interactions aren't currently being tracked. Select **Extend insights** to enable monitoring for Microsoft 365 Copilot and non-Microsoft AI tools.
- **Enable policies**: Certain reports require data loss prevention (DLP) policies, sensitivity labels, or communication compliance rules to be activated before tracking begins. If a report remains empty despite AI activity in your environment, check policy configurations.
- **Review flagged activity**: Reports can highlight sensitive data usage, risky AI interactions, or insider threats. If an anomaly is detected, security teams should investigate further using Activity Explorer. They can then apply necessary controls, such as blocking AI interactions with classified data or restricting access to high-risk AI tools.
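
When a report flags activity that needs a closer look, the underlying events can also be pulled from the unified audit log for offline review. A minimal sketch, assuming an Exchange Online PowerShell session and that Copilot events are surfaced under the CopilotInteraction record type; adjust the record type, dates, and result size to the scenario:

```powershell
# Connect to Exchange Online (prompts for sign-in)
Connect-ExchangeOnline

# Pull the last 7 days of Copilot interaction audit records
$results = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-7) `
    -EndDate (Get-Date) `
    -RecordType CopilotInteraction `
    -ResultSize 1000

# Summarize by user to spot unusually heavy or unexpected usage
$results |
    Group-Object UserIds |
    Sort-Object Count -Descending |
    Select-Object Name, Count
```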

By continuously monitoring AI reports and acting on insights, organizations can ensure AI technologies are used securely, responsibly, and in compliance with data protection policies.

learn-pr/wwl-sci/purview-identify-mitigate-ai-risks/includes/summary.md

Whitespace-only changes.
