Commit de5eeab

Merge pull request #3227 from MicrosoftDocs/diannegali-copilotrai
RAI FAQ for copilot in defender
2 parents 6b580cf + 6c339c7

File tree

3 files changed: +95 -4 lines changed

defender-xdr/TOC.yml

Lines changed: 2 additions & 0 deletions

@@ -439,6 +439,8 @@
       href: advanced-hunting-security-copilot.md
     - name: Create incident reports
       href: security-copilot-m365d-create-incident-report.md
+    - name: Responsible AI FAQs
+      href: responsible-ai-copilot-defender.md
     - name: Security Copilot agents in Microsoft Defender
       items:
       - name: Overview

defender-xdr/responsible-ai-copilot-defender.md
Lines changed: 90 additions & 0 deletions

@@ -0,0 +1,90 @@

---
title: Responsible AI FAQs for Microsoft Copilot in Defender
description: Learn about how Microsoft applies responsible AI principles to Microsoft Copilot in Microsoft Defender.
ms.service: defender-xdr
f1.keywords:
- NOCSH
ms.author: diannegali
author: diannegali
ms.localizationpriority: medium
manager: deniseb
audience: ITPro
ms.collection:
- m365-security
- tier1
- security-copilot
- magic-ai-copilot
ms.topic: concept-article
search.appverid:
- MOE150
- MET150
ms.date: 03/25/2025
#customer intent: I want to learn about how Microsoft applies responsible AI principles to Microsoft Copilot in Microsoft Defender.
---

# Responsible AI FAQs for Microsoft Copilot in Microsoft Defender

## Overview

An AI system includes not only the technology, but also the people who use it, the people affected by it, and the environment in which it's deployed. Microsoft's Responsible AI FAQs are intended to help you understand how AI technology works, the choices system owners and users can make that influence system performance and behavior, and the importance of thinking about the whole system: the technology, the people, and the environment. You can use Responsible AI FAQs to better understand specific AI systems and features that Microsoft develops.

Responsible AI FAQs are part of a broader effort to put Microsoft's AI principles into practice. To find out more, see [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai).

## Responsible AI FAQs

### What is Microsoft Copilot in Microsoft Defender?

Microsoft Copilot in Defender is the integration of Security Copilot in the Microsoft Defender portal. It's a security solution that uses AI to help security analysts investigate and respond to threats. Copilot in Defender is designed to help security analysts work more efficiently and effectively by providing them with relevant information and recommendations.

Copilot in Defender draws context from the data in the workloads that it monitors, and uses that context to provide recommendations to security analysts.

### What can Copilot in Defender do?

Copilot in Defender helps security analysts working in the Microsoft Defender portal by providing them with relevant information and recommendations. For example, Copilot in Defender can:

- Summarize incidents or entities that analysts are investigating.
- Recommend ways to respond to threats.
- Help execute tasks like technical analysis of scripts or files, KQL query creation, or creation of incident reports.
- Surface information about threats, threat actors, and vulnerabilities that analysts need to be aware of.

### What is Copilot in Defender's intended use?

Copilot in Defender is intended for use by security analysts who are responsible for investigating and responding to threats. Copilot in Defender also provides recommendations to threat intelligence analysts about the latest threats, threat actors, and vulnerabilities that they need to be aware of to protect their organization.

### How was Copilot in Defender evaluated? What metrics are used to measure performance?

Copilot in Defender underwent substantial testing prior to being released. Testing included red teaming, which is the practice of rigorously testing the product to identify failure modes and scenarios that might cause Security Copilot to do or say things outside of its intended uses or that don't support the [Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai).

Now that it's released, user feedback is critical in helping Microsoft improve the system. You have the option of providing feedback whenever you receive output from Copilot in Defender. When a response is inaccurate, incomplete, or unclear, use the "Off-target" and "Report" buttons to flag any objectionable output. You can also confirm when responses are useful and accurate using the "Confirm" button. These buttons appear at the bottom of every Copilot in Defender response, and your feedback goes directly to Microsoft to help us improve the platform's performance.

### What are the limitations of Copilot in Defender? How can users minimize the impact of Copilot in Defender's limitations when using the system?

- The system is designed to respond to prompts related to the security domain, like incident investigation and threat intelligence. Prompts outside the scope of security might result in responses that lack accuracy and comprehensiveness.

- Copilot in Defender might generate code or include code in responses, which could potentially expose sensitive information or vulnerabilities if not used carefully. Responses might appear to be valid but might not actually be semantically or syntactically correct, or might not accurately reflect the intent of the developer. Users should always take the same precautions as they would with any code they write that uses material they didn't independently originate, including precautions to ensure its suitability. These include rigorous testing, IP scanning, and checking for security vulnerabilities.

- Matches with public code: Copilot in Defender is capable of generating new code, which it does in a probabilistic way. While the probability is low, a Security Copilot suggestion might contain code snippets that match code in the training set. Users should take the same precautions here as with any other code they didn't independently originate, including rigorous testing, IP scanning, and checking for security vulnerabilities.

- The system might not be able to process long prompts, such as hundreds of thousands of characters.

- Use of the platform might be subject to usage limits or capacity throttling. Even with shorter prompts, generating responses and checking them before displaying them to the user can take time (up to several minutes) and require high GPU capacity.

- Like any AI-powered technology, Copilot in Defender doesn't get everything right. However, you can help improve its responses by providing your observations using the feedback tool, which is built into the platform.

### How is Microsoft approaching responsible AI for Copilot in Defender?

At Microsoft, we take our commitment to responsible AI seriously. Security Copilot is being developed in accordance with our [AI principles](https://www.microsoft.com/ai/principles-and-approach). We're working with OpenAI to deliver an experience that encourages responsible use. For example, we have collaborated, and will continue to collaborate, with OpenAI on foundational model work. We designed the Copilot in Defender user experience to keep humans at the center. We developed a safety system that is designed to mitigate failures and prevent misuse through measures like harmful content annotation, operational monitoring, and other safeguards. The invite-only early access program is also a part of our approach to responsible AI. We're taking user feedback from those with early access to Security Copilot to improve the tool before making it broadly available.

Responsible AI is a journey, and we'll continually improve our systems along the way. We're committed to making our AI more reliable and trustworthy, and your feedback will help us do so.

### Do you comply with the EU AI Act?

We are committed to compliance with the EU AI Act. Our multi-year effort to define, evolve, and implement our Responsible AI Standard and internal governance has strengthened our readiness. To find out more, see [Microsoft's compliance with the EU AI Act](https://www.microsoft.com/trust-center/compliance/eu-ai-act).

At Microsoft, we recognize the importance of regulatory compliance as a cornerstone of trust and reliability in AI technologies. We're committed to creating responsible AI by design. Our goal is to develop and deploy AI that will have a beneficial impact on and earn trust from society.

Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Microsoft's Responsible AI Standard takes these six principles and breaks them down into goals and requirements for the AI we make available.

Our Responsible AI Standard takes into account regulatory proposals and their evolution, including the initial proposal for the EU AI Act. We developed our most recent products and services in the AI space, such as Microsoft Copilot and Microsoft Azure OpenAI Service, in alignment with our Responsible AI Standard. As final requirements under the EU AI Act are defined in more detail, we look forward to working with policymakers to ensure feasible implementation and application of the rules, to demonstrating our compliance, and to engaging with our customers and other stakeholders to support compliance across the ecosystem.

[!INCLUDE [Microsoft Defender XDR rebranding](../includes/defender-m3d-techcommunity.md)]

defender-xdr/security-copilot-in-microsoft-365-defender.md

Lines changed: 3 additions & 4 deletions

@@ -44,14 +44,13 @@ If you're new to Security Copilot, you should familiarize yourself with it by re
 - [Get started with Security Copilot](/security-copilot/get-started-security-copilot)
 - [Understand authentication in Security Copilot](/security-copilot/authentication)
 - [Prompting in Security Copilot](/security-copilot/prompting-security-copilot)
-- [Responsible AI](/copilot/security/responsible-ai-overview-security-copilot)
-- [FAQs on Responsible AI](/copilot/security/rai-faqs-security-copilot)
+- [Responsible AI FAQs](responsible-ai-copilot-defender.md)

 ## Microsoft Copilot integration in Microsoft Defender

 [Microsoft Security Copilot](/security-copilot/microsoft-security-copilot) brings together the power of AI and human expertise to help security teams respond to attacks faster and more effectively. Security Copilot is embedded in the Microsoft Defender portal to help provide security teams with enhanced capabilities to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence. Copilot in Defender is available to users who have provisioned access to Security Copilot.

-Security Copilot operates using [Microsoft's AI principles](https://www.microsoft.com/ai/responsible-ai). To know more, see the [Security Copilot Responsible AI FAQs](/copilot/security/rai-faqs-security-copilot).
+Copilot in Defender operates using [Microsoft's AI principles](https://www.microsoft.com/ai/responsible-ai). For more information, see the [Responsible AI FAQs](responsible-ai-copilot-defender.md).

 ## Key features

@@ -183,7 +182,7 @@ Copilot uses [preinstalled Microsoft plugins](/security-copilot/manage-plugins#p
 - [Get started with Security Copilot](/security-copilot/get-started-security-copilot)
 - [Privacy and data security in Copilot](/security-copilot/privacy-data-security)
-- [Responsible AI FAQs](/security-copilot/responsible-ai-overview-security-copilot)
+- [Security Copilot Responsible AI FAQs](/security-copilot/responsible-ai-overview-security-copilot)
 - Other [Security Copilot embedded experiences](/security-copilot/experiences-security-copilot)

 [!INCLUDE [Microsoft Defender XDR rebranding](../includes/defender-m3d-techcommunity.md)]
