Commit cb20435

Commit message: updated text

1 parent 4ebf885 commit cb20435

File tree

1 file changed: +5 −7 lines changed


defender-xdr/responsible-ai-copilot-defender.md

Lines changed: 5 additions & 7 deletions
@@ -18,7 +18,7 @@ ms.topic: concept-article
 search.appverid:
 - MOE150
 - MET150
-ms.date: 03/21/2025
+ms.date: 03/25/2025
 #customer intent: I want to learn about how Microsoft applies responsible AI principles to Microsoft Copilot in Microsoft Defender.
 ---

@@ -32,9 +32,9 @@ Responsible AI FAQs are part of a broader effort to put Microsoft's AI principle
 
 ## Responsible AI FAQs
 
-### What is Microsoft Copilot in Defender?
+### What is Microsoft Copilot in Microsoft Defender?
 
-Microsoft Copilot in Defender is a security solution that uses AI to help security analysts investigate and respond to threats. Copilot in Defender is designed to help security analysts work more efficiently and effectively by providing them with relevant information and recommendations.
+Microsoft Copilot in Defender is the integration of Security Copilot in the Microsoft Defender portal. It is a security solution that uses AI to help security analysts investigate and respond to threats. Copilot in Defender is designed to help security analysts work more efficiently and effectively by providing them with relevant information and recommendations.
 
 Copilot in Defender draws context from the data in the workloads that it monitors, and uses that context to provide recommendations to security analysts.

@@ -59,10 +59,6 @@ Now that it is released, user feedback is critical in helping Microsoft improve
 
 ### What are the limitations of Copilot in Defender? How can users minimize the impact of Copilot in Defender’s limitations when using the system?
 
-- The Early Access Program is designed to give customers the opportunity to get early access to Copilot in Defender and provide feedback about the platform. Preview features aren’t meant for production use and might have limited functionality.
-
-- Like any AI-powered technology, Copilot in Defender doesn’t get everything right. However, you can help improve its responses by providing your observations using the feedback tool, which is built into the platform.
-
 - The system is designed to generate responses and respond to prompts related to the security domain like incident investigation and threat intelligence. Prompts outside the scope of security might result in responses that lack accuracy and comprehensiveness.
 
 - Copilot in Defender might generate code or include code in responses, which could potentially expose sensitive information or vulnerabilities if not used carefully. Responses might appear to be valid but might not actually be semantically or syntactically correct or might not accurately reflect the intent of the developer. Users should always take the same precautions as they would with any code they write that uses material users didn't independently originate, including precautions to ensure its suitability. These include rigorous testing, IP scanning, and checking for security vulnerabilities.
@@ -73,6 +69,8 @@ Now that it is released, user feedback is critical in helping Microsoft improve
 
 - Use of the platform might be subject to usage limits or capacity throttling. Even with shorter prompts, generating responses and checking them before displaying them to the user can take time (up to several minutes) and require high GPU capacity.
 
+- Like any AI-powered technology, Copilot in Defender doesn’t get everything right. However, you can help improve its responses by providing your observations using the feedback tool, which is built into the platform.
+
 ### How is Microsoft approaching responsible AI for Copilot in Defender?
 
 At Microsoft, we take our commitment to responsible AI seriously. Security Copilot is being developed in accordance with our [AI principles](https://www.microsoft.com/ai/principles-and-approach). We're working with OpenAI to deliver an experience that encourages responsible use. For example, we have and will continue to collaborate with OpenAI on foundational model work. We have designed the Copilot in Defender user experience to keep humans at the center. We developed a safety system that is designed to mitigate failures and prevent misuse with things like harmful content annotation, operational monitoring, and other safeguards. The invite-only early access program is also a part of our approach to responsible AI. We're taking user feedback from those with early access to Security Copilot to improve the tool before making it broadly available.
