Commit 416ebdd

update rai for branding changes
1 parent c891f6f commit 416ebdd

2 files changed (+17, -12 lines)

articles/ai-foundry/openai/concepts/abuse-monitoring.md

Lines changed: 8 additions & 7 deletions
@@ -1,35 +1,36 @@
 ---
-title: Azure OpenAI in Azure AI Foundry Models abuse monitoring
+title: Azure Direct Models abuse monitoring
 titleSuffix: Azure OpenAI
 description: Learn about the abuse monitoring capabilities of Azure OpenAI
 author: mrbullwinkle
 ms.author: mbullwin
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 07/02/2025
+ms.date: 09/30/2025
 ms.custom: template-concept, ignite-2024
 manager: nitinme
 ---

 # Abuse Monitoring

-Azure OpenAI in Azure AI Foundry Models detects and mitigates instances of recurring content and/or behaviors that suggest use of the service in a manner that might violate the [Code of Conduct](https://aka.ms/AI-CoC). Details on how data is handled can be found on the [Data, Privacy, and Security](/azure/ai-foundry/responsible-ai/openai/data-privacy) page.
+Azure Direct Models detect and mitigate instances of recurring content and/or behaviors that suggest use of the service in a manner that might violate the [Code of Conduct](https://aka.ms/AI-CoC). Details on how data is handled can be found on the [Data, Privacy, and Security](/azure/ai-foundry/responsible-ai/openai/data-privacy) page.

 ## Components of abuse monitoring

 There are several components to abuse monitoring:

 - **Content Classification**: Classifier models detect harmful text and/or images in user prompts (inputs) and completions (outputs). The system looks for categories of harms as defined in the [Content Requirements](/legal/ai-code-of-conduct?context=/azure/ai-foundry/openai/context/context), and assigns severity levels as described in more detail on the [Content Filtering](/azure/ai-foundry/openai/concepts/content-filter) page. The content classification signals contribute to pattern detection as described below.
-- **Abuse Pattern Capture**: Azure OpenAI’s abuse monitoring system looks at customer usage patterns and employs algorithms and heuristics to detect and score indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected (as indicated in content classifier signals) in a customer’s prompts and completions, as well as the intentionality of the behavior. The trends and urgency of the detected pattern will also affect scoring of potential abuse severity.
+- **Abuse Pattern Capture**: The abuse monitoring system for Azure Direct Models looks at customer usage patterns and employs algorithms and heuristics to detect and score indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected (as indicated in content classifier signals) in a customer’s prompts and completions, as well as the intentionality of the behavior. The trends and urgency of the detected pattern will also affect scoring of potential abuse severity.
 For example, a higher volume of harmful content classified as higher severity, or recurring conduct indicating intentionality (such as recurring jailbreak attempts) are both more likely to receive a high score indicating potential abuse.
 - **Review and Decision**: Prompts and completions that are flagged through content classification and/or identified as part of a potentially abusive pattern of use are subjected to another review process to help confirm the system’s analysis and inform actioning decisions for abuse monitoring. Such review is conducted through two methods: automated review and human review.
 - By default, if prompts and completions are flagged through content classification as harmful and/or identified to be part of a potentially abusive pattern of use, they might be sampled for review by using automated means including AI models such as LLMs instead of a human reviewer. The model used for this purpose processes prompts and completions only to confirm the system’s analysis and inform actioning decisions; prompts and completions that undergo such review are not stored by the abuse monitoring system or used to train the AI model or other systems.
-- In some cases, when automated review does not meet applicable confidence thresholds in complex contexts or if automated review systems are not available, human eyes-on review might be introduced to make an extra judgment. Authorized Microsoft employees may assess content flagged through content classification and/or identified as part of a potentially abusive pattern of use, and either confirm or correct the classification or determination based on predefined guidelines and policies. Such prompts and completions can be accessed for human review only by authorized Microsoft employees via Secure Access Workstations (SAWs) with Just-In-Time (JIT) request approval granted by team managers. For Azure OpenAI resources deployed in the European Economic Area, the authorized Microsoft employees are located in the European Economic Area. This human review abuse monitoring process will not take place if the customer has been approved for modified abuse monitoring.
-- **Notification and Action**: When a threshold of abusive behavior has been confirmed based on the preceding steps, the customer is informed of the determination by email. Except in cases of severe or recurring abuse, customers typically have an opportunity to explain or remediate—and implement mechanisms to prevent recurrence of—the abusive behavior. Failure to address the behavior—or recurring or severe abuse—may result in suspension or termination of the customer’s access to Azure OpenAI resources and/or capabilities.
+- In some cases, when automated review does not meet applicable confidence thresholds in complex contexts or if automated review systems are not available, human eyes-on review might be introduced to make an extra judgment. Authorized Microsoft employees may assess content flagged through content classification and/or identified as part of a potentially abusive pattern of use, and either confirm or correct the classification or determination based on predefined guidelines and policies. Such prompts and completions can be accessed for human review only by authorized Microsoft employees via Secure Access Workstations (SAWs) with Just-In-Time (JIT) request approval granted by team managers. For Azure Direct Model resources deployed in the European Economic Area, the authorized Microsoft employees are located in the European Economic Area. This human review abuse monitoring process will not take place if the customer has been approved for modified abuse monitoring.
+- **Notification and Action**: When a threshold of abusive behavior has been confirmed based on the preceding steps, the customer is informed of the determination by email. Except in cases of severe or recurring abuse, customers typically have an opportunity to explain or remediate—and implement mechanisms to prevent recurrence of—the abusive behavior. Failure to address the behavior—or recurring or severe abuse—may result in suspension or termination of the customer’s access to Azure Direct Model resources and/or capabilities.
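The "Abuse Pattern Capture" bullet above describes blending frequency, severity, and intentionality signals into an abuse-risk score. Microsoft does not publish the actual heuristics, so the following is a purely hypothetical sketch of how such signals *might* combine; every name, weight, and threshold below is invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ClassifierSignal:
    """One content-classification result (hypothetical shape)."""
    severity: int          # 0 (safe) through 3 (high severity)
    jailbreak_attempt: bool

def abuse_risk_score(signals: list[ClassifierSignal], window_requests: int) -> float:
    """Toy score in [0, 1]: more frequent, more severe, and more
    intentional harmful content yields a higher score. Weights and
    the repeat-jailbreak threshold are invented for illustration."""
    if window_requests == 0:
        return 0.0
    flagged = [s for s in signals if s.severity > 0]
    frequency = len(flagged) / window_requests                 # how often harm appears
    severity = max((s.severity for s in flagged), default=0) / 3
    intent = 1.0 if sum(s.jailbreak_attempt for s in signals) >= 3 else 0.0
    return min(1.0, 0.4 * frequency + 0.4 * severity + 0.2 * intent)

# A burst of high-severity, repeated jailbreak attempts scores near the top:
signals = [ClassifierSignal(severity=3, jailbreak_attempt=True)] * 5
print(abuse_risk_score(signals, window_requests=10))
```

In this toy model, five high-severity jailbreak attempts out of ten requests score about 0.8, while benign traffic scores 0.0; the real system additionally weighs trends and urgency, which this sketch omits.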

 ## Modified abuse monitoring

-Some customers may want to use the Azure OpenAI for a use case that involves the processing of highly sensitive or highly confidential data, or otherwise may conclude that they don't want or don't have the right to permit Microsoft to store and conduct human review on their prompts and completions for abuse detection. To address these concerns, Microsoft allows customers who meet additional Limited Access eligibility criteria to apply to modify abuse monitoring by completing [this](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOE9MUTFMUlpBNk5IQlZWWkcyUEpWWEhGOCQlQCN0PWcu)form. Learn more about applying for modified abuse monitoring at [Limited access to Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/limited-access).
+Some customers may want to use Azure Direct Models for a use case that involves the processing of highly sensitive or highly confidential data, or otherwise may conclude that they don't want or don't have the right to permit Microsoft to store and conduct human review on their prompts and completions for abuse detection. To address these concerns, Microsoft allows customers who meet additional Limited Access eligibility criteria to apply to modify abuse monitoring by completing [this form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOE9MUTFMUlpBNk5IQlZWWkcyUEpWWEhGOCQlQCN0PWcu). Some advanced models from Azure Direct Models may have more stringent criteria for turning off abuse monitoring. Learn more about applying for modified abuse monitoring at [Limited access to Azure Direct Models](/azure/ai-foundry/responsible-ai/openai/limited-access).

 > [!NOTE]
 > When abuse monitoring is modified and human review is not performed, detection of potential abuse may be less accurate. Customers are notified of potential abuse detection as described above, and should be prepared to respond to such notification to avoid service interruption if possible.

articles/ai-foundry/responsible-ai/openai/limited-access.md

Lines changed: 9 additions & 5 deletions
@@ -9,17 +9,19 @@ ms.service: azure-ai-openai
 ms.topic: article
 ms.date: 11/03/2023
 ---
-# Limited access for Azure OpenAI Service
+# Limited access for Azure Direct Models

 [!INCLUDE [non-english-translation](../includes/non-english-translation.md)]

-As part of Microsoft's commitment to responsible AI, we have designed and operate Azure OpenAI Service with the intention of protecting the rights of individuals and society and fostering transparent human-computer interaction. For this reason, Azure OpenAI is a Limited Access service, and access and use is subject to eligibility criteria determined by Microsoft. Unless otherwise indicated in the service, all Azure customers are eligible for access to Azure OpenAI models, and all uses consistent with the [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) and [Code of Conduct](/legal/ai-code-of-conduct) are permitted, so customers are not required to submit a registration form unless they are requesting approval to modify content filters and/or abuse monitoring.
+As part of Microsoft's commitment to responsible AI, we have designed and operate Azure Direct Models (as defined in the [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage)) with the intention of respecting the rights of individuals and society and fostering transparent human-computer interaction. For this reason, certain Azure Direct Models (or versions of them) are designated as Limited Access Services, and access and use are subject to eligibility criteria determined by Microsoft. Unless otherwise indicated in the service, all Azure customers are eligible for access to Azure Direct Models, and all uses consistent with the Product Terms and Code of Conduct are permitted, so customers are not required to submit a registration form unless they are: (a) accessing an Azure Direct Model designated as a Limited Access Service, or (b) requesting approval to modify content filters and/or abuse monitoring for an Azure Direct Model.

-Azure OpenAI Service is made available to customers under the terms governing their subscription to Microsoft Azure Services, including [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) such as the Universal License Terms applicable to Microsoft Generative AI Services and the product offering terms for Azure OpenAI. Please review these terms carefully as they contain important conditions and obligations governing your use of Azure OpenAI Service.
+Azure Direct Models are made available to customers under the terms governing their subscription to Microsoft Azure Services, including [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) such as the Universal License Terms applicable to Microsoft Generative AI Services and the product offering terms for the Azure Direct Model. Please review these terms carefully as they contain important conditions and obligations governing your use.

-## Registration for modified content filters and/or abuse monitoring
+Azure OpenAI Service is made available to customers under the terms governing their subscription to Microsoft Azure Services, including the Universal License Terms applicable to Microsoft Generative AI Services and the product offering terms for Azure OpenAI. Please review these terms carefully as they contain important conditions and obligations governing your use of Azure OpenAI Service.

-Customers who wish to modify content filters and/or modify abuse monitoring are subject to additional eligibility criteria and requirements. At this time, modified content filters and/or modified abuse monitoring for Azure OpenAI Service are only available to managed customers and partners working with Microsoft account teams and are subject to additional requirements. Customers meeting these requirements can request approval for modified content filters and/or modified abuse monitoring using the following forms:
+## Registration for modified content filters and/or abuse monitoring
+
+All customers have the ability to configure severity thresholds on content filters; however, the modified content filter approval process is required to turn the content filters partially or fully off. Customers who wish to modify content filters and/or modify abuse monitoring are subject to additional eligibility criteria and requirements. At this time, modified content filters and/or modified abuse monitoring for Azure Direct Models are available only to customers and partners managed by a Microsoft account team or under an eligible program, and are subject to additional requirements. Customers meeting these requirements can request approval for modified content filters and/or modified abuse monitoring using the following forms:

 - [Modified content filters](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
 - [Modified abuse monitoring](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOE9MUTFMUlpBNk5IQlZWWkcyUEpWWEhGOCQlQCN0PWcu)
@@ -29,6 +31,8 @@ Customers who wish to modify content filters and/or modify abuse monitoring are
 - [Register to modify content filtering](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu) (if needed)
 - [Register to modify abuse monitoring](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOE9MUTFMUlpBNk5IQlZWWkcyUEpWWEhGOCQlQCN0PWcu) (if needed)

+Some advanced models from Azure Direct Models may have more stringent criteria for turning off abuse monitoring.
+
 ## Help and support

 Frequently asked questions about Limited Access can be found on the [Azure AI Services Limited Access](/azure/ai-services/cognitive-services-limited-access) page. If you need help with Azure OpenAI, see the [AI Services support options](/azure/ai-services/cognitive-services-support-options) page. Report abuse of Azure OpenAI [here](https://aka.ms/reportabuse).
