Commit 0c7c456

Merge pull request #5528 from PatrickFarley/rai-auto-migration
Fix links to new RAI docs locations
Parents: 72aaf35 + b6fda3d

128 files changed: +442, −442 lines


articles/ai-foundry/concepts/content-filtering.md

Lines changed: 1 addition & 1 deletion
@@ -80,5 +80,5 @@ You can also enable the following special output filters:
 
 - Learn more about the [underlying models that power Azure OpenAI](../../ai-services/openai/concepts/models.md).
 - Azure AI Foundry content filtering is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md).
-- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/azure/ai-foundry/responsible-ai/openai/overview?context=/azure/ai-services/context/context).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/azure/ai-foundry/responsible-ai/openai/overview).
 - Learn more about evaluating your generative AI models and AI systems via [Azure AI Evaluation](https://aka.ms/genaiopsevals).

articles/ai-foundry/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ sections:
   - question: |
       Do you use my company data to train any of the models?
     answer: |
-      Azure OpenAI doesn't use customer data to retrain models. For more information, see the [Azure OpenAI data, privacy, and security guide](/azure/ai-foundry/responsible-ai/openai/data-privacy?context=/azure/ai-services/openai/context/context).
+      Azure OpenAI doesn't use customer data to retrain models. For more information, see the [Azure OpenAI data, privacy, and security guide](/azure/ai-foundry/responsible-ai/openai/data-privacy).
 - name: Learning more and where to ask questions
   questions:
   - question: |

articles/ai-foundry/model-inference/concepts/content-filter.md

Lines changed: 3 additions & 3 deletions
@@ -24,7 +24,7 @@ Azure AI Foundry Models includes a content filtering system that works alongside
 
 The text content filtering models for the hate, sexual, violence, and self-harm categories were trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
 
-In addition to the content filtering system, Azure OpenAI performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, Azure OpenAI performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/data-privacy#preventing-abuse-and-harmful-content-generation).
 
 The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.
 

@@ -309,5 +309,5 @@ The table below outlines the various ways content filtering can appear:
 ## Next steps
 
 - Learn about [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
-- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/azure/ai-foundry/responsible-ai/openai/overview?context=/azure/ai-services/openai/context/context).
-- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/azure/ai-foundry/responsible-ai/openai/overview).
+- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/data-privacy#preventing-abuse-and-harmful-content-generation).

articles/ai-foundry/model-inference/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -119,5 +119,5 @@ sections:
   - question: |
       How do I obtain coverage under the Customer Copyright Commitment?
     answer:
-      The Customer Copyright Commitment is a provision to be included in the December 1, 2023, Microsoft Product Terms that describes Microsoft’s obligation to defend customers against certain non-Microsoft intellectual property claims relating to Output Content. If the subject of the claim is Output Content generated from Azure OpenAI (or any other Covered Product that allows customers to configure the safety systems), then to receive coverage, customers must have implemented all mitigations required by the Azure OpenAI documentation in the offering that delivered the Output Content. The required mitigations are documented [here](/azure/ai-foundry/responsible-ai/openai/customer-copyright-commitment?context=/azure/ai-services/openai/context/context) and updated on an ongoing basis. For new services, features, models, or use cases, new CCC requirements will be posted and take effect at or following the launch of such service, feature, model, or use case. Otherwise, customers will have six months from the time of publication to implement new mitigations to maintain coverage under the CCC. If a customer tenders a claim, the customer will be required to demonstrate compliance with the relevant requirements. These mitigations are required for Covered Products that allow customers to configure the safety systems, including Azure OpenAI; they don't impact coverage for customers using other Covered Products.
+      The Customer Copyright Commitment is a provision to be included in the December 1, 2023, Microsoft Product Terms that describes Microsoft’s obligation to defend customers against certain non-Microsoft intellectual property claims relating to Output Content. If the subject of the claim is Output Content generated from Azure OpenAI (or any other Covered Product that allows customers to configure the safety systems), then to receive coverage, customers must have implemented all mitigations required by the Azure OpenAI documentation in the offering that delivered the Output Content. The required mitigations are documented [here](/azure/ai-foundry/responsible-ai/openai/customer-copyright-commitment) and updated on an ongoing basis. For new services, features, models, or use cases, new CCC requirements will be posted and take effect at or following the launch of such service, feature, model, or use case. Otherwise, customers will have six months from the time of publication to implement new mitigations to maintain coverage under the CCC. If a customer tenders a claim, the customer will be required to demonstrate compliance with the relevant requirements. These mitigations are required for Covered Products that allow customers to configure the safety systems, including Azure OpenAI; they don't impact coverage for customers using other Covered Products.
 additionalContent: |

articles/ai-foundry/model-inference/includes/configure-content-filters/code.md

Lines changed: 1 addition & 1 deletion
@@ -19,4 +19,4 @@ Once content filtering has been applied to your model deployment, requests can b
 
 We recommend informing your content filtering configuration decisions through an iterative identification (for example, red team testing, stress-testing, and analysis) and measurement process to address the potential harms that are relevant for a specific model, application, and deployment scenario. After you implement mitigations such as content filtering, repeat measurement to test effectiveness.
 
-Recommendations and best practices for Responsible AI for Azure OpenAI, grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI) can be found in the [Responsible AI Overview for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/overview?context=/azure/ai-services/openai/context/context).
+Recommendations and best practices for Responsible AI for Azure OpenAI, grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI) can be found in the [Responsible AI Overview for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/overview).

articles/ai-foundry/model-inference/overview.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ Microsoft helps guard against abuse and unintended harm by taking the following
 - Incorporating Microsoft's [principles for responsible AI use](https://www.microsoft.com/ai/responsible-ai)
 - Adopting a [code of conduct](/legal/ai-code-of-conduct?context=/azure/ai-services/openai/context/context) for use of the service
 - Building [content filters](/azure/ai-services/content-safety/overview) to support customers
-- Providing responsible AI [information and guidance](/azure/ai-foundry/responsible-ai/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=image) that customers should consider when using Azure OpenAI.
+- Providing responsible AI [information and guidance](/azure/ai-foundry/responsible-ai/openai/transparency-note) that customers should consider when using Azure OpenAI.
 
 ## Getting started
 

articles/ai-foundry/responsible-ai/agents/transparency-note.md

Lines changed: 2 additions & 2 deletions
@@ -145,7 +145,7 @@ We encourage customers to use Azure AI Agent Service in their innovative solutio
 
 ### Technical limitations, operational factors, and ranges
 
-* **Generative AI model limitations:** Because Azure AI Agent Service works with a variety of models, the overall system inherits the limitations specific to those models. Before selecting a model to incorporate into your agent, carefully [evaluate the model](/azure/ai-studio/how-to/model-catalog-overview#overview-of-model-catalog-capabilities) to understand its limitations. Consider reviewing the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=text#best-practices-for-improving-system-performance) for additional information about generative AI limitations that are also likely to be relevant to the system and review other best practices for incorporating generative AI into your agent application.
+* **Generative AI model limitations:** Because Azure AI Agent Service works with a variety of models, the overall system inherits the limitations specific to those models. Before selecting a model to incorporate into your agent, carefully [evaluate the model](/azure/ai-studio/how-to/model-catalog-overview#overview-of-model-catalog-capabilities) to understand its limitations. Consider reviewing the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text#best-practices-for-improving-system-performance) for additional information about generative AI limitations that are also likely to be relevant to the system and review other best practices for incorporating generative AI into your agent application.
 * **Tool orchestration complexities:** AI Agents depend on multiple integrated tools and data connectors (such as Bing Search, SharePoint, and Azure Logic Apps). If any of these tools are misconfigured, unavailable, or return inconsistent results, or a high number of tools are configured on a single agent, the agent’s guidance may become fragmented, outdated, or misleading.
 * **Unequal representation and support:** When serving diverse user groups, AI Agents can show uneven performance if language varieties, regional data, or specialized knowledge domains are underrepresented. A retail agent, for example, might offer less reliable product recommendations to customers who speak under-represented languages.
 * **Opaque decision-making processes:** As agents combine large language models with external systems, tracing the “why” behind their decisions can become challenging. A user using such an agent may find it difficult to understand why certain tools or combination of tools were chosen to answer a query, complicating trust and verification of the agent’s outputs or actions.

@@ -176,7 +176,7 @@ We encourage customers to use Azure AI Agent Service in their innovative solutio
 * **Clearly define intended operating environments.** Clearly define the intended operating environments (domain boundaries) where your agent is designed to perform effectively.
 * **Ensure appropriate intelligibility in decision making.** Providing information to users before, during, and after actions are taken and/or tools are called may help them understand action justification or why certain actions were taken or the application is behaving a certain way, where to intervene, and how to troubleshoot issues.
 <!--* **Provide trusted data.** Retrieving or uploading untrusted data into your systems could compromise the security of your systems or applications. To mitigate these risks in your applications using the Azure AI Agent Service, we recommend logging and monitoring LLM interactions (inputs/outputs) to detect and analyze potential prompt injections, clearly delineating user input to minimize risk of prompt injection, restricting the LLM’s access to sensitive resources, limiting its capabilities to the minimum required, and isolating it from critical systems and resources. Learn about additional mitigation approaches in [Security guidance for Large Language Models.](/ai/playbook/technology-guidance/generative-ai/mlops-in-openai/security/security-recommend)-->
-* Follow additional generative AI best practices as appropriate for your system, including recommendations in the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=text#best-practices-for-improving-system-performance).
+* Follow additional generative AI best practices as appropriate for your system, including recommendations in the [Azure OpenAI Transparency Note](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text#best-practices-for-improving-system-performance).
 
 ## Learn more about responsible AI
 

articles/ai-foundry/responsible-ai/clu/clu-characteristics-and-limitations.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ Not all features are at the same language parity. For example, language support
 ## Next steps
 
 * [Introduction to conversational language understanding](/azure/ai-services/language-service/conversational-language-understanding/overview)
-* [Language Understanding transparency note](/azure/ai-foundry/responsible-ai/luis/luis-transparency-note?context=/azure/ai-services/LUIS/context/context)
+* [Language Understanding transparency note](/azure/ai-foundry/responsible-ai/luis/luis-transparency-note)
 
 * [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai?rtc=1&activetab=pivot1%3aprimaryr6)
 * [Building responsible bots](https://www.microsoft.com/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf)

articles/ai-foundry/responsible-ai/computer-vision/compliance-privacy-security-2.md

Lines changed: 1 addition & 1 deletion
@@ -78,4 +78,4 @@ To learn more about Microsoft's privacy and security commitments visit the Micro
 ## Next steps
 
 > [!div class="nextstepaction"]
-> [Responsible use deployment guidance for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/responsible-use-deployment?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext)
+> [Responsible use deployment guidance for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/responsible-use-deployment)

articles/ai-foundry/responsible-ai/computer-vision/disclosure-design.md

Lines changed: 1 addition & 1 deletion
@@ -105,4 +105,4 @@ Evaluate the first and continuous-use experience with a representative sample of
 ## Next steps
 
 > [!div class="nextstepaction"]
-> [Research insights for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/research-insights?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext)
+> [Research insights for spatial analysis](/azure/ai-foundry/responsible-ai/computer-vision/research-insights)
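Every hunk in this commit makes the same mechanical change: the legacy `?context=...` query parameter is stripped from responsible-AI doc link targets, while other query parameters (such as `?tabs=text`) and `#fragments` are usually kept. A bulk migration like this could be scripted; the following is a minimal sketch of that rewrite (the helper names and the regex for locating link targets are illustrative assumptions, not the actual tooling used for this PR):

```python
import re

def strip_context(url: str) -> str:
    """Drop a context=... query parameter from a docs URL.

    Other query parameters (e.g. tabs=text) and any #fragment are kept,
    matching the rewrite seen in this commit's hunks.
    """
    url, hash_sep, fragment = url.partition("#")
    base, q_sep, query = url.partition("?")
    if q_sep:
        # Splitting on "&" is safe even for URL-encoded values like
        # %2Fazure%2F..., since they contain no literal ampersands.
        params = [p for p in query.split("&") if not p.lower().startswith("context=")]
        url = base + ("?" + "&".join(params) if params else "")
    return url + hash_sep + fragment

def strip_context_from_markdown(text: str) -> str:
    # Rewrite every site-relative markdown link target in a chunk of text.
    return re.sub(r"\((/azure/[^)\s]+)\)",
                  lambda m: f"({strip_context(m.group(1))})",
                  text)
```

For example, `strip_context("/azure/ai-foundry/responsible-ai/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation")` returns `/azure/ai-foundry/responsible-ai/openai/data-privacy#preventing-abuse-and-harmful-content-generation`, mirroring the content-filter.md hunk. Note that one hunk (model-inference/overview.md) also drops `&tabs=image`, so a purely mechanical pass would still warrant a manual review of the output.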
