
Commit 1095fa7

Merge pull request #3208 from aahill/link-updates
Updating links
2 parents ef996fd + 6fd54ba commit 1095fa7

File tree

12 files changed: +13 -13 lines changed


articles/ai-foundry/model-inference/overview.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ At Microsoft, we're committed to the advancement of AI driven by principles that
 Microsoft helps guard against abuse and unintended harm by taking the following actions:

 - Incorporating Microsoft's [principles for responsible AI use](https://www.microsoft.com/ai/responsible-ai)
-- Adopting a [code of conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context) for use of the service
+- Adopting a [code of conduct](/legal/ai-code-of-conduct?context=/azure/ai-services/openai/context/context) for use of the service
 - Building [content filters](/azure/ai-services/content-safety/overview) to support customers
 - Providing responsible AI [information and guidance](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=image) that customers should consider when using Azure OpenAI.

articles/ai-services/agents/overview.md

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ Both services enable you to build agents using the same API and SDKs, but if you

 ## Responsible AI

-At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in Azure OpenAI have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes incorporating Microsoft’s <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, adopting a [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context) for use of the service, building [content filters](/azure/ai-services/content-safety/overview) to support customers, and providing responsible AI [information and guidance](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=image) that customers should consider when using Azure AI Agent Service.
+At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in Azure OpenAI have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes incorporating Microsoft’s <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, adopting a [Code of Conduct](/legal/ai-code-of-conduct?context=/azure/ai-services/agents/context/context) for use of the service, building [content filters](/azure/ai-services/content-safety/overview) to support customers, and providing responsible AI [information and guidance](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=image) that customers should consider when using Azure AI Agent Service.

 ## Get started with Azure AI Agent Service

articles/ai-services/computer-vision/concept-liveness-abuse-monitoring.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ feedback_help_link_url: https://learn.microsoft.com/answers/tags/156/azure-face

 # Abuse monitoring in Face liveness detection

-Azure AI Face liveness detection lets you detect and mitigate instances of recurring content and/or behaviors that indicate a violation of the [Code of Conduct](/legal/cognitive-services/face/code-of-conduct?context=/azure/ai-services/computer-vision/context/context) or other applicable product terms. This guide shows you how to work with these features to ensure your application is compliant with Azure policy.
+Azure AI Face liveness detection lets you detect and mitigate instances of recurring content and/or behaviors that indicate a violation of the [Code of Conduct](/legal/ai-code-of-conduct?context=/azure/ai-services/computer-vision/context/context) or other applicable product terms. This guide shows you how to work with these features to ensure your application is compliant with Azure policy.

 Details on how data is handled can be found on the [Data, Privacy, and Security](/legal/cognitive-services/face/data-privacy-security?context=/azure/ai-services/computer-vision/context/context) page.

articles/ai-services/computer-vision/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -350,7 +350,7 @@ items:
 - name: Best practices for enrolling users
   href: enrollment-overview.md
 - name: Code of conduct
-  href: /legal/cognitive-services/face/code-of-conduct?context=/azure/ai-services/computer-vision/context/context
+  href: /legal/ai-code-of-conduct?context=/azure/ai-services/computer-vision/context/context
 - name: Data privacy and security
   href: /legal/cognitive-services/face/data-privacy-security?context=/azure/ai-services/computer-vision/context/context
 - name: Limited Access features

articles/ai-services/content-understanding/overview.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ See [Quickstart](quickstart/use-ai-foundry.md) for more examples.


 ## Responsible AI
-Azure AI Content Understanding is designed to guard against processing harmful content, such as graphic violence and gore, hateful speech and bullying, exploitation, abuse, and more. For more information and a full list of prohibited content, *see* our [**Transparency note**](/legal/cognitive-services/content-understanding/transparency-note?toc=/azure/ai-services/content-understanding/toc.json&bc=/azure/ai-services/content-understanding/breadcrumb/toc.json) and our [**Code of Conduct**](/legal/cognitive-services/openai/code-of-conduct).
+Azure AI Content Understanding is designed to guard against processing harmful content, such as graphic violence and gore, hateful speech and bullying, exploitation, abuse, and more. For more information and a full list of prohibited content, *see* our [**Transparency note**](/legal/cognitive-services/content-understanding/transparency-note?toc=/azure/ai-services/content-understanding/toc.json&bc=/azure/ai-services/content-understanding/breadcrumb/toc.json) and our [**Code of Conduct**](https://aka.ms/AI-CoC).

 ### Modified Content Filtering

articles/ai-services/openai/concepts/abuse-monitoring.md

Lines changed: 2 additions & 2 deletions
@@ -13,13 +13,13 @@ manager: nitinme

 # Abuse Monitoring

-Azure OpenAI Service detects and mitigates instances of recurring content and/or behaviors that suggest use of the service in a manner that might violate the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context). Details on how data is handled can be found on the [Data, Privacy, and Security](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context) page.
+Azure OpenAI Service detects and mitigates instances of recurring content and/or behaviors that suggest use of the service in a manner that might violate the [Code of Conduct](https://aka.ms/AI-CoC). Details on how data is handled can be found on the [Data, Privacy, and Security](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context) page.

 ## Components of abuse monitoring

 There are several components to abuse monitoring:

-- **Content Classification**: Classifier models detect harmful text and/or images in user prompts (inputs) and completions (outputs). The system looks for categories of harms as defined in the [Content Requirements](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context), and assigns severity levels as described in more detail on the [Content Filtering](/azure/ai-services/openai/concepts/content-filter) page. The content classification signals contribute to pattern detection as described below.
+- **Content Classification**: Classifier models detect harmful text and/or images in user prompts (inputs) and completions (outputs). The system looks for categories of harms as defined in the [Content Requirements](/legal/ai-code-of-conduct?context=/azure/ai-services/openai/context/context), and assigns severity levels as described in more detail on the [Content Filtering](/azure/ai-services/openai/concepts/content-filter) page. The content classification signals contribute to pattern detection as described below.
 - **Abuse Pattern Capture**: Azure OpenAI Service’s abuse monitoring system looks at customer usage patterns and employs algorithms and heuristics to detect and score indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected (as indicated in content classifier signals) in a customer’s prompts and completions, as well as the intentionality of the behavior. The trends and urgency of the detected pattern will also affect scoring of potential abuse severity.
 For example, a higher volume of harmful content classified as higher severity, or recurring conduct indicating intentionality (such as recurring jailbreak attempts) are both more likely to receive a high score indicating potential abuse.
 - **Review and Decision**: Prompts and completions that are flagged through content classification and/or identified as part of a potentially abusive pattern of use are subjected to another review process to help confirm the system’s analysis and inform actioning decisions for abuse monitoring. Such review is conducted through two methods: automated review and human review.

articles/ai-services/openai/includes/content-filter-configurability.md

Lines changed: 1 addition & 1 deletion
@@ -30,4 +30,4 @@ Configurable content filters for inputs (prompts) and outputs (completions) are

 Content filtering configurations are created within a Resource in Azure AI Foundry portal, and can be associated with Deployments. [Learn more about configurability here](../how-to/content-filters.md).

-Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
+Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/ai-code-of-conduct?context=/azure/ai-services/openai/context/context).

articles/ai-services/openai/index.yml

Lines changed: 1 addition & 1 deletion
@@ -133,7 +133,7 @@ landingContent:
 - text: Limited access
   url: /legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context
 - text: Code of conduct
-  url: /legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context
+  url: /legal/ai-code-of-conduct?context=/azure/ai-services/openai/context/context
 - text: Data, privacy, and security
   url: /legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context
 - text: Customer Copyright Commitment

articles/ai-services/openai/overview.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ Azure OpenAI Service provides REST API access to OpenAI's powerful language mode

 ## Responsible AI

-At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in Azure OpenAI have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes incorporating Microsoft’s <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, adopting a [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context) for use of the service, building [content filters](/azure/ai-services/content-safety/overview) to support customers, and providing responsible AI [information and guidance](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=image) that customers should consider when using Azure OpenAI.
+At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in Azure OpenAI have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes incorporating Microsoft’s <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, adopting a [Code of Conduct](/legal/ai-code-of-conduct?context=/azure/ai-services/openai/context/context) for use of the service, building [content filters](/azure/ai-services/content-safety/overview) to support customers, and providing responsible AI [information and guidance](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=image) that customers should consider when using Azure OpenAI.

 ## Get started with Azure OpenAI Service

articles/ai-services/openai/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -270,7 +270,7 @@ items:
 - name: Limited access
   href: /legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context
 - name: Code of conduct
-  href: /legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context
+  href: /legal/ai-code-of-conduct?context=/azure/ai-services/openai/context/context
 - name: Data, privacy, and security
   href: /legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context
 - name: Customer Copyright Commitment
