Commit 908f0b4

Merge pull request #5648 from MicrosoftDocs/release-aisvcs-move-rai-docs
[RELEASE PUBLISH] Move RAI docs
2 parents dbb78ef + 93b4599 commit 908f0b4

298 files changed: 11,835 additions and 352 deletions


.github/policies/disallow-edits.yml

Lines changed: 11 additions & 7 deletions
@@ -19,18 +19,18 @@ configuration:
             @${issueAuthor} - You tried to add an index file to this repository; this is not permitted so your pull request will be closed automatically.
       - closePullRequest

-  - description: Close PRs to the "ai-services/personalizer" and "ai-services/responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
+  - description: Close PRs to the "personalizer" and "responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
     if:
       - payloadType: Pull_Request
       - isAction:
           action: Opened
       - or:
           - filesMatchPattern:
               matchAny: true
-              pattern: articles/ai-services/personalizer/*
+              pattern: articles/ai-foundry/responsible-ai/*
           - filesMatchPattern:
               matchAny: true
-              pattern: articles/ai-services/responsible-ai/*
+              pattern: articles/ai-services/personalizer/*
       - not:
           activitySenderHasAssociation:
             association: Member
@@ -40,14 +40,18 @@ configuration:
             @${issueAuthor} - Pull requests that modify files in this folder aren't accepted from public contributors.
       - closePullRequest

-  - description: \@mention specific people when a PR is opened in the "ai-services/personalizer" folder.
+  - description: \@mention specific people when a PR is opened in the "personalizer" or "responsible-ai" folder.
     if:
       - payloadType: Pull_Request
       - isAction:
           action: Opened
-      - filesMatchPattern:
-          matchAny: true
-          pattern: articles/ai-services/personalizer/*
+      - or:
+          - filesMatchPattern:
+              matchAny: true
+              pattern: articles/ai-foundry/responsible-ai/*
+          - filesMatchPattern:
+              matchAny: true
+              pattern: articles/ai-services/personalizer/*
       - activitySenderHasAssociation:
           association: Member
       - not:
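Put together, the \@mention task's condition reads roughly as follows after this change. This is a sketch assembled from the hunks above: the keys, patterns, and description come from the diff, while the indentation and any surrounding task structure (for example, the task's then: block) are assumptions.

```yaml
# Sketch of the updated "@mention" task condition, assembled from the diff above.
# Indentation and any keys not shown in the hunks are assumptions.
- description: \@mention specific people when a PR is opened in the "personalizer" or "responsible-ai" folder.
  if:
    - payloadType: Pull_Request
    - isAction:
        action: Opened
    - or:
        # One task now covers both folders instead of only personalizer.
        - filesMatchPattern:
            matchAny: true
            pattern: articles/ai-foundry/responsible-ai/*
        - filesMatchPattern:
            matchAny: true
            pattern: articles/ai-services/personalizer/*
    - activitySenderHasAssociation:
        association: Member
```

Wrapping the two filesMatchPattern checks in an or: block lets the existing task pick up the relocated responsible-ai folder without duplicating the whole task.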

articles/ai-foundry/agents/breadcrumb/toc.yml

Lines changed: 2 additions & 2 deletions
@@ -14,9 +14,9 @@
     topicHref: /azure/index
     items:
       - name: AI Foundry # Original doc set name
-        tocHref: /legal/cognitive-services/openai # Destination doc set route
+        tocHref: /azure/ai-foundry/responsible-ai/openai # Destination doc set route
         topicHref: /azure/ai-services/agents/index # Original doc set route
         items:
           - name: Agent Service # Destination doc set name
-            tocHref: /legal/cognitive-services/openai # Destination doc set route
+            tocHref: /azure/ai-foundry/responsible-ai/openai # Destination doc set route
             topicHref: /azure/ai-services/agents/index # Original doc set route

articles/ai-foundry/agents/concepts/agent-catalog.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ ms.custom:
 # Get started with the Agent Catalog

 Accelerate your agent development using code samples and best practices for creating agents. Each agent sample below links to a GitHub Repository, where you can browse the agent's configuration files, setup instructions and source code to start integrating them into your own project in code.
-With agents you create using these code samples, be sure to assess safety and legal implications, and to comply with all applicable laws and safety standards. See the [transparency note](/legal/cognitive-services/agents/transparency-note) for more information.
+With agents you create using these code samples, be sure to assess safety and legal implications, and to comply with all applicable laws and safety standards. See the [transparency note](/azure/ai-foundry/responsible-ai/agents/transparency-note) for more information.

 ## Prerequisites

articles/ai-foundry/agents/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ sections:
       - question: |
           Is my data used by Microsoft for training models?
         answer: |
-          No. Data is not used by Microsoft for training models. See the [Responsible AI documentation](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext) for more information.
+          No. Data is not used by Microsoft for training models. See the [Responsible AI documentation](/azure/ai-foundry/responsible-ai/openai/data-privacy) for more information.
       - question: |
           Where is data stored geographically?
         answer: |

articles/ai-foundry/agents/toc.yml

Lines changed: 2 additions & 2 deletions
@@ -131,9 +131,9 @@ items:
   - name: Responsible AI
     items:
       - name: Transparency note
-        href: /legal/cognitive-services/agents/transparency-note?context=/azure/ai-services/agents/context/context
+        href: ../../ai-foundry/responsible-ai/agents/transparency-note.md
       - name: Data, privacy, and security For Azure AI Foundry Agent Service
-        href: /legal/cognitive-services/agents/data-privacy-security?context=/azure/ai-services/agents/context/context
+        href: ../../ai-foundry/responsible-ai/agents/data-privacy-security.md
   - name: Reference
     items:
       - name: REST API
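After this change, the Responsible AI entries in the agents TOC resolve to Markdown files inside the repository rather than the cross-site /legal/ routes. A rough sketch of the resulting node (indentation assumed, names and hrefs taken from the diff above):

```yaml
# Sketch of the Responsible AI TOC node after this commit.
# Indentation is assumed; names and relative hrefs come from the diff above.
- name: Responsible AI
  items:
    - name: Transparency note
      href: ../../ai-foundry/responsible-ai/agents/transparency-note.md
    - name: Data, privacy, and security For Azure AI Foundry Agent Service
      href: ../../ai-foundry/responsible-ai/agents/data-privacy-security.md
```

Because the targets now live in the same docset, the ?context= query strings used with the /legal/ routes are dropped.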

articles/ai-foundry/concepts/content-filtering.md

Lines changed: 1 addition & 1 deletion
@@ -80,5 +80,5 @@ You can also enable the following special output filters:

 - Learn more about the [underlying models that power Azure OpenAI](../../ai-services/openai/concepts/models.md).
 - Azure AI Foundry content filtering is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md).
-- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/context/context).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/azure/ai-foundry/responsible-ai/openai/overview).
 - Learn more about evaluating your generative AI models and AI systems via [Azure AI Evaluation](https://aka.ms/genaiopsevals).

articles/ai-foundry/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ sections:
       - question: |
           Do you use my company data to train any of the models?
         answer: |
-          Azure OpenAI doesn't use customer data to retrain models. For more information, see the [Azure OpenAI data, privacy, and security guide](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context).
+          Azure OpenAI doesn't use customer data to retrain models. For more information, see the [Azure OpenAI data, privacy, and security guide](/azure/ai-foundry/responsible-ai/openai/data-privacy).
   - name: Learning more and where to ask questions
     questions:
       - question: |

articles/ai-foundry/how-to/concept-data-privacy.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ The model processes your input prompts and generates outputs based on its functi

 Microsoft acts as the data processor for prompts and outputs sent to, and generated by, a model deployed for standard deployment. Microsoft doesn't share these prompts and outputs with the model provider. Also, Microsoft doesn't use these prompts and outputs to train or improve Microsoft models, the model provider's models, or any third party's models.

-Models are stateless, and they don't store any prompts or outputs. If content filtering is enabled, the Azure AI Content Safety service screens prompts and outputs for certain categories of harmful content in real time. [Learn more about how Azure AI Content Safety processes data](/legal/cognitive-services/content-safety/data-privacy).
+Models are stateless, and they don't store any prompts or outputs. If content filtering is enabled, the Azure AI Content Safety service screens prompts and outputs for certain categories of harmful content in real time. [Learn more about how Azure AI Content Safety processes data](/azure/ai-foundry/responsible-ai/content-safety/data-privacy).

 Prompts and outputs are processed within the geography specified during deployment, but they might be processed between regions within the geography for operational purposes. Operational purposes include performance and capacity management.

articles/ai-foundry/model-inference/concepts/content-filter.md

Lines changed: 3 additions & 3 deletions
@@ -24,7 +24,7 @@ Azure AI Foundry Models includes a content filtering system that works alongside

 The text content filtering models for the hate, sexual, violence, and self-harm categories were trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.

-In addition to the content filtering system, Azure OpenAI performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, Azure OpenAI performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/data-privacy#preventing-abuse-and-harmful-content-generation).

 The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.

@@ -309,5 +309,5 @@ The table below outlines the various ways content filtering can appear:
 ## Next steps

 - Learn about [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
-- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
-- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/azure/ai-foundry/responsible-ai/openai/overview).
+- Learn more about how data is processed with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI](/azure/ai-foundry/responsible-ai/openai/data-privacy#preventing-abuse-and-harmful-content-generation).

articles/ai-foundry/model-inference/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -119,5 +119,5 @@ sections:
       - question: |
           How do I obtain coverage under the Customer Copyright Commitment?
         answer:
-          The Customer Copyright Commitment is a provision to be included in the December 1, 2023, Microsoft Product Terms that describes Microsoft’s obligation to defend customers against certain non-Microsoft intellectual property claims relating to Output Content. If the subject of the claim is Output Content generated from Azure OpenAI (or any other Covered Product that allows customers to configure the safety systems), then to receive coverage, customers must have implemented all mitigations required by the Azure OpenAI documentation in the offering that delivered the Output Content. The required mitigations are documented [here](/legal/cognitive-services/openai/customer-copyright-commitment?context=/azure/ai-services/openai/context/context) and updated on an ongoing basis. For new services, features, models, or use cases, new CCC requirements will be posted and take effect at or following the launch of such service, feature, model, or use case. Otherwise, customers will have six months from the time of publication to implement new mitigations to maintain coverage under the CCC. If a customer tenders a claim, the customer will be required to demonstrate compliance with the relevant requirements. These mitigations are required for Covered Products that allow customers to configure the safety systems, including Azure OpenAI; they don't impact coverage for customers using other Covered Products.
+          The Customer Copyright Commitment is a provision to be included in the December 1, 2023, Microsoft Product Terms that describes Microsoft’s obligation to defend customers against certain non-Microsoft intellectual property claims relating to Output Content. If the subject of the claim is Output Content generated from Azure OpenAI (or any other Covered Product that allows customers to configure the safety systems), then to receive coverage, customers must have implemented all mitigations required by the Azure OpenAI documentation in the offering that delivered the Output Content. The required mitigations are documented [here](/azure/ai-foundry/responsible-ai/openai/customer-copyright-commitment) and updated on an ongoing basis. For new services, features, models, or use cases, new CCC requirements will be posted and take effect at or following the launch of such service, feature, model, or use case. Otherwise, customers will have six months from the time of publication to implement new mitigations to maintain coverage under the CCC. If a customer tenders a claim, the customer will be required to demonstrate compliance with the relevant requirements. These mitigations are required for Covered Products that allow customers to configure the safety systems, including Azure OpenAI; they don't impact coverage for customers using other Covered Products.
 additionalContent: |
