Commit 94e2e16

Update concept-data-privacy.md
move feature preview note to top of page
1 parent cd4f089 commit 94e2e16

File tree

1 file changed: +2 -2 lines changed


articles/ai-studio/how-to/concept-data-privacy.md

Lines changed: 2 additions & 2 deletions
@@ -14,6 +14,8 @@ author: s-polly
 ---
 # Data, privacy, and security for use of models through the model catalog in AI Studio
 
+[!INCLUDE [feature-preview](../includes/feature-preview.md)]
+
 This article describes how the data that you provide is processed, used, and stored when you deploy models from the model catalog. Also see the [Microsoft Products and Services Data Protection Addendum](https://aka.ms/DPA), which governs data processing by Azure services.
 
 > [!IMPORTANT]
@@ -43,8 +45,6 @@ When you deploy a model from the model catalog (base or fine-tuned) by using ser
 
 The model processes your input prompts and generates outputs based on its functionality, as described in the model details. Your use of the model (along with the provider's accountability for the model and its outputs) is subject to the license terms for the model. Microsoft provides and manages the hosting infrastructure and API endpoint. The models hosted in this *model as a service* (MaaS) scenario are subject to Azure data, privacy, and security commitments. [Learn more about Azure compliance offerings applicable to Azure AI Studio](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
 
-[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-
 Microsoft acts as the data processor for prompts and outputs sent to, and generated by, a model deployed for pay-as-you-go inferencing (MaaS). Microsoft doesn't share these prompts and outputs with the model provider. Also, Microsoft doesn't use these prompts and outputs to train or improve Microsoft models, the model provider's models, or any third party's models.
 
 Models are stateless, and they don't store any prompts or outputs. If content filtering (preview) is enabled, the Azure AI Content Safety service screens prompts and outputs for certain categories of harmful content in real time. [Learn more about how Azure AI Content Safety processes data](/legal/cognitive-services/content-safety/data-privacy).
