Commit a61ecac

Merge pull request #275774 from s-polly/stp-ml-maas-terminology

Stp ml maas terminology

Authored by Jill Grant
2 parents 5071beb + ce385de

14 files changed: +87 −88 lines

articles/machine-learning/concept-data-privacy.md

Lines changed: 10 additions & 10 deletions
```diff
@@ -22,41 +22,41 @@ When you deploy models in Azure Machine Learning, the following types of data ar
 
 * **Prompts and generated content**. Prompts are submitted by the user, and content (output) is generated by the model via the operations supported by the model. Prompts may include content that has been added via retrieval-augmented-generation (RAG), metaprompts, or other functionality included in an application.
 
-* **Uploaded data**. For models that support finetuning, customers can upload their data to the [Azure Machine Learning Datastore](./concept-data.md) for use for finetuning.
+* **Uploaded data**. For models that support fine-tuning, customers can upload their data to the [Azure Machine Learning Datastore](./concept-data.md) for use for fine-tuning.
 
 ## Generate inferencing outputs with real-time endpoints
 
-Deploying models to managed online endpoints deploys model weights to dedicated Virtual Machines and exposes a REST API for real-time inference. Learn more about deploying models from the [Model Catalog to real-time endpoints](concept-model-catalog.md). You manage the infrastructure for these real-time endpoints, and Azures data, privacy, and security commitments apply. Learn more about [Azure compliance offerings](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3) applicable to Azure Machine Learning.
+Deploying models to managed compute deploys model weights to dedicated Virtual Machines and exposes a REST API for real-time inference. Learn more about deploying models from the [Model Catalog to real-time endpoints](concept-model-catalog.md). You manage the infrastructure for these real-time endpoints, and Azure's data, privacy, and security commitments apply. Learn more about [Azure compliance offerings](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3) applicable to Azure Machine Learning.
 
-Although containers for models “Curated by Azure AI” have been scanned for vulnerabilities that could exfiltrate data, not all models available through the Model Catalog have been scanned. To reduce the risk of data exfiltration, you can protect your deployment using virtual networks. Follow this link to [learn more](./how-to-network-isolation-model-catalog.md). You can also use [Azure Policy](./how-to-regulate-registry-deployments.md) to regulate the models that can be deployed by your users.
+Although containers for models "Curated by Azure AI" are scanned for vulnerabilities that could exfiltrate data, not all models available through the model catalog have been scanned. To reduce the risk of data exfiltration, you can protect your deployment using virtual networks. Follow this link to [learn more](./how-to-network-isolation-model-catalog.md). You can also use [Azure Policy](./how-to-regulate-registry-deployments.md) to regulate the models that can be deployed by your users.
 
 :::image type="content" source="media/concept-data-privacy/platform-service.png" alt-text="A diagram showing the platform service life cycle." lightbox="media/concept-data-privacy/platform-service.png":::
 
-## Generate inferencing outputs with pay-as-you-go deployments (Models-as-a-Service)
+## Generate inferencing outputs with serverless APIs (Models-as-a-Service)
 
-When you deploy a model from the Model Catalog (base or finetuned) using pay-as-you-go deployments for inferencing, an API is provisioned giving you access to the model hosted and managed by the Azure Machine Learning Service. Learn more about [Models-as-a-Service](concept-model-catalog.md). The model processes your input prompts and generates outputs based on the functionality of the model, as described in the model details provided for the model. While the model is provided by the model provider, and your use of the model (and the model providers accountability for the model and its outputs) is subject to the license terms provided with the model, Microsoft provides and manages the hosting infrastructure and API endpoint. The models hosted in Models-as-a-Service are subject to Azures data, privacy, and security commitments. Learn more about Azure compliance offerings applicable to Azure Machine Learning [here](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
+When you deploy a model from the model catalog (base or fine-tuned) as a serverless API for inferencing, an API is provisioned giving you access to the model hosted and managed by the Azure Machine Learning Service. Learn more about [Models-as-a-Service](concept-model-catalog.md). The model processes your input prompts and generates outputs based on the functionality of the model, as described in the model details provided for the model. While the model is provided by the model provider, and your use of the model (and the model provider's accountability for the model and its outputs) is subject to the license terms provided with the model, Microsoft provides and manages the hosting infrastructure and API endpoint. The models hosted in Models-as-a-Service are subject to Azure's data, privacy, and security commitments. Learn more about Azure compliance offerings applicable to Azure Machine Learning [here](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
 
-Microsoft acts as the data processor for prompts and outputs sent to and generated by a model deployed for pay-as-you-go inferencing (MaaS). Microsoft does not share these prompts and outputs with the model provider, and Microsoft does not use these prompts and outputs to train or improve Microsofts, the model providers, or any third partys models. Models are stateless and no prompts or outputs are stored in the model. If content filtering is enabled, prompts and outputs are screened for certain categories of harmful content by the Azure AI Content Safety service in real time; learn more about how Azure AI Content Safety processes data [here](/legal/cognitive-services/content-safety/data-privacy). Prompts and outputs are processed within the geography specified during deployment but may be processed between regions within the geography for operational purposes (including performance and capacity management).
+Microsoft acts as the data processor for prompts and outputs sent to and generated by a model deployed for pay-as-you-go inferencing (MaaS). Microsoft doesn't share these prompts and outputs with the model provider, and Microsoft doesn't use these prompts and outputs to train or improve Microsoft's, the model provider's, or any third party's models. Models are stateless and no prompts or outputs are stored in the model. If content filtering is enabled, prompts and outputs are screened for certain categories of harmful content by the Azure AI Content Safety service in real time; learn more about how Azure AI Content Safety processes data [here](/legal/cognitive-services/content-safety/data-privacy). Prompts and outputs are processed within the geography specified during deployment but may be processed between regions within the geography for operational purposes (including performance and capacity management).
 
 :::image type="content" source="media/concept-data-privacy/model-publisher-cycle.png" alt-text="A diagram showing model publisher service cycle." lightbox="media/concept-data-privacy/model-publisher-cycle.png":::
 
 As explained during the deployment process for Models-as-a-Service, Microsoft may share customer contact information and transaction details (including usage volume associated with the offering) with the model publisher so that they can contact customers regarding the model. Learn more about information available to model publishers, [follow this link](/partner-center/analytics).
 
-## Finetune a model for pay-as-you-go deployment (Models-as-a-Service)
+## Fine-tune a model with serverless APIs (Models-as-a-Service)
 
-If a model available for pay-as-you-go deployment (MaaS) supports finetuning, you can upload data to (or designate data already in) an [Azure Machine Learning Datastore](./concept-data.md) to finetune the model. You can then create a pay-as-you-go deployment for the finetuned model. The finetuned model can't be downloaded, but the finetuned model:
+If a model available for serverless API deployment supports fine-tuning, you can upload data to (or designate data already in) an [Azure Machine Learning Datastore](./concept-data.md) to fine-tune the model. You can then create a serverless API for the fine-tuned model. The fine-tuned model can't be downloaded, but the fine-tuned model:
 
 * Is available exclusively for your use;
 
 * Can be double [encrypted at rest](../ai-services/openai/encrypt-data-at-rest.md) (by default with Microsoft's AES-256 encryption and optionally with a customer managed key).
 
 * Can be deleted by you at any time.
 
-Training data uploaded for finetuning isn't used to train, retrain, or improve any Microsoft or third party model except as directed by you within the service.
+Training data uploaded for fine-tuning isn't used to train, retrain, or improve any Microsoft or third party model except as directed by you within the service.
 
 ## Data processing for downloaded models
 
-If you download a model from the Model Catalog, you choose where to deploy the model, and you're responsible for how data is processed when you use the model.
+If you download a model from the model catalog, you choose where to deploy the model, and you're responsible for how data is processed when you use the model.
 
 ## Next steps
 
```
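For context on the terminology this commit updates: both "real-time endpoints" (managed compute) and "serverless APIs" (Models-as-a-Service) expose a REST scoring API, as the article describes. A minimal sketch of building such a call follows; the endpoint URL, key, and request payload schema are placeholders and assumptions (the actual schema varies by model and is shown in each deployment's consume page), not values from this commit.

```python
import json
import urllib.request

# Hypothetical values -- replace with your deployment's scoring URL and key.
ENDPOINT_URL = "https://my-endpoint.eastus2.inference.ml.azure.com/score"
API_KEY = "<your-api-key>"

def build_scoring_request(prompt: str) -> urllib.request.Request:
    """Build a REST call to a deployed endpoint's scoring API.

    The {"input_data": {"input_string": [...]}} payload shape is an
    assumption for illustration; check your model's reference schema.
    """
    body = json.dumps({"input_data": {"input_string": [prompt]}}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_scoring_request("Summarize the data privacy commitments.")
# urllib.request.urlopen(req) would send it; per the article above, the
# stateless model processes the prompt and returns output without storing it.
```

Building (rather than sending) the request keeps the sketch self-contained; no network access or real credentials are needed to see the shape of the call.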