
Commit 6cd5bdc

name changes
1 parent d47aba0 commit 6cd5bdc

11 files changed: +34 -34 lines changed

articles/ai-foundry/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ sections:
     answer: |
       Prompt templates in prompt flow provide robust examples and instructions for avoiding prompt injection attacks in the application. Azure AI Content Safety helps detect offensive or inappropriate content in text and images. Content moderation also checks for jailbreaks.
   - question: |
-      What is the billing model for Model-as-a-Service (MaaS)?
+      What is the billing model for standard deployments?
     answer: |
       Azure AI Foundry offers pay-as-you-go inference APIs and hosted fine-tuning for [Llama 2 family models](how-to/deploy-models-llama.md). Currently, there's no extra charge for Azure AI Foundry outside of typical AI services and other Azure resource charges.
   - question: |

articles/ai-foundry/how-to/built-in-policy-model-deployment.md

Lines changed: 3 additions & 3 deletions
@@ -1,20 +1,20 @@
 ---
 title: Control AI model deployment with built-in policies
 titleSuffix: Azure AI Foundry
-description: "Learn how to use built-in Azure policies to control what managed AI Services (MaaS) and Model-as-a-Platform (MaaP) AI models can be deployed in Azure AI Foundry portal."
+description: "Learn how to use built-in Azure policies to control what managed AI Services (standard deployment) and Model-as-a-Platform (MaaP) AI models can be deployed in Azure AI Foundry portal."
 author: Blackmist
 ms.author: larryfr
 ms.service: azure-ai-foundry
 ms.topic: how-to #Don't change
 ms.date: 02/19/2025

-#customer intent: As an admin, I want control what Managed AI Services (MaaS) and Model-as-a-Platform (MaaP) AI models can be deployed by my developers.
+#customer intent: As an admin, I want control what Managed AI Services (standard deployment) and Model-as-a-Platform (MaaP) AI models can be deployed by my developers.

 ---

 # Control AI model deployment with built-in policies in Azure AI Foundry portal

-Azure Policy provides built-in policy definitions that help you govern the deployment of AI models in Managed AI Services (MaaS) and Model-as-a-Platform (MaaP). You can use these policies to control what models your developers can deploy in Azure AI Foundry portal.
+Azure Policy provides built-in policy definitions that help you govern the deployment of AI models in Managed AI Services (standard deployment) and Model-as-a-Platform (MaaP). You can use these policies to control what models your developers can deploy in Azure AI Foundry portal.

 ## Prerequisites
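For context on the workflow this article covers, here's a minimal sketch of assigning a built-in Azure Policy definition programmatically, assuming the `azure-identity` and `azure-mgmt-resource` packages. The subscription, resource group, definition GUID, and the `allowedModels` parameter name are hypothetical placeholders; look up the actual built-in definition and its parameters in the Azure Policy catalog before using this.

```python
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment, ParameterValuesValue

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"  # placeholder scope

policy_client = PolicyClient(DefaultAzureCredential(), subscription_id)

# Assign a built-in policy definition at the resource group scope.
# The definition GUID and the "allowedModels" parameter are placeholders for illustration only.
assignment = policy_client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="restrict-ai-model-deployments",
    parameters=PolicyAssignment(
        display_name="Restrict AI model deployments in Azure AI Foundry",
        policy_definition_id="/providers/Microsoft.Authorization/policyDefinitions/<built-in-definition-guid>",
        parameters={"allowedModels": ParameterValuesValue(value=["meta-llama-3-8b-instruct"])},
    ),
)
print(f"Assigned policy: {assignment.name}")
```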

articles/ai-foundry/how-to/deploy-models-serverless.md

Lines changed: 1 addition & 1 deletion
@@ -563,7 +563,7 @@ Read more about the [capabilities of this API](../../ai-foundry/model-inference/

 ## Network isolation

-Endpoints for models deployed as standard deployment follow the public network access (PNA) flag setting of the Azure AI Foundry portal Hub that has the project in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your Azure AI Foundry Hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub.
+Endpoints for models deployed as standard deployment follow the public network access (PNA) flag setting of the Azure AI Foundry portal Hub that has the project in which the deployment exists. To secure your standard deployment, disable the PNA flag on your Azure AI Foundry Hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub.

 To set the PNA flag for the Azure AI Foundry hub:
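As a companion to the PNA guidance above, here's a minimal sketch of flipping the hub's public network access flag with the `azure-ai-ml` SDK, assuming hubs are managed through the workspace APIs; the subscription, resource group, and hub names are placeholders.

```python
# pip install azure-ai-ml azure-identity
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder identifiers; replace with your own values.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
)

# Hubs are a kind of workspace in the azure-ai-ml SDK.
hub = ml_client.workspaces.get(name="<hub-name>")
hub.public_network_access = "Disabled"  # disable the PNA flag

# begin_update returns a poller; result() waits for the update to finish.
updated = ml_client.workspaces.begin_update(hub).result()
print(updated.public_network_access)
```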

articles/ai-foundry/how-to/evaluate-generative-ai-app.md

Lines changed: 1 addition & 1 deletion
@@ -171,7 +171,7 @@ To create a new evaluation for your selected model deployment and defined prompt

 #### Basic information

-To start, you can set up the name for your evaluation run. Then select the **model deployment** you want to evaluate. We support both Azure OpenAI models and other open models compatible with Model-as-a-Service (MaaS), such as Meta Llama and Phi-3 family models. Optionally, you can adjust the model parameters like max response, temperature, and top P based on your need.
+To start, you can set up the name for your evaluation run. Then select the **model deployment** you want to evaluate. We support both Azure OpenAI models and other open models compatible with standard deployment, such as Meta Llama and Phi-3 family models. Optionally, you can adjust the model parameters like max response, temperature, and top P based on your need.

 In the System message text box, provide the prompt for your scenario. For more information on how to craft your prompt, see the prompt catalog. You can choose to add example to show the chat what responses you want. It will try to mimic any responses you add here to make sure they match the rules you laid out in the system message.
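To illustrate the model parameters mentioned in this article (max response, temperature, top P), here's a minimal sketch of calling a standard deployment with those settings via the `azure-ai-inference` package; the endpoint URL, API key, and prompt text are placeholders.

```python
# pip install azure-ai-inference
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a standard deployment (for example, a Meta Llama or Phi-3 model).
client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.<region>.models.ai.azure.com",
    credential=AzureKeyCredential("<api-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You answer questions about hiking gear."),
        UserMessage(content="What should I pack for a day hike?"),
    ],
    temperature=0.7,  # randomness of the output
    top_p=0.95,       # nucleus sampling cutoff
    max_tokens=400,   # maximum response length ("max response")
)
print(response.choices[0].message.content)
```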

articles/ai-foundry/how-to/fine-tune-serverless.md

Lines changed: 4 additions & 4 deletions
@@ -210,7 +210,7 @@ Here are some of the tasks you can do on the **Models** tab:

 ### Supported enterprise scenarios for fine-tuning

-Several enterprise scenarios are supported for MaaS fine-tuning. The table below outlines the supported configurations for user storage networking and authentication to ensure smooth operation within enterprise scenarios:
+Several enterprise scenarios are supported for standard deployment fine-tuning. The table below outlines the supported configurations for user storage networking and authentication to ensure smooth operation within enterprise scenarios:

 >[!Note]
 >- Data connections auth can be changed via AI Foundry by clicking on the datastore connection which your dataset is stored in, and navigating to the **Access details** > **Authentication Method** setting.
@@ -230,7 +230,7 @@ Several enterprise scenarios are supported for MaaS fine-tuning. The table below

 The scenarios above should work in a Managed Vnet workspace as well. See setup of Managed Vnet AI Foundry hub here: [How to configure a managed network for Azure AI Foundry hubs](./configure-managed-network.md)

-Customer-Managed Keys (CMKs) is **not** a supported enterprise scenario with MaaS fine-tuning.
+Customer-Managed Keys (CMKs) is **not** a supported enterprise scenario with standard deployment fine-tuning.

 Issues fine-tuning with unique network setups on the workspace and storage usually points to a networking setup issue.

@@ -577,7 +577,7 @@ model_id = f"azureml://locations/{workspace.location}/workspaces/{workspace._wor

 ### Supported enterprise scenarios for fine-tuning

-Several enterprise scenarios are supported for MaaS fine-tuning. The table below outlines the supported configurations for user storage networking and authentication to ensure smooth operation within enterprise scenarios:
+Several enterprise scenarios are supported for standard deployment fine-tuning. The table below outlines the supported configurations for user storage networking and authentication to ensure smooth operation within enterprise scenarios:

 >[!Note]
 >- Data connections auth can be changed via AI Foundry by clicking on the datastore connection which your dataset is stored in, and navigating to the **Access details** > **Authentication Method** setting.
@@ -597,7 +597,7 @@ Several enterprise scenarios are supported for MaaS fine-tuning. The table below

 The scenarios above should work in a Managed Vnet workspace as well. See setup of Managed Vnet AI Foundry hub here: [How to configure a managed network for Azure AI Foundry hubs](./configure-managed-network.md)

-Customer-Managed Keys (CMKs) is **not** a supported enterprise scenario with MaaS fine-tuning.
+Customer-Managed Keys (CMKs) is **not** a supported enterprise scenario with standard deployment fine-tuning.

 Issues fine-tuning with unique network setups on the workspace and storage usually points to a networking setup issue.

articles/ai-foundry/how-to/model-catalog-overview.md

Lines changed: 4 additions & 4 deletions
@@ -133,9 +133,9 @@ Models that are available for deployment as standard deployments with pay-as-you

 * Manages the hosting infrastructure.
 * Makes the inference APIs available.
-* Acts as the data processor for prompts submitted and content output by models deployed via MaaS.
+* Acts as the data processor for prompts submitted and content output by models deployed via standard deployment.

-Learn more about data processing for MaaS in the [article about data privacy](concept-data-privacy.md).
+Learn more about data processing for standard deployment in the [article about data privacy](concept-data-privacy.md).

 :::image type="content" source="../media/explore/model-publisher-cycle.png" alt-text="Diagram that shows the model publisher service cycle." lightbox="../media/explore/model-publisher-cycle.png":::

@@ -144,7 +144,7 @@ Learn more about data processing for MaaS in the [article about data privacy](co

 ### Billing

-The discovery, subscription, and consumption experience for models deployed via MaaS is in Azure AI Foundry portal and Azure Machine Learning studio. Users accept license terms for use of the models. Pricing information for consumption is provided during deployment.
+The discovery, subscription, and consumption experience for models deployed via standard deployment is in Azure AI Foundry portal and Azure Machine Learning studio. Users accept license terms for use of the models. Pricing information for consumption is provided during deployment.

 Models from non-Microsoft providers are billed through Azure Marketplace, in accordance with the [Microsoft Commercial Marketplace Terms of Use](/legal/marketplace/marketplace-terms).

@@ -182,7 +182,7 @@ To set the public network access flag for the Azure AI Foundry hub:

 * If you have an Azure AI Foundry hub with a private endpoint created before July 11, 2024, standard deployments added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new private endpoint for the hub and create new standard deployments in the project so that the new deployments can follow the hub's networking configuration.

-* If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing standard deployments won't follow the hub's networking configuration. For standard deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
+* If you have an Azure AI Foundry hub with standard deployment created before July 11, 2024, and you enable a private endpoint on this hub, the existing standard deployments won't follow the hub's networking configuration. For standard deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.

 * Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for standard deployments in private hubs, because private hubs have the public network access flag disabled.

articles/machine-learning/concept-data-privacy.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ When you deploy a model from the model catalog (base or fine-tuned) as a standar

 [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]

-Microsoft acts as the data processor for prompts and outputs sent to and generated by a model deployed for pay-as-you-go inferencing (MaaS). Microsoft doesn't share these prompts and outputs with the model provider, and Microsoft doesn't use these prompts and outputs to train or improve Microsoft's, the model provider's, or any third party's models. Models are stateless and no prompts or outputs are stored in the model. If content filtering (preview) is enabled, prompts and outputs are screened for certain categories of harmful content by the Azure AI Content Safety service in real time; learn more about how Azure AI Content Safety processes data [here](/legal/cognitive-services/content-safety/data-privacy). Prompts and outputs are processed within the geography specified during deployment but may be processed between regions within the geography for operational purposes (including performance and capacity management).
+Microsoft acts as the data processor for prompts and outputs sent to and generated by a model deployed for standard deployment. Microsoft doesn't share these prompts and outputs with the model provider, and Microsoft doesn't use these prompts and outputs to train or improve Microsoft's, the model provider's, or any third party's models. Models are stateless and no prompts or outputs are stored in the model. If content filtering (preview) is enabled, prompts and outputs are screened for certain categories of harmful content by the Azure AI Content Safety service in real time; learn more about how Azure AI Content Safety processes data [here](/legal/cognitive-services/content-safety/data-privacy). Prompts and outputs are processed within the geography specified during deployment but may be processed between regions within the geography for operational purposes (including performance and capacity management).

 :::image type="content" source="media/concept-data-privacy/model-publisher-cycle.png" alt-text="A diagram showing model publisher service cycle." lightbox="media/concept-data-privacy/model-publisher-cycle.png":::
