Commit f5ae411

Merge pull request #3654 from santiagxf/santiagxf-patch-2
Update deployment-types.md
2 parents: 1a0a8b5 + 4920ae9

1 file changed: +4 −4 lines changed

articles/ai-foundry/model-inference/concepts/deployment-types.md

Lines changed: 4 additions & 4 deletions
@@ -29,15 +29,15 @@ To learn more about deployment options for Azure OpenAI models see [Azure OpenAI
 
 ## Deployment types for Models-as-a-Service models
 
-Models from third-party model providers with pay-as-you-go billing (collectively called Models-as-a-Service), makes models available in Azure AI model inference under **standard** deployments with a Global processing option (`Global-Standard`).
+Models with pay-as-you-go billing (collectively called Models-as-a-Service), makes models available in Azure AI model inference under **standard** deployments with a Global processing option (`Global-Standard`).
+
+> [!TIP]
+> Models-as-a-Service offers regional deployment options under [Serverless API endpoints](../../../ai-studio/how-to/deploy-models-serverless.md) in Azure AI Foundry. However, those deployments can't be accessed using the Azure AI model inference endpoint in Azure AI Services and they need to be created within a project.
 
 ### Global-Standard
 
 Global deployments leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources. Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure location. Learn more about [data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
 
-> [!NOTE]
-> Models-as-a-Service offers regional deployment options under [Serverless API endpoints](../../../ai-studio/how-to/deploy-models-serverless.md) in Azure AI Foundry. Prompts and outputs are processed within the geography specified during deployment. However, those deployments can't be accessed using the Azure AI model inference endpoint in Azure AI Services.
-
 ## Control deployment options
 
 Administrators can control which model deployment types are available to their users by using Azure Policies. Learn more about [How to control AI model deployment with custom policies](../../../ai-studio/how-to/custom-policy-model-deployment.md).
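The final paragraph of the diff mentions restricting deployment types with Azure Policies. As a rough illustration only, a custom policy definition to deny non-`GlobalStandard` deployments might look like the sketch below; the policy alias `Microsoft.CognitiveServices/accounts/deployments/sku.name` is an assumption here, and the exact aliases available should be confirmed in the linked how-to article before use.

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.CognitiveServices/accounts/deployments"
        },
        {
          "field": "Microsoft.CognitiveServices/accounts/deployments/sku.name",
          "notEquals": "GlobalStandard"
        }
      ]
    },
    "then": {
      "effect": "deny"
    }
  }
}
```

Assigned at a subscription or resource group scope, a definition of this shape would block creation of model deployments whose SKU is anything other than `GlobalStandard`.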
