Commit c9dd338

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-ai-docs-pr (branch live)
2 parents 001d3c7 + d87a0e5


44 files changed: +442 −299 lines

articles/ai-foundry/.openpublishing.redirection.ai-studio.json

Lines changed: 6 additions & 1 deletion

@@ -1097,6 +1097,11 @@
     "source_path_from_root": "/articles/ai-foundry/model-inference/reference/reference-model-inference-images-embeddings.md",
     "redirect_url": "/rest/api/aifoundry/model-inference/get-image-embeddings/get-image-embeddings",
     "redirect_document_id": false
-  }
+  },
+  {
+    "source_path_from_root": "/articles/ai-foundry/how-to/prompt-flow.md",
+    "redirect_url": "/azure/ai-foundry/concepts/prompt-flow",
+    "redirect_document_id": true
+  }
 ]
}

articles/ai-foundry/concepts/deployments-overview.md

Lines changed: 17 additions & 17 deletions

@@ -4,10 +4,6 @@ titleSuffix: Azure AI Foundry
 description: Learn about deploying models in Azure AI Foundry portal.
 manager: scottpolly
 ms.service: azure-ai-foundry
-ms.custom:
-  - ignite-2023
-  - build-2024
-  - ignite-2024
 ms.topic: concept-article
 ms.date: 10/21/2024
 ms.reviewer: fasantia
@@ -17,22 +13,28 @@ author: msakande
 
 # Overview: Deploy AI models in Azure AI Foundry portal
 
-The model catalog in Azure AI Foundry portal is the hub to discover and use a wide range of models for building generative AI applications. Models need to be deployed to make them available for receiving inference requests. The process of interacting with a deployed model is called *inferencing*. Azure AI Foundry offer a comprehensive suite of deployment options for those models depending on your needs and model requirements.
+The model catalog in Azure AI Foundry portal is the hub to discover and use a wide range of models for building generative AI applications. Models need to be deployed to make them available for receiving inference requests. Azure AI Foundry offers a comprehensive suite of deployment options for those models, depending on your needs and model requirements.
 
 ## Deploying models
 
-Deployment options vary depending on the model type:
+Deployment options vary depending on the model offering:
 
-* **Azure OpenAI models:** The latest OpenAI models that have enterprise features from Azure.
-* **Models as a Service models:** These models don't require compute quota from your subscription. This option allows you to deploy your Model as a Service (MaaS). You use a serverless API deployment and are billed per token in a pay-as-you-go fashion.
-* **Open and custom models:** The model catalog offers access to a large variety of models across modalities that are of open access. You can host open models in your own subscription with a managed infrastructure, virtual machines, and the number of instances for capacity management. There's a wide range of models from Azure OpenAI, Hugging Face, and NVIDIA.
+* **Azure OpenAI models:** The latest OpenAI models that have enterprise features from Azure, with flexible billing options.
+* **Models-as-a-Service models:** These models don't require compute quota from your subscription and are billed per token in a pay-as-you-go fashion.
+* **Open and custom models:** The model catalog offers access to a large variety of models across modalities, including models of open access. You can host open models in your own subscription with a managed infrastructure, virtual machines, and the number of instances for capacity management.
 
 Azure AI Foundry offers four different deployment options:
 
 |Name | Azure OpenAI service | Azure AI model inference | Serverless API | Managed compute |
 |-------------------------------|----------------------|-------------------|----------------|-----------------|
-| Which models can be deployed? | [Azure OpenAI models](../../ai-services/openai/concepts/models.md) | [Azure OpenAI models and Models as a Service](../../ai-foundry/model-inference/concepts/models.md) | [Models as a Service](../how-to/model-catalog-overview.md#content-safety-for-models-deployed-via-serverless-apis) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
+| Which models can be deployed? | [Azure OpenAI models](../../ai-services/openai/concepts/models.md) | [Azure OpenAI models and Models-as-a-Service](../../ai-foundry/model-inference/concepts/models.md) | [Models-as-a-Service](../how-to/model-catalog-overview.md#content-safety-for-models-deployed-via-serverless-apis) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
 | Deployment resource | Azure OpenAI resource | Azure AI services resource | AI project resource | AI project resource |
+| Requires Hubs/Projects | No | No | Yes | Yes |
+| Data processing options | Regional <br /> Data-zone <br /> Global | Global | Regional | Regional |
+| Private networking | Yes | Yes | Yes | Yes |
+| Content filtering | Yes | Yes | Yes | No |
+| Custom content filtering | Yes | Yes | No | No |
+| Key-less authentication | Yes | Yes | No | No |
 | Best suited when | You are planning to use only OpenAI models. | You are planning to take advantage of the flagship models in Azure AI catalog, including OpenAI. | You are planning to use a single model from a specific provider (excluding OpenAI). | You plan to use open models and have enough compute quota available in your subscription. |
 | Billing bases | Token usage & PTU | Token usage | Token usage<sup>1</sup> | Compute core hours<sup>2</sup> |
 | Deployment instructions | [Deploy to Azure OpenAI Service](../how-to/deploy-models-openai.md) | [Deploy to Azure AI model inference](../model-inference/how-to/create-model-deployments.md) | [Deploy to Serverless API](../how-to/deploy-models-serverless.md) | [Deploy to Managed compute](../how-to/deploy-models-managed.md) |
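
The *Key-less authentication* row added above is easiest to see in code. The following is a minimal sketch of both authentication styles using the `azure-ai-inference` Python package; it's an illustration rather than part of this commit, and the endpoint URLs, key, and model name are placeholders.

```python
# Minimal sketch: key-less (Microsoft Entra ID) vs. key-based authentication.
# All endpoint URLs, keys, and model names below are placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.identity import DefaultAzureCredential

# Key-less authentication (Azure OpenAI and Azure AI model inference): an
# Entra ID token is requested for the Cognitive Services scope at call time.
keyless_client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",
    credential=DefaultAzureCredential(),
    credential_scopes=["https://cognitiveservices.azure.com/.default"],
)

# Key-based authentication (for example, a Serverless API endpoint).
keyed_client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.<region>.models.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = keyless_client.complete(
    model="<model-deployment-name>",  # selects a deployment when the endpoint hosts several
    messages=[UserMessage(content="Which deployment option fits a single-model workload?")],
)
print(response.choices[0].message.content)
```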
@@ -48,18 +50,16 @@ Azure AI Foundry offers four different deployment options:
 
 Azure AI Foundry encourages customers to explore the deployment options and pick the one that best suits their business and technical needs. In general, you can use the following thinking process:
 
-1. Start with the deployment options that have the bigger scopes. This allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. [Azure AI model inference](../../ai-foundry/model-inference/overview.md) is a deployment target that supports all the flagship models in the Azure AI catalog, including latest innovation from Azure OpenAI. To get started, follow [Configure your AI project to use Azure AI model inference](../../ai-foundry/model-inference/how-to/quickstart-ai-project.md).
+* Start with [Azure AI model inference](../../ai-foundry/model-inference/overview.md), which is the option with the broadest scope. This allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. If you are using Azure AI Foundry Hubs or Projects, enable it by [turning on Azure AI model inference](../../ai-foundry/model-inference/how-to/quickstart-ai-project.md).
 
-2. When you are looking to use a specific model:
+* When you are looking to use a specific model:
 
-    1. When you are interested in Azure OpenAI models, use the Azure OpenAI Service which offers a wide range of capabilities for them and it's designed for them.
+  * When you are interested in Azure OpenAI models, use the Azure OpenAI Service, which is designed for those models and offers a wide range of capabilities for them.
 
-    2. When you are interested in a particular model from Models as a Service, and you don't expect to use any other type of model, use [Serverless API endpoints](../how-to/deploy-models-serverless.md). They allow deployment of a single model under a unique set of endpoint URL and keys.
+  * When you are interested in a particular model from Models-as-a-Service, and you don't expect to use any other type of model, use [Serverless API endpoints](../how-to/deploy-models-serverless.md). They allow deployment of a single model under a unique endpoint URL and set of keys.
 
-    3. When your model is not available in Models as a Service and you have compute quota available in your subscription, use [Managed Compute](../how-to/deploy-models-managed.md) which support deployment of open and custom models. It also allows high level of customization of the deployment inference server, protocols, and detailed configuration.
+  * When your model is not available in Models-as-a-Service and you have compute quota available in your subscription, use [Managed Compute](../how-to/deploy-models-managed.md), which supports deployment of open and custom models. It also allows a high level of customization of the deployment inference server, protocols, and detailed configuration.
 
-> [!TIP]
-> Each deployment option may offer different capabilities in terms of networking, security, and additional features like content safety. Review the documentation for each of them to understand their limitations.
 
 ## Related content
 
articles/ai-foundry/how-to/prompt-flow.md renamed to articles/ai-foundry/concepts/prompt-flow.md

Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: conceptual
-ms.date: 11/19/2024
+ms.date: 03/18/2025
 ms.reviewer: none
 ms.author: lagayhar
 author: lgayhardt
@@ -108,5 +108,5 @@ If the prompt flow tools in Azure AI Foundry portal don't meet your requirements
 
 ## Next steps
 
-- [Build with prompt flow in Azure AI Foundry portal](flow-develop.md)
+- [Build with prompt flow in Azure AI Foundry portal](../how-to/flow-develop.md)
 - [Get started with prompt flow in VS Code](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html)
Lines changed: 100 additions & 0 deletions (new file)

@@ -0,0 +1,100 @@
---
title: How to deploy NVIDIA Inference Microservices
titleSuffix: Azure AI Foundry
description: Learn to deploy NVIDIA Inference Microservices using Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-foundry
ms.topic: how-to
ms.date: 03/14/2025
ms.author: ssalgado
author: ssalgadodev
ms.reviewer: tinaem
reviewer: tinaem
ms.custom: devx-track-azurecli
---

# How to deploy NVIDIA Inference Microservices

In this article, you learn how to deploy NVIDIA Inference Microservices (NIMs) on Managed Compute in the model catalog on Foundry. NVIDIA inference microservices are containers built by NVIDIA for serving optimized pre-trained and customized AI models on NVIDIA GPUs.
Get improved TCO (total cost of ownership) and performance with NVIDIA NIMs offered for one-click deployment on Foundry, with enterprise production-grade software under the NVIDIA AI Enterprise license.

[!INCLUDE [models-preview](../includes/models-preview.md)]

## Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.

- An [Azure AI Foundry hub](create-azure-ai-resource.md).

- An [Azure AI Foundry project](create-projects.md).

- Ensure that Marketplace purchases are enabled for your Azure subscription. Learn more about it [here](/azure/cost-management-billing/manage/enable-marketplace-purchases).

- Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned a _custom role_ with the following permissions; a sketch of such a role definition follows this list. User accounts assigned the _Owner_ or _Contributor_ role for the Azure subscription can also create NIM deployments. For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-foundry.md).

  - On the Azure subscription—**to subscribe the workspace to the Azure Marketplace offering**, once for each workspace/project:
    - Microsoft.MarketplaceOrdering/agreements/offers/plans/read
    - Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action
    - Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read
    - Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read
    - Microsoft.SaaS/register/action

  - On the resource group—**to create and use the SaaS resource**:
    - Microsoft.SaaS/resources/read
    - Microsoft.SaaS/resources/write

  - On the workspace—**to deploy endpoints**:
    - Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*
    - Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*
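To make the custom-role prerequisite concrete, here is a minimal sketch of a role definition that covers the actions listed above. It's an illustration rather than part of this commit: the role name is hypothetical, `<subscription-id>` is a placeholder, and in practice you might scope the role more narrowly. You could create it with `az role definition create --role-definition @nim-deploy-role.json`.

```json
{
  "Name": "NIM Deployment Operator (example)",
  "IsCustom": true,
  "Description": "Example role: subscribe to the NIM Marketplace offer and deploy NIM endpoints.",
  "Actions": [
    "Microsoft.MarketplaceOrdering/agreements/offers/plans/read",
    "Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action",
    "Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read",
    "Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read",
    "Microsoft.SaaS/register/action",
    "Microsoft.SaaS/resources/read",
    "Microsoft.SaaS/resources/write",
    "Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*",
    "Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}
```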
## NVIDIA NIM PayGo offer on Azure Marketplace by NVIDIA

NVIDIA NIMs available in the Azure AI Foundry model catalog can be deployed with a subscription to the [NVIDIA NIM SaaS offer](https://aka.ms/nvidia-nims-plan) on Azure Marketplace. This offer includes a 90-day trial that applies to all NIMs associated with a particular SaaS subscription scoped to an Azure AI Foundry project, and has a PayGo price of $1 per GPU hour after the trial period.

Azure AI Foundry enables a seamless purchase flow of the NVIDIA NIM offering on Marketplace from the NVIDIA collection in the model catalog, and further deployment on managed compute.

## Deploy NVIDIA Inference Microservices on Managed Compute

1. Sign in to [Azure AI Foundry](https://ai.azure.com) and go to the **Home** page.
2. Select **Model catalog** from the left sidebar.
3. In the filters section, select **Collections** and then select **NVIDIA**.

    :::image type="content" source="../media/how-to/deploy-nvidia-inference-microservice/nvidia-collections.png" alt-text="A screenshot showing the NVIDIA inference microservices available in the model catalog." lightbox="../media/how-to/deploy-nvidia-inference-microservice/nvidia-collections.png":::

4. Select the NVIDIA NIM of your choice. This article uses **Llama-3.3-70B-Instruct-NIM-microservice** as an example.
5. Select **Deploy**.
6. Select one of the NVIDIA GPU-based VM SKUs supported for the NIM, based on your intended workload. You need to have quota in your Azure subscription.
7. You can then customize your deployment configuration, such as the instance count, and select an existing endpoint or create a new one. For the example in this article, we use an instance count of **2** and create a new endpoint.

    :::image type="content" source="../media/how-to/deploy-nvidia-inference-microservice/project-customization.png" alt-text="A screenshot showing project customization options in the deployment wizard." lightbox="../media/how-to/deploy-nvidia-inference-microservice/project-customization.png":::

8. Select **Next**.
9. Review the pricing breakdown for the NIM deployment, the terms of use, and the license agreement associated with the NIM offer. The pricing breakdown indicates the aggregated price of the deployed NIM software, which is a function of the number of NVIDIA GPUs in the VM instance that you selected in the previous steps (see the worked example after these steps). In addition to the applicable NIM software price, Azure Compute charges also apply, based on your deployment configuration.

    :::image type="content" source="../media/how-to/deploy-nvidia-inference-microservice/payment-description.png" alt-text="A screenshot showing the necessary user payment agreement detailing how the user is charged for deploying the models." lightbox="../media/how-to/deploy-nvidia-inference-microservice/payment-description.png":::

10. Select the checkbox to acknowledge that you understand the pricing and terms of use, and then select **Deploy**.
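
To illustrate how the pricing breakdown in step 9 scales, assume (hypothetically; these numbers aren't from the offer page) a VM SKU with 4 NVIDIA GPUs per instance and the instance count of **2** used in this example. After the 90-day trial, the NIM software charge would be 4 GPUs × 2 instances × $1 per GPU hour = $8 per hour, billed in addition to the Azure compute charges for the two VM instances themselves.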
## Consume NVIDIA NIM deployments

After your deployment is successfully created, you can go to **Models + Endpoints** under **My assets** in your Azure AI Foundry project, select your deployment under **Model deployments**, and open the **Test** tab to send sample inference requests to the endpoint. You can also go to the Chat Playground by selecting **Open in Playground** on the **Deployment Details** tab, where you can modify parameters for the inference requests.

NVIDIA NIMs on Foundry expose an OpenAI-compatible API; learn more about the supported payload [here](https://docs.nvidia.com/nim/large-language-models/latest/api-reference.html#). The `model` parameter for NIMs on Foundry is set to a default value within the container, and isn't required in the payload you send to your online endpoint. The **Consume** tab of the NIM deployment on Foundry includes code samples for inference with the target URL of your deployment. You can also consume NIM deployments using the Azure AI Model Inference SDK; a minimal sketch follows.
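
The sketch below assumes the `azure-ai-inference` Python package; the endpoint URL and key are placeholders that you'd copy from the deployment's **Consume** tab.

```python
# Minimal sketch: querying a NIM managed-compute deployment with the
# Azure AI Inference SDK (pip install azure-ai-inference).
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-endpoint>.<region>.inference.ml.azure.com",  # placeholder; copy from the Consume tab
    credential=AzureKeyCredential("<your-endpoint-key>"),                # placeholder
)

# No `model` parameter is needed: NIMs on Foundry default it inside the container.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Give me a one-sentence overview of NVIDIA NIMs."),
    ],
    temperature=0.2,
    max_tokens=128,
)
print(response.choices[0].message.content)
```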
## Security scanning for NIMs by NVIDIA

NVIDIA ensures the security and reliability of NVIDIA NIM container images through best-in-class vulnerability scanning, rigorous patch management, and transparent processes. Learn the details [here](https://docs.nvidia.com/ai-enterprise/planning-resource/security-for-azure-ai-foundry/latest/introduction.html). Microsoft works with NVIDIA to get the latest patches of the NIMs to deliver secure, stable, and reliable production-grade software within AI Foundry.
You can refer to the last updated time for the NIM on the model overview page, and redeploy to get the latest version of the NIM from NVIDIA on Foundry.

## Network isolation support for NIMs

While NIMs are in preview on Foundry, workspaces that have Public Network Access disabled are limited to one successful deployment in the private workspace or project. There can be only a single active deployment in a private workspace; attempts to create more active deployments fail.

## Related content

* Learn more about the [Model Catalog](./model-catalog-overview.md)
* Learn more about [built-in policies for deployment](./built-in-policy-model-deployment.md)
