---
title: Deployment options for Azure AI Foundry Models
titleSuffix: Azure AI Foundry
description: Learn about deployment options for Azure AI Foundry Models including standard, serverless API, and managed compute deployments.
ms.service: azure-ai-foundry
ms.topic: concept-article
ms.date: 09/22/2025
ms.author: mopeakande
author: msakande
manager: nitinme
#CustomerIntent: As a developer or AI practitioner, I want to understand the different deployment options available for Azure AI Foundry Models so that I can choose the most appropriate deployment method for my specific use case, requirements, and infrastructure needs.
---
# Deployment overview for Azure AI Foundry Models
The model catalog in Azure AI Foundry is the hub to discover and use a wide range of Foundry Models for building generative AI applications. You need to deploy models to make them available for receiving inference requests. Azure AI Foundry offers a comprehensive suite of deployment options for Foundry Models, depending on your needs and model requirements.
## Deployment options
Azure AI Foundry provides several deployment options depending on the type of model:

- Standard deployment in Azure AI Foundry resources
- Deployment to serverless API endpoints
- Deployment to managed computes
The Azure AI Foundry portal might automatically pick a deployment option based on your environment and configuration. Use Azure AI Foundry resources for deployment whenever possible.
Models that support multiple deployment options default to Azure AI Foundry resources for deployment. To access other deployment options, use the Azure CLI or Azure Machine Learning SDK for deployment.
### Standard deployment in Azure AI Foundry resources
Azure AI Foundry resources (formerly referred to as Azure AI Services resources) are **the preferred deployment option** in Azure AI Foundry. This option offers the widest range of capabilities, including regional, data zone, or global processing, and it offers standard and [provisioned throughput (PTU)](../../ai-services/openai/concepts/provisioned-throughput.md) options. Flagship models in Azure AI Foundry Models support this deployment option.
This deployment option is available in:
* Azure AI Foundry resources
* Azure OpenAI resources<sup>1</sup>
* Azure AI hub, when connected to an Azure AI Foundry resource
<sup>1</sup>If you use Azure OpenAI resources, the model catalog shows only Azure OpenAI in Foundry Models for deployment. You can get the full list of Foundry Models by upgrading to an Azure AI Foundry resource.
To get started with standard deployment in Azure AI Foundry resources, see [How-to: Deploy models to Azure AI Foundry Models](../foundry-models/how-to/create-model-deployments.md).
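Once a standard deployment exists, you call it over an OpenAI-compatible REST endpoint. As a hedged sketch (the resource name, deployment name, and API version below are placeholder assumptions; check the deployment's details page for the real values), this builds the request shape without sending it:

```python
import json
import urllib.request

def build_chat_request(endpoint: str, deployment: str, api_key: str,
                       messages: list, api_version: str = "2024-10-21"):
    """Build (but don't send) a chat-completions request for a deployment."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

req = build_chat_request("https://my-resource.openai.azure.com", "gpt-4o-mini",
                         "<key>", [{"role": "user", "content": "Hello"}])
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` requires a real endpoint and key; most applications would use the `openai` client library instead of raw HTTP.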
### Serverless API endpoint
This deployment option is available **only in** [Azure AI hub resources](ai-resources.md). It allows you to create dedicated endpoints to host the model, accessible through an API. Azure AI Foundry Models support serverless API endpoints with pay-as-you-go billing, and you can create only regional deployments for serverless API endpoints.
To get started with deployment to a serverless API endpoint, see [Deploy models as serverless API deployments](../how-to/deploy-models-serverless.md).
### Managed compute
This deployment option is available **only in** [Azure AI hub resources](ai-resources.md). It allows you to create a dedicated endpoint to host the model in a **dedicated compute**. You need to have compute quota in your subscription to host the model, and you're billed per compute uptime.
Managed compute deployment is required for model collections that include:
To get started, see How to deploy and inference a managed compute deployment.
## Capabilities for the deployment options
Use [Standard deployments in Azure AI Foundry resources](#standard-deployment-in-azure-ai-foundry-resources) whenever possible. This deployment option provides the most capabilities among the available deployment options. The following table lists details about specific capabilities for each deployment option:
| Capability | Standard deployment in Azure AI Foundry resources | Serverless API Endpoint | Managed compute |
<sup>1</sup> A minimal endpoint infrastructure is billed per minute. You aren't billed for the infrastructure that hosts the model in standard deployment. After you delete the endpoint, no further charges accrue.
<sup>2</sup> A minimal endpoint infrastructure is billed per minute. You aren't billed for the infrastructure that hosts the model in serverless deployment. After you delete the endpoint, no further charges accrue.
<sup>3</sup> Billing is on a per-minute basis, depending on the product tier and the number of instances used in the deployment since the moment of creation. After you delete the endpoint, no further charges accrue.
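To make the per-minute billing model in the footnotes concrete, here's a minimal arithmetic sketch. The rate used is made up for illustration, not a real Azure price; actual rates depend on the product tier:

```python
def estimate_uptime_cost(minutes_deployed: int, instances: int,
                         rate_per_instance_minute: float) -> float:
    """Estimate cost for a deployment billed per minute of uptime.

    Charges accrue from endpoint creation until deletion. The rate is
    hypothetical; real pricing depends on the product tier.
    """
    return minutes_deployed * instances * rate_per_instance_minute

# Example: 2 instances up for 3 hours at a made-up $0.02 per instance-minute.
cost = estimate_uptime_cost(minutes_deployed=180, instances=2,
                            rate_per_instance_minute=0.02)
print(f"${cost:.2f}")  # prints "$7.20"
```

Because billing stops only when the endpoint is deleted, idle but undeleted endpoints still accrue the same per-minute charge.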
## Related content
* [Configure your AI project to use Foundry Models](../../ai-foundry/foundry-models/how-to/quickstart-ai-project.md)
* [Deployment types in Azure AI Foundry Models](../foundry-models/concepts/deployment-types.md)
* [Deploy Azure OpenAI models with Azure AI Foundry](../how-to/deploy-models-openai.md)
* [Deploy open models with Azure AI Foundry](../how-to/deploy-models-managed.md)
* [Explore Azure AI Foundry Models](../how-to/model-catalog-overview.md)
---

`articles/ai-foundry/foundry-models/includes/models-azure-direct-others.md`
## Black Forest Labs models sold directly by Azure
The Black Forest Labs collection of image generation models includes FLUX.1 Kontext [pro] for in-context generation and editing, and FLUX1.1 [pro] for text-to-image generation.
You can run these models via our service provider API and through the [images/generations and images/edits endpoints](../../openai/reference-preview.md).
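As a hedged sketch of what a call to the images/generations endpoint could look like (the URL shape follows the Azure OpenAI convention; the resource name, deployment name, and API version are placeholders, so check the linked reference for the exact contract), this builds the request without sending it:

```python
import json
import urllib.request

def build_image_request(endpoint: str, deployment: str, api_key: str,
                        prompt: str, api_version: str = "2025-04-01-preview"):
    """Build (but don't send) an image-generation request."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/images/generations?api-version={api_version}")
    body = json.dumps({"prompt": prompt, "n": 1}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

req = build_image_request("https://my-resource.openai.azure.com",
                          "flux-1.1-pro", "<key>", "a watercolor fox")
print(req.full_url)
```

The images/edits endpoint follows the same pattern with a different path segment and a multipart body for the input image.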
| Model | Type | Capabilities | Deployment type (region availability) | Project type |
---

`articles/ai-foundry/how-to/develop/trace-agents-sdk.md`
```python
with tracer.start_as_current_span("example-tracing"):
    run = project_client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
```
### Alternative: AI Toolkit for VS Code
AI Toolkit gives you a simple way to trace locally in VS Code. It uses a local OTLP-compatible collector, making it great for development and debugging.
The toolkit supports AI frameworks like Azure AI Foundry Agents Service, OpenAI, Anthropic, and LangChain through OpenTelemetry. You can see traces instantly in VS Code without needing cloud access.
For detailed setup instructions and SDK-specific code examples, see [Tracing in AI Toolkit](https://code.visualstudio.com/docs/intelligentapps/tracing).
## Trace custom functions
To trace your custom functions, use the OpenTelemetry SDK to instrument your code.
Once the necessary packages are installed, you can begin to instrument tracing in your code.
The Agents playground in the Azure AI Foundry portal lets you view trace results for threads and runs that your agents produce. To see trace results, select **Thread logs** in an active thread. You can also optionally select **Metrics** to enable automatic evaluations of the model's performance across several dimensions of **AI quality** and **Risk and safety**.
> [!NOTE]
> Evaluation in the playground is billed as outlined under Trust and Azure AI Foundry Observability on [the pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/?msockid=1f44c87dd9fa6d1e257fdd6dd8406c42). Results are available for 24 hours before expiring. To get evaluation results, select your desired metrics and chat with your agent.
> - Evaluations aren't available in the following regions.
---

`articles/ai-foundry/how-to/develop/trace-application.md`
description: View trace results for AI applications using OpenAI SDK with OpenTelemetry
author: lgayhardt
ms.author: lagayhar
ms.reviewer: ychen
ms.date: 09/22/2025
ms.service: azure-ai-foundry
ms.topic: how-to
ai-usage: ai-assisted
## Trace locally with AI Toolkit
AI Toolkit offers a simple way to trace locally in VS Code. It uses a local OTLP-compatible collector, making it perfect for development and debugging without needing cloud access.
The toolkit supports the OpenAI SDK and other AI frameworks through OpenTelemetry. You can see traces instantly in your development environment.
For detailed setup instructions and SDK-specific code examples, see [Tracing in AI Toolkit](https://code.visualstudio.com/docs/intelligentapps/tracing).
## Related content
* [Trace agents using Azure AI Foundry SDK](trace-agents-sdk.md)