
Commit 08e08c6

remove AOAI column from table

1 parent 6c34721 · commit 08e08c6

File tree

1 file changed: +13 additions, −13 deletions


articles/ai-foundry/concepts/deployments-overview.md

Lines changed: 13 additions & 13 deletions
@@ -5,7 +5,7 @@ description: Learn about deployment options for Azure AI Foundry Models.
 manager: scottpolly
 ms.service: azure-ai-foundry
 ms.topic: concept-article
-ms.date: 06/26/2025
+ms.date: 06/30/2025
 ms.reviewer: fasantia
 ms.author: mopeakande
 author: msakande
@@ -25,7 +25,7 @@ Azure AI Foundry provides several deployment options depending on the type of mo
 
 ### Standard deployment in Azure AI Foundry resources
 
-Azure AI Foundry resources (formerly referred to as Azure AI model inference, in Azure AI Services), is **the preferred deployment option** in Azure AI Foundry. It offers the widest range of capabilities, including regional, data zone, or global processing, and it offers standard and [provisioned throughput (PTU)](../../ai-services/openai/concepts/provisioned-throughput.md) options. Flagship models in Azure AI Foundry Models support this deployment option.
+Azure AI Foundry resources (formerly referred to as Azure AI Services resources) are **the preferred deployment option** in Azure AI Foundry. They offer the widest range of capabilities, including regional, data zone, or global processing, and both standard and [provisioned throughput (PTU)](../../ai-services/openai/concepts/provisioned-throughput.md) options. Flagship models in Azure AI Foundry Models support this deployment option.
 
 This deployment option is available in:
 

@@ -63,17 +63,17 @@ To get started, see [How to deploy and inference a managed compute deployment](.
 
 We recommend using [Standard deployments in Azure AI Foundry resources](#standard-deployment-in-azure-ai-foundry-resources) whenever possible, as it offers the largest set of capabilities among the available deployment options. The following table lists details about specific capabilities available for each deployment option:
 
-| Capability | Azure OpenAI | Standard deployment in Azure AI Foundry resources | Serverless API Endpoint | Managed compute |
-|-------------------------------|----------------------|-------------------|----------------|-----------------|
-| Which models can be deployed? | [Azure OpenAI models](../../ai-services/openai/concepts/models.md) | [Foundry Models](../../ai-foundry/foundry-models/concepts/models.md) | [Foundry Models with pay-as-you-go billing](../how-to/model-catalog-overview.md) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
-| Deployment resource | Azure OpenAI resource | Azure AI Foundry resource | AI project (in AI hub resource) | AI project (in AI hub resource) |
-| Requires AI Hubs | No | No | Yes | Yes |
-| Data processing options | Regional <br /> Data-zone <br /> Global | Regional <br /> Data-zone <br /> Global | Regional | Regional |
-| Private networking | Yes | Yes | Yes | Yes |
-| Content filtering | Yes | Yes | Yes | No |
-| Custom content filtering | Yes | Yes | No | No |
-| Key-less authentication | Yes | Yes | No | No |
-| Billing bases | Token usage & [provisioned throughput units](../../ai-services/openai/concepts/provisioned-throughput.md) | Token usage & [provisioned throughput units](../../ai-services/openai/concepts/provisioned-throughput.md) | Token usage<sup>1</sup> | Compute core hours<sup>2</sup> |
+| Capability | Standard deployment in Azure AI Foundry resources | Serverless API Endpoint | Managed compute |
+|-------------------------------|--------------------------------------------------|------------------------|-----------------|
+| Which models can be deployed? | [Foundry Models](../../ai-foundry/foundry-models/concepts/models.md) | [Foundry Models with pay-as-you-go billing](../how-to/model-catalog-overview.md) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
+| Deployment resource | Azure AI Foundry resource | AI project (in AI hub resource) | AI project (in AI hub resource) |
+| Requires AI Hubs | No | Yes | Yes |
+| Data processing options | Regional <br /> Data-zone <br /> Global | Regional | Regional |
+| Private networking | Yes | Yes | Yes |
+| Content filtering | Yes | Yes | No |
+| Custom content filtering | Yes | No | No |
+| Key-less authentication | Yes | No | No |
+| Billing bases | Token usage & [provisioned throughput units](../../ai-services/openai/concepts/provisioned-throughput.md) | Token usage<sup>1</sup> | Compute core hours<sup>2</sup> |
 
 <sup>1</sup> A minimal endpoint infrastructure is billed per minute. You aren't billed for the infrastructure that hosts the model in standard deployment. After you delete the endpoint, no further charges accrue.
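The "Billing bases" row and the footnote above distinguish two pricing models: token-based billing (standard and serverless deployments) and compute-core-hour billing (managed compute). A minimal sketch of the difference in arithmetic, using hypothetical placeholder prices (the function names and all rates here are illustrative assumptions, not actual Azure rates; consult the Azure pricing page for real per-model, per-region figures):

```python
def token_billing_cost(input_tokens: int, output_tokens: int,
                       price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Pay-as-you-go cost for a token-billed deployment (standard / serverless).

    Prices are per 1,000 tokens; input and output tokens are often priced
    differently. Rates here are caller-supplied placeholders.
    """
    return (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k


def managed_compute_cost(core_hours: float, price_per_core_hour: float) -> float:
    """Cost for a managed compute deployment, billed by compute core hours.

    You pay for the hosting infrastructure while it runs, regardless of
    how many tokens pass through it.
    """
    return core_hours * price_per_core_hour


# Example with made-up rates: 500K input + 100K output tokens
# at $0.005 / $0.015 per 1K tokens, vs. 24 core-hours at $0.50/hour.
tokens = token_billing_cost(500_000, 100_000, 0.005, 0.015)   # 500*0.005 + 100*0.015 = 4.0
compute = managed_compute_cost(24, 0.50)                      # 24 * 0.50 = 12.0
print(f"token-billed: ${tokens:.2f}, managed compute: ${compute:.2f}")
```

The practical takeaway matches the footnote: a token-billed standard deployment costs nothing while idle, whereas managed compute accrues core-hour charges for as long as the hosting infrastructure exists.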